• Title/Summary/Keyword: performance variables


Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review / v.16 no.3 / pp.161-177 / 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural network (ANN), and multiclass support vector machine (MSVM) have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Also, the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As a tool for optimizing the kernel parameters and the feature subset selection, we suggest the genetic algorithm (GA). GA is known as an efficient and effective search method that simulates biological evolution. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results.
In particular, the mutation operator prevents GA from falling into local optima, so the globally optimal or a near-optimal solution can be found. GA has been widely applied to searching for optimal parameters or feature subsets of AI techniques including MSVM. For these reasons, we also adopt GA as an optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is in bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, along with their credit ratings. Using various statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as the candidate independent variables. The dependent variable, i.e. credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). 80 percent of the data for each class was used for training, and the remaining 20 percent was used for validation. To compensate for the small sample size, we applied five-fold cross-validation to our dataset. In order to examine the competitiveness of the proposed model, we also experimented with several comparative models including MDA, MLOGIT, CBR, ANN, and MSVM. In the case of MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source software package, and Evolver 5.5, a commercial software package that enables GA. The other comparative models were implemented using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the competitive models.
In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. However, the values of the finally selected kernel parameters were found to be almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
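The selection, crossover, and mutation loop described above can be sketched in Python. This is a toy illustration, not the paper's implementation: the chromosome encodes a 14-bit feature mask plus two log-scaled kernel parameters, and a synthetic fitness function (with an invented "good" feature set and parameter optimum) stands in for the cross-validated MSVM accuracy that GAMSVM would actually compute via LIBSVM.

```python
import random

random.seed(42)

N_FEATURES = 14  # candidate financial ratios, as in the paper

# Toy stand-in for cross-validated MSVM accuracy: rewards a hidden
# "good" feature subset and kernel parameters near a hidden optimum.
GOOD_FEATURES = {1, 4, 6, 9, 12}

def fitness(chrom):
    mask, log_c, log_gamma = chrom
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in GOOD_FEATURES)
    misses = sum(mask) - hits
    param_penalty = abs(log_c - 2.0) + abs(log_gamma + 3.0)
    return hits - 0.5 * misses - 0.2 * param_penalty

def random_chrom():
    mask = [random.randint(0, 1) for _ in range(N_FEATURES)]
    return (mask, random.uniform(-5, 5), random.uniform(-8, 2))

def crossover(a, b):
    # Single-point crossover on the feature mask; parameters inherited whole.
    cut = random.randrange(1, N_FEATURES)
    mask = a[0][:cut] + b[0][cut:]
    return (mask, random.choice([a[1], b[1]]), random.choice([a[2], b[2]]))

def mutate(chrom, rate=0.1):
    # Bit flips on the mask, Gaussian jitter on the kernel parameters.
    mask = [1 - bit if random.random() < rate else bit for bit in chrom[0]]
    jitter = lambda x: x + random.gauss(0, 0.5) if random.random() < rate else x
    return (mask, jitter(chrom[1]), jitter(chrom[2]))

def run_ga(pop_size=30, generations=40):
    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # selection (elitist)
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = run_ga()
```

Because the elite half of each generation is carried over unchanged, the best fitness never decreases, which mirrors the gradual-improvement behaviour described in the abstract.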

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly. Data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. This has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are not only easy to collect openly but also affect business. In marketing, real-world information from customers is gathered from websites rather than surveys. Whether a website's posts are positive or negative is reflected in customer response and sales, so firms try to identify this information. However, many reviews on a website are not well written and can be difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the pretrained IMDB review data set.
First, for the text classification algorithms related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and Gradient Boost are adopted as comparative models. Second, deep learning has demonstrated discriminative features that can extract complex features of data. Representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but does not consider sequential data attributes. RNN handles order well because it takes the time information of the data into account, but it suffers from the long-term dependency problem. To solve the problem of long-term dependence, LSTM is used. For comparison, CNN and LSTM were chosen as simple deep learning models. In addition to classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although there are many parameters for these algorithms, we examined the relationship between parameter values and precision to find the optimal combination, and tried to figure out how well the models work for sentiment analysis and how they work. This study proposes integrated CNN and LSTM algorithms to extract the positive and negative features of text. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers and massively parallel processing. LSTM is not capable of highly parallel processing. Like faucets, the LSTM's input, output, and forget gates can be opened and closed at a desired time. These gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the long-term dependency problem.
Furthermore, when LSTM is used on top of CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be designed simultaneously. The combined CNN-LSTM achieved 90.33% accuracy. It is slower than CNN, but faster than LSTM. The presented model was more accurate than the other models. In addition, each word embedding layer can be improved when training the kernel step by step. CNN-LSTM can compensate for the weakness of each model, and has the advantage of improving learning layer by layer using the end-to-end structure of LSTM. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
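The CNN-to-LSTM pipeline described above can be illustrated with a minimal pure-Python forward pass. Everything here is a toy stand-in for the paper's full model: a one-dimensional convolution and max pooling extract local features from a hypothetical pre-embedded "review", and a single-unit LSTM with invented weights consumes the pooled sequence. The gate equations are the standard LSTM ones.

```python
import math

def conv1d(seq, kernel):
    # Valid-mode 1D convolution over a scalar sequence.
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def max_pool(seq, size=2):
    # Non-overlapping max pooling.
    return [max(seq[i:i + size]) for i in range(0, len(seq) - size + 1, size)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm(seq, w):
    """Single-unit LSTM over a scalar sequence.

    w maps gate name -> (input weight, recurrent weight, bias)."""
    h, c = 0.0, 0.0
    for x in seq:
        i = sigmoid(w['i'][0] * x + w['i'][1] * h + w['i'][2])   # input gate
        f = sigmoid(w['f'][0] * x + w['f'][1] * h + w['f'][2])   # forget gate
        o = sigmoid(w['o'][0] * x + w['o'][1] * h + w['o'][2])   # output gate
        g = math.tanh(w['g'][0] * x + w['g'][1] * h + w['g'][2]) # candidate
        c = f * c + i * g       # memory cell: gated write plus gated retention
        h = o * math.tanh(c)    # gated read-out
    return h

# Toy "review" as a pre-embedded scalar sequence; weights are hypothetical.
review = [0.2, -0.1, 0.8, 0.5, -0.3, 0.9, 0.1, -0.6]
features = max_pool([max(0.0, v) for v in conv1d(review, [0.5, 1.0, 0.5])])
weights = {g: (1.0, 0.5, 0.0) for g in 'ifog'}
score = sigmoid(lstm(features, weights))   # > 0.5 read as positive sentiment
```

The CNN stage condenses the 8-step input into 3 pooled features, which is exactly the "spatial then temporal" ordering the abstract describes.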

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.137-154 / 2018
  • Animal infectious diseases, such as avian influenza and foot-and-mouth disease, occur almost every year and cause huge economic and social damage to the country. In order to prevent this, the quarantine authorities have made various human and material endeavors, but the infectious diseases have continued to occur. Avian influenza was first identified in 1878 and has risen as a national issue due to its high lethality. Foot-and-mouth disease is considered the most critical animal infectious disease internationally. In nations free of the disease, foot-and-mouth disease is recognized as an economic or political disease because it restricts international trade by complicating the import of processed and non-processed livestock, and because quarantine is costly. In a society where the whole nation is connected as one zone of life, the spread of infectious disease cannot be fully prevented. Hence, there is a need to detect the occurrence of the disease and take action before it spreads. Upon definite diagnosis of either a human or an animal infectious disease, an epidemiological investigation is implemented and measures are taken to prevent the spread of the disease according to the investigation results. The foundation of an epidemiological investigation is figuring out where a subject has been and whom he or she has met. From a data perspective, this can be defined as an action taken to predict the cause of the disease outbreak, the outbreak location, and future infections, by collecting and analyzing geographic data and relation data. Recently, attempts have been made to develop prediction models of infectious disease using Big Data and deep learning technology, but model-building studies and case reports are still scarce.
KT and the Ministry of Science and ICT have been carrying out big data projects since 2014, as part of national R&D projects, to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis using vehicle movement data. After that, a more accurate prediction model was constructed using machine learning algorithms such as Logistic Regression, Lasso, Support Vector Machine, and Random Forest. In particular, the prediction model for 2017 added the risk of diffusion to facilities, and the performance of the model was improved by considering the hyper-parameters of the modeling in various ways. The Confusion Matrix and ROC Curve show that the model constructed in 2017 is superior to the earlier machine learning model. The difference between the 2016 model and the 2017 model is that visiting information on facilities such as feed factories and slaughterhouses, and information on bird livestock, which was limited to chicken and duck but has now been expanded to goose and quail, were used for analysis in the later model. In addition, an explanation of the results was added in 2017 to help the authorities in making decisions and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system constructed on the basis of hazardous vehicle movement, farm, and environment Big Data. The significance of this study is that it describes the evolution process of a prediction model using Big Data in the field, and the model is expected to be more complete if the form of the viruses is taken into consideration. This will contribute to data utilization and analysis model development in related fields. In addition, we expect that the system constructed in this study will provide more proactive and effective prevention.
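The Confusion Matrix and ROC Curve evaluation mentioned above can be reproduced in a few lines. The outbreak labels and model risk scores below are hypothetical, not from the study; the AUC is computed via the standard rank-comparison (Mann-Whitney) formulation.

```python
def confusion_matrix(y_true, y_pred):
    # Returns (true positives, false positives, false negatives, true negatives).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def roc_auc(y_true, scores):
    # AUC = probability that a random positive outranks a random negative.
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical farm-level outbreak labels and model risk scores.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.3, 0.5]
y_pred = [1 if s >= 0.5 else 0 for s in scores]

tp, fp, fn, tn = confusion_matrix(y_true, y_pred)
auc = roc_auc(y_true, scores)   # here 0.875
```

Unlike accuracy at a fixed 0.5 cutoff, the AUC summarizes ranking quality across all thresholds, which is why both views are typically reported together.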

Testing for Measurement Invariance of Fashion Brand Equity (패션브랜드 자산 측정모델의 등치테스트에 관한 연구)

  • Kim Haejung;Lim Sook Ja;Crutsinger Christy;Knight Dee
    • Journal of the Korean Society of Clothing and Textiles / v.28 no.12 s.138 / pp.1583-1595 / 2004
  • Simon and Sullivan (1993) estimated that clothing- and textile-related brand equity had the highest magnitude compared with any other industry category. This reflects the fact that fashion brands reinforce symbolic and social values and emotional characteristics that distinguish them from generic brands. Recently, Kim and Lim (2002) developed a fashion brand equity scale to measure a brand's psychometric properties. However, they suggested that additional psychometric tests were needed to compare the relative magnitude of each brand's equity. The purpose of this study was to identify the psychometric constructs of fashion brand equity and validate Kim and Lim's fashion brand equity scale using the measurement invariance test of cross-group comparison. First, we identified the constructs of fashion brand equity using confirmatory factor analysis through structural equation modeling. Second, we compared the relative magnitude of two brands' equity using the measurement invariance test of multi-group simultaneous factor analysis. Data were collected at six major universities in Seoul, Korea. There were 696 usable surveys for data analysis. The results showed that fashion brand equity was comprised of 16 items representing six dimensions: customer-brand resonance, customer feeling, customer judgment, brand imagery, brand performance, and brand awareness. Also, we could support the measurement invariance of the two brands' equities by configural and metric invariance tests. There were significant differences in five constructs' mean values. The greatest difference was in customer feeling; the smallest, in customer judgment.

Anisotropic radar crosshole tomography and its applications (이방성 레이다 시추공 토모그래피와 그 응용)

  • Kim Jung-Ho;Cho Seong-Jun;Yi Myeong-Jong
    • Korean Society of Exploration Geophysicists: Conference Proceedings / 2005.09a / pp.21-36 / 2005
  • Although the main geology of Korea consists of granite and gneiss, it is not uncommon to encounter anisotropy phenomena in crosshole radar tomography even when the basement is crystalline rock. To solve the anisotropy problem, we have developed and continuously upgraded an anisotropic inversion algorithm assuming a heterogeneous elliptic anisotropy to reconstruct three kinds of tomograms: tomograms of maximum and minimum velocities, and of the direction of the symmetry axis. In this paper, we discuss the developed algorithm and introduce some case histories on the application of anisotropic radar tomography in Korea. The first two case histories were conducted for the construction of infrastructure, and their main objective was to locate cavities in limestone. The last two were performed in a granite and gneiss area. The anisotropy in the granite area was caused by fine fissures aligned in the same direction, while that in the gneiss and limestone area was caused by the alignment of the constituent minerals. Through these case histories we show that the anisotropic characteristic itself gives us additional important information for understanding the internal status of basement rock. In particular, the anisotropy ratio, defined by the normalized difference between maximum and minimum velocities, as well as the direction of maximum velocity, are helpful in interpreting the borehole radar tomogram.
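The anisotropy ratio mentioned at the end is a one-line computation. The exact normalization used in the paper is not spelled out in the abstract, so dividing by the mean velocity below is an assumption; the velocity values are hypothetical.

```python
def anisotropy_ratio(v_max, v_min):
    """Normalized difference between maximum and minimum radar velocities.

    One common normalization divides by the mean velocity; the paper's
    exact definition may differ slightly."""
    return (v_max - v_min) / ((v_max + v_min) / 2.0)

# Hypothetical example: granite with fine aligned fissures (velocities in m/ns).
r = anisotropy_ratio(0.12, 0.10)
```

A ratio near zero indicates effectively isotropic rock, while larger values flag aligned fissures or mineral fabric of the kind described in the case histories.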


Criteria of Evaluating Clothing and Web Service on Internet Shopping Mall Related to Consumer Involvement (인터넷 쇼핑몰 이용자의 소비자 관여에 따른 의류제품 및 웹 서비스 평가기준에 관한 연구)

  • Lee, Kyung-Hoon;Park, Jae-Ok
    • Journal of the Korean Society of Clothing and Textiles / v.30 no.12 s.159 / pp.1747-1758 / 2006
  • Rapid development of information technology has influenced changes in every sector of human environments. One prominent change in the retail market is an increase of electronic stores, which has prompted practical and research interest in the product and store attributes that induce consumers to purchase products through electronic shopping. Therefore many marketers are paying much attention to the criteria for evaluating clothing and web service on internet shopping malls. The purpose of this study is to examine differences in clothing and web service criteria among consumer groups (High-Involvement & High-Ability, Low-Involvement & High-Ability, High-Involvement & Low-Ability, and Low-Involvement & Low-Ability) classified by consumer involvement and internet use ability. The subjects of this study were 305 people aged between 19 and 39, living in the Seoul and Gyeonggi-do area, and having experience in buying products through internet shopping. Statistical analyses used for this study were frequency, percentage, factor analysis, ANOVA, and the Duncan test. The results of this study were as follows: Regarding the criteria for evaluating clothing, the four different groups had significant differences in the esthetic, quality performance, and extrinsic criteria. Both the HIHA group and the HILA group showed similar results. They considered every criterion for evaluating clothing more important than the other groups did. Regarding the criteria for evaluating web service among the four different groups, there were significant differences in the factors related to shopping mall reliance, the product, satisfaction after purchase, and the promotion and policy criterion. Both the HIHA group and the HILA group showed similar results as well. They considered every criterion for evaluating web service more important than the other groups did.
In conclusion, the HI groups perceive relatively more risk factors that can occur during internet shopping. Therefore, internet shopping malls need to provide clothing that can satisfy the HI groups as well as make efforts to remove these risk factors on the internet.

Estimation of surface nitrogen dioxide mixing ratio in Seoul using the OMI satellite data (OMI 위성자료를 활용한 서울 지표 이산화질소 혼합비 추정 연구)

  • Kim, Daewon;Hong, Hyunkee;Choi, Wonei;Park, Junsung;Yang, Jiwon;Ryu, Jaeyong;Lee, Hanlim
    • Korean Journal of Remote Sensing / v.33 no.2 / pp.135-147 / 2017
  • We, for the first time, estimated daily and monthly surface nitrogen dioxide ($NO_2$) volume mixing ratio (VMR) using three regression models with the $NO_2$ tropospheric vertical column density (OMI-Trop $NO_2$ VCD) data obtained from the Ozone Monitoring Instrument (OMI) in Seoul, South Korea at the OMI overpass time (13:45 local time). The first linear regression model (M1) is a linear regression equation between OMI-Trop $NO_2$ VCD and in situ $NO_2$ VMR, whereas the second linear regression model (M2) incorporates boundary layer height (BLH), temperature, and pressure obtained from the Atmospheric Infrared Sounder (AIRS) together with OMI-Trop $NO_2$ VCD. The last models (M3M and M3D) are multiple linear regression equations that include OMI-Trop $NO_2$ VCD, BLH, and various meteorological data. In this study, we determined the three types of regression models for the training period between 2009 and 2011, and the performance of those regression models was evaluated via comparison with the surface $NO_2$ VMR data obtained from in situ measurements (in situ $NO_2$ VMR) in 2012. The monthly mean surface $NO_2$ VMRs estimated by M3M showed good agreement with those of in situ measurements (avg. R = 0.77). In terms of the daily (13:45 LT) $NO_2$ estimation, the highest correlations were found between the daily surface $NO_2$ VMRs estimated by M3D and in situ $NO_2$ VMRs (avg. R = 0.55). The surface $NO_2$ VMRs estimated by the three models tend to be underestimated. We also discuss the performance of these empirical models for surface $NO_2$ VMR estimation with respect to other statistical measures such as root mean square error (RMSE), mean bias, mean absolute error (MAE), and percent difference. This study shows the possibility of estimating surface $NO_2$ VMR using satellite measurements.
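The regression models M1 through M3 are all ordinary least squares fits; a minimal sketch via the normal equations is below. The predictor values (VCD, BLH) and the target are synthetic, and the actual predictor sets of M2/M3 are simplified here to two variables.

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (A^T A) b = A^T y."""
    A = [[1.0] + list(row) for row in X]          # prepend intercept column
    n = len(A[0])
    # Build the normal-equation system.
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
         for i in range(n)]
    v = [sum(A[k][i] * y[k] for k in range(len(A))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
            v[r] -= f * v[i]
    # Back substitution.
    b = [0.0] * n
    for i in reversed(range(n)):
        b[i] = (v[i] - sum(M[i][j] * b[j] for j in range(i + 1, n))) / M[i][i]
    return b

# Hypothetical predictors: tropospheric NO2 VCD and boundary layer height.
X = [(1.0, 0.5), (2.0, 0.4), (3.0, 0.8), (4.0, 0.3), (5.0, 0.9)]
y = [2.0 + 3.0 * vcd - 1.0 * blh for vcd, blh in X]  # exact linear target
coef = fit_linear(X, y)   # recovers intercept and slopes [2.0, 3.0, -1.0]
```

Because the synthetic target is exactly linear in the predictors, the fit recovers the generating coefficients, which makes the sketch easy to verify.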

Theoretical Study on Optimal Conditions for Absorbent Regeneration in CO2 Absorption Process (이산화탄소 흡수 공정에서 흡수액 최적 재생 조건에 대한 이론적 고찰)

  • Park, Sungyoul
    • Korean Chemical Engineering Research / v.50 no.6 / pp.1002-1007 / 2012
  • A considerable portion of energy demand is satisfied by the combustion of fossil fuels, and the consequent $CO_2$ emission is considered a main cause of global warming. As a technology option for $CO_2$ emission mitigation, the absorption process has been used in $CO_2$ capture from large-scale emission sources. Setting up optimal operating parameters in the $CO_2$ absorption and solvent regeneration units is important for better performance of the whole $CO_2$ absorption plant. Optimal operating parameters are usually selected through a large amount of actual operation data. However, a theoretical approach is also useful because arbitrary changes of process parameters are often limited for the sake of operational stability. In this paper, a theoretical approach based on vapor-liquid equilibrium is proposed to estimate the optimal operating conditions of a $CO_2$ absorption process. Two $CO_2$ absorption processes, using 12 wt% aqueous $NH_3$ solution and 20 wt% aqueous MEA solution, were investigated in this theoretical estimation of optimal operating conditions. The results showed that the $CO_2$ loading of the rich absorbent should be kept below 0.4 in the case of the 12 wt% aqueous $NH_3$ solution, but there was no such limitation on $CO_2$ loading in the case of the 20 wt% aqueous MEA solution. The optimal regeneration temperature was determined by the theoretical approach based on the $CO_2$ loadings of the rich and lean absorbent, which were determined so as to satisfy the amount of absorbed $CO_2$. The amount of heating medium at the optimal regeneration temperature is also determined to meet the difference in $CO_2$ loading between the rich and lean absorbent.
It was confirmed that the theoretical approach, which accurately estimated the optimal regeneration conditions of lab-scale $CO_2$ absorption using 12 wt% aqueous $NH_3$ solution, could also estimate those of the 20 wt% aqueous MEA solution, and could be used for the design and operation of $CO_2$ absorption processes using chemical absorbents.
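The rich/lean loading difference discussed above fixes the absorbent circulation rate through a simple steady-state mass balance; the sketch below illustrates that balance with hypothetical numbers, not values from the paper.

```python
def solvent_circulation_rate(co2_capture_rate, alpha_rich, alpha_lean):
    """Molar absorbent circulation needed to carry a given CO2 duty.

    Steady-state mass balance: each mole of circulating absorbent
    picks up (alpha_rich - alpha_lean) moles of CO2 per pass."""
    if alpha_rich <= alpha_lean:
        raise ValueError("rich loading must exceed lean loading")
    return co2_capture_rate / (alpha_rich - alpha_lean)

# Hypothetical example: capture 100 kmol/h of CO2 with the rich loading
# capped at 0.4 (the NH3-solution limit reported above) and a lean
# loading of 0.25 after regeneration.
flow = solvent_circulation_rate(100.0, 0.40, 0.25)  # kmol absorbent/h
```

The same balance shows why the regeneration temperature matters: a deeper regeneration (lower lean loading) widens the loading difference and reduces the circulation, and hence the sensible-heat duty of the heating medium.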

Building battery deterioration prediction model using real field data (머신러닝 기법을 이용한 납축전지 열화 예측 모델 개발)

  • Choi, Keunho;Kim, Gunwoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.243-264 / 2018
  • Although the worldwide battery market has recently been spurring the development of lithium secondary batteries, lead-acid batteries (rechargeable batteries), which perform well and can be reused, are consumed across a wide range of industries. However, lead-acid batteries have a serious problem: once degradation begins in even one of the several cells packed in a battery, deterioration of the whole battery progresses quickly. To overcome this problem, previous studies have attempted to identify the mechanism of battery deterioration in many ways. However, most previous studies have used data obtained in a laboratory, rather than data obtained in the real world, to analyze the mechanism of battery deterioration. Using real data can increase the feasibility and applicability of a study's findings. Therefore, this study aims to develop a model that predicts battery deterioration using data obtained in the real world. To this end, we collected data presenting changes of battery state by attaching sensors, which monitor the battery condition in real time, to dozens of golf carts operating on a real golf course. As a result, a total of 16,883 samples were obtained. We then developed a model that predicts a precursor phenomenon representing battery deterioration by analyzing the data collected from the sensors using machine learning techniques.
As initial independent variables, we used 1) inbound time of a cart, 2) outbound time of a cart, 3) duration (from outbound time to charge time), 4) charge amount, 5) used amount, 6) charge efficiency, 7) lowest temperature of battery cells 1 to 6, 8) lowest voltage of battery cells 1 to 6, 9) highest voltage of battery cells 1 to 6, 10) voltage of battery cells 1 to 6 at the beginning of operation, 11) voltage of battery cells 1 to 6 at the end of charge, 12) used amount of battery cells 1 to 6 during operation, 13) used amount of battery during operation (Max-Min), 14) duration of battery use, and 15) highest current during operation. Since the values of the per-cell independent variables (lowest temperature, lowest voltage, highest voltage, voltage at the beginning of operation, voltage at the end of charge, and used amount during operation) are similar across battery cells 1 to 6, we conducted principal component analysis using varimax orthogonal rotation in order to mitigate the multicollinearity problem. According to the results, we made new variables by averaging the values of the independent variables clustered together, and used them as the final independent variables instead of the original variables, thereby reducing the dimension. We used decision tree, logistic regression, and Bayesian network as algorithms for building prediction models. We also built prediction models using the bagging and the boosting of each of them, as well as Random Forest. Experimental results show that the prediction model using the bagging of decision trees yields the best accuracy of 89.3923%. This study has some limitations in that additional variables that affect battery deterioration, such as weather (temperature, humidity) and driving habits, were not considered; we would like to consider them in future research.
However, the battery deterioration prediction model proposed in the present study is expected to enable effective and efficient management of batteries used in the real field and to dramatically reduce the cost incurred by failing to detect battery deterioration.
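The winning "bagging of decision trees" model can be illustrated with a stripped-down version: bootstrap resampling plus majority voting over depth-1 trees (stumps). The single "lowest cell voltage" feature and the labels below are invented for illustration; the study's actual model used the PCA-reduced variables and full decision trees.

```python
import random

random.seed(0)

def fit_stump(X, y):
    """Best single-feature threshold split by classification accuracy."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            for sign in (1, -1):
                pred = [1 if sign * (row[f] - t) >= 0 else 0 for row in X]
                acc = sum(p == label for p, label in zip(pred, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, t, sign)
    _, f, t, sign = best
    return lambda row: 1 if sign * (row[f] - t) >= 0 else 0

def bagging(X, y, n_models=25):
    models = []
    n = len(X)
    for _ in range(n_models):
        idx = [random.randrange(n) for _ in range(n)]   # bootstrap sample
        models.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    def predict(row):                                   # majority vote
        votes = sum(m(row) for m in models)
        return 1 if votes * 2 > len(models) else 0
    return predict

# Toy data: feature 0 is a noisy deterioration indicator,
# e.g. lowest cell voltage (V); 1 = deterioration precursor observed.
X = [(1.9,), (2.0,), (2.1,), (2.2,), (2.3,), (2.4,)]
y = [1, 1, 1, 0, 0, 0]
model = bagging(X, y)
```

Averaging many high-variance learners trained on bootstrap resamples is what stabilizes the prediction, which is the same reason bagging beat the single decision tree in the study's experiments.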

Bond Characteristics and Splitting Bond Stress on Steel Fiber Reinforced Reactive Powder Concrete (강섬유로 보강된 반응성 분체 콘크리트의 부착특성과 쪼갬인장강도)

  • Choi, Hyun-Ki;Bae, Baek-Il;Choi, Chang-Sik
    • Journal of the Korea Concrete Institute / v.26 no.5 / pp.651-660 / 2014
  • Structural members using ultra-high-strength concrete, which is usually used together with steel fiber, are designed with guidelines based on several investigations of SF-RPC (steel fiber reinforced reactive powder concrete). However, there is no clear design method yet. In particular, SF-RPC members should be cast with steam curing (at 90 degrees Celsius), and SF-RPC is usually used in precast members. Although the most important design parameter is the development of bond between SF-RPC and steel reinforcement (rebar), the SF-RPC member design guidelines provide no clear design method for it. This raises controversial problems of safety and economy. Therefore, in order to make designs both safer and more optimal, in this study we investigated the bond stress between steel rebar and SF-RPC through testing. Test results were compared with a previously suggested analysis method. Tests were carried out as direct pull-out tests with the variables of compressive strength of concrete, concrete cover, and inclusion ratio of steel fiber. According to the test results, the bond stress between steel rebar and SF-RPC increased with increasing compressive strength of concrete and concrete cover. The rate of increase of bond stress decreased significantly with increasing compressive strength of SF-RPC and concrete cover. A 1% volume fraction of steel fiber roughly doubled the bond stress between steel rebar and SF-RPC, but a 2% volume fraction did not affect the bond stress significantly. There are no exact or empirical equations for the evaluation of SF-RPC bond stress. In order to enable safe bond design of SF-RPC precast members, the analysis method for bond stress previously suggested by Tepfers was evaluated. This method showed good agreement with the test results, especially for steel fiber reinforced RPC.
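The direct pull-out test described above yields an average bond stress by dividing the pull-out force by the bonded bar surface area; this is the standard reduction, sketched below with hypothetical numbers (the paper's specimen dimensions are not given in the abstract).

```python
import math

def average_bond_stress(pullout_force, bar_diameter, embedment_length):
    """Average bond stress from a direct pull-out test:
    tau = P / (pi * d * l_b), force divided by the bonded bar surface area."""
    return pullout_force / (math.pi * bar_diameter * embedment_length)

# Hypothetical example: 60 kN pull-out force, D16 bar, 80 mm embedment.
# With consistent units (N, mm) the result is in MPa.
tau = average_bond_stress(60e3, 16.0, 80.0)
```

Keeping the embedment short relative to the bar diameter is what justifies treating the bond stress as uniform, which is why direct pull-out specimens are proportioned that way.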