• Title/Summary/Keyword: Information bias

Software Reliability Growth Modeling in the Testing Phase with an Outlier Stage (하나의 이상구간을 가지는 테스팅 단계에서의 소프트웨어 신뢰도 성장 모형화)

  • Park, Man-Gon;Jung, Eun-Yi
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.10
    • /
    • pp.2575-2583
    • /
    • 1998
  • The production of highly reliable software systems and their performance evaluation have become important concerns in the software industry. Software evaluation has mainly been carried out in terms of both the reliability and the performance of the software system. Software reliability is the probability that no software error occurs over a fixed time interval during the software testing phase. Theoretical software reliability models are sometimes unsuitable for the practical testing phase, in which a software error at a certain testing stage arises from causes such as imperfect debugging or abnormal software correction. Such a software testing stage needs to be considered an outlying stage, and we can assume that software reliability does not improve in this outlying testing stage because of a nuisance factor. In this paper, we discuss Bayesian software reliability growth modeling and an estimation procedure in the presence of an unidentified outlying software testing stage, based on a modification of the Jelinski-Moranda model. We also derive the Bayes estimators of the software reliability parameters under the squared error loss function, assuming prior information. In addition, we evaluate the proposed software reliability growth model with an unidentified outlying stage in an exchangeable model according to the values of the nuisance parameter, using the accuracy, bias, trend, and noise metrics as quantitative evaluation criteria through computer simulation.
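As context for the model being modified, here is a minimal sketch of classical Jelinski-Moranda estimation, assuming exponential inter-failure times with rate $\lambda_i={\phi}(N-i+1)$ after the $(i-1)$-th fix. The grid-based posterior means below merely stand in for the paper's closed-form Bayes estimators under squared error loss; the data and grids are invented for illustration.

```python
import numpy as np

def jm_log_likelihood(t, N, phi):
    """Log-likelihood of inter-failure times t under Jelinski-Moranda."""
    n = len(t)
    if N < n:
        return -np.inf
    rates = phi * (N - np.arange(n))  # lambda_i = phi * (N - i + 1), i = 1..n
    return np.sum(np.log(rates) - rates * t)

def bayes_estimates(t, N_grid, phi_grid):
    """Grid approximation to the posterior means of N and phi (flat priors).
    Under squared error loss, the Bayes estimate is the posterior mean."""
    logpost = np.array([[jm_log_likelihood(t, N, phi) for phi in phi_grid]
                        for N in N_grid])
    post = np.exp(logpost - logpost.max())
    post /= post.sum()
    N_hat = np.sum(post.sum(axis=1) * N_grid)      # marginal mean over N
    phi_hat = np.sum(post.sum(axis=0) * phi_grid)  # marginal mean over phi
    return N_hat, phi_hat

# toy inter-failure times (arbitrary units), not from the paper
t = np.array([7.0, 11.0, 13.0, 20.0, 28.0, 40.0])
print(bayes_estimates(t, N_grid=np.arange(6, 40),
                      phi_grid=np.linspace(0.001, 0.05, 200)))
```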

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each local receptive field. This means that all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep learning networks took weeks several years ago, but thanks to progress in GPUs and algorithm enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers.
This makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, that can make the gradient extremely unstable and hard to learn from. It has been possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
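As a concrete rendering of the three convolutional ideas listed in the abstract (illustrative only, not from the paper), the numpy sketch below slides one shared kernel over an image so that each position is a local receptive field, applies a nonlinearity, and max-pools the resulting feature map.

```python
import numpy as np

def conv2d(image, kernel, bias=0.0):
    """Valid 2D convolution with a single shared kernel."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]          # local receptive field
            out[i, j] = np.sum(patch * kernel) + bias  # same weights everywhere
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling that simplifies the feature map."""
    H, W = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:H, :W].reshape(H // size, size, W // size, size)
    return x.max(axis=(1, 3))

image = np.random.rand(8, 8)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])         # one shared feature detector
feature_map = np.maximum(conv2d(image, kernel), 0)   # ReLU nonlinearity
print(max_pool(feature_map).shape)                   # (3, 3)
```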

A Checklist to Improve the Fairness in AI Financial Service: Focused on the AI-based Credit Scoring Service (인공지능 기반 금융서비스의 공정성 확보를 위한 체크리스트 제안: 인공지능 기반 개인신용평가를 중심으로)

  • Kim, HaYeong;Heo, JeongYun;Kwon, Hochang
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.3
    • /
    • pp.259-278
    • /
    • 2022
  • With the spread of Artificial Intelligence (AI), various AI-based services are expanding in the financial sector, such as service recommendation, automated customer response, fraud detection systems (FDS), and credit scoring services. At the same time, problems related to reliability and unexpected social controversy are also occurring due to the nature of data-based machine learning. Based on this background, this study aims to contribute to improving trust in AI-based financial services by proposing a checklist to secure fairness in AI-based credit scoring services, which directly affect consumers' financial lives. Among the key elements of trustworthy AI, such as transparency, safety, accountability, and fairness, fairness was selected as the subject of the study so that everyone can enjoy the benefits of automated algorithms from the perspective of inclusive finance, without social discrimination. Through literature research, we divided the entire fairness-related operation process into three areas: data, algorithm, and user. For each area we constructed four detailed considerations for evaluation, resulting in 12 checklist items. The relative importance and priority of the categories were evaluated through the analytic hierarchy process (AHP) with three different groups representing the financial stakeholders: financial field workers, artificial intelligence field workers, and general users. The importance ratings were classified and analyzed by stakeholder group, and from a practical perspective, specific checks were identified, such as feasibility verification for using learning data and non-financial information, and monitoring of newly inflowing data. Moreover, financial consumers in general were found to place high importance on the accuracy of result analysis and on bias checks. We expect these results to contribute to the design and operation of fair AI-based financial services.
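For readers unfamiliar with the AHP step the abstract describes, here is a hypothetical sketch of how priority weights for the three checklist areas (data, algorithm, user) can be derived from a pairwise comparison matrix; the judgments below are invented for illustration and are not the paper's data.

```python
import numpy as np

def ahp_weights(A):
    """Priority weights from the principal eigenvector of the comparison matrix."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

def consistency_ratio(lam_max, n):
    """CR = CI / RI; CR < 0.1 is the conventional acceptability threshold."""
    random_index = {3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's RI values
    ci = (lam_max - n) / (n - 1)
    return ci / random_index[n]

# pairwise judgments on Saaty's 1-9 scale (illustrative only)
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3.0, 1.0, 2.0],
              [1 / 5.0, 1 / 2.0, 1.0]])
w, lam_max = ahp_weights(A)
print("weights:", np.round(w, 3))
print("CR:", round(consistency_ratio(lam_max, 3), 3))
```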

The Effect of Common Features on Consumer Preference for a No-Choice Option: The Moderating Role of Regulatory Focus (재몰유선택적정황하공동특성대우고객희호적영향(在没有选择的情况下共同特性对于顾客喜好的影响): 조절초점적조절작용(调节焦点的调节作用))

  • Park, Jong-Chul;Kim, Kyung-Jin
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.1
    • /
    • pp.89-97
    • /
    • 2010
  • This study investigates the effects of common features on a no-choice option with respect to regulatory focus theory. The primary interest is in three factors and their interrelationship: common features, the no-choice option, and regulatory focus. Prior studies have compiled a vast body of research in these areas. First, the "common features effect" has been observed by many noted marketing researchers. Tversky (1972) proposed the seminal theory, the EBA model: elimination by aspects. According to this theory, consumers are prone to focus only on unique features during comparison processing, thereby dismissing any common features as redundant information. Recently, however, more provocative ideas have challenged the EBA model by asserting that common features really do affect consumer judgment. Chernev (1997) first reported that adding common features mitigates the choice gap because of the increasing perception of similarity among alternatives. Later, however, Chernev (2001) published a critical development of his prior perspective, proposing that common features may impose a cognitive load on consumers, who may therefore prefer heuristic processing to systematic processing. This brings one question to the forefront: Do common features affect consumer choice? If so, what are the concrete effects? This study tries to answer the question with respect to the no-choice option and regulatory focus. Second, some researchers hold that the no-choice option is another best alternative for consumers, who are likely to avoid having to choose in the context of knotty trade-off settings or mental conflicts. Hope for the future may also increase the no-choice option in the context of optimism or the expectancy of a more satisfactory alternative appearing later. Other issues reported in this domain are time pressure, consumer confidence, and the number of alternatives (Dhar and Nowlis 1999; Lin and Wu 2005; Zakay and Tsal 1993). This study casts the no-choice option in yet another perspective: the interactive effects between common features and regulatory focus. Third, regulatory focus theory is a very popular theme in recent marketing research. It suggests that consumers have two opposing focal goals: promotion vs. prevention. A promotion focus deals with the concepts of hope, inspiration, achievement, or gain, whereas a prevention focus involves duty, responsibility, safety, or loss aversion. Thus, while consumers with a promotion focus tend to take risks for gain, the same does not hold true for a prevention focus. Regulatory focus theory predicts consumers' emotions, creativity, attitudes, memory, performance, and judgment, as documented in a vast field of marketing and psychology articles. The perspective of the current study, exploring consumer choice and common features, is a somewhat creative viewpoint in the area of regulatory focus. These reviews inspire this study of the possible interaction between regulatory focus and common features with a no-choice option. Specifically, adding common features rather than omitting them may increase the no-choice ratio in the choice setting only for prevention-focused consumers, but the reverse may hold for promotion-focused consumers. The reasoning is that when prevention-focused consumers come in contact with common features, they may perceive higher similarity among the alternatives. This conflict among similar options would increase the no-choice ratio.
Promotion-focused consumers, however, may perceive common features as a cue for confirmation bias. Their confirmatory processing would then make their prior preference more robust, so the no-choice ratio may shrink. This logic is verified in two experiments. The first is a $2{\times}2$ between-subjects design (common features present or absent X regulatory focus) using digital cameras as the relevant stimulus, a product very familiar to young subjects. Specifically, the regulatory focus variable was median-split based on an eleven-item measure. Common features included zoom, weight, memory, and battery, whereas the other two attributes (pixels and price) were unique features. Results supported our hypothesis that adding common features enhanced the no-choice ratio only for prevention-focused consumers, not for those with a promotion focus. These results confirm our hypothesis of interactive effects between regulatory focus and common features. Prior research had suggested that including common features had an effect on consumer choice, but this study shows that the effect of common features on choice depends on consumer segmentation. The second experiment replicated the results of the first and was identical to it except for two changes: a priming manipulation and a different stimulus. For the promotion focus condition, subjects had to write an essay using words such as profit, inspiration, pleasure, achievement, development, hedonic, change, and pursuit. For the prevention condition, they had to use words such as persistence, safety, protection, aversion, loss, responsibility, and stability. The stimulus, a room for rent, had common features (sunshine, facilities, ventilation) and unique features (distance-time and building state). These attributes varied in level and valence to replicate the prior experiment. Our hypothesis was supported again in the results, and the interaction effects between regulatory focus and common features were significant. Thus, these studies show the dual effects of common features on consumer choice with a no-choice option. Adding common features may enhance or mitigate no-choice, contradictory as it may sound. Under a prevention focus, adding common features is likely to enhance the no-choice ratio because of increased mental conflict; under a promotion focus, it is prone to shrink the ratio, perhaps because of confirmation bias. The research has practical and theoretical implications for marketers, who may need to consider common features carefully in a practical display context according to consumer segmentation (i.e., promotion vs. prevention focus). Theoretically, the results suggest a meaningful moderator variable between common features and no-choice, in that the effect on the no-choice option is partly dependent on regulatory focus. This variable corresponds not only to a chronic perspective but also to a situational perspective in our hypothesis domain. Finally, in light of some shortcomings of the research, such as overlooked attribute importance, the low ratio of no-choice, and external validity issues, we hope it encourages future studies to explore the little-known world of the no-choice option.
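For the shape of the analysis only, the sketch below shows one standard way such a $2{\times}2$ crossover interaction on a binary no-choice outcome could be tested: a logistic regression with an interaction term. The data are simulated to mimic the hypothesized pattern; this is not the authors' analysis, and all variable names are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
common = rng.integers(0, 2, n)        # 1 = common features added
prevention = rng.integers(0, 2, n)    # 1 = prevention focus
# hypothesized pattern: common features raise no-choice only under prevention
logit_p = -1.0 + 1.2 * common * prevention - 0.6 * common * (1 - prevention)
p = 1 / (1 + np.exp(-logit_p))
no_choice = rng.binomial(1, p)

df = pd.DataFrame(dict(no_choice=no_choice, common=common, prevention=prevention))
model = smf.logit("no_choice ~ common * prevention", data=df).fit(disp=0)
print(model.summary().tables[1])      # the interaction term carries the effect
```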

The Characteristics and Performances of Manufacturing SMEs that Utilize Public Information Support Infrastructure (공공 정보지원 인프라 활용한 제조 중소기업의 특징과 성과에 관한 연구)

  • Kim, Keun-Hwan;Kwon, Taehoon;Jun, Seung-pyo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.1-33
    • /
    • 2019
  • Small and medium sized enterprises (hereinafter SMEs) are already at a competitive disadvantage compared to large companies with more abundant resources. Manufacturing SMEs not only need a great deal of information for new product development, sustainable growth, and survival, but also seek networking to overcome resource constraints; yet they face limitations due to their size. In a new era in which connectivity increases the complexity and uncertainty of the business environment, SMEs are increasingly urged to find information and solve networking problems. To address these problems, government-funded research institutes play an important role in solving the information asymmetry problem of SMEs. The purpose of this study is to identify the differentiating characteristics of SMEs that utilize the public information support infrastructure provided to SMEs to enhance their innovation capacity, and how such use contributes to corporate performance. We argue that an infrastructure for providing information support to SMEs is needed as part of the effort to strengthen the role of government-funded institutions; in this study, we specifically identify the target of such a policy and empirically demonstrate the effects of such policy-based efforts. Our goal is to help establish strategies for building the information supporting infrastructure. To achieve this purpose, we first classified the characteristics of SMEs found to utilize the information supporting infrastructure provided by government-funded institutions. This allows us to verify whether selection bias appears in the analyzed group, which helps clarify the interpretative limits of our study results. Next, we performed mediator and moderator effect analyses for multiple variables to analyze the process through which the use of the information supporting infrastructure led to an improvement in external networking capabilities and resulted in enhanced product competitiveness. This analysis helps identify the key factors we should focus on when offering indirect support to SMEs through the information supporting infrastructure, which in turn helps us more efficiently manage research related to SME supporting policies implemented by government-funded institutions. The results of this study are as follows. First, SMEs that used the information supporting infrastructure were found to differ significantly in size from domestic R&D SMEs, but there was no significant difference in the cluster analysis that considered various variables. Based on these findings, we confirmed that SMEs that use the information supporting infrastructure are larger and include a relatively higher proportion of companies that transact to a greater degree with large companies, compared to the general population of SMEs. We also found that companies already receiving support from the information infrastructure include a high concentration of companies that need collaboration with government-funded institutions.
Secondly, among the SMEs that use the information supporting infrastructure, we found that increasing external networking capabilities contributed to enhancing product competitiveness; this was not the effect of direct assistance, but an indirect contribution made by increasing open marketing capabilities: in other words, an indirect-only mediator effect. Also, the number of times a company received additional support in this process through mentoring related to information utilization was found to have a mediated moderator effect on improving external networking capabilities and, in turn, strengthening product competitiveness. The results of this study provide several insights for establishing policies. The findings on KISTI's information support infrastructure may suggest that it mainly reaches groups already positioned to achieve good performance. Accordingly, the government should set clear priorities on whether to support underdeveloped companies or to help strong performers do better. Through our research, we have identified how the public information infrastructure contributes to product competitiveness, from which we can draw some policy implications. First, the public information support infrastructure should have the capability to enhance the ability to interact with, or to find, the experts who provide the required information. Second, if the utilization of the public information support (online) infrastructure is effective, it is not necessary to continuously provide informational mentoring, a parallel offline support; rather, offline support such as mentoring should be used as a device for monitoring abnormal symptoms. Third, SMEs should improve their ability to utilize the infrastructure, because the effects of enhancing networking capacity and product competitiveness through the public information support infrastructure appear in most types of companies rather than only in specific SMEs.
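As a rough illustration of the indirect-only mediation the abstract reports (infrastructure use leading to networking capability, which leads to product competitiveness), the sketch below estimates the two path coefficients with OLS on simulated placeholder data; the paper's actual models, measures, and moderated-mediation tests differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
infra_use = rng.normal(size=n)
networking = 0.5 * infra_use + rng.normal(size=n)        # path a: X -> M
# indirect-only pattern: competitiveness depends on M, with no direct X path
competitiveness = 0.6 * networking + rng.normal(size=n)  # path b: M -> Y

df = pd.DataFrame(dict(x=infra_use, m=networking, y=competitiveness))
a = smf.ols("m ~ x", df).fit().params["x"]
fit_y = smf.ols("y ~ x + m", df).fit()
b, direct = fit_y.params["m"], fit_y.params["x"]
print(f"indirect effect a*b = {a * b:.3f}, direct effect = {direct:.3f}")
```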

A Study of Six Sigma and Total Error Allowable in Chematology Laboratory (6 시그마와 총 오차 허용범위의 개발에 대한 연구)

  • Chang, Sang-Wu;Kim, Nam-Yong;Choi, Ho-Sung;Kim, Yong-Whan;Chu, Kyung-Bok;Jung, Hae-Jin;Park, Byong-Ok
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.37 no.2
    • /
    • pp.65-70
    • /
    • 2005
  • The specifications of the CLIA analytical tolerance limits are consistent with the performance goals in Six Sigma Quality Management. Six sigma analysis determines performance quality from bias and precision statistics, and shows whether a method meets the criteria for six sigma performance. Performance standards calculate allowable total error from several different criteria. Six sigma means six standard deviations from the target or mean value, and about 3.4 failures per million opportunities for failure. The Sigma Quality Level is an indicator of process centering and process variation relative to the total error allowable. The tolerance specification is replaced by a total error specification, which is a common form of quality specification for a laboratory test. The CLIA criteria for acceptable performance in proficiency testing events are given in the form of an allowable total error, TEa; thus there is a published list of TEa specifications for regulated analytes. In terms of TEa, Six Sigma Quality Management sets a precision goal of TEa/6 and an accuracy goal of 1.5 (TEa/6). This concept is based on the proficiency testing specification of target value +/-3s, TEa from reference intervals, biological variation, and peer-group median surveys. We applied rules that calculate TEa as a fraction of a reference interval and from peer-group median surveys. We developed the total error allowable from peer-group survey results and the US CLIA '88 rules for 19 clinical chemistry items: TP, ALB, T.B, ALP, AST, ALT, CL, LD, K, Na, CRE, BUN, T.C, GLU, GGT, CA, phosphorus, UA, and TG. The results were as follows. Sigma levels versus TEa, assessed from the peer-group median CV of each item by group mean, fitted within six sigma tolerance limits as follows: TP ($6.1{\sigma}$/9.3%), ALB ($6.9{\sigma}$/11.3%), T.B ($3.4{\sigma}$/25.6%), ALP ($6.8{\sigma}$/31.5%), AST ($4.5{\sigma}$/16.8%), ALT ($1.6{\sigma}$/19.3%), CL ($4.6{\sigma}$/8.4%), LD ($11.5{\sigma}$/20.07%), K ($2.5{\sigma}$/0.39mmol/L), Na ($3.6{\sigma}$/6.87mmol/L), CRE ($9.9{\sigma}$/21.8%), BUN ($4.3{\sigma}$/13.3%), UA ($5.9{\sigma}$/11.5%), T.C ($2.2{\sigma}$/10.7%), GLU ($4.8{\sigma}$/10.2%), GGT ($7.5{\sigma}$/27.3%), CA ($5.5{\sigma}$/0.87mmol/L), IP ($8.5{\sigma}$/13.17%), TG ($9.6{\sigma}$/17.7%). Items whose peer-group survey median CV in the Korean External Assessment exceeded the CLIA criteria were CL (8.45%/5%), BUN (13.3%/9%), CRE (21.8%/15%), T.B (25.6%/20%), and Na (6.87mmol/L/4mmol/L). Items below the criteria were TP (9.3%/10%), AST (16.8%/20%), ALT (19.3%/20%), K (0.39mmol/L/0.5mmol/L), UA (11.5%/17%), Ca (0.87mg/dL/1mg/dL), and TG (17.7%/25%). The TEa specifications agreed for 14 of the 17 items (82.35%). We found that the sigma level increases as the total error allowable increases, and concluded that the goal set for the total error allowable affects the evaluation of sigma metrics even when the process itself remains the same.
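The sigma-metric arithmetic the abstract applies can be summarized in a few lines; the conventional clinical-laboratory formula is ${\sigma}=(TEa-|bias|)/CV$. The numbers below are illustrative, not the study's data.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma quality level from allowable total error, bias, and precision,
    all expressed in percent (or in the same concentration units)."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g. with an illustrative TEa of 10%, 1% bias, and 1.5% CV:
print(round(sigma_metric(tea_pct=10.0, bias_pct=1.0, cv_pct=1.5), 2))  # 6.0
```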

An Empirical Study on Korean Stock Market using Firm Characteristic Model (한국주식시장에서 기업특성모형 적용에 관한 실증연구)

  • Kim, Soo-Kyung;Park, Jong-Hae;Byun, Young-Tae;Kim, Tae-Hyuk
    • Management & Information Systems Review
    • /
    • v.29 no.2
    • /
    • pp.1-25
    • /
    • 2010
  • This study empirically tests the determinants of stock returns in the Korean stock market by applying the multi-factor model proposed by Haugen and Baker (1996). Regression models were developed using 16 variables related to liquidity, risk, historical price, price level, and profitability as independent variables, and 690 monthly stock returns as the dependent variable. For the statistical analysis, the data were collected from the Kis Value database, and the tests of forecasting power in this study minimized, as far as possible, the various biases discussed in the literature. The statistical results indicated that: 1) liquidity, one-month excess return, three-month excess return, PER, ROE, and volatility of total return affect stock returns simultaneously; 2) liquidity, one-month excess return, three-month excess return, six-month excess return, PSR, PBR, ROE, and EPS have an antecedent influence on stock returns, while realized returns of decile portfolios increase in proportion to predicted returns, supporting the earlier study by Haugen and Baker (1996) and indicating that the firm-characteristic model predicts stock returns better than the CAPM; 3) the firm-characteristic model has better predictive power than the Fama-French three-factor model, which indicates that a portfolio constructed based on this model can achieve excess returns. This study found that expected return factor models are accurate, which is consistent with results from other countries. There exists a surprising degree of commonality in the factors that are most important in determining expected returns across different stocks.
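The expected-return factor model approach can be sketched as monthly cross-sectional regressions whose averaged coefficients generate predicted returns for decile portfolios. The sketch below is a generic Fama-MacBeth-style illustration on simulated placeholder data, not the authors' exact procedure; sample sizes and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stocks, n_months, n_chars = 690, 60, 16
X = rng.normal(size=(n_months, n_stocks, n_chars))  # lagged firm characteristics
true_b = rng.normal(scale=0.1, size=n_chars)
r = X @ true_b + rng.normal(scale=1.0, size=(n_months, n_stocks))  # returns

# monthly cross-sectional OLS, then average the coefficients over months
betas = np.array([np.linalg.lstsq(X[t], r[t], rcond=None)[0]
                  for t in range(n_months)])
b_bar = betas.mean(axis=0)

# form decile portfolios by predicted return (in practice, out-of-sample)
predicted = X[-1] @ b_bar
edges = np.quantile(predicted, np.linspace(0.1, 0.9, 9))
deciles = np.digitize(predicted, edges)
for d in range(10):
    print(d + 1, round(r[-1][deciles == d].mean(), 3))  # realized mean by decile
```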

Estimation of Surface Solar Radiation using Ground-based Remote Sensing Data on the Seoul Metropolitan Area (수도권지역의 지상기반 원격탐사자료를 이용한 지표면 태양에너지 산출)

  • Jee, Joon-Bum;Min, Jae-Sik;Lee, Hankyung;Chae, Jung-Hoon;Kim, Sangil
    • Journal of the Korean earth science society
    • /
    • v.39 no.3
    • /
    • pp.228-240
    • /
    • 2018
  • Solar energy is calculated using meteorological (14 stations), ceilometer (2 stations), and microwave radiometer (MWR, 7 stations) data observed by the Weather Information Service Engine (WISE) over the Seoul metropolitan area. The cloud optical thickness and cloud fraction are calculated using the backscattering coefficient (BSC) of the ceilometer and the liquid water path of the MWR. Surface solar energy is then calculated using a solar radiation model with cloud fraction from the ceilometer and the MWR. The estimated solar energy is underestimated compared to observations at both the Jungnang and Gwanghwamun stations. In linear regression analysis, the slope is less than 0.8 and the bias is negative, below $-20W/m^2$. The solar energy estimated using the MWR shows more improvement (coefficient of determination averaging $R^2=0.8$ and root mean square error averaging $RMSE=110W/m^2$) than that using the ceilometer. The monthly cloud fraction calculated from the ceilometer is greater by more than 0.09, and the corresponding solar energy lower by up to $50W/m^2$, compared to the MWR. While there are differences depending on location, the RMSE of the estimated solar radiation exceeds $50W/m^2$ in July and September compared to other months. As a result, the estimation of daily accumulated solar radiation shows the highest correlation at the Gwanghwamun station ($R^2=0.80$, RMSE=2.87 MJ/day) and the lowest at the Gooro station ($R^2=0.63$, RMSE=4.77 MJ/day).
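The verification statistics the abstract reports (regression slope, bias, $R^2$, RMSE of estimates against observations) reduce to a few lines of arithmetic; the sketch below uses placeholder values, not the study's data.

```python
import numpy as np

def verify(obs, est):
    """Slope/intercept of a linear fit of estimates on observations,
    plus mean bias, RMSE, and coefficient of determination."""
    slope, intercept = np.polyfit(obs, est, 1)
    bias = np.mean(est - obs)
    rmse = np.sqrt(np.mean((est - obs) ** 2))
    r2 = np.corrcoef(obs, est)[0, 1] ** 2
    return slope, intercept, bias, rmse, r2

obs = np.array([850.0, 620.0, 400.0, 910.0, 300.0, 720.0])  # observed W/m^2
est = 0.78 * obs - 15 + np.random.default_rng(3).normal(0, 30, obs.size)
print([round(v, 2) for v in verify(obs, est)])
```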

Analysis of Internet Biology Study Sites and Guidelines for Constructing Educational Homepages (인터넷상의 고등학교 생물 학습사이트 비교분석 및 웹사이트 구축방안에 관한 연구)

  • Kim, Joo-Hyun;Sung, Jung-Hee
    • Journal of The Korean Association For Science Education
    • /
    • v.22 no.4
    • /
    • pp.779-795
    • /
    • 2002
  • The Internet, a worldwide network of computers, is considered a sea of information because it allows people to share information beyond the barriers of time and space. However, in spite of the immeasurable potential applications of the internet, its use in the field of biology education has been extremely limited, mainly due to the scarcity of good biology-related sites. In order to provide useful guidelines for constructing user-friendly study sites, which can help high school students of different intellectual levels to study biology, comparative studies were performed on selected educational sites. Initially, hundreds of related sites were examined; subsequently, four distinct sites were selected, not only because they are well organized, but also because each is unique in its contents. A survey was also carried out among the users of each site. The survey results indicated that high school students regard web-based biology study tools as effective teaching methods, although there might be some bias in the criteria for selecting target sites. In addition to detailed biology topics and related biology information, multimedia data including pictures, animations, and movies were found to be among the important ingredients of desirable biology study sites. Thus, the inclusion of multimedia components should also be considered when developing a systematic biology study site. Overall, the role of cyberspace is expected to become more and more important. Since the development of user-satisfying, self-guided sites requires interdisciplinary collaboration, extensive communication among teachers, education professionals, and computer engineers should be promoted. Furthermore, the introduction of good biology study sites to students by their teachers is also an important factor for successful web-based education.

Estimation of Forest Productivity for Post-Wild-fire Restoration in East Coastal Areas (동해안 산불피해지 복구를 위한 산림생산력의 추정)

  • Koo, Kyo-Sang;Lee, Myung-Jong;Shin, Man-Yong
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.12 no.1
    • /
    • pp.36-44
    • /
    • 2010
  • In order to rehabilitate forest sites damaged by wildfire through natural or artificial restoration, it is important to determine the right tree species, which can acclimate to the biogeoclimatic environment at the sites. The objectives of this study were to develop site index equations for different tree species to estimate forest productivity, and to provide information on species selection for post-wildfire restoration. The site index equations were developed based on environmental information from wildfire-damaged areas in Gangneung, Goseong, Donghae, and Samcheok, which are located in the east coastal area of South Korea. Despite the small number (4~5) of environmental variables used in developing the site index equations, statistical analysis (mean difference, standard deviation of differences, and standard error of differences) showed relatively low bias and variation, suggesting that the equations provide relatively high estimation capability and practical applicability. The small number of variables enables the equations to be applied to a wide range of uses, including the determination of appropriate tree species for post-wildfire restoration. The estimates of forest site productivity indicated that large parts of the east coastal region are well suited for Korean ash (Fraxinus rhynchophylla) and oriental oak (Quercus variabilis), which can be used as firebreaks in the region. These results imply that damage from forest fires can be reduced significantly by replacing the existing pure coniferous forests in the area with stands dominated by broad-leaved deciduous species, which can serve as firebreaks and/or prevent a transition from surface fire to crown fire.
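For concreteness, the evaluation statistics the abstract cites for the site index equations (mean difference, standard deviation of differences, and standard error of differences between observed and predicted site index) are computed as in the sketch below; the values are placeholders, not study data.

```python
import numpy as np

obs = np.array([14.2, 16.1, 12.8, 15.0, 13.5])   # observed site index (m), illustrative
pred = np.array([13.9, 16.5, 12.4, 15.3, 13.8])  # predicted from the equations, illustrative

d = obs - pred
bias = d.mean()               # mean difference (bias)
sd = d.std(ddof=1)            # standard deviation of differences
se = sd / np.sqrt(d.size)     # standard error of differences
print(f"bias={bias:.3f}, SD={sd:.3f}, SE={se:.3f}")
```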