• Title/Summary/Keyword: Demand Performance

Performance Evaluation of Hydrocyclone Filter for Treatment of Micro Particles in Storm Runoff (Hydrocyclone Filter 장치를 이용한 강우유출수내 미세입자 제거특성 분석)

  • Lee, Jun-Ho;Bang, Ki-Woong;Hong, Sung-Chul
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.31 no.11
    • /
    • pp.1007-1018
    • /
    • 2009
  • Hydrocyclones are widely used in industry because of their simple design, high capacity, and low maintenance and operational costs. The separation action of a hydrocyclone treating a particulate slurry is a consequence of the swirling flow, which produces a centrifugal force on the fluid and suspended particles. Although the hydrocyclone has many advantages, case studies applying it to the treatment of urban stormwater are rare. We conducted a laboratory-scale study on the potential for treating micro particles using a hydrocyclone filter (HCF), which combines a modified hydrocyclone with a perlite filter cartridge. Since it was not easy to use actual stormwater in the scaled-down hydraulic model investigations, ranges of particle sizes had to be reproduced with synthetic materials. The synthesized storm runoff was made of water with added particles: ion exchange resin, road sediment, commercial-area manhole sediment, and silica gel particles. Experiments were carried out on the particle separation performance of an HCF-open system and an HCF-closed system; the principal structural differences between these HCFs are the underflow zone structure and the vortex finder. The HCF was made of acrylic resin, with a hydrocyclone 120 mm in diameter, a filter chamber 250 mm in diameter, and an overall height of 800 mm. To determine the removal efficiency for various influent concentrations of suspended solids (SS) and chemical oxygen demand (COD), tests were performed under different operational conditions. The maximum operated surface loading rate was about 700 $m^3/m^2/day$ for the HCF-open system and 1,200 $m^3/m^2/day$ for the HCF-closed system. Particle removal efficiency of the HCF-closed system was found to be better than that of the HCF-open system under the same surface loading rate; SS removal efficiency with the HCF-closed system improved by about 8~20% compared with the HCF-open system. The average difference in removal efficiency for the HCF-closed system between measurement and CFD particle tracking simulation was about 4%.
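
The two performance figures quoted above follow from simple definitions: the surface loading rate is the flow rate divided by the chamber's cross-sectional area, and removal efficiency compares influent and effluent concentrations. A minimal Python sketch (the flow rate and concentrations below are hypothetical; only the 120 mm diameter comes from the abstract):

```python
# Surface loading rate and removal efficiency for a hydrocyclone filter.
# Flow rate and concentrations are assumed values for illustration.
import math

def surface_loading_rate(flow_m3_per_day: float, diameter_m: float) -> float:
    """Surface loading rate = Q / A, in m^3/m^2/day."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return flow_m3_per_day / area

def removal_efficiency(c_in: float, c_out: float) -> float:
    """Removal efficiency (%) = (C_in - C_out) / C_in * 100."""
    return (c_in - c_out) / c_in * 100.0

q = 8.0    # m^3/day, assumed influent flow
d = 0.120  # m, hydrocyclone diameter (from the abstract)
print(f"surface loading rate: {surface_loading_rate(q, d):.0f} m3/m2/day")
print(f"SS removal: {removal_efficiency(250.0, 60.0):.1f} %")  # assumed SS, mg/L
```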

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.157-173
    • /
    • 2011
  • As Internet use has exploded in recent years, malicious attacks on and hacking of networked systems occur frequently, and these intrusions can cause fatal damage to government agencies, public offices, and companies operating various systems. For such reasons, there is growing interest in and demand for intrusion detection systems (IDS), security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These kinds of models perform well under normal situations but show poor performance when they meet a new or unknown pattern of network attack. For this reason, several recent studies have tried to adopt various artificial intelligence techniques that can respond proactively to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their superior prediction accuracy. However, ANNs have some intrinsic limitations, such as the risk of overfitting, the requirement of a large sample size, and the opacity of the prediction process (the black-box problem). As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model in order to improve the predictive ability of IDS. Our model is also designed to consider the asymmetric error cost by optimizing the classification threshold. Generally, there are two common forms of error in intrusion detection. The first is the False-Positive Error (FPE), in which a normal activity is wrongly judged to be an intrusion, possibly resulting in unnecessary remediation. The second is the False-Negative Error (FNE), in which malicious activity is misjudged as normal. Compared to FPE, FNE is more fatal; thus, when considering the total cost of misclassification in IDS, it is more reasonable to assign heavier weights to FNE than to FPE. We therefore designed our intrusion detection model to optimize the classification threshold so as to minimize the total misclassification cost. In this case, conventional SVM cannot be applied because it is designed to generate a discrete output (i.e., a class). To resolve this problem, we used the revised SVM technique proposed by Platt (2000), which is able to generate probability estimates. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 1,000 samples from them by random sampling. In addition, the SVM model was compared with logistic regression (LOGIT), decision trees (DT), and an ANN to confirm the superiority of the proposed model. LOGIT and DT were run in PASW Statistics v18.0, the ANN in Neuroshell 4.0, and the SVM classifier was trained with LIBSVM v2.90, a freeware package. Empirical results showed that our proposed SVM-based model outperformed all the comparative models in detecting network intrusions from the accuracy perspective, and that it reduced the total misclassification cost compared with the ANN-based intrusion detection model. The intrusion detection model proposed in this paper is therefore expected not only to enhance the performance of IDS but also to lead to better management of FNE.
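
The threshold-optimization idea described above can be sketched with standard tools. The following is a minimal illustration, not the authors' code: it uses scikit-learn's SVC with probability=True (Platt scaling) on synthetic data, then searches for the threshold that minimizes a total cost in which a false negative is weighted more heavily than a false positive; the 5:1 cost ratio is an assumption.

```python
# SVM with Platt probability estimates plus a cost-minimizing threshold.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))  # synthetic stand-in for log features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0.8).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# probability=True enables Platt scaling inside scikit-learn's SVC.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
p_attack = clf.predict_proba(X_te)[:, 1]

C_FP, C_FN = 1.0, 5.0  # assumed asymmetric costs: a miss is 5x worse

def total_cost(th):
    pred = (p_attack >= th).astype(int)
    fp = np.sum((pred == 1) & (y_te == 0))
    fn = np.sum((pred == 0) & (y_te == 1))
    return C_FP * fp + C_FN * fn

thresholds = np.linspace(0.05, 0.95, 19)
best = min(thresholds, key=total_cost)
print(f"cost-minimizing threshold: {best:.2f} (cost {total_cost(best):.0f})")
```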

Development of Intelligent Severity of Atopic Dermatitis Diagnosis Model using Convolutional Neural Network (합성곱 신경망(Convolutional Neural Network)을 활용한 지능형 아토피피부염 중증도 진단 모델 개발)

  • Yoon, Jae-Woong;Chun, Jae-Heon;Bang, Chul-Hwan;Park, Young-Min;Kim, Young-Joo;Oh, Sung-Min;Jung, Joon-Ho;Lee, Suk-Jun;Lee, Ji-Hyun
    • Management & Information Systems Review
    • /
    • v.36 no.4
    • /
    • pp.33-51
    • /
    • 2017
  • With the advent of the 'Fourth Industrial Revolution' and the growing demand for quality of life driven by economic growth, expectations for the quality of medical services are increasing. Artificial intelligence has been introduced in the medical field, but it is rarely used for chronic skin diseases, which directly affect quality of life. Moreover, atopic dermatitis, a representative chronic skin disease, has the disadvantage that an objective diagnosis of lesion severity is difficult. The aim of this study is to establish an intelligent severity-recognition model for atopic dermatitis to improve patients' quality of life. The following steps were performed. First, image data of patients with atopic dermatitis were collected from the Catholic University of Korea Seoul Saint Mary's Hospital, then refined and labeled to obtain training and verification data suitable for an objective intelligent severity-recognition model. Second, various CNN algorithms were trained and verified to select an image-recognition algorithm suitable for the model. Experimental results showed that 'ResNet V1 101' and 'ResNet V2 50' achieved the highest performance, with over 90% accuracy on Erythema and Excoriation, while 'VGG-NET' measured a lower accuracy of 89%, due to a lack of training data. The proposed methodology demonstrates that image-recognition algorithms can perform well not only in general object recognition but also in medical fields requiring expert knowledge. In addition, this study is expected to be highly applicable to atopic dermatitis because it uses image data from actual atopic dermatitis patients.
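
As an illustration of the transfer-learning workflow such a study typically relies on, here is a minimal PyTorch/torchvision sketch, not the authors' pipeline (they report 'ResNet V1 101'/'ResNet V2 50' models): a pretrained ResNet-50 backbone is frozen and a new classification head is trained for a hypothetical four-grade severity score.

```python
# Fine-tuning a pretrained ResNet head for severity grading (illustrative).
import torch
import torch.nn as nn
from torchvision import models

NUM_SEVERITY_CLASSES = 4  # assumed severity grades (e.g., 0-3)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():  # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_SEVERITY_CLASSES)  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_SEVERITY_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy batch loss: {loss.item():.3f}")
```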

A Study of the Relationship between Personality Traits and Job Satisfaction of Community Health Practitioners in a Rural Area (일부 보건진료원의 성격특성과 직무만족도에 관한 연구)

  • Lee, Soon-Ryae;Park, Sang-Hag
    • Journal of agricultural medicine and community health
    • /
    • v.24 no.2
    • /
    • pp.331-350
    • /
    • 1999
  • This study examined the relationship between personality traits and job satisfaction of community health practitioners (CHPs) working in remote rural areas, in order to suggest methods to enhance their job performance and degree of job satisfaction. The General Personality Test and a revised version of the Job Satisfaction Questionnaire were administered to 200 of the 348 CHPs in the Kwangju-Chonnam area; percentages, means, standard deviations, and Pearson's correlation coefficients were obtained, and ANOVA and logistic analysis were used. The results were as follows: 1. CHPs without religion were more satisfied with their salary than those with religion. 2. CHPs who hoped for continuing education scored higher than the others on necessary job, professional pride, and autonomy. Those who preferred an independent job scored higher on both necessary job and professional pride, as did those who hoped for a long duration of service. Those who were satisfied with their present occupation scored higher on pay satisfaction, necessary job, professional pride, interaction, autonomy, and organizational demand. 3. Autonomy scores differed significantly by work status, interaction and autonomy scores by the field of past job before becoming a CHP, and autonomy scores by location of clinic. Interaction scores differed significantly by the frequency of home visits per month, salary satisfaction and professional pride scores by the frequency of counseling education per month, and professional pride scores by total income per year. 4. Responsibility and self-confidence scores were the highest of all the personality trait variables. 5. The professional pride score was the highest of all the job satisfaction variables. 6. Dominance was most strongly correlated with autonomy, responsibility was most strongly associated with professional pride, and both emotional stability and self-confidence were most strongly related to necessary job. In conclusion, religion, location of clinic, clinical experience, opportunity for education, dominance, self-confidence, hoped-for duration of service, satisfaction with the present occupation, field of past job, and administrative affairs were found to be important factors in the degree of job satisfaction. Methods that take these variables into account should therefore be developed to enhance the efficiency of CHPs' job performance and their degree of job satisfaction.
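
For readers unfamiliar with the correlation analysis named above, here is a minimal sketch of the Pearson-correlation step on hypothetical trait and satisfaction scores (the data below are synthetic, not the study's):

```python
# Pearson correlation between a trait score and a satisfaction subscale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
dominance = rng.normal(50, 10, size=200)                 # synthetic trait scores
autonomy = 0.4 * dominance + rng.normal(0, 8, size=200)  # synthetic subscale

r, p_value = stats.pearsonr(dominance, autonomy)
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")
```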

Development and Application of Multi-Functional Floating Wetland Island for Improving Water Quality (수질정화를 위한 다기능 인공식물섬의 개발과 적용)

  • Yoon, Younghan;Lim, Hyun Man;Kim, Weon Jae;Jung, Jin Hong;Park, Jae-Roh
    • Ecology and Resilient Infrastructure
    • /
    • v.3 no.4
    • /
    • pp.221-230
    • /
    • 2016
  • A multi-functional floating wetland island (mFWI) was developed to prevent algal blooms and to improve water quality through several unit purification processes. A test bed was operated in a stagnant watershed in an urban area from the summer to the winter season. For advanced treatment, an artificial phosphorus adsorption/filtration medium was applied together with micro-bubble generation, as well as water plants for nutrient removal. The removal efficiency for chemical oxygen demand (COD) and total phosphorus (T-P) was higher in the warmer season (40.9%, 45.7%) than in winter (15.9%, 20.0%), and the removal performance (suspended solids, chlorophyll a) of each process differed with seasonal variation: the micro-bubble process performed better (33.1%, 39.2%) in summer, while the phosphorus adsorption/filtration and water plants performed better (76.5%, 59.5%) in winter. From these results, it was understood that mFWI performance depends on the pollutant loads in different seasons and on the unit processes, so continuous monitoring under various conditions is required to evaluate its functions. In addition, the micro-bubbles helped prevent the formation of anaerobic zones in the lower part of the floating wetland, which promoted water circulation and the formation of a new, healthy aquatic ecosystem in the surrounding environment, confirming the positive influence of the mFWI.

A study of Establishment on Radiomap that Utilizes the Mobile device Indoor Positioning DB based on Wi-Fi (Wi-Fi 기반 모바일 디바이스 실내측위 DB를 활용한 라디오맵 구축에 관한 연구)

  • Jeong, In Hun;Kim, Chong Mun;Choi, Yun Soo;Kim, Sang Bong;Lee, Yun
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.22 no.3
    • /
    • pp.57-69
    • /
    • 2014
  • As of 2013, the population density of Korea is 505 persons per $1 km^2$, ranking third among the most densely populated countries when city-states are excluded; the population is clearly concentrated in urban areas. To meet the demands of this urban concentration, buildings have grown larger and more complex and connections to subways and other underground spaces have intensified, so it is necessary to construct indoor spatial information DBs, including accurate indoor surveying DBs, to promote people's safety and social welfare. In this study, Sadang Station and Incheon International Airport were selected for the construction of a Wi-Fi AP location DB and a RadioMap DB: raw indoor AP data were collected with mobile devices, and the collected results were verified, supplemented, and analyzed. To evaluate the performance of the constructed DB, 10 points on the 3rd floor of block A at Incheon International Airport and 9 points on level B1 of Sadang Station were selected, positions were estimated at each point, and the positioning error, the distance between the real and estimated positions, was evaluated. The average and standard deviation of the positioning error were 17.81 m and 17.79 m at Incheon International Airport, and 22.64 m and 23.74 m at Sadang Station. In the case of Sadang Station, areas near the exits showed poor positioning performance because fewer APs were visible there than in other areas. When all the data except those points were examined, it was verified that the constructed DB mapped the user's location to a nearby position. This means the constructed DB contains correct Wi-Fi AP locations and radio wave patterns for the target regions, so an indoor spatial information service based on the constructed DB should be feasible.
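
A common way to use such a RadioMap DB is fingerprint matching: an observed RSSI vector is compared against stored reference vectors, and the positioning error is the distance between the true and estimated positions. A minimal sketch with assumed data (not the study's DB or algorithm):

```python
# Nearest-neighbor Wi-Fi fingerprint matching and positioning-error stats.
import numpy as np

# Radiomap: each row is a reference point (x, y) with RSSI for fixed APs.
radiomap_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
radiomap_rssi = np.array([[-40, -70, -65], [-70, -42, -68],
                          [-66, -71, -41], [-72, -69, -52]])

def estimate_position(observed_rssi):
    """Return the (x, y) of the radiomap entry closest in signal space."""
    d = np.linalg.norm(radiomap_rssi - observed_rssi, axis=1)
    return radiomap_xy[np.argmin(d)]

true_xy = np.array([1.0, 1.0])
est_xy = estimate_position(np.array([-45, -68, -66]))
errors = [np.linalg.norm(true_xy - est_xy)]  # one test point shown here
print(f"mean error {np.mean(errors):.2f} m, std {np.std(errors):.2f} m")
```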

A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns (인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법)

  • Kim, Mingyu;Kim, Namgyu;Jung, Inhwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.123-136
    • /
    • 2014
  • Recently, online shopping has developed further as the use of the Internet and a variety of smart mobile devices has become more prevalent. The increase in the scale of such shopping has led to the creation of many Internet shopping malls and, consequently, increasingly fierce competition among online retailers, so many Internet shopping malls make significant attempts to attract online users to their sites. One such attempt is keyword marketing, whereby a retail site pays a fee to expose its link to potential customers when they enter a specific keyword on an Internet portal site. The price of each keyword is generally estimated from the keyword's frequency of appearance. However, it is widely accepted that the price of keywords cannot be based solely on their frequency, because many keywords appear frequently but have little relationship to shopping. This implies that it is unreasonable for an online shopping mall to spend a great deal on some keywords simply because people frequently use them. Therefore, from the perspective of shopping malls, a specialized process is required to extract meaningful keywords, and the demand for automating this extraction is increasing with the drive to improve online sales performance. In this study, we propose a methodology that can automatically extract only shopping-related keywords from the entire set of search keywords used on portal sites. We define a shopping-related keyword as a keyword used directly before shopping behavior; in other words, only search keywords that direct the search results page to shopping-related pages are extracted from the entire set of search keywords. A comparison is then made between the extracted keywords' rankings and the rankings of the entire set of search keywords. Two types of data were used in the experiment: web browsing history from July 1, 2012 to June 30, 2013, and site information. The experimental dataset came from a web site ranking service and the biggest portal site in Korea; the original sample contains 150 million transaction logs. First, portal sites are selected and the search keywords used on them are extracted, which can be done by simple parsing; the extracted keywords are then ranked by frequency. The experiment uses approximately 3.9 million search results from Korea's largest search portal site, from which a total of 344,822 search keywords were extracted. Next, using the web browsing history and site information, the shopping-related keywords were taken from the entire set of search keywords, yielding 4,709 shopping-related keywords. For performance evaluation, we compared the hit ratios of all the search keywords with those of the shopping-related keywords. To achieve this, we extracted 80,298 search keywords from several Internet shopping malls and chose the top 1,000 as the set of true shopping keywords. We then measured the precision, recall, and F-score of the entire set of keywords and of the shopping-related keywords, where the F-score is the harmonic mean of precision and recall. The precision, recall, and F-score of the shopping-related keywords derived by the proposed methodology were all higher than those of the entire set of keywords. This study thus proposes a scheme that obtains shopping-related keywords in a relatively simple manner: we can extract them simply by examining transactions whose next visit is a shopping mall. The resulting shopping-related keyword set is expected to be a useful asset for the many shopping malls that participate in keyword marketing. Moreover, the proposed methodology can easily be applied to the construction of other special-area keyword sets as well as shopping-related ones.
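
The core extraction rule, keeping a keyword only if the next visited site is a shopping mall, and the evaluation metrics are easy to state in code. A minimal sketch on toy logs (the site list and log entries are hypothetical):

```python
# Extract shopping-related keywords and score them with precision/recall/F.
SHOPPING_SITES = {"mall-a.example", "mall-b.example"}  # assumed site info

# (search keyword, domain of the page visited right after the search)
logs = [("winter coat", "mall-a.example"),
        ("weather today", "news.example"),
        ("running shoes", "mall-b.example"),
        ("election results", "news.example")]

extracted = {kw for kw, next_site in logs if next_site in SHOPPING_SITES}
true_shopping = {"winter coat", "running shoes", "sneakers"}  # reference set

precision = len(extracted & true_shopping) / len(extracted)
recall = len(extracted & true_shopping) / len(true_shopping)
f_score = 2 * precision * recall / (precision + recall)  # harmonic mean
print(f"P={precision:.2f} R={recall:.2f} F={f_score:.2f}")
```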

Mapping Categories of Heterogeneous Sources Using Text Analytics (텍스트 분석을 통한 이종 매체 카테고리 다중 매핑 방법론)

  • Kim, Dasom;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.193-215
    • /
    • 2016
  • In recent years, the proliferation of diverse social networking services has led users to use many mediums simultaneously, depending on their individual purposes and tastes. While collecting information about a particular theme, they usually employ various mediums such as social networking services, Internet news, and blogs. In terms of management, however, each document circulated through these diverse mediums is placed in a different category on the basis of each source's policy and standards, hindering any attempt to conduct research on a specific category across different kinds of sources. For example, documents about "Application for a foreign travel" can be classified into "Information Technology," "Travel," or "Life and Culture" according to the particular standard of each source. Likewise, with different definitions and levels of specification, similar categories can be named and structured differently from source to source. To overcome these limitations, this study proposes a plan for category mapping between sources across various mediums while maintaining each medium's existing category system as it is. Specifically, by re-classifying individual documents from the viewpoints of the diverse sources and storing the results of this classification as extra attributes, this study proposes a logical layer through which users can search for a specific document across multiple heterogeneous sources with different category names as if they belonged to the same source. In addition, by collecting 6,000 news articles from two Internet news portals, experiments were conducted to compare accuracy between sources, between supervised and semi-supervised learning, and between homogeneous and heterogeneous learning data. It is particularly interesting that in some categories the classification accuracy of semi-supervised learning using heterogeneous learning data proved to be higher than that of supervised and semi-supervised learning using homogeneous learning data. This study has the following significance. First, it proposes a logical plan for integrating and managing heterogeneous mediums with different classification systems while maintaining the existing physical classification systems as they are. The results exhibit very different classification accuracies depending on the heterogeneity of the learning data, which is expected to spur further studies on enhancing the performance of the proposed methodology through analysis of the characteristics of each category. In addition, with the increasing demand for searching, collecting, and analyzing documents from diverse mediums, the scope of Internet search is no longer restricted to one medium; however, since each medium has a different category structure and naming, it is very difficult to search a specific category across heterogeneous mediums. The proposed methodology is also significant in presenting a plan that lets users query all documents according to the categorical standards of a selected site, while maintaining that site's existing characteristics and structure. The proposed methodology needs to be complemented in the following respects. First, since only an indirect comparison and evaluation of its performance was made, future studies need to test its accuracy more directly: after re-classifying documents of the target source on the basis of the category system of the existing source, the accuracy of the classification should be verified through evaluation by actual users. In addition, classification accuracy needs to be increased by making the methodology more sophisticated. Finally, the characteristics of the categories in which heterogeneous semi-supervised learning showed higher classification accuracy than supervised learning deserve further study, as they may assist in obtaining heterogeneous documents from diverse mediums and in devising plans that enhance document classification accuracy.
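
One standard way to realize the semi-supervised re-classification described above is self-training over TF-IDF features. The following is a minimal illustrative sketch, not the paper's exact setup; the documents and category labels are toy examples:

```python
# Self-training: label documents of one source under another source's
# categories, using a few labeled examples plus unlabeled documents.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

docs = ["visa rules for overseas trips", "new smartphone released",
        "cheap flight booking tips", "laptop battery review"]
# Source A's categories: 0 = Travel, 1 = Information Technology.
# -1 marks unlabeled documents, per scikit-learn's convention.
y = np.array([0, 1, -1, -1])

X = TfidfVectorizer().fit_transform(docs)
base = LogisticRegression(max_iter=1000)
clf = SelfTrainingClassifier(base, threshold=0.5).fit(X, y)
print(clf.predict(X[2:]))  # predicted categories for the unlabeled docs
```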

Innovation Technology Development & Commercialization Promotion of R&D Performance to Domestic Renewable Energy (신재생에너지 기술혁신 개발과 R&D성과 사업화 촉진 방안)

  • Lee, Yong-Seok;Rho, Do-Hwan
    • Journal of Korea Technology Innovation Society
    • /
    • v.12 no.4
    • /
    • pp.788-818
    • /
    • 2009
  • Renewable energy refers to solar energy, biomass energy, hydrogen energy, wind power, fuel cells, coal liquefaction and gasification, marine energy, waste energy, and liquid fuel made from byproducts of geothermal heat, hydrogen, and coal; it excludes energy based on coal, oil, nuclear power, and natural gas. Developed countries have recognized the importance of these energy sources, set mid- to long-term plans to develop and commercialize the technology, and supported them with drastic political and financial measures. Given the growing recognition of the field, it is necessary to analyze the achievements of the government's related projects to date, in terms of the type of renewable energy, the management of sectional goals, and commercialization. The Korean government chiefly follows the renewable energy development and distribution policies of the USA and Britain; however, unlike Japan, which leads the solar industry, Korea still lacks state-directed support, participation by enterprises, and social recognition. Research on renewable energy has mainly examined the state of supply of each technology and the suitability of specific regions for applying it; that is, it has focused on the supply and demand of renewable (and general) energy and on enhancing supply capacity in certain areas, while in-depth study of commercialization and of the industrial capacity growth that follows technology development is still inadequate. A cost-benefit model for each energy source is used to analyze renewable energy technology development and the quantitative, macroeconomic effects of its commercialization, in order to foresee the subsequent expansion of related industries and increase in added value. First, investment in renewable energy technology development is directly proportional to both production and growth, but production shows a slightly higher index than growth for the same amount of R&D investment, indicating that advances in technology greatly influence the final product, energy growth. Moreover, while R&D investment in renewable energy production, including the government funds within it, has a proportional influence on renewable energy growth, private investment within the total amount invested has an inverse influence; this indicates that research and development is driven mainly by government funds rather than private investment. Finally, while R&D investment affects renewable energy growth proportionally, government funds and private investment show no direct relation to it, which indicates that the effects of research and development on renewable energy do not feed back into government funds or private investment. All of these results signify that although government policy is important in technology development and commercialization, private investment and the active participation of enterprises are the keys to success in the industry.
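
The proportionality findings above amount to fitting a linear relationship between R&D investment and output. A minimal sketch with hypothetical illustrative numbers (not the paper's data), assuming ordinary least squares as the fitting method:

```python
# Fit output index = a * R&D investment + b by ordinary least squares.
import numpy as np

rd_investment = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # assumed R&D spend
production = np.array([12.0, 25.0, 34.0, 47.0, 55.0])     # assumed output index

slope, intercept = np.polyfit(rd_investment, production, 1)
print(f"production ~= {slope:.2f} * investment + {intercept:.2f}")
```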

Detection of Phantom Transaction using Data Mining: The Case of Agricultural Product Wholesale Market (데이터마이닝을 이용한 허위거래 예측 모형: 농산물 도매시장 사례)

  • Lee, Seon Ah;Chang, Namsik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.161-177
    • /
    • 2015
  • With the rapid evolution of technology, the size, number, and types of databases have increased concomitantly, so data mining faces many challenging applications. One such application is the discovery of fraud patterns in agricultural product wholesale transactions. The agricultural product wholesale market in Korea is huge, and vast numbers of transactions are made every day. Demand for agricultural products continues to grow, and the use of electronic auction systems raises the operational efficiency of the wholesale market. The number of unusual transactions can also be assumed to increase in proportion to the trading amount, and an unusual transaction is often the first sign of fraud. However, it is very difficult to identify and detect these transactions and the corresponding fraud in the agricultural product wholesale market because the types of fraud are more sophisticated than ever before. Fraud can be detected by verifying all transaction records manually, but this requires a significant amount of human resources and is ultimately impractical. Fraud can also be revealed by a victim's report or complaint, but there are usually no victims in agricultural product wholesale fraud because it is committed through collusion between an auction company and an intermediary wholesaler. Nevertheless, transaction records must be monitored continuously and efforts made to prevent fraud, because fraud not only disturbs the fair trading order of the market but also rapidly reduces its credibility. Applying data mining to such an environment is very useful, since it can properly discover unknown fraud patterns or features from a large volume of transaction data. The objective of this research is to empirically investigate the factors necessary to detect fraudulent transactions in an agricultural product wholesale market by developing a data-mining-based fraud detection model. One of the major frauds is the phantom transaction, a colluding transaction in which the seller (auction company or forwarder) and buyer (intermediary wholesaler) pretend to fulfill a transaction by recording false data in the online transaction processing system without actually selling products, and the seller receives money from the buyer. This leads to overstated sales performance and illegal money transfers, which reduce the credibility of the market. This paper reviews the environment of the wholesale market, including the types of transactions, the roles of market participants, and the various types and characteristics of fraud, and introduces the whole process of developing the phantom transaction detection model. The process consists of four modules: (1) data cleaning and standardization, (2) statistical data analysis such as distribution and correlation analysis, (3) construction of a classification model using a decision-tree induction approach, and (4) verification of the model in terms of hit ratio. We collected real data from six associations of agricultural producers in metropolitan markets. The final decision-tree model revealed that the monthly average trading price of an item offered by forwarders is a key variable in detecting phantom transactions, and the verification procedure confirmed the suitability of the results. However, even though the performance of the results is satisfactory, sensitive issues remain for improving the classification accuracy and conciseness of the rules. One such issue is the robustness of the data mining model: data mining is very much data-oriented, so its models tend to be sensitive to changes in data or situations, and this non-robustness evidently requires continuous remodeling as data or situations change. We hope that this paper suggests valuable guidelines to organizations and companies that consider introducing or constructing a fraud detection model in the future.
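
A minimal sketch of the classification module, step (3) above, on synthetic records: a decision tree with a monthly-average-price feature, evaluated by hit ratio. The data-generating pattern is an assumption for illustration, not a finding of the paper.

```python
# Decision-tree detection of phantom transactions, scored by hit ratio.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
n = 500
monthly_avg_price = rng.normal(1000, 300, n)  # per-item average trading price
quantity = rng.normal(50, 15, n)
# Assumed pattern: phantom deals cluster around abnormally low prices.
is_phantom = (monthly_avg_price < 700).astype(int)

X = np.column_stack([monthly_avg_price, quantity])
X_tr, X_te, y_tr, y_te = train_test_split(X, is_phantom,
                                          test_size=0.3, random_state=7)

tree = DecisionTreeClassifier(max_depth=3, random_state=7).fit(X_tr, y_tr)
print(f"hit ratio: {accuracy_score(y_te, tree.predict(X_te)):.2f}")
```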