• Title/Summary/Keyword: Artificial Neural Networks (ANN)


An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.157-173
    • /
    • 2011
  • As Internet use has exploded in recent years, malicious attacks and hacking against networked systems have become frequent, and such intrusions can cause fatal damage to government agencies, public offices, and companies operating various systems. For these reasons, there is growing interest in and demand for intrusion detection systems (IDS), security systems that detect, identify, and respond appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These models perform well under normal conditions, but they show poor performance when they encounter new or unknown patterns of network attacks. For this reason, several recent studies have tried to adopt artificial intelligence techniques that can proactively respond to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their superior prediction accuracy. However, ANNs have intrinsic limitations such as the risk of overfitting, the requirement for a large sample size, and the lack of insight into the prediction process (the black-box problem). As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model in order to improve the predictive ability of IDS. The model is also designed to consider the asymmetric error cost by optimizing the classification threshold. Generally, there are two common forms of error in intrusion detection.
The first is the False-Positive Error (FPE), in which normal activity is misjudged as an attack, resulting in unnecessary countermeasures. The second is the False-Negative Error (FNE), in which malicious activity is misjudged as normal. Compared to FPE, FNE is more fatal; thus, when considering the total cost of misclassification in IDS, it is reasonable to assign a heavier weight to FNE than to FPE. We therefore designed our intrusion detection model to optimize the classification threshold so as to minimize the total misclassification cost. Conventional SVM cannot be applied directly in this case because it is designed to generate a discrete output (i.e., a class label). To resolve this problem, we used the revised SVM technique proposed by Platt (2000), which is able to generate probability estimates. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection, collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 1,000 samples from them by random sampling. In addition, the SVM model was compared with logistic regression (LOGIT), decision trees (DT), and ANN to confirm the superiority of the proposed model. LOGIT and DT were run in PASW Statistics v18.0, ANN in NeuroShell 4.0, and SVM with LIBSVM v2.90, a freeware tool for training SVM classifiers. Empirical results showed that our proposed SVM-based model outperformed all the comparative models in detecting network intrusions in terms of accuracy, and that it reduced the total misclassification cost compared to the ANN-based intrusion detection model.
As a result, the intrusion detection model proposed in this paper is expected not only to enhance the performance of IDS but also to lead to better management of FNE.
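The threshold-optimization step described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: scikit-learn's `SVC` (which wraps LIBSVM and exposes Platt-scaled probability estimates via `probability=True`) stands in for LIBSVM v2.90, and the data and cost weights are synthetic.

```python
# Illustrative sketch: SVM with Platt-scaled probabilities, then a threshold
# search that weights false negatives (missed attacks) heavier than false
# positives. Data and cost weights are synthetic, not the paper's.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # stand-in for intrusion-log features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

clf = SVC(probability=True, random_state=0).fit(X, y)   # Platt scaling internally
p_attack = clf.predict_proba(X)[:, 1]

C_FNE, C_FPE = 10.0, 1.0   # assumed: a missed attack costs 10x a false alarm

def total_cost(threshold):
    pred = (p_attack >= threshold).astype(int)
    fne = np.sum((pred == 0) & (y == 1))       # attacks judged normal
    fpe = np.sum((pred == 1) & (y == 0))       # normal activity judged an attack
    return C_FNE * fne + C_FPE * fpe

grid = np.linspace(0.05, 0.95, 19)
best_t = min(grid, key=total_cost)
```

Because FNE carries ten times the weight here, the cost-minimizing threshold tends to fall below the conventional 0.5, flagging more borderline sessions as attacks.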

Identifying the Key Success Factors of Massively Multiplayer Online Role Playing Game Design using Artificial Neural Networks (인공신경망을 이용한 MMORPG 설계의 핵심성공요인 식별)

  • Jung, Hoi-Il;Park, Il-Soon;Ahn, Hyun-Chul
    • The Journal of Society for e-Business Studies
    • /
    • v.17 no.1
    • /
    • pp.23-38
    • /
    • 2012
  • Massively Multiplayer Online Role Playing Games (MMORPGs), led by Korean game companies such as NC Soft, NHN, and Nexon, have exploded in recent years. However, designing games that appeal to gamers has become one of the major challenges for MMORPG developers, since only a few MMORPGs succeed despite requiring a huge initial investment. Against this background, our study derives the major elements of MMORPG design from the literature and identifies which of them are critical to users' satisfaction and their willingness to pay. Whereas most previous studies on the design elements of MMORPGs have used the analytic hierarchy process (AHP), our study adopts the artificial neural network (ANN) as the tool for identifying key success factors in MMORPG design. The results show that the elements of game content quality have a bigger effect on user satisfaction, whereas the elements of the value-added systems have a bigger effect on users' willingness to pay. They also show that the user interface affects both satisfaction and willingness to pay the most. These results imply that MMORPG development strategies should be aligned with the game's goal and market penetration strategy, and that satisfaction and revenue generation cannot be achieved without a convenient and easy control environment. The new findings of our study are expected to be useful for developers and publishers of MMORPGs in building their business strategies.
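Using an ANN rather than AHP to rank design factors can be illustrated with a small permutation-importance sketch. The factor names, data-generating weights, and network size below are invented for illustration; the study's actual elicitation and analysis procedure may differ.

```python
# Hedged sketch: rank invented design factors by how much shuffling each one
# degrades a fitted neural network's R^2 (permutation importance).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
factors = ["contents_quality", "value_added_system", "user_interface"]
X = rng.uniform(size=(300, 3))
# Illustrative "satisfaction" score: the third factor (UI) dominates by design.
y = 0.3 * X[:, 0] + 0.1 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(scale=0.05, size=300)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X, y)

def importance(j):
    # Drop in R^2 when factor j is shuffled: bigger drop = more important.
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    return net.score(X, y) - net.score(Xp, y)

ranking = sorted(factors, key=lambda f: -importance(factors.index(f)))
```

With the dominant weight placed on the UI factor in the synthetic data, the network's permutation importance recovers it as the top-ranked factor, mirroring the study's finding that user interface matters most.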

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.127-137
    • /
    • 2019
Recidivism prediction has been a subject of constant research by experts since the early 1970s, and it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and in the same period empirical studies on recidivism factors began in Korea as well. Although most recidivism prediction studies have so far focused on the factors of recidivism or the accuracy of prediction, it is also important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying a person who will not reoffend as likely to reoffend is lower than the cost of misclassifying a person who will reoffend as unlikely to: the former only adds monitoring costs, while the latter incurs substantial social and economic costs. Therefore, in this paper we propose an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers the asymmetric error cost. In the first step of the model, XGB, recognized as a high-performance ensemble method in the field of data mining, is applied, and its results are compared with various prediction models such as LOGIT (logistic regression), DT (decision trees), ANN (artificial neural networks), and SVM (support vector machines). In the next step, the classification threshold is optimized to minimize the total misclassification cost, the weighted average of the FNE (False Negative Error) and FPE (False Positive Error) costs. To verify its usefulness, the model was applied to a real recidivism prediction dataset.
As a result, it was confirmed that the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
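The cost-minimizing threshold in the second step has a simple analytic counterpart when the predicted probabilities are well calibrated: predict "recidivist" whenever p × C_FNE exceeds (1 − p) × C_FPE, i.e. when p > C_FPE / (C_FPE + C_FNE). The sketch below uses invented scores and cost weights (not the paper's data or its XGBoost model) to show a grid search recovering a threshold near that closed-form value.

```python
# Sketch: with calibrated risk scores, the cost-optimal threshold is
# t* = C_FPE / (C_FPE + C_FNE); a grid search over invented data recovers it.
import numpy as np

rng = np.random.default_rng(7)
p = rng.uniform(size=20000)                    # invented, calibrated risk scores
y = (rng.uniform(size=20000) < p).astype(int)  # 1 = reoffends, drawn Bernoulli(p)

C_FNE, C_FPE = 4.0, 1.0                        # assumed cost weights
analytic_t = C_FPE / (C_FPE + C_FNE)           # predict 1 when p*C_FNE > (1-p)*C_FPE

def weighted_cost(t):
    pred = (p >= t).astype(int)
    fne = np.sum((pred == 0) & (y == 1))       # future recidivists judged safe
    fpe = np.sum((pred == 1) & (y == 0))       # non-recidivists judged risky
    return C_FNE * fne + C_FPE * fpe

grid = np.linspace(0.01, 0.99, 99)
t_star = min(grid, key=weighted_cost)
```

In practice the model's probabilities are not perfectly calibrated, which is why the paper tunes the threshold empirically against the total misclassification cost rather than relying on the closed form alone.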

Standardisation of NIR Instruments, Influence of the Calibration Methods and the Size of the Cloning Set

  • Dardenne, Pierre;Cowe, Ian-A.;Berzaghi, Paolo;Flinn, Peter-C.;Lagerholm, Martin;Shenk, John-S.;Westerhaus, Mark-O.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1121-1121
    • /
    • 2001
  • A previous study (Berzaghi et al., 2001) evaluated the performance of three calibration methods, modified partial least squares (MPLS), local PLS (LOCAL) and artificial neural networks (ANN), in predicting the chemical composition of forages from a large NIR database. That study used forage samples (n=25,977) from Australia, Europe (Belgium, Germany, Italy and Sweden) and North America (Canada and the U.S.A.) with reference values for moisture, crude protein and neutral detergent fibre content. The spectra were collected on 10 different Foss NIR Systems instruments, only some of which had been standardized to one master instrument. The aim of the present study was to evaluate the behaviour of these calibration methods when predicting the same samples measured on different instruments. Twenty-two sealed samples of different kinds of forages were measured in duplicate on seven instruments (one master and six slaves). Three sets of near-infrared spectra (1100 to 2500 nm) were created: the first consisted of the spectra in their original (unstandardized) form; the second was created using a single-sample standardization (Clone1); the third using a multiple-sample procedure (Clone6). WinISI software (Infrasoft International Inc., Port Matilda, PA, USA) was used to perform both types of standardization. Clone1 applies just a photometric offset between a "master" instrument and a "slave" instrument, whereas Clone6 modifies both the X-axis, through a wavelength adjustment, and the Y-axis, through a simple wavelength-by-wavelength regression. The Clone1 procedure used one sample spectrally close to the centre of the population; the six samples used in Clone6 were selected to cover the range of spectral variation in the sample set. The remaining fifteen samples were used to evaluate the performance of the different models.
The predicted values for dry matter, protein and neutral detergent fibre from the master instrument were taken as the "reference Y values" when computing RMSEP, SEPC, R, bias, slope, mean GH (global Mahalanobis distance) and mean NH (neighbourhood Mahalanobis distance) for the six slave instruments. From the results we conclude that: i) all the calibration techniques gave satisfactory results after standardization, whereas without standardization the slave predictions would have required slope and bias correction to produce acceptable statistics; ii) standardization reduced the errors for all calibration methods and parameters tested, reducing not only systematic biases but also random errors; iii) standardization removed slope effects that differed significantly from 1.0 in most cases; iv) Clone1 and Clone6 gave similar results except for NDF, where Clone6 gave better RMSEP values than Clone1; and v) GH and NH were reduced by half, even with very large data sets including unstandardized spectra.
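The two standardization schemes can be illustrated in a few lines of numpy. The spectra, slope, and offset below are synthetic, and WinISI's wavelength-axis (X-axis) adjustment in Clone6 is omitted for brevity, so this is only a sketch of the Y-axis corrections described above.

```python
# Synthetic sketch: Clone1 = photometric offset from one sample;
# Clone6 = wavelength-by-wavelength slope/bias regression from several samples.
import numpy as np

rng = np.random.default_rng(3)
n_wl = 700                                       # e.g. 1100-2500 nm at 2 nm steps
master = rng.uniform(0.2, 1.2, size=(6, n_wl))   # six standardisation samples
slave = 1.03 * master + 0.05 + rng.normal(scale=1e-3, size=master.shape)

# Clone1: a single sample gives a photometric offset at each wavelength.
clone1 = slave + (master[0] - slave[0])

# Clone6: all six samples give a slope and bias fitted at each wavelength.
slopes = np.empty(n_wl)
biases = np.empty(n_wl)
for j in range(n_wl):
    slopes[j], biases[j] = np.polyfit(slave[:, j], master[:, j], 1)
clone6 = slopes * slave + biases

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

With a simulated slave slope of 1.03, the single-offset Clone1 removes the bias but leaves a residual slope error, while the regression-based Clone6 removes both, mirroring the paper's finding that multiple-sample standardization handles slope effects better.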


Analysis of Trading Performance on Intelligent Trading System for Directional Trading (방향성매매를 위한 지능형 매매시스템의 투자성과분석)

  • Choi, Heung-Sik;Kim, Sun-Woong;Park, Sung-Cheol
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.187-201
    • /
    • 2011
  • The KOSPI200 index is a Korean stock price index consisting of 200 actively traded stocks in the Korean stock market; its base value of 100 was set on January 3, 1990. The Korea Exchange (KRX) developed derivatives markets on the KOSPI200 index, and the KOSPI200 index futures market, introduced in 1996, has become one of the most actively traded index futures markets in the world. Traders can profit by entering a long position in a KOSPI200 index futures contract if they expect the index to rise, and by entering a short position if they expect it to decline. KOSPI200 index futures trading is basically a short-term zero-sum game, and therefore most futures traders use technical indicators. Advanced traders make stable profits using system trading, also known as algorithmic trading: computer programs receive real-time stock market data, analyze price movements with various technical indicators, and automatically enter trading orders, setting the timing, price, and quantity of each order without any human intervention. Recent studies have shown the usefulness of artificial intelligence systems in forecasting stock prices and investment risk. KOSPI200 index data are numerical time-series data, a sequence of data points measured at successive uniform time intervals such as a minute, day, week or month. KOSPI200 index futures traders use technical analysis to find patterns on the time-series chart. Although there are many technical indicators, their results indicate the market state: bull, bear or flat. Most strategies based on technical analysis are divided into trend-following and non-trend-following strategies; both decide the market state from the patterns of the KOSPI200 index time series, which fits naturally with a Markov model (MM).
The next price will be higher than, lower than, or similar to the last price, and it is influenced by the last price; however, no one knows in advance which of these will occur, so a hidden Markov model (HMM) is better suited than an MM. HMMs are divided into discrete HMMs (DHMM) and continuous HMMs (CHMM); the only difference lies in their representation of state probabilities. A DHMM uses a discrete probability density function, while a CHMM uses a continuous one such as a Gaussian mixture model. KOSPI200 index values are real numbers that follow a continuous probability density function, so a CHMM is more appropriate than a DHMM for the KOSPI200 index. In this paper, we present an artificial intelligence trading system based on a CHMM for KOSPI200 index futures system traders. Traders have accumulated experience in technical trading ever since the introduction of the KOSPI200 index futures market, applying many strategies to profit from trading KOSPI200 index futures. Some strategies are based on technical indicators such as moving averages or stochastics, and others on candlestick patterns such as three outside up, three outside down, harami or doji star. We present a trading system based on a moving-average cross strategy combined with the CHMM and compare it to a traditional algorithmic trading system, setting the moving-average parameters at values commonly used by market practitioners. Empirical results comparing the simulation performance with the traditional algorithmic trading system, using long-term daily KOSPI200 index data spanning more than 20 years, show that our suggested trading system achieves higher trading performance than the naive system trading approach.
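The CHMM filtering idea can be illustrated with a toy two-regime Gaussian model. The transition matrix, emission parameters, and return series below are invented, and the paper's full trading system (the moving-average-cross overlay and parameter estimation) is omitted; only the forward-filtering step is shown.

```python
# Toy two-regime CHMM: Gaussian emissions on daily returns, filtered with the
# standard (scaled) forward algorithm to get P(regime | returns so far).
import numpy as np

rng = np.random.default_rng(5)
# Synthetic daily returns: a bull regime followed by a bear regime.
returns = np.concatenate([rng.normal(0.001, 0.01, 250),
                          rng.normal(-0.001, 0.02, 250)])

A = np.array([[0.98, 0.02],        # sticky transitions: regimes persist
              [0.02, 0.98]])
means = np.array([0.001, -0.001])  # Gaussian emission means (bull, bear)
stds = np.array([0.01, 0.02])      # Gaussian emission std devs

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Forward filtering with per-step normalisation.
alpha = np.array([0.5, 0.5]) * gauss(returns[0], means, stds)
alpha /= alpha.sum()
filtered = [alpha]
for r in returns[1:]:
    alpha = (alpha @ A) * gauss(r, means, stds)
    alpha /= alpha.sum()
    filtered.append(alpha)
filtered = np.array(filtered)
bull_prob = filtered[:, 0]         # P(bull regime | returns observed so far)
```

A directional trading rule could then go long when the filtered bull probability is high and short when it is low, with the moving-average cross acting as the entry signal as in the paper.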