• Title/Summary/Keyword: artificial intelligence-based model

Search Results: 1,215

Model Type Inference Attack against AI-Based NIDS (AI 기반 NIDS에 대한 모델 종류 추론 공격)

  • Yoonsoo An;Dowan Kim;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.5 / pp.875-884 / 2024
  • The proliferation of IoT networks has led to an increase in cyber attacks, highlighting the importance of Network Intrusion Detection Systems (NIDS). To overcome the limitations of traditional NIDS and cope with increasingly sophisticated cyber attacks, there is a trend toward integrating artificial intelligence models into NIDS. However, AI-based NIDS are vulnerable to adversarial attacks, which exploit weaknesses of the underlying algorithms. A model type inference attack is one such attack: it infers information about the model's internals. This paper proposes an optimized framework for model type inference attacks against NIDS models under more realistic assumptions. The proposed method successfully trained an attack model that infers the type of a NIDS model with an accuracy of approximately 0.92, presenting a new security threat to AI-based NIDS and emphasizing the importance of developing defenses against such attacks.
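The shadow-model pipeline typical of such attacks (train local models of several known types, fingerprint them through queries, train a meta-classifier on the fingerprints) can be sketched as below. The probe strategy, fingerprint features, and model families are illustrative assumptions, not the framework proposed in the paper.

```python
# Hypothetical sketch of a model type inference attack: fingerprint models
# of known types by their output probabilities on fixed probe inputs, then
# train a meta-classifier ("attack model") to map fingerprints to types.
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Candidate model families the attacker can train locally ("shadow" models).
shadow_types = [
    LogisticRegression(max_iter=1000),
    DecisionTreeClassifier(),
    RandomForestClassifier(n_estimators=30),
]

# Fixed probe inputs; each model's predicted probabilities on them form a
# fingerprint vector that reflects the model family's decision geometry.
probes = rng.uniform(-3, 3, size=(32, 20))

features, labels = [], []
for label, proto in enumerate(shadow_types):
    for seed in range(20):  # several shadow models per type, varied data
        Xi, yi = make_classification(n_samples=500, n_features=20, random_state=seed)
        m = clone(proto).fit(Xi, yi)
        features.append(m.predict_proba(probes)[:, 1])
        labels.append(label)

# Meta-classifier learns to recognize the model family from fingerprints.
attack = RandomForestClassifier(n_estimators=100, random_state=0)
attack.fit(features, labels)

# Query a black-box victim with the same probes and infer its type.
victim = DecisionTreeClassifier().fit(X, y)
guess = attack.predict([victim.predict_proba(probes)[:, 1]])[0]
```

The key design choice is that the attacker never needs the victim's parameters, only query access, which is why the abstract frames this as a realistic black-box threat.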

Degradation Quantification Method and Degradation and Creep Life Prediction Method for Nickel-Based Superalloys Based on Bayesian Inference (베이지안 추론 기반 니켈기 초합금의 열화도 정량화 방법과 열화도 및 크리프 수명 예측의 방법)

  • Junsang, Yu;Hayoung, Oh
    • Journal of the Korea Institute of Information and Communication Engineering / v.27 no.1 / pp.15-26 / 2023
  • The purpose of this study is to determine an artificial intelligence-based degradation index from scanning electron microscope images of the cross-sectional microstructure of creep-tested specimens of DA-5161 SX, a nickel-based superalloy used in high-temperature components. It proposes a new quantification method, a Bayesian-inference-based model that predicts degradation without destroying the high-temperature components of operating equipment, and a creep life prediction model that predicts the Larson-Miller Parameter (LMP). The new degradation indexing method infers a consistent representative value from a small number of images based on the geometric characteristics of the gamma prime phase, a microstructural feature of nickel-based superalloys, and the degradation index and LMP are predicted from information on the material's environmental conditions without destroying high-temperature components.

Path Loss Prediction Using an Ensemble Learning Approach

  • Beom Kwon;Eonsu Noh
    • Journal of the Korea Society of Computer and Information / v.29 no.2 / pp.1-12 / 2024
  • Predicting path loss is one of the important factors in wireless network design, such as selecting the installation locations of base stations in cellular networks. In the past, path loss values were measured through numerous field tests to determine the optimal installation location of a base station, which has the disadvantage of requiring a great deal of measurement time. To solve this problem, this study proposes a path loss prediction method based on machine learning (ML). In particular, an ensemble learning approach is applied to improve path loss prediction performance. Bootstrap datasets were used to obtain models with different hyperparameter configurations, and the final model was built by ensembling these models. We evaluated and compared the performance of the proposed ensemble-based path loss prediction method with various ML-based methods on publicly available path loss datasets. The experimental results show that the proposed method outperforms the existing methods and predicts path loss values accurately.
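The bootstrap-and-ensemble step described in the abstract can be sketched as follows. The synthetic data, the choice of decision-tree regressors, and the depth grid are assumptions for demonstration, not the paper's models or measurement data.

```python
# Illustrative bagging-style ensemble: train regressors with different
# hyperparameters on bootstrap resamples, then average their predictions
# to form the final path-loss estimate.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=1000, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.RandomState(0)
models = []
for depth in (4, 6, 8, 10):                        # varied hyperparameters
    idx = rng.randint(0, len(X_tr), len(X_tr))     # bootstrap resample
    m = DecisionTreeRegressor(max_depth=depth, random_state=0)
    m.fit(X_tr[idx], y_tr[idx])
    models.append(m)

# Final prediction: average over all ensemble members.
ensemble_pred = np.mean([m.predict(X_te) for m in models], axis=0)

mse_ensemble = mean_squared_error(y_te, ensemble_pred)
```

Averaging over models trained on resampled data reduces prediction variance, which is the usual rationale for the ensemble approach the abstract describes.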

DNA (Data, Network, AI) Based Intelligent Information Technology (DNA (Data, Network, AI) 기반 지능형 정보 기술)

  • Youn, Joosang;Han, Youn-Hee
    • KIPS Transactions on Computer and Communication Systems / v.9 no.11 / pp.247-249 / 2020
  • In the era of the 4th industrial revolution, demand for convergence between ICT technologies is increasing in various fields. Accordingly, DNA (Data, Network, AI), a new term that combines data, network, and artificial intelligence technology, has come into use and has recently become a hot topic. DNA offers the potential to develop intelligent applications in the real world. This paper therefore introduces the reviewed papers on DNA technology: a service image placement mechanism based on a logical fog network, a machine-learning-based mobility support scheme for industrial wireless sensor networks, prediction of subsequent BCI performance from spectral EEG characteristics, a warning classification method based on an artificial neural network using source-code topics, and a natural language processing model for data visualization interaction with a chatbot.

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.135-149 / 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, offers stability in the management of large funds, and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible, it not only scales to billions of examples in limited memory environments but also learns very quickly compared to traditional boosting methods. It is frequently used in many fields of data analysis and has many advantages. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period, thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model.
For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets comprise the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return, obtaining a large sample thanks to the long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative yield and reduction of estimation errors. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The experiment thus showed that portfolio performance improves when the estimation errors of the optimized asset allocation model are reduced. Many financial and asset allocation models are of limited use in practical investment because of the fundamental question of whether the past characteristics of assets will persist in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a state-of-the-art algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; here we suggest a new method that reduces the estimation errors of an optimized asset allocation model using machine learning.
This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
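The risk parity idea the abstract builds on can be shown in a minimal form: for uncorrelated assets, weighting each asset by inverse volatility equalizes risk contributions, and the paper's contribution is to feed a machine-learned volatility forecast into this step. The volatilities below are illustrative assumptions, not the paper's data, and no XGBoost model is trained here.

```python
# Minimal risk parity sketch: inverse-volatility weights equalize each
# asset's contribution to portfolio variance when correlations are zero.
# In the paper, `vols` would come from an XGBoost volatility forecast.
import numpy as np

def inverse_vol_weights(vols):
    """Risk parity weights for uncorrelated assets: w_i proportional to 1/sigma_i."""
    inv = 1.0 / np.asarray(vols)
    return inv / inv.sum()

# Hypothetical predicted next-period volatilities for three assets.
vols = np.array([0.22, 0.15, 0.08])
w = inverse_vol_weights(vols)

# Verify equal risk contributions under a zero-correlation covariance.
cov = np.diag(vols ** 2)
risk_contrib = w * (cov @ w)   # each asset's contribution to variance
```

With nonzero correlations, equal risk contributions require a numerical solver rather than this closed form; the inverse-volatility case is the simplest instance of the structural risk avoidance the abstract refers to.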

Development of Deep Learning Ensemble Modeling for Cryptocurrency Price Prediction : Deep 4-LSTM Ensemble Model (암호화폐 가격 예측을 위한 딥러닝 앙상블 모델링 : Deep 4-LSTM Ensemble Model)

  • Choi, Soo-bin;Shin, Dong-hoon;Yoon, Sang-Hyeak;Kim, Hee-Woong
    • Journal of Information Technology Services / v.19 no.6 / pp.131-144 / 2020
  • As blockchain technology attracts attention, interest in cryptocurrency, which is received as a reward, is also increasing. Investments and transactions continue on the expectation of cryptocurrency's increasing value. Accordingly, cryptocurrency price prediction has been attempted through artificial intelligence techniques and social sentiment analysis. The purpose of this paper is to develop, following the design science research method, a deep learning ensemble model for predicting the price fluctuations and one-day-lagged price of cryptocurrency. The paper performs predictive modeling on Ethereum to make predictions more efficiently and accurately than existing models. It therefore collects five years of data related to the Ethereum price and performs pre-processing through customized functions. In the model development stage, four LSTM models, which are efficient for time series data, are used to build an ensemble model with the optimal combination of hyperparameters found during experimentation. Then, based on performance evaluation metrics, the superiority of the model is established through comparison with other deep learning models. The results make a practical contribution as a model with high performance and predictive accuracy for cryptocurrency prices and price fluctuations. They also make an academic contribution in that the research quality is improved by following scientific design research procedures that solve scientific problems and create and evaluate new, innovative artifacts in the field of information systems.

A Study on the Improvement of Accuracy of Cardiomegaly Classification Based on InceptionV3 (InceptionV3 기반의 심장비대증 분류 정확도 향상 연구)

  • Jeong, Woo Yeon;Kim, Jung Hun
    • Journal of Biomedical Engineering Research / v.43 no.1 / pp.45-51 / 2022
  • The purpose of this study is to improve classification accuracy over the existing InceptionV3 model, which has shown excellent performance in medical image classification, by proposing a new model with a modified fully connected layer structure. The model was trained, after data augmentation, on a total of 1,026 chest X-ray images of patients diagnosed as having a normal heart or cardiomegaly at Kyungpook National University Hospital. In the experiment, the training classification accuracy and loss of the InceptionV3 model were 99.57% and 1.42, while those of the proposed model were 99.81% and 0.92. In the evaluation of precision, recall, and F1 score for InceptionV3, the precision for the normal heart was 78%, the recall 100%, and the F1 score 88%; for cardiomegaly, the precision was 100%, the recall 78%, and the F1 score 88%. For the proposed model, the precision for the normal heart was 100%, the recall 92%, and the F1 score 96%; for cardiomegaly, the precision was 95%, the recall 100%, and the F1 score 97%. If chest X-ray images of normal hearts and cardiomegaly can be classified with the model proposed on the basis of these results, better classification will be possible and the reliability of the classification performance will gradually increase.

Generation of Daily High-resolution Sea Surface Temperature for the Seas around the Korean Peninsula Using Multi-satellite Data and Artificial Intelligence (다종 위성자료와 인공지능 기법을 이용한 한반도 주변 해역의 고해상도 해수면온도 자료 생산)

  • Jung, Sihun;Choo, Minki;Im, Jungho;Cho, Dongjin
    • Korean Journal of Remote Sensing / v.38 no.5_2 / pp.707-723 / 2022
  • Although satellite-based sea surface temperature (SST) is advantageous for monitoring large areas, spatiotemporal data gaps frequently occur due to various environmental or mechanical causes. Thus, it is crucial to fill in the gaps to maximize its usability. In this study, daily SST composite fields with a resolution of 4 km were produced through a two-step machine learning approach using polar-orbiting and geostationary satellite SST data. The first step was SST reconstruction based on the Data Interpolating Convolutional AutoEncoder (DINCAE) using multi-satellite-derived SST data. The second step refined the reconstructed SST toward in situ measurements using a light gradient boosting machine (LGBM) to produce the final daily SST composite fields. The DINCAE model was validated using random masks for 50 days, whereas the LGBM model was evaluated using leave-one-year-out cross-validation (LOYOCV). The SST reconstruction accuracy was high, resulting in an R2 of 0.98 and a root-mean-square error (RMSE) of 0.97℃. The accuracy gain from the second step was also high when compared to in situ measurements, resulting in an RMSE decrease of 0.21-0.29℃ and an MAE decrease of 0.17-0.24℃. The SST composite fields generated using all in situ data in this study were comparable with existing data-assimilated SST composite fields. In addition, the LGBM model in the second step greatly reduced the overfitting that was reported as a limitation of a previous study that used random forest. The spatial distribution of the corrected SST was similar to those of existing high-resolution SST composite fields, revealing that spatial details of oceanic phenomena such as fronts, eddies, and SST gradients were well simulated. This research demonstrated the potential to produce high-resolution, seamless SST composite fields using multi-satellite data and artificial intelligence.

A Study on Development of Collaborative Problem Solving Prediction System Based on Deep Learning: Focusing on ICT Factors (딥러닝 기반 협력적 문제 해결력 예측 시스템 개발 연구: ICT 요인을 중심으로)

  • Lee, Youngho
    • Journal of The Korean Association of Information Education / v.22 no.1 / pp.151-158 / 2018
  • The purpose of this study is to develop a system that predicts students' collaborative problem-solving ability based on the ICT factors of PISA 2015 that affect it. The PISA 2015 computer-based collaborative problem-solving assessment included 5,581 students in Korea. As a research method, correlation analysis was used to select meaningful variables, and a collaborative problem-solving ability prediction model was created using deep learning. The resulting model predicted collaborative problem-solving ability with about 95% accuracy on the test data set. Based on this model, a collaborative problem-solving ability prediction system was designed and implemented. This research is expected to provide a new perspective on applying big data and artificial intelligence to decision making about ICT investment and use in education.
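The two-stage pipeline the abstract describes (correlation-based variable selection, then a neural network predictor) can be sketched as follows. The synthetic data and scikit-learn's MLPClassifier are stand-ins for the PISA variables and the paper's deep learning model; both are assumptions.

```python
# Illustrative pipeline: select variables by correlation with the target,
# then fit a neural network classifier on the selected variables.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
n, p = 2000, 12
X = rng.normal(size=(n, p))
# Only the first three synthetic variables actually drive the outcome.
score = X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2] + rng.normal(0, 0.5, n)
y = (score > 0).astype(int)            # high vs. low collaborative ability

# Stage 1: keep variables with meaningful |correlation| to the target.
corrs = np.array([abs(np.corrcoef(X[:, j], score)[0, 1]) for j in range(p)])
selected = np.where(corrs > 0.1)[0]

# Stage 2: train a small MLP on the selected variables only.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Filtering by correlation first keeps the predictor small and interpretable, which matters when the goal is to inform decisions rather than purely maximize accuracy.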

Study on Inference and Search for Development of Diagnostic Ontology in Oriental Medicine (한의진단 Ontology 구축을 위한 추론과 탐색에 관한 연구)

  • Park, Jong-Hyun
    • Journal of Physiology & Pathology in Korean Medicine / v.23 no.4 / pp.745-750 / 2009
  • The goal of this study is to examine reasoning and search for the construction of a diagnosis ontology as the knowledge base of a diagnostic expert system in oriental medicine. An expert system is a field of artificial intelligence: a system that systematically encodes expert knowledge in a computer and derives information from it with diverse reasoning methods. A typical expert system consists of a knowledge base and a reasoning and explanation structure that offers conclusions from that knowledge. To apply an ontology as the knowledge base of a practical expert system, reasoning and search must be considered together. This study therefore compares reasoning and search with the diagnosis process in oriental medicine. Reasoning is divided into rule-based reasoning and case-based reasoning, and the former into forward chaining and backward chaining. Because of the characteristics of diagnosis, sometimes forward chaining and sometimes backward chaining is required, so hybrid chaining is often effective. Case-based reasoning settles a present problem by comparing it with past cases and is therefore suitable for diagnosis fields with abundant cases. Search is classified into breadth-first search, depth-first search, and best-first search, each of which has its own merits and demerits. To construct a diagnosis ontology applicable to a practical expert system, reasoning and search that reflect the diagnosis process and its characteristics should be considered.
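The forward chaining described above can be shown with a toy rule engine: rules fire whenever their premises are all known, adding new facts until nothing changes. The rules and symptom names below are invented for illustration, not taken from the diagnostic ontology.

```python
# Toy forward-chaining rule engine: repeatedly fire any rule whose
# premises are all in the fact set until no new facts can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical diagnostic rules as (premises, conclusion) pairs.
rules = [
    (["fever", "aversion_to_cold"], "exterior_pattern"),
    (["exterior_pattern", "floating_pulse"], "wind_cold_invasion"),
]

derived = forward_chain(["fever", "aversion_to_cold", "floating_pulse"], rules)
```

Backward chaining would instead start from a goal (e.g. "wind_cold_invasion") and recurse over the rules that conclude it, which is why diagnosis, driven sometimes by symptoms and sometimes by a suspected pattern, often benefits from the hybrid chaining the abstract mentions.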