• Title/Summary/Keyword: artificial intelligence-based model


A study on machine learning-based defense system proposal through web shell collection and analysis (웹쉘 수집 및 분석을 통한 머신러닝기반 방어시스템 제안 연구)

  • Kim, Ki-hwan;Shin, Yong-tae
    • Journal of Internet Computing and Services
    • /
    • v.23 no.4
    • /
    • pp.87-94
    • /
    • 2022
  • Recently, with the development of information and communication infrastructure, the number of Internet-connected devices has been increasing rapidly. Smartphones, laptops, computers, and even IoT devices receive information and communication services through Internet access. Since most device operating environments are web-based, they are vulnerable to web cyberattacks that use web shells. Once a web shell is uploaded to a web server, the attacker can easily take control of the server, which is why this attack occurs so frequently. As web shells cause considerable damage, companies respond with various security devices such as intrusion prevention systems, firewalls, and web application firewalls. However, web shells are difficult to detect with these devices, and because of these characteristics it is hard to prevent and respond to web shell attacks with existing systems and security software alone. Therefore, this paper proposes an automated defense system that collects and analyzes web shells using artificial intelligence (machine learning and deep learning) techniques on top of existing security software, so that new cyberattacks, such as previously unknown web shells, can be detected in advance. The proposed machine learning-based web shell defense system model quickly collects, analyzes, and detects malicious web shells, one of the major cyberattacks on the web environment, and is expected to be very helpful in designing and building security systems.
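The abstract gives no implementation details, so the following is only a minimal sketch of a machine learning web shell classifier in the spirit of the proposal: it assumes collected web shell and benign script files in hypothetical local directories and uses character n-gram TF-IDF features with a random forest. It is not the authors' system.

```python
# Hypothetical sketch of an ML-based web shell classifier, not the authors' pipeline.
# Assumes two directories of script files: one with known web shells, one with benign scripts.
from pathlib import Path

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline


def load_scripts(directory: str, label: int):
    """Read every script file in a directory and attach a class label."""
    texts, labels = [], []
    for path in Path(directory).rglob("*"):
        if path.is_file():
            texts.append(path.read_text(errors="ignore"))
            labels.append(label)
    return texts, labels


shells, y_shell = load_scripts("data/webshells", 1)   # hypothetical collected web shells
benign, y_benign = load_scripts("data/benign", 0)     # hypothetical benign scripts
X, y = shells + benign, y_shell + y_benign

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Character n-grams capture obfuscation patterns (e.g., eval/base64 chains) typical of web shells.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5), max_features=50000),
    RandomForestClassifier(n_estimators=300, random_state=42),
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```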

Automatic Collection of Production Performance Data Based on Multi-Object Tracking Algorithms (다중 객체 추적 알고리즘을 이용한 가공품 흐름 정보 기반 생산 실적 데이터 자동 수집)

  • Lim, Hyuna;Oh, Seojeong;Son, Hyeongjun;Oh, Yosep
    • The Journal of Society for e-Business Studies
    • /
    • v.27 no.2
    • /
    • pp.205-218
    • /
    • 2022
  • Recently, digital transformation in manufacturing has been accelerating, which makes technologies for collecting data from the shop floor increasingly important. Existing approaches focus primarily on obtaining specific manufacturing data using various sensors and communication technologies. To expand the channels of field data collection, this study proposes a method for automatically collecting manufacturing data based on vision-based artificial intelligence: real-time image information is analyzed with object detection and tracking technologies to obtain manufacturing data. The research team collects object motion information for each frame by applying YOLO (You Only Look Once) and DeepSORT as the object detection and tracking algorithms. The motion information is then converted into two kinds of manufacturing data (production performance and production time) through post-processing. A dynamically moving factory model is created to obtain training data for deep learning, and operating scenarios are proposed to reproduce the real-world shop-floor situation. The operating scenario assumes a flow shop consisting of six facilities. When manufacturing data were collected according to the operating scenarios, the accuracy was 96.3%.
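As a rough illustration of the post-processing step, the sketch below converts hypothetical per-frame tracking output (track ID and centroid) into production counts and completion times. The detector/tracker itself (YOLO + DeepSORT in the paper) is assumed to have run already, and the crossing-line rule is an assumption, not the authors' exact logic.

```python
# Hypothetical post-processing sketch: converting per-frame object-tracking output
# (track_id -> centroid) into production counts and completion times.
from typing import Dict, List, Tuple

Frame = Dict[int, Tuple[float, float]]  # track_id -> (x, y) centroid in pixels


def count_completions(frames: List[Frame], fps: float, line_x: float):
    """Count each track once when it crosses a vertical line, and record the time."""
    last_x: Dict[int, float] = {}
    completions = []  # list of (track_id, time_in_seconds)
    for i, frame in enumerate(frames):
        for track_id, (x, _y) in frame.items():
            prev = last_x.get(track_id)
            if prev is not None and prev < line_x <= x:   # crossed left -> right
                completions.append((track_id, i / fps))
            last_x[track_id] = x
    return completions


# Tiny synthetic example: one part (track 7) crosses x=100 between frames 1 and 2.
frames = [{7: (80.0, 50.0)}, {7: (95.0, 50.0)}, {7: (110.0, 50.0)}]
print(count_completions(frames, fps=30.0, line_x=100.0))  # -> [(7, 0.0666...)]
```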

How to build an AI Safety Management Chatbot Service based on IoT Construction Health Monitoring (IoT 건축시공 건전성 모니터링 기반 AI 안전관리 챗봇서비스 구축방안)

  • Hwi Jin Kang;Sung Jo Choi;Sang Jun Han;Jae Hyun Kim;Seung Ho Lee
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.106-116
    • /
    • 2024
  • Purpose: This paper uses IoT- and CCTV-based safety monitoring to analyze accidents and potential risks at construction sites, to detect and analyze hazards such as falls, collisions, and other abnormalities, and to establish an early-warning system using devices such as walkie-talkies and a chatbot service. Method: A safety management service model is presented through case studies of smart construction technology at construction sites and a review of the relevant literature. Result: According to the 'Construction Accident Statistics', in 2021 there were 26,888 casualties in the construction industry, accounting for 26.3% of all reported accidents, and 417 fatalities in construction-related accidents, representing 50.5% of all industrial accident-related deaths. This study proposes AI chatbot services for construction site safety management that utilize IoT-based health monitoring technologies in smart construction practice. The artificial intelligence chatbot system was demonstrated at construction sites with the participation of stakeholders such as workers, targeting major risk areas within the workplace, such as scaffolding work, openings, and access to hazardous machinery. Conclusion: The possibility of commercialization was confirmed, as the artificial intelligence chatbot service received more than 90 points in a satisfaction survey of the participating workers.
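Purely as an illustration of the alerting idea, the sketch below turns a hypothetical detection event into a chatbot warning message. The event fields, threshold, and send function are assumptions and do not reflect the paper's actual system.

```python
# Hypothetical sketch of the alerting step only: turning a monitoring/detection event
# (e.g., a fall detected near an opening) into a chatbot warning message.
from dataclasses import dataclass


@dataclass
class SafetyEvent:
    zone: str          # e.g., "scaffolding", "opening", "hazardous machinery"
    event_type: str    # e.g., "fall", "collision", "unauthorized_access"
    confidence: float  # detector confidence in [0, 1]


def send_chatbot_alert(message: str) -> None:
    """Placeholder for a messenger/chatbot API call (a print in this sketch)."""
    print(f"[ALERT] {message}")


def handle_event(event: SafetyEvent, threshold: float = 0.8) -> None:
    if event.confidence >= threshold:
        send_chatbot_alert(
            f"{event.event_type} detected in {event.zone} zone "
            f"(confidence {event.confidence:.2f}); check the site immediately."
        )


handle_event(SafetyEvent(zone="opening", event_type="fall", confidence=0.93))
```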

Corporate Bankruptcy Prediction Model using Explainable AI-based Feature Selection (설명가능 AI 기반의 변수선정을 이용한 기업부실예측모형)

  • Gundoo Moon;Kyoung-jae Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.241-265
    • /
    • 2023
  • A corporate insolvency prediction model serves as a vital tool for objectively monitoring the financial condition of companies. It enables timely warnings, facilitates responsive actions, and supports the formulation of effective management strategies to mitigate bankruptcy risks and enhance performance. Investors and financial institutions use default prediction models to minimize financial losses. As interest in using artificial intelligence (AI) technology for corporate insolvency prediction grows, extensive research has been conducted in this domain. However, there is an increasing demand for explainable AI models in corporate insolvency prediction, emphasizing interpretability and reliability. The SHAP (SHapley Additive exPlanations) technique has gained significant popularity and has demonstrated strong performance in various applications. Nonetheless, it has limitations such as computational cost, processing time, and scalability concerns that grow with the number of variables. This study introduces a novel approach to variable selection that reduces the number of variables by averaging SHAP values computed on bootstrapped data subsets instead of on the entire dataset. This technique aims to improve computational efficiency while maintaining excellent predictive performance. Random forest, XGBoost, and C5.0 models are trained on the carefully selected, highly interpretable variables to obtain classification results. The classification accuracy of an ensemble model built through soft voting, the target of the high-performance model design, is compared with that of the individual models. The study uses data from 1,698 Korean light industrial companies and employs bootstrapping to create distinct data groups. Logistic regression is employed to calculate SHAP values for each data group, and their averages are computed to derive the final SHAP values. The proposed model enhances interpretability while aiming to achieve superior predictive performance.
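A minimal sketch of the idea, assuming scikit-learn, xgboost, and the shap package, with synthetic data in place of the 1,698-firm financial dataset; since C5.0 has no standard Python implementation, a plain decision tree stands in for it here. This illustrates the bootstrapped SHAP averaging and soft-voting steps only and is not the authors' code.

```python
# Sketch of the paper's idea: average SHAP values over bootstrap subsets to pick
# features, then train classifiers and a soft-voting ensemble on the selection.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

# Placeholder data standing in for the financial dataset of 1,698 firms.
X, y = make_classification(n_samples=1698, n_features=40, n_informative=10, random_state=0)

rng = np.random.default_rng(0)
n_bootstraps, mean_abs_shap = 10, np.zeros(X.shape[1])
for _ in range(n_bootstraps):
    idx = rng.choice(len(X), size=len(X), replace=True)        # bootstrap subset
    X_b, y_b = X[idx], y[idx]
    lr = LogisticRegression(max_iter=1000).fit(X_b, y_b)
    shap_values = shap.LinearExplainer(lr, X_b).shap_values(X_b)
    mean_abs_shap += np.abs(shap_values).mean(axis=0)
mean_abs_shap /= n_bootstraps

top_features = np.argsort(mean_abs_shap)[::-1][:15]            # keep the 15 strongest features
X_sel = X[:, top_features]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, stratify=y, random_state=0)
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("xgb", XGBClassifier(eval_metric="logloss")),
        ("tree", DecisionTreeClassifier(random_state=0)),      # stand-in for C5.0
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```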

Performance Evaluation of Price-based Input Features in Stock Price Prediction using Tensorflow (텐서플로우를 이용한 주가 예측에서 가격-기반 입력 피쳐의 예측 성능 평가)

  • Song, Yoojeong;Lee, Jae Won;Lee, Jongwoo
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.11
    • /
    • pp.625-631
    • /
    • 2017
  • Stock price prediction remains an unsolved problem. Although there have been various attempts and studies to predict stock prices scientifically, it is impossible to predict the future precisely. Nevertheless, stock price prediction has long been a subject of interest in a variety of related fields such as economics, mathematics, physics, and computer science. In this paper, we study the fluctuation patterns of stock prices and predict future trends using deep learning. This study presents three deep learning models built with TensorFlow, an open-source framework, in which each model accepts a different set of input features. We extend a previous study that used simple price data by measuring the performance of the three predictive models as the number of price-based input features increases. Through this experiment, we measure how the predictive model's performance changes depending on the price-based input features. Finally, we compare and analyze the experimental results to evaluate the impact of price-based input features on stock price prediction.
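As an illustration only, the sketch below builds small TensorFlow models that accept different numbers of price-based input features and compares their test errors on synthetic price data. The architecture, window size, and feature sets are placeholders and not the models used in the paper.

```python
# Sketch: comparing models that take different numbers of price-based input features.
import numpy as np
import tensorflow as tf

np.random.seed(0)
n_days, window = 1000, 20
prices = np.cumsum(np.random.randn(n_days, 4), axis=0) + 100.0   # fake open/high/low/close

feature_sets = {"close_only": [3], "ohlc": [0, 1, 2, 3]}          # column indices into `prices`


def make_windows(data, window):
    """Predict the next close from the previous `window` days of features."""
    X = np.stack([data[i:i + window] for i in range(len(data) - window)])
    y = prices[window:, 3]
    return X, y


for name, cols in feature_sets.items():
    X, y = make_windows(prices[:, cols], window)
    split = int(0.8 * len(X))
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, len(cols))),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X[:split], y[:split], epochs=5, verbose=0)
    print(name, "test MSE:", model.evaluate(X[split:], y[split:], verbose=0))
```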

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.177-192
    • /
    • 2016
  • Machine learning is a field of artificial intelligence. It refers to an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. One representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by the structure of biological neural networks. Other machine learning models include the decision tree, naive Bayes, and SVM (support vector machine) models. Among these, we use the SVM model in this study because it is mainly used for classification and regression analysis, which fits our problem well. The core principle of the SVM is to find a reasonable hyperplane that separates different groups in the data space. Given information about the data in any two groups, the SVM model judges to which group new data belong based on the hyperplane obtained from the given data set. Thus, the larger the amount of meaningful data, the better the machine learning ability. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of financial data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and many studies have successfully forecast stock prices using machine learning algorithms. Recently, financial companies have begun to provide robo-advisor services (a compound of 'robot' and 'advisor') that perform various financial tasks through advanced algorithms using rapidly changing, huge amounts of data. A robo-advisor's main tasks are to advise investors according to their personal investment propensity and to manage their portfolios automatically. In this study, we propose a method for forecasting the Korean volatility index, VKOSPI, using the SVM model, one of the machine learning methods, and apply it to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices; it is similar to the VIX index in the United States, which is based on S&P 500 option prices. The Korea Exchange (KRX) calculates and announces the real-time VKOSPI index. VKOSPI behaves like ordinary volatility and affects option prices: VKOSPI and option prices move in the same direction regardless of the option type (call and put options with various strike prices). If volatility increases, both call and put option premiums increase because the probability that the option will be exercised increases. Investors can see in real time how much an option price rises as volatility rises through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility. Therefore, accurate forecasting of VKOSPI movements is one of the important factors that can generate profit in option trading. In this study, we verified with real option data that accurate forecasts of VKOSPI can produce a large profit in real option trading. To the best of our knowledge, there have been no studies that predict the direction of VKOSPI with machine learning and apply the predictions to actual option trading.
In this study, we predicted daily VKOSPI changes with the SVM model and then took an intraday option strangle position, which profits as option prices fall, only when VKOSPI was expected to decline during the day. We analyzed the results and tested whether trading based on the SVM's predictions is applicable to real option trading. The results showed that the prediction accuracy for VKOSPI was 57.83% on average, and the number of position entries was 43.2, less than half of the benchmark (100). A small number of trades is an indicator of trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
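A minimal sketch of the forecasting step, assuming a scikit-learn SVM on synthetic volatility data with lagged daily changes as features; the actual inputs, labels, and trading rules used in the paper are not reproduced here.

```python
# Sketch of the forecasting step only: an SVM classifier predicting whether the
# volatility index falls the next day, from a few lagged daily changes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
vkospi = 15 + np.cumsum(rng.normal(0, 0.3, 800))                 # fake volatility index series

# Features: last 5 daily changes; label: 1 if VKOSPI declines on the following day.
changes = np.diff(vkospi)
X = np.stack([changes[i:i + 5] for i in range(len(changes) - 5)])
y = (changes[5:] < 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, shuffle=False)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("direction accuracy:", (pred == y_te).mean())
# A strangle position would be entered only on days where pred == 1 (decline expected).
```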

The Intelligent Determination Model of Audience Emotion for Implementing Personalized Exhibition (개인화 전시 서비스 구현을 위한 지능형 관객 감정 판단 모형)

  • Jung, Min-Kyu;Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.39-57
    • /
    • 2012
  • Recently, due to the introduction of high-tech equipment, much attention has been focused on interactive exhibits that can double the exhibition effect through interaction with the audience. Such interactive exhibitions also make it possible to measure a variety of audience reactions. Among these, this research uses changes in facial features that can be collected in an interactive exhibition space. We develop an artificial neural network-based prediction model that predicts the audience's response by measuring the change in facial features when the audience is stimulated from a non-excited state. To represent the emotional state of the audience, the research uses a valence-arousal model and suggests an overall framework composed of the following six steps. The first step collects data for modeling; the data were collected from people who participated in the 2012 Seoul DMC Culture Open and were used for the experiments. The second step extracts 64 facial features from the collected data and compensates the facial feature values. The third step generates the independent and dependent variables of the artificial neural network model. The fourth step extracts the independent variables that affect the dependent variable using statistical techniques. The fifth step builds the artificial neural network model and performs the learning process using the training and test sets. The final, sixth step validates the prediction performance of the artificial neural network model using the validation data set. The proposed model is compared with a statistical prediction model to see whether it performs better. As a result, although the data set in this experiment contained much noise, the proposed model showed better results than a multiple regression analysis model. If this prediction model of audience reaction were used in a real exhibition, it could provide countermeasures and services appropriate to the audience's reaction while viewing the exhibits. Specifically, if the audience's arousal toward an exhibit is low, actions to increase arousal can be taken, for instance recommending other preferred content or using light or sound to draw attention to the exhibit. In other words, when planning future exhibitions, it becomes possible to design exhibitions that satisfy various audience preferences, and a personalized environment that helps visitors concentrate on the exhibits can be fostered. However, the proposed model still shows low prediction accuracy, for the following reasons. First, the data cover diverse visitors to real exhibitions, so it was difficult to control an optimized experimental environment; the collected data therefore contain much noise, which results in lower accuracy. In further research, data collection will be conducted in a more controlled experimental environment, and further work to increase the prediction accuracy of the model will be carried out. Second, using changes in facial expression alone is thought to be insufficient for extracting audience emotions. If facial expressions were combined with other responses, such as sound or audience behavior, better results would be expected.
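As an illustration only, the sketch below trains a small neural network (scikit-learn's MLPRegressor) to map changes in 64 facial feature values to valence and arousal scores on random stand-in data; it is not the authors' model or data.

```python
# Sketch: a small neural network mapping changes in 64 facial features to
# (valence, arousal). The data here is random and stands in for the exhibition
# measurements described in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))                          # change of 64 facial feature values
true_w = rng.normal(size=(64, 2))
y = X @ true_w + rng.normal(scale=0.5, size=(500, 2))   # columns: valence, arousal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```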

The prediction of the stock price movement after IPO using machine learning and text analysis based on TF-IDF (증권신고서의 TF-IDF 텍스트 분석과 기계학습을 이용한 공모주의 상장 이후 주가 등락 예측)

  • Yang, Suyeon;Lee, Chaerok;Won, Jonggwan;Hong, Taeho
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.237-262
    • /
    • 2022
  • There has been growing interest in IPOs (Initial Public Offerings) due to the profitable returns that IPO stocks can offer to investors. However, IPOs can also be speculative investments that involve substantial risk, because the shares tend to be volatile and the supply of IPO shares is often highly limited. Therefore, it is crucially important that IPO investors are well informed about the issuing firms and the market before deciding whether to invest. Unlike institutional investors, individual investors are at a disadvantage, since they have few opportunities to obtain information on IPOs. In this regard, the purpose of this study is to provide individual investors with information they may consider when making an IPO investment decision. This study presents a model that uses machine learning and text analysis to predict whether an IPO stock price will move up or down after the first 5 trading days. Our sample includes 691 Korean IPOs from June 2009 to December 2020. The input variables for the prediction are three tone variables created from IPO prospectuses and quantitative variables that are firm-specific, issue-specific, or market-specific. The three prospectus tone variables indicate the percentage of positive, neutral, and negative sentences in a prospectus, respectively. Only the sentences in the Risk Factors section of each prospectus were considered for the tone analysis. All sentences were classified into 'positive', 'neutral', and 'negative' via text analysis using TF-IDF (Term Frequency - Inverse Document Frequency). The tone of each sentence was measured with machine learning rather than a lexicon-based approach, due to the lack of sentiment dictionaries suitable for Korean text analysis in the context of finance. For this reason, the training set was created by randomly selecting 10% of the sentences from each prospectus, and the sentences in the training set were classified manually by reading each one. Then, based on the training set, a Support Vector Machine model was used to predict the tone of the sentences in the test set. Finally, the machine learning model calculated the percentages of positive, neutral, and negative sentences in each prospectus. To predict the price movement of an IPO stock, four different machine learning techniques were applied: Logistic Regression, Random Forest, Support Vector Machine, and Artificial Neural Network. According to the results, models that use the quantitative variables based on technical analysis together with the prospectus tone variables show higher accuracy than models that use only the quantitative variables. More specifically, the prediction accuracy improved by 1.45 percentage points for the Random Forest model, 4.34 percentage points for the Artificial Neural Network model, and 5.07 percentage points for the Support Vector Machine model. After testing the performance of these machine learning techniques, the Artificial Neural Network model using both the quantitative variables and the prospectus tone variables achieved the highest prediction accuracy, 61.59%. The results indicate that the tone of a prospectus is a significant factor in predicting the price movement of an IPO stock. In addition, the McNemar test was used to verify the statistically significant difference between the models.
The model using only quantitative variables and the model using both the quantitative variables and the prospectus tone variables were compared, and it was confirmed that the predictive performance improved significantly at a 1% significance level.
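A minimal sketch of the tone-measurement step, assuming scikit-learn and tiny English stand-in sentences: TF-IDF features feed an SVM sentence classifier, and the predicted labels are aggregated into the three tone percentages per prospectus. The sentences, labels, and parameters are placeholders, not the study's Korean training set.

```python
# Sketch of the tone-measurement step: TF-IDF features plus an SVM to label sentences
# as positive / neutral / negative, then the share of each tone per prospectus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hand-labeled training sentences (the paper labels ~10% of sentences manually).
train_sentences = [
    "demand for our products is expected to grow",
    "revenue may decline if competition intensifies",
    "the company was founded in 2005",
]
train_labels = ["positive", "negative", "neutral"]

tone_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), SVC(kernel="linear"))
tone_clf.fit(train_sentences, train_labels)


def prospectus_tone(sentences):
    """Return the share of positive / neutral / negative sentences in one prospectus."""
    preds = tone_clf.predict(sentences)
    return {tone: (preds == tone).mean() for tone in ("positive", "neutral", "negative")}


risk_section = [
    "losses may occur due to exchange rate fluctuations",
    "we expect stable cash flows from long-term contracts",
]
print(prospectus_tone(risk_section))
# These three tone shares are then combined with firm-, issue-, and market-level
# variables as inputs to LR / RF / SVM / ANN classifiers of post-IPO price movement.
```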

Bone Age Assessment Using Artificial Intelligence in Korean Pediatric Population: A Comparison of Deep-Learning Models Trained With Healthy Chronological and Greulich-Pyle Ages as Labels

  • Pyeong Hwa Kim;Hee Mang Yoon;Jeong Rye Kim;Jae-Yeon Hwang;Jin-Ho Choi;Jisun Hwang;Jaewon Lee;Jinkyeong Sung;Kyu-Hwan Jung;Byeonguk Bae;Ah Young Jung;Young Ah Cho;Woo Hyun Shim;Boram Bak;Jin Seong Lee
    • Korean Journal of Radiology
    • /
    • v.24 no.11
    • /
    • pp.1151-1163
    • /
    • 2023
  • Objective: To develop a deep-learning-based bone age prediction model optimized for Korean children and adolescents and evaluate its feasibility by comparing it with a Greulich-Pyle-based deep-learning model. Materials and Methods: A convolutional neural network was trained to predict age according to the bone development shown on a hand radiograph (bone age) using 21036 hand radiographs of Korean children and adolescents without known bone development-affecting diseases/conditions obtained between 1998 and 2019 (median age [interquartile range {IQR}], 9 [7-12] years; male:female, 11794:9242) and their chronological ages as labels (Korean model). We constructed 2 separate external datasets consisting of Korean children and adolescents with healthy bone development (Institution 1: n = 343; median age [IQR], 10 [4-15] years; male: female, 183:160; Institution 2: n = 321; median age [IQR], 9 [5-14] years; male: female, 164:157) to test the model performance. The mean absolute error (MAE), root mean square error (RMSE), and proportions of bone age predictions within 6, 12, 18, and 24 months of the reference age (chronological age) were compared between the Korean model and a commercial model (VUNO Med-BoneAge version 1.1; VUNO) trained with Greulich-Pyle-based age as the label (GP-based model). Results: Compared with the GP-based model, the Korean model showed a lower RMSE (11.2 vs. 13.8 months; P = 0.004) and MAE (8.2 vs. 10.5 months; P = 0.002), a higher proportion of bone age predictions within 18 months of chronological age (88.3% vs. 82.2%; P = 0.031) for Institution 1, and a lower MAE (9.5 vs. 11.0 months; P = 0.022) and higher proportion of bone age predictions within 6 months (44.5% vs. 36.4%; P = 0.044) for Institution 2. Conclusion: The Korean model trained using the chronological ages of Korean children and adolescents without known bone development-affecting diseases/conditions as labels performed better in bone age assessment than the GP-based model in the Korean pediatric population. Further validation is required to confirm its accuracy.
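For illustration only, the sketch below shows the general shape of such a model: a small Keras CNN regressing age in months from a hand radiograph, reporting MAE and RMSE. The architecture and the random images are placeholders and bear no relation to the study's model or the commercial VUNO software.

```python
# Sketch only: a CNN regressing bone age (in months) directly from a hand radiograph,
# trained with chronological age as the label, as the abstract describes.
import numpy as np
import tensorflow as tf

# Fake grayscale radiographs and ages (months) standing in for the 21,036-image dataset.
X = np.random.rand(64, 256, 256, 1).astype("float32")
y = np.random.uniform(12, 216, size=(64,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 256, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),                      # predicted age in months
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.MeanAbsoluteError(),
                       tf.keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=1, batch_size=8, verbose=0)

# MAE/RMSE in months, matching the error metrics reported in the paper.
print(dict(zip(model.metrics_names, model.evaluate(X, y, verbose=0))))
```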

Development of a Flooding Detection Learning Model Using CNN Technology (CNN 기술을 적용한 침수탐지 학습모델 개발)

  • Dong Jun Kim;YU Jin Choi;Kyung Min Park;Sang Jun Park;Jae-Moon Lee;Kitae Hwang;Inhwan Jung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.6
    • /
    • pp.1-7
    • /
    • 2023
  • This paper developed a learning model that classifies normal roads and flooded roads using artificial intelligence technology. We expanded the diversity of the training data using various data augmentation techniques and implemented a model that performs well in various environments. Transfer learning was performed using the CNN-based ResNet152V2 model as the pre-trained model. During training, the performance of the final model was improved through various parameter tuning and optimization steps. Training was implemented in Python on a Google Colab NVIDIA Tesla T4 GPU, and the test results showed that flooding situations were detected with very high accuracy on the test dataset.
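A minimal sketch of the described approach, assuming a hypothetical "data/flood" directory with one subfolder per class: Keras augmentation layers plus a frozen ImageNet-pretrained ResNet152V2 backbone with a binary classification head. The hyperparameters and data layout are assumptions, not the authors' configuration.

```python
# Sketch: data augmentation plus transfer learning from ResNet152V2 for binary
# classification of normal vs. flooded roads.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/flood", image_size=(224, 224), batch_size=32)   # subfolders: normal/, flooded/

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

base = tf.keras.applications.ResNet152V2(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3))
base.trainable = False                                     # freeze the pre-trained backbone

inputs = tf.keras.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.resnet_v2.preprocess_input(x)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```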