Search results for "Deep Learning System" (title/summary/keyword): 1,738 results

Deep learning framework for bovine iris segmentation

  • Heemoon Yoon; Mira Park; Hayoung Lee; Jisoon An; Taehyun Lee; Sang-Hee Lee
    • Journal of Animal Science and Technology / v.66 no.1 / pp.167-177 / 2024
  • Iris segmentation is an initial step for identifying the biometrics of animals when establishing a traceability system for livestock. In this study, we propose a deep learning framework for pixel-wise segmentation of bovine iris with a minimized use of annotation labels utilizing the BovineAAEyes80 public dataset. The proposed image segmentation framework encompasses data collection, data preparation, data augmentation selection, training of 15 deep neural network (DNN) models with varying encoder backbones and segmentation decoder DNNs, and evaluation of the models using multiple metrics and graphical segmentation results. This framework aims to provide comprehensive and in-depth information on each model's training and testing outcomes to optimize bovine iris segmentation performance. In the experiment, U-Net with a VGG16 backbone was identified as the optimal combination of encoder and decoder models for the dataset, achieving an accuracy and dice coefficient score of 99.50% and 98.35%, respectively. Notably, the selected model accurately segmented even corrupted images without proper annotation data. This study contributes to the advancement of iris segmentation and the establishment of a reliable DNN training framework.
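A minimal sketch of the U-Net/VGG16 pairing the abstract above identifies as optimal, assuming the segmentation_models_pytorch library rather than the paper's own code; the pretrained weights, image size, and training step are illustrative.

```python
# Sketch: U-Net decoder with a VGG16 encoder backbone for binary iris masks,
# built with segmentation_models_pytorch (not the paper's implementation).
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="vgg16",        # encoder backbone reported as optimal in the paper
    encoder_weights="imagenet",  # assumed initialization; the paper's setting may differ
    in_channels=3,
    classes=1,                   # iris vs. background
)
loss_fn = smp.losses.DiceLoss(mode="binary")  # Dice matches the reported Dice score metric

images = torch.rand(4, 3, 256, 256)                     # placeholder batch
masks = torch.randint(0, 2, (4, 1, 256, 256)).float()   # placeholder ground-truth masks
loss = loss_fn(model(images), masks)
loss.backward()                                          # one illustrative training step
```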

Cluster-based Deep One-Class Classification Model for Anomaly Detection

  • Younghwan Kim; Huy Kang Kim
    • Journal of Internet Technology / v.22 no.4 / pp.903-911 / 2021
  • As cyber-attacks on Cyber-Physical Systems (CPS) become more diverse and sophisticated, it is important to quickly detect malicious behaviors occurring in CPS. Since a CPS can collect sensor data in near real time throughout its process, there have been many attempts to detect anomalous behavior by learning normal behavior, from the perspective of data-driven security. However, since CPS datasets are big data and most of the data are normal, it has always been a great challenge to analyze the data and implement an anomaly detection model. In this paper, we propose and evaluate the Clustered Deep One-Class Classification (CD-OCC) model, which combines a clustering algorithm and a deep learning (DL) model using only a normal dataset for anomaly detection. We use an auto-encoder to reduce the dimensionality of the dataset and the K-means clustering algorithm to partition the normal data into an optimal number of clusters. The DL model is trained to predict the cluster of each normal sample, and its logit outputs, which represent the normal data well in the sense of knowledge distillation, are used as inputs to the OCC model. In the experiments, the F1 score of the proposed model reaches 0.93 and 0.83 on the SWaT and HAI datasets, respectively, a significant improvement over other recent detectors such as Com-AE and SVM-RBF.
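A sketch of the four-stage CD-OCC pipeline on synthetic "normal" data: autoencoder for dimensionality reduction, K-means clustering, a classifier trained to predict clusters, and the classifier's logits fed to a one-class model. All shapes, cluster counts, and hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

X = torch.rand(1000, 50)                      # placeholder normal sensor data

# 1) Autoencoder reduces 50-d sensor vectors to an 8-d latent code.
encoder = nn.Sequential(nn.Linear(50, 8), nn.ReLU())
decoder = nn.Sequential(nn.Linear(8, 50))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

# 2) K-means assigns each latent code to one of k clusters
#    (k would be chosen by a criterion such as silhouette score).
latent = encoder(X).detach().numpy()
k = 4
clusters = KMeans(n_clusters=k, n_init=10).fit_predict(latent)

# 3) A classifier learns to predict the cluster; its logits summarize "normality".
clf = nn.Sequential(nn.Linear(50, 32), nn.ReLU(), nn.Linear(32, k))
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
y = torch.tensor(clusters, dtype=torch.long)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(clf(X), y)
    loss.backward()
    opt.step()

# 4) The logit vectors train a one-class SVM; low scores flag anomalies.
logits = clf(X).detach().numpy()
occ = OneClassSVM(nu=0.01).fit(logits)
scores = occ.decision_function(logits)        # negative => anomalous
```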

A Study on Security Event Detection in ESM Using Big Data and Deep Learning

  • Lee, Hye-Min; Lee, Sang-Joon
    • International Journal of Internet, Broadcasting and Communication / v.13 no.3 / pp.42-49 / 2021
  • As cyber attacks become more intelligent, it is difficult to detect advanced attacks in various fields such as industry, defense, and medical care. Individual security systems such as IPS (Intrusion Prevention System) are deployed, but the need for centralized, integrated management of each security system is increasing. In this paper, we collect big data for intrusion detection and build an intrusion detection platform using deep learning with CNNs (Convolutional Neural Networks). We design an intelligent big data platform that collects data by observing and analyzing user visit logs and linking them with big data, and build an intrusion detection platform based on a CNN model. The performance of the resulting Intrusion Detection System (IDS) was evaluated using the KDD99 dataset, derived from the 1998 DARPA intrusion detection evaluation data, and tested against KDD99's four attack categories: DoS, U2R, R2L, and Probing.
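As a rough illustration of a CNN over KDD99-style records, the sketch below convolves across the 41 raw KDD99 features and classifies into normal plus the four attack categories. The layer sizes are assumptions for illustration, not the paper's architecture, and preprocessing (one-hot encoding of symbolic features, scaling) is taken as already done.

```python
import torch
import torch.nn as nn

NUM_FEATURES = 41   # raw KDD99 feature count (grows after one-hot encoding; kept simple here)
NUM_CLASSES = 5     # normal, DoS, Probe, R2L, U2R

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1),  # convolve across the feature axis
    nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * (NUM_FEATURES // 2), NUM_CLASSES),
)

x = torch.rand(8, 1, NUM_FEATURES)   # batch of 8 connection records
logits = model(x)                    # shape (8, 5)
```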

A Study on the License Plate Recognition Based on Direction Normalization and CNN Deep Learning (방향 정규화 및 CNN 딥러닝 기반 차량 번호판 인식에 관한 연구)

  • Ki, Jaewon; Cho, Seongwon
    • Journal of Korea Multimedia Society / v.25 no.4 / pp.568-574 / 2022
  • In this paper, direction normalization and CNN deep learning are used to develop a more reliable license plate recognition system. The existing license plate recognition system consists of three main modules: license plate detection module, character segmentation module, and character recognition module. The proposed system minimizes recognition error by adding a direction normalization module when a detected license plate is inclined. Experimental results show the superiority of the proposed method in comparison to the previous system.
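The paper does not publish its direction-normalization code, so the sketch below shows one common OpenCV deskewing approach as a stand-in: estimate the tilt of the detected plate with minAreaRect and rotate it upright before character segmentation. The thresholding choice, angle handling, and `plate.jpg` input are all illustrative assumptions.

```python
import cv2
import numpy as np

def normalize_direction(plate_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # Foreground pixel coordinates as (x, y) points for minAreaRect.
    coords = np.column_stack(np.where(binary > 0)[::-1]).astype(np.float32)
    angle = cv2.minAreaRect(coords)[-1]      # tilt of the tightest bounding box
    if angle > 45:                           # map to the smallest corrective rotation
        angle -= 90
    h, w = plate_bgr.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(plate_bgr, M, (w, h), flags=cv2.INTER_CUBIC)

upright = normalize_direction(cv2.imread("plate.jpg"))  # hypothetical input image
```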

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae; Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults have a ripple effect on the local and national economy, beyond stakeholders such as the managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol' enterprises, went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse stakeholder interests and to avoid a sudden total collapse like the 'Lehman Brothers case' of the global financial crisis. The key variables driving corporate defaults vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study confirms that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of the predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, the time-dependent bias must be compensated by a time series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a consistent bankruptcy model across this span, we first train a time series deep learning model on the pre-crisis data (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007-2008), yielding a model that behaves consistently with the training data and shows excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000-2008) with the optimal parameters found during validation. Finally, the trained models are evaluated and compared on the test data (2009), which demonstrates the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis, the logit model), we show that the deep learning time series model is useful for robust corporate default prediction across all three variable bundles. The definition of bankruptcy is the same as in Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data suffer from nonlinear variables, multicollinearity among variables, and a lack of data: the logit model handles nonlinearity, the Lasso regression model addresses multicollinearity, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis toward automated AI analysis and, eventually, intertwined AI applications. Although research on corporate default prediction using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and delivers better predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working to integrate such systems into the everyday life of their nations, yet deep learning time series research for the financial industry is still insufficient. As an initial study on deep learning time series analysis of corporate defaults, it is hoped that this work will serve as comparative material for non-specialists beginning to combine financial data with deep learning time series algorithms.
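To make the deep learning time series side concrete, here is a minimal PyTorch sketch of an LSTM that reads a firm's yearly financial-ratio sequence and outputs a default probability, following the chronological split idea above. The feature count, hidden size, and seven-year window are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class DefaultLSTM(nn.Module):
    def __init__(self, n_features: int = 20, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (firms, years, features)
        _, (h, _) = self.lstm(x)                 # final hidden state summarizes the sequence
        return torch.sigmoid(self.head(h[-1]))   # default probability per firm

model = DefaultLSTM()
seq = torch.rand(16, 7, 20)    # 16 firms, 7 years (e.g. 2000-2006) of 20 ratios each
p_default = model(seq)         # shape (16, 1)
```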

Guideline on Security Measures and Implementation of Power System Utilizing AI Technology (인공지능을 적용한 전력 시스템을 위한 보안 가이드라인)

  • Choi, Inji; Jang, Minhae; Choi, Moonsuk
    • KEPCO Journal on Electric Power and Energy / v.6 no.4 / pp.399-404 / 2020
  • There are many attempts to apply AI technology to diagnose facilities or improve work efficiency in the power industry, and the emergence of new machine learning technologies such as deep learning is accelerating the digital transformation of the power sector. The problem is that traditional power systems face security risks when adopting state-of-the-art AI systems: the adoption converges previously separate domains and exposes the power system to new cybersecurity threats and vulnerabilities. This paper deals with security measures for, and implementations of, power systems that use machine learning. By building a commercial facility operations forecasting system based on machine learning over power big data, it identifies and addresses the security vulnerabilities that must be remedied to protect customer information and power system safety. Furthermore, it provides security guidelines by generalizing the security measures to be considered when applying AI.

Source Code Identification Using Deep Neural Network (심층신경망을 이용한 소스 코드 원작자 식별)

  • Rhim, Jisu; Abuhmed, Tamer
    • KIPS Transactions on Software and Data Engineering / v.8 no.9 / pp.373-378 / 2019
  • Since much program source code is openly available online, problems with reckless plagiarism and copyright infringement are occurring. Source code written repeatedly by the same author tends to carry unique fingerprints arising from the author's programming style. This paper identifies each author by training a deep neural network on Google Code Jam program sources. Each author's source code is vectorized with pre-processing methods such as prediction-based embeddings or the frequency-based TF-IDF approach, and the deep neural network then learns to identify the original author of a program source. In addition, a language-independent learning system was constructed using these pre-processing steps and compared with other existing learning methods. The models combining TF-IDF with deep neural networks were found to perform better than those using other pre-processing or learning methods.
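A minimal scikit-learn sketch of the TF-IDF-plus-neural-network pairing the abstract reports as best. The two-snippet corpus, author labels, and character n-gram tokenization are purely illustrative assumptions (the paper's tokenization is not specified here); character n-grams also hint at the language-independence idea.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

# Hypothetical two-author toy corpus standing in for Google Code Jam sources.
sources = [
    "for (int i = 0; i < n; ++i) sum += a[i];",
    "int i; for(i=0;i<n;i++){ total = total + arr[i]; }",
]
authors = ["alice", "bob"]

# Character n-gram TF-IDF: captures stylistic fingerprints without parsing a language.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vec.fit_transform(sources)

# Small feed-forward network as the authorship classifier.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
clf.fit(X, authors)
print(clf.predict(vec.transform(["for (int j = 0; j < n; ++j) sum += a[j];"])))
```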

Damage detection in structures using modal curvatures gapped smoothing method and deep learning

  • Nguyen, Duong Huong; Bui-Tien, T.; Roeck, Guido De; Wahab, Magd Abdel
    • Structural Engineering and Mechanics / v.77 no.1 / pp.47-56 / 2021
  • This paper deals with damage detection using a Gapped Smoothing Method (GSM) combined with deep learning. The Convolutional Neural Network (CNN) is the deep learning model used: it has an input layer, an output layer, and a number of hidden layers that include convolutional layers. The input is a tensor of shape (number of images) × (image width) × (image height) × (image depth). An activation function is applied each time the tensor passes through a hidden layer, the last hidden layer is fully connected, and the output layer then produces the CNN's prediction. In this paper, a complete machine learning system is introduced. The training data were taken from a Finite Element (FE) model, and the input images are contour plots of the curvature gapped-smoothing damage index. A free-free beam is used as a case study. In the first step, the FE model of the beam was used to generate data; the collected data were then divided into 70% for training and 30% for validation. In the second step, the proposed CNN was trained on the training data and validated on the held-out data. Furthermore, a vibration experiment on a damaged steel beam in a free-free support condition was carried out in the laboratory to test the method. A total of 15 accelerometers were set up to measure the mode shapes and compute the gapped-smoothed curvature of the damaged beam, and two scenarios with different damage severities were introduced. The results showed that the trained CNN successfully detected both the location and the severity of the damage in the experimental beam.
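A small Keras sketch of a CNN of the kind described, matching the (images × width × height × depth) tensor layout in the abstract: damage-index contour images in, a damage class out. The image size, channel counts, and ten output classes are illustrative assumptions.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),              # (width, height, depth) per image
    tf.keras.layers.Conv2D(16, 3, activation="relu"), # convolutional hidden layers
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),     # fully connected layer before output
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g. 10 damage location/severity bins
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```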

Comparison and analysis of prediction performance of fine particulate matter(PM2.5) based on deep learning algorithm (딥러닝 알고리즘 기반의 초미세먼지(PM2.5) 예측 성능 비교 분석)

  • Kim, Younghee; Chang, Kwanjong
    • Journal of Convergence for Information Technology / v.11 no.3 / pp.7-13 / 2021
  • This study develops an artificial intelligence prediction system for fine particulate matter (PM2.5) based on a GAN deep learning model. The experimental data comprise time series of temperature, humidity, wind speed, and atmospheric pressure together with the concentrations of air pollutants such as SO2, CO, O3, NO2, and PM10, which are closely related to PM2.5. Because the concentration at the current time step is affected by the concentration at the previous time step, a recursive supervised learning prediction model was applied. For a comparative analysis against the existing CNN and LSTM models, the differences between observed and predicted values were analyzed and visualized. The performance analysis confirmed that the proposed GAN improved on LSTM by 15.8%, 10.9%, and 5.5% on the evaluation metrics RMSE, MAPE, and IOA, respectively.
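The three evaluation metrics named above are standard; a NumPy sketch of RMSE, MAPE, and Willmott's index of agreement (IOA) follows, with placeholder observation and prediction arrays standing in for real PM2.5 series.

```python
import numpy as np

def rmse(obs, pred):
    return np.sqrt(np.mean((obs - pred) ** 2))

def mape(obs, pred):
    return np.mean(np.abs((obs - pred) / obs)) * 100.0   # observations must be non-zero

def ioa(obs, pred):
    # Willmott's index of agreement: 1 is perfect agreement.
    num = np.sum((obs - pred) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

obs = np.array([20.0, 35.0, 18.0, 50.0])    # placeholder PM2.5 observations
pred = np.array([22.0, 33.0, 20.0, 47.0])   # placeholder model predictions
print(rmse(obs, pred), mape(obs, pred), ioa(obs, pred))
```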

Application of sequence to sequence learning based LSTM model (LSTM-s2s) for forecasting dam inflow (Sequence to Sequence based LSTM (LSTM-s2s)모형을 이용한 댐유입량 예측에 대한 연구)

  • Han, Heechan; Choi, Changhyun; Jung, Jaewon; Kim, Hung Soo
    • Journal of Korea Water Resources Association / v.54 no.3 / pp.157-166 / 2021
  • Reliable forecasting of dam inflow is required for efficient dam operation. In this study, deep learning, a data-driven method used in many fields of research, was applied to predict dam inflow. A Long Short-Term Memory network in a sequence-to-sequence configuration (LSTM-s2s), which performs well on time-series prediction, was applied to forecast the inflow of the Soyang River dam. Statistical evaluation metrics, including the correlation coefficient (CC), Nash-Sutcliffe efficiency (NSE), percent bias (PBIAS), and error in peak value (PE), were used to evaluate predictive performance. The results show that the LSTM-s2s model predicted dam inflow with high accuracy and also performed well on event-based runoff prediction, suggesting that deep learning based approaches can support efficient dam operation for water resource management in both wet and dry seasons.
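A minimal PyTorch sketch of the sequence-to-sequence layout: an encoder LSTM reads a window of past observations, and a decoder LSTM unrolls the forecast horizon step by step. The window length, feature count, and horizon are illustrative assumptions, not those of the Soyang River dam study.

```python
import torch
import torch.nn as nn

class LSTMSeq2Seq(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 64, horizon: int = 7):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, past):                       # past: (batch, window, features)
        _, state = self.encoder(past)              # encode the observed window
        step = torch.zeros(past.size(0), 1, 1)     # decoder start token
        outputs = []
        for _ in range(self.horizon):              # unroll the forecast horizon
            out, state = self.decoder(step, state)
            step = self.head(out)                  # feed each prediction back in
            outputs.append(step)
        return torch.cat(outputs, dim=1)           # (batch, horizon, 1) inflow forecast

model = LSTMSeq2Seq()
inflow = model(torch.rand(8, 30, 4))               # 30 past days, 4 input variables
```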