• Title/Summary/Keyword: deep machine learning


Dropout Genetic Algorithm Analysis for Deep Learning Generalization Error Minimization

  • Park, Jae-Gyun; Choi, Eun-Soo; Kang, Min-Soo; Jung, Yong-Gyu
    • International Journal of Advanced Culture Technology, v.5 no.2, pp.74-81, 2017
  • Recently, many companies use systems based on artificial intelligence. The accuracy of artificial intelligence depends on the amount of training data and the appropriateness of the algorithm. However, it is not easy to obtain training data with a large number of entities, and small datasets suffer large generalization errors due to overfitting. To minimize this generalization error, this study proposes DGA (Dropout Genetic Algorithm), which applies a machine-learning-based genetic algorithm to deep-learning-based dropout and can achieve relatively high accuracy even with a small dataset. The idea is to determine the active state of the nodes: a new fitness function is defined using the gradient of the loss function. The proposed DGA compensates for the stochastic inconsistency of dropout, and it also addresses the genetic algorithm's problems of fitness-function complexity and limited model expression range. In experiments on MNIST data, the proposed algorithm achieved 75.3% accuracy, compared with 41.4% when using dropout alone, showing that DGA outperforms dropout by itself.
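
As a rough sketch of the idea, the genetic algorithm below evolves binary dropout masks (one bit per node). The fitness here is simply the negative of a caller-supplied loss evaluated on a mask, whereas the paper defines fitness via the gradient of the loss function; the population size, crossover and mutation rates, and the toy loss are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_fitness(mask, loss_fn):
    # Hypothetical fitness: negative loss under this dropout mask.
    # The paper's fitness is defined via the gradient of the loss,
    # which is not reproduced here.
    return -loss_fn(mask)

def dga(loss_fn, n_nodes=64, pop_size=20, generations=50,
        crossover_p=0.7, mutation_p=0.02):
    # Population of binary masks: 1 = node active, 0 = dropped.
    pop = rng.integers(0, 2, size=(pop_size, n_nodes))
    for _ in range(generations):
        fitness = np.array([evaluate_fitness(m, loss_fn) for m in pop])
        # Tournament selection: keep the fitter of two random masks.
        parents = []
        for _ in range(pop_size):
            i, j = rng.integers(0, pop_size, 2)
            parents.append(pop[i] if fitness[i] >= fitness[j] else pop[j])
        parents = np.array(parents)
        # One-point crossover between adjacent parent pairs.
        children = parents.copy()
        for k in range(0, pop_size - 1, 2):
            if rng.random() < crossover_p:
                cut = rng.integers(1, n_nodes)
                children[k, cut:] = parents[k + 1, cut:]
                children[k + 1, cut:] = parents[k, cut:]
        # Bit-flip mutation.
        flip = rng.random(children.shape) < mutation_p
        children[flip] ^= 1
        pop = children
    fitness = np.array([evaluate_fitness(m, loss_fn) for m in pop])
    return pop[fitness.argmax()]

# Toy loss: prefer masks that keep about half of the nodes active.
best_mask = dga(lambda m: (m.mean() - 0.5) ** 2)
```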

Study on the Surface Defect Classification of Al 6061 Extruded Material By Using CNN-Based Algorithms

  • Kim, S.B.; Lee, K.A.
    • Transactions of Materials Processing, v.31 no.4, pp.229-239, 2022
  • A convolutional neural network (CNN) is a class of deep learning algorithm that can be used for image analysis. In particular, it has excellent performance in finding patterns in images, so CNNs are commonly applied to recognizing, learning, and classifying images. In this study, the surface defect classification performance of CNN-based algorithms on Al 6061 extruded material was compared and evaluated. First, data collection criteria were suggested and a total of 2,024 datasets were prepared, randomly split into 1,417 training data and 607 evaluation data. The size and quality of the training dataset were then improved using data augmentation techniques to increase deep learning performance. The CNN-based algorithms used in this study were VGGNet-16, VGGNet-19, ResNet-50, and DenseNet-121. Defect classification performance was evaluated by comparing accuracy, loss, and learning speed on verification data. The DenseNet-121 algorithm performed better than the other algorithms, with an accuracy of 99.13% and a loss value of 0.037. This was due to the structural characteristics of the DenseNet model: information loss is reduced because each layer acquires information from all previous layers during image identification. Based on these results, the possibility of applying CNN-based models in machine vision for the surface defect classification of Al extruded materials was also discussed.
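
A minimal sketch of the kind of pipeline the paper describes, assuming PyTorch/torchvision: a pretrained DenseNet-121 is fine-tuned on augmented defect images. The folder path, batch size, learning rate, and the specific augmentations are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Augmentation to enlarge and vary the training set; the paper's exact
# augmentations are not specified here, so these are placeholders.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])

# Hypothetical folder layout: one sub-directory per defect class.
train_ds = datasets.ImageFolder("al6061_defects/train", transform=train_tf)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# DenseNet-121 pretrained on ImageNet, classifier head replaced to
# match the number of defect classes in the dataset.
model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features,
                             len(train_ds.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:    # one fine-tuning epoch
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```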

Deep learning based symbol recognition for the visually impaired

  • Park, Sangheon; Jeon, Taejae; Kim, Sanghyuk; Lee, Sangyoun; Kim, Juwan
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.9 no.3, pp.249-256, 2016
  • Recently, a number of techniques to support independent walking for the visually impaired and the mobility-impaired have been studied. Devices for independent walking include smart canes and smart glasses that use computer vision, ultrasonic sensors, and acceleration sensors. Typical techniques find objects, detect obstacles and walkable areas, and recognize symbol information to convey environmental information. In this paper, we studied a recognition algorithm for selected symbols required by the visually impaired, using deep learning. We used the CNN (Convolutional Neural Network) technique from the deep learning image processing field, and analyzed the results by experimentally comparing various deep learning architectures.
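
A minimal sketch of a small CNN classifier of the kind such a study might compare, assuming PyTorch; the layer sizes, input resolution, and number of symbol classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SymbolCNN(nn.Module):
    """Tiny CNN for symbol classification (illustrative architecture)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 64x64 input is halved twice by pooling -> 32 channels of 16x16.
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SymbolCNN(n_classes=10)
logits = model(torch.randn(1, 3, 64, 64))   # one 64x64 RGB image
```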

AI-Based Particle Position Prediction Near Southwestern Area of Jeju Island

  • Ha, Seung Yun; Kim, Hee Jun; Kwak, Gyeong Il; Kim, Young-Taeg; Yoon, Han-Sam
    • Journal of Korean Society of Coastal and Ocean Engineers, v.34 no.3, pp.72-81, 2022
  • Positions of five drifting buoys deployed in August 2020 near the southwestern area of Jeju Island, together with numerically predicted velocities, were used to develop five artificial-intelligence-based models (AI models) for the prediction of particle tracks. The five AI models consisted of three machine learning models (Extra Trees, LightGBM, and Support Vector Machine) and two deep learning models (DNN and RBFN). To evaluate the prediction accuracy of the six models, the predicted positions from the five AI models and one numerical model were compared with the observed positions from the five drifting buoys. Three skill metrics (MAE, RMSE, and NCLS) were calculated for each of the five buoys, along with their averaged values. The DNN model showed the best prediction accuracy in MAE, RMSE, and NCLS.
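
A hedged sketch of how the track-comparison skill metrics could be computed for one buoy; the NCLS here follows the common cumulative Lagrangian separation formulation (after Liu and Weisberg) in a simplified form, and the synthetic trajectories are placeholders, not the study's data.

```python
import numpy as np

def mae(obs, pred):
    # Mean separation distance between observed and predicted positions.
    return np.mean(np.linalg.norm(obs - pred, axis=1))

def rmse(obs, pred):
    return np.sqrt(np.mean(np.linalg.norm(obs - pred, axis=1) ** 2))

def ncls_skill(obs, pred):
    # Normalized Cumulative Lagrangian Separation, simplified:
    # cumulative separation distance normalized by the cumulative
    # length of the observed trajectory, averaged over time.
    sep = np.cumsum(np.linalg.norm(obs - pred, axis=1)[1:])
    length = np.cumsum(np.linalg.norm(np.diff(obs, axis=0), axis=1))
    index = np.mean(sep / length)
    return max(0.0, 1.0 - index)

# obs, pred: (T, 2) arrays of (x, y) positions for one drifting buoy.
obs = np.cumsum(np.random.default_rng(1).normal(size=(50, 2)), axis=0)
pred = obs + np.random.default_rng(2).normal(scale=0.3, size=(50, 2))
print(mae(obs, pred), rmse(obs, pred), ncls_skill(obs, pred))
```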

Performance Improvement of Deep Clustering Networks for Multi Dimensional Data

  • Lee, Hyunjin
    • Journal of Korea Multimedia Society, v.21 no.8, pp.952-959, 2018
  • Clustering is one of the most fundamental algorithms in machine learning. The performance of clustering is affected by the distribution of the data, and performance degrades as the data grow in volume or dimensionality. For this reason, we use a stacked autoencoder, one of the deep learning algorithms, to reduce the dimensionality of the data and generate a feature vector that best represents the input. We use the well-known k-means algorithm for clustering. Since the dimension-reduced feature vectors are still multidimensional, we use cosine similarity as well as Euclidean distance when calculating the similarity between a cluster center and a data point as vectors, to increase performance. A deep clustering network combining a stacked autoencoder and k-means retrains the network when the k-means result changes. When retraining, the loss function of the stacked autoencoder and the loss function of k-means are combined to improve the performance and stability of the network. Experiments on benchmark image and document datasets empirically validated the power of the proposed algorithm.
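
A minimal sketch of the combined retraining loss, assuming PyTorch: the reconstruction loss of the autoencoder is added to the distance of each embedding from its assigned k-means center. The layer dimensions and the weighting factor lam are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Small (stacked) autoencoder; dimensions are illustrative."""
    def __init__(self, d_in=784, d_hidden=256, d_code=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_code))
        self.dec = nn.Sequential(nn.Linear(d_code, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_in))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def combined_loss(x, model, centers, assignments, lam=0.1):
    # Reconstruction loss plus squared distance of each embedding to
    # its assigned cluster center, weighted by lam (an assumed factor).
    z, x_hat = model(x)
    recon = nn.functional.mse_loss(x_hat, x)
    cluster = ((z - centers[assignments]) ** 2).sum(dim=1).mean()
    return recon + lam * cluster

# Usage sketch: after each k-means pass, recompute `assignments` and
# `centers` from the current embeddings, then take gradient steps on
# combined_loss to retrain the autoencoder.
model = AutoEncoder()
x = torch.randn(32, 784)                    # e.g., flattened images
centers = torch.randn(10, 10)               # k = 10 cluster centers
assignments = torch.randint(0, 10, (32,))   # from the last k-means pass
loss = combined_loss(x, model, centers, assignments)
```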

Classification of Natural and Artificial Forests from KOMPSAT-3/3A/5 Images Using Deep Neural Network

  • Baek, Won-Kyung; Lee, Yong-Suk; Park, Sung-Hwan; Jung, Hyung-Sup
    • Korean Journal of Remote Sensing, v.37 no.6_3, pp.1965-1974, 2021
  • Satellite remote sensing can be actively used for forest monitoring. In particular, it is meaningful to utilize the Korea Multi-Purpose Satellites, independently operated by Korea, for monitoring Korean forests. Recently, several studies have exploited meaningful information from satellite remote sensing data via machine learning approaches. Forest information produced through machine learning can support the efficiency of traditional forest monitoring methods, such as in-situ surveys or qualitative analysis of aerial images. The performance of machine learning approaches depends greatly on the characteristics of the study area and data, so it is very important to identify the best model among the various machine learning models. In this study, the performance of a deep neural network in classifying artificial and natural forests was analyzed in Samcheok, Korea. As a result, the pixel accuracy was about 0.857, and the F1 scores for natural and artificial forests were about 0.917 and 0.433, respectively. The F1 score for artificial forest was low; however, the classification performance for artificial and natural forests improved by about 0.06 and 0.10 in F1 score, respectively, compared with the results from a single-layer sigmoid artificial neural network. Based on these results, it is necessary to find a more appropriate model for forest type classification by applying additional models based on convolutional neural networks.
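
A small sketch of the reported evaluation protocol using scikit-learn: pixel accuracy plus per-class F1 for the two forest types. The labels and predictions below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Synthetic per-pixel labels: 0 = natural forest, 1 = artificial forest.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 10000)
# Simulated classifier output: correct on ~90% of pixels.
y_pred = np.where(rng.random(10000) < 0.9, y_true, 1 - y_true)

print("pixel accuracy:", accuracy_score(y_true, y_pred))
# average=None returns one F1 score per class (natural, artificial).
print("F1 per class:", f1_score(y_true, y_pred, average=None))
```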

Malicious Packet Detection Technology Using Machine Learning and Deep Learning

  • Byounguk An; JongChan Lee; JeSung Chi; Wonhyung Park
    • Convergence Security Journal, v.21 no.4, pp.109-115, 2021
  • Currently, with the development of 5G and IoT technology, everyday objects are connected through networks. However, attempts to use networked computers for malicious purposes are increasing, and attacks using malicious code that infringes the confidentiality and integrity of user information are becoming more intelligent. As a countermeasure, research is being conducted on methods of detecting malicious packets using security control systems and supervised-learning AI technology. Cyber security control systems are operated inefficiently in terms of manpower and cost, and in the era of the COVID-19 pandemic, increased remote work has made immediate response difficult. In addition, malicious code detection using existing supervised-learning AI technology does not detect variant malicious code, and its detection rate varies with the quantity and quality of the data. Therefore, this study proposes a malicious packet detection technology that converges various machine learning and deep learning models to increase detection accuracy, reduce the false-positive and false-negative rates, and efficiently detect new types of malicious packets at intrusion.
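
A hedged sketch of one way to converge several models, assuming scikit-learn: a soft-voting ensemble over heterogeneous classifiers. The specific models, features, and the synthetic data are illustrative assumptions, not the paper's setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for per-packet feature vectors with
# benign/malicious labels; the real features are not specified here.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Soft voting: average the predicted probabilities of a tree model,
# a linear model, and a small neural network.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                              random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X, y)
print("training accuracy:", ensemble.score(X, y))
```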

Performance Evaluation of a Machine Learning Model Based on Data Feature Using Network Data Normalization Technique

  • Lee, Wooho; Noh, BongNam; Jeong, Kimoon
    • Journal of the Korea Institute of Information Security & Cryptology, v.29 no.4, pp.785-794, 2019
  • Recently, deep learning, one of the fourth industrial revolution technologies, has been used in the security arena to identify the hidden meaning of network data that is difficult to detect and to predict attacks. Analysis of the properties and quality of the data sources is required before selecting the deep learning algorithm to be used for intrusion detection, because contamination of the training data affects the detection method. Therefore, the characteristics of the data should be identified and the relevant features selected. In this paper, the characteristics of malware were analyzed using a network dataset, and the effect of each feature on performance was analyzed when a deep learning model was applied. A traffic classification experiment comparing features according to network characteristics achieved 96.52% accuracy based on the selected features.
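
A minimal sketch of the normalize-then-select workflow the paper describes, assuming scikit-learn: features are standardized, scored, and a top-k subset is used for classification. The synthetic data and the k = 10 cut-off are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for network-flow features and malware labels.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=0)

# Normalization step: zero mean, unit variance per feature.
X = StandardScaler().fit_transform(X)

# Score each feature's contribution and keep the top k = 10 (assumed).
selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
X_sel = selector.transform(X)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_sel, y)
print("selected feature indices:", selector.get_support(indices=True))
```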

Prediction of high turbidity in rivers using LSTM algorithm

  • Park, Jungsu; Lee, Hyunho
    • Journal of Korean Society of Water and Wastewater, v.34 no.1, pp.35-43, 2020
  • Turbidity has various effects on the water quality and ecosystem of a river. High turbidity during floods increases the operating cost of a drinking water supply system, so the management of turbidity is essential for providing safe water to the public. There have been various efforts to estimate turbidity in river systems for proper management and early warning of high turbidity in the water supply process. Advanced data analysis technology using machine learning has been increasingly used in water quality management. Artificial neural networks (ANNs) were among the first algorithms applied, but overfitting to observed data and vanishing gradients in the backpropagation process limit their wide application in practice. In recent years, deep learning, which overcomes these limitations of ANNs, has been applied in water quality management. LSTM (Long Short-Term Memory) is a novel deep learning algorithm that is widely used in the analysis of time series data. In this study, LSTM is used to predict high turbidity (> 30 NTU) in a river from the relationship of turbidity to discharge, which enables early warning of high turbidity in a drinking water supply system. The model showed 0.98, 0.99, 0.98, and 0.99 for precision, recall, F1-score, and accuracy, respectively, for the prediction of high turbidity in a river with 2-hour frequency data. The sensitivity of the model to the observation interval of the data was also compared across periods of 2 hours, 8 hours, 1 day, and 2 days. The model shows higher precision with shorter observation intervals, which underscores the importance of collecting high-frequency data for better management of water resources in the future.
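
A minimal sketch of an LSTM classifier for flagging high turbidity from a window of past readings, assuming PyTorch; the window length, the two input features (e.g., discharge and turbidity), and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TurbidityLSTM(nn.Module):
    """LSTM that emits a logit for 'turbidity will exceed 30 NTU'."""
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # use the last time step

model = TurbidityLSTM()
x = torch.randn(8, 24, 2)                 # e.g., 24 steps of 2-hour data
prob_high = torch.sigmoid(model(x))       # probability of high turbidity
```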