• Title/Summary/Keyword: Deep Learning Dataset

Search results: 776

Semantic Segmentation of Clouds Using Multi-Branch Neural Architecture Search (멀티 브랜치 네트워크 구조 탐색을 사용한 구름 영역 분할)

  • Chi Yoon Jeong; Kyeong Deok Moon; Mooseop Kim
    • Korean Journal of Remote Sensing, v.39 no.2, pp.143-156, 2023
  • To analyze the contents of satellite imagery precisely and reliably, it is essential to recognize clouds, which obstruct the gathering of useful information. Deep learning has recently yielded satisfactory results in various tasks, so many studies have used deep neural networks to improve cloud detection performance. However, existing methods are limited because they adopt network models designed for general semantic image segmentation without modification. To tackle this problem, we introduce multi-branch neural architecture search to find an optimal network structure for cloud detection. The proposed method also adopts the soft intersection over union (IoU) as the loss function, to reduce the disagreement between the loss function and the evaluation metric, and uses various data augmentation methods. Experiments were conducted on a cloud detection dataset built from Arirang-3/3A satellite imagery. The results show that the network whose architecture was searched on the cloud dataset achieves an IoU 4% higher than an existing model whose architecture was searched on urban street scenes. Among the loss functions compared, the soft IoU gave the best cloud detection performance. Compared with state-of-the-art (SOTA) semantic segmentation models, the proposed method also achieved better mean IoU and overall accuracy.
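
As an illustration of the soft IoU loss described above, here is a minimal PyTorch sketch for binary cloud masks; the function name and smoothing constant are illustrative assumptions, not taken from the paper.

```python
import torch

def soft_iou_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Differentiable (soft) IoU loss for binary segmentation.

    logits: raw network outputs, shape (N, 1, H, W)
    target: ground-truth cloud mask in {0, 1}, same shape
    """
    prob = torch.sigmoid(logits)                                  # soft predictions in [0, 1]
    inter = (prob * target).sum(dim=(1, 2, 3))                    # soft intersection per image
    union = (prob + target - prob * target).sum(dim=(1, 2, 3))    # soft union per image
    return 1.0 - ((inter + eps) / (union + eps)).mean()           # minimize 1 - IoU
```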

DeNERT: Named Entity Recognition Model using DQN and BERT

  • Yang, Sung-Min; Jeong, Ok-Ran
    • Journal of the Korea Society of Computer and Information, v.25 no.4, pp.29-35, 2020
  • In this paper, we propose DeNERT, a new named entity recognition model. The field of natural language processing has recently been studied actively using language representation models pre-trained on large corpora. In particular, named entity recognition, one subfield of natural language processing, relies on supervised learning, which requires a large amount of training data and computation. Reinforcement learning is a method that learns through trial-and-error experience without initial data; it is closer to the human learning process than other machine learning methodologies but has rarely been applied to natural language processing, being used mostly in simulation environments such as Atari games and AlphaGo. BERT is a general-purpose language model developed by Google that is pre-trained on a large corpus with substantial computation; it currently shows high performance across natural language processing research and high accuracy on many downstream tasks. We propose DeNERT, a named entity recognition model that combines two deep learning models, DQN and BERT. The proposed model is trained by building a reinforcement learning environment on top of the language representations provided by the general-purpose language model. The resulting DeNERT model achieves faster inference and higher performance with a small amount of training data. We validate its named entity recognition performance through experiments.
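
The abstract does not specify how DQN and BERT are coupled. Purely as a hedged illustration of the ingredients, the sketch below obtains BERT token representations with the Hugging Face transformers library and maps them to per-token Q-values over a hypothetical tag set; the Q-head and tag list are assumptions, not the authors' design.

```python
import torch
from transformers import BertModel, BertTokenizerFast

TAGS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"]   # hypothetical action/tag set

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
q_head = torch.nn.Linear(bert.config.hidden_size, len(TAGS))   # per-token Q-values

inputs = tokenizer("Samsung hired John Smith.", return_tensors="pt")
with torch.no_grad():
    hidden = bert(**inputs).last_hidden_state       # (1, seq_len, 768) token representations
q_values = q_head(hidden)                           # (1, seq_len, num_tags)
greedy_tags = [TAGS[i] for i in q_values.argmax(-1)[0].tolist()]   # greedy tag per token
```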

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu; Alikhanov, Jumabek; Fang, Yang; Ko, Seunghyun; Jo, Geun Sik
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.205-225, 2018
  • A Convolutional Neural Network (ConvNet) is one class of powerful deep neural networks that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not widely used in industry or academia because of the shortage of large-scale datasets and the low computational power available. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a convolutional neural network, which revived interest in neural networks. The success of convolutional neural networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and laborious to gather a large-scale dataset to train a ConvNet, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. Both obstacles can be addressed by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes feed-forward activations of an image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, the high dimensionality of features extracted directly from multiple ConvNet layers remains a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of an image, which suggests that a better representation can be obtained by finding the optimal combination of multiple layers. Based on this observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activations of these three layers are concatenated to obtain a multiple-layer representation, which carries more information about the image; the concatenated representation has 9192 (4096 + 4096 + 1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single-layer representations, using PCA for feature selection and dimension reduction. The experiments demonstrate the importance of feature selection for multiple-layer representations. The proposed approach achieved 75.6% accuracy versus 73.9% for the FC7 layer on Caltech-256, 73.1% versus 69.2% for the FC8 layer on VOC07, and 52.2% versus 48.7% for the FC7 layer on SUN397. Compared with existing work, the proposed approach improved accuracy by 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively.
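
The pipeline described above (pre-trained AlexNet as a fixed feature extractor, concatenation of the three fully connected layers into a 9192-dimensional vector, then PCA) can be sketched as follows, assuming a recent torchvision AlexNet layout and scikit-learn; the PCA size and the random batch are placeholders.

```python
import torch
from torchvision import models
from sklearn.decomposition import PCA

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

def multi_layer_features(x: torch.Tensor) -> torch.Tensor:
    """Concatenate FC6, FC7 and FC8 activations (4096 + 4096 + 1000 = 9192 dims)."""
    with torch.no_grad():
        flat = torch.flatten(alexnet.avgpool(alexnet.features(x)), 1)
        fc6 = alexnet.classifier[2](alexnet.classifier[1](alexnet.classifier[0](flat)))
        fc7 = alexnet.classifier[5](alexnet.classifier[4](alexnet.classifier[3](fc6)))
        fc8 = alexnet.classifier[6](fc7)
    return torch.cat([fc6, fc7, fc8], dim=1)          # (N, 9192)

images = torch.randn(8, 3, 224, 224)                  # stand-in for a preprocessed image batch
features = multi_layer_features(images).numpy()
reduced = PCA(n_components=4).fit_transform(features) # salient components fed to the classifier
```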

Study on Image Use for Plant Disease Classification (작물의 병충해 분류를 위한 이미지 활용 방법 연구)

  • Jeong, Seong-Ho; Han, Jeong-Eun; Jeong, Seong-Kyun; Bong, Jae-Hwan
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.17 no.2, pp.343-350, 2022
  • It is worth verifying the effectiveness of integrating data with different characteristics. This study investigated whether such data integration affects the accuracy of a deep neural network (DNN), and which integration method yields the best improvement. Two public datasets were used: one collected on an actual farm in India, and the other collected in a laboratory environment in Korea. Leaf images were selected from the two datasets to form five classes, one normal class and four types of plant disease. The DNN used a pre-trained VGG16 as a feature extractor and a multi-layer perceptron as a classifier. The data were integrated in three different ways for training, and the DNN was trained in a supervised manner on each integrated set. The trained DNN was evaluated on a test dataset taken on the actual farm. The best test accuracy was obtained when the DNN was first trained on the laboratory images and then trained on the farm images. The results show that combining plant images taken in different environments helps improve DNN performance, and that using the images from the two environments in separate training stages is more effective than mixing them.
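
A minimal sketch of the described setup, a frozen pre-trained VGG16 feature extractor with a multi-layer perceptron head, trained in two stages (laboratory images first, then farm images); the hidden size, learning rate, and data loaders are placeholders, not the paper's settings.

```python
import torch
from torch import nn
from torchvision import models

NUM_CLASSES = 5   # one normal class plus four plant-disease classes

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():
    p.requires_grad = False                            # keep VGG16 as a fixed feature extractor
vgg.classifier = nn.Sequential(                        # multi-layer perceptron classifier
    nn.Linear(512 * 7 * 7, 256), nn.ReLU(), nn.Linear(256, NUM_CLASSES)
)

def train_stage(model: nn.Module, loader, epochs: int = 1) -> None:
    """One supervised training pass over a DataLoader of (image, label) batches."""
    opt = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Stage 1 on laboratory images, stage 2 on farm images (placeholder DataLoaders):
# train_stage(vgg, lab_loader); train_stage(vgg, farm_loader)
```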

Design of Deep Learning-based Tourism Recommendation System Based on Perceived Value and Behavior in Intelligent Cloud Environment (지능형 클라우드 환경에서 지각된 가치 및 행동의도를 적용한 딥러닝 기반의 관광추천시스템 설계)

  • Moon, Seok-Jae; Yoo, Kyoung-Mi
    • Journal of the Korean Applied Science and Technology, v.37 no.3, pp.473-483, 2020
  • This paper proposes a tourism recommendation system for an intelligent cloud environment that uses information on tourist behavior combined with perceived value. The proposed system feeds tourist information, together with empirical analysis of how tourists' perceived values are reflected in their behavioral intentions, into a recommendation system built with wide-and-deep learning. Various collectable tourist data were gathered and analyzed, and the associations among tourism information, perceived value, and behavior were mapped onto tourism platforms in various fields to provide empirical information. The wide-and-deep learning approach achieves both memorization and generalization in one model by jointly training a linear model component and a neural network component, and a pipelined operation method is presented. When the wide-and-deep model was applied, the app subscription rate on the visit page of a tourism-related app store increased by 3.9% compared with a control group; against another 1% group served a model with the same variables but only the deep (neural network) side, the subscription rate increased by a further 1%. In addition, when measuring the area under the receiver operating characteristic curve (AUC) on the dataset, the offline AUC of the wide-and-deep model was only slightly higher, while its influence on online traffic was larger.
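
The wide-and-deep idea combines a linear ("wide", memorization) component and a neural ("deep", generalization) component trained jointly on the same target. A minimal PyTorch sketch follows; the feature dimensions and layer sizes are illustrative, not the authors' tourism features.

```python
import torch
from torch import nn

class WideAndDeep(nn.Module):
    """Jointly trained wide (linear) and deep (MLP) parts with one logistic output."""
    def __init__(self, wide_dim: int, deep_dim: int):
        super().__init__()
        self.wide = nn.Linear(wide_dim, 1)              # memorization over sparse cross features
        self.deep = nn.Sequential(                      # generalization over dense features
            nn.Linear(deep_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x_wide: torch.Tensor, x_deep: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.wide(x_wide) + self.deep(x_deep))

model = WideAndDeep(wide_dim=100, deep_dim=20)          # illustrative feature sizes
score = model(torch.randn(4, 100), torch.randn(4, 20))  # predicted subscription probability
```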

A Feasibility Study on Application of a Deep Convolutional Neural Network for Automatic Rock Type Classification (자동 암종 분류를 위한 딥러닝 영상처리 기법의 적용성 검토 연구)

  • Pham, Chuyen; Shin, Hyu-Soung
    • Tunnel and Underground Space, v.30 no.5, pp.462-472, 2020
  • Rock classification is a fundamental discipline in exploring the geological and geotechnical features of a site, but it is not an easy task because rock shape and color vary widely with origin, geological history, and other factors. With the great success of convolutional neural networks (CNNs) in many image-based classification tasks, there has been increasing interest in using CNNs to classify geological materials. In this study, the feasibility of a deep CNN is investigated for automatically and accurately identifying rock types, focusing on conditions where shapes and colors vary even within the same rock type. The approach could be further developed into a mobile application for assisting geologists in classifying rocks during fieldwork. The CNN model used in this study is based on a deep residual network (ResNet), an ultra-deep CNN used in object detection and classification. The proposed CNN was trained on 10 typical rock types and reached an overall accuracy of 84% on the test set. The result demonstrates that the proposed approach can classify rock types from images and handles a highly diverse rock image dataset as input.
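
Adapting a pre-trained residual network to the 10 rock types amounts to replacing the final fully connected layer; a brief sketch with torchvision's ResNet-50 is shown below, although the paper does not state which ResNet depth was used.

```python
import torch
from torch import nn
from torchvision import models

NUM_ROCK_TYPES = 10

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)   # ImageNet weights
resnet.fc = nn.Linear(resnet.fc.in_features, NUM_ROCK_TYPES)              # new classification head

logits = resnet(torch.randn(2, 3, 224, 224))     # class scores for two (dummy) rock images
predicted_rock_type = logits.argmax(dim=1)
```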

Automated Vehicle Research by Recognizing Maneuvering Modes using LSTM Model (LSTM 모델 기반 주행 모드 인식을 통한 자율 주행에 관한 연구)

  • Kim, Eunhui; Oh, Alice
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.16 no.4, pp.153-163, 2017
  • This research builds on previous findings that personally preferred safe distance, rotation angle, and speed differ between drivers. We therefore use machine learning models for recognizing maneuvering modes, trained per individual or per group of similar driving patterns, and evaluate automated driving according to these maneuvering modes. Using driving knowledge, we subdivided driving into 8 longitudinal modes and 4 lateral modes, and by combining them we defined 21 maneuvering modes. We train RNN, LSTM, and Bi-LSTM models, which are supervised deep learning models, on the per-timestamp labeled dataset organized by drivers' trips, and evaluate the maneuvering modes of automated driving on the test set. The evaluation data come from naturalistic driving trips of about 3,000 participants collected by VTTI in the USA over three years; we use 1,500 trips from 22 people, with training, validation, and test sets split 80%, 10%, and 10%, respectively. For recognizing the 8 longitudinal maneuvering modes, the RNN achieves better accuracy than the LSTM and Bi-LSTM. However, for recognizing all 21 longitudinal and lateral maneuvering modes, the Bi-LSTM improves accuracy over the RNN and LSTM by 1.54% and 0.47%, respectively.
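
A minimal sketch of a bidirectional LSTM that assigns one of the 21 maneuvering modes to every time stamp of a trip; the number of input driving features, hidden size, and sequence length are illustrative assumptions.

```python
import torch
from torch import nn

NUM_MODES = 21      # combined longitudinal/lateral maneuvering modes
NUM_FEATURES = 6    # e.g. speed, acceleration, heading (illustrative)

class ManeuverTagger(nn.Module):
    """Bi-LSTM that predicts a maneuvering mode for each time stamp."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(NUM_FEATURES, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, NUM_MODES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)        # (batch, time, 2 * hidden)
        return self.head(out)        # (batch, time, NUM_MODES) per-step logits

model = ManeuverTagger()
logits = model(torch.randn(8, 100, NUM_FEATURES))   # 8 trips, 100 time stamps each
modes = logits.argmax(dim=-1)                       # predicted mode at every time stamp
```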

Character Detection and Recognition of Steel Materials in Construction Drawings using YOLOv4-based Small Object Detection Techniques (YOLOv4 기반의 소형 물체탐지기법을 이용한 건설도면 내 철강 자재 문자 검출 및 인식기법)

  • Sim, Ji-Woo; Woo, Hee-Jo; Kim, Yoonhwan; Kim, Eung-Tae
    • Journal of Broadcast Engineering, v.27 no.3, pp.391-401, 2022
  • As deep learning-based object detection and recognition research has advanced, its scope of application to industry and everyday life is expanding, but deep learning-based systems for the construction industry are still much less studied. Material quantities in construction are still calculated manually; the work takes a long time and accurate take-off is difficult, so transactions based on incorrect volume calculations still occur. A fast and accurate automatic drawing recognition system is required to solve this problem. We therefore propose an AI-based automatic drawing recognition and quantity take-off system that detects and recognizes steel materials in construction drawings. To detect steel materials accurately, we propose data augmentation techniques and a spatial attention module that improve small-object detection performance on top of YOLOv4. Text in the detected steel-material regions is then recognized, and the number of steel materials is aggregated from the predicted characters. Experimental results show that the proposed method increases accuracy and precision by 1.8% and 16%, respectively, compared with the conventional YOLOv4. The proposed method achieved a precision of 0.938, a recall of 1.0, an average precision AP0.5 of 99.4%, and AP0.5:0.95 of 67%. Character recognition accuracy reached 99.9% by building and training on a dataset containing the fonts used in construction drawings, compared with 75.6% using the existing dataset. The average time per image was 0.013 seconds for detection, 0.65 seconds for character recognition, and 0.16 seconds for aggregation, for a total of about 0.84 seconds.
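
The paper's exact spatial attention design is not given in the abstract; as an assumption, a common CBAM-style spatial attention block, which re-weights each location of a backbone feature map, is sketched below.

```python
import torch
from torch import nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention over a convolutional feature map."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)         # (N, 1, H, W) channel-wise average
        max_map = x.amax(dim=1, keepdim=True)         # (N, 1, H, W) channel-wise maximum
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                               # emphasize informative (small-object) regions

features = torch.randn(1, 256, 52, 52)                # e.g. a YOLO backbone feature map
refined = SpatialAttention()(features)
```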

Assessment of the Object Detection Ability of Interproximal Caries on Primary Teeth in Periapical Radiographs Using Deep Learning Algorithms (유치의 치근단 방사선 사진에서 딥 러닝 알고리즘을 이용한 모델의 인접면 우식증 객체 탐지 능력의 평가)

  • Hongju Jeon; Seonmi Kim; Namki Choi
    • Journal of the Korean Academy of Pediatric Dentistry, v.50 no.3, pp.263-276, 2023
  • The purpose of this study was to evaluate the performance of a You Only Look Once (YOLO) model for object detection of proximal caries in periapical radiographs of children. A total of 2,016 periapical radiographs of primary dentition were selected from the M6 database as learning material, of which 1,143 were labeled as proximal caries by an experienced dentist using an annotation tool. After converting the annotations into a training dataset, YOLO was trained on it using a single convolutional neural network (CNN) model. Accuracy, recall, specificity, precision, negative predictive value (NPV), F1-score, the precision-recall curve, and average precision (AP, the area under the precision-recall curve) were calculated to evaluate the object detection performance on the test set of 187 images. The results showed that the CNN-based object detection model performed well in detecting proximal caries, with a diagnostic accuracy of 0.95, a recall of 0.94, a specificity of 0.97, a precision of 0.82, an NPV of 0.96, and an F1-score of 0.81. The AP was 0.83. This model could be a valuable tool for dentists in detecting carious lesions in periapical radiographs.
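
All of the reported figures follow from the detection confusion matrix; the short sketch below shows how accuracy, recall (sensitivity), specificity, precision, NPV, and F1-score are derived from true/false positive and negative counts (the counts used here are placeholders, not the study's data).

```python
def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Binary-classification metrics computed from a confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                   # sensitivity
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)                      # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "recall": recall, "specificity": specificity,
            "precision": precision, "NPV": npv, "F1": f1}

print(detection_metrics(tp=90, fp=20, fn=6, tn=300))   # placeholder counts
```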

Class Classification and Validation of a Musculoskeletal Risk Factor Dataset for Manufacturing Workers (제조업 노동자 근골격계 부담요인 데이터셋 클래스 분류와 유효성 검증)

  • Young-Jin Kang;;;Jeong, Seok Chan
    • The Journal of Bigdata, v.8 no.1, pp.49-59, 2023
  • The safety and health standards for manufacturing cover many items, but by the criteria for sickness and accident victims they can be divided into work-related diseases and musculoskeletal disorders. Musculoskeletal disorders occur frequently in manufacturing and can reduce labor productivity and weaken manufacturing competitiveness. In this paper, to detect the musculoskeletal risk factors of manufacturing workers, we defined the musculoskeletal workload factor analysis, harmful working postures, and keypoint matching, and constructed a dataset for artificial intelligence (AI) training. To check the effectiveness of the proposed dataset, AI algorithms such as YOLO, Lite-HRNet, and EfficientNet were used for training and validation. Our experimental results show a human detection accuracy of 99%, a keypoint matching accuracy for detected persons of 88% at AP@0.5, and working-posture evaluation accuracies, obtained by integrating the inferred keypoint positions, of 72.2% for LEGS, 85.7% for NECK, 81.9% for TRUNK, 79.8% for UPPERARM, and 92.7% for LOWERARM. These results suggest the need for further research on deep learning-based prevention of musculoskeletal disorders.
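
Working-posture evaluation from matched keypoints is usually reduced to joint-angle rules; the hedged sketch below computes a trunk angle from three keypoints and flags a high-load posture, with keypoint coordinates and the threshold chosen purely for illustration rather than taken from the dataset's definition.

```python
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle (degrees) at keypoint b formed by keypoints a-b-c, each given as (x, y)."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Illustrative rule: an upright trunk gives roughly 180 degrees at the hip; flag strong forward bending.
shoulder, hip, knee = (0.50, 0.30), (0.52, 0.55), (0.70, 0.80)
trunk_angle = joint_angle(shoulder, hip, knee)
high_load_trunk = trunk_angle < 120.0        # placeholder threshold for a harmful posture
```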