• Title/Summary/Keyword: Deep Learning


A Study on the Air Pollution Monitoring Network Algorithm Using Deep Learning (심층신경망 모델을 이용한 대기오염망 자료확정 알고리즘 연구)

  • Lee, Seon-Woo;Yang, Ho-Jun;Lee, Mun-Hyung;Choi, Jung-Moo;Yun, Se-Hwan;Kwon, Jang-Woo;Park, Ji-Hoon;Jung, Dong-Hee;Shin, Hye-Jung
    • Journal of Convergence for Information Technology / v.11 no.11 / pp.57-65 / 2021
  • We propose a novel method to detect abnormal data exhibiting specific symptoms in an air pollution measurement system using deep learning. Existing methods generally detect abnormal data by classifying data that show unusual patterns relative to the existing time-series data, but these approaches have limitations in detecting specific symptoms. In this paper, we use the DeepLab V3+ model, which is mainly used for foreground segmentation of images, with its structure modified to handle one-dimensional data. Instead of images, the model receives time-series data from multiple sensors and can detect data showing specific symptoms. In addition, we improve the model's performance by reducing the complexity of noisy time-series data with piecewise aggregate approximation. The experimental results confirm that anomalous data can be detected successfully.
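
The abstract does not include an implementation, but piecewise aggregate approximation itself is a simple downsampling scheme: a series of length n is split into w equal segments and each segment is replaced by its mean. A minimal NumPy sketch follows; the segment count and the synthetic sensor series are illustrative assumptions, not values from the paper.

```python
import numpy as np

def paa(series: np.ndarray, n_segments: int) -> np.ndarray:
    """Piecewise aggregate approximation: mean of each of n_segments slices."""
    # np.array_split keeps the segments as equal as possible even when
    # len(series) is not divisible by n_segments.
    segments = np.array_split(series, n_segments)
    return np.array([seg.mean() for seg in segments])

# Hypothetical noisy sensor signal: 1,440 one-minute readings for one day.
t = np.linspace(0, 2 * np.pi, 1440)
signal = np.sin(t) + 0.3 * np.random.randn(1440)

reduced = paa(signal, n_segments=48)   # 48 half-hour averages
print(reduced.shape)                   # (48,)
```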

Hybrid All-Reduce Strategy with Layer Overlapping for Reducing Communication Overhead in Distributed Deep Learning (분산 딥러닝에서 통신 오버헤드를 줄이기 위해 레이어를 오버래핑하는 하이브리드 올-리듀스 기법)

  • Kim, Daehyun;Yeo, Sangho;Oh, Sangyoon
    • KIPS Transactions on Computer and Communication Systems / v.10 no.7 / pp.191-198 / 2021
  • Since training datasets have become large and models are getting deeper to achieve high accuracy, deep neural network training requires a great deal of computation and takes too long on a single node. Distributed deep learning has therefore been proposed to reduce training time by spreading the computation across multiple nodes. In this study, we propose a hybrid all-reduce strategy that considers the characteristics of each layer, together with a communication and computation overlapping technique, for the synchronization step of distributed deep learning. Because a convolution layer has fewer parameters than a fully-connected layer and is located in the upper (earlier) part of the network, only a short overlapping time is available, so butterfly all-reduce is used to synchronize the convolution layers, while the fully-connected layers are synchronized with ring all-reduce. Empirical experiments on PyTorch show that the proposed scheme reduces training time by up to 33% compared to the baseline PyTorch.
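
As a rough illustration of the overlap idea (not the paper's actual code), torch.distributed allows a gradient all-reduce to be issued asynchronously as soon as a layer's gradient is produced, so communication for later layers proceeds while earlier layers are still back-propagating. The process-group setup, the per-parameter hooks, and the single ReduceOp are assumptions for this sketch; the paper additionally uses a butterfly all-reduce for convolution layers, which torch.distributed does not expose directly.

```python
import torch
import torch.distributed as dist

class OverlappedAllReduce:
    """Sketch: launch an async all-reduce for each gradient as soon as the
    backward pass produces it, then wait and average after backward()."""

    def __init__(self, model: torch.nn.Module, world_size: int):
        self.world_size = world_size
        self.handles = []
        for p in model.parameters():
            if p.requires_grad:
                p.register_hook(self._make_hook())

    def _make_hook(self):
        def hook(grad):
            # Communication starts immediately; backward keeps running.
            work = dist.all_reduce(grad, op=dist.ReduceOp.SUM, async_op=True)
            self.handles.append((work, grad))
            return grad
        return hook

    def finish(self):
        for work, grad in self.handles:
            work.wait()
            grad.div_(self.world_size)   # sum -> average across workers
        self.handles.clear()

# Usage (assumes dist.init_process_group() was called elsewhere):
#   sync = OverlappedAllReduce(model, world_size=dist.get_world_size())
#   loss.backward()
#   sync.finish()
#   optimizer.step()
```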

Fire Detection using Deep Convolutional Neural Networks for Assisting People with Visual Impairments in an Emergency Situation (시각 장애인을 위한 영상 기반 심층 합성곱 신경망을 이용한 화재 감지기)

  • Kong, Borasy;Won, Insu;Kwon, Jangwoo
    • 재활복지 / v.21 no.3 / pp.129-146 / 2017
  • In an emergency such as a building fire, visually impaired and blind people are exposed to a greater level of danger than sighted people because they cannot become aware of it quickly. Current fire detection methods such as smoke detectors are slow and unreliable because they typically rely on chemical sensors to detect fire particles. Using a vision sensor instead, fire can be detected much faster, as we show in our experiments. Previous studies have applied various image processing and machine learning techniques to detect fire, but they usually do not work well because they require hand-crafted features that do not generalize to various scenarios. With the help of recent advances in deep learning, this research addresses the problem with a deep learning-based object detector that detects fire in images from a security camera. Deep learning-based approaches learn features automatically, so they usually generalize well to various scenes. To ensure maximum capability, we applied recent computer-vision technologies such as the YOLO detector to this task. Considering the trade-off between recall and complexity, we introduce two convolutional neural networks with slightly different model complexity to detect fire at different recall rates. Both models detect fire at 99% average precision, but one model has 76% recall at 30 FPS while the other has 61% recall at 50 FPS. We also compare the memory consumption of the two models and show their robustness by testing on various real-world scenarios.
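
For orientation, a vision-based fire alarm of this kind can be wired together with OpenCV's DNN module. The sketch below assumes a YOLOv3-style Darknet config and weights fine-tuned so that class 0 is "fire"; the file names, class index, and thresholds are hypothetical and are not artifacts from the paper.

```python
import cv2
import numpy as np

# Hypothetical model files for a fire-detection YOLO network.
net = cv2.dnn.readNetFromDarknet("yolo-fire.cfg", "yolo-fire.weights")
out_names = net.getUnconnectedOutLayersNames()

def detect_fire(frame: np.ndarray, conf_thresh: float = 0.5) -> bool:
    """Return True if any detection of class 0 ("fire") exceeds conf_thresh."""
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0,
                                 size=(416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    for output in net.forward(out_names):
        for det in output:               # det = [cx, cy, w, h, objectness, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = det[4] * scores[class_id]
            if class_id == 0 and confidence > conf_thresh:
                return True
    return False

cap = cv2.VideoCapture(0)                # e.g. a security camera stream
ok, frame = cap.read()
if ok and detect_fire(frame):
    print("Fire detected - raise alarm")
```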

Deep Learning based Estimation of Depth to Bearing Layer from In-situ Data (딥러닝 기반 국내 지반의 지지층 깊이 예측)

  • Jang, Young-Eun;Jung, Jaeho;Han, Jin-Tae;Yu, Yonggyun
    • Journal of the Korean Geotechnical Society / v.38 no.3 / pp.35-42 / 2022
  • The N-value from the Standard Penetration Test (SPT), one of the representative in-situ tests, is an important index that provides basic geological information and the depth of the bearing layer for the design of geotechnical structures. For reasons of time and cost, the test can only be carried out at a limited number of representative sampling points, yet soil layers exhibit considerable variability and uncertainty, so it is difficult to grasp the characteristics of an entire site from the limited test results. Spatial interpolation techniques such as Kriging and IDW (inverse distance weighting) have therefore been used to predict values at unknown points from existing data, and studies combining geotechnics and deep learning have recently been conducted to increase the accuracy of the interpolation. In this study, based on the SPT results of about 22,000 boreholes from ground surveys, a comparative study was conducted to predict the depth of the bearing layer using deep learning methods and IDW. The average error of the bearing-layer predictions was 3.01 m for IDW, and 3.22 m and 2.46 m for the fully connected network and PointNet, respectively; the standard deviations were 3.99 m, 3.95 m, and 3.54 m. As a result, the PointNet deep learning algorithm showed improved results compared to IDW and the other deep learning method.
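
For reference, inverse distance weighting, the baseline the deep learning models are compared against, estimates the value at an unknown point as a weighted average of known values with weights 1/d^p. A small NumPy sketch follows; the borehole coordinates, depths, and power parameter p=2 are illustrative assumptions, not data from the study.

```python
import numpy as np

def idw(known_xy: np.ndarray, known_z: np.ndarray,
        query_xy: np.ndarray, power: float = 2.0) -> np.ndarray:
    """Inverse distance weighted interpolation of known_z at query_xy."""
    # Pairwise distances between query points and known boreholes.
    d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero at exact matches
    w = 1.0 / d**power
    return (w @ known_z) / w.sum(axis=1)

# Hypothetical borehole coordinates (m) and bearing-layer depths (m).
xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
depth = np.array([12.0, 15.0, 9.0, 14.0])

print(idw(xy, depth, np.array([[50.0, 50.0]])))   # interpolated depth at the center
```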

Clinical applications and performance of intelligent systems in dental and maxillofacial radiology: A review

  • Nagi, Ravleen;Aravinda, Konidena;Rakesh, N;Gupta, Rajesh;Pal, Ajay;Mann, Amrit Kaur
    • Imaging Science in Dentistry / v.50 no.2 / pp.81-92 / 2020
  • Intelligent systems (i.e., artificial intelligence), particularly deep learning, are machines able to mimic the cognitive functions of humans to perform tasks of problem-solving and learning. This field deals with computational models that can think and act intelligently, like the human brain, and construct algorithms that can learn from data to make predictions. Artificial intelligence is becoming important in radiology due to its ability to detect abnormalities in radiographic images that go unnoticed by the naked human eye. These systems have reduced radiologists' workload by rapidly recording and presenting data, thereby monitoring treatment response with a reduced risk of cognitive bias. Intelligent systems have an important role to play and could be used by dentists as an adjunct to other imaging modalities in making appropriate diagnoses and treatment plans. In the field of maxillofacial radiology, these systems have shown promise for the interpretation of complex images, accurate localization of landmarks, characterization of bone architecture, estimation of oral cancer risk, and the assessment of metastatic lymph nodes, periapical pathologies, and maxillary sinus pathologies. This review discusses the clinical applications and scope of intelligent systems such as machine learning, artificial intelligence, and deep learning programs in maxillofacial imaging.

Proposal for Deep Learning based Character Recognition System by Virtual Data Generation (가상 데이터 생성을 통한 딥러닝 기반 문자인식 시스템 제안)

  • Lee, Seungju;Park, Gooman
    • Journal of Broadcast Engineering / v.25 no.2 / pp.275-278 / 2020
  • In this paper, we propose a deep learning-based character recognition system built on virtual data generation. Virtual data was created to secure the training data, which carries the largest weight in supervised learning. After creating the virtual data, data generalization was performed using augmentation parameters so that the model can cope with various data. The final training set was composed by assigning various values to the augmentation and font parameters. Test data for measuring character recognition performance was constructed by cropping the text regions from real image data, and was augmented to reflect the image distortion that may occur in real environments. The deep learning algorithm is YOLO v3, which performs detection in real time, and the inference result is post-processed to produce the final detection output.
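
A minimal sketch of the virtual-data idea: render characters onto blank images with varying fonts, then perturb them with simple augmentations. The font path, example string, and augmentation ranges below are illustrative assumptions, not the parameters used in the paper.

```python
import random
from PIL import Image, ImageDraw, ImageFont, ImageEnhance

def make_virtual_sample(text: str, font_path: str, out_size=(128, 64)) -> Image.Image:
    """Render `text` on a blank image, then apply simple augmentations."""
    img = Image.new("RGB", out_size, color=(255, 255, 255))
    font = ImageFont.truetype(font_path, size=random.randint(24, 40))  # font parameter
    draw = ImageDraw.Draw(img)
    draw.text((random.randint(0, 20), random.randint(0, 10)), text,
              font=font, fill=(0, 0, 0))

    # Augmentation parameters: small rotation and brightness jitter.
    img = img.rotate(random.uniform(-5, 5), expand=False, fillcolor=(255, 255, 255))
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))
    return img

# Hypothetical font file; any TrueType font available on the system would do.
sample = make_virtual_sample("가나다", "NanumGothic.ttf")
sample.save("virtual_sample.png")
```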

Implementation of Face Recognition Pipeline Model using Caffe (Caffe를 이용한 얼굴 인식 파이프라인 모델 구현)

  • Park, Jin-Hwan;Kim, Chang-Bok
    • Journal of Advanced Navigation Technology / v.24 no.5 / pp.430-437 / 2020
  • The proposed model improves the face prediction rate and recognition rate by training an artificial neural network on top of face detection, landmark, and face recognition algorithms. After landmarking the face images of a specific person, the proposed pipeline uses a pre-trained Caffe model for face detection and to extract a 128-D embedding vector. Classifiers are then trained with machine learning algorithms such as a support vector machine (SVM) and a deep neural network (DNN). Using the trained model, face recognition is tested with face images different from those used in training. In the experiments, training with the DNN showed better prediction and recognition rates than the SVM. However, when the number of hidden layers in the DNN is increased, the prediction rate rises but the recognition rate falls, which we judge to be overfitting caused by the small number of subjects to be recognized. When clear face images were added to the training data of the proposed model, high prediction and recognition rates were confirmed. Better recognition and prediction rates should be obtainable through effective deep learning setups that use more face image data.
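
To illustrate the final classification stage of such a pipeline, the sketch below trains an SVM on face embeddings. The 128-D vectors are generated randomly here as stand-ins for the Caffe embeddings described in the abstract, so the data and class counts are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for Caffe 128-D face embeddings: 3 people, 50 images each.
n_people, n_per_person, dim = 3, 50, 128
centers = rng.normal(size=(n_people, dim))
X = np.vstack([c + 0.3 * rng.normal(size=(n_per_person, dim)) for c in centers])
y = np.repeat(np.arange(n_people), n_per_person)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="linear", probability=True)   # SVM stage of the pipeline
clf.fit(X_train, y_train)
print("recognition accuracy:", clf.score(X_test, y_test))
```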

Network Intrusion Detection System Using Feature Extraction Based on AutoEncoder in IOT environment (IOT 환경에서의 오토인코더 기반 특징 추출을 이용한 네트워크 침입탐지 시스템)

  • Lee, Joohwa;Park, Keehyun
    • KIPS Transactions on Software and Data Engineering / v.8 no.12 / pp.483-490 / 2019
  • In a network intrusion detection system (NIDS), the classification function is very important, and detection performance depends on the choice of features. Although much research has recently been carried out on deep learning, network intrusion detection systems suffer slowdowns due to the large volume of traffic and high-dimensional features. Therefore, we use deep learning not as the classifier but as a preprocessing step for feature extraction, and propose a method in which classification is performed on the extracted features. A stacked AutoEncoder, a representative unsupervised deep learning model, is used to extract the features, and classification is performed with the Random Forest algorithm. Using data collected in an IoT environment, the accuracy was more than 99% when normal and attack traffic were classified into multiple classes, and the performance and detection rate were superior even when compared with other models such as AE-RF and Single-RF.
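
A compact sketch of this two-stage design: an autoencoder is trained to reconstruct the traffic features, its bottleneck activations are used as the reduced feature set, and a Random Forest classifies them. The input width, bottleneck size, and synthetic data are assumptions for illustration; the paper's stacked architecture and IoT dataset are not reproduced here.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

# Hypothetical traffic features: 1,000 flows x 40 features, binary labels.
X = np.random.rand(1000, 40).astype(np.float32)
y = np.random.randint(0, 2, size=1000)

encoder = nn.Sequential(nn.Linear(40, 16), nn.ReLU(), nn.Linear(16, 8), nn.ReLU())
decoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 40))
autoencoder = nn.Sequential(encoder, decoder)

opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.from_numpy(X)

for _ in range(50):                      # unsupervised reconstruction training
    opt.zero_grad()
    loss = loss_fn(autoencoder(x), x)
    loss.backward()
    opt.step()

with torch.no_grad():
    features = encoder(x).numpy()        # 8-D extracted features

clf = RandomForestClassifier(n_estimators=100).fit(features, y)
print("train accuracy:", clf.score(features, y))
```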