• Title/Summary/Keyword: Deep Learning System


Object-aware Depth Estimation for Developing Collision Avoidance System (객체 영역에 특화된 뎁스 추정 기반의 충돌방지 기술개발)

  • Gyutae Hwang;Jimin Song;Sang Jun Lee
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.2 / pp.91-99 / 2024
  • A collision avoidance system is important for improving the robustness and functional safety of autonomous vehicles. This paper proposes an object-level distance estimation method for developing a collision avoidance system, applied to golf carts operated in country club environments. To improve detection accuracy, we continually trained an object detection model on pseudo labels generated by a pre-trained detector. Moreover, we propose the object-aware depth estimation (OADE) method, which trains a depth model focusing on object regions. In the OADE algorithm, we generate dense depth information for object regions by combining detection results with sparse LiDAR points, referred to as object-aware LiDAR projection (OALP). Using the OALP maps, a depth estimation model is trained by backpropagating more of the loss gradient on object regions. Experiments were conducted on our custom dataset, collected over a travel distance of 22 km across 54 holes at three country clubs under various weather conditions. Precision and recall improved from 70.5% and 49.1% to 95.3% and 92.1%, respectively, after continual learning with pseudo labels. Moreover, the OADE algorithm reduced the absolute relative error of obstacle distance estimation from 4.76% to 4.27%.
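The abstract does not give the paper's exact loss; as a minimal sketch of the object-aware weighting idea, pixels falling inside detected object regions can be given a larger weight in an L1 depth loss, so training emphasizes errors on obstacles (the function name and weight value here are assumptions):

```python
def object_aware_depth_loss(pred, target, object_mask, object_weight=3.0):
    """Weighted L1 depth loss over flattened pixels.

    object_mask[i] is truthy for pixels inside a detected object region;
    those pixels receive a larger weight so more loss gradient flows there.
    """
    total, norm = 0.0, 0.0
    for p, t, m in zip(pred, target, object_mask):
        w = object_weight if m else 1.0
        total += w * abs(p - t)
        norm += w
    return total / norm
```

With the default weight, a one-unit depth error on an object pixel counts three times as much as the same error on background, which mirrors the "backpropagating more gradients of the loss on object regions" described above.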

Intelligent Railway Detection Algorithm Fusing Image Processing and Deep Learning for the Prevention of Unusual Events (철도 궤도의 이상상황 예방을 위한 영상처리와 딥러닝을 융합한 지능형 철도 레일 탐지 알고리즘)

  • Jung, Ju-ho;Kim, Da-hyeon;Kim, Chul-su;Oh, Ryum-duck;Ahn, Jun-ho
    • Journal of Internet Computing and Services / v.21 no.4 / pp.109-116 / 2020
  • With the advent of high-speed rail, railways have become one of the most frequently used means of transportation at home and abroad. Environmentally, rail also emits less carbon dioxide and is more energy-efficient than other modes of transport. As interest in railways has grown, railway safety has become an important concern. In particular, visually abnormal situations arise when obstacles such as animals or people suddenly appear in front of a train. To prevent such accidents, detecting the rail tracks themselves is a fundamental requirement. Images can be collected from cameras installed along the railway, and rails can be detected either by traditional image-processing methods or by deep learning algorithms. Traditional methods struggle to detect rails accurately because of the varied noise around them, whereas deep learning algorithms can detect them accurately; the proposed algorithm combines the two to detect the exact rail. Its rail detection accuracy is evaluated on the collected data.
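The abstract does not state the fusion rule; one minimal sketch, assuming both the traditional image-processing pipeline and the deep network emit binary rail masks, is a per-pixel intersection that suppresses the noise-prone traditional detections:

```python
def fuse_rail_masks(traditional_mask, deep_mask):
    """Per-pixel AND of two binary rail masks (lists of 0/1 rows).

    Keeping only pixels flagged by both detectors discards the spurious
    responses that noise around the rail causes in the traditional method.
    """
    return [
        [t & d for t, d in zip(t_row, d_row)]
        for t_row, d_row in zip(traditional_mask, deep_mask)
    ]
```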

A Suggestion of the Direction of Construction Disaster Document Management through Text Data Classification Model based on Deep Learning (딥러닝 기반 분류 모델의 성능 분석을 통한 건설 재해사례 텍스트 데이터의 효율적 관리방향 제안)

  • Kim, Hayoung;Jang, YeEun;Kang, HyunBin;Son, JeongWook;Yi, June-Seong
    • Korean Journal of Construction Engineering and Management / v.22 no.5 / pp.73-85 / 2021
  • This study proposes an efficient management direction for Korean construction accident cases through a deep learning-based text data classification model. A deep learning model was developed to categorize construction accidents into five representative KOSHA accident types: fall, electric shock, flying object, collapse, and narrowness. In initial model tests, the classification accuracy for fall disasters was relatively high, while the other types were often misclassified as falls. The results suggest three contributing factors: 1) specific accident-causing behavior, 2) similar sentence structure, and 3) complex accidents that correspond to multiple types. Two accuracy-improvement experiments were then conducted: 1) reclassification and 2) elimination. As a result, classification performance improved by 185.7% when complex accidents were eliminated, resolving the multicollinearity introduced by complex accidents that contain the contents of multiple accident types. In conclusion, this study suggests the need to manage complex accidents independently while preparing a system that describes the circumstances of future accidents in detail.

Detection of Zebra-crossing Areas Based on Deep Learning with Combination of SegNet and ResNet (SegNet과 ResNet을 조합한 딥러닝에 기반한 횡단보도 영역 검출)

  • Liang, Han;Seo, Suyoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.3 / pp.141-148 / 2021
  • This paper presents a method to detect zebra-crossings using a deep learning model that combines SegNet and ResNet. For blind pedestrians, a safe crossing system must know exactly where zebra-crossings are. Zebra-crossing detection by deep learning can be a good solution to this problem, and robotic vision-based assistive technologies have sprung up over the past few years, focusing on specific scene objects using monocular detectors. These traditional methods achieved significant results with relatively long processing times and enhanced zebra-crossing perception to a large extent; however, running all detectors jointly incurs long latency and becomes computationally prohibitive on wearable embedded systems. In this paper, we propose a model for fast and stable segmentation of zebra-crossings from captured images. The model builds on a combination of SegNet and ResNet and consists of three steps. First, the input image is subsampled to extract image features, and the convolutional neural network of ResNet is modified to serve as the new encoder. Second, the abstract features are restored to the original image size through SegNet's original up-sampling network. Finally, the method classifies all pixels and calculates the accuracy of each pixel. The experimental results demonstrate the efficiency of the modified semantic segmentation algorithm at a relatively high computing speed.
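The final step above classifies every pixel and scores the result per pixel; a minimal sketch of that per-pixel accuracy computation (names are assumptions, with 1 marking zebra-crossing pixels) could be:

```python
def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth.

    pred and truth are 2D label maps given as lists of rows.
    """
    correct = total = 0
    for p_row, t_row in zip(pred, truth):
        for p, t in zip(p_row, t_row):
            correct += (p == t)  # bool counts as 0/1
            total += 1
    return correct / total
```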

Single Image Super Resolution Based on Residual Dense Channel Attention Block-RecursiveSRNet (잔여 밀집 및 채널 집중 기법을 갖는 재귀적 경량 네트워크 기반의 단일 이미지 초해상도 기법)

  • Woo, Hee-Jo;Sim, Ji-Woo;Kim, Eung-Tae
    • Journal of Broadcast Engineering / v.26 no.4 / pp.429-440 / 2021
  • With the recent development of deep convolutional neural networks, deep learning techniques applied to single image super-resolution are showing good results. One existing deep learning-based super-resolution technique is the Residual Dense Network (RDN), in which initial feature information is transmitted to the last layer through residual dense blocks, and subsequent layers are restored using the input information of previous layers. However, when all hierarchical features are connected and learned and a large number of residual dense blocks are stacked, the network requires many parameters and heavy computation despite its good performance; training takes a long time, processing is slow, and the model is not applicable to mobile systems. In this paper, we use the residual dense structure, a contiguous memory structure that reuses previous information, together with a residual dense channel attention block that uses channel attention to weight features according to the importance of each feature map. We propose a method that increases depth to obtain a large receptive field while keeping the model compact. In experiments, the proposed network's PSNR at 4× magnification was on average only 0.205 dB lower than RDN's, while processing was about 1.8 times faster with about 10 times fewer parameters and about 1.74 times less computation.
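As a minimal sketch of the channel-attention idea described above (not the paper's exact block), each channel is squeezed to its global average and rescaled by a sigmoid gate, so informative channels contribute more to the restored features; the fixed gate parameters here stand in for the small learned layers a real block would use:

```python
import math

def channel_attention(feature_maps, gate_w=1.0, gate_b=0.0):
    """Scale each channel by a sigmoid gate of its global average.

    feature_maps: list of channels, each a flat list of activations.
    gate_w/gate_b are illustrative stand-ins for learned gate weights.
    """
    scaled = []
    for channel in feature_maps:
        mean = sum(channel) / len(channel)  # global average pool
        gate = 1.0 / (1.0 + math.exp(-(gate_w * mean + gate_b)))
        scaled.append([gate * v for v in channel])
    return scaled
```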

Comparative Validation of the Mixed and Permanent Dentition at Web-Based Artificial Intelligence Cephalometric Analysis (혼합치열과 영구치열 환자를 대상으로 한 웹 기반 인공지능 두부 계측 분석에서의 비교 검증)

  • Shin, Sunhahn;Kim, Donghyun
    • Journal of the korean academy of Pediatric Dentistry / v.49 no.1 / pp.85-94 / 2022
  • This retrospective study aimed to evaluate the difference in measurements between conventional orthodontic analysis and artificial intelligence orthodontic analysis in pediatric and adolescent patients aged 7-15 with mixed and permanent dentition. A total of 60 pediatric and adolescent patients (30 mixed dentition, 30 permanent dentition) who underwent lateral cephalometric radiography for orthodontic diagnosis were randomly selected. Seventeen cephalometric landmarks were identified, and 22 measurements were calculated by one examiner using both the conventional analysis method and a deep learning-based analysis method. Errors due to repeated measurements were assessed by Pearson's correlation coefficient. For the mixed dentition group and the permanent dentition group, respectively, a paired t-test was used to evaluate the difference between the two methods. In the mixed dentition group, the difference between the two methods was statistically significant for 8 measurements: APDI, SNA, SNB, mandibular plane angle, LAFH (p < 0.001), facial ratio (p = 0.001), U1 to SN (p = 0.012), and U1 to A-Pg (p = 0.021). In the permanent dentition group, 4 measurements showed a statistically significant difference between the two methods: ODI (p = 0.020), Wits appraisal (p = 0.025), facial ratio (p = 0.026), and U1 to A-Pg (p = 0.001). Compared with the time-consuming conventional orthodontic analysis, the deep learning-based cephalometric system can be clinically acceptable in terms of reliability and validity. However, it is essential to understand the limitations of deep learning-based programs for the orthodontic analysis of pediatric and adolescent patients and to use these programs with proper assessment.
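The paired t-test used above compares the two analysis methods on the same patients; as a minimal sketch, the test statistic is the mean of the per-patient differences divided by its standard error (in practice `scipy.stats.ttest_rel` would be used, this just recomputes the statistic):

```python
import math

def paired_t_statistic(method_a, method_b):
    """t statistic for paired samples: mean difference over its standard error."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```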

Deep Learning Applied Method for Acquisition of Digital Position Signal of PET Detector (PET 검출기의 디지털 위치 신호 측정을 위한 딥러닝 적용 방법)

  • Jo, Byungdu;Lee, Seung-Jae
    • Journal of the Korean Society of Radiology / v.16 no.6 / pp.697-702 / 2022
  • For imaging in positron emission tomography (PET), it is necessary to measure the position of the scintillation pixel that interacts with the gamma rays incident on the detector. In conventional systems, this is done by acquiring a flood image of the scintillation pixels, separating the imaged area of each pixel, and specifying each pixel's position as a digital signal. In this study, a deep learning method was applied to the signals formed by the detector's photosensor, and a method was developed to acquire the digital position signal directly without these intermediate procedures. A DETECT2000 simulation was performed to verify the method and evaluate its positioning accuracy. A detector was constructed from a 6 × 6 scintillation pixel array and a 4 × 4 photosensor, gamma-ray events were generated at the center of each scintillation pixel, and the photosensor outputs were summed into four channels of signals through the Anger equation. After training the deep learning model on the acquired signals, the positions of gamma-ray events occurring at different depths within the scintillation pixels were measured. The results were accurate for every scintillation pixel and position. Applying the method developed in this study to a PET detector will make it possible to measure scintillation pixel positions as digital signals more conveniently.
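The Anger equation mentioned above localizes an event as the signal-weighted centroid of the photosensor channels; a minimal sketch under that reading (the paper sums the photosensor outputs into four channels before applying it; the channel coordinates below are assumptions):

```python
def anger_position(signals, coords):
    """Signal-weighted centroid of photosensor channels (Anger logic).

    signals: per-channel amplitudes; coords: (x, y) position of each channel.
    """
    total = sum(signals)
    x = sum(s * c[0] for s, c in zip(signals, coords)) / total
    y = sum(s * c[1] for s, c in zip(signals, coords)) / total
    return x, y
```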

A Study on a Deep Learning Algorithm to Predict Printed Spot Colors (딥러닝 알고리즘을 이용한 인쇄된 별색 잉크의 색상 예측 연구)

  • Jun, Su Hyeon;Park, Jae Sang;Tae, Hyun Chul
    • Journal of Korean Society of Industrial and Systems Engineering / v.45 no.2 / pp.48-55 / 2022
  • The color image of a brand is a primary and important visual element that leads consumers to purchase its products. To express what a brand wants to convey through design more effectively, the printing market strives to print accurate colors that match the design intent. In offset printing, the method most widely used in the industry, colors are usually printed with CMYK (Cyan, Magenta, Yellow, Key) inks. However, more accurate colors can be printed by manufacturing an ink of the desired color instead of dotting CMYK colors; the resulting ink is called 'spot color' ink. Spot color ink is manufactured by repeatedly mixing existing inks, and in this repeated trial and error, manufacturing costs increase, causing economic loss, and wasted ink causes environmental pollution. In this study, a deep learning algorithm was designed to predict printed spot colors and solve this problem. The algorithm uses a single Deep Neural Network (DNN) model to predict printed spot colors from information about the paper and the proportions of the inks to be mixed. More than 8,000 spot color ink samples were used for training, and every color was quantified as the reflectance over the visible-light wavelength range divided into 31 sections. The proposed algorithm predicted more than 80% of spot color inks as very similar colors. The average difference between the actual and predicted colors, calculated with the CIE Delta E metric, is 5.29; when Delta E is less than 10, the difference in printed color is known to be difficult to distinguish with the naked eye. The algorithm of this study predicts more accurately than previous studies and can be extended flexibly when new inks are added. It can be used in real industrial settings, reducing operators' trial-and-error by letting them check ink colors in a virtual environment. This will lower the manufacturing cost of spot color inks, improve working conditions for workers, and help address environmental pollution by reducing unnecessarily wasted ink.
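The Delta E figure quoted above is the CIE color difference; in its simplest form (CIE76, assumed here since the abstract does not name the variant) it is the Euclidean distance between two colors in CIELAB space, against which the 5.29 average and the <10 perceptibility threshold can be read:

```python
def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space.

    lab1/lab2 are (L*, a*, b*) triples.
    """
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5
```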

Development of Deep Learning Based Ensemble Land Cover Segmentation Algorithm Using Drone Aerial Images (드론 항공영상을 이용한 딥러닝 기반 앙상블 토지 피복 분할 알고리즘 개발)

  • Hae-Gwang Park;Seung-Ki Baek;Seung Hyun Jeong
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.71-80 / 2024
  • In this study, we propose an ensemble learning technique that aims to enhance the semantic segmentation performance on images captured by Unmanned Aerial Vehicles (UAVs). With the increasing use of UAVs in fields such as urban planning, deep learning segmentation methods for land cover segmentation have been actively developed. The study suggests a method that utilizes three prominent segmentation models, U-Net, DeepLabV3, and Fully Convolutional Network (FCN), to improve segmentation prediction performance. The proposed approach integrates the training loss, validation accuracy, and class scores of the three segmentation models to enhance overall prediction performance. The method was applied and evaluated on a land cover segmentation problem involving seven classes: buildings, roads, parking lots, fields, trees, empty spaces, and areas with unspecified labels, using images captured by UAVs. The performance of the ensemble model was evaluated by mean Intersection over Union (mIoU), and comparing the proposed ensemble model with the three existing segmentation methods showed improved mIoU. Consequently, the study confirms that the proposed technique can enhance the performance of semantic segmentation models.
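A minimal sketch of the score-level ensembling idea, assuming each of the three models emits per-class scores for a pixel and the ensemble averages them before taking the argmax (the paper additionally integrates training loss and validation accuracy, which an unweighted average omits):

```python
def ensemble_class(per_model_scores):
    """Average per-class scores across models, return the winning class index.

    per_model_scores: list of score lists, one per model, same class order.
    """
    n_models = len(per_model_scores)
    n_classes = len(per_model_scores[0])
    avg = [sum(m[c] for m in per_model_scores) / n_models
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

Per-model weights derived from validation accuracy could replace the uniform 1/n_models factor to move closer to the integration described above.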

Data-Driven-Based Beam Selection for Hybrid Beamforming in Ultra-Dense Networks

  • Ju, Sang-Lim;Kim, Kyung-Seok
    • International journal of advanced smart convergence / v.9 no.2 / pp.58-67 / 2020
  • In this paper, we propose a data-driven beam selection scheme for massive multiple-input multiple-output (MIMO) systems in ultra-dense networks (UDN), which addresses the high computational cost of conventional coordinated beamforming approaches. We consider highly dense small-cell scenarios, with more small cells than mobile stations, in the millimetre-wave band. Analog beam selection for hybrid beamforming is a key issue in realizing millimetre-wave UDN MIMO systems. To reduce the computational complexity of analog beam selection, two deep neural network models are used. The channel samples, channel gains, and radio-frequency beamforming vectors between the access points and mobile stations are collected at the central/cloud unit connected to all small-cell access points and are used to train the networks. The proposed machine-learning-based scheme provides an approach for the effective implementation of massive MIMO systems in UDN environments.