• Title/Summary/Keyword: Deep Learning System

Search Results: 1,738 (processing time: 0.034 seconds)

A General Acoustic Drone Detection Using Noise Reduction Preprocessing (환경 소음 제거를 통한 범용적인 드론 음향 탐지 구현)

  • Kang, Hae Young;Lee, Kyung-ho
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.5 / pp.881-890 / 2022
  • As individual and group users actively fly drones, the risks in no-fly zones (intrusion, information leakage, aircraft crashes, and so on) are also increasing. Therefore, it is necessary to build a system that can detect drones intruding into a no-fly zone. Typical acoustic drone detection studies try to overcome environmental noise by training a deep learning model directly on drone sound that includes the noise, and therefore do not achieve location-independent performance. In this paper, we propose a drone detection system that collects sound including environmental noise and detects drones by first removing the noise from the target sound. After removing the environmental noise from the collected sound, the proposed system classifies the drone sound using a Mel spectrogram and a CNN. As a result, it is confirmed that drone detection performance, which was weak for environmental noise not seen during training, can be improved by more than 7%.
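The abstract above describes a denoise-then-classify pipeline (noise removal, Mel spectrogram, CNN) without implementation details. The following is a minimal sketch of that kind of pipeline, assuming librosa for the Mel spectrogram, the noisereduce package for spectral-gating noise suppression, and a small Keras CNN; all shapes and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code): denoise -> Mel spectrogram -> small CNN classifier.
import numpy as np
import librosa
import noisereduce as nr
import tensorflow as tf

def to_mel_input(path, sr=22050, n_mels=64, duration=2.0):
    """Load audio, suppress stationary background noise, and return a log-Mel spectrogram."""
    y, _ = librosa.load(path, sr=sr, duration=duration)
    y = nr.reduce_noise(y=y, sr=sr)                      # spectral-gating noise reduction (assumption)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    return log_mel[..., np.newaxis]                      # (n_mels, frames, 1) for the CNN

def build_cnn(input_shape):
    """Small CNN that outputs P(drone) for one spectrogram."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
```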

Physical-Layer Technology Trend and Prospect for AI-based Mobile Communication (AI 기반 이동통신 물리계층 기술 동향과 전망)

  • Chang, K.;Ko, Y.J.;Kim, I.G.
    • Electronics and Telecommunications Trends / v.35 no.5 / pp.14-29 / 2020
  • The 6G mobile communication system will become a backbone infrastructure around 2030 for the future digital world by providing distinctive services such as five-sense holograms, ultra-high reliability/low latency, ultra-high-precision positioning, ultra-massive connectivity, and gigabit-per-second data rates for aerial and maritime terminals. Recent remarkable advances in machine learning (ML) technology have shown its efficiency in wireless networking fields such as resource management and cell-configuration optimization. Further innovation in ML is expected to play an important role in solving new problems arising from 6G network management and service delivery. In contrast, applying ML at the physical layer (PHY) tackles the basic problems of radio links, such as overcoming signal distortion and interference. This paper reviews the methodologies of ML-based PHY, relevant industrial trends, and candidate technologies, including future research directions and standardization impacts.

The training of convolution neural network for advanced driver assistant system

  • Nam, Kihun;Jeon, Heekyeong
    • International Journal of Advanced Culture Technology / v.4 no.4 / pp.23-29 / 2016
  • In this paper, a training technique for an in-vehicle CNN processor is proposed. Conventional CNN processors store weights learned through training and reuse them, but when the image is distorted by weather conditions, accuracy decreases. Enhancing the input image before classification is the common remedy, but it has the weakness of increasing processor size. To solve this problem, this paper improves CNN performance by training on distorted images instead. As a result, the proposed method showed approximately 38% better accuracy than the conventional method.
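The paper trains the CNN on distorted images rather than enhancing inputs at inference time. The sketch below illustrates that idea with synthetic weather-like distortions (blur, additive noise, haze-like contrast loss) mixed into the training batch; the specific distortion functions are assumptions for illustration, not the authors' method.

```python
# Minimal sketch (assumption, not the paper's code): augment training images with
# weather-like distortions so the classifier learns to tolerate them at inference time.
import numpy as np
import cv2

def distort(img, rng):
    """Apply a random weather-like degradation: blur, additive noise, or low contrast (haze-like)."""
    choice = rng.integers(3)
    if choice == 0:
        return cv2.GaussianBlur(img, (7, 7), 0)                      # defocus / rain-streak blur
    if choice == 1:
        noise = rng.normal(0, 15, img.shape).astype(np.float32)
        return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    return np.clip(0.6 * img + 0.4 * 255, 0, 255).astype(np.uint8)   # washed-out, fog-like contrast

def augmented_batch(images, rng=np.random.default_rng(0)):
    """Mix clean and distorted views so the CNN sees both during training."""
    return np.stack([img if rng.random() < 0.5 else distort(img, rng) for img in images])
```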

Multi-modal Sensor System and Database for Human Detection and Activity Learning of Robot in Outdoor (실외에서 로봇의 인간 탐지 및 행위 학습을 위한 멀티모달센서 시스템 및 데이터베이스 구축)

  • Uhm, Taeyoung;Park, Jeong-Woo;Lee, Jong-Deuk;Bae, Gi-Deok;Choi, Young-Ho
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1459-1466 / 2018
  • Robots that detect humans and recognize their actions are important for human-robot interaction, and much research has been conducted on them. Recently, deep learning technology has advanced, and learning-based robot technology has become a major research area. Such studies require a database for training and evaluating intelligent human perception. In this paper, we analyze image databases for detecting people and recognizing their behavior outdoors while a robot is operating, and propose the conditions for a multi-modal sensor-based image database that takes the security task into account.
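The abstract does not list the sensor modalities or the database schema, so the following is a purely hypothetical sketch of how synchronized multi-modal samples with person and activity annotations might be organized; every field name and modality here is an assumption, not the database from the paper.

```python
# Hypothetical sketch of a synchronized multi-modal sample record for training/evaluation.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MultiModalSample:
    timestamp: float                                    # common capture time used to align the sensors
    rgb_path: str                                       # color image frame
    thermal_path: str                                   # thermal image frame (assumed modality)
    lidar_path: str                                     # point-cloud scan (assumed modality)
    person_boxes: List[Tuple[int, int, int, int]]       # (x, y, w, h) per annotated person
    activity_label: str                                 # e.g. "walking", "loitering" for behavior learning

def security_task_filter(samples: List[MultiModalSample]) -> List[MultiModalSample]:
    """Keep only samples annotated with at least one person, as a security-task subset."""
    return [s for s in samples if s.person_boxes]
```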

Multi-Description Image Compression Coding Algorithm Based on Depth Learning

  • Yong Zhang;Guoteng Hui;Lei Zhang
    • Journal of Information Processing Systems / v.19 no.2 / pp.232-239 / 2023
  • To address the poor compression quality of traditional image compression coding (ICC) algorithms, a multi-description ICC algorithm based on deep learning is put forward in this study. First, an image compression algorithm was designed based on multi-description coding theory: image compression samples were collected and the measurement matrix was calculated. Then, the multi-description ICC sample set was processed with a convolutional autoencoder network from deep learning. Compressing the wavelet coefficients after coding and synthesizing the multi-description image band sparse matrix yielded the multi-description ICC sequence. Averaging the multi-description image coding data according to the positions of the effective single points finally realized the compression coding of multi-description images. According to the experimental results, the designed algorithm consumes less time for image compression and exhibits better image compression quality and better image reconstruction.
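The full pipeline (measurement matrix, wavelet-coefficient coding, description synthesis) is not specified in the abstract. The sketch below shows only the convolutional autoencoder component, as one plausible reading of the "convolutional self-coding neural system", in Keras; the architecture, image size, and loss are assumptions.

```python
# Minimal sketch of the convolutional autoencoder component only; wavelet/measurement-matrix steps omitted.
import tensorflow as tf

def build_conv_autoencoder(h=128, w=128, c=1, code_channels=8):
    """Encoder compresses the image to a small feature map; decoder reconstructs it."""
    inp = tf.keras.layers.Input(shape=(h, w, c))
    x = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(code_channels, 3, strides=2, padding="same", activation="relu")(x)  # compressed code
    x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = tf.keras.layers.Conv2DTranspose(c, 3, strides=2, padding="same", activation="sigmoid")(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")   # reconstruction loss
    return model
```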

Detection of PCB Components Using Deep Neural Nets (심층신경망을 이용한 PCB 부품의 검지 및 인식)

  • Cho, Tai-Hoon
    • Journal of the Semiconductor & Display Technology / v.19 no.2 / pp.11-15 / 2020
  • In a typical initial setup of a PCB component inspection system, operators must manually input information such as category, position, and inspection area for each component to be inspected, causing much inconvenience and long setup times. Among the many deep learning based object detectors, RetinaNet is regarded as one of the best currently available. In this paper, a method using an extended RetinaNet is proposed that automatically detects the category and position of each component mounted on a PCB from a high-resolution color input image. We extended the basic RetinaNet feature pyramid network by adding a pyramid level with higher spatial resolution to the basic feature pyramid. Experiments demonstrated that the extended RetinaNet successfully detects very small components that could be missed by the basic RetinaNet. The proposed method enables automatic generation of inspection areas, considerably reducing the setup time of PCB component inspection systems.
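The key idea is adding a higher-resolution level to the RetinaNet feature pyramid so very small components remain detectable. The following PyTorch sketch illustrates that general FPN extension (a finer level built from a higher-resolution backbone map, here called C2); channel widths and layer names are assumptions, not the authors' network.

```python
# Sketch of the idea only: add one finer pyramid level (P2) on top of a standard FPN,
# so very small PCB components still cover enough pixels in at least one pyramid level.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExtraFineLevel(nn.Module):
    def __init__(self, c2_channels=256, fpn_channels=256):
        super().__init__()
        self.lateral = nn.Conv2d(c2_channels, fpn_channels, kernel_size=1)   # project C2 to FPN width
        self.smooth = nn.Conv2d(fpn_channels, fpn_channels, kernel_size=3, padding=1)

    def forward(self, c2, p3):
        """Build P2 from the high-resolution backbone map C2 and the existing pyramid level P3."""
        up = F.interpolate(p3, size=c2.shape[-2:], mode="nearest")           # upsample the coarser level
        p2 = self.smooth(self.lateral(c2) + up)                              # lateral + top-down merge
        return p2

# Usage sketch: p2 = ExtraFineLevel()(c2, p3); the detection heads then also run on p2.
```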

Multiple Fusion-based Deep Cross-domain Recommendation (다중 융합 기반 심층 교차 도메인 추천)

  • Hong, Minsung;Lee, WonJin
    • Journal of Korea Multimedia Society / v.25 no.6 / pp.819-832 / 2022
  • Cross-domain recommender systems transfer knowledge across domains to improve recommendation performance in a target domain whose model is relatively sparse. However, they suffer from "negative transfer", in which the transferred knowledge acts as noise. This paper proposes a novel multiple fusion-based deep cross-domain recommendation method named MFDCR. We exploit Doc2Vec, one of the well-known embedding techniques, to fuse data user-wise and transfer knowledge across multiple domains, which alleviates the negative-transfer problem. Additionally, we introduce a simple multi-layer perceptron to learn user-item interactions and predict the probability that a user prefers an item. Extensive experiments with three domain datasets from Amazon, one of the most famous services, demonstrate that MFDCR outperforms recent single- and cross-domain recommendation algorithms. Furthermore, the experimental results show that MFDCR addresses the negative-transfer problem and improves recommendation performance for multiple domains simultaneously. In addition, we show that our approach extends efficiently to more domains.
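A rough sketch of the two ingredients named in the abstract, under stated assumptions: gensim's Doc2Vec embeds each user's merged cross-domain interaction history as one document, and a small multi-layer perceptron scores user-item pairs. The fusion details, dimensions, and training setup are assumptions rather than the MFDCR specification.

```python
# Minimal sketch (assumption, not MFDCR itself): embed each user's merged cross-domain history
# with Doc2Vec, then score user-item pairs with a small multi-layer perceptron.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
import tensorflow as tf

def user_embeddings(histories, dim=64):
    """histories: {user_id: [item tokens from all domains]} fused user-wise into one document each."""
    docs = [TaggedDocument(words=items, tags=[uid]) for uid, items in histories.items()]
    model = Doc2Vec(documents=docs, vector_size=dim, min_count=1, epochs=20)
    return {uid: model.dv[uid] for uid in histories}

def build_mlp(dim=64):
    """MLP over concatenated user and item vectors, predicting preference probability."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(2 * dim,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
```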

Intelligent Face Recognition and Tracking System to Distribute GPU Resources using CUDA (쿠다를 사용하여 GPU 리소스를 분배하는 지능형 얼굴 인식 및 트래킹 시스템)

  • Kim, Jae-Heong;Lee, Seung-Ho
    • Journal of IKEEE / v.22 no.2 / pp.281-288 / 2018
  • In this paper, we propose an intelligent face recognition and tracking system that distributes GPU resources using CUDA. The proposed system consists of five steps: a GPU allocation algorithm that distributes GPU resources optimally, face-area detection and face recognition using deep learning, real-time face tracking, and PTZ camera control. Unlike approaches that bind each thread to a fixed GPU, the allocation algorithm distributes multi-GPU resources flexibly according to the activation level of each GPU, enabling stable and efficient use of multiple GPUs. To evaluate the performance of the proposed system, we compared it with a system that does not distribute resources. The system without resource allocation showed unstable operation, whereas the proposed system operated stably with stable resource utilization, demonstrating its utility.
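The abstract describes allocating work to GPUs according to their activation level rather than binding threads to fixed GPUs. The sketch below illustrates one way to do load-aware selection using NVML utilization queries; it is not the paper's allocation algorithm.

```python
# Illustrative sketch: choose the least-busy GPU at dispatch time instead of binding each
# worker thread to a fixed GPU. Uses NVML utilization as the "activation level".
import pynvml

def least_loaded_gpu():
    """Return the index of the GPU with the lowest current utilization."""
    pynvml.nvmlInit()
    try:
        count = pynvml.nvmlDeviceGetCount()
        loads = []
        for i in range(count):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu   # percent busy
            loads.append((util, i))
        return min(loads)[1]
    finally:
        pynvml.nvmlShutdown()

# Usage sketch: run the next face-detection/recognition job on device f"cuda:{least_loaded_gpu()}".
```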

Development of Vehicle Queue Length Estimation Model Using Deep Learning (딥러닝을 활용한 차량대기길이 추정모형 개발)

  • Lee, Yong-Ju;Hwang, Jae-Seong;Kim, Soo-Hee;Lee, Choul-Ki
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.17 no.2 / pp.39-57 / 2018
  • The purpose of this study was to construct an artificial intelligence model that learns and estimates the relationship between vehicle queue length and link travel time in urban areas. The vehicle queue length estimation consists of three models: one first classifies whether the queue overflows the link, and the others estimate the queue length in the overflow and non-overflow situations. The deep learning models are implemented in TensorFlow. All models use a DNN structure, and the network structure showing the minimum error after training and testing is selected by varying the number of hidden layers and nodes. The accuracy of the link overflow classification model was 98%, and the errors of the queue length estimation models in the non-overflow and overflow situations were less than 15% and less than 5%, respectively. The average error per link was about 12%. Compared with the detector data-based method, the error was reduced by about 39%.
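A minimal sketch of the three-model structure described above (an overflow classifier plus separate queue-length regressors for the non-overflow and overflow cases) in TensorFlow/Keras; the feature set, layer sizes, and routing threshold are assumptions, not the authors' selected configuration.

```python
# Sketch of the three-model structure: one DNN classifies link overflow,
# and two separate DNN regressors estimate queue length for each case.
import tensorflow as tf

def dense_net(out_units, out_activation, n_features=8):
    """Plain fully connected (DNN) network; layer sizes are illustrative assumptions."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),        # e.g. link travel time and signal features
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(out_units, activation=out_activation),
    ])

overflow_clf = dense_net(1, "sigmoid")      # P(link overflow)
queue_normal = dense_net(1, "linear")       # queue length when not overflowing
queue_over = dense_net(1, "linear")         # queue length when overflowing

def estimate_queue(x):
    """Route the sample to the regressor matching the predicted overflow state."""
    if overflow_clf(x).numpy()[0, 0] > 0.5:
        return queue_over(x).numpy()[0, 0]
    return queue_normal(x).numpy()[0, 0]
```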

Automatic hand gesture area extraction and recognition technique using FMCW radar based point cloud and LSTM (FMCW 레이다 기반의 포인트 클라우드와 LSTM을 이용한 자동 핸드 제스처 영역 추출 및 인식 기법)

  • Seung-Tak Ra;Seung-Ho Lee
    • Journal of IKEEE / v.27 no.4 / pp.486-493 / 2023
  • In this paper, we propose a technique for automatic hand-gesture area extraction and recognition using an FMCW radar-based point cloud and an LSTM. The proposed technique is original in two respects. First, unlike methods that use 2D images such as range-Doppler maps as input vectors, the time-series point cloud input is intuitive data that captures, in a coordinate system, the movement occurring in front of the radar over time. Second, because the input vector is small, the deep learning model used for recognition can also be designed to be lightweight. The technique works as follows. Using the distance, speed, and angle information measured by the FMCW radar, a point cloud containing x, y, z coordinates and Doppler velocity is constructed. The hand gesture area is then extracted automatically by identifying the start and end points of the gesture from the Doppler points obtained from the speed information. The time-series point cloud corresponding to the extracted gesture area is finally used to train and evaluate the LSTM deep learning model used in this paper. To evaluate the objective reliability of the proposed technique, experiments comparing MAE with other deep learning models and recognition rate with existing techniques were performed. The time-series point cloud input with the LSTM model achieved an MAE of 0.262 and a recognition rate of 97.5%; the low MAE and high recognition rate demonstrate the efficiency of the proposed technique.
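A sketch of the two stages described above, under stated assumptions: the gesture window is found from per-frame Doppler energy, and the windowed time-series point-cloud features are classified with an LSTM. The frame feature layout, threshold, and model size are illustrative, not the authors' settings.

```python
# Sketch under stated assumptions (not the authors' implementation): find the gesture window from
# per-frame Doppler energy, then classify the windowed point-cloud sequence with an LSTM.
import numpy as np
import tensorflow as tf

def gesture_window(doppler_energy, threshold=0.2):
    """Return (start, end) frame indices where Doppler energy first/last exceeds the threshold."""
    active = np.where(doppler_energy > threshold)[0]
    if active.size == 0:
        return None
    return int(active[0]), int(active[-1]) + 1

def build_lstm(n_frames=32, feat_per_frame=4 * 16, n_classes=6):
    """LSTM over per-frame features (e.g. x, y, z, Doppler for a fixed number of points per frame)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_frames, feat_per_frame)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
```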