• Title/Summary/Keyword: Real Time Object Detection

535 search results

Design of U-healthcare System for Real-time Blood Pressure Monitoring (실시간 혈압 모니터링 u-헬스케어 시스템의 설계)

  • Cho, Byung-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.4 / pp.161-168 / 2018
  • High blood pressure is one of today's major adult diseases, yet conventional blood pressure gauges cannot provide real-time measurement or remote monitoring. A u-healthcare system that monitors blood pressure in real time can make health management more effective. This paper presents an architecture for such a system, consisting of a wrist-type blood pressure monitor, a smartphone, and a u-healthcare server. The analog circuit architecture for pulse wave detection, the core measurement function, and the digital hardware architecture of the wrist-type monitor are also presented. For the software that operates this hardware, a UML-based analysis along with flowcharts and screen designs is provided. The proposed design method is expected to be useful for implementing real-time blood pressure monitoring u-healthcare systems.
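
A minimal sketch of the smartphone's relay role in such an architecture, assuming a hypothetical server endpoint and payload fields; the paper does not specify the transport protocol, so this only illustrates forwarding one wrist-device reading to the u-healthcare server.

```python
import json
import time
import urllib.request

# Hypothetical endpoint and field names; not taken from the paper.
SERVER_URL = "http://uhealth.example.com/api/bp"

def relay_measurement(systolic_mmhg, diastolic_mmhg, pulse_bpm):
    """Forward one wrist-device reading to the monitoring server."""
    payload = {
        "timestamp": time.time(),
        "systolic": systolic_mmhg,
        "diastolic": diastolic_mmhg,
        "pulse": pulse_bpm,
    }
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example: relay a single reading received from the wrist-type monitor.
# relay_measurement(121, 78, 67)
```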

Implementation of Image Semantic Segmentation on Android Device using Deep Learning (딥-러닝을 활용한 안드로이드 플랫폼에서의 이미지 시맨틱 분할 구현)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.19 no.2 / pp.88-91 / 2020
  • Image segmentation is the task of partitioning an image into multiple sets of pixels that share certain characteristics. The objective is to simplify the image into a representation that is more meaningful and easier to analyze. In this paper, we use deep learning to pre-train the model and implement an algorithm that performs image segmentation in real time by extracting frames from the camera stream of an Android device. Based on the open-source DeepLab-v3+ implementation in TensorFlow, some convolution filters are modified to improve real-time operation on the Android platform.
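
A minimal sketch of running a DeepLab-v3+ model converted to TensorFlow Lite on individual frames; the model file name, input resolution, normalization, and output layout are assumptions, not details from the paper.

```python
import numpy as np
import tensorflow as tf

# Assumed: a DeepLab-v3+ model already converted to TFLite with a float RGB
# input and a per-pixel class-logit output.
interpreter = tf.lite.Interpreter(model_path="deeplabv3plus.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def segment_frame(frame_rgb):
    """Run semantic segmentation on one camera frame (H x W x 3, uint8)."""
    h, w = inp["shape"][1], inp["shape"][2]
    resized = tf.image.resize(frame_rgb[None].astype(np.float32), (h, w))
    resized = resized / 127.5 - 1.0          # typical DeepLab normalization
    interpreter.set_tensor(inp["index"], resized.numpy())
    interpreter.invoke()
    logits = interpreter.get_tensor(out["index"])[0]   # (h, w, num_classes)
    return np.argmax(logits, axis=-1)                  # per-pixel class map
```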

Implementation and Validation of Traffic Light Recognition Algorithm for Low-speed Special Purpose Vehicles in an Urban Autonomous Environment (저속 특장차의 도심 자율주행을 위한 신호등 인지 알고리즘 적용 및 검증)

  • Wonsub, Yun;Jongtak, Kim;Myeonggyu, Lee;Wongun, Kim
    • Journal of Auto-vehicle Safety Association / v.14 no.4 / pp.6-15 / 2022
  • In this study, a traffic light recognition algorithm was implemented and validated for low-speed special purpose vehicles in an urban environment. Real-time camera images were processed with the YOLO algorithm. Two methods for improving the accuracy of traffic light recognition were presented, and the second method showed higher accuracy across traffic light types. Among the evaluated models, YOLO v5m proved optimal, achieving an mAP above 98% with higher efficiency. In the future, the traffic light recognition algorithm could serve as a redundant system that maintains platform safety when C-ITS traffic information is erroneous.
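
A minimal sketch of YOLOv5m inference filtered to traffic lights. The stock COCO-pretrained model loaded here is only a stand-in for the paper's custom-trained model (which distinguishes traffic light types); the class-name filter is an assumption.

```python
import torch

# COCO-pretrained YOLOv5m as a stand-in; it has only a generic
# "traffic light" class, not signal states.
model = torch.hub.load("ultralytics/yolov5", "yolov5m", pretrained=True)

def detect_traffic_lights(frame_bgr):
    """Return [x1, y1, x2, y2, confidence] boxes for traffic lights."""
    results = model(frame_bgr[..., ::-1].copy())   # model expects RGB
    det = results.xyxy[0]                          # (N, 6): xyxy, conf, cls
    names = results.names
    boxes = []
    for x1, y1, x2, y2, conf, cls in det.tolist():
        if names[int(cls)] == "traffic light":
            boxes.append([x1, y1, x2, y2, conf])
    return boxes
```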

Automatic indoor progress monitoring using BIM and computer vision

  • Deng, Yichuan;Hong, Hao;Luo, Han;Deng, Hui
    • International conference on construction engineering and project management / 2017.10a / pp.252-259 / 2017
  • Existing manual methods for recording actual progress on a construction site have several drawbacks: they rely heavily on the experience of professional engineers and are labor-intensive, time-consuming, and error-prone. A method integrating computer vision and BIM (Building Information Modeling) is presented for automatic indoor progress monitoring. The method can accurately calculate the engineering quantity of a target component from time-lapse images. First, sample images of the on-site target are collected to train a classifier. After the construction images are processed by edge detection and the classifier, a voting algorithm based on geometry and vector operations segments the target contour. Then, following the camera calibration principle, image pixel coordinates are converted into real-world coordinates, which are corrected using the geometric information in the BIM model. Finally, the actual engineering quantity is calculated.
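
The paper's exact BIM-based correction is not given, but the pixel-to-world step it relies on is standard camera geometry. A minimal sketch, assuming calibrated intrinsics K and extrinsics R, t, back-projects a pixel onto a known plane (e.g., a floor whose height is taken from the BIM model).

```python
import numpy as np

def pixel_to_world_on_plane(u, v, K, R, t, plane_z=0.0):
    """Back-project pixel (u, v) onto the world plane Z = plane_z.

    K       : 3x3 camera intrinsic matrix
    R, t    : rotation (3x3) and translation (3,) mapping world -> camera
    plane_z : height of the target plane, e.g. read from the BIM model
    """
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in camera frame
    Rt = R.T
    cam_origin_w = -Rt @ t                          # camera center in world frame
    ray_w = Rt @ d                                  # ray direction in world frame
    s = (plane_z - cam_origin_w[2]) / ray_w[2]      # intersect with Z = plane_z
    return cam_origin_w + s * ray_w                 # world coordinates (X, Y, Z)
```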

Efficient Object Recognition by Masking Semantic Pixel Difference Region of Vision Snapshot for Lightweight Embedded Systems (경량화된 임베디드 시스템에서 의미론적인 픽셀 분할 마스킹을 이용한 효율적인 영상 객체 인식 기법)

  • Yun, Heuijee;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.6 / pp.813-826 / 2022
  • AI-based image processing has been widely studied in many fields. However, the lighter the embedded board, the harder it becomes to fit image processing algorithms onto it because of their heavy computation. In this paper, we propose a deep learning-based object recognition method for lightweight embedded boards. Candidate regions are first determined with a semantic segmentation network that requires relatively little computation. After masking these regions, a more accurate deep learning detector is applied, combining the efficient neural network (ENet) with You Only Look Once (YOLO) so that object recognition runs in real time on lightweight embedded boards with improved accuracy. This research is expected to be useful for autonomous driving applications, which must be much lighter and cheaper than existing object recognition approaches.
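
A minimal sketch of the masking idea under stated assumptions: a cheap segmentation mask (e.g., from ENet) selects the candidate region, background pixels are zeroed, and the heavier detector (e.g., YOLO) runs only on that crop. `run_segmentation` and `run_detection` are placeholders, not the paper's networks.

```python
import cv2
import numpy as np

def detect_with_semantic_mask(frame, run_segmentation, run_detection):
    """Mask the frame with a cheap segmentation result, then detect on the crop.

    run_segmentation(frame) -> HxW binary mask (1 = candidate object pixels)
    run_detection(image)    -> list of [x1, y1, x2, y2, score, class]
    """
    mask = run_segmentation(frame).astype(np.uint8)
    if mask.sum() == 0:
        return []                                  # nothing to inspect this frame
    masked = cv2.bitwise_and(frame, frame, mask=mask)
    ys, xs = np.nonzero(mask)
    y1, y2, x1, x2 = ys.min(), ys.max(), xs.min(), xs.max()
    crop = masked[y1:y2 + 1, x1:x2 + 1]            # smaller input for the detector
    dets = run_detection(crop)
    # Shift detections back into full-frame coordinates.
    return [[d[0] + x1, d[1] + y1, d[2] + x1, d[3] + y1, d[4], d[5]] for d in dets]
```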

Thermal Imagery-based Object Detection Algorithm for Low-Light Level Nighttime Surveillance System (저조도 야간 감시 시스템을 위한 열영상 기반 객체 검출 알고리즘)

  • Chang, Jeong-Uk;Lin, Chi-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.19 no.3 / pp.129-136 / 2020
  • In this paper, we propose a thermal imagery-based object detection algorithm for low-light nighttime surveillance systems. Features selected by the conventional Haar-like feature selection and AdaBoost algorithms are often vulnerable to noise and suffer from similar or overlapping feature sets across training samples. The proposed method removes noise from the feature set extracted from low-light nighttime surveillance images and uses lightweight extended Haar features with AdaBoost learning to enable fast, efficient real-time feature selection. Experiments use the extended Haar feature points to recognize unpredictable moving objects in nighttime low-light environments. The AdaBoost learning algorithm, taking 800*600 thermal video frames as input, is implemented on the CUDA 9.0 platform for simulation. The results confirmed a detection success rate of about 90% or more and a processing speed about 30% faster than that obtained with histogram equalization on ordinary visible-light images.
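
A minimal sketch of the underlying Haar-like feature plus AdaBoost combination on integral images, using scikit-image and scikit-learn as stand-ins; the paper's extended Haar features, noise filtering, and CUDA implementation are not reproduced here, and the training data layout is assumed.

```python
import numpy as np
from skimage.feature import haar_like_feature
from skimage.transform import integral_image
from sklearn.ensemble import AdaBoostClassifier

def haar_features(patch, feature_type="type-2-x"):
    """Compute Haar-like features over a fixed-size grayscale thermal patch."""
    ii = integral_image(patch)
    return haar_like_feature(ii, 0, 0, patch.shape[1], patch.shape[0],
                             feature_type=feature_type)

def train_detector(patches, labels):
    """patches: equally sized thermal patches; labels: 1 = object, 0 = background."""
    X = np.array([haar_features(p) for p in patches])
    clf = AdaBoostClassifier(n_estimators=100)     # boosted weak classifiers
    clf.fit(X, labels)
    return clf
```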

Fingertip Detection through Atrous Convolution and Grad-CAM (Atrous Convolution과 Grad-CAM을 통한 손 끝 탐지)

  • Noh, Dae-Cheol;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society / v.25 no.5 / pp.11-20 / 2019
  • With the development of deep learning technology, research is actively being carried out on user-friendly interfaces suitable for virtual reality and augmented reality applications. To support an interface based on the user's hands, this paper proposes a deep learning-based fingertip detection method that tracks fingertip coordinates so that users can select virtual objects or write and draw in the air. The approximate fingertip region is first cropped from the input image using Grad-CAM, and a convolutional neural network with atrous convolution is then applied to the cropped image to detect the fingertip location. This method is simpler and easier to implement than existing object detection algorithms and requires no pre-processing to annotate objects. To verify the method, we implemented an air-writing application; with a recognition rate of 81% and a processing time of 76 ms, users could write smoothly in the air without delay, making real-time use of the application possible.
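
A minimal sketch of the two stages, assuming the Grad-CAM heatmap for the hand has already been computed: threshold the heatmap to crop the candidate region, then apply a small network with atrous (dilated) convolutions to regress the fingertip position. The layer sizes, dilation rates, and threshold are illustrative, not the paper's settings.

```python
import numpy as np
import tensorflow as tf

def crop_from_heatmap(image, heatmap, threshold=0.6):
    """Crop the region where the (already computed) Grad-CAM heatmap is strong."""
    mask = heatmap >= threshold * heatmap.max()
    ys, xs = np.nonzero(mask)
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def build_fingertip_net(input_size=96):
    """Small head with atrous convolutions regressing normalized (x, y)."""
    inputs = tf.keras.Input(shape=(input_size, input_size, 3))
    x = tf.keras.layers.Conv2D(32, 3, dilation_rate=1, padding="same",
                               activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(32, 3, dilation_rate=2, padding="same",
                               activation="relu")(x)
    x = tf.keras.layers.Conv2D(64, 3, dilation_rate=4, padding="same",
                               activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(2, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)
```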

Moving Object Detection using Clausius Entropy and Adaptive Gaussian Mixture Model (클라우지우스 엔트로피와 적응적 가우시안 혼합 모델을 이용한 움직임 객체 검출)

  • Park, Jong-Hyun;Lee, Gee-Sang;Toan, Nguyen Dinh;Cho, Wan-Hyun;Park, Soon-Young
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.22-29 / 2010
  • Real-time detection and tracking of moving objects in video sequences is very important for smart surveillance systems. In this paper, we propose a novel moving-object detection algorithm, an entropy-based adaptive Gaussian mixture model (AGMM). First, an increase in entropy generally indicates an increase in complexity, and objects in unstable conditions produce larger entropy variations; applied to motion segmentation, this means pixels with large momentary changes in entropy are more likely to belong to moving objects. Accordingly, we apply Clausius entropy theory to convert pixel values in the image domain into the amount of energy change in the entropy domain. Second, we use an adaptive background subtraction method that models the entropy variations of the background as a mixture of Gaussians to detect moving objects. Experimental results demonstrate that the method detects moving objects effectively and reliably.
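
A minimal sketch of the pipeline under stated assumptions: the frame-to-frame intensity change is treated as dQ and the current intensity as T (so dS ~ dQ / T, in the spirit of Clausius entropy, though the paper's exact mapping is not reproduced), and the resulting entropy-change image is fed to an adaptive Gaussian mixture background model, with OpenCV's MOG2 as a stand-in for the paper's AGMM.

```python
import cv2
import numpy as np

# Adaptive Gaussian mixture background model (stand-in for the paper's AGMM).
mog2 = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
prev_gray = None

def detect_moving_objects(frame_bgr):
    """Return a foreground mask computed on the entropy-change representation."""
    global prev_gray
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev_gray is None:
        prev_gray = gray
    # Clausius-style change: dS ~ dQ / T with dQ = |I_t - I_{t-1}|, T = I_t.
    delta_q = np.abs(gray - prev_gray)
    delta_s = delta_q / (gray + 1.0)               # avoid division by zero
    prev_gray = gray
    entropy_img = cv2.normalize(delta_s, None, 0, 255,
                                cv2.NORM_MINMAX).astype(np.uint8)
    return mog2.apply(entropy_img)                 # foreground = moving pixels
```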

Augmented Reality Framework to Visualize Information about Construction Resources Based on Object Detection (웨어러블 AR 기기를 이용한 객체인식 기반의 건설 현장 정보 시각화 구현)

  • Pham, Hung;Nguyen, Linh;Lee, Yong-Ju;Park, Man-Woo;Song, Eun-Seok
    • Journal of KIBIM / v.11 no.3 / pp.45-54 / 2021
  • Augmented reality (AR) has recently become an attractive technology in the construction industry and can play a critical role in realizing smart construction concepts. AR has great potential to help construction workers access digitized design and construction information more flexibly and efficiently. Although several AR applications have been introduced to enhance on-site and off-site tasks, few are used in actual construction fields. This paper proposes a new AR framework that gives on-site managers easy access to information about construction resources such as workers and equipment. The framework records video with the camera of a wearable AR device and streams it to a server equipped with high-performance processors, which runs an object detection algorithm on the streamed video in real time. The detection results are sent back to the AR device so that menu buttons are overlaid on the detected objects in the user's view. A user can access information about a worker or piece of equipment in view by touching the menu button visualized on that resource. The paper details the implementation of the framework components that handle data transmission between the AR device and the server, and thoroughly discusses accompanying issues and the feasibility of the proposed framework.
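
The paper does not specify the transmission protocol, so the following is only a minimal sketch of the server side: an HTTP endpoint that accepts one JPEG frame streamed from the AR device, runs a detector (a placeholder here), and returns the boxes as JSON for the device to anchor its menu buttons. Flask and the request format are assumptions.

```python
import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_detector(image_bgr):
    """Placeholder for the server-side detector of workers and equipment."""
    return [{"label": "worker", "box": [100, 120, 220, 360], "score": 0.9}]

@app.route("/detect", methods=["POST"])
def detect():
    # The AR device streams one JPEG-encoded frame per request (assumed format).
    frame = cv2.imdecode(np.frombuffer(request.data, np.uint8), cv2.IMREAD_COLOR)
    detections = run_detector(frame)
    # The device uses these boxes to anchor touchable menu buttons in the view.
    return jsonify(detections)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```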

Object Tracking Method using Deep Learning and Kalman Filter (딥 러닝 및 칼만 필터를 이용한 객체 추적 방법)

  • Kim, Gicheol;Son, Sohee;Kim, Minseop;Jeon, Jinwoo;Lee, Injae;Cha, Jihun;Choi, Haechul
    • Journal of Broadcast Engineering / v.24 no.3 / pp.495-505 / 2019
  • Typical deep learning algorithms include CNNs (Convolutional Neural Networks), mainly used for image recognition, and RNNs (Recurrent Neural Networks), mainly used for speech recognition and natural language processing. Among these, CNNs automatically learn features from data, including the filters that generate feature maps, and their excellent performance has made them the mainstream approach to image recognition. Various algorithms such as R-CNN have since been introduced to improve CNN-based object detection, and more recently algorithms such as YOLO (You Only Look Once) and SSD (Single Shot Multi-box Detector) have been proposed. However, because these deep learning-based detection algorithms operate on still images, stable object tracking and detection in video requires a separate tracking capability. This paper therefore proposes a method that combines a Kalman filter with a deep learning-based detection network to improve object tracking and detection performance in video. The detection network is YOLO v2, which is capable of real-time processing; the proposed method achieved a 7.7% IoU improvement over the baseline YOLO v2 network and a processing speed of 20 fps on FHD images.
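
One common way of pairing a detector with a Kalman filter is sketched below: a constant-velocity filter over the detection's center point predicts the position every frame and is corrected whenever the detector fires. The state model and noise values are illustrative assumptions, not the paper's exact configuration.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter over the detection's center point.
kf = cv2.KalmanFilter(4, 2)  # state: [x, y, vx, vy], measurement: [x, y]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_step(detection_center=None):
    """Predict the next position; correct it when the detector fires this frame."""
    predicted = kf.predict()
    if detection_center is not None:
        measurement = np.array(detection_center, dtype=np.float32).reshape(2, 1)
        kf.correct(measurement)
    return float(predicted[0, 0]), float(predicted[1, 0])
```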