• Title/Summary/Keyword: object detection system


Digital Twin and Visual Object Tracking using Deep Reinforcement Learning (심층 강화학습을 이용한 디지털트윈 및 시각적 객체 추적)

  • Park, Jin Hyeok;Farkhodov, Khurshedjon;Choi, Piljoo;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.145-156 / 2022
  • Object tracking models increasingly have to cope with unpredictable environments, which demands flexible, multifunctional algorithms running on hardware platforms. In this paper, we build a virtual city environment using AirSim (Aerial Informatics and Robotics Simulation, CityEnvironment) and apply a DQN (Deep Q-Network) deep reinforcement learning model within it. The proposed object tracking DQN observes the environment by taking the consecutive images rendered by the virtual simulation system as input and uses them to control a virtual drone. The deep reinforcement learning model is pre-trained on existing sets of consecutive images; because those image sets are captured from real environments and objects, the virtual environment and the moving objects to be tracked are implemented in 3D.
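
As a rough illustration of the kind of model the abstract above describes, the sketch below defines a small DQN that maps a stack of simulation frames to Q-values over a discrete set of drone actions. The architecture, frame size, and action set (`TrackingDQN`, `NUM_ACTIONS`, `FRAME_STACK`) are assumptions for illustration, not the authors' network.

```python
# Minimal sketch of a DQN that maps image observations to discrete drone
# control actions. Architecture, action set, and hyperparameters are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

NUM_ACTIONS = 6          # e.g. forward/back/left/right/up/down (assumed)
FRAME_STACK = 4          # consecutive frames fed as channels (assumed)

class TrackingDQN(nn.Module):
    def __init__(self, num_actions: int = NUM_ACTIONS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(FRAME_STACK, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),   # one Q-value per action
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Greedy action selection from an 84x84 grayscale frame stack.
obs = torch.zeros(1, FRAME_STACK, 84, 84)
q_values = TrackingDQN()(obs)
action = int(q_values.argmax(dim=1))
```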

Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment (구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식)

  • Kim, Donghoon;Lee, Donghwa;Myung, Hyun;Choi, Hyun-Taek
    • Journal of Institute of Control, Robotics and Systems / v.19 no.8 / pp.667-675 / 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to be able to successfully perform autonomous navigation, but the available sensors for accurate localization are limited. A vision sensor among the available sensors is very useful for performing short range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique to be applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, a weighted correlation coefficient-based template matching and a color-based image segmentation method are proposed to improve on the conventional approach. In the localization step, in order to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated by experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
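
The weighted template matching mentioned above can be pictured with a small sketch like the one below: a weight mask emphasizes reliable template pixels when computing a normalized correlation score. The weighting scheme and helper names (`weighted_ncc`, `match`) are illustrative assumptions; the paper's exact formulation may differ.

```python
# Illustrative sketch of weighted correlation-coefficient template matching
# for landmark detection. The weight mask and weighting scheme are assumed.
import numpy as np

def weighted_ncc(patch: np.ndarray, template: np.ndarray, w: np.ndarray) -> float:
    """Weighted normalized correlation coefficient between a patch and a template."""
    wsum = w.sum()
    p = patch - (w * patch).sum() / wsum      # weighted mean removal
    t = template - (w * template).sum() / wsum
    num = (w * p * t).sum()
    den = np.sqrt((w * p * p).sum() * (w * t * t).sum()) + 1e-12
    return float(num / den)

def match(image: np.ndarray, template: np.ndarray, weights: np.ndarray):
    """Slide the template over the image and return the best-scoring location."""
    H, W = image.shape
    h, w = template.shape
    best, best_xy = -1.0, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            score = weighted_ncc(image[y:y+h, x:x+w], template, weights)
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best
```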

Implementation of a Single Human Detection Algorithm for Video Digital Door Lock (영상디지털도어록용 단일 사람 검출 알고리즘 구현)

  • Shin, Seung-Hwan;Lee, Sang-Rak;Choi, Han-Go
    • The KIPS Transactions:PartB / v.19B no.2 / pp.127-134 / 2012
  • A video digital door lock (VDDL) system detects people who approach the door and captures their image. A key design consideration is that current consumption must be minimized by using a fast human detection algorithm, because the system runs on batteries. Since the door lock captures images through a fixed camera, detecting a person against the background image yields a high degree of reliability. This paper presents a single-human detection algorithm suitable for VDDL that meets these requirements: it detects a moving object in the image and then determines whether the object is a person using image processing. The proposed algorithm consists of two steps. First, it detects the candidate human region using both the background image and skin color information. Second, it verifies that the region is a person using a polar histogram based on the proportions of the human body. The proposed algorithm is implemented in a VDDL system, and its performance is verified through experiments.
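
A minimal sketch of the first step (background image plus skin color) might look as follows; the thresholds and HSV skin range are assumed values, and the polar-histogram verification step is omitted.

```python
# Minimal sketch: detect a moving region against a fixed background, then
# keep only skin-colored pixels inside it. Thresholds and the HSV skin-color
# range are illustrative assumptions.
import cv2
import numpy as np

def detect_person_region(frame_bgr: np.ndarray, background_bgr: np.ndarray):
    # 1) Background subtraction on the fixed-camera image.
    diff = cv2.absdiff(frame_bgr, background_bgr)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, moving = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

    # 2) Rough skin-color mask in HSV (assumed range).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))

    # Combine: moving AND skin-colored pixels, then take the largest blob.
    mask = cv2.bitwise_and(moving, skin)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)   # (x, y, w, h) of the candidate person
```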

Development of Obstacle Recognition System Using Ultrasonic Sensor (초음파 센서를 이용한 장애물 인식 장치 개발)

  • Yu, Byeonggu;Kwon, Sunwook;Kim, Jusung
    • Journal of Korea Society of Industrial Information Systems / v.22 no.5 / pp.25-30 / 2017
  • In this paper, we propose a low-cost obstacle recognition system based on an ultrasonic sensor. The developed system can be used to aid visually impaired people: the presence of an obstacle is signaled to the user through an embedded vibration motor, and the delay between recognition and notification indicates the distance to the obstacle. A pulsed ultrasonic signal controlled by an MCU is transmitted, and the pulse reflected from an obstacle tells the system both that an obstacle exists and how far away it is. The pulse is sent repeatedly to improve detection accuracy. Under normal walking conditions, the developed apparatus provides a detection angle of 30 degrees and a detection range of 2 cm to 30 cm.
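
The distance measurement behind such a sensor reduces to a time-of-flight calculation, sketched below; the pulse timing interface is not specified in the abstract and is assumed here.

```python
# Simple sketch of the time-of-flight calculation behind an ultrasonic range
# finder: the MCU emits a pulse, waits for the echo, and converts the
# round-trip time to distance.
SPEED_OF_SOUND_M_S = 343.0   # at roughly 20 degrees C

def distance_from_echo(round_trip_s: float) -> float:
    """Distance in metres from the echo's round-trip time in seconds."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

# A 1.75 ms round trip corresponds to roughly 30 cm, the upper end of the
# detection range reported above.
print(distance_from_echo(1.75e-3))   # ~0.30 m
```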

Real-Time Face Tracking System using Adaptive Face Detector and Kalman Filter (적응적 얼굴 검출기와 칼만 필터를 이용한 실시간 얼굴 추적 시스템)

  • Kim, Jong-Ho;Kim, Sang-Kyoon;Shin, Bum-Joo
    • Journal of Information Technology Services / v.6 no.3 / pp.241-249 / 2007
  • This paper describes a real-time face tracking system using an effective detector and a Kalman filter. In the proposed system, an image is separated into background and object using a face color model that is updated in real time, which makes face detection more effective. Face features are extracted using five types of simple Haar-like features. The extracted features are re-expressed with Principal Component Analysis (PCA), and the resulting principal components are fed to a Support Vector Machine (SVM) that classifies faces and non-faces. The moving face is tracked with a Kalman filter, which combines the static information of the detected faces with the dynamic information of changes between the previous and current frames. The system sets an initial skin color and updates the skin-color region in real time as it moves, which makes it possible to remove background regions whose color is similar to skin. In addition, by reducing the candidate face region with the skin color, performance improves by up to 50% compared with extracting features from the whole image.
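
The Kalman-filter tracking step can be illustrated with a constant-velocity model over the detected face centre, as in the sketch below; the noise covariances are assumed values, not the paper's settings.

```python
# Illustrative sketch of Kalman-filter face tracking: a constant-velocity
# model over the face centre (x, y), predicted each frame and corrected
# whenever the detector fires. Noise values are assumptions.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                    # state: x, y, vx, vy; meas: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track(detection_xy):
    """Predict the face centre; correct with the detector output if available."""
    predicted = kf.predict()
    if detection_xy is not None:
        meas = np.array([[detection_xy[0]], [detection_xy[1]]], dtype=np.float32)
        kf.correct(meas)
    return float(predicted[0]), float(predicted[1])
```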

Vision-based hand Gesture Detection and Tracking System (비전 기반의 손동작 검출 및 추적 시스템)

  • Park Ho-Sik;Bae Cheol-soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.12C / pp.1175-1180 / 2005
  • We present a vision-based hand gesture detection and tracking system. Most conventional hand gesture recognition systems rely on simple hand detection methods such as background subtraction under assumed static observation conditions, which are not robust to camera motion, illumination changes, and so on. We therefore propose a statistical method that detects and recognizes hand regions in images using their geometrical structure. Our hand tracking system also employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. In our experiments, the proposed method achieves a recognition rate of 99.28%, an improvement of 3.91% over the conventional appearance-based method.
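
The multi-camera aspect can be pictured with a small fusion sketch: each camera proposes a hand detection, and the most confident view is kept so that occlusion in one camera does not break tracking. The `Detection` type and its confidence field are hypothetical, not the paper's data structures.

```python
# Loose sketch of fusing per-camera hand detections by keeping the most
# confident view. Detection objects and confidences are hypothetical.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Detection:
    camera_id: int
    bbox: Tuple[int, int, int, int]   # (x, y, w, h) in that camera's image
    confidence: float

def fuse_views(detections: List[Optional[Detection]]) -> Optional[Detection]:
    """Return the most confident hand detection across all cameras, if any."""
    valid = [d for d in detections if d is not None]
    return max(valid, key=lambda d: d.confidence) if valid else None
```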

A Real-time Face Recognition System using Fast Face Detection (빠른 얼굴 검출을 이용한 실시간 얼굴 인식 시스템)

  • Lee Ho-Geun;Jung Sung-Tae
    • Journal of KIISE:Software and Applications / v.32 no.12 / pp.1247-1259 / 2005
  • This paper proposes a real-time face recognition system that detects multiple faces in low-resolution video such as web-camera video. The system consists of a face detection step and a face classification step. First, it finds face region candidates using an AdaBoost-based object detection method, which is fast and robust, and generates a reduced feature vector for each candidate region using principal component analysis. Second, face classification is performed with principal component analysis and a multi-class SVM. Experimental results show that the proposed method achieves real-time face detection and recognition on low-resolution video. Additionally, we implement an auto-tracking face recognition system using a pan-tilt web camera and a radio-controlled on/off digital door-lock system based on the face recognition system.
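
A loose sketch of this two-stage pipeline (fast AdaBoost candidate detection followed by PCA and SVM classification) is given below; the cascade file, PCA size, patch size, and SVM settings are assumptions, and the PCA and SVM are presumed to have been trained offline.

```python
# Hedged sketch: an AdaBoost cascade proposes face candidates, then
# PCA-reduced features are classified with an SVM. Settings are assumed.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
pca = PCA(n_components=50)     # assumed; must be fitted on training patches
svm = SVC(kernel="rbf")        # assumed; trained offline on face / non-face data

def detect_and_classify(gray: np.ndarray):
    """Return candidate boxes the SVM accepts as faces."""
    candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    accepted = []
    for (x, y, w, h) in candidates:
        patch = cv2.resize(gray[y:y+h, x:x+w], (32, 32)).reshape(1, -1)
        feature = pca.transform(patch)          # assumes pca was fitted already
        if svm.predict(feature)[0] == 1:        # 1 = face (assumed label)
            accepted.append((x, y, w, h))
    return accepted
```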

In-Vehicle AR-HUD System to Provide Driving-Safety Information

  • Park, Hye Sun;Park, Min Woo;Won, Kwang Hee;Kim, Kyong-Ho;Jung, Soon Ki
    • ETRI Journal / v.35 no.6 / pp.1038-1047 / 2013
  • Augmented reality (AR) is currently being applied actively to commercial products, and various types of intelligent AR systems combining both the Global Positioning System and computer-vision technologies are being developed and commercialized. This paper suggests an in-vehicle head-up display (HUD) system that is combined with AR technology. The proposed system recognizes driving-safety information and offers it to the driver. Unlike existing HUD systems, the system displays information registered to the driver's view and is developed for the robust recognition of obstacles under bad weather conditions. The system is composed of four modules: a ground obstacle detection module, an object decision module, an object recognition module, and a display module. The recognition ratio of the driving-safety information obtained by the proposed AR-HUD system is about 73%, and the system has a recognition speed of about 15 fps for both vehicles and pedestrians.
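
The four-module structure can be sketched as a simple per-frame pipeline; the interfaces and placeholder logic below are hypothetical and only show how the stages could be chained.

```python
# Structural sketch of a four-module AR-HUD pipeline: detection, decision,
# recognition, display. All placeholder logic is hypothetical.
from typing import List, Tuple

Box = Tuple[int, int, int, int]          # x, y, w, h in image coordinates

def detect_ground_obstacles(frame) -> List[Box]:
    """Ground obstacle detection module: propose candidate regions."""
    return []                            # placeholder for a real detector

def decide_objects(candidates: List[Box]) -> List[Box]:
    """Object decision module: discard implausible candidates (e.g. too small)."""
    return [b for b in candidates if b[2] * b[3] > 400]

def recognize_objects(objects: List[Box]) -> List[Tuple[Box, str]]:
    """Object recognition module: label each region (vehicle, pedestrian, ...)."""
    return [(b, "vehicle") for b in objects]   # placeholder classifier

def display_on_hud(labelled: List[Tuple[Box, str]]) -> None:
    """Display module: overlay safety cues registered to the driver's view."""
    for box, label in labelled:
        print(label, box)

def process_frame(frame) -> None:
    display_on_hud(recognize_objects(decide_objects(detect_ground_obstacles(frame))))
```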

Development of Infrared Telemeter for Autonomous Orchard Vehicle (과수원용 차량의 자율주행을 위한 적외선 측거 장치개발)

  • 장익주;김태한;이상민
    • Journal of Biosystems Engineering / v.25 no.2 / pp.131-140 / 2000
  • Spraying is one of the most essential operations in orchard management, and it is also hazardous to the human body. Automatic, unmanned spraying therefore requires an autonomous travelling vehicle. In this study, an infrared telemeter was developed that can detect trunks and obstacles and measure their distance and direction from a vehicle travelling through the orchard. The telemeter system consists of two infrared LED transmitters and receivers, a beam scanning device for continuous object detection, two rotary encoders for angle measurement, and a beam level controller for uneven soil surfaces. The detected distance and direction signals are sent through an I/O board to a personal computer, which displays the angular and distance measurements. In a field test in an apple orchard, the system detected objects up to 10 m away at a transmitted beam intensity of 12 V; however, a beam intensity of 7 V was recommended at the 10 m range because of the negative effect on the human body at 12 V. The error rate of the system was 0.92% when measured distances were compared with actual distances, which is small enough for practical use. The developed telemeter is an important component of an autonomous travelling vehicle, providing real-time object recognition, and a direction control system could be built on top of it. The system is expected to contribute greatly to the development of autonomous farm vehicles.
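
The distance-and-direction output of such a scanning telemeter can be illustrated with a small sketch that converts an encoder count into a bearing and pairs it with a measured range; the encoder resolution and function names are hypothetical placeholders.

```python
# Small sketch: the rotary encoder count gives the beam direction, and the
# receiver gives the range; each hit becomes a (distance, bearing) pair.
# Counts-per-revolution is an assumed placeholder value.
import math

COUNTS_PER_REV = 1024          # encoder resolution (assumed)

def bearing_deg(encoder_count: int) -> float:
    """Beam direction in degrees from the encoder count."""
    return (encoder_count % COUNTS_PER_REV) * 360.0 / COUNTS_PER_REV

def to_cartesian(distance_m: float, bearing: float):
    """Convert a (distance, bearing) hit into x/y coordinates ahead of the vehicle."""
    rad = math.radians(bearing)
    return distance_m * math.cos(rad), distance_m * math.sin(rad)
```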

Fast, Accurate Vehicle Detection and Distance Estimation

  • Ma, QuanMeng;Jiang, Guang;Lai, DianZhi;Cui, Hua;Song, Huansheng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.2 / pp.610-630 / 2020
  • A large number of people suffer from traffic accidents each year, so traffic safety receives growing attention. Traditional methods use laser sensors to measure vehicle distance, at a very high cost. In this paper, we propose a deep learning based method to calculate vehicle distance with a monocular camera. Our method is inexpensive and convenient to deploy on mobile platforms. This paper makes two contributions. First, based on Light-Head RCNN, we propose a new vehicle detection framework called Light-Car Detection that can run on mobile platforms. Second, the planar homography of projective geometry is used to calculate the distance between the camera and the vehicles ahead. The results show that our detection system achieves a 13 FPS detection speed and 60.0% mAP on the Adreno 530 GPU of a Samsung Galaxy S7 while requiring only 7.1 MB of storage. Compared with existing methods, the proposed method achieves better performance.
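
The planar-homography distance step can be sketched as follows: a homography from image pixels to road-plane coordinates is estimated from a few calibration correspondences, and the bottom centre of each detected vehicle box is mapped onto the road plane. The calibration points below are illustrative assumptions, not the paper's calibration.

```python
# Hedged sketch of monocular distance estimation via a road-plane homography.
# Calibration correspondences are made-up example values.
import cv2
import numpy as np

# Four image points and their known positions on the road plane (metres).
image_pts = np.array([[420, 700], [860, 700], [600, 520], [680, 520]], dtype=np.float32)
plane_pts = np.array([[-1.5, 5.0], [1.5, 5.0], [-1.5, 20.0], [1.5, 20.0]], dtype=np.float32)
H, _ = cv2.findHomography(image_pts, plane_pts)

def distance_to_vehicle(bbox):
    """Distance (metres) to a detected vehicle given its (x, y, w, h) box."""
    x, y, w, h = bbox
    foot = np.array([[[x + w / 2.0, y + h]]], dtype=np.float32)  # bottom centre
    px, py = cv2.perspectiveTransform(foot, H)[0, 0]
    return float(np.hypot(px, py))
```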