• Title/Summary/Keyword: camera vision


Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun;Park, Sung-Kee;Kim, Mun-Sang;Park, Mig-Non
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2003.10a
    • /
    • pp.783-788
    • /
    • 2003
  • In this paper, we present a new approach to detecting and recognizing human faces in images from a vision camera mounted on a mobile robot platform. Because the camera platform moves, the acquired facial images are small and vary in pose, so the algorithm must cope with these constraints while still detecting and recognizing faces in near real time. In the detection step, a coarse-to-fine strategy is used. First, a region boundary containing the face is roughly located by dual ellipse templates of facial color, and within this region the locations of the three main facial features, the two eyes and the mouth, are estimated. To this end, simplified facial feature maps based on characteristic chrominance are constructed, and candidate pixels are segmented into eye or mouth pixel groups. These candidate facial features are then verified by checking whether the length and orientation of feature pairs fit face geometry. In the recognition step, a pseudo-convex hull area of the gray face image is defined that includes the feature triangle connecting the two eyes and the mouth. A random lattice line set is composed and laid over this convex hull area, and the 2D appearance of the area is represented from it. Through these procedures, facial information is obtained for the detected face, and the face DB images are processed in the same way for each person class. Based on the facial information of these areas, a distance measure over matched lattice lines is calculated, and the face image is recognized using this measure as a classifier. The proposed detection and recognition algorithms overcome the constraints of a previous approach [15], make real-time face detection and recognition possible, and guarantee correct recognition regardless of moderate pose variation. Their usefulness in a mobile robot application is demonstrated.
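
As a rough illustration of the coarse colour gating described above, the sketch below segments skin-chrominance blobs as face candidates with Python/OpenCV. It substitutes a standard YCrCb skin band for the paper's dual ellipse templates and chrominance feature maps; the thresholds, blob filters, and input path are illustrative assumptions, not the authors' method.

```python
import cv2
import numpy as np

def coarse_face_regions(bgr):
    """Return bounding boxes of skin-coloured blobs as coarse face candidates."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Widely used skin chrominance band; tune per camera and illumination.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Keep blobs that are large enough and roughly face-proportioned.
        if w * h > 400 and 0.6 < h / float(w) < 2.0:
            boxes.append((x, y, w, h))
    return boxes

frame = cv2.imread("robot_view.jpg")  # assumed test image from the robot camera
for (x, y, w, h) in coarse_face_regions(frame):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```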


Online Monitoring System based notifications on Mobile devices with Kinect V2 (키넥트와 모바일 장치 알림 기반 온라인 모니터링 시스템)

  • Niyonsaba, Eric;Jang, Jong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.6
    • /
    • pp.1183-1188
    • /
    • 2016
  • The Kinect sensor version 2 is a camera released by Microsoft as a computer vision device and natural user interface for game consoles such as the Xbox One. It acquires color images, depth images, audio input, and skeletal data at a high frame rate. In this paper, we present a depth-image-based surveillance system for a defined area within the Kinect's field of view. Using a computer vision library (Emgu CV), any object detected in the target area is tracked, and the Kinect camera captures an RGB image of the scene and sends it to a database server. A mobile application on the Android platform was developed to notify the user that the Kinect has sensed unusual motion in the target region and to display the RGB image of the scene. The user receives the notification in real time and can react promptly, for example when valuables are kept in the monitored area or a restricted zone is involved.
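
A minimal sketch of the depth-difference monitoring idea, assuming depth frames arrive as NumPy arrays in millimetres; the acquisition call and the upload/notification helpers are hypothetical placeholders, not Kinect SDK or Emgu CV API.

```python
import numpy as np

reference = None  # depth snapshot of the empty target area

def motion_in_zone(depth_mm, zone, thresh_mm=80, min_pixels=500):
    """Flag an object whose depth deviates from the stored background in zone."""
    global reference
    x, y, w, h = zone
    roi = depth_mm[y:y + h, x:x + w].astype(np.int32)
    if reference is None:
        reference = roi.copy()  # treat the first frame as the empty scene
        return False
    changed = np.count_nonzero(np.abs(roi - reference) > thresh_mm)
    return changed > min_pixels

# depth = get_depth_frame()                    # hypothetical Kinect v2 call
# if motion_in_zone(depth, zone=(100, 80, 200, 150)):
#     upload_rgb_and_notify()                  # hypothetical: save RGB, push alert
```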

Vision-based hybrid 6-DOF displacement estimation for precast concrete member assembly

  • Choi, Suyoung;Myeong, Wancheol;Jeong, Yonghun;Myung, Hyun
    • Smart Structures and Systems
    • /
    • v.20 no.4
    • /
    • pp.397-413
    • /
    • 2017
  • Precast concrete (PC) members are currently employed in general construction or for partial replacement to reduce the construction period. Since assembly work in PC construction requires connecting PC members accurately, measuring the 6-DOF (degree-of-freedom) relative displacement is essential. Systems based on multiple planar markers and cameras can monitor the 6-DOF relative displacement of PC members. Conventional methods such as direct linear transformation (DLT) for homography estimation, applied to calculate the 6-DOF relative displacement between camera and marker, have several major problems: when the marker is partially hidden, the DLT method cannot be applied at all, and when the marker images are blurred, its estimation error increases. To solve these problems, a hybrid method combining the advantages of DLT and MCL (Monte Carlo localization) is proposed. The method evaluates the 6-DOF relative displacement more accurately than either DLT or MCL used alone. Each subsystem captures an image of a marker and extracts its subpixel coordinates, and the data are transferred to a main system via a wireless communication network. In the main system, the data from each subsystem are used for 3D visualization, and the real-time movements of the PC members are displayed on a tablet PC. To prove its feasibility, the hybrid method is compared with the DLT method and MCL in real experiments.
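
The marker-to-camera pose computation the paper builds on can be sketched with OpenCV's PnP solver standing in for a hand-rolled DLT; the marker size, corner coordinates, and camera intrinsics below are illustrative assumptions.

```python
import cv2
import numpy as np

# Corners of a 10 cm square marker in its own plane (metres); size is assumed.
marker = np.array([[-0.05, -0.05, 0], [0.05, -0.05, 0],
                   [0.05, 0.05, 0], [-0.05, 0.05, 0]], dtype=np.float64)
# Subpixel image corners as a detector might report them (illustrative values).
image_pts = np.array([[310.2, 240.7], [410.9, 238.1],
                      [412.4, 339.6], [308.8, 341.0]], dtype=np.float64)
# Assumed pinhole intrinsics; a real system would use calibrated values.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(marker, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)               # 3 rotational DOF as a matrix
    print("translation (m):", tvec.ravel())  # the 3 translational DOF
```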

Real time detection and recognition of traffic lights using component subtraction and detection masks (성분차 색분할과 검출마스크를 통한 실시간 교통신호등 검출과 인식)

  • Jeong, Jun-Ik;Rho, Do-Whan
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.2 s.308
    • /
    • pp.65-72
    • /
    • 2006
  • A traffic light detection and recognition system is an essential module of a driver warning and assistance system. This paper presents a color vision-based method for real-time detection and recognition of traffic lights. The method has four main modules: a signal light detection module, a traffic light boundary candidate determination module, a boundary detection module, and a recognition module. In the signal light detection and boundary detection modules, color thresholding, the subtraction value of saturation and intensity in HSI color space, and a detection probability mask for lights are used to segment the image. In the boundary candidate determination module, a detection mask for the traffic light boundary is proposed. In the recognition module, an AND operator is applied to the results of the two detection modules. The input to the method is a color image sequence taken from a moving vehicle with a color video camera; the recorded image data were transformed by the camera's zoom function, and detection and recognition results are presented for these zoomed image sequences.
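
One plausible reading of the saturation/intensity subtraction cue, sketched in OpenCV's HSV space with V standing in for HSI intensity: lit lamps are both saturated and bright, so blobs where S and V are high and close together are kept. All thresholds are guesses, and the paper's probability and boundary masks are not reproduced.

```python
import cv2
import numpy as np

def candidate_lights(bgr):
    """Segment bright, highly saturated blobs as lit-lamp candidates."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    gap = np.abs(s.astype(np.int16) - v.astype(np.int16))  # |S - V| cue
    mask = ((s > 120) & (v > 150) & (gap < 80)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Small-area filter removes sensor noise; real use would also check hue.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20]
```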

A Real-time Augmented Reality System using Hand Geometric Characteristics based on Computer Vision (손의 기하학적인 특성을 적용한 실시간 비전 기반 증강현실 시스템)

  • Choi, Hee-Sun;Jung, Da-Un;Choi, Jong-Soo
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.3
    • /
    • pp.323-335
    • /
    • 2012
  • In this paper, we propose an AR (augmented reality) system based on computer vision that uses the user's bare hand. To register a virtual object on the real input image, it is important to detect and track correct feature points. Marker-based AR systems are stable, but they cannot register the virtual object once the marker leaves the camera's field of view, which tends to restrict how users can manipulate the virtual object. In contrast, our system detects fingertips as fiducial features using an adaptive ellipse fitting method that considers the geometric characteristics of the hand. It registers the virtual object stably by tracking fingertip movement and determining the shortest distance from the palm center. We verified that the accuracy of fingertip detection is over 82.0%, and that fingertip ordering and tracking have errors of only 1.8% and 2.0%, respectively. We show that this system can replace marker-based systems by effectively tracking the camera projection matrix, achieving stable augmentation of the virtual object.
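
A rough sketch of fingertip candidate extraction from a binary hand mask, using the distance-transform maximum as the palm centre and convex-hull points far from it as fingertips; this stands in for, but does not reproduce, the paper's adaptive ellipse fitting.

```python
import cv2
import numpy as np

def fingertips(mask, min_ratio=1.6):
    """mask: skin-segmented binary hand image (uint8, 0/255), assumed given."""
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, palm_r, _, palm_c = cv2.minMaxLoc(dist)  # palm centre and inscribed radius
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand).reshape(-1, 2)
    # Hull points well outside the palm circle are fingertip candidates;
    # nearby candidates should be clustered in practice (one per finger).
    tips = [tuple(p) for p in hull
            if np.hypot(*(p - np.array(palm_c))) > min_ratio * palm_r]
    return palm_c, tips
```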

Parking Lot Vehicle Counting Using a Deep Convolutional Neural Network (Deep Convolutional Neural Network를 이용한 주차장 차량 계수 시스템)

  • Lim, Kuoy Suong;Kwon, Jang woo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.17 no.5
    • /
    • pp.173-187
    • /
    • 2018
  • This paper proposes a computer vision and deep learning-based technique for a surveillance camera system that counts vehicles as part of a parking lot management system. We applied the You Only Look Once version 2 (YOLOv2) detector and propose a deep convolutional neural network (CNN) based on YOLOv2 with a different architecture and two models. The effectiveness of the proposed architecture is demonstrated using Udacity's publicly available self-driving-car datasets. After training and testing, our proposed architecture with the new models obtains 64.30% mean average precision on the detection of cars, trucks, and pedestrians, a better performance than the original YOLOv2 architecture, which achieved only 47.89%.
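
Detector-based counting of this kind can be sketched with OpenCV's DNN module running stock YOLOv2; the cfg/weights paths, class index, and thresholds are assumptions, and the paper's modified architecture is not reproduced.

```python
import cv2
import numpy as np

# Assumed local copies of the stock Darknet YOLOv2 files.
net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")
CAR_CLASS = 2  # "car" in the COCO label ordering (assumption)

def count_cars(frame, conf_thresh=0.5):
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    count = 0
    for out in outputs:
        for det in out:  # det = [cx, cy, w, h, objectness, class scores...]
            scores = det[5:]
            if np.argmax(scores) == CAR_CLASS and det[4] * scores[CAR_CLASS] > conf_thresh:
                count += 1
    return count  # apply non-maximum suppression first in real use
```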

A study on counting number of passengers by moving object detection (이동 객체 검출을 통한 승객 인원 개수에 대한 연구)

  • Yoo, Sang-Hyun
    • Journal of Internet Computing and Services
    • /
    • v.21 no.2
    • /
    • pp.9-18
    • /
    • 2020
  • In the field of image processing, methods have been studied for detecting and counting passengers as moving objects when they board and exit a bus. Among these, deep learning, one of the artificial intelligence techniques, is used; another approach detects objects using a stereo vision camera. However, these techniques require expensive hardware because of the computational complexity of object detection, while most deployed video equipment has much lower processing power. Counting bus passengers therefore calls for an image processing technique with relatively low computational cost that suits a wide range of equipment. In this paper, we propose a technique that efficiently counts bus passengers by detecting object contours through background subtraction, which is suitable for low-cost equipment. Experiments showed that passengers were counted with approximately 70% accuracy on lower-end machines compared with those equipped with a stereo vision camera.
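
A minimal sketch of the low-cost pipeline argued for above: MOG2 background subtraction, contour filtering, and a naive virtual counting line. The clip path, area threshold, and line position are assumptions, and real use needs per-object tracking to avoid double counts.

```python
import cv2

cap = cv2.VideoCapture("bus_door.mp4")  # assumed clip of the door area
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
count, line_y = 0, 240                  # virtual counting line (assumed row)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 1500:   # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            if y <= line_y <= y + h:    # blob straddles the counting line
                count += 1
cap.release()
print("passenger events:", count)
```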

Vision-based Walking Guidance System Using Top-view Transform and Beam-ray Model (탑-뷰 변환과 빔-레이 모델을 이용한 영상기반 보행 안내 시스템)

  • Lin, Qing;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.12
    • /
    • pp.93-102
    • /
    • 2011
  • This paper presents a walking guidance system for blind pedestrians in outdoor environments that uses just one camera. Unlike many existing travel-aid systems that rely on stereo vision, the proposed system obtains the necessary information about the road environment from a single camera fixed at the user's belly. To achieve this, a top-view image of the road is used, on which obstacles are detected by first extracting local extreme points and then verifying them with a polar edge histogram. Meanwhile, user motion is estimated using optical flow in an area close to the user. Based on this information extracted from the image domain, an audio message generation scheme is proposed that delivers guidance instructions to the blind user via synthetic voice. Experiments with several sidewalk video clips show that the proposed walking guidance system provides useful guidance instructions in typical sidewalk environments.
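
The two image-domain steps named above, the top-view transform and motion estimation near the user, might look like the following; the four road-plane correspondences and the strip size are assumed values, and the beam-ray obstacle verification is not reproduced.

```python
import cv2
import numpy as np

# Four road-plane points in the input image and their top-view targets (assumed).
src = np.float32([[200, 300], [440, 300], [620, 470], [20, 470]])
dst = np.float32([[0, 0], [300, 0], [300, 400], [0, 400]])
H = cv2.getPerspectiveTransform(src, dst)

def top_view(frame):
    """Warp the road region to a bird's-eye view."""
    return cv2.warpPerspective(frame, H, (300, 400))

def user_motion(prev_gray, gray):
    """Mean dense optical flow in the bottom strip (closest to the user)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray[-100:], gray[-100:], None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow[..., 0].mean(), flow[..., 1].mean()  # mean (dx, dy) in pixels
```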

Design of Safe Autonomous Navigation System for Deployable Bio-inspired Robot (전개형 생체모방로봇을 위한 안전한 자율주행시스템 설계)

  • Choi, Keun Ha;Han, Sang Kwon;Lee, Jinyi;Lee, Jin Woo;Ahn, Jung Do;Kim, Kyung-Soo;Kim, Soohyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.4
    • /
    • pp.456-462
    • /
    • 2014
  • In this paper, we present a deployable bio-inspired robot called Pillbot-light, which uses a safe autonomous navigation system. Pillbot-light is mounted on a station robot and can be operated in disaster relief or military operations. However, because Pillbot-light cannot carry a variety of sensors, autonomous navigation is challenging. We therefore propose a new system in which the station robot, equipped with a vision camera and a high-performance CPU, controls Pillbot-light for autonomous navigation. The system detects obstacles based on edge extraction from the vision camera, performs path planning using a hazard cost function, and performs localization using a particle filter. The system is verified by simulation and experiment.
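
The localization component can be illustrated with a minimal 2D particle filter under assumed Gaussian motion noise and a hypothetical range measurement to one known landmark; it shows only the predict/weight/resample cycle, not the paper's hazard cost function or edge-based obstacle detection.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform(0, 10, size=(N, 2))  # x, y hypotheses over a 10 m area
landmark = np.array([5.0, 5.0])              # assumed known landmark position

def pf_step(particles, control, measured_range, motion_std=0.05, meas_std=0.2):
    # Predict: apply the odometry-like control with Gaussian noise.
    particles = particles + control + rng.normal(0, motion_std, particles.shape)
    # Weight: likelihood of the observed range to the landmark.
    ranges = np.linalg.norm(particles - landmark, axis=1)
    w = np.exp(-0.5 * ((ranges - measured_range) / meas_std) ** 2)
    w /= w.sum()
    # Resample (multinomial for brevity; systematic resampling is preferable).
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = pf_step(particles, control=np.array([0.1, 0.0]), measured_range=4.2)
print("pose estimate:", particles.mean(axis=0))
```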

A Development of The Remote Robot Control System with Virtual Reality Interface System (가상현실과 결합된 로봇제어 시스템의 구현방법)

  • 김우경;김훈표;현웅근
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2003.10a
    • /
    • pp.320-324
    • /
    • 2003
  • Recently, virtual reality has been applied in various fields of industry. In this paper, the motion of a real robot is controlled through interface manipulation in a virtual world. We created a virtual robot using a 3D graphics tool and reproduced an appearance similar to the real robot by applying textures with Direct3D components. Both the real robot and the virtual robot are controlled by a joystick. The developed system consists of a robot controller with a vision system and a host PC program. The robot and camera can each be moved with 2 degrees of freedom by independent remote control from a user-friendly joystick. The environment is recognized by the vision system and ultrasonic sensors. The visual images and command data are transmitted over 900 MHz and 447 MHz RF links, respectively. When the user sends robot control commands from the simulator to the real robot, the transmitter/receiver operates outdoors at ranges of up to 500 meters at 4800 bps in half-duplex mode over the 447 MHz RF module.
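
Of the pieces described, the joystick-to-RF command path lends itself to a short sketch; the following assumes a serial-attached RF modem, pySerial, the 4800 bps rate quoted above, and an invented frame format with a two-byte sync header.

```python
import struct
import serial  # pip install pyserial

# Assumed port name; the RF modem appears to the host as a serial device.
link = serial.Serial("/dev/ttyUSB0", baudrate=4800, timeout=0.1)

def send_drive_command(forward, turn):
    """forward/turn in [-1, 1], scaled to signed bytes behind a sync header."""
    frame = struct.pack("<BBbb", 0xAA, 0x55,  # invented sync bytes
                        int(max(-1.0, min(1.0, forward)) * 127),
                        int(max(-1.0, min(1.0, turn)) * 127))
    link.write(frame)

send_drive_command(0.5, -0.2)  # half speed forward, slight left turn
```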
