• Title/Summary/Keyword: Color-based Vision System


Autonomous Mobile Robot System Using Adaptive Spatial Coordinates Detection Scheme based on Stereo Camera (스테레오 카메라 기반의 적응적인 공간좌표 검출 기법을 이용한 자율 이동로봇 시스템)

  • Ko Jung-Hwan; Kim Sung-Il; Kim Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.1C / pp.26-35 / 2006
  • In this paper, an autonomous mobile robot system for intelligent path planning, using a spatial-coordinates detection scheme based on a stereo camera, is proposed. In the proposed system, the face area of a moving person is detected from the left image of the stereo pair using the YCbCr color model, and its center coordinates are computed with the centroid method; using these data, the stereo camera mounted on the mobile robot is controlled to track the moving target in real time. Moreover, depth information is recovered from the disparity map, obtained from the left and right images captured by the tracking-controlled stereo camera system, together with the perspective transformation between the 3-D scene and the image plane. Finally, based on an analysis of these calculated coordinates, intelligent path planning and estimation for the mobile robot system are derived. From experiments on robot driving with 240 frames of stereo images, the error ratio between the calculated and measured values of the distance between the mobile robot and the objects, and of the relative distance between the other objects, is found to be very low: $2.19\%$ and $1.52\%$ on average, respectively.
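The first stage of the pipeline above (YCbCr skin segmentation plus centroid extraction) can be sketched in plain numpy. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the Cb/Cr threshold ranges are commonly cited skin-color bounds assumed here, not values taken from the abstract.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an HxWx3 uint8 RGB image to Y, Cb, Cr planes (ITU-R BT.601)."""
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def face_centroid(img, cb_range=(77, 127), cr_range=(133, 173)):
    """Threshold skin pixels in the Cb/Cr planes and return the centroid
    (cx, cy) of the detected region, or None if nothing matches.
    The ranges are assumed textbook skin bounds, not the paper's values."""
    _, cb, cr = rgb_to_ycbcr(img)
    mask = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

The centroid of this mask is what a pan-tilt controller would servo on to keep the face centered in the left image.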

Study on vision-based object recognition to improve performance of industrial manipulator (산업용 매니퓰레이터의 작업 성능 향상을 위한 영상 기반 물체 인식에 관한 연구)

  • Park, In-Cheol; Park, Jong-Ho; Ryu, Ji-Hyoung; Kim, Hyoung-Ju; Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.4 / pp.358-365 / 2017
  • In this paper, we propose an object recognition method using image information to improve the efficiency of visual servoing for industrial manipulators. It is an image-processing method for responding in real time to abnormal situations or to changes in the external environment of a work object, using the camera-image information of an industrial manipulator. To improve the recognition rate of the existing Harris corner algorithm, the proposed method applies the Otsu thresholding technique to the V channel and to the S channel of the HSV color space, from which the background is easy to separate. As a result, when the work object is not placed in the correct position or is rotated because of external factors, its position is calculated and provided to the industrial manipulator.
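The Otsu step mentioned above can be sketched in plain numpy. `otsu_threshold` is a hypothetical name, and the implementation below follows the standard between-class-variance formulation of Otsu's method rather than anything specific to this paper; it would be applied per channel (V or S) after the HSV conversion.

```python
import numpy as np

def otsu_threshold(channel):
    """Return the gray level that maximizes the between-class variance
    of a one-channel image (classic Otsu thresholding)."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    prob = hist / channel.size
    omega = np.cumsum(prob)                      # class-0 probability per level
    mu = np.cumsum(prob * np.arange(256))        # class-0 cumulative mean mass
    mu_t = mu[-1]                                # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)             # empty classes get variance 0
    return int(np.argmax(sigma_b))
```

Pixels at or below the returned level go to one class (e.g. background), the rest to the other.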

Implement of Hand Gesture Interface using Ratio and Size Variation of Gesture Clipping Region (제스쳐 클리핑 영역 비율과 크기 변화를 이용한 손-동작 인터페이스 구현)

  • Choi, Chang-Yur; Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.1 / pp.121-127 / 2013
  • A vision-based hand-gesture interface method for substituting for a pointing device is proposed in this paper, which uses the ratio and size variation of the gesture region. The proposed method uses the skin hue and saturation of the hand region in the HSI color model to extract the hand region effectively. This removes non-hand regions and reduces the noise effect of the light source. Also, because computation is reduced by detecting not a static hand shape but the ratio and size variation of the moving hand within the clipped hand region in real time, a faster response is guaranteed. To evaluate the performance of the proposed method, it was applied as a pointing device in a computerized self visual-acuity testing system. As a result, the proposed method showed an average gesture recognition rate of 86% and a coordinate-movement recognition rate of 87%.
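A rough sketch of the region-feature idea above, assuming the hand has already been segmented into a binary mask: the aspect ratio and pixel count of the clipped region are cheap to compute per frame, and their change between frames drives the gesture decision. The gesture labels and the tolerance value are illustrative assumptions, not the paper's actual command set.

```python
import numpy as np

def region_features(mask):
    """Bounding-box aspect ratio (w/h) and pixel count of a binary hand mask."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    w = xs.max() - xs.min() + 1
    h = ys.max() - ys.min() + 1
    return w / h, int(xs.size)

def classify_motion(prev, curr, size_tol=0.2):
    """Guess hand movement from the relative change in region size between
    two frames: a growing region suggests motion toward the camera ('push'),
    a shrinking one motion away ('pull'), otherwise 'hold'. Labels and the
    20% tolerance are assumptions for illustration."""
    _, area_prev = prev
    _, area_curr = curr
    change = (area_curr - area_prev) / area_prev
    if change > size_tol:
        return 'push'
    if change < -size_tol:
        return 'pull'
    return 'hold'
```

Because only the mask's bounding box and area are touched, each frame costs a single pass over the clipped region, which is what makes the claimed real-time response plausible.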

Inspection for Inner Wall Surface of Communication Conduits by Laser Projection Image Analysis (레이저 투영 영상 분석에 의한 통신 관로 내벽 검사 기법)

  • Lee Dae-Ho
    • Journal of Korea Multimedia Society / v.9 no.9 / pp.1131-1138 / 2006
  • This paper proposes a novel method for grading underground communication conduits by laser projection image analysis. The probe thrust into the conduit consists of a laser diode, a light-emitting diode, and a camera: the laser diode generates the projection image on the pipe wall, the light-emitting diode provides illumination, and the camera acquires the image of the conduit. To segment the profile region, we use a novel color-difference model and a multiple-thresholds method. The shape of the profile ring is represented by its minimum diameter and a Fourier descriptor, and the pipe status is then graded by a rule-based method. Since both local and global features of the segmented ring shape (the minimum diameter and the Fourier descriptor) are utilized, injured and distorted pipes can be correctly graded. Experimental results show accurate classification, with false alarms of less than 2% under various conditions.
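The Fourier descriptor used as the global shape feature above can be sketched as follows, assuming the segmented ring boundary is available as an ordered point list. This is the standard textbook formulation (boundary points as complex numbers, FFT, normalization for translation and scale); the exact descriptor and harmonic count used in the paper are not stated in the abstract.

```python
import numpy as np

def fourier_descriptor(boundary, n_coeffs=8):
    """Translation- and scale-invariant Fourier descriptor of a closed
    contour given as an (N, 2) array of ordered (x, y) boundary points.
    Returns the magnitudes of the first n_coeffs harmonics."""
    z = boundary[:, 0] + 1j * boundary[:, 1]   # boundary as complex samples
    coeffs = np.fft.fft(z)
    coeffs[0] = 0                  # drop DC term -> translation invariance
    mag = np.abs(coeffs)
    mag = mag / mag[1]             # divide by 1st harmonic -> scale invariance
    return mag[1:n_coeffs + 1]
```

An undamaged circular profile concentrates almost all energy in the first harmonic, so higher-order magnitudes near zero indicate a healthy pipe cross-section, while dents and distortions raise them.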


A Camera Based Traffic Signal Generating Algorithm for Safety Entrance of the Vehicle into the Joining Road (차량의 안전한 합류도로 진입을 위한 단일 카메라 기반 교통신호 발생 알고리즘)

  • Jeong Jun-Ik; Rho Do-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.4 s.310 / pp.66-73 / 2006
  • Safety is the most important consideration in all traffic management and control technology. This paper focuses on developing a flexible, reliable, real-time algorithm that generates a signal for a vehicle entering the joining road, using a camera and image-processing techniques. The images obtained from a camera located beside and above the road can be used for traffic surveillance, measurement of the vehicle's travel speed, and prediction of its arrival time in the merge area between the main road and the joining road. The proposed algorithm displays the confluence safety signal with red, blue, and yellow color signs. Three methods are used to detect a vehicle driving in the preset detection area: the first is a gray-scale normalized correlation algorithm, the second an edge-magnitude-ratio change algorithm, and the third an average-intensity change algorithm. A real-time prototype of the confluence safety-signal generation algorithm was implemented as a program and run on stored digital image sequences of real traffic, with good experimental results.
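Two of the three detection cues named above can be sketched in numpy: zero-mean normalized cross-correlation of the detection area against a stored empty-road template, and the change in average intensity. The function names, the decision rule combining the cues, and the threshold values are all assumptions for illustration; the edge-magnitude-ratio cue is omitted for brevity.

```python
import numpy as np

def normalized_correlation(patch, template):
    """Zero-mean normalized cross-correlation between two equal-size patches.
    Returns a value in [-1, 1]; 1 means the patch matches the template."""
    p = patch.astype(np.float64) - patch.mean()
    t = template.astype(np.float64) - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    if denom == 0:
        return 0.0
    return float((p * t).sum() / denom)

def vehicle_present(region, empty_road, ncc_thresh=0.8, intensity_thresh=15.0):
    """Flag a vehicle when the detection area correlates poorly with the
    stored empty-road template, or its mean intensity shifts noticeably.
    Both thresholds are assumed values, not the paper's."""
    ncc = normalized_correlation(region, empty_road)
    d_intensity = abs(region.mean() - empty_road.mean())
    return bool(ncc < ncc_thresh or d_intensity > intensity_thresh)
```

Running more than one cue and OR-ing the decisions is one plausible way to keep detection reliable under the varying lighting that a roadside camera sees.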

Smart HCI Based on the Informations Fusion of Biosignal and Vision (생체 신호와 비전 정보의 융합을 통한 스마트 휴먼-컴퓨터 인터페이스)

  • Kang, Hee-Su; Shin, Hyun-Chool
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.4 / pp.47-54 / 2010
  • We propose a smart human-computer interface that replaces the conventional mouse. The interface can control the cursor and issue command actions using only hand movements, with no held device. Four finger motions (left click, right click, hold, drag) are enough to express all mouse functions. We also implement cursor-movement control using image processing. For motion inference, we use the entropy of the EMG signal, Gaussian modeling, and maximum-likelihood estimation. In the image processing for cursor control, we use color recognition of a marker to obtain the center point of the fingertip and map that point onto the cursor. The accuracy of finger-movement inference is over 95%, and cursor control works naturally without delay. We implemented the whole system to verify its performance and utility.
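The biosignal half of the fusion above can be sketched as two small pieces: a window entropy feature from the EMG stream, and a maximum-likelihood classifier over per-class Gaussian models of that feature. The histogram bin count, the 1-D feature, and the function names are assumptions; the paper's actual feature extraction and model dimensionality may differ.

```python
import numpy as np

def emg_entropy(window, bins=32):
    """Shannon entropy (bits) of the amplitude histogram of one EMG window;
    broadly, higher entropy indicates stronger muscle activation."""
    hist, _ = np.histogram(window, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                 # ignore empty bins
    return float(-(p * np.log2(p)).sum())

def classify_ml(feature, class_means, class_vars):
    """Maximum-likelihood class pick under per-class 1-D Gaussian models,
    one (mean, variance) pair per finger-motion class."""
    log_lik = [-0.5 * np.log(2 * np.pi * v) - (feature - m) ** 2 / (2 * v)
               for m, v in zip(class_means, class_vars)]
    return int(np.argmax(log_lik))
```

At runtime the entropy of each incoming window would be scored against Gaussian models fitted per motion class during a calibration session, and the highest-likelihood class is emitted as the mouse command.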

A method of improving the quality of 3D images acquired from RGB-depth camera (깊이 영상 카메라로부터 획득된 3D 영상의 품질 향상 방법)

  • Park, Byung-Seo; Kim, Dong-Wook; Seo, Young-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.5 / pp.637-644 / 2021
  • In general, in the fields of computer vision, robotics, and augmented reality, 3-D space and 3-D object detection and recognition technology have become increasingly important. In particular, since RGB images and depth images can be acquired in real time through image sensors such as the Microsoft Kinect, studies on object detection, tracking, and recognition have changed considerably. In this paper, we propose a method to improve the quality of 3-D reconstructed images by processing images acquired through RGB-depth cameras in a multi-view camera system. We propose a method of removing noise outside the object by applying a mask obtained from the color image, together with a combined filtering operation applied to the depth differences between pixels inside the object. The experimental results confirm that the proposed method effectively removes noise and improves the quality of the 3-D reconstructed images.
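The two cleanup steps described above can be sketched as follows: invalidating depth samples outside a color-derived object mask, and then smoothing in-object depth with a neighbourhood filter restricted to valid in-mask samples. The median filter here is a simple stand-in for the paper's combined filtering operation, whose exact form the abstract does not specify; the function names are assumptions.

```python
import numpy as np

def mask_depth(depth, color_mask):
    """Zero out depth samples falling outside the object's color mask,
    removing background noise around the object."""
    cleaned = depth.copy()
    cleaned[~color_mask] = 0
    return cleaned

def joint_median_filter(depth, mask, k=1):
    """Replace each in-mask depth pixel by the median of its valid
    (in-mask, nonzero) neighbours in a (2k+1)^2 window -- a crude stand-in
    for the paper's combined filtering of in-object depth differences."""
    h, w = depth.shape
    out = depth.astype(np.float64).copy()
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            y0, y1 = max(0, y - k), min(h, y + k + 1)
            x0, x1 = max(0, x - k), min(w, x + k + 1)
            nb = depth[y0:y1, x0:x1][mask[y0:y1, x0:x1]]
            nb = nb[nb > 0]                      # keep only valid samples
            if nb.size:
                out[y, x] = np.median(nb)
    return out
```

Guiding the depth filter with the color mask is what keeps object edges sharp: pixels across the silhouette boundary never enter each other's neighbourhood statistics.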

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun; Chun, Jun-Chul
    • The KIPS Transactions:PartB / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3-D facial expression animation technique and system that provide robust 3-D head pose estimation and real-time facial expression control. Much research on 3-D face animation has addressed facial expression control itself rather than 3-D head motion tracking; however, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3-D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3-D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model and template matching detect the facial region efficiently in each video frame. For 3-D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced with the optical-flow method. For facial expression cloning, we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and then traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head-pose image. Finally, facial expression cloning is done by two fitting processes: the control points of the 3-D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are changed by use of radial basis functions (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation, with robust head pose estimation and facial variation, from the input video.
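The second fitting process above (propagating control-point displacements to surrounding non-feature vertices with radial basis functions) can be sketched as a standard Gaussian-RBF interpolation of 2-D displacements. The kernel choice, the shape parameter `eps`, and the function name are assumptions; the paper does not state which basis function it uses.

```python
import numpy as np

def rbf_deform(control_src, control_dst, points, eps=1.0):
    """Move free vertices by Gaussian-RBF interpolation of the control-point
    displacements, as in feature-driven expression cloning.
    control_src, control_dst: (M, 2) control points before/after deformation.
    points: (P, 2) non-feature vertices to deform. Returns (P, 2)."""
    disp = control_dst - control_src                       # (M, 2) displacements
    # Gaussian kernel matrix over pairwise control-point distances
    d = np.linalg.norm(control_src[:, None] - control_src[None, :], axis=-1)
    phi = np.exp(-(eps * d) ** 2)
    w = np.linalg.solve(phi, disp)                         # RBF weights (M, 2)
    # evaluate the interpolant at the free vertices
    dq = np.linalg.norm(points[:, None] - control_src[None, :], axis=-1)
    return points + np.exp(-(eps * dq) ** 2) @ w
```

By construction the interpolant reproduces the control-point displacements exactly, while vertices far from every control point are left nearly unmoved, which is why RBFs suit localized expression edits.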