• Title/Summary/Keyword: eye camera


A New Hand-eye Calibration Technique to Compensate for the Lens Distortion Effect (렌즈왜곡효과를 보상하는 새로운 Hand-eye 보정기법)

  • Chung, Hoi-Bum
    • Proceedings of the KSME Conference / 2000.11a / pp.596-601 / 2000
  • In a robot/vision system, the vision sensor, typically a CCD array sensor, is mounted on the robot hand. The problem of determining the relationship between the camera frame and the robot hand frame is referred to as hand-eye calibration. In the literature, various methods have been suggested for camera calibration and for sensor registration. Recently, a one-step approach that combines camera calibration and sensor registration was suggested by Horaud & Dornaika. In this approach, the camera extrinsic parameters need not be determined at every robot configuration. In this paper, by modifying the camera model and including the lens distortion effect in the perspective transformation matrix, a new one-step approach to hand-eye calibration is proposed.
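
As an editorial aside, the sketch below illustrates the conventional two-step pipeline (distortion-aware camera calibration, then AX = XB sensor registration) that one-step methods such as this one aim to replace. It uses OpenCV's `calibrateCamera` and `calibrateHandEye`; the data layout and the helper name `two_step_hand_eye` are illustrative assumptions and do not reproduce the paper's formulation.

```python
# Hypothetical sketch: conventional two-step hand-eye calibration with a
# distortion-aware camera model, shown only to contrast with the one-step
# approach described in the abstract.
import cv2
import numpy as np

def two_step_hand_eye(obj_pts, img_pts, image_size, R_gripper2base, t_gripper2base):
    """obj_pts/img_pts: per-view 3D/2D correspondences on a calibration target."""
    # Step 1: camera calibration, including radial/tangential lens distortion.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, image_size, None, None)

    # Target-to-camera poses for every robot configuration.
    R_target2cam = [cv2.Rodrigues(r)[0] for r in rvecs]
    t_target2cam = list(tvecs)

    # Step 2: sensor registration (solve AX = XB for the camera-to-gripper pose).
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base, R_target2cam, t_target2cam)
    return K, dist, R_cam2gripper, t_cam2gripper
```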


Development of Road Safety Estimation Method using Driving Simulator and Eye Camera (차량시뮬레이터 및 아이카메라를 이용한 도로안전성 평가기법 개발)

  • Doh, Tcheol-Woong; Kim, Won-Keun
    • International Journal of Highway Engineering / v.7 no.4 s.26 / pp.185-202 / 2005
  • In this research, to overcome the restrictions of field experiments, we modeled a planned road in 3D virtual reality and, as test subjects drove a driving simulator equipped with an eye camera, obtained data on the vehicle's dynamic response to changes in road sections and on the drivers' visual behavior. We worked to reduce the unreality and side effects of the driving simulator by maximizing the agreement between the motion reproduction and the virtual scene, based on the data that the simulator's graphic module received from its dynamic analysis module. Moreover, we obtained data on the drivers' natural visual behavior using an eye camera (FaceLAB) that allows experiments without attached equipment such as a helmet or lenses. In this paper, to evaluate the level of road safety, we use the driving simulator and the eye camera to capture how the safety that drivers perceive changes with the road's geometric structure, and we investigate the relationship between road geometry and safety level. Through this process, we suggest a method for evaluating, from the planning stage, whether a road keeps drivers comfortable and at ease.


An Efficient Camera Calibration Method for Head Pose Tracking (머리의 자세를 추적하기 위한 효율적인 카메라 보정 방법에 관한 연구)

  • Park, Gyeong-Su; Im, Chang-Ju; Lee, Gyeong-Tae
    • Journal of the Ergonomics Society of Korea / v.19 no.1 / pp.77-90 / 2000
  • The aim of this study is to develop and evaluate an efficient camera calibration method for vision-based head tracking. Tracking head movements is important in the design of an eye-controlled human/computer interface, so a vision-based head tracking system was proposed to allow for the user's head movements in such an interface. We propose an efficient camera calibration method to track the 3D position and orientation of the user's head accurately, and we evaluate its performance. The experimental error analysis showed that the proposed method provides a more accurate and stable camera pose (i.e., position and orientation) than the conventional direct linear transformation (DLT) method that has commonly been used for camera calibration. The results of this study can be applied to tracking head movements for eye-controlled human/computer interfaces and for virtual reality technology.
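
For context, here is a minimal sketch of the conventional direct linear transformation (DLT) baseline that the study compares against: the 3x4 projection matrix is estimated from 3D-2D point correspondences by SVD and then decomposed. The use of NumPy/OpenCV and the helper name `dlt_pose` are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import cv2

def dlt_pose(X, x):
    """Direct linear transformation: X is (N,3) world points, x is (N,2) pixels, N >= 6."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        P = [Xw, Yw, Zw, 1.0]
        rows.append([*P, 0, 0, 0, 0, *(-u * np.array(P))])
        rows.append([0, 0, 0, 0, *P, *(-v * np.array(P))])
    A = np.asarray(rows)
    # Projection matrix = right singular vector of the smallest singular value.
    P = np.linalg.svd(A)[2][-1].reshape(3, 4)
    # Decompose into intrinsics K, rotation R, and the (homogeneous) camera centre.
    K, R, c_h = cv2.decomposeProjectionMatrix(P)[:3]
    return K / K[2, 2], R, (c_h[:3] / c_h[3]).ravel()
```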


A New Hand-eye Calibration Technique to Compensate for the Lens Distortion Effect (렌즈왜곡효과를 보상하는 새로운 hand-eye 보정기법)

  • Chung, Hoi-Bum
    • Journal of the Korean Society for Precision Engineering / v.19 no.1 / pp.172-179 / 2002
  • In a robot/vision system, the vision sensor, typically a CCD array sensor, is mounted on the robot hand. The problem of determining the relationship between the camera frame and the robot hand frame is referred to as hand-eye calibration. In the literature, various methods have been suggested for camera calibration and for sensor registration. Recently, a one-step approach that combines camera calibration and sensor registration was suggested by Horaud & Dornaika. In this approach, the camera extrinsic parameters need not be determined at every robot configuration. In this paper, by modifying the camera model and including the lens distortion effect in the perspective transformation matrix, a new one-step approach to hand-eye calibration is proposed.

Camera Calibration and Barrel Undistortion for Fisheye Lens (차량용 어안렌즈 카메라 캘리브레이션 및 왜곡 보정)

  • Heo, Joon-Young; Lee, Dong-Wook
    • The Transactions of The Korean Institute of Electrical Engineers / v.62 no.9 / pp.1270-1275 / 2013
  • A great deal of research has been carried out on camera calibration and lens distortion for wide-angle lenses. Calibration is especially tricky for fisheye lenses with a field of view (FOV) of 180 degrees or more, so existing work has relied on very large calibration patterns or even 3D patterns. It is also important that the calibration parameters (such as the distortion coefficients) be suitably initialized to obtain accurate results; for lenses with a relatively narrow FOV (135 or 150 degrees) this can be achieved using manufacturer information or a least-squares method. In this paper, without any prior manufacturer information, camera calibration and barrel undistortion for a fisheye lens with over 180 degrees of FOV are achieved using only one calibration pattern image. We apply QR decomposition for initialization and regularization for optimization. The experimental results verify that our algorithm performs camera calibration and image undistortion successfully.
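
As a rough illustration only: the snippet below calibrates and undistorts with OpenCV's generic equidistant fisheye model from pattern-corner correspondences. It does not reproduce the paper's QR-decomposition initialization or regularized optimization, and OpenCV's model is not guaranteed to cover a FOV beyond 180 degrees; the flag and criteria choices here are assumptions.

```python
import cv2
import numpy as np

def calibrate_fisheye(obj_pts, img_pts, image_size):
    """obj_pts: list of (N,1,3) float arrays; img_pts: list of (N,1,2) detected corners."""
    K = np.zeros((3, 3))
    D = np.zeros((4, 1))
    flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
    rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
        obj_pts, img_pts, image_size, K, D, flags=flags,
        criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6))
    return K, D

def undistort(img, K, D):
    # Barrel undistortion: remap the fisheye image through the estimated model.
    h, w = img.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
```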

Generation of an eye-contacted view using color and depth cameras (컬러와 깊이 카메라를 이용한 시점 일치 영상 생성 기법)

  • Hyun, Jee-Ho; Han, Jae-Young; Won, Jong-Pil; Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.8 / pp.1642-1652 / 2012
  • Generally, the camera is not located at the center of the display in a tele-presence system, which causes incorrect eye contact between the speakers and reduces the sense of realism during the conversation. To solve this eye contact problem, we propose a new intermediate-view reconstruction algorithm that uses both a color camera and a depth camera and applies depth image based rendering (DIBR). The proposed algorithm includes an efficient hole-filling method that uses the arithmetic mean of neighboring pixels and an efficient boundary-noise removal method that expands the edge regions of the depth image. Experiments show that the generated eye-contacted view has good quality.
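
A minimal sketch of the hole-filling idea mentioned in the abstract, assuming the DIBR-warped color image and a binary hole mask are already available: unfilled pixels are repeatedly replaced by the arithmetic mean of their valid 3x3 neighbours. The boundary-noise removal by expanding depth edges is not shown; the array names and the iteration cap are assumptions.

```python
import numpy as np
import cv2

def fill_holes_mean(warped, hole_mask, max_iter=50):
    """Fill DIBR disocclusion holes with the arithmetic mean of valid 3x3 neighbours.

    warped: HxWx3 image produced by depth-image-based rendering.
    hole_mask: HxW bool array, True where no pixel was warped in.
    """
    img = warped.astype(np.float32).copy()
    valid = (~hole_mask).astype(np.float32)
    kernel = np.ones((3, 3), np.float32)
    for _ in range(max_iter):
        if not hole_mask.any():
            break
        # Per pixel: sum of valid neighbours and how many of them there are.
        num = cv2.filter2D(img * valid[..., None], -1, kernel)
        den = cv2.filter2D(valid, -1, kernel)
        fillable = hole_mask & (den > 0)
        img[fillable] = num[fillable] / den[fillable][:, None]
        valid[fillable] = 1.0
        hole_mask = hole_mask & ~fillable
    return img
```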

Gaze Detection System by Wide and Narrow View Camera (광각 및 협각 카메라를 이용한 시선 위치 추적 시스템)

  • 박강령
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.12C / pp.1239-1249 / 2003
  • Gaze detection is the use of computer vision to locate the position on a monitor screen where a user is looking. Previous gaze detection systems use a wide-view camera that can capture the user's whole face; however, the image resolution of such a camera is too low to detect the fine movements of the user's eyes exactly. We therefore implement a gaze detection system with both a wide-view camera and a narrow-view camera. To follow the position of the user's eye as the face moves, the narrow-view camera provides auto focusing and auto pan/tilt based on the detected 3D facial feature positions. Experimental results show that the facial and eye gaze positions on the monitor can be obtained with an RMS error between the computed and real positions of about 3.1 cm when facial movements are permitted and 3.57 cm when both facial and eye movements are permitted. The processing time is short enough for a real-time system (below 30 msec on a Pentium IV 1.8 GHz).
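
As a hedged illustration of the auto pan/tilt step described above: given a 3D eye position estimated from the wide-view camera, the narrow-view camera's pan and tilt angles can be computed as below. The coordinate frame (origin at the pan/tilt unit, +x right, +y down, +z toward the user) is an assumption, not the paper's calibration.

```python
import numpy as np

def pan_tilt_to_eye(eye_xyz):
    """Pan/tilt (degrees) that point the narrow-view camera's optical axis (+z)
    at a 3D eye position given in the pan/tilt unit's own frame (units: mm)."""
    x, y, z = eye_xyz
    pan = np.degrees(np.arctan2(x, z))                 # rotate about the vertical axis
    tilt = np.degrees(np.arctan2(-y, np.hypot(x, z)))  # then tilt up/down (+y is down)
    return pan, tilt

# e.g. an eye detected 120 mm right, 50 mm below, 600 mm in front of the unit:
# pan_deg, tilt_deg = pan_tilt_to_eye((120.0, 50.0, 600.0))
```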

The Implementation of Automatic Compensation Modules for Digital Camera Image by Recognition of the Eye State (눈의 상태 인식을 이용한 디지털 카메라 영상 자동 보정 모듈의 구현)

  • Jeon, Young-Joon; Shin, Hong-Seob; Kim, Jin-Il
    • Journal of the Institute of Convergence Signal Processing / v.14 no.3 / pp.162-168 / 2013
  • This paper describes the implementation of automatic compensation modules for digital camera images taken while a person's eyes are closed. The modules detect the face and eye regions and then recognize the eye state. If the image was captured with the eyes closed, the modules correct the eyes and produce an output image using the most satisfactory eye state from the past frames stored in a buffer. To recognize the face and eyes precisely, a pre-processing image alignment step is carried out using the SURF algorithm and a homography. Face and eye regions are detected with the Haar-like feature algorithm. To decide whether an eye is open, a similarity comparison is performed by template matching on the eye region. The modules were tested in various facial environments and were confirmed to correct images containing faces effectively.
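
A small sketch of the detection and eye-state comparison steps, using OpenCV's bundled Haar cascades and normalized cross-correlation template matching. The SURF/homography pre-alignment and the buffer of past frames are omitted, and the open-eye template, cascade parameters, and the 0.5 threshold are assumptions rather than the authors' settings.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_open_score(frame_gray, open_eye_template):
    """Return the best normalized cross-correlation between detected eye patches
    and an open-eye template; a low score suggests the eyes are closed."""
    best = 0.0
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(frame_gray, 1.1, 5):
        face = frame_gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face, 1.1, 5):
            patch = cv2.resize(face[ey:ey + eh, ex:ex + ew],
                               open_eye_template.shape[::-1])
            score = cv2.matchTemplate(patch, open_eye_template,
                                      cv2.TM_CCOEFF_NORMED).max()
            best = max(best, float(score))
    return best  # e.g. treat best < 0.5 as "eyes closed" and swap in a buffered frame
```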

Active eye system for tracking a moving object (이동물체 추적을 위한 능동시각 시스템 구축)

  • 백문홍
    • 제어로봇시스템학회:학술대회논문집 (Proceedings of the Institute of Control, Robotics and Systems Conference) / 1996.10b / pp.257-259 / 1996
  • This paper presents an active eye system for tracking a moving object in 3D space. A prototype able to track a moving object was designed and implemented. The mechanical system enables control of the platform carrying the binocular cameras and of the vergence angle of each camera through step motors; each camera has two degrees of freedom. The image features of the object are extracted from a complicated environment by zero-disparity filtering (ZDF). From the centroid of the image features, the gaze point on the object is calculated, and the vergence angle of each camera is controlled by its step motor. The proposed method is implemented on the prototype and runs robustly with a fast computation time.
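
To make the final control step concrete, here is a minimal sketch, under assumed geometry, of turning a gaze point computed from the feature centroid into vergence (pan) angles for the two cameras; the baseline, coordinate frame, and function name are illustrative, not taken from the paper.

```python
import numpy as np

def vergence_angles(gaze_point, baseline_mm):
    """Vergence (pan) angles, in degrees, for the left and right cameras of a
    binocular head so that both optical axes intersect at the 3D gaze point.

    gaze_point: (x, y, z) in mm, in a frame midway between the cameras,
    with +x to the right and +z straight ahead.
    """
    x, _, z = gaze_point
    half_b = baseline_mm / 2.0
    left = np.degrees(np.arctan2(x + half_b, z))   # left camera sits at x = -half_b
    right = np.degrees(np.arctan2(x - half_b, z))  # right camera sits at x = +half_b
    return left, right

# e.g. a target 800 mm ahead and slightly to the right, with a 120 mm baseline:
# left_deg, right_deg = vergence_angles((50.0, 0.0, 800.0), 120.0)
```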


Fish-eye camera calibration and artificial landmarks detection for the self-charging of a mobile robot (이동로봇의 자동충전을 위한 어안렌즈 카메라의 보정 및 인공표지의 검출)

  • Kwon, Oh-Sang
    • Journal of Sensor Science and Technology / v.14 no.4 / pp.278-285
    • 2005
  • This paper describes techniques of camera calibration and artificial landmarks detection for the automatic charging of a mobile robot, equipped with a fish-eye camera in the direction of its operation for movement or surveillance purposes. For its identification from the surrounding environments, three landmarks employed with infrared LEDs, were installed at the charging station. When the robot reaches a certain point, a signal is sent to the LEDs for activation, which allows the robot to easily detect the landmarks using its vision camera. To eliminate the effects of the outside light interference during the process, a difference image was generated by comparing the two images taken when the LEDs are on and off respectively. A fish-eye lens was used for the vision camera of the robot but the wide-angle lens resulted in a significant image distortion. The radial lens distortion was corrected after linear perspective projection transformation based on the pin-hole model. In the experiment, the designed system showed sensing accuracy of ${\pm}10$ mm in position and ${\pm}1^{\circ}$ in orientation at the distance of 550 mm.