• Title/Summary/Keyword: camera image


An Obstacle Detection and Avoidance Method for Mobile Robot Using a Stereo Camera Combined with a Laser Slit

  • Kim, Chul-Ho;Lee, Tai-Gun;Park, Sung-Kee;Kim, Jai-Hie
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.871-875
    • /
    • 2003
  • Detecting and avoiding obstacles is one of the important tasks in mobile navigation. In a real environment, when a mobile robot encounters dynamic obstacles, it must detect and avoid them simultaneously to keep its body safe. In previous vision systems, the mobile robot has used the camera as either a passive sensor or an active sensor. This paper proposes a new obstacle detection algorithm that uses a stereo camera as both a passive and an active sensor. Our system estimates the distances to obstacles by both passive correspondence and active correspondence using a laser slit. The system operates in three steps. First, a far-off obstacle is detected from the disparity given by stereo correspondence. Next, a close obstacle is acquired from the laser slit beam projected into the same stereo image. Finally, we implement an obstacle avoidance algorithm, adopting a modified Dynamic Window Approach (DWA), using the acquired obstacle distances. (A minimal code sketch of the disparity-to-distance step is given after this entry.)

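The abstract does not give implementation details; the sketch below only illustrates the passive (disparity-based) distance step it describes, assuming a rectified stereo pair with known focal length and baseline. All numeric values are hypothetical.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth Z = f * B / d for a rectified stereo pair.

    disparity_px : array of disparities in pixels (0 means no match).
    focal_px     : focal length in pixels.
    baseline_m   : distance between the two cameras in metres.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Hypothetical values for illustration only.
disparities = np.array([0.0, 4.0, 16.0, 64.0])   # pixels
depths = depth_from_disparity(disparities, focal_px=700.0, baseline_m=0.12)
obstacle_mask = depths < 1.5                      # flag anything closer than 1.5 m
print(depths, obstacle_mask)
```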

Guidance Line Extraction Algorithm using Central Region Data of Crop for Vision Camera based Autonomous Robot in Paddy Field (비전 카메라 기반의 무논환경 자율주행 로봇을 위한 중심영역 추출 정보를 이용한 주행기준선 추출 알고리즘)

  • Choi, Keun Ha;Han, Sang Kwon;Park, Kwang-Ho;Kim, Kyung-Soo;Kim, Soohyun
    • The Journal of Korea Robotics Society
    • /
    • v.11 no.1
    • /
    • pp.1-8
    • /
    • 2016
  • In this paper, we propose a new guidance line extraction algorithm for an autonomous agricultural robot based on a vision camera in a paddy field. Finding the central point or area of a rice row is the key step in guidance line extraction. To improve the accuracy of the guidance line, we use the central region data of the crop, exploiting the fact that the rice leaves converge toward the central area of the rice row. The guidance line is extracted from the intersection points of extended virtual lines using a modified robust regression. The extended virtual lines are obtained by extending each straight segment detected on the edges of the rice plants in the image using the Hough transform. We verified the accuracy of the proposed algorithm by experiments in a real wet paddy field. (A minimal code sketch of the Hough-transform and intersection step follows this entry.)
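
The paper's modified robust regression is not detailed in the abstract; the sketch below only illustrates the general idea of detecting edge segments with a Hough transform and intersecting their extensions, with a simple median used in place of the paper's robust regression. Function names and parameters are illustrative.

```python
import cv2
import numpy as np

def line_intersection(l1, l2):
    """Intersection of two infinite lines, each given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = l1
    x3, y3, x4, y4 = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None  # parallel lines
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py

def rough_convergence_point(gray):
    """Detect crop-edge segments and estimate where their extensions converge."""
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                               minLineLength=30, maxLineGap=10)
    if segments is None:
        return None
    segments = segments[:, 0, :]
    points = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            p = line_intersection(segments[i], segments[j])
            if p is not None:
                points.append(p)
    if not points:
        return None
    # Median of intersection points: a crude stand-in for the robust regression.
    return np.median(np.array(points), axis=0)
```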

A Simple Auto Calibration Method for CCD Camera With High Distortion Lens (왜곡율이 큰 렌즈가 부착된 CCD 카메라를 위한 간단한 자동 보정 방법)

  • 한기태;김회율
    • Journal of Broadcast Engineering
    • /
    • v.5 no.2
    • /
    • pp.260-272
    • /
    • 2000
  • In this paper, we propose a simple auto calibration method for a CCD camera with a wide-angle lens that causes a high degree of distortion. We formulate a cubic warping equation for the relationship between the cross points on the distorted calibration target and the corresponding points on the standard grid image, and calibrate distorted images using the computed parameters. Experiments were performed with distorted images taken by a wide-angle CCD camera. The experimental results show that the proposed method has higher accuracy than existing methods in terms of the average and maximum distortion error, maintaining a calibration ratio above 95 percent. The proposed method is applicable to a wide variety of images regardless of the type of lens or distortion. (A minimal code sketch of a cubic warping fit follows this entry.)

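The abstract does not reproduce the authors' exact cubic warping equation; the sketch below assumes a general bivariate cubic polynomial fitted by least squares between corresponding cross points and ideal grid points, which is one plausible reading of the method.

```python
import numpy as np

def _cubic_design_matrix(pts):
    """All monomials of total degree <= 3 in (x, y)."""
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y,
                     x ** 3, x * x * y, x * y * y, y ** 3], axis=1)

def fit_cubic_warp(distorted_pts, reference_pts):
    """Least-squares fit of a bivariate cubic warp (x, y) -> (x', y').

    distorted_pts : (N, 2) cross points found on the distorted target image.
    reference_pts : (N, 2) corresponding points on the ideal grid.
    Returns one coefficient vector per output coordinate.
    """
    A = _cubic_design_matrix(distorted_pts)
    reference_pts = np.asarray(reference_pts, dtype=float)
    cx, *_ = np.linalg.lstsq(A, reference_pts[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, reference_pts[:, 1], rcond=None)
    return cx, cy

def apply_cubic_warp(pts, cx, cy):
    """Map distorted points to corrected positions with the fitted warp."""
    A = _cubic_design_matrix(pts)
    return np.stack([A @ cx, A @ cy], axis=1)
```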

Collimator Design and Manufacture for Mössbauer Source (Mössbauer 선원용 콜리메이터 설계 및 제작)

  • Park, Sung-Ho;Kim, Jong-Kyung
    • Journal of Radiation Protection and Research
    • /
    • v.28 no.3
    • /
    • pp.183-187
    • /
    • 2003
  • A collimator for a Mössbauer source was manufactured for a Compton scattering experiment. For the radiation evaluation, the exposure dose rate was calculated and also measured with a GM counter; the two results agreed well with each other and were used for the collimator design. SUS303 was chosen as the collimator material because the exposure dose rate at 10 cm is about 2 mR/h. The radiation emitted from the 35 mm and 65 mm holes was measured using a gamma camera with a 4' diameter, and a 2-D radiation image was acquired and analyzed. The radiation sizes measured at the gamma camera were 8.0 mm and 5.8 mm, respectively.

Automatic Titration for KMnO4 Consumption Test of Tap Water Using Personal Computer Camera (PC 카메라를 이용한 수돗물의 과망간산칼륨소비량 적정 자동화)

  • Lee, Hyeong-Choon
    • Journal of Environmental Health Sciences
    • /
    • v.34 no.1
    • /
    • pp.95-100
    • /
    • 2008
  • An automatic titration system using a PC camera with a color filter on its lens was applied to the KMnO4 consumption test of tap water and distilled water, together with blank tests. The very faint pink color at the titration end point could be detected effectively by using yellow cellophane paper as the color filter. The average hue value (Havg) of 192 pixels in the image of the sample solution being titrated was computed at regular time intervals during titration in order to detect the end point. A decrease in Havg of 2 degrees from the average of the first 10 Havg readings was regarded as reaching the end point. The volume of 0.01 N KMnO4 consumed by a tap water sample was 0.728 ± 0.022 ml in manual titration and 0.735 ± 0.013 ml in automatic titration (p = 0.580). The volume of 0.01 N KMnO4 consumed by a distilled water sample was 0.383 ± 0.015 ml in manual titration and 0.367 ± 0.015 ml in automatic titration (p = 0.252). The high p-values of the t-tests indicated good agreement between the manual and automatic titration data, and the automatic method proposed in this article is considered an effective replacement for manual titration. (A minimal code sketch of the hue-based end-point check follows this entry.)
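
The abstract states the end-point rule explicitly; the sketch below assumes the 192-pixel region of interest has already been cropped from the camera frame and that hue is expressed in degrees (OpenCV stores hue as degrees/2). The thresholds mirror the values quoted in the abstract.

```python
import cv2
import numpy as np

def average_hue_degrees(bgr_roi):
    """Mean hue (in degrees) over the sampled region of the solution image."""
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    return float(hsv[:, :, 0].mean()) * 2.0   # OpenCV hue range is 0-179 (degrees / 2)

def end_point_reached(havg_history, drop_deg=2.0, baseline_count=10):
    """End point: the latest Havg falls drop_deg or more below the baseline mean.

    havg_history : list of Havg readings taken at regular intervals.
    """
    if len(havg_history) <= baseline_count:
        return False
    baseline = np.mean(havg_history[:baseline_count])
    return havg_history[-1] <= baseline - drop_deg
```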

Entity Matching for Vision-Based Tracking of Construction Workers Using Epipolar Geometry (영상 내 건설인력 위치 추적을 위한 등극선 기하학 기반의 개체 매칭 기법)

  • Lee, Yong-Joo;Kim, Do-Wan;Park, Man-Woo
    • Journal of KIBIM
    • /
    • v.5 no.2
    • /
    • pp.46-54
    • /
    • 2015
  • Vision-based tracking has been proposed as a means to efficiently track the large number of construction resources operating on a congested site. In order to obtain the 3D coordinates of an object, it is necessary to employ stereo-vision theory. Detecting and tracking multiple objects require an entity matching process that finds corresponding pairs of detected entities across the two camera views. This paper proposes an efficient entity matching method for tracking construction workers. The proposed method uses epipolar geometry, which represents the relationship between the two fixed cameras. Each pixel coordinate in one camera view is projected onto the other camera view as an epipolar line. The proposed method finds the matching pair of a worker entity by comparing the proximity of all detected entities in the other view to the epipolar line. Experimental results demonstrate its suitability for automated entity matching in 3D vision-based tracking of construction workers. (A minimal code sketch of the epipolar-distance matching step follows this entry.)
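
The matching rule described in the abstract is a standard epipolar-distance test; the sketch below assumes the fundamental matrix F between the two fixed cameras is already known, e.g. from a prior calibration step, and that worker detections are given as pixel centroids.

```python
import numpy as np

def match_by_epipolar_distance(point_view1, candidates_view2, F):
    """Pick the candidate in view 2 closest to the epipolar line of a view-1 point.

    point_view1      : (x, y) pixel coordinate of a detected worker in camera 1.
    candidates_view2 : (N, 2) pixel coordinates of detections in camera 2.
    F                : 3x3 fundamental matrix mapping view-1 points to view-2 lines.
    Returns the index of the best candidate and its point-to-line distance.
    """
    p = np.array([point_view1[0], point_view1[1], 1.0])
    a, b, c = F @ p                      # epipolar line ax + by + c = 0 in view 2
    cand = np.asarray(candidates_view2, dtype=float)
    dists = np.abs(a * cand[:, 0] + b * cand[:, 1] + c) / np.hypot(a, b)
    best = int(np.argmin(dists))
    return best, dists[best]
```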

Opto - Mechanical Design of IGRINS Slit-viewing Camera Barrel

  • Oh, Hee-Young;Yuk, In-Soo;Park, Chan;Lee, Han-Shin;Lee, Sung-Ho;Chun, Moo-Young;Jaffe, Daniel T.
    • Bulletin of the Korean Space Science Society
    • /
    • 2011.04a
    • /
    • pp.31.2-31.2
    • /
    • 2011
  • IGRINS (Immersion GRating INfrared Spectrometer) is a high-resolution wide-band infrared spectrograph developed by the Korea Astronomy and Space Science Institute (KASI) and the University of Texas at Austin (UT). The slit-viewing camera is one of four re-imaging optics in IGRINS, together with the input relay optics and the H- and K-band spectrograph cameras. Consisting of five lenses and one Ks-band filter, the slit-viewing camera relays the infrared image of the 2′ × 2′ field around the slit to the detector focal plane. Since IGRINS is a cryogenic instrument, the lens barrel is designed to be optimized at the operating temperature of 130 K. The barrel design also aims at easy alignment and assembly. We use radial springs and axial springs to support the lenses and lens spacers against gravity and thermal contraction. The total weight of the lens barrel is estimated to be 1.2 kg. Results from structural analysis are presented.

  • PDF

Tracking and Capturing a Moving Object Using Active Camera Mounted on a Mobile Robot (이동로봇에 장착된 능동 카메라를 이용한 이동물체의 추적과 포획)

  • Park, Jin-U;Park, Jae-Han;Yun, Gyeong-Sik;Lee, Jang-Myeong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.9
    • /
    • pp.741-748
    • /
    • 2001
  • In this paper, we propose a method for tracking and capturing a moving object with a mobile robot. The position of the moving object is acquired from color-based image information obtained by a 2-DOF active camera mounted on the mobile robot. The direction and rotational angular velocity of the moving object are estimated using a state estimator. A Kalman filter is used as the state estimator because of its robustness against the noise and uncertainties included in the input data. After estimating the trajectory of the moving object, we decide on the optimal trajectory and plan the motion of the mobile robot to capture the target object within the shortest distance and time. The effectiveness of the proposed method is demonstrated by simulations and experiments. (A minimal code sketch of a constant-velocity Kalman filter follows this entry.)

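The abstract does not specify the filter's state model; the sketch below assumes a standard constant-velocity Kalman filter over image-plane position, a common choice rather than the authors' exact formulation. All noise parameters are hypothetical.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2D constant-velocity Kalman filter for smoothing object positions."""

    def __init__(self, dt=1.0, process_var=1e-2, meas_var=1.0):
        self.x = np.zeros(4)                      # state: [px, py, vx, vy]
        self.P = np.eye(4) * 1e3                  # large initial uncertainty
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)  # constant-velocity motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)  # only position is measured
        self.Q = np.eye(4) * process_var
        self.R = np.eye(2) * meas_var

    def step(self, z):
        """One predict/update cycle given a position measurement z = (px, py)."""
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update.
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x.copy()

# Hypothetical usage: feed noisy centroid measurements frame by frame.
kf = ConstantVelocityKF(dt=0.1)
for z in [(100, 50), (102, 53), (105, 55)]:
    print(kf.step(z))
```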

3D Reconstruction Using a Single Camera (단일 카메라를 이용한 3차원 공간 정보 생성)

  • Kwon, Oh-Young;Seo, Kyoung-Taek
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.12
    • /
    • pp.2943-2948
    • /
    • 2015
  • We perform 3D reconstruction using a single camera and, based on the resulting information, study a driving assistance system that can tell the driver how to pass an obstacle ahead. Although the depth accuracy is limited, the method can provide information on whether the vehicle can pass an obstacle straight ahead. For the 3D reconstruction, the camera's internal parameters are measured, the fundamental matrix is calculated, feature points are matched, and triangulation is performed on this basis. The experiments confirm that, while the depth information contains errors, the X- and Y-axis information used to decide whether an obstacle can be passed is reliable. (A minimal code sketch of the two-view reconstruction pipeline follows this entry.)
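
The abstract outlines the pipeline (intrinsic parameters, fundamental matrix, matching, triangulation) without code; the sketch below is one possible OpenCV realization under the assumption of ORB features and known intrinsics K, not the authors' implementation.

```python
import cv2
import numpy as np

def reconstruct_two_frames(img1, img2, K):
    """Sparse 3D points (up to scale) from two frames of a moving single camera."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Fundamental matrix with RANSAC, then keep only inlier correspondences.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = inliers.ravel().astype(bool)
    pts1, pts2 = pts1[inliers], pts2[inliers]

    # Essential matrix from F and the intrinsics, then relative pose.
    E = K.T @ F @ K
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    # Triangulate with the first camera at the origin.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (X[:3] / X[3]).T                    # N x 3 points, up to scale
```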

Object Tracking & PTZ camera Control for Intelligent Surveillance System (지능형 감시 시스템을 위한 객체 추적 및 PTZ 카메라 제어)

  • Park, Ho-Sik;Hwang, Suen-Ki;Nam, Kee-Hwan;Bae, Cheol-Soo;Lee, Jin-Ki;Kim, Tae-Woo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.6 no.2
    • /
    • pp.95-100
    • /
    • 2013
  • Smart surveillance is the use of automatic video analysis technologies in video surveillance applications. We present a robust object tracking method using a pan-tilt-zoom (PTZ) camera for an intelligent surveillance system. In an experiment with 78 vehicles, the tracking success rates for moving and non-moving objects were 97.4% and 91%, respectively, and the success rate of the PTZ control for capturing license plate images was 84.6%. (A minimal code sketch of a proportional PTZ re-centering rule follows this entry.)
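
The abstract gives no details of the PTZ control law; the sketch below is a generic proportional re-centering rule, with the field-of-view values and gain purely hypothetical.

```python
def ptz_correction(cx, cy, frame_w, frame_h, hfov_deg=60.0, vfov_deg=34.0, gain=0.5):
    """Proportional pan/tilt correction (degrees) that re-centres a tracked object.

    (cx, cy)           : centroid of the tracked object in pixels.
    hfov_deg, vfov_deg : assumed horizontal/vertical field of view of the camera.
    gain               : proportional gain; 1.0 would aim to re-centre in one step.
    """
    err_x = (cx - frame_w / 2.0) / frame_w      # -0.5 .. 0.5 of the frame width
    err_y = (cy - frame_h / 2.0) / frame_h
    pan_deg = gain * err_x * hfov_deg           # positive -> pan right
    tilt_deg = -gain * err_y * vfov_deg         # positive -> tilt up (image y grows downward)
    return pan_deg, tilt_deg

# Hypothetical usage: object centroid found at (900, 300) in a 1280x720 frame.
print(ptz_correction(900, 300, 1280, 720))
```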