• Title/Abstract/Keyword: 3D Robot Vision

Search results: 138 items

A New Robotic 3D Inspection System of Automotive Screw Hole

  • Baeg, Moon-Hong;Baeg, Seung-Ho;Moon, Chan-Woo;Jeong, Gu-Min;Ahn, Hyun-Sik;Kim, Do-Hyun
    • International Journal of Control, Automation, and Systems / Vol. 6, No. 5 / pp.740-745 / 2008
  • This paper presents a new non-contact 3D robotic inspection system to measure the precise positions of screw and punch holes on a car body frame. The newly developed sensor consists of a CCD camera, two laser line generators, and an LED light. This lightweight sensor can be mounted on an industrial robot hand. An inspection algorithm and a system that work with this sensor are presented. In performance evaluation tests, the measurement accuracy of this inspection system was about 200 μm, which is sufficient for the automotive industry.
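
The geometry behind such a camera-plus-laser-line sensor is standard single-plane triangulation. A minimal sketch of that step, not the authors' implementation, assuming the camera intrinsics and the laser-plane parameters come from a prior calibration:

```python
import numpy as np

def laser_point_to_3d(pixel, K, laser_plane):
    """Triangulate a 3D point from a pixel on the laser stripe.

    pixel       -- (u, v) image coordinates of a point on the laser line
    K           -- 3x3 camera intrinsic matrix
    laser_plane -- (n, d): the calibrated laser plane n . X = d in camera coords
    """
    n, d = laser_plane
    # Back-project the pixel to a viewing ray in the camera frame.
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    # Intersect the ray with the known laser plane: (t * ray) . n = d.
    t = d / float(n @ ray)
    return t * ray  # 3D point in camera coordinates
```

Collecting such points along the stripe and fitting a circle to the ones lying on a hole edge would yield the hole's 3D position.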

A Novel Robot Sensor System Utilizing the Combination Of Stereo Image Intensity And Laser Structured Light Image Information

  • Lee, Hyun-Ki;Xingyong, Song;Kim, Min-Young;Cho, Hyung-Suck
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2005년도 ICCAS / pp.729-734 / 2005
  • One of the important research issues for mobile robots is how to detect and recognize the 3D environment quickly and accurately. Sensing methods based on laser structured light and/or stereo vision are the most widely used among the many methodologies developed to date; however, these methods still need higher accuracy and reliability before they can be used in real-world environments. In this paper, a new robotic environment-sensing algorithm is presented that combines the information of the intensity image with that of the laser structured-light image. To see how effectively the algorithm applies to real environments, we developed a sensor system that can be mounted on a mobile robot and tested its performance in a series of environments.
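
A minimal sketch of this kind of intensity/structured-light combination, assuming rectified stereo grayscale images and a precomputed depth map plus reliability mask from the laser stripe (the fusion rule, function names, and parameters are illustrative, not taken from the paper):

```python
import cv2
import numpy as np

def fused_depth(left_gray, right_gray, laser_depth, laser_mask,
                focal_px=700.0, baseline_m=0.12):
    """Combine stereo-intensity depth with structured-light depth.

    laser_depth -- dense map of depths recovered from the laser stripe (meters)
    laser_mask  -- boolean map, True where the stripe gives a reliable depth
    """
    # Dense disparity from image intensity (semi-global block matching).
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Convert disparity to metric depth: Z = f * B / d.
    with np.errstate(divide="ignore"):
        stereo_depth = focal_px * baseline_m / disparity
    stereo_depth[disparity <= 0] = np.nan

    # Where the laser stripe is visible, prefer its (usually more accurate) depth.
    return np.where(laser_mask, laser_depth, stereo_depth)
```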


스테레오 영상을 활용한 3차원 지도 복원과 동적 물체 검출에 관한 연구 (A Study of 3D World Reconstruction and Dynamic Object Detection using Stereo Images)

  • 서보길;윤영호;김규영
    • 한국산학기술학회논문지 / Vol. 20, No. 10 / pp.326-331 / 2019
  • In real environments there are as many moving, dynamic objects as there are stationary, static ones. A person can easily tell static objects from dynamic ones, but an autonomous vehicle or a mobile robot cannot. Accurately distinguishing static from dynamic objects is therefore important for successful and stable autonomous driving. To do this, autonomous vehicles and mobile robots can use various sensor systems such as cameras and LiDAR; among these, stereo camera images are widely used for autonomous driving. Stereo images can be used not only for object-recognition tasks such as segmentation, classification, and tracking, but also for navigation tasks such as 3D map reconstruction. In this study, we propose a method for distinguishing static and dynamic objects using stereo images for vehicles and robots driving in real time, reconstruct a 3D map so that it can also be used for navigation, and present the results of applying the method together with an accuracy analysis (99.81%) that confirms its performance.
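
One common way to realize this kind of static/dynamic separation with stereo data is to reproject the previous frame's 3D points into the current frame using the estimated ego-motion and flag points whose observed depth disagrees with the prediction. The sketch below follows that general idea; the ego-motion source and the tolerance are assumptions, not details from the paper:

```python
import numpy as np

def flag_dynamic_points(prev_points, curr_depth, K, T_prev_to_curr, depth_tol=0.15):
    """Mark 3D points as dynamic if they violate the static-scene assumption.

    prev_points    -- (N, 3) points from the previous stereo frame (camera coords)
    curr_depth     -- (H, W) depth map of the current stereo frame (meters)
    K              -- 3x3 camera intrinsics
    T_prev_to_curr -- 4x4 ego-motion transform (e.g. from visual odometry)
    """
    # Move the previous points into the current camera frame.
    homog = np.hstack([prev_points, np.ones((len(prev_points), 1))])
    pts = (T_prev_to_curr @ homog.T).T[:, :3]

    # Project into the current image.
    proj = (K @ pts.T).T
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)

    h, w = curr_depth.shape
    dynamic = np.zeros(len(pts), dtype=bool)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (pts[:, 2] > 0)

    # A static point should be observed at (about) its predicted depth;
    # a large mismatch suggests the point belongs to a moving object.
    expected = pts[valid, 2]
    observed = curr_depth[v[valid], u[valid]]
    dynamic[valid] = np.abs(observed - expected) > depth_tol
    return dynamic
```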

에지 및 픽셀 데이터를 이용한 어레이구조의 스테레오 매칭 알고리즘 (Stereo matching algorithm based on systolic array architecture using edges and pixel data)

  • 정우영;박성찬;정홍
    • 대한전기학회:학술대회논문집 / 대한전기학회 2003년도 학술회의 논문집 정보 및 제어부문 B / pp.777-780 / 2003
  • Researchers have long tried to create a vision system that works like the human eye, and many studies have produced notable results. Among these, stereo vision is the most similar to human sight: it is the process of recreating 3-D spatial information from a pair of 2-D images. In this paper, we design a stereo matching algorithm based on a systolic array architecture that uses both edges and pixel data. This more advanced vision system addresses several problems of previous stereo vision systems: it reduces noise and improves the matching rate by using edges together with pixel data, and it improves processing speed by using a highly integrated single-chip FPGA and compact modules. The system can be applied to robot vision, autonomous vehicles, and artificial satellites.
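
The matching cost that such a systolic array pipelines in hardware can be sketched in software as a window-based cost blending intensity differences with edge (gradient) differences; the window size and blend weight below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import convolve, sobel

def match_cost(left, right, d, win=5, edge_weight=0.5):
    """Combined intensity + edge matching cost for a candidate disparity d.

    Sum of absolute differences over a win x win window, computed on both the
    raw intensities and their horizontal-gradient (edge) images, then blended.
    """
    left_f = left.astype(float)
    shifted = np.roll(right.astype(float), d, axis=1)   # candidate correspondence
    diff_pix = np.abs(left_f - shifted)
    diff_edge = np.abs(sobel(left_f, axis=1) - sobel(shifted, axis=1))

    kernel = np.ones((win, win))
    return ((1 - edge_weight) * convolve(diff_pix, kernel)
            + edge_weight * convolve(diff_edge, kernel))

def disparity_map(left, right, max_disp=32):
    """Winner-takes-all disparity from the combined cost volume."""
    costs = np.stack([match_cost(left, right, d) for d in range(max_disp)])
    return np.argmin(costs, axis=0)
```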


서비스 로봇을 위한 지시 물체 분할 방법 (Segmentation of Pointed Objects for Service Robots)

  • 김형오;김수환;김동환;박성기
    • 로봇학회논문지 / Vol. 4, No. 2 / pp.139-146 / 2009
  • This paper describes how a person indicates an unknown object to a robot with a pointing gesture while interacting with it. Using a stereo vision sensor, the proposed method consists of three stages: detection of the operator's face, estimation of the pointing direction, and extraction of the pointed object. The operator's face is recognized using Haar-like features, and the 3D pointing direction is then estimated from the shoulder-to-hand line. Finally, an unknown object is segmented from the 3D point cloud in the estimated region of interest. On the basis of this method, we implemented an object registration system on our mobile robot and obtained reliable experimental results.
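
A minimal sketch of the pointing-direction step: the shoulder-to-hand line defines a ray, and scene points inside a narrow cone around that ray form the region of interest for segmentation (the cone angle is an assumed parameter, not one reported in the paper):

```python
import numpy as np

def pointed_region(shoulder, hand, cloud, cone_half_angle_deg=10.0):
    """Return the cloud points that lie inside a cone around the pointing ray.

    shoulder, hand -- 3D positions (meters) estimated from the stereo sensor
    cloud          -- (N, 3) scene point cloud
    """
    direction = hand - shoulder
    direction = direction / np.linalg.norm(direction)

    # Angle between the pointing ray and the ray from the hand to each point.
    to_points = cloud - hand
    dist = np.linalg.norm(to_points, axis=1)
    cos_angle = (to_points @ direction) / np.maximum(dist, 1e-9)

    inside = cos_angle > np.cos(np.radians(cone_half_angle_deg))
    return cloud[inside]
```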


Development of a Hovering Robot System for Calamity Observation

  • Kang, M.S.;Park, S.;Lee, H.G.;Won, D.H.;Kim, T.J.
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2005년도 ICCAS / pp.580-585 / 2005
  • A QRT (Quad-Rotor Type) hovering robot system is developed for quick detection and observation of conditions in calamity environments such as indoor fire sites. The UAV (Unmanned Aerial Vehicle) is equipped with four propellers, each driven by its own electric motor, an embedded DSP controller, an INS (Inertial Navigation System) using 3-axis rate gyros, a CCD camera with a wireless transmitter for observation, and an ultrasonic range sensor for height control. The developed hovering robot shows stable flight performance with RIC (Robust Internal-loop Compensator) based disturbance compensation and vision-based localization. The UAV can also avoid obstacles using eight IR and four ultrasonic range sensors. The VTOL (Vertical Take-Off and Landing) vehicle flies into indoor fire sites and sends the images captured by the CCD camera to the operator. Such small-sized UAVs can be widely used in various calamity-observation tasks in environments hazardous to humans.
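
The ultrasonic height loop mentioned above is simple enough to illustrate. A minimal PD altitude controller driven by the ultrasonic range reading; the gains and hover-thrust offset are illustrative assumptions, and the paper's actual controller is the RIC-based scheme, not this sketch:

```python
def height_controller(target_m, measured_m, prev_error, dt,
                      kp=1.2, kd=0.4, hover_thrust=0.55):
    """One step of a PD altitude loop driven by an ultrasonic range reading.

    Returns (thrust_command, error), where thrust_command is a normalized
    collective thrust shared by the four rotors.
    """
    error = target_m - measured_m
    derivative = (error - prev_error) / dt
    thrust = hover_thrust + kp * error + kd * derivative
    return min(max(thrust, 0.0), 1.0), error
```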


어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM (3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner)

  • 최윤원;최정원;이석규
    • 제어로봇시스템학회논문지 / Vol. 21, No. 7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is bulky and the computation of depth information for omni-directional images is slow. In this paper, we use a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a constant distance from the camera. We compute fusion points from the planar obstacle coordinates obtained from the two-dimensional laser scanner and the obstacle outlines obtained from the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps built with the proposed algorithm against real maps.
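
A minimal sketch of the fusion step, assuming the laser-to-camera extrinsics are known and an equidistant fisheye projection model (both the model and the parameter names are assumptions, not the calibration used in the paper). Projecting each 2D laser point into the fisheye image tells us which outline pixel carries that metric range, which is what produces the fusion points described above:

```python
import numpy as np

def project_scan_to_fisheye(scan_xy, T_cam_from_laser, f_px, cx, cy):
    """Project 2D laser points into a fisheye image (equidistant model).

    scan_xy          -- (N, 2) laser points in the scanner plane (meters)
    T_cam_from_laser -- 4x4 extrinsic transform, laser frame -> camera frame
    f_px, cx, cy     -- fisheye focal length (pixels) and principal point
    """
    # Lift scan points to 3D (z = 0 in the scanner frame) and move to the camera frame.
    n = len(scan_xy)
    pts = np.hstack([scan_xy, np.zeros((n, 1)), np.ones((n, 1))])
    pc = (T_cam_from_laser @ pts.T).T[:, :3]

    # Equidistant fisheye projection: image radius r = f * theta.
    norm = np.linalg.norm(pc, axis=1)
    theta = np.arccos(np.clip(pc[:, 2] / norm, -1.0, 1.0))  # angle from optical axis
    phi = np.arctan2(pc[:, 1], pc[:, 0])
    r = f_px * theta
    u = cx + r * np.cos(phi)
    v = cy + r * np.sin(phi)
    return np.stack([u, v], axis=1)  # pixel where each laser range lands
```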

펄스 위상차와 스트럭춰드 라이트를 이용한 이동 로봇 시각 장치 구현 (Implementation of vision system for a mobile robot using pulse phase difference & structured light)

  • 방석원;정명진;서일홍;오상록
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1991년도 한국자동제어학술회의논문집(국내학술편); KOEX, Seoul; 22-24 Oct. 1991 / pp.652-657 / 1991
  • To date, the application areas of mobile robots have expanded, and many types of LRF (Laser Range Finder) systems have been developed to acquire three-dimensional information about unknown environments. In the real world, however, various noise sources (sunlight, fluorescent light) make it difficult to separate the reflected laser light from the noise. To overcome this restriction, we have developed a new type of vision system that enables a mobile robot to measure the distance to an object located 1-5 m ahead with an error of less than 2%. The separation and detection algorithm used in this system combines a pulse phase-difference method with multi-stripe structured light. The effectiveness and feasibility of the proposed vision system are demonstrated with 3-D maps of detected objects and an analysis of computation time.
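
The pulse phase-difference separation is implemented in hardware, but its underlying idea, suppressing constant ambient light (sunlight, fluorescent lamps) by subtracting frames captured in and out of phase with the laser pulses, can be sketched as follows; the frame sources and threshold are illustrative assumptions:

```python
import numpy as np

def extract_stripe(frames_on, frames_off, threshold=15.0):
    """Separate pulsed laser light from ambient illumination.

    frames_on  -- stack of frames captured while the laser pulse is on
    frames_off -- stack of frames captured between pulses (ambient only)

    Because sunlight and fluorescent light are nearly constant over a short
    pulse period, averaging and subtracting the off-phase frames cancels the
    ambient component and leaves only the laser stripes.
    """
    on = np.mean(np.asarray(frames_on, dtype=float), axis=0)
    off = np.mean(np.asarray(frames_off, dtype=float), axis=0)
    difference = on - off
    return difference > threshold   # boolean stripe mask
```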


2D 영상센서 기반 6축 로봇 팔 원격제어 (A Remote Control of 6 d.o.f. Robot Arm Based on 2D Vision Sensor)

  • 현웅근
    • 한국전자통신학회논문지 / Vol. 17, No. 5 / pp.933-940 / 2022
  • We developed a system that recognizes an operator's 3D hand position using a 2D image sensor and uses it to remotely control a 6-axis robot arm. The system consists of a 2D image-sensor module that captures image information about an object, an algorithm that converts the image information into robot-arm control commands, and a self-built 6-axis robot arm with its control system. The image sensor recognizes the shape and color of a glove worn by the operator and outputs its size and position; in this study, the velocity of the robot end-effector is controlled using this position information together with the size of the region surrounding the object. The method was verified on the self-built 6-axis robot, and experiments driven by the operator's hand motions confirmed that the proposed image-based control of the end-effector works successfully.
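
A minimal sketch of the image-to-command mapping described above: the glove's offset from the image center drives the lateral end-effector velocity, and its apparent size serves as a rough depth cue. All gains and the reference area are illustrative assumptions, not values from the paper:

```python
def hand_to_velocity(cx_px, cy_px, box_area_px, frame_w, frame_h,
                     ref_area_px=12000.0, gain_xy=0.2, gain_z=0.1):
    """Map a detected glove position/size to an end-effector velocity command.

    Horizontal and vertical offsets from the image center drive the x/y
    velocity; the apparent size of the glove (bounding-box area) is used as
    a rough depth cue to drive the velocity along z.
    Returns (vx, vy, vz) in meters per second.
    """
    # Normalized offsets in roughly [-0.5, 0.5] from the image center.
    dx = (cx_px - frame_w / 2.0) / frame_w
    dy = (cy_px - frame_h / 2.0) / frame_h

    vx = gain_xy * dx
    vy = -gain_xy * dy                      # image y axis points downward
    vz = gain_z * (box_area_px - ref_area_px) / ref_area_px
    return vx, vy, vz
```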

3D 부품모델 실시간 인식을 위한 로봇 비전기술 개발 (Development of Robot Vision Technology for Real-Time Recognition of Model of 3D Parts)

  • 심병균;최경선;장성철;안용석;한성현
    • 한국산업융합학회 논문집 / Vol. 16, No. 4 / pp.113-117 / 2013
  • This paper describes a new pattern-recognition-based technology for the non-contact inspection of optical-lens tilt and precision parts, including the external shape of lenses and electronic parts; with this development, defect detection for performance verification can be achieved. Standard specification data for surface defects such as scratches are entered directly and stored as reference data; the actually measured data are then compared with the standard reference data, and the resulting error classifies a part as defective when it exceeds the scheduled tolerance and as normal when it lies within it. The developed system can measure down to a single pixel, where one pixel corresponds to 37 μm × 37 μm (0.1369 × 10^-4 mm²), and measurement accuracy down to 1.5 × 10^-4 mm is possible; this performance and reliability were verified through experiments.
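
Given the stated pixel scale (one pixel covering 37 μm × 37 μm in the object plane), the core pass/fail comparison against the stored reference data can be sketched as follows; the tolerance value and function names are illustrative assumptions:

```python
PIXEL_SIZE_MM = 0.037  # one pixel corresponds to 37 um x 37 um in the object plane

def classify_part(measured_px, reference_px, tolerance_mm=0.01):
    """Compare a measured dimension (in pixels) against the reference data.

    The pixel count is converted to millimeters with the calibrated pixel
    size; a part is accepted when the deviation from the reference value
    stays within the scheduled tolerance, otherwise it is flagged defective.
    """
    measured_mm = measured_px * PIXEL_SIZE_MM
    reference_mm = reference_px * PIXEL_SIZE_MM
    error_mm = abs(measured_mm - reference_mm)
    return ("OK" if error_mm <= tolerance_mm else "DEFECT", error_mm)
```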