• Title/Abstract/Keyword: mobile robot vision

Search results: 316 items (processing time: 0.032 s)

Control of a mobile robot supporting a task robot on the top

  • Lee, Jang M.
    • Institute of Control, Robotics and Systems: Conference Proceedings / Proceedings of the 11th Korea Automatic Control Conference (KACC), Pohang, Korea, 24-26 Oct. 1996 / pp.1-7 / 1996
  • This paper addresses the control problem of a mobile robot supporting a task robot that needs to be positioned precisely. The main difficulty in the precise control of such a mobile robot is providing an accurate and stable base for the task robot: the end-plate of the mobile robot, which serves as the base of the task robot, cannot be positioned accurately without external position sensors. This difficulty is resolved here through vision information obtained from a camera attached to the end of the task robot. First, the camera parameters are measured using images of a fixed object captured by the camera. The measured parameters include the rotation, position, scale factor, and focal length of the camera; they are obtained from the features of each vertex of a hexagonal object and the pin-hole camera model. Using the measured pose (position and orientation) of the camera and the given kinematics of the task robot, the pose of the end-plate of the mobile robot is calculated and used for precise control of the mobile robot. Experimental results for the pose estimations are shown.
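
As an illustration of the pose-chaining step in the abstract above, the following is a minimal sketch, not the author's implementation: given the camera pose measured in the world frame and the camera-to-end-plate transform supplied by the task robot's kinematics, the end-plate pose is recovered by composing homogeneous transforms. All frame names and numeric values are illustrative assumptions.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# T_world_camera: camera pose measured from the fixed hexagonal object (assumed given).
T_world_camera = homogeneous(np.eye(3), np.array([1.20, 0.35, 0.90]))

# T_endplate_camera: camera pose in the end-plate frame, from the task robot's
# forward kinematics (assumed given by the manipulator model).
T_endplate_camera = homogeneous(np.eye(3), np.array([0.05, 0.00, 0.40]))

# End-plate pose in the world frame: T_world_endplate = T_world_camera * inv(T_endplate_camera)
T_world_endplate = T_world_camera @ np.linalg.inv(T_endplate_camera)
print(T_world_endplate[:3, 3])  # estimated end-plate position
```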


유비쿼터스 이동로봇용 천장 인공표식을 이용한 비젼기반 자기위치인식법 (Vision-based Self Localization Using Ceiling Artificial Landmark for Ubiquitous Mobile Robot)

  • 이주상;임영철;유영재
    • Journal of Korean Institute of Intelligent Systems / Vol. 15, No. 5 / pp.560-566 / 2005
  • This paper proposes a practical method for correcting the distorted images of a vision system used for self-localization of a ubiquitous mobile robot. Self-localization is an essential capability for a mobile robot and can be handled with a camera vision system. To secure a wide field of view, the vision system uses a fisheye lens, which introduces image distortion. In addition, because a mobile robot moves continuously, the images must be processed quickly to estimate its position. We therefore propose a practical image-distortion correction technique applicable to mobile robots and verify its performance through experiments.
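
As a hedged illustration of the kind of correction the abstract describes, and not the paper's method, the sketch below remaps pixels with a simple polynomial radial-distortion model; the coefficients k1, k2 are illustrative assumptions.

```python
import numpy as np

def undistort(img, k1=-0.30, k2=0.05):
    """Nearest-neighbour remap from corrected coordinates back to distorted source pixels."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # normalised coordinates in the corrected (output) image
    xn, yn = (xs - cx) / cx, (ys - cy) / cy
    r2 = xn * xn + yn * yn
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    # sample the distorted input image at the scaled radius
    src_x = np.clip((xn * scale) * cx + cx, 0, w - 1).astype(int)
    src_y = np.clip((yn * scale) * cy + cy, 0, h - 1).astype(int)
    return img[src_y, src_x]
```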

속도센서가 없는 비전시스템을 이용한 이동로봇의 목표물 추종 (Target Tracking Control of Mobile Robots with Vision System in the Absence of Velocity Sensors)

  • 조남섭;권지욱;좌동경
    • The Transactions of the Korean Institute of Electrical Engineers / Vol. 62, No. 6 / pp.852-862 / 2013
  • This paper proposes a target tracking control method for wheeled mobile robots with nonholonomic constraints using a backstepping-like feedback linearization. For target tracking, a vision system is applied to the mobile robot to obtain the relative posture between the robot and the target. No velocity sensors are used; the velocities of both the mobile robot and the target are therefore assumed to be unknown, and the proposed method uses only their maximum velocity information. First, pseudo commands for the forward linear velocity and the heading direction angle are designed from the kinematics, using the obtained image information. Then, the actual control inputs are designed so that the actual forward linear velocity and heading direction angle follow the pseudo commands. Simulations and experiments with the mobile robot confirm that the proposed control method can track the target even when no velocity sensors are used.
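
The kinematic idea behind such pseudo commands can be sketched as follows. This is not the paper's backstepping design; it only shows a forward-velocity command saturated by the maximum speed and a heading command toward the target, with illustrative gains.

```python
import math

def pseudo_commands(robot, target, v_max=0.5, k_v=0.8, k_w=2.0):
    """robot = (x, y, theta); target = (x, y). Returns (v, omega)."""
    dx, dy = target[0] - robot[0], target[1] - robot[1]
    dist = math.hypot(dx, dy)
    heading_des = math.atan2(dy, dx)
    # wrap the heading error to [-pi, pi]
    e_theta = math.atan2(math.sin(heading_des - robot[2]),
                         math.cos(heading_des - robot[2]))
    v = min(k_v * dist, v_max)   # forward linear velocity, saturated by v_max
    omega = k_w * e_theta        # heading-angle correction rate
    return v, omega
```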

전방향 구동 로봇에서의 비젼을 이용한 이동 물체의 추적 (Moving Target Tracking using Vision System for an Omni-directional Wheel Robot)

  • 김산;김동환
    • Journal of Institute of Control, Robotics and Systems / Vol. 14, No. 10 / pp.1053-1061 / 2008
  • In this paper, moving target tracking using binocular vision for an omni-directional mobile robot is addressed. In the binocular vision system, three-dimensional information on the target is extracted by vision processes including calibration, image correspondence, and 3D reconstruction. The robot controller uses SPI (Serial Peripheral Interface) to communicate efficiently between the robot master controller and the wheel controllers.
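
A minimal sketch of 3D reconstruction by triangulation for a rectified stereo pair is given below; the focal length, baseline, and principal point are illustrative calibration values, not figures from the paper.

```python
def triangulate(u_left, u_right, v, f=700.0, baseline=0.12, cx=320.0, cy=240.0):
    """Pixel coordinates in left/right rectified images -> 3-D point (metres)."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: correspondence is invalid")
    Z = f * baseline / disparity   # depth along the optical axis
    X = (u_left - cx) * Z / f
    Y = (v - cy) * Z / f
    return X, Y, Z
```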

이동로봇의 시각센서를 위한 동영상 압축기 구현 (Implementation of Visual Data Compressor for Vision Sensor of Mobile Robot)

  • 김형오;조경수;백문열;기창두
    • Journal of the Korean Society for Precision Engineering / Vol. 22, No. 9 / pp.99-106 / 2005
  • In recent years, vision sensors have been widely used on mobile robots for navigation and exploration. The analog transmission of visual data commonly used in this area, however, has disadvantages, including vulnerability to noise and difficulties in data storage; the large amount of data also makes this approach impractical for a mobile robot. In this paper, a digital compression technique based on MPEG-4 is proposed as a replacement for analog transmission, overcoming these disadvantages by using the DWT (Discrete Wavelet Transform) instead of the DCT (Discrete Cosine Transform). TI's DSP chip, the TMS320C6711, is used for the image encoder, and the performance of the proposed method is evaluated in terms of PSNR (Peak Signal-to-Noise Ratio), QP (Quantization Parameter), and bitrate.
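
Two building blocks named in the abstract, a one-level 2-D Haar wavelet transform (the simplest DWT) and the PSNR metric, can be sketched as follows; this is illustrative only and unrelated to the TMS320C6711 implementation.

```python
import numpy as np

def haar_dwt2(block):
    """One-level 2-D Haar DWT of an array with even height and width."""
    a = block.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # horizontal average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio between an original and a reconstructed image."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)
```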

3차원 공간 맵핑을 통한 로봇의 경로 구현 (Implementation of Path Finding Method using 3D Mapping for Autonomous Robotic)

  • 손은호;김영철;정길도
    • Journal of Institute of Control, Robotics and Systems / Vol. 14, No. 2 / pp.168-177 / 2008
  • Path finding is a key element in the navigation of a mobile robot. To find a path, the robot should know its position exactly, since position error exposes the robot to dangerous conditions: it could make the robot move in a wrong direction and be damaged by collision with surrounding obstacles. We propose a method for obtaining an accurate robot position. Localization of the mobile robot in its working environment is performed using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks located in the environment, and image processing and neural-network pattern-matching techniques are applied to find its location. After the self-positioning procedure, the 2-D scene from the vision system is overlaid onto a VRML scene. This paper describes how the self-positioning is realized and shows the overlay between the 2-D and VRML scenes. The suggested method defines a robot path successfully; an experiment applying the suggested algorithm to a mobile robot has been performed, and the result shows good path tracking.
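
One ingredient of such a landmark-based scheme, locating a known landmark template in a camera image by normalised cross-correlation, is sketched below; the neural-network matching and VRML overlay described in the abstract are not reproduced here.

```python
import numpy as np

def find_landmark(image, template):
    """Return ((row, col), score) of the best match by normalised cross-correlation."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template.astype(float) - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw].astype(float)
            patch = window - window.mean()
            denom = np.sqrt((patch * patch).sum()) * t_norm
            score = (patch * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```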

모노비전과 퍼지규칙을 이용한 이동로봇의 경로계획과 장애물회피 (Obstacle Avoidance and Path Planning for a Mobile Robot Using Single Vision System and Fuzzy Rule)

  • 배봉규;이원창;강근택
    • Korean Institute of Intelligent Systems: Conference Proceedings / Proceedings of the 2000 Fall Conference of the Korea Fuzzy Logic and Intelligent Systems Society / pp.274-277 / 2000
  • In this paper we propose new algorithms of path planning and obstacle avoidance for an autonomous mobile robot with a vision system. Distance variation is included in the path planning so that the robot approaches the target point while avoiding obstacles well. Fuzzy rules are applied to both trajectory planning and obstacle avoidance to improve the autonomy of the mobile robot. Computer simulations show that the proposed algorithm works well.
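
A minimal sketch of a fuzzy steering rule of the general kind described above follows; the membership functions and the two rules are illustrative assumptions, not the authors' rule base. Angles are in degrees and distances in metres.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steering_command(obstacle_dist, target_bearing):
    """Weighted-average defuzzification of two simple rules -> steering angle."""
    near = tri(obstacle_dist, 0.0, 0.0, 1.0)    # obstacle is NEAR
    far = tri(obstacle_dist, 0.5, 2.0, 2.0)     # obstacle is FAR
    # Rule 1: IF obstacle NEAR THEN turn hard away (+45 deg, illustrative)
    # Rule 2: IF obstacle FAR  THEN steer toward the target bearing
    w = near + far
    return (near * 45.0 + far * target_bearing) / w if w > 0 else target_bearing
```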


퍼지 규칙을 이용한 비전 및 무선 센서 네트워크 기반의 이동로봇의 자율 주행 및 위치 인식 (Navigation and Localization of Mobile Robot Based on Vision and Sensor Network Using Fuzzy Rules)

  • 허준영;강근택;이원창
    • Institute of Electronics Engineers of Korea: Conference Proceedings / 2008 Summer Conference of the Institute of Electronics Engineers of Korea / pp.673-674 / 2008
  • This paper presents a new navigation algorithm for an autonomous mobile robot equipped with vision, IR sensors, and a ZigBee sensor network, using fuzzy rules. We also show that the developed mobile robot with the proposed algorithm navigates well in complex unknown environments.


단일 카메라를 이용한 이동로봇의 자기 위치 추정 (Self-Localization of Mobile Robot Using Single Camera)

  • 김명호;이쾌희
    • Institute of Control, Robotics and Systems: Conference Proceedings / Proceedings of the 15th Conference of the Institute of Control, Robotics and Systems, 2000 / pp.404-404 / 2000
  • This paper presents a single-camera vision-based self-localization method for a corridor environment. We use the Hough transform to find parallel lines and vertical lines, use their intersection points as feature points, and calculate the relative distance from the mobile robot to these points. To match the environment map to the feature points, a search window is defined and self-localization is performed by a matching procedure. Experimental results show the suitability of this method.
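
One step of this approach, intersecting two Hough lines given in the (rho, theta) parameterisation to obtain a crossing point usable as a feature, can be sketched as follows; the line parameters passed in would come from a Hough transform of the corridor image.

```python
import numpy as np

def line_intersection(rho1, theta1, rho2, theta2):
    """Intersection of lines rho = x*cos(theta) + y*sin(theta); None if (nearly) parallel."""
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    b = np.array([rho1, rho2])
    if abs(np.linalg.det(A)) < 1e-9:
        return None                      # parallel lines do not cross
    x, y = np.linalg.solve(A, b)
    return float(x), float(y)
```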


다중 표식을 이용한 자율이동로봇의 자기위치측정 (Self-Localization of Autonomous Mobile Robot using Multiple Landmarks)

  • 강현덕;조강현
    • Journal of Institute of Control, Robotics and Systems / Vol. 10, No. 1 / pp.81-86 / 2004
  • This paper describes self-localization of a mobile robot from multiple candidate landmarks in an outdoor environment. Our robot uses an omnidirectional vision system for efficient self-localization; this vision system acquires visual information from all viewing directions. The robot uses as landmarks features that appear larger than others in the image, such as buildings, sculptures, and placards, represented by vertical edges and their merged regions. In our previous work, we found that landmark matching is difficult when the selected landmark candidates belong to a region of repeating vertical edges in the image. To overcome this problem, the robot uses merged regions of vertical edges: if the interval between vertical edges is short, the robot bundles them into the same region, and these merged features are selected as landmark candidates. The extracted merged regions of vertical edges therefore reduce the ambiguity of landmark matching. The robot compares the landmark candidates between the previous and current images and can thus find the same landmark across the image sequence using the proposed feature and method. Efficient self-localization results were achieved with this robust landmark matching method in experiments carried out on our campus.
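
The edge-merging idea can be sketched as follows: vertical edges whose horizontal spacing falls below a threshold are bundled into one candidate landmark region, reducing ambiguity when edges repeat. The threshold value is an illustrative assumption.

```python
def merge_vertical_edges(edge_columns, max_gap=10):
    """edge_columns: sorted x-positions of vertical edges -> list of (start, end) regions."""
    regions = []
    if not edge_columns:
        return regions
    start = prev = edge_columns[0]
    for x in edge_columns[1:]:
        if x - prev <= max_gap:
            prev = x                       # close enough: extend the current region
        else:
            regions.append((start, prev))  # gap too large: close region, start a new one
            start = prev = x
    regions.append((start, prev))
    return regions
```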