• Title/Summary/Keyword: Vision based control


Vision-Based Robot Manipulator for Grasping Objects

  • 백영민;안호석;최진영
    • KIEE Conference Proceedings / Proceedings of the 2007 KIEE Symposium, Information and Control Section / pp.331-333 / 2007
  • The robot manipulator is one of the important components of a service robot. Until now, there has been a great deal of research on robot manipulators that can imitate the functions of a human being by recognizing and grasping objects. In this paper, we present a robot arm based on an object-recognition vision system. We implemented closed-loop control that uses feedback from visual information, and used a sonar sensor to improve accuracy. We placed a web camera on top of the hand to recognize objects. We also discuss some vision-based manipulation issues and the features of our system.
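The closed-loop behaviour the abstract describes (advance toward the object until the range sensor confirms it is within reach) can be sketched as below; the step size, grasp distance, and the `get_sonar_range` callback are illustrative assumptions, not values from the paper.

```python
def approach_object(get_sonar_range, step=0.02, grasp_range=0.05, max_steps=100):
    """Closed-loop approach: advance in small steps until the sonar
    range falls below the grasp distance, then report how far we moved.
    get_sonar_range is a hypothetical sensor-read callback."""
    moved = 0.0
    for _ in range(max_steps):
        if get_sonar_range(moved) <= grasp_range:
            return moved
        moved += step
    raise RuntimeError("object not reached within max_steps")

# Simulated sensor: object initially 0.40 m away, range shrinks as we move
distance = lambda moved: 0.40 - moved
moved = approach_object(distance)
print(moved)
```

In the paper the coarse target position comes from the camera and the sonar refines the final approach; here the sonar feedback alone drives the loop for brevity.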


Development of a Vision Control Scheme Using Extended Kalman Filtering for Robot Position Control

  • 장완식;김경석;박성일;김기영
    • Journal of the Korean Society for Nondestructive Testing / Vol. 23 No. 1 / pp.21-29 / 2003
  • When a vision system is used for real-time robot position control, it is very important to reduce the computation time required to estimate the parameters contained in the vision system model. Unfortunately, the commonly used batch processing method requires iterative computation and therefore a long computation time, which makes real-time robot position control difficult. In contrast, the extended Kalman filtering used in this study is convenient to apply and, because it is computed recursively, has the great advantage of reducing the time needed to compute the vision system parameters. Therefore, in this study, extended Kalman filtering was applied to the vision control scheme used for real-time robot position control. The vision system model used here contains six parameters describing the camera's intrinsic parameters (orientation, focal length, etc.) and extrinsic parameters (the relative position between the camera and the robot). Extended Kalman filtering was applied to estimate these parameters, and also to estimate the robot joint angles required to drive the robot using the six estimated parameters. Finally, the validity of the vision control scheme developed with extended Kalman filtering was verified through robot position control experiments.
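The recursive-versus-batch point above can be illustrated with a minimal Kalman-filter parameter estimator. This is a sketch under simplifying assumptions: a toy two-parameter linear measurement model stands in for the paper's six view parameters, and the noise levels are invented.

```python
import numpy as np

def kf_update(theta, P, x, y, R=0.01):
    """One recursive update of a Kalman filter estimating the parameters
    theta = [a, b] of the measurement model y = a*x + b. Each new
    measurement refines the estimate without reprocessing old data,
    which is the advantage over batch (iterative) estimation."""
    H = np.array([[x, 1.0]])            # measurement Jacobian dy/dtheta
    y_pred = H @ theta                  # predicted measurement
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T / S                     # Kalman gain
    theta = theta + (K * (y - y_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return theta, P

# Hypothetical data stream: true parameters a = 2.0, b = 0.5
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = np.eye(2) * 100.0                   # large initial uncertainty
for x in rng.uniform(0, 10, 200):
    y = 2.0 * x + 0.5 + rng.normal(0, 0.1)
    theta, P = kf_update(theta, P, x, y)
print(theta)                            # converges toward [2.0, 0.5]
```

The paper's model is nonlinear in its six parameters, so the true EKF would linearize a camera projection at each step; the update structure, however, is the same as shown.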

Robust Defect Size Measuring Method for an Automated Vision Inspection System

  • 주영복;허경무
    • Journal of Institute of Control, Robotics and Systems / Vol. 19 No. 11 / pp.974-978 / 2013
  • AVI (Automated Vision Inspection) systems automatically detect defect features and measure their sizes using camera vision. AVI systems usually report different measurements for the same defect, with some variation in position or rotation, mainly because different images are provided. This is caused by variations in the image acquisition process, including optical factors, non-uniform illumination, random noise, and so on. For this reason, conventional area-based defect measuring methods have problems with robustness and consistency. In this paper, we propose a new defect size measuring method that overcomes this problem by utilizing volume information, which is completely ignored by area-based methods. The results show that the proposed method dramatically improves the robustness and consistency of defect size measurement.
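The area-versus-volume distinction can be sketched as follows. This is a toy illustration, not the paper's method: the defect image, the box blur standing in for acquisition variation, and the thresholds are all invented.

```python
import numpy as np

def area_size(img, thresh):
    """Area-based size: count of pixels above a fixed threshold.
    Sensitive to blur, focus, and illumination changes."""
    return int((img > thresh).sum())

def volume_size(img, background=0.0):
    """Volume-based size: total intensity above the background level.
    A blur redistributes intensity but roughly preserves its sum."""
    return float(np.clip(img - background, 0, None).sum())

# Hypothetical defect: a bright square, plus a blurred copy simulating
# the same defect captured under slightly different optics
defect = np.zeros((32, 32))
defect[12:20, 12:20] = 1.0

blurred = defect.copy()
for _ in range(3):                      # crude 5-point box blur
    blurred = (blurred
               + np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
               + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1)) / 5.0

print(area_size(defect, 0.5), area_size(blurred, 0.5))  # threshold-sensitive
print(volume_size(defect), volume_size(blurred))        # stable across blur
```

The volume measure stays stable between the two acquisitions while the thresholded area shifts, which is the robustness argument the abstract makes.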

A GPU-Based Parallel Processing Algorithm for Fast Inspection of Semiconductor Wafers

  • 박영대;김준식;주효남
    • Journal of Institute of Control, Robotics and Systems / Vol. 19 No. 12 / pp.1072-1080 / 2013
  • At present, many vision inspection techniques are used in industrial production. In particular, in the semiconductor industry, the vision inspection system for wafers is very important. Inspection techniques for semiconductor wafer production are also required to deliver both high precision and fast inspection. To achieve these objectives, parallel processing of the inspection algorithm is essential. In this paper, we propose a GPU (Graphics Processing Unit)-based parallel processing algorithm for the fast inspection of semiconductor wafers. The proposed algorithm is implemented on GPU boards made by NVIDIA. The defect detection performance of the GPU implementation is the same as that of a single-CPU implementation, but its execution time is about 210 times faster.
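The data-parallel decomposition behind such a speedup can be sketched on the CPU: split the wafer image into independent tiles and inspect them concurrently. This is an illustrative stand-in, not the paper's algorithm; on a GPU each tile (or pixel) would map to a CUDA thread block rather than a thread-pool task, and the defect test shown is invented.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def inspect_tile(args):
    """Per-tile defect check: flag pixels deviating strongly from the
    tile mean. Each tile is independent, so tiles can run in parallel."""
    tile, thresh = args
    return int((np.abs(tile - tile.mean()) > thresh).sum())

def inspect_wafer(image, tile=64, thresh=0.5, workers=4):
    """Split the wafer image into tiles and inspect them concurrently,
    mimicking the data-parallel decomposition used on the GPU."""
    tiles = [(image[r:r + tile, c:c + tile], thresh)
             for r in range(0, image.shape[0], tile)
             for c in range(0, image.shape[1], tile)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(inspect_tile, tiles))

# Synthetic wafer image with a small injected defect
rng = np.random.default_rng(1)
wafer = rng.normal(0, 0.05, (256, 256))
wafer[100:104, 100:104] += 2.0          # 16 defective pixels
print(inspect_wafer(wafer))
```

Because every tile is processed with the same kernel on disjoint data, the result is identical to a serial scan, matching the abstract's claim that detection performance is unchanged while only execution time improves.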

Visual Servoing of a Mobile Manipulator Based on Stereo Vision

  • Lee, H.J.;Park, M.G.;Lee, M.C.
    • ICROS Conference Proceedings / ICCAS 2003 / pp.767-771 / 2003
  • In this study, a stereo vision system is applied to a mobile manipulator for effective task execution. The robot can recognize a target and compute its position using the stereo vision system. While a monocular vision system needs additional properties such as the geometric shape of a target, a stereo vision system enables the robot to find the position of a target without additional information. Many algorithms have been studied and developed for object recognition, but most of these approaches suffer from computational complexity and are inadequate for real-time visual servoing, whereas color information allows simple recognition in real time. In this paper, we describe object recognition using color, the stereo matching method, recovery of 3D space, and visual servoing.
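The reason stereo needs no shape prior is the standard depth-from-disparity relation for a rectified pair, Z = fB/d. A minimal sketch, with focal length, baseline, and pixel coordinates as invented example values:

```python
def stereo_depth(f_px, baseline_m, x_left, x_right):
    """Depth from horizontal disparity in a rectified stereo pair:
    Z = f * B / d, with focal length f in pixels, baseline B in metres,
    and disparity d = x_left - x_right in pixels."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("target must have positive disparity")
    return f_px * baseline_m / d

# Hypothetical rig: f = 700 px, 10 cm baseline, matched feature at
# x = 400 px (left image) and x = 365 px (right image)
depth = stereo_depth(700, 0.10, 400, 365)
print(depth)
```

Once the same feature (found here by the color matching the paper describes) is located in both images, this one formula recovers its distance, which a monocular system cannot do without knowing the object's size.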


Light-Adaptive Vision System for Remote Surveillance Using an Edge Detection Vision Chip

  • Choi, Kyung-Hwa;Jo, Sung-Hyun;Seo, Sang-Ho;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology / Vol. 20 No. 3 / pp.162-167 / 2011
  • In this paper, we propose a vision system using a field-programmable gate array (FPGA) and a smart vision chip. The output of the vision chip varies with illumination conditions, which makes the chip suitable for a surveillance system in a dynamic environment. However, because the output swing of the smart vision chip is too small for the FPGA to reliably confirm the warning signal, a modification was needed to obtain a reliable signal. The proposed system is based on the transmission control protocol/internet protocol (TCP/IP), which enables monitoring from a remote place. The warning signal indicates that an object is too near.

Vision-Based Roadway Sign Recognition

  • Jiang, Gang-Yi;Park, Tae-Young;Hong, Suk-Kyo
    • Transactions on Control, Automation and Systems Engineering / Vol. 2 No. 1 / pp.47-55 / 2000
  • In this paper, a vision-based detection algorithm for an automated vehicle control system, based on roadway sign information, is proposed. First, in order to detect roadway signs, the color scene image is enhanced under hue invariance. Fuzzy logic is employed to simplify the enhanced color image into a binary image, and the binary image is morphologically filtered. Then, an effective sign-locating algorithm based on the binary rank order transform (BROT) is used to extract signs from the image; this algorithm performs better than those previously presented. Finally, the inner shapes of roadway signs carrying curving-roadway direction information are recognized by neural networks. Experimental results show that the new detection algorithm is simple and robust, and performs well on real sign detection. The results also show that the neural networks used can accurately recognize the inner shapes of signs, even for very noisy shapes.
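The hue-invariance idea above rests on the fact that hue is largely unaffected by illumination intensity, unlike raw RGB. A minimal sketch of a hue-based pixel test for (say) red sign borders; the target hue, tolerance, and saturation cutoff are illustrative assumptions, not the paper's fuzzy-logic rules:

```python
import colorsys

def is_sign_hue(r, g, b, target_hue=0.0, tol=0.05):
    """Hue-based pixel test: a dark red and a bright red share the same
    hue, so the test survives illumination changes that shift RGB."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    dh = min(abs(h - target_hue), 1 - abs(h - target_hue))  # hue wraps at 1.0
    return dh < tol and s > 0.4          # require some saturation

print(is_sign_hue(0.8, 0.1, 0.1))        # bright red
print(is_sign_hue(0.4, 0.05, 0.05))      # same sign in shadow
print(is_sign_hue(0.2, 0.2, 0.8))        # blue background
```

The paper replaces this hard threshold with fuzzy membership functions before binarization, but the invariance being exploited is the same.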


Omni-directional Vision SLAM Using a Motion Estimation Method Based on Fisheye Images

  • 최윤원;최정원;대염염;이석규
    • Journal of Institute of Control, Robotics and Systems / Vol. 20 No. 8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on obstacle feature extraction using Lucas-Kanade optical flow (LKOF) motion detection and images obtained through fish-eye lenses mounted on a robot. Omni-directional image sensors have distortion problems because they use a fish-eye lens or mirror, but they allow real-time image processing for mobile robots because all information around the robot is measured at once. Previous omni-directional vision SLAM research used feature points in fully corrected fisheye images, whereas the proposed algorithm corrects only the feature points of obstacles, which yields faster processing. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through downward-facing fish-eye lenses. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, the robot position is estimated with an extended Kalman filter based on the obstacle positions obtained by LKOF, and a map is created. The reliability of the mapping algorithm is confirmed by comparing maps obtained with the proposed algorithm against real maps.
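The LKOF motion-vector step above can be sketched with a single-window Lucas-Kanade least-squares solve. This is a sketch under stated assumptions: a synthetic smooth blob stands in for an obstacle feature patch, and one motion vector is solved over the whole window rather than per feature point.

```python
import numpy as np

def lk_flow(img0, img1):
    """Single-window Lucas-Kanade: solve the least-squares system
    [Ix Iy] v = -It over all pixels for one motion vector v = (vx, vy),
    where Ix, Iy are spatial gradients and It the temporal difference."""
    Ix = np.gradient(img0, axis=1)
    Iy = np.gradient(img0, axis=0)
    It = img1 - img0
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                              # motion in pixels/frame

# Synthetic scene: a smooth blob shifted right by exactly 1 pixel
y, x = np.mgrid[0:64, 0:64]
blob  = np.exp(-((x - 30.0)**2 + (y - 32.0)**2) / 50.0)
moved = np.exp(-((x - 31.0)**2 + (y - 32.0)**2) / 50.0)
vx, vy = lk_flow(blob, moved)
print(vx, vy)                             # close to (1, 0)
```

In the paper this flow is evaluated only at labelled obstacle candidates, and the resulting vectors feed the EKF position update.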

Implementation of Tracking and Grasping a Moving Object Using Visual Feedback

  • 권철;강형진;박민용
    • KIEE Conference Proceedings / Proceedings of the 1995 KIEE Autumn Conference / pp.579-582 / 1995
  • Recently, vision systems have found a wide and growing field of application on account of the vast information available from the visual mechanism. In the control field especially, vision systems have been applied to industrial robots. In this paper, an object tracking and grasping task is accomplished by a robot vision system with a camera in the robot hand. A camera setting method is proposed to implement the task in a simple way. In spite of calibration error, a stable grasping task is achieved using a tracking control algorithm based on vision features.


An Application of a Computer Vision System for Determining Object Position in the Plane

  • 장완식
    • Journal of the Korean Society of Manufacturing Technology Engineers / Vol. 7 No. 2 / pp.62-68 / 1998
  • This paper presents an application of computer vision for determining the position of an unknown object in the plane. The presented control method estimates the six view parameters representing the relationship between the image plane coordinates and the real physical coordinates. The estimation of the six parameters is indispensable for transforming the 2-dimensional camera coordinates into the 3-dimensional spatial coordinates. The position of an unknown point is then estimated from the parameters estimated for each camera. The suitability of this control scheme is demonstrated experimentally by determining the position of an unknown object in the plane.
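The two-stage procedure above (fit the view parameters from known calibration points, then use them to locate an unknown point) can be sketched with a six-parameter affine model. This affine form is a simplifying assumption standing in for the paper's view-parameter model, and the calibration points are invented.

```python
import numpy as np

def fit_view_params(img_pts, world_pts):
    """Fit six view parameters (a1..a6) mapping image (u, v) to plane
    coordinates: X = a1*u + a2*v + a3, Y = a4*u + a5*v + a6,
    by least squares over known calibration correspondences."""
    A = np.column_stack([img_pts[:, 0], img_pts[:, 1], np.ones(len(img_pts))])
    params_x, *_ = np.linalg.lstsq(A, world_pts[:, 0], rcond=None)
    params_y, *_ = np.linalg.lstsq(A, world_pts[:, 1], rcond=None)
    return np.concatenate([params_x, params_y])

def apply_view_params(p, uv):
    """Map an image point through the six estimated parameters."""
    u, v = uv
    return p[0]*u + p[1]*v + p[2], p[3]*u + p[4]*v + p[5]

# Hypothetical calibration: four plane points with known pixel locations
img   = np.array([[0, 0], [100, 0], [0, 100], [100, 100.0]])
world = np.array([[0, 0], [0.5, 0], [0, 0.5], [0.5, 0.5]])
p = fit_view_params(img, world)
X, Y = apply_view_params(p, (50, 50))     # locate an unknown point
print(X, Y)
```

Repeating the fit for each camera, as the abstract describes, gives one parameter set per camera, after which any new image point can be mapped to plane coordinates.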
