• Title/Abstract/Keyword: 3D robot vision

Search results: 138 items, processing time: 0.025 s

A 3-D Vision Sensor Implementation on Multiple TMS320C31 DSPs

  • V.옥센핸들러;A.벤스하이르;P.미셰;이상국
    • Journal of Sensor Science and Technology (센서학회지)
    • /
    • Vol. 7, No. 2
    • /
    • pp.124-130
    • /
    • 1998
  • High-speed 3-D vision systems are essential for autonomous robot and vehicle control applications. This paper describes the development of a stereo vision process consisting of three stages: extraction of edges from the left and right images, matching of corresponding edges, and computation of the 3-D map. The process was implemented on a VME 150/40 Imaging Technology vision system, a modular system composed of display, acquisition, 4 Mbytes of image frame memory, and three computation cards. The programmable computation modules, running at 40 MHz, are based on the TMS320C31 DSP with a 64×32-bit instruction cache and two 1024×32-bit RAMs. Each module carries 512 Kbytes of static RAM, 4 Mbytes of image memory, 1 Mbyte of flash EEPROM, and one serial port. Data transfer and exchange between modules take place over an 8-bit global video bus and three local configurable 8-bit pipeline video buses, while a VME bus is used for system management. Two DSPs perform edge detection on the left and right images, and the last processor carries out the matching process and the 3-D computation. On 512×512-pixel images, the sensor produces a dense 3-D map at about 1 Hz, depending on scene complexity. Using special-purpose multiprocessor cards could further improve these results.

  • PDF
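The pipeline above ends by computing a 3-D map from matched left/right edges. As a rough illustration of that last step, here is a minimal disparity-to-depth triangulation sketch for a rectified stereo pair; the focal length, baseline, and pixel coordinates are made-up values, not the paper's camera geometry.

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Triangulate depth for matched edge points in a rectified stereo pair.

    x_left, x_right: horizontal pixel coordinates of corresponding edges.
    focal_px: focal length in pixels; baseline_m: camera separation in meters.
    (Illustrative values only; the paper does not give its camera parameters.)
    """
    disparity = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    if np.any(disparity <= 0):
        raise ValueError("corresponding points must have positive disparity")
    # Standard pinhole stereo relation: Z = f * B / d
    return focal_px * baseline_m / disparity

# Matched edge pixels from the left/right images (hypothetical numbers):
z = depth_from_disparity([320.0, 310.0], [300.0, 295.0], focal_px=700.0, baseline_m=0.12)
print(z)  # depths in meters for each matched edge point
```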

A Study on Real-time Control of Bead Height and Joint Tracking Using Laser Vision Sensor

  • Kim, H. K.;Park, H.
    • International Journal of Korean Welding Society
    • /
    • Vol. 4, No. 1
    • /
    • pp.30-37
    • /
    • 2004
  • There have been continuous efforts to automate welding processes. This automation falls into two categories: weld seam tracking and weld quality evaluation. Recently, attempts to achieve these two functions simultaneously have been on the increase. For the study presented in this paper, a vision sensor was built, a vision system was constructed, and with it the 3-dimensional geometry of the bead is measured on-line. Because welding is characteristically a nonlinear process, a fuzzy controller is designed. With this, an adaptive control system is proposed which acquires the bead height and the coordinates of points on the bead along the horizontal fillet joint, performs seam tracking with those data, and at the same time controls the bead geometry to a uniform shape. A communication system, which enables communication with the industrial robot, is designed to control the bead geometry and to track the weld seam. Experiments were made with varied offset angles from the pre-taught weld path, and they showed that the adaptive system produces favorable results.

  • PDF
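The abstract designs a fuzzy controller for the nonlinear welding process. The sketch below shows the general shape of such a controller, a single-input fuzzy inference with triangular memberships and centroid defuzzification; the membership ranges, rule base, and speed-correction output are hypothetical, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed_correction(height_error_mm):
    """Map bead-height error to a travel-speed correction (mm/s).

    Rule base (illustrative, not the paper's): if the bead is too low,
    slow down to deposit more metal; if too high, speed up.
    """
    mu_low = tri(height_error_mm, -2.0, -1.0, 0.0)    # bead too low
    mu_ok = tri(height_error_mm, -1.0, 0.0, 1.0)      # bead on target
    mu_high = tri(height_error_mm, 0.0, 1.0, 2.0)     # bead too high
    # Centroid defuzzification over singleton outputs (-1, 0, +1 mm/s)
    num = -1.0 * mu_low + 0.0 * mu_ok + 1.0 * mu_high
    den = mu_low + mu_ok + mu_high
    return num / den if den > 0 else 0.0

print(fuzzy_speed_correction(-0.5))  # -> -0.5 (bead is low: slow down)
```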

Development of Automotive Position Measuring Vision System

  • Lee, Chan-Ho;Oh, Jong-Kyu;Hur, Jong-Sung;Han, Chul-Hi;Kim, Young-Su;Lee, Kyu-Ho;Hur, Jin
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004 ICCAS, Institute of Control, Robotics and Systems
    • /
    • pp.1511-1515
    • /
    • 2004
  • Machine vision systems play an important role in factory automation. Many of their applications are found in the automobile manufacturing industry, as an eye for robotic automation systems. In this paper, an automobile position measuring vision system (APMVS) applicable to a manufacturing line for under-body painting of a car is introduced. The APMVS measures the position and orientation of the car body to be sealed or painted by the robots. The configuration of the overall robotic sealing/painting system, the design and application procedure, and application examples are described.

  • PDF

Autonomous Calibration of a 2D Laser Displacement Sensor by Matching a Single Point on a Flat Structure

  • 정지훈;강태선;신현호;김수종
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 20, No. 2
    • /
    • pp.218-222
    • /
    • 2014
  • In this paper, we introduce an autonomous calibration method for a 2D laser displacement sensor (e.g. laser vision sensor, laser range finder) by matching a single point on a flat structure. Many arc welding robots are fitted with a 2D laser displacement sensor to expand their application by recognizing their environment (e.g. base metal and seam). In such systems, sensing data should be transformed to the robot's coordinates, and the geometric relation (i.e. rotation and translation) between the robot's coordinates and the sensor's coordinates must be known for the transformation. Calibration is the process of inferring this geometric relation between sensor and robot. Generally, matching more than 3 points is required to infer the geometric relation. However, we introduce a novel method that calibrates using only a 1-point match, together with a specific flat structure (i.e. a circular hole) which makes it possible to find the geometric relation from a single matched point. By moving the robot to a specific pose, we fix the rotation component of the calibration result to a constant, which is what allows a single point to suffice. The flat structure can be installed easily at a manufacturing site, because it has almost no volume (i.e. it is nearly a 2D structure). The calibration process is fully autonomous and needs no manual operation. The robot carrying the sensor moves to the specific pose by sensing features of the circular hole, such as the length of a chord and the center position of the chord. We show the precision of the proposed method through repeated experiments in various situations. Furthermore, we applied the result of the proposed method to sensor-based seam tracking with a robot, and report the difference in the robot's TCP (Tool Center Point) trajectory. This experiment shows that the proposed method ensures precision.
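The key idea above, fixing the rotation so that one matched point determines the translation, can be sketched as follows. The planar (2D) setting, yaw angle, and coordinate values are illustrative assumptions, not the paper's actual calibration geometry.

```python
import numpy as np

def translation_from_single_match(p_sensor, p_robot, yaw_rad):
    """Solve the sensor-to-robot translation from one matched point.

    As in the single-point scheme, the rotation is assumed known and fixed
    (here a planar yaw, an illustrative simplification), so the remaining
    unknown t follows directly from  p_robot = R @ p_sensor + t.
    """
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    R = np.array([[c, -s], [s, c]])
    return np.asarray(p_robot, dtype=float) - R @ np.asarray(p_sensor, dtype=float)

# Hypothetical match: hole center seen at (0.10, 0.02) m in the sensor frame,
# known to sit at (0.50, 0.30) m in robot coordinates, rotation fixed at 0.
t = translation_from_single_match([0.10, 0.02], [0.50, 0.30], yaw_rad=0.0)
print(t)  # sensor-to-robot translation vector
```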

Implementation of 3D Moving Target-Tracking System based on MSE and BPEJTC Algorithms

  • Ko, Jung-Hwan;Lee, Maeng-Ho;Kim, Eun-Soo
    • Journal of Information Display
    • /
    • Vol. 5, No. 1
    • /
    • pp.41-46
    • /
    • 2004
  • In this paper, a new stereo 3D moving-target tracking system using the MSE (mean square error) and BPEJTC (binary phase extraction joint transform correlator) algorithms is proposed. A moving target is extracted from the sequential input stereo images by applying a region-based MSE algorithm, after which the location coordinates of the moving target in each frame are obtained through correlation between the extracted target image and the input stereo image using the BPEJTC algorithm. Through several experiments performed with 20 frames of a stereo image pair of 640×480 pixels, we confirmed that the proposed system is capable of tracking a moving target in real time at a relatively low average error rate of 1.29%.
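The region-based MSE stage can be illustrated with a brute-force block-matching sketch; the frame size, template, and random data below are hypothetical, and the BPEJTC correlation stage is not reproduced here.

```python
import numpy as np

def mse_locate(frame, template):
    """Find the position of `template` in `frame` by minimum mean square error.

    A brute-force sketch of region-based MSE matching: slide the template
    over every position and keep the one with the smallest MSE.
    """
    fh, fw = frame.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            err = np.mean((frame[y:y + th, x:x + tw] - template) ** 2)
            if err < best:
                best, best_pos = err, (y, x)
    return best_pos

rng = np.random.default_rng(0)
frame = rng.random((40, 40))
template = frame[12:20, 25:33].copy()  # target patch cut out at (12, 25)
print(mse_locate(frame, template))  # -> (12, 25)
```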

Development of Personal Robot Platform : Designed Approach for Modularization

  • Roh, Se-gon;S. M Baek;Lee, D. H;Park, K. H;T. K Moon;S. W Ryew;T. Y Kuc;Kim, H. S;Lee, H. G H
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2002 ICCAS, Institute of Control, Robotics and Systems
    • /
    • pp.117.3-117
    • /
    • 2002
  • In this paper a new framework is presented for developing a personal robot to be used in home environments. We mainly focus on system engineering technology such as modularization and standardization. Effective ways of interfacing among modules are addressed with regard to compatibility in hardware and software, and as a result a personal robot platform named DHR I is built. The robot is composed of five modules: brain, mobile, sensor, vision, and user interface. Each module can be easily plugged into the system, both mechanically and electrically, by sharing the IEEE1394 FireWire communication protocol. ...

  • PDF

Development and control of a sensor-based quadruped walking robot

  • Bien, Zeungnam;Lee, Yun-Jung;Suh, Il-Hong;Lee, Ji-Hong
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • Proceedings of the 1990 Korean Automatic Control Conference (International Session); KOEX, Seoul; 26-27 Oct. 1990
    • /
    • pp.1087-1092
    • /
    • 1990
  • This paper describes the development and control of a quadruped walking robot named KAISER-II. A control system with a multiprocessor-based hierarchical structure is developed. In order to navigate autonomously on rough terrain, an identification algorithm for the robot's position is proposed using 3-D vision and a guide-mark pattern. Also, a simple attitude control algorithm using force sensors is included. Experimental results show that the robot can not only walk statically on even terrain but also cross over or go through artificially made obstacles such as stairs, a horizontal bar, and a tunnel-type passage.

  • PDF

Control of Robot Manipulators Using an LQG Visual Tracking Controller

  • 임태헌;전향식;최영규;김성신
    • The Korean Institute of Electrical Engineers: Conference Proceedings
    • /
    • Proceedings of the KIEE 1999 Summer Conference, Vol. G
    • /
    • pp.2995-2997
    • /
    • 1999
  • Recently, real-time visual tracking control of robot manipulators has been performed using visual feedback sensor information. In this paper, the optical flow is computed based on the eye-in-hand robot configuration. The image Jacobian is employed to calculate the rotation and translation velocity of a 3D moving object. An LQG visual controller generates the real-time visual trajectory. In order to improve the visual tracking performance, a VSC controller is employed to control the robot manipulator. Simulation results show better visual tracking performance than other methods.

  • PDF
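The image Jacobian mentioned above relates camera motion to image-plane feature motion. Below is the standard 2×6 interaction matrix for a normalized point feature, shown as a generic illustration rather than the paper's exact formulation; the point coordinates and depth are made-up values.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """2x6 image Jacobian (interaction matrix) of a normalized image point.

    Relates the camera twist (vx, vy, vz, wx, wy, wz) to the image-plane
    velocity (xdot, ydot); Z is the point's depth. This is the classical
    IBVS form for a single point feature.
    """
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

L = point_interaction_matrix(0.1, -0.2, Z=1.5)
print(L.shape)  # -> (2, 6)
```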

Controlling robot by image-based visual servoing with stereo cameras

  • Fan, Jun-Min;Won, Sang-Chul
    • Korea Society of Information Technology Applications: Conference Proceedings
    • /
    • 6th 2005 International Conference on Computers, Communications and System
    • /
    • pp.229-232
    • /
    • 2005
  • In this paper, an image-based "approach-align-grasp" visual servo control design is proposed for the problem of object grasping, based on a binocular stand-alone system. The basic idea is to consider the vision system as a specific sensor dedicated to a task and included in a servo control loop, and automatic grasping follows the classical approach of splitting the task into preparation and execution stages. During the execution stage, once the image-based control model is established, the control task can be performed automatically. The proposed visual servoing control scheme ensures the convergence of the image features to the desired trajectories by using the Jacobian matrix, which is proved by Lyapunov stability theory. We also stress the importance of projectively invariant object/gripper alignment. The alignment between two solids in 3-D projective space can be represented in a view-invariant way; more precisely, it can easily be mapped into an image set-point without any knowledge of the camera parameters. The main feature of this method is that the accuracy of the task to be performed is not affected by discrepancies between the Euclidean setups at the preparation and task execution stages. According to the projective alignment, the set-point can then be computed, and the robot gripper moves to the desired position under the image-based control law. In this paper we adopt a constant image Jacobian rather than estimating it online. The method described herein integrates vision, robotics, and automatic control to achieve its goal; it overcomes the disadvantages of discrepancies between the different Euclidean setups and proposes a control law for the binocular stand-alone case. Experimental simulation shows that this image-based approach is effective in performing precise alignment between the robot end-effector and the object.

  • PDF
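The constant-Jacobian control law adopted above has the classical IBVS form v = -gain * pinv(L) @ (s - s_star). A minimal sketch, with an illustrative 2×2 Jacobian and feature values that are not from the paper:

```python
import numpy as np

def ibvs_velocity(L, s, s_star, gain=0.5):
    """Classical image-based visual servoing law with a constant Jacobian L.

    s and s_star stack the current and desired image features; the returned
    camera velocity drives the feature error exponentially toward zero.
    """
    error = np.asarray(s, dtype=float) - np.asarray(s_star, dtype=float)
    return -gain * np.linalg.pinv(L) @ error

L = np.array([[-1.0, 0.0], [0.0, -1.0]])  # illustrative constant Jacobian
v = ibvs_velocity(L, s=[0.2, -0.1], s_star=[0.0, 0.0])
print(v)  # velocity command reducing the feature error
```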

A Robotic System for Transferring Tobacco Seedlings

  • Lee, D.W.;W.F.McClure
    • Korean Society for Agricultural Machinery: Conference Proceedings
    • /
    • 1993 Proceedings of International Conference for Agricultural Machinery and Process Engineering, Korean Society for Agricultural Machinery
    • /
    • pp.850-858
    • /
    • 1993
  • Germination and early growth of tobacco seedlings in trays containing many cells is increasing in popularity. Since 100% germination is not likely, a major problem is to locate the cells which contain either no seedling or a stunted seedling and replace their contents with a plug containing a viable seedling. Empty cells and seedlings of poor quality take up valuable space in a greenhouse. They may also cause difficulty when transplanting seedlings into the field. Robotic technology, including the implementation of computer vision, appears to be an attractive alternative to manual labor for accomplishing this task. Operating AGBOT, short for Agricultural ROBOT, involves four steps: (1) capturing the image, (2) processing the image, (3) moving the manipulator, and (4) working the gripper. This research addressed the transfer of seedlings within a cell-grown environment, whose configuration dictated the design of a Cartesian robot suitable for working over a flat plane. In experiments on transferring large seedlings, more than 98% of the transferred seedlings survived one week after transfer. In general, the system performed much better than expected.

  • PDF