• Title/Summary/Keyword: robotic vision


Flexible 3-dimension measuring system using robot hand

  • Ishimatsu, T.;Yasuda, K.;Kumon, K.;Matsui, R.
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1989.10a
    • /
    • pp.700-704
    • /
    • 1989
  • A robotic system with a 3-dimensional profile measuring sensor is developed in order to measure the complicated shape of a target body. With this 3-dimensional profile measuring sensor, a computer is able to adjust the posture of the robot hand so that the complicated global profile of the target body can be recognized after several measurements from various directions. In order to enable fast data processing, a digital signal processor and a look-up table are introduced.
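
The look-up table mentioned above is a classic way to trade memory for per-pixel arithmetic. As an illustration only (the sensor's actual geometry is not given in the abstract, so the triangulation parameters below are hypothetical), a table can replace the division in disparity-to-depth conversion:

```python
import numpy as np

# Hypothetical triangulation parameters: depth = baseline * focal / disparity.
# The table replaces the per-pixel division with an indexed fetch, which is
# what makes a DSP-based pipeline fast.
BASELINE_MM = 120.0
FOCAL_PX = 800.0
MAX_DISPARITY = 256

# Build the table once; entry 0 (no disparity) is marked invalid with 0.
lut = np.zeros(MAX_DISPARITY, dtype=np.float32)
lut[1:] = BASELINE_MM * FOCAL_PX / np.arange(1, MAX_DISPARITY)

def disparity_to_depth(disparity_image):
    """Convert an integer disparity image to depth (mm) by table look-up."""
    return lut[disparity_image]

disp = np.array([[8, 16], [32, 0]], dtype=np.uint8)
depth = disparity_to_depth(disp)  # one fetch per pixel, no division
```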


Adaptive planar vision marker composed of LED arrays for sensing under low visibility

  • Kim, Kyukwang;Hyun, Jieum;Myung, Hyun
    • Advances in robotics research
    • /
    • v.2 no.2
    • /
    • pp.141-149
    • /
    • 2018
  • In image processing and robotic applications, two-dimensional (2D) black-and-white patterned planar markers are widely used. However, these markers are not detectable in low-visibility environments, and their patterns cannot be changed. This research proposes an active and adaptive marker node that displays 2D marker patterns using light-emitting diode (LED) arrays for easier recognition in foggy or turbid underwater environments. Because each node is made to blink at a different frequency, active LED marker nodes are distinguishable from each other at a long distance without increasing the size of the marker. We expect that the proposed system can be used in various harsh conditions where conventional marker systems are not applicable because of low visibility. The proposed system remains compatible with conventional markers, as the displayed patterns are identical.
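
Distinguishing nodes by blink frequency can be sketched by taking the dominant non-DC component of each marker's brightness over time. A minimal numpy sketch, with an assumed frame rate and assumed blink frequencies (none of these values come from the abstract):

```python
import numpy as np

FPS = 30.0       # assumed camera frame rate
N_FRAMES = 90    # three seconds of observation

def dominant_frequency(brightness, fps=FPS):
    """Strongest non-DC frequency in a marker's brightness time series."""
    centered = brightness - np.mean(brightness)   # remove the DC component
    spectrum = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

t = np.arange(N_FRAMES) / FPS
# Two hypothetical marker nodes blinking at 3 Hz and 5 Hz
node_a = (np.sin(2 * np.pi * 3.0 * t) >= 0).astype(float)
node_b = (np.sin(2 * np.pi * 5.0 * t) >= 0).astype(float)
```

With 90 frames at 30 fps the frequency bins fall exactly on 1/3 Hz multiples, so both blink rates are recovered exactly.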

Recognition of Missing and Bad Seedlings via Color Image Processing (칼라 영상처리에 의한 결주 및 불량모 인식)

  • 손재룡;강창호;한길수;정성림;권기영
    • Journal of Biosystems Engineering
    • /
    • v.26 no.3
    • /
    • pp.253-262
    • /
    • 2001
  • This study was conducted to develop the vision system of a robotic transplanter for plug seedlings. A color image processing algorithm was developed to identify and locate empty cells and bad plants in the seedling tray. The image of a pepper or tomato seedling tray was segmented into regions of plants, frame, and soil using a thresholding technique that utilized the Q component of YIQ for finding leaves and the H component of HSI for finding the tray frame in the color coordinate system. The recognition system was able to successfully identify empty cells and bad seedlings and determine their two-dimensional locations. The overall success rate of the recognition system was about 99%.
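
The leaf-segmentation step above thresholds the Q component of YIQ. A minimal sketch of that idea, with an illustrative threshold (the paper's actual threshold values and the HSI frame-detection step are not reproduced here):

```python
import numpy as np

Q_LEAF_MAX = -0.05   # illustrative threshold; green leaves give negative Q

def q_channel(rgb):
    """Q component of YIQ (NTSC coefficients); rgb values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.211 * r - 0.523 * g + 0.312 * b

def leaf_mask(rgb):
    """True where the Q value is low enough to be treated as plant."""
    return q_channel(rgb) < Q_LEAF_MAX

img = np.zeros((2, 2, 3))
img[0, 0] = [0.1, 0.8, 0.1]   # green, leaf-like pixel
img[1, 1] = [0.5, 0.4, 0.3]   # brownish, soil-like pixel
mask = leaf_mask(img)
```

A cell whose mask contains too few leaf pixels would then be flagged as empty or bad.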


A Novel Robot Sensor System Utilizing the Combination of Stereo Image Intensity and Laser Structured Light Image Information

  • Lee, Hyun-Ki;Xingyong, Song;Kim, Min-Young;Cho, Hyung-Suck
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.729-734
    • /
    • 2005
  • One of the important research issues in mobile robotics is how to detect the 3D environment quickly and accurately and recognize it. Sensing methods utilizing laser structured light and/or stereo vision are representative among the methodologies developed to date. However, these methods still need higher accuracy and reliability to be used in real-world environments. In this paper, a new robotic environmental sensing algorithm is presented that combines intensity image information with laser structured light image information. To see how effectively the algorithm applies to real environments, we developed a sensor system that can be mounted on a mobile robot and tested its performance in a series of environments.
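
The structured-light half of such a sensor is often reduced to per-column peak detection of the laser stripe. A minimal numpy sketch under that assumption (the paper's actual fusion with the intensity image is not reproduced here):

```python
import numpy as np

def stripe_rows(laser_image, min_intensity=50):
    """Row index of the brightest pixel in each column (the laser stripe);
    columns where no pixel reaches min_intensity return -1."""
    rows = np.argmax(laser_image, axis=0)
    valid = laser_image.max(axis=0) >= min_intensity
    return np.where(valid, rows, -1)

# Synthetic 8x4 image with a stripe on row 2 of the first three columns
img = np.zeros((8, 4), dtype=np.uint8)
img[2, :3] = 200
rows = stripe_rows(img)
```

Each valid row index can then be triangulated into a 3D point, with stereo intensity information filling in where the stripe is missing or ambiguous.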


Visual Tracking Control of Aerial Robotic Systems with Adaptive Depth Estimation

  • Metni, Najib;Hamel, Tarek
    • International Journal of Control, Automation, and Systems
    • /
    • v.5 no.1
    • /
    • pp.51-60
    • /
    • 2007
  • This paper describes a visual tracking control law for an Unmanned Aerial Vehicle (UAV) for monitoring of structures and maintenance of bridges. It presents a control law based on computer vision for quasi-stationary flights above a planar target. The first part of the UAV's mission is the navigation from an initial position to a final position, defining a desired trajectory in an unknown 3D environment. The proposed method uses the homography matrix computed from the visual information and derives, using backstepping techniques, an adaptive nonlinear tracking control law that allows effective tracking and depth estimation. The depth represents the desired distance separating the camera from the target.
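
For reference, the homography mentioned above relates two camera views of the planar target; in the standard decomposition (generic notation, not necessarily the paper's):

```latex
H \;=\; R + \frac{1}{d}\, t\, n^{\top}
```

where $R$ and $t$ are the rotation and translation between the two camera frames, $n$ is the unit normal of the target plane, and $d$ is the distance from the reference camera to the plane. The unknown $d$ is precisely what the adaptive part of the control law estimates.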

A Small Humanoid Robot that can Play Golf (소형 인간형 로봇의 골프하기)

  • Kim, Jong-Woo;Cha, Chul;Cho, Dong-Kwon;Sung, Young-Whee
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.56 no.2
    • /
    • pp.374-382
    • /
    • 2007
  • Robot mobility and intelligence are becoming more important as robots are used in various fields beyond automation. The main purpose of providing mobility to a robot is to extend the robot's manipulability. In this paper, we introduce a small humanoid robot that can autonomously play golf as an example of incorporating robot intelligence, mobility, and manipulability. The robot has 12 degrees of freedom in its legs and has various basic walking patterns. It can move to a desired position and change orientation by combining the basic walking patterns. The robot has a color CCD camera and can extract the coordinates of objects in the environment. It has 8 degrees of freedom in its arms and can play golf autonomously with two kinds of dexterous swing motions. Kinematic analysis of the robot arms, vision data processing for recognizing the environment, and an algorithm for playing golf are presented. The experimental results show that the robot can play golf autonomously.

A Study on a Visual Sensor System for Weld Seam Tracking in Robotic GMA Welding (GMA 용접로봇용 용접선 시각 추적 시스템에 관한 연구)

  • 김동호;김재웅
    • Journal of Welding and Joining
    • /
    • v.19 no.2
    • /
    • pp.208-214
    • /
    • 2001
  • In this study, we constructed a visual sensor system for real-time weld seam tracking in GMA welding. The sensor part consists of a CCD camera, a band-pass filter, a diode laser system with a cylindrical lens, and a vision board for inter-frame processing. We used a commercialized robot system that includes a GMA welding machine. To extract the weld seam, we used an inter-frame process in the vision board, with which we could remove the noise due to spatters and fume in the image. Since the image was very clean after the inter-frame process, we could use the simplest methods to extract the weld seam from the image, such as the first differential and central difference methods. We also applied a moving average to the successive position data of the weld seam to reduce data fluctuation. In experiments, the developed robot system with the visual sensor was able to track the most common weld seams, such as a fillet joint, a V-groove, and a lap joint, whose weld seams include planar and height-directional variation.
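
Two of the steps above, transient-noise suppression between frames and moving-average smoothing of the seam position, can be sketched as follows (the pixel-wise minimum is one plausible reading of the inter-frame process, not necessarily the paper's exact operation):

```python
import numpy as np

def suppress_transients(frame_a, frame_b):
    """Pixel-wise minimum of two consecutive frames: spatters and fume are
    bright but short-lived, so keeping the darker value at each pixel
    removes most of them."""
    return np.minimum(frame_a, frame_b)

def smooth_seam(positions, window=5):
    """Moving average over successive seam positions to damp fluctuation."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(positions, dtype=float), kernel, mode="valid")

# A bright spatter (200) appears in only one of two consecutive frames
cleaned = suppress_transients(np.array([[5, 200]]), np.array([[5, 10]]))
smoothed = smooth_seam([10, 12, 11, 13, 12, 14, 13], window=3)
```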


Dynamic Visual Servoing of Robot Manipulators (로봇 메니퓰레이터의 동력학 시각서보)

  • Baek, Seung-Min;Im, Gyeong-Su;Han, Ung-Gi;Guk, Tae-Yong
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.49 no.1
    • /
    • pp.41-47
    • /
    • 2000
  • Better tracking performance can be achieved when visual sensors such as CCD cameras are used in controlling a robot manipulator than when only relative sensors such as encoders are used. However, for precise visual servoing of a robot manipulator, an expensive vision system with a fast sampling rate must be used. Moreover, even if a fast vision system is implemented for visual servoing, one cannot obtain reliable performance without a robust and stable inner joint servo loop. In this paper, we propose a dynamic control scheme for robot manipulators with an eye-in-hand camera configuration, where a dynamic learning controller is designed to improve the tracking performance of the robotic system. The proposed control scheme is implemented for tasks of tracking moving objects and is shown to be robust to parameter uncertainty, disturbances, low sampling rate, etc.
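
The learning controller is not specified in the abstract; as a stand-in, a simple iterative-learning update on a hypothetical scalar plant shows the idea of feeding one trial's tracking error into the next trial's command:

```python
# Hypothetical scalar plant y = 0.8 * u tracking reference r = 1.0.
# The learning step feeds the previous trial's tracking error back into
# the next trial's feedforward command; this simple iterative-learning
# update merely illustrates the principle, not the paper's controller.
PLANT_GAIN = 0.8
GAMMA = 0.5
r = 1.0

u = 0.0
errors = []
for _ in range(10):
    y = PLANT_GAIN * u          # plant response on this trial
    e = r - y                   # tracking error
    errors.append(abs(e))
    u = u + GAMMA * e           # learning update: u_{k+1} = u_k + gamma * e_k
```

Here the error contracts by a factor of (1 - GAMMA * PLANT_GAIN) = 0.6 per trial, so repeated trials drive the tracking error toward zero.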


Dividing Occluded Humans Based on an Artificial Neural Network for the Vision of a Surveillance Robot (감시용 로봇의 시각을 위한 인공 신경망 기반 겹친 사람의 구분)

  • Do, Yong-Tae
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.505-510
    • /
    • 2009
  • In recent years the space where a robot works has been expanding into the human space, unlike traditional industrial robots that work only at fixed positions apart from humans. A human in this situation may be the owner of a robot or the target in a robotic application. This paper deals with the latter case: when a robot vision system is employed to monitor humans for a surveillance application, each person in a scene needs to be identified. Humans, however, often move together, and occlusions between them occur frequently. Although this problem has not been seriously tackled in the relevant literature, it brings difficulty to later image analysis steps such as tracking and scene understanding. In this paper, a probabilistic neural network is employed to learn the patterns of the best dividing position along the top pixels of an image region of partly occluded people. As this method uses only shape information from an image, it is simple and can be implemented in real time.
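
A probabilistic neural network is essentially a Parzen-window classifier. A minimal sketch with toy features standing in for the top-pixel shape patterns (the data and feature choice here are hypothetical):

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """Probabilistic neural network (Parzen window): sum a Gaussian kernel
    over each class's training samples and pick the class with the
    larger score."""
    d2 = np.sum((np.asarray(train_X) - np.asarray(x)) ** 2, axis=1)
    k = np.exp(-d2 / (2.0 * sigma ** 2))
    classes = np.unique(train_y)
    scores = [k[np.asarray(train_y) == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]

# Toy 2-D features standing in for top-pixel profiles (hypothetical data):
# class 0 = "divide left of centre", class 1 = "divide right of centre".
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [0, 0, 1, 1]
label = pnn_classify([0.15, 0.15], X, y)
```

Because the PNN stores its training samples directly, it needs no iterative training, which suits the real-time requirement mentioned above.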

Development of Vision-based Lateral Control System for an Autonomous Navigation Vehicle (자율주행차량을 위한 비젼 기반의 횡방향 제어 시스템 개발)

  • Rho Kwanghyun;Steux Bruno
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.13 no.4
    • /
    • pp.19-25
    • /
    • 2005
  • This paper presents a lateral control system for an autonomous navigation vehicle that was developed and tested by the Robotics Centre of Ecole des Mines de Paris in France. A robust lane detection algorithm was developed for detecting different types of lane markers in the images taken by a CCD camera mounted on the vehicle. RTMaps, a software framework for developing vision and data fusion applications, especially in a car, was used for implementing the lane detection and lateral control. The lateral control was tested on urban roads in Paris, and the demonstration was shown to the public during the IEEE Intelligent Vehicle Symposium 2002. Over 100 people experienced the automatic lateral control. The demo vehicle could run stably at a speed of 130 km/h on straight roads and 50 km/h on high-curvature roads.