• Title/Summary/Keyword: camera vision

Search results: 1,386 (processing time: 0.068 seconds)

Measuring Technique for Height of Burst using Stereo-vision Recognition (스테레오 영상인식을 이용한 신관폭발고도 계측기술)

  • Kang, Gyu-Chang;Choi, Ju-Ho;Park, Won-U;Hwang, Ui-Seong;Hong, Seong-Su;Yoo, Jun
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.2 no.1
    • /
    • pp.194-203
    • /
    • 1999
  • This paper presents a technique for measuring the bursting height of proximity fuses. Camera calibration is used to obtain the perspective transformation matrix describing the projection from world coordinates to image coordinates, and the world coordinates of bursting points are then calculated from their image coordinates. A surface-approximation algorithm based on polynomial functions is also implemented.

  • PDF
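
The world-coordinate recovery described in the abstract above can be sketched as linear (DLT) triangulation from two calibrated views; the intrinsics, baseline, and burst-point coordinates below are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # null vector = homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]                  # homogeneous -> Euclidean

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical calibration: identical intrinsics, 1 m baseline along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 5.0])      # simulated burst point (m)
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_est, 3))                # ≈ [0.3, -0.2, 5.0]
```

With noise-free image points the linear solution is exact; with real measurements the same least-squares formulation absorbs the pixel noise.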

The Image Measuring System for accurate calibration-matching in objects (정밀 켈리브레이션 정합을 위한 화상측징계)

  • Kim, Jong-Man
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2006.11a
    • /
    • pp.357-358
    • /
    • 2006
  • An accurate calibration-matching method for maladjusted stereo cameras, based on a calibrated pixel-distance parameter, is presented. Camera calibration is a necessary procedure for stereo vision-based depth computation: intrinsic and extrinsic parameters must be obtained experimentally to determine the relation between image and world coordinates. One difficulty is aligning the cameras for parallel installation, i.e. placing the two CCD arrays in a single plane; no effective method for such alignment has been presented before, so some depth error caused by non-parallel installation is inevitable. If the pixel-distance parameter, one of the intrinsic parameters, is calibrated using known points, this error can be partially compensated, as the experiments demonstrate.

  • PDF
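
The compensation idea can be sketched under the standard depth-from-disparity model Z = f·b/d; the 2 % pixel-pitch error and the 2 m reference depth below are invented for illustration.

```python
# Ideal stereo depth: Z = f * b / d, with focal length f (pixels),
# baseline b (metres), disparity d (pixels).
f_true, b = 800.0, 0.12

def depth(d_pix, f_pix):
    return f_pix * b / d_pix

# Suppose the assumed pixel distance (pixel pitch) is off by 2 %,
# so the focal length expressed in pixels is mis-stated.
f_assumed = f_true * 1.02

# Calibrating against one known reference point recovers the scale.
Z_ref = 2.0
d_ref = f_true * b / Z_ref               # disparity actually observed there
f_corrected = f_assumed * Z_ref / depth(d_ref, f_assumed)

for Z in (1.0, 3.0, 5.0):
    d = f_true * b / Z                   # observed disparity at true depth Z
    print(Z, depth(d, f_corrected))      # corrected depths match Z
```

One known point suffices here because the modelled error is a pure scale factor; errors from non-parallel optical axes would need a richer correction.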

Bin Picking method using stereo vision (스테레오 비젼을 이용한 Bin Picking Method)

  • Joo, Kisee;Han, Min-Hong
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1993.10a
    • /
    • pp.692-698
    • /
    • 1993
  • This paper presents a bin-picking method in which a robot recognizes the positions and orientations of jumbled objects placed in a bin and then picks distinctive objects from the top of the jumble. The jumbled objects are recognized by comparing characteristics extracted from the stereo images with those in the CAD data. The 3-D information is obtained using a bipartite-matching method that compares the image of one camera with that of the other. The robot then picks up the object that will cause the least disturbance to the jumble and places it at a predetermined place. This paper contributes to the basic study of bin picking and can be applied in an automatic assembly system without part-sorting or orienting devices.

  • PDF
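
The bipartite matching of stereo features can be illustrated with a brute-force minimum-cost assignment over a handful of corner points; the coordinates and the cost weighting are assumptions, not taken from the paper.

```python
from itertools import permutations
import numpy as np

# Hypothetical corner features (u, v) from the left and right images.
left  = np.array([[120, 80], [200, 95], [310, 160]])
right = np.array([[300, 161], [112, 81], [191, 94]])

def match_cost(pairs):
    # For rectified stereo, corresponding corners share (almost) the same
    # row, so vertical offset is penalized much harder than horizontal.
    return sum(10 * abs(int(l[1]) - int(r[1])) + abs(int(l[0]) - int(r[0]))
               for l, r in pairs)

# Exhaustive bipartite matching: fine for a few corners per object.
best = min(permutations(range(len(right))),
           key=lambda p: match_cost(zip(left, right[list(p)])))
print(best)   # left corner i corresponds to right corner best[i]
```

With more corners, the exhaustive search would be replaced by a polynomial-time assignment algorithm such as the Hungarian method.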

Wireless Sensors Module for Remote Room Environment Monitoring

  • Lee, Dae-Seok;Chung, Wan-Young
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2005.06a
    • /
    • pp.449-452
    • /
    • 2005
  • A wireless sensor module with several air-quality sensors was developed for indoor environment monitoring in a home-networking system. The module can be extended with various kinds of sensors, such as humidity, temperature, CO2, and airborne-dust sensors. It is convenient to install on the wall of a room or office, and its sensors can easily be replaced thanks to the module structure and RF connection method. To reduce system cost, a single RF transmission block transmits the sensors' signals to an 8051 microcontroller board in a time-sharing manner. With this system, various indoor environmental parameters can be monitored in real time from the RF wireless sensor module. Indoor video from a surveillance camera installed at a desired site is transferred to a client PC or PDA, and a web server with an Oracle database stores the web-camera images and the data from the wireless sensor module.

  • PDF

Location Identification Using a Fisheye Lens and Landmarks Placed on Ceiling in a Cleaning Robot (어안렌즈와 천장의 위치인식 마크를 활용한 청소로봇의 자기 위치 인식 기술)

  • Kang, Tae-Gu;Lee, Jae-Hyun;Jung, Kwang-Oh;Cho, Deok-Yeon;Yim, Choog-Hyuk;Kim, Dong-Hwan
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.10
    • /
    • pp.1021-1028
    • /
    • 2009
  • This paper introduces location identification for a cleaning robot using a camera aimed at the room ceiling, onto which three point landmarks are projected from a laser source placed on the auto-charger. A fisheye lens covering almost 150 degrees is used, and the image is captured by a frame grabber. The wide-angle image is inevitably distorted even though it covers a wide area; this distortion is flattened using an image-warping scheme. Several vision-processing techniques, such as intersection extraction, erosion, and curve fitting, are employed. Next, the three point marks are identified and their correspondence is investigated. Through this image processing and distortion adjustment, the robot's location is identified over a wide geometrical coverage.
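
The distortion-flattening step can be sketched under an equidistant fisheye model (an assumption; the abstract does not state the lens model), where a ray at angle θ lands at radius r_d = f·θ instead of the pinhole radius r_u = f·tan θ:

```python
import numpy as np

f = 300.0  # focal length in pixels (hypothetical)

def undistort_radius(r_d):
    """Rectilinear (pinhole) radius for a fisheye radius r_d."""
    theta = r_d / f                       # equidistant model: r_d = f * theta
    return f * np.tan(theta)

def source_radius(r_u):
    """Inverse map: the fisheye radius that feeds a rectilinear pixel at
    r_u.  Image warping resamples each output pixel from this location."""
    return f * np.arctan(r_u / f)

r = 150.0
r_u = undistort_radius(r)
print(r_u > r)                              # True: pinhole radius is larger
print(round(float(source_radius(r_u)), 6))  # round trip → 150.0
```

In a full warp the inverse map is evaluated per output pixel (with interpolation), which is why the backward form `source_radius` is the one actually used.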

Mapping of Real-Time 3D object movement

  • Tengis, Tserendondog;Batmunkh, Amar
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.7 no.2
    • /
    • pp.1-8
    • /
    • 2015
  • Tracking an object in 3-D space in real time is a significant task in domains ranging from autonomous robots to smart vehicles. Traditional methods use specific data-acquisition equipment such as radar or lasers. Advances in computer technology have accelerated image processing, allowing three-dimensional stereo vision to be used for localization and object tracking in space. This paper describes a system for tracking the three-dimensional motion of an object in real time using color information. Stereo images are created with a pair of simple web cameras, and raw object-position data are collected under realistic noisy conditions. The system was tested using OpenCV and Matlab, and the experimental results are presented.
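
A minimal sketch of color-based stereo tracking, assuming a simple RGB threshold and the usual disparity-to-depth relation; the synthetic frames, focal length, and baseline are hypothetical, not the paper's setup.

```python
import numpy as np

f, baseline = 700.0, 0.06   # webcam focal length (px) and baseline (m), assumed

def color_centroid(img, lo, hi):
    """Centroid (u, v) of the pixels whose RGB lies inside [lo, hi]."""
    mask = np.all((img >= lo) & (img <= hi), axis=-1)
    vs, us = np.nonzero(mask)
    return us.mean(), vs.mean()

# Synthetic frames: a red blob shifted 20 px between the two views.
left = np.zeros((120, 160, 3), np.uint8)
right = np.zeros((120, 160, 3), np.uint8)
left[40:50, 80:90] = (200, 30, 30)
right[40:50, 60:70] = (200, 30, 30)

lo, hi = (150, 0, 0), (255, 80, 80)        # crude "red" range in RGB
uL, vL = color_centroid(left, lo, hi)
uR, vR = color_centroid(right, lo, hi)
disparity = uL - uR                        # 20 px
Z = f * baseline / disparity               # depth of the blob: 2.1 m
print(round(Z, 3))
```

Real footage would threshold in HSV rather than RGB for lighting robustness, but the centroid-then-triangulate pipeline is the same.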

Depth error calibration of stereo cameras for accurate instrumentation in objects (정밀한 영상 계측을 위한 스테레오 카메라의 오차 보정시스템)

  • Kim, Jong-Man
    • Proceedings of the KIEE Conference
    • /
    • 2004.07d
    • /
    • pp.2313-2316
    • /
    • 2004
  • An accurate calibration method for maladjusted stereo cameras, based on a calibrated pixel-distance parameter, is presented. Camera calibration is a necessary procedure for stereo vision-based depth computation: intrinsic and extrinsic parameters must be obtained experimentally to determine the relation between image and world coordinates. One difficulty is aligning the cameras for parallel installation, i.e. placing the two CCD arrays in a single plane; no effective method for such alignment has been presented before, so some depth error caused by non-parallel installation is inevitable. If the pixel-distance parameter, one of the intrinsic parameters, is calibrated using known points, this error can be partially compensated, as the experiments demonstrate.

  • PDF

Using FPGA for Real-Time Processing of Digital Linescan Camera

  • Heon Jeong;Jung, Nam-Chae;Park, Han-Soo
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2001.10a
    • /
    • pp.152.4-152
    • /
    • 2001
  • In this paper we investigate the use of FPGA (Field Programmable Gate Array) architectures for the real-time processing of a digital linescan camera. Using FPGAs for low-level processing represents an excellent tradeoff between software and special-purpose hardware implementations. A library of modules implementing common low-level machine vision operations is presented. These modules are designed from gate-level hardware components that are compiled into the functionality of the FPGA chips. A synchronous unidirectional interface establishes a protocol for transferring image and result data between modules, which reduces design complexity and allows several different low-level operations to be applied to the same input image ...

  • PDF
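
The module-chaining idea translates naturally into software: each low-level operator consumes one scanline and hands its result to the next stage, mirroring the synchronous unidirectional interface described above (the operators and data here are illustrative, not the paper's module library).

```python
import numpy as np

def threshold(line, t=128):
    """Binarize one scanline."""
    return (line > t).astype(np.uint8)

def erode3(line):
    """1-D erosion with a 3-pixel window (binary input, circular borders)."""
    return np.minimum(np.minimum(line, np.roll(line, 1)), np.roll(line, -1))

# A pipeline of low-level stages, applied in order to each scanline,
# like modules wired together through a unidirectional interface.
pipeline = [threshold, erode3]

scan = np.array([10, 200, 210, 220, 50, 255, 30], dtype=np.uint8)
out = scan
for stage in pipeline:
    out = stage(out)
print(out.tolist())   # only the pixel with bright neighbors on both sides survives
```

In the FPGA version each stage is a gate-level module and the "loop" is physical dataflow, so all stages run concurrently on the pixel stream.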

Moving Stereo Vision-based Motion Plan by Recognizing the Obstacle Height for Intelligent Mobile Robot

  • Yoon, Yeo-Hong;Jo, Kang-Hyun;Kang, Hyun-Deok;Moon, In-Hyuk
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2001.10a
    • /
    • pp.91.2-91
    • /
    • 2001
  • This paper describes path planning for an autonomous mobile robot using image-sequence processing with a single camera. It is assumed that all objects in front of the mobile robot lie on the same plane on which the robot moves. Using the moving camera mounted on the robot, the robot extracts the corner points of obstacle objects and calculates their height using this assumption and the discrepancy between two consecutive images. In the image processing, the corner points must be put into correspondence so that they yield the sizes of the objects. Thus the robot passes over an object that has no height, such as a sheet of paper or a projected shadow; otherwise, it passes around it if ...

  • PDF
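
The height-from-motion computation can be sketched for the ground-plane assumption above: as the robot advances a known distance, the elevation angle to an obstacle's top corner changes, which fixes the corner's height (the camera height, focal length, and synthetic corner below are hypothetical).

```python
h, f = 0.3, 500.0   # camera height (m) and focal length (px), both assumed

def corner_height(v1, v2, move, cy=240.0):
    """Height of an object's top corner from its image rows v1 (before)
    and v2 (after the robot advances `move` metres toward it)."""
    t1 = (cy - v1) / f               # tan(elevation) at the first pose
    t2 = (cy - v2) / f               # tan(elevation) at the second pose
    # (H - h) = t1 * x = t2 * (x - move)  =>  x = t2 * move / (t2 - t1)
    return h + move * t1 * t2 / (t2 - t1)

# Synthetic check: a 0.5 m tall corner 3 m ahead, robot advances 1 m.
H, x, move = 0.5, 3.0, 1.0
v1 = 240.0 - f * (H - h) / x
v2 = 240.0 - f * (H - h) / (x - move)
print(round(corner_height(v1, v2, move), 3))  # → 0.5
```

A corner lying on the ground plane (paper, shadow) gives t1 ≈ t2 scaled by range, so the recovered height comes out near zero, which is exactly the pass-over criterion described above.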

Bin-picking method using stereo vision

  • Joo, Kisee;Han, Min-Hong
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1994.04a
    • /
    • pp.527-534
    • /
    • 1994
  • This paper presents a bin-picking method in which a robot recognizes the positions and orientations of the unoccluded objects at the top of a jumble of objects placed in a bin, then picks up the unoccluded objects one by one. A method using feasible regions, painting, and hierarchical testing is introduced for distinguishing the unoccluded objects from the jumble. The 3-D information is obtained using a bipartite-matching method that finds the smallest 3-D difference by comparing the vertices seen by one camera with those seen by the other, after which hypothesis and test are performed. A working order for the unoccluded objects is derived from the 3-D position and orientation information, and the robot picks them up from the jumble in that order. The whole process continues until the bin is empty.