• Title/Summary/Keyword: camera vision

1,386 search results

Clausius Normalized Field-Based Stereo Matching for Uncalibrated Image Sequences

  • Koh, Eun-Jin;Lee, Jae-Yeon;Park, Jun-Seok
    • ETRI Journal
    • /
    • v.32 no.5
    • /
    • pp.750-760
    • /
    • 2010
  • We propose a homology between thermodynamic systems and images for the treatment of time-varying imagery. A physical system colder than its surroundings absorbs heat from them, and the absorbed heat increases the entropy of the system, which is closely related to its disorder as defined by Clausius and Boltzmann. Because the pixels of an image can be viewed as states of lattice-like molecules in a thermodynamic system, reckoning the entropy variations of pixels is similar to estimating their degrees of disorder. We apply this homology to the uncalibrated stereo matching problem. Dispensing with calibration reduces the effort needed to install stereo cameras and lets users freely modify the camera configuration. The proposed method is also robust to differences in brightness, white balance, and even focus between stereo image pairs. These properties enable users to estimate the depths of objects of interest in practical applications without much effort to set up and maintain a stereo vision rig; consequently, two webcams can serve as a stereo camera.
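
The entropy-as-disorder idea behind this abstract can be sketched with a Shannon/Boltzmann-style entropy of a pixel neighborhood; this is a minimal illustration, not the authors' exact Clausius-normalized formulation:

```python
import numpy as np

def local_entropy(patch, bins=8):
    """Shannon/Boltzmann-style entropy of a gray-level patch, used as a
    proxy for the 'disorder' of a pixel neighborhood."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

# A uniform patch is maximally ordered (zero entropy); a random patch
# is highly disordered (entropy near log2(bins)).
flat = np.full((8, 8), 128)
noisy = np.random.default_rng(0).integers(0, 256, (8, 8))
```

A matcher in this spirit would compare the entropy variation of candidate neighborhoods across the two uncalibrated views instead of raw intensities, which is why brightness and white-balance differences matter less.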

The Development of Trajectory Generation Algorithm of Palletizing Robot Considered to Time-variable Obstacles (변형 장애물을 고려한 최적 로봇 팔레타이징 경로 생성 알고리즘의 개발)

  • Yu, Seung-Nam;Lim, Sung-Jin;Kang, Maing-Kyu;Han, Chang-Soo;Kim, Sung-Rak
    • Proceedings of the KSME Conference
    • /
    • 2007.05a
    • /
    • pp.814-819
    • /
    • 2007
  • Palletizing is a well-known time-consuming and laborious process in factories, so automation is strongly required. Industrial robots are generally used for this, but in most such systems the operator teaches the robot point to point, and a vision camera must be attached to the robot to avoid time-variable obstacles. These system structures bring inefficiency and additional cost. In this paper we propose a task-oriented trajectory generation algorithm for palletizing. The algorithm is based on the A* algorithm and slice-plane theory, with a modified method of handling objects. We report the elapsed simulation time and compare it with the previous method. The algorithm can be used directly in an off-line palletizing simulator, and it raises the performance of a robot palletizing simulator by not requiring excessive robot motion to avoid adjacent components or a vision system. Above all, the algorithm can run on a low-end PC or a portable teach pendant.
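
The A* core that the abstract builds on can be sketched on a 4-connected occupancy grid with a Manhattan heuristic; the paper's slice-plane decomposition and object-handling modifications are not reproduced here:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle).
    Returns path length in steps, or None if the goal is unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]          # (f = g + h, g, node)
    best = {start: 0}                          # best g-score found so far
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best.get(nxt, float("inf"))):
                best[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
    return None

# Hypothetical layout: a wall forces a detour around the obstacle row.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
```

In the palletizing setting the grid cells would come from the current (time-variable) state of the pallet rather than a fixed map.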


Reduction of Variable Illumination Effect on Pixel Gray-levels of Machine Vision

  • Suh S. R.;Huang J. K.;Kim Y. T.;Yoo S. N.;Choi Y. S.;Sung J. H.
    • Agricultural and Biosystems Engineering
    • /
    • v.5 no.1
    • /
    • pp.5-9
    • /
    • 2004
  • This study was carried out to develop methods of reducing the effect of solar illumination on the pixel gray levels of machine vision for agricultural field use. Two kinds of monochrome CCD cameras, with manual and auto-iris lenses, were used to take pictures within a range of 15 to 120 klux of solar illumination. The camera with more precise automatic control functions gave much better results. Four indices using the pixel gray level of a 99% white DRS (diffuse reflectance standard) as a reference were tried to compensate the pixel gray levels of an image for variable illumination. The coefficient of variation of each index over the illumination range was used as the criterion for comparison. The study concluded that the index (A+B)/A, where A is the gray level of the 99% DRS and B is the gray level of the tested material, gave the best consistency over the range of solar illumination.
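
The winning index and the comparison criterion are simple enough to sketch directly; the gray-level values below are hypothetical, chosen so that reference and material scale together with illumination:

```python
import numpy as np

def illumination_index(a_ref, b_material):
    """(A+B)/A index: A = gray level of the 99% white reference,
    B = gray level of the tested material."""
    return (a_ref + b_material) / a_ref

def coefficient_of_variation(values):
    """std/mean, the paper's consistency criterion (lower = better)."""
    values = np.asarray(values, dtype=float)
    return values.std() / values.mean()

# Hypothetical gray levels at three rising illumination levels:
a = np.array([120.0, 180.0, 240.0])   # 99% DRS reference
b = np.array([60.0, 90.0, 120.0])     # tested material
idx = illumination_index(a, b)        # constant if both scale together
```

The raw material gray level B varies strongly with illumination, while the normalized index stays flat, which is exactly what a low coefficient of variation captures.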


A STUDY ON WELD POOL MONITORING IN PULSED LASER EDGE WELDING

  • Lee, Seung-Key;Na, Suck-Joo
    • Proceedings of the KWS Conference
    • /
    • 2002.10a
    • /
    • pp.595-599
    • /
    • 2002
  • Edge welding of thin sheets is very difficult because of the fit-up problem and the small weld area. In laser welding, joint fit-up and penetration are critical for sound weld quality, and both can be monitored by appropriate methods. Among the various monitoring systems, visual monitoring is attractive because various kinds of weld pool information can be extracted directly. In this study, a vision sensor was adopted for weld pool monitoring in pulsed Nd:YAG laser edge welding, to check whether penetration is sufficient and the joint fit-up is within requirements. A pulsed Nd:YAG laser provides a series of periodic laser pulses, while the shape and brightness of the weld pool change over time even within one pulse duration. A shutter-triggered, non-interlaced CCD camera was used to acquire the temporally changing weld pool image at the moment that best represents the weld status. The information for quality monitoring can be extracted from the monitored weld pool image by an image processing algorithm. The weld pool image contains information not only about the joint fit-up but also about penetration: fit-up can be extracted from the weld pool shape, and penetration from its brightness. Weld pool parameters representing the characteristics of the weld pool were selected based on the geometrical appearance and the brightness profile. To achieve accurate prediction of weld penetration, which is a nonlinear relationship, a neural network with the selected weld pool parameters was applied.
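
Two of the simplest weld pool parameters of the kind described, pool area (shape) and mean pool brightness, can be sketched as follows; these are hypothetical stand-ins for the paper's actual geometric and brightness features:

```python
import numpy as np

def pool_parameters(image, threshold=100):
    """Pool area (bright-pixel count) and mean pool brightness from a
    gray-level weld pool image. Threshold value is an assumption."""
    pool = image > threshold
    area = int(pool.sum())
    brightness = float(image[pool].mean()) if area else 0.0
    return area, brightness

# Synthetic image: dim background with a bright 4x4 weld pool.
img = np.full((10, 10), 30)
img[3:7, 3:7] = 180
```

Parameters like these would then form the input vector of the penetration-prediction neural network.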


Physical Properties Analysis of Mango using Computer Vision

  • Yimyam, Panitnat;Chalidabhongse, Thanarat;Sirisomboon, Panmanas;Boonmung, Suwanee
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.746-750
    • /
    • 2005
  • This paper describes image processing techniques that detect, segment, and analyze a mango's physical properties, such as size, shape, surface area, and color, from images. First, images of mangoes taken by a digital camera are analyzed and segmented. The segmentation is based on a hue model constructed from sample mangoes. Morphological and filtering techniques are then applied to remove noise before fitting a spline curve to the mango boundary. From the cleaned segmented image, the mango's projected area can be computed. The shape of the mango is then analyzed using structuring models. Color is also spatially analyzed and indexed in a database for future classification. To obtain the surface area, the mango is peeled; the scanned image of its peel is then segmented and filtered using a similar approach, and with calibration parameters the surface area can be computed. We employed the system to evaluate the physical properties of a mango cultivar called "Nam Dokmai". Sixty mango samples in three sizes were graded by an experienced farmer's eyes and hands. The results show the techniques could be a good alternative and a more feasible method for grading mangoes compared with manual grading by humans.
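
The hue-based segmentation plus morphological cleaning pipeline can be sketched as below; the hue range and the 3x3 opening are assumptions, not the paper's fitted hue model:

```python
import numpy as np

def segment_by_hue(hue, lo, hi):
    """Binary mask of pixels whose hue falls inside [lo, hi]."""
    return ((hue >= lo) & (hue <= hi)).astype(np.uint8)

def open_3x3(mask):
    """Morphological opening (erosion then dilation) with a 3x3 box,
    a simple stand-in for the paper's noise-cleaning step."""
    def shift_stack(m):
        p = np.pad(m, 1)                       # zero-pad the border
        return np.stack([p[i:i + m.shape[0], j:j + m.shape[1]]
                         for i in range(3) for j in range(3)])
    eroded = shift_stack(mask).min(axis=0)     # erosion: all 9 neighbors set
    return shift_stack(eroded).max(axis=0)     # dilation: any neighbor set

# Hypothetical hue image: a 3x3 mango-like blob plus one noise pixel.
hue = np.zeros((7, 7))
hue[2:5, 2:5] = 40
hue[0, 6] = 40
mask = open_3x3(segment_by_hue(hue, 30, 50))
```

The opening removes the isolated noise pixel while preserving the blob, whose pixel count then gives the projected area.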


A Study on a Visual Sensor System for Weld Seam Tracking in Robotic GMA Welding (GMA 용접로봇용 용접선 시각 추적 시스템에 관한 연구)

  • Kim, Jae-Woong;Kim, Dong-Ho
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2000.11a
    • /
    • pp.643-646
    • /
    • 2000
  • In this study, we constructed a preview-sensing visual sensor system for real-time weld seam tracking in GMA welding. The sensor consists of a CCD camera, a band-pass filter, a diode laser with a cylindrical lens, and a vision board for inter-frame processing. We used a commercial robot system that includes a GMA welding machine. To extract the weld seam, we used inter-frame processing on the vision board, which removed the noise due to spatter and fume in the image. Since the inter-frame processing yielded a clean image, we could extract the weld seam with the simplest methods, such as a first derivative computed by central differences. We also applied a moving average to the successive weld seam positions to reduce fluctuation in the data. In experiments, the developed robot system with the visual sensor was able to track the most common weld seam types, such as a fillet joint, a V-groove, and a lap joint, whose seams include both planar and height-direction variation.
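
The two smoothing steps can be sketched as follows. Taking the pixel-wise minimum of consecutive frames is one plausible inter-frame scheme (spatter flashes appear in only one frame, while the laser stripe is stable); the paper's exact vision-board operation may differ:

```python
import numpy as np

def suppress_spatter(frame_a, frame_b):
    """Pixel-wise minimum of two consecutive frames keeps the stable
    laser stripe and suppresses transient bright spatter."""
    return np.minimum(frame_a, frame_b)

def moving_average(positions, window=3):
    """Smooth successive seam positions to reduce data fluctuation."""
    kernel = np.ones(window) / window
    return np.convolve(positions, kernel, mode="valid")

# Synthetic frames: a constant stripe, with a spatter flash in each.
stripe = np.full((4, 4), 50)
a = stripe.copy(); a[1, 1] = 255      # flash only in frame a
b = stripe.copy(); b[2, 3] = 255      # flash only in frame b
clean = suppress_spatter(a, b)
```

The seam position extracted per frame would then be fed through `moving_average` before being sent to the robot controller.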


Presentation Control System using Vision Based Hand-Gesture Recognition (Vision 기반 손동작 인식을 활용한 프레젠테이션 제어 시스템)

  • Lim, Kyoung-Jin;Kim, Eui-Jeong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.10a
    • /
    • pp.281-284
    • /
    • 2010
  • In this paper, we present hand-gesture recognition for practical computing from color camera images. The color images are binarized and labeled using the YCbCr color model. For each labeled region, the center point of the hand is found by searching for the maximum inscribed circle, computed with a Voronoi diagram. The hand region can then be extracted by analyzing the elliptical components adjacent to the found maximum circle. We present a presentation control system that uses the elliptical components and the maximum inscribed circle. An advantage of this algorithm is that background objects with colors similar to the hand, a common problem for hand gesture recognition in varied environments, can be effectively eliminated.
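
The maximum-inscribed-circle step can be sketched with a brute-force distance transform, whose maximum coincides with the Voronoi-based construction used in the paper; the 5x5 "palm" mask below is hypothetical:

```python
import numpy as np

def max_inscribed_circle(mask):
    """Center and radius of the maximum inscribed circle of a binary
    region: the foreground pixel farthest from any background pixel.
    Brute-force O(n^2); fine for small masks."""
    fg = np.argwhere(mask == 1)
    bg = np.argwhere(mask == 0)
    # Distance from every foreground pixel to its nearest background pixel.
    d = np.sqrt(((fg[:, None, :] - bg[None, :, :]) ** 2).sum(axis=2)).min(axis=1)
    i = d.argmax()
    return tuple(fg[i]), float(d[i])

mask = np.zeros((7, 7), dtype=int)
mask[1:6, 1:6] = 1                    # 5x5 hand-palm-like region
center, radius = max_inscribed_circle(mask)
```

The circle's center approximates the palm center, and the elliptical components around it would then be analyzed to isolate the hand.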


An Automatic Teaching Method by Vision Information for A Robotic Assembly System

  • Ahn, Cheol-Ki;Lee, Min-Cheol;Kim, Jong-Hyung
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1999.10a
    • /
    • pp.65-68
    • /
    • 1999
  • In this study, an off-line automatic teaching method using vision information for robotic assembly tasks is proposed. Many industrial robots are still taught and programmed with a teaching pendant: the robot is guided by a human operator to the desired application locations, and these motions are recorded, later edited in the robotic language used by the robot controller, and played back repetitively to perform the task. This conventional teaching method is time-consuming and somewhat dangerous. In the proposed method, the operator instead teaches the desired locations on the image acquired through a CCD camera mounted on the robot hand. The robotic-language program is automatically generated and transferred to the robot controller. This teaching process is implemented through off-line programming (OLP) software developed for the robotic assembly system used in this study. To transform locations in image coordinates into robot coordinates, a calibration process is established. The proposed teaching method is implemented and evaluated on an assembly system for soldering electronic parts on a circuit board, where a six-axis articulated robot executes the assembly task according to the off-line automatic teaching.
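
The image-to-robot calibration step can be sketched as a least-squares affine fit from taught correspondences; the point values and the 0.5 mm/px scale below are hypothetical, and the paper's calibration model may differ:

```python
import numpy as np

def fit_affine(img_pts, robot_pts):
    """Least-squares affine map from image (u, v) to robot (x, y)
    coordinates, estimated from taught point correspondences."""
    A = np.hstack([img_pts, np.ones((len(img_pts), 1))])
    M, *_ = np.linalg.lstsq(A, robot_pts, rcond=None)
    return M                           # 3x2: rows for u, v, and offset

def img_to_robot(M, uv):
    """Map one image point into robot coordinates."""
    return np.append(uv, 1.0) @ M

# Hypothetical correspondences: robot frame = image * 0.5 mm/px + offset.
img = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.]])
rob = img * 0.5 + np.array([100., 200.])
M = fit_affine(img, rob)
```

With `M` fixed, every location the operator clicks in the camera image can be converted into a robot-coordinate target for the generated program.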


Vision Based Estimation of 3-D Position of Target for Target Following Guidance/Control of UAV (무인 항공기의 목표물 추적을 위한 영상 기반 목표물 위치 추정)

  • Kim, Jong-Hun;Lee, Dae-Woo;Cho, Kyeum-Rae;Jo, Seon-Yeong;Kim, Jung-Ho;Han, Dong-In
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.12
    • /
    • pp.1205-1211
    • /
    • 2008
  • This paper describes methods for estimating the 3-D position of a target with respect to a reference frame from monocular images taken by an unmanned aerial vehicle (UAV). The 3-D position of a target is used as information for surveillance, recognition, and attack. Here, the target's 3-D position is estimated in order to build guidance and control laws that can follow a target of interest to the user. To solve for the 3-D position, the target's position must first be measured in the image; a Kalman filter is used to track the target and output its image position. The target's 3-D position can then be estimated from the image-tracking result together with the UAV and camera information. Two algorithms are used for this estimation: one derived arithmetically from the dynamics between the UAV, camera, and target, and the other based on LPV (Linear Parameter Varying) methods. Both have been run in simulation and are compared in this paper.
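
The image-tracking front end can be sketched as a constant-velocity Kalman filter run per image axis; the process/measurement noise values are assumptions, not the paper's tuning:

```python
import numpy as np

def kalman_track(measurements, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over one image coordinate.
    State is [position, velocity]; returns the filtered positions."""
    F = np.array([[1., 1.], [0., 1.]])     # constant-velocity transition
    H = np.array([[1., 0.]])               # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([measurements[0], 0.])
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)            # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# A target drifting across the image at one pixel per frame.
track = kalman_track([0., 1., 2., 3., 4., 5.])
```

The filtered image position, combined with UAV attitude/position and camera parameters, is what the two 3-D estimation algorithms consume.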

A Study on a Vision Sensor System for Tracking the I-Butt Weld Joints

  • Kim Jae-Woong;Bae Hee-Soo
    • Journal of Mechanical Science and Technology
    • /
    • v.19 no.10
    • /
    • pp.1856-1863
    • /
    • 2005
  • In this study, a visual sensor system for tracking the weld seam of I-butt joints in GMA welding was constructed. The sensor system consists of a CCD camera, a diode laser with a cylindrical lens, and a band-pass filter to overcome image degradation due to spatter and arc light. To obtain an enhanced image, the quantitative relationship between laser intensity and iris opening was investigated. Through repeated experiments, the shutter speed was set at 1/1000 second to minimize the effect of spatter, so that images without spatter traces could be obtained. A region of interest was defined in the image, and the gray level of the detected laser stripe was compared with that of the weld line; the differences between these gray levels locate the position of the weld joint using the central difference method. The results showed that, as long as the weld line is within ±15° of the longitudinal straight line, the constructed system could track it successfully. Since the processing time is no longer than 0.05 s, the developed method is expected to be applicable to high-speed welding such as laser welding.
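
The central-difference localization of the joint can be sketched on a one-dimensional scan line; the gray-level profile below is a hypothetical laser stripe with a dark gap at the joint:

```python
import numpy as np

def central_difference(profile):
    """Central difference d[i] = (p[i+1] - p[i-1]) / 2 along a scan line."""
    p = np.asarray(profile, dtype=float)
    return (p[2:] - p[:-2]) / 2.0

def locate_joint(profile):
    """Index of the steepest gray-level drop, taken as the joint edge."""
    d = central_difference(profile)
    return int(d.argmin()) + 1         # +1 compensates the trimmed endpoint

# Bright stripe (gray level 200) interrupted by a dark joint gap (80).
scan = [200, 200, 200, 80, 80, 200, 200, 200]
```

Running this per scan line across the region of interest traces the joint position that the tracking controller follows.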