• Title/Summary/Keyword: camera vision

1,386 search results

A Laser Vision System for the High-Speed Measurement of Hole Positions (홀위치 측정을 위한 레이져비젼 시스템 개발)

  • Ro, Young-Shick;Suh, Young-Soo;Choi, Won-Tai
    • Proceedings of the KIEE Conference / 2006.04a / pp.333-335 / 2006
  • In this paper, we developed an inspection system for automobile parts using a laser vision sensor. The sensor obtains two-dimensional information from the vision camera and the third dimension from the laser. A jig and a robot are used to move the sensor between inspection positions. In addition, a computer-integrated system was developed to control the system components and manage the measurement data. Sensor measurements are compared with CAD data, and the effectiveness of the measurement results is verified by using the CAD model to obtain information about the measured object.
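As a rough illustration of the CAD-comparison step this abstract describes, the sketch below (Python/NumPy; function name, data, and tolerance are hypothetical, not the paper's) checks measured hole centers against CAD nominal positions:

```python
import numpy as np

def check_holes(measured_xyz, cad_xyz, tol_mm=0.5):
    """Return per-hole deviation from CAD nominals and a pass/fail flag.

    Hypothetical sketch of the CAD comparison; the tolerance is assumed.
    """
    measured_xyz = np.asarray(measured_xyz, dtype=float)
    cad_xyz = np.asarray(cad_xyz, dtype=float)
    dev = np.linalg.norm(measured_xyz - cad_xyz, axis=1)  # Euclidean error per hole
    return dev, dev <= tol_mm

dev, ok = check_holes([[10.1, 20.0, 5.0]], [[10.0, 20.0, 5.0]])
print(dev, ok)  # [0.1] [ True]
```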

Steering Gaze of a Camera in an Active Vision System: Fusion Theme of Computer Vision and Control (능동적인 비전 시스템에서 카메라의 시선 조정: 컴퓨터 비전과 제어의 융합 테마)

  • 한영모
    • Journal of the Institute of Electronics Engineers of Korea SC / v.41 no.4 / pp.39-43 / 2004
  • A typical theme of active vision systems is gaze-fixing of a camera, which means steering the camera's orientation so that a given point on the object always stays at the center of the image. This requires combining a function that analyzes image data with a function that controls the camera's orientation. This paper presents an algorithm for gaze-fixing in which image analysis and orientation control are designed in a single framework. To avoid implementation difficulties and to target real-time applications, the algorithm is designed as a simple closed form that uses no information related to camera calibration or structure estimation.
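The paper's closed-form law is not given in the abstract; the sketch below shows only the generic idea of image-based gaze steering, with a proportional gain, sign convention, and interface names that are all assumptions:

```python
import numpy as np

def gaze_step(target_px, image_size, gain=0.002):
    """Return (d_pan, d_tilt) in radians from the target's pixel offset.

    Minimal proportional sketch: drive the camera so the pixel error toward
    the image center shrinks. Gain and sign convention are assumed.
    """
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    ex, ey = target_px[0] - cx, target_px[1] - cy
    return -gain * ex, -gain * ey

d_pan, d_tilt = gaze_step((400, 180), (640, 480))
```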

A Study on the Improvement of Pose Information of Objects by Using Trinocular Vision System (Trinocular Vision System을 이용한 물체 자세정보 인식 향상방안)

  • Kim, Jong Hyeong;Jang, Kyoungjae;Kwon, Hyuk-dong
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.26 no.2 / pp.223-229 / 2017
  • Recently, robotic bin-picking tasks have drawn considerable attention because flexibility is required in robotic assembly tasks. Stereo camera systems have been widely used for robotic bin-picking, but they have two limitations: first, the computational burden of solving the correspondence problem on stereo images increases calculation time; second, errors in image processing and camera calibration reduce accuracy. Moreover, errors in the robot's kinematic parameters directly affect robot gripping. In this paper, we propose a method of correcting the bin-picking error by using a trinocular vision system that consists of two stereo cameras and one hand-eye camera. First, the two stereo cameras, with a wide viewing angle, measure the object's pose roughly. Then, the third hand-eye camera approaches the object and corrects the previous measurement of the stereo camera system. Experimental results show the usefulness of the proposed method.
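A minimal sketch of the coarse-to-fine idea, assuming 4x4 homogeneous transforms and hypothetical names (the paper's actual correction model is not given in the abstract):

```python
import numpy as np

def refine_pose(T_rough, T_correction):
    """Apply the hand-eye correction to the coarse stereo pose estimate.

    Both arguments are 4x4 homogeneous transforms; composition order is an
    assumption for illustration.
    """
    return T_correction @ T_rough

T_rough = np.eye(4)                    # coarse object pose from the stereo pair
T_corr = np.eye(4)
T_corr[:3, 3] = [0.002, -0.001, 0.0]   # e.g. a small translational correction
T_final = refine_pose(T_rough, T_corr)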

Development of Vision Technology for the Test of Soldering and Pattern Recognition of Camera Back Cover (카메라 Back Cover의 형상인식 및 납땜 검사용 Vision 기술 개발)

  • 장영희
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 1999.10a / pp.119-124 / 1999
  • This paper presents a new approach to pattern recognition of the camera back cover and soldering inspection. For real-time pattern recognition of the camera back cover and soldering inspection, the MVB-03 vision board was used. Images can be captured from a standard CCD monochrome camera at resolutions up to 640×480 pixels. Various options are available for color cameras, synchronous camera reset, and line-scan cameras. Image processing is performed using a Texas Instruments TMS320C31 digital signal processor. Image display is via a standard composite video monitor and supports non-destructive color overlay. System processing is possible using C30 machine code, and application software can be written in Borland C++ or Visual C++.

Correction of Photometric Distortion of a Micro Camera-Projector System for Structured Light 3D Scanning

  • Park, Go-Gwang;Park, Soon-Yong
    • Journal of Sensor Science and Technology / v.21 no.2 / pp.96-102 / 2012
  • This paper addresses photometric distortion problems of a compact 3D scanning sensor composed of a micro-size, inexpensive camera-projector system. Recently, many micro-size cameras and projectors have become available; however, erroneous 3D scanning results may arise from the poor and nonlinear photometric properties of the sensors. This paper solves two inherent photometric distortions of the sensors. First, the response functions of both the camera and the projector are derived from least squares solutions of passive and active calibration, respectively. Second, vignetting correction of the vision camera is done with a conventional method, while the projector vignetting is corrected by using the planar homography between the image planes of the projector and the camera. Experimental results show that the proposed technique enhances the linear properties of the phase patterns generated by the sensor.
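A minimal sketch of the response-function step, assuming a polynomial least-squares fit in NumPy; the polynomial degree, synthetic gamma curve, and names are illustrative, not the paper's formulation:

```python
import numpy as np

# Fit an inverse intensity response from commanded vs. observed levels,
# then use it to linearize new observations. Data here is a synthetic
# gamma-like nonlinearity standing in for a real calibration capture.
projected = np.linspace(0.0, 1.0, 11)            # commanded intensity levels
observed = projected ** 2.2                      # assumed nonlinear response
coeffs = np.polyfit(observed, projected, deg=5)  # least-squares inverse fit

def linearize(i_observed):
    """Map an observed intensity back to an (approximately) linear level."""
    return np.polyval(coeffs, i_observed)

print(linearize(0.5 ** 2.2))  # ~0.5
```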

An Experimental Study on the Optimal Arrangement of Cameras Used for the Robot's Vision Control Scheme (로봇 비젼 제어기법에 사용된 카메라의 최적 배치에 대한 실험적 연구)

  • Min, Kwan-Ung;Jang, Wan-Shik
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.19 no.1 / pp.15-25 / 2010
  • The objective of this study is to investigate the optimal arrangement of cameras used in the robot's vision control scheme. The scheme involves two estimation models: a parameter estimation model and a robot joint angle estimation model. For this study, the robot's working region is divided into three work spaces: left, central, and right. Cameras are positioned on circular arcs with radii of 1.5 m, 2.0 m, and 2.5 m, with seven cameras placed on each arc. For the experiment, nine cases of camera arrangement are selected in each work space, and each case uses three cameras. Six parameters are estimated for each camera using the developed parameter estimation model in order to show the suitability of the vision system model in the nine cases of each work space. Finally, the robot's joint angles are estimated using the joint angle estimation model according to the camera arrangement for point-position control. The effect of the camera arrangement on the robot's point-position control is thus shown experimentally.
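A small sketch of the camera layout described above, generating seven positions on each of the three arcs; the angular span is an assumption, as the abstract does not state it:

```python
import numpy as np

def arc_positions(radius, n=7, span_deg=120.0):
    """Return n (x, y) camera positions on a circular arc of given radius.

    The 120-degree span is assumed for illustration only.
    """
    ang = np.deg2rad(np.linspace(-span_deg / 2, span_deg / 2, n))
    return np.stack([radius * np.cos(ang), radius * np.sin(ang)], axis=1)

layout = {r: arc_positions(r) for r in (1.5, 2.0, 2.5)}  # radii in meters
```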

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems / v.19 no.6 / pp.730-744 / 2023
  • This paper addresses the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which can be helpful for video surveillance, automatic search, and video indexing. It can also help in assisting elderly and frail persons, improving their lives. Recognizing human activities remains problematic because of the large variations in how actions are executed; in our setting, recognition is realized through an external device, similar to a robot, acting as a personal assistant. The inferred information is used both online to assist the person and offline to support the personal assistant. Our proposed method is robust against the various sources of variability in action execution, and the main purpose of this paper is to perform efficient and simple recognition from egocentric camera data alone, using a convolutional neural network and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric camera and several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
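A minimal sketch of a frame-level CNN activity classifier in this spirit, using PyTorch; the architecture, input size, and class count are assumptions, not the authors' network:

```python
import torch
import torch.nn as nn

class EgoCNN(nn.Module):
    """Toy CNN for per-frame activity classification of egocentric video.

    Two conv/pool stages and a linear head; sized for 224x224 RGB frames.
    """
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 56 * 56, num_classes)  # 224 -> 112 -> 56

    def forward(self, x):
        f = self.features(x)
        return self.head(f.flatten(1))

logits = EgoCNN()(torch.randn(1, 3, 224, 224))  # one RGB frame
```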

Development of Robot Vision Control Schemes based on Batch Method for Tracking of Moving Rigid Body Target (강체 이동타겟 추적을 위한 일괄처리방법을 이용한 로봇비젼 제어기법 개발)

  • Kim, Jae-Myung;Choi, Cheol-Woong;Jang, Wan-Shik
    • Journal of the Korean Society of Manufacturing Process Engineers / v.17 no.5 / pp.161-172 / 2018
  • This paper proposes a robot vision control method for tracking a moving rigid-body target, using a vision system model that can actively control camera parameters even when the relative position between the camera and the robot, or the focal length and posture of the camera, change. The proposed scheme uses a batch method that processes all vision data acquired at each moving point of the robot. Two variants are considered: one gives equal weight to all acquired data, while the other gives greater weight to recent data acquired near the target. Using the two proposed schemes, experiments were performed to estimate the positions of a moving rigid-body target whose spatial positions are unknown, with only the vision data values known. The efficiency of each control scheme is evaluated by comparing the accuracies obtained in the experiments.
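A minimal sketch of the two batch weighting schemes as a weighted least-squares solve, with a geometric decay standing in for the recency weighting; the formulation, names, and decay factor are assumptions, not the paper's estimation models:

```python
import numpy as np

def batch_estimate(A, b, recency=False, decay=0.9):
    """Weighted least squares over all batch data.

    recency=False gives every sample equal weight; recency=True weights the
    most recent rows (assumed last in A, b) more heavily via geometric decay.
    """
    n = len(b)
    w = decay ** np.arange(n - 1, -1, -1) if recency else np.ones(n)
    sw = np.sqrt(w)                      # fold weights into a standard lstsq
    x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return x

A = np.random.rand(20, 3)               # stand-in vision data matrix
b = np.random.rand(20)
x_uniform = batch_estimate(A, b)
x_recent = batch_estimate(A, b, recency=True)
```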

Calibration of Structured Light Vision System using Multiple Vertical Planes

  • Ha, Jong Eun
    • Journal of Electrical Engineering and Technology / v.13 no.1 / pp.438-444 / 2018
  • Structured light vision systems have been widely used in 3D surface profiling. Such a system is usually composed of a camera and a laser that projects a line on the target. Calibration is necessary to acquire 3D information with a structured light stripe vision system. Conventional calibration algorithms find the pose of the camera and the equation of the laser's stripe plane in the camera's coordinate system; therefore, 3D reconstruction is only possible in the camera frame. In most cases this is sufficient for the given task, but these algorithms require multiple images acquired under different poses. In this paper, we propose a calibration algorithm that works from just one shot and gives 3D reconstruction in both the camera and laser frames. This is achieved with a newly designed calibration structure that has multiple vertical planes on the ground plane. The ability to reconstruct in both the camera and laser frames gives more flexibility for applications, and the proposed algorithm also improves the accuracy of 3D reconstruction.
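A minimal sketch of the reconstruction such a calibration enables: back-project a stripe pixel through the camera intrinsics and intersect the ray with the calibrated laser plane. The plane parameters, intrinsics, and names are illustrative, not the paper's values:

```python
import numpy as np

def ray_plane_point(pixel, K, plane_n, plane_d):
    """Intersect a back-projected pixel ray with the laser plane.

    Plane convention: n . X + d = 0 in the camera frame. For X = t * ray,
    t = -d / (n . ray).
    """
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    t = -plane_d / (plane_n @ ray)
    return t * ray  # 3D point in the camera frame

K = np.array([[800., 0., 320.],   # assumed intrinsics
              [0., 800., 240.],
              [0., 0., 1.]])
X = ray_plane_point((350, 260), K, np.array([0., -0.7, 0.7]), -0.5)
```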

Measurement of GMAW Bead Geometry Using Biprism Stereo Vision Sensor (바이프리즘 스테레오 시각 센서를 이용한 GMA 용접 비드의 3차원 형상 측정)

  • 이지혜;이두현;유중돈
    • Journal of Welding and Joining / v.19 no.2 / pp.200-207 / 2001
  • The three-dimensional bead profile in GMAW was measured using a biprism stereo vision sensor, which consists of an optical filter, a biprism, and a CCD camera. Since a single CCD camera is used, this system has various advantages over the conventional two-camera stereo vision system, such as finding the corresponding points along the same horizontal scanline. In this work, the biprism stereo vision sensor was designed for GMAW, and a linear calibration method was proposed to determine the prism and camera parameters. Image processing techniques were employed to find the corresponding points along the pool boundary: the iso-intensity contour corresponding to the pool boundary was found at pixel accuracy, and a filter-based matching algorithm was used to refine the corresponding points to subpixel accuracy. Predicted bead dimensions were in broad agreement with the measured results under spray-mode and humping-bead conditions.
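A minimal sketch of scanline correspondence in a single biprism frame: the prism places the two virtual views side by side, so a point in the left half is searched along the same row of the right half. SAD block matching stands in for the paper's filter-based subpixel matcher; all names and sizes are assumptions:

```python
import numpy as np

def match_on_scanline(img, x_left, y, half_w, win=5):
    """Find the right-half column whose window best matches (SAD) the
    reference window at (x_left, y) in the left half of a biprism image."""
    h = win // 2
    ref = img[y - h: y + h + 1, x_left - h: x_left + h + 1].astype(float)
    best_sad, best_x = np.inf, None
    for x in range(half_w + h, img.shape[1] - h):   # scan the right half
        cand = img[y - h: y + h + 1, x - h: x + h + 1].astype(float)
        sad = np.abs(ref - cand).sum()
        if sad < best_sad:
            best_sad, best_x = sad, x
    return best_x
```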
