• Title/Summary/Keyword: camera vision

Study of Intelligent Vision Sensor for the Robotic Laser Welding

  • Kim, Chang-Hyun;Choi, Tae-Yong;Lee, Ju-Jang;Suh, Jeong;Park, Kyoung-Taik;Kang, Hee-Shin
    • Journal of the Korean Society of Industry Convergence, v.22 no.4, pp.447-457, 2019
  • An intelligent sensory system is required to ensure accurate welding performance. This paper describes the development of an intelligent vision sensor for robotic laser welding. The sensor system consists of a PC-based vision camera and a stripe-type laser diode, and a set of robust image processing algorithms is implemented. The laser-stripe sensor measures the profile of the welding object and extracts the seam line. Moreover, the working distance of the sensor can be changed, and the other configuration parameters are adjusted accordingly. A robot, the seam tracking system, and a CW Nd:YAG laser make up the laser welding robot system, and a simple, efficient control scheme for the whole system is also presented. Profile measurement and seam tracking experiments were carried out to validate the operation of the system.
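The stripe-sensor principle described above — projecting a laser stripe and reading its deformation off the camera image — reduces, per image column, to localizing the bright stripe row. A minimal sketch in Python/NumPy; the intensity-weighted centroid used here is a common sub-pixel stripe localizer, not necessarily the paper's exact algorithm:

```python
import numpy as np

def stripe_profile(img):
    """Estimate the laser-stripe row in each image column using an
    intensity-weighted centroid (a common sub-pixel localizer)."""
    rows = np.arange(img.shape[0], dtype=float)[:, None]
    weights = img.astype(float)
    total = weights.sum(axis=0)
    total[total == 0] = 1.0          # avoid division by zero in dark columns
    return (rows * weights).sum(axis=0) / total

# Synthetic 20x5 image whose stripe slopes one row per column
img = np.zeros((20, 5))
for c in range(5):
    img[7 + c, c] = 255.0
profile = stripe_profile(img)        # stripe rows 7, 8, 9, 10, 11
```

The per-column profile is exactly the shape measurement the abstract mentions; the seam line would then be found from discontinuities in this profile.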

Vision-Based Piano Music Transcription System (비전 기반 피아노 자동 채보 시스템)

  • Park, Sang-Uk;Park, Si-Hyun;Park, Chun-Su
    • Journal of IKEEE, v.23 no.1, pp.249-253, 2019
  • Most commercialized music-transcription systems operate on audio information. However, these conventional systems suffer from environmental dependency, equipment dependency, and time latency. This paper studies a vision-based music-transcription system that uses video information instead of the audio information employed by traditional transcription programs. Computer vision technology is widely used to analyze and apply information from equipment such as cameras. In this paper, we created a program that records a piano performance with a smartphone camera and generates a MIDI file, an electronic musical score.
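Once a key press has been detected in the video, converting it to a MIDI event is a fixed mapping: the lowest key of an 88-key piano (A0) is MIDI note 21. A minimal sketch (the zero-based key-indexing convention here is an assumption, not taken from the paper):

```python
def key_to_midi(key_index):
    """Map an 88-key piano key index (0 = A0, the lowest key) to a MIDI
    note number. A0 is MIDI note 21, so an 88-key piano spans 21-108."""
    return 21 + key_index

lowest = key_to_midi(0)     # A0 -> MIDI note 21
middle_c = key_to_midi(39)  # middle C (C4, the 40th key) -> MIDI note 60
```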

The Camera Calibration Parameters Estimation using The Projection Variations of Line Widths (선폭들의 투영변화율을 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Moon, Sung-Young;Rho, Do-Hwan
    • Proceedings of the KIEE Conference, 2003.07d, pp.2372-2374, 2003
  • For 3-D vision measurement, camera calibration is necessary to calculate parameters accurately. Camera calibration methods have developed broadly into two categories: the first establishes reference points in space, and the second uses a grid-type frame and statistical methods. However, the former makes it difficult to set up reference points, and the latter has low accuracy. In this paper we present an algorithm for camera calibration using the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as focal length, scale factor, pose, orientation, and distance, although radial lens distortion is not modeled. The advantage of this algorithm is that it can estimate the distance of the object; the proposed method can therefore estimate distance in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with a frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data have been used to test the proposed method, and very good results have been obtained. We investigated the distance error affected by the scale factor and the different line widths, and experimentally found an average scale factor that yields the least distance error for each image. This advances camera calibration one step beyond static environments, toward real-world use such as autonomous land vehicles.
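The distance-from-line-width idea rests on the pinhole projection model: a line of known physical width W at distance Z images to w = fW/Z pixels, so Z can be recovered from the measured width. A minimal sketch of that relation with illustrative values (the paper's full algorithm additionally estimates pose and scale factor):

```python
def distance_from_width(focal_px, real_width_m, image_width_px):
    # Pinhole model: w = f * W / Z  =>  Z = f * W / w
    return focal_px * real_width_m / image_width_px

# A 0.5 m wide grid element imaged at 100 px by an f = 800 px camera
z = distance_from_width(800.0, 0.5, 100.0)   # -> 4.0 m
```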

A Development of Object Position Information Extraction Algorithm using Stereo Vision (스테레오 비전을 이용한 물체의 위치정보 추출 알고리즘 개발)

  • Kim, Moo-Hyun;Lee, Ji-Hyun;Lee, Seung-Kuy;Kim, Young-Hee;Park, Mu-Hun
    • Journal of the Korea Institute of Information and Communication Engineering, v.14 no.8, pp.1767-1775, 2010
  • As factory automation becomes widespread, there has been much research on stereo vision systems as part of automation systems with unmanned moving equipment. In a stereo vision system, information about an object can be gained by searching through images. Edges derived from this information are used to find the position of the object and to send its position coordinates to an unmanned crane. This paper proposes an algorithm that uses two CCD cameras to find the center point of the object surface connected to the unmanned crane's hook block and to recognize the shape of the object. First, edge information is extracted and each edge's characteristics are distinguished according to the user's options; the location information is then found from the resulting set of positions. This work is expected to contribute to the development of automation systems for unmanned moving equipment.
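With two CCD cameras, depth follows from the standard stereo relation Z = fB/d (focal length f, baseline B, disparity d), and the surface center point can then be taken as the centroid of the detected corner points. A minimal sketch of both steps, with hypothetical numbers not taken from the paper:

```python
def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of one matched point from its stereo disparity: Z = f*B/d."""
    disparity = x_left_px - x_right_px
    return focal_px * baseline_m / disparity

def surface_center(corners):
    """Centroid of a surface's corner points, usable as the pick point."""
    xs, ys = zip(*corners)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

z = stereo_depth(700.0, 0.12, 420.0, 380.0)           # -> 2.1 m
c = surface_center([(0, 0), (4, 0), (4, 2), (0, 2)])  # -> (2.0, 1.0)
```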

Vision-based Obstacle Detection using Geometric Analysis (기하학적 해석을 이용한 비전 기반의 장애물 검출)

  • Lee Jong-Shill;Lee Eung-Hyuk;Kim In-Young;Kim Sun-I.
    • Journal of the Institute of Electronics Engineers of Korea SC, v.43 no.3 s.309, pp.8-15, 2006
  • Obstacle detection is an important task for many mobile robot applications. Methods using stereo vision or optical flow are computationally expensive, so this paper presents a vision-based obstacle detection method using only two view images. The method uses a single passive camera and odometry, and runs in real time. The proposed method detects obstacles via 3D reconstruction from two views. Processing begins with feature extraction for each input image using Lowe's SIFT (Scale Invariant Feature Transform) and establishes the correspondence of features across the input images. Using the extrinsic camera rotation and translation matrices provided by odometry, the 3D positions of these corresponding points are calculated by triangulation. The results of the triangulation form a partial 3D reconstruction of the obstacles. The proposed method has been tested successfully on an indoor mobile robot and is able to detect obstacles within 75 ms.
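The triangulation step mentioned above — recovering a 3D point from a feature matched across two views with known relative motion — can be sketched with the standard linear (DLT) method. The intrinsics, motion, and point below are synthetic assumptions for illustration, not the paper's calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature correspondence.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]              # back to inhomogeneous 3D coordinates

K = np.diag([500.0, 500.0, 1.0])                   # toy intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first view at the origin
t = np.array([[-0.2], [0.0], [0.0]])               # 0.2 m sideways motion (odometry)
P2 = K @ np.hstack([np.eye(3), t])                 # second view after the motion

X_true = np.array([0.3, -0.1, 2.0])                # a point 2 m ahead
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]
x2 = (P2 @ h)[:2] / (P2 @ h)[2]
X_hat = triangulate(P1, P2, x1, x2)                # recovers X_true
```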

A Speaker Detection System based on Stereo Vision and Audio (스테레오 시청각 기반의 화자 검출 시스템)

  • An, Jun-Ho;Hong, Kwang-Seok
    • Journal of Internet Computing and Services, v.11 no.6, pp.21-29, 2010
  • In this paper, we propose a system that detects the current speaker among a number of users. The proposed speaker detection system based on stereo vision and audio mainly consists of the following: position estimation of speaker candidates using a stereo camera and microphones, current speaker detection, and speaker information acquisition on a mobile device. We use Haar-like features and the AdaBoost algorithm to detect the faces of speaker candidates with the stereo camera, and the positions of the candidates are estimated by triangulation. Next, the Time Delay Of Arrival (TDOA) is estimated by Cross Power Spectrum Phase (CPSP) analysis to find the direction of the source with two microphones. Finally, we acquire the speaker's information, including position, voice, and face, by comparing the stereo camera information with that of the two microphones. Furthermore, the proposed system includes a TCP client/server connection method for mobile service.
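The CPSP analysis behind the TDOA estimate whitens the cross-spectrum so that only phase (pure delay) information survives; the delay is then read off the peak of the inverse transform. A minimal single-block sketch (also known as GCC-PHAT; the signal length and sample rate are illustrative assumptions):

```python
import numpy as np

def cpsp_tdoa(sig_a, sig_b, fs):
    """Lag (seconds) by which sig_b trails sig_a, via Cross Power
    Spectrum Phase (GCC-PHAT) over one block of equal-length signals."""
    n = len(sig_a)
    cross = np.fft.rfft(sig_b) * np.conj(np.fft.rfft(sig_a))
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase only (PHAT weighting)
    corr = np.fft.irfft(cross, n=n)
    lag = int(np.argmax(corr))
    if lag > n // 2:                           # wrap circular lags to negative side
        lag -= n
    return lag / fs

rng = np.random.default_rng(0)
a = rng.standard_normal(1024)
b = np.roll(a, 5)                              # b trails a by 5 samples
delay = cpsp_tdoa(a, b, fs=1000.0)             # -> 0.005 s
```

The sign of the recovered delay tells which microphone the source is closer to, which is what the system compares against the camera-based candidate positions.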

A Study on the Effect Analysis Influenced on the Advanced System of Moving Object (이동물체가 정밀 시스템에 미치는 영항분석에 관한 연구)

  • Shin, Hyeon-Jae;Kim, Soo-In;Choi, In-Ho;Shon, Young-Woo;An, Young-Hwan;Kim, Dae-Wook;Lee, Jae-Soo
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers, v.21 no.8, pp.87-95, 2007
  • In this paper, we analyzed the detection performance and the stability of an object tracking system using adaptive stereo object tracking with a region-based MAD (Mean Absolute Difference) algorithm and a modified PID (Proportional Integral Derivative)-based pan/tilt controller. In the proposed system, the location coordinates of the target object in the left and right images are extracted from the sequential stereo input images by applying the region-based MAD algorithm together with the configuration parameters of the stereo camera, and these values effectively control the pan/tilt of the stereo camera under noisy circumstances through the modified PID controller. Accordingly, the adaptive control effect for a moving object can be analyzed through the advanced system with the proposed 3D robot vision, and the possibility of real-time implementation of the robot vision system is also confirmed.
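The region-based MAD matching at the core of such a tracker scores each candidate block by its mean absolute difference against a reference block and keeps the minimizer; the resulting offset is what feeds the pan/tilt controller. A minimal exhaustive-search sketch (block size and search range are arbitrary assumptions):

```python
import numpy as np

def mad_match(prev_frame, cur_frame, top, left, size, search):
    """Offset (dy, dx) of the block in cur_frame that minimizes the Mean
    Absolute Difference against a reference block in prev_frame."""
    ref = prev_frame[top:top + size, left:left + size].astype(float)
    best, best_mad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + size > cur_frame.shape[0] \
                    or c + size > cur_frame.shape[1]:
                continue                        # candidate outside the frame
            cand = cur_frame[r:r + size, c:c + size].astype(float)
            mad = np.abs(cand - ref).mean()
            if mad < best_mad:
                best_mad, best = mad, (dy, dx)
    return best

# Synthetic test: shift a random frame down 2 rows and left 1 column
rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (32, 32))
cur = np.roll(np.roll(prev, 2, axis=0), -1, axis=1)
motion = mad_match(prev, cur, top=8, left=8, size=8, search=4)   # -> (2, -1)
```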

Development of Integrated Vision System for Unmanned-Crane Automation System (무인 크레인 자동화 시스템 구축을 위한 통합 비전 시스템 개발)

  • Lee, Ji-Hyun;Kim, Mu-Hyun;Park, Mu-Hun
    • Proceedings of the Korea Institute of Information and Communication Sciences Conference, 2010.10a, pp.259-263, 2010
  • This paper introduces an integrated vision system that detects images of slabs and coils and obtains complete three-dimensional location data, without interference from obstacles, for unmanned-crane automation. Existing research with laser scanners tends to be easily influenced by the workplace environment, so it cannot provide exact location information. Likewise, CCD cameras have difficulty recognizing patterns because of the illumination conditions in industrial settings. To overcome these two weaknesses, this paper suggests combining a laser scanner with a CCD camera into an integrated vision system. This system can produce clearer pictures and obtain better 3D location information. The suggested system is expected to help build unmanned-crane automation systems.

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won;Choi, Kyung Sik;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems, v.20 no.1, pp.70-77, 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warped images obtained through fisheye lenses mounted on the robot. An omnidirectional image sensor is desirable for real-time view-based recognition because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether obtained by a camera viewing a reflective mirror or by stitching multiple camera images, is essential because it is difficult to obtain information from the original image. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through fisheye lenses mounted facing downward. Second, we extract motion vectors using Lucas-Kanade optical flow on the preprocessed image. Third, we estimate the robot position and angle using an ego-motion method based on the directions of the vectors and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed localization algorithm by comparing the position and angle obtained by the algorithm with those measured by a global vision localization system.
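The Lucas-Kanade step assumes brightness constancy and solves, over a patch, the least-squares system [Ix Iy]·(u, v)ᵀ = −It for the flow vector. A single-patch sketch on a synthetic image (the pipeline above runs this on the warped fisheye image over many patches):

```python
import numpy as np

def lucas_kanade(I0, I1):
    """Single-patch Lucas-Kanade flow: least-squares solution of
    Ix*u + Iy*v = -It over every pixel of the patch."""
    Iy, Ix = np.gradient(I0.astype(float))    # spatial gradients (rows, cols)
    It = I1.astype(float) - I0.astype(float)  # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow                               # (u, v) in pixels

# Smooth synthetic patch translated ~1 px to the right between frames
y, x = np.mgrid[0:32, 0:32]
I0 = np.exp(-((x - 15.0) ** 2 + (y - 15.0) ** 2) / 40.0)
I1 = np.exp(-((x - 16.0) ** 2 + (y - 15.0) ** 2) / 40.0)
u, v = lucas_kanade(I0, I1)                   # u near 1, v near 0
```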

Vision-based Autonomous Landing System of an Unmanned Aerial Vehicle on a Moving Vehicle (무인 항공기의 이동체 상부로의 영상 기반 자동 착륙 시스템)

  • Jung, Sungwook;Koo, Jungmo;Jung, Kwangyik;Kim, Hyungjin;Myung, Hyun
    • The Journal of Korea Robotics Society
    • /
    • v.11 no.4
    • /
    • pp.262-269
    • /
    • 2016
  • The flight of an autonomous unmanned aerial vehicle (UAV) generally consists of four steps: take-off, ascent, descent, and finally landing. Among them, autonomous landing is a challenging task due to high risk and reliability problems. When the landing site is moving or oscillating, the situation becomes more unpredictable and far more difficult than landing on a stationary site. For these reasons, accurate and precise control is required for an autonomous landing system on top of a moving vehicle that is rolling or oscillating while moving. In this paper, a vision-only based landing algorithm using dynamic gimbal control is proposed. The camera systems applied in previous studies are fixed facing downward or forward; their main disadvantage is a narrow field of view (FOV). Controlling the gimbal to track the target dynamically ameliorates this problem and helps the UAV follow the target faster than a fixed camera allows. With an artificial tag on the landing pad, the relative position and orientation of the UAV are acquired, and those estimated poses are used for gimbal control and UAV control for safe and stable landing on a moving vehicle. Outdoor experimental results show that this vision-based algorithm performs well and can be applied to real situations.
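Dynamic gimbal pointing from the tag's estimated relative position reduces to two angles: pan from the horizontal offset, and tilt from the vertical offset against the horizontal range. A minimal sketch (the body-frame convention of x forward, y right, z down is an assumption, not taken from the paper):

```python
import math

def gimbal_angles(dx, dy, dz):
    """Pan/tilt angles (radians) that point the camera at a target offset
    (dx, dy, dz) in the body frame: x forward, y right, z down."""
    pan = math.atan2(dy, dx)
    tilt = math.atan2(dz, math.hypot(dx, dy))
    return pan, tilt

# Target 3 m ahead and 3 m to the right at the same height:
pan, tilt = gimbal_angles(3.0, 3.0, 0.0)   # pan = 45 deg, tilt = 0
```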