• Title/Summary/Keyword: camera vision


Automatic Punching System for FPC using Machine Vision (비전 기반의 FPC용 자동 펀칭시스템)

  • Lee Young-Choon;Lee Seong-Cheol
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.22 no.12 s.177
    • /
    • pp.77-86
    • /
    • 2005
  • This paper describes the development of an automatic FPC (flexible printed circuit) punching machine intended to improve working conditions and reduce cost. An FPC detects the contact position of a keyboard or buttons in devices such as cellular phones. Depending on the quality of the printed ink and the position of the reference punching point on the FPC, the resistance and current can drift to malfunctioning values. The reference punching point is 2 mm or larger in diameter. Because the punching operation is performed manually, punching accuracy varies with the operator's condition; recently it has deteriorated so severely at the 2 mm reference hole that keyboard assembly has become difficult. To replace this manual operation, an automatic FPC punching system is introduced. Precise mechanical parts such as a 5-phase stepping motor and a ball-screw mechanism were designed and tested, and a low-cost PC camera was used instead of a high-end factory-automation vision system to reduce cost. Test algorithms and programs gave good results on the designed system, increasing productivity and substantially reducing raw-material (FPC) cost by avoiding defective parts.
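
The vision step here reduces to locating the printed 2 mm reference mark and converting its pixel offset into axis commands for the stepping-motor/ball-screw stages. A minimal Python/OpenCV sketch of the locating step, assuming dark ink on a light substrate and an invented `MM_PER_PIXEL` calibration scale (neither detail is specified in the paper):

```python
# Hypothetical sketch: locate the ~2 mm FPC reference punch mark in a PC-camera frame.
import cv2
import numpy as np

MM_PER_PIXEL = 0.05      # assumed calibration scale, not from the paper
MARK_DIAMETER_MM = 2.0   # the reference punching point is 2 mm or larger

def find_reference_mark(frame_bgr):
    """Return the (x, y) centre of the printed reference mark in mm, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Dark printed ink on a light substrate -> inverted Otsu threshold.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    expected_area = np.pi * (MARK_DIAMETER_MM / 2 / MM_PER_PIXEL) ** 2
    for c in contours:
        if 0.5 * expected_area < cv2.contourArea(c) < 1.5 * expected_area:
            m = cv2.moments(c)
            return (m["m10"] / m["m00"] * MM_PER_PIXEL,
                    m["m01"] / m["m00"] * MM_PER_PIXEL)
    return None
```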

Punching Position Control by Vision System (비전을 이용한 펀칭위치 제어 시스템)

  • Lee Seong-Cheol;Lee Young-Choon;Sim Ki-Jung
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2004.10a
    • /
    • pp.981-984
    • /
    • 2004
  • This paper describes the development of an automatic FPC (flexible printed circuit) punching machine. An FPC detects the contact position of a keyboard or buttons in devices such as cellular phones. Depending on the quality of the printed ink and the position of the reference punching point on the FPC, the resistance and current can drift to malfunctioning values. The reference punching point is 2 mm or larger in diameter. Because the punching operation is performed manually, punching accuracy varies with the operator's condition; recently it has deteriorated so severely at the 2 mm reference hole that keyboard assembly has become difficult. To replace this manual operation, an automatic FPC punching system is introduced. Precise mechanical parts such as a 5-phase stepping motor and a ball-screw mechanism were designed and tested, and a low-cost PC camera was used instead of a high-end factory-automation vision system to reduce cost. The test algorithm gave good results on the designed automatic punching system.


Cylindrical Object Recognition using Sensor Data Fusion (센서데이터 융합을 이용한 원주형 물체인식)

  • Kim, Dong-Gi;Yun, Gwang-Ik;Yun, Ji-Seop;Gang, Lee-Seok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.8
    • /
    • pp.656-663
    • /
    • 2001
  • This paper presents a sensor fusion method for recognizing a cylindrical object using a CCD camera, a laser slit beam, and ultrasonic sensors mounted on a pan/tilt device. For object recognition with the vision sensor, an active light source projects a stripe pattern onto the object surface, and the 2D image data are transformed into 3D data using the geometry between the camera and the laser slit beam. The ultrasonic sensor uses a transducer array mounted horizontally on the pan/tilt device. The time of flight is estimated by finding the maximum correlation between the received ultrasonic pulse and a set of stored templates, also called a matched filter. The distance is calculated by simply multiplying the time of flight by the speed of sound, and the maximum amplitude of the filtered signal is used to determine the face angle to the object. The position and radius of a cylindrical object are then determined by statistical sensor fusion. Experimental results show that the fused data increase the reliability of object recognition.
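
The matched-filter step is straightforward to express in code. A sketch under stated assumptions (the sampling rate, template bank, and echo signal are placeholders; the abstract computes distance as time of flight times the speed of sound, which is kept as written):

```python
# Matched-filter time-of-flight estimation, as described in the abstract.
import numpy as np

FS = 1_000_000        # sampling rate in Hz (assumed)
SPEED_OF_SOUND = 343  # m/s at room temperature

def time_of_flight(rx, templates):
    """Correlate the echo `rx` with each stored template; the lag of the best
    correlation peak gives the time of flight, its amplitude the face-angle cue."""
    best_lag, best_peak = 0, -np.inf
    for tpl in templates:
        corr = np.correlate(rx, tpl, mode="full")
        if corr.max() > best_peak:
            best_peak = float(corr.max())
            best_lag = int(np.argmax(corr)) - (len(tpl) - 1)
    tof = best_lag / FS
    distance = tof * SPEED_OF_SOUND   # as stated in the abstract
    return tof, distance, best_peak
```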


A Study of the Defect Detection Method of Vision Technology via Camera Image Analysis on 4-col 7-row LED Screen Module (4단 7열 LED 사이니지 전면부 설치형 카메라기반 불량 LED 소자 검출 Vision 기술에 관한 연구)

  • Park, Young ki;Im, Sang il;Jo, Ik hyeon;Cha, Jae sang
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.11
    • /
    • pp.1383-1387
    • /
    • 2020
  • Recently, 4-column 7-row LED signage that provides various information from major roads and local governments has been installed and operated. However, deterioration caused by changes in temperature and humidity, static electricity, and mechanical stress can lead to partial module failure of the display, a major cause of missing information that is vital to citizens. Because the signboards are installed outdoors along roads, with no means of constant monitoring and insufficient maintenance manpower, failed modules are frequently neglected for long periods. This paper therefore proposes a method to detect defective modules by analyzing images collected through a camera fixed to the front of the LED display.
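
One plausible reading of the detection step: with the camera fixed in front of the signage and the image rectified so the panel fills the frame, each of the 4 x 7 module cells can be checked for abnormal brightness. A minimal sketch (the grid split and the intensity cutoff are assumptions, not the paper's exact method):

```python
# Flag LED modules whose image region is abnormally dark.
import cv2

ROWS, COLS = 7, 4      # 4-column, 7-row LED signage
DARK_THRESHOLD = 40    # assumed mean-intensity cutoff for a dead module

def find_defective_modules(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    defects = []
    for r in range(ROWS):
        for c in range(COLS):
            cell = gray[r * h // ROWS:(r + 1) * h // ROWS,
                        c * w // COLS:(c + 1) * w // COLS]
            if cell.mean() < DARK_THRESHOLD:   # module not emitting
                defects.append((r, c))
    return defects
```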

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.298-304
    • /
    • 2009
  • This paper describes an algorithm that improves a 3D reconstruction result using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points are projected onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (${\Phi}$, ${\Delta}$) and the camera calibration matrix (K). An LRF disparity map is generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions are compensated using the LRF disparity map; the compensated result is the multi-sensor fusion disparity map, which is then used to refine the multi-sensor 3D reconstruction based on stereo vision and the LRF. The refinement algorithm is described in four parts: virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It was tested with synchronized stereo image pairs and LRF 3D scan data.
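
The projection relation used to build the LRF disparity map is the standard pinhole model x = K(R X + t), with (R, t) taken from the camera-LRF extrinsics (${\Phi}$, ${\Delta}$). A sketch of just that step, plus the stereo relation d = fB/Z used to turn LRF depth into disparity (focal length and baseline are whatever the stereo rig provides):

```python
# Project LRF 3D points into the image and convert their depth to disparity.
import numpy as np

def project_lrf_points(points_lrf, K, R, t):
    """Project Nx3 LRF points to pixels: x = K (R X + t)."""
    cam = points_lrf @ R.T + t     # LRF frame -> camera frame
    uvw = cam @ K.T                # apply the camera calibration matrix
    uv = uvw[:, :2] / uvw[:, 2:3]  # perspective divide
    return uv, cam[:, 2]           # pixel coordinates and depth Z

def depth_to_disparity(z, focal_px, baseline_m):
    """Stereo relation d = f * B / Z, used to build the LRF disparity map."""
    return focal_px * baseline_m / z
```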


Pattern Elimination Method Based on Perspective Transform for Defect Detection of TFT-LCD (TFT-LCD의 결함 검출을 위한 원근 변환 기반의 패턴 제거 방법)

  • Lee, Joon-Jae;Lee, Kwang-Ho;Chung, Chang-Do;Park, Kil-Houm;Park, Yun-Beom;Lee, Byung-Gook
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.6
    • /
    • pp.784-793
    • /
    • 2012
  • Because an LCD panel has inherent repetitive patterns, TFT-LCD defects are typically detected by thresholding the difference image between the input image and a template image. However, the pitch corresponding to the pattern period changes gradually with distance from the image center because of the geometric distortion of the camera. This paper presents a method that corrects the distortion with a perspective transform computed from extracted features and then detects defects by comparing each pitch area with its neighboring pitch areas. Experimental results show that the proposed method is very effective on real data.
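
A sketch of the two stages under stated assumptions: the panel is rectified with a perspective transform computed from four detected corner features, and each pitch-sized block is then differenced against the mean of its horizontal neighbours (the block size and threshold below are illustrative, not the paper's values):

```python
# Perspective rectification followed by neighbour-pitch differencing.
import cv2
import numpy as np

def rectify(image, corners_src, out_w, out_h):
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(np.float32(corners_src), dst)
    return cv2.warpPerspective(image, H, (out_w, out_h))

def defect_map(gray, pitch, threshold=25):
    """Mark pixels whose pitch block deviates from its left/right neighbours."""
    h, w = gray.shape
    mask = np.zeros_like(gray, dtype=bool)
    for x in range(pitch, w - 2 * pitch, pitch):
        block = gray[:, x:x + pitch].astype(np.int16)
        left  = gray[:, x - pitch:x].astype(np.int16)
        right = gray[:, x + pitch:x + 2 * pitch].astype(np.int16)
        mask[:, x:x + pitch] = np.abs(block - (left + right) // 2) > threshold
    return mask
```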

Development of a Hovering Robot System for Calamity Observation

  • Kang, M.S.;Park, S.;Lee, H.G.;Won, D.H.;Kim, T.J.
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.580-585
    • /
    • 2005
  • A QRT (Quad-Rotor Type) hovering robot system is developed for quick detection and observation of circumstances in calamity environments such as indoor fire spots. The UAV (Unmanned Aerial Vehicle) is equipped with four propellers, each driven by its own electric motor, an embedded DSP controller, an INS (Inertial Navigation System) using 3-axis rate gyros, a CCD camera with a wireless transmitter for observation, and an ultrasonic range sensor for height control. The developed hovering robot shows stable flight performance using RIC (Robust Internal-loop Compensator) based disturbance compensation and vision-based localization. The UAV can also avoid obstacles using eight IR and four ultrasonic range sensors. The VTOL (Vertical Take-Off and Landing) vehicle flies into indoor fire spots and sends the images captured by the CCD camera to the operator. Such small UAVs can be widely used in calamity observation without endangering human operators in harmful environments.
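
The paper's control contribution is the RIC-based disturbance compensator, which is not reproduced here; as a far simpler stand-in, the following hypothetical PD height hold merely illustrates how the ultrasonic range reading might close the altitude loop:

```python
# NOT the paper's RIC compensator: a hypothetical PD height hold on the
# ultrasonic range measurement, for illustration only.
class HeightHold:
    def __init__(self, kp=1.2, kd=0.4, target_m=1.5):
        self.kp, self.kd, self.target = kp, kd, target_m
        self.prev_error = 0.0

    def update(self, range_m, dt):
        """Return a collective-thrust correction from the measured height."""
        error = self.target - range_m
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative
```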


Depth Evaluation from Pattern Projection Optimized for Automated Electronics Assembling Robots

  • Park, Jong-Rul;Cho, Jun Dong
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.3 no.4
    • /
    • pp.195-204
    • /
    • 2014
  • This paper presents depth evaluation for object detection by automated assembling robots. Pattern distortion analysis from a structured-light system identifies the object with the greatest depth relative to its background. An automated assembling robot should preferentially select and pick that object to reduce physical harm during the picking action of the robot arm. Object detection is then combined with the depth evaluation to provide a contour showing the edges of the object with the greatest depth. The contour provides shape information to the robot, which is equipped with a laser-based proximity sensor, for picking up and placing the object in the intended place. The depth evaluation process is accelerated so that an image frame can be processed with the simplest experimental setup, consisting of a single camera and a projector. The depth evaluation required 31 ms to 32 ms per frame, optimized for a robot vision system equipped with a 30-frames-per-second camera.
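
The selection step, given a depth map from the pattern-distortion analysis (the paper's contribution, not reproduced here), can be sketched as keeping pixels near the maximum depth and tracing their outline; the depth band below is an invented parameter:

```python
# Isolate the object with the greatest depth and return its contour.
import cv2
import numpy as np

def deepest_object_contour(depth_map, band=10.0):
    """Keep pixels within `band` of the maximum depth, then trace the edges."""
    peak = float(depth_map.max())
    mask = ((depth_map > peak - band) * 255).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```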

A Study of the Shaft Power Measuring System Using Cameras (카메라를 이용한 축계 비틀림 계측 장치 개발)

  • Jeong, Jeong-Soon;Kim, Young-Bok;Choi, Myung-Soo
    • Journal of Ocean Engineering and Technology
    • /
    • v.24 no.4
    • /
    • pp.72-77
    • /
    • 2010
  • This paper presents a method for measuring the shaft power of a marine main engine. Traditional shaft-power measurement systems use a strain gauge, which has several disadvantages: it is difficult to mount on the shaft and to acquire a clean signal from, and it is expensive and complicated. For these reasons, we investigated alternative approaches and propose a new vision-based measurement system. Templates for image processing and CCD cameras were installed at both ends of the shaft, and a trigger mark with an optical sensor ensures that the cameras capture images synchronously. The positions of the template in the first and second camera images are compared to calculate the torsion angle. The proposed system is easier to install than traditional ones and is suitable for any shaft because it is non-contact, making it possible to measure shaft power while the ship is operating.
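
Once the torsion angle is recovered from the template offset between the two camera images, shaft power follows from the classical torsion formulas T = GJθ/L and P = Tω. A worked sketch assuming a solid shaft (all numeric values are placeholders, not measurements from the paper):

```python
# Shaft power from the optically measured torsion angle (solid-shaft assumption).
import math

def shaft_power_kw(theta_rad, G_pa, d_m, L_m, rpm):
    """P = T * omega with T = G * J * theta / L and J = pi * d^4 / 32."""
    J = math.pi * d_m ** 4 / 32           # polar moment of inertia
    torque = G_pa * J * theta_rad / L_m   # N*m
    omega = 2 * math.pi * rpm / 60        # shaft speed in rad/s
    return torque * omega / 1000          # kW

# e.g. 0.002 rad twist over 4 m of a 0.40 m steel shaft (G ~ 79 GPa) at 100 rpm
print(shaft_power_kw(0.002, 79e9, 0.40, 4.0, 100))   # ~1040 kW
```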

Obstacle Avoidance Algorithm of a Mobile Robot using Image Information (화상 정보를 이용한 이동 로봇의 장애물 회피 알고리즘)

  • Kwon, O-Sang;Lee, Eung-Hyuk;Han, Yong-Hwan;Hong, Seung-Hong
    • Journal of IKEEE
    • /
    • v.2 no.1 s.2
    • /
    • pp.139-149
    • /
    • 1998
  • Robot navigation with a single kind of sensor has inherent limitations. We propose a system that exploits the complementary advantages of a CCD camera and ultrasonic sensors, together with a coordinate extraction algorithm for avoiding obstacles during navigation. We implemented a CCD-based vision system at the front of the vehicle and performed experiments to verify the proposed algorithm. The results show a lower error rate when the CCD camera was used together with the ultrasonic sensors than when only the ultrasonic sensors were used, and the measured values can be used to generate paths that avoid the obstacles.
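
A plausible form of the coordinate extraction: the camera fixes the obstacle's bearing from its image column while the ultrasonic sensor supplies the range, and together they give the obstacle's position in the robot frame. A sketch with assumed camera parameters (image width and field of view are not given in the abstract):

```python
# Fuse camera bearing and ultrasonic range into obstacle coordinates.
import math

IMAGE_WIDTH = 640                  # pixels (assumed)
HORIZONTAL_FOV = math.radians(60)  # camera field of view (assumed)

def obstacle_xy(pixel_col, range_m):
    """Convert a detected obstacle's image column and sonar range to (x, y)."""
    bearing = (pixel_col / IMAGE_WIDTH - 0.5) * HORIZONTAL_FOV
    return range_m * math.cos(bearing), range_m * math.sin(bearing)
```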
