• Title/Summary/Keyword: camera vision

Perception Method of the Marking Location for Automation of Billet Marking Processes (빌릿 마킹 공정의 자동화를 위한 마킹 위치 인식 방법)

  • Park Jin-Woo;Yook Hyunho;Che Wooseong;Boo Kwangsuck
    • Journal of the Korean Society for Precision Engineering / v.22 no.6 s.171 / pp.127-134 / 2005
  • Machine vision has been applied to a number of industrial applications for quality control and automation to improve manufacturing processes. In this paper, an automation system using machine vision is developed that is applicable to the marking process in a steel production line. The working environment is very harsh for workers, so demand for automatic systems in the steel industry is increasing. The developed automatic marking system consists of several mechanical and electrical elements, such as a laser position detecting sensor system for a structured laser beam that is projected onto the billet in order to detect the billet geometry. An image processing algorithm has been developed to perceive the center positions of the camera and the billet, respectively, and to align the two centers. A series of experiments has been conducted to investigate the performance of the proposed algorithm. The results show that the two centers of the camera and the billet could be detected very well, and the difference between the two center positions could also be decreased via the proposed location error decreasing algorithm.
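
The abstract does not give implementation details; a minimal OpenCV sketch of the center-detection step is given below, assuming the billet shows up as the largest bright region in the camera image and that the goal is simply the offset between the billet centroid and the image (camera) center. The threshold and morphology settings are illustrative, not the paper's.

```python
import cv2
import numpy as np

def billet_center_offset(gray):
    """Estimate the billet centroid and its offset from the image center.

    Assumes the billet is the largest bright region in an 8-bit grayscale
    image; threshold and morphology would need tuning on real billet images.
    """
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    billet = max(contours, key=cv2.contourArea)

    m = cv2.moments(billet)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # billet centroid
    h, w = gray.shape
    icx, icy = w / 2.0, h / 2.0                          # camera (image) center

    # Offset to be driven toward zero by the marking-head positioning loop.
    return (cx - icx, cy - icy)
```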

Game Engine Driven Synthetic Data Generation for Computer Vision-Based Construction Safety Monitoring

  • Lee, Heejae;Jeon, Jongmoo;Yang, Jaehun;Park, Chansik;Lee, Dongmin
    • International conference on construction engineering and project management / 2022.06a / pp.893-903 / 2022
  • Recently, computer vision (CV)-based safety monitoring (i.e., object detection) systems have been widely researched in the construction industry. Sufficient, high-quality data are required to detect objects accurately, and such data collection is particularly important for detecting small objects or images taken from different camera angles. Although several previous studies proposed novel data augmentation and synthetic data generation approaches, the problem is still not thoroughly addressed (i.e., accuracy remains limited) in the dynamic construction work environment. In this study, we propose a game engine-driven synthetic data generation model to enhance the accuracy of the CV-based object detection model, mainly targeting small objects. In the virtual 3D environment, we generated synthetic data to complement training images by altering the virtual camera angles. The main contribution of this paper is to confirm whether synthetic data generated in a game engine can improve the accuracy of the CV-based object detection model.
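
The paper generates its data inside a game engine; as a rough stand-in for that workflow, the sketch below uses Blender's Python API (bpy) to render one scene from a sweep of virtual camera angles, which is the core idea of producing view-diverse synthetic training images. The object name, tilt angle, and output path are placeholders.

```python
# Run inside Blender: renders the current scene from a sweep of camera angles.
# A fuller version would also move the camera along an orbit and re-aim it.
import math
import bpy

cam = bpy.data.objects["Camera"]          # placeholder camera object name
scene = bpy.context.scene

for i, yaw_deg in enumerate(range(0, 360, 15)):   # 24 viewpoints around the site
    # Keep a fixed downward tilt and vary the yaw to mimic different camera poses.
    cam.rotation_euler = (math.radians(60.0), 0.0, math.radians(yaw_deg))
    scene.render.filepath = f"//synthetic/view_{i:03d}.png"
    bpy.ops.render.render(write_still=True)
```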

A vision-based system for inspection of expansion joints in concrete pavement

  • Jung Hee Lee;Ibragimov Eldor;Heungbae Gil;Jong-Jae Lee
    • Smart Structures and Systems / v.32 no.5 / pp.309-318 / 2023
  • Appropriate maintenance of highway roads is critical for the safe operation of road networks and reduces maintenance costs. Multiple methods have been developed to investigate the surface of roads for various types of cracks, potholes, and other damage. Like road surface damage, the condition of expansion joints in concrete pavement is important to avoid unexpected hazardous situations. Thus, in this study, a new system is proposed for autonomous expansion joint monitoring using a vision-based system. The system consists of the following three key parts: (1) a camera-mounted vehicle, (2) indication marks on the expansion joints, and (3) a deep learning-based automatic evaluation algorithm. Paired marks indicating the expansion joints in the concrete pavement allow them to be detected automatically. An inspection vehicle is equipped with an action camera that acquires images of the expansion joints in the road. You Only Look Once (YOLO) automatically detects the expansion joints with indication marks, with a detection accuracy of 95%. The width of the detected expansion joint is calculated using an image processing algorithm, and based on the calculated width the expansion joint is classified into one of two classes: normal and dangerous. The obtained results demonstrate that the proposed system is very efficient in terms of speed and accuracy.
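
The abstract describes a three-stage pipeline: detect the paired indication marks, measure the joint width between them, and classify the joint as normal or dangerous. A minimal sketch of the last two steps might look like the following, assuming the detector already returns one bounding box per indication mark and that a pixel-to-millimetre scale is known; the 60 mm threshold is a placeholder, not the paper's value.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Detector output for one indication mark, in pixel coordinates."""
    x1: float
    y1: float
    x2: float
    y2: float

def joint_width_mm(left_mark: Box, right_mark: Box, mm_per_px: float) -> float:
    """Horizontal gap between the two paired indication marks, in millimetres."""
    gap_px = right_mark.x1 - left_mark.x2
    return max(gap_px, 0.0) * mm_per_px

def classify_joint(width_mm: float, danger_threshold_mm: float = 60.0) -> str:
    """Two-class rule from the paper: 'normal' vs. 'dangerous'.
    The 60 mm threshold is illustrative; the abstract does not state the value."""
    return "dangerous" if width_mm >= danger_threshold_mm else "normal"

# Example: marks detected 180 px apart edge-to-edge, at 0.4 mm per pixel.
width = joint_width_mm(Box(100, 50, 160, 90), Box(340, 52, 400, 92), mm_per_px=0.4)
print(width, classify_joint(width))   # 72.0 dangerous
```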

Machine Vision Inspection System of Micro-Drilling Processes On the Machine Tool (공작기계 상에서 마이크로드릴 공정의 머신비전 검사시스템)

  • Yoon, Hyuk-Sang;Chung, Sung-Chong
    • Transactions of the Korean Society of Mechanical Engineers A / v.28 no.6 / pp.867-875 / 2004
  • In order to inspect burr geometry and hole quality in micro-drilling processes, a cost-effective method using image processing and shape-from-focus (SFF) techniques on the machine tool is proposed. A CCD camera with a zoom lens and a novel illumination unit is used in this paper. Since the on-machine vision unit is incorporated with the CNC function of the machine tool, direct measurement and condition monitoring of micro-drilling processes are conducted between drilling operations on the machine tool. Stainless steel and hardened tool steel are used as specimens, and carbide twist drills are used in the experiments. The validity of the developed system is confirmed through experiments.
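
Shape from focus recovers height by finding, for each pixel, the image in a focus stack where a local focus measure peaks. A minimal sketch using a variance-of-Laplacian focus measure is shown below; the focus step size and window size are assumptions, and the paper's actual focus measure may differ.

```python
import cv2
import numpy as np

def shape_from_focus(stack, step_um=10.0, ksize=9):
    """Small shape-from-focus sketch.

    stack   : list of same-sized grayscale images, one per focus position
    step_um : physical spacing between consecutive focus positions (assumed)
    Returns a per-pixel height map in micrometres.
    """
    focus = []
    for img in stack:
        lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F)
        # Local focus measure: variance of the Laplacian in a small window.
        mean = cv2.blur(lap, (ksize, ksize))
        var = cv2.blur(lap * lap, (ksize, ksize)) - mean * mean
        focus.append(var)
    focus = np.stack(focus, axis=0)          # (N, H, W)
    best = np.argmax(focus, axis=0)          # index of sharpest image per pixel
    return best.astype(np.float32) * step_um
```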

Development of a Sensor System for Real-Time Posture Measurement of Mobile Robots (이동 로봇의 실시간 자세 추정을 위한 센서 시스템의 개발)

  • 이상룡;권승만
    • Transactions of the Korean Society of Mechanical Engineers / v.17 no.9 / pp.2191-2204 / 1993
  • A sensor system has been developed to measure the posture (position and orientation) of mobile robots working in industrial environments. The proposed sensor system consists of a CCD camera, retro-reflective landmarks, a strobe unit, and an image processing board. The proposed hardware system can be built at an economical price compared to commercial vision systems. The system is capable of measuring the posture of mobile robots within 60 msec when a 386 personal computer is used as the host computer. The experimental results demonstrated a remarkable performance of the proposed sensor system in the posture measurement of mobile robots: the average error in position is less than 3 mm and the average error in orientation is less than 1.5 degrees.
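
The abstract does not detail the geometry, but a common arrangement is two retro-reflective landmarks mounted on the robot and observed by the camera. Under that assumption, once the landmark centroids have been converted to world coordinates, the planar posture reduces to a midpoint and an angle, as in the sketch below.

```python
import math

def robot_posture(front_xy, rear_xy):
    """Planar posture from two retro-reflective landmarks mounted on the robot.

    front_xy, rear_xy : landmark centroids already expressed in world
    coordinates (metres); the two-marker layout is an assumption here.
    Returns (x, y, heading_deg).
    """
    fx, fy = front_xy
    rx, ry = rear_xy
    x, y = (fx + rx) / 2.0, (fy + ry) / 2.0                # robot reference point
    heading = math.degrees(math.atan2(fy - ry, fx - rx))   # front-minus-rear direction
    return x, y, heading

# Example: front marker at (1.20, 0.50) m, rear at (1.00, 0.50) m -> heading 0 deg.
print(robot_posture((1.20, 0.50), (1.00, 0.50)))
```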

An Accurate Camera Calibration Using Higher-Order Polynomials (고차 polynomial을 이용한 정밀한 카메라 캘리브레이션)

  • Jo, Tae-Hun
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.413-416 / 2007
  • Camera calibration is the process of finding the mapping that defines the transformation between the image coordinate system and the real-world coordinate system in order to compensate for the optical distortion of a vision system, and it is very important in camera-based applications such as measurement, inspection, and position compensation. The Tsai algorithm, widely used for camera calibration, requires several intrinsic camera parameters, and using it properly demands an understanding of those parameters and prior knowledge of the camera and lens distortion models. In this paper, we introduce an easy-to-use camera calibration method that implements the transformation between image coordinates and real-world coordinates with higher-order polynomials, without any model of or assumptions about the camera or lens distortion, and evaluate its performance. The evaluation results show that the camera calibration method using third-order polynomials is more accurate than the Tsai algorithm.
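
A minimal least-squares version of the described idea (fitting a third-order polynomial that maps image coordinates directly to world coordinates from calibration points, with no lens model) could look like this; the calibration data themselves are assumed to come from a target of known geometry.

```python
import numpy as np

def poly_terms(u, v, order=3):
    """All monomials u**i * v**j with i + j <= order (10 terms for order 3)."""
    return np.array([u**i * v**j
                     for i in range(order + 1)
                     for j in range(order + 1 - i)], dtype=float)

def fit_mapping(img_pts, world_pts, order=3):
    """Least-squares fit of the image->world polynomial mapping.

    img_pts   : (N, 2) pixel coordinates of calibration targets
    world_pts : (N, 2) corresponding real-world coordinates (e.g., mm)
    Returns a coefficient matrix usable by apply_mapping().
    """
    A = np.array([poly_terms(u, v, order) for u, v in img_pts])
    world = np.asarray(world_pts, dtype=float)
    coeffs, *_ = np.linalg.lstsq(A, world, rcond=None)   # shape (n_terms, 2)
    return coeffs

def apply_mapping(coeffs, u, v, order=3):
    """Map one pixel coordinate (u, v) to an (X, Y) world coordinate."""
    return poly_terms(u, v, order) @ coeffs
```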

Development of On-line Wrinkle Measurement System Using Machine Vision (머신 비젼을 이용한 실시간 링클 측정 시스템 개발)

  • Shin, Dong-Keun;To, Hoang-Minh;Ko, Sung-Lim
    • Transactions of the Korean Society of Mechanical Engineers A / v.32 no.3 / pp.274-279 / 2008
  • The roll-to-roll (R2R) manufacturing process, also known as web processing, has been applied to produce electronic devices on flexible plastic or metal foil. To increase the performance and productivity of the R2R process, effective control and on-line supervision of web quality become very important. Wrinkles are one type of defect, incurred due to compressive stresses. A system for on-line measurement of wrinkles is developed using an area-scan camera and a machine vision laser. The 2D image obtained by the area-scan camera is processed with a Gaussian regression method to characterize the wrinkle on a transparent web. The experiments show that a wrinkle height of 0.3 mm can be measured successfully at 74 fps.
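
The abstract says the laser-line image is characterized by Gaussian regression; one plausible minimal reading of that step is fitting a Gaussian to each intensity profile across the laser stripe to get the sub-pixel line position, as sketched below with SciPy. The conversion from line displacement to wrinkle height (the triangulation scale) is assumed to be known.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(y, amp, mu, sigma, offset):
    return amp * np.exp(-(y - mu) ** 2 / (2.0 * sigma ** 2)) + offset

def laser_line_position(column):
    """Sub-pixel position of the laser line in one image column.

    column : 1-D array of pixel intensities across the laser stripe.
    A Gaussian is fitted around the brightest pixel; mu is the line centre.
    """
    y = np.arange(len(column), dtype=float)
    y0 = float(np.argmax(column))
    p0 = [float(column.max() - column.min()), y0, 2.0, float(column.min())]
    (amp, mu, sigma, offset), _ = curve_fit(gaussian, y, column.astype(float), p0=p0)
    return mu

# The wrinkle profile is the deviation of mu across columns, scaled by the
# laser-triangulation factor (mm per pixel of line displacement, assumed known).
```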

Implementation of Transformation Algorithm for a Leg-wheel Hexapod Robot Using Stereo Vision (스테레오 영상처리를 이용한 바퀴달린 6족 로봇의 형태변형 알고리즘 구현)

  • Lee, Sang-Hun;Kim, Jin-Geol
    • Proceedings of the KIEE Conference / 2006.10c / pp.202-204 / 2006
  • In this paper, a scheme for detecting spatial coordinates based on a stereo camera is proposed for the transformation algorithm of a leg-wheel hexapod robot. The robot is designed to combine the fast mobility of wheel driving on flat terrain with the ability to traverse uneven terrain by walking. In the proposed system, depth information is obtained using the disparity data from the left and right images captured by the stereo camera system and the perspective transformation between a 3-D scene and the image plane. Using the reconstructed environmental data and the transformation algorithm, the robot decides between wheel driving and leg walking, and can estimate the width of the passage and adjust its own width accordingly.
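
The depth step described above (disparity from the left and right images plus the perspective model) reduces to the standard relation Z = f·B/d. A minimal OpenCV sketch is shown below; the focal length and baseline values are placeholders for the robot's actual stereo rig, and the paper may use a different matcher.

```python
import cv2
import numpy as np

FOCAL_PX = 700.0     # focal length in pixels (placeholder)
BASELINE_M = 0.12    # distance between the two cameras in metres (placeholder)

def depth_map(left_gray, right_gray):
    """Depth in metres from a rectified 8-bit stereo pair via Z = f * B / d."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0  # fixed-point to pixels
    disp[disp <= 0] = np.nan                       # invalid / unmatched pixels
    return FOCAL_PX * BASELINE_M / disp
```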

Development of a rotation angle estimation algorithm of HMD using feature points extraction (특징점 추출을 통한 HMD 회전각측정 알고리즘 개발)

  • Ro, Young-Shick;Kim, Chul-Hee;Yun, Won-Jun;Yoon, Yoo-Kyoung
    • Proceedings of the IEEK Conference / 2009.05a / pp.360-362 / 2009
  • In this paper, we study the real-time azimuthal measurement of an HMD (Head-Mounted Display) using feature point detection to control a tele-operated vision system on a mobile robot. To give a sense of presence to the tele-operator, we use an HMD to display the remote scene, measure the rotation angles of the HMD in real time, and transmit the measured rotation angles to the mobile robot controller to synchronize the pan-tilt angles of the remote camera with the HMD. We suggest an algorithm for real-time estimation of the HMD rotation angles using feature points extracted from a PC-camera image.
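
A minimal sketch of one way to estimate the head rotation between two consecutive camera frames from feature points is shown below; it assumes the camera is rigidly attached to the HMD and uses ORB features with a small-angle approximation, which is not necessarily the extractor or geometry the paper used.

```python
import math
import cv2
import numpy as np

FOCAL_PX = 600.0   # camera focal length in pixels (placeholder)

def hmd_rotation_deg(prev_gray, curr_gray):
    """Approximate (yaw, pitch) change between frames from feature displacement.

    Assumes the camera is rigidly attached to the HMD, so a pure head rotation
    appears mainly as a uniform shift of the tracked feature points.
    """
    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < 8:
        return None
    shifts = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                       for m in matches])
    dx, dy = np.median(shifts, axis=0)              # robust average displacement

    yaw = math.degrees(math.atan2(dx, FOCAL_PX))    # horizontal shift -> yaw
    pitch = math.degrees(math.atan2(dy, FOCAL_PX))  # vertical shift  -> pitch
    return yaw, pitch
```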

A Study on Development of Visual Navigation System based on Neural Network Learning

  • Shin, Suk-Young;Lee, Jang-Hee;You, Yang-Jun;Kang, Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.2 no.1 / pp.1-8 / 2002
  • Visual navigation has been integrated into several navigation systems. This paper shows that the system recognizes difficult indoor roads without any specific marks such as painted guide lines or tape. In this method, the robot navigates with visual sensors, using visual information to guide itself along the road. A neural network system was used to learn the driving pattern and decide where to move. We present a vision-based process for an AMR (Autonomous Mobile Robot) that is able to navigate on an indoor road with simple computation. We used a single USB-type web camera instead of an expensive CCD camera to build a smaller and cheaper navigation system.
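
The abstract only says a neural network learns the driving pattern from web-camera images. One common minimal form of that idea (not necessarily the authors') is a small classifier mapping a downsampled frame to a steering command, sketched here with scikit-learn; the image size, network size, and command set are assumptions.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

COMMANDS = ["left", "straight", "right"]   # assumed command set

def frame_to_features(frame_bgr, size=(32, 24)):
    """Downsample a web-camera frame to a small grayscale feature vector."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, size).astype(np.float32) / 255.0
    return small.ravel()

def train_navigator(X, y):
    """X: feature vectors from recorded frames; y: integer index into COMMANDS
    for the command a human driver gave at each frame."""
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    net.fit(X, y)
    return net

def steer(net, frame_bgr):
    """Predict the steering command for a new camera frame."""
    return COMMANDS[int(net.predict([frame_to_features(frame_bgr)])[0])]
```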