
Development of an Uplift Measurement System for Overhead Contact Wire using High Speed Camera (고속카메라를 이용한 전차선 압상량 검측 시스템 개발)

  • Park, Young;Cho, Yong-Hyeon;Lee, Ki-Won;Kim, Hyung-Jun;Kim, In-Chol
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers / v.22 no.10 / pp.864-869 / 2009
  • The measurement of contact wire uplift in electric railways is one of the most important test parameters for accepting the maximum permitted speed of new electric vehicles and pantographs. The contact wire uplift can be measured only over short periods, when the pantograph passes monitoring stations. In this paper, a high-speed image measurement system and its image processing method are developed to evaluate the dynamic uplift of overhead contact wires caused by the pantograph contact forces of the Korea Tilting Train eXpress (TTX) and Korea Train eXpress (KTX). The system was implemented using a high-speed CMOS (Complementary Metal Oxide Semiconductor) camera and gigabit Ethernet LAN. Unlike previous systems, the high-speed camera system is installed at the side of the rail, which makes maintenance convenient. On-field verification was conducted by measuring the uplift of the TTX at various operating speeds on the Honam conventional line and a high-speed railway line. The proposed high-speed image measurement system shows promise for on-field application to high-speed trains such as the KTX and TTX.
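The core measurement described above reduces to tracking the wire's vertical pixel position frame by frame and converting it to a physical displacement. A minimal sketch of that conversion, assuming a pre-measured millimetre-per-pixel scale factor from a calibration target (the function name and values are hypothetical, not from the paper):

```python
# Hypothetical sketch: convert the contact wire's pixel row in each
# high-speed frame into a physical uplift value. Assumes a fixed
# side-mounted camera and a known mm-per-pixel scale at the wire plane.

def uplift_mm(y_rest_px: float, y_frame_px: float, mm_per_px: float) -> float:
    """Uplift is the upward displacement from the wire's rest position.

    Image y-coordinates grow downward, so an uplifted wire sits at a
    smaller row index than at rest.
    """
    return (y_rest_px - y_frame_px) * mm_per_px

# Example: rest line detected at row 480, wire observed at row 452,
# scale 2.5 mm/pixel -> 70 mm of uplift.
print(uplift_mm(480.0, 452.0, 2.5))  # 70.0
```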

Remote Distance Measurement from a Single Image by Automatic Detection and Perspective Correction

  • Layek, Md Abu;Chung, TaeChoong;Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.8 / pp.3981-4004 / 2019
  • This paper proposes a novel method for locating objects in real space from a single remote image and measuring actual distances between them by automatic detection and perspective transformation. The dimensions of the real space are known in advance. First, the corner points of the region of interest are detected from an image using deep learning. Then, based on the corner points, the region of interest (ROI) is extracted and made proportional to real space by applying a warp-perspective transformation. Finally, the objects are detected and mapped to their real-world locations. Removing distortion from the image using camera calibration improves the accuracy in most cases. The deep learning framework Darknet is used for detection, with the necessary modifications to integrate perspective transformation, camera calibration, un-distortion, etc. Experiments are performed with two types of cameras, one with barrel and the other with pincushion distortion. The results show that the differences between the calculated distances and those measured in real space with measuring tapes are very small, approximately 1 cm on average. Furthermore, automatic corner detection allows the system to be used with any camera that has a fixed pose or is in motion; using more points significantly enhances the accuracy of real-world mapping even without camera calibration. Perspective transformation also increases object detection efficiency by normalizing the sizes of all objects.
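The mapping step above — four detected corners of a region with known real dimensions define a homography, through which pixel coordinates become real-space coordinates — can be sketched without any vision library. This is a generic DLT (Direct Linear Transform) illustration under assumed corner coordinates, not the paper's Darknet pipeline:

```python
import numpy as np

# Sketch: estimate the image-to-floor homography from 4 corner
# correspondences (DLT), then map object pixels to metres and measure
# distances. All coordinates below are made-up example values.

def homography(src_pts, dst_pts):
    """Direct Linear Transform from 4+ point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def to_real(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Corners of a 4 m x 3 m floor region as seen in the image (pixels) ...
corners_px = [(100, 400), (540, 410), (500, 120), (150, 110)]
# ... and their known real-space coordinates (metres).
corners_m = [(0, 0), (4, 0), (4, 3), (0, 3)]
H = homography(corners_px, corners_m)

a = to_real(H, (100, 400))    # maps to (0, 0)
b = to_real(H, (540, 410))    # maps to (4, 0)
print(np.linalg.norm(a - b))  # distance between the two corners: 4.0 m
```

With exact correspondences the four corners map back perfectly; with noisy detections, more than four points and a least-squares fit (as the abstract notes) improve the mapping.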

Development of a Project Schedule Simulation System by a Synchronization Methodology of Active nD Object and Real Image of Construction Site

  • Kim, Hyeon Seung;Shin, Jong Myeong;Park, Sang Mi;Kang, Leen Seok
    • International conference on construction engineering and project management / 2015.10a / pp.344-348 / 2015
  • Image data from a web camera can be used to identify the construction status of a site from a remote office and to support safety management. This study develops a construction schedule simulation system based on an active nD object linked with real image data from a web camera at the construction site. The progress control method based on a 4D object represents the progress of each activity with different colors according to progress status. Since this method is still based on a virtual-reality object, it gives practicing engineers a less realistic description. Therefore, to exploit BIM more realistically, the real image of the actual construction status and the 4D object of the planned schedule at a data date should be compared on one screen simultaneously. The methodology and the developed system are verified in a case project where a web camera was installed for system verification.


The Flight Data Measurement System of Flying Golf Ball Using the High Speed CCD Camera (고속 CCD 카메라를 이용한 비행골프공의 데이터 측정 시스템)

  • Kim, Ki-Hyun;Jo, Jae-Ik;Yun, Chang-Ok;Park, Hyun-Woo;Joo, Woo-Suk;Lee, Dong-Hoon;Yun, Tae-Soo
    • Proceedings of the Korean HCI Society Conference / 2009.02a / pp.168-172 / 2009
  • Recently, as 3D sports games have grown in popularity, research on recognizing the motions of real users has progressed actively, and research on golf in particular is active. In this paper, images acquired by a high-speed CCD camera are processed to measure the flight data of a golf ball. While photographing, the high-speed camera in this system exposes an image at regular intervals, and a line-scan camera checks whether the golf ball has passed. The location information of the detected golf ball is then used to calculate its speed and direction with physical formulas, and the results are applied to a golf simulation.
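Since the camera exposes the ball at a fixed interval, two detected ball centres plus that interval are enough to recover launch speed and direction from basic kinematics. A minimal sketch, with hypothetical pixel positions and scale (the paper's exact formulas and calibration are not given in the abstract):

```python
import math

# Sketch: launch speed and angle from two ball centres detected in
# consecutive exposures a known interval apart. Assumes a known
# metres-per-pixel scale in the ball's plane of flight.

def launch_data(p1, p2, dt_s, m_per_px):
    """Return (speed in m/s, launch angle in degrees)."""
    dx = (p2[0] - p1[0]) * m_per_px
    dy = (p1[1] - p2[1]) * m_per_px  # image y grows downward
    speed = math.hypot(dx, dy) / dt_s
    angle = math.degrees(math.atan2(dy, dx))
    return speed, angle

# 60 px right and 80 px up at 1 mm/px over 2 ms -> 0.1 m / 0.002 s = 50 m/s.
sp, ang = launch_data((100, 300), (160, 220), 0.002, 0.001)
print(round(sp, 1), round(ang, 1))  # 50.0 53.1
```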


Three-Dimensional Visualization Technique of Occluded Objects Using Integral Imaging with Plenoptic Camera

  • Lee, Min-Chul;Inoue, Kotaro;Tashiro, Masaharu;Cho, Myungjin
    • Journal of information and communication convergence engineering / v.15 no.3 / pp.193-198 / 2017
  • In this study, we propose a three-dimensional (3D) visualization technique for occluded objects using integral imaging with a plenoptic camera. In previous studies, depth maps estimated from elemental images were used to remove occlusion; however, the resolution of these depth maps is low, so the occlusion removal accuracy is limited. Therefore, we use a plenoptic camera to obtain a high-resolution depth map, from which an individual depth map for each elemental image can also be generated. Finally, we regenerate a more accurate depth map of the 3D objects from these separate depth maps, allowing us to remove the occlusion layers more effectively. We perform optical experiments to validate the proposed technique, and we use the MSE and PSNR as performance metrics to evaluate the quality of the reconstructed image. In conclusion, the plenoptic camera enhances the visual quality of the reconstructed image after the occlusion layers are removed.
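The fusion idea above — regenerating one more accurate depth map from several per-elemental-image maps — can be illustrated with a simple per-pixel median, which suppresses outlier depths caused by occlusion. This is a generic illustration with made-up values, not the paper's actual fusion rule:

```python
import numpy as np

# Sketch: fuse per-elemental-image depth maps into one map with a
# per-pixel median, so isolated occlusion outliers are rejected.

def fuse_depth_maps(maps):
    return np.median(np.stack(maps), axis=0)

m1 = np.array([[1.0, 2.0], [3.0, 9.0]])   # 9.0 is an occlusion outlier
m2 = np.array([[1.1, 2.0], [3.0, 4.0]])
m3 = np.array([[0.9, 2.1], [3.1, 4.1]])
print(fuse_depth_maps([m1, m2, m3]))      # outlier pixel settles at 4.1
```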

A Study on Design of Visual Sensor Using Scanning Beam for Shape Recognition of Weld Joint (용접접합부의 형상계측을 위한 주사형 시각센서의 설계에 관한 연구)

  • 배강열
    • Journal of Welding and Joining / v.21 no.2 / pp.102-110 / 2003
  • A visual sensor consisting of a polygonal mirror, a laser, and a CCD camera is proposed to measure the distance to a weld joint and thereby recognize the joint shape. To scan the laser beam over an object, an 8-facet polygonal mirror is used as the rotating mirror. By locating the laser and the camera at axisymmetric positions around the mirror, the synchronized-scan condition is satisfied even when the mirror rotates continuously in one direction, which removes the inertia effect of conventional oscillating-mirror methods. Mathematical modelling of the proposed sensor with the optical triangulation method yields the relation between the position of an image point on the camera and that of the laser spot on the object. Through geometrical simulation of the sensor using the principles of reflection and virtual images, the optical path of the laser light can be predicted. The position and orientation of the CCD camera are determined from the Scheimpflug condition so that any image reflected from an object within the field of view stays in focus. The modelling and simulation results reveal that the proposed visual sensor can recognize the weld joint and its vicinity within the range of its field of view and resolution. (Received February 19, 2003)
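The image-position-to-distance relation at the heart of optical triangulation can be sketched in its simplest textbook form. This is a generic similar-triangles illustration (laser beam assumed parallel to the camera's optical axis), not the paper's full scanning-mirror model:

```python
# Sketch of optical triangulation: with baseline b between laser and
# camera, camera focal length f, and the laser spot imaged at offset u
# from the optical axis, similar triangles give range z = b * f / u.

def range_mm(baseline_mm: float, focal_mm: float, u_mm: float) -> float:
    return baseline_mm * focal_mm / u_mm

# Example: b = 100 mm, f = 25 mm, spot imaged 5 mm off-axis -> 500 mm.
print(range_mm(100.0, 25.0, 5.0))  # 500.0
```

Note how the sensitivity falls with range: the same pixel offset change corresponds to a larger depth change at larger z, which is why the abstract discusses resolution limits over the field of view.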

Development of Color 3D Scanner Using Laser Structured-light Imaging Method

  • Ko, Youngjun;Yi, Sooyeong
    • Current Optics and Photonics / v.2 no.6 / pp.554-562 / 2018
  • This study presents a color 3D scanner based on the laser structured-light imaging method that can simultaneously acquire the 3D shape and color of a target object using a single camera. The 3D data acquisition is based on the structured-light imaging method, and the color data is obtained from a natural color image. Because both the laser image and the color image are acquired by the same camera, the 3D data and color data of a pixel are obtained efficiently, avoiding a complicated correspondence algorithm. In addition to the 3D data, the color data helps enhance the realism of the object model. The proposed scanner consists of two line lasers, a color camera, and a rotation table. The line lasers are deployed on either side of the camera to eliminate shadow areas of the target object. This study addresses the calibration methods for the camera parameters, the plane equations of the line lasers, and the center of the rotation table. Experimental results demonstrate accurate color and 3D data acquisition.
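Given the calibrated laser-plane equation mentioned above, each pixel on the laser stripe defines a camera ray, and the 3D point is the ray's intersection with that plane. A minimal sketch with the camera at the origin and made-up example values (the paper's actual calibration parameters are not given in the abstract):

```python
import numpy as np

# Sketch of structured-light 3D recovery: a stripe pixel gives a camera
# ray; intersecting it with the calibrated laser plane n.x = d yields
# the 3D point (camera centre at the origin).

def ray_plane_point(ray_dir, plane_n, plane_d):
    ray_dir = np.asarray(ray_dir, dtype=float)
    plane_n = np.asarray(plane_n, dtype=float)
    t = plane_d / plane_n.dot(ray_dir)  # solve n . (t * ray) = d
    return t * ray_dir

# Laser plane x = 0.2 m, ray through normalized image point (0.1, 0.0, 1).
p = ray_plane_point((0.1, 0.0, 1.0), (1.0, 0.0, 0.0), 0.2)
print(p)  # [0.2 0.  2. ]
```

The color of the same pixel in the natural image can then be attached directly to the recovered point, which is the single-camera advantage the abstract describes.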

Development of a Camera Self-calibration Method for 10-parameter Mapping Function

  • Park, Sung-Min;Lee, Chang-je;Kong, Dae-Kyeong;Hwang, Kwang-il;Doh, Deog-Hee;Cho, Gyeong-Rae
    • Journal of Ocean Engineering and Technology / v.35 no.3 / pp.183-190 / 2021
  • Tomographic particle image velocimetry (PIV) is a widely used method that measures a three-dimensional (3D) flow field by reconstructing camera images into voxel images. In 3D measurements, the setting and calibration of the camera's mapping function significantly impact the results. In this study, a camera self-calibration technique is applied to tomographic PIV to reduce the errors arising from the mapping function. The measured 3D particles are superimposed on the image to create a disparity map, and self-calibration is performed by feeding the disparity-map error back into the particle center values. Vortex-ring synthetic images are generated and the developed algorithm is applied. The optimal result is obtained by applying self-calibration once when the center error is less than 1 pixel, and by applying it two to three times when the error exceeds 1 pixel; the maximum recovery ratio is 96%. Further self-calibration did not improve the results. The algorithm was also evaluated in an actual rotational-flow experiment, where the optimal result was again obtained with a single application of self-calibration, consistent with the synthetic-image results. The developed algorithm is therefore expected to improve the performance of 3D flow measurements.
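The correction loop above amounts to projecting triangulated particles back into a camera, collecting the pixel residuals (the disparity map), and shifting the mapping function by them. A toy illustration of one such update, with made-up residuals (the paper's 10-parameter mapping function is not reproduced here):

```python
import numpy as np

# Sketch of one self-calibration update: the mean disparity between
# detected and back-projected particle centres estimates the camera
# model's pixel offset, which is then fed back as a correction.

def self_calibrate(projected_px, detected_px):
    """Return the (dx, dy) correction that removes the mean disparity."""
    disparity = np.asarray(detected_px, float) - np.asarray(projected_px, float)
    return disparity.mean(axis=0)

proj = [(10.0, 20.0), (30.0, 40.0)]
det = [(10.4, 20.2), (30.4, 40.2)]  # camera model is off by (0.4, 0.2) px
print(self_calibrate(proj, det))    # [0.4 0.2]
```

Iterating this update is what the abstract reports: one pass suffices below 1 pixel of center error, two to three passes above it.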

Development of a real-time gamma camera for high radiation fields

  • Minju Lee;Yoonhee Jung;Sang-Han Lee
    • Nuclear Engineering and Technology / v.56 no.1 / pp.56-63 / 2024
  • In high radiation fields, gamma cameras suffer from pulse pile-up, resulting in poor energy resolution, count losses, and image distortion. To overcome this problem, various methods have been introduced that reduce the size of the aperture or pixel, reject pile-up events, or correct pile-up events, but these technologies have limitations in terms of mechanical design and real-time processing. The purpose of this study is to develop a real-time gamma camera for evaluating radioactive contamination in high radiation fields. The gamma camera is composed of a pinhole collimator, a NaI(Tl) scintillator, a position-sensitive photomultiplier (PSPMT), a signal processing board, and data acquisition (DAQ). Pulse pile-up is corrected in real time on a field-programmable gate array (FPGA) using the start time correction (STC) method, which corrects the amplitude of a pile-up event by correcting the time at its start point. The performance of the gamma camera was evaluated using a high-dose-rate 137Cs source. For pulse pile-up ratios (PPRs) of 0.45 and 0.30, the energy resolution improved by 61.5% and 20.3%, respectively. In addition, the pile-up artifacts in the 137Cs radioisotope image were reduced.
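A piled-up pulse reads high because the tail of the preceding pulse is still decaying under it. A toy sketch in the spirit of start-time correction — extrapolate the previous pulse's tail to the start time of the piled-up event and subtract it — with an assumed exponential pulse shape and made-up numbers (the paper's FPGA implementation details are not given in the abstract):

```python
import math

# Sketch: subtract the extrapolated tail of the preceding pulse
# (exponential decay with time constant tau) at the start time of the
# piled-up event to recover the second pulse's true amplitude.

def corrected_amplitude(measured, prev_amp, dt_us, tau_us):
    residual = prev_amp * math.exp(-dt_us / tau_us)
    return measured - residual

# A true 0.8 pulse arriving 0.25 us after a 1.0 pulse (tau = 0.25 us)
# is measured as ~1.168; subtracting the tail recovers 0.8.
print(round(corrected_amplitude(1.168, 1.0, 0.25, 0.25), 3))  # 0.8
```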

Kinematic Method of Camera System for Tracking of a Moving Object

  • Jin, Tae-Seok
    • Journal of information and communication convergence engineering / v.8 no.2 / pp.145-149 / 2010
  • In this paper, we propose a kinematic approach to estimating the position of a moving object in real time, together with a new scheme for a mobile robot to track and capture the object using camera images. The moving object is assumed to be a point object and is projected onto the image plane to form a geometrical constraint equation that provides position data for the object based on the kinematics of the active camera. Uncertainties in the position estimate caused by the point-object assumption are compensated using a Kalman filter. To generate the shortest-time path for capturing the moving object, its linear and angular velocities are estimated and utilized. Experimental results on tracking and capturing a target object with the mobile robot are presented.
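The Kalman-filter compensation step can be sketched with a standard constant-velocity filter of the kind commonly used for such tracking; the state, noise values, and measurements below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

# Sketch: 1D constant-velocity Kalman filter smoothing a noisy position
# measurement stream; state is [position, velocity].

def kf_step(x, P, z, dt=1.0, q=1e-3, r=0.25):
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
    H = np.array([[1.0, 0.0]])             # we only measure position
    Q = q * np.eye(2)
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the position measurement z.
    S = H @ P @ H.T + r
    K = P @ H.T / S
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.0, 5.1]:  # target moving at roughly 1 unit/step
    x, P = kf_step(x, P, z)
print(x)  # position near 5, velocity estimate near 1
```

The velocity component of the state is exactly the quantity the scheme above needs for generating the shortest-time capture path.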