• Title/Summary/Keyword: camera image

3D Depth Estimation by a Single Camera (단일 카메라를 이용한 3D 깊이 추정 방법)

  • Kim, Seunggi;Ko, Young Min;Bae, Chulkyun;Kim, Dae Jin
    • Journal of Broadcast Engineering
    • /
    • v.24 no.2
    • /
    • pp.281-291
    • /
    • 2019
  • Depth from defocus estimates 3D depth from the phenomenon that an object in the camera's focal plane forms a sharp image while an object away from the focal plane appears blurred. In this paper, algorithms are studied that estimate 3D depth by analyzing the degree of blur in images taken with a single camera. The optimal object range was obtained by depth-from-defocus estimation using either one image from a single camera or two images taken by the same camera at different focus settings. For depth estimation with one image, the best performance was achieved at a focal length of 250 mm for both smartphone and DSLR cameras. Depth estimation with two images showed the best estimation range when the focal lengths were set to 150 mm and 250 mm for smartphone images and to 200 mm and 300 mm for DSLR images.
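
To illustrate the depth-from-defocus idea above, the sketch below compares a per-patch blur measure (variance of the Laplacian) between two shots of the same scene taken at different focus settings; patches that are sharper in the near-focused shot are treated as closer. This is a minimal, generic illustration rather than the paper's algorithm, and the file names and patch size are assumptions.

```python
import cv2
import numpy as np

def blur_map(gray, patch=32):
    """Per-patch sharpness: variance of the Laplacian inside each patch."""
    h, w = gray.shape
    lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
    rows, cols = h // patch, w // patch
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            out[r, c] = lap[r*patch:(r+1)*patch, c*patch:(c+1)*patch].var()
    return out

# Two shots of the same scene, focused near and far (illustrative file names).
near = cv2.imread("focus_near.png", cv2.IMREAD_GRAYSCALE)
far  = cv2.imread("focus_far.png",  cv2.IMREAD_GRAYSCALE)

# Relative depth cue: patches sharper in the near-focused image score > 1,
# suggesting the patch lies closer to the near focal plane.
eps = 1e-6
relative_depth_cue = blur_map(near) / (blur_map(far) + eps)
print(relative_depth_cue)
```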

Remote Monitoring and Motor Control Based on Multi-Platform (다중플랫폼 기반 영상감시 및 원격지 모터제어시스템 구현)

  • Choi, Seung-Dal;Jang, Gun-Ho;Kim, Seok-Min;Nam, Boo-Hee
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.200-202
    • /
    • 2004
  • This paper deals with a real-time monitoring and control system using a PC, a PDA (Windows CE embedded device), and a PCS (based on the BREW platform). The camera attached to the server captures the moving target, and each captured color frame is JPEG-encoded at the server for compression. The client (PC, PDA, or PCS) receives the image data from the remote server and decodes it to reconstruct the picture. The TCP/IP protocol is used to send the image frames; a minimal sketch of this JPEG-over-TCP transport follows this entry. The client can control the position of the camera by sending control commands to the server. Two DC servo motors for the camera are driven in two directions, up-down and left-right, by a controller that receives the commands from the server over a serial link. In this way, the client can monitor the moving images captured at the server and also control the position of the camera.

  • PDF
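
The frame transport described in the abstract above (JPEG-compressed frames pushed over TCP/IP) can be sketched roughly as follows. The port number, camera index, JPEG quality, and length-prefixed framing are illustrative assumptions; the paper does not specify them.

```python
import socket
import struct
import cv2

HOST, PORT = "0.0.0.0", 9000    # illustrative address and port

cap = cv2.VideoCapture(0)        # camera attached to the server
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)
conn, _ = srv.accept()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # JPEG compression of the captured color frame, as described in the abstract.
    ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
    if not ok:
        continue
    data = buf.tobytes()
    # Length-prefixed framing so the client knows where each image ends
    # (an assumption; the original framing is not described).
    conn.sendall(struct.pack(">I", len(data)) + data)
```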

Development of an Inspection System for the Metal Mask Using a Vision System

  • Choi, Kyung-jin;Park, Chong-Kug;Lee, Yong-Hyun;Park, Se-Seung
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2001.10a
    • /
    • pp.140.2-140
    • /
    • 2001
  • In this paper, we develop a system that inspects the metal mask using an area-scan camera and a belt-type xy-table, and we introduce its inspection algorithm. The whole area of the metal mask is divided into several inspection blocks, and the size of an inspection block is decided by the FOV (field of view). To compare against the camera image of each block, a reference image is generated from the Gerber file. The rotation angle of the metal mask is calculated from the linear equation through two end points of the horizontal boundary of a specific hole in the camera image; a minimal sketch of this angle computation follows this entry. To calculate the position error caused by the belt-type xy-table, a Hough transform using the distances among the holes in the two images is used. The center of the reference image is shifted by the calculated position error so that it coincides with the camera image ...

  • PDF
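
The rotation-angle computation referenced in the abstract above reduces to taking the slope of the line through the two end points of a hole's horizontal boundary. The sketch below is a hedged reconstruction of that step with placeholder coordinates.

```python
import math

def mask_rotation_angle(p1, p2):
    """Angle (degrees) between the line through two boundary end points
    and the image x-axis, used as the metal mask's rotation estimate."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(dy, dx))

# Illustrative end points of a hole's horizontal boundary in the camera image.
print(mask_rotation_angle((120.0, 305.0), (412.0, 311.0)))  # small positive tilt
```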

Bomb Impact Point Location Acquisition by Image Transformation using High-Resolution Commercial Camera (고해상도 상용카메라를 사용하는 영상변환을 이용한 탄착점 좌표획득)

  • Park, Sang-Jae;Ha, Seok-Wun
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.6 no.1
    • /
    • pp.1-7
    • /
    • 2011
  • In a bomb impact test, acquiring the bomb impact point location normally requires high-priced embedded equipment such as a Bomb Scoring System or an EOTS. Recently, high-resolution image processing has become possible because the resolution of commercial cameras is growing rapidly. In this paper we first propose an image transformation method for acquiring the real bomb impact image using a high-resolution commercial camera, and then present the process of calculating the real impact point coordinates from the transformed image. The experimental results indicate that the real bomb impact point information can be obtained effectively using only a commercial camera.
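
The image transformation described above can be illustrated with a standard planar perspective transform: four reference marks with known ground coordinates define the mapping, and the detected impact pixel is pushed through it. The reference and impact coordinates below are placeholders, and this OpenCV sketch is a generic stand-in rather than the authors' implementation.

```python
import cv2
import numpy as np

# Four reference marks seen in the camera image (pixels) and their known
# ground coordinates (meters) -- illustrative values only.
img_pts    = np.float32([[102, 740], [1810, 752], [1660, 180], [260, 168]])
ground_pts = np.float32([[0, 0], [50, 0], [50, 50], [0, 50]])

H = cv2.getPerspectiveTransform(img_pts, ground_pts)

# Pixel where the impact was detected in the image (placeholder value).
impact_px = np.float32([[[955, 433]]])
impact_ground = cv2.perspectiveTransform(impact_px, H)
print("impact point (m):", impact_ground[0, 0])
```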

Development of Vision Technology for the Test of Soldering and Pattern Recognition of Camera Back Cover (카메라 Back Cover의 형상인식 및 납땜 검사용 Vision 기술 개발)

  • 장영희
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 1999.10a
    • /
    • pp.119-124
    • /
    • 1999
  • This paper presents a new approach to pattern recognition of the camera back cover and soldering inspection. For real-time pattern recognition of the camera back cover and soldering inspection, the MVB-03 vision board is used. Images can be captured from a standard CCD monochrome camera at resolutions up to 640×480 pixels. Various options are available for color cameras, synchronous camera reset, and linescan cameras. Image processing is performed using Texas Instruments TMS320C31 digital signal processors. Image display is via a standard composite video monitor and supports non-destructive color overlay. System processing is possible using C30 machine code. Application software can be written in Borland C++ or Visual C++.

  • PDF

EVALUATION OF CAMERA PERFORMANCE USING ISO-BASED CRITERIA

  • Ko, Kyung-Woo;Park, Kee-Hyon;Ha, Yeong-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.76-79
    • /
    • 2009
  • This paper investigates the performance of a vehicular rear-view camera by quantifying image quality according to several objective criteria from the ISO (International Organization for Standardization). In addition, various experimental environments are defined considering the conditions under which a rear-view camera may need to operate. The evaluation process is composed of five objective criteria: a noise test, resolution test, OECF (opto-electronic conversion function) test, color characterization test, and pincushion and barrel distortion tests. The proposed image quality quantification method then expresses the result of each test as a single value, allowing easy evaluation; a minimal sketch of such a single-value test follows this entry. In the experiments, the performance evaluation results are analyzed and compared with those of a regular digital camera.

  • PDF
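
As one example of reducing a test to a single value, the snippet below scores the noise test as the standard deviation inside a uniform gray patch of a test-chart image, mapped onto a 0-1 scale. The patch location, scoring scale, and file name are assumptions; the paper's actual scoring is not reproduced here.

```python
import cv2
import numpy as np

def noise_score(image_path, patch=(200, 200, 64, 64), worst_sigma=20.0):
    """Noise test reduced to one number in [0, 1]:
    1.0 = no noise, 0.0 = standard deviation at or beyond worst_sigma counts."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    x, y, w, h = patch               # uniform gray patch of the chart (assumed location)
    sigma = gray[y:y+h, x:x+w].std()
    return float(np.clip(1.0 - sigma / worst_sigma, 0.0, 1.0))

print(noise_score("rear_view_chart.png"))   # illustrative file name
```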

Design of a Heart Rate Measurement System Using a Web Camera

  • Jang, Seung-Ju
    • International journal of advanced smart convergence
    • /
    • v.11 no.3
    • /
    • pp.179-186
    • /
    • 2022
  • In this paper, we design a heart rate measurement system using a web camera. To measure the heart rate, face image information is acquired and classified; the face image data are collected from the web camera, and the heart rate is measured from these data. We design a function that takes face information from the web camera in a non-contact manner, reads it, and estimates the heart rate by analyzing the face color. An experiment was performed to compare the non-contact heart rate with the actually measured heart rate. The heart rate measurement system using a web camera proposed in this paper can be applied in various fields, for example in sports that require low-cost heart rate measurement.
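
Face-color-based heart rate estimation of this kind is commonly implemented by tracking the mean green-channel intensity over the face region and taking the dominant frequency of that signal as the pulse. The sketch below follows that generic remote-photoplethysmography approach (OpenCV face detector plus an FFT peak); it is an assumed pipeline for illustration, not the paper's exact design.

```python
import cv2
import numpy as np

FPS = 30.0                      # assumed webcam frame rate
cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

signal = []
for _ in range(int(FPS * 20)):          # roughly 20 seconds of video
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        continue                        # skipped frames make the timing approximate
    x, y, w, h = faces[0]
    # Mean green-channel intensity over the face ROI: the raw pulse signal.
    signal.append(frame[y:y+h, x:x+w, 1].mean())

sig = np.array(signal) - np.mean(signal)
freqs = np.fft.rfftfreq(len(sig), d=1.0 / FPS)
spectrum = np.abs(np.fft.rfft(sig))
# Keep only plausible heart-rate frequencies (0.7-3 Hz = 42-180 bpm).
band = (freqs > 0.7) & (freqs < 3.0)
bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print("estimated heart rate: %.1f bpm" % bpm)
```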

The estimation of camera calibration parameters using the properties of vanishing point at the paved and unpaved road (무한원점의 성질을 이용한 포장 및 비포장 도로에서의 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Jeong, Myeong-Hee;Rho, Do-Whan
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.178-180
    • /
    • 2006
  • In general, camera calibration must be carried out before the position and orientation of an object can be estimated accurately with a camera. An autonomous land system that drives a vehicle autonomously needs a camera calibration method that works with one camera across various road environments. Camera calibration prescribes the correspondence between three-dimensional space and the image plane, which means finding the camera calibration parameters. Here, the calibration parameters are estimated on both paved and unpaved roads. The proposed algorithm detects road features through image processing of images obtained on each road type. On a paved road, edges are easy to detect because lane markings exist, so the image is segmented using opening, dilation, and erosion; on an unpaved road, edges are detected using blurring and sharpening. The Hough transform is then used to detect the correct straight lines because it gives smaller errors than the least-squares method; a minimal sketch of the vanishing-point step follows this entry. In addition, the principle of the vanishing point is used, so the algorithm suggests a camera calibration method based on the Hough transform and the vanishing point. When the algorithm was applied, the estimated focal length was about 10.7 mm and the RMS errors of the rotation were in the ranges of 0.10913 and 0.11476, which are comparatively stable. This shows that the algorithm can be applied to camera calibration on paved and unpaved roads.

  • PDF
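
The vanishing-point step can be sketched as follows: edges are detected, straight lane lines are found with the Hough transform, and the vanishing point is taken as the intersection of the two strongest lines. The edge and Hough parameters, the file name, and the choice of the two strongest lines are assumptions; the paper's focal-length and rotation estimation built on top of the vanishing point is not reproduced here.

```python
import cv2
import numpy as np

img = cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)   # illustrative file name
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 150)

def intersection(l1, l2):
    """Intersection of two lines given in Hough (rho, theta) form."""
    (r1, t1), (r2, t2) = l1, l2
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    b = np.array([r1, r2])
    return np.linalg.solve(A, b)          # (x, y) in pixels

# Take the two strongest lines as the left/right lane boundaries (assumption)
# and intersect them to obtain the vanishing point.
vp = intersection(lines[0][0], lines[1][0])
print("vanishing point (px):", vp)
```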

3D Image Construction Using Color and Depth Cameras (색상과 깊이 카메라를 이용한 3차원 영상 구성)

  • Jung, Ha-Hyoung;Kim, Tae-Yeon;Lyou, Joon
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.49 no.1
    • /
    • pp.1-7
    • /
    • 2012
  • This paper presents a method for 3D image construction using a hybrid (color and depth) camera system, in which the drawbacks of each camera can be compensated for. Prior to image generation, the intrinsic and extrinsic parameters of each camera are extracted through experiments. The geometry between the two cameras is established with these parameters so as to match the color and depth images. After the preprocessing step, the relation between depth information and distance is derived experimentally as a simple linear function, and the 3D image is constructed by coordinate transformations of the matched images. The present scheme has been realized using the Microsoft hybrid camera system named Kinect, and experimental results of the 3D image and the distance measurements are given to evaluate the method.
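
The coordinate transformations described above amount to back-projecting each depth pixel with the depth camera's intrinsics, moving the point into the color camera's frame with the extrinsics, and re-projecting it to sample a color. The sketch below uses placeholder intrinsic and extrinsic values rather than an actual Kinect calibration.

```python
import numpy as np

# Placeholder intrinsics (fx, fy, cx, cy) and extrinsics (R, t) from calibration.
K_depth = dict(fx=580.0, fy=580.0, cx=320.0, cy=240.0)
K_color = dict(fx=520.0, fy=520.0, cx=320.0, cy=240.0)
R = np.eye(3)                        # rotation: depth frame -> color frame
t = np.array([0.025, 0.0, 0.0])      # ~2.5 cm baseline (assumed)

def depth_to_colored_points(depth_m, color_img):
    """Back-project depth pixels, transform into the color frame,
    and sample the matching color for each 3D point."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.reshape(-1)
    u, v = u.reshape(-1), v.reshape(-1)
    good = z > 0                                  # keep valid depth readings only
    u, v, z = u[good], v[good], z[good]
    x = (u - K_depth["cx"]) * z / K_depth["fx"]
    y = (v - K_depth["cy"]) * z / K_depth["fy"]
    pts = np.stack([x, y, z], axis=-1)            # 3D points in the depth camera frame
    pts_c = pts @ R.T + t                         # into the color camera frame
    uc = np.round(pts_c[:, 0] * K_color["fx"] / pts_c[:, 2] + K_color["cx"]).astype(int)
    vc = np.round(pts_c[:, 1] * K_color["fy"] / pts_c[:, 2] + K_color["cy"]).astype(int)
    inside = (uc >= 0) & (uc < color_img.shape[1]) & (vc >= 0) & (vc < color_img.shape[0])
    return pts[inside], color_img[vc[inside], uc[inside]]
```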

Camera Calibration for Machine Vision Based Autonomous Vehicles (머신비젼 기반의 자율주행 차량을 위한 카메라 교정)

  • Lee, Mun-Gyu;An, Taek-Jin
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.9
    • /
    • pp.803-811
    • /
    • 2002
  • Machine vision systems are usually used to identify traffic lanes and then determine the steering angle of an autonomous vehicle in real time. The steering angle is calculated using a geometric model of various parameters including the orientation, position, and hardware specification of a camera in the machine vision system. To find accurate values of the parameters, camera calibration is required. This paper presents a new camera-calibration algorithm using known traffic lane features, line thickness and lane width. The camera parameters considered are divided into two groups: Group I (the camera orientation, the uncertainty image scale factor, and the focal length) and Group II (the camera position). First, six control points are extracted from an image of two traffic lines, and then eight nonlinear equations are generated based on the points. The least-squares method is used to find the estimates of the Group I parameters. Finally, values of the Group II parameters are determined using point correspondences between the image and the corresponding real-world points. Experimental results prove the feasibility of the proposed algorithm.
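
The least-squares step for the Group I parameters can be sketched with a generic nonlinear solver. The residual function below assumes a simple pinhole model with unknown focal length, pitch angle, and horizontal scale factor; the control-point coordinates, camera height, and principal point are placeholder values, and the paper's eight equations are not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

# Ground-plane coordinates (meters, x forward / y left) of six control points
# taken from the two lane lines, and their observed image pixels.
# All values are illustrative placeholders, not measured data.
ground_pts = np.array([[10.0,  1.75], [10.0, -1.75],
                       [20.0,  1.75], [20.0, -1.75],
                       [30.0,  1.75], [30.0, -1.75]])
image_pts  = np.array([[480.0, 430.0], [800.0, 430.0],
                       [540.0, 330.0], [740.0, 330.0],
                       [565.0, 290.0], [715.0, 290.0]])

CAM_HEIGHT = 1.2            # assumed camera height above the road (m)
CX, CY = 640.0, 360.0       # assumed principal point (pixels)

def residuals(params):
    """Reprojection error of a pinhole model with unknown focal length f,
    pitch angle theta, and horizontal scale factor s."""
    f, theta, s = params
    X, Y = ground_pts[:, 0], ground_pts[:, 1]
    x_cam = -Y
    y_cam = -X * np.sin(theta) + CAM_HEIGHT * np.cos(theta)
    z_cam =  X * np.cos(theta) + CAM_HEIGHT * np.sin(theta)
    u = CX + s * f * x_cam / z_cam
    v = CY + f * y_cam / z_cam
    return np.concatenate([u - image_pts[:, 0], v - image_pts[:, 1]])

fit = least_squares(residuals, x0=[800.0, 0.05, 1.0])
print("focal length (px), pitch (rad), scale:", fit.x)
```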