• Title/Summary/Keyword: camera image

A Study on Extraction Depth Information Using a Non-parallel Axis Image (사각영상을 이용한 물체의 고도정보 추출에 관한 연구)

  • 이우영;엄기문;박찬응;이쾌희
    • Korean Journal of Remote Sensing
    • /
    • v.9 no.2
    • /
    • pp.7-19
    • /
    • 1993
  • In stereo vision with two parallel-axis images, only a small portion of the object is contained in both views, the B/H (baseline-to-height) ratio is limited by the size of the object, and the resulting depth information is inaccurate. To overcome these difficulties, we take a non-parallel-axis image rotated by θ about the y-axis and match it against a parallel-axis image. Because the epipolar lines of the non-parallel-axis image differ from those of the parallel-axis image, the two images cannot be matched directly. In this paper, we geometrically transform the non-parallel-axis image using the camera parameters so that its epipolar lines are aligned in parallel. NCC (normalized cross correlation) is used as the matching measure, an area-based matching technique is used to find correspondences, and a 9×9 window size, chosen experimentally, is used. The focal length needed to obtain depth information for a given object is calculated by the least-squares method from the CCD camera characteristics and lens properties. Finally, we select 30 test points from an object whose elevation varies up to 150 mm, calculate their heights, and find that the height RMS error is 7.9 mm.
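The area-based NCC matching described in this abstract can be sketched as follows; this is a minimal search along one rectified epipolar line, assuming float grayscale arrays and a made-up maximum disparity, not the paper's implementation:

```python
import numpy as np

def ncc(patch1, patch2):
    """Normalized cross correlation between two equally sized patches."""
    a = patch1.astype(float) - patch1.mean()
    b = patch2.astype(float) - patch2.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_match(left, right, row, col, half=4, max_disp=20):
    """Search along the (rectified) epipolar line for the best 9x9 match
    and return the disparity with the highest NCC score."""
    template = left[row - half:row + half + 1, col - half:col + half + 1]
    scores = []
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        scores.append(ncc(template, cand))
    return int(np.argmax(scores))
```

With `half=4` the window is the 9×9 size the authors chose experimentally; a larger window smooths the disparity map at the cost of boundary detail.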

Image Processing in Digital 'Takbon' and the Decipherment of Epigraphic Letters (영상신호처리에 의한 디지털 탁본화 문자 판독)

  • 황재호
    • Proceedings of the IEEK Conference
    • /
    • 2003.11a
    • /
    • pp.27-30
    • /
    • 2003
  • In this paper a new approach to digitized 'Takbon' (stone rubbing) is introduced. Through image signal processing, letters inscribed on stone can be deciphered. The epigraphic letters are captured with a digital imaging device, a digital camera. The two-dimensional digital image is preprocessed to remove sensor noise and detection disturbances. The color image is transformed into grey level, and the letter image is analyzed in the time/frequency domain. Decision functions are calculated from the resulting analysis data. Signal processing techniques such as scaling, clipping, digital negative, high/low-pass filtering, and morphology provide algorithms that can extract the letters from the stone.
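The grey-level conversion and point operations named in this abstract (digital negative, clipping) might look like the sketch below; the luminance weights and level bounds are standard illustrative values, not the ones used in the paper:

```python
import numpy as np

def to_gray(rgb):
    """Luminance-weighted conversion of a color image to grey level
    (ITU-R BT.601 weights, assumed here for illustration)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def digital_negative(gray, max_level=255.0):
    """Invert intensities so dark engraved strokes become bright."""
    return max_level - gray

def clip_levels(gray, low, high):
    """Suppress background texture outside the letter intensity band."""
    return np.clip(gray, low, high)
```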


Speech Activity Detection using Lip Movement Image Signals (입술 움직임 영상 신호를 이용한 음성 구간 검출)

  • Kim, Eung-Kyeu
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.11 no.4
    • /
    • pp.289-297
    • /
    • 2010
  • In this paper, a method to prevent external acoustic noise from being misrecognized as speech is presented for the speech activity detection stage of speech recognition, using lip movement image signals in addition to acoustic energy. First, successive images are obtained through a PC camera and the presence or absence of lip movement is discriminated. Next, the lip movement image data are stored in shared memory and shared with the speech recognition process. In the speech activity detection step, the preprocessing phase of speech recognition, whether the acoustic energy comes from a speaker's utterance is verified by checking the data stored in the shared memory. The initial feature values and template image captured offline are replaced with ones captured online, which improves the reliability of lip movement tracking. An image processing test bed was implemented to observe the lip tracking process visually and to analyze the related parameters in real time. As an experimental result of linking the speech recognizer and the image processor, the speech recognition result was output normally when the speaker faced the camera while speaking, and no result was output when the speaker spoke without facing the camera. With the speech and image processing systems linked, the interworking rate was 99.3% under various illumination conditions.
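One simple way to discriminate the presence or absence of lip movement from successive frames is mean absolute frame differencing over the lip region; this is a toy sketch with a made-up threshold, not the template-based tracking the paper actually uses:

```python
import numpy as np

def lip_moving(prev_roi, cur_roi, thresh=8.0):
    """Mean absolute frame difference over the lip ROI as a movement cue.
    `thresh` is an illustrative value that would be tuned per camera."""
    diff = np.abs(cur_roi.astype(float) - prev_roi.astype(float)).mean()
    return diff > thresh
```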

Foreground Segmentation and High-Resolution Depth Map Generation Using a Time-of-Flight Depth Camera (깊이 카메라를 이용한 객체 분리 및 고해상도 깊이 맵 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37C no.9
    • /
    • pp.751-756
    • /
    • 2012
  • In this paper, we propose a foreground extraction and depth map generation method using a time-of-flight (TOF) depth camera. Although the TOF depth camera captures the scene's depth information in real time, its output suffers from noise and distortion. Therefore, we perform several preprocessing steps such as image enhancement, segmentation, and 3D warping, and then use the TOF depth data to detect depth-discontinuity regions. We then extract the foreground object and generate a depth map at the resolution of the color image. The experimental results show that the proposed method efficiently generates the depth map even at object boundaries and in textureless regions.
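The 3D warping step, reprojecting each TOF depth sample into the color camera's view, can be sketched as below; the intrinsic matrices, the identity extrinsics in the test, and the assumption that both images share one resolution are illustrative, not the paper's calibration:

```python
import numpy as np

def warp_depth(depth, K_src, K_dst, R, t):
    """Back-project each valid depth pixel to 3D, transform it into the
    color camera frame, and reproject; holes are left at zero and a
    z-buffer keeps the nearest surface on collisions."""
    h, w = depth.shape
    warped = np.zeros_like(depth)
    K_src_inv = np.linalg.inv(K_src)
    vs, us = np.nonzero(depth > 0)
    for u, v in zip(us, vs):
        z = depth[v, u]
        p3d = z * (K_src_inv @ np.array([u, v, 1.0]))  # back-project
        p3d = R @ p3d + t                              # TOF -> color frame
        uvw = K_dst @ p3d                              # reproject
        u2 = int(round(uvw[0] / uvw[2]))
        v2 = int(round(uvw[1] / uvw[2]))
        if 0 <= u2 < w and 0 <= v2 < h:
            if warped[v2, u2] == 0 or p3d[2] < warped[v2, u2]:
                warped[v2, u2] = p3d[2]
    return warped
```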

Visibility detection approach to road scene foggy images

  • Guo, Fan;Peng, Hui;Tang, Jin;Zou, Beiji;Tang, Chenggong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.9
    • /
    • pp.4419-4441
    • /
    • 2016
  • One cause of vehicle accidents is reduced visibility due to bad weather conditions such as fog, so an onboard vision system should take visibility detection into account. In this paper, we propose a simple and effective approach for measuring the visibility distance using a single camera placed onboard a moving vehicle. The proposed algorithm is controlled by a few parameters and mainly comprises camera parameter estimation, region of interest (ROI) estimation, and visibility computation. Thanks to the ROI extraction, the position of the inflection point can be measured in practice. Thus, combined with the estimated camera parameters, the visibility distance of the input foggy image can be computed with a single camera, requiring only the presence of road and sky in the scene. To assess the accuracy of the proposed approach, a reference-target-based visibility detection method is also introduced. The comparative study and quantitative evaluation show that the proposed method obtains good visibility detection results at a relatively fast speed.
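In the classical flat-road formulation this kind of method builds on, the inflection point is where the vertical intensity profile of the foggy road changes fastest, and its image row maps to a distance via d = λ / (v_i − v_h), with v_h the horizon row and λ a camera-dependent constant. A heavily simplified sketch, assuming a logistic-like intensity profile and illustrative parameter values:

```python
import numpy as np

def inflection_row(gray_roi):
    """Row of maximum vertical intensity change in the ROI, taken here
    as the inflection point of the row-averaged profile."""
    profile = gray_roi.mean(axis=1)          # average intensity per row
    return int(np.argmax(np.gradient(profile)))

def visibility_distance(v_i, v_h, lam):
    """Flat-road mapping from inflection row to distance: d = lam / (v_i - v_h).
    `lam` bundles camera height, focal length, and pixel size."""
    return lam / (v_i - v_h)
```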

The Accuracy of Stereo Digital Camera Photogrammetry (스테레오 디지털 카메라를 이용한 사진측량의 정확도)

  • Kim, Gi-Hong;Youn, Jun-Hee;Park, Ha-Jin
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.6
    • /
    • pp.663-668
    • /
    • 2010
  • In this study, a stereo digital camera system was developed. Using this system, information such as the coordinates and lengths of all objects shown in the photo can be collected simply by taking digital photographs in the field. The system has the advantage of obtaining stereo images with fixed exterior orientation parameters, although accuracy worsens slightly because the baseline distance of this close-range photogrammetric system is restricted to about 1 m. We took images at various exposure distances and angles to the objects for experimental error assessment, and analyzed the effect of image coordinate errors.
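Why the roughly 1 m fixed baseline limits accuracy follows from the first-order stereo error model δZ ≈ Z²·δd / (f·B): depth uncertainty grows with the square of range and inversely with baseline. A one-line sketch (all numeric values in the test are illustrative, not from the paper):

```python
def depth_error(Z, f_pixels, baseline, disparity_error):
    """First-order depth uncertainty of a stereo rig: dZ = Z^2 * dd / (f * B)."""
    return Z ** 2 * disparity_error / (f_pixels * baseline)
```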

Human Detection in the Images of a Single Camera for a Corridor Navigation Robot (복도 주행 로봇을 위한 단일 카메라 영상에서의 사람 검출)

  • Kim, Jeongdae;Do, Yongtae
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.4
    • /
    • pp.238-246
    • /
    • 2013
  • In this paper, a robot vision technique is presented to detect obstacles, particularly approaching humans, in the images acquired by a mobile robot that autonomously navigates a narrow building corridor. A single low-cost color camera is attached to the robot, and a trapezoidal area in front of the robot is set as a region of interest (ROI) in the camera image. The lower parts of a human, such as the feet and legs, are first detected in the ROI in real time from their appearance as the distance between the robot and the human decreases. The detection is then confirmed by finding the person's face within a small search region specified above the part detected in the trapezoidal ROI. To increase the credibility of detection, a final decision is made only when a face is detected in two consecutive image frames. We tested the proposed method on images of various people in corridor scenes and obtained promising results. The method can be used by a vision-guided mobile robot to make a detour and avoid collision with a human during indoor navigation.
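Two small pieces of the pipeline above lend themselves to a sketch: membership in a trapezoidal ROI that widens toward the robot, and the two-consecutive-frame confirmation rule. The geometry parameters here are hypothetical:

```python
def point_in_trapezoid(x, y, top_y, bottom_y, top_half_w, bottom_half_w, cx):
    """Membership test for a trapezoidal ROI centered at column `cx`,
    narrow at `top_y` and wide at `bottom_y` (image rows grow downward)."""
    if not (top_y <= y <= bottom_y):
        return False
    f = (y - top_y) / (bottom_y - top_y)  # 0 at top edge, 1 at bottom edge
    half_w = top_half_w + f * (bottom_half_w - top_half_w)
    return abs(x - cx) <= half_w

def confirm_human(face_found_per_frame):
    """Declare a detection only after faces in two consecutive frames."""
    return any(a and b for a, b in
               zip(face_found_per_frame, face_found_per_frame[1:]))
```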

Depth Map Using New Single Lens Stereo (단안렌즈 스테레오를 이용한 깊이 지도)

  • Changwun Ku;Junghee Jeon;Kim, Choongwon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.4 no.5
    • /
    • pp.1157-1163
    • /
    • 2000
  • In this paper, we present a novel and practical stereo vision system that uses only one camera and four mirrors placed in front of it. The equivalent of a stereo pair of images is formed as the left and right halves of a single CCD image by the four mirrors placed in front of the lens of the CCD camera. An arbitrary object point in 3D space is transformed into two virtual points by the four mirrors. As in a conventional stereo system, the displacement between the two conjugate image points of the two virtual points is directly related to the depth of the object point. This system has the following advantages over traditional two-camera stereo: identical system parameters, easy calibration, and easy acquisition of stereo data.
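The two operations at the heart of such a rig are splitting the single CCD frame into its left and right virtual views and converting the measured displacement (disparity) to depth via the classical pinhole relation Z = f·B/d, where in the four-mirror setup B is the separation of the two virtual viewpoints. A minimal sketch with illustrative values:

```python
import numpy as np

def split_halves(frame):
    """The four mirrors image the two virtual views onto the left and
    right halves of one CCD frame; split them apart."""
    h, w = frame.shape[:2]
    return frame[:, : w // 2], frame[:, w // 2 :]

def depth_from_disparity(f_pixels, baseline, disparity_pixels):
    """Pinhole stereo relation Z = f * B / d (same units as `baseline`)."""
    return f_pixels * baseline / disparity_pixels
```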


Development of a Camera-based Position Measurement System for the RTGC with Environment Conditions (실외 주행환경을 고려한 카메라 기반의 RTGC 위치계측시스템 개발)

  • Kawai, Hideki;Kim, Young-Bok;Choi, Yong-Woon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.9
    • /
    • pp.892-896
    • /
    • 2011
  • This paper describes a camera-based position measurement system for automatic tracking control of a Rubber Tired Gantry Crane (RTGC). Automatic tracking control of an RTGC depends on the ability to measure its displacement and angle from a guide line that the RTGC has to follow. The measurement system proposed in this paper is composed of a camera and a PC mounted on the side of the RTGC, between the front and rear tires. The measurement accuracy of the system is affected by disturbances such as cracks and stains on the guide line, shadows, and halation caused by light fluctuation. To overcome these disturbances, both edges of the guide line are detected as two straight lines in the input camera image, and the parameters of the straight lines are determined using the Hough transform. The displacement and angle of the RTGC from the guide line can then be obtained from these parameters, with robustness against the disturbances. In experiments under these disturbances, the measured displacement and angle from the guide line had standard deviations of 0.95 pixels and 0.22 degrees, respectively.

Development and Test of the Remote Operator Visual Support System Based on Virtual Environment (가상환경기반 원격작업자 시각지원시스템 개발 및 시험)

  • Song, T.G.;Park, B.S.;Choi, K.H.;Lee, S.H.
    • Korean Journal of Computational Design and Engineering
    • /
    • v.13 no.6
    • /
    • pp.429-439
    • /
    • 2008
  • With a remote-operated manipulator system, the situation at a remote site can be rendered to the operator through a remote visualized image. The operator can then quickly grasp the situation and control the slave manipulator by operating a master input device based on the information in the virtual image. In this study, the remote operator visual support system (ROVSS) was developed to support the viewing of a remote operator so that remote tasks can be performed effectively. A visual support model based on a virtual environment was also built and used for this purpose. The framework for the system was created with the Windows API on a PC and the library of a 3D graphic simulation tool, ENVISION. To evaluate the system, an operation test environment for a limited work site was constructed using an experimental robot. A 3D virtual environment was designed to provide accurate information about the rotation of the robot manipulator and the location and distance of the operation tool through real-time synchronization. To assess the efficiency of the visual support, we conducted experiments with four methods: direct view, camera view, virtual view, and camera view plus virtual view. The experimental results show that the camera-view-plus-virtual-view method is about 30% more efficient than the camera-view method.