• Title/Summary/Keyword: stereo sensor

Camera Identification of DIBR-based Stereoscopic Image using Sensor Pattern Noise (센서패턴잡음을 이용한 DIBR 기반 입체영상의 카메라 판별)

  • Lee, Jun-Hee
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.19 no.1
    • /
    • pp.66-75
    • /
    • 2016
  • Stereoscopic images generated by depth image-based rendering (DIBR) for surveillance robots and cameras are well suited to low-bandwidth networks. Such an image is important data for a commander's decision-making, so its integrity has to be guaranteed. One of the methods used to detect manipulation is to check whether the stereoscopic image was taken by the original camera. Sensor pattern noise (SPN), widely used for camera identification, cannot be applied directly to a stereoscopic image because of the stereo warping introduced by DIBR. To solve this problem, we locate a shifted object in the stereoscopic image and relocate it to its original position in the center image. The similarity between the SPN extracted from the stereoscopic image and that of the original camera is then measured only over the object area, so the source camera can be determined (a correlation sketch follows below).
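
A minimal sketch of the kind of masked SPN correlation the abstract describes, assuming a simple Gaussian filter as a stand-in denoiser (the paper's actual SPN extraction filter, reference SPN, and object mask are not given here and are placeholders):

```python
# Hedged sketch: masked correlation between a noise residual and a reference SPN.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image, sigma=1.0):
    """Crude SPN estimate: image minus a denoised version.
    (A Gaussian filter is used only as a stand-in for the paper's denoiser.)"""
    img = image.astype(np.float64)
    return img - gaussian_filter(img, sigma)

def masked_ncc(residual, reference_spn, mask):
    """Normalized cross-correlation restricted to the (relocated) object area."""
    a = residual[mask] - residual[mask].mean()
    b = reference_spn[mask] - reference_spn[mask].mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom > 0 else 0.0

# Usage idea: a high NCC over the object area suggests the image came from that camera.
# img = ...          # DIBR-rendered view with the object relocated to its center-image position
# cam_spn = ...      # reference SPN of the candidate camera (placeholder)
# object_mask = ...  # boolean mask of the relocated object area (placeholder)
# score = masked_ncc(noise_residual(img), cam_spn, object_mask)
```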

A 3-D Vision Sensor Implementation on Multiple DSPs TMS320C31 (다중 TMS320C31 DSP를 사용한 3-D 비젼센서 Implementation)

  • Oksenhendler, V.;Bensrhair, Abdelaziz;Miche, Pierre;Lee, Sang-Goog
    • Journal of Sensor Science and Technology
    • /
    • v.7 no.2
    • /
    • pp.124-130
    • /
    • 1998
  • High-speed 3D vision systems are essential for autonomous robot and vehicle control applications. In our study, a stereo vision process has been developed. It consists of three steps: extraction of edges in the right and left images, matching of corresponding edges, and calculation of the 3D map (a pipeline sketch follows below). This process is implemented on a VME 150/40 Imaging Technology vision system, a modular system composed of a display card, an acquisition card, a 4-Mbyte image frame memory, and three computational cards. The programmable accelerator computational modules run at 40 MHz and are based on the TMS320C31 DSP with a 64×32-bit instruction cache and two 1024×32-bit internal RAMs. Each is equipped with 512 KB of static RAM, 4 MB of image memory, 1 MB of flash EEPROM, and a serial port. Data transfers and communication between modules are provided by three 8-bit global video buses and three locally configurable 8-bit pipeline video buses. The VME bus is dedicated to system management. Tasks are distributed among the DSPs as follows: two DSPs perform edge detection, one for the right image and one for the left, and the third processor performs the matching and the 3D calculation. With 512×512-pixel images, this sensor generates dense 3D maps at a rate of about 1 Hz, depending on scene complexity. Results could certainly be improved by using specially suited multiprocessor cards.
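
A minimal sketch of the three-step pipeline named above (edge extraction, scan-line edge matching, depth from disparity), assuming an illustrative focal length and baseline rather than the paper's hardware parameters:

```python
# Hedged sketch of an edge-based stereo pipeline: edges, naive scan-line matching, depth.
import numpy as np
from scipy import ndimage

def edge_map(img, thresh=50.0):
    """Binary edge map from a Sobel gradient magnitude."""
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    return np.hypot(gx, gy) > thresh

def match_row(left_edges, right_edges, row, max_disp=64):
    """Naive matcher: for each left edge pixel, take the nearest right edge pixel
    within max_disp on the same scan line (real systems use richer descriptors)."""
    disparities = {}
    right_cols = np.flatnonzero(right_edges[row])
    for col in np.flatnonzero(left_edges[row]):
        candidates = right_cols[(right_cols <= col) & (right_cols > col - max_disp)]
        if candidates.size:
            disparities[col] = col - candidates[np.argmax(candidates)]
    return disparities

def depth_from_disparity(disp, focal_px=700.0, baseline_m=0.12):
    """Triangulated depth; focal length and baseline are illustrative placeholders."""
    return focal_px * baseline_m / disp if disp > 0 else np.inf
```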

Epipolar Resampling for High Resolution Satellite Imagery Based on Parallel Projection (평행투영 기반의 고해상도 위성영상 에피폴라 재배열)

  • Noh, Myoung-Jong;Cho, Woo-Sug;Chang, Hwi-Jeong;Jeong, Ji-Yeon
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.15 no.4
    • /
    • pp.81-88
    • /
    • 2007
  • The geometry of a satellite image captured by a linear CCD sensor differs from that of a frame-camera image: the exterior orientation parameters of a linear CCD image vary from scan line to scan line, which causes the difference in image geometry between the two sensor types. We therefore need an epipolar geometry for linear CCD imagery that differs from that of frame-camera imagery. In this paper, we propose a method for resampling linear CCD satellite images into epipolar geometry under the assumption that the image is formed not by perspective projection but by parallel projection, using a 2D affine sensor model based on parallel projection (a fitting sketch follows below). For the experiments, IKONOS stereo images, which are high-resolution linear CCD images, were used. As a result, the spatial accuracy of the 2D affine sensor model is investigated and the accuracy of the image epipolar-resampled with the RFM is presented.
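
A minimal sketch of fitting the 2D affine sensor model mentioned above (image line and sample as affine functions of ground X, Y, Z) by least squares; the ground control point arrays are placeholders:

```python
# Hedged sketch: least-squares fit of a 2D affine sensor model from ground control points.
import numpy as np

def fit_affine_model(ground_xyz, image_ls):
    """ground_xyz: (n, 3) ground coordinates; image_ls: (n, 2) image line/sample.
    Returns two 4-vectors: line = a . [X, Y, Z, 1], sample = b . [X, Y, Z, 1]."""
    design = np.hstack([np.asarray(ground_xyz, float), np.ones((len(ground_xyz), 1))])
    coeff_line, *_ = np.linalg.lstsq(design, np.asarray(image_ls, float)[:, 0], rcond=None)
    coeff_samp, *_ = np.linalg.lstsq(design, np.asarray(image_ls, float)[:, 1], rcond=None)
    return coeff_line, coeff_samp

def project(coeff_line, coeff_samp, xyz):
    """Project a ground point with the fitted affine model."""
    v = np.append(np.asarray(xyz, float), 1.0)
    return float(coeff_line @ v), float(coeff_samp @ v)
```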

Accuracy Assessment of 3D Geo-positioning for SPOT-5 HRG Stereo Images Using Orbit-Attitude Model (궤도기반 모델을 이용한 SPOT-5 HRG 입체영상의 3차원 위치결정 정확도 평가)

  • Wie, Gwang-Jae;Kim, Deok-In;Lee, Ha-Joon;Jang, Yong-Ho
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.5
    • /
    • pp.529-534
    • /
    • 2009
  • In this study, we investigate the feasibility of modeling entire image strips that have been acquired from the same orbital segment. We tested sensor models based on satellite orbit and attitude with different sets of unknowns (Type 1 through Type 4). We checked the accuracy of the orbit modeling by establishing sensor models for one scene using control points extracted from that scene and by applying the models to adjacent scenes within the same orbital segment. The results indicated that modeling individual scenes with first- or second-order unknowns is recommended. We also tested the accuracy against ground control points and a digital map using the HIST-DPW (Hanjin Information Systems & Telecommunication Digital Photogrammetric Workstation). As a result, we showed that the orbit-based sensor model is suitable for producing 1:25,000 digital maps.

A Feasibility Study for Mapping Using The KOMPSAT-2 Stereo Imagery (아리랑위성 2호 입체영상을 이용한 지도제작 가능성 연구)

  • Lee, Kwang-Jae;Kim, Youn-Soo;Seo, Hyun-Duck
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.15 no.1
    • /
    • pp.197-210
    • /
    • 2012
  • The KOrea Multi-Purpose SATellite (KOMPSAT)-2 can provide cross-track stereo imagery using two different orbits for generating various kinds of spatial information. However, various tests are necessary to fully realize the potential of KOMPSAT-2 stereo imagery for mapping. The purpose of this study is to evaluate the possibility of mapping with KOMPSAT-2 stereo imagery. To this end, digital plotting was conducted from the stereoscopic images, and a Digital Elevation Model (DEM) and an ortho-image were generated from the digital plotting results. The accuracy of the digital plotting, the DEM, and the ortho-image was evaluated by comparison with existing data. We found that the horizontal and vertical errors of the modeling results based on the Rational Polynomial Coefficients (RPCs) were less than 1.5 m compared with Global Positioning System (GPS) survey results (an RPC evaluation sketch follows below). The maximum vertical difference between the plotted results of this study and the existing 1:5,000 digital map was more than 5 m, depending on the topographic characteristics. Although there was some irregular parallax in the images, at least seventy percent of the layers required for the 1:5,000 digital map could be interpreted and plotted. The accuracy of the DEM generated from the digital plotting was also compared with an existing LiDAR DEM. We found that the ortho-images generated using the DEM extracted in this study sufficiently satisfied the geometric accuracy requirement for a 1:5,000 ortho-image map.
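
A minimal sketch of evaluating a rational polynomial camera (RPC) model of the kind referenced above; the 20-term cubic ordering below follows one common convention, and the coefficient arrays, offsets, and scales are placeholders:

```python
# Hedged sketch: RPC projection of a ground point to image line/sample.
import numpy as np

def cubic_terms(P, L, H):
    """20-term cubic polynomial basis in normalized lat (P), lon (L), height (H)."""
    return np.array([
        1, L, P, H, L * P, L * H, P * H, L * L, P * P, H * H,
        P * L * H, L**3, L * P * P, L * H * H, L * L * P,
        P**3, P * H * H, L * L * H, P * P * H, H**3,
    ])

def rpc_project(lat, lon, h, num_line, den_line, num_samp, den_samp, offsets, scales):
    """Coordinates are normalized with the RPC offsets/scales, the two rational
    polynomials are evaluated, and the result is de-normalized to pixels."""
    P = (lat - offsets["lat"]) / scales["lat"]
    L = (lon - offsets["lon"]) / scales["lon"]
    H = (h - offsets["h"]) / scales["h"]
    t = cubic_terms(P, L, H)
    line = (num_line @ t) / (den_line @ t)
    samp = (num_samp @ t) / (den_samp @ t)
    return (line * scales["line"] + offsets["line"],
            samp * scales["samp"] + offsets["samp"])
```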

A Study on Depth Information Acquisition Improved by Gradual Pixel Bundling Method at TOF Image Sensor

  • Kwon, Soon Chul;Chae, Ho Byung;Lee, Sung Jin;Son, Kwang Chul;Lee, Seung Hyun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.7 no.1
    • /
    • pp.15-19
    • /
    • 2015
  • The depth information of an image is used in a variety of applications, including 2D/3D conversion, multi-view extraction, modeling, depth keying, and so on. There are various methods for acquiring depth information, such as using a stereo camera, a time-of-flight (TOF) depth camera, 3D modeling software, a 3D scanner, or structured light as in Microsoft's Kinect. In particular, a TOF depth camera measures distance using infrared light, and the TOF sensor depends on the optical sensitivity of the image sensor (CCD/CMOS). Existing image sensors must therefore bundle (bin) several pixels to obtain an infrared image, which reduces image resolution. This paper proposes a method for acquiring a high-resolution image by gradually shifting the bundled area while low-resolution images are acquired through pixel bundling (a binning-and-interleaving sketch follows below). With this gradual pixel bundling algorithm, image information with improved illumination sensitivity (lux) and resolution can be obtained without increasing the performance of the image sensor, whereas plain pixel bundling resolves low illumination only at the cost of resolution.
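
A minimal sketch of the shifted-binning idea the abstract describes: low-resolution frames binned over k × k windows at different offsets are interleaved onto a finer grid. This is an illustrative reconstruction, not the paper's algorithm:

```python
# Hedged sketch: bin frames at shifted offsets, then interleave onto a finer grid.
import numpy as np

def bin_frame(frame, k, offset):
    """Sum k x k blocks starting at the given (row, col) offset."""
    r0, c0 = offset
    h = (frame.shape[0] - r0) // k * k
    w = (frame.shape[1] - c0) // k * k
    block = frame[r0:r0 + h, c0:c0 + w]
    return block.reshape(h // k, k, w // k, k).sum(axis=(1, 3))

def interleave(frames_with_offsets, k, out_shape):
    """Place each binned frame's samples at its offset on a k-times finer grid
    and average where samples overlap."""
    out = np.zeros(out_shape)
    count = np.zeros(out_shape)
    for binned, (r0, c0) in frames_with_offsets:
        view = out[r0::k, c0::k][:binned.shape[0], :binned.shape[1]]
        view += binned
        count[r0::k, c0::k][:binned.shape[0], :binned.shape[1]] += 1
    return out / np.maximum(count, 1)
```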

Localization Algorithm for Lunar Rover using IMU Sensor and Vision System (IMU 센서와 비전 시스템을 활용한 달 탐사 로버의 위치추정 알고리즘)

  • Kang, Hosun;An, Jongwoo;Lim, Hyunsoo;Hwang, Seulwoo;Cheon, Yuyeong;Kim, Eunhan;Lee, Jangmyung
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.1
    • /
    • pp.65-73
    • /
    • 2019
  • In this paper, we propose an algorithm that estimates the location of a lunar rover using an IMU and a vision system, instead of dead reckoning with an IMU and wheel encoders, which has difficulty estimating the exact distance because of accumulated error and slip. First, because the magnetic field in the lunar environment is not uniform, unlike on Earth, only accelerometer and gyroscope data were used for localization. These data were applied to an extended Kalman filter to estimate the roll, pitch, and yaw Euler angles of the exploration rover. In addition, the lunar module has a distinctive color that does not otherwise appear in the lunar environment, so it was reliably recognized by applying an HSV color filter to the stereo images taken by the rover. The distance between the exploration rover and the lunar module was then estimated through SIFT feature-point matching and geometry (a masking-and-triangulation sketch follows below). Finally, the estimated Euler angles and distances were used to estimate the current position of the rover relative to the lunar module. The performance of the proposed algorithm was compared with the conventional algorithm to show its superiority.
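
A minimal sketch of the HSV masking and stereo distance steps named above, assuming illustrative HSV bounds, focal length, and baseline rather than the paper's values:

```python
# Hedged sketch: HSV color mask for the module, then distance from stereo disparity.
import cv2
import numpy as np

def module_mask(bgr, lower=(100, 120, 80), upper=(130, 255, 255)):
    """Binary mask of pixels falling inside an (assumed) HSV range for the module."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array(lower), np.array(upper))

def distance_from_disparity(x_left, x_right, focal_px=800.0, baseline_m=0.3):
    """Stereo triangulation Z = f * B / d for a matched feature's x-coordinates."""
    d = float(x_left - x_right)
    return focal_px * baseline_m / d if d > 0 else float("inf")

# Matched (x_left, x_right) pairs inside the masked regions could come from SIFT
# features (e.g. cv2.SIFT_create() with a brute-force matcher), as in the abstract.
```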

A Study on Robot OLP Compensation Based on Image Based Visual Servoing in the Virtual Environment (가상 환경에서의 영상 기반 시각 서보잉을 통한 로봇 OLP 보상)

  • Shin Chan-Bai;Lee Jeh-Woon;Kim Jin-Dae
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.3
    • /
    • pp.248-254
    • /
    • 2006
  • It is necessary to improve the accuracy of intelligent robot systems and their adaptation to the working environment. Vision sensors have long been studied for this purpose; however, they involve many processing steps and practical difficulties in real use. This paper proposes visual servoing in a virtual environment to support OLP (off-line programming) path compensation and to overcome the complexity of conventional kinematic calibration. The initial robot path can be compensated using the pixel differences between the real and virtual images (a correction-step sketch follows below). This method removes the various calibrations and the 3D reconstruction process in the real workspace. To show the validity of the proposed approach, virtual-space servoing with a stereo camera is carried out with WTK and the OpenGL library for a KUKA-6R manipulator, and the real robot path is updated.
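
A minimal sketch of an image-based visual servoing correction step of the general kind described above: the pixel error between real and virtual features drives a small pose correction through the pseudo-inverse of an image Jacobian. The Jacobian and gain are placeholders, not the paper's implementation:

```python
# Hedged sketch: one IBVS correction step from real-vs-virtual pixel differences.
import numpy as np

def ibvs_correction(features_real, features_virtual, image_jacobian, gain=0.5):
    """features_*: (n, 2) pixel coordinates of matched features.
    image_jacobian: (2n, 6) interaction matrix mapping camera twist to feature
    velocities. Returns a 6-vector (vx, vy, vz, wx, wy, wz) path correction."""
    error = (np.asarray(features_real, float) - np.asarray(features_virtual, float)).reshape(-1)
    return -gain * np.linalg.pinv(image_jacobian) @ error
```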

Illumination Invariant Ranging Sensor Based on Structured Light Image (조명잡음에 강인한 구조광 영상기반 거리측정 센서)

  • Shin, Jin;Yi, Soo-Yeong
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.24 no.12
    • /
    • pp.122-130
    • /
    • 2010
  • This paper presents an active ranging system based on laser structured-light imaging. Structured-light image processing is computationally efficient compared with conventional stereo image processing, since the burdensome correspondence problem is avoided. In order to achieve robustness against environmental illumination noise, an efficient image processing algorithm is proposed: integration of difference images with structured-light modulation (a sketch follows below). The distance equation derived from the measured structured-light pixel displacement and the calibration of the system parameters are also addressed. Experiments and analysis are carried out to verify the performance of the proposed ranging system.
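
A minimal sketch of the modulated difference-image integration the abstract mentions, with laser-on and laser-off frames differenced so ambient illumination cancels; the triangulation constants are placeholders:

```python
# Hedged sketch: accumulate modulated difference images, then triangulate range.
import numpy as np

def integrate_difference_images(frames_on, frames_off):
    """Accumulate (laser-on minus laser-off) frames; steady ambient light cancels."""
    acc = np.zeros(frames_on[0].shape, dtype=np.float64)
    for on, off in zip(frames_on, frames_off):
        acc += on.astype(np.float64) - off.astype(np.float64)
    return acc

def distance_from_pixel_offset(pixel_offset, focal_px=600.0, baseline_m=0.1):
    """Simple triangulation: range inversely proportional to the measured pixel
    displacement of the laser stripe (placeholder geometry, not the paper's equation)."""
    return focal_px * baseline_m / pixel_offset if pixel_offset > 0 else np.inf
```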

Analysis of Ground Height from Automatic Correlation Matching Result Considering Density Measure of Tree (수목차폐율을 고려한 자동상관매칭 수치고도 결과 분석)

  • Eo, Yang-Dam
    • Spatial Information Research
    • /
    • v.15 no.2
    • /
    • pp.181-187
    • /
    • 2007
  • To produce digital terrain data, automatic correlation matching of stereo airborne/satellite images has been researched. The result of automatic correlation matching is limited in extracting the exact ground height because of the sensor angle and the height of trees. Therefore, the amount of editing work depends on the distribution of spatial features in the images as well as on image quality. This paper shows that the automatic correlation matching result is affected by the density and height of trees (a matching sketch follows below).
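
A minimal sketch of window-based automatic correlation matching as referenced above, using normalized cross-correlation between a template window and candidate windows along the search line; the window and search sizes are illustrative:

```python
# Hedged sketch: normalized cross-correlation matching along a stereo scan line.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum()) / denom if denom > 0 else -1.0

def best_match(left, right, row, col, win=7, search=32):
    """Return the column in `right` whose window best correlates with the template
    centered at (row, col) in `left`. Assumes (row, col) is at least win//2 from
    the image borders."""
    h = win // 2
    template = left[row - h:row + h + 1, col - h:col + h + 1]
    scores = []
    for c in range(max(h, col - search), min(right.shape[1] - h, col + search + 1)):
        patch = right[row - h:row + h + 1, c - h:c + h + 1]
        scores.append((ncc(template, patch), c))
    return max(scores)[1]
```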
