• Title/Summary/Keyword: three dimensional vision

Search Results: 221

Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision (사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환)

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.4 / pp.267-277 / 2019
  • Photogrammetry and computer vision are identical in determining the three-dimensional coordinates of images taken with a camera, but the two fields are not directly compatible with each other because of differences in camera lens distortion modeling methods and camera coordinate systems. In general, drone images are processed by bundle block adjustment using computer-vision-based software, and the images are then plotted with photogrammetry-based software for mapping. In this case, we face the problem of converting the camera lens distortion model into the formulation used in photogrammetry. This study therefore describes the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision, and proposes a methodology for converting between them. To verify the conversion of the camera lens distortion models, lens distortions were first added to distortion-free virtual coordinates using the computer-vision-based lens distortion models. The distortion coefficients were then determined using the photogrammetry-based lens distortion models, the lens distortions were removed from the photo coordinates, and the results were compared with the original distortion-free virtual coordinates. The root mean square distance was within 0.5 pixels. In addition, epipolar images were generated to determine the accuracy of applying the photogrammetric lens distortion coefficients; the calculated root mean square error of the y-parallax was within 0.3 pixels.
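The verification loop the abstract describes — adding computer-vision-style distortion to ideal coordinates and then removing it again — can be sketched with a simple radial (Brown/OpenCV-convention) model. This is a minimal illustration, not the paper's conversion formulas; the coefficient values and the fixed-point inversion are assumptions for the sketch:

```python
import numpy as np

def distort(xy, k1, k2):
    """Apply OpenCV-convention radial distortion to normalized coordinates."""
    r2 = np.sum(xy**2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2**2)

def undistort(xy_d, k1, k2, iters=10):
    """Invert the radial model by fixed-point iteration."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy**2, axis=1, keepdims=True)
        xy = xy_d / (1.0 + k1 * r2 + k2 * r2**2)
    return xy

rng = np.random.default_rng(0)
ideal = rng.uniform(-0.5, 0.5, size=(100, 2))   # distortion-free virtual coordinates
k1, k2 = -0.12, 0.03                            # hypothetical coefficients
recovered = undistort(distort(ideal, k1, k2), k1, k2)
rms = np.sqrt(np.mean(np.sum((recovered - ideal)**2, axis=1)))
print(rms)  # residual after the add/remove round trip
```

In the paper the removal step uses a *different* (photogrammetric) model with refit coefficients, which is what makes the sub-0.5-pixel residual a meaningful cross-model result; the sketch above only shows the round-trip mechanics.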

Point Cloud Registration Algorithm Based on RGB-D Camera for Shooting Volumetric Objects (체적형 객체 촬영을 위한 RGB-D 카메라 기반의 포인트 클라우드 정합 알고리즘)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.24 no.5 / pp.765-774 / 2019
  • In this paper, we propose a point cloud registration algorithm for multiple RGB-D cameras. In general, computer vision is concerned with precisely estimating camera positions. Existing 3D model generation methods require either a large number of cameras or expensive 3D cameras, and the conventional method of obtaining the camera extrinsic parameters from two-dimensional images has a large estimation error. In this paper, we propose a method that uses depth images and a function optimization method to obtain coordinate transformation parameters with an error within a valid range, generating an omnidirectional three-dimensional model from eight low-cost RGB-D cameras.
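A standard building block for the coordinate-transformation step described above is the closed-form SVD (Kabsch) estimate of the rigid transform between corresponding 3D point sets. This is a generic sketch of that primitive, not the authors' depth-image function-optimization method:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst (Kabsch algorithm)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: recover a known rotation and translation.
rng = np.random.default_rng(1)
src = rng.normal(size=(200, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
```

With noisy depth data, the same estimate is typically refined inside an iterative optimization loop, which is presumably where the paper's function optimization enters.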

DiLO: Direct light detection and ranging odometry based on spherical range images for autonomous driving

  • Han, Seung-Jun;Kang, Jungyu;Min, Kyoung-Wook;Choi, Jungdan
    • ETRI Journal / v.43 no.4 / pp.603-616 / 2021
  • Over the last few years, autonomous vehicles have progressed very rapidly. Odometry, which estimates displacement from consecutive sensor inputs, is an essential technique for autonomous driving. In this article, we propose a fast, robust, and accurate odometry technique. The proposed technique is light detection and ranging (LiDAR)-based direct odometry that uses a spherical range image (SRI), which projects a three-dimensional point cloud onto a two-dimensional spherical image plane. Direct odometry was developed as a vision-based method, so fast execution can be expected; however, applying it to LiDAR data is difficult because of the data's sparsity. To solve this problem, we propose an SRI generation method with a mathematical analysis, two key-point sampling methods that use the SRI to increase precision and robustness, and a fast optimization method. The proposed technique was tested on the KITTI dataset and in real environments. Evaluation yielded a translation error of 0.69% and a rotation error of 0.0031°/m on the KITTI training dataset, with an execution time of 17 ms. The results demonstrate precision comparable to the state of the art at a markedly higher speed than conventional techniques.
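The SRI projection the abstract describes — mapping (x, y, z) points onto a 2D image indexed by azimuth and elevation, with range as the pixel value — can be sketched as follows. The image size and vertical field of view here are hypothetical values for a rotating LiDAR, not the paper's parameters:

```python
import numpy as np

def to_sri(points, h=64, w=1024,
           fov_up=np.deg2rad(3.0), fov_down=np.deg2rad(-25.0)):
    """Project an (N, 3) LiDAR point cloud onto an h x w spherical range image."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                       # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                     # elevation
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).astype(int)
    v = np.clip(v, 0, h - 1)
    sri = np.zeros((h, w))
    sri[v, u] = r                                # last range wins per pixel
    return sri

# Two sample points: straight ahead at 1 m, and to the left at 2 m.
demo = to_sri(np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]]))
```

Because most pixels stay empty at this resolution, the paper's dedicated SRI generation method (and its handling of sparsity) is the substantive contribution; this sketch only shows the basic projection geometry.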

From Broken Visions to Expanded Abstractions (망가진 시선으로부터 확장된 추상까지)

  • Hattler, Max
    • Cartoon and Animation Studies / s.49 / pp.697-712 / 2017
  • In recent years, film and animation for cinematic release have embraced stereoscopic vision and the three-dimensional depth it creates for the viewer. The maturation of consumer-level virtual reality (VR) technology simultaneously spurred a wave of media productions set within 3D space, ranging from computer games to pornographic videos, to Academy Award-nominated animated VR short film Pearl. All of these works rely on stereoscopic fusion through stereopsis, that is, the perception of depth produced by the brain from left and right images with the amount of binocular parallax that corresponds to our eyes. They aim to emulate normal human vision. Within more experimental practices however, a fully rendered 3D space might not always be desirable. In my own abstract animation work, I tend to favour 2D flatness and the relative obfuscation of spatial relations it affords, as this underlines the visual abstraction I am pursuing. Not being able to immediately understand what is in front and what is behind can strengthen the desired effects. In 2015, Jeffrey Shaw challenged me to create a stereoscopic work for Animamix Biennale 2015-16, which he co-curated. This prompted me to question how stereoscopy, rather than hyper-defining space within three dimensions, might itself be used to achieve a confusion of spatial perception. And in turn, how abstract and experimental moving image practices can benefit from stereoscopy to open up new visual and narrative opportunities, if used in ways that break with, or go beyond stereoscopic fusion. Noteworthy works which exemplify a range of non-traditional, expanded approaches to binocular vision will be discussed below, followed by a brief introduction of the stereoscopic animation loop III=III which I created for Animamix Biennale. The techniques employed in these works might serve as a toolkit for artists interested in exploring a more experimental, expanded engagement with stereoscopy.

Shape Design of Heat Dissipating Flow Control Structure Within a DVR using Parametric Study (매개변수 연구 기법을 이용한 DVR 내부 방열 유동제어 구조물의 형상 설계)

  • Jung, Byeongyoon;Lee, Kyunghoon;Park, Soonok;Yoo, Jeonghoon
    • Journal of the Computational Structural Engineering Institute of Korea / v.31 no.4 / pp.165-171 / 2018
  • In this study, the shape of the flow control structure within a DVR was designed to dissipate heat from the CPU. The proposed design consists of three thin metal plates, which directly control the air flow inside the DVR box and force the air to pass over the CPU, thereby dissipating its heat efficiently. The shape of the structure was determined through parametric studies. To verify the design, we carried out a three-dimensional, time-dependent numerical analysis using the commercial fluid dynamics package FlowVision. Experiments with real DVR equipment confirmed that the CPU temperature is significantly reduced compared to the initial model.

Pose-normalized 3D Face Modeling for Face Recognition

  • Yu, Sun-Jin;Lee, Sang-Youn
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.12C / pp.984-994 / 2010
  • Pose variation is a critical problem in face recognition. Three-dimensional (3D) face recognition techniques have been proposed because 3D data contains depth information that may handle pose variation more effectively than 2D face recognition methods. This paper proposes a pose-normalized 3D face modeling method that translates and rotates any pose angle to a frontal pose using a plane fitting method based on singular value decomposition (SVD). First, we reconstruct 3D face data with a stereo vision method. Second, the nose peak point is estimated from depth information, and the pose angle is then estimated by a facial plane fitting algorithm using four facial features. Next, using the estimated pose angle, the 3D face is translated and rotated to a frontal pose. To demonstrate the effectiveness of the proposed method, we designed 2D and 3D face recognition experiments. The experimental results show that the normalized 3D face recognition method outperforms an un-normalized 3D face recognition method in overcoming the problems of pose variation.
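The SVD-based plane fitting at the heart of the pose estimation step can be illustrated in miniature: the plane normal of a point set is the right singular vector associated with the smallest singular value of the mean-centered points. This is a generic sketch on synthetic data, not the paper's four-feature facial implementation:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to (N, 3) points; return the unit normal and centroid via SVD."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]              # direction of least variance = plane normal
    return normal, centroid

# Points near the z = 0 plane: the recovered normal should be close to (0, 0, 1).
rng = np.random.default_rng(2)
pts = np.column_stack([rng.normal(size=(500, 2)),
                       1e-3 * rng.normal(size=500)])
n, c = fit_plane(pts)
```

Once the facial plane normal is known, the rotation that aligns it with the camera axis gives the pose angle used to normalize the face to a frontal view.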

Error analysis of 3-D surface parameters from space encoding range imaging (공간 부호화 레인지 센서를 이용한 3차원 표면 파라미터의 에러분석에 관한 연구)

  • 정흥상;권인소;조태훈
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1997.10a / pp.375-378 / 1997
  • This research deals with the problem of reconstructing 3D surface structures from their 2D projections, an important topic in computer vision. To provide a reconstruction algorithm that is reliable even in the presence of uncertainty in the range images, we first present a detailed model and analysis of several error sources and their effects on measuring three-dimensional surface properties with the space-encoded range imaging technique. Our approach has two key elements. The first is error modeling for the space-encoding range sensor and its propagation to the 3D surface reconstruction problem. The second is an algorithm for removing outliers from the range image. Such analyses, to our knowledge, have never been attempted before. Experimental results show that our approach is highly reliable.


Displacement Measurement of an Existing Long-Span Steel Box Girder Using a TLS (Terrestrial Laser Scanning) Displacement Measurement Model (TLS 변위계측모델을 이용한 장스팬 철골 박스형 거더의 변위 계측)

  • Lee, Hong-Mn;Park, Hyo-Seon;Lee, Im-Pyeong;Kwon, Yun-Han
    • Korean Society of Hazard Mitigation: Conference Proceedings / 2007.02a / pp.53-56 / 2007
  • A displacement-measuring technique using terrestrial laser scanning (TLS), which remotely samples the surface of an object with laser pulses and generates the three-dimensional (3D) coordinates of numerous points on that surface, was previously introduced. In this paper, to assess the technique's capabilities on existing structures, field tests measuring the vertical displacement of an existing long-span steel box girder are carried out. The performance of the technique is evaluated by comparing the displacements obtained from the TLS system with displacements measured directly by a linear variable displacement transducer (LVDT).


An Improved Stereo Matching Algorithm with Robustness to Noise Based on Adaptive Support Weight

  • Lee, Ingyu;Moon, Byungin
    • Journal of Information Processing Systems / v.13 no.2 / pp.256-267 / 2017
  • An active research area in computer vision, stereo matching aims to obtain three-dimensional (3D) information from a stereo image pair captured by a stereo camera. To extract accurate 3D information, a number of studies have examined stereo matching algorithms that employ adaptive support weights. Among them, the adaptive census transform (ACT) algorithm has yielded relatively strong matching capability. The drawbacks of the ACT, however, are that it produces low matching accuracy at object borders and is vulnerable to noise. To mitigate these drawbacks, this paper proposes and analyzes an improved stereo matching algorithm that not only enhances matching accuracy but is also robust to noise. The proposed algorithm, based on the ACT, adopts the truncated absolute difference and the multiple sparse windows method. The experimental results show that, compared to the ACT, the proposed algorithm reduces the average error rate of depth maps on Middlebury dataset images by as much as 2% and that it is strongly robust to noise.
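The census-transform matching cost underlying the ACT can be sketched generically: each pixel gets a bit vector recording which neighbors are brighter than it, and candidate matches are scored by the Hamming distance between bit vectors. This shows only that core; the paper's truncated absolute difference and multiple sparse windows are not reproduced here:

```python
import numpy as np

def census(img, w=3):
    """Census transform: bit vector of neighbors brighter than the center pixel."""
    h = w // 2
    pad = np.pad(img, h, mode='edge')
    planes = []
    for dy in range(-h, h + 1):
        for dx in range(-h, h + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[h + dy:h + dy + img.shape[0],
                          h + dx:h + dx + img.shape[1]]
            planes.append(shifted > img)
    return np.stack(planes, axis=-1)      # (H, W, w*w - 1) boolean bit planes

def hamming_cost(c_left, c_right):
    """Matching cost: Hamming distance between census bit vectors."""
    return np.count_nonzero(c_left != c_right, axis=-1)

rng = np.random.default_rng(3)
left = rng.uniform(size=(8, 8))
cost_same = hamming_cost(census(left), census(left))
print(cost_same.max())  # 0: identical descriptors match perfectly
```

Because the census descriptor depends only on intensity *ordering*, it tolerates radiometric differences between the two cameras, which is why census-based costs are a common starting point for noise-robust stereo matching.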

The study on ON-LINE and OFF-LINE systems of electronic image editing by the time-code addressing method (전자 영상 편집에 있어서 time-code addressing 방식에 의한 ON-LINE, OFF-LINE system에 대한 연구)

  • Kim, Bong-Jo
    • Journal of the Korean Graphic Arts Communication Society / v.13 no.1 / pp.93-104 / 1995
  • Liquid crystal-polymer composite (LCPC) films are promising new materials for both projection displays and vision products. LCPC films consist of a continuous liquid crystal phase embedded in a three-dimensional polymer matrix network. The liquid crystal in these LC phases can be electrically switched, giving rise to an opaque, scattering off-state and a transparent, non-scattering on-state. In this work, a premixture was composed of LC, a UV-curable monomer, and a photoinitiator, and LCPC films were formed from this premixture by photopolymerization-induced phase separation. In conclusion, the structure and electro-optical properties of LCPC films strongly depend on the choice of monomer, the LC content, and the curing rate.
