• Title/Abstract/Keyword: 3D Cameras

505 search results

스테레오 PIV (Stereoscopic PIV)

  • 도덕희;이원제;조경래;편용범;김동혁
    • KSME Conference Proceedings / Proceedings of the KSME 2001 Fall Conference B / pp.394-399 / 2001
  • A new stereoscopic PIV is introduced. The system combines CCD cameras, stereoscopic photogrammetry, and a 3D-PTV principle. Virtual images are produced to construct a benchmark testing tool for PIV techniques. The two cameras are arranged in an angular configuration. The calibration of the cameras and the pair-matching of the three-dimensional velocity vectors are based on the 3D-PTV technique.
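The geometric core of such a system, recovering a particle's 3D position from its image coordinates in two calibrated cameras, can be sketched with linear (DLT) triangulation. The projection matrices and point below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two camera views.

    P1, P2 are 3x4 projection matrices; x1, x2 are the image coordinates of
    the same particle in each camera. Stacking the reprojection constraints
    gives a homogeneous system A @ X = 0, solved by the smallest singular
    vector of A.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

Applying this to every matched particle pair, frame to frame, yields the three-dimensional velocity vectors the abstract refers to.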


Point Cloud Registration Algorithm Based on RGB-D Camera for Shooting Volumetric Objects

  • 김경진;박병서;김동욱;서영호
    • Journal of Broadcast Engineering / Vol. 24, No. 5 / pp.765-774 / 2019
  • In this paper, we propose a point cloud registration algorithm for multiple RGB-D cameras. In computer vision, much attention is paid to the problem of precisely estimating camera positions. Existing 3D model generation methods require a large number of cameras or expensive 3D cameras, and existing methods that obtain camera extrinsic parameters from 2D images carry large errors. To generate an omnidirectional 3D model from eight low-cost RGB-D cameras, we propose a method that uses depth images and function optimization to obtain coordinate-transformation parameters whose error stays within a valid range.
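When point correspondences between two RGB-D views are known, the rigid coordinate-transformation parameters (rotation and translation) the abstract refers to can be estimated in closed form. Below is a minimal sketch using the SVD-based Kabsch/Procrustes solution, one common choice and not necessarily the paper's optimization method:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t such that R @ p + t maps each
    source point p onto its corresponding destination point.

    src and dst are (N, 3) arrays of corresponding 3D points; the closed-form
    Kabsch solution minimizes the sum of squared residuals.
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In a real multi-camera rig the correspondences would come from depth-image features; here they are assumed given.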

Development of Stereoscopic-PIV and Its Application to the Measurement of the Near Wake of a Circular Cylinder

  • 도덕희;김동혁;조경래;이원재;편용범
    • KSME Conference Proceedings / Proceedings of the KSME 2001 Spring Conference E / pp.555-559 / 2001
  • A new stereoscopic PIV is developed using two CCD cameras, stereoscopic photogrammetry, and a 3D-PTV principle. The wake of a circular cylinder is measured with the developed technique, which clearly captures the mode-B vortical structure of the wake at Reynolds numbers above 300. The two cameras are arranged in an angular configuration. The calibration of the cameras and the pair-matching of the three-dimensional velocity vectors are based on the 3D-PTV technique.


Development of 3D Stereoscopic Image Generation System Using Real-time Preview Function in 3D Modeling Tools

  • Yun, Chang-Ok;Yun, Tae-Soo;Lee, Dong-Hoon
    • Journal of Korea Multimedia Society / Vol. 11, No. 6 / pp.746-754 / 2008
  • A 3D stereoscopic image is conventionally generated by interleaving, in a video-editing tool, scenes rendered from two camera views in a 3D modeling tool such as Autodesk MAX(R) or Autodesk MAYA(R). However, the depth of objects in a static scene and a continuous stereo effect under view transformations are not reproduced naturally, because the user must first choose an arbitrary convergence angle and distance between the model and the two cameras, and then render the view from both cameras. Adjusting the camera interval and re-rendering repeatedly takes too much time. In this paper, we propose a 3D stereoscopic image editing system that solves these problems and expose the system's inherent limitations. The system generates the two camera views and confirms the stereo effect in real time inside the 3D modeling tool, so the user can intuitively judge the immersion of the 3D stereoscopic image in real time through the stereoscopic preview function.
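The camera-placement step described above, offsetting two cameras by an interaxial distance around a shared convergence target, reduces to a little vector geometry. A minimal sketch, with the function name and the typical 65 mm interaxial distance assumed for illustration:

```python
import numpy as np

def stereo_rig(center, target, interaxial, up=(0.0, 1.0, 0.0)):
    """Place left/right eye cameras for a stereo rig.

    Both eyes sit on the rig's 'right' axis, offset by half the interaxial
    distance on each side of the rig center, and both aim at the same
    convergence target.
    """
    center = np.asarray(center, dtype=float)
    target = np.asarray(target, dtype=float)
    forward = target - center
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, np.asarray(up, dtype=float))
    right /= np.linalg.norm(right)
    left_eye = center - right * (interaxial / 2.0)
    right_eye = center + right * (interaxial / 2.0)
    return left_eye, right_eye
```

A real-time preview would re-run this placement and re-render both views every time the user drags the interaxial slider, which is exactly the feedback loop the paper's system provides.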


A 3D Modeling System Using Multiple Stereo Cameras

  • 김한성;손광훈
    • Journal of the Institute of Electronics Engineers of Korea SP / Vol. 44, No. 1 / pp.1-9 / 2007
  • In this paper, we propose a 3D modeling and rendering system for synthesizing scenes from arbitrary viewpoints. The system consists of multiple stereo cameras installed in the space and PCs connected over UDP; the image data captured and analyzed at each camera are transmitted to a modeling PC, which generates a 3D model in real time and renders and displays the scene from the viewpoint the user requests. In evaluation, the proposed algorithm outperformed existing algorithms, and experiments verified that the implemented system smoothly provides the user with images from the desired viewpoint in real time.
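The UDP link between the capture PCs and the modeling PC can be sketched with a minimal framed datagram. The packet layout (camera id, payload length, payload) and function names are assumptions for illustration, not the paper's actual protocol:

```python
import socket
import struct

# Header: 4-byte camera id + 2-byte payload length, network byte order.
HEADER = "!IH"
HEADER_SIZE = struct.calcsize(HEADER)

def send_frame(sock, addr, cam_id, payload: bytes):
    """Capture-PC side: pack one analyzed frame block into a datagram."""
    sock.sendto(struct.pack(HEADER, cam_id, len(payload)) + payload, addr)

def recv_frame(sock):
    """Modeling-PC side: unpack one datagram back into (camera id, payload)."""
    data, _ = sock.recvfrom(65535)
    cam_id, length = struct.unpack(HEADER, data[:HEADER_SIZE])
    return cam_id, data[HEADER_SIZE:HEADER_SIZE + length]
```

A production system would add sequence numbers and fragmentation, since UDP neither orders nor guarantees delivery; the sketch only shows the framing idea.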

Analyzing the Influence of Spatial Sampling Rate on Three-dimensional Temperature-field Reconstruction

  • Shenxiang Feng;Xiaojian Hao;Tong Wei;Xiaodong Huang;Pan Pei;Chenyang Xu
    • Current Optics and Photonics / Vol. 8, No. 3 / pp.246-258 / 2024
  • In aerospace and energy engineering, the reconstruction of three-dimensional (3D) temperature distributions is crucial. Traditional methods such as algebraic iterative reconstruction and filtered back-projection depend on voxel division for resolution. Our algorithm, blending deep learning with computer-graphics rendering, converts 2D projections into light rays for uniform sampling and uses a fully connected neural network to represent the 3D temperature field. Although effective at capturing internal details, it demands multiple cameras for projections from varied angles, increasing cost and computational load. We assess the impact of camera count on reconstruction accuracy and efficiency, conducting butane-flame simulations with setups of 6 to 18 cameras. The results show improved accuracy with more cameras, with 12 cameras achieving the best computational efficiency (1.263) and low error rates. Verification experiments with 9, 12, and 15 cameras, validated against thermocouples, confirm the 12-camera setup as the best balance of efficiency and accuracy. This offers a feasible, cost-effective solution for real-world applications such as engine testing and environmental monitoring, improving accuracy and resource management in temperature measurement.

An Experimental Study on the Optimal Number of Cameras Used for a Vision Control System

  • 장완식;김경석;김기영;안힘찬
    • Journal of the Korean Society of Machine Tool Engineers / Vol. 13, No. 2 / pp.94-103 / 2004
  • The vision system model used in this study involves six parameters and permits adaptability: the relationship between the camera-space locations of manipulable visual cues and the vector of robot joint coordinates is estimated in real time. The vision control method requires a number of cameras to map 3D physical space onto 2D camera planes, and it can be used irrespective of camera placement as long as the visual cues appear in each camera plane. This study therefore investigates the optimal number of cameras for the developed vision control system as the camera count is varied, in two respects: (a) the effectiveness of the vision system model, and (b) the optimal number of cameras. The results demonstrate the adaptability of the developed vision control method when the optimal number of cameras is used.

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korea Institute of Information and Communication Engineering 2021 Fall Conference / pp.422-424 / 2021
  • In this paper, we present an approach that fuses multiple RGB cameras, used for visual object recognition based on deep learning with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and estimate object distance and position in a 3D point-cloud map. The goal of multi-camera perception is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in blind spots, so the vehicle can navigate toward its goal. Running object detection on numerous cameras can slow real-time processing, so the convolutional neural network chosen must also suit the capacity of the hardware. Detected and classified objects are localized against the 3D point-cloud environment: the LiDAR point cloud is first parsed, and an algorithm based on 3D Euclidean clustering localizes the objects accurately. We evaluated the method on our own dataset captured with a VLP-16 and multiple cameras, and the results demonstrate the completeness of the method and the multi-sensor fusion strategy.
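The 3D Euclidean clustering step used to localize objects in the LiDAR point cloud can be sketched as a flood fill over a KD-tree neighborhood graph. A minimal version, with the radius and minimum cluster size as assumed tuning parameters:

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, radius=0.5, min_size=3):
    """Group an (N, 3) point cloud into clusters of points that are linked
    by neighbor distances <= radius; clusters smaller than min_size are
    discarded as noise."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier = [seed]
        cluster = [seed]
        while frontier:                       # flood fill from the seed
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], radius):
                if nb in unvisited:
                    unvisited.remove(nb)
                    frontier.append(nb)
                    cluster.append(nb)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters
```

Each returned cluster is one object candidate; its centroid gives the object's position on the map.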


Procedural Geometry Calibration and Color Correction Toolkit for Multiple Cameras

  • Kang, Hoonjong;Jo, Dongsik
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 25, No. 4 / pp.615-618 / 2021
  • Recently, 3D reconstruction of real objects with multiple cameras has been widely used for services such as VR/AR, motion capture, and plenoptic video generation. Accurate 3D reconstruction requires geometry and color matching between the cameras. However, previous calibration and correction methods for geometry (internal and external parameters) and color (intensity) are difficult for non-experts to perform manually. In this paper, we propose a toolkit for procedural geometry calibration and color correction among cameras of different positions and types. The toolkit provides a simple user interface and proved effective in setting up multiple cameras for reconstruction.
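The color-correction step between cameras can be approximated by a per-channel linear transfer that matches each channel's mean and standard deviation to a reference camera. This is a simple stand-in for illustration, not necessarily the toolkit's actual method:

```python
import numpy as np

def match_color(src_img, ref_img):
    """Per-channel linear color transfer.

    Scales and shifts each channel of src_img so its mean and standard
    deviation match the same channel of ref_img (an image of the same scene
    from the reference camera). Returns a float array clipped to [0, 255].
    """
    out = np.empty(src_img.shape, dtype=np.float64)
    for c in range(src_img.shape[2]):
        s = src_img[..., c].astype(np.float64)
        r = ref_img[..., c].astype(np.float64)
        gain = r.std() / max(s.std(), 1e-12)   # avoid division by zero
        out[..., c] = (s - s.mean()) * gain + r.mean()
    return np.clip(out, 0.0, 255.0)
```

Running this once per camera against a chosen reference view gives a rough intensity match before the reconstructed textures are blended.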

Real-time 3D Volumetric Model Generation Using a Multiview RGB-D Camera

  • 김경진;박병서;김동욱;권순철;서영호
    • Journal of Broadcast Engineering / Vol. 25, No. 3 / pp.439-448 / 2020
  • In this paper, we propose a modified optimization algorithm for registering the point clouds of multiview RGB-D cameras. Precisely estimating camera positions is very important in computer vision. The 3D model generation methods proposed in previous work require a large number of cameras or expensive 3D cameras, and methods that obtain camera extrinsic parameters from 2D images carry large errors. We propose a registration technique that generates a 3D point cloud and mesh model providing omnidirectional free viewpoints from eight low-cost RGB-D cameras. Using RGB images together with a depth-map-based function optimization, the method obtains coordinate-transformation parameters that yield a high-quality 3D model without computing initial parameters.