• Title/Abstract/Keyword: 3D vision

Search results: 932 items

랜드마크 기반 비전항법의 오차특성을 고려한 INS/비전 통합 항법시스템 (INS/Vision Integrated Navigation System Considering Error Characteristics of Landmark-Based Vision Navigation)

  • 김영선;황동환
    • 제어로봇시스템학회논문지
    • /
    • 제19권2호
    • /
    • pp.95-101
    • /
    • 2013
  • The paper investigates the geometric effect of landmarks on the navigation error in landmark-based 3D vision navigation and introduces an INS/Vision integrated navigation system that accounts for this effect. The integrated system uses vision navigation results that take into account the dilution of precision of the landmark geometry, and in turn helps the vision navigation consider it. An indirect filter with a feedback structure is designed, in which the position and attitude errors are the measurements of the filter. Performance of the integrated system is evaluated through computer simulations. Simulation results show that the proposed algorithm works well and that better performance can be expected when the error characteristics of vision navigation are considered.
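The dilution-of-precision idea referenced above can be sketched numerically: stacking the unit line-of-sight vectors from the camera position to each landmark into a geometry matrix H gives DOP = sqrt(trace((HᵀH)⁻¹)), so well-spread landmarks score lower than nearly collinear ones. The function below is a minimal illustration with made-up landmark coordinates, not the paper's implementation.

```python
import numpy as np

def landmark_dop(landmarks, position):
    """Dilution of precision for a set of 3D landmarks seen from `position`.

    Rows of the geometry matrix H are unit line-of-sight vectors from the
    camera position to each landmark; DOP = sqrt(trace((H^T H)^-1)).
    """
    H = np.asarray(landmarks, dtype=float) - np.asarray(position, dtype=float)
    H /= np.linalg.norm(H, axis=1, keepdims=True)   # unit line-of-sight rows
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

# Well-spread landmarks yield a lower DOP than clustered, nearly collinear ones.
spread = [(10, 0, 0), (0, 10, 0), (0, 0, 10), (10, 10, 10)]
clustered = [(10, 0, 0), (10, 1, 0), (10, 0, 1), (10, 1, 1)]
assert landmark_dop(spread, (0, 0, 0)) < landmark_dop(clustered, (0, 0, 0))
```

A filter that weights vision fixes by such a DOP value discounts measurements taken under poor landmark geometry.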

영상 내 건설인력 위치 추적을 위한 등극선 기하학 기반의 개체 매칭 기법 (Entity Matching for Vision-Based Tracking of Construction Workers Using Epipolar Geometry)

  • 이용주;김도완;박만우
    • 한국BIM학회 논문집
    • /
    • 제5권2호
    • /
    • pp.46-54
    • /
    • 2015
  • Vision-based tracking has been proposed as a means to efficiently track the large number of construction resources operating on a congested site. In order to obtain the 3D coordinates of an object, it is necessary to employ stereo-vision theory. Detecting and tracking multiple objects requires an entity matching process that finds corresponding pairs of detected entities across the two camera views. This paper proposes an efficient entity matching method for tracking construction workers. The proposed method uses epipolar geometry, which represents the relationship between the two fixed cameras: each pixel coordinate in one camera view is projected onto the other camera view as an epipolar line. The method finds the matching pair for a worker entity by comparing the proximity of all detected entities in the other view to the epipolar line. Experimental results demonstrate its suitability for automated entity matching in 3D vision-based tracking of construction workers.
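The epipolar test described above can be sketched with a fundamental matrix F: a pixel x in the left view maps to the line l = F·x in the right view, and candidate detections are ranked by their perpendicular distance to that line. The sketch below assumes F is already known from calibration (the demo uses a rectified pair) and uses a hypothetical pixel threshold `max_dist`; it is not the authors' code.

```python
import numpy as np

def epipolar_distance(F, x_left, x_right):
    """Perpendicular pixel distance from x_right to the epipolar line F @ x_left.

    F is the 3x3 fundamental matrix between the two fixed views; points are
    (x, y) pixel coordinates, homogenized internally.
    """
    l = F @ np.append(x_left, 1.0)                  # line a*x + b*y + c = 0
    return abs(l @ np.append(x_right, 1.0)) / np.hypot(l[0], l[1])

def match_entities(F, left_point, right_candidates, max_dist=5.0):
    """Index of the right-view detection closest to the epipolar line, or None."""
    dists = [epipolar_distance(F, left_point, c) for c in right_candidates]
    best = int(np.argmin(dists))
    return best if dists[best] < max_dist else None

# Demo: a rectified pair, where epipolar lines are horizontal (y' = y).
F_rect = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
```

In practice every detected worker in one view is tested this way against all detections in the other view, and mutual nearest pairs are accepted as matches.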

스테레오 시각 정보를 이용한 4각보행 로보트의 3차원 위치 및 자세 검출 (3-D Positioning Using Stereo Vision and Guide-Mark Pattern For A Quadruped Walking Robot)

  • 윤정남;권호열;서일홍
    • 대한전자공학회논문지
    • /
    • 제27권8호
    • /
    • pp.1188-1200
    • /
    • 1990
  • In this paper, the 3-D positioning problem for a quadruped walking robot is investigated. In order to determine the robot's exterior position and orientation in a world coordinate system, a stereo 3-D positioning algorithm is proposed. The proposed algorithm uses a Guide-Mark Pattern (GMP) specially designed for fast and reliable extraction of 3-D robot position information from the uncontrolled working environment. Some experimental results, along with an error analysis and several means of reducing the effects of vision-processing error in the proposed algorithm, are discussed.
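The Guide-Mark-based method itself is more involved, but the stereo relation underlying any such positioning is the rectified depth-from-disparity formula Z = f·B/d. A minimal sketch of that standard relation, with hypothetical focal length and baseline values:

```python
def stereo_depth(f_px, baseline_m, x_left, x_right):
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    f_px is the focal length in pixels, baseline_m the camera separation in
    meters, and d = x_left - x_right the horizontal disparity in pixels.
    """
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: point at infinity or swapped pair")
    return f_px * baseline_m / d

# Example: f = 700 px, B = 0.12 m, disparity 20 px -> depth 4.2 m.
assert abs(stereo_depth(700.0, 0.12, 340.0, 320.0) - 4.2) < 1e-9
```

Vision-processing error enters through the disparity d, which is why subpixel matching accuracy matters most for distant points (small d).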


다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선 (Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map)

  • 김시종;안광호;성창훈;정명진
    • 로봇학회논문지
    • /
    • 제4권4호
    • /
    • pp.298-304
    • /
    • 2009
  • This paper describes an algorithm that improves a 3D reconstruction result using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (Φ, Δ) and the camera calibration matrix (K). The LRF disparity map is generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions can be compensated using the LRF disparity map; the resulting disparity map of this compensation process is the multi-sensor fusion disparity map, which refines the multi-sensor 3D reconstruction based on stereo vision and the LRF. The refinement algorithm is specified in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested with synchronized stereo image pairs and LRF 3D scan data.
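The projection of LRF points onto image pixel coordinates described above follows the standard pinhole model u ~ K(Rp + t). A minimal numpy sketch, where R and t stand in for the paper's extrinsic calibration (Φ, Δ) and all calibration values in the example are hypothetical:

```python
import numpy as np

def project_lrf_points(points_lrf, K, R, t):
    """Project LRF 3D points into pixel coordinates.

    R (3x3) and t (3,) map the LRF frame into the camera frame; K is the
    3x3 camera intrinsic matrix. Returns an (N, 2) array of pixels.
    """
    cam = (R @ np.asarray(points_lrf, float).T).T + t   # LRF frame -> camera frame
    uvw = (K @ cam.T).T                                 # homogeneous image coords
    return uvw[:, :2] / uvw[:, 2:3]                     # perspective divide
```

Interpolating the projected points over the image grid then yields the LRF disparity map used to fill invalid stereo regions.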


수직이착륙 무인항공기 자동 착륙을 위한 영상기반 항법 (Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing)

  • 이상훈;송진모;배종수
    • 한국군사과학기술학회지
    • /
    • 제18권3호
    • /
    • pp.226-233
    • /
    • 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method of estimating the camera pose using a known landmark for the purpose of autonomous vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method takes a distinctive approach to the pose estimation problem: extrinsic parameters from known and unknown 3-D (three-dimensional) feature points and the inertial estimate of the camera's 6-DOF (Degrees Of Freedom) pose are combined into one linear inhomogeneous equation, which allows singular value decomposition (SVD) to neatly solve the resulting optimization problem. Experimental results demonstrate the ability of the proposed method to estimate the camera's 6-DOF pose while remaining easy to implement.
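The SVD step mentioned above, solving a linear inhomogeneous system Ax = b in the least-squares sense, can be sketched as follows. This is a generic minimum-norm solver, not the paper's specific stacked pose system:

```python
import numpy as np

def solve_pose_svd(A, b):
    """Minimum-norm least-squares solution of A x = b via SVD.

    With A = U S V^T, the solution is x = V S^+ U^T b, where S^+ inverts
    only singular values above a numerical-rank tolerance.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    tol = max(A.shape) * np.finfo(float).eps * s[0]
    s_inv = np.where(s > tol, 1.0 / s, 0.0)     # pseudo-inverse of singular values
    return Vt.T @ (s_inv * (U.T @ b))
```

In the paper's setting, A stacks the constraints from known/unknown feature points and the inertial estimate, and x holds the unknown pose parameters.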

3D Shape Descriptor for Segmenting Point Cloud Data

  • Park, So Young;Yoo, Eun Jin;Lee, Dong-Cheon;Lee, Yong Wook
    • 한국측량학회지
    • /
    • 제30권6_2호
    • /
    • pp.643-651
    • /
    • 2012
  • Object recognition belongs to high-level processing, one of the difficult and challenging tasks in computer vision. Digital photogrammetry based on the computer vision paradigm began to emerge in the mid-1980s; however, the ultimate goal of digital photogrammetry, intelligent and autonomous processing of surface reconstruction, has not yet been achieved. Object recognition requires a robust shape description of objects, yet most shape descriptors are designed for 2D image data. Such descriptors therefore have to be extended to deal with 3D data, such as LiDAR (Light Detection and Ranging) data obtained from an ALS (Airborne Laser Scanner) system. This paper introduces an extension of the chain code to 3D object space, with a hierarchical approach for segmenting point cloud data. The experiment demonstrates the effectiveness and robustness of the proposed method for shape description and point cloud segmentation. Geometric characteristics of various roof types are well described, which will eventually serve as the basis for object modeling. Segmentation accuracy on the simulated data was evaluated by measuring the coordinates of the corners on the segmented patch boundaries; the overall RMSE (Root Mean Square Error) is equivalent to the average distance between points, i.e., the GSD (Ground Sampling Distance).
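Extending the chain code to 3D, as proposed here, can be illustrated by coding steps through the 26-connected voxel neighborhood, the 3D analogue of the 8-direction 2D chain code. The sketch below is a generic illustration of that idea, not the paper's hierarchical descriptor:

```python
import itertools

# Every nonzero offset in {-1, 0, 1}^3 gets a code: 26 directions in 3D,
# extending the 8-direction 2D chain code to 3D object space.
DIRECTIONS = [d for d in itertools.product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
CODE = {d: i for i, d in enumerate(DIRECTIONS)}

def chain_code_3d(voxel_path):
    """Encode a 26-connected voxel path as a sequence of 3D chain codes."""
    codes = []
    for (x0, y0, z0), (x1, y1, z1) in zip(voxel_path, voxel_path[1:]):
        codes.append(CODE[(x1 - x0, y1 - y0, z1 - z0)])
    return codes

def decode(start, codes):
    """Reconstruct the voxel path from a start voxel and its chain codes."""
    path = [tuple(start)]
    for c in codes:
        dx, dy, dz = DIRECTIONS[c]
        x, y, z = path[-1]
        path.append((x + dx, y + dy, z + dz))
    return path
```

Because encoding and decoding are exact inverses, the code sequence is a lossless, compact shape description of the traversed boundary.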

비전을 이용한 곡면변형률 측정의 정확도 및 정밀도 향상에 관한 연구 (A Study on the Improvement of Accuracy and Precision in the Vision-Based Surface-Strain Measurement)

  • 김두수;김형종
    • 소성∙가공
    • /
    • 제8권3호
    • /
    • pp.294-305
    • /
    • 1999
  • The vision-based surface-strain measurement system has been continuously improved since the authors developed its first version. In the present study, new algorithms for subpixel measurement and surface smoothing are introduced to improve accuracy and precision, and their effects are investigated by error analysis. The equations required to calculate the 3D surface strain of a shell element are derived from the shape function of a linear solid finite element. The influences of external factors on the measurement error are also examined, and several trials are made to find a near-optimal condition that minimizes the error.
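A common subpixel measurement technique of this kind is three-point parabola fitting around a discrete intensity peak; the sketch below illustrates that generic idea (the paper's actual subpixel algorithm may differ):

```python
def subpixel_peak(y_left, y_center, y_right):
    """Subpixel offset of a peak from three samples via parabola fitting.

    Fits y = a*x^2 + b*x + c through samples at x = -1, 0, 1 and returns the
    vertex offset relative to the center sample, normally within (-0.5, 0.5).
    """
    denom = y_left - 2.0 * y_center + y_right
    if denom == 0:
        return 0.0                      # flat triple: no refinement possible
    return 0.5 * (y_left - y_right) / denom
```

Applied to a correlation or edge-response profile, this refines a pixel-level marker location to a fraction of a pixel, which directly tightens the strain estimate.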


Essential Computer Vision Methods for Maximal Visual Quality of Experience on Augmented Reality

  • Heo, Suwoong;Song, Hyewon;Kim, Jinwoo;Nguyen, Anh-Duc;Lee, Sanghoon
    • Journal of International Society for Simulation Surgery
    • /
    • 제3권2호
    • /
    • pp.39-45
    • /
    • 2016
  • Augmented reality is an environment that combines the real-world view with computer-generated information. Since the image a user sees through an augmented reality device is a synthetic image composed of a real view and a virtual image, it is important that the computer-generated virtual image harmonize well with the real-view image. In this paper, we review several works on computer vision and graphics methods that give users a realistic augmented reality experience. To generate a visually harmonized synthetic image from a real and a virtual image, the 3D geometry and environmental information, such as lighting or material surface reflectivity, must be known to the computer, and many computer vision methods aim to estimate these. We introduce approaches to acquiring geometric information, the lighting environment, and material surface properties using monocular or multi-view images. We expect this paper to give readers an intuition for the computer vision methods that provide a realistic augmented reality experience.

3-D 비젼센서를 위한 고속 자동선택 알고리즘 (High Speed Self-Adaptive Algorithms for Implementation in a 3-D Vision Sensor)

  • P.미셰;A.벤스하이르;이상국
    • 센서학회지
    • /
    • 제6권2호
    • /
    • pp.123-130
    • /
    • 1997
  • This paper describes an original stereo vision system consisting of two components: a self-adaptive image segmentation process built on a new concept called declivity, and a fast stereo matching algorithm designed using self-adaptive decision parameters. At present, computing the depth map of an indoor image takes 3 s on a SUN-IPX; a combination of DSP chips under development is expected to reduce this time to under 1 s.


솔더 페이스트의 고속, 고정밀 검사를 위한 이차원/삼차원 복합 광학계 및 알고리즘 구현 (An implementation of 2D/3D Complex Optical System and its Algorithm for High Speed, Precision Solder Paste Vision Inspection)

  • 조상현;최흥문
    • 대한전자공학회논문지SP
    • /
    • 제41권3호
    • /
    • pp.139-146
    • /
    • 2004
  • In this paper, a combined 2D/3D inspection optical system and its driving unit for automatic solder-paste inspection are implemented as a single probe system, and an efficient vision inspection algorithm is proposed for it. For 2D inspection, a one-pass run-length labeling algorithm is proposed to extract solder-paste shapes effectively from the input image, and an optimal probe path for high-speed inspection is also computed. For 3D inspection, instead of the conventional laser slit-beam method, a phase-shift algorithm based on grating-projection moire interferometry is introduced to enable high-precision inspection. MMX parallel-processing techniques are applied in the software implementation for further speedup. A combined 2D/3D optical inspection system with a resolution of 10 ㎛ in the x and y axes and 1 ㎛ in the z axis over a 10 ㎜ × 10 ㎜ field of view (FOV) was built and tested: 2D and 3D inspection of solder paste for one FOV took, on average, only 11 msec and 15 msec respectively after image capture, and a height measurement precision of ±1 ㎛ was obtained.
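Phase-shift algorithms of the kind adopted for the 3D inspection are commonly implemented as the four-step variant: with fringe images I_k = A + B·cos(φ + (k−1)·π/2), the wrapped phase is φ = atan2(I₄ − I₂, I₁ − I₃). A minimal sketch of that standard relation (the paper does not specify which phase-shift variant it uses):

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe intensities shifted by 90 degrees each.

    With I_k = A + B*cos(phi + (k-1)*pi/2):
      I4 - I2 = 2B*sin(phi), I1 - I3 = 2B*cos(phi),
    so phi = atan2(I4 - I2, I1 - I3), independent of background A and contrast B.
    """
    return math.atan2(i4 - i2, i1 - i3)
```

Per-pixel phase values are then unwrapped and scaled by the moire sensitivity to recover solder-paste height.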