• Title/Summary/Keyword: Image-Based Point Cloud


3D Image Scan Automation Planning based on Mobile Rover (이동식 로버 기반 스캔 자동화 계획에 대한 연구)

  • Kang, Tae-Wook
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.8 / pp.1-7 / 2019
  • With conventional 3D image scanning methods, scanning is commonly done manually, which is labor-intensive. Scanning a space filled with complicated equipment, or a narrow space that is difficult for the user to enter, is problematic and degrades quality because of shadow areas. This paper proposes a method for scanning such areas with a rover equipped with a scanner. To control the scan path precisely, a 3D image remote scan automation method based on rover movement rule definitions is described. With this approach, the user can automate the 3D scan plan in the desired manner by defining the rover scan path as a rule base.
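As a minimal sketch of the rule-base idea described in the abstract, the snippet below expands a list of hypothetical rules (drive distance, turn angle, scan trigger) into scan waypoints; the paper's actual rule syntax and rover interface are not specified here and these names are assumptions.

```python
from dataclasses import dataclass
import math

@dataclass
class ScanRule:
    """One rule of a hypothetical rule base: drive, turn, optionally trigger a scan."""
    move_m: float    # distance to drive forward, in meters
    turn_deg: float  # heading change applied after the move, in degrees
    scan: bool       # whether to trigger a 3D scan at the resulting pose

def expand_rules(rules, start=(0.0, 0.0, 0.0)):
    """Expand a rule base into a list of (x, y, heading_deg, scan) waypoints."""
    x, y, heading = start
    waypoints = []
    for r in rules:
        x += r.move_m * math.cos(math.radians(heading))
        y += r.move_m * math.sin(math.radians(heading))
        heading = (heading + r.turn_deg) % 360.0
        waypoints.append((x, y, heading, r.scan))
    return waypoints

# Example: scan a narrow corridor, stopping to scan every 2 m, then turn around.
corridor = [ScanRule(2.0, 0.0, True)] * 5 + [ScanRule(0.0, 180.0, False)]
for wp in expand_rules(corridor):
    print(wp)
```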

Development of the Program for Reconnaissance and Exploratory Drones based on Open Source (오픈 소스 기반의 정찰 및 탐색용 드론 프로그램 개발)

  • Chae, Bum-sug;Kim, Jung-hwan
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.1 / pp.33-40 / 2022
  • With the recent increase in the development of military drones, they are adopted and used in combat systems at the battalion level or higher. However, under the current unit-organization conditions of the Korean military, it is difficult to field drones in battles below the platoon level. In this paper, therefore, we developed a program for reconnaissance and exploration drones equipped with a thermal imaging camera and a LiDAR sensor that can be applied in battles below the platoon level. Using these drones, we studied the possibility and feasibility of drones for small-scale combat that can find hidden enemies, search for an appropriate detour, and conduct reconnaissance and search of battlefields, hiding spots, and cover through image processing. Beyond searching for enemies lying in ambush, the proposed drone can be used to check the optimal movement path for a combat unit on the move, or to identify the best place for cover or concealment. In particular, because terrain features can be examined from various viewpoints through 3D modeling, routes other than the one recommended by the program can also be checked. We verified flight capability by designing and assembling a drone from open-source racing-drone hardware with LiDAR and thermal imaging camera modules added, developed autonomous flight and search functions based on open-source software that even non-professional drone operators can use, and installed them to verify their feasibility.
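As one hedged illustration of the image-processing step mentioned in the abstract, the sketch below flags unusually warm regions in a thermal frame with plain OpenCV thresholding; the drone program's actual detection pipeline, thresholds, and file names are not given in the abstract and are assumed here.

```python
import cv2
import numpy as np

def find_hot_regions(thermal_gray, min_area=50):
    """Return bounding boxes of regions noticeably warmer than the frame background.

    thermal_gray: 8-bit single-channel image where brighter pixels are warmer
    (an assumption; real radiometric data would need temperature calibration).
    """
    # Threshold relative to frame statistics rather than a fixed temperature.
    mean, std = cv2.meanStdDev(thermal_gray)
    thresh_val = float(mean[0][0]) + 2.0 * float(std[0][0])
    _, mask = cv2.threshold(thermal_gray, thresh_val, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# Usage (hypothetical file name):
# frame = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)
# print(find_hot_regions(frame))
```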

Terrain Geometry from Monocular Image Sequences

  • McKenzie, Alexander;Vendrovsky, Eugene;Noh, Jun-Yong
    • Journal of Computing Science and Engineering / v.2 no.1 / pp.98-108 / 2008
  • Terrain reconstruction from images is an ill-posed, yet commonly desired, Structure from Motion task when compositing visual effects into live-action photography. These surfaces are required for choreography of a scene, casting physically accurate shadows of CG elements, and handling occlusions. We present a novel framework for generating the geometry of landscapes from extremely noisy point cloud datasets obtained via limited-resolution techniques, particularly optical-flow-based vision algorithms applied to live-action video plates. Our contribution is a new statistical approach to removing erroneous tracks ('outliers') by employing a unique combination of well-established techniques, including Gaussian Mixture Models (GMMs) for robust parameter estimation and Radial Basis Functions (RBFs) for scattered data interpolation, to exploit the natural constraints of this problem. Our algorithm offsets the tremendously laborious task of modeling these landscapes by hand, automatically generating a visually consistent, camera-position-dependent, thin-shell surface mesh within seconds for a typical tracking shot.
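A minimal sketch of the two ingredients named in the abstract, assuming scikit-learn's GaussianMixture to score and drop low-likelihood tracks and SciPy's RBFInterpolator to build a height field from the surviving points; the paper's actual statistical model and camera-dependent meshing are more involved than this.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.interpolate import RBFInterpolator

def terrain_from_noisy_points(points_xyz, n_components=3, keep_quantile=0.9, grid_n=64):
    """points_xyz: (N, 3) noisy tracked points. Returns (grid_xy, heights)."""
    # 1) Fit a GMM to the 3D points; treat low-likelihood points as erroneous tracks.
    gmm = GaussianMixture(n_components=n_components, covariance_type="full", random_state=0)
    gmm.fit(points_xyz)
    log_lik = gmm.score_samples(points_xyz)
    inliers = points_xyz[log_lik >= np.quantile(log_lik, 1.0 - keep_quantile)]

    # 2) Interpolate a height field z = f(x, y) through the inliers with RBFs.
    rbf = RBFInterpolator(inliers[:, :2], inliers[:, 2], smoothing=1.0)
    xs = np.linspace(inliers[:, 0].min(), inliers[:, 0].max(), grid_n)
    ys = np.linspace(inliers[:, 1].min(), inliers[:, 1].max(), grid_n)
    grid_xy = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2)
    return grid_xy, rbf(grid_xy)

# Synthetic usage: a gentle slope plus noise and a few gross outliers.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, (500, 2))
z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + rng.normal(0, 0.05, 500)
cloud = np.column_stack([xy, z])
cloud[:20, 2] += rng.uniform(5, 10, 20)   # simulate erroneous tracks
grid, heights = terrain_from_noisy_points(cloud)
```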

Real Object Recognition Based Mobile Augmented Reality Game (현실 객체 인식 기반 모바일 증강현실 게임)

  • Lee, Dong-Chun;Lee, Hun-Joo
    • Journal of Korea Game Society / v.17 no.4 / pp.17-24 / 2017
  • This paper describes the general process of making an augmented reality game for real objects without markers. Point cloud data created with SLAM technology is edited with a separate editing tool to optimize performance in a mobile environment. In the game execution stage, extracting feature points and matching descriptors generates a heavy load; to reduce it, optical flow is used to track the feature points matched in the previous input image.
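A hedged sketch of the load-reduction step described above: instead of re-extracting and re-matching descriptors every frame, the keypoints matched in the previous frame are tracked with pyramidal Lucas-Kanade optical flow in OpenCV. Window size, pyramid level, and the fallback threshold are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def track_matched_points(prev_gray, curr_gray, prev_pts):
    """Track previously matched feature points into the current frame with LK optical flow.

    prev_pts: (N, 1, 2) float32 array of keypoint locations in the previous frame.
    Returns the tracked points and a boolean mask of points that were followed reliably.
    """
    curr_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    ok = status.reshape(-1) == 1
    return curr_pts, ok

# Usage sketch: fall back to full feature extraction + descriptor matching only
# when too few points survive tracking (threshold of 30 is an assumption).
# curr_pts, ok = track_matched_points(prev_gray, curr_gray, prev_pts)
# if ok.sum() < 30:
#     redo_feature_matching()
```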

Development of Image Manipulation System based on Reconstructed Point-cloud Model (재구성된 포인트 클라우드 모델 기반 이미지 편집 시스템 개발)

  • Yoon, Hyun-Wook;Hong, Kwang-Jin
    • Proceedings of the Korea Information Processing Society Conference / 2018.05a / pp.465-468 / 2018
  • The prevailing image-editing approach separates an object from its background by selecting and extracting a region inside the image. When the object is removed, an empty region is left where it used to be; the most common remedies are to fill the hole with neighboring pixels or to apply deep learning to fill it with a plausible, similar image. Because these approaches fill the missing background artificially, a perfect restoration is difficult. This paper therefore proposes the idea of processing and storing 3D information about the image in advance so that the regions lost during editing can be recovered from that 3D information.
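A minimal sketch of the proposed idea under simplifying assumptions: a colored background point cloud and a pinhole camera pose are stored in advance, and pixels lost when the object is cut out are refilled by projecting the cloud back into the view. The paper does not give this projection code; the function, camera model, and lack of a z-buffer are illustrative choices.

```python
import numpy as np

def fill_hole_from_cloud(image, hole_mask, points, colors, K, R, t):
    """Refill masked pixels by projecting a stored colored point cloud into the view.

    points: (N, 3) background points in world coordinates, colors: (N, 3) uint8,
    K: 3x3 intrinsics, R, t: world-to-camera rotation and translation (assumed known).
    """
    cam = (R @ points.T + t.reshape(3, 1)).T            # world -> camera frame
    in_front = cam[:, 2] > 1e-6
    cam, col = cam[in_front], colors[in_front]
    uv = (K @ cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).round().astype(int)   # perspective divide

    h, w = hole_mask.shape
    out = image.copy()
    for (u, v), c in zip(uv, col):
        if 0 <= v < h and 0 <= u < w and hole_mask[v, u]:
            out[v, u] = c                                # nearest-point splat, no z-buffer
    return out
```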

2D Image based 3D Data Generation Method for Background Information of Virtual Reality (가상 환경의 배경 정보를 위한 2D 영상 기반의 3D 데이터 생성 방법)

  • Rhee, Seongbae;Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.331-334 / 2021
  • Virtual Reality (VR) is a representative immersive-media technology: it refers to an artificially created environment or situation that closely resembles reality, or to the technology itself. Because VR lets users experience any space built in the virtual world simply by wearing relatively simple equipment, even users with physical limitations can easily tour famous attractions, and by reproducing real operational areas in the virtual world, safe military training becomes possible. For such applications, the background of the virtual world must consist of photorealistic graphics that closely resemble the real world. However, producing photorealistic graphics is difficult and expensive, so VR content based on photorealistic graphics remains scarce. This paper therefore proposes a method for generating point cloud data from single-view or multi-view images captured with an ordinary camera and using it as background information for the virtual world.
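One hedged way to realize the multi-view case with off-the-shelf tools: estimate a disparity map from a rectified stereo pair with OpenCV's StereoSGBM and reproject it to 3D points. The paper's actual pipeline (including its single-image case) is not detailed in the abstract; the Q matrix and matcher parameters below are assumptions.

```python
import cv2
import numpy as np

def stereo_to_point_cloud(left_gray, right_gray, Q):
    """Build an (N, 3) point cloud from a rectified stereo pair.

    Q: 4x4 disparity-to-depth reprojection matrix from stereo calibration (assumed given).
    """
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                                 P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10)
    disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    valid = disparity > 0
    return points_3d[valid].reshape(-1, 3)

# The resulting points (optionally colored from the left image) could then serve as
# a background point cloud in a VR scene, loaded via a point cloud viewer or engine importer.
```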


Physical Offset of UAVs Calibration Method for Multi-sensor Fusion (다중 센서 융합을 위한 무인항공기 물리 오프셋 검보정 방법)

  • Kim, Cheolwook;Lim, Pyeong-chae;Chi, Junhwa;Kim, Taejung;Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1125-1139 / 2022
  • In an unmanned aerial vehicle (UAV) system, a physical offset can exist between the global positioning system/inertial measurement unit (GPS/IMU) sensor and an observation sensor such as a hyperspectral or lidar sensor. Because of this physical offset, a misalignment between images can occur along the flight direction. In particular, in a multi-sensor system the observation sensor has to be replaced regularly to mount another observation sensor, and a high cost must then be paid to acquire a new calibration parameter. In this study, we establish a precise sensor model equation that applies to multiple sensors in common and propose an independent physical offset estimation method. The proposed method consists of three steps. First, we define an appropriate rotation matrix for our system and an initial sensor model equation for direct georeferencing. Next, an observation equation for the physical offset estimation is established by extracting corresponding points between ground control points and the data observed by a sensor. Finally, the physical offset is estimated from the observations, and the precise sensor model equation is established by applying the estimated parameters to the initial sensor model equation. Datasets from four regions with different latitudes and longitudes (Jeon-ju, Incheon, Alaska, Norway) were compared to analyze the effect of the calibration parameters. We confirmed that the misalignment between images was corrected after applying the physical offset in the sensor model equation. Absolute position accuracy was analyzed on the Incheon dataset against ground control points: the root mean square error (RMSE) in the X and Y directions was 0.12 m for the hyperspectral image and 0.03 m for the point cloud. Furthermore, the relative position accuracy at a specific point between the adjusted point cloud and the hyperspectral images was 0.07 m, so we confirmed that precise data mapping is possible for observations without ground control points through the proposed estimation method, and we also confirmed the possibility of multi-sensor fusion. From this study, we expect that a flexible multi-sensor platform can be operated economically through the independent parameter estimation method.
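A much-simplified sketch of the offset (lever-arm) estimation idea: if the initial sensor model predicts ground coordinates for each control point and the platform's per-observation rotation is known, a constant body-frame offset can be estimated by linear least squares from the residuals between surveyed and predicted positions. The paper's actual observation equation and rotation conventions are richer than this; the function and its inputs are illustrative.

```python
import numpy as np

def estimate_physical_offset(rotations, x_pred, x_gcp):
    """Estimate a constant body-frame offset o minimizing sum ||R_i o - (x_gcp_i - x_pred_i)||^2.

    rotations: (N, 3, 3) body-to-mapping-frame rotation matrices per observation,
    x_pred:    (N, 3) positions predicted by the initial sensor model (no offset),
    x_gcp:     (N, 3) surveyed ground-control-point positions.
    """
    A = np.concatenate(list(rotations), axis=0)   # stack rotations into a (3N, 3) design matrix
    b = (x_gcp - x_pred).reshape(-1)              # stacked residuals, (3N,)
    offset, *_ = np.linalg.lstsq(A, b, rcond=None)
    return offset                                 # (3,) estimated lever arm

# Applying R_i @ offset back into the initial sensor model is analogous to the
# final step described in the abstract (estimated parameters -> precise sensor model).
```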

Three-dimensional Geometrical Scanning System Using Two Line Lasers (2-라인 레이저를 사용한 3차원 형상 복원기술 개발)

  • Heo, Sang-Hu;Lee, Chung Ghiu
    • Korean Journal of Optics and Photonics / v.27 no.5 / pp.165-173 / 2016
  • In this paper, we propose a three-dimensional (3D) scanning system based on two line lasers with different wavelengths as light sources. The 532-nm and 630-nm line lasers compensate for scan data missing due to geometrical occlusion, and the two laser planes can be separated using the red and green color channels. For automatic registration of the scan data, we control a stepping motor and divide its rotational degree of freedom into micro-steps. To this end, we design a control printed circuit board for the lasers and the stepping motor and use an image processing board. To compute a 3D point cloud, we acquire 200 and 400 images containing laser lines at different rotation angles and segment the lines in the images. The segmented lines are thinned for one-to-one matching of image pixels with 3D points.
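A hedged sketch of two steps named in the abstract: separating the 532-nm (green) and 630-nm (red) laser planes by color channel and thinning each stripe to one center per row so that image pixels can be matched one-to-one with 3D points. Thresholds are illustrative; the paper's segmentation and calibration details are not reproduced.

```python
import cv2
import numpy as np

def extract_laser_lines(bgr, min_intensity=80):
    """Return per-row sub-pixel centers of the red and green laser stripes.

    bgr: color frame containing both laser lines. Output: two lists of (row, col) centers.
    """
    b, g, r = cv2.split(bgr.astype(np.float32))
    red_map = np.clip(r - 0.5 * (g + b), 0, None)     # emphasize the 630-nm line
    green_map = np.clip(g - 0.5 * (r + b), 0, None)   # emphasize the 532-nm line

    def thin(channel):
        centers = []
        for row in range(channel.shape[0]):
            line = channel[row]
            mask = line >= min_intensity
            if not mask.any():
                continue
            cols = np.nonzero(mask)[0]
            weights = line[mask] / line[mask].sum()
            centers.append((row, float((cols * weights).sum())))  # intensity-weighted centroid
        return centers

    return thin(red_map), thin(green_map)
```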

Fast Joint Normal Estimation Method for V-PCC Encoder (V-PCC 부호화기를 위한 고속 결합 법선 추정 방법)

  • Kim, Yong-Hwan;Kim, Yura
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.246-249 / 2022
  • Compression technology for the real-time transmission of 3D objects is needed to broaden the applications of the metaverse and immersive (virtual reality, extended reality, and light field) content services that have recently attracted worldwide attention. V-PCC (Video-based Point Cloud Compression), standardized in 2021 as ISO/IEC 23090 MPEG-I Part 5, is the internationally standardized coding technology for dynamic 3D point cloud objects developed in response to this industrial interest and need. The compression performance of V-PCC far exceeds existing industry technologies, but its encoder has very high computational complexity. This paper proposes a joint speed-up technique for the normal estimation algorithm, which has the highest computational complexity in the V-PCC encoder. Normal estimation consists of two algorithms: first, orientation-agnostic normal estimation, and second, normal orientation, which assigns directions to the normals estimated by the first algorithm. The proposed technique combines the two algorithms so that side information obtained during normal estimation is reused during normal orientation, greatly reducing the amount of computation, and achieves additional speed-up by changing the priority-queue data structure inside the normal orientation algorithm. Experiments on 7 test sequences show that the normal orientation algorithm is accelerated by 89.2% on average without any loss of compression efficiency.
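The two stages described above correspond to the standard estimate-then-orient pipeline; a minimal Open3D sketch of that baseline is shown below for orientation. The paper's contribution (reusing neighborhood information between the stages and changing the priority queue) is internal to the V-PCC test model and is not reproduced here.

```python
import numpy as np
import open3d as o3d

def estimate_oriented_normals(xyz, knn=16):
    """Baseline two-stage normals: (1) orientation-agnostic PCA estimation,
    (2) consistent orientation propagation over a tangent-plane graph."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)

    # Stage 1: per-point normal from the covariance of its k nearest neighbors.
    pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn))

    # Stage 2: propagate a consistent normal direction across the cloud.
    pcd.orient_normals_consistent_tangent_plane(knn)
    return np.asarray(pcd.normals)

# Usage with a synthetic cloud:
# normals = estimate_oriented_normals(np.random.rand(1000, 3))
```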


LiDAR Chip for Automated Geo-referencing of High-Resolution Satellite Imagery (라이다 칩을 이용한 고해상도 위성영상의 자동좌표등록)

  • Lee, Chang No;Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.32 no.4_1 / pp.319-326 / 2014
  • An accurate geo-referencing process using ground control points is a prerequisite for the effective end use of high-resolution satellite imagery (HRSI). Since conventional control point acquisition by a human operator takes a long time, demand for automated matching to existing reference data has been increasing. Among the many options for reference data, airborne LiDAR (Light Detection And Ranging) data shows high potential due to its high spatial resolution and vertical accuracy; additionally, it is a three-dimensional point cloud free from relief displacement. Recently, a matching method between LiDAR data and HRSI was proposed that projects the whole LiDAR dataset into the HRSI domain, but importing and processing such a large amount of LiDAR data is time-consuming. We were therefore motivated to propose local LiDAR chip generation for HRSI geo-referencing. In the procedure, the LiDAR point cloud is rasterized into an ortho image with the digital elevation model. We then select local areas that contain a meaningful amount of edge information and create LiDAR chips of small data size. We tested the LiDAR chips for fully automated geo-referencing with Kompsat-2 and Kompsat-3 data, and the experimental results showed a mean accuracy at the one-pixel level.
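A hedged sketch of the chip-selection step: after the LiDAR point cloud has been rasterized into an ortho image, tiles with enough edge information are kept as candidate chips for matching against the satellite image. Tile size, Canny thresholds, and the edge-ratio cutoff are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def select_lidar_chips(ortho_gray, tile=256, min_edge_ratio=0.05):
    """Pick (x, y, tile, tile) windows of a rasterized LiDAR ortho image rich in edges."""
    edges = cv2.Canny(ortho_gray, 50, 150)
    chips = []
    h, w = ortho_gray.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            window = edges[y:y + tile, x:x + tile]
            edge_ratio = np.count_nonzero(window) / float(tile * tile)
            if edge_ratio >= min_edge_ratio:      # enough structure to match reliably
                chips.append((x, y, tile, tile))
    return chips

# Each selected chip (together with its DEM-derived geolocation) can then be matched
# to the satellite image to generate control information automatically.
```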