• Title/Summary/Keyword: Image Navigation


Bundle Adjustment and 3D Reconstruction Method for Underwater Sonar Image (수중 영상 소나의 번들 조정과 3차원 복원을 위한 운동 추정의 모호성에 관한 연구)

  • Shin, Young-Sik;Lee, Yeong-jun;Cho, Hyun-Taek;Kim, Ayoung
    • The Journal of Korea Robotics Society / v.11 no.2 / pp.51-59 / 2016
  • In this paper we present (1) an analysis of imaging sonar measurements for two-view relative pose estimation of an autonomous vehicle and (2) a bundle adjustment and 3D reconstruction method using imaging sonar. Sonar has been a popular sensor for underwater applications due to its robustness to water turbidity and limited visibility in the water medium. While vision-based motion estimation has been applied to many ground vehicles for motion estimation and 3D reconstruction, imaging sonar presents challenges for estimating relative sensor-frame motion. We focus on the fact that the sonar measurement is inherently ambiguous. This paper illustrates the source of the ambiguity in sonar measurements and summarizes assumptions for sonar-based robot navigation. For validation, we synthetically generated underwater seafloors of varying complexity to analyze the error in the motion estimation.
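The elevation-angle ambiguity the abstract refers to can be sketched in a few lines (an illustrative model, not the paper's code): an imaging sonar reports only range and azimuth, so two points that differ only in elevation produce the identical measurement.

```python
import math

def sonar_measurement(point):
    """Project a 3D point (x, y, z) in the sonar frame to a
    (range, azimuth) pair; the elevation angle is lost."""
    x, y, z = point
    rng = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)
    return rng, azimuth

def from_spherical(rng, azimuth, elevation):
    """Build a 3D point from range, azimuth, and elevation."""
    x = rng * math.cos(elevation) * math.cos(azimuth)
    y = rng * math.cos(elevation) * math.sin(azimuth)
    z = rng * math.sin(elevation)
    return x, y, z

# Two points at the same range and azimuth but different elevations
p1 = from_spherical(5.0, 0.3, 0.1)
p2 = from_spherical(5.0, 0.3, 0.4)
m1 = sonar_measurement(p1)
m2 = sonar_measurement(p2)
# m1 and m2 agree up to floating point: the sensor cannot
# distinguish the two points, which is the ambiguity analyzed.
```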

Road Surface Marking Detection for Sensor Fusion-based Positioning System (센서 융합 기반 정밀 측위를 위한 노면 표시 검출)

  • Kim, Dongsuk;Jung, Hogi
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.7 / pp.107-116 / 2014
  • This paper presents camera-based road surface marking detection methods suited to a sensor fusion-based positioning system that consists of a low-cost GPS (Global Positioning System), INS (Inertial Navigation System), EDM (Extended Digital Map), and vision system. The proposed vision system consists of two parts: lane marking detection and RSM (Road Surface Marking) detection. The lane marking detection provides ROIs (Regions of Interest) that are highly likely to contain RSM. The RSM detection generates candidates in these regions and classifies their types. The proposed system focuses on detecting RSM without false detections while operating in real time. To ensure real-time operation, the system varies the gating for lane marking detection and switches detection methods according to an FSM (Finite State Machine) that reflects the driving situation. Also, a single template matching scheme is used to extract features for both lane marking detection and RSM detection, and it is efficiently implemented with a horizontal integral image. Further, multiple verification steps are performed to minimize false detections.
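The horizontal integral image mentioned above can be sketched as follows (a minimal illustration, not the paper's implementation): row-wise prefix sums let the sum over any horizontal run of pixels be read in constant time, which is what makes the template matching cheap.

```python
def horizontal_integral(image):
    """Row-wise prefix sums: H[r][c] = sum(image[r][0..c])."""
    H = []
    for row in image:
        acc, pref = 0, []
        for v in row:
            acc += v
            pref.append(acc)
        H.append(pref)
    return H

def row_sum(H, r, c0, c1):
    """Sum of image[r][c0..c1] in O(1) using the integral row."""
    s = H[r][c1]
    if c0 > 0:
        s -= H[r][c0 - 1]
    return s

img = [[1, 2, 3, 4],
       [5, 6, 7, 8]]
H = horizontal_integral(img)
# row_sum(H, 0, 1, 3) == 2 + 3 + 4 == 9, without re-scanning pixels
```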

Direct Geo-referencing for Laser Mapping System

  • Kim, Seong-Baek;Lee, Seung-yong;Kim, Min-Soo
    • Proceedings of the KSRS Conference / 2002.10a / pp.423-427 / 2002
  • In contrast to traditional text-based information, 4S (GIS, GNSS, SIIS, ITS) information can contribute to citizens' welfare in the upcoming era. Recently, GSIS (Geo-Spatial Information System) has been applied and emphasized in various fields. In analyzing data from the GSIS arena, the position information of objects and targets is crucial, so several methods of acquiring and determining position have been proposed and developed. From this perspective, position collection and processing are the heart of 4S technology. We developed 4S-Van, which enables real-time acquisition of position and attribute information together with accurate image data at remote sites. In this study, the configuration of 4S-Van, equipped with GPS, INS, CCD cameras, and an eye-safe laser scanner, is shown, and the merits of the DGPS/INS integration approach for geo-referencing are briefly discussed. The DGPS/INS integration algorithm for determining the six parameters of motion is essential in the 4S-Van to avoid or simplify complicated computations such as photogrammetric triangulation. 4S-Van serves as a laser mobile mapping system for three-dimensional data acquisition that merges texture information from the CCD camera. The technique is also applicable to virtual reality, car navigation, computer games, planning and management, city transportation, mobile communication, etc.
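The basic direct geo-referencing relation behind such a system can be sketched as follows (an illustrative simplification using heading only; the function names, lever arm, and numbers are hypothetical): a laser return in the sensor frame is rotated by the INS attitude and translated by the DGPS position.

```python
import math

def rot_z(yaw):
    """Rotation matrix about the vertical axis (heading only)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def georeference(gps_pos, yaw, lever_arm, laser_point):
    """Map a laser return from the sensor frame to ground coordinates
    using the platform position (DGPS) and attitude (INS)."""
    body = [laser_point[i] + lever_arm[i] for i in range(3)]  # sensor -> body
    rotated = mat_vec(rot_z(yaw), body)                       # body -> ground axes
    return [gps_pos[i] + rotated[i] for i in range(3)]

# A return 10 m ahead of the sensor, with the vehicle heading rotated 90 deg
pt = georeference([100.0, 200.0, 30.0], math.pi / 2,
                  [0.5, 0.0, 1.2], [10.0, 0.0, 0.0])
# pt is approximately [100.0, 210.5, 31.2]
```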


Requirements Analysis of Image-Based Positioning Algorithm for Vehicles

  • Lee, Yong;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.5 / pp.397-402 / 2019
  • Recently, with the emergence of autonomous vehicles and increasing interest in safety, a variety of research has been actively conducted to precisely estimate the position of a vehicle by fusing sensors. Previously, research was conducted to determine the location of moving objects using GNSS (Global Navigation Satellite Systems) and/or an IMU (Inertial Measurement Unit). Lately, however, precise positioning of a moving vehicle has been performed by fusing data obtained from various sensors, such as LiDAR (Light Detection and Ranging), on-board vehicle sensors, and cameras. This study is designed to enhance kinematic vehicle positioning performance by using feature-based recognition. Therefore, an analysis of the required precision of the observations obtained from the images was carried out in this study. Velocity and attitude observations, which are assumed to be obtained from images, were generated by simulation, and various magnitudes of errors were added to them. By applying these observations to the positioning algorithm, the effects of the additional velocity and attitude information on positioning accuracy during GNSS signal blockages were analyzed using a Kalman filter. The results show that yaw information with a precision better than 0.5 degrees is needed to improve existing positioning algorithms by more than 10%.
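The kind of Kalman-filter fusion described above can be sketched with a toy 1-D constant-velocity model (illustrative only; the paper's actual state vector and noise settings are not given here), where an image-derived velocity observation keeps the estimate from drifting during a GNSS outage:

```python
import numpy as np

def kf_predict(x, P, F, Q):
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
Q = np.eye(2) * 1e-3                    # process noise
H_vel = np.array([[0.0, 1.0]])          # image-derived velocity observation
R_vel = np.array([[0.01]])              # velocity measurement noise

x = np.array([0.0, 1.0])                # start at origin, 1 m/s
P = np.eye(2)
for _ in range(5):                      # GNSS outage: only velocity updates
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, np.array([1.0]), H_vel, R_vel)
# The velocity stays pinned near 1 m/s, bounding the position drift.
```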

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi;Cho, Hae Min;Lee, Seongwon;Kim, Euntai
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.87-93 / 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of the sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained using both RGB images and 3D point cloud information in an end-to-end manner. We generate a new input that consists of RGB and range information. After the training step, the relocalization system outputs the pose of the sensor corresponding to each new input it receives. In most cases, however, a mobile robot navigation system has successive sensor measurements. To improve the localization performance, the output of the CNN is used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
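The CNN-plus-particle-filter smoothing described above can be sketched in 1-D (an illustrative toy, not the paper's system): noisy "CNN" pose outputs serve as measurements that weight and resample particles propagated by a motion model.

```python
import math
import random

random.seed(0)

def particle_filter_step(particles, cnn_pose, motion=1.0, meas_std=0.5):
    """Propagate particles with the motion model, weight them by the
    CNN pose measurement, and resample proportionally to weight."""
    # predict: move each particle and add process noise
    particles = [p + motion + random.gauss(0.0, 0.1) for p in particles]
    # weight by Gaussian likelihood of the CNN-estimated pose
    weights = [math.exp(-0.5 * ((p - cnn_pose) / meas_std) ** 2)
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # resample
    return random.choices(particles, weights=weights, k=len(particles))

particles = [random.gauss(0.0, 1.0) for _ in range(200)]
true_pose = 0.0
for step in range(5):
    true_pose += 1.0
    cnn_pose = true_pose + random.gauss(0.0, 0.5)   # noisy CNN output
    particles = particle_filter_step(particles, cnn_pose)
estimate = sum(particles) / len(particles)
# The particle mean tracks the true pose despite noisy measurements.
```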

On the Study of the Motion Response of a Vessel Moored in the Region Sheltered by Inclined Breakwaters (경사진 방파제에 계류된 선체 운동응답에 관한 연구)

  • Cho, I.H.;Hong, S.Y.;Hong, S.W.
    • Journal of Korean Port Research / v.6 no.2 / pp.33-42 / 1992
  • In this paper we investigate the motion response of a moored ship in a fluid region sheltered by inclined breakwaters. The matched asymptotic expansion technique is employed to analyze the wave fields scattered by the inclined breakwaters. The fluid domain is subdivided into the ocean, entrance, and sheltered regions. The unknown coefficients in each region are determined by matching at the intermediate zone between neighboring regions. The wave field generated by the ship motion is analyzed with a Green's function method. To obtain the velocity jump across the ship associated with the symmetric motion modes, the sheltered region is further divided into the near field of the ship and the rest of the field. The image method is introduced to account for the effect of the pier near the ship. The integral equation for the velocity jump is derived by flux matching between the inner and outer regions of the moored ship. Through the numerical calculations it is found that the inclination angle and the entrance width of the breakwaters, as well as the location of the moored vessel, play an important role in the motion response of a moored ship.


Cone-beam computed tomography in endodontics: from the specific technical considerations of acquisition parameters and interpretation to advanced clinical applications

  • Nestor Rios-Osorio;Sara Quijano-Guauque;Sandra Brinez-Rodriguez;Gustavo Velasco-Flechas;Antonieta Munoz-Solis;Carlos Chavez;Rafael Fernandez-Grisales
    • Restorative Dentistry and Endodontics / v.49 no.1 / pp.1.1-1.18 / 2024
  • The implementation of imaging methods that enable sensitive and specific observation of anatomical structures has been a constant in the evolution of endodontic therapy. Cone-beam computed tomography (CBCT) enables 3-dimensional (3D) spatial anatomical navigation in the 3 volumetric planes (sagittal, coronal and axial) which translates into great accuracy for the identification of endodontic pathologies/conditions. CBCT interpretation consists of 2 main components: (i) the generation of specific tasks of the image and (ii) the subsequent interpretation report. A systematic and reproducible method to review CBCT scans can improve the accuracy of the interpretation process, translating into greater precision in terms of diagnosis and planning of endodontic clinical procedures. MEDLINE (PubMed), Web of Science, Google Scholar, Embase and Scopus were searched from inception to March 2023. This narrative review addresses the theoretical concepts, elements of interpretation and applications of the CBCT scan in endodontics. In addition, the contents and rationale for reporting 3D endodontic imaging are discussed.

RPC Correction of KOMPSAT-3A Satellite Image through Automatic Matching Point Extraction Using Unmanned Aerial Vehicle Imagery (무인항공기 영상 활용 자동 정합점 추출을 통한 KOMPSAT-3A 위성영상의 RPC 보정)

  • Park, Jueon;Kim, Taeheon;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1135-1147 / 2021
  • In order to geometrically correct high-resolution satellite imagery, a sensor modeling process that restores the geometric relationship between the satellite sensor and the ground surface at the image acquisition time is required. In general, high-resolution satellites provide RPC (Rational Polynomial Coefficient) information, but the vendor-provided RPC includes geometric distortion caused by the position and orientation of the satellite sensor. GCPs (Ground Control Points) are generally used to correct the RPC errors. The representative method of acquiring GCPs is a field survey to obtain accurate ground coordinates. However, it is difficult to find GCPs in the satellite image due to image quality, land cover change, relief displacement, etc. By using image maps acquired from various sensors as reference data, it is possible to automate the collection of GCPs through image matching algorithms. In this study, the RPC of a KOMPSAT-3A satellite image was corrected using matching points extracted from UAV (Unmanned Aerial Vehicle) imagery. We propose a pre-processing method for the extraction of matching points between the UAV imagery and the KOMPSAT-3A satellite image. To this end, the characteristics of matching points extracted by independently applying SURF (Speeded-Up Robust Features) and phase correlation, which are representative feature-based and area-based matching methods, respectively, were compared. The RPC adjustment parameters were calculated using the matching points extracted by each algorithm. To verify the performance and usability of the proposed method, it was compared with the GCP-based RPC correction result. The GCP-based method improved the correction accuracy over the vendor-provided RPC by 2.14 pixels for the sample and 5.43 pixels for the line. In the proposed method, the SURF and phase correlation variants improved the sample accuracy by 0.83 and 1.49 pixels, and the line accuracy by 4.81 and 5.19 pixels, respectively, compared to the vendor-provided RPC. These results demonstrate that the proposed method using UAV imagery is a possible alternative to the GCP-based method for RPC correction.
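A common form of image-space RPC adjustment, consistent with what the abstract describes, can be sketched as a least-squares affine fit from matching points (illustrative; the paper's exact adjustment model is not specified here):

```python
import numpy as np

def fit_affine_correction(rpc_xy, matched_xy):
    """Least-squares affine correction mapping RPC-projected image
    coordinates to matched reference coordinates:
        x' = a0 + a1*x + a2*y,   y' = b0 + b1*x + b2*y
    """
    A = np.column_stack([np.ones(len(rpc_xy)), rpc_xy[:, 0], rpc_xy[:, 1]])
    coef_x, *_ = np.linalg.lstsq(A, matched_xy[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, matched_xy[:, 1], rcond=None)
    return coef_x, coef_y

# Synthetic matching points: the "true" distortion is a constant shift
# of +2 pixels in sample and -1.5 pixels in line.
rpc_pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
ref_pts = rpc_pts + np.array([2.0, -1.5])
cx, cy = fit_affine_correction(rpc_pts, ref_pts)
# cx recovers [2, 1, 0] and cy recovers [-1.5, 0, 1]: a pure translation.
```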

An Integrated VR Platform for 3D and Image-based Models: A Step toward Interactivity with Photo Realism (상호작용 및 사실감을 위한 3D/IBR 기반의 통합 VR환경)

  • Yoon, Jayoung;Kim, Gerard Jounghyun
    • Journal of the Korea Computer Graphics Society / v.6 no.4 / pp.1-7 / 2000
  • Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to its limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for the various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing distance and image-space criteria are used; however, switching between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth. Also, during interaction, a 3D representation is used regardless of the viewing distance, if it exists. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
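The distance-based representation switch described above can be sketched as follows (thresholds and names are illustrative, not taken from the paper); note that interaction forces the 3D model regardless of distance, as the abstract states:

```python
def select_representation(distance, interacting, has_3d_model=True,
                          model_range=10.0, billboard_range=50.0):
    """Choose an object's representation from viewing distance, as in a
    mixed 3D / image-based scene graph (thresholds are illustrative)."""
    if interacting and has_3d_model:
        return "3d_model"          # interaction always prefers real geometry
    if distance < model_range and has_3d_model:
        return "3d_model"          # close range: full 3D model
    if distance < billboard_range:
        return "billboard"         # intermediate range: flat impostor
    return "environment_map"       # far range: part of the environment map

# select_representation(5.0, False)  -> "3d_model"
# select_representation(30.0, False) -> "billboard"
# select_representation(30.0, True)  -> "3d_model" (interaction override)
```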


VILODE : A Real-Time Visual Loop Closure Detector Using Key Frames and Bag of Words (VILODE : 키 프레임 영상과 시각 단어들을 이용한 실시간 시각 루프 결합 탐지기)

  • Kim, Hyesuk;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.4 no.5 / pp.225-230 / 2015
  • In this paper, we propose an effective real-time visual loop closure detector, VILODE, which makes use of key frames and a bag of visual words (BoW) based on SURF feature points. To determine whether the camera has revisited one of the previously visited places, a loop closure detector has to compare an incoming new image with all previous images collected at every visited place. As the camera passes through new places, the number of images to be compared keeps growing; for this reason, it is difficult for a visual loop closure detector to meet both the real-time constraint and high detection accuracy. To address this problem, the proposed system adopts an effective key frame selection strategy that selects and compares only distinct, meaningful frames from the continuously incoming images during navigation, greatly reducing the number of image comparisons needed for loop detection. Moreover, to improve detection accuracy and efficiency, the system represents each key frame image as a bag of visual words and maintains indexes for them using the DBoW database system. Experiments with TUM benchmark datasets demonstrate the high performance of the proposed visual loop closure detector.
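The bag-of-visual-words comparison at the core of such a detector can be sketched as follows (a minimal illustration; the actual system uses SURF features and the DBoW index, both omitted here):

```python
import math
from collections import Counter

def bow_vector(word_ids):
    """L2-normalized term-frequency vector over visual word ids."""
    counts = Counter(word_ids)
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {w: c / norm for w, c in counts.items()}

def cosine_similarity(v1, v2):
    return sum(v1[w] * v2.get(w, 0.0) for w in v1)

def detect_loop(query, keyframes, threshold=0.8):
    """Return the index of the best-matching key frame, or None if no
    key frame scores above the threshold."""
    best_idx, best_score = None, threshold
    for i, kf in enumerate(keyframes):
        score = cosine_similarity(query, kf)
        if score >= best_score:
            best_idx, best_score = i, score
    return best_idx

kf1 = bow_vector([1, 2, 3, 4, 4])      # a previously visited place
kf2 = bow_vector([7, 8, 9, 10])        # a different place
query = bow_vector([1, 2, 3, 4])       # revisiting the first place
# detect_loop(query, [kf1, kf2]) returns 0: kf1 is the loop candidate
```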