• Title/Summary/Keyword: Landmark

Search results: 602

Development of Localization using Artificial and Natural Landmark for Indoor Mobile Robots (실내 이동 로봇을 위한 자연 표식과 인공 표식을 혼합한 위치 추정 기법 개발)

  • Ahn, Joonwoo; Shin, Seho; Park, Jaeheung
    • The Journal of Korea Robotics Society, v.11 no.4, pp.205-216, 2016
  • Localization is one of the most important capabilities for navigating mobile robots. One approach estimates the robot's location from feature information of landmarks, and such methods fall into two categories: natural-landmark-based and artificial-landmark-based approaches. Natural landmarks are available in any environment, but they may not provide enough features for localization in feature-poor or dynamic environments. Artificial landmarks, on the other hand, may leave shaded areas that they do not cover because of space constraints. To overcome these disadvantages, this paper presents a localization system that combines artificial and natural landmarks on a topological map. The proposed system recognizes both far and near landmarks without distortion by using a landmark tracking system based on a top-view image transform, in which the camera is rotated according to the distance to the landmark. Experiments show that the proposed system achieves position recognition without shaded sections on a mobile robot while using only a small number of artificial landmarks.
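
The abstract above relies on a top-view image transform so that far and near landmarks are tracked without perspective distortion. Below is a minimal sketch of such a transform using OpenCV's perspective warp; the ground-plane corner coordinates and the output size are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def top_view_transform(image, src_corners, out_size=(400, 600)):
    """Warp a camera image to a top-view (bird's-eye) image.

    src_corners: four pixel coordinates of a ground-plane rectangle
    (e.g., a floor marker) in the camera image, ordered
    top-left, top-right, bottom-right, bottom-left.
    """
    w, h = out_size
    dst_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Homography that maps the ground-plane rectangle to the full output image.
    H = cv2.getPerspectiveTransform(np.float32(src_corners), dst_corners)
    return cv2.warpPerspective(image, H, (w, h))

# Example usage with hypothetical corner positions:
# frame = cv2.imread("frame.png")
# top = top_view_transform(frame, [(250, 300), (390, 300), (560, 470), (80, 470)])
```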

Effect of Voxel Size on the Accuracy of Landmark Identification in Cone-Beam Computed Tomography Images

  • Lee, Kyung-Min; Davami, Kamran; Hwang, Hyeon-Shik; Kang, Byung-Cheol
    • Journal of Korean Dental Science, v.12 no.1, pp.20-28, 2019
  • Purpose: This study evaluated the effect of voxel size on the accuracy of landmark identification in cone-beam computed tomography (CBCT) images. Materials and Methods: CBCT images were obtained from 15 dry human skulls at two voxel sizes: 0.39 mm and 0.10 mm. Three midline landmarks and eight bilateral landmarks were identified by five examiners and recorded as three-dimensional coordinates. To compare the accuracy of landmark identification between the large and small voxel size images, the difference between the best estimate (the average of the five examiners' measurements) and each examiner's value was calculated and compared between the two image sets. Results: In the large voxel size images, landmark identification errors varied widely across landmarks, whereas the small voxel size images showed small errors for all landmarks. The identification errors were smaller for all landmarks in the small voxel size images than in the large voxel size images. Conclusion: These results indicate that landmark identification errors can be reduced by using a smaller voxel size in CBCT scans.
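
The error measure described above (each examiner's deviation from the best estimate, taken as the mean of all examiners' picks) can be computed as in the following sketch; the coordinate values are made-up placeholders, not study data.

```python
import numpy as np

def identification_errors(picks):
    """picks: array of shape (n_examiners, 3) with the x, y, z coordinates
    (in mm) that each examiner assigned to one landmark on one image.

    Returns the Euclidean distance of each examiner's pick from the
    best estimate (the mean of all examiners' picks)."""
    picks = np.asarray(picks, dtype=float)
    best_estimate = picks.mean(axis=0)
    return np.linalg.norm(picks - best_estimate, axis=1)

# Hypothetical picks of 5 examiners for one landmark (mm):
picks = [[10.2, 34.1, 50.0],
         [10.5, 34.0, 49.8],
         [10.1, 34.3, 50.2],
         [10.4, 33.9, 50.1],
         [10.3, 34.2, 49.9]]
print(identification_errors(picks))  # per-examiner error in mm
```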

A comparative study of the reproducibility of landmark identification on posteroanterior and anteroposterior cephalograms generated from cone-beam computed tomography scans

  • Na, Eui-Ri; Aljawad, Hussein; Lee, Kyung-Min; Hwang, Hyeon-Shik
    • The Korean Journal of Orthodontics, v.49 no.1, pp.41-48, 2019
  • Objective: This in-vivo study aimed to compare landmark identification errors between anteroposterior (AP) and posteroanterior (PA) cephalograms generated from cone-beam computed tomography (CBCT) scan data, in order to examine the feasibility of using AP cephalograms in clinical settings. Methods: AP and PA cephalograms were generated from CBCT scans of 25 adults. Four experienced and four inexperienced examiners were selected according to their experience in analyzing frontal cephalograms. They identified six cephalometric landmarks on the AP and PA cephalograms, and the errors incurred in positioning the landmarks were calculated using the straight-line distance and its horizontal and vertical components as parameters. Results: Comparison of the landmark identification errors in CBCT-generated frontal cephalograms revealed that landmark-dependent differences were greater than experience- or projection-dependent differences. Comparisons of errors in the horizontal and vertical directions showed larger errors for the crista galli and anterior nasal spine in the vertical direction and for the menton in the horizontal direction, compared with the other landmarks. Comparison of errors between the AP and PA projections revealed a slightly higher error in the AP projections, with no inter-examiner differences, and statistical testing showed no significant differences between AP and PA cephalograms for any landmark. Conclusions: The reproducibility of CBCT-generated AP cephalograms is comparable to that of PA cephalograms; therefore, AP cephalograms can be generated reliably from CBCT scan data in clinical settings.
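
A sketch of the error parameters named above (the straight-line distance between two picks of the same landmark plus its horizontal and vertical components) is given below; the coordinates are illustrative, not measurements from the study.

```python
import math

def landmark_error(pick, reference):
    """pick, reference: (x, y) positions of the same landmark on a
    cephalogram, e.g., an examiner's pick vs. the best estimate.
    Returns (horizontal error, vertical error, straight-line distance)."""
    dx = pick[0] - reference[0]   # horizontal component
    dy = pick[1] - reference[1]   # vertical component
    return abs(dx), abs(dy), math.hypot(dx, dy)

# Hypothetical example: a landmark identified 0.8 mm to the right of and
# 1.5 mm below the best-estimate position.
print(landmark_error((100.8, 61.5), (100.0, 60.0)))
```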

Deep Learning-based Gaze Direction Vector Estimation Network Integrated with Eye Landmark Localization (딥 러닝 기반의 눈 랜드마크 위치 검출이 통합된 시선 방향 벡터 추정 네트워크)

  • Joo, Heeyoung; Ko, Min-Soo; Song, Hyok
    • Journal of Broadcast Engineering, v.26 no.6, pp.748-757, 2021
  • In this paper, we propose a gaze estimation network in which eye landmark detection and gaze direction vector estimation are integrated into a single deep learning network. The proposed network uses the Stacked Hourglass Network as its backbone and consists of three main parts: a landmark detector, a feature map extractor, and a gaze direction estimator. The landmark detector estimates the coordinates of 50 eye landmarks, the feature map extractor generates a feature map of the eye image for estimating the gaze direction, and the gaze direction estimator combines both outputs to estimate the final gaze direction vector. The network was trained on virtual synthetic eye images and landmark coordinates generated with the UnityEyes dataset, and the MPIIGaze dataset of real human eye images was used for performance evaluation. In the experiments, the network achieved a gaze estimation error of 3.9 and an estimation speed of 42 FPS (frames per second).
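
The three-part structure described above (landmark detector, feature map extractor, and gaze direction estimator on a shared backbone) could be wired together roughly as in the following PyTorch sketch. The layer sizes, the plain convolutional stack standing in for the Stacked Hourglass backbone, and the soft-argmax landmark decoding are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GazeNet(nn.Module):
    """Sketch: shared backbone -> (a) 50 landmark heatmaps,
    (b) a feature map, combined by an MLP into a 3D gaze vector."""

    def __init__(self, num_landmarks=50):
        super().__init__()
        # Small conv stack standing in for the Stacked Hourglass backbone.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.landmark_head = nn.Conv2d(64, num_landmarks, 1)  # heatmaps
        self.feature_head = nn.Conv2d(64, 64, 1)              # feature map
        self.gaze_head = nn.Sequential(
            nn.Linear(64 + 2 * num_landmarks, 128), nn.ReLU(),
            nn.Linear(128, 3))                                 # gaze vector

    def forward(self, eye_image):
        feat = self.backbone(eye_image)
        heatmaps = self.landmark_head(feat)          # (B, 50, H, W)
        b, k, h, w = heatmaps.shape
        # Soft-argmax: expected (x, y) of each heatmap, normalized to [0, 1].
        probs = heatmaps.view(b, k, -1).softmax(dim=-1).view(b, k, h, w)
        ys = torch.linspace(0, 1, h, device=probs.device)
        xs = torch.linspace(0, 1, w, device=probs.device)
        lm_y = (probs.sum(dim=3) * ys).sum(dim=2)    # (B, 50)
        lm_x = (probs.sum(dim=2) * xs).sum(dim=2)    # (B, 50)
        pooled = F.adaptive_avg_pool2d(self.feature_head(feat), 1).flatten(1)
        gaze = self.gaze_head(torch.cat([pooled, lm_x, lm_y], dim=1))
        return F.normalize(gaze, dim=1), torch.stack([lm_x, lm_y], dim=2)

# net = GazeNet(); gaze_vec, landmarks = net(torch.randn(2, 1, 36, 60))
```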

Efficient Visual Place Recognition by Adaptive CNN Landmark Matching

  • Chen, Yutian; Gan, Wenyan; Zhu, Yi; Tian, Hui; Wang, Cong; Ma, Wenfeng; Li, Yunbo; Wang, Dong; He, Jixian
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.11, pp.4084-4104, 2021
  • Visual place recognition (VPR) is a fundamental yet challenging task of mobile robot navigation and localization. The existing VPR methods are usually based on some pairwise similarity of image descriptors, so they are sensitive to visual appearance change and also computationally expensive. This paper proposes a simple yet effective four-step method that achieves adaptive convolutional neural network (CNN) landmark matching for VPR. First, based on the features extracted from existing CNN models, the regions with higher significance scores are selected as landmarks. Then, according to the coordinate positions of potential landmarks, landmark matching is improved by removing mismatched landmark pairs. Finally, considering the significance scores obtained in the first step, robust image retrieval is performed based on adaptive landmark matching, and it gives more weight to the landmark matching pairs with higher significance scores. To verify the efficiency and robustness of the proposed method, evaluations are conducted on standard benchmark datasets. The experimental results indicate that the proposed method reduces the feature representation space of place images by more than 75% with negligible loss in recognition precision. Also, it achieves a fast matching speed in similarity calculation, satisfying the real-time requirement.
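
A minimal sketch of the weighted landmark matching idea described above, assuming each landmark is already represented by a CNN descriptor, an image position, and a significance score; the thresholds, the simple position check standing in for mismatch removal, and the weighting rule are simplified assumptions, not the paper's exact procedure.

```python
import numpy as np

def weighted_landmark_similarity(query, ref, desc_thresh=0.8, pos_thresh=0.2):
    """query, ref: lists of landmarks, each a dict with
    'desc' (L2-normalized CNN descriptor), 'pos' (normalized x, y),
    and 'score' (significance score from the CNN feature maps).

    Returns a similarity in which geometrically consistent matches
    between high-significance landmarks count the most."""
    similarity = 0.0
    for q in query:
        best, best_ref = -1.0, None
        for r in ref:
            s = float(np.dot(q["desc"], r["desc"]))  # cosine similarity
            if s > best:
                best, best_ref = s, r
        if best < desc_thresh:
            continue                                  # no convincing match
        # Remove mismatched pairs whose image positions disagree too much.
        if np.linalg.norm(np.subtract(q["pos"], best_ref["pos"])) > pos_thresh:
            continue
        # Weight the match by both landmarks' significance scores.
        similarity += best * q["score"] * best_ref["score"]
    return similarity
```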

Generation and Detection of Cranial Landmark

  • Heo, Suwoong; Kang, Jiwoo; Kim, Yong Oock; Lee, Sanghoon
    • Journal of International Society for Simulation Surgery, v.2 no.1, pp.26-32, 2015
  • Purpose: When a surgeon examines the morphology of a patient's skull, the locations of craniometric landmarks in the 3D computed tomography (CT) volume are among the most important pieces of information for surgical purposes. These landmark locations can be found manually by the surgeon from the rendered 3D volume or from the 2D sagittal, axial, and coronal CT slices. Since there are many landmarks on the skull, finding them manually is time-consuming, exhausting, and occasionally inexact. These inefficiencies create a demand for an automatic localization technique for craniometric landmark points. In this paper, we therefore propose a novel method for automatically finding such landmark points. Materials and Methods: First, we align the experimental data (CT volumes) using the Frankfurt Horizontal Plane (FHP) and the Mid-Sagittal Plane (MSP), which are defined by three and two cranial landmark points, respectively. The target landmark of our experiment is the anterior nasal spine. Before constructing a statistical cubic model to be used for detecting the landmark location in a given CT volume, reference points for the anterior nasal spine were manually chosen by a surgeon in several CT volume sets. The statistical cubic model is constructed by computing weighted intensity means of these CT sets around the reference points. The landmark location in any given CT volume is then found at the position where the similarity function (a squared-difference function) between the volume and the model is minimal. Results: We used 5 CT volumes to construct the statistical cubic model and tested on 20 CT volumes, including those used to build the model. The subjects' ages ranged up to 2 years (24 months). The detected points in each data set were close to the reference points chosen manually by the surgeon, and the similarity function always reached its global minimum at the detected point. Conclusion: The experiments show that the proposed method performs well in locating the landmark point. This algorithm would help surgeons work efficiently with morphological information of the skull, and we expect it has potential for detecting anatomical landmarks beyond cranial ones.
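
The detection step described above (sliding the statistical cubic model over an aligned CT volume and taking the location where the squared-difference similarity is minimal) can be sketched with NumPy as follows; the model here is simply a weighted mean of training patches, and all sizes are illustrative assumptions.

```python
import numpy as np

def build_statistical_model(patches, weights=None):
    """patches: list of equally sized 3D intensity patches cut around the
    manually chosen reference point in each training CT volume."""
    patches = np.asarray(patches, dtype=float)
    return np.average(patches, axis=0, weights=weights)  # weighted mean patch

def detect_landmark(volume, model):
    """Return the voxel index where the squared difference between the
    model and the local patch of the (FHP/MSP-aligned) volume is minimal."""
    dz, dy, dx = model.shape
    best, best_idx = np.inf, None
    for z in range(volume.shape[0] - dz + 1):
        for y in range(volume.shape[1] - dy + 1):
            for x in range(volume.shape[2] - dx + 1):
                patch = volume[z:z+dz, y:y+dy, x:x+dx]
                ssd = np.sum((patch - model) ** 2)    # similarity function
                if ssd < best:
                    best, best_idx = ssd, (z, y, x)
    # Landmark location = patch origin plus the model's center offset.
    return tuple(i + s // 2 for i, s in zip(best_idx, model.shape))
```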

An improved algorithm for Detection of Elephant Flows (개선된 Elephant Flows 발견 알고리즘)

  • Joung, Jinoo; Choi, Yunki; Son, Sunghoon
    • The Journal of Korean Institute of Communications and Information Sciences, v.37B no.9, pp.849-858, 2012
  • We propose a scheme to accurately detect elephant flows. With the ever-increasing traffic, certain flows occupy the network heavily in terms of time and bandwidth; these flows are called elephant flows, and they raise complicated management issues for Internet traffic and services. One method to identify them is the Landmark LRU cache scheme, which improved on the original Least Recently Used (LRU) scheme. We propose a cache update algorithm that further improves the existing Landmark LRU: it increases the accuracy of elephant flow detection while maintaining the efficiency of Landmark LRU. We verified our algorithm by simulation on real wireless network traces from Sangmyung University and evaluated the improvement.
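
A minimal sketch of an LRU flow cache whose counters are restarted at periodic "landmark" points, in the spirit of the Landmark LRU scheme referenced above; the eviction rule, reset rule, and threshold are simplified assumptions, not the authors' improved update algorithm.

```python
from collections import OrderedDict

class LandmarkLRUCache:
    """LRU cache of per-flow packet counters that are restarted at periodic
    'landmark' points, so long-lived heavy (elephant) flows keep large
    counts while short bursts age out of the cache."""

    def __init__(self, capacity=1024, landmark_period=10000, threshold=500):
        self.flows = OrderedDict()        # flow_id -> packet count
        self.capacity = capacity
        self.landmark_period = landmark_period
        self.threshold = threshold        # count above which a flow is "elephant"
        self.packets_seen = 0

    def observe(self, flow_id):
        self.packets_seen += 1
        if self.packets_seen % self.landmark_period == 0:
            # Landmark point: restart counting for the next window.
            for f in self.flows:
                self.flows[f] = 0
        self.flows[flow_id] = self.flows.get(flow_id, 0) + 1
        self.flows.move_to_end(flow_id)     # mark as most recently used
        if len(self.flows) > self.capacity:
            self.flows.popitem(last=False)  # evict least recently used flow

    def elephants(self):
        return [f for f, c in self.flows.items() if c >= self.threshold]
```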

Map Integration Method using Relative Location (상대적 위치를 이용한 지도통합 방법 : 랜드마크 선정을 중심으로)

  • Kim, Jung-Ok; Park, Jae-June; Yu, Ki-Yun
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference, 2010.04a, pp.3-4, 2010
  • Map integration usually involves matching the common spatial objects in different datasets. Recent studies have matched objects using their relative location, defined by the spatial relationship between an object and a neighboring landmark. The landmark selection process is therefore an important part of map integration based on relative location. In this research, we describe an approach to determining landmarks automatically in different geospatial datasets.
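
A toy sketch of the relative-location comparison the abstract refers to, matching objects across two datasets by their offsets from a shared landmark; the data layout and the distance tolerance are assumptions for illustration only.

```python
import numpy as np

def match_by_relative_location(objects_a, objects_b, landmark_a, landmark_b,
                               tolerance=5.0):
    """objects_a, objects_b: dicts of object_id -> (x, y) in two datasets.
    landmark_a, landmark_b: coordinates of the same landmark in each dataset.
    Objects are matched when their offsets from the landmark nearly agree."""
    matches = []
    for id_a, pos_a in objects_a.items():
        rel_a = np.subtract(pos_a, landmark_a)   # location relative to landmark
        for id_b, pos_b in objects_b.items():
            rel_b = np.subtract(pos_b, landmark_b)
            if np.linalg.norm(rel_a - rel_b) <= tolerance:
                matches.append((id_a, id_b))
    return matches
```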

INS/Vision Integrated Navigation System Considering Error Characteristics of Landmark-Based Vision Navigation (랜드마크 기반 비전항법의 오차특성을 고려한 INS/비전 통합 항법시스템)

  • Kim, Youngsun; Hwang, Dong-Hwan
    • Journal of Institute of Control, Robotics and Systems, v.19 no.2, pp.95-101, 2013
  • This paper investigates the geometric effect of landmarks on the navigation error in landmark-based 3D vision navigation and introduces an INS/vision integrated navigation system that accounts for this effect. The integrated system uses the vision navigation results while taking into account the dilution of precision of the landmark geometry, and it also helps the vision navigation to account for this geometry. An indirect filter with a feedback structure is designed, in which the position and attitude errors are the measurements of the filter. The performance of the integrated system is evaluated through computer simulations. The simulation results show that the proposed algorithm works well and that better performance can be expected when the error characteristics of vision navigation are considered.
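
The dilution of precision (DOP) of landmark geometry mentioned above can be illustrated with the usual GNSS-style computation from unit line-of-sight vectors; treating vision landmarks like range sources here, and the example coordinates, are simplifying assumptions for the sketch, not the paper's formulation.

```python
import numpy as np

def landmark_dop(robot_pos, landmark_positions):
    """Geometric dilution of precision for a set of landmarks as seen
    from robot_pos, computed from unit line-of-sight vectors the same
    way GDOP is computed in GNSS."""
    robot_pos = np.asarray(robot_pos, dtype=float)
    rows = []
    for lm in landmark_positions:
        los = np.asarray(lm, dtype=float) - robot_pos
        rows.append(np.append(los / np.linalg.norm(los), 1.0))
    G = np.array(rows)                       # geometry matrix
    Q = np.linalg.inv(G.T @ G)               # covariance-shape matrix
    return float(np.sqrt(np.trace(Q)))       # smaller = better geometry

# Hypothetical layouts: well-spread landmarks give a lower DOP than clustered ones.
spread    = [(10, 0, 2), (0, 10, 2), (-10, 0, 2), (0, -10, 2)]
clustered = [(10, 0, 2), (10, 1, 2), (9, 0, 2), (9, 1, 2)]
print(landmark_dop((0, 0, 0), spread), landmark_dop((0, 0, 0), clustered))
```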

Landmark Detection Based on Sensor Fusion for Mobile Robot Navigation in a Varying Environment

  • Jin, Tae-Seok; Kim, Hyun-Sik; Kim, Jong-Wook
    • International Journal of Fuzzy Logic and Intelligent Systems, v.10 no.4, pp.281-286, 2010
  • We propose a space-and-time-based sensor fusion method and a robust landmark detection algorithm built on it for mobile robot navigation. Exploration of an unknown environment is an important task for the new generation of mobile robots, which may navigate by means of several monitoring systems such as sonar sensing or visual sensing. To fully utilize the information from these sensors, this paper first proposes a new sensor fusion technique in which the data sets from previous moments are properly transformed into and fused with the current data sets to enable accurate measurement. The proposed STSF (Space and Time Sensor Fusion) scheme is then applied to landmark recognition for mobile robot navigation in both structured and unstructured environments, and the experimental results demonstrate its landmark recognition performance.
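
A minimal sketch of the space-and-time idea described above: measurements taken at the previous pose are transformed into the current robot frame using the odometry increment and then fused with the current measurements. The 2D rigid-transform model, the nearest-neighbor gating, and the simple averaging fusion are assumptions for illustration, not the STSF scheme itself.

```python
import numpy as np

def to_current_frame(points_prev, dx, dy, dtheta):
    """Transform points sensed in the previous robot frame into the current
    frame, given the odometry increment (dx, dy, dtheta) of the robot."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])
    # Express previous-frame points relative to the new pose (row-vector form).
    return (np.asarray(points_prev, dtype=float) - np.array([dx, dy])) @ R

def fuse(points_prev_in_current, points_current, gate=0.3):
    """Average each current measurement with the nearest transformed previous
    measurement if they lie within 'gate' meters; otherwise keep it as is."""
    fused = []
    for p in points_current:
        d = np.linalg.norm(points_prev_in_current - p, axis=1)
        j = int(np.argmin(d))
        fused.append((p + points_prev_in_current[j]) / 2 if d[j] < gate else p)
    return np.array(fused)

# Hypothetical landmark detections (meters, robot frame):
prev_pts = np.array([[2.0, 1.0], [3.5, -0.5]])
curr_pts = np.array([[1.9, 0.8], [3.4, -0.7]])
print(fuse(to_current_frame(prev_pts, dx=0.1, dy=0.0, dtheta=0.05), curr_pts))
```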