• Title/Summary/Keyword: Vision based localization


Extended Support Vector Machines for Object Detection and Localization

  • Feyereisl, Jan; Han, Bo-Hyung
    • The Magazine of the IEIE / v.39 no.2 / pp.45-54 / 2012
  • Object detection is a fundamental task for many high-level computer vision applications such as image retrieval, scene understanding, activity recognition, and visual surveillance, among others. Although object detection is one of the most popular problems in computer vision and various algorithms have been proposed thus far, it remains notoriously difficult, mainly due to the lack of proper object representation models that can handle large variations in object structure and appearance. In this article, we review a branch of object detection algorithms based on Support Vector Machines (SVMs), a well-known max-margin technique for minimizing classification error. We introduce a few variations of SVMs, namely Structural SVMs and Latent SVMs, and discuss their applications to object detection and localization.

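As an editorial illustration only (not code from the article), the sketch below shows how a trained linear SVM decision function w·x + b is typically used to score candidate windows in sliding-window detection; the feature map, window shape, and learned parameters w and b are hypothetical placeholders.

```python
# Minimal sketch of sliding-window detection with a linear SVM (illustrative only).
import numpy as np

def detect(image_features, w, b, window_shape, stride=8, threshold=0.0):
    """Score every window with the linear SVM decision function w.x + b."""
    H, W, D = image_features.shape          # e.g. a dense feature map (HOG-like)
    wh, ww = window_shape
    detections = []
    for y in range(0, H - wh + 1, stride):
        for x in range(0, W - ww + 1, stride):
            phi = image_features[y:y + wh, x:x + ww, :].ravel()  # window descriptor
            score = float(np.dot(w, phi) + b)                    # max-margin score
            if score > threshold:
                detections.append((x, y, score))
    return detections
```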

Navigation and Localization of Mobile Robot Based on Vision and Sensor Network Using Fuzzy Rules (퍼지 규칙을 이용한 비전 및 무선 센서 네트워크 기반의 이동로봇의 자율 주행 및 위치 인식)

  • Heo, Jun-Young; Kang, Geun-Tack; Lee, Won-Chang
    • Proceedings of the IEEK Conference / 2008.06a / pp.673-674 / 2008
  • This paper presents a new navigation algorithm for an autonomous mobile robot that combines vision, IR sensors, and a Zigbee sensor network using fuzzy rules. We also show that the mobile robot developed with the proposed algorithm navigates well in complex unknown environments.

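The abstract does not give the rule base, so the following is only a hedged sketch of the general idea of fuzzy-rule steering from two range readings; the membership function, rule set, and defuzzification scheme are assumptions, not the authors' controller.

```python
# Illustrative fuzzy-rule steering from two IR distance readings (assumed design).
def near(d, lo=0.2, hi=0.8):
    """Degree to which distance d (metres) is 'near' (1 at lo, 0 at hi)."""
    return max(0.0, min(1.0, (hi - d) / (hi - lo)))

def fuzzy_turn(left_ir, right_ir):
    """Positive output = turn left, negative = turn right (illustrative units)."""
    rule_turn_right = near(left_ir)    # IF obstacle near on the left THEN turn right
    rule_turn_left = near(right_ir)    # IF obstacle near on the right THEN turn left
    num = 1.0 * rule_turn_left - 1.0 * rule_turn_right   # weighted rule consequents
    den = rule_turn_left + rule_turn_right + 1e-6
    return num / den                                     # centroid-style defuzzification

print(fuzzy_turn(left_ir=0.3, right_ir=1.0))  # obstacle on the left -> negative (turn right)
```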

Global Localization of Mobile Robots Using Omni-directional Images (전방위 영상을 이용한 이동 로봇의 전역 위치 인식)

  • Han, Woo-Sup; Min, Seung-Ki; Roh, Kyung-Shik; Yoon, Suk-June
    • Transactions of the Korean Society of Mechanical Engineers A / v.31 no.4 / pp.517-524 / 2007
  • This paper presents a global localization method using the circular correlation of an omni-directional image. The localization of a mobile robot, especially in indoor conditions, is a key component in the development of useful service robots. Though stereo vision is widely used for localization, its performance is limited by computational complexity and its narrow view angle. To compensate for these shortcomings, we utilize a single omni-directional camera, which can capture an instantaneous 360° panoramic image around the robot. Nodes near the robot are identified by the correlation coefficients of the CHL (Circular Horizontal Line) between each landmark image and the currently captured image. After finding possible nearby nodes, the robot moves to the nearest node based on the correlation values and the positions of these nodes. To accelerate computation, the correlation values are calculated using the Fast Fourier Transform. Experimental results in a real home environment show the feasibility of the method.
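As a rough illustration of the FFT-accelerated circular correlation described above (an assumption of how the CHL matching could look, not the paper's code), the sketch below computes a normalized circular correlation between a stored node CHL and the current CHL and returns the best coefficient and rotational shift.

```python
# Circular correlation of two 1-D CHL signals via the FFT (illustrative sketch).
import numpy as np

def circular_correlation(chl_ref, chl_cur):
    """Return the best normalized correlation coefficient and the circular shift."""
    a = (chl_ref - chl_ref.mean()) / (chl_ref.std() + 1e-9)   # z-normalize both signals
    b = (chl_cur - chl_cur.mean()) / (chl_cur.std() + 1e-9)
    # circular cross-correlation via the convolution theorem
    corr = np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b)))) / len(a)
    shift = int(np.argmax(corr))
    return float(corr[shift]), shift

# The node whose stored CHL correlates best with the current view is the nearest candidate:
# best_node = max(nodes, key=lambda n: circular_correlation(n.chl, current_chl)[0])
```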

Autonomous Ground Vehicle Localization Filter Design Using Landmarks with Non-Unique Features (비고유 특징을 갖는 의미정보를 이용한 지상 자율이동체 측위 기법)

  • Kim, Chan-Yeong; Hong, Daniel; Ra, Won-Sang
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.11 / pp.1486-1495 / 2018
  • This paper investigates the autonomous ground vehicle (AGV) localization filter design problem in GNSS-denied environments. It is assumed that the given landmarks do not have unique features due to the lack of prior knowledge about them. In such cases, the AGV may have difficulty distinguishing the position measurement of a detected landmark from those of other landmarks with the same feature, so conventional localization filters are not applicable. To resolve this technical issue, the localization filter design problem is formulated as a special form of data association that determines which landmark a detected feature actually originated from. The measurement hypotheses generated by landmarks with the same feature are evaluated with a nearest neighbor data association scheme to reduce the computational burden. The position measurement corresponding to the landmark with the most probable hypothesis is then used in the localization filter. Experiments under real driving conditions show that the proposed method provides satisfactory localization performance despite using non-unique landmarks.
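A minimal sketch of the nearest-neighbor data-association step described above, assuming a simplified planar measurement model; the function names and the measurement model are illustrative, not the paper's filter.

```python
# Nearest-neighbor association among landmarks sharing a non-unique feature class.
import numpy as np

def associate(z, feature_class, landmarks, predicted_pose, S):
    """z: measured 2-D landmark position; S: innovation covariance (2x2).
    landmarks: list of (feature_class, map_position). Returns the best map position or None."""
    S_inv = np.linalg.inv(S)
    best, best_d2 = None, np.inf
    for cls, p_map in landmarks:
        if cls != feature_class:                 # only hypotheses with the same feature
            continue
        z_pred = p_map - predicted_pose[:2]      # simplified (illustrative) measurement model
        nu = z - z_pred                          # innovation
        d2 = float(nu @ S_inv @ nu)              # squared Mahalanobis distance
        if d2 < best_d2:
            best, best_d2 = p_map, d2
    return best                                  # feed this measurement to the localization filter
```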

Comparative Analysis of VT-ADL Model Performance Based on Variations in the Loss Function (Loss Function 변화에 따른 VT-ADL 모델 성능 비교 분석)

  • Namjung Kim; Changjoon Park; Junhwi Park; Jaehyun Lee; Jeonghwan Gwak
    • Proceedings of the Korean Society of Computer Information Conference / 2024.01a / pp.41-43 / 2024
  • In this study, we focus on the Vision Transformer-based Anomaly Detection and Localization (VT-ADL) model and comparatively analyze how changing the loss function affects anomaly detection and localization performance on the MVTec dataset. The original loss function was replaced with a VAE loss, a combination of KL divergence and log-likelihood loss, and the resulting performance change was examined in depth. The experiments confirm that switching to the VAE loss noticeably improves the anomaly detection capability of the VT-ADL model, with an improvement of about 5% in PRO-score over the baseline. These results suggest that optimizing the loss function can have a significant impact on the overall performance of the VT-ADL model. The study also highlights the importance of loss function selection for anomaly detection and localization with Vision Transformer-based models and is expected to provide a useful reference for future related research.

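For reference, here is a minimal NumPy sketch of a VAE-style loss of the kind described above (a Gaussian KL divergence term plus a log-likelihood/reconstruction term); the exact weighting and reconstruction term used in the paper are not specified in the abstract, so this is an assumption.

```python
# Illustrative VAE loss: reconstruction (negative log-likelihood up to a constant) + KL term.
import numpy as np

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    """x, x_recon: flattened inputs and reconstructions; mu, logvar: latent Gaussian parameters."""
    recon = np.mean((x - x_recon) ** 2)                            # Gaussian log-likelihood ~ MSE
    kl = -0.5 * np.mean(1.0 + logvar - mu ** 2 - np.exp(logvar))   # KL( N(mu, sigma^2) || N(0, I) )
    return recon + beta * kl                                       # beta is an assumed weighting
```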

Development of a Hovering Robot System for Calamity Observation

  • Kang, M.S.; Park, S.; Lee, H.G.; Won, D.H.; Kim, T.J.
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.580-585 / 2005
  • A QRT (Quad-Rotor Type) hovering robot system is developed for quick detection and observation of circumstances in calamity environments such as indoor fire sites. The UAV (Unmanned Aerial Vehicle) is equipped with four propellers, each driven by an electric motor, an embedded DSP-based controller, an INS (Inertial Navigation System) using 3-axis rate gyros, a CCD camera with a wireless transmitter for observation, and an ultrasonic range sensor for height control. The developed hovering robot shows stable flight performance using RIC (Robust Internal-loop Compensator) based disturbance compensation and the vision-based localization method. The UAV can also avoid obstacles using eight IR and four ultrasonic range sensors. The VTOL (Vertical Take-Off and Landing) vehicle flies into indoor fire sites and sends the images captured by the CCD camera to the operator. This kind of small UAV can be widely used in various calamity observation tasks without endangering human operators in harmful environments.

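The paper's RIC-based control law is not reproduced here; the sketch below only illustrates, under stated assumptions, a generic reactive obstacle-avoidance correction computed from a ring of range sensors such as the eight IR and four ultrasonic sensors mentioned above.

```python
# Illustrative reactive obstacle avoidance from range readings (not the paper's RIC controller).
import math

def avoidance_command(ranges, angles, safe_dist=1.0, gain=0.5):
    """ranges: distances (m) from the range sensors; angles: sensor bearings (rad) in the body frame.
    Returns a (vx, vy) velocity correction pushing the vehicle away from close obstacles."""
    vx = vy = 0.0
    for r, a in zip(ranges, angles):
        if r < safe_dist:
            push = gain * (safe_dist - r) / safe_dist   # stronger push when the obstacle is closer
            vx -= push * math.cos(a)                    # move away from the obstacle direction
            vy -= push * math.sin(a)
    return vx, vy
```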

Indoor Localization by Matching of the Types of Vertices (모서리 유형의 정합을 이용한 실내 환경에서의 자기위치검출)

  • Ahn, Hyun-Sik
    • Journal of the Institute of Electronics Engineers of Korea SC / v.46 no.6 / pp.65-72 / 2009
  • This paper presents a vision-based localization method for indoor mobile robots that uses the types of vertices extracted from a monocular image. In the images captured by the robot's camera, the types of vertices are determined by searching for vertical edges and their branch edges under geometric constraints. To obtain correspondences between the corners of a 2-D map and the vertices in the images, the vertex types and geometric constraints are derived from a geometric analysis. The vertices are matched with the corners by a heuristic method using the types and positions of the vertices and corners. From the matched pairs, nonlinear equations derived from the perspective and rigid transformations are produced. The pose of the robot is computed by solving these equations with a least-squares optimization technique. Experimental results show that the proposed localization method is effective and applicable to localization in indoor environments.
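The paper's nonlinear equations are not given in the abstract, so the following is a hedged sketch of the final step under simplifying assumptions: a planar robot pose (x, y, theta) is refined by least squares from matched 2-D map corners and the image bearings of the corresponding vertices.

```python
# Illustrative least-squares pose estimation from matched corners and image bearings.
import numpy as np
from scipy.optimize import least_squares

def residuals(pose, corners_map, bearings_img):
    """corners_map: Nx2 corner positions; bearings_img: N bearings (rad) derived from the
    vertical-edge image positions via the camera intrinsics (assumed planar pinhole model)."""
    x, y, theta = pose
    dx = corners_map[:, 0] - x
    dy = corners_map[:, 1] - y
    predicted = np.arctan2(dy, dx) - theta
    err = bearings_img - predicted
    return np.arctan2(np.sin(err), np.cos(err))   # wrap angle differences to [-pi, pi]

def estimate_pose(corners_map, bearings_img, pose0=(0.0, 0.0, 0.0)):
    sol = least_squares(residuals, pose0, args=(corners_map, bearings_img))
    return sol.x                                   # refined (x, y, theta)
```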

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won; Kwon, Kee-Koo; Lee, Soo-In; Choi, Jeong-Won; Lee, Suk-Gyu
    • ETRI Journal / v.36 no.6 / pp.913-923 / 2014
  • This paper proposes a global mapping algorithm for multiple robots based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, using an object extraction method based on Lucas-Kanade optical flow motion detection and images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data obtained by all of the individual robots. Global mapping is time-consuming because map data from individual robots must be exchanged while all areas are searched. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all information around a robot simultaneously. The computation of the correction algorithm is reduced compared with existing methods by correcting only the objects' feature points. The proposed algorithm has two steps: first, a local map is created for each robot based on an omnidirectional-vision SLAM approach. Second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps built with the proposed algorithm against real maps.
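As an illustrative assumption (not the paper's algorithm), the sketch below shows the second step in its simplest form: per-robot local maps, taken here to be 2-D landmark point sets, are transformed by each robot's global pose and merged into one global map.

```python
# Illustrative merging of per-robot local maps into a global map.
import numpy as np

def merge_maps(local_maps, robot_poses):
    """local_maps: list of (N_i x 2) landmark arrays in each robot's frame.
    robot_poses: list of (x, y, theta) global poses of the robots. Returns one global point set."""
    merged = []
    for points, (x, y, theta) in zip(local_maps, robot_poses):
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        merged.append(points @ R.T + np.array([x, y]))   # rotate, then translate into the global frame
    return np.vstack(merged)
```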

Localization of a Monocular Camera using a Feature-based Probabilistic Map (특징점 기반 확률 맵을 이용한 단일 카메라의 위치 추정방법)

  • Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems / v.21 no.4 / pp.367-371 / 2015
  • In this paper, a novel localization method for a monocular camera is proposed that uses a feature-based probabilistic map. The pose of a camera is generally estimated from 3D-to-2D correspondences between a 3D map and the image plane through the PnP algorithm. In the computer vision community, an accurate 3D map is generated for camera pose estimation by optimization over a large image dataset. In the robotics community, the camera pose is instead estimated by probabilistic approaches even when features are scarce; however, an extra system is required because the camera alone cannot estimate the full state of the robot pose. Therefore, we propose an accurate localization method for a monocular camera that uses a probabilistic approach when the image dataset is insufficient, without any extra system. In our system, features from the probabilistic map are projected onto the image plane using a linear approximation. By minimizing the Mahalanobis distance between the features projected from the probabilistic map and the features extracted from a query image, an accurate pose of the monocular camera is estimated, starting from an initial pose obtained by the PnP algorithm. The proposed algorithm is demonstrated through simulations in 3D space.
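A minimal sketch of the described pipeline under assumptions about the interfaces: an initial pose from OpenCV's PnP solver, followed by a least-squares refinement that minimizes the Mahalanobis distance between map features projected into the image and the matched features extracted from the query image. Function and variable names are illustrative, not the paper's implementation.

```python
# PnP initialization followed by Mahalanobis-distance refinement (illustrative sketch).
import cv2
import numpy as np
from scipy.optimize import least_squares

def refine_pose(map_pts_3d, img_pts_2d, feature_covs, K, dist=None):
    """map_pts_3d: Nx3 map features; img_pts_2d: Nx2 matched image features;
    feature_covs: 2x2 covariances of the projected map features; K: 3x3 camera matrix."""
    if dist is None:
        dist = np.zeros(4)
    ok, rvec, tvec = cv2.solvePnP(map_pts_3d, img_pts_2d, K, dist)       # initial pose via PnP

    # Whitening matrices so that the residual norm equals the Mahalanobis distance.
    whitening = [np.linalg.cholesky(np.linalg.inv(C)).T for C in feature_covs]

    def residuals(p):
        proj, _ = cv2.projectPoints(map_pts_3d, p[:3], p[3:], K, dist)   # project map features
        res = []
        for W, z_hat, z in zip(whitening, proj.reshape(-1, 2), img_pts_2d):
            res.extend(W @ (z - z_hat))                                  # whitened reprojection error
        return np.asarray(res)

    sol = least_squares(residuals, np.hstack([rvec.ravel(), tvec.ravel()]))
    return sol.x[:3], sol.x[3:]                                          # refined rvec and tvec
```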