• Title/Summary/Keyword: Camera localization


Counting and Localizing Occupants using IR-UWB Radar and Machine Learning

  • Ji, Geonwoo; Lee, Changwon; Yun, Jaeseok
    • Journal of the Korea Society of Computer and Information, v.27 no.5, pp.1-9, 2022
  • Localization systems can be used in various circumstances, such as measuring population movement, rescue operations, and even security applications like infiltration detection. Vision sensors such as cameras, often used for localization, are susceptible to light and temperature conditions and can invade privacy. In this paper, we used ultra-wideband radar technology, which is not limited by the aforementioned problems, together with machine learning techniques to measure the number and locations of occupants in an indoor space behind a wall. We used four different algorithms, including extremely randomized trees, and compared their results in four different situations: detecting the number of occupants in a classroom; splitting the classroom into 28 locations and checking the position of an occupant; selecting one of the 28 locations, dividing it into 16 fine-grained locations, and checking the position of an occupant; and checking the positions of two occupants in different locations. Overall, all four algorithms showed good results, and we verified that the number and locations of occupants can be detected with high accuracy using machine learning. We also considered the possibility of service expansion using the oneM2M standard platform and expect more services and products to be developed if this technology is applied in various fields.
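As a rough illustration of the classification step, the sketch below trains a 1-nearest-neighbour classifier — a simple stand-in for the four tree-based algorithms compared in the paper, whose real radar features are not published — on synthetic energy-per-range-bin vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for radar feature vectors: each class (1..4 occupants)
# gets a different mean energy profile across 8 range bins.
def make_frames(n_per_class, n_classes=4, n_bins=8):
    X, y = [], []
    for c in range(n_classes):
        mean = np.zeros(n_bins)
        mean[:c + 1] = 2.0  # more occupants -> energy in more range bins
        X.append(mean + 0.3 * rng.standard_normal((n_per_class, n_bins)))
        y.append(np.full(n_per_class, c))
    return np.vstack(X), np.concatenate(y)

X_train, y_train = make_frames(50)
X_test, y_test = make_frames(10)

# 1-NN classification: label each test frame with its closest training frame.
d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
pred = y_train[np.argmin(d, axis=1)]
accuracy = (pred == y_test).mean()
print(f"occupant-count accuracy: {accuracy:.2f}")
```

With well-separated synthetic profiles even this baseline classifies nearly perfectly; the paper's point is that tree ensembles achieve similarly high accuracy on real through-wall radar data.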

Feature point extraction using scale-space filtering and Tracking algorithm based on comparing texturedness similarity (스케일-스페이스 필터링을 통한 특징점 추출 및 질감도 비교를 적용한 추적 알고리즘)

  • Park, Yong-Hee; Kwon, Oh-Seok
    • Journal of Internet Computing and Services, v.6 no.5, pp.85-95, 2005
  • This study proposes a method of feature point extraction using scale-space filtering and a feature point tracking algorithm based on a texturedness similarity comparison. With well-defined operators one can select a scale parameter for feature point extraction; this affects the selection and localization of the feature points and the performance of the tracking algorithm. This study suggests a feature extraction method using scale-space filtering. With a change in the camera's point of view or movement of an object in sequential images, the window of a feature point undergoes an affine transform. Traditionally, it is difficult to measure the similarity between corresponding points, and tracking errors often occur. This study therefore also suggests a tracking algorithm that extends the Shi-Tomasi-Kanade tracking algorithm with texturedness similarity.
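The Shi-Tomasi corner response at a single level of the scale space can be sketched as below; the toy image, scale, and window choices are illustrative, not the paper's:

```python
import numpy as np

def gaussian_kernel(sigma):
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    """Separable Gaussian smoothing (one scale-space filtering step)."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def min_eig_response(img, sigma):
    """Shi-Tomasi response: smaller eigenvalue of the structure tensor."""
    L = smooth(img, sigma)            # scale-space level at this sigma
    gy, gx = np.gradient(L)
    sxx = smooth(gx * gx, sigma)      # structure tensor, Gaussian-windowed
    syy = smooth(gy * gy, sigma)
    sxy = smooth(gx * gy, sigma)
    tr = sxx + syy
    det = sxx * syy - sxy * sxy
    disc = np.sqrt(np.maximum(tr**2 / 4 - det, 0.0))
    return tr / 2 - disc              # smaller eigenvalue

# A white square on black: corners should score higher than edges or flat areas.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
resp = min_eig_response(img, sigma=1.5)
print(resp[10, 10] > resp[10, 20] > resp[20, 20])
```

The scale parameter sigma controls both the smoothing and the integration window, which is exactly the choice the abstract says affects feature selection and localization.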


Loop Closure in a Line-based SLAM (직선기반 SLAM에서의 루프결합)

  • Zhang, Guoxuan; Suh, Il-Hong
    • The Journal of Korea Robotics Society, v.7 no.2, pp.120-128, 2012
  • The loop closure problem is one of the most challenging issues in the vision-based simultaneous localization and mapping community. It requires the robot to recognize a previously visited place from current camera measurements. While loop closure in previous works often relies on visual bag-of-words built on point features, in this paper we propose a line-based method to solve loop closure in corridor environments. We use both the floor line and the anchored vanishing point as loop closing features, and a two-step loop closure algorithm is devised to detect a known place and perform global pose correction. We propose the anchored vanishing point as a novel loop closure feature, as it includes position information and represents the vanishing points in both directions. In our system, the accumulated heading error is first reduced using an observation of a previously registered anchored vanishing point, and the observation of known floor lines then allows further pose correction. Experimental results show that our method is an efficient loop closure solution in structured indoor environments.
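A minimal sketch of how re-observing a registered vanishing point can reduce accumulated heading error, assuming (hypothetically) that the map stores the corridor direction in the global frame and that the correction fully trusts the observation; all angles are illustrative:

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

map_vp_bearing = math.radians(90.0)       # corridor direction stored in the map
robot_heading_est = math.radians(47.0)    # drifted heading estimate
vp_bearing_in_robot = math.radians(40.0)  # vanishing point observed in robot frame

# Predicted bearing of the vanishing point under the drifted heading,
# using the measurement model z = wrap(map_bearing - heading).
predicted = wrap(map_vp_bearing - robot_heading_est)
innovation = wrap(vp_bearing_in_robot - predicted)

# Since dz/dheading = -1, a full-trust update subtracts the innovation.
robot_heading_corrected = wrap(robot_heading_est - innovation)
print(round(math.degrees(robot_heading_corrected), 1))  # 50.0
```

The second step in the paper, observing known floor lines, would then refine position in the same pose-graph correction, which this sketch does not model.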

Development of Patrol Robot using DGPS and Curb Detection (DGPS와 연석추출을 이용한 순찰용 로봇의 개발)

  • Kim, Seung-Hun; Kim, Moon-June; Kang, Sung-Chul; Hong, Suk-Kyo; Roh, Chi-Won
    • Journal of Institute of Control, Robotics and Systems, v.13 no.2, pp.140-146, 2007
  • This paper demonstrates the development of a mobile robot for patrol. We fuse differential GPS, angle sensor, and odometry data using an extended Kalman filter to localize the mobile robot in outdoor environments. An important feature of road environments is the existence of curbs, so we also propose an algorithm to find the positions of curbs from laser range finder data using the Hough transform. The mobile robot builds a map of the road curbs, and this map is used for tracking and localization. The patrol robot system consists of a mobile robot and a control station. The mobile robot sends image data from a camera to the control station, which receives and displays it. The system can be used in two modes, teleoperated or autonomous. In teleoperated mode, the teleoperator commands the mobile robot based on the image data. In autonomous mode, the mobile robot autonomously tracks predefined waypoints, so we have designed a path tracking controller to follow the path. Road experiments confirmed that the proposed algorithms perform properly in outdoor environments.
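The curb-extraction step can be sketched as a standard Hough vote over the (theta, rho) line parameterization; the synthetic points below stand in for laser range finder returns along a curb edge, and the grid resolution is illustrative:

```python
import numpy as np

# Synthetic laser returns along a straight curb edge at y = 2 m.
pts = np.array([[x, 2.0] for x in np.linspace(0.0, 5.0, 30)])

# Hough accumulator over (theta, rho) with rho = x*cos(theta) + y*sin(theta).
thetas = np.deg2rad(np.arange(0, 180, 1))
rho_bins = np.linspace(-8.0, 8.0, 161)          # 0.1 m rho resolution
acc = np.zeros((len(thetas), len(rho_bins) - 1), dtype=int)

for x, y in pts:
    rhos = x * np.cos(thetas) + y * np.sin(thetas)
    for i, r in enumerate(rhos):
        j = np.searchsorted(rho_bins, r) - 1    # rho bin for this vote
        if 0 <= j < acc.shape[1]:
            acc[i, j] += 1

# The accumulator peak gives the dominant line: roughly theta = 90 deg, rho = 2 m.
ti, rj = np.unravel_index(np.argmax(acc), acc.shape)
theta_deg = np.rad2deg(thetas[ti])
rho = 0.5 * (rho_bins[rj] + rho_bins[rj + 1])
print(theta_deg, round(rho, 2))
```

In the paper's setting the peak line would then be added to the curb map used by the EKF for tracking and localization.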

A Development of the Autonomous Driving System based on a Precise Digital Map (정밀 지도에 기반한 자율 주행 시스템 개발)

  • Kim, Byoung-Kwang; Lee, Cheol Ha; Kwon, Surim; Jung, Changyoung; Chun, Chang Hwan; Park, Min Woo; Na, Yongcheon
    • Journal of Auto-vehicle Safety Association, v.9 no.2, pp.6-12, 2017
  • An autonomous driving system based on a precise digital map is developed. The system is implemented on Hyundai's Tucson fuel cell car, which has a camera, smart cruise control (SCC) and blind spot detection (BSD) radars, 4-layer LiDARs, and a standard GPS module. The precise digital map contains various information such as lanes, speed bumps, crosswalks, and landmarks, distinguishable at lane level. The system fuses sensed data around the vehicle for localization and estimates the vehicle's location on the precise map. Objects around the vehicle are detected by the sensor fusion system. Collision threat assessment is performed by detecting dangerous vehicles on the precise map. When an obstacle is on the driving path, the system estimates the time to collision and slows down. The vehicle has driven autonomously in the Hyundai-Kia Namyang Research Center.
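A minimal sketch of the collision-threat step described above, assuming a simple constant-speed time-to-collision rule; the threshold and speeds are illustrative, not the system's:

```python
def time_to_collision(gap_m, ego_speed_mps, obstacle_speed_mps):
    """TTC = gap / closing speed; infinite if the gap is not closing."""
    closing = ego_speed_mps - obstacle_speed_mps
    return float("inf") if closing <= 0 else gap_m / closing

# Ego at 15 m/s approaching an obstacle on the path doing 5 m/s, 30 m ahead.
ttc = time_to_collision(gap_m=30.0, ego_speed_mps=15.0, obstacle_speed_mps=5.0)
command = "slow down" if ttc < 4.0 else "keep speed"
print(ttc, command)  # 3.0 slow down
```

The real system additionally gates this check on whether the detected vehicle actually occupies the ego lane in the precise map, which is what makes lane-level map accuracy matter.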

AVM Stop-line Detection based Longitudinal Position Correction Algorithm for Automated Driving on Urban Roads (AVM 정지선인지기반 도심환경 종방향 측위보정 알고리즘)

  • Kim, Jongho; Lee, Hyunsung; Yoo, Jinsoo; Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association, v.12 no.2, pp.33-39, 2020
  • This paper presents an Around View Monitoring (AVM) stop-line-detection-based longitudinal position correction algorithm for automated driving on urban roads. The poor positioning accuracy of low-cost GPS causes many problems for precise path tracking, so this study aims to improve the longitudinal positioning accuracy of low-cost GPS. The algorithm has three main processes. The first is stop-line detection: the stop-line is detected in the AVM camera image using the Hough transform. The second is map matching: to find the corrected vehicle position, the detected line is matched to the stop-line of the HD map using the Iterative Closest Point (ICP) method. Third, the longitudinal position from the low-cost GPS is updated with the corrected vehicle position using a Kalman filter. The proposed algorithm is implemented in the Robot Operating System (ROS) environment and verified on actual urban road driving data. Test results show that the longitudinal localization performance was improved compared to low-cost GPS alone.
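The third process can be sketched as a scalar Kalman measurement update, treating the ICP-corrected position as the measurement of the GPS-based longitudinal estimate; the variances below are illustrative, not from the paper:

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update (measurement model H = 1)."""
    K = P / (P + R)            # Kalman gain: trust measurement more when R < P
    x_new = x + K * (z - x)    # corrected longitudinal position
    P_new = (1 - K) * P        # reduced uncertainty after the update
    return x_new, P_new

x_gps, P_gps = 103.0, 4.0      # GPS longitudinal estimate and variance (m, m^2)
z_icp, R_icp = 100.0, 1.0      # ICP/stop-line corrected position and variance

x, P = kalman_update(x_gps, P_gps, z_icp, R_icp)
print(round(x, 2), round(P, 2))  # 100.6 0.8
```

Because the stop-line fix is far more precise than low-cost GPS (smaller R), the update pulls the estimate strongly toward it, which is the mechanism behind the reported longitudinal improvement.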

Odor Source Tracking of Mobile Robot with Vision and Odor Sensors (비전과 후각 센서를 이용한 이동로봇의 냄새 발생지 추적)

  • Ji, Dong-Min; Lee, Jeong-Jun; Kang, Geun-Taek; Lee, Won-Chang
    • Journal of the Korean Institute of Intelligent Systems, v.16 no.6, pp.698-703, 2006
  • This paper proposes an approach to searching for an odor source using an autonomous mobile robot equipped with vision and odor sensors. The robot initially navigates around a specific area with its vision system until it finds an object in the camera image. The robot approaches the object found in its field of view and checks with the odor sensors whether it is releasing odor. If so, the odor is classified and localized with a classification algorithm based on a neural network. AMOR (Autonomous Mobile Olfactory Robot) was built and used for the experiments. Experimental results on the classification and localization of odor sources show the validity of the proposed algorithm.

Study on Three-dimension Reconstruction to Low Resolution Image of Crops (작물의 저해상도 이미지에 대한 3차원 복원에 관한 연구)

  • Oh, Jang-Seok; Hong, Hyung-Gil; Yun, Hae-Yong; Cho, Yong-Jun; Woo, Seong-Yong; Song, Su-Hwan; Seo, Kap-Ho; Kim, Dae-Hee
    • Journal of the Korean Society of Manufacturing Process Engineers, v.18 no.8, pp.98-103, 2019
  • A more accurate method of feature point extraction and matching for three-dimensional reconstruction using low-resolution images of crops is proposed herein. This is an important problem in basic computer vision: from exact matching, not only the three-dimensional reconstruction but also map building and camera location information, as in simultaneous localization and mapping, can be computed. The results of this study suggest applicable methods that produce accurate results on low-resolution images, which is expected to contribute to a system that measures crop growth condition.

Three-dimensional Map Construction of Indoor Environment Based on RGB-D SLAM Scheme

  • Huang, He; Weng, FuZhou; Hu, Bo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.2, pp.45-53, 2019
  • RGB-D SLAM (Simultaneous Localization and Mapping) refers to the technology of using a depth camera as the visual sensor for SLAM. In view of the high cost of laser sensors and the indefinite scale of traditional monocular and binocular cameras in map construction, a method for creating a three-dimensional map of an indoor environment from depth data combined with an RGB-D SLAM scheme is studied. The method uses a mobile robot system equipped with a consumer-grade RGB-D sensor (Kinect) to acquire depth data, and then creates indoor three-dimensional point cloud maps in real time through key technologies such as positioning point generation, closed-loop detection, and map construction. Field experiment results show that the average error of the point cloud map created by the algorithm is 0.0045 m, which ensures the stability of construction from depth data and enables accurate real-time three-dimensional maps of unknown indoor environments.
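The core of point-cloud map construction is pinhole back-projection of each depth pixel into 3D; a minimal sketch with illustrative intrinsics (not Kinect's calibrated values) on a toy 64x48 depth frame:

```python
import numpy as np

# Illustrative pinhole intrinsics for a 64x48 toy depth frame.
fx = fy = 525.0
cx, cy = 32.0, 24.0

depth = np.full((48, 64), 2.0)     # a flat wall 2 m in front of the sensor

# Back-project every pixel (u, v) with depth z to camera coordinates (x, y, z).
v, u = np.mgrid[0:48, 0:64]
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
cloud = np.dstack((x, y, z)).reshape(-1, 3)
print(cloud.shape)  # (3072, 3)
```

In a full RGB-D SLAM pipeline each such per-frame cloud is transformed by the estimated camera pose before being merged into the global map, with loop-closure detection correcting accumulated pose drift.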

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi; Cho, Hae Min; Lee, Seongwon; Kim, Euntai
    • The Journal of Korea Robotics Society, v.14 no.2, pp.87-93, 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end on both RGB images and 3D point cloud information. We generate a new input that consists of RGB and range information. After training, the relocalization system outputs the pose of the sensor corresponding to each new input it receives. In most cases, however, a mobile robot navigation system has successive sensor measurements. To improve localization performance, the output of the CNN is therefore used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
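A toy one-dimensional version of the smoothing step: the CNN's noisy pose outputs serve as measurements for a particle filter whose estimate varies less than the raw outputs. The known-odometry assumption and all noise levels are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

true_pose = np.linspace(0.0, 10.0, 50)            # robot moves steadily
cnn_meas = true_pose + rng.standard_normal(50)    # noisy CNN pose regressions

n = 500
particles = np.zeros(n)                           # initialized at the start pose
est = []
prev = 0.0
for k, z in enumerate(cnn_meas):
    step = true_pose[k] - prev                    # odometry step (assumed known)
    prev = true_pose[k]
    particles += step + 0.1 * rng.standard_normal(n)   # motion update with noise
    w = np.exp(-0.5 * (z - particles) ** 2)            # weight by CNN measurement
    w /= w.sum()
    est.append(np.sum(w * particles))                  # weighted-mean pose estimate
    particles = rng.choice(particles, size=n, p=w)     # resample

err_raw = np.abs(cnn_meas - true_pose).mean()
err_pf = np.abs(np.array(est) - true_pose).mean()
print(err_raw, err_pf)
```

Because the motion model is much less noisy than the CNN measurements here, the filtered trajectory tracks the true pose considerably more closely than the raw CNN outputs do, which mirrors the paper's motivation for adding the filter.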