• Title/Summary/Keyword: autonomous map building

Analysis of Applicability of Visual SLAM for Indoor Positioning in the Building Construction Site (Visual SLAM의 건설현장 실내 측위 활용성 분석)

  • Kim, Taejin;Park, Jiwon;Lee, Byoungmin;Bae, Kangmin;Yoon, Sebeen;Kim, Taehoon
    • Proceedings of the Korean Institute of Building Construction Conference / 2022.11a / pp.47-48 / 2022
  • The positioning technology that measures the position of a person or object is a key technology for dealing with locations in a real coordinate system and for converging the real and virtual worlds, as in digital twins, augmented reality, virtual reality, and autonomous driving. When estimating the location of a person or object at an indoor construction site, there are constraints: location information cannot be received from outside, the communication infrastructure is insufficient, and it is difficult to install additional devices. Therefore, this study tested the direct sparse odometry algorithm, one of the visual Simultaneous Localization and Mapping (vSLAM) methods that estimate the current location and a map of the surroundings using only image information, at an indoor construction site and analyzed its applicability as an indoor positioning technology. As a result, it was found that the surrounding map and the current location can be properly estimated even at an indoor construction site, which has relatively few feature points. The results of this study can serve as reference data for researchers working on indoor positioning technology for construction sites.
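Direct methods such as the direct sparse odometry algorithm mentioned in this entry track the camera by minimizing a photometric error over a sparse set of pixels rather than matching explicit features. The following minimal Python/NumPy sketch illustrates that residual only; the camera matrix K, the pixel list, and the per-pixel depths are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the photometric error that direct visual odometry minimizes.
# K, pts, depths, and the candidate pose (R, t) are assumed inputs for illustration.
import numpy as np

def photometric_error(img_ref, img_cur, pts, depths, K, R, t):
    """Sum of squared intensity differences for sparse reference pixels
    reprojected into the current frame under a candidate pose (R, t)."""
    K_inv = np.linalg.inv(K)
    err = 0.0
    for (u, v), d in zip(pts, depths):
        p_ref = d * (K_inv @ np.array([u, v, 1.0]))   # back-project pixel to 3D
        p_cur = R @ p_ref + t                         # transform into current frame
        uv = K @ (p_cur / p_cur[2])                   # project into current image
        u2, v2 = int(round(uv[0])), int(round(uv[1]))
        if 0 <= v2 < img_cur.shape[0] and 0 <= u2 < img_cur.shape[1]:
            err += (float(img_cur[v2, u2]) - float(img_ref[v, u])) ** 2
    return err
```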


Mobile Robot Navigation using a Dynamic Multi-sensor Fusion

  • Kim, San-Ju;Jin, Tae-Seok;Lee, Oh-Keol;Lee, Jang-Myung
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.240-243 / 2003
  • This study is a preliminary step toward developing a multi-purpose autonomous carrier mobile robot that can transport trolleys or heavy goods and serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, such as sonar and IR sensors, for a map-building mobile robot to navigate, and to present an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We explain the robot system architecture designed and implemented in this study and give a short review of existing techniques, since several recent thorough books and review papers cover this topic. We first deal with the general principle of the navigation and guidance architecture, and then with the detailed functions for recognizing and updating the environment, obstacle detection, and motion assessment, together with the first results from the simulation runs.
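A common way to combine range readings from two sensors, as in the sonar/IR fusion this entry describes, is inverse-variance weighting before writing the result into the map. The sketch below is a hedged illustration of that idea; the variance values are assumptions, not the paper's parameters.

```python
# Hedged sketch: fusing a sonar and an IR range reading by inverse-variance weighting
# before inserting the result into a grid map. Sensor variances are assumed values.
def fuse_ranges(r_sonar, r_ir, var_sonar=0.09, var_ir=0.01):
    """Return the minimum-variance linear combination of the two readings."""
    w_sonar = 1.0 / var_sonar
    w_ir = 1.0 / var_ir
    fused = (w_sonar * r_sonar + w_ir * r_ir) / (w_sonar + w_ir)
    fused_var = 1.0 / (w_sonar + w_ir)
    return fused, fused_var
```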


Integrated Path Planning and Collision Avoidance for an Omni-directional Mobile Robot

  • Kim, Dong-Hun
    • International Journal of Fuzzy Logic and Intelligent Systems / v.10 no.3 / pp.210-217 / 2010
  • This paper presents integrated path planning and collision avoidance for an omni-directional mobile robot. In this scheme, the autonomous mobile robot finds the shortest path by following the descending gradient of a navigation function to reach a goal. In doing so, the robot based on the proposed approach attempts to overcome some of the typical problems that arise in conventional robot navigation. In particular, this paper presents a set of analyses for an omni-directional mobile robot to avoid trapped situations in two representative scenarios: 1) a U-shaped deep narrow obstacle and 2) the narrow-passage problem between two obstacles. The proposed navigation scheme eliminates the non-feasible area in both cases with the help of the descending gradient of the navigation function and the characteristics of an omni-directional mobile robot. The simulation results show that the proposed navigation scheme can effectively construct a path-planning system capable of reaching a goal and avoiding obstacles despite possible trapped situations under uncertain world knowledge.
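Steering along the descending gradient of a navigation function, as this entry describes, can be sketched with a standard attractive/repulsive potential. The Python sketch below is a minimal illustration under that common assumption; the gain values and the finite-difference gradient are not taken from the paper.

```python
# Sketch of gradient descent on a navigation (potential) function.
# Attractive term pulls toward the goal; repulsive terms push away from nearby obstacles.
import numpy as np

def nav_function(q, goal, obstacles, k_att=1.0, k_rep=10.0, d0=1.0):
    """Attractive potential toward the goal plus repulsive terms near obstacles."""
    U = 0.5 * k_att * np.sum((q - goal) ** 2)
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if d < d0:
            U += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return U

def step(q, goal, obstacles, lr=0.05, eps=1e-4):
    """One gradient-descent step; an omni-directional robot can follow it in any direction."""
    grad = np.zeros(2)
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        grad[i] = (nav_function(q + dq, goal, obstacles) -
                   nav_function(q - dq, goal, obstacles)) / (2 * eps)
    return q - lr * grad
```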

A Study on the Sensor Fusion Method to Improve Localization of a Mobile Robot (이동로봇의 위치추정 성능개선을 위한 센서융합기법에 관한 연구)

  • Jang, Chul-Woong;Jung, Ki-Ho;Kong, Jung-Shik;Jang, Mun-Suk;Kwon, Oh-Sang;Lee, Eung-Hyuk
    • Proceedings of the KIEE Conference / 2007.10a / pp.317-318 / 2007
  • One of the important capabilities of an autonomous mobile robot is to build a map of the surrounding environment and estimate its own location within it. This paper suggests a sensor fusion method using a laser range finder and a monocular vision sensor for simultaneous localization and map building. The robot observes corner points in the environment as features using the laser range finder and extracts SIFT features with the monocular vision sensor. We verify the improved localization performance of the mobile robot through experiments.
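When the same corner landmark is estimated from both the laser range finder and the vision sensor, the two Gaussian estimates can be combined in information form. The sketch below illustrates that kind of fusion; it is an assumption about how the estimates could be merged, not the paper's exact filter.

```python
# Hedged sketch: fusing two Gaussian estimates of the same 2D landmark
# (one from the laser range finder, one from SIFT-based vision).
import numpy as np

def fuse_landmark(z_laser, P_laser, z_vision, P_vision):
    """Information-filter style combination of two estimates (mean, covariance)."""
    I1, I2 = np.linalg.inv(P_laser), np.linalg.inv(P_vision)
    P = np.linalg.inv(I1 + I2)            # fused covariance
    z = P @ (I1 @ z_laser + I2 @ z_vision)  # fused mean
    return z, P
```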


Development of lane-level location data exchange framework based on high-precision digital map (정밀전자지도 기반의 차로 수준의 위치정보 교환 프레임워크 개발)

  • Yang, Inchul;Jeon, Woo Hoon
    • Journal of Digital Contents Society / v.19 no.8 / pp.1617-1623 / 2018
  • It is necessary to develop a next-generation location referencing method with higher accuracy, as advanced technologies such as autonomous vehicles require more accurate location data. Thus, we proposed a framework for a lane-level location referencing method (L-LRM) based on a high-precision digital road network map and developed a tool capable of analyzing and evaluating the proposed method. Firstly, the necessity and definition of the location referencing method are presented, followed by the proposal of an L-LRM framework with a fundamental structure of the high-precision digital road network map. Secondly, the architecture of the analysis and evaluation tool is described, and the Windows application program developed using the C/C++ programming language is introduced. Finally, we demonstrate the performance of the proposed framework and the application program using two different high-precision digital maps with randomly generated road event data.
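A lane-level location reference of the kind the L-LRM framework proposes essentially pins an event to a link, a lane, and an offset on the high-precision map. The sketch below is only a guess at such a data structure; the field names and the `hd_map.match` helper are hypothetical and are not defined by the paper.

```python
# Hedged sketch of a possible lane-level location reference; all names are assumptions.
from dataclasses import dataclass

@dataclass
class LaneLevelLocationRef:
    link_id: str      # identifier of the road link in the high-precision map
    lane_index: int   # lane number counted from a reference edge
    offset_m: float   # longitudinal offset from the link start, in meters
    length_m: float   # extent of the referenced road event along the lane, in meters

def encode(event_position, hd_map):
    """Map-match an absolute position onto a lane-level reference (hypothetical matcher API)."""
    link, lane, offset = hd_map.match(event_position)
    return LaneLevelLocationRef(link.id, lane, offset, 0.0)
```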

Development of a SLAM System for Small UAVs in Indoor Environments using Gaussian Processes (가우시안 프로세스를 이용한 실내 환경에서 소형무인기에 적합한 SLAM 시스템 개발)

  • Jeon, Young-San;Choi, Jongeun;Lee, Jeong Oog
    • Journal of Institute of Control, Robotics and Systems / v.20 no.11 / pp.1098-1102 / 2014
  • Localization of aerial vehicles and map building of flight environments are key technologies for the autonomous flight of small UAVs. In outdoor environments, an unmanned aircraft can easily use a GPS (Global Positioning System) for localization with acceptable accuracy. However, as GPS is not available in indoor environments, the development of a SLAM (Simultaneous Localization and Mapping) system suitable for small UAVs is needed. In this paper, we suggest a vision-based SLAM system that uses vision sensors and an AHRS (Attitude Heading Reference System) sensor. Feature points in images captured by the vision sensor are obtained using a GPU (Graphics Processing Unit) based SIFT (Scale-Invariant Feature Transform) algorithm. Those feature points are then combined with attitude information obtained from the AHRS to estimate the position of the small UAV. Based on the location information and color distribution, a Gaussian process model is generated, which serves as the map. The experimental results show that the position of a small unmanned aircraft is estimated properly and the map of the environment is constructed by the proposed method. Finally, the reliability of the proposed method is verified by comparing the estimated values with the actual values.
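A Gaussian process map of the kind this entry describes regresses an observed quantity (here, color) onto estimated positions. The sketch below shows plain GP regression with an RBF kernel; the hyperparameters and the scalar color target are simplifying assumptions, not the paper's model.

```python
# Minimal Gaussian-process regression sketch (RBF kernel): predict color intensity
# at query positions from (position, color) training pairs. Hyperparameters assumed.
import numpy as np

def rbf(A, B, length=1.0, sigma=1.0):
    """Squared-exponential kernel between row vectors of A (n x d) and B (m x d)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return sigma**2 * np.exp(-0.5 * d2 / length**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """Posterior mean of the GP at the query positions X_test."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)
```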

A 2D / 3D Map Modeling of Indoor Environment (실내환경에서의 2 차원/ 3 차원 Map Modeling 제작기법)

  • Jo, Sang-Woo;Park, Jin-Woo;Kwon, Yong-Moo;Ahn, Sang-Chul
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.355-361 / 2006
  • In large-scale environments like airports, museums, large warehouses, and department stores, autonomous mobile robots will play an important role in security and surveillance tasks. Robotic security guards will provide surveyed information about such environments and communicate with a human operator using that data, for example whether an object is present or a window is open. Both for visualization of information and as a human-machine interface for remote control, a 3D model can give much more useful information than the typical 2D maps used in many robotic applications today. It is easier to understand and makes the user feel present at the robot's location, so the user can interact with the robot more naturally in a remote circumstance and see structures such as windows and doors that cannot be seen in a 2D model. In this paper we present a simple and easy-to-use method to obtain a 3D textured model. To express reality, we need to integrate the 3D models and real scenes. Most other 3D modeling methods use two data acquisition devices: one for obtaining the 3D model and another for obtaining realistic textures. In this case, the former is a 2D laser range-finder and the latter a common camera. Our algorithm consists of building a measurement-based 2D metric map acquired by the laser range-finder, texture acquisition/stitching, and texture-mapping onto the corresponding 3D model. The algorithm is implemented with a laser sensor for obtaining the 2D/3D metric map and two cameras for gathering texture. Our geometric 3D model consists of planes that model the floor and walls. The geometry of the planes is extracted from the 2D metric map data. Textures for the floor and walls are generated from images captured by two IEEE 1394 cameras with a wide field-of-view angle. Image stitching and image cutting are used to generate textured images corresponding to the 3D model. The algorithm is applied to two cases: a corridor and a four-walled room-like space of a building. The generated 3D map model of the indoor environment is exported in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system through the WWW.
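The wall geometry step this entry describes amounts to extruding 2D line segments from the laser metric map into vertical planes. The Python sketch below illustrates that extrusion only; the segment format and the fixed ceiling height are assumptions, and texture stitching is omitted.

```python
# Hedged sketch: extrude 2D wall segments from a metric map into 3D wall quads.
# A segment is ((x1, y1), (x2, y2)); the ceiling height is an assumed constant.
def extrude_walls(segments_2d, height=2.5):
    """Turn each 2D wall segment into a 3D quad given as four corner points."""
    quads = []
    for (x1, y1), (x2, y2) in segments_2d:
        quads.append([(x1, y1, 0.0), (x2, y2, 0.0),
                      (x2, y2, height), (x1, y1, height)])
    return quads
```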


Simulation of Mobile Robot Navigation based on Multi-Sensor Data Fusion by Probabilistic Model

  • Jin, Tae-seok
    • Journal of the Korean Society of Industry Convergence / v.21 no.4 / pp.167-174 / 2018
  • Presently, the exploration of unknown environments is an important task in the development of mobile robots, and mobile robots are navigated by a number of methods, using navigation systems such as sonar-sensing or visual-sensing systems. To fully utilize the strengths of both the sonar and visual sensing systems, multi-sensor data fusion (MSDF) has become a useful method for navigation and collision avoidance in mobile robotics, and its applicability to map building and navigation has been exploited in recent years. This paper is a preliminary step toward developing a multi-purpose autonomous carrier mobile robot that can transport trolleys or heavy goods and serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, such as ultrasonic and IR sensors, for mobile robot navigation, and to present an experimental mobile robot designed to operate autonomously within indoor environments. Simulation results with a mobile robot demonstrate the effectiveness of the discussed methods.
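A standard probabilistic way to fuse ultrasonic and IR readings into a map, in the spirit of the MSDF approach this entry describes, is a log-odds occupancy-grid update applied once per sensor reading. The sketch below illustrates that update; the inverse sensor model probabilities are assumed values, not the paper's.

```python
# Hedged sketch: Bayesian log-odds occupancy update fusing two range sensors per cell.
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

def update_cell(l_prior, hit, p_hit=0.7, p_miss=0.4):
    """Log-odds update of one grid cell for a single sensor reading."""
    return l_prior + logodds(p_hit if hit else p_miss)

# Fusing both sensors is just applying the update once per reading:
l = 0.0                        # prior log-odds (p = 0.5)
l = update_cell(l, hit=True)   # ultrasonic reports the cell as occupied
l = update_cell(l, hit=True)   # IR agrees, so confidence grows
p_occupied = 1.0 - 1.0 / (1.0 + np.exp(l))
```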

Development of Range Sensor Based Integrated Navigation System for Indoor Service Robots (실내용 서비스 로봇을 위한 거리 센서 기반의 통합 자율 주행 시스템 개발)

  • Kim Gunhee;Kim Munsang;Chung Woojin
    • Journal of Institute of Control, Robotics and Systems / v.10 no.9 / pp.785-798 / 2004
  • This paper introduces the development of a range-sensor-based integrated navigation system for a multi-functional indoor service robot, called PSR (Public Service Robot System). The proposed navigation system includes hardware integration for sensors and actuators, the development of crucial navigation algorithms such as mapping, localization, and path planning, and planning schemes such as error/fault handling. Major advantages of the proposed system are as follows: 1) a range-sensor-based generalized navigation system; 2) no need for modification of environments; 3) intelligent navigation-related components; 4) a framework supporting the selection of multiple behaviors and error/fault handling schemes. Experimental results are presented to show the feasibility of the proposed navigation system. The result of this research has been successfully applied to our three service robots in a variety of task domains, including delivery, patrol, guide, and floor-cleaning tasks.
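The behavior-selection and error/fault-handling framework this entry mentions can be pictured as a simple priority rule over the robot state. The sketch below is only an illustration of that pattern; the behavior names, state flags, and recovery rule are assumptions, not the PSR system's actual components.

```python
# Hedged sketch of behavior selection with a simple error-handling rule.
def select_behavior(state):
    """Pick a navigation behavior from the robot state (a plain dict here)."""
    if state.get("localization_lost"):
        return "relocalize"        # fault handling: recover the pose estimate first
    if state.get("obstacle_ahead"):
        return "avoid_obstacle"
    if state.get("goal_reached"):
        return "idle"
    return "follow_path"
```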

A PATH PLANNING of SMEARING ROBOT on Auto CAD

  • Hyun, Woong-Keun;Shin, Dong-Soo
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 1999.10a / pp.539-543 / 1999
  • This paper describes a sweeping path planning algorithm for an autonomous smearing robot on a commercial AutoCAD system. An automatic planner generates a sweeping path pattern through five proposed basic procedures: (1) interfacing with the architectural CAD system, (2) off-line obstacle map building, (3) scanning the whole workspace for subgoals of the sweeping line, (4) determining the tracking sequence of the subgoals, and (5) obstacle avoidance. A sweeping path is planned by sequentially connecting the tracking points in such a way that (1) the connected line segments should be crossed, (2) the total tracking path should be as short as possible, and (3) the tracking line should not pass through obstacles. Feasibility of the developed techniques has been demonstrated on real architectural CAD drafts.
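A sweeping pattern over an obstacle map derived from the CAD drawing, as this entry outlines, is commonly generated as a boustrophedon (back-and-forth) traversal that skips obstacle cells. The Python sketch below illustrates that pattern under an assumed grid representation (0 = free, 1 = obstacle); it is not the paper's planner.

```python
# Hedged sketch: boustrophedon sweeping order over a grid obstacle map.
def sweeping_path(grid):
    """Visit free cells row by row, reversing direction on every other row."""
    path = []
    for r, row in enumerate(grid):
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        path.extend((r, c) for c in cols if row[c] == 0)   # skip obstacle cells
    return path
```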
