• Title/Summary/Keyword: mobile vision system

Search Results: 292

A Study on Development of Visual Navigation System based on Neural Network Learning

  • Shin, Suk-Young;Lee, Jang-Hee;You, Yang-Jun;Kang, Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.2 no.1 / pp.1-8 / 2002
  • It has been integrated into several navigation systems. This paper shows that the system recognizes difficult indoor roads without any specific marks such as a painted guide line or tape. In this method the robot navigates with visual sensors, using visual information to guide itself along the road. A neural network system was used to learn the driving pattern and decide where to move. This paper presents a vision-based process for an AMR (Autonomous Mobile Robot) that can navigate on indoor roads with simple computation. A single USB-type web camera was used instead of an expensive CCD camera to build a smaller and cheaper navigation system.
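
The abstract above describes learning a driving pattern from webcam images with a neural network. Below is a minimal sketch of that idea in Python, assuming a small fully connected network that maps a downsampled grayscale frame to a left/straight/right steering command; the architecture, frame size, and training scheme are illustrative assumptions, not the authors' actual system.

```python
# Sketch only: a tiny MLP mapping a downsampled camera frame to a steering class.
import numpy as np

IMG_H, IMG_W = 16, 16                        # assumed downsampled frame size
N_IN, N_HID, N_OUT = IMG_H * IMG_W, 32, 3    # 3 classes: left, straight, right

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (N_IN, N_HID))
W2 = rng.normal(0, 0.1, (N_HID, N_OUT))

def forward(frame):
    """Flatten a grayscale frame in [0, 1] and return class scores."""
    x = frame.reshape(-1)
    h = np.tanh(x @ W1)
    return h @ W2

def train_step(frame, label, lr=0.01):
    """One gradient step of softmax cross-entropy on a single labelled frame."""
    global W1, W2
    x = frame.reshape(-1)
    h = np.tanh(x @ W1)
    logits = h @ W2
    p = np.exp(logits - logits.max()); p /= p.sum()
    grad_logits = p.copy(); grad_logits[label] -= 1.0
    grad_W2 = np.outer(h, grad_logits)
    grad_h = W2 @ grad_logits
    grad_W1 = np.outer(x, grad_h * (1 - h ** 2))
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

# Inference on a dummy frame; with a real webcam the frame would come from
# e.g. cv2.VideoCapture(0) followed by grayscale conversion and resizing.
dummy = rng.random((IMG_H, IMG_W))
command = ["left", "straight", "right"][int(np.argmax(forward(dummy)))]
print(command)
```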

Internet Teleoperation of an Autonomous Mobile Robot (인터넷을 이용한 자율운행로봇의 원격운용)

  • 박태현;강근택;이원창
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2000.10a / pp.45-45 / 2000
  • This paper proposes a remote control system that combines a computer network and an autonomous mobile robot. We remotely control an autonomous mobile robot equipped with vision via the Internet to guide it through unknown environments in real time. The main feature of this system is that operators need only a World Wide Web browser and a computer connected to the Internet, so they can command the robot from a remote location through our home page. The hardware architecture of the system consists of the autonomous mobile robot, a workstation, and local computers. The software architecture includes a server part for communication between the user and the robot, and a client part for the user interface and the robot control system. The server and client parts are developed in Java, which is well suited to Internet applications and supports multiple platforms. Furthermore, the system offers an image compression method based on the motion JPEG concept, which reduces the large time delay that occurs on the network during image transmission.
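
The paper's client/server software is written in Java; the sketch below re-expresses the image-transmission idea in Python for illustration only: camera frames are JPEG-compressed and streamed over a length-prefixed TCP socket, and frames that barely differ from the previous one are skipped to cut bandwidth. The host, port, JPEG quality, and change threshold are assumptions.

```python
# Sketch only: motion-JPEG-style frame streaming over TCP.
import socket
import struct

import cv2
import numpy as np

HOST, PORT = "0.0.0.0", 9000      # assumed server address
CHANGE_THRESHOLD = 5.0            # mean absolute pixel difference needed to send a frame

def serve():
    cap = cv2.VideoCapture(0)
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    prev = None
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Skip frames that differ too little from the previous transmitted frame.
            if prev is not None and np.mean(cv2.absdiff(gray, prev)) < CHANGE_THRESHOLD:
                continue
            prev = gray
            ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 60])
            if not ok:
                continue
            data = jpeg.tobytes()
            # Length-prefixed payload so the client knows how many bytes to read.
            conn.sendall(struct.pack(">I", len(data)) + data)
    finally:
        conn.close()
        cap.release()

if __name__ == "__main__":
    serve()
```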


Path finding via VRML and VISION overlay for Autonomous Robotic (로봇의 위치보정을 통한 경로계획)

  • Sohn, Eun-Ho;Park, Jong-Ho;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the KIEE Conference / 2006.10c / pp.527-529 / 2006
  • In this paper, we find a robot's path using the Virtual Reality Modeling Language (VRML) and vision overlay. To correct the robot's path, we describe a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment using image processing and neural network pattern matching techniques, and then performs self-positioning with the vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D scene of the vision is overlaid with the VRML scene. This paper describes how to realize the self-positioning and shows the overlay of the 2-D and VRML scenes. The method successfully defines the robot's path.
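
A hedged sketch of the self-positioning step described above, in Python. Plain template matching stands in for the paper's neural network pattern matching, and the robot position is recovered from one identified landmark's known world coordinates plus an assumed range and bearing; the landmark table, camera parameters, sign conventions, and thresholds are all illustrative.

```python
# Sketch only: identify a landmark in the image, then back out the robot position.
import math

import cv2

LANDMARKS = {                      # assumed world coordinates of known landmarks (metres)
    "door_sign": (2.0, 5.0),
    "fire_extinguisher": (6.0, 1.5),
}
FOCAL_PX = 500.0                   # assumed focal length in pixels
IMAGE_CX = 320.0                   # assumed principal point (image centre column)

def find_landmark(image, template):
    """Locate one landmark template in the image; return its centre column or None."""
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(result)
    if score < 0.7:                # assumed acceptance threshold
        return None
    return x + template.shape[1] / 2.0

def robot_position(name, centre_col, range_m, heading_rad):
    """Estimate the robot position from one identified landmark.

    Assumes the bearing grows to the left of the image centre and that the
    robot heading is known (e.g. from odometry).
    """
    bearing = math.atan2(IMAGE_CX - centre_col, FOCAL_PX)
    lx, ly = LANDMARKS[name]
    angle = heading_rad + bearing            # world-frame direction robot -> landmark
    return lx - range_m * math.cos(angle), ly - range_m * math.sin(angle)
```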


Hybrid Inertial and Vision-Based Tracking for VR applications (가상 현실 어플리케이션을 위한 관성과 시각기반 하이브리드 트래킹)

  • Gu, Jae-Pil;An, Sang-Cheol;Kim, Hyeong-Gon;Kim, Ik-Jae;Gu, Yeol-Hoe
    • Proceedings of the KIEE Conference / 2003.11b / pp.103-106 / 2003
  • In this paper, we present a hybrid inertial and vision-based tracking system for VR applications. One of the most important aspects of VR (Virtual Reality) is providing a correspondence between the physical and virtual worlds. As a result, accurate and real-time tracking of an object's position and orientation is a prerequisite for many applications in virtual environments. Pure vision-based tracking has low jitter and high accuracy but cannot guarantee real-time pose recovery under all circumstances. Pure inertial tracking has high update rates and full 6DOF recovery but lacks long-term stability due to sensor noise. In order to overcome the individual drawbacks and build a better tracking system, we introduce the fusion of vision-based and inertial tracking. Sensor fusion makes the proposed tracking system robust, fast, and accurate, with low jitter and noise. Hybrid tracking is implemented with a Kalman filter that operates in a predictor-corrector manner. Adding a Bluetooth serial communication module gives the system full mobility and makes it affordable, lightweight, energy-efficient, and practical. Full 6DOF recovery and the full mobility of the proposed system enable the user to interact with mobile devices such as PDAs and provide a natural interface.
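
The predictor-corrector fusion described above can be illustrated with a one-dimensional Kalman filter sketch: the inertial measurement drives a high-rate prediction step and the slower vision measurement corrects drift. The state model, noise levels, and time step below are assumptions, not values from the paper.

```python
# Sketch only: 1-D Kalman filter fusing inertial prediction with vision correction.
import numpy as np

dt = 0.01                               # assumed inertial update period (s)
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
B = np.array([0.5 * dt**2, dt])         # acceleration input model
H = np.array([[1.0, 0.0]])              # vision measures position only
Q = np.diag([1e-4, 1e-2])               # assumed process noise
R = np.array([[1e-3]])                  # assumed vision measurement noise

x = np.zeros(2)                         # state: [position, velocity]
P = np.eye(2)

def predict(accel):
    """High-rate prediction step driven by the inertial (accelerometer) reading."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def correct(vision_pos):
    """Low-rate correction step using the vision-based position measurement."""
    global x, P
    y = np.array([vision_pos]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# Typical usage: run many predict() calls per correct() call to mimic the
# different update rates of the inertial and vision sensors.
```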


Object Recognition for Mobile Robot using Context-based Bi-directional Reasoning (상황 정보 기반 양방향 추론 방법을 이용한 이동 로봇의 물체 인식)

  • Lim, G.H.;Ryu, G.G.;Suh, I.H.;Kim, J.B.;Zhang, G.X.;Kang, J.H.;Park, M.K.
    • Proceedings of the KIEE Conference / 2007.04a / pp.6-8 / 2007
  • In this paper, we propose a reasoning system for object recognition and space classification that uses not only visual features but also contextual information. It is necessary for a mobile robot, especially a vision-based one, to perceive objects and classify spaces in real environments. Several visual features such as texture, SIFT, and color are used for object recognition. Because of sensor uncertainty and object occlusion, there are many difficulties in vision-based perception. To show the validity of our reasoning system, experimental results are presented in which objects and spaces are inferred by bi-directional rules even with partial and uncertain information. The system combines top-down and bottom-up approaches.
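
A small Python sketch of the bi-directional idea described above: bottom-up rules infer a space class from objects the vision system has recognized, while top-down rules use the hypothesized space class to predict objects that may be present but occluded or uncertain. The rule tables are illustrative assumptions, not the paper's knowledge base.

```python
# Sketch only: toy bi-directional (bottom-up / top-down) rule application.
OBJECTS_TO_SPACE = {            # bottom-up: observed objects -> space class
    frozenset({"monitor", "keyboard"}): "office",
    frozenset({"refrigerator", "sink"}): "kitchen",
}
SPACE_TO_OBJECTS = {            # top-down: space class -> objects expected there
    "office": {"monitor", "keyboard", "chair", "desk"},
    "kitchen": {"refrigerator", "sink", "table"},
}

def infer_space(observed):
    """Bottom-up: return the first space class whose key objects were all seen."""
    for key_objects, space in OBJECTS_TO_SPACE.items():
        if key_objects <= observed:
            return space
    return None

def expected_but_unseen(space, observed):
    """Top-down: objects the space class predicts that vision has not confirmed."""
    return SPACE_TO_OBJECTS.get(space, set()) - observed

observed = {"monitor", "keyboard"}
space = infer_space(observed)                      # -> "office"
candidates = expected_but_unseen(space, observed)  # objects to look for next
print(space, candidates)
```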


Odor Cognition and Source Tracking of an Intelligent Robot based upon Wireless Sensor Network (센서 네트워크 기반 지능 로봇의 냄새 인식 및 추적)

  • Lee, Jae-Yeon;Kang, Geun-Taek;Lee, Won-Chang
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.1 / pp.49-54 / 2011
  • In this paper, we present a mobile robot that can recognize chemical odors, measure their concentration, and track their source indoors. The mobile robot has a sense of smell: it can classify several gases used in the experiments, such as ammonia, ethanol, and their mixture, with a neural network algorithm, and measure each gas concentration with fuzzy rules. In addition, it can not only navigate to a desired position with its vision system while avoiding obstacles but also transmit odor information and warning messages obtained from its own operation to other nodes by multi-hop communication in a wireless sensor network. We suggest a method of odor classification, concentration measurement, and source tracking for a mobile robot in a wireless sensor network using a hybrid algorithm that combines the vision system and gas sensors. The experimental studies demonstrate the efficiency of the proposed algorithm for odor recognition, concentration measurement, and source tracking.
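
The concentration-measurement step described above can be sketched with a few zero-order Sugeno-style fuzzy rules: a gas sensor reading passes through triangular membership functions and a weighted average of the rule outputs gives a concentration estimate. The membership breakpoints and rule outputs below are illustrative, not calibrated values from the paper.

```python
# Sketch only: fuzzy-rule concentration estimate from a single gas sensor reading.
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_concentration(sensor_voltage):
    """Zero-order Sugeno-style rules: LOW/MEDIUM/HIGH reading -> ppm estimate."""
    rules = [
        (tri(sensor_voltage, 0.0, 0.5, 1.5), 10.0),    # LOW reading    -> ~10 ppm
        (tri(sensor_voltage, 1.0, 2.0, 3.0), 100.0),   # MEDIUM reading -> ~100 ppm
        (tri(sensor_voltage, 2.5, 4.0, 5.0), 500.0),   # HIGH reading   -> ~500 ppm
    ]
    weight = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / weight if weight else 0.0

print(estimate_concentration(1.8))
```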

A Novel Depth Measurement Technique for Collision Avoidance Mobile Robot (이동로봇의 장애물과의 충돌방지를 위한 새로운 3차원 거리 인식 방법)

  • 송재홍;나상익;김형석
    • Proceedings of the IEEK Conference / 2002.06d / pp.291-294 / 2002
  • A simple computer vision technique to measure middle-range depth with a mono camera and a plane mirror is proposed. The proposed system is structured with a rotating mirror in front of the fixed mono camera. In contrast to a conventional stereo vision system, in which the disparity of a closer object is larger than that of a distant object, the pixel movement caused by the rotating mirror is larger for pixels of distant objects in the proposed system. Inspired by this feature, the principle of depth measurement based on the relation between pixel movement and object distance has been investigated. The factors that influence the precision of the measurement are also analysed. The benefits of the proposed system are low price and less chance of occlusion. Robustness for practical use is an additional benefit of the proposed vision system.
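
A hedged sketch of the depth-from-pixel-movement idea described above. It assumes a simple model in which the pixel displacement d of a point at depth Z under a small mirror rotation follows d(Z) = a - b/Z, with a and b found from two calibration targets at known depths; the paper derives the actual relation from the mirror and camera geometry, so this is only an assumed stand-in.

```python
# Sketch only: invert an assumed displacement-vs-depth model d(Z) = a - b / Z.
def calibrate(d1, z1, d2, z2):
    """Solve d = a - b / Z from two (displacement, depth) calibration pairs."""
    b = (d2 - d1) / (1.0 / z1 - 1.0 / z2)
    a = d1 + b / z1
    return a, b

def depth_from_displacement(d, a, b):
    """Invert the assumed model to recover depth from a measured displacement."""
    return b / (a - d)

# Example: targets at 1 m and 3 m produced displacements of 12 px and 20 px.
a, b = calibrate(12.0, 1.0, 20.0, 3.0)
print(depth_from_displacement(16.0, a, b))   # estimated depth of a new point (1.5 m)
```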


A Study on the Sensor Fusion Method to Improve Localization of a Mobile Robot (이동로봇의 위치추정 성능개선을 위한 센서융합기법에 관한 연구)

  • Jang, Chul-Woong;Jung, Ki-Ho;Kong, Jung-Shik;Jang, Mun-Suk;Kwon, Oh-Sang;Lee, Eung-Hyuk
    • Proceedings of the KIEE Conference / 2007.10a / pp.317-318 / 2007
  • One of the important tasks of an autonomous mobile robot is to build a map of the surrounding environment and estimate its own location. This paper suggests a sensor fusion method combining a laser range finder and a monocular vision sensor for simultaneous localization and map building. The robot observes corner points in the environment as features using the laser range finder and extracts SIFT features with the monocular vision sensor. We verify the improved localization performance of the mobile robot through experiments.
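
A minimal sketch of the two feature extractors described above: corner points from a laser scan, detected here by a simple local-angle test standing in for the paper's method, and SIFT keypoints from the monocular image via OpenCV. The scan format and corner-angle threshold are assumptions.

```python
# Sketch only: laser-scan corner features plus image SIFT features.
import math

import cv2
import numpy as np

def scan_to_points(ranges, angle_min, angle_inc):
    """Convert a 1-D array of laser ranges to 2-D points in the sensor frame."""
    angles = angle_min + angle_inc * np.arange(len(ranges))
    return np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)

def corner_points(points, step=5, angle_thresh_deg=60.0):
    """Flag points where the heading of the scan contour turns sharply."""
    corners = []
    for i in range(step, len(points) - step):
        v1 = points[i] - points[i - step]
        v2 = points[i + step] - points[i]
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = math.degrees(math.acos(np.clip(cos_a, -1.0, 1.0)))
        if angle > angle_thresh_deg:
            corners.append(points[i])
    return np.array(corners)

def sift_features(image_bgr):
    """Extract SIFT keypoints and descriptors from the monocular camera image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(gray, None)
```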


Three Dimensional Obstacle Detection for Indoor Navigation (실내 주행을 위한 3차원 장애물 검출)

  • Ko, Bok-Kyong;Woo, Dong-Min
    • Proceedings of the KIEE Conference / 1996.07b / pp.1251-1253 / 1996
  • The vision processing system for mobile robots requires real-time processing and reliability for the purpose of safe navigation. However, general vision systems are not appropriate owing to the correspondence problem of correlating points between two images. To determine the obstacle area, we use correspondences of line segments between two perspective images sequentially acquired by the camera. To simplify the correspondence, the matching of line segments is performed in the navigation space, based on the assumptions that the mobile robot navigates on a flat surface and that its motion between two frames is approximately known.
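
A hedged sketch of the flat-floor matching idea described above: endpoints of line segments from two frames are projected into the navigation (floor) plane with a camera-to-floor homography, the known robot motion between frames is applied, and segments whose projected endpoints nearly coincide are matched. The homography, motion convention, endpoint ordering, and distance threshold are illustrative assumptions, not the paper's formulation.

```python
# Sketch only: match line segments in the floor plane under known robot motion.
import numpy as np

def to_floor(points_px, H):
    """Project pixel coordinates to floor-plane coordinates via homography H."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]

def apply_motion(points_xy, dx, dy, dtheta):
    """Transform floor points from frame 1 into frame 2 using the known motion."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])
    return (points_xy - np.array([dx, dy])) @ R      # inverse of the robot motion

def match_segments(segs1, segs2, H, motion, tol=0.05):
    """Pair segments whose floor-plane endpoints agree within tol metres.

    Each segment is a 2x2 array of pixel endpoints; consistent endpoint
    ordering between frames is assumed for simplicity.
    """
    matches = []
    for i, s1 in enumerate(segs1):
        p1 = apply_motion(to_floor(np.array(s1), H), *motion)
        for j, s2 in enumerate(segs2):
            p2 = to_floor(np.array(s2), H)
            if np.linalg.norm(p1 - p2, axis=1).max() < tol:
                matches.append((i, j))
                break
    return matches
```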


Implementation of Path Finding Method using 3D Mapping for Autonomous Robotic (3차원 공간 맵핑을 통한 로봇의 경로 구현)

  • Son, Eun-Ho;Kim, Young-Chul;Chong, Kil-To
    • Journal of Institute of Control, Robotics and Systems / v.14 no.2 / pp.168-177 / 2008
  • Path finding is a key element in the navigation of a mobile robot. To find a path, the robot should know its position exactly, since position error exposes the robot to many dangerous conditions: it could make the robot move in a wrong direction and be damaged by collision with surrounding obstacles. We propose a method for obtaining an accurate robot position. The localization of the mobile robot in its working environment is performed using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks located in the environment; image processing and neural network pattern matching techniques are applied to find the location of the robot. After the self-positioning procedure, the 2-D scene of the vision is overlaid onto a VRML scene. This paper describes how to realize the self-positioning and shows the overlay between the 2-D and VRML scenes. The suggested method defines the robot's path successfully. An experiment applying the suggested algorithm to a mobile robot has been performed, and the result shows good path tracking.
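
Once the robot has localized itself, the path-finding step can be sketched with a standard A* search on an occupancy grid, as below. The grid, start, and goal are illustrative; the paper plans over a map built from the VRML model, so this is only an assumed stand-in for that step.

```python
# Sketch only: 4-connected A* path finding on a small occupancy grid.
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), start)]
    came_from = {}
    g_best = {start: 0}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_best[node] + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    came_from[nxt] = node
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

# Example: 0 = free cell, 1 = obstacle.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```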