• Title/Summary/Keyword: camera vision

Visual Navigation by Neural Network Learning (신경망 학습에 의한 영상처리 네비게이션)

  • Shin, Suk-Young;Hoon Kang
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2001.12a / pp.263-266 / 2001
  • Vision-based navigation has been integrated into several navigation systems. This paper shows that the system recognizes difficult indoor roads and open areas without any specific markings such as painted guide lines or tape. In this method, the robot navigates with visual sensors, using the visual information to guide itself along the road, and an artificial neural network decides where to move. The system uses a USB web camera as its visual sensor.
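
The abstract does not describe the network itself; the following minimal Python sketch only illustrates the general idea of mapping a downsampled web-camera frame to a steering command with a small feed-forward network. The layer sizes, the three command labels, and the random (untrained) weights are assumptions for illustration, not the paper's design.

```python
import numpy as np

# Minimal sketch of a feed-forward network that maps a downsampled grayscale
# camera frame to a steering command. Sizes and labels are assumptions; the
# weights are random here only to make the sketch runnable (the paper would
# learn them from driving data).
rng = np.random.default_rng(0)

IN, HIDDEN, OUT = 32 * 24, 16, 3           # 32x24 frame -> 3 commands
W1 = rng.normal(0.0, 0.1, (HIDDEN, IN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (OUT, HIDDEN))
b2 = np.zeros(OUT)

COMMANDS = ["left", "straight", "right"]

def steer(frame):
    """Forward pass: flatten the frame, score the commands, pick the best."""
    x = frame.astype(np.float32).ravel() / 255.0
    h = np.tanh(W1 @ x + b1)
    return COMMANDS[int(np.argmax(W2 @ h + b2))]

# Fake 24x32 frame standing in for a downsampled USB web-camera image.
print(steer(rng.integers(0, 256, (24, 32))))
```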

A Study of Lane Extraction using Sobel Intensity Profile (Sobel Intensity Profile을 이용한 차선 추출에 관한 연구)

  • Park, Tae-Jun;Cho, Jae-Soo;Cho, Tai-Hoon
    • Proceedings of the IEEK Conference / 2009.05a / pp.228-230 / 2009
  • Lane extraction is a basic requirement for a driving car to understand its external road environment via a camera. In this paper, a lane extraction method using a "Sobel intensity profile" is described. The Sobel intensity profile is obtained using only the vertical edge components of the Sobel edge output and is used to yield fitted lines for the lanes. The RANSAC algorithm is applied to fit the lines using only inliers. Experimental results have shown the reliability of the proposed lane extraction method.
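
As a rough illustration of the pipeline described above (vertical Sobel edges, an intensity profile used to pick candidate lane points, and a RANSAC line fit over the inliers), a minimal sketch is given below. The edge threshold, the per-row maximum rule, and the single-line simplification are assumptions; the paper does not spell out these details.

```python
import numpy as np
import cv2

def lane_line_ransac(gray, n_iter=200, inlier_tol=2.0, edge_thresh=100.0, seed=0):
    """Sketch: vertical Sobel edges -> per-row strongest column -> RANSAC line fit.
    Returns (a, b) of the model x = a*y + b, or None if too few candidate points."""
    # Keep only the vertical edge component (dx=1, dy=0), as the abstract describes.
    sobel_x = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))

    # "Intensity profile": for each row, take the column with the strongest
    # response (the per-row rule and the threshold are assumptions).
    pts = []
    for y in range(sobel_x.shape[0]):
        x = int(np.argmax(sobel_x[y]))
        if sobel_x[y, x] > edge_thresh:
            pts.append((y, x))
    pts = np.array(pts, dtype=np.float32)
    if len(pts) < 2:
        return None

    # Plain RANSAC: fit x = a*y + b from two random points, keep the model
    # supported by the most inliers.
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(n_iter):
        (y1, x1), (y2, x2) = pts[rng.choice(len(pts), 2, replace=False)]
        if y1 == y2:
            continue
        a = (x2 - x1) / (y2 - y1)
        b = x1 - a * y1
        inliers = int(np.sum(np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < inlier_tol))
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best

# Usage: lane_line_ransac(cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE))
```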

Experimental study on practical automatic snowplows

  • Ahn, Doo-Sung;Choi, Jae-Weon
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2001.10a / pp.160.1-160 / 2001
  • In this study, control techniques for two types of automatic snowplow were experimentally investigated. One is a remote-controlled snowplow used for removing snow around houses, and the other is an autonomous snowplow for use in wide, open spaces such as the parking lot of a large-scale retail store. A commercially available snowplow was modified to enable remote control through a personal handy-phone system. The autonomous controller uses a vision sensor consisting of a CCD video camera and a computer for image processing. In addition, the design of a practical landmark was examined.

A Study on target tracking system for a mobile robot using ultrasonic sensors

  • Kim, Hon-Hui;Han, Dong-Hui;Ha, Yun-Su
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2001.10a / pp.134.5-134 / 2001
  • The capability of environment recognition is very important for a mobile robot. In particular, a target-tracking function is necessary when a mobile robot is used to monitor and watch an object. In general, vision sensors such as CCD cameras and laser range finders have been used to track a moving target. However, they are not only affected by the illumination of the environment but also require high-performance processors to handle the large amount of data. Therefore, in this paper we propose a target tracking system for a mobile robot that uses only ultrasonic sensors to cope with these problems.
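
The abstract only states that ultrasonic sensors replace the camera; the sketch below shows one plausible, heavily simplified tracking rule, assuming a fixed ring of range sensors and a "nearest echo" selection. The ring geometry, the range cutoff, and the policy itself are assumptions, not the paper's construction.

```python
import math

# Assumed geometry: 8 ultrasonic sensors evenly spaced around the robot, each
# returning a range in metres (MAX_RANGE when nothing echoes back).
SENSOR_ANGLES = [i * 2.0 * math.pi / 8 for i in range(8)]
MAX_RANGE = 4.0

def track_target(ranges):
    """Return (bearing_rad, distance_m) of the nearest echo, or None if no echo.
    A real tracker would add filtering and continuity checks; this is only a sketch."""
    best = None
    for angle, r in zip(SENSOR_ANGLES, ranges):
        if r < MAX_RANGE and (best is None or r < best[1]):
            best = (angle, r)
    return best

# Example: target roughly to the left-rear of the robot at about 1.2 m.
print(track_target([4.0, 4.0, 4.0, 1.2, 1.4, 4.0, 4.0, 4.0]))
```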

Supporting plane for intelligent robot system (지능 로보트 시스템에 있어서 지면의 이용에 관한 연구)

  • 박경택
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1991.10a / pp.990-995 / 1991
  • The integration of intelligent robots into manufacturing systems should positively impact product quality and productivity. A new theory of object location and recognition using the supporting plane is presented. The unknown supporting points are determined from the image coordinates, the known camera parameters, and the joint coordinates of the robot manipulator. The theory is developed using the geometrical interpretation of perspective projection and the geometrical constraints of industrial environments, and it can be applied to typical robot vision problems such as determining the position and orientation of objects and recognizing them.
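
The core geometric step, recovering a 3-D point from its image coordinates by intersecting the viewing ray with a known supporting plane, can be illustrated with a short worked computation. The sketch assumes a pinhole camera with known intrinsics K and pose (R, t) and a supporting plane z = 0 in world coordinates; the example matrices are made up for illustration and are not taken from the paper.

```python
import numpy as np

def backproject_to_plane(u, v, K, R, t):
    """Intersect the viewing ray through pixel (u, v) with the world plane z = 0.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation
    (x_cam = R @ x_world + t). Returns the 3-D supporting point on the plane."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction, camera frame
    C = -R.T @ t                                       # camera centre, world frame
    d_world = R.T @ d_cam                              # ray direction, world frame
    s = -C[2] / d_world[2]                             # solve C_z + s*d_z = 0
    return C + s * d_world

# Made-up example: optical axis perpendicular to the plane, camera 1.5 m away.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1.5])
print(backproject_to_plane(320, 240, K, R, t))   # -> [0. 0. 0.], where the axis meets the plane
```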

NAVIGATION CONTROL OF A MOBILE ROBOT (이동로보트의 궤도관제기법)

  • 홍문성;이상용;한민용
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1989.10a / pp.226-229 / 1989
  • This paper presents a navigation control method for a vision-guided robot. The robot is equipped with one camera, an IBM/AT-compatible PC, and a sonar system. The robot can either follow a track specified on a monitor screen or navigate to a destination while avoiding any obstacles in its way. The robot finds its current position as well as its moving direction by taking an image of a circular pattern placed on the ceiling.
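
A rough illustration of the ceiling-pattern idea: assuming the pattern yields two distinguishable reference points in the image, heading and lateral offset can be read from where they appear. The two-mark assumption and the fixed pixel-to-metre scale below are illustrative only; the abstract does not give these details.

```python
import math

def pose_from_ceiling_marks(p1, p2, image_center, metres_per_pixel):
    """Sketch: given pixel positions of two known ceiling marks (p1 ahead of p2
    along the pattern's reference direction), estimate heading and lateral offset.
    The two-mark assumption and the fixed scale factor are illustrative only."""
    # Heading: angle of the mark-to-mark direction relative to the image x axis.
    heading = math.atan2(p1[1] - p2[1], p1[0] - p2[0])
    # Offset: displacement of the pattern midpoint from the image centre,
    # converted to metres with the assumed scale.
    mid = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
    dx = (mid[0] - image_center[0]) * metres_per_pixel
    dy = (mid[1] - image_center[1]) * metres_per_pixel
    return heading, (dx, dy)

# Example: marks detected at (340, 250) and (300, 250) in a 640x480 image.
print(pose_from_ceiling_marks((340, 250), (300, 250), (320, 240), 0.005))
```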

A camera calibration technique and landscape simulation

  • Fujimoto, Kazutaka;Watase, Motoaki;Yamamoto, Masayuki;Ishimatsu, Takakazu
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1995.10a / pp.295-298 / 1995
  • In this paper, a simple technique for calibrating the setup of a three-dimensional measuring system is presented. With this technique, the three-dimensional shapes of huge structures and buildings can be readily obtained. The technique is applied to three-dimensional landscape simulation, and two examples are shown in this paper.
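
The abstract does not describe the calibration procedure itself; for comparison, a standard checkerboard calibration with OpenCV is sketched below. The board size, square size, and image folder are assumptions, and this is not the paper's specific technique.

```python
import glob
import numpy as np
import cv2

# Standard checkerboard calibration sketch (not the paper's specific technique).
PATTERN = (9, 6)        # inner corners per row and column -- assumption
SQUARE_SIZE = 0.025     # 25 mm squares -- assumption

# 3-D object points of the planar board (z = 0), reused for every view.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.jpg"):        # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

if obj_points:
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("reprojection RMS:", rms)
    print("intrinsic matrix:\n", K)
```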

Gaze Detection Using Two Neural Networks (다중 신경망을 이용한 사용자의 응시 위치 추출)

  • 박강령;이정준;이동재;김재희
    • Proceedings of the IEEK Conference / 1999.06a / pp.587-590 / 1999
  • Gaze detection is locating the position on a monitor screen that a user is looking at. We implement it with a computer vision system in which a camera is mounted above the monitor, and the user moves (rotates and/or translates) her face to gaze at different positions on the monitor. Up to now we have tried several different approaches, and among them the two-neural-network approach described in this paper shows the best result (1.7 inch error for test data including facial rotation, and 3.1 inch error for test data including facial rotation and translation).
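
The abstract does not detail the two networks; the sketch below assumes one small network driven by rotation-related facial features and a second driven by translation-related features, with their outputs combined into a screen coordinate. The feature choices, layer sizes, and untrained weights are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(in_dim, out_dim, hidden=8):
    """Tiny one-hidden-layer network; weights are random (untrained) so the
    sketch runs, whereas the paper trains its networks on gaze data."""
    W1 = rng.normal(0.0, 0.1, (hidden, in_dim)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (out_dim, hidden)); b2 = np.zeros(out_dim)
    def forward(x):
        return W2 @ np.tanh(W1 @ np.asarray(x, dtype=float) + b1) + b2
    return forward

# Assumption: one network handles rotation-related facial features, the other
# translation-related features; their outputs are summed into a screen position.
rotation_net = mlp(in_dim=4, out_dim=2)
translation_net = mlp(in_dim=2, out_dim=2)

def gaze_point(rotation_features, translation_features):
    """Return an estimated (x, y) gaze position on the monitor."""
    return rotation_net(rotation_features) + translation_net(translation_features)

print(gaze_point([0.1, -0.05, 0.2, 0.0], [0.3, -0.1]))
```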

The Design of a Network based Visual Agent Platform for Tangible Space (실감 만남을 위한 네트워크 기반 Visual Agent Platform 설계)

  • Kim, Hyun-Ki;Choy, Ick;You, Bum-Jae
    • Proceedings of the KIEE Conference / 2006.04a / pp.258-260 / 2006
  • In this paper, we designed an embedded system that will play a primary role in implementing Tangible Space. The hardware provides image capture through a camera interface, image processing, and transmission of the image information over a LAN (local area network) or WLAN (wireless local area network). We define this hardware as a network-based Visual Agent Platform for Tangible Space.
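
The platform's role, capturing a frame, processing it, and pushing the image data over LAN/WLAN, can be illustrated with a short sender sketch. The receiver address, the JPEG encoding, and the length-prefixed TCP framing are assumptions for illustration; the actual platform is custom embedded hardware, not a desktop Python program.

```python
import socket
import struct
import cv2

HOST, PORT = "192.168.0.10", 5000      # hypothetical receiver on the LAN

def send_one_frame(host=HOST, port=PORT, device=0):
    """Capture one frame, JPEG-encode it, and send it length-prefixed over TCP."""
    cap = cv2.VideoCapture(device)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("no frame from camera")

    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    payload = jpeg.tobytes()

    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack("!I", len(payload)))   # 4-byte big-endian length header
        sock.sendall(payload)

if __name__ == "__main__":
    send_one_frame()
```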

The Development of a Network based Visual Agent Platform for Tangible Space (실감 만남을 위한 네트워크 기반 Visual Agent Platform 개발)

  • Kim, Hyun-Ki;Choy, Ick;You, Bum-Jae
    • Proceedings of the KIEE Conference / 2007.04a / pp.172-174 / 2007
  • In this paper, we designed an embedded system that will play a primary role in implementing Tangible Space. The hardware provides image capture through a camera interface, image processing, and transmission of the image information over a LAN (local area network) or WLAN (wireless local area network). We define this hardware as a network-based Visual Agent Platform for Tangible Space. The Visual Agent Platform's software is built on RTLinux and CORBA.
