• Title/Summary/Keyword: Vision sensor


Autonomous Control System of Compact Model-helicopter

  • Kang, Chul-Ung;Jun Satake;Takakazu Ishimatsu;Yoichi Shimomoto;Jun Hashimoto
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1998.10a
    • /
    • pp.95-99
    • /
    • 1998
  • We introduce an autonomous flying system based on a model helicopter. A distinctive feature of the system is that autonomous flight is realized on a low-cost, compact model helicopter. The system is divided into two parts: one on the helicopter and the other on the ground. The helicopter carries a vision sensor and an electronic compass with a built-in tilt sensor, while the ground control system monitors and controls the helicopter's movement. We first introduce the configuration of the helicopter system with its vision sensor and electronic compass. To determine the 3-D position and attitude of the helicopter, an image-recognition technique using a monocular image is described, based on the fusion of the vision and electronic-compass data. Finally, we present experimental results obtained during hovering, which show the effectiveness of the system on the compact model helicopter.
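
A minimal sketch of the monocular position estimate implied by the abstract above: with attitude taken from the compass/tilt sensor, one view of a ground marker of known length fixes the camera's 3-D position. The pinhole model and every name and parameter below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rotation_from_compass(roll, pitch, yaw):
    """Body-to-world rotation built from compass/tilt readings (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    return Rz @ Ry @ Rx

def camera_position(marker_world, marker_px, marker_len_m, marker_len_px,
                    f_px, cx, cy, R_body_to_world):
    """Locate the camera from one view of a marker of known length."""
    depth = f_px * marker_len_m / marker_len_px       # pinhole: Z = f*L/l
    ray = np.array([marker_px[0] - cx, marker_px[1] - cy, f_px], float)
    ray /= np.linalg.norm(ray)                        # unit viewing ray
    rng = depth / ray[2]                              # slant range to marker
    return np.asarray(marker_world) - rng * (R_body_to_world @ ray)
```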


Control Method of Mobile Robots for Avoiding Slip and Turnover on Sloped Terrain Using a Gyro/Vision Sensor Module (Gyro/Vision Sensor Module을 이용한 주행 로봇의 미끄러짐 및 넘어짐 회피 제어 기법)

  • Lee Jeong-Hee;Park Jae-Byung;Lee Beom-Hee
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.8
    • /
    • pp.669-677
    • /
    • 2005
  • This article describes a control method that keeps mobile robots from slipping and turning over on sloped terrain. An inexpensive gyro/vision sensor module is proposed for obtaining information about the terrain both at the robot's current position and ahead of it. Using this terrain information and the robot state, a maximum limit on the robot's forward velocity is defined to avoid slip and turnover. At the same time, this velocity limit is conveyed to the operator as a reflective force on a force-feedback joystick, so the operator can perceive the maximum velocity determined by the terrain information and the robot state. In this way, the mismatch between the robot's motion and the user's command caused by the velocity limit is compensated by the reflective force. Experimental results show the effectiveness of the proposed method.
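
The abstract does not give the limit-velocity formula, so the following is only a hedged point-mass sketch of the idea: bound the forward speed by what friction and static tipping allow on the measured slope, and feed any excess of the commanded speed back to the operator as joystick force. All constants and names are illustrative.

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def max_safe_velocity(slope_rad, mu, stop_dist, half_track, cg_height):
    """Highest speed still stoppable on the slope without slip or tipping."""
    # Largest deceleration friction can transmit, minus the downhill pull.
    a_slip = mu * G * np.cos(slope_rad) - G * np.sin(slope_rad)
    if a_slip <= 0.0:
        return 0.0                       # slope too steep: creep only
    # Deceleration that would pitch the robot over its downhill wheels.
    a_tip = G * (half_track / cg_height - np.tan(slope_rad))
    a_max = min(a_slip, max(a_tip, 0.0))
    return float(np.sqrt(2.0 * a_max * stop_dist))   # v^2 = 2 a d

def reflective_force(v_cmd, v_max, gain=1.0, f_max=5.0):
    """Joystick push-back proportional to the excess over the safe limit."""
    return float(np.clip(gain * max(v_cmd - v_max, 0.0), 0.0, f_max))
```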

Resolution improvement of a CMOS vision chip for edge detection by separating photo-sensing and edge detection circuits (수광 회로와 윤곽 검출 회로의 분리를 통한 윤곽 검출용 시각칩의 해상도 향상)

  • Kong, Jae-Sung;Suh, Sung-Ho;Kim, Sang-Heon;Shin, Jang-Kyoo;Lee, Min-Ho
    • Journal of Sensor Science and Technology
    • /
    • v.15 no.2
    • /
    • pp.112-119
    • /
    • 2006
  • The resolution of an image sensor is a very important parameter to improve. It is hard to raise the resolution of a retina-inspired CMOS vision chip for edge detection that uses a resistive network, because the chip contains additional circuits, such as the resistive network and other processing circuits, compared with a general image sensor such as a CMOS image sensor (CIS). In this paper, we solved the low-resolution problem by separating the photo-sensing and signal-processing circuits. This architecture, however, raises a low-speed problem, because the signal-processing circuits are shared by a whole row of photo-sensors; this was solved by using a reset decoder. A vision chip for edge detection with a 128 × 128 pixel array was designed and fabricated using 0.35 μm 2-poly 4-metal CMOS technology. The fabricated chip was integrated with an optical lens into a camera system and tested on real images. Using this chip, we obtained edge images of sufficient quality for real applications.
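
As a software analogue of the chip's architecture (not the circuit itself): a resistive network behaves like a diffusion that smooths the photoreceptor image, and the edge signal is the difference between each pixel and its smoothed surround. A minimal sketch:

```python
import numpy as np

def resistive_network_smooth(img, iters=10, coupling=0.2):
    """Discrete stand-in for the resistive network: each node relaxes
    toward the mean of its four neighbours (diffusion)."""
    s = img.astype(float).copy()
    for _ in range(iters):
        nb = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
              np.roll(s, 1, 1) + np.roll(s, -1, 1)) / 4.0
        s = (1.0 - coupling) * s + coupling * nb
    return s

def retina_edge(img):
    """Edge map: photoreceptor output minus the smoothed surround."""
    return img.astype(float) - resistive_network_smooth(img)
```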

Navigation and Localization of Mobile Robot Based on Vision and Sensor Network Using Fuzzy Rules (퍼지 규칙을 이용한 비전 및 무선 센서 네트워크 기반의 이동로봇의 자율 주행 및 위치 인식)

  • Heo, Jun-Young;Kang, Geun-Tack;Lee, Won-Chang
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.673-674
    • /
    • 2008
  • This paper presents a new navigation algorithm for an autonomous mobile robot that combines vision and IR sensors with a ZigBee sensor network using fuzzy rules. We also show that a mobile robot running the proposed algorithm navigates well in complex, unknown environments.
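
The paper does not list its rule base, so here is a minimal illustrative fuzzy controller of the same flavour: range readings are fuzzified into near/far memberships, a handful of rules vote on a steering command, and the output is defuzzified as a weighted mean. Ranges, rule weights, and names are assumptions.

```python
def near(d, d_max=1.0):
    """Membership of 'near' for a range reading d (metres): linear ramp."""
    return max(0.0, min(1.0, (d_max - d) / d_max))

def far(d, d_max=1.0):
    return 1.0 - near(d, d_max)

def fuzzy_steer(d_left, d_front, d_right):
    """Steering in [-1, 1] (negative = left) from three range readings."""
    rules = [
        (near(d_front) * far(d_right), +1.0),   # front blocked, right open
        (near(d_front) * far(d_left),  -1.0),   # front blocked, left open
        (near(d_right),                -0.5),   # drift away from right wall
        (near(d_left),                 +0.5),   # drift away from left wall
        (far(d_front),                  0.0),   # clear ahead: go straight
    ]
    total = sum(w for w, _ in rules)
    return sum(w * s for w, s in rules) / total if total > 0.0 else 0.0
```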


Obstacle Recognition Using the Vision and Ultrasonic Sensor in a Mobile Robot (영상과 초음파 정보를 이용한 이동로보트의 장애물 인식)

  • Park, Min-Gi;Park, Min-Yong
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.9
    • /
    • pp.1154-1161
    • /
    • 1995
  • In this paper, a new method is proposed in which vision and ultrasonic sensors are used to recognize obstacles and to obtain their position and size. Ultrasonic sensors are used to measure the actual width of the mobile robot's navigation path. In conjunction with camera images of the path, obstacles are recognized and their distance, direction, and width are determined. The characteristics of the sensors and of mobile robots in general make it difficult to recognize all environments; accordingly, a restricted environment is employed for this study.
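
One hedged way to realize the vision/ultrasound split described above: the camera fixes the obstacle's bearing and angular width from its left/right image columns, and the ultrasonic range supplies the missing depth. A pinhole-model sketch with illustrative names:

```python
import numpy as np

def obstacle_from_fusion(range_m, px_left, px_right, f_px, cx):
    """Fuse one ultrasonic range with the obstacle's image extent."""
    ang_l = np.arctan((px_left - cx) / f_px)    # bearing of left edge (rad)
    ang_r = np.arctan((px_right - cx) / f_px)   # bearing of right edge (rad)
    bearing = 0.5 * (ang_l + ang_r)             # obstacle direction
    width = 2.0 * range_m * np.tan(0.5 * (ang_r - ang_l))  # physical width
    x, z = range_m * np.sin(bearing), range_m * np.cos(bearing)
    return bearing, width, (x, z)               # direction, size, position
```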


Cylindrical Object Recognition using Sensor Data Fusion (센서데이터 융합을 이용한 원주형 물체인식)

  • Kim, Dong-Gi;Yun, Gwang-Ik;Yun, Ji-Seop;Gang, Lee-Seok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.8
    • /
    • pp.656-663
    • /
    • 2001
  • This paper presents a sensor-fusion method for recognizing a cylindrical object using a CCD camera, a laser slit beam, and ultrasonic sensors mounted on a pan/tilt device. For object recognition with the vision sensor, an active light source projects a stripe pattern onto the object surface, and the 2-D image data are transformed into 3-D data using the geometry between the camera and the laser slit beam. The ultrasonic sensor uses a transducer array mounted horizontally on the pan/tilt device. The time of flight is estimated by finding the maximum correlation between the received ultrasonic pulse and a set of stored templates, also called a matched filter. The distance is calculated by simply multiplying the time of flight by the speed of sound, and the maximum amplitude of the filtered signal is used to determine the face angle to the object. To determine the position and radius of cylindrical objects, we use statistical sensor fusion. Experimental results show that the fused data increase the reliability of the object recognition.
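
A minimal sketch of the matched-filter time-of-flight step described above, assuming a sampled received signal and one stored pulse template; since the abstract does not say whether the quoted distance is one-way or round-trip, the helper below exposes both. Names are illustrative.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

def tof_matched_filter(rx, template, fs):
    """Correlate the received signal with the stored template; the peak
    gives the time of flight, its amplitude the match quality (used in
    the paper to estimate the face angle)."""
    corr = np.correlate(rx, template, mode="valid")
    lag = int(np.argmax(np.abs(corr)))        # best-match sample index
    return lag / fs, float(np.max(np.abs(corr)))

def distance_from_tof(tof, round_trip=True):
    """Distance = speed of sound x time of flight (halved for an echo)."""
    d = SPEED_OF_SOUND * tof
    return d / 2.0 if round_trip else d
```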


Feature Extraction for Vision Based Micromanipulation

  • Jang, Min-Soo;Lee, Seok-Joo;Park, Gwi-Tae
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2002.10a
    • /
    • pp.41.5-41
    • /
    • 2002
  • This paper presents a feature-extraction algorithm for vision-based micromanipulation. To guarantee accurate micromanipulation, most micromanipulation systems use a vision sensor. Vision data from an optical microscope or a high-magnification lens carry a vast amount of information; however, characteristics of micro images such as emphasized contours, texture, and noise make it difficult to apply macro-scale image-processing algorithms to them. Extracting grasping points is a very important task in micromanipulation, because inaccurate grasping points can break the micro-gripper or lose the micro-object. To solve these problems and extract grasping points for micromanipulation...
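
The abstract is cut off before the method itself, so the following is only one plausible illustration of grasp-point extraction on a segmented micro-object: take the largest contour, find its principal axes by PCA, and grasp across the minor axis. It assumes a binary input image; all names are hypothetical.

```python
import cv2
import numpy as np

def grasp_points(binary_img):
    """Return a grasp pair across the minor axis of the largest blob."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    centre = pts.mean(axis=0)
    # Eigenvectors of the point covariance are the object's principal axes;
    # eigh sorts eigenvalues ascending, so column 0 is the minor axis.
    _, vecs = np.linalg.eigh(np.cov((pts - centre).T))
    minor = vecs[:, 0]
    half_w = np.abs((pts - centre) @ minor).max()   # half-width along minor
    return centre + half_w * minor, centre - half_w * minor
```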


Development of Vision Inspector for Simulating Image Acquisition in Automated Optical Inspection System (Automated Optical Inspection 시스템의 이미지 획득과정을 전산모사하는 Vision Inspector 개발)

  • Jeong, Sang-Cheol;Go, Nak-Hun;Kim, Dae-Chan;Seo, Seung-Won;Choe, Tae-Il;Lee, Seung-Geol
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2008.07a
    • /
    • pp.403-404
    • /
    • 2008
  • This report describes the development of the Vision Inspector program, which numerically simulates the image-acquisition process of a machine vision system for the automated optical inspection of products. The simulation model consists of an illuminator, a product to be inspected, and a camera with an image sensor; the final image is obtained by ray tracing.
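
A toy version of such a pipeline (not the authors' program): cast one ray per pixel from a pinhole camera, intersect a flat product surface, and shade it from a point light under a Lambertian model with inverse-square falloff. Geometry and names are illustrative assumptions.

```python
import numpy as np

def render(width, height, f_px, light_pos, plane_z, albedo=0.8):
    """Image the sensor would record of the plane z = plane_z."""
    img = np.zeros((height, width))
    cx, cy = width / 2.0, height / 2.0
    normal = np.array([0.0, 0.0, -1.0])          # surface faces the camera
    light = np.asarray(light_pos, float)
    for v in range(height):
        for u in range(width):
            ray = np.array([u - cx, v - cy, f_px], float)
            ray /= np.linalg.norm(ray)           # one ray per pixel
            hit = (plane_z / ray[2]) * ray       # hit point on the plane
            to_light = light - hit
            dist = np.linalg.norm(to_light)
            n_dot_l = max(normal @ (to_light / dist), 0.0)  # Lambert term
            img[v, u] = albedo * n_dot_l / dist**2          # 1/r^2 falloff
    return img
```

For example, `render(64, 64, f_px=80.0, light_pos=(0.0, -50.0, 50.0), plane_z=100.0)` yields the bright spot an illuminator would cast on a flat part.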


Development of a Grinding Robot System for the Engine Cylinder Liner's Oil Groove (실린더 라이너 오일그루브 가공 로봇 시스템 개발)

  • Noh, Tae-Yang;Lee, Yun-Sik;Jung, Chang-Wook;Oh, Yong-Chul
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.33 no.6
    • /
    • pp.614-619
    • /
    • 2009
  • An engine for marine propulsion or power generation consists of several cylinder liner-piston sets, with an oil groove on the inside wall of each cylinder liner for lubrication between the piston and the cylinder. The machining of oil grooves has so far been carried out manually because of the diversity of groove shapes. Recently, we developed an automatic grinding robot system for machining the oil grooves of engine cylinder liners. It can handle various types of oil grooves and adjusts its position by itself. The grinding robot system consists of a robot, a machining tool head, sensors, and a control system. After the cylinder liner is placed on the setup equipment, the robot automatically recognizes the liner's inside configuration using a laser displacement sensor and a vision sensor.
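
One hedged reading of the recognition step: sweep the laser displacement sensor around the bore, take the median range as the nominal wall radius, and flag samples cut deeper than a threshold as oil-groove candidates. A minimal sketch under those assumptions:

```python
import numpy as np

def groove_profile(ranges_mm, threshold_mm=0.5):
    """Separate groove samples from the bore wall in one laser sweep."""
    r = np.asarray(ranges_mm, float)
    r_wall = np.median(r)            # nominal bore-wall radius estimate
    depth = r - r_wall               # positive = cut deeper than the wall
    return r_wall, depth, depth > threshold_mm   # radius, profile, mask
```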

A 3-D Position Compensation Method of Industrial Robot Using Block Interpolation (블록 보간법을 이용한 산업용 로봇의 3차원 위치 보정기법)

  • Ryu, Hang-Ki;Woo, Kyung-Hang;Choi, Won-Ho;Lee, Jae-Kook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.3
    • /
    • pp.235-241
    • /
    • 2007
  • This paper proposes a self-calibration method for robots used in industrial assembly lines. The proposed method compensates position using a laser sensor and a vision camera. Because the laser sensor is a cross-type sensor that scans one horizontal and one vertical line, it is an efficient way to detect vehicle features and the winding shape of a vehicle body. For 3-D position compensation, we applied a block interpolation method. Feature points are selected by pattern matching, and the 3-D position is chosen by Euclidean distance mapping between 462 stored feature values and the evaluated feature point. To evaluate the proposed algorithm, experiments were performed on a real industrial vehicle assembly line. As a result, the robot's working points can be displayed as 3-D points, which are used to diagnose position errors and to reselect working points.
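
The paper names Euclidean distance mapping over 462 stored feature values and block interpolation without giving formulas, so the following is a plausible sketch: pick the nearest stored feature by Euclidean distance, then blend the correction vectors stored at the enclosing calibration block's eight corners trilinearly. Data layout and names are assumptions.

```python
import numpy as np

def nearest_feature(feature, feature_table):
    """Index of the stored feature (e.g. one of 462) closest in
    Euclidean distance to the measured one."""
    return int(np.argmin(np.linalg.norm(feature_table - feature, axis=1)))

def block_interpolate(p, block_lo, block_hi, corner_offsets):
    """Trilinear blend of the 3-D correction vectors at the block's eight
    corners; corner i carries bits (x, y, z) = (i>>2 & 1, i>>1 & 1, i & 1)."""
    lo, hi = np.asarray(block_lo, float), np.asarray(block_hi, float)
    t = (np.asarray(p, float) - lo) / (hi - lo)   # normalised block coords
    out = np.zeros(3)
    for i in range(8):
        bx, by, bz = (i >> 2) & 1, (i >> 1) & 1, i & 1
        w = ((t[0] if bx else 1.0 - t[0]) *
             (t[1] if by else 1.0 - t[1]) *
             (t[2] if bz else 1.0 - t[2]))
        out += w * np.asarray(corner_offsets[i], float)
    return out                                    # interpolated correction
```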