• Title/Summary/Keyword: Vision sensor


Command Fusion for Navigation of Mobile Robots in Dynamic Environments with Objects

  • Jin, Taeseok
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.1
    • /
    • pp.24-29
    • /
    • 2013
  • In this paper, we propose a fuzzy inference model for a navigation algorithm for a mobile robot that intelligently searches for a goal location in unknown dynamic environments. Our model uses sensor fusion based on situational commands from an ultrasonic sensor. Instead of the "physical sensor fusion" method, which generates the trajectory of the robot from the environment model and sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy combines fuzzy rules tuned for both goal approach and obstacle avoidance within a hierarchical behavior-based control architecture. To identify the environment, a command fusion technique is introduced in which the sensory data of the ultrasonic sensors and a vision sensor are fused into the identification process. The experimental results highlight interesting aspects of the goal-seeking, obstacle-avoiding, and decision-making processes that arise from the navigation interaction.
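The "command fusion" idea, where each behavior votes on a motor command and the votes are blended by fuzzy weights, can be illustrated with a minimal sketch. The gains, membership bounds, and function names below are illustrative assumptions, not the authors' tuned fuzzy rule base.

```python
import numpy as np

def goal_seek_command(goal_bearing_deg):
    """Steering rate (deg/s) that turns the robot toward the goal."""
    return 0.8 * goal_bearing_deg

def avoid_command(obstacle_bearing_deg):
    """Steering rate (deg/s) that turns the robot away from the obstacle."""
    return -1.2 * obstacle_bearing_deg

def obstacle_near_membership(distance_m, near=0.3, far=1.5):
    """Fuzzy membership of 'obstacle is near' (1 when near, 0 when far)."""
    return float(np.clip((far - distance_m) / (far - near), 0.0, 1.0))

def fuse_commands(goal_bearing_deg, obstacle_bearing_deg, obstacle_distance_m):
    """Blend the two behavior commands with weights from the fuzzy membership."""
    w_avoid = obstacle_near_membership(obstacle_distance_m)
    w_goal = 1.0 - w_avoid
    return (w_goal * goal_seek_command(goal_bearing_deg)
            + w_avoid * avoid_command(obstacle_bearing_deg))

# Example: goal 20 deg to the left, obstacle 10 deg to the right at 0.6 m
print(fuse_commands(20.0, -10.0, 0.6))
```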

Vision-based Sensor Fusion of a Remotely Operated Vehicle for Underwater Structure Diagnostication (수중 구조물 진단용 원격 조종 로봇의 자세 제어를 위한 비전 기반 센서 융합)

  • Lee, Jae-Min;Kim, Gon-Woo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.4
    • /
    • pp.349-355
    • /
    • 2015
  • Underwater robots generally perform tasks better than humans under certain underwater constraints, such as high pressure and limited light. To properly diagnose structures in an underwater environment using remotely operated vehicles, it is important for the vehicle to autonomously maintain its own position and orientation so as to avoid additional control effort. In this paper, we propose an efficient method to assist operation under the various disturbances acting on a remotely operated vehicle used for the diagnosis of underwater structures. A conventional AHRS-based bearing estimation system does not work well, owing to incorrect measurements caused by the hard-iron effect when the robot approaches a ferromagnetic structure. To overcome this drawback, we propose a sensor fusion algorithm that combines a camera and the AHRS to estimate the pose of the ROV. However, image information in an underwater environment is often unreliable and blurred by turbidity or suspended solids. We therefore suggest an efficient method for fusing the vision sensor and the AHRS using the amount of blur in the image as the criterion. To evaluate the amount of blur, we adopt two methods: quantification of the high-frequency components using power spectral density analysis of the 2D discrete Fourier transformed image, and identification of the blur parameter based on cepstrum analysis. We evaluate the robustness of the visual odometry and blur estimation methods under changes of light and distance, and the experiments verify that the blur estimation method based on cepstrum analysis shows better performance.
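Of the two blur measures evaluated in the paper, the power-spectral-density one is the easier to sketch: the share of spectral energy at high radial frequencies drops as a frame becomes more blurred, so it can gate how much the fusion trusts the camera. The cutoff and weight bounds below are illustrative assumptions, and the cepstrum-based estimator the authors favor is not shown.

```python
import numpy as np

def high_frequency_energy_ratio(gray, cutoff=0.25):
    """Share of power-spectrum energy above a normalized radial frequency.

    gray : 2D array (grayscale frame). A low ratio suggests a blurred
    (turbid) frame, so the fusion would down-weight the visual measurement
    relative to the AHRS.
    """
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    psd = np.abs(f) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radial frequency of every spectrum bin
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return psd[r > cutoff].sum() / psd.sum()

def vision_weight(gray, min_ratio=0.02, max_ratio=0.10):
    """Map the sharpness score to a 0..1 weight for the camera measurement."""
    ratio = high_frequency_energy_ratio(gray)
    return float(np.clip((ratio - min_ratio) / (max_ratio - min_ratio), 0.0, 1.0))
```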

Real-time People Occupancy Detection by Camera Vision Sensor (카메라 비전 센서를 활용하는 실시간 사람 점유 검출)

  • Gil, Jong In;Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.22 no.6
    • /
    • pp.774-784
    • /
    • 2017
  • Occupancy sensors installed in buildings and households turn off the lights when a space is vacant. Currently, PIR (pyroelectric infrared) motion sensors are widely used, and research using camera sensors has recently been carried out to overcome the drawback of PIR sensors, namely that they cannot detect static people. If a satisfactory tradeoff between cost and performance is achieved, camera sensors are expected to replace current PIRs. In this paper, we propose vision sensor-based occupancy detection composed of detection, tracking, and recognition. Our software is designed for real-time processing: in experiments, 14.5 fps was achieved on a 15 fps USB input, and the detection accuracy reached 82.0%.
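A rough idea of a camera-based occupancy pipeline (a person detector feeding an occupancy decision) can be sketched with OpenCV's stock HOG pedestrian detector. This detector and the thresholds are stand-ins for the paper's own detection, tracking, and recognition stages, not the authors' implementation.

```python
import cv2
import numpy as np

# Pretrained HOG pedestrian detector shipped with OpenCV (an assumed stand-in)
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def frame_occupied(frame, min_weight=0.5):
    """Return True if the frame contains at least one confident person detection."""
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    weights = np.asarray(weights).ravel()
    return weights.size > 0 and float(weights.max()) > min_weight

# Usage: poll a USB camera and report whether the space is occupied.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("occupied" if frame_occupied(frame) else "vacant")
cap.release()
```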

A Study for Vision-based Estimation Algorithm of Moving Target Using Aiming Unit of Unguided Rocket (무유도 로켓의 조준 장치를 이용한 영상 기반 이동 표적 정보 추정 기법 연구)

  • Song, Jin-Mo;Lee, Sang-Hoon;Do, Joo-Cheol;Park, Tai-Sun;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.20 no.3
    • /
    • pp.315-327
    • /
    • 2017
  • In this paper, we present a method for estimating the position and velocity of a moving target using the range and bearing measurements from the multiple sensors of an aiming unit. In many cases, a conventional low-cost gyro sensor and a portable laser range finder (LRF) degrade the accuracy of the estimation. To address these problems, we propose two methods: background image tracking, used to assist the low-cost gyro sensor, and principal component analysis (PCA), used to cope with the shortcomings of the portable LRF. We show that our method is robust with respect to low-frequency, biased, and noisy inputs, and we present a comparison between our method and the extended Kalman filter (EKF).
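The abstract does not spell out how PCA is applied; one plausible reading is that the principal axis of a short window of measured target positions gives the motion direction while averaging out range-finder noise. The sketch below works under that assumption, with made-up data.

```python
import numpy as np

def pca_motion_estimate(positions, dt):
    """Estimate target heading and speed from a window of noisy 2D positions.

    positions : (N, 2) array of range/bearing-derived x-y points (newest last)
    dt        : sampling interval in seconds
    """
    pts = np.asarray(positions, dtype=float)
    mean = pts.mean(axis=0)
    # First principal component of the centred points = dominant motion direction
    _, _, vt = np.linalg.svd(pts - mean)
    direction = vt[0]
    # Project points onto that axis and fit a constant-velocity line
    s = (pts - mean) @ direction
    t = np.arange(len(pts)) * dt
    speed = np.polyfit(t, s, 1)[0]
    if speed < 0:                       # keep the axis pointing along the motion
        direction, speed = -direction, -speed
    return mean, direction, speed

# Example: target moving along +x at about 2 m/s, sampled at 10 Hz with noise
rng = np.random.default_rng(0)
true = np.stack([np.linspace(0, 2, 11), np.zeros(11)], axis=1)
noisy = true + 0.05 * rng.standard_normal((11, 2))
print(pca_motion_estimate(noisy, 0.1))
```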

Measurement of Hot Wire-Rod Cross-Section by Vision System (비전시스템에 의한 열간 선재 단면 측정)

  • Park, Joong-Jo;Tak, Young-Bong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.6 no.12
    • /
    • pp.1106-1112
    • /
    • 2000
  • In this paper, we present a vision system that measures the cross-section of a hot wire-rod in a steel plant. We developed a mobile vision system capable of accurate measurement that is robust to vibration and jolts while moving. Our system uses green laser light sources and CCD cameras as the sensor: laser sheet beams form a cross-section contour on the surface of the hot wire-rod, and the light reflected from the wire-rod is imaged on the CCD cameras. We use four lasers and four cameras to obtain an image of the complete cross-section contour without occluded regions. We also perform camera calibration to obtain each camera's physical parameters using a single calibration pattern sheet. In our measurement algorithm, the distorted four-camera images are corrected using the camera calibration information and added to generate an image of the complete cross-section contour of the wire-rod. From this image, the cross-section contour is then extracted by preprocessing and segmentation, and its height, width, and area are measured.
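The final measurement step (extracting the merged contour and reporting height, width, and area) can be sketched with OpenCV, assuming the corrected and combined contour image and a millimetre-per-pixel scale from calibration are already available. The function below is an illustration of that step, not the authors' algorithm.

```python
import cv2

def measure_cross_section(contour_img, mm_per_px):
    """Measure width, height, and area of the largest closed contour.

    contour_img : 8-bit grayscale image of the merged cross-section contour
    mm_per_px   : scale factor recovered from the camera calibration (assumed known)
    """
    _, binary = cv2.threshold(contour_img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        raise ValueError("no contour found")
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    area_px = cv2.contourArea(c)
    return w * mm_per_px, h * mm_per_px, area_px * mm_per_px ** 2
```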


Development and application of a vision-based displacement measurement system for structural health monitoring of civil structures

  • Lee, Jong Jae;Fukuda, Yoshio;Shinozuka, Masanobu;Cho, Soojin;Yun, Chung-Bang
    • Smart Structures and Systems
    • /
    • v.3 no.3
    • /
    • pp.373-384
    • /
    • 2007
  • For structural health monitoring (SHM) of civil infrastructure, displacement is a good descriptor of structural behavior under all potential disturbances. However, it is not easy to measure the displacement of civil infrastructure, since conventional sensors need a reference point, and the reference point is sometimes inaccessible because of geographic conditions, such as a highway or river under a bridge, which makes installation of measuring devices time-consuming and costly, if not impossible. To resolve this issue, a vision-based real-time displacement measurement system using digital image processing techniques is developed. The effectiveness of the proposed system was verified by comparing the load-carrying capacities of a steel-plate girder bridge obtained from a conventional sensor and from the present system. Further, to measure multiple points simultaneously, a synchronized vision-based system is developed using a master/slave architecture with wireless data communication. For verification, the displacement measured by the synchronized vision-based system was compared with data measured by conventional contact-type sensors, linear variable differential transformers (LVDTs), in a laboratory test.
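Vision-based displacement systems of this kind commonly track a target panel by normalized cross-correlation template matching and scale the pixel shift by a calibration factor. The sketch below assumes that approach and a known mm-per-pixel scale; the abstract does not detail the authors' image processing, so treat this as an illustration.

```python
import cv2

def pixel_displacement(reference, current, template_rect):
    """Track a target panel between grayscale frames with normalized cross-correlation."""
    x, y, w, h = template_rect
    template = reference[y:y + h, x:x + w]
    res = cv2.matchTemplate(current, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    return max_loc[0] - x, max_loc[1] - y      # pixel shift (dx, dy)

def physical_displacement(reference, current, template_rect, mm_per_px):
    """Convert the pixel shift to millimetres using a known target dimension."""
    dx, dy = pixel_displacement(reference, current, template_rect)
    return dx * mm_per_px, dy * mm_per_px
```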

Test of Vision Stabilizer for Unmanned Vehicle Using Virtual Environment and 6 Axis Motion Simulator (가상 환경 및 6축 모션 시뮬레이터를 이용한 무인차량 영상 안정화 장치 시험)

  • Kim, Sunwoo;Ki, Sun-Ock;Kim, Sung-Soo
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.39 no.2
    • /
    • pp.227-233
    • /
    • 2015
  • In this study, an indoor test environment was developed for studying the vision stabilizer of an unmanned vehicle, using a virtual environment and a 6-axis motion simulator. The real driving environment was replaced by a virtual environment based on the Aberdeen Proving Ground bump test course for military tank testing, and the vehicle motion was reproduced by the 6-axis motion simulator. Virtual-reality driving courses were displayed in front of the vision stabilizer, which was mounted on top of the motion simulator. The performance of the stabilizer was investigated by checking the camera image and the pitch and roll angles of the stabilizer captured by the camera's IMU sensor.

Development of Multi-Laser Vision System For 3D Surface Scanning (3 차원 곡면 데이터 획득을 위한 멀티 레이져 비젼 시스템 개발)

  • Lee, J.H.;Kwon, K.Y.;Lee, H.C.;Doe, Y.C.;Choi, D.J.;Park, J.H.;Kim, D.K.;Park, Y.J.
    • Proceedings of the KSME Conference
    • /
    • 2008.11a
    • /
    • pp.768-772
    • /
    • 2008
  • Various scanning systems have been studied in many industrial areas to acquire range data or to reconstruct an explicit 3D model. Optical technology is now widely used by virtue of its non-contact nature and high accuracy. In this paper, we describe a 3D laser scanning system developed to reconstruct the 3D surface of a large-scale object, such as a curved plate of a ship hull. Our scanning system comprises four parallel laser vision modules using a triangulation technique. For the multi-laser vision system, a calibration method based on the least-squares technique is applied. For global scanning, an effective method is presented that avoids the difficult matching problem among the scanning results of the individual cameras. A minimal image-processing algorithm and a robot-based calibration technique are also applied. A prototype was implemented for testing.
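The least-squares calibration mentioned above can be illustrated, in simplified form, by fitting an affine map from laser-stripe pixel coordinates to coordinates on the laser plane using known calibration targets. The real system likely uses a fuller camera model, so this is only a sketch of the fitting step.

```python
import numpy as np

def fit_affine_ls(image_pts, world_pts):
    """Least-squares affine mapping from laser-stripe image points to plane coordinates.

    image_pts : (N, 2) pixel coordinates of calibration targets
    world_pts : (N, 2) known coordinates of the same targets on the laser plane
    """
    img = np.asarray(image_pts, dtype=float)
    wld = np.asarray(world_pts, dtype=float)
    A = np.hstack([img, np.ones((len(img), 1))])   # rows of [u, v, 1]
    # Solve A @ M ~= world for the 3x2 affine matrix M in the least-squares sense
    M, *_ = np.linalg.lstsq(A, wld, rcond=None)
    return M

def map_to_plane(image_pts, M):
    """Apply the fitted mapping to new stripe points."""
    img = np.asarray(image_pts, dtype=float)
    return np.hstack([img, np.ones((len(img), 1))]) @ M
```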


A Parallel Implementation of Multiple Non-overlapping Cameras for Robot Pose Estimation

  • Ragab, Mohammad Ehab;Elkabbany, Ghada Farouk
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.11
    • /
    • pp.4103-4117
    • /
    • 2014
  • Image processing and computer vision algorithms are gaining increasing attention in a variety of application areas, such as robotics and man-machine interaction. Vision allows the development of flexible, intelligent, and less intrusive approaches than most other sensor systems. In this work, we determine the location and orientation of a mobile robot, which is crucial for performing its tasks. To operate in real time, the different vision routines need to be sped up. Therefore, we present and evaluate a method for introducing parallelism into the multiple non-overlapping camera pose estimation algorithm proposed in [1], in which the problem is solved in real time using multiple non-overlapping cameras and the extended Kalman filter (EKF). Four cameras arranged in two back-to-back pairs are placed on the platform of a moving robot. An important benefit of using multiple cameras for robot pose estimation is the capability of resolving vision uncertainties such as the bas-relief ambiguity. The proposed method is based on algorithmic skeletons for low, medium, and high levels of parallelization. The analysis shows that the use of a multiprocessor system enhances system performance by about 87%. In addition, the proposed design is scalable, which is necessary in this application, where the number of features changes repeatedly.
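A minimal illustration of the medium-grain parallelism (one task per camera) is to evaluate each camera's measurement residuals concurrently before a single sequential EKF update. The function names and structure below are assumptions for illustration, not the algorithmic-skeleton design of [1].

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def camera_innovation(args):
    """Per-camera work: observed feature positions minus predicted ones.
    This is the part that parallelizes cleanly, one task per camera."""
    observed, predicted = args
    return np.asarray(observed) - np.asarray(predicted)

def stacked_innovation(per_camera_obs, per_camera_pred, workers=4):
    """Run the per-camera computations concurrently, then stack the result
    for a single (sequential) EKF measurement update."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(camera_innovation,
                              zip(per_camera_obs, per_camera_pred)))
    return np.concatenate(parts)

if __name__ == "__main__":
    obs = [np.random.rand(10, 2) for _ in range(4)]   # 4 cameras, 10 features each
    pred = [o + 0.01 for o in obs]
    print(stacked_innovation(obs, pred).shape)        # (40, 2)
```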

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal
    • /
    • v.36 no.6
    • /
    • pp.913-923
    • /
    • 2014
  • This paper proposes a global mapping algorithm for multiple robots based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, with an object extraction method using Lucas-Kanade optical flow motion detection on images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map using the map data obtained from all of the individual robots. Global mapping is time-consuming because map data must be exchanged between the individual robots while all areas are searched. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all information around a robot simultaneously. The computational load of the correction algorithm is reduced compared with existing methods by correcting only the objects' feature points. The proposed algorithm has two steps: first, a local map is created for each robot based on the omnidirectional-vision SLAM approach; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps produced by the proposed algorithm with real maps.
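The object extraction step, Lucas-Kanade optical flow between consecutive fisheye frames with only the moving points kept, can be sketched with OpenCV's pyramidal LK tracker. The corner detector, window size, and motion threshold below are assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def moving_feature_points(prev_gray, curr_gray, min_motion_px=1.0):
    """Detect corners in the previous frame, track them with pyramidal
    Lucas-Kanade optical flow, and keep only points that actually moved."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                             winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    p0, p1 = p0[ok].reshape(-1, 2), p1[ok].reshape(-1, 2)
    moved = np.linalg.norm(p1 - p0, axis=1) > min_motion_px
    return p0[moved], p1[moved]
```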