• Title/Summary/Keyword: vision-based system


Development of a SLAM System for Small UAVs in Indoor Environments using Gaussian Processes (가우시안 프로세스를 이용한 실내 환경에서 소형무인기에 적합한 SLAM 시스템 개발)

  • Jeon, Young-San;Choi, Jongeun;Lee, Jeong Oog
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.11
    • /
    • pp.1098-1102
    • /
    • 2014
  • Localization of aerial vehicles and map building of flight environments are key technologies for the autonomous flight of small UAVs. In outdoor environments, an unmanned aircraft can easily use GPS (Global Positioning System) for localization with acceptable accuracy. However, as GPS is not available in indoor environments, a SLAM (Simultaneous Localization and Mapping) system suitable for small UAVs is needed. In this paper, we suggest a vision-based SLAM system that uses vision sensors and an AHRS (Attitude Heading Reference System) sensor. Feature points in images captured from the vision sensor are obtained using a GPU (Graphics Processing Unit)-based SIFT (Scale-Invariant Feature Transform) algorithm. Those feature points are then combined with attitude information obtained from the AHRS to estimate the position of the small UAV. Based on the location information and color distribution, a Gaussian process model is generated, which serves as the map. The experimental results show that the position of a small unmanned aircraft is estimated properly and the map of the environment is constructed using the proposed method. Finally, the reliability of the proposed method is verified by comparing the estimated values with the actual values.
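
As a rough illustration of the Gaussian-process map-building idea in the abstract above (not the authors' implementation), the sketch below fits a GP to estimated 2D positions and an observed intensity value using scikit-learn; the data, kernel, and query grid are assumptions.

```python
# Minimal sketch of building a Gaussian-process "map" from estimated UAV
# positions and an observed intensity value at each position.
# Assumptions: scikit-learn is available; positions and intensities are
# placeholder arrays, and the RBF + white-noise kernel is an arbitrary choice.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Estimated 2D positions of the UAV (from vision + AHRS) and the
# grayscale intensity observed at each position (placeholder data).
positions = np.random.rand(200, 2) * 10.0          # [x, y] in meters
intensity = np.sin(positions[:, 0]) + 0.1 * np.random.randn(200)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
gp_map = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp_map.fit(positions, intensity)

# Query the map on a grid: the predictive mean is the map value and the
# predictive std expresses how well that region has been observed.
xs, ys = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
mean, std = gp_map.predict(grid, return_std=True)
print(mean.shape, std.shape)  # (2500,) (2500,)
```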

Design and Fabrication of Multi-rotor system for Vision based Autonomous Landing (영상 기반 자동 착륙용 멀티로터 시스템 설계 및 개발)

  • Kim, Gyou-Beom;Song, Seung-Hwa;Yoon, Kwang-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.12 no.6
    • /
    • pp.141-146
    • /
    • 2012
  • This paper introduces the development of a multi-rotor system and a vision-based autonomous landing system. The multi-rotor platform is modeled as a rigid body using the Newton-Euler formulation, and is simulated and tuned with an LQR control algorithm. The vision-based autonomous landing system uses a single camera mounted on the multi-rotor system. An augmented-reality marker detection algorithm is used, and the autonomous landing code is tested with a GCS (Ground Control Station) for precision landing.
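
The entry above pairs marker detection with autonomous landing. A minimal sketch of that kind of pipeline, assuming OpenCV >= 4.7 with the ArUco module (the paper's own marker/AR algorithm is not specified), detects a marker and turns its pixel offset into a lateral landing correction; the camera focal length, marker size, and frame source are placeholders.

```python
# Sketch: detect a fiducial marker and derive a lateral correction for landing.
# Assumptions: OpenCV >= 4.7 with the aruco module; `frame` comes from the
# onboard camera; focal length and marker size are placeholder values.
import cv2
import numpy as np

DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(DICT, cv2.aruco.DetectorParameters())

def landing_correction(frame, fx=600.0, marker_size_m=0.30):
    """Return (dx, dy) offset of the marker from the image center, in meters."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return None  # no marker: hold position or continue the search pattern
    c = corners[0].reshape(4, 2)
    center = c.mean(axis=0)
    # Approximate distance from the apparent marker size (pinhole model).
    side_px = np.linalg.norm(c[0] - c[1])
    distance = fx * marker_size_m / side_px
    h, w = gray.shape
    dx_px, dy_px = center[0] - w / 2.0, center[1] - h / 2.0
    # Convert the pixel offset to a metric offset at the marker's distance.
    return (dx_px * distance / fx, dy_px * distance / fx)
```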

Dynamic 3D Worker Pose Registration for Safety Monitoring in Manufacturing Environment based on Multi-domain Vision System (다중 도메인 비전 시스템 기반 제조 환경 안전 모니터링을 위한 동적 3D 작업자 자세 정합 기법)

  • Ji Dong Choi;Min Young Kim;Byeong Hak Kim
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.6
    • /
    • pp.303-310
    • /
    • 2023
  • A single vision system limits the ability to accurately capture the spatial constraints and interactions between dynamic workers and robots, such as gantry robots and collaborative robots, during production manufacturing. In this paper, we propose a 3D pose registration method for dynamic workers based on a multi-domain vision system for safety monitoring in manufacturing environments. The method uses OpenPose, a deep-learning-based pose estimation model, to estimate a worker's dynamic two-dimensional posture in real time and reconstruct it into three-dimensional coordinates. The 3D coordinates reconstructed from the multi-domain vision system are aligned using the ICP algorithm and then registered into a single 3D coordinate system. The proposed method showed effective performance in a manufacturing process environment, with an average registration error of 0.0664 m and an average frame rate of 14.597 frames per second.
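
A minimal numpy/scipy sketch of the ICP-style rigid registration step described above, aligning 3D joint sets reconstructed in two camera domains; the Kabsch/SVD closed-form step and the nearest-neighbour loop are the textbook formulation, not the paper's pipeline, and the inputs are placeholders.

```python
# Sketch: rigid registration of 3D worker-joint sets from two vision domains.
# Assumptions: numpy/scipy only; src and dst are (N, 3) arrays of the same
# skeleton observed from two calibrated domains (placeholder data).
import numpy as np
from scipy.spatial import cKDTree

def kabsch(src, dst):
    """Best-fit rotation R and translation t mapping src points onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=20):
    """Simple point-to-point ICP: nearest neighbours + Kabsch, repeated."""
    tree = cKDTree(dst)
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(cur)                 # correspondences
        R, t = kabsch(cur, dst[idx])             # closed-form alignment step
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, float(np.mean(np.linalg.norm(cur - dst[idx], axis=1)))
```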

Study on the Target Tracking of a Mobile Robot Using Active Stereo-Vision System (능동 스테레오 비젼을 시스템을 이용한 자율이동로봇의 목표물 추적에 관한 연구)

  • 이희명;이수희;이병룡;양순용;안경관
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.915-919
    • /
    • 2003
  • This paper presents a fuzzy-motion-control based tracking algorithm for mobile robots, which uses the geometrical information derived from the active stereo-vision system mounted on the mobile robot. The active stereo-vision system consists of two color cameras that rotate in two angular dimensions. With the stereo-vision system, the center position and depth information of the target object can be calculated. The proposed fuzzy motion controller is used to calculate the tracking velocity and angular position of the mobile robot, which enables the mobile robot to keep following the object at a constant distance and orientation.

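For the active stereo tracking entry above, the core computation is triangulating the target's depth and bearing and feeding them to a fuzzy motion controller. The sketch below is only an illustration under assumed camera parameters and a toy two-rule fuzzy base, not the authors' controller.

```python
# Sketch: depth/bearing from a stereo pair and a toy fuzzy-style controller
# that keeps a constant following distance. All parameters are assumptions.
import numpy as np

def target_depth_bearing(u_left, u_right, cx=320.0, fx=500.0, baseline_m=0.12):
    """Depth from disparity (Z = f*B/d) and horizontal bearing of the target."""
    disparity = float(u_left - u_right)
    depth = fx * baseline_m / max(disparity, 1e-6)
    bearing = np.arctan2((u_left + u_right) / 2.0 - cx, fx)
    return depth, bearing

def fuzzy_tracking_command(depth, bearing, desired_depth=1.5):
    """Toy rule base: ramp memberships for 'too near' / 'too far' drive the
    forward speed, and the bearing drives the turn rate."""
    err = depth - desired_depth
    near = float(np.clip(-err / 0.5, 0.0, 1.0))   # membership of 'too near'
    far = float(np.clip(err / 0.5, 0.0, 1.0))     # membership of 'too far'
    speed = -0.3 * near + 0.5 * far               # back off vs. approach
    turn = 1.2 * bearing                          # steer toward the target
    return speed, turn

depth, bearing = target_depth_bearing(u_left=350.0, u_right=330.0)
print(fuzzy_tracking_command(depth, bearing))
```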

Design and Experimental Evaluation of Action Level in a Hybrid Control Structure for Vision Based Soccer Robot (비젼기반 축구로봇시스템을 위한 복합제어구조에서 행위계층설계 및 시험적 평가)

  • Shim, Hyun-Sik;Sung, Yoon-Gyeoung;Kim, Jong-Hwan
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.6 no.2
    • /
    • pp.197-206
    • /
    • 2000
  • A hybrid control structure for a vision-based soccer robot system is considered. The structure is composed of four levels: the role, action, behavior, and execution levels. The control structure, which is a combination of hierarchical and behavioral structures, can efficiently meet the behavior and design specifications of a soccer robot system. Among the four levels, only the design of the action level is proposed in this paper and experimentally evaluated. A design hypothesis and an evaluation method are presented to improve the reliability and performance of the robot system. As an essential element of soccer robot system design, a systematic design procedure for the action level is proposed. With the proposed structure and action-level algorithm, excellent results were obtained at MiroSot'98 held in France.

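The role/action/behavior/execution structure in the soccer-robot entry above is essentially a layered decision pipeline. The skeleton below is a schematic rendering of such a four-level structure with invented class and method names; the paper itself details only the action level.

```python
# Schematic four-level pipeline (role -> action -> behavior -> execution).
# All class/method names are invented for illustration only.
from dataclasses import dataclass

@dataclass
class WorldState:
    ball_xy: tuple
    robot_xy: tuple

class RoleLevel:
    def select_action(self, s: WorldState) -> str:
        # e.g. attacker role: shoot when close to the ball, otherwise approach it
        d = abs(s.ball_xy[0] - s.robot_xy[0]) + abs(s.ball_xy[1] - s.robot_xy[1])
        return "shoot" if d < 0.2 else "go_to_ball"

class ActionLevel:
    def select_behavior(self, action: str) -> str:
        return {"shoot": "kick_forward", "go_to_ball": "move_to_point"}[action]

class BehaviorLevel:
    def wheel_velocities(self, behavior: str) -> tuple:
        return (1.0, 1.0) if behavior == "kick_forward" else (0.6, 0.4)

class ExecutionLevel:
    def send(self, vl: float, vr: float) -> None:
        print(f"wheel command: left={vl:.2f}, right={vr:.2f}")

state = WorldState(ball_xy=(1.0, 0.5), robot_xy=(0.2, 0.2))
role, action, behavior, execution = RoleLevel(), ActionLevel(), BehaviorLevel(), ExecutionLevel()
execution.send(*behavior.wheel_velocities(action.select_behavior(role.select_action(state))))
```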

A Computer Vision-Based Banknote Recognition System for the Blind with an Accuracy of 98% on Smartphone Videos

  • Sanchez, Gustavo Adrian Ruiz
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.6
    • /
    • pp.67-72
    • /
    • 2019
  • This paper proposes a computer vision-based banknote recognition system intended to assist the blind. The system is robust and fast in recognizing banknotes in videos recorded with a smartphone in real-life scenarios. To reduce the computation time and enable robust recognition in cluttered environments, this study segments the banknote candidate area from the background using a technique called the Pixel-Based Adaptive Segmenter (PBAS). The Speeded-Up Robust Features (SURF) interest point detector is used, and SURF feature vectors are computed only when sufficient interest points are found. The proposed algorithm achieves a recognition accuracy of 98%, a 100% true recognition rate, and a 0% false recognition rate. Although Korean banknotes are used as a working example, the proposed system can be applied to recognize other countries' banknotes.
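
The SURF stage of the pipeline above can be approximated in OpenCV. The sketch below, assuming an opencv-contrib build with the non-free SURF module enabled and placeholder image files, matches SURF descriptors of a banknote template against a frame once enough interest points are found; PBAS itself is not part of stock OpenCV, so the segmentation stage is omitted here.

```python
# Sketch: SURF-based banknote matching against a template.
# Assumptions: opencv-contrib-python built with non-free modules (for SURF);
# template.png / frame.png are placeholder images; thresholds are arbitrary.
import cv2

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
bf = cv2.BFMatcher(cv2.NORM_L2)

template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)   # known banknote
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)         # smartphone frame

kp_t, des_t = surf.detectAndCompute(template, None)
kp_f, des_f = surf.detectAndCompute(frame, None)

recognized = False
if des_f is not None and len(kp_f) > 50:      # "sufficient interest points" gate
    matches = [m for m in bf.knnMatch(des_t, des_f, k=2) if len(m) == 2]
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # ratio test
    recognized = len(good) > 30
print("banknote recognized:", recognized)
```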

A study on the rigid body placement task of robot system based on the computer vision system (컴퓨터 비젼시스템을 이용한 로봇시스템의 강체 배치 실험에 대한 연구)

  • 장완식;유창규;신광수;김호윤
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.1114-1119
    • /
    • 1995
  • This paper presents the development of an estimation model and a control method based on a new computer vision approach. The proposed control method is accomplished using a sequential estimation scheme that permits placement of the rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed, based on a model that generalizes the known kinematics of a 4-axis SCARA robot to accommodate unknown relative camera position and orientation. Based on the parameters estimated for each camera, the joint angles of the robot are estimated by an iterative method. The method is tested experimentally in two ways: an estimation model test and a three-dimensional rigid body placement task. These results show that the control scheme used is precise and robust. This feature can open the door to a range of multi-axis robot applications such as assembly and welding.

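For the rigid-body placement entry above, the core step is an iterative estimation of view parameters from image-plane observations. The sketch below illustrates that kind of iterative fit with a generic 6-parameter affine image model solved by scipy's least_squares; it is not the paper's camera or robot model, and the data are synthetic.

```python
# Sketch: iteratively estimate a 6-parameter affine image model mapping planar
# robot-frame points to observed pixel coordinates (a stand-in illustration of
# view-parameter estimation; the model and data are assumptions).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
pts_robot = rng.uniform(-0.5, 0.5, size=(20, 2))          # known planar points (m)
true_p = np.array([800.0, -30.0, 320.0, 25.0, 790.0, 240.0])

def project(p, xy):
    a, b, c, d, e, f = p
    u = a * xy[:, 0] + b * xy[:, 1] + c
    v = d * xy[:, 0] + e * xy[:, 1] + f
    return np.column_stack([u, v])

# Simulated image observations with pixel noise.
pts_image = project(true_p, pts_robot) + rng.normal(0, 0.5, size=(20, 2))

def residuals(p):
    return (project(p, pts_robot) - pts_image).ravel()

fit = least_squares(residuals, x0=np.array([500.0, 0.0, 0.0, 0.0, 500.0, 0.0]))
print("estimated view parameters:", np.round(fit.x, 1))
```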

Particle Filter Based Feature Points Tracking for Vision Based Navigation System (영상기반항법을 위한 파티클 필터 기반의 특징점 추적 필터 설계)

  • Won, Dae-Hee;Sung, Sang-Kyung;Lee, Young-Jae
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.40 no.1
    • /
    • pp.35-42
    • /
    • 2012
  • In this study, a feature-point tracking algorithm using a particle filter is suggested for a vision-based navigation system. By applying a dynamic model of the feature point, the tracking performance is improved in high-dynamic conditions, where a conventional KLT (Kanade-Lucas-Tomasi) tracker cannot give a solution. Furthermore, the particle filter is introduced to cope with the irregular characteristics of vision data. Post-processing of recorded vision data shows that the tracking performance of the suggested algorithm is more robust than that of KLT in high-dynamic conditions.
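
A compact numpy sketch in the spirit of the particle-filter tracking described above: a constant-velocity feature state is propagated with noise and weighted by image-patch similarity. The motion model, likelihood, and parameters are assumptions, not the paper's design.

```python
# Sketch: particle filter tracking one feature point with a constant-velocity
# model and an image-patch similarity likelihood. Parameters are assumptions.
import numpy as np

N = 500                                   # number of particles
state = np.zeros((N, 4))                  # [x, y, vx, vy] per particle
state[:, :2] = np.array([120.0, 80.0]) + np.random.randn(N, 2) * 3.0
weights = np.full(N, 1.0 / N)

def patch_likelihood(frame, template, xy, half=7):
    """Gaussian likelihood of the SSD between the (15x15) template and the
    local patch around xy in a grayscale frame."""
    x, y = int(round(xy[0])), int(round(xy[1]))
    h, w = frame.shape
    if not (half <= x < w - half and half <= y < h - half):
        return 1e-12
    patch = frame[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    return np.exp(-np.mean((patch - template) ** 2) / (2.0 * 400.0))

def pf_step(frame, template, dt=1.0 / 30.0):
    global state, weights
    # Predict: constant-velocity motion plus process noise.
    state[:, :2] += state[:, 2:] * dt + np.random.randn(N, 2) * 2.0
    state[:, 2:] += np.random.randn(N, 2) * 5.0
    # Update: weight particles by patch similarity, then normalize.
    weights = np.array([patch_likelihood(frame, template, s[:2]) for s in state])
    weights /= weights.sum()
    # Systematic resampling, then return the particle mean as the track estimate.
    pos = (np.arange(N) + np.random.rand()) / N
    idx = np.minimum(np.searchsorted(np.cumsum(weights), pos), N - 1)
    state = state[idx]
    weights = np.full(N, 1.0 / N)
    return state[:, :2].mean(axis=0)
```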

Development of an FPGA-based Sealer Coating Inspection Vision System for Automotive Glass Assembly Automation Equipment (자동차 글라스 조립 자동화설비를 위한 FPGA기반 실러 도포검사 비전시스템 개발)

  • Ju-Young Kim;Jae-Ryul Park
    • Journal of Sensor Science and Technology
    • /
    • v.32 no.5
    • /
    • pp.320-327
    • /
    • 2023
  • In this study, an FPGA-based sealer inspection system was developed to inspect the sealer applied to install vehicle glass on a car body. The sealer is a liquid or paste-like material that provides sealing, waterproofing, and adhesion when mounting and assembling vehicle parts on a car body. The existing system installed in the vehicle parts line does not detect the sealer in the glass rotation section and takes a long time to process. This study developed a line laser camera sensor and an FPGA vision signal processing module to solve these problems. The line laser camera sensor was developed so that the resolution and speed of the camera for data acquisition can be modified according to the irradiation angle of the laser. It was also designed with the mountability of the entire system in mind, to prevent interference with the sealer ejection machine. In addition, a vision signal processing module was developed using the Zynq-7020 FPGA chip to improve the processing speed of the algorithm that converts the sealer shape image acquired from a 2D camera into a profile and calculates the width and height of the sealer from that profile. The performance of the developed sealer application inspection system was verified by establishing an experimental environment identical to that of an actual automobile production line. The experimental results confirmed that the sealer application inspection performs at a level that satisfies the requirements of automotive field standards.
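
The width/height computation mentioned above can be illustrated with a short routine operating on a single laser-line height profile; the threshold, sample spacing, and synthetic profile are assumptions and say nothing about the FPGA implementation itself.

```python
# Sketch: compute sealer bead width and height from one laser-line height
# profile. Sample spacing, threshold, and the synthetic profile are assumptions.
import numpy as np

def bead_width_height(profile_mm, spacing_mm=0.1, threshold_mm=0.5):
    """Width = extent of samples above threshold; height = peak above baseline."""
    baseline = np.median(profile_mm)              # flat glass surface level
    bead = profile_mm - baseline
    above = np.flatnonzero(bead > threshold_mm)
    if above.size == 0:
        return 0.0, 0.0                           # no sealer detected
    width = (above[-1] - above[0] + 1) * spacing_mm
    height = float(bead.max())
    return width, height

# Synthetic profile: flat glass with a ~3 mm high, ~5 mm wide bead in the middle.
x = np.linspace(-10, 10, 201)
profile = 0.2 * np.random.rand(201) + 3.0 * np.exp(-(x / 2.0) ** 2)
print(bead_width_height(profile))
```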

Development of the Driving path Estimation Algorithm for Adaptive Cruise Control System and Advanced Emergency Braking System Using Multi-sensor Fusion (ACC/AEBS 시스템용 센서퓨전을 통한 주행경로 추정 알고리즘)

  • Lee, Dongwoo;Yi, Kyongsu;Lee, Jaewan
    • Journal of Auto-vehicle Safety Association
    • /
    • v.3 no.2
    • /
    • pp.28-33
    • /
    • 2011
  • This paper presents a driving path estimation algorithm for an adaptive cruise control system and an advanced emergency braking system using multi-sensor fusion. Through data collection, the characteristics of road curvature obtained from yaw-rate filtering and from the vision sensor are analyzed. The yaw-rate-based road curvature and the vision-sensor road curvature are fused into a single curvature using a weighting factor that accounts for the characteristics of each curvature source. The proposed driving path estimation algorithm has been investigated via simulations performed with the vehicle dynamics package CarSim and Matlab/Simulink. The simulations show that the proposed driving path estimation algorithm improves the primary target detection rate.
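
The fusion step above reduces to blending a yaw-rate-derived curvature with the vision lane curvature through a weighting factor. A minimal sketch, with the weighting rule being an assumption rather than the paper's:

```python
# Sketch: fuse yaw-rate-based and vision-based road curvature with a
# speed-dependent weighting factor (the weighting rule is an assumption).
import numpy as np

def yaw_rate_curvature(yaw_rate_rps, speed_mps):
    """Path curvature from filtered yaw rate: kappa = yaw_rate / v."""
    return yaw_rate_rps / max(speed_mps, 1.0)

def fuse_curvature(kappa_yaw, kappa_vision, speed_mps, vision_confidence):
    """Weight toward the vision curvature when it is confident and at higher
    speed (look-ahead matters more); otherwise lean on the yaw-rate estimate."""
    w_vision = float(np.clip(vision_confidence * speed_mps / 30.0, 0.0, 1.0))
    return w_vision * kappa_vision + (1.0 - w_vision) * kappa_yaw

k_yaw = yaw_rate_curvature(yaw_rate_rps=0.05, speed_mps=20.0)   # 1/m
k_fused = fuse_curvature(k_yaw, kappa_vision=0.003, speed_mps=20.0,
                         vision_confidence=0.8)
print(k_yaw, k_fused)
```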