• Title/Summary/Keyword: Vision sensor


The Vision-based Autonomous Guided Vehicle Using a Virtual Photo-Sensor Array (VPSA) for Port Automation

  • Kim, Soo-Yong;Park, Young-Su;Kim, Sang-Woo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.2 / pp.164-171 / 2010
  • We study port automation, which is driven by the steep increase in the cost and complexity of freight handling. This paper introduces a new algorithm for navigating and controlling an Autonomous Guided Vehicle (AGV). A camera inherently suffers from optical distortion and is sensitive to external light, weather, and shadows, but it is inexpensive and flexible enough to build a port-automation system around, so we applied a CCD camera to lane detection and tracking for the AGV. To keep the tracking error stable and accurate, this paper proposes a new concept and algorithm in which the error is generated by a Virtual Photo-Sensor Array (VPSA). VPSAs are implemented purely in software and are easy to reuse in various autonomous systems. Because the computational load is light, the AGV can exploit the full performance of the CCD camera while leaving the CPU free for other tasks. We tested the proposed algorithm on a mobile robot and confirmed stable and accurate lane-tracking performance.
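
The abstract gives no implementation, but the core VPSA idea (a software-defined row of virtual photo sensors sampled from one camera scan line, whose activation pattern yields a lateral lane error) can be sketched roughly as below. The sensor count, threshold, and error formula are illustrative assumptions, not the authors' code.

```python
import numpy as np

def vpsa_error(gray_row, num_sensors=15, threshold=200):
    """Sketch of a Virtual Photo-Sensor Array evaluated on one image row.

    gray_row    : 1-D array of grayscale intensities (one scan line).
    num_sensors : number of evenly spaced virtual photo sensors (assumption).
    threshold   : intensity above which a sensor "sees" the bright lane mark.
    Returns a signed lateral error in [-1, 1]; 0 means the lane is centered.
    """
    positions = np.linspace(0, len(gray_row) - 1, num_sensors).astype(int)
    activated = gray_row[positions] > threshold        # virtual sensor outputs
    if not activated.any():
        return None                                    # lane not visible on this row
    center_idx = (num_sensors - 1) / 2.0
    lane_idx = np.mean(np.nonzero(activated)[0])       # centroid of activated sensors
    return (lane_idx - center_idx) / center_idx        # normalized steering error

# toy usage: a bright lane mark slightly right of the image center
row = np.full(320, 50, dtype=np.uint8)
row[180:195] = 255
print(vpsa_error(row))   # positive value: lane lies to the right of center
```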

Road Surface Marking Detection for Sensor Fusion-based Positioning System

  • Kim, Dongsuk;Jung, Hogi
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.7 / pp.107-116 / 2014
  • This paper presents camera-based road surface marking detection methods suited to a sensor fusion-based positioning system consisting of a low-cost GPS (Global Positioning System), an INS (Inertial Navigation System), an EDM (Extended Digital Map), and a vision system. The proposed vision system consists of two parts: lane marking detection and RSM (Road Surface Marking) detection. The lane marking detection provides ROIs (Regions of Interest) that are highly likely to contain RSM, and the RSM detection generates candidates in those regions and classifies their types. The system focuses on detecting RSM without false detections while operating in real time. To ensure real-time operation, the gating for lane marking detection is varied and the detection method is switched according to an FSM (Finite State Machine) that reflects the driving situation. A single template-matching scheme is used to extract features for both lane marking and RSM detection, and it is implemented efficiently with a horizontal integral image. Finally, multi-step verification is performed to minimize false detections.
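
The horizontal integral image mentioned above is a row-wise cumulative sum that turns the sum of any horizontal pixel segment into an O(1) lookup, which is what keeps the shared feature-extraction step cheap. A minimal sketch with hypothetical function names and a toy check follows.

```python
import numpy as np

def horizontal_integral_image(gray):
    """Cumulative sum along each row, with a zero column prepended."""
    ii = np.cumsum(gray.astype(np.int64), axis=1)
    return np.hstack([np.zeros((gray.shape[0], 1), dtype=np.int64), ii])

def row_segment_sum(ii, row, x0, x1):
    """Sum of gray[row, x0:x1] in O(1) via the horizontal integral image."""
    return ii[row, x1] - ii[row, x0]

# toy check on a 3x4 image
img = np.arange(12, dtype=np.uint8).reshape(3, 4)
ii = horizontal_integral_image(img)
assert row_segment_sum(ii, 1, 1, 4) == img[1, 1:4].sum()
```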

Selective Extended Kalman Filter based Attitude Estimation

  • Yun, In-Yong;Shim, Jae-Ryong;Kim, Joong-Kyu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.10a / pp.973-975 / 2016
  • In this paper, we propose accurate sensor fusion-based attitude estimation of a rigid body using a selective extended Kalman filter. The pose of the rigid body can be estimated roughly by the Gauss-Newton method from accelerometer and geomagnetic data, and this rough estimate can then be refined with vision information and gyro measurements. However, strong external interference noise makes even the rough estimate unreliable. According to the measured level of external interference, the proposed extended Kalman filter therefore selectively relies mostly on the vision and gyro information, which increases estimation credibility in strongly disturbed environments.
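
As a rough illustration of the selection idea only, the sketch below gates a heavily simplified single-angle Kalman update on the measured magnetic disturbance: under strong interference it trusts the vision measurement, otherwise the accelerometer/magnetometer-derived angle. The state, noise values, and disturbance test are assumptions, not the paper's filter.

```python
import numpy as np

MAG_NOMINAL = 50.0   # assumed nominal magnetic field magnitude (uT)
MAG_GATE = 10.0      # assumed disturbance threshold (uT)

def predict(angle, P, gyro_rate, dt, q=1e-4):
    """Propagate the single attitude angle with the gyro rate."""
    return angle + gyro_rate * dt, P + q

def update(angle, P, z, r):
    """Scalar Kalman measurement update."""
    K = P / (P + r)
    return angle + K * (z - angle), (1.0 - K) * P

def selective_step(angle, P, gyro_rate, dt, accel_mag_angle, vision_angle, mag_norm):
    angle, P = predict(angle, P, gyro_rate, dt)
    if abs(mag_norm - MAG_NOMINAL) > MAG_GATE:
        # strong external interference: rely mostly on the vision measurement
        return update(angle, P, vision_angle, r=1e-3)
    # quiet environment: the accelerometer/magnetometer-derived angle is usable
    return update(angle, P, accel_mag_angle, r=1e-2)

angle, P = 0.0, 1.0
angle, P = selective_step(angle, P, gyro_rate=0.1, dt=0.01,
                          accel_mag_angle=0.02, vision_angle=0.01, mag_norm=75.0)
print(angle, P)
```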


A study on the characteristic analysis and correction of non-linear bias error of an infrared range finder sensor for a mobile robot

  • 하윤수;김헌희
    • Journal of Advanced Marine Engineering and Technology / v.27 no.5 / pp.641-647 / 2003
  • Using an infrared range-finder sensor as the environment recognition system of a mobile robot has the advantage of low sensing cost compared with other vision sensors such as a laser range finder or a CCD camera. However, previous work on infrared range finders for mobile robots is hard to find because of the sensor's non-linear characteristics. This paper describes the error caused by the non-linearity of the sensor and its correction using a neural network. The network is a multi-layer perceptron, and the Levenberg-Marquardt algorithm is applied to train it. The effectiveness of the proposed method is verified by experiment.
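
A minimal sketch of the described correction scheme is shown below: a small multi-layer perceptron maps the raw infrared reading to range and is fitted with a Levenberg-Marquardt least-squares solver. The synthetic data, network size, and normalization are assumptions; the paper's sensor model is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

H = 5   # hidden units (assumption)

def mlp(params, x):
    """1-input, 1-output perceptron with one tanh hidden layer."""
    w1, b1, w2, b2 = params[:H], params[H:2*H], params[2*H:3*H], params[3*H]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def residuals(params, x, y):
    return mlp(params, x) - y

rng = np.random.default_rng(0)
raw = np.linspace(0.1, 1.0, 80)                   # normalized raw sensor output
true_range = 0.3 / (raw + 0.15) + 0.02 * rng.standard_normal(raw.size)  # synthetic

p0 = 0.1 * rng.standard_normal(3 * H + 1)
fit = least_squares(residuals, p0, args=(raw, true_range), method="lm")
corrected = mlp(fit.x, raw)
print("RMS error after correction:", np.sqrt(np.mean((corrected - true_range) ** 2)))
```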

Indoor Positioning System using Incident Angle Detection of Infrared Sensor

  • Kim, Su-Yong;Choi, Ju-Yong;Lee, Man-Hyung
    • Journal of Institute of Control, Robotics and Systems / v.16 no.10 / pp.991-996 / 2010
  • In this paper, a new indoor positioning system based on incident-angle measurement of infrared light is proposed. There have been various studies of indoor positioning using vision or ultrasonic sensors, each with its own advantages and disadvantages. In the proposed system, three infrared emitters are placed at fixed, known positions, and an incident-angle sensor measures the angle difference between each pair of emitters. The mathematical problem of determining the position from these angle differences and the emitter positions is solved. Simulations and experiments were carried out to show the performance of the new positioning system. The simulation results were good; because of noise and signal-conditioning problems, the experiments were restricted to a limited area, but the results were still acceptable. This positioning method can be applied to any indoor system that needs absolute position information.
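
The position can be recovered from the pairwise angle differences by a resection-style nonlinear least-squares solve, roughly as sketched below. The emitter layout, the measured values, and the solver choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

emitters = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])   # known emitter positions (m)

def wrap(a):
    """Wrap angles to [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def angle_diffs(p):
    """Angle differences between emitter pairs as seen from position p."""
    b = np.array([np.arctan2(e[1] - p[1], e[0] - p[0]) for e in emitters])
    return wrap(np.array([b[1] - b[0], b[2] - b[1], b[0] - b[2]]))

true_pos = np.array([1.5, 1.0])
measured = angle_diffs(true_pos)           # stand-in for the incident-angle readings

sol = least_squares(lambda p: wrap(angle_diffs(p) - measured), x0=[2.0, 1.5])
print(sol.x)                               # converges close to (1.5, 1.0)
```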

Recent Advances in Structural Health Monitoring

  • Feng, Maria Q.
    • Journal of the Korean Society for Nondestructive Testing / v.27 no.6 / pp.483-500 / 2007
  • Emerging sensor-based structural health monitoring (SHM) technology can play an important role in inspecting and securing the safety of aging civil infrastructure, a worldwide problem. However, the implementation of SHM in civil infrastructure faces a significant challenge due to the lack of suitable sensors and of reliable methods for interpreting sensor data. This paper reviews recent efforts and advances made in addressing this challenge, with example sensor hardware and software developed in the author's research center. It is proposed to integrate real-time continuous monitoring, using on-structure sensors for global structural integrity evaluation, with targeted NDE inspection for local damage assessment.

Vehicle Displacement Estimation By GPS and Vision Sensor

  • Kim, Min-Woo;Lim, Joon-Hoo;Park, Je-Doo;Kim, Hee-Sung;Lee, Hyung-Keun
    • Journal of Advanced Navigation Technology / v.16 no.3 / pp.417-425 / 2012
  • It is well known that GPS cannot provide positioning results if a sufficient number of visible satellites is not available. To overcome this weakness, attention has recently turned to hybrid positioning methods that augment GPS with other sensors. As an extension of such hybrid methods, this paper proposes a new method that combines GPS and a vision sensor to improve the availability and accuracy of land-vehicle positioning. The proposed method does not require any external map information and can provide position solutions whenever more than two navigation satellites are visible. To evaluate the performance of the proposed method, an experiment with real measurements is presented; in the test section the accumulated error is almost 2.5 meters on the n-axis and almost 3 meters on the e-axis.
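
The estimator itself is not detailed in the abstract, so the sketch below only illustrates the stated evaluation idea: per-epoch displacement estimates are accumulated over the test section and the accumulated north/east error is read off at the end. The data and names are placeholders, not the paper's GPS/vision estimator.

```python
import numpy as np

def accumulate(displacements_ne):
    """Sum per-epoch (north, east) displacement estimates into positions."""
    return np.cumsum(np.asarray(displacements_ne), axis=0)

# toy data: estimated vs. true per-epoch displacements (m) over 100 epochs
rng = np.random.default_rng(1)
true_steps = np.tile([0.5, 0.2], (100, 1))
est_steps = true_steps + 0.02 * rng.standard_normal((100, 2))

err = accumulate(est_steps)[-1] - accumulate(true_steps)[-1]
print("accumulated error (n, e) in metres:", err)
```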

3-D vision sensor for arc welding industrial robot system with coordinated motion

  • Shigehiru, Yoshimitsu;Kasagami, Fumio;Ishimatsu, Takakazu
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1992.10b / pp.382-387 / 1992
  • In order to obtain the desired arc-welding performance, we have already developed an arc-welding robot system that enables coordinated motion of dual robot arms. In this system one arm holds the welding target as a positioning device while the other arm moves the welding torch. In such a dual-arm system the positioning accuracy of the robots is an important problem, since conventional industrial robots do not have sufficient absolute positioning accuracy. To cope with this problem, our system employed the teach-and-playback method, in which absolute errors are compensated by the operator's visual feedback. This makes ideal arc welding possible, taking into account the posture of the welding target and the direction of gravity. Although we developed an original teaching method for dual-arm coordinated motion, another problem remains: manual teaching is tedious because it requires fine movements and intense attention. We therefore developed a 3-dimensional vision-guided control method for our welding robot system with coordinated motions. In this paper we present the 3-dimensional vision sensor that guides the system. The sensing device is compactly designed and mounted on the tip of the arc-welding robot. The sensor detects the 3-dimensional shape of the groove on the target workpiece to be welded, and the welding robot is controlled to trace the groove accurately. The 3-dimensional measurement is based on the slit-ray projection method: two laser slit-ray projectors and one CCD TV camera are compactly mounted, and careful image processing extracts the 3-dimensional data without suffering from disturbance light. The 3-dimensional information about the target groove is combined with rough teaching data given by the operator in advance, so the teaching task is greatly simplified.
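
Geometrically, the slit-ray projection measurement reduces to intersecting the viewing ray of each detected laser pixel with the known laser plane, roughly as below. The camera intrinsics and plane parameters are illustrative assumptions, not the calibrated values of the described sensor head.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],     # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

plane_n = np.array([0.0, -0.5, 1.0])   # laser plane normal in the camera frame
plane_n /= np.linalg.norm(plane_n)
plane_d = 0.4                          # plane offset so that n . X = d

def pixel_to_point(u, v):
    """Intersect the viewing ray of pixel (u, v) with the laser plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction through the pixel
    t = plane_d / (plane_n @ ray)                    # scale so the point lies on the plane
    return t * ray                                   # 3-D point in the camera frame

print(pixel_to_point(350.0, 260.0))
```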


Analysis of 3D Motion Recognition using Meta-analysis for Interaction

  • Kim, Yong-Woo;Whang, Min-Cheol;Kim, Jong-Hwa;Woo, Jin-Cheol;Kim, Chi-Jung;Kim, Ji-Hye
    • Journal of the Ergonomics Society of Korea / v.29 no.6 / pp.925-932 / 2010
  • Most research in the field of three-dimensional interaction has reported different accuracies depending on the sensing approach, mode, and method, and implementations of interaction have lacked consistency across application fields. This study therefore surveys research trends in three-dimensional interaction using meta-analysis. Searching relevant keywords in databases yielded 153 domestic and 188 international papers covering three-dimensional interaction, and an analytical coding table narrowed these to 18 domestic and 28 international papers for analysis. Frequency analysis was carried out on the motion method, element, number, and accuracy, and the reported accuracies were then compared through the effect sizes of the meta-analysis. The effect size of sensor-based methods was higher than that of vision-based methods, but the difference was small (effect size 0.02), whereas for hand motions the effect size of vision-based recognition was higher than that of sensor-based recognition. Therefore, implementing three-dimensional interaction with sensor-based methods in general, and with vision-based methods for hand motions, is more efficient. This study contributes a comprehensive analysis of three-dimensional motion recognition for interaction and suggests directions for applying three-dimensional interaction.
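
The comparison rests on standardized effect sizes; one common standardized-mean-difference measure, Cohen's d, is sketched below. Whether the study used exactly this statistic is not stated in the abstract, and the accuracy values are placeholders rather than the coded papers.

```python
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference between two groups of accuracy values."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

sensor_based = [0.92, 0.88, 0.95, 0.90]   # placeholder accuracies
vision_based = [0.91, 0.87, 0.94, 0.90]
print(cohens_d(sensor_based, vision_based))
```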

A Study on the Improvement of Vehicle Recognition Rate of Vision System

  • Oh, Ju-Taek;Lee, Sang-Yong;Lee, Sang-Min;Kim, Young-Sam
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.10 no.3 / pp.16-24 / 2011
  • Vehicle electronic control systems are being developed as the legal and social demand for driver safety rises. Driver assistance systems with various sensors such as radar, cameras, and lasers are in practical use thanks to falling hardware prices and the high performance of sensors and processors. In the preceding study of this research, a program was developed to recognize the experimental vehicle's driving lane and the vehicles nearby or approaching it from images taken by a CCD camera. In addition, a vision-based 'dangerous driving analysis program' was developed to analyze the causes and consequences of dangerous driving. However, the vision system developed in the previous study had a poor recognition rate for lanes and vehicles when passing through a tunnel or at sunrise and sunset. By mounting a brightness-response algorithm on the vision system, the present study therefore aims to improve the recognition rate of lanes and vehicles regardless of time and place, so that the causes of dangerous driving can be analyzed clearly.
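
The abstract does not spell out the brightness-response algorithm, so the sketch below only shows one common approach with the same intent: estimate frame brightness and apply a compensating gamma so that lane and vehicle features keep similar contrast in tunnels and at sunrise or sunset. The target level and gamma bounds are assumptions.

```python
import numpy as np

TARGET_MEAN = 110.0   # desired mean gray level (assumption)

def brightness_compensate(gray):
    """Gamma-correct a grayscale frame toward a target mean brightness."""
    mean = float(np.clip(gray.mean(), 1.0, 254.0))
    gamma = np.clip(np.log(TARGET_MEAN / 255.0) / np.log(mean / 255.0), 0.4, 2.5)
    out = 255.0 * (gray.astype(np.float64) / 255.0) ** gamma
    return out.astype(np.uint8)

dark_tunnel_frame = np.full((240, 320), 40, dtype=np.uint8)
print(brightness_compensate(dark_tunnel_frame).mean())   # pulled up toward TARGET_MEAN
```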