• Title/Abstract/Keywords: vision-based sensor

Search results: 424 items (processing time: 0.034 s)

An embedded vision system based on an analog VLSI Optical Flow vision sensor

  • Becanovic, Vlatko;Matsuo, Takayuki;Stocker, Alan A.
    • 한국정보기술응용학회 학술대회논문집 / 6th 2005 International Conference on Computers, Communications and Systems / pp.285-288 / 2005
  • We propose a novel programmable miniature vision module based on a custom-designed analog VLSI (aVLSI) chip. The vision module consists of the optical flow vision sensor embedded with commercial off-the-shelf digital hardware, in our case an Intel XScale PXA270 processor reinforced with a programmable gate array device. The aVLSI sensor provides gray-scale image data as well as smooth optical flow estimates; each pixel thus gives a triplet of information that can be continuously read out as three independent images. The particular computational architecture of the custom-designed sensor, which is fully parallel and analog, allows for efficient real-time estimation of the smooth optical flow. The Intel XScale PXA270 controls the sensor read-out and, together with the programmable gate array, allows for additional higher-level processing of the intensity image and optical flow data. It also provides the necessary standard interfaces so that the module can be easily programmed and integrated into different vision systems, or even form a complete stand-alone vision system itself. The low power consumption, small size, and flexible interface of the proposed vision module suggest that it is particularly well suited as the vision system of an autonomous robotics platform, and especially for educational projects in the robotic sciences.
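
The per-pixel triplet read-out described above can be pictured as follows. This is a hypothetical sketch (the array layout and names are assumptions, not the module's real API): each aVLSI pixel yields a gray-scale intensity plus the (u, v) smooth optical-flow estimate, readable as three independent images.

```python
import numpy as np

def split_triplet_frame(raw):
    """Split an (H, W, 3) raw read-out into intensity and flow components."""
    intensity = raw[..., 0]   # gray-scale image
    flow_u = raw[..., 1]      # horizontal smooth optical-flow estimate
    flow_v = raw[..., 2]      # vertical smooth optical-flow estimate
    return intensity, flow_u, flow_v

raw = np.zeros((4, 4, 3))
raw[..., 1] = 0.5             # fake uniform rightward flow for illustration
gray, u, v = split_triplet_frame(raw)
```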


Smart Vision Sensor for Satellite Video Surveillance Sensor Network

  • 김원호;임재유
    • 한국위성정보통신학회논문지 / Vol. 10, No. 2 / pp.70-74 / 2015
  • This paper describes a smart vision sensor for satellite-communication-based video surveillance sensor networks. A smart vision sensor terminal must automatically detect on-site events such as forest fires, smoke, and intruder movement, while providing high performance reliability, robust hardware durability, easy maintenance, and seamless communication. To satisfy these requirements, we propose an ultra-compact satellite communication terminal with an embedded smart vision sensor, which performs highly reliable embedded video analysis and video compression in addition to satellite transmission and reception. The video surveillance performance was verified, and the practicality confirmed, through computer simulation of the proposed vision sensor algorithm and testing of a vision sensor prototype.

A Study on the Vision Sensor Using Scanning Beam for Welding Process Automation

  • 유원상;나석주
    • 대한기계학회논문집A / Vol. 20, No. 3 / pp.891-900 / 1996
  • A vision sensor based on the optical triangulation principle, with a laser as an auxiliary light source, can detect not only the seam position but also the shape of the seam. In this study, a vision sensor using a scanning laser beam was investigated. To design a vision sensor that accounts for the reflectivity of the sensed object and satisfies the desired resolution and measuring range, we formulated, first, the equation of the focused laser beam, which has a Gaussian irradiance profile; second, the image-forming sequence; and third, the relation between a displacement on the measuring surface and the corresponding displacement on the camera plane. From these, the focused beam diameter over the measuring range could be determined, and the influence of the relative location of the laser and the camera plane could be estimated. The measuring range and resolution of the vision sensor, which is based on the Scheimpflug condition, could also be calculated. From these results a vision sensor was developed, and an adequate calibration technique was proposed. An image-processing algorithm that recognizes the center of the joint and its shape information was also investigated. Using the developed vision sensor and image-processing algorithm, the shape information of vee, butt, and lap joints was extracted.
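
The core triangulation relation can be sketched numerically. This uses a simplified geometry (laser beam parallel to the optical axis, plain pinhole camera), not the paper's Scheimpflug-based derivation: with lens focal length f and lateral laser offset b, the spot at range z images at x_img = f * b / z, so range follows directly from the image offset.

```python
def spot_range(x_img, f, b):
    """Range z of the laser spot from its image-plane offset x_img (metres)."""
    return f * b / x_img

def range_resolution(x_img, dx, f, b):
    """Range change produced by a small image-plane displacement dx."""
    return spot_range(x_img, f, b) - spot_range(x_img + dx, f, b)

z = spot_range(x_img=0.002, f=0.05, b=0.1)   # example values
```

Note the nonlinearity: the same image-plane displacement dx corresponds to a larger range change at small x_img, which is why the relative laser-camera geometry governs both measuring range and resolution.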

A Study on Vision Sensor-based Measurement of Die Location for Its Remodeling

  • 김지태;나석주
    • 한국정밀공학회지 / Vol. 17, No. 10 / pp.141-146 / 2000
  • We introduce algorithms for 3-D position estimation using a laser sensor for automatic die remodeling. First, a vision sensor based on optical triangulation was used to collect range data of the die surface. Second, line vector equations were constructed from the measured range data, and an analytic algorithm was proposed for recognizing the die location from these vector equations. This algorithm can construct the transformation matrix without any specific corresponding points. To verify the algorithm, a folded SUS plate was measured by the laser vision sensor attached to a 3-axis Cartesian manipulator, and the transformation matrix was calculated.
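
As a rough stand-in for the paper's analytic method (which builds the transformation from line vector equations without point correspondences), the rotation part of such a transformation can be recovered from matched line direction vectors by the standard Kabsch/SVD construction. This is an illustration under that assumption, not the authors' algorithm.

```python
import numpy as np

def rotation_from_directions(src_dirs, dst_dirs):
    """Rotation R minimizing sum ||R s_i - d_i||^2 over matched unit vectors."""
    H = src_dirs.T @ dst_dirs                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# usage: recover a known 90-degree rotation about the z-axis
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
src = np.eye(3)                                    # three line directions
dst = src @ Rz.T                                   # same lines after rotation
R_est = rotation_from_directions(src, dst)
```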


A Study on IMM-PDAF based Sensor Fusion Method for Compensating Lateral Errors of Detected Vehicles Using Radar and Vision Sensors

  • 장성우;강연식
    • 제어로봇시스템학회논문지 / Vol. 22, No. 8 / pp.633-642 / 2016
  • It is important for advanced active safety systems and autonomous driving cars to obtain accurate estimates of nearby vehicles in order to increase their safety and performance. This paper proposes a sensor fusion method for radar and vision sensors to accurately estimate the state of the preceding vehicles. In particular, we studied how to compensate for the lateral state error of automotive radar sensors by using a vision sensor. The proposed method is based on the Interacting Multiple Model (IMM) algorithm, which stochastically integrates multiple Kalman filters whose models correspond to a lateral-compensation mode and a radar-only mode. In addition, a Probabilistic Data Association Filter (PDAF) is utilized as the data association method to improve the reliability of the estimates in a cluttered radar environment. A two-step correction method is used in the Kalman filter, which efficiently associates both the radar and vision measurements into a single state estimate. Finally, the proposed method is validated through off-line simulations using measurements obtained from a field test in an actual road environment.
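
The two-step correction idea can be sketched in scalar form (the noise values are made up for illustration and this is not the paper's filter design): one predicted lateral position is corrected first with the radar measurement, then with the vision measurement, folding both sensors into a single state estimate.

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update with observation model H = 1."""
    K = P / (P + R)                        # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 4.0                            # predicted lateral position [m], variance
x, P = kalman_update(x, P, z=1.0, R=2.0)   # radar: larger lateral noise
x, P = kalman_update(x, P, z=0.4, R=0.5)   # vision: smaller lateral noise
```

Because the vision measurement has the smaller lateral variance here, the final estimate is pulled toward it, and the posterior variance is smaller than either sensor's alone.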

Development of the Driving Path Estimation Algorithm for Adaptive Cruise Control System and Advanced Emergency Braking System Using Multi-sensor Fusion

  • 이동우;이경수;이재완
    • 자동차안전학회지 / Vol. 3, No. 2 / pp.28-33 / 2011
  • This paper presents a driving path estimation algorithm for an adaptive cruise control system and an advanced emergency braking system using multi-sensor fusion. Through data collection, the characteristics of the yaw-rate-filtering-based road curvature and the vision-sensor road curvature are analyzed. The two curvatures are fused into a single curvature by a weighting factor that takes the characteristics of each curvature source into account. The proposed driving path estimation algorithm has been investigated via simulation performed with the vehicle simulation package CarSim and Matlab/Simulink. The simulations show that the proposed driving path estimation algorithm improves the primary target detection rate.
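
The fusion step above amounts to a convex combination of the two curvature estimates. A minimal sketch (how the weight w is derived from each source's characteristics is an assumption left abstract here):

```python
def fuse_curvature(kappa_yaw, kappa_vision, w):
    """Convex combination of two curvature estimates, w in [0, 1] (units 1/m)."""
    return w * kappa_yaw + (1.0 - w) * kappa_vision

kappa = fuse_curvature(kappa_yaw=0.010, kappa_vision=0.012, w=0.3)
```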

Study on the Localization Improvement of the Dead Reckoning using the INS Calibrated by the Fusion Sensor Network Information

  • 최재영;김성관
    • 제어로봇시스템학회논문지 / Vol. 18, No. 8 / pp.744-749 / 2012
  • In this paper, we show how to improve the accuracy of a mobile robot's localization by using sensor network information that fuses a machine vision camera, an encoder, and an IMU sensor. The heading value of the IMU is measured by a geomagnetic sensor, which is constantly affected by its surrounding magnetic environment. To increase the accuracy of the IMU heading, we therefore isolated a template of the ceiling using the vision camera, measured the angle with a pattern-matching algorithm, and calibrated the IMU by comparing the obtained values with the IMU readings to determine the offset. The encoder, IMU, and vision-camera angle values used to estimate the robot's position are transferred to a host PC over a wireless network, and the host PC estimates the robot's location from all of these values. As a result, we obtained more accurate position estimates than when using the IMU sensor alone.
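
The calibration step above can be sketched as a constant-offset correction (the names and the simple constant-bias model are assumptions): the ceiling-pattern angle from the vision camera is taken as reference, and the offset it reveals is applied to subsequent magnetometer-based IMU headings.

```python
def calibrated_heading(imu_heading, imu_at_calib, vision_at_calib):
    """Apply the offset observed at calibration time; degrees, wrapped to [0, 360)."""
    offset = vision_at_calib - imu_at_calib    # bias seen when both were sampled
    return (imu_heading + offset) % 360.0

h = calibrated_heading(imu_heading=100.0, imu_at_calib=95.0, vision_at_calib=90.0)
```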

Vision-sensor-based Drivable Area Detection Technique for Environments with Changes in Road Elevation and Vegetation

  • 이상재;현종길;권연수;심재훈;문병인
    • 센서학회지 / Vol. 28, No. 2 / pp.94-100 / 2019
  • Drivable area detection is a major task in advanced driver assistance systems, and several studies have proposed vision-sensor-based approaches to it. However, conventional drivable area detection methods that use vision sensors are not suitable for environments with changes in road elevation. In addition, if the boundary between the road and vegetation is not clear, a vegetation area can be misjudged as drivable. Therefore, this study proposes an accurate method of detecting drivable areas in environments in which the road elevation changes and vegetation exists. Experimental results show that, compared to the conventional method, the proposed method improves the average accuracy and recall of drivable area detection on the KITTI vision benchmark suite by 3.42%p and 8.37%p, respectively. When the proposed vegetation area removal method is also applied, the average accuracy and recall improve further, by 6.43%p and 9.68%p, respectively.