• Title/Summary/Keyword: fusion of sensor information

Efficient Digitizing in Reverse Engineering By Sensor Fusion (역공학에서 센서융합에 의한 효율적인 데이터 획득)

  • Park, Young-Kun;Ko, Tae-Jo;Kim, Hee-Sool
    • Journal of the Korean Society for Precision Engineering / v.18 no.9 / pp.61-70 / 2001
  • This paper introduces a new digitization method using sensor fusion for shape measurement in reverse engineering. Digitization can be classified into contact and non-contact types according to the measurement device, and the key concerns are speed and accuracy: the non-contact type excels in speed, while the contact type excels in accuracy. Sensor fusion in digitization combines the merits of both types so that the system can be automated. First, a non-contact vision system rapidly acquires coarse 3D point data; this step identifies and localizes an object placed at an unknown position on the table. Second, accurate 3D point data are obtained automatically with a scanning probe guided by the previously measured coarse data: a large number of equally spaced measuring points are commanded along the lines acquired by the vision system. Finally, the digitized 3D point data are fitted to a rational B-spline surface, and the free-form surface information can be transferred to a commercial CAD/CAM system via IGES translation in order to machine the modeled geometric shape.
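
The coarse-to-fine strategy above (a vision-derived line first, then equally spaced probe points along it) can be sketched as a resampling step. The function below is an illustrative reconstruction of that idea, not the authors' code; the polyline input and spacing are assumptions:

```python
import math

def resample_equidistant(points, spacing):
    """Resample a polyline (coarse vision points) at equal arc-length
    spacing to generate target points for the contact probe."""
    # cumulative arc length along the polyline
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    out = []
    s = 0.0
    i = 0
    while s <= total:
        # advance to the segment containing arc length s
        while i < len(dists) - 2 and dists[i + 1] < s:
            i += 1
        seg = dists[i + 1] - dists[i]
        t = 0.0 if seg == 0 else (s - dists[i]) / seg
        (x0, y0), (x1, y1) = points[i], points[i + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        s += spacing
    return out
```

For a straight 10 mm segment with 2.5 mm spacing, this yields five equally spaced probe targets.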

Design and Performance Evaluation of a Complementary Filter for Inverted Pendulum Control with Inertial Sensors (관성센서를 이용한 도립진자의 제어를 위한 상보필터 설계 및 성능평가)

  • Nakashima, Toshitaka;Chang, Mun-Che;Hong, Suk-Kyo
    • Proceedings of the KIEE Conference / 2004.11c / pp.544-546 / 2004
  • This paper designs and evaluates a complementary filter for the fusion of inertial sensor signals. Specifically, the designed filter is applied to inverted pendulum control, where the pendulum's angle information is obtained from low-cost tilt and gyroscope sensors instead of an optical encoder. The complementary filter under consideration is a conventional one consisting of low- and high-pass filters; however, to improve the filter's performance on the gyroscope, we add an integrator in the filter's outer loop. Frequency responses are obtained with both the tilt and gyroscope sensors, and based on these results we determine appropriate parameter values for the filter. The performance of the designed complementary filter is evaluated by applying it to inverted pendulum control. Experiments show that the performance of the designed filter is comparable to that of an optical encoder, and that low-cost inertial sensors can be used for inverted pendulum control with the help of sensor fusion.
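
A conventional complementary filter of the kind described (low-pass on the tilt sensor, high-pass on the integrated gyro rate) takes only a few lines. This is a generic sketch with an assumed time constant, not the paper's exact filter, which additionally places an integrator in the outer loop:

```python
class ComplementaryFilter:
    """Fuse a noisy but drift-free tilt angle with a drifting gyroscope
    rate: low-pass the tilt, high-pass the integrated gyro."""

    def __init__(self, tau, dt):
        self.alpha = tau / (tau + dt)  # blend coefficient from time constant
        self.dt = dt
        self.angle = 0.0

    def update(self, tilt_angle, gyro_rate):
        # gyro path: integrate rate from the previous estimate (high-pass)
        # tilt path: weighted pull toward the absolute tilt reading (low-pass)
        self.angle = (self.alpha * (self.angle + gyro_rate * self.dt)
                      + (1.0 - self.alpha) * tilt_angle)
        return self.angle
```

With a constant tilt reading and zero gyro rate, the estimate converges to the tilt value at a speed set by `tau`.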

Automatic Image Registration Based on Extraction of Corresponding-Points for Multi-Sensor Image Fusion (다중센서 영상융합을 위한 대응점 추출에 기반한 자동 영상정합 기법)

  • Choi, Won-Chul;Jung, Jik-Han;Park, Dong-Jo;Choi, Byung-In;Choi, Sung-Nam
    • Journal of the Korea Institute of Military Science and Technology / v.12 no.4 / pp.524-531 / 2009
  • In this paper, we propose an automatic image registration method for multi-sensor image fusion, such as fusion of visible and infrared images. Registration is achieved by finding corresponding feature points in both input images. In general, global statistical correlation is not guaranteed between multi-sensor images, which makes registration of such images difficult. To cope with this problem, mutual information is adopted to measure the correspondence of features and to select faithful points. An update algorithm for the projective transform is also proposed. Experimental results show that the proposed method provides robust and accurate registration.
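
Mutual information as a correspondence measure can be illustrated on 1-D intensity lists. This histogram-based sketch (with an arbitrarily chosen bin count) is only a toy version of what a registration pipeline would compute over image patches:

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    """Mutual information between two equally sized intensity lists,
    usable as a similarity score for multi-sensor correspondence."""
    assert len(a) == len(b)
    lo_a, hi_a = min(a), max(a)
    lo_b, hi_b = min(b), max(b)

    def bin_of(v, lo, hi):
        if hi == lo:
            return 0
        return min(bins - 1, int((v - lo) / (hi - lo) * bins))

    # joint histogram of quantized intensity pairs
    joint = Counter((bin_of(x, lo_a, hi_a), bin_of(y, lo_b, hi_b))
                    for x, y in zip(a, b))
    n = len(a)
    px, py = Counter(), Counter()
    for (i, j), c in joint.items():
        px[i] += c
        py[j] += c
    # MI = sum p(i,j) * log( p(i,j) / (p(i) p(j)) )
    mi = 0.0
    for (i, j), c in joint.items():
        mi += (c / n) * math.log(c * n / (px[i] * py[j]))
    return mi
```

Identical signals score high; a constant (uninformative) signal scores zero, which is why MI is robust across sensors whose intensities are not linearly correlated.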

Sensor fusion based ambulatory system for indoor localization

  • Lee, Min-Yong;Lee, Soo-Yong
    • Journal of Sensor Science and Technology / v.19 no.4 / pp.278-284 / 2010
  • Indoor localization of pedestrians is a key technology for caring for the elderly, the visually impaired, and the handicapped in health-care districts. It is also essential for emergency responders in places where GPS signals are unavailable. This paper presents a newly developed pedestrian localization system using gyro sensors, a magnetic compass, and pressure sensors. Instead of using an accelerometer, the pedestrian gait is estimated from the gyro measurements and the travel distance is estimated from the gait kinematics. Fusion of the gyro and magnetic compass information for heading-angle estimation is presented together with an error covariance analysis. A pressure sensor is used to identify the floor on which the pedestrian is walking. A complete ambulatory system is implemented that estimates the pedestrian's 3D position and heading.
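
The gyro/compass fusion with error-covariance analysis suggests the standard inverse-variance weighting of two estimates. The helper below is a minimal sketch of that idea only; it ignores angle wrap-around, which a real heading filter must handle:

```python
def fuse_heading(gyro_heading, gyro_var, compass_heading, compass_var):
    """Inverse-variance weighted fusion of two heading estimates.
    The fused variance is never larger than either input variance."""
    w = compass_var / (gyro_var + compass_var)  # weight on the gyro estimate
    fused = w * gyro_heading + (1.0 - w) * compass_heading
    fused_var = gyro_var * compass_var / (gyro_var + compass_var)
    return fused, fused_var
```

With equal variances the result is the plain average; as the compass variance grows (e.g. near magnetic disturbances), the estimate leans on the gyro.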

Fusion of DEMs Generated from Optical and SAR Sensor

  • Jin, Kyeong-Hyeok;Yeu, Yeon;Hong, Jae-Min;Yoon, Chang-Rak;Yeu, Bock-Mo
    • Journal of Korean Society for Geospatial Information Science / v.10 no.5 s.23 / pp.53-65 / 2002
  • The most widespread techniques for DEM generation are stereoscopy for optical sensor images and SAR interferometry (InSAR) for SAR images. These techniques suffer from certain sensor and processing limitations, which can be overcome by the synergetic use of both sensors and their DEMs. This study addresses improving accuracy while maintaining consistency of image characteristics between two different DEMs: one from stereoscopy of optical images and one from interferometry of SAR images. MWD (Multiresolution Wavelet Decomposition) and HPF (High-Pass Filtering), which exploit the complementary properties of SAR and stereo-optical DEMs, are applied in the fusion process. DEM fusion is tested with two sets of SPOT and ERS-1/-2 satellite imagery, and a DEM generated from a 1:5,000 digital topographic map is used to analyze the results. As a result of the integration, topographic slopes and tilts are portrayed more clearly when the strengths of the SAR-derived DEM are applied to the optical-satellite DEM and, in the case of HPF, the resulting DEM.
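
The HPF fusion idea (smooth base heights from the optical DEM plus high-frequency detail from the SAR DEM) can be sketched in 1-D. The box filter and window size here are illustrative choices, not the study's actual filter:

```python
def moving_average(signal, k):
    """Simple box low-pass filter; edge samples use a shrunken window."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def hpf_fuse(optical_dem, sar_dem, k=5):
    """HPF-style fusion: keep the low-frequency heights of the optical
    DEM and add the high-frequency detail extracted from the SAR DEM."""
    sar_high = [s - m for s, m in zip(sar_dem, moving_average(sar_dem, k))]
    return [o + h for o, h in zip(optical_dem, sar_high)]
```

A flat SAR profile contributes no detail, so the fused profile equals the optical heights; real terrain contributes its fine structure on top of the optical base.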

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1099-1110 / 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance, and research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently active. However, deep learning models are vulnerable to adversarial attacks that modulate the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on making obstacles go undetected by lowering the confidence score of the object recognition model, but such attacks work only on the targeted model. An attack at the sensor fusion stage, by contrast, can cascade errors into the vision tasks performed after fusion, and this risk needs to be considered. Moreover, an attack on LiDAR point cloud data, which is hard to inspect visually, is difficult to recognize as an attack at all. In this study, we propose an image-scaling-based attack that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the input LiDAR points. Attack performance experiments over different scaling sizes show that the attack induced fusion errors of more than 77% on average.
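
The core perturbation, scaling the input LiDAR points, reduces to multiplying coordinates by a constant. The snippet below is a schematic of that step only; the full attack tunes the factor against LCCNet, which is not reproduced here:

```python
import math

def scale_points(points, factor):
    """Scale every LiDAR point's (x, y, z) by a constant factor,
    distorting the geometry fed to a camera-LiDAR calibration model."""
    return [(factor * x, factor * y, factor * z) for x, y, z in points]

def mean_displacement(points, factor):
    """Average distance each point moves under the scaling attack,
    a rough proxy for how much misalignment the attack injects."""
    scaled = scale_points(points, factor)
    return sum(math.dist(p, q) for p, q in zip(points, scaled)) / len(points)
```

Because every coordinate moves proportionally, the perturbed cloud keeps a plausible shape, which is what makes the attack hard to spot by eye.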

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.13 no.4 / pp.7-13 / 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm using a vision and LiDAR sensor fusion method for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time image-based object detection algorithm (YOLO) and an object tracking algorithm operating on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinate frame of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance, and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track and is assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
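
The first step, mapping YOLO pixel boxes into the vehicle frame with a homography, is a standard projective transform. The function below sketches it for a single pixel; the actual 3x3 matrix comes from extrinsic calibration and is assumed here:

```python
def apply_homography(H, u, v):
    """Map a pixel (u, v) through a 3x3 homography into ground-plane
    coordinates, normalizing by the projective scale w."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w
```

In the pipeline above, applying this to a bounding-box bottom-center pixel yields the point on the road whose bearing is then matched against LiDAR tracks.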

Precise assembly task using sensor fusion technology (센서퓨젼 기술을 이용한 정밀조립작업)

  • 이종길;이범희
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1993.10a / pp.287-292 / 1993
  • We use three sensors (a vision sensor, a proximity sensor, and a force/torque sensor) fused by fuzzy logic in a peg-in-hole task. The vision and proximity sensors, typically used for gross motion control, provide the information to position the peg near the hole. The force/torque sensor, used for fine motion control, provides the information to insert the peg into the hole precisely. Throughout the task, the information from all three sensors is fused by a fuzzy logic controller. Simulation results are presented for verification.
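
A fuzzy blend of gross-motion (vision/proximity) and fine-motion (force/torque) commands can be sketched with a single ramp membership on the peg-to-hole distance. The 5 mm threshold and the scalar commands are illustrative assumptions, not the paper's controller:

```python
def ramp_down(x, lo, hi):
    """Membership that is 1 below lo, 0 above hi, linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def fuse_commands(distance_mm, vision_cmd, force_cmd):
    """Blend the gross-motion (vision) and fine-motion (force) commands
    by the fuzzy membership 'peg is near the hole': far favours vision,
    near favours the force/torque correction."""
    near = ramp_down(distance_mm, 0.0, 5.0)
    far = 1.0 - near
    return near * force_cmd + far * vision_cmd
```

At the hole the force command dominates; far away the vision command does, with a smooth handover rather than a hard mode switch.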

Precise Positioning Algorithm Development for Quadrotor Flying Robots Using Dual Extended Kalman Filter (듀얼 확장 칼만 필터를 이용한 쿼드로터 비행로봇 위치 정밀도 향상 알고리즘 개발)

  • Seung, Ji-Hoon;Lee, Deok-Jin;Ryu, Ji-Hyoung;Chong, Kil To
    • Journal of Institute of Control, Robotics and Systems / v.19 no.2 / pp.158-163 / 2013
  • The fusion of GPS (Global Positioning System) and DR (Dead Reckoning) is widely used for position and attitude estimation of vehicles such as mobile robots, aerial vehicles, and marine vehicles. Among aerial vehicles, growing attention is given to the quad-rotor, for which accurate position information is increasingly important. To estimate the position precisely, we propose a fusion method for GPS and gyroscope measurements using a DEKF (Dual Extended Kalman Filter). The DEKF has the advantage of simultaneously estimating the state and a parameter of the dynamical system, and it can be used even when a state value is unavailable. To analyze the performance of the DEKF, computer simulations estimating the position, velocity, and angle of a quad-rotor on a circular trajectory were carried out. The simulation results show that the proposed DEKF-based fusion method gives better performance than an EKF for quad-rotor navigation.
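
A full dual EKF is beyond a short sketch, but the underlying GPS/DR fusion can be illustrated with a scalar Kalman filter: predict with the dead-reckoned velocity, correct with the GPS fix. The noise variances below are arbitrary assumptions:

```python
class PositionKF:
    """Minimal 1-D Kalman filter: predict with dead-reckoning velocity,
    correct with a noisy GPS position (a scalar sketch of GPS/DR fusion)."""

    def __init__(self, x0, p0, q, r):
        self.x, self.p = x0, p0  # state estimate and its variance
        self.q, self.r = q, r    # process and measurement noise variances

    def predict(self, velocity, dt):
        self.x += velocity * dt  # dead-reckoning propagation
        self.p += self.q         # uncertainty grows between fixes

    def correct(self, gps_pos):
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (gps_pos - self.x)
        self.p *= (1.0 - k)
        return self.x
```

The DEKF of the paper extends this pattern with a second, coupled filter that estimates a model parameter alongside the state.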

Parking Space Detection based on Camera and LIDAR Sensor Fusion (카메라와 라이다 센서 융합에 기반한 개선된 주차 공간 검출 시스템)

  • Park, Kyujin;Im, Gyubeom;Kim, Minsung;Park, Jaeheung
    • The Journal of Korea Robotics Society / v.14 no.3 / pp.170-178 / 2019
  • This paper proposes a parking space detection method for autonomous parking using Around View Monitor (AVM) images and Light Detection and Ranging (LIDAR) sensor fusion. The method consists of removing obstacles other than the parking lines, detecting the parking lines, and template matching to obtain the location of parking spaces in the lot. To remove obstacles, LIDAR information is corrected and fused with the AVM image, taking the image's distortion into account. Once the obstacles are removed, a line filter reflecting the thickness of the parking line and an improved Radon transform are applied to detect the parking lines clearly. The parking space location is then found by template matching with a modified parking-space template, and the detected parking lines are used to return the location of the parking space. Finally, the proposed system returns the relative distance and relative angle from the current vehicle to the parking space.
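
Template matching of the kind used to locate the parking-space entrance can be illustrated in 1-D: slide a template over an intensity profile and keep the offset with the highest correlation score. A real system does this in 2-D on the filtered AVM image; the profile and template here are toy assumptions:

```python
def best_match(signal, template):
    """Slide a template over a 1-D intensity profile and return the
    offset with the highest correlation score, a toy analogue of the
    2-D template matching used to locate parking-space corners."""
    best_i, best_score = 0, float("-inf")
    for i in range(len(signal) - len(template) + 1):
        score = sum(s * t for s, t in zip(signal[i:], template))
        if score > best_score:
            best_i, best_score = i, score
    return best_i
```

The offset that maximizes the score marks where the bright parking-line pattern sits in the profile.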