• Title/Summary/Keyword: multi-sensor information fusion

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1099-1110
    • /
    • 2023
  • Recognition systems for autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of camera and LiDAR sensors is currently being actively conducted. However, deep learning models are vulnerable to adversarial attacks that modulate their input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on suppressing obstacle detection by lowering the confidence score of the object recognition model, but they are limited in that they work only against the targeted model. Attacks on the sensor fusion stage, by contrast, can cascade errors into all vision tasks performed after fusion, a risk that needs to be considered. In addition, because LiDAR point cloud data is difficult to judge visually, an attack on it is hard to recognize as an attack. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR fusion (calibration) model. The proposed method performs a scaling attack on the points of the input LiDAR data. In attack performance experiments across scaling sizes and algorithms, an average fusion error of more than 77% was induced.
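
As a rough illustration of the core idea, the sketch below applies a uniform scale to LiDAR point coordinates; the function name and the 10% factor are illustrative, not from the paper, which targets LCCNet specifically.

```python
import numpy as np

def scale_point_cloud(points, factor):
    """Uniformly scale LiDAR point coordinates (N x 3) about the origin.

    A uniform scale preserves the cloud's overall shape, so the change
    is hard to spot visually, yet it shifts every point's projection
    into the camera frame and degrades the predicted calibration.
    """
    return np.asarray(points) * factor

# Hypothetical usage: a 10% enlargement of a random cloud.
cloud = np.random.uniform(-50.0, 50.0, size=(1024, 3))
attacked = scale_point_cloud(cloud, 1.10)
```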

Multi Sources Track Management Method for Naval Combat Systems (다중 센서 및 다중 전술데이터링크 환경 하에서의 표적정보 처리 기법)

  • Lee, Ho Chul;Kim, Tae Su;Shin, Hyung Jo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.2
    • /
    • pp.126-131
    • /
    • 2014
  • This paper is concerned with a track management method for a naval combat system that receives track information from multiple sensors and multiple tactical datalinks. Since managing track information from diverse sources can be formulated as a data fusion problem, this paper deals with the data fusion architecture, track association, and track information determination algorithms for the track management of naval combat systems.
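
The abstract does not spell out the association algorithm; the sketch below shows a common baseline, greedy nearest-neighbor gating, purely as an illustration of what track association involves.

```python
import numpy as np

def associate_tracks(local_tracks, remote_tracks, gate=500.0):
    """Greedy nearest-neighbor association with a distance gate.

    local_tracks:  (N, 2) array of local track positions (m).
    remote_tracks: (M, 2) array of positions reported over datalinks (m).
    Returns (local_idx, remote_idx) pairs whose separation is below the
    gate, with each local track used at most once.
    """
    pairs, used = [], set()
    for j, r in enumerate(remote_tracks):
        d = np.linalg.norm(local_tracks - r, axis=1)
        i = int(np.argmin(d))
        if d[i] < gate and i not in used:
            pairs.append((i, j))
            used.add(i)
    return pairs
```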

Implementation of a Real-time Data fusion Algorithm for Flight Test Computer (비행시험통제컴퓨터용 실시간 데이터 융합 알고리듬의 구현)

  • Lee, Yong-Jae;Won, Jong-Hoon;Lee, Ja-Sung
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.8 no.4 s.23
    • /
    • pp.24-31
    • /
    • 2005
  • This paper presents an implementation of a real-time multi-sensor data fusion algorithm for a flight test computer. The sensor data consist of positional information on the target from a radar, a GPS receiver, and an INS. The data fusion algorithm is designed as a 21st-order distributed Kalman filter based on the PVA model with sensor bias states. Fault detection and correction logic is included in the algorithm to handle bad measurements and sensor faults. The statistical parameters of the states are obtained from Monte Carlo simulations and covariance analysis using test tracking data. The designed filter is verified using real data in both post-processing and real-time processing.
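
A minimal single-axis sketch of the PVA (position-velocity-acceleration) Kalman recursion underlying such a filter; the paper's actual design is 21st-order, distributed, and includes sensor bias states, all of which this toy omits.

```python
import numpy as np

def pva_predict(x, P, Q, dt):
    """Predict step for a single-axis PVA state [pos, vel, acc]."""
    F = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    return F @ x, F @ P @ F.T + Q

def pva_update(x, P, z, R):
    """Update step with a position-only measurement z (1-element array)."""
    H = np.array([[1.0, 0.0, 0.0]])
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P
```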

Hierarchical Behavior Control of Mobile Robot Based on Space & Time Sensor Fusion(STSF)

  • Han, Ho-Tack
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.6 no.4
    • /
    • pp.314-320
    • /
    • 2006
  • Navigation in environments that are densely cluttered with obstacles is still a challenge for Autonomous Ground Vehicles (AGVs), especially when the configuration of obstacles is not known a priori. Reactive local navigation schemes that tightly couple the robot actions to sensor information have proved effective in these environments, and because of the environmental uncertainties, STSF (Space and Time Sensor Fusion)-based fuzzy behavior systems have been proposed. Realizing autonomous behavior in mobile robots using STSF control based on spatial data fusion requires the formulation of rules that are collectively responsible for the necessary levels of intelligence. This collection of rules can be conveniently decomposed and efficiently implemented as a hierarchy of fuzzy behaviors. This paper describes how this can be done using a behavior-based architecture. The approach is motivated by ethological models which suggest hierarchical organizations of behavior. Experimental results show that the proposed method can smoothly and effectively guide a robot through cluttered environments such as dense forests.
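
A toy sketch of fuzzy-behavior blending, the mechanism by which a hierarchy of behaviors can be combined into one motion command; the activation values and commands are hypothetical, not taken from the STSF system itself.

```python
def blend_behaviors(behaviors):
    """Fuzzy blending of primitive behaviors into one motion command.

    behaviors: list of (activation, (v, w)) pairs, where each primitive
    behavior (e.g. avoid-obstacle, go-to-goal) proposes a translational
    velocity v and rotational velocity w with an activation in [0, 1].
    """
    total = sum(a for a, _ in behaviors) or 1.0
    v = sum(a * cmd[0] for a, cmd in behaviors) / total
    w = sum(a * cmd[1] for a, cmd in behaviors) / total
    return v, w

# Example: strong obstacle avoidance dominates a weak go-to-goal command.
print(blend_behaviors([(0.8, (0.1, 0.9)), (0.2, (0.5, 0.0))]))
```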

Combining Geostatistical Indicator Kriging with Bayesian Approach for Supervised Classification

  • Park, No-Wook;Chi, Kwang-Hoon;Moon, Wooil-M.;Kwon, Byung-Doo
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.382-387
    • /
    • 2002
  • In this paper, we propose a geostatistical approach incorporated into the Bayesian data fusion technique for supervised classification of multi-sensor remote sensing data. Traditional spectral-based classification cannot account for spatial information and may produce unrealistic classification results. To obtain accurate spatial/contextual information, indicator kriging, which allows one to estimate the probability of occurrence of classes on the basis of surrounding observations, is incorporated into the Bayesian framework. This approach has the merit of incorporating both spectral and spatial information, improving the confidence level of the final data fusion task. To illustrate the proposed scheme, supervised classification of a multi-sensor remote sensing test data set was carried out.
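
Assuming conditional independence of the two sources (the usual assumption in this kind of Bayesian fusion), a per-pixel combination of spectral class probabilities with kriging-derived spatial probabilities might look like the following sketch; array names are illustrative.

```python
import numpy as np

def fuse_spectral_spatial(p_spectral, p_spatial):
    """Per-pixel Bayesian fusion of class probabilities.

    p_spectral: (H, W, K) class probabilities from the spectral classifier.
    p_spatial:  (H, W, K) class probabilities from indicator kriging of
                surrounding training observations.
    Under conditional independence, the fused posterior is the
    normalized product of the two probability fields.
    """
    post = p_spectral * p_spatial
    return post / post.sum(axis=-1, keepdims=True)
```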

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.3
    • /
    • pp.175-182
    • /
    • 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel arrangement and ToF depth sensors are used for 3D scene capture. Although each ToF depth sensor can measure the depth information of the scene in real time, it has several problems to overcome. Therefore, after we capture low-resolution depth images with the ToF depth sensors, we perform post-processing to resolve these problems. The depth information from the depth sensor is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching using belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
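
The warped ToF depth becomes an initial disparity through the standard stereo relation d = fB/Z; the sketch below shows just that conversion, with parameter names chosen for illustration rather than taken from the paper.

```python
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Convert a warped ToF depth map (metres) into an initial disparity
    map (pixels) for the stereo matcher via d = f * B / Z.
    Zero-depth pixels (no ToF measurement) are left at zero disparity."""
    depth = np.asarray(depth_m, dtype=float)
    disp = np.zeros_like(depth)
    valid = depth > 0
    disp[valid] = focal_px * baseline_m / depth[valid]
    return disp
```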

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.4
    • /
    • pp.381-390
    • /
    • 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining various sensors with different characteristics and limited sensing capability has advantages in terms of complementarity and cooperation for obtaining better information about the environment. In this paper, for robust self-localization of a mobile robot with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on the probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from input camera images, which serve as natural landmark points in the self-localization process. With the laser structured light sensor, it instead utilizes geometrical features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
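
One simple reading of reliability-weighted Bayesian fusion is inverse-variance weighting of two independent Gaussian estimates; the sketch below uses per-sensor variances as a stand-in for the paper's experimentally predefined reliability functions.

```python
def fuse_gaussian_estimates(mu_a, var_a, mu_b, var_b):
    """Inverse-variance (Bayesian) fusion of two independent Gaussian
    pose estimates, e.g. vision-based and structured-light-based.
    Returns the fused mean and variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
    return mu, 1.0 / (w_a + w_b)

# Hypothetical usage: vision says x = 1.00 m (var 0.04), laser says
# x = 1.10 m (var 0.01); the fused estimate leans toward the laser.
print(fuse_gaussian_estimates(1.00, 0.04, 1.10, 0.01))
```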

Development of Multi-purpose Smart Sensor Using Presence Sensor (재실 감지 센서를 이용한 다용도 스마트 센서 개발)

  • Cha, Joo-Heon;Yong, Heong
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.24 no.1
    • /
    • pp.103-109
    • /
    • 2015
  • This paper introduces a multi-purpose smart fusion sensor. Normally, this type of sensor can contribute to energy savings specifically related to lighting and heating/air conditioning systems by detecting individuals in an office building. If a fire occurs, the sensor can provide information regarding the presence and location of residents in the building to a management center. The system consists of four sensors: a thermopile sensor for detecting heat energy, an ultrasonic sensor for measuring the distance of objects from the sensor, a fire detection sensor, and a passive infrared sensor for detecting temperature change. The system has a wireless communication module to provide the management center with control information for lighting and heating/air conditioning systems. We have also demonstrated the usefulness of the proposed system by applying it to a real environment.
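
The paper does not give its decision rules; purely as an illustration, a toy fusion of the four sensor outputs into a room status might look like this.

```python
def room_status(thermopile_hot, distance_changed, fire_detected, pir_motion):
    """Toy occupancy/fire decision from the four sensor outputs.

    Priorities: a fire report overrides everything; otherwise motion
    (PIR) or combined heat-plus-range evidence marks the room occupied.
    """
    if fire_detected:
        return "alarm"      # report presence/location to the management center
    if pir_motion or (thermopile_hot and distance_changed):
        return "occupied"   # keep lighting and HVAC active
    return "vacant"         # allow energy-saving shutdown
```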

Federated Information Mode-Matched Filters in ACC Environment

  • Kim Yong-Shik;Hong Keum-Shik
    • International Journal of Control, Automation, and Systems
    • /
    • v.3 no.2
    • /
    • pp.173-182
    • /
    • 2005
  • In this paper, a target tracking algorithm for tracking maneuvering vehicles is presented. The overall algorithm belongs to the category of interacting multiple-model (IMM) algorithms used to detect multiple targets using fused information from multiple sensors. First, two kinematic models are derived: a constant-velocity model for linear motions and a constant-speed turn model for curvilinear motions. For the constant-speed turn model, a nonlinear information filter is used in place of the extended Kalman filter. Being algebraically equivalent to the Kalman filter (KF), the information filter is extended to N-sensor distributed dynamic systems. The model-matched filter used in multi-sensor environments takes the form of a federated nonlinear information filter. In multi-sensor environments, the information-based filter is easier to decentralize, initialize, and fuse than a KF-based filter. In this paper, the structural features and information sharing principle of the federated information filter are discussed. The performance of the suggested algorithm is evaluated using Monte Carlo simulations of the two motion patterns.
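
In information form, multi-sensor fusion reduces to summing local information contributions, which is what makes the federated filter easy to decentralize and fuse; a minimal sketch follows (local filter outputs assumed already time-aligned).

```python
import numpy as np

def fuse_information(local_infos):
    """Fuse N local estimates given in information form.

    local_infos: list of (Y_i, y_i) pairs, where Y_i = P_i^{-1} is the
    local information matrix and y_i = P_i^{-1} x_i the information
    vector. The fused information is simply the sum over sensors.
    """
    Y = sum(Yi for Yi, _ in local_infos)
    y = sum(yi for _, yi in local_infos)
    x = np.linalg.solve(Y, y)   # recover the fused state estimate
    return x, Y
```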

Development of A Multi-sensor Fusion-based Traffic Information Acquisition System with Robust to Environmental Changes using Mono Camera, Radar and Infrared Range Finder (환경변화에 강인한 단안카메라 레이더 적외선거리계 센서 융합 기반 교통정보 수집 시스템 개발)

  • Byun, Ki-hoon;Kim, Se-jin;Kwon, Jang-woo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.16 no.2
    • /
    • pp.36-54
    • /
    • 2017
  • The purpose of this paper is to develop a multi-sensor fusion-based traffic information acquisition system that is robust to environmental changes. The system combines the characteristics of each sensor and is more robust to environmental changes than a video detector alone. Moreover, it is not affected by day or night conditions and has lower maintenance costs than an inductive-loop traffic detector. This is accomplished by synthesizing object tracking information from a radar, vehicle classification information from a video detector, and reliable object detections from an infrared range finder. To prove the effectiveness of the proposed system, we conducted experiments for 6 hours over 5 days during daytime and early evening on a pedestrian-accessible road. According to the experimental results, the system achieves 88.7% classification accuracy and a 95.5% vehicle detection rate. If the parameters of the system are optimized to adapt to changes in the experimental environment, it is expected to contribute to the advancement of ITS.
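
As a hypothetical illustration of the synthesis step, the sketch below gates a radar track with the infrared confirmation and attaches the video classification; the actual system's fusion rules are richer than this.

```python
def fuse_vehicle_record(radar_track, video_class, ir_confirmed):
    """Combine the three detector outputs into one traffic record.

    radar_track:  dict with 'position' and 'speed' from the radar tracker.
    video_class:  vehicle class label from the video detector.
    ir_confirmed: True if the infrared range finder confirms an object.
    """
    if not ir_confirmed:
        return None  # suppress radar/video false alarms
    return {"position": radar_track["position"],
            "speed": radar_track["speed"],
            "class": video_class}
```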