• Title/Summary/Keyword: multiple sensor fusion

DIND Data Fusion with Covariance Intersection in Intelligent Space with Networked Sensors

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.7 no.1 / pp.41-48 / 2007
  • Recent advances in networked sensor technology, mobile robotics, and artificial intelligence can be combined to develop autonomous, distributed monitoring systems. This study is a preliminary step toward a multi-purpose "Intelligent Space" (ISpace) platform in which such advanced technologies can easily be deployed to provide smart services to humans. We describe the ISpace system architecture designed and implemented in this study and give only a short review of existing techniques, since several recent books and review papers already cover them thoroughly; instead, we focus on the main results concerning DIND data fusion with covariance intersection (CI) in Intelligent Space, and conclude by discussing possible future extensions of ISpace. The paper first treats the general principles of the navigation and guidance architecture, and then the detailed functions of tracking multiple objects, detecting humans, and assessing motion, together with results from the simulation runs.
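
The entry above rests on the covariance intersection (CI) rule for fusing estimates whose cross-correlation is unknown. As a reference point only, here is a minimal sketch of the standard two-estimate CI update; the trace-minimizing grid search for the weight and the example numbers are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_steps=51):
    """Fuse two estimates with unknown cross-correlation via covariance intersection.

    The weight omega is picked by a simple grid search minimizing the trace of
    the fused covariance, which is one common heuristic.
    """
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for omega in np.linspace(0.0, 1.0, n_steps):
        P = np.linalg.inv(omega * P1_inv + (1.0 - omega) * P2_inv)
        x = P @ (omega * P1_inv @ x1 + (1.0 - omega) * P2_inv @ x2)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Example: two noisy 2-D position estimates of the same object
x_fused, P_fused = covariance_intersection(
    np.array([1.0, 2.0]), np.diag([0.5, 2.0]),
    np.array([1.2, 1.8]), np.diag([2.0, 0.4]))
print(x_fused)
```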

A Study on Biometric Model for Information Security (정보보안을 위한 생체 인식 모델에 관한 연구)

  • Jun-Yeong Kim;Se-Hoon Jung;Chun-Bo Sim
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.1 / pp.317-326 / 2024
  • Biometric recognition is a technology that identifies a person by extracting information on physical and behavioral characteristics with a dedicated device. Cyber threats such as forgery, duplication, and hacking of biometric traits are increasing in this field. In response, security systems are being strengthened and made more complex, which makes them harder for individuals to use. To address this, multimodal biometric models are being studied. Existing studies have proposed feature fusion methods, but comparisons between these methods are insufficient. Therefore, in this paper we compare and evaluate fusion methods for multimodal biometric models using fingerprint, face, and iris images. VGG-16, ResNet-50, EfficientNet-B1, EfficientNet-B4, EfficientNet-B7, and Inception-v3 were used for feature extraction, and the 'Sensor-Level', 'Feature-Level', 'Score-Level', and 'Rank-Level' fusion methods were compared. In the evaluation, the EfficientNet-B7 model achieved 98.51% accuracy and high stability with 'Feature-Level' fusion. However, because EfficientNet-B7 is a large model, further work on model lightweighting is needed for biometric feature fusion.
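
Of the four fusion levels compared above, feature-level fusion is the easiest to show in isolation. The sketch below merely concatenates L2-normalized per-modality embeddings before a downstream classifier; the embedding dimensions and the `feature_level_fusion` helper are hypothetical and not the paper's pipeline.

```python
import numpy as np

def feature_level_fusion(embeddings):
    """Concatenate L2-normalized per-modality feature vectors into one vector.

    `embeddings` maps modality name -> 1-D feature vector (e.g. the output of
    a CNN backbone applied to a fingerprint, face, or iris image).
    """
    parts = []
    for name in sorted(embeddings):          # fixed order keeps the layout stable
        v = np.asarray(embeddings[name], dtype=float)
        parts.append(v / (np.linalg.norm(v) + 1e-12))
    return np.concatenate(parts)

# Hypothetical 128-/256-/64-dimensional embeddings for one subject
fused = feature_level_fusion({
    "fingerprint": np.random.rand(128),
    "face": np.random.rand(256),
    "iris": np.random.rand(64),
})
print(fused.shape)  # (448,) -- this vector would feed a downstream classifier
```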

Unsupervised Image Classification through Multisensor Fusion using Fuzzy Class Vector (퍼지 클래스 벡터를 이용하는 다중센서 융합에 의한 무감독 영상분류)

  • 이상훈
    • Korean Journal of Remote Sensing / v.19 no.4 / pp.329-339 / 2003
  • In this study, a decision-level image fusion approach is proposed for unsupervised classification of images acquired from multiple sensors with different characteristics. The proposed method applies, separately for each sensor, an unsupervised classification scheme based on spatial region-growing segmentation using hierarchical clustering, and iteratively computes maximum-likelihood estimates of fuzzy class vectors for the segmented regions with the EM (expectation-maximization) algorithm. The fuzzy class vector is treated as an indicator vector whose elements represent the probabilities that a region belongs to each of the existing classes. The classification results of the individual sensors are then combined using these fuzzy class vectors. This approach does not require as precise a spatial co-registration between the images of different sensors as pixel-level fusion schemes do. The proposed method was applied to multispectral SPOT and AIRSAR data observed over the north-eastern area of Jeollabuk-do, and the experimental results show that it provides more accurate classification information than the augmented-vector technique, the most common approach to pixel-level image fusion.
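
As a loose illustration of decision-level fusion with class-probability ("fuzzy class") vectors, the sketch below combines per-sensor class vectors for one segmented region with a normalized product rule. The actual combination rule and EM estimation in the paper may differ, so treat this purely as an assumption-laden toy.

```python
import numpy as np

def fuse_fuzzy_class_vectors(class_vectors):
    """Combine per-sensor class-probability vectors for one region.

    Each row of `class_vectors` is one sensor's probability vector over the
    same set of classes; a normalized product gives the fused vector.
    """
    v = np.prod(np.asarray(class_vectors, dtype=float), axis=0)
    return v / v.sum()

# Two sensors, three candidate classes for the same segmented region
fused = fuse_fuzzy_class_vectors([[0.6, 0.3, 0.1],
                                  [0.5, 0.2, 0.3]])
label = int(np.argmax(fused))  # fused class decision for the region
print(fused, label)
```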

Cooperative Localization for Multiple Mobile Robots using Constraints Propagation Techniques on Intervals (제약 전파 기법을 적용한 다중 이동 로봇의 상호 협동 위치 추정)

  • Jo, Kyoung-Hwan;Jang, Choul-Soo;Lee, Ji-Hong
    • Journal of Institute of Control, Robotics and Systems / v.14 no.3 / pp.273-283 / 2008
  • This article describes a cooperative localization technique in which multiple robots share their position information. Conventional methods such as the EKF require a linearization process and consequently cannot guarantee that their result is a range containing the true value. In this paper, we propose a method for merging data from redundant sensors based on constraint propagation techniques on intervals, which has the merit of guaranteeing that the estimated interval contains the true value. In particular, we apply constraint propagation to fuse wheel encoders, a gyro, and an inexpensive GPS receiver, and we exploit the correlation between GPS data in the common workspace to further improve localization for multiple robots. Simulation results show that the proposed method considerably improves multi-robot localization performance.
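
The guarantee described above comes from interval arithmetic: intersecting valid interval bounds can never exclude the true value. A minimal sketch of that fusion step follows; the sensor names and bounds are invented for illustration.

```python
def intersect(a, b):
    """Intersect two intervals (lo, hi); returns None if they are inconsistent."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Hypothetical bounds on one robot's x position from odometry, GPS, and a
# constraint propagated from a neighboring robot's shared position
odometry = (4.2, 5.8)
gps = (4.9, 6.5)
neighbor = (4.0, 5.4)

x_bound = odometry
for other in (gps, neighbor):
    x_bound = intersect(x_bound, other)
print(x_bound)  # (4.9, 5.4): guaranteed to contain the true x if every bound is valid
```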

Effects of Geographic Information on the Performance of Multiple Ground Target Tracking System Using Multiple Sensors (다중 센서에 의한 다중 지상 표적 추적시 지형 정보가 미치는 영향)

  • Kim, In-Teak;Lee, Eung-Gi;Kim, Woong-Su
    • Journal of Advanced Navigation Technology / v.2 no.1 / pp.43-52 / 1998
  • In this paper, we investigate the effects of geographic information on the performance of a multiple-ground-target tracking system using multiple sensors. Geographic information is utilized in two ways: in association and in masking target measurements. Applying mobility information to the association procedure produced virtually no improvement in the overall performance of the tracking system, whereas masking target measurements based on mobility produced the desirable result of reducing the number of false tracks. Since geographic information can be regarded as an additional sensor in the sensor fusion paradigm, it must be used with care.
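
A toy sketch of the measurement-masking idea reported above: candidate measurements falling in terrain that a ground target cannot occupy are discarded before track association. The terrain grid, cell size, and coordinates are invented for illustration.

```python
import numpy as np

def mask_measurements(measurements, traversable, cell_size=100.0):
    """Keep only measurements whose grid cell is traversable by ground targets.

    `measurements` is an (N, 2) array of x/y positions in meters and
    `traversable` a boolean grid derived from geographic information.
    """
    kept = []
    for x, y in measurements:
        i, j = int(y // cell_size), int(x // cell_size)
        if 0 <= i < traversable.shape[0] and 0 <= j < traversable.shape[1] \
                and traversable[i, j]:
            kept.append((x, y))
    return np.array(kept)

terrain = np.array([[True, False], [True, True]])   # False = water, cliff, etc.
z = np.array([[50.0, 50.0], [150.0, 50.0], [150.0, 150.0]])
print(mask_measurements(z, terrain))  # drops the measurement in the masked cell
```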

Fingerprint Fusion Based on Minutiae and Ridge for Enrollment (등록 지문의 정보 융합에 관한 연구)

  • 이동재;최경택;이상훈;김재희
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.3 / pp.93-100 / 2004
  • This paper presents a method for integrating multiple impressions of a finger to improve fingerprint verification performance. A small sensor has the advantage that it can be used in many application fields; however, a sufficiently large impression of the fingerprint is not available due to the small sensing area, which degrades the verification performance of the system. The proposed method overcomes this problem by combining the information of several fingerprints at enrollment. To combine the fingerprints, the alignment step is critical: multiple impressions of a finger are first coarsely aligned using corresponding minutiae pairs and then finely aligned using a Distance Map. An integrated template is then constructed for enrollment in the aligned coordinate system. Since this integrated template represents an enlarged finger region, the limitation imposed by the small sensor can be overcome. Experimental results show that using the integrated template of multiple impressions improves the performance of the fingerprint verification system.
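
The coarse alignment stage above uses corresponding minutiae pairs; a generic way to do that is a least-squares (Kabsch-style) 2D rigid fit, sketched below. This is not the paper's Distance Map refinement, and the minutiae coordinates are hypothetical.

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares rotation R and translation t mapping src minutiae to dst.

    `src` and `dst` are (N, 2) arrays of corresponding minutiae coordinates.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Hypothetical corresponding minutiae from two impressions of one finger
src = [[10, 20], [40, 25], [30, 60]]
dst = [[12, 22], [42, 28], [31, 63]]
R, t = rigid_transform_2d(src, dst)
aligned = np.asarray(src, float) @ R.T + t   # impression 1 in impression 2's frame
```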

Segment-based Image Classification of Multisensor Images

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing / v.28 no.6 / pp.611-622 / 2012
  • This study proposes two multisensor fusion methods for segment-based image classification that utilize region-growing segmentation; the proposed algorithms employ a Gaussian-PDF measure and an evidential measure, respectively. In remote sensing, segment-based approaches extract more explicit information on spatial structure than pixel-based methods. Data from a single sensor may be insufficient to provide an accurate description of a ground scene in image classification, and because of the redundant and complementary nature of multisensor data, combining information from multiple sensors can reduce the classification error rate. The Gaussian-PDF method defines a regional measure as the PDF average of the pixels belonging to a region, and assigns the region to the class with the maximum regional measure. The evidential fusion method uses plausibility and belief measures derived from a Beta-distribution mass function as the basic probability assignment for every hypothesis about region classes. The proposed methods were applied to SPOT XS and ENVISAT data acquired over the Iksan area of the Korean peninsula, and the experimental results show that the segment-based method with the evidential measure is highly effective in improving classification via multisensor fusion.
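
The evidential measure above builds on mass, belief, and plausibility functions. As background, here is a compact sketch of Dempster's rule for combining two mass functions over the same class hypotheses; how the paper derives its masses from the Beta distribution is not reproduced, and the example masses are invented.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset of classes -> mass)."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                 # mass assigned to disjoint hypotheses
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Hypothetical masses from two sensors over the classes {water, forest}
m_optical = {frozenset({"water"}): 0.6, frozenset({"water", "forest"}): 0.4}
m_radar = {frozenset({"water"}): 0.5, frozenset({"forest"}): 0.3,
           frozenset({"water", "forest"}): 0.2}
print(dempster_combine(m_optical, m_radar))
```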

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing / v.6 no.3 / pp.175-182 / 2017
  • In this paper, we present a multi-view depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel configuration and ToF depth sensors are used for 3D scene capture. Although each ToF depth sensor can measure the depth of the scene in real time, it has several problems to overcome; therefore, after capturing low-resolution depth images with the ToF sensors, we apply post-processing to resolve them. The depth information from the depth sensor is then warped to the color image positions and used as initial disparity values, and the warped depth data is also used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching with belief propagation using the depth-discontinuity map and the initial disparities, we obtain more accurate and stable multi-view disparity maps in reduced time.
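
When warped ToF depth seeds stereo matching, the usual conversion is disparity = focal length x baseline / depth. A small sketch of that step follows; the focal length, baseline, and depth values are placeholders, not the parameters of the paper's rig.

```python
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Convert a metric depth map to initial disparities for stereo matching."""
    depth = np.asarray(depth_m, dtype=float)
    disparity = np.zeros_like(depth)
    valid = depth > 0                       # ToF holes / invalid pixels stay at 0
    disparity[valid] = focal_px * baseline_m / depth[valid]
    return disparity

# Hypothetical 2x3 ToF depth patch (meters), 1000 px focal length, 5 cm baseline
depth = np.array([[2.0, 2.5, 0.0],
                  [3.0, 3.0, 4.0]])
print(depth_to_disparity(depth, focal_px=1000.0, baseline_m=0.05))
```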

A Tracking Algorithm for Autonomous Navigation of AGVs: Federated Information Filter

  • Kim, Yong-Shik;Hong, Keum-Shik
    • Journal of Navigation and Port Research / v.28 no.7 / pp.635-640 / 2004
  • In this paper, a tracking algorithm for the autonomous navigation of automated guided vehicles (AGVs) operating in container terminals is presented. The developed navigation algorithm takes the form of a federated information filter that detects other AGVs and avoids obstacles using fused information from multiple sensors. Being algebraically equivalent to the Kalman filter (KF), the information filter is extended to N-sensor distributed dynamic systems. In multi-sensor environments, an information-based filter is easier to decentralize, initialize, and fuse than a KF-based filter. It is proved that the information state and information matrix of the suggested filter, weighted by an information-sharing factor, are equal to those of a centralized information filter under regularity conditions. Numerical examples using Monte Carlo simulation are provided to compare the centralized information filter with the proposed one.
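
For orientation, the sketch below shows the basic information-form fusion step the abstract refers to: each local estimate contributes an information matrix Y_i = P_i^{-1} and information state y_i = Y_i x_i, and the fused information is their sum. The federated information-sharing factors are omitted, so this corresponds only to the centralized case, and the example estimates are invented.

```python
import numpy as np

def information_fusion(estimates):
    """Fuse N local estimates (x_i, P_i) in information form.

    Each local estimate contributes Y_i = inv(P_i) and y_i = Y_i @ x_i; the
    fused information matrix and state are simply their sums.
    """
    dim = len(estimates[0][0])
    Y, y = np.zeros((dim, dim)), np.zeros(dim)
    for x_i, P_i in estimates:
        Y_i = np.linalg.inv(P_i)
        Y += Y_i
        y += Y_i @ np.asarray(x_i, float)
    P = np.linalg.inv(Y)
    return P @ y, P                          # fused state and covariance

# Two local sensor estimates of an AGV's 2-D position
x_f, P_f = information_fusion([
    (np.array([10.0, 4.0]), np.diag([1.0, 4.0])),
    (np.array([10.5, 3.8]), np.diag([2.0, 0.5])),
])
print(x_f)
```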

Efficiently Managing Data Collected from External Wireless Sensors on Smart Devices Using a Sensor Virtualization Framework

  • Lee, Byung-Bog;Hong, SangGi;Lee, Kyeseon;Kim, Naesoo;Ko, JeongGil
    • Information and Communications Magazine / v.30 no.10 / pp.79-85 / 2013
  • By interacting with external wireless sensors, smartphones can gather high-fidelity data on the surrounding environment for various environment-aware, personalized applications. In this work we introduce the sensor virtualization module (SVM), which virtualizes external sensors so that smartphone applications can easily utilize a large number of external sensing resources. Implemented on the Android platform, our SVM simplifies the management of external sensors by abstracting them as virtual sensors, resolves conflicting data requests from multiple applications, and supports sensor data fusion across different sensors to create new customized sensor elements. We envision that our SVM will open up possibilities for designing novel personalized smartphone applications.
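
To make the virtual-sensor idea concrete, here is a small sketch (in Python rather than the Android Java of the actual SVM) of a registry that wraps external sensors as virtual sensors and exposes fused custom sensors; all class and method names are invented for illustration.

```python
class VirtualSensor:
    """Uniform wrapper around one external wireless sensor."""
    def __init__(self, name, read_fn):
        self.name, self._read = name, read_fn

    def read(self):
        return self._read()

class SensorVirtualizationModule:
    """Registry that hands out virtual sensors and fused custom sensors."""
    def __init__(self):
        self._sensors = {}

    def register(self, sensor):
        self._sensors[sensor.name] = sensor

    def fuse(self, name, inputs, fuse_fn):
        """Expose a new virtual sensor whose reading fuses several others."""
        deps = [self._sensors[n] for n in inputs]
        self.register(VirtualSensor(name, lambda: fuse_fn([d.read() for d in deps])))

    def read(self, name):
        return self._sensors[name].read()

svm = SensorVirtualizationModule()
svm.register(VirtualSensor("indoor_temp", lambda: 22.5))   # stubbed external sensors
svm.register(VirtualSensor("outdoor_temp", lambda: 14.0))
svm.fuse("temp_gap", ["indoor_temp", "outdoor_temp"], lambda v: v[0] - v[1])
print(svm.read("temp_gap"))  # 8.5
```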