• Title/Summary/Keyword: fusion of sensor information

Multi-sensor Fusion Based Guidance and Navigation System Design of Autonomous Mine Disposal System Using Finite State Machine (유한 상태 기계를 이용한 자율무인기뢰처리기의 다중센서융합기반 수중유도항법시스템 설계)

  • Kim, Ki-Hun;Choi, Hyun-Taek;Lee, Chong-Moo
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.47 no.6
    • /
    • pp.33-42
    • /
    • 2010
  • This research proposes a practical guidance system that considers ocean currents in real sea operation. Optimality of the generated path is not an issue in this paper; waypoints from the start point to possible goal positions are selected by experienced human supervisors considering the major ocean current axis. This paper also describes in detail the implementation of a precise underwater navigation solution using a multi-sensor fusion technique based on USBL, GPS, DVL, and AHRS measurements. To implement a precise, accurate, and frequent underwater navigation solution, three strategies are chosen. The first is identification of the heading alignment angle to enhance the performance of the standalone dead-reckoning algorithm. The second is that the absolute position is fused in a timely manner to prevent the accumulation of integration error, where the absolute position can be selected between USBL and GPS depending on sensor status. The third is the introduction of an effective outlier rejection algorithm. The performance of the developed algorithm is verified with experimental data from a mine disposal vehicle and a deep-sea ROV.
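
As a rough illustration of the three strategies named in the abstract (heading alignment, timely absolute-position fusion, and outlier rejection), the Python sketch below shows a simplified dead-reckoning update with a gated complementary-filter correction. This is a minimal sketch, not the authors' implementation; the gate and gain values are illustrative assumptions.

```python
import numpy as np

def dead_reckon(pos, vel_body, yaw, align_angle, dt):
    """Propagate position from DVL body-frame velocity and AHRS heading.
    The heading alignment angle compensates the DVL/AHRS mounting offset."""
    heading = yaw + align_angle
    c, s = np.cos(heading), np.sin(heading)
    vel_nav = np.array([c * vel_body[0] - s * vel_body[1],
                        s * vel_body[0] + c * vel_body[1]])
    return pos + vel_nav * dt

def fuse_absolute(pos_dr, pos_abs, gate=5.0, gain=0.3):
    """Blend in a USBL or GPS fix unless it looks like an outlier:
    a fix farther than `gate` metres from dead reckoning is rejected."""
    innovation = pos_abs - pos_dr
    if np.linalg.norm(innovation) > gate:   # simple outlier rejection
        return pos_dr
    return pos_dr + gain * innovation       # complementary-filter update
```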

Red Tide Detection through Image Fusion of GOCI and Landsat OLI (GOCI와 Landsat OLI 영상 융합을 통한 적조 탐지)

  • Shin, Jisun;Kim, Keunyong;Min, Jee-Eun;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.2_2
    • /
    • pp.377-391
    • /
    • 2018
  • In order to monitor red tide efficiently over a wide range, the need for red tide detection using remote sensing is increasing. However, previous studies have focused on developing red tide detection algorithms for ocean colour sensors. In this study, we propose the use of multiple sensors to address the inaccuracy of red tide detection in coastal areas with high turbidity, which has been pointed out as a limitation of satellite-based red tide monitoring. The study area was selected based on the red tide information provided by the National Institute of Fisheries Science, and spatial fusion and spectral-based fusion were attempted using a GOCI image as the ocean colour sensor and a Landsat OLI image as the terrestrial sensor. Through spatial fusion of the two images, improved detection results were obtained both for the coastal red tide that could not be observed in the GOCI image and for the outer sea areas where the quality of the Landsat OLI image was low. Spectral-based fusion was performed at the feature level and at the raw-data level, and there was no significant difference in the red tide distribution patterns derived from the two methods. However, in the feature-level method, the red tide area tended to be overestimated because the spatial resolution of the image is low. As a result of pixel segmentation by the linear spectral unmixing method, the difference in the red tide area was found to increase as the number of pixels with a low red tide ratio increased. At the raw-data level, the Gram-Schmidt sharpening method estimated a somewhat larger area than the PC spectral sharpening method, but no significant difference was observed. This study shows that coastal red tide in highly turbid water, as well as red tide in outer sea areas, can be detected through the spatial fusion of ocean colour and terrestrial sensors. By presenting various spectral-based fusion methods, a more accurate red tide area estimation method is also suggested. These results are expected to provide more precise detection of red tide around the Korean peninsula and the accurate red tide area information needed to determine countermeasures to effectively control red tide.
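
The pixel segmentation step mentioned above relies on linear spectral unmixing. The sketch below shows the standard least-squares form of that technique; the endmember spectra (e.g. red tide, turbid water, clear water) and the clip-and-normalize constraint handling are common simplifications assumed here, not necessarily the paper's exact formulation.

```python
import numpy as np

def linear_unmix(pixel, endmembers):
    """Estimate per-endmember abundances for one pixel by least squares.
    `endmembers` is a (bands x classes) matrix of reference spectra."""
    abundances, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    abundances = np.clip(abundances, 0, None)    # enforce non-negativity
    total = abundances.sum()
    return abundances / total if total > 0 else abundances  # sum-to-one
```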

Multiple Target Angle Tracking Algorithm Based on Measurement Fusion (측정치 융합에 기반을 둔 다중표적 방위각 추적 알고리즘)

  • Ryu, Chang-Soo
    • Journal of the Institute of Electronics Engineers of Korea IE
    • /
    • v.43 no.3
    • /
    • pp.13-21
    • /
    • 2006
  • Ryu et al. proposed a multiple-target angle tracking algorithm using the angular measurements obtained from the signal subspace estimated from the output of a sensor array. Ryu's algorithm has the attractive features of no data association problem and a simple structure, but its performance is seriously degraded at low signal-to-noise ratios, and it uses only the angular measurements obtained from the signal subspace at the sampling time, even though the signal subspace is continuously updated by the output of the sensor array. To improve the tracking performance of Ryu's algorithm, a measurement fusion method is derived in this paper based on ML (maximum likelihood); it allows the angular measurements obtained from the adjacent signal subspaces, as well as from the signal subspace at the sampling time, to be used. A new target angle tracking algorithm is proposed using the derived measurement fusion method. The proposed algorithm has better tracking performance than Ryu's algorithm while retaining its good features.
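
For independent Gaussian measurement errors, ML fusion of several scalar measurements of the same quantity reduces to an inverse-variance weighted mean. The sketch below illustrates that standard result applied to angle measurements from adjacent subspace updates; it ignores angle wraparound and is not the paper's full derivation.

```python
import numpy as np

def fuse_angles_ml(angles, variances):
    """ML fusion of angle measurements from adjacent signal subspaces.
    Under independent Gaussian errors the ML estimate is the
    inverse-variance weighted mean (wraparound ignored for simplicity)."""
    angles = np.asarray(angles, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    return np.sum(w * angles) / np.sum(w)
```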

3D Omni-directional Vision SLAM using a Fisheye Lens and Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.7
    • /
    • pp.634-640
    • /
    • 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D camera system with multiple cameras has a larger size and slow processing time when calculating depth information for omni-directional images. In this paper, we used a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a constant distance from the camera. We calculated fusion points from the plane coordinates of obstacles obtained by the two-dimensional laser scanner and the outlines of obstacles obtained by the omni-directional image sensor, which can acquire a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained using the proposed algorithm with real maps.
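
A minimal sketch of the fusion-point idea, under the assumption that the obstacle outline in the fisheye image has been converted to a vertical angle: the laser supplies the obstacle's plane coordinates, and the image outline lifts that point into 3D. The camera/laser extrinsic calibration is omitted, so this is illustrative rather than the authors' exact computation.

```python
import math

def fusion_point(laser_xy, elev_angle):
    """Lift a 2-D laser obstacle point to 3-D using the vertical angle of
    the obstacle outline extracted from the omni-directional image."""
    x, y = laser_xy
    rng = math.hypot(x, y)            # horizontal range from the laser scan
    z = rng * math.tan(elev_angle)    # obstacle height implied by the outline
    return (x, y, z)
```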

Position Information Acquisition Method Based on LED Lights and Smart Device Camera Using 3-Axis Moving Distance Measurement (3축 이동량 측정을 이용한 LED조명과 스마트단말 카메라기반 위치정보 획득 기법)

  • Jung, Soon-Ho;Lee, Min-Woo;Kim, Ki-Yun;Cha, Jae-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.1
    • /
    • pp.226-232
    • /
    • 2015
  • With the arrival of the smart device era, many application services for smartphones are being developed. The LBS (Location Based Service) technique is considered one of the most important techniques for supporting location-based application services. Usually, a smartphone acquires position information using position recognition systems and sensors such as GPS (Global Positioning System) and a G-sensor. However, since the GPS signal from the satellite can hardly be received in indoor environments, new LBS techniques for indoor environments are required. To solve this problem, this paper proposes a position information transceiver using LED lights and a smartphone camera sensor. We verified the feasibility of the proposed positioning system through laboratory experiments.
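
One plausible reading of the title's combination of LED identification and 3-axis moving-distance measurement is sketched below: the camera-decoded LED ID anchors the position on an indoor map, and the accumulated 3-axis displacement is added until the next LED is seen. The anchor table, coordinates, and interface are hypothetical illustrations, not the paper's design.

```python
# Hypothetical anchor table: LED IDs mapped to known indoor positions (metres).
LED_ANCHORS = {0x01: (2.0, 3.5, 0.0), 0x02: (8.0, 3.5, 0.0)}

def estimate_position(led_id, displacement):
    """Anchor the position at the LED decoded by the camera, then apply the
    3-axis moving distance accumulated since that LED was last seen."""
    ax, ay, az = LED_ANCHORS[led_id]
    dx, dy, dz = displacement
    return (ax + dx, ay + dy, az + dz)
```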

Development of Real-time Traffic Information Generation Technology Using Traffic Infrastructure Sensor Fusion Technology (교통인프라 센서융합 기술을 활용한 실시간 교통정보 생성 기술 개발)

  • Sung Jin Kim;Su Ho Han;Gi Hoan Kim;Jung Rae Kim
    • Journal of Information Technology Services
    • /
    • v.22 no.2
    • /
    • pp.57-70
    • /
    • 2023
  • In order to establish an autonomous driving environment, it is necessary to study traffic safety and demand prediction by analyzing information generated from the transportation infrastructure, rather than relying only on the sensors of the vehicle itself. In this paper, we propose a real-time traffic information generation method using the sensor fusion technology of the transportation infrastructure. The proposed method uses sensors such as cameras and radars installed in the transportation infrastructure to generate information according to each sensor's characteristics, such as the presence or absence of pedestrians at a crosswalk, a judgment of pausing at the crosswalk, the distance to the stop line, queue length, head distance, and inter-vehicle distance. An experiment was conducted in a demonstration environment by comparing the proposed method with drone measurement results. The experiment confirmed that the method can recognize pedestrians at crosswalks and judge a pause in front of a crosswalk, and most data, such as the distance to the stop line and queue length, showed more than 95% accuracy, so the method was judged to be usable.
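
To make the generated quantities concrete, the sketch below computes two of them from per-lane radar tracks: head distances between consecutive vehicles and queue length as a count of effectively stopped vehicles. The sorting convention and the stop-speed threshold are assumptions for illustration, not values from the paper.

```python
def head_distances(front_positions):
    """Head distance between consecutive vehicles in one lane, given
    front-bumper distances from the stop line."""
    ordered = sorted(front_positions)
    return [b - a for a, b in zip(ordered, ordered[1:])]

def queue_length(speeds, stop_speed=0.5):
    """Number of vehicles effectively stopped (below stop_speed, in m/s)."""
    return sum(1 for v in speeds if v < stop_speed)
```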

IoT Enabled Smart Emergency LED Exit Sign controller Design using Arduino

  • Jung, Joonseok;Kwon, Jongman;Mfitumukiza, Joseph;Jung, Soonho;Lee, Minwoo;Cha, Jaesang
    • International journal of advanced smart convergence
    • /
    • v.6 no.1
    • /
    • pp.76-81
    • /
    • 2017
  • This paper presents a low-cost and flexible IoT-enabled smart LED controller, built with Arduino, that is used for emergency exit signs. The Internet of Things (IoT) has become a global network that connects physical objects through network communications for device-to-device communication, access to information on the internet, interaction with users, and a permanently connected environment. A crucial point of this paper is the potential of the Arduino platform as a low-cost, easy-to-use microcontroller that, in combination with the various sensors used in IoT technology, facilitates the development of intelligent products. To demonstrate the feasibility and effectiveness of the system, devices such as an LED strip, various sensors, an Arduino, a power plug, and a ZigBee module have been integrated to set up a smart emergency exit sign system. The general concept of the proposed system design is the combination of various sensors, such as a smoke detector, humidity and temperature sensors, glass-break sensors, and a camera sensor, connected to the main controller (Arduino), which communicates with the LED exit sign displays and with dedicated PC monitors in the integrated system monitoring room (control room) through gateway devices using the ZigBee module. A critical appraisal of the approach in this area concludes the paper.
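
As a sketch of the monitoring-side logic implied by this architecture, the snippet below routes sensor events arriving through the ZigBee gateway to the exit sign displays. The message fields, threshold, and `set_mode` interface are hypothetical assumptions used only to illustrate the data flow, not the paper's code.

```python
# Hypothetical monitor-side sketch: message fields, the threshold, and the
# exit-sign interface are assumptions, not the paper's implementation.
SMOKE_THRESHOLD = 0.1  # assumed smoke level that triggers evacuation mode

def handle_sensor_event(event, signs):
    """Route a sensor event received via the ZigBee gateway to the signs."""
    if event["type"] == "smoke" and event["value"] > SMOKE_THRESHOLD:
        for sign in signs:
            sign.set_mode("EVACUATE")   # switch LED displays to evacuation
    elif event["type"] == "glass_break":
        for sign in signs:
            sign.set_mode("ALERT")      # flag the event for the control room
```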

Unsupervised Image Classification through Multisensor Fusion using Fuzzy Class Vector (퍼지 클래스 벡터를 이용하는 다중센서 융합에 의한 무감독 영상분류)

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.4
    • /
    • pp.329-339
    • /
    • 2003
  • In this study, an approach to decision-level image fusion is proposed for unsupervised classification of images acquired from multiple sensors with different characteristics. The proposed method applies to each sensor separately an unsupervised image classification scheme based on spatial region-growing segmentation, which makes use of hierarchical clustering, and iteratively computes the maximum likelihood estimates of fuzzy class vectors for the segmented regions with the EM (expectation maximization) algorithm. The fuzzy class vector is considered an indicator vector whose elements represent the probabilities that the region belongs to each of the existing classes. The method then combines the classification results of the individual sensors using the fuzzy class vectors. This approach does not require as high a precision in spatial coregistration between the images of different sensors as pixel-level image fusion does. The proposed method has been applied to multispectral SPOT and AIRSAR data observed over the north-eastern area of Jeollabuk-do, and the experimental results show that it provides more correct information for the classification than a scheme using an augmented vector technique, which is the most conventional approach to pixel-level image fusion.
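
The decision-level combination step can be illustrated with a short sketch: each sensor contributes a fuzzy class vector for a region, and the vectors are combined into a single class decision. A normalized product rule is used here because it is a common combination scheme; the paper's exact rule may differ.

```python
import numpy as np

def fuse_fuzzy_class_vectors(vectors):
    """Decision-level fusion of per-sensor fuzzy class vectors for one region.
    Combines them with a normalized product rule (an illustrative choice)
    and returns the winning class index and the fused vector."""
    combined = np.ones_like(vectors[0], dtype=float)
    for v in vectors:
        combined *= v                  # element-wise product across sensors
    combined /= combined.sum()         # renormalize to a probability vector
    return int(np.argmax(combined)), combined
```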

A Study on Biometric Model for Information Security (정보보안을 위한 생체 인식 모델에 관한 연구)

  • Jun-Yeong Kim;Se-Hoon Jung;Chun-Bo Sim
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.317-326
    • /
    • 2024
  • Biometric recognition is a technology that identifies a person by extracting information on the person's biological and behavioral characteristics with a specific device. Cyber threats such as forgery, duplication, and hacking of biometric characteristics are increasing in the field of biometrics. In response, security systems have been strengthened and have grown complex, making them difficult for individuals to use. To this end, multimodal biometric models are being studied. Existing studies have suggested feature fusion methods, but comparisons between feature fusion methods are insufficient. Therefore, in this paper, we compared and evaluated the fusion methods of multimodal biometric models using fingerprint, face, and iris images. VGG-16, ResNet-50, EfficientNet-B1, EfficientNet-B4, EfficientNet-B7, and Inception-v3 were used for feature extraction, and the 'Sensor-Level', 'Feature-Level', 'Score-Level', and 'Rank-Level' fusion methods were compared and evaluated. In the comparative evaluation, the EfficientNet-B7 model showed 98.51% accuracy and high stability with the 'Feature-Level' fusion method. However, because the EfficientNet-B7 model is large, studies on model lightweighting are needed for biometric feature fusion.
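
The 'Feature-Level' method that performed best here is commonly implemented by concatenating the per-modality embeddings into one vector before classification. The sketch below shows that standard scheme with per-modality L2 normalization; the normalization step is a common convention assumed here, not a detail confirmed by the abstract.

```python
import numpy as np

def feature_level_fusion(fingerprint_feat, face_feat, iris_feat):
    """Feature-level fusion: L2-normalize each modality's embedding and
    concatenate them into one vector for the downstream classifier.
    The embeddings would come from backbones such as EfficientNet-B7."""
    def l2norm(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([l2norm(fingerprint_feat),
                           l2norm(face_feat),
                           l2norm(iris_feat)])
```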

Segment-based Image Classification of Multisensor Images

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.28 no.6
    • /
    • pp.611-622
    • /
    • 2012
  • This study proposes two multisensor fusion methods for segment-based image classification utilizing region-growing segmentation. The proposed algorithms employ a Gaussian-PDF measure and an evidential measure, respectively. In remote sensing applications, segment-based approaches are used to extract more explicit information on spatial structure than pixel-based methods. Data from a single sensor may be insufficient to provide an accurate description of a ground scene in image classification. Due to the redundant and complementary nature of multisensor data, combining information from multiple sensors can reduce the classification error rate. The Gaussian-PDF method defines a regional measure as the average of the class PDF over the pixels belonging to the region, and assigns a region to the class with the maximum regional measure. The evidential fusion method uses two measures, plausibility and belief, which are derived from a mass function of the Beta distribution for the basic probability assignment of every hypothesis about region classes. The proposed methods were applied to SPOT XS and ENVISAT data acquired over the Iksan area of the Korean peninsula. The experimental results showed that the segment-based method with the evidential measure is highly effective in improving classification via multisensor fusion.
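
The Gaussian-PDF measure as described translates directly into a short sketch: average the class-conditional Gaussian density over a region's pixels and take the class with the maximum regional measure. Parameter estimation and the evidential (belief/plausibility) variant are omitted, so this illustrates only the first of the two proposed measures.

```python
import numpy as np
from scipy.stats import multivariate_normal

def classify_region(pixels, class_params):
    """Gaussian-PDF measure: average each class's PDF over the region's
    pixels (an (n x d) array) and assign the region to the class with
    the maximum regional measure."""
    scores = {}
    for cls, (mean, cov) in class_params.items():
        scores[cls] = multivariate_normal.pdf(pixels, mean, cov).mean()
    return max(scores, key=scores.get)
```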