• Title/Abstract/Keyword: Fusion Scheme

Search results: 231

An Efficient Monocular Depth Prediction Network Using Coordinate Attention and Feature Fusion

• Xu, Huihui;Li, Fei
    • Journal of Information Processing Systems, Vol. 18, No. 6, pp. 794-802, 2022
  • The recovery of reasonable depth information from different scenes is a popular topic in the field of computer vision. To generate depth maps with better details, we present an efficient monocular depth prediction framework with coordinate attention and feature fusion. Specifically, the proposed framework contains attention, multi-scale, and feature fusion modules. The attention module refines features using coordinate attention to enhance prediction quality, whereas the multi-scale module integrates useful low- and high-level contextual features at higher resolution. Moreover, we developed a feature fusion module to combine the heterogeneous features and generate high-quality depth outputs. We also designed a hybrid loss function that measures prediction errors in terms of depth and scale-invariant gradients, which helps preserve rich details. We conducted experiments on public RGB-D datasets, and the evaluation results show that the proposed scheme considerably enhances the accuracy of depth prediction, achieving 0.051 for the log10 error and 0.992 for δ < 1.25³ on the NYUv2 dataset.
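
For readers unfamiliar with the quoted figures, the mean log10 error and the δ < 1.25³ threshold accuracy are the standard monocular-depth evaluation measures; a minimal sketch of how they are typically computed is shown below (the variable names and the masking of invalid pixels are illustrative, not taken from the paper).

```python
import numpy as np

def depth_metrics(pred, gt, threshold=1.25 ** 3):
    """Standard monocular-depth metrics: mean log10 error and delta accuracy.

    pred, gt: arrays of predicted / ground-truth depths (same shape, in metres).
    """
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    valid = gt > 0                        # ignore pixels without ground truth
    pred, gt = pred[valid], gt[valid]

    log10_err = np.mean(np.abs(np.log10(pred) - np.log10(gt)))
    ratio = np.maximum(pred / gt, gt / pred)
    delta_acc = np.mean(ratio < threshold)   # fraction of pixels within the threshold
    return log10_err, delta_acc
```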

Multi-Frame Face Classification with Decision-Level Fusion based on Photon-Counting Linear Discriminant Analysis

  • Yeom, Seokwon
    • International Journal of Fuzzy Logic and Intelligent Systems, Vol. 14, No. 4, pp. 332-339, 2014
  • Face classification has wide applications in security and surveillance. However, this technique presents various challenges caused by pose, illumination, and expression changes. Face recognition with long-distance images involves additional challenges owing to focusing problems and motion blurring. Multiple frames captured under varying spatial or temporal settings can provide additional information, which can be used to achieve improved classification performance. This study investigates the effectiveness of multi-frame decision-level fusion with photon-counting linear discriminant analysis. Multiple frames generate multiple scores for each class. The fusion process comprises three stages: score normalization, score validation, and score combination. Candidate scores are selected during the score validation process after the scores are normalized; the validation step removes bad scores that could degrade the final output. The selected candidate scores are combined using one of the following fusion rules: maximum, averaging, or majority voting. Degraded facial images are employed to demonstrate the robustness of multi-frame decision-level fusion in harsh environments. Out-of-focus and motion-blurring point-spread functions are applied to the test images to simulate long-distance acquisition. Experimental results with three facial data sets indicate the efficiency of the proposed decision-level fusion scheme.
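
The three-stage fusion described above (normalization, validation, then combination with a maximum, averaging, or majority-voting rule) could be realized roughly as in the following sketch; the margin-based validation threshold and all names are assumptions, not the paper's exact procedure.

```python
import numpy as np

def fuse_frame_scores(scores, rule="average", min_margin=0.1):
    """Decision-level fusion of per-frame class scores.

    scores: array of shape (n_frames, n_classes), non-negative class scores.
    rule:   'max', 'average', or 'vote'.
    min_margin: hypothetical validation threshold on the gap between the
                best and second-best normalized score of a frame.
    """
    scores = np.asarray(scores, float)
    # 1) score normalization: scale each frame's scores to sum to one
    norm = scores / scores.sum(axis=1, keepdims=True)
    # 2) score validation: keep frames whose top-two margin is large enough
    sorted_scores = np.sort(norm, axis=1)
    margin = sorted_scores[:, -1] - sorted_scores[:, -2]
    valid = norm[margin >= min_margin]
    if len(valid) == 0:                  # fall back to all frames if none pass
        valid = norm
    # 3) score combination
    if rule == "max":
        fused = valid.max(axis=0)
    elif rule == "average":
        fused = valid.mean(axis=0)
    else:                                # majority voting over per-frame decisions
        votes = np.bincount(valid.argmax(axis=1), minlength=scores.shape[1])
        fused = votes / votes.sum()
    return fused.argmax(), fused
```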

Underwater Localization using RF Sensor and INS for Unmanned Underwater Vehicles (RF 센서와 INS을 이용한 UUV 위치 추정)

  • 박대길;곽경민;정재훈;김진현;정완균
    • 한국해양공학회지, Vol. 31, No. 2, pp. 170-176, 2017
  • In this paper, we propose an underwater localization scheme based on the fusion of an inertial navigation system (INS) and the received signal strength (RSS) of electromagnetic (EM) wave sensors to guarantee precise localization performance at high sampling rates. In this scheme, the INS predicts the pose of the unmanned underwater vehicle (UUV) by dead reckoning at every step, and the RF sensors correct the UUV position using an Earth-fixed reference when the UUV is located within the underwater wireless sensor network (UWSN). The localization scheme and state modeling were formulated in an extended Kalman filter framework, and UUV localization experiments were conducted in a basin environment. The scheme achieved reliable localization accuracy during long-term navigation, demonstrating the feasibility of exploiting EM wave attenuation as an Earth-fixed reference sensor.
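
A minimal skeleton of this predict/correct loop, with INS dead reckoning for the prediction and an RSS-derived position fix for the correction, might look like the sketch below; the 2-D constant-velocity state model and the noise values are placeholders rather than the paper's actual EKF formulation.

```python
import numpy as np

# Hypothetical 2-D state [x, y, vx, vy]; the paper's actual model differs.
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],              # RSS-derived position observes x, y only
              [0, 1, 0, 0]], float)
Q = np.eye(4) * 1e-3                      # process noise (placeholder)
R = np.eye(2) * 0.5                       # RSS measurement noise (placeholder)

def ekf_step(x, P, rss_position=None):
    """One predict/update cycle: INS dead-reckoning predict, RSS correct."""
    # Prediction (dead reckoning)
    x = F @ x
    P = F @ P @ F.T + Q
    # Correction, only when the UUV is inside the sensor-network coverage
    if rss_position is not None:
        y = rss_position - H @ x          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x, P
```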

Development of Data Logging Platform of Multiple Commercial Radars for Sensor Fusion With AVM Cameras (AVM 카메라와 융합을 위한 다중 상용 레이더 데이터 획득 플랫폼 개발)

  • 진영석;전형철;신영남;현유진
    • 대한임베디드공학회논문지, Vol. 13, No. 4, pp. 169-178, 2018
  • Currently, various sensors are used in advanced driver assistance systems. To overcome the limitations of individual sensors, sensor fusion has recently attracted attention in the field of intelligent vehicles, and vision- and radar-based sensor fusion has become a popular approach. A typical fusion method uses a vision sensor to recognize targets within ROIs (regions of interest) generated by radar sensors. In particular, because the wide-angle lenses of AVM (around view monitor) cameras limit detection performance at near distances and around the edges of the field of view, accurate ROI extraction from the radar sensors is essential for high-performance fusion of AVM cameras and radar sensors. To address this problem, we proposed a sensor fusion scheme based on commercial Delphi radar modules. First, we configured a data logging system for multiple radars together with AVM cameras. We also designed radar post-processing algorithms to extract accurate ROIs. Finally, using the developed hardware and software platforms, we verified the post-processing algorithm in indoor and outdoor environments.
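
As a rough illustration of the ROI-extraction step, the sketch below maps a single radar detection (range, azimuth) to a rectangular ROI on a top-view AVM image; the coordinate conventions, scale factor, and ROI size are assumptions, not the platform's actual calibration.

```python
import numpy as np

def radar_to_avm_roi(range_m, azimuth_deg,
                     px_per_m=20.0, img_size=(400, 400), roi_half_m=0.5):
    """Map one radar detection (range, azimuth) to a rectangular ROI in a
    top-view AVM image. All geometry constants here are illustrative."""
    # Radar polar measurement -> vehicle-frame Cartesian (x forward, y left)
    az = np.radians(azimuth_deg)
    x, y = range_m * np.cos(az), range_m * np.sin(az)

    # Vehicle frame -> top-view image pixels (vehicle centred in the image)
    cx, cy = img_size[1] / 2, img_size[0] / 2
    u = cx - y * px_per_m                 # image column
    v = cy - x * px_per_m                 # image row (forward is up)

    half = roi_half_m * px_per_m
    return (int(u - half), int(v - half), int(u + half), int(v + half))
```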

A Study on Mobile Robot Navigation Using a New Sensor Fusion

  • Tack, Han-Ho;Jin, Tae-Seok;Lee, Sang-Bae
    • 한국지능시스템학회 학술대회논문집, 한국퍼지및지능시스템학회 ISIS 2003, pp. 471-475, 2003
  • This paper proposes a sensor-fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to enable accurate measurement of quantities such as the distance to an obstacle and the location of the service robot itself. In conventional fusion schemes, the measurement depends only on the current data sets; as a result, more sensors are required to measure a certain physical parameter or to improve the accuracy of the measurement. In this approach, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized to improve the measurement. The theoretical basis is illustrated by examples, and the effectiveness is proved through simulations. Finally, the new space and time sensor fusion (STSF) scheme is applied to the control of a mobile robot in both structured and unstructured environments.
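
One simple way to realize such space-time fusion is to re-express stored past observations in the current robot frame (using odometry) and combine them with the current data, as in the hedged sketch below; the averaging rule and variable names are illustrative only, not the STSF formulation itself.

```python
import numpy as np

def fuse_past_measurements(obstacle_world_pts, pose_now):
    """Space-time fusion sketch: past obstacle observations, already expressed
    in the world frame via the robot poses at which they were taken, are
    transformed into the current robot frame and averaged.

    obstacle_world_pts: (N, 2) array of stored observations in world coords.
    pose_now:           current robot pose (x, y, theta) in the world frame.
    """
    x, y, th = pose_now
    pts = np.asarray(obstacle_world_pts, float) - np.array([x, y])
    rot = np.array([[np.cos(-th), -np.sin(-th)],
                    [np.sin(-th),  np.cos(-th)]])
    local = pts @ rot.T                   # world -> current robot frame
    fused = local.mean(axis=0)            # simple temporal fusion (average)
    distance = np.linalg.norm(fused)      # fused distance to the obstacle
    return fused, distance
```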


A Study on the Map-Building of a Cleaning Robot Based upon the Optimal Cost Function (청소로봇의 최적비용함수를 고려한 지도 작성에 관한 연구)

  • 강진구
    • 디지털산업정보학회논문지, Vol. 5, No. 3, pp. 39-45, 2009
  • In this paper we present a cleaning robot system for an autonomous mobile robot. The robot performs goal-reaching tasks in unknown indoor environments using sensor fusion. Its operational objective is to clean the floor or any other applicable surface and to build a map of the surrounding environment for further purposes, such as finding the shortest available path. With this system the autonomous mobile robot can move in various modes and perform dexterous tasks, and it outperforms a fixed-base redundant robot in avoiding singularities and obstacles. Sensor fusion improves the performance of the robot, which has redundant freedom in its workspace, as well as its map-building. In this paper, map-building of the cleaning robot is studied using sensor fusion. A sequence of this alternating task-execution scheme enables the cleaning robot to execute various tasks efficiently. The proposed algorithm is experimentally verified and discussed with a cleaning robot, KCCR.
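
The map-building step could, for example, be based on a log-odds occupancy grid updated from range readings, as in the toy sketch below; this is a generic illustration, not the paper's optimal-cost-function formulation, and the update constants are placeholders.

```python
import numpy as np

def update_occupancy(grid, robot_cell, hit_cell, l_occ=0.85, l_free=-0.4):
    """Toy log-odds occupancy update along one range-sensor ray.

    grid:       2-D array of log-odds values.
    robot_cell: (row, col) of the robot.
    hit_cell:   (row, col) where the range sensor detected an obstacle.
    """
    r0, c0 = robot_cell
    r1, c1 = hit_cell
    n = max(abs(r1 - r0), abs(c1 - c0), 1)
    for i in range(n):                     # cells along the ray are free space
        r = int(round(r0 + (r1 - r0) * i / n))
        c = int(round(c0 + (c1 - c0) * i / n))
        grid[r, c] += l_free
    grid[r1, c1] += l_occ                  # the end cell is occupied
    return grid
```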

Centralized Kalman Filter with Adaptive Measurement Fusion: its Application to a GPS/SDINS Integration System with an Additional Sensor

  • Lee, Tae-Gyoo
    • International Journal of Control, Automation, and Systems, Vol. 1, No. 4, pp. 444-452, 2003
  • An integration system with multiple measurement sets can be realized via combined application of a centralized and a federated Kalman filter. Compared with the federated Kalman filter, it is difficult for the centralized Kalman filter to remove a failed sensor. All varieties of Kalman filters monitor the innovation sequence (residual) for detection and isolation of a failed sensor. The innovation sequence, which serves as an indicator of the real-time estimation error, plays an important role in adaptive mechanism design. In this study, a centralized Kalman filter with adaptive measurement fusion is introduced by means of the innovation sequence. The objectives of adaptive measurement fusion are automatic isolation of and recovery from sensor failures, as well as an inherent monitoring capability. The proposed adaptive filter is applied to a GPS/SDINS integration system with an additional sensor. Simulation studies confirm that the proposed adaptive scheme is effective for the isolation of and recovery from immediate sensor failures.
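
Innovation-based monitoring of this kind is often implemented as a chi-square gate on the normalized innovation before each measurement update; the sketch below shows one such gated update (the 9.21 threshold, roughly the 99% chi-square value for a 2-D measurement, is an illustrative choice, not the paper's).

```python
import numpy as np

def gated_update(x, P, z, H, R, gate=9.21):
    """Centralized-KF measurement update with innovation-based fault isolation.

    A measurement whose normalized innovation squared exceeds the chi-square
    gate is treated as coming from a failed sensor and is skipped.
    """
    y = z - H @ x                         # innovation (residual)
    S = H @ P @ H.T + R
    nis = float(y.T @ np.linalg.inv(S) @ y)
    if nis > gate:                        # isolate the suspect measurement
        return x, P, False
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P, True
```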

Multi-Resolution MSS Image Fusion

  • Ghassemian, Hassan;Amidian, Asghar
    • 대한원격탐사학회 학술대회논문집, Proceedings of ACRS 2003 ISRS, pp. 648-650, 2003
  • Efficient multi-resolution image fusion aims to take advantage of the high spectral resolution of Landsat TM images and the high spatial resolution of SPOT panchromatic images simultaneously. This paper presents a multi-resolution data fusion scheme based on multirate image representation. It is motivated by analytical results obtained from high-resolution multispectral image data analysis: most of the energy of the spectral features is packed in the lower frequency bands, whereas the spatial features, such as edges, are distributed in the higher frequency bands. This allows the multispectral images to be spatially enhanced by adding the high-resolution spatial features to them through a multirate filtering procedure. The proposed method is compared with some conventional methods, and the results show that it preserves more spectral features with less spatial distortion.
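
A common way to realize this kind of detail injection is to add the high-frequency component of the panchromatic image to each resampled multispectral band; the sketch below uses a Gaussian high-pass filter as a stand-in for the paper's multirate filter bank, so the filter choice and sigma are assumptions.

```python
import numpy as np
from scipy import ndimage

def highpass_fusion(ms_band, pan, sigma=2.0):
    """High-pass detail-injection sketch for MS/PAN fusion.

    ms_band: one multispectral band already resampled to the PAN grid.
    pan:     co-registered panchromatic image of the same shape.
    """
    pan = pan.astype(float)
    detail = pan - ndimage.gaussian_filter(pan, sigma)   # high-frequency edges
    return ms_band.astype(float) + detail                # inject spatial detail
```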


Characterization of Magnetic Abrasive Finishing Using Sensor Fusion (센서 융합을 이용한 MAF 공정 특성 분석)

  • 김설빔;안병운;이성환
    • 대한기계학회논문집A, Vol. 33, No. 5, pp. 514-520, 2009
  • In configuring an automated polishing system, a monitoring scheme to estimate the surface roughness is necessary. In this study, a precision polishing process, magnetic abrasive finishing (MAF), was investigated along with an in-process monitoring setup. A magnetic tool attached to a CNC machine polishes the surface of Stavax (S136) die steel workpieces. During the finishing experiments, both AE signals and force signals were sampled and analysed. The finishing results show that MAF has nano-scale finishing capability (up to 8 nm in surface roughness) and that the sensor signals have strong correlations with parameters such as the gap between the tool and the workpiece, the feed rate, and the abrasive size. In addition, the signals were used as input parameters of artificial neural networks to predict the generated surface roughness. Among the three networks constructed (AE rms input, force input, and AE+force input), the ANN with sensor fusion (AE+force) produced the most stable results. These results show that the proposed sensor fusion scheme is appropriate for monitoring and prediction of this nano-scale precision finishing process.
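
The roughness-prediction step could be prototyped with a small feed-forward network on fused AE and force features, as in the sketch below; the synthetic placeholder data, feature choice, and network size are assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic placeholder features [AE_rms, mean_force] and roughness targets;
# real inputs would come from the monitored finishing experiments.
X = rng.random((40, 2))
y = 50 * X[:, 0] + 20 * X[:, 1] + rng.normal(0, 1, 40)   # fake Ra values (nm)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
ra_pred = model.predict([[0.3, 0.7]])   # predicted roughness for a new AE+force sample
```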

Application of Bayesian Statistical Analysis to Multisource Data Integration

  • Hong, Sa-Hyun;Moon, Wooil-M.
    • 대한원격탐사학회 학술대회논문집, Proceedings of International Symposium on Remote Sensing, pp. 394-399, 2002
  • In this paper, multisource data classification methods based on the Bayesian formula are considered. In this decision fusion scheme, the individual data sources are handled separately by statistical classification algorithms, and a Bayesian fusion method is then applied to integrate the decisions from the available data sources. The method combines the individual expert decisions, where the weight of each expert represents the reliability of the corresponding source. In previous work, the reliability measure used in the statistical approach was common to all pixels. In this experiment, the weight factors were assigned different values for individual pixels in order to improve the integrated classification accuracy. Although most implementations of Bayesian classification approaches assume fixed a priori probabilities, we used adaptive a priori probabilities, iteratively calculating the local a priori probabilities so as to maximize the a posteriori probabilities. The effectiveness of the proposed method is first demonstrated on simulations with artificial data and then evaluated on real-world data sets. As a result, we show that the Bayesian statistical fusion scheme performs well on multispectral data classification.
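
A per-pixel weighted Bayesian combination of the individual classifiers could look like the following sketch; the log-opinion-pool form, the per-source likelihood inputs, and the small numerical floor are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def bayesian_fusion(likelihoods, weights, prior):
    """Weighted Bayesian decision fusion for one pixel.

    likelihoods: (n_sources, n_classes) p(x_s | class) from each source's classifier.
    weights:     (n_sources,) per-pixel reliability weights for this pixel.
    prior:       (n_classes,) a priori class probabilities (updated iteratively
                 in the paper; here simply an input).
    """
    likelihoods = np.asarray(likelihoods, float)
    weights = np.asarray(weights, float)
    log_post = np.log(prior) + np.sum(
        weights[:, None] * np.log(likelihoods + 1e-12), axis=0)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()                    # renormalize to probabilities
    return post.argmax(), post
```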
