• Title/Summary/Keyword: Fusion Image

Landmark Detection Based on Sensor Fusion for Mobile Robot Navigation in a Varying Environment

  • Jin, Tae-Seok;Kim, Hyun-Sik;Kim, Jong-Wook
    • International Journal of Fuzzy Logic and Intelligent Systems, v.10 no.4, pp.281-286, 2010
  • We propose a space- and time-based sensor fusion method and a robust landmark detection algorithm built on it for mobile robot navigation. To fully utilize the information from the sensors, this paper first proposes a new sensor fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets, enabling accurate measurement. Exploration of an unknown environment is an important task for the new generation of mobile robots, which may navigate by means of a number of sensing systems such as sonar or vision. The newly proposed STSF (Space and Time Sensor Fusion) scheme is applied to landmark recognition for mobile robot navigation in both structured and unstructured environments, and the experimental results demonstrate the performance of the landmark recognition.
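
As a rough illustration of the space-and-time idea described above, the sketch below warps the previous sonar returns into the current robot frame using the odometry increment and keeps them with a decayed weight. The planar-odometry model, the 2D point representation, and the decay factor are all assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def transform_to_current_frame(points_prev, dx, dy, dtheta):
    """Re-express 2D sonar hits from the previous pose in the current
    robot frame, given the odometry increment (dx, dy, dtheta)."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, s], [-s, c]])          # rotation by -dtheta
    return (points_prev - np.array([dx, dy])) @ R.T

def stsf_fuse(points_now, points_prev, odom, decay=0.7):
    """Space-and-time fusion sketch: warp the previous scan into the
    current frame and keep it with a lower weight than fresh data."""
    warped = transform_to_current_frame(points_prev, *odom)
    weights = np.concatenate([np.ones(len(points_now)),
                              decay * np.ones(len(warped))])
    points = np.vstack([points_now, warped])
    return points, weights   # weighted point set for landmark detection

scan_t  = np.array([[1.0, 0.2], [0.9, -0.1]])   # current sonar hits (m)
scan_t1 = np.array([[1.1, 0.25]])               # previous sonar hits (m)
pts, w = stsf_fuse(scan_t, scan_t1, odom=(0.05, 0.0, 0.02))
print(pts, w)
```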

A Pyramid Fusion Method of Two Differently Exposed Images Using Gray Pixel Values (계조 화소 값을 이용한 노출속도가 다른 두 영상의 피라미드 융합 방법)

  • Im, Su Jin;Kim, Jin Heon
    • Journal of Korea Multimedia Society, v.19 no.8, pp.1386-1394, 2016
  • Pyramid fusion usually adjusts the Laplacian weights of the pixels of the input images by evaluating predefined criteria. This has the advantage that, when applied to HDR exposure fusion, it can selectively express intense color and enhance contrast, but it may introduce noise because the weights are determined by per-pixel importance without considering the interdependent pixel relationships that constitute a scene. This paper proposes a fusion method using a simple weight criterion generated from the gray pixel values, which is expected to preserve those interdependent relationships and improve execution speed. To evaluate the performance of the proposed method, we examine a homogeneity measure, H, and compare the execution times of both methods. The proposed method is found to be more advantageous with respect to homogeneity and execution speed.
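
The following is a minimal sketch of the kind of two-image Laplacian-pyramid fusion the abstract describes, with the per-pixel weight computed only from the gray value. The Gaussian well-exposedness criterion and its parameters are illustrative assumptions, not the paper's exact formula.

```python
import cv2
import numpy as np

def gray_weight(img_bgr):
    """Weight from the gray value alone: pixels near mid-gray score high
    (an illustrative well-exposedness criterion; sigma=0.2 is assumed)."""
    g = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    return np.exp(-((g - 0.5) ** 2) / (2 * 0.2 ** 2))

def pyramids(img, levels):
    """Gaussian pyramid and fine-to-coarse Laplacian pyramid."""
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[-1]]                              # coarsest residual
    for i in range(levels, 0, -1):
        size = (gp[i - 1].shape[1], gp[i - 1].shape[0])
        lp.append(gp[i - 1] - cv2.pyrUp(gp[i], dstsize=size))
    return gp, lp[::-1]

def exposure_fuse(im1, im2, levels=4):
    """Blend the Laplacian pyramids of two exposures, weighted by the
    Gaussian pyramids of the normalized gray-value weight maps."""
    w1, w2 = gray_weight(im1), gray_weight(im2)
    s = w1 + w2 + 1e-6
    gw1, _ = pyramids(np.dstack([w1 / s] * 3), levels)
    gw2, _ = pyramids(np.dstack([w2 / s] * 3), levels)
    _, l1 = pyramids(im1, levels)
    _, l2 = pyramids(im2, levels)
    blend = [gw1[i] * l1[i] + gw2[i] * l2[i] for i in range(levels + 1)]
    out = blend[-1]
    for i in range(levels - 1, -1, -1):        # collapse the pyramid
        size = (blend[i].shape[1], blend[i].shape[0])
        out = cv2.pyrUp(out, dstsize=size) + blend[i]
    return np.clip(out, 0, 255).astype(np.uint8)

# out = exposure_fuse(cv2.imread("under.jpg"), cv2.imread("over.jpg"))
```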

Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템)

  • Yeom, Hong-Gi;Joo, Jong-Tae;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.18 no.1, pp.20-26, 2008
  • As intelligent robots and computers become more widespread, interaction between them and humans is becoming more important, and emotion recognition and expression are indispensable for that interaction. In this paper, we first extract emotional features from the speech signal and the facial image. Second, we apply both BL (Bayesian Learning) and PCA (Principal Component Analysis) to classify five emotion patterns (normal, happy, angry, surprised, and sad). We also experiment with decision fusion and feature fusion to raise the emotion recognition rate. In the decision fusion method, the output values of each recognition system are combined through a fuzzy membership function; in the feature fusion method, superior features are selected through SFS (Sequential Forward Selection) and fed to a neural network based on an MLP (Multi-Layer Perceptron) to classify the five emotion patterns. The recognized result is then applied to a 2D facial shape to express the emotion.
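
A minimal sketch of the decision-fusion step described above, combining the two recognizers' score vectors through a fuzzy membership function. The power-normalization membership and the modality weight are assumptions, since the paper's exact function is not given here.

```python
import numpy as np

EMOTIONS = ["normal", "happy", "anger", "surprise", "sad"]

def fuzzy_membership(scores, sharpness=2.0):
    """Map raw classifier scores to fuzzy memberships summing to 1
    (illustrative power normalization; the paper's exact membership
    function is not specified here)."""
    s = np.clip(scores, 1e-6, None) ** sharpness
    return s / s.sum()

def decision_fusion(speech_scores, face_scores, w_speech=0.4):
    """Combine the two recognizers' outputs at the decision level."""
    mu_s = fuzzy_membership(np.asarray(speech_scores, float))
    mu_f = fuzzy_membership(np.asarray(face_scores, float))
    fused = w_speech * mu_s + (1 - w_speech) * mu_f
    return EMOTIONS[int(np.argmax(fused))], fused

label, fused = decision_fusion([0.1, 0.5, 0.1, 0.2, 0.1],
                               [0.2, 0.4, 0.1, 0.2, 0.1])
print(label, fused.round(3))   # -> "happy" with the fused memberships
```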

Single Image Enhancement Using Inter-channel Correlation

  • Kim, Jin;Jeong, Soowoong;Kim, Yong-Ho;Lee, Sangkeun
    • IEIE Transactions on Smart Processing and Computing, v.2 no.3, pp.130-139, 2013
  • This paper proposes a new approach for enhancing digital images based on red channel information, which has characteristics most analogous to invisible infrared rays. Specifically, the red channel in RGB space is used to analyze the image content and improve the visual quality of the input images, but this can cause unexpected problems, such as over-enhancement of reddish input images. To resolve this problem, inter-channel correlations between the color channels are derived, and weighting parameters for visually pleasant image fusion are estimated. Applying these parameters yields a significant brightness improvement in both dark and bright regions. Furthermore, simple contrast and color corrections are used to maintain the original contrast level and color tone. The main advantages of the proposed algorithm are that 1) it can improve a given image considerably with a simple inter-channel correlation, 2) it can obtain an effect similar to using an extra infrared image, and 3) it is faster than the compared algorithms while remaining free of artifacts such as halo effects. The experimental results show that the proposed approach produces more natural images than existing enhancement algorithms. Therefore, the proposed scheme can be a useful tool for improving image quality in consumer imaging devices, such as compact cameras.
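
As a hedged sketch of the idea, the code below derives inter-channel correlations and uses them to decide how much the red channel should drive the enhancement, down-weighting it for reddish scenes where red disagrees with the other channels. The specific weighting and gain rules are illustrative assumptions, not the authors' estimator.

```python
import numpy as np

def channel_correlation(a, b):
    """Pearson correlation between two color channels."""
    a, b = a.astype(np.float64).ravel(), b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / (np.sqrt((a @ a) * (b @ b)) + 1e-12))

def red_guided_enhance(img_bgr):
    """Red-channel-guided fusion sketch: trust the red channel only to
    the degree it correlates with G and B (weighting rule assumed)."""
    b, g, r = [img_bgr[..., i].astype(np.float32) for i in range(3)]
    rho_rg = channel_correlation(r, g)
    rho_rb = channel_correlation(r, b)
    w = max(0.0, 0.5 * (rho_rg + rho_rb))     # fusion weight in [0, 1]
    base = (b + g + r) / 3.0
    fused = w * r + (1.0 - w) * base          # per-pixel guidance map
    gain = (fused + 1.0) / (base + 1.0)       # brighten where red is strong
    out = np.clip(img_bgr.astype(np.float32) * gain[..., None], 0, 255)
    return out.astype(np.uint8)
```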

Color Space Exploration and Fusion for Person Re-identification (동일인 인식을 위한 컬러 공간의 탐색 및 결합)

  • Nam, Young-Ho;Kim, Min-Ki
    • Journal of Korea Multimedia Society, v.19 no.10, pp.1782-1791, 2016
  • Various color spaces such as RGB, HSV, and log-chromaticity have been used in the field of person re-identification, but not enough studies have been done to find a color space well suited to the task. This paper reviews the color invariance of several color spaces using the diagonal model and explores the suitability of each for person re-identification. It also proposes a re-identification method based on a histogram refinement technique and several color space fusion strategies. Two public datasets (ALOI and ImageLab) were used for the color space suitability test, and the ImageLab dataset was used to evaluate the feasibility of the proposed method. Experimental results show that RGB and HSV are more suitable for the re-identification problem than other color spaces such as normalized RGB and log-chromaticity. The cumulative recognition rates up to the third rank under RGB and HSV were 79.3% and 83.6%, respectively. Furthermore, the fusion strategy using the max score showed a performance improvement of 16% or more. These results show that the proposed method is more effective than methods that use a single color space for person re-identification.
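
The max-score fusion strategy can be sketched as follows: matching scores are computed independently in each color space and the maximum is taken per gallery candidate. The histogram-intersection score and the toy data are assumptions for illustration.

```python
import numpy as np

def hist_similarity(h1, h2):
    """Histogram intersection score in [0, 1]."""
    return float(np.minimum(h1, h2).sum() / (h1.sum() + 1e-12))

def max_score_fusion(query_hists, gallery_hists):
    """Fuse matching scores from several color spaces by taking the
    maximum per gallery candidate (the max-score strategy; histogram
    details are illustrative)."""
    scores = []
    for person_hists in gallery_hists:          # one dict per candidate
        per_space = [hist_similarity(query_hists[cs], person_hists[cs])
                     for cs in query_hists]     # e.g. "RGB", "HSV"
        scores.append(max(per_space))
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
q = {"RGB": rng.random(64), "HSV": rng.random(64)}
gallery = [{"RGB": rng.random(64), "HSV": rng.random(64)} for _ in range(5)]
best, scores = max_score_fusion(q, gallery)
print(best, [round(s, 3) for s in scores])
```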

Mobile Robot Navigation using Data Fusion Based on Camera and Ultrasonic Sensors Algorithm (카메라와 초음파센서 융합에 의한 이동로봇의 주행 알고리즘)

  • Jang, Gi-Dong;Park, Sang-Keon;Han, Sung-Min;Lee, Kang-Woong
    • Journal of Advanced Navigation Technology, v.15 no.5, pp.696-704, 2011
  • In this paper, we propose a mobile robot navigation algorithm using data fusion of a monocular camera and ultrasonic sensors. Threshold values for binary image processing are generated by a fuzzy inference method using image data and ultrasonic sensor data. Varying the threshold improves obstacle detection, so that the mobile robot can reach its goal under poor illumination. Obstacles detected by the fused camera and ultrasonic data are placed on a grid map and avoided using a circular planning algorithm. The performance of the proposed method is evaluated by experiments with the Pioneer 2-DX mobile robot in an indoor room with poor lighting and a narrow corridor.
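
A minimal sketch of fuzzy inference for the binarization threshold, driven by the mean image intensity and the ultrasonic distance: dark scenes and close obstacles push the threshold down so obstacles stay visible. The rule base and membership parameters are invented for illustration and are not the paper's tuned values.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership; a == b or b == c gives a shoulder."""
    left = 1.0 if b == a else (x - a) / (b - a)
    right = 1.0 if c == b else (c - x) / (c - b)
    return float(np.clip(min(left, right), 0.0, 1.0))

def fuzzy_threshold(mean_gray, sonar_dist_m):
    """Infer a binarization threshold from scene brightness and the
    nearest ultrasonic range (rules and parameters are assumptions)."""
    dark = tri(mean_gray, 0, 0, 128)
    bright = tri(mean_gray, 64, 255, 255)
    near = tri(sonar_dist_m, 0.0, 0.0, 1.5)
    far = tri(sonar_dist_m, 0.5, 3.0, 3.0)
    # Each rule proposes a threshold; defuzzify by a weighted average.
    rules = [(min(dark, near), 60), (min(dark, far), 90),
             (min(bright, near), 110), (min(bright, far), 140)]
    num = sum(w * t for w, t in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return num / den

print(round(fuzzy_threshold(mean_gray=70, sonar_dist_m=0.8), 1))
```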

Attitude Estimation for the Biped Robot with Vision and Gyro Sensor Fusion (비전 센서와 자이로 센서의 융합을 통한 보행 로봇의 자세 추정)

  • Park, Jin-Seong;Park, Young-Jin;Park, Youn-Sik;Hong, Deok-Hwa
    • Journal of Institute of Control, Robotics and Systems, v.17 no.6, pp.546-551, 2011
  • A tilt sensor is required to control the attitude of a biped robot walking on uneven terrain. A vision sensor, normally used for recognizing people or detecting obstacles, can serve as a tilt sensor by comparing the current image with a reference image. However, a vision sensor alone has significant technological limitations for controlling a biped robot, such as a low sampling frequency and an estimation time delay. To verify these limitations, an experimental setup of an inverted pendulum, which represents the pitch motion of a walking or running robot, is used, and it is shown that the vision sensor alone cannot control the inverted pendulum, mainly because of the time delay. In this paper, to overcome the limitations of the vision sensor, a Kalman filter based multi-rate sensor fusion algorithm is applied together with a low-quality gyro sensor. This compensates for the limitations of the vision sensor and also eliminates the drift of the gyro sensor. Experiments on inverted pendulum control show that the tilt estimation performance of the fused sensors improves enough to control the attitude of the inverted pendulum.
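
The multi-rate fusion can be sketched with a two-state Kalman filter that integrates the gyro at every step and corrects with the slow vision measurement when one arrives. The noise levels below are assumed, and the vision time delay compensation treated in the paper is omitted for brevity.

```python
import numpy as np

def multirate_kalman(gyro, vision, dt, vision_every,
                     q=(1e-5, 1e-7), r=1e-3):
    """State x = [tilt angle, gyro bias]: predict with the gyro rate at
    every step, update with the vision angle every `vision_every` steps."""
    x = np.zeros(2)
    P = np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])   # angle += (rate - bias)*dt
    Q = np.diag(q)
    H = np.array([[1.0, 0.0]])               # vision measures the angle
    est = []
    for k, w in enumerate(gyro):
        x = np.array([x[0] + (w - x[1]) * dt, x[1]])   # predict
        P = F @ P @ F.T + Q
        if k % vision_every == 0:                       # slow update
            z = vision[k // vision_every]
            S = H @ P @ H.T + r
            K = (P @ H.T) / S
            x = x + (K * (z - x[0])).ravel()
            P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)

# Toy data: constant 0.1 rad/s rotation, biased gyro, noisy slow vision.
dt, n = 0.01, 500
true = 0.1 * dt * np.arange(n)
gyro = 0.1 + 0.02 + 0.01 * np.random.randn(n)          # bias + noise
vis_idx = np.arange(0, n, 50)
vision = true[vis_idx] + 0.005 * np.random.randn(len(vis_idx))
print(multirate_kalman(gyro, vision, dt, 50)[-1], true[-1])
```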

Efficient Object Tracking System Using the Fusion of a CCD Camera and an Infrared Camera (CCD카메라와 적외선 카메라의 융합을 통한 효과적인 객체 추적 시스템)

  • Kim, Seung-Hun;Jung, Il-Kyun;Park, Chang-Woo;Hwang, Jung-Hoon
    • Journal of Institute of Control, Robotics and Systems, v.17 no.3, pp.229-235, 2011
  • To build a robust object tracking and identification system for an intelligent robot or home system, heterogeneous sensor fusion between a visible-ray system and an infrared-ray system is proposed. The proposed system separates the object by combining the ROIs (Regions of Interest) estimated from two different images produced by a heterogeneous sensor that consolidates an ordinary CCD camera and an IR (infrared) camera. The human body and face are detected in both images using different algorithms, such as histogram, optical flow, skin-color model, and Haar model methods. The pose of the human body is also estimated from the body detection result in the IR image using a PCA algorithm along with an AdaBoost algorithm. Then, the results from each detection algorithm are fused to extract the best detection result. To verify the heterogeneous sensor fusion system, several experiments were conducted in various environments. The experimental results indicate good tracking and identification performance regardless of environmental changes. The application area of the proposed system is not limited to robot or home systems; it also includes surveillance and military systems.
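
A rough sketch of the detection-level fusion: a CCD detection is kept when an IR detection overlaps it sufficiently, and it is scored by the better confidence of the two. The IoU threshold and the (box, confidence) representation are assumptions, not the authors' exact fusion rule.

```python
def iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def fuse_rois(ccd_dets, ir_dets, iou_min=0.3):
    """Keep CCD ROIs confirmed by an overlapping IR ROI; each detection
    is a ((x, y, w, h), confidence) pair."""
    fused = []
    for box_c, conf_c in ccd_dets:
        overlaps = [(iou(box_c, b), conf) for b, conf in ir_dets]
        best_iou, best_conf = max(overlaps, default=(0.0, 0.0))
        if best_iou >= iou_min:
            fused.append((box_c, max(conf_c, best_conf)))
    return fused

ccd = [((100, 80, 40, 90), 0.7)]
ir = [((105, 85, 38, 88), 0.9)]
print(fuse_rois(ccd, ir))   # -> the confirmed ROI with confidence 0.9
```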

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems, v.17 no.2, pp.337-351, 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed. Then, a double-layer cascade structure is used to detect the face in each video frame. In addition, two deep convolutional neural networks are used to extract the temporal-domain and spatial-domain facial features in the video. The spatial convolutional neural network extracts spatial information features from each frame of the static expression images in the video, while the temporal convolutional neural network extracts dynamic information features from the optical flow across multiple frames of expression images. A multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to carry out the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method reach 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
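
The multiplicative fusion plus SVM stage can be sketched as below. The CNN feature extraction itself is not shown, and the toy features, dimensions, and SVM settings are placeholders rather than the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

def multiply_fuse(spatial_feats, temporal_feats):
    """Element-wise multiplicative fusion of the two CNN feature
    streams; dimensions are assumed to match, as they would after
    identically sized embedding layers."""
    return np.asarray(spatial_feats) * np.asarray(temporal_feats)

# Toy stand-ins for per-video CNN features (e.g. 128-D embeddings).
rng = np.random.default_rng(0)
n, d = 200, 128
spatial = rng.normal(size=(n, d))      # from the spatial (frame) CNN
temporal = rng.normal(size=(n, d))     # from the temporal (flow) CNN
labels = rng.integers(0, 6, size=n)    # six expression classes

fused = multiply_fuse(spatial, temporal)
clf = SVC(kernel="rbf").fit(fused[:150], labels[:150])
print("toy accuracy:", (clf.predict(fused[150:]) == labels[150:]).mean())
```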

Crack segmentation in high-resolution images using cascaded deep convolutional neural networks and Bayesian data fusion

  • Tang, Wen;Wu, Rih-Teng;Jahanshahi, Mohammad R.
    • Smart Structures and Systems, v.29 no.1, pp.221-235, 2022
  • Manual inspection of steel box girders on long-span bridges is time-consuming and labor-intensive, and the quality of inspection relies on the subjective judgment of the inspectors. This study proposes an automated approach to detect and segment cracks in high-resolution images. An end-to-end cascaded framework is proposed that first detects the existence of cracks using a deep convolutional neural network (CNN) and then segments the crack using a modified U-Net encoder-decoder architecture. A Naïve Bayes data fusion scheme is proposed to reduce false positives and false negatives effectively. To generate the binary crack mask, the original images are first divided into 448 × 448 overlapping image patches, which are classified as crack versus non-crack by a deep CNN. Next, a modified U-Net is trained from scratch, using only the crack patches, for segmentation. A customized loss function consisting of the binary cross-entropy loss and the Dice loss is introduced to enhance segmentation performance. Additionally, a Naïve Bayes fusion strategy is employed to integrate the crack score maps from the different overlapping patches and to decide whether each pixel is a crack. Comprehensive experiments demonstrate that the proposed approach achieves an 81.71% mean intersection over union (mIoU) score across five different training/test splits, which is 7.29% higher than the baseline implemented with the original U-Net.
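
The Naïve Bayes fusion of overlapping patch score maps can be sketched as a per-pixel log-odds accumulation under an independence assumption. The canvas size, patch placement, and uniform 0.5 prior below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def naive_bayes_fuse(prob_maps, canvas=(896, 896), eps=1e-6):
    """Fuse per-pixel crack probabilities from overlapping patches under
    a Naive Bayes independence assumption: sum log-odds where patches
    overlap. `prob_maps` is a list of (prob_patch, y, x) placements;
    uncovered pixels stay at the uniform 0.5 prior."""
    log_odds = np.zeros(canvas)
    for p, y, x in prob_maps:
        p = np.clip(p, eps, 1 - eps)
        log_odds[y:y + p.shape[0], x:x + p.shape[1]] += np.log(p / (1 - p))
    return 1.0 / (1.0 + np.exp(-log_odds))    # fused posterior per pixel

# Two overlapping 448 x 448 patch predictions, offset by half a patch.
rng = np.random.default_rng(0)
p1 = rng.uniform(size=(448, 448))
p2 = rng.uniform(size=(448, 448))
fused = naive_bayes_fuse([(p1, 0, 0), (p2, 224, 224)])
mask = fused > 0.5                            # final binary crack mask
print(mask.shape, mask.mean().round(3))
```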