• Title/Summary/Keyword: fusion of sensor information

Visible Image Enhancement Method Considering Thermal Information from Infrared Image (원적외선 영상의 열 정보를 고려한 가시광 영상 개선 방법)

  • Kim, Seonkeol;Kang, Hang-Bong
    • Journal of Broadcast Engineering
    • /
    • v.18 no.4
    • /
    • pp.550-558
    • /
    • 2013
  • Infrared and visible images convey different information because they capture different wavelengths of light: the infrared image carries thermal information, while the visible image carries texture information. Desirable results can be obtained by fusing the two. To enhance a visible image, we extract a weight map from the visible image using saturation and brightness. The weight map is then adjusted using the thermal information in the infrared image. Finally, an enhanced image is produced by combining the infrared and visible images. Our experimental results show that the proposed algorithm effectively enhances smoke regions in the original image.
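
As a rough illustration of this kind of weight-map fusion, a minimal sketch follows. The specific weight formula, the thermal adjustment, and the blending rule are assumptions for illustration, not the authors' exact method.

```python
# Minimal sketch of weight-map-based IR/visible fusion (illustrative only;
# the weight definitions and blending rule are assumptions, not the paper's method).
import cv2
import numpy as np

def fuse_visible_infrared(visible_bgr, infrared_gray):
    vis = visible_bgr.astype(np.float32) / 255.0
    ir = infrared_gray.astype(np.float32) / 255.0

    # Weight map from the visible image: saturation and brightness (HSV channels).
    hsv = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2HSV).astype(np.float32) / 255.0
    saturation, brightness = hsv[..., 1], hsv[..., 2]
    weight = 1.0 - 0.5 * (saturation + brightness)  # low saturation/brightness -> rely more on IR

    # Adjust the weight map with thermal information: emphasize hot regions of the IR image.
    weight = np.clip(weight * (0.5 + ir), 0.0, 1.0)

    # Per-pixel blend of the visible image with the IR intensity.
    fused = (1.0 - weight[..., None]) * vis + weight[..., None] * ir[..., None]
    return (fused * 255.0).astype(np.uint8)
```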

Three-Dimensional Conjugate Heat Transfer Analysis for Infrared Target Modeling (적외선 표적 모델링을 위한 3차원 복합 열해석 기법 연구)

  • Jang, Hyunsung;Ha, Namkoo;Lee, Seungha;Choi, Taekyu;Kim, Minah
    • Journal of KIISE
    • /
    • v.44 no.4
    • /
    • pp.411-416
    • /
    • 2017
  • The spectral radiance received by an infrared (IR) sensor is mainly determined by the surface temperature of the target itself, so precise temperature prediction is important for generating an IR target image. In this paper, we implement a combined three-dimensional surface temperature prediction module that accounts for target attitude, environment, and material properties in order to generate a realistic IR signature. To verify the calculated surface temperatures, we compare them with results from the well-known IR signature analysis software OKTAL-SE. In addition, IR signal modeling is performed by coupling the surface temperature results with OKTAL-SE.
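
The link between surface temperature and the radiance an IR sensor receives is standard blackbody physics; the sketch below shows a Planck's-law band-radiance calculation. The MWIR band limits and unit emissivity are illustrative assumptions, not values from the paper.

```python
# Illustrative calculation of how surface temperature drives in-band IR radiance
# (standard Planck's law; the band limits and unit emissivity are assumptions).
import numpy as np

H = 6.62607015e-34   # Planck constant [J*s]
C = 2.99792458e8     # speed of light [m/s]
K = 1.380649e-23     # Boltzmann constant [J/K]

def spectral_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance L(lambda, T) in W / (m^2 * sr * m)."""
    return (2.0 * H * C**2 / wavelength_m**5) / np.expm1(H * C / (wavelength_m * K * temp_k))

def band_radiance(temp_k, band=(3e-6, 5e-6), emissivity=1.0, samples=500):
    """Approximate in-band radiance by trapezoidal integration over the band."""
    lam = np.linspace(band[0], band[1], samples)
    return emissivity * np.trapz(spectral_radiance(lam, temp_k), lam)

# Example: a 10 K change in surface temperature noticeably shifts MWIR band radiance.
print(band_radiance(300.0), band_radiance(310.0))
```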

Symmetrical model based SLAM : M-SLAM (대칭모형 기반 SLAM : M-SLAM)

  • Oh, Jung-Suk;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.4
    • /
    • pp.463-468
    • /
    • 2010
  • A mobile robot performing a task in the region it is exploring does not know the location information of its surroundings. Simultaneous localization and mapping (SLAM) algorithms solve this localization and mapping problem in explored regions. Among the several SLAM algorithms, EKF (Extended Kalman Filter) based SLAM is the most widely used. The EKF is an optimal sensor fusion method that has long been used: the odometric error caused by an encoder can be compensated by the EKF, which fuses different types of sensor data with weights proportional to the uncertainty of each sensor. In many cases, EKF-based SLAM requires artificially installed features, which makes actual implementation difficult. Moreover, the computational complexity of the EKF increases as the number of features grows, and long operation times remain a weakness of SLAM. Therefore, this paper presents a symmetrical model based SLAM algorithm (called M-SLAM).
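
For reference, the sketch below shows the textbook (E)KF measurement update that performs this uncertainty-weighted fusion; it is generic EKF machinery, not the proposed M-SLAM algorithm.

```python
# Generic (E)KF measurement update: sensor data are fused with weights that are
# effectively inverse to their uncertainty (small R -> large Kalman gain).
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """x, P: state mean and covariance; z: measurement; h: function giving the
    predicted measurement h(x); H: measurement Jacobian; R: measurement noise covariance."""
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain: a confident sensor gets more weight
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```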

Design of High Speed Data Acquisition and Fusion System with STM32 Processor (STM32 프로세서를 이용한 고속 데이터 수집 및 융합 시스템 설계)

  • Lim, Joong-Soo
    • Journal of the Korea Convergence Society
    • /
    • v.7 no.1
    • /
    • pp.9-15
    • /
    • 2016
  • In this paper, we describe the design of a high-speed data acquisition system (DAS) built on a Cortex-M4 based STM32 processor. The system collects raw data from sensor devices on factory production lines and sends it to the servo computer in real time. It provides multiple functions through a universal asynchronous receiver/transmitter (UART), an analog-to-digital converter (ADC), a digital-to-analog converter (DAC), and general-purpose input/output (GPIO), all of which have been thoroughly tested for various real-time data acquisition and high-speed motor control tasks.
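
To give a feel for the receiving side of such a DAS, here is a minimal sketch of a host reading samples over the UART link with pyserial. The port name, baud rate, and 2-byte little-endian sample framing are hypothetical; the paper does not specify the wire format.

```python
# Sketch of the host side receiving samples from an STM32-based DAS over UART.
# Port, baud rate, and the 16-bit sample framing are assumptions for illustration.
import struct
import serial  # pyserial

def read_samples(port="/dev/ttyUSB0", baudrate=921600, n_samples=1000):
    samples = []
    with serial.Serial(port, baudrate, timeout=1.0) as ser:
        while len(samples) < n_samples:
            raw = ser.read(2)                      # one 12-bit ADC sample padded to 16 bits
            if len(raw) == 2:
                samples.append(struct.unpack("<H", raw)[0])
    return samples
```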

Time Synchronization Error and Calibration in Integrated GPS/INS Systems

  • Ding, Weidong;Wang, Jinling;Li, Yong;Mumford, Peter;Rizos, Chris
    • ETRI Journal
    • /
    • v.30 no.1
    • /
    • pp.59-67
    • /
    • 2008
  • The necessity for the precise time synchronization of measurement data from multiple sensors is widely recognized in the field of global positioning system/inertial navigation system (GPS/INS) integration. Having precise time synchronization is critical for achieving high data fusion performance. The limitations and advantages of various time synchronization scenarios and existing solutions are investigated in this paper. A criterion for evaluating synchronization accuracy requirements is derived on the basis of a comparison of the Kalman filter innovation series and the platform dynamics. An innovative time synchronization solution using a counter and two latching registers is proposed. The proposed solution has been implemented with off-the-shelf components and tested. The resolution and accuracy analysis shows that the proposed solution can achieve a time synchronization accuracy of 0.1 ms if INS can provide a hard-wired timing signal. A synchronization accuracy of 2 ms was achieved when the test system was used to synchronize a low-grade micro-electromechanical inertial measurement unit (IMU), which has only an RS-232 data output interface.
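
As a rough illustration of why the reported 0.1 ms and 2 ms figures matter, the first-order effect of a timing offset is an apparent position shift proportional to platform velocity. The sketch below is a back-of-envelope check only; the paper's actual criterion compares the Kalman filter innovation series against the platform dynamics.

```python
# First-order effect of a GPS/INS timing offset: delta_p ~ v * dt.
# This is a rough approximation, not the paper's derivation.
def position_error_from_sync_error(velocity_mps, sync_error_s):
    return velocity_mps * sync_error_s

# At 50 m/s platform speed, a 0.1 ms offset costs ~5 mm, while 2 ms costs ~10 cm.
for dt in (0.1e-3, 2e-3):
    print(dt, position_error_from_sync_error(50.0, dt))
```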


Technology Trends in the Geospatial Information Industry: Focusing on ETRI Research Projects (공간정보산업 기술동향 - ETRI 연구과제 중심으로)

  • Kim, Min-Su
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 2010.06a
    • /
    • pp.186-186
    • /
    • 2010
  • Recently, interest has been growing rapidly in technologies that serve 2D/3D map or aerial/satellite-imagery-based spatial information on the web and converge it with various other kinds of information to provide users with higher-level services. For example, Microsoft has been developing SenseWeb technology and running pilot projects to build a participatory sensor web environment; the SenseWeb system aims to link sensors around the world and make them available to users on the web. More recently, the SensorMap system was developed on top of SenseWeb to fuse sensing information with Bing Map spatial information and serve it to users, and various pilot projects have been carried out on the SenseWeb/SensorMap platform. Google links real-time sensing information to its existing Google Earth/Map services and also provides Google Fusion Tables, which links and fuses various users' information in table form. Oracle offers a CEP (Complex Event Processing) product that supports event processing, pattern analysis, and context awareness over continuously changing information, including sensing information. Nokia, treating every mobile phone as a mobile sensor node in addition to fixed sensor nodes, has steadily pursued technologies for collecting and analyzing various sensing information, including traffic information, from such mobile sensor nodes and serving it on a spatial-information basis. This presentation introduces the core technologies that enable such convergence services between spatial information and sensing information or other user information. First, it describes the u-GIS convergence engine, which provides convergence and analysis services for various spatial and sensing information; specifically, it covers GeoSensor data storage/management technology for the efficient collection, analysis, and management of sensing information, and real-time fusion analysis technology for sensing and spatial information. Second, it explains customized national land information provision technology for efficiently mashing up spatial information, sensing information, and other user information on the web and delivering it to users in 2D and 3D, including the mash-up engine technology that links these data and real-time 3D spatial information provision technology. Finally, it briefly introduces multi-sensor data processing technology for efficient image-based spatial information construction and outlines the additional technology development required beyond the core technologies described above.


On Issue and Outlook of wearable Computer based on Technology in Convergence (융합환경에서 웨어러블 컴퓨터 기술 중심의 시장 및 발전 방향에 관한 연구)

  • Lee, Seong-Hoon;Lee, Dong-Woo
    • Journal of the Korea Convergence Society
    • /
    • v.6 no.3
    • /
    • pp.73-78
    • /
    • 2015
  • In the information society, convergence refers to a service or new product that emerges from the fusion of unit technologies in the information and communication fields. The effects of convergence technologies and the associated social phenomena are visible across all areas of society, such as the economy, society, and culture. In this paper, we describe the wearable computer, a leading case of digital convergence. A wearable computer is a device that can literally be worn on the body to enhance the user's recognition and problem-solving abilities by processing various types of integrated information. Accordingly, we study the issues, market, and outlook of wearable computers in this paper.

Attention based Feature-Fusion Network for 3D Object Detection (3차원 객체 탐지를 위한 어텐션 기반 특징 융합 네트워크)

  • Sang-Hyun Ryoo;Dae-Yeol Kang;Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.2
    • /
    • pp.190-196
    • /
    • 2023
  • Recently, with the development of LIDAR technology that can measure the distance to objects, interest in LIDAR-based 3D object detection networks has been growing. Previous networks produce inaccurate localization results due to the spatial information lost during voxelization and downsampling. In this study, we propose an attention-based fusion method and a camera-LIDAR fusion system to obtain high-level features and high positional accuracy. First, by introducing an attention mechanism into Voxel-RCNN, a grid-based 3D object detection network, the multi-scale sparse 3D convolution features are effectively fused to improve 3D detection performance. Additionally, we propose a late-fusion mechanism that combines the outputs of the 3D and 2D object detection networks to remove false positives. Comparative experiments with existing algorithms are performed on the KITTI dataset, which is widely used in autonomous driving research. The proposed method improves performance in both 2D object detection on the bird's-eye view (BEV) and 3D object detection; in particular, precision for the car moderate class improves by about 0.54% compared to Voxel-RCNN.
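
To illustrate the general idea of the late-fusion step (discarding 3D detections that no 2D detection supports), here is a minimal sketch. The IoU threshold and the assumption that the 3D boxes have already been projected to image-plane rectangles are illustrative choices, not the paper's exact pipeline.

```python
# Simplified late fusion: keep a 3D detection only if its image-plane projection
# overlaps some 2D detection. Threshold and projection are illustrative assumptions.
def iou_2d(a, b):
    """Boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def late_fusion(projected_3d_boxes, boxes_2d, iou_thresh=0.5):
    kept = []
    for i, p in enumerate(projected_3d_boxes):
        if any(iou_2d(p, q) >= iou_thresh for q in boxes_2d):
            kept.append(i)      # supported by the 2D detector -> likely not a false positive
    return kept
```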

A Robust Depth Map Upsampling Against Camera Calibration Errors (카메라 보정 오류에 강건한 깊이맵 업샘플링 기술)

  • Kim, Jae-Kwang;Lee, Jae-Ho;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.6
    • /
    • pp.8-17
    • /
    • 2011
  • Recently, fusion camera systems that combine depth sensors and color cameras have been widely developed with the advent of a new type of sensor, the time-of-flight (TOF) depth sensor. Because of physical limitations, depth sensors usually produce low-resolution images compared to the corresponding color images. Therefore, a pre-processing module comprising camera calibration, three-dimensional warping, and hole filling is needed to generate a high-resolution depth map aligned with the image plane of the color image. However, the result of this pre-processing step is usually inaccurate due to errors in the camera calibration and the depth measurements. In this paper, we therefore present a depth map upsampling method that is robust to these errors. First, the confidence of each measured depth value is estimated from the interrelation between the color image and the pre-upsampled depth map. Then, a detailed depth map is generated by a modified kernel regression method that excludes depth values with low confidence. The proposed algorithm produces high-quality results even in the presence of camera calibration errors, and experimental comparisons with other data fusion techniques show its superiority.
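
A minimal sketch of confidence-weighted kernel regression of the kind described above follows. The Gaussian kernel forms, window size, and confidence threshold are assumptions for illustration, not the paper's modified kernel or parameters.

```python
# Confidence-weighted joint kernel regression for depth upsampling (illustrative only).
import numpy as np

def upsample_depth(sparse_depth, confidence, color_gray, radius=4,
                   sigma_s=2.0, sigma_c=10.0, conf_thresh=0.3):
    gray = color_gray.astype(np.float32)
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < h and 0 <= xx < w):
                        continue
                    d, c = sparse_depth[yy, xx], confidence[yy, xx]
                    if d <= 0 or c < conf_thresh:   # skip holes and low-confidence depths
                        continue
                    w_s = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))                # spatial kernel
                    w_c = np.exp(-((gray[y, x] - gray[yy, xx]) ** 2) / (2.0 * sigma_c ** 2))  # color kernel
                    num += w_s * w_c * c * d
                    den += w_s * w_c * c
            out[y, x] = num / den if den > 0 else 0.0
    return out
```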

Matching and Geometric Correction of Multi-Resolution Satellite SAR Images Using SURF Technique (SURF 기법을 활용한 위성 SAR 다중해상도 영상의 정합 및 기하보정)

  • Kim, Ah-Leum;Song, Jung-Hwan;Kang, Seo-Li;Lee, Woo-Kyung
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.4
    • /
    • pp.431-444
    • /
    • 2014
  • As applications of spaceborne SAR imagery expand, demand is growing for accurate registration to support better interpretation and fusion of radar images, and it has become common to use multi-resolution SAR images for wide-area reconnaissance. Geometric correction of SAR images can be performed using satellite orbit and attitude information, but inherent errors in the SAR sensor's attitude and in the ground geographical data tend to introduce geometric errors into the produced image. These errors must be corrected when SAR images are used for multi-temporal analysis, change detection, and fusion with images from other sensors; the undesirable ground registration errors can be corrected against true ground control points to produce complete SAR products. The Speeded Up Robust Features (SURF) technique is an efficient algorithm for extracting control points from images, but it is generally considered inappropriate for SAR images because of their strong speckle noise. In this paper, we attempt to apply the SURF algorithm to SAR images for registration and fusion. Matched points are extracted while varying the Hessian and SURF matching thresholds, and performance is analyzed by measuring image matching accuracy. A number of performance measures for image registration are suggested to validate the use of SURF for spaceborne SAR images. Various simulation methodologies are proposed to validate SURF for geometric correction and image registration, and it is shown that a careful choice of input parameters to the SURF algorithm is required for spaceborne SAR images of moderate resolution.
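
For readers who want to try SURF-based registration, the sketch below shows a generic OpenCV pipeline (SURF detection, ratio-test matching, RANSAC homography, warping). It assumes opencv-contrib built with the non-free modules; the threshold values and the absence of speckle pre-filtering are assumptions, not the settings evaluated in the paper.

```python
# Generic SURF-based image registration with OpenCV (requires opencv-contrib with
# non-free modules enabled). Thresholds are illustrative, not the paper's settings.
import cv2
import numpy as np

def register_with_surf(reference, target, hessian_threshold=400, ratio=0.7):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp1, des1 = surf.detectAndCompute(reference, None)
    kp2, des2 = surf.detectAndCompute(target, None)

    # Lowe's ratio test to keep only distinctive matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < ratio * n.distance]

    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robust transform estimation with RANSAC, then warp the target onto the reference grid.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(target, H, (w, h)), H
```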