• Title/Summary/Keyword: Infrared images


Far-infrared Study of Supernova Remnants in the Large Magellanic Cloud

  • Kim, Yesol;Koo, Bon-Chul;Seok, Ji Yeon
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.38 no.1
    • /
    • pp.53-53
    • /
    • 2013
  • We present preliminary results of a far-infrared (FIR) study of the supernova remnants (SNRs) in the Large Magellanic Cloud using the Herschel HERITAGE (HERschel Inventory of The Agents of Galaxy Evolution) data set. HERITAGE provides FIR data covering the entire LMC at 100, 160, 250, 350, and 500 um. In order to confirm FIR emission associated with SNRs, we refer to Magellanic Cloud Emission-Line Survey (MCELS) H-alpha & SII data, Spitzer Surveying the Agents of a Galaxy's Evolution (SAGE) Multiband Imaging Photometer (MIPS) 24 um & 70 um data, the Chandra Supernova Remnant Catalog, and the ATCA 4.8 GHz continuum images of Dickel et al. (2005). Among the 47 SNRs in the LMC, 7 show associated FIR emission. We present multi-wavelength views of 5 SNRs: DEM L249, N49, N63A, N132D, and the SNR in N4. N49 and N132D show a morphological correlation between FIR and X-ray, suggesting that the FIR emission is from dust grains collisionally heated by the X-ray emitting plasma. The FIR emission of N63A resembles the H-alpha emission, which implies that FIR line radiation could be dominant. The FIR images of the remaining two objects, DEM L249 and the SNR in N4, show no correlation with the other-waveband images.


Real-Time Automatic Tracking of Facial Feature (얼굴 특징 실시간 자동 추적)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.6
    • /
    • pp.1182-1187
    • /
    • 2004
  • Robust, real-time, fully automatic tracking of facial features is required for many computer vision and graphics applications. In this paper, we describe a fully automatic system that tracks eyes and eyebrows in real time. The pupils are tracked using the red-eye effect with an infrared-sensitive camera equipped with infrared LEDs. Templates are used to parameterize the facial features. For each new frame, the pupil coordinates are used to extract cropped images of the eyes and eyebrows. The template parameters are recovered by projecting these extracted images onto a PCA basis, which was constructed during the training phase from example images. The system runs at 30 fps and requires no manual initialization or calibration. The system is shown to work well on sequences with considerable head motion and occlusions.
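The PCA-based parameter recovery described above can be sketched in a few lines of NumPy. The basis size, crop dimensions, and random training crops below are illustrative stand-ins, not the authors' data:

```python
import numpy as np

def build_pca_basis(training_images, n_components=8):
    """Build a PCA basis from flattened example crops (training phase)."""
    X = np.stack([img.ravel() for img in training_images]).astype(float)
    mean = X.mean(axis=0)
    # SVD of the mean-centred data: rows of vt are the principal components.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def recover_parameters(crop, mean, basis):
    """Project a cropped eye/eyebrow image onto the PCA basis."""
    return basis @ (crop.ravel().astype(float) - mean)

# Illustrative usage with random 16x32 "eye crops".
rng = np.random.default_rng(0)
train = [rng.random((16, 32)) for _ in range(20)]
mean, basis = build_pca_basis(train, n_components=5)
params = recover_parameters(train[0], mean, basis)
```

At run time only the projection is needed, which is why such a tracker can stay well within a 30 fps budget.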

TSDnet: Three-scale Dense Network for Infrared and Visible Image Fusion (TSDnet: 적외선과 가시광선 이미지 융합을 위한 규모-3 밀도망)

  • Zhang, Yingmei;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.656-658
    • /
    • 2022
  • The purpose of infrared and visible image fusion is to integrate images of different modalities, with different details, into a result image with rich information that is convenient for high-level computer vision tasks. Considering that many deep networks work only at a single scale, this paper proposes a novel image fusion method based on a three-scale dense network to preserve the content and key target features of the input images in the fused image. It comprises an encoder, a three-scale block, a fusion strategy, and a decoder, and can capture incredibly rich background details and prominent target details. The encoder is used to extract three-scale dense features from the source images for the initial image fusion. Then, an l1-norm fusion strategy is used to fuse features of different scales. Finally, the fused image is reconstructed by the decoding network. Compared with existing methods, the proposed method achieves state-of-the-art fusion performance in subjective observation.
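The abstract does not spell out the l1-norm fusion strategy; one common form weights each source's feature maps by their per-pixel l1 activity across channels. This is a generic sketch of that rule, not the authors' exact implementation, and the shapes and data are illustrative:

```python
import numpy as np

def l1_fusion(feat_ir, feat_vis, eps=1e-8):
    """Fuse two feature tensors (C, H, W) by l1-norm activity weighting."""
    # Activity map: l1-norm over channels at every spatial position.
    a_ir = np.abs(feat_ir).sum(axis=0)
    a_vis = np.abs(feat_vis).sum(axis=0)
    w_ir = a_ir / (a_ir + a_vis + eps)      # soft weight in [0, 1]
    # Convex combination: positions where the IR features are more
    # active contribute more of the IR content, and vice versa.
    return w_ir * feat_ir + (1.0 - w_ir) * feat_vis

rng = np.random.default_rng(1)
ir, vis = rng.random((4, 8, 8)), rng.random((4, 8, 8))
fused = l1_fusion(ir, vis)
```

Because the weights sum to one, every fused value stays between the two source values at that position, which keeps the result radiometrically plausible.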

Test of Fault Detection to Solar-Light Module Using UAV Based Thermal Infrared Camera (UAV 기반 열적외선 카메라를 이용한 태양광 모듈 고장진단 실험)

  • LEE, Geun-Sang;LEE, Jong-Jo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.19 no.4
    • /
    • pp.106-117
    • /
    • 2016
  • Recently, solar power plants have spread widely as part of the transition to greater environmental protection and renewable energy. Therefore, regular solar plant inspection is necessary to manage solar-light modules efficiently. This study implemented a test that can detect solar-light module faults using a UAV-based thermal infrared camera and GIS spatial analysis. First, images were taken using a fixed UAV and an RGB camera, and orthomosaic images were created using the Pix4D software. We constructed a solar-light module layer from the orthomosaic images and inputted the module layer code. Rubber covers were installed on the solar-light modules for the fault-detection test. The mean temperature of each solar-light module can be calculated using the ZonalMean function, based on the temperature information from the UAV thermal camera and the solar-light module layer. Finally, the locations of solar-light modules hotter than 37°C, and those with rubber covers, can be extracted automatically using GIS spatial analysis and analyzed specifically using each module's identifying code.
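The per-module zonal mean described above (a GIS zonal-statistics operation) can be mimicked on a toy raster. The 37 °C threshold comes from the abstract; the zone ids and temperature values below are illustrative:

```python
import numpy as np

def zonal_mean(temperature, zones):
    """Mean temperature per zone id; mirrors a GIS ZonalMean operation."""
    result = {}
    for z in np.unique(zones):
        if z == 0:                      # 0 = background, no module there
            continue
        result[int(z)] = float(temperature[zones == z].mean())
    return result

# Toy 4x4 thermal raster with two module zones (ids 1 and 2).
temp = np.array([[30., 30., 40., 40.],
                 [30., 30., 40., 40.],
                 [0.,  0.,  0.,  0.],
                 [0.,  0.,  0.,  0.]])
zone = np.array([[1, 1, 2, 2],
                 [1, 1, 2, 2],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
means = zonal_mean(temp, zone)
hot = [z for z, m in means.items() if m > 37.0]   # modules above 37 °C
```

Averaging over the whole module polygon, rather than thresholding single pixels, suppresses spurious hot pixels from reflections or sensor noise.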

Monocular Vision Based Localization System using Hybrid Features from Ceiling Images for Robot Navigation in an Indoor Environment (실내 환경에서의 로봇 자율주행을 위한 천장영상으로부터의 이종 특징점을 이용한 단일비전 기반 자기 위치 추정 시스템)

  • Kang, Jung-Won;Bang, Seok-Won;Atkeson, Christopher G.;Hong, Young-Jin;Suh, Jin-Ho;Lee, Jung-Woo;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.6 no.3
    • /
    • pp.197-209
    • /
    • 2011
  • This paper presents a localization system using ceiling images in a large indoor environment. For a system of low cost and complexity, we propose a single-camera-based system that utilizes ceiling images acquired from a camera installed to point upwards. For reliable operation, we propose a method using hybrid features, which include natural landmarks in a natural scene and artificial landmarks observable in the infrared domain. Compared with previous works utilizing only infrared-based features, our method reduces the required number of artificial features, as we exploit both natural and artificial features. In addition, compared with previous works using only the natural scene, our method has an advantage in convergence speed and robustness, as an observation of an artificial feature provides a crucial clue for robot pose estimation. In an experiment with challenging situations in a real environment, our method performed impressively in terms of robustness and accuracy. To our knowledge, our method is the first ceiling-vision-based localization method using features from both the visible and infrared domains. Our system can be easily utilized in a variety of service robot applications in a large indoor environment.

A Study on PWM Control of Near-Infrared Fluorescence Imaging System (근적외선 형광 영상시스템의 PWM 제어에 관한 연구)

  • Lee, Byeong-Ho;Pan, Sung Bum
    • The Journal of Korean Institute of Information Technology
    • /
    • v.16 no.11
    • /
    • pp.115-121
    • /
    • 2018
  • Fluorescence imaging using near-infrared light raises no radioactivity concerns, and the images can be checked in real time during surgery; therefore, experiments using fluorescence imaging to monitor lymph node biopsies are actively underway. Fluorescence imaging equipment uses high-heat-generating components such as LEDs and cameras, and thus employs a water-cooling system as a stable means of heat suppression. However, the water-cooling system occupies a large volume, which is a disadvantage for miniaturizing the equipment. Even if an air-cooling system is used to miniaturize the equipment, heat generation remains a problem. In this paper, we experimented with an air-cooling method using PWM control for the miniaturization of the equipment, and confirmed consistent fluorescence image quality and suppressed heat generation, without problems even when the equipment is used for a long time.
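The abstract does not give the control law, but a minimal sketch of temperature-driven PWM fan control might map the measured temperature linearly to a duty cycle. The thresholds below are assumptions for illustration, not the paper's values:

```python
def pwm_duty(temp_c, t_low=30.0, t_high=60.0):
    """Map a measured LED/camera temperature to a fan PWM duty cycle.

    Linear ramp: fan off at or below t_low, full speed at or above
    t_high. Both thresholds are illustrative assumptions.
    """
    frac = (temp_c - t_low) / (t_high - t_low)
    return min(1.0, max(0.0, frac))      # clamp duty cycle to [0, 1]
```

Running the fan only as fast as the heat load requires is what lets an air-cooled design stay small and quiet while still holding image quality steady over long procedures.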

Infrared and visible image fusion based on Laplacian pyramid and generative adversarial network

  • Wang, Juan;Ke, Cong;Wu, Minghu;Liu, Min;Zeng, Chunyan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.5
    • /
    • pp.1761-1777
    • /
    • 2021
  • An image with infrared features and visible details is obtained by processing infrared and visible images. In this paper, a fusion method based on a Laplacian pyramid and a generative adversarial network, termed Laplacian-GAN, is proposed to obtain high-quality fused images. Firstly, base and detail layers are obtained by decomposing the source images. Secondly, we utilize the Laplacian-pyramid-based method to fuse the base layers and retain more of their information. Thirdly, the detail part is fused by a generative adversarial network; in addition, the generative adversarial network avoids manually designing complicated fusion rules. Finally, the fused base layer and fused detail layer are combined to reconstruct the fused image. Experimental results demonstrate that the proposed method achieves state-of-the-art fusion performance in both visual quality and objective assessment. In terms of visual observation, the fused image obtained by the Laplacian-GAN algorithm is clearer in detail. At the same time, on the six metrics MI, AG, EI, MS_SSIM, Qabf, and SCD, the proposed algorithm improves on the best of the other three algorithms by 0.62%, 7.10%, 14.53%, 12.18%, 34.33%, and 12.23%, respectively.
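The Laplacian-pyramid half of the pipeline can be sketched generically: decompose each base layer into band-pass levels, fuse level by level, and reconstruct. The max-magnitude fusion rule and the average-pool/nearest-neighbour resampling below are common stand-ins, not the paper's exact choices (and the detail layers, fused here the same way, go through a GAN in the paper):

```python
import numpy as np

def downsample(img):
    """2x average-pool (stand-in for Gaussian blur + decimation)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbour 2x upsample."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels=3):
    """Band-pass levels plus a low-resolution residual."""
    pyr, cur = [], img
    for _ in range(levels):
        small = downsample(cur)
        pyr.append(cur - upsample(small))   # detail the downsample lost
        cur = small
    pyr.append(cur)                         # coarse residual
    return pyr

def fuse_pyramids(p1, p2):
    """Max-magnitude selection per level — one common fusion rule."""
    return [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(p1, p2)]

def reconstruct(pyr):
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = upsample(cur) + band
    return cur

rng = np.random.default_rng(4)
img = rng.random((16, 16))
# Fusing a pyramid with itself must give back the original image exactly.
rec = reconstruct(fuse_pyramids(laplacian_pyramid(img), laplacian_pyramid(img)))
```

The decomposition is exactly invertible, so any information loss in the fused result comes only from the per-level fusion rule, never from the pyramid itself.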

Object-based Compression of Thermal Infrared Images for Machine Vision (머신 비전을 위한 열 적외선 영상의 객체 기반 압축 기법)

  • Lee, Yegi;Kim, Shin;Lim, Hanshin;Choo, Hyon-Gon;Cheong, Won-Sik;Seo, Jeongil;Yoon, Kyoungro
    • Journal of Broadcast Engineering
    • /
    • v.26 no.6
    • /
    • pp.738-747
    • /
    • 2021
  • Today, with improvements in deep learning technology, computer vision areas such as image classification, object detection, object segmentation, and object tracking have shown remarkable progress. Various applications combining deep learning, such as intelligent surveillance, robots, the Internet of Things, and autonomous vehicles, are being deployed in industry. Accordingly, an efficient compression method for video data is necessary for machine consumption as well as for human consumption. In this paper, we propose object-based compression of thermal infrared images for machine vision. The input image is divided into object and background parts based on the object detection results, to achieve efficient image compression and high neural network performance. The separated images are encoded at different compression ratios. The experimental results show that the proposed method has superior compression efficiency, with a maximum BD-rate gain of -19.83% compared with whole-image compression using VVC.
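The object/background split with different compression strengths can be illustrated with a toy uniform quantizer standing in for a real VVC encode; the step sizes and the mask below are assumptions for illustration:

```python
import numpy as np

def quantize(img, step):
    """Toy uniform quantizer, a stand-in for a VVC encode at some QP."""
    return np.round(img / step) * step

def object_based_compress(img, mask, step_obj=2.0, step_bg=16.0):
    """Encode object pixels finely and background coarsely, then merge."""
    return np.where(mask, quantize(img, step_obj), quantize(img, step_bg))

rng = np.random.default_rng(2)
frame = rng.random((8, 8)) * 255
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                      # detected-object region
out = object_based_compress(frame, mask)
```

Spending bits where the downstream network looks (the object region) and starving the background is what yields the BD-rate gain over compressing the whole frame uniformly.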

A catalog of infrared supernova remnants in the Large Magellanic Cloud

  • Seok, Ji-Yeon;Koo, Bon-Chul
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.36 no.2
    • /
    • pp.104.1-104.1
    • /
    • 2011
  • We present a catalog of infrared supernova remnants (SNRs) in the Large Magellanic Cloud (LMC). We have searched the Spitzer archival data for infrared counterparts to all 45 known SNRs in the LMC and identified 21, i.e., 47% of the known SNRs. Seven of them are newly detected: SNR 0450-70.9, the SNR in N4, N103B, DEM L241, DEM L249, DEM L316A, and DEM L316B. All newly detected SNRs show emission in several of the IRAC 3.6, 4.5, 5.8, and 8.0 micron bands and/or the MIPS 24 and 70 micron bands. Most SNRs show shell structures. We derive the infrared fluxes of these newly detected SNRs. The catalog contains general information on each SNR, such as location, age, and SN type, together with AKARI and/or Spitzer fluxes. For the entire SNR sample, we examine the infrared colors and the possible correlation of the infrared fluxes with fluxes at other wavelengths. For the newly detected SNRs, except the SNR in N4, we also performed follow-up imaging observations of the [Fe II] 1.644 micron line using IRIS2 mounted on the Anglo-Australian Telescope. Three out of six SNRs show [Fe II] emission corresponding to their infrared shells. [Fe II] knots are also detected in N103B, showing good spatial correlation with the infrared emission seen in the Spitzer images as well as with knotty Hα emission. We investigate the characteristics and origin of the infrared emission in individual SNRs, and discuss environmental and evolutionary effects.


AKARI OBSERVATION OF THE FLUCTUATION OF THE NEAR-INFRARED BACKGROUND

  • Matsumoto, T.;Seo, H.J.;Jeong, W.S.;Lee, H.M.;Matsuura, S.;Matsuhara, H.;Oyabu, S.;Pyo, J.;Wada, T.
    • Publications of The Korean Astronomical Society
    • /
    • v.27 no.4
    • /
    • pp.363-365
    • /
    • 2012
  • We report a search for fluctuations of the sky brightness toward the North Ecliptic Pole with AKARI, at 2.4, 3.2, and 4.1 μm. The stacked images, with a diameter of 10 arcminutes, of the AKARI Monitor Field show spatial structure on the scale of a few hundred arcseconds. A power spectrum analysis shows a significant excess fluctuation at angular scales larger than 100 arcseconds that cannot be explained by zodiacal light, diffuse Galactic light, shot noise of faint galaxies, or clustering of low-redshift galaxies. These findings indicate that the detected fluctuation could be attributed to the first stars of the universe, i.e., Population III stars.
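The kind of power spectrum analysis mentioned above, azimuthally averaging the 2D power spectrum of a mean-subtracted sky map, can be sketched generically; the random map and bin count below are illustrative, not the AKARI data or pipeline:

```python
import numpy as np

def angular_power_spectrum(image, n_bins=8):
    """Azimuthally averaged 2D power spectrum of a sky-brightness map."""
    img = image - image.mean()               # fluctuation about the mean
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)     # radial spatial frequency
    bins = np.linspace(0.0, r.max() + 1e-9, n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1   # radial bin of each mode
    spec = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return spec / np.maximum(counts, 1)      # mean power per radial bin

rng = np.random.default_rng(3)
sky = rng.random((32, 32))
spec = angular_power_spectrum(sky)
```

Comparing such a binned spectrum against the spectra expected for zodiacal light, Galactic light, and faint-galaxy shot noise is how an excess at large angular scales is isolated.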