• Title/Summary/Keyword: infrared images


Near-infrared Polarimetric Study of N159/N160 Star Forming Regions in the Large Magellanic Cloud

  • Kim, Jaeyeong;Jeong, Woong-Seob;Pak, Soojong;Pyo, Jeonghyun;Tamura, Motohide
    • The Bulletin of The Korean Astronomical Society / v.41 no.1 / pp.67.1-67.1 / 2016
  • We observed two star-forming regions, N159 and N160, in the Large Magellanic Cloud with SIRPOL, the polarimeter of the Infrared Survey Facility (IRSF) in South Africa. The photometric and polarimetric observations were carried out in three near-infrared bands, J, H, and Ks. We measured the Stokes parameters of point sources and calculated their degrees of polarization and polarization angles. The polarization vector map shows complex features associated with dust and gas structures. The overall magnetic field features of the N159 and N160 regions differ from each other and appear to be related to local environments, such as the interior and boundary of a shell structure, the presence of star-forming HII regions, and the boundaries between HII regions and dense dark clouds. We discuss the relation between the magnetic field structure and the local properties of dust and gas in the N159 and N160 regions by comparing our polarization vector map with images of $H{\alpha}$, mid-infrared, and $^{12}CO$ emission, taken with the WFI of the MPG/ESO telescope, Spitzer IRAC, and NANTEN, respectively.

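As a rough illustration of the Stokes-parameter step mentioned in this abstract (a minimal sketch of the standard relations, not the authors' SIRPOL pipeline; the input arrays are placeholders):

```python
import numpy as np

def polarization_from_stokes(I, Q, U, sigma_P=None):
    """Degree of polarization and polarization angle from Stokes I, Q, U.

    A minimal sketch of the textbook relations; not the paper's pipeline.
    sigma_P: optional uncertainty on P used for a simple positive-bias correction.
    """
    P = np.hypot(Q, U) / I                        # fractional degree of polarization
    if sigma_P is not None:                       # crude debiasing: P_db = sqrt(P^2 - sigma^2)
        P = np.sqrt(np.clip(P**2 - sigma_P**2, 0.0, None))
    theta = 0.5 * np.degrees(np.arctan2(U, Q))    # polarization angle in degrees
    return P, theta

# Made-up Stokes values for three point sources
I = np.array([1.00, 0.80, 1.20])
Q = np.array([0.02, -0.01, 0.03])
U = np.array([0.01, 0.02, -0.02])
P, theta = polarization_from_stokes(I, Q, U)
print(P, theta)
```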

The Development of a Real-Time Hand Gestures Recognition System Using Infrared Images (적외선 영상을 이용한 실시간 손동작 인식 장치 개발)

  • Ji, Seong Cheol;Kang, Sun Woo;Kim, Joon Seek;Joo, Hyonam
    • Journal of Institute of Control, Robotics and Systems / v.21 no.12 / pp.1100-1108 / 2015
  • A camera-based real-time hand posture and gesture recognition system is proposed for controlling various devices inside automobiles. It uses an imaging system composed of a camera with a proper filter and an infrared lighting device to acquire images of hand-motion sequences. Several pre-processing steps are applied, followed by a background normalization process, before segmenting the hand from the background. The hand posture is determined by first separating the fingers from the main body of the hand and then finding the relative positions of the fingers with respect to the center of the hand. The beginning and end of the hand motion in the sequence of acquired images are detected using pre-defined motion rules to start the hand gesture recognition. A set of carefully designed features is computed and extracted from the raw sequence and fed into a decision-tree-like rule for determining the hand gesture. Many experiments are performed to verify the system. In this paper, we show performance results from tests on 550 sequences of hand-motion images collected from five different individuals, to cover the variation among users of the system in a real-time environment. Among them, 539 sequences are correctly recognized, showing a recognition rate of 98%.
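
As a generic sketch of the kind of IR hand segmentation and finger counting described above (not the paper's algorithm; the file name and defect-depth threshold are assumptions):

```python
import cv2

# Generic IR hand segmentation and fingertip counting; not the paper's method.
# "hand_ir.png" and the depth threshold are assumptions.
img = cv2.imread("hand_ir.png", cv2.IMREAD_GRAYSCALE)
img = cv2.GaussianBlur(img, (5, 5), 0)                         # simple pre-processing
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)                      # largest blob = hand

M = cv2.moments(hand)
cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]              # hand center

hull = cv2.convexHull(hand, returnPoints=False)
defects = cv2.convexityDefects(hand, hull)                     # valleys between fingers
n_gaps = 0
if defects is not None:
    for start, end, far, depth in defects[:, 0]:
        if depth / 256.0 > 20:                                  # deep defect ~ finger gap
            n_gaps += 1
print("hand center:", (cx, cy), "estimated fingers:", n_gaps + 1)
```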

APPLICATION OF REMOTE SENSING IMAGERY ON THE ESTIMATE OF EVAPOTRANSPIRATION OVER PADDY FIELD

  • Chang, Tzu-Yin;Chien, Tzu-Chieh;Liou, Yuei-An
    • Proceedings of the KSRS Conference / v.2 / pp.752-755 / 2006
  • Evapotranspiration is an important factor in the hydrologic cycle. Traditionally, it is measured using basin methods or empirical formulas with meteorological data, which do not represent evapotranspiration over a regional area. With the advent of improved remote sensing technology, it has become a surface parameter of research interest in the field of remote sensing. Airborne and satellite imagery are utilized in this study. The high-resolution airborne images include visible, near-infrared, and thermal infrared bands, and the satellite images are acquired by MODIS. Surface heat fluxes such as latent heat flux and sensible heat flux are estimated using the airborne and satellite images together with surface meteorological measurements. We develop a new method to estimate the evapotranspiration over the rice paddy. The surface heat fluxes are initialized with a surface energy balance concept and iterated to a convergent solution with atmospheric correction functions associated with the aerodynamic resistance of heat transport. Furthermore, we redistribute the total net energy into sensible and latent heat fluxes. The results reveal that radiation-controlled and evaporation-controlled extremes can be properly identified with both airborne and satellite images. The correlation coefficients of latent heat flux and sensible heat flux with the corresponding in situ observations are 0.66 and 0.76, respectively. The relative root mean squared errors (RMSEs) for latent heat flux and sensible heat flux are 97.81 $(W/m^2)$ and 124.33 $(W/m^2)$, respectively. It is also shown that the newly developed retrieval scheme performs well when tested with MODIS data.

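The retrieval described above rests on the surface energy balance, in which latent heat flux is commonly obtained as the residual LE = Rn - G - H, with sensible heat computed from the surface-air temperature difference and an aerodynamic resistance. A minimal sketch of that residual formulation (not the authors' iterative scheme; all numbers below are illustrative) is:

```python
# Surface-energy-balance residual sketch for latent heat flux;
# not the paper's iterative retrieval. All values are illustrative.
RHO_AIR = 1.2    # air density (kg m^-3)
CP_AIR = 1004.0  # specific heat of air (J kg^-1 K^-1)

def sensible_heat(ts_k, ta_k, r_ah):
    """H = rho * cp * (Ts - Ta) / r_ah, with aerodynamic resistance r_ah (s m^-1)."""
    return RHO_AIR * CP_AIR * (ts_k - ta_k) / r_ah

def latent_heat_residual(rn, g, h):
    """LE = Rn - G - H (all fluxes in W m^-2)."""
    return rn - g - h

h = sensible_heat(ts_k=303.0, ta_k=299.0, r_ah=60.0)   # surface/air temperature, resistance
le = latent_heat_residual(rn=550.0, g=55.0, h=h)
print(f"H = {h:.1f} W/m^2, LE = {le:.1f} W/m^2")
```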

The Operational Procedure on Estimating Typhoon Center Intensity using Meteorological Satellite Images in KMA

  • Park, Jeong-Hyun;Park, Jong-Seo;Kim, Baek-Min;Suh, Ae-Sook
    • Proceedings of the KSRS Conference / v.1 / pp.278-281 / 2006
  • The Korea Meteorological Administration (KMA) issues tropical storm (typhoon) warnings or advisories when a tropical depression develops into a tropical storm and the typhoon is expected to influence the Korean peninsula and adjacent seas. Typhoon information includes the current typhoon position and intensity. Since 2001, KMA has used the Dvorak technique to analyze the typhoon center and its intensity using available geostationary satellite images such as GMS, GOES-9, and MTSAT-1R. The Dvorak technique is so subjective that the analysis results can vary between analysts. To reduce these subjective errors, QuikSCAT sea-wind data have been used together with various analysis data, including sea surface temperature from geostationary meteorological satellites, polar-orbiting satellites, and other observations. On the other hand, there is an advantage to using the Subjective Dvorak Technique (SDT): the SDT can derive the intensity and center of a typhoon using only the infrared images of geostationary meteorological satellites. However, the SDT has been of limited operational use because of the lack of observations and information from polar-orbiting satellites such as SSM/I. Therefore, KMA has established the Advanced Objective Dvorak Technique (AODT) system developed by UW/CIMSS (University of Wisconsin-Madison/Cooperative Institute for Meteorological Satellite Studies) to improve the current typhoon analysis technique, and its performance has been tested since 2005. We have developed statistical relationships to correct AODT CI numbers against the SDT CI numbers, which were presumed to be the truth for typhoons that occurred in the northwestern Pacific Ocean, using linear and nonlinear regressions and neural network principal component analysis. In conclusion, the neural network nonlinear principal component analysis fit the SDT best, showing a Root Mean Square Error (RMSE) of 0.42 and a coefficient of determination ($R^2$) of 0.91 for MTSAT-1R satellite images of 2005. KMA has operated typhoon intensity analysis using the SDT and AODT since 2006 and continues to refine the correction of CI numbers.

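The correction step described above maps objective AODT CI numbers onto subjective SDT CI numbers treated as truth. A minimal sketch of the simplest (linear regression) variant of such a correction, with made-up CI values (the nonlinear and neural-network versions from the paper are not reproduced here), could look like this:

```python
import numpy as np

# Illustrative linear-regression correction of AODT CI numbers toward SDT CI numbers.
# A sketch of the simplest variant only; the CI values below are made up.
aodt_ci = np.array([2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0])
sdt_ci  = np.array([2.8, 3.2, 3.8, 4.1, 4.8, 5.2, 5.9, 6.3])

slope, intercept = np.polyfit(aodt_ci, sdt_ci, 1)   # fit SDT ~ a*AODT + b
corrected = slope * aodt_ci + intercept

rmse = np.sqrt(np.mean((corrected - sdt_ci) ** 2))
ss_res = np.sum((sdt_ci - corrected) ** 2)
ss_tot = np.sum((sdt_ci - sdt_ci.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"corrected CI = {slope:.2f}*AODT + {intercept:.2f}, RMSE = {rmse:.2f}, R^2 = {r2:.2f}")
```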

Characteristics of Satellite Brightness Temperature and Rainfall Intensity over the Life Cycle of Convective Cells-Case Study (대류 세포의 발달 단계별 위성 휘도온도와 강우강도의 특성-사례연구)

  • Kim, Deok Rae;Kwon, Tae Yong
    • Atmosphere / v.21 no.3 / pp.273-284 / 2011
  • This study investigates the characteristics of satellite brightness temperature (TB) and rainfall intensity over the life cycle of convective cells. The convective cells in three event cases are detected and tracked from the growth stage to the dissipation stage using half-hourly infrared (IR) images. For each IR image, the minimum, mean, and variance of the convective cell's TB and the cell size are calculated, and the relationship between TB and rainfall intensity is investigated using the pixel values of satellite TB and the ground rainfall intensity measured by AWS (Automatic Weather Stations). At the growth stage of the convective cells, the TB variance and cloud size consistently increased, whereas the TB minimum and mean consistently decreased. At this stage the empirical relationships between TB and rainfall intensity are statistically significant, and their slopes (intercepts) are relatively large (small) in absolute value compared to those at the dissipation stage. At the dissipation stage, the variability of the TB distributions shows the opposite trend. The statistical significance of the empirical relationships is relatively weak, and their slopes (intercepts) vary over the life cycle. These results indicate that satellite IR images can provide valuable information for identifying the maturity stage of convective cells and, at the growth stage, may be used to provide considerably accurate rainfall estimates.
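
A minimal sketch of the per-cell TB statistics and the empirical TB-rainfall regression described above (generic, with random placeholder arrays; not the study's processing chain):

```python
import numpy as np

# Generic sketch: per-cell brightness-temperature statistics and a linear TB-rain fit.
# The arrays are random placeholders, not the study's data.
rng = np.random.default_rng(0)
cell_tb = rng.normal(215.0, 8.0, size=400)          # TB pixels (K) inside one tracked cell
tb_at_aws = cell_tb[:50]                             # TB pixels collocated with AWS gauges
rain = np.clip(60.0 - 0.25 * tb_at_aws + rng.normal(0, 1, 50), 0, None)  # AWS rain (mm/h)

stats = {
    "tb_min": cell_tb.min(),
    "tb_mean": cell_tb.mean(),
    "tb_var": cell_tb.var(),
    "cell_size_px": cell_tb.size,
}
slope, intercept = np.polyfit(tb_at_aws, rain, 1)    # empirical TB-rainfall relation
print(stats)
print(f"rain ~ {slope:.3f} * TB + {intercept:.1f}")
```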

The Target Detection and Classification Method Using SURF Feature Points and Image Displacement in Infrared Images (적외선 영상에서 변위추정 및 SURF 특징을 이용한 표적 탐지 분류 기법)

  • Kim, Jae-Hyup;Choi, Bong-Joon;Chun, Seung-Woo;Lee, Jong-Min;Moon, Young-Shik
    • Journal of the Korea Society of Computer and Information / v.19 no.11 / pp.43-52 / 2014
  • In this paper, we propose a target detection method using image displacement and a classification method using SURF (Speeded Up Robust Features) feature points and BAS (Beam Angle Statistics) in infrared images. The SURF method, a typical correspondence matching method in image processing, has been widely used because it is significantly faster than the SIFT (Scale Invariant Feature Transform) method while producing similar performance. Most SURF-based object recognition methods consist of a feature point extraction and matching process. The proposed method detects the target area using the estimated displacement and classifies the target using the geometry of the SURF feature points. The proposed method was applied to an unmanned target detection/recognition system. In experiments on virtual and real images, the method achieves approximately 73~85% classification performance.
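
As a generic illustration of the SURF extraction-and-matching step mentioned above (not the paper's detection/classification pipeline), OpenCV can be used; SURF lives in opencv-contrib and may require a build with non-free algorithms enabled, so ORB serves as a fallback here. The file names are assumptions.

```python
import cv2

# Generic SURF keypoint extraction and matching sketch; not the paper's pipeline.
img1 = cv2.imread("target_template.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)

try:
    detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib (non-free)
    norm = cv2.NORM_L2
except (AttributeError, cv2.error):
    detector = cv2.ORB_create(nfeatures=1000)                     # fallback detector
    norm = cv2.NORM_HAMMING

kp1, des1 = detector.detectAndCompute(img1, None)
kp2, des2 = detector.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(norm)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m[0] for m in matches
        if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # Lowe's ratio test
print(f"{len(good)} good matches out of {len(matches)}")
```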

Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion (적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지)

  • Cho, Eun Ji;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.635-644 / 2020
  • Object recognition, detection, and instance segmentation based on DL (Deep Learning) are being used in various applications, and mainly optical images are used as training data for DL models. The major objective of this paper is object segmentation and building detection by utilizing multimodal datasets as well as optical images for training the Detectron2 model, which is one of the improved R-CNN (Region-based Convolutional Neural Network) frameworks. For the implementation, infrared aerial images, LiDAR (Light Detection And Ranging) data, edges extracted from the images, and Haralick features, which represent statistical texture information derived from the LiDAR data, were generated. The performance of DL models depends not only on the amount and characteristics of the training data, but also on the fusion method, especially for multimodal data. Segmenting objects and detecting buildings with hybrid fusion, a combination of early fusion and late fusion, results in a 32.65% improvement in building detection rate compared to training with optical images only. The experiments demonstrate the complementary effect of training on multimodal data with unique characteristics and of the fusion strategy.
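
As a minimal illustration of the early-fusion side of the hybrid strategy described above (channel stacking only; not the authors' Detectron2 training setup, and the array names and shapes are hypothetical):

```python
import numpy as np

# Early-fusion sketch: stack co-registered modality rasters into one multi-channel input.
# Not the paper's Detectron2 configuration; shapes and array names are hypothetical.
H, W = 512, 512
infrared = np.random.rand(H, W).astype(np.float32)    # IR aerial image (normalized)
ndsm     = np.random.rand(H, W).astype(np.float32)    # LiDAR-derived height raster
edges    = np.random.rand(H, W).astype(np.float32)    # edge map extracted from the image
haralick = np.random.rand(H, W).astype(np.float32)    # one Haralick texture feature

def early_fusion(*channels):
    """Stack pixel-aligned modality rasters along a channel axis, giving (C, H, W)."""
    return np.stack(channels, axis=0)

fused = early_fusion(infrared, ndsm, edges, haralick)
print(fused.shape)   # (4, 512, 512) -> fed to a detector whose first layer accepts 4 channels
```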

RNCC-based Fine Co-registration of Multi-temporal RapidEye Satellite Imagery (RNCC 기반 다시기 RapidEye 위성영상의 정밀 상호좌표등록)

  • Han, Youkyung;Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.6 / pp.581-588 / 2018
  • The aim of this study is to propose a fine co-registration approach for multi-temporal satellite images acquired from RapidEye, which has the advantage of availability for time-series analysis. To this end, we generate multi-temporal ortho-rectified images using the RPCs (Rational Polynomial Coefficients) provided with the RapidEye images and then perform fine co-registration between the ortho-rectified images. A DEM (Digital Elevation Model) extracted from the digital map was used to generate the ortho-rectified images, and RNCC (Registration Noise Cross Correlation) was applied to conduct the fine co-registration. Experiments were carried out using four RapidEye 1B images acquired from May 2015 to November 2016 over the Yeonggwang area. All five bands provided by RapidEye (blue, green, red, red edge, and near-infrared) were used in the fine co-registration to demonstrate their applicability. Experimental results showed that all bands of the RapidEye images could be co-registered with each other and that the geometric alignment between images was qualitatively and quantitatively improved. In particular, stable registration results were obtained using the red and red edge bands, irrespective of seasonal differences in image acquisition.
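
As a simplified stand-in for the correlation-based fine co-registration step described above (plain normalized cross correlation over image patches, not the RNCC method itself; the band arrays are synthetic), the shift between two ortho-rectified bands could be estimated like this:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two equally sized patches."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def estimate_shift(ref, tgt, max_shift=5):
    """Brute-force integer-pixel shift maximizing NCC; a simplified stand-in for RNCC."""
    h, w = ref.shape
    best = (-2.0, (0, 0))
    a = ref[max_shift:h - max_shift, max_shift:w - max_shift]
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            b = tgt[max_shift + dy:h - max_shift + dy, max_shift + dx:w - max_shift + dx]
            score = ncc(a, b)
            if score > best[0]:
                best = (score, (dy, dx))
    return best

# Synthetic demo: the "target" band is the reference shifted by (2, -1) pixels.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
tgt = np.roll(ref, shift=(2, -1), axis=(0, 1))
score, (dy, dx) = estimate_shift(ref, tgt)
print(f"estimated shift dy={dy}, dx={dx}, NCC={score:.3f}")
```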

A Development of the Next-generation Interface System Based on the Finger Gesture Recognizing in Use of Image Process Techniques (영상처리를 이용한 지화인식 기반의 차세대 인터페이스 시스템 개발)

  • Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.4 / pp.935-942 / 2011
  • This study aims to design and implement a finger gesture recognition system that automatically recognizes finger gestures input through a camera and controls the computer. Common CCD cameras were modified into infrared cameras to acquire the images. The recorded images go through pre-processing to find hand features, the finger gestures are recognized accordingly, and corresponding events are generated for mouse control and presentation control, suggesting a new way of controlling computers. The finger gesture recognition system presented in this study has been verified as a next-generation interface that can replace the mouse and keyboard for future information devices.
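
As a sketch of the final event-dispatch step described above, recognized gestures can be mapped to mouse and presentation events. This uses pyautogui as one possible automation library; the gesture labels and the mapping are hypothetical, not the paper's.

```python
import pyautogui  # one possible OS automation library; not specified by the paper

# Hypothetical mapping from recognized finger-gesture labels to UI events.
def dispatch_gesture(gesture, x=None, y=None):
    if gesture == "point" and x is not None:
        pyautogui.moveTo(x, y)        # move the cursor to the fingertip position
    elif gesture == "pinch":
        pyautogui.click()             # left click
    elif gesture == "swipe_right":
        pyautogui.press("right")      # next presentation slide
    elif gesture == "swipe_left":
        pyautogui.press("left")       # previous slide

# Example: labels a recognizer might emit per frame (made-up values).
for gesture, pos in [("point", (640, 360)), ("pinch", None), ("swipe_right", None)]:
    dispatch_gesture(gesture, *(pos or (None, None)))
```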