• Title/Summary/Keyword: Mask matching


Monitoring and Forecasting the Eyjafjallajökull Volcanic Ash using Combination of Satellite and Trajectory Analysis (인공위성 관측자료와 궤적분석을 이용한 Eyjafjallajökull 화산재 감시와 예측)

  • Lee, Kwon Ho
    • Journal of Korean Society for Atmospheric Environment / v.30 no.2 / pp.139-149 / 2014
  • A new technique, the combination of satellite and trajectory analysis (CSTA), is presented for exploring the spatio-temporal distribution of a volcanic ash plume (VAP) from a volcanic eruption. CSTA uses satellite-derived ash property data and matching forward trajectories, which can generate an airmass history pattern for a specific VAP. In detail, VAP properties such as the ash mask, aerosol optical thickness at 11 μm (AOT11), ash layer height, and effective radius were retrieved from the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite sensor and used to estimate the feasibility of ash forecasting in the local atmosphere near the volcano. Applying CSTA to Iceland's Eyjafjallajökull volcano, which erupted in May 2010, reveals remarkable spatial coherence for some VAP source-transport patterns. The CSTA-forecasted VAP points are consistent with the area of the MODIS-retrieved VAP. The success rate of the 24-hour VAP forecast in this study was about 77.8%. Finally, CSTA could provide promising results for VAP monitoring and forecasting using satellite observation data, with verification against long-term measurement datasets.
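The verification step behind the quoted success rate can be sketched as follows: forecast VAP points from the forward trajectories are compared against the satellite-derived ash mask, and the fraction of forecast points that fall inside the observed ash area is reported. This is a minimal illustrative sketch, not the paper's code; the grid, points, and helper name are assumptions.

```python
import numpy as np

def forecast_success_rate(forecast_points, ash_mask):
    """forecast_points: iterable of (row, col) grid indices.
    ash_mask: 2D boolean array, True where satellite data detected ash."""
    hits = sum(1 for r, c in forecast_points
               if 0 <= r < ash_mask.shape[0]
               and 0 <= c < ash_mask.shape[1]
               and ash_mask[r, c])
    return hits / len(forecast_points)

# Toy example: 9 forecast points, 7 of which land inside the observed ash area
mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 2:6] = True
points = [(2, 2), (3, 3), (4, 4), (5, 5), (2, 5), (3, 2), (5, 4), (8, 8), (0, 9)]
print(round(forecast_success_rate(points, mask) * 100, 1))  # → 77.8
```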

Noise Removal Filter Algorithm using Spatial Weight in AWGN Environment (AWGN 환경에서 공간 가중치를 이용한 잡음 제거 필터 알고리즘)

  • Cheon, Bong-Won;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.207-209 / 2021
  • In recent years, with the development of artificial intelligence and IoT technology, automation and unmanned operation are advancing in various fields, and the importance of the image processing that underlies them, such as object tracking, medical imaging, and object recognition, is increasing. In particular, systems requiring detailed data processing use noise reduction as a pre-processing step, but existing algorithms have the disadvantage that blurring occurs during filtering. Therefore, in this paper, we propose a filter algorithm using modified spatial weights to minimize information loss in the filtering process. The proposed algorithm uses mask matching to remove AWGN and obtains the filter output by adding or subtracting the output of the modified spatial weights. The proposed algorithm has superior noise reduction characteristics compared to existing methods and reconstructs the image while minimizing blurring.
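The core of such a filter can be sketched with Gaussian-shaped spatial weights inside the mask; the paper's exact modified weights and its add/subtract correction step are not specified in the abstract, so this is an assumed baseline, not the proposed algorithm itself.

```python
import numpy as np

def spatial_weight_filter(img, size=3, sigma=1.0):
    """Weighted-mean mask filter: weights fall off with distance
    from the mask center (illustrative Gaussian weights)."""
    pad = size // 2
    ax = np.arange(size) - pad
    xx, yy = np.meshgrid(ax, ax)
    w = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    w /= w.sum()                       # normalize so the output level is preserved
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i+size, j:j+size] * w)
    return out

noisy = np.array([[10., 10, 10], [10, 100, 10], [10, 10, 10]])
print(spatial_weight_filter(noisy)[1, 1])  # the noisy spike is pulled toward its neighbors
```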


Virtual core point detection and ROI extraction for finger vein recognition (지정맥 인식을 위한 가상 코어점 검출 및 ROI 추출)

  • Lee, Ju-Won;Lee, Byeong-Ro
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.3 / pp.249-255 / 2017
  • Finger vein recognition acquires a finger vein image by illuminating the finger with infrared light and authenticates a person through processes such as feature extraction and matching. To detect the finger edge, a 2D mask-based convolution method can be used, but it takes too much computation time when applied to a low-cost microprocessor or microcontroller. To solve this problem and improve the recognition rate, this study proposed a region-of-interest extraction method based on virtual core points and moving average filtering, using a threshold on the absolute difference between pixels instead of 2D convolution with 2D masks. To evaluate the proposed method, 600 finger vein images were used to compare its edge extraction speed and ROI extraction accuracy against existing methods. The comparison showed that the proposed method was at least twice as fast as the existing methods, and its ROI extraction accuracy was 6% higher. From these results, the proposed method is expected to offer high processing speed and a high recognition rate when applied to inexpensive microprocessors.
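The 1D idea above can be sketched without any 2D convolution: slide a moving average along a scan line and flag pixels whose absolute deviation from the trailing average exceeds a threshold. The window size and threshold here are illustrative assumptions, not the paper's values.

```python
def edge_positions(row, window=5, threshold=30):
    """Return indices where a pixel deviates from the trailing moving
    average by more than `threshold` (candidate edge points)."""
    edges = []
    for i in range(window, len(row)):
        avg = sum(row[i - window:i]) / window
        if abs(row[i] - avg) > threshold:
            edges.append(i)
    return edges

# Dark background (≈20) then bright finger region (≈120): the jump is flagged
row = [20] * 10 + [120] * 10
print(edge_positions(row))  # → [10, 11, 12, 13]
```

Only additions and one division per pixel are needed, which is why this style of filtering suits low-cost microcontrollers better than 2D mask convolution.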

Fingerprint Pore Extraction Method using 1D Gaussian Model (1차원 가우시안 모델을 이용한 지문 땀샘 추출 방법)

  • Cui, Junjian;Ra, Moonsoo;Kim, Whoi-Yul
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.4 / pp.135-144 / 2015
  • Fingerprint pores have proven to be useful features for fingerprint recognition, and several pore-based fingerprint recognition systems have been reported recently. To recognize fingerprints using pore information, it is very important to extract pores reliably and accurately. Existing pore extraction methods use 2D model fitting to detect pore centers. This paper proposes a pore extraction method using a 1D Gaussian model, which is much simpler than a 2D model and requires less computational cost during model fitting. The proposed method first calculates the local ridge orientation and then generates a ridge mask. Since a pore center is brighter than its neighboring pixels, pore candidates are extracted using a 3×3 filter and a 5×5 filter successively. Pore centers are then extracted by fitting the 1D Gaussian model to the pore candidates. Extensive experiments show that the proposed method extracts pores more effectively and accurately than existing methods, and pore matching results show that it could be used in fingerprint recognition.
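The two-stage idea can be sketched as follows: candidates are pixels that are at least as bright as everything in their neighborhood, and each candidate's 1D intensity profile is then scored against a Gaussian shape. The fit-error scoring here stands in for the paper's actual model fitting, and all sizes and values are illustrative.

```python
import numpy as np

def local_max_candidates(img, size):
    """Pixels that are >= every pixel in their size×size neighborhood
    (ties pass too, since this is only a candidate stage)."""
    pad = size // 2
    p = np.pad(img, pad, mode="constant", constant_values=-np.inf)
    cands = []
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j] >= p[i:i+size, j:j+size].max():
                cands.append((i, j))
    return cands

def gaussian_1d_error(profile, sigma=1.0):
    """Squared error between a min-max normalized 1D profile and a
    unit-height Gaussian centered on the middle sample."""
    x = np.arange(len(profile)) - len(profile) // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    prof = (profile - profile.min()) / (np.ptp(profile) + 1e-9)
    return float(np.sum((prof - g) ** 2))

img = np.zeros((7, 7))
img[3, 3] = 1.0                      # a bright pore-like peak
img[3, 2] = img[3, 4] = 0.6
cands = local_max_candidates(img, 3)
print((3, 3) in cands)               # the peak survives the 3×3 test
print(gaussian_1d_error(img[3, 1:6]))  # low error for a Gaussian-shaped profile
```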

Robust Feature Extraction Based on Image-based Approach for Visual Speech Recognition (시각 음성인식을 위한 영상 기반 접근방법에 기반한 강인한 시각 특징 파라미터의 추출 방법)

  • Gyu, Song-Min;Pham, Thanh Trung;Min, So-Hee;Kim, Jing-Young;Na, Seung-You;Hwang, Sung-Taek
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.3 / pp.348-355 / 2010
  • In spite of developments in speech recognition technology, speech recognition in noisy environments is still a difficult task. To solve this problem, researchers have proposed methods that use visual information in addition to audio information for visual speech recognition. However, visual information also contains noise, just as audio information does, and this visual noise degrades visual speech recognition. How to extract visual feature parameters that enhance visual speech recognition performance is therefore a topic of interest. In this paper, we propose a method for visual feature parameter extraction based on an image-based approach to enhance the recognition performance of an HMM-based visual speech recognizer. For the experiments, we constructed an audio-visual database consisting of 105 speakers, each uttering 62 words. We applied histogram matching, lip folding, RASTA filtering, a Liner Mask, DCT, and PCA. The experimental results show that the recognition performance of the proposed method is about 21% better than that of the baseline method.
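Histogram matching, the first normalization step in the pipeline above, remaps a lip image's pixel values so that their distribution follows a reference image's distribution. A minimal CDF-based sketch, with illustrative toy arrays (not the paper's data):

```python
import numpy as np

def match_histogram(source, reference):
    """Map each source value to the reference value whose cumulative
    distribution (CDF) position is closest."""
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # invert the reference CDF
    lut = dict(zip(s_vals, mapped))
    return np.vectorize(lut.get)(source)

src = np.array([[0, 0, 1], [1, 2, 2]])
ref = np.array([[10, 10, 20], [20, 30, 30]])
out = match_histogram(src, ref)
print(out)  # each source level is remapped onto the reference levels
```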

Detection of Traffic Light using Color after Morphological Preprocessing (형태학적 전처리 후 색상을 이용한 교통 신호의 검출)

  • Kim, Chang-dae;Choi, Seo-hyuk;Kang, Ji-hun;Ryu, Sung-pil;Kim, Dong-woo;Ahn, Jae-hyeong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.05a / pp.367-370 / 2015
  • This paper proposes a method to improve the detection performance of traffic lights for autonomous driving cars. Earlier detection methods adopted color thresholding, template matching, and learning-based matching, but they suffer from problems such as decreasing recognition rates and slow processing times. The proposed method uses both a detection mask and morphological preprocessing. First, input color images are converted to the YCbCr color space to separate the luminance (Y) component, and horizontal edge components are extracted from the Y channel. Second, the region of interest is detected according to the morphological characteristics of traffic lights. Finally, the traffic signal is detected based on color distributions. The proposed method showed improved detection rates and processing times over the conventional algorithm in several surrounding environments.
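The first and last stages of the pipeline can be sketched compactly: an RGB-to-YCbCr conversion (BT.601 coefficients), horizontal-edge extraction from the Y channel, and a chrominance check for a red signal. Thresholds and the toy image are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB → (Y, Cb, Cr)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def horizontal_edges(y):
    # vertical intensity gradient → responds to horizontal edges
    return np.abs(np.diff(y, axis=0))

img = np.zeros((4, 4, 3))
img[2:, :, 0] = 255.0            # red signal region in the lower half
y, cb, cr = rgb_to_ycbcr(img)
edges = horizontal_edges(y)
print(edges.max() > 0)           # edge response at the signal boundary
print(cr[2:].mean() > 150)       # strong Cr response where the signal is red
```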


A Surface-micromachined Tunable Microgyroscope (주파수 조정가능한 박막미세가공 마이크로 자이로)

  • Lee, Ki-Bang;Yoon, Jun-Bo;Kang, Myung-Seok;Cho, Young-Ho;Youn, Sung-Kie;Kim, Choong-Ki
    • Proceedings of the KIEE Conference / 1996.07c / pp.1968-1970 / 1996
  • We investigate a surface-micromachined polysilicon microgyroscope whose resonant frequencies are electrostatically tunable after fabrication. The microgyroscope, with two oscillation modes, has been designed so that the resonant frequency of the sensing mode is higher than that of the actuating mode. It has been fabricated by a 4-mask surface-micromachining process, including deep RIE of a 6 μm-thick LPCVD polycrystalline silicon layer. The resonant frequency of the sensing mode has been lowered to that of the actuating mode by adjusting an inter-plate bias voltage, thereby achieving frequency matching at 5.8 kHz under a bias voltage of 2 V at a reduced pressure of 0.1 torr. For an input angular rate of 50°/sec, an output signal of 20 mV has been measured from the tuned microgyroscope under an AC drive voltage of 2 V with a DC bias voltage of 3 V.


Development of a Data Reduction Algorithm for Optical Wide Field Patrol (OWL) II: Improving Measurement of Lengths of Detected Streaks

  • Park, Sun-Youp;Choi, Jin;Roh, Dong-Goo;Park, Maru;Jo, Jung Hyun;Yim, Hong-Suh;Park, Young-Sik;Bae, Young-Ho;Park, Jang-Hyun;Moon, Hong-Kyu;Choi, Young-Jun;Cho, Sungki;Choi, Eun-Jung
    • Journal of Astronomy and Space Sciences / v.33 no.3 / pp.221-227 / 2016
  • As described in the previous paper (Park et al. 2013), the detector subsystem of optical wide-field patrol (OWL) provides many observational data points for a single artificial satellite or piece of space debris in the form of small streaks, using a chopper system and a time tagger. The position and corresponding time data are matched by assuming that the length of a streak on the CCD frame is proportional to the exposure duration during which the chopper blades do not obscure the CCD window. In the previous study, however, the length was measured using the diagonal of the rectangular image area containing the streak; the results were quite ambiguous and inaccurate, allowing possible matching errors between position and time data. Furthermore, because only one (position, time) data point is created from each streak, the efficiency of the observation decreases. To define the length of a streak correctly, it is important to locate its endpoints. In this paper, a method using a differential convolution mask pattern is tested. This method obtains the positions where pixel values change sharply. These endpoints can be regarded as directly detected positional data, and the number of data points is doubled as a result.
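In one dimension, the endpoint idea reduces to convolving an intensity profile taken along the streak with a differential mask: the response peaks where pixel values change abruptly, i.e., at the two ends of the streak. The mask shape and streak profile below are illustrative assumptions.

```python
import numpy as np

# A flat-topped streak: background 0, streak intensity 9 over indices 3..7
profile = np.array([0, 0, 0, 9, 9, 9, 9, 9, 0, 0], dtype=float)

mask = np.array([1, -1])                       # differential mask (discrete derivative)
response = np.convolve(profile, mask, mode="valid")

# The two largest |response| values mark the streak endpoints
ends = np.where(np.abs(response) > 4)[0]
print(ends.tolist())  # → [2, 7]
```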

Design and Implementation of Machine Learning System for Fine Dust Anomaly Detection based on Big Data (빅데이터 기반 미세먼지 이상 탐지 머신러닝 시스템 설계 및 구현)

  • Jae-Won Lee;Chi-Ho Lin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.55-58 / 2024
  • In this paper, we propose the design and implementation of a big data-based machine learning system for fine dust anomaly detection. The proposed system classifies the fine dust air quality index using meteorological big data that includes fine dust measurements. It classifies fine dust through an anomaly detection algorithm that flags outliers within each air quality index category based on machine learning. Images are collected from a camera according to the fine dust level, and a fine dust visibility mask is created from the depth data of each image. Then, using a learning-based fingerprinting technique with a monocular depth estimation algorithm, the fine dust level is derived by inferring the visibility distance of the fine dust captured by the monocular camera. For experimentation and analysis, learning data were created by matching fine dust level data and CCTV image data by region and time; a model was then created and tested in a real environment.
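The final mapping step, from an inferred visibility distance to a fine dust level, amounts to a thresholded classification. The thresholds and category names below are hypothetical placeholders, not the paper's calibration.

```python
def dust_level(visibility_m):
    """Map an estimated visibility distance (meters) to a coarse
    fine dust category. Thresholds are illustrative only."""
    if visibility_m > 10000:
        return "good"
    if visibility_m > 5000:
        return "moderate"
    if visibility_m > 2000:
        return "bad"
    return "very bad"

print(dust_level(7500))  # → moderate
```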

Setup Verification in Stereotactic Radiotherapy Using Digitally Reconstructed Radiograph (DRR) (디지털화재구성사진(Digitally Reconstructed Radiograph)을 이용한 정위방사선수술 및 치료의 치료위치 확인)

  • Cho, Byung-Chul;Oh, Do-Hoon;Bae, Hoon-Sik
    • Radiation Oncology Journal / v.17 no.1 / pp.84-88 / 1999
  • Purpose: To develop a method for verifying the treatment setup in stereotactic radiotherapy by matching portal images to DRRs. Materials and Methods: Four pairs of orthogonal portal images of one patient, immobilized by a thermoplastic mask frame for fractionated stereotactic radiotherapy, were compared with DRRs. Portal images were obtained in the AP (anterior/posterior) and lateral directions with a target localizer box containing fiducial markers attached to a stereotactic frame. DRRs superimposed with the planned isocenter and fiducial markers were printed on transparent films and then overlaid on the orthogonal portal images by matching anatomical structures. From the three kinds of objects (isocenter, fiducial markers, anatomical structure) on the DRRs and portal images, the displacement error between anatomical structure and isocenter (overall setup error), the displacement error between anatomical structure and fiducial markers (immobilization error), and the displacement error between fiducial markers and isocenter (localization error) were measured. Results: Localization errors were 1.5±0.3 mm (AP) and 0.9±0.3 mm (lateral), and immobilization errors were 1.9±0.5 mm (AP) and 1.9±0.4 mm (lateral). In addition, overall setup errors were 1.0±0.9 mm (AP) and 1.3±0.4 mm (lateral). From these orthogonal displacement errors, the maximum 3D displacement errors (√((ΔAP)² + (ΔLat)²)) were found to be 1.7±0.4 mm for localization, 2.0±0.6 mm for immobilization, and 2.3±0.7 mm for the overall treatment setup. Conclusion: By comparing orthogonal portal images with DRRs, we found that it is possible to verify the treatment setup directly in stereotactic radiotherapy.
