• Title/Summary/Keyword: Noise Removing


Multi-spectral Flash Imaging using Region-based Weight Map (영역기반 가중치 맵을 이용한 멀티스팩트럼 플래시 영상 획득)

  • Choi, Bong-Seok;Kim, Dae-Chul;Lee, Cheol-Hee;Ha, Yeong-Ho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.9
    • /
    • pp.127-135
    • /
    • 2013
  • In order to acquire images in low-light environments, it is usually necessary to adopt long exposure times or resort to flash lights. However, flashes often induce color distortion, cause the red-eye effect, and can be disturbing to subjects. On the other hand, long-exposure shots are susceptible to subject motion, as well as motion blur due to camera shake when taken hand-held. A recently introduced technique to overcome the limitations of traditional low-light photography is multi-spectral flash. Multi-spectral flash images combine UV/IR and visible-spectrum information: the general idea is to retrieve detail from the UV/IR spectrum and color from the visible spectrum. However, multi-spectral flash images are themselves subject to color distortion and noise. This work presents a method to compute multi-spectral flash images so that noise is reduced and color accuracy improved. The proposed approach extends a previously published optimization method by introducing a weight map that discriminates uniform regions from detail regions. The weight map is generated with the Canny edge operator and applied in the optimization process to assign different weights to uniform regions and edges: the weight of color information is increased in uniform regions and decreased in detail regions. As a result, the proposed method enhances color reproduction and removes artifacts. Its performance has been objectively evaluated using long-exposure shots as reference.
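As a rough illustration of the region-based weight map idea (not the paper's exact optimization), the sketch below builds a Canny-edge-based weight map and uses it to favor color information in uniform regions and detail information near edges. The thresholds, blur radius, file names, and blending rule are assumptions.

```python
import cv2
import numpy as np

def region_weight_map(guide_img, ksize=5, low=50, high=150):
    """Build a [0, 1] weight map that is high in uniform regions and low near edges.

    guide_img: grayscale detail image (e.g., the UV/IR flash channel).
    The Canny thresholds and dilation size are illustrative, not from the paper.
    """
    edges = cv2.Canny(guide_img, low, high)                 # binary edge map
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))    # thicken edge regions
    w = 1.0 - edges.astype(np.float32) / 255.0              # 1 in uniform areas, 0 at edges
    return cv2.GaussianBlur(w, (ksize, ksize), 0)           # smooth the transition

# Toy blend: favor color (no-flash) information in uniform regions and
# detail (multi-spectral flash) information near edges. File names are hypothetical.
detail = cv2.imread("flash_ir.png", cv2.IMREAD_GRAYSCALE)
color = cv2.imread("no_flash.png", cv2.IMREAD_GRAYSCALE)
w = region_weight_map(detail)
fused = w * color.astype(np.float32) + (1.0 - w) * detail.astype(np.float32)
cv2.imwrite("fused.png", fused.clip(0, 255).astype(np.uint8))
```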

Comparative Analysis among Radar Image Filters for Flood Mapping (홍수매핑을 위한 레이더 영상 필터의 비교분석)

  • Kim, Daeseong;Jung, Hyung-Sup;Baek, Wonkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.34 no.1
    • /
    • pp.43-52
    • /
    • 2016
  • Owing to the characteristics of microwave signals, radar satellite images have been used to detect floods regardless of weather and daylight conditions. As flood-detection methods have developed, the detection rate of flooded areas has increased. Because floods cause extensive damage, flooded areas must be distinguished from non-flooded areas, and the detection must be accurate. Therefore, not only image resolution but also the filtering process is critical to minimize resolution degradation. Although the resolution of radar images has improved as technology develops, little attention has been paid to which filtering method is most suitable for flood detection. The purpose of this study is therefore to find the most appropriate filtering method for flood detection by comparing three filters: the Lee filter, the Frost filter, and the NL-means filter. Each filter is applied to the radar image, the filtered images are compared, and then the flood maps derived from them are compared in turn. As a result, the Frost and NL-means filters are more effective than the Lee filter in removing speckle noise. In the case of the Frost filter, severe resolution degradation occurred while the noise was removed. In the case of the NL-means filter, the shadow effect, one of the main causes of false detection, was not eliminated as well as with the other filters. Nevertheless, the NL-means result shows the best detection rate because the number of shadow pixels is relatively small in the entire image. The Kappa coefficient is 0.81 for the NL-means-filtered image, compared with 0.55, 0.64, and 0.74 for the unfiltered, Lee-filtered, and Frost-filtered images, respectively. Moreover, the NL-means filter removes speckle noise without resolution degradation, so flooded areas can be distinguished effectively from other areas in the NL-means-filtered image.
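For reference, a hedged sketch of two of the speckle filters compared here: a basic Lee filter implemented from local statistics and scikit-image's NL-means denoiser. The window size, noise-variance estimate, NL-means parameters, and the synthetic SAR placeholder are assumptions, not the study's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.restoration import denoise_nl_means, estimate_sigma

def lee_filter(img, size=7):
    """Basic Lee speckle filter: blend the local mean and the pixel value by the
    ratio of local signal variance to total variance (window size assumed)."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = sq_mean - mean ** 2
    noise_var = var.mean()                      # crude global noise estimate
    weight = var / (var + noise_var)
    return mean + weight * (img - mean)

# sar: 2-D float array of SAR backscatter intensities (real data loading omitted)
sar = np.random.gamma(shape=4.0, scale=0.25, size=(256, 256)).astype(np.float32)

lee = lee_filter(sar)
sigma = float(np.mean(estimate_sigma(sar)))
nlm = denoise_nl_means(sar, h=1.15 * sigma, patch_size=5,
                       patch_distance=6, fast_mode=True)

# A simple threshold on the filtered image could then separate dark
# (water/flooded) pixels from the rest before accuracy assessment.
flood_mask = nlm < np.percentile(nlm, 20)
```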

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Data of higher dimensions require many computations, which can lead to high computational cost and overfitting in the model. Thus, a dimension-reduction process is necessary to improve model performance. Diverse methods have been proposed, from merely lessening the noise of data such as misspellings or informal text to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. Besides these algorithms, word embeddings, which learn low-dimensional vector-space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once the feature-selection algorithm identifies words that are not important, we assume that words similar to them also have little impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally select words that are similar to those with low information gain values and build word embeddings after removing them as well. Finally, the filtered text and word embeddings are fed into the deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each one using the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes exceeded 70% were classified as helpful reviews; since Yelp only reports the number of helpful votes, we extracted 100,000 reviews with more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance with Word2Vec and GloVe embeddings that used all the words. We show that one of the proposed methods outperforms the embeddings built from all the words: removing unimportant words yields better performance, although removing too many words lowers it.
For future research, diverse preprocessing methods and an in-depth analysis of word co-occurrence should be considered when measuring similarity between words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed methods, making it possible to identify feasible combinations of word-embedding and elimination methods.
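A hedged sketch of the two-stage word-elimination idea, using scikit-learn's mutual information as a stand-in for information gain and gensim Word2Vec cosine similarity to expand the removal set. The toy reviews, percentile threshold, and similarity cut-off are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

# texts: review strings; labels: 1 = helpful, 0 = not helpful (toy data)
texts = ["great book easy to read", "boring plot and bad pacing",
         "excellent story great characters", "bad print quality boring read"]
labels = [1, 0, 1, 0]

# 1) Score each word with mutual information (stand-in for information gain).
vec = CountVectorizer()
X = vec.fit_transform(texts)
scores = mutual_info_classif(X, labels, discrete_features=True)
vocab = np.array(vec.get_feature_names_out())
low_ig = set(vocab[scores < np.percentile(scores, 30)])     # threshold is illustrative

# 2) Expand the removal set with words similar to the low-IG words.
tokenized = [t.split() for t in texts]
w2v = Word2Vec(tokenized, vector_size=50, min_count=1, window=3, epochs=50)
expanded = set(low_ig)
for w in low_ig:
    if w in w2v.wv:
        expanded.update(s for s, sim in w2v.wv.most_similar(w, topn=3) if sim > 0.5)

# 3) Remove the selected words before building the final embedding / classifier input.
filtered = [" ".join(w for w in toks if w not in expanded) for toks in tokenized]
```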

Extraction of Sternocleidomastoid Muscle for Ultrasound Images of Cervical Vertebrae (경추 초음파 영상에서 흉쇄유돌근 추출)

  • Kim, Kwang-Baek
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.11
    • /
    • pp.2321-2326
    • /
    • 2011
  • Cervical vertebrae are a complex structure and an important part of the human body, connecting the head and the trunk. In this paper, we propose a method to automatically extract the sternocleidomastoid muscle from ultrasonography images of the cervical vertebrae. In our method, a Region of Interest (ROI) is first extracted from the ultrasonography image after removing unnecessary auxiliary information such as on-screen metrics. We then apply an ends-in search stretching algorithm to enhance the brightness contrast, and average binarization is applied to pixels whose brightness is sufficiently high. The noise is removed by image-processing algorithms. After extracting the fascia enclosing the sternocleidomastoid muscle, the target muscle object is extracted using the location of the fascia, depending on the number of objects found within it. When only one object is to be extracted, we first search downward to extract the target muscle area, then search from right to left to extract the remaining area, and merge the two. If there are two target objects, we first extract the region from the upper bound of the higher object to the lower bound of the lower object and then remove the fascia from the target object area. A smearing technique is used to restore possible loss of the fat area in this process. The thickness of the sternocleidomastoid muscle is then calculated as the maximum thickness of the extracted objects. In an experiment with 30 real-world ultrasonography images, the efficacy and accuracy of the proposed method were verified by health professionals.
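A small sketch of the two preprocessing steps named above, ends-in search stretching and average binarization, assuming percentile cut-offs and placeholder ROI data; the fascia and muscle extraction steps are not reproduced.

```python
import numpy as np

def ends_in_stretch(img, low_pct=5, high_pct=95):
    """Ends-in search stretching: clip the darkest/brightest tails of the
    histogram and linearly stretch the rest to the full 0-255 range.
    The percentile cut-offs are illustrative assumptions."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img.astype(np.float32) - lo) / max(hi - lo, 1e-6) * 255.0
    return stretched.clip(0, 255).astype(np.uint8)

def average_binarize(img):
    """Binarize bright pixels against the mean brightness of the ROI."""
    return (img > img.mean()).astype(np.uint8) * 255

# roi: grayscale ultrasound ROI after the auxiliary annotations are cropped out
roi = np.random.randint(0, 180, size=(200, 300), dtype=np.uint8)   # placeholder data
binary = average_binarize(ends_in_stretch(roi))
```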

Trace-Back Viterbi Decoder with Sequential State Transition Control (순서적 역방향 상태천이 제어에 의한 역추적 비터비 디코더)

  • 정차근
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.40 no.11
    • /
    • pp.51-62
    • /
    • 2003
  • This paper presents a novel survivor-memory management and decoding technique with sequential backward state-transition control for the trace-back Viterbi decoder. The Viterbi algorithm is a maximum-likelihood decoding scheme that estimates the likelihood of the encoder state for channel error detection and correction, and it is applied to a broad range of digital communication problems such as intersymbol interference removal and channel equalization. To achieve an area-efficient, high-throughput VLSI design of the Viterbi decoder, whose operation is inherently recursive, further research is required on a simple, systematic, parallel ACS architecture and on survivor-memory management. As a solution to this problem, this paper presents a progressive decoding algorithm with sequential backward state-transition control in the trace-back Viterbi decoder. Compared with conventional trace-back decoding techniques, the total memory required is greatly reduced in the proposed method. Furthermore, the proposed method can be implemented as a simple pipelined, systolic-array-type architecture. No peripheral logic circuit is needed to control memory access, and the memory-access bandwidth is reduced. Therefore, the proposed method features high area efficiency and low power consumption together with high throughput. Finally, decoding results for received data with channel noise and an application example are provided to evaluate the efficiency of the proposed method.
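The paper's contribution is the hardware-oriented survivor-memory management and backward state-transition control; as background only, below is a minimal Python sketch of a generic hard-decision trace-back Viterbi decoder for a small rate-1/2, constraint-length-3 convolutional code (generators 7 and 5, octal). The code, message, and injected error are illustrative assumptions and do not reproduce the paper's architecture.

```python
# Minimal hard-decision Viterbi decoder with trace-back; it illustrates the
# generic add-compare-select and trace-back steps only, not the paper's
# sequential backward state-transition control or survivor-memory scheme.
G = (0b111, 0b101)            # generator polynomials (7, 5 octal)
K = 3                         # constraint length
N_STATES = 1 << (K - 1)       # 4 trellis states

def branch(state, bit):
    """Return (output bits, next state) for one trellis transition."""
    reg = (bit << (K - 1)) | state
    out = [bin(reg & g).count("1") & 1 for g in G]
    return out, reg >> 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        o, state = branch(state, b)
        out += o
    return out

def viterbi_decode(received, n_bits):
    INF = 10 ** 9
    metric = [0] + [INF] * (N_STATES - 1)        # encoder starts in state 0
    survivors = []                                # per step: (previous state, input bit)
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric, step = [INF] * N_STATES, [None] * N_STATES
        for s in range(N_STATES):                 # add-compare-select over all branches
            if metric[s] == INF:
                continue
            for b in (0, 1):
                out, ns = branch(s, b)
                m = metric[s] + sum(o != x for o, x in zip(out, r))
                if m < new_metric[ns]:
                    new_metric[ns], step[ns] = m, (s, b)
        metric = new_metric
        survivors.append(step)
    state, bits = 0, []                           # tail bits force the final state to 0
    for step in reversed(survivors):              # trace back the survivor path
        prev, b = step[state]
        bits.append(b)
        state = prev
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg + [0] * (K - 1))               # append tail bits to terminate
coded[3] ^= 1                                     # inject one channel bit error
assert viterbi_decode(coded, len(msg) + K - 1)[:len(msg)] == msg
```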

A Study on the Resistance Against Environmental Loading of the Fine-Size Exposed Aggregate Portland Cement Concrete Pavements (소입경 골재노출콘크리트포장의 환경하중 저항성에 대한 연구)

  • Chon, Beom-Jun;Lee, Seung-Woo;Chae, Sung-Wook;Bae, Jae-Min
    • International Journal of Highway Engineering
    • /
    • v.11 no.2
    • /
    • pp.99-109
    • /
    • 2009
  • Fine-size exposed aggregate portland cement concrete pavements (FEACP) obtain an exposed-aggregate surface texture by removing the upper 2~3 mm of surface mortar, whose curing is delayed with a set-retarding agent. FEACPs have the advantage of maintaining low noise and an adequate skid-resistance level over the performance period compared with conventional portland cement concrete pavements. It is necessary to ensure durability against environmental loading to prevent unexpected distress during the service life of FEACP. During curing, volume change accompanying changes in moisture and temperature can be an important cause of cracking in the concrete, so it must be controlled for successful FEACP construction. The use of chloride-containing deicers may also accelerate defects in concrete pavement such as cracking and scaling. This study aims to evaluate the environmental-loading resistance of FEACP, based on estimates of the shrinkage-crack control capability under moisture evaporation and of scaling caused by deicer under freeze-thaw cycles.


Cable Fault Detection Improvement of STDR Using Reference Signal Elimination (인가신호 제거를 이용한 STDR의 케이블 고장 검출 성능 향상)

  • Jeon, Jeong-Chay;Kim, Taek-Hee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.3
    • /
    • pp.450-456
    • /
    • 2016
  • STDR (sequence time domain reflectometry), which detects cable faults using a pseudo-noise sequence as the reference signal and a time-correlation analysis between the reference and reflected signals, is robust in noisy environments and can detect intermittent faults, including open and short circuits. On the other hand, if the fault is located far away or the fault is a soft fault, the attenuation of the reflected signal becomes larger; hence the correlation coefficient in the STDR becomes smaller, which makes fault detection difficult and increases the measurement error. In addition, automating the localization of the fault through detection of the phase and peak value becomes difficult. Therefore, to improve the cable-fault detection of conventional STDR, this paper proposes an algorithm in which the peak of the correlation coefficient due to the reference signal is detected first, the reference signal is then removed, and the peak of the correlation coefficient due to the reflected signal is detected. The performance of the proposed method was validated experimentally on low-voltage power cables. The evaluation showed that, despite signal attenuation, the proposed method identifies whether a fault has occurred more accurately and localizes faults better than conventional STDR. In addition, automatic identification of the fault type and its location, based on detection of the phase and peak value after eliminating the reference signal and normalizing the correlation coefficient, showed no error.
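As an illustration of the correlation-and-removal idea described above, here is a toy numpy simulation: a pseudo-noise reference is injected, the simulated line response contains a weak delayed reflection, and the reference contribution is subtracted before the reflection peak is searched. The sequence length, delay, attenuation, and noise level are made-up values; the actual method operates on measured reflectometry data with normalized correlation coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pseudo-noise reference sequence (+/-1); length and parameters are illustrative.
N = 1023
ref = rng.choice([-1.0, 1.0], size=N)

# Toy line response: the injected reference plus a weak, delayed reflection and noise.
delay, attenuation = 400, 0.3                   # hypothetical fault distance / loss
measured = ref.copy()
measured[delay:] += attenuation * ref[:N - delay]
measured += 0.05 * rng.standard_normal(N)

def norm_xcorr(x, ref):
    """Normalized cross-correlation for non-negative lags."""
    c = np.correlate(x, ref, mode="full")[N - 1:]
    return c / np.dot(ref, ref)

corr = norm_xcorr(measured, ref)
# Conventional STDR: the dominant peak at lag 0 is the injected reference itself,
# and the small reflection peak rides on its correlation sidelobes.

# Proposed idea (sketched): estimate and subtract the reference contribution,
# then search for the reflection peak in the residual.
residual = measured - corr[0] * ref
res_corr = norm_xcorr(residual, ref)
fault_lag = int(np.argmax(np.abs(res_corr[1:]))) + 1
print("estimated fault lag:", fault_lag)        # should be near the injected delay (400)
```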

Sensitivity Analysis of Surface Reflectance Retrieved from 6SV LUT for Each Channel of KOMPSAT-3/3A (KOMPSAT-3/3A 채널별 6SV 조견표의 지표반사도 민감도 분석)

  • Jung, Daeseong;Jin, Donghyun;Seong, Noh-Hun;Lee, Kyeong-Sang;Seo, Minji;Choi, Sungwon;Sim, Suyoung;Han, Kyung-Soo;Kim, Bo-Ram
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.785-791
    • /
    • 2020
  • Radiance measured by a satellite is perturbed by atmospheric effects. Atmospheric correction is the process of calculating surface reflectance by removing these effects, and the surface reflectance is computed from a Radiative Transfer Model (RTM)-based Look-Up Table (LUT). In general, studies using a LUT build one for each channel under the same atmospheric and geometric conditions. However, the individual channels are not equally sensitive to each atmospheric factor. In this study, a LUT for each channel of the Korea Multi-Purpose SATellite (KOMPSAT)-3/3A was built under the same atmospheric and geometric conditions, and the accuracy of the LUT was verified using Top-of-Atmosphere radiance and surface reflectance simulated with the RTM. As a result, the relative error of the surface reflectance reached a maximum of 81.14% in the blue channel, which is sensitive to aerosol optical depth, and 42.67% in the NIR (near-infrared) channel.
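For context, a hedged sketch of how surface reflectance is commonly recovered from a per-channel LUT of 6S-style coefficients (xa, xb, xc) by interpolating over atmospheric and geometric axes. The LUT axes, coefficient grids, and the radiance value below are placeholders rather than KOMPSAT-3/3A values; only the standard 6S correction formula y = xa*L - xb, rho = y / (1 + xc*y) is assumed.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder LUT axes and coefficient grids for one channel; in practice these
# come from 6SV runs over the chosen atmospheric and geometric conditions.
aod = np.array([0.05, 0.2, 0.4, 0.8])             # aerosol optical depth
sza = np.array([10.0, 30.0, 50.0, 70.0])          # solar zenith angle [deg]
xa = np.random.rand(4, 4) * 0.01                  # dummy coefficient grids
xb = np.random.rand(4, 4) * 0.1
xc = np.random.rand(4, 4) * 0.2

interp = {name: RegularGridInterpolator((aod, sza), grid)
          for name, grid in (("xa", xa), ("xb", xb), ("xc", xc))}

def surface_reflectance(radiance, aod_val, sza_val):
    """6S-style correction: y = xa*L - xb, rho = y / (1 + xc*y)."""
    pt = np.array([[aod_val, sza_val]])
    a, b, c = (interp[k](pt).item() for k in ("xa", "xb", "xc"))
    y = a * radiance - b
    return y / (1.0 + c * y)

rho = surface_reflectance(radiance=85.0, aod_val=0.3, sza_val=40.0)
```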

A Evaluation Method for the Effectiveness of Anti-snore Pillow (코골이 방지 베개의 효율성 검증을 위한 방법)

  • Jee, Duk-Keun;Wei, Ran;Im, Jae-Joong;Kim, Hee-Sun;Kim, Hyun-Jeong
    • Science of Emotion and Sensibility
    • /
    • v.14 no.4
    • /
    • pp.545-554
    • /
    • 2011
  • In this study, parameters of the polysomnography (PSG) test, such as total sleep time and snoring time, were analyzed to evaluate the effectiveness of a developed anti-snore pillow. The anti-snore pillow consists of two polyvinylidene fluoride (PVDF) vibration sensors, pumps, valves, and air bladders. The two PVDF sensors inside the pillow acquire sound signals, and an algorithm was designed to extract snoring by accurately and automatically removing unwanted noise. Once the pillow recognizes snoring, a pump in the hardware activates and inflates a bladder under the neck area of the pillow. Two volunteers participated in the PSG test, and the parameters of the PSG results were analyzed to evaluate the effectiveness of the anti-snore pillow. The total sleep time of each volunteer was similar in each phase of the test, but the snoring time and the longest snoring episode were significantly decreased when the anti-snore pillow was used. The overall results show strong potential for reducing snoring in people who snore during sleep, and the effectiveness of the anti-snore pillow can be evaluated by the PSG test. Moreover, the relationship between each PSG parameter and sleep quality will be examined in further research.


ECG Baseline Wandering Removing Algorithm using Slope analysis and Curve Point Detection (기울기 분석과 굴곡점 검출을 이용한 ECG 기저선 잡음 제거 알고리즘)

  • Cho, Ik-Sung;Kwon, Hyeog-Soong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.9
    • /
    • pp.2105-2112
    • /
    • 2010
  • The noise components of an electrocardiogram are not confined to a single frequency band; they appear as various types of signals depending on the subject's physical condition and the environment. In particular, baseline wander, caused by the mixture of the original signal with frequency components in the 0-2 Hz range arising from muscle contraction at the electrode attachment site and from the respiration rhythm, makes ECG signal analysis difficult. Several methods have been proposed to eliminate the wander effectively, but they have shortcomings: some require long processing times due to their computational complexity, while others can distort the ECG signal morphology. This paper proposes a simple and effective algorithm that eliminates baseline wander with low computational complexity and without distorting signal morphology. The algorithm first detects and segments a baseline-wandering interval using slope analysis and curve-point detection, then approximates the wander in the interval as a sinusoid, and finally subtracts the sinusoid from the signal, yielding an ECG signal without baseline wander. To evaluate the performance of the algorithm, several ECG signals with baseline wander from records 101, 111, 113, and 234 of the MIT-BIH ECG database were chosen and processed. The algorithm was found to remove baseline wander effectively without significant morphological distortion.
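The segmentation of wandering intervals by slope analysis and curve-point detection is specific to the paper and is simplified away below; the sketch only illustrates the final step of approximating the wander in an interval as a sinusoid and subtracting it, using scipy's curve_fit on a synthetic signal with assumed frequency and amplitude.

```python
import numpy as np
from scipy.optimize import curve_fit

fs = 360.0                                        # MIT-BIH sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)

# Synthetic ECG stand-in: a sparse spike train plus a slow 0.3 Hz baseline wander.
ecg = np.zeros_like(t)
ecg[::int(fs)] = 1.0                              # crude "R peaks", one per second
wander = 0.4 * np.sin(2 * np.pi * 0.3 * t + 0.7)
signal = ecg + wander

def sinusoid(t, amp, freq, phase, offset):
    return amp * np.sin(2 * np.pi * freq * t + phase) + offset

# Fit one sinusoid to a detected wandering interval (here, the whole record for
# simplicity; the paper segments intervals by slope analysis and curve points).
p0 = [0.5, 0.3, 0.0, 0.0]                         # initial guess for the fit
params, _ = curve_fit(sinusoid, t, signal, p0=p0)
corrected = signal - sinusoid(t, *params)         # ECG with the wander removed
```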