• Title/Summary/Keyword: Noisy Model


Robust Reference Point and Feature Extraction Method for Fingerprint Verification using Gradient Probabilistic Model (지문 인식을 위한 Gradient의 확률 모델을 이용하는 강인한 기준점 검출 및 특징 추출 방법)

  • 박준범;고한석
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.40 no.6
    • /
    • pp.95-105
    • /
    • 2003
  • A novel reference point detection method is proposed by exploiting the gradient probabilistic model that captures the curvature information of a fingerprint. The detection of the reference point is accomplished by searching for and locating the points where the gradient is most evenly distributed in a probabilistic sense. The uniformly distributed gradient texture represents either the core point itself or similar points that can be used to establish the rigid reference from which to map the features for recognition. Key benefits are a reduction in preprocessing and consistency in locating the same points as reference points even when processing arch-type fingerprints. Moreover, a new feature extraction method is proposed by improving the existing filterbank-based feature extraction. Experimental results indicate the superiority of the proposed scheme in terms of computational time for feature extraction and verification rate in various noisy environments. In particular, the proposed gradient probabilistic model achieved improvements in FAR of 49% under ambient noise, 39.2% under brightness noise, and 15.7% under salt-and-pepper noise for arch-type fingerprints. Moreover, the GPM reduces reference point detection time by 0.07 sec compared to the leading Poincaré index method, and the new filterbank method reduces code extraction time by 0.06 sec compared to the existing filterbank method.
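
As a rough illustration of the idea of locating the most evenly distributed gradient (not the authors' exact model or preprocessing), the following Python sketch scores each image block by the entropy of its gradient-orientation histogram and returns the block whose orientations are closest to uniform; the block size, bin count, and function name are assumptions made for illustration.

```python
import numpy as np

def detect_reference_point(img, block=16, n_bins=16):
    """Locate the block whose local gradient orientations are most evenly
    distributed; a rough stand-in for a gradient probabilistic model,
    not the paper's exact estimator."""
    gy, gx = np.gradient(img.astype(float))
    theta = np.mod(np.arctan2(gy, gx), np.pi)            # orientation in [0, pi)
    h, w = img.shape
    best_score, best_xy = -np.inf, (0, 0)
    for r in range(0, h - block, block):
        for c in range(0, w - block, block):
            patch = theta[r:r + block, c:c + block].ravel()
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, np.pi))
            p = hist / hist.sum()                         # empirical orientation distribution
            entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))  # high entropy = evenly distributed
            if entropy > best_score:
                best_score, best_xy = entropy, (r + block // 2, c + block // 2)
    return best_xy  # (row, col) of the candidate reference point
```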

A Study on the Factors Affecting Examinee Classification Accuracy under DINA Model : Focused on Examinee Classification Methods (DINA 모형에서 응시생 분류 정확성에 영향을 미치는 요인 탐구 : 응시생 분류방법을 중심으로)

  • Kim, Ji-Hyo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.8
    • /
    • pp.3748-3759
    • /
    • 2013
  • The purpose of this study was to examine the classification accuracies of the ML, MAP, and EAP methods under the DINA model. For this purpose, the study examined the classification accuracies of these methods under various conditions: the number of attributes, the ability distribution of examinees, and test length. A simulation method was used, with data simulated under conditions including the number of attributes (K = 5, 7), the ability distribution of examinees (high, middle, low), and test length (J = 15, 30, 45). The percentage of agreement between the true skill patterns (true ${\alpha}$) and the skill patterns estimated by the ML, MAP, and EAP methods was then calculated. The main results are summarized as follows. First, when the number of attributes was 5 or 7, the EAP method showed a relatively higher average percentage of exact agreement than the ML and MAP methods. Second, under the same conditions, as the number of attributes increased, the average percentage of exact agreement decreased for the ML, MAP, and EAP methods. Third, when the prior distribution of examinee ability varied from low to high under the same test length, the EAP method showed a relatively higher average percentage of exact agreement than the ML and MAP methods. Fourth, the average percentage of exact agreement increased for all methods (ML, MAP, and EAP) when the test length increased from 15 to 30 and 45 under the same examinee ability distribution.
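
For readers unfamiliar with the three classification methods compared above, the sketch below shows, in Python, how ML, MAP, and EAP attribute-pattern estimates can be computed for a single examinee under the DINA model. It is a generic textbook-style illustration; the item parameters, prior, and 0.5 EAP threshold are assumptions and do not reproduce the study's simulation code.

```python
import itertools
import numpy as np

def classify_dina(x, Q, slip, guess, prior=None):
    """Classify one examinee's responses under the DINA model.
    x: (J,) binary responses; Q: (J, K) Q-matrix; slip, guess: (J,) item parameters.
    Returns ML, MAP, and EAP attribute-pattern estimates (EAP thresholded at 0.5)."""
    J, K = Q.shape
    patterns = np.array(list(itertools.product([0, 1], repeat=K)))   # all 2^K skill patterns
    if prior is None:
        prior = np.full(len(patterns), 1.0 / len(patterns))          # uniform prior (illustrative)
    # eta[l, j] = 1 if pattern l possesses every attribute that item j requires
    eta = (patterns @ Q.T == Q.sum(axis=1)).astype(float)
    p_correct = (1 - slip) ** eta * guess ** (1 - eta)               # P(X_j = 1 | alpha)
    lik = np.prod(np.where(x == 1, p_correct, 1 - p_correct), axis=1)
    post = lik * prior
    post /= post.sum()
    alpha_ml = patterns[np.argmax(lik)]                              # maximum likelihood
    alpha_map = patterns[np.argmax(post)]                            # maximum a posteriori
    alpha_eap = (post @ patterns >= 0.5).astype(int)                 # per-attribute posterior mean
    return alpha_ml, alpha_map, alpha_eap
```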

Performance Analysis of The CCITT X.25 Protocol (X. 25 Protocol의 성능 분석)

  • 최준균;은종관
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.11 no.1
    • /
    • pp.25-39
    • /
    • 1986
  • In this paper, we analyze the performance, particularly the flow control mechanism, of the CCITT X.25 protocol in a packet-switched network. In this analysis, we consider the link and packet layers separately and investigate the performance in terms of three measures: normalized channel throughput, mean transmission time, and transmission efficiency. Each of these measures is formulated in terms of given protocol parameters such as window size, $T_1$ and $T_2$ values, message length, and so forth. We model the service procedure of the input traffic based on the flow control mechanism of the X.25 protocol, and investigate the mechanism of sliding window flow control with the piggybacked acknowledgment scheme using a discrete-time Markov chain model. With this model, we study the effect of variation of the protocol parameters on the performance of the X.25 protocol. From the numerical results of this analysis, one can select the optimal values of the protocol parameters for different channel environments. It has been found that, to maintain the transmission capacity satisfactorily, the window size must be greater than or equal to 7 in a high-speed channel. The time-out value, $T_1$, must be selected carefully in a noisy channel; under normal conditions, it should be on the order of 1 s. The value of $T_2$ has some effect on the transmission efficiency but is not critical.
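
As background intuition for the window-size finding above, the snippet below evaluates the classical error-free sliding-window link utilization $W/(1+2a)$, where $a$ is the ratio of propagation delay to frame transmission time. This is a textbook simplification, not the discrete-time Markov chain model analyzed in the paper, and the numbers are purely illustrative.

```python
def sliding_window_utilization(window, frame_time, prop_delay):
    """Classical error-free sliding-window utilization (textbook approximation).
    window: window size W; frame_time: frame transmission time; prop_delay: one-way delay."""
    a = prop_delay / frame_time                  # normalized propagation delay
    return 1.0 if window >= 1 + 2 * a else window / (1 + 2 * a)

# Illustrative numbers: with a normalized delay of 3, a window of 7 keeps the link fully utilized,
# while smaller windows leave it partly idle.
print(sliding_window_utilization(window=7, frame_time=1.0, prop_delay=3.0))   # 1.0
print(sliding_window_utilization(window=4, frame_time=1.0, prop_delay=3.0))   # ~0.57
```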


Improving Recall for Context-Sensitive Spelling Correction Rules using Conditional Probability Model with Dynamic Window Sizes (동적 윈도우를 갖는 조건부확률 모델을 이용한 한국어 문맥의존 철자오류 교정 규칙의 재현율 향상)

  • Choi, Hyunsoo;Kwon, Hyukchul;Yoon, Aesun
    • Journal of KIISE
    • /
    • v.42 no.5
    • /
    • pp.629-636
    • /
    • 2015
  • The types of errors corrected by a Korean spelling and grammar checker can be classified into isolated-term spelling errors and context-sensitive spelling errors (CSSEs). CSSEs are difficult to detect and correct, since they are correct words when examined alone; they can be corrected only by considering the semantic and syntactic relations to their context. CSSEs, which are frequently made even by expert writers, significantly affect the reliability of spelling and grammar checkers. An existing Korean spelling and grammar checker developed by P University (KSGC 4.5) adopts hand-made correction rules for correcting CSSEs. KSGC 4.5 is designed to obtain very high precision, which results in an extremely low recall. The overall goal of our previous works was to improve the recall without considerably lowering the precision by generalizing CSSE correction rules that mainly depend on linguistic knowledge. A variety of rule-based methods was proposed in those works, and the best performance was 95.19% average precision and 37.56% recall. This study thus proposes a statistics-based method using a conditional probability model with dynamic window sizes in order to further improve the recall. The proposed method obtained 97.23% average precision and 50.50% recall.
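
The abstract does not detail the model, so the following Python sketch only illustrates the general shape of a conditional probability model with a dynamic window: a correction candidate is scored against context words at increasing distances until enough co-occurrence evidence has been gathered. The function, count tables, and stopping rule are hypothetical and do not reproduce the cited method.

```python
from math import log

def score_candidate(tokens, i, candidate, cooc, freq, max_window=3, min_evidence=2):
    """Score replacing tokens[i] with `candidate` using conditional probabilities of
    surrounding context words, growing the window until enough evidence is found.
    cooc[(w, c)] and freq[w] are corpus counts (illustrative data structures)."""
    score, evidence = 0.0, 0
    for win in range(1, max_window + 1):
        for j in (i - win, i + win):                     # context words at distance `win`
            if 0 <= j < len(tokens):
                c = tokens[j]
                pair, base = cooc.get((candidate, c), 0), freq.get(candidate, 0)
                if pair > 0 and base > 0:
                    score += log(pair / base)            # roughly log P(context | candidate)
                    evidence += 1
        if evidence >= min_evidence:                     # dynamic window: stop once evidence suffices
            break
    return score if evidence else float("-inf")
```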

A Study on the Air Pollution Monitoring Network Algorithm Using Deep Learning (심층신경망 모델을 이용한 대기오염망 자료확정 알고리즘 연구)

  • Lee, Seon-Woo;Yang, Ho-Jun;Lee, Mun-Hyung;Choi, Jung-Moo;Yun, Se-Hwan;Kwon, Jang-Woo;Park, Ji-Hoon;Jung, Dong-Hee;Shin, Hye-Jung
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.11
    • /
    • pp.57-65
    • /
    • 2021
  • We propose a novel deep learning method to detect abnormal data with specific symptoms in an air pollution measurement system. Existing methods generally detect abnormal data by classifying data that show unusual patterns different from the existing time-series data; however, these approaches have limitations in detecting specific symptoms. In this paper, we use the DeepLab V3+ model, which is mainly used for foreground segmentation of images, with its structure modified to handle one-dimensional data. Instead of images, the model receives time-series data from multiple sensors and can detect data showing specific symptoms. In addition, we improve the model's performance by reducing the complexity of noisy time-series data using 'piecewise aggregation approximation'. The experimental results confirm that anomalous data can be detected successfully.
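
Piecewise aggregation approximation itself is a simple smoothing step: a long time series is divided into near-equal-width segments and each segment is replaced by its mean. A minimal Python sketch is shown below; the segment count and example data are illustrative, and the paper's exact preprocessing may differ.

```python
import numpy as np

def piecewise_aggregation_approximation(series, n_segments):
    """Reduce a 1-D time series to n_segments values by averaging equal-width segments."""
    series = np.asarray(series, dtype=float)
    segments = np.array_split(series, n_segments)     # near-equal-width segments
    return np.array([seg.mean() for seg in segments])

# Example: a noisy sensor trace of 1000 samples reduced to 50 smoothed values.
x = np.sin(np.linspace(0, 20, 1000)) + 0.3 * np.random.randn(1000)
print(piecewise_aggregation_approximation(x, 50).shape)   # (50,)
```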

A study on speech enhancement using complex-valued spectrum employing Feature map Dependent attention gate (특징 맵 중요도 기반 어텐션을 적용한 복소 스펙트럼 기반 음성 향상에 관한 연구)

  • Jaehee Jung;Wooil Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.6
    • /
    • pp.544-551
    • /
    • 2023
  • Speech enhancement, which is used to improve the perceptual quality and intelligibility of noisy speech, has been studied using complex-valued spectra, which can improve both magnitude and phase, rather than magnitude spectra alone. In this paper, we study how to apply an attention mechanism to complex-valued spectrum-based speech enhancement systems to further improve the intelligibility and quality of noisy speech. The attention is based on additive attention and allows the attention weights to be calculated in consideration of the complex-valued spectrum. In addition, global average pooling was used to consider the importance of each feature map. Complex-valued spectrum-based speech enhancement was performed based on the Deep Complex U-Net (DCUNET) model, and additive attention was applied following the approach of the Attention U-Net model. Experiments on noisy speech in a living-room environment showed that the proposed method improves performance over the baseline model according to evaluation metrics such as Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI), and that it consistently improves performance across various background noise environments and low Signal-to-Noise Ratio (SNR) conditions. These results demonstrate the effectiveness of the proposed speech enhancement system in improving the intelligibility and quality of noisy speech.
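
To make the attention-gate idea concrete, the following NumPy sketch combines an Attention U-Net-style additive gate with a global-average-pooling channel weighting. It operates on real-valued arrays for simplicity, whereas the paper applies attention to complex-valued spectra within a DCUNET; all shapes, weights, and names here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def additive_attention_gate(x, g, Wx, Wg, psi, channel_fc):
    """Toy additive attention gate with a GAP-based channel weighting.
    x: skip features (C, T); g: gating features (C, T); Wx, Wg: (C, C);
    psi: (C,); channel_fc: (C, C). Illustrative shapes only."""
    att = np.tanh(Wx @ x + Wg @ g)             # additive combination of skip and gating paths
    alpha = sigmoid(psi @ att)                 # (T,) attention weights over time frames
    gap = x.mean(axis=1)                       # (C,) global average pooling
    channel_w = sigmoid(channel_fc @ gap)      # (C,) feature-map (channel) importance
    return (x * alpha[None, :]) * channel_w[:, None]
```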

Electromagnetic Traveltime Tomography with Wavefield Transformation (파동장 변환을 이용한 전자탐사 주시 토모그래피)

  • Lee, Tae-Jong;Suh, Jung-Hee;Shin, Chang-Soo
    • Geophysics and Geophysical Exploration
    • /
    • v.2 no.1
    • /
    • pp.17-25
    • /
    • 1999
  • A traveltime tomography has been carried out by transforming electromagnetic data in the frequency domain to a wave-like domain. The transform uniquely relates a field satisfying a diffusion equation to an integral of the corresponding wavefield. However, direct transform of frequency-domain magnetic fields to the wavefield domain is an ill-posed problem because the kernel of the integral transform is highly damped. In this study, instead of solving such an unstable problem, it is assumed that wavefields in the transformed domain can be approximated by a sum of ray series and, for further simplicity, that reflection and refraction energy is weak enough compared to that of the direct wave to be neglected. The first arrival can then be approximated by calculating the traveltime of the direct wave only. These assumptions are valid only when the conductivity contrast between the background medium and the target anomalous body is low, so the approach can be applied only to models with low conductivity contrast. To verify the algorithm, the traveltime calculated by this approach was compared to that of the direct transform method and to the exact traveltime, calculated analytically, for a homogeneous whole space. The error in the first arrival picked by this study was less than that of the direct transform method, especially when the number of frequency samples was less than 10 or when the data were noisy. A layered-earth model with varying conductivity contrasts and an inclined dyke model have been successfully imaged by applying nonlinear traveltime tomography in 30 iterations within three CPU minutes on an IBM Pentium Pro 200 MHz.
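
Under the direct-wave-only approximation mentioned above, first arrivals in a homogeneous medium reduce to straight-ray distances divided by a single velocity. The Python sketch below computes such a traveltime table for a simple crosshole-style geometry; the geometry, velocity value, and function name are illustrative and do not reproduce the paper's wavefield-domain processing.

```python
import numpy as np

def direct_traveltimes(sources, receivers, velocity):
    """First-arrival traveltimes under the direct-wave-only approximation in a
    homogeneous medium: straight-ray distance divided by velocity.
    sources: (Ns, 2) and receivers: (Nr, 2) coordinates; returns (Ns, Nr) times."""
    d = np.linalg.norm(sources[:, None, :] - receivers[None, :, :], axis=-1)
    return d / velocity

# Example: sources in one borehole, receivers in another, illustrative velocity.
src = np.column_stack([np.zeros(5), np.linspace(0, 40, 5)])
rcv = np.column_stack([np.full(5, 30.0), np.linspace(0, 40, 5)])
print(direct_traveltimes(src, rcv, velocity=150.0).round(2))
```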


Recognition of Overlapped Sound and Influence Analysis Based on Wideband Spectrogram and Deep Neural Networks (광역 스펙트로그램과 심층신경망에 기반한 중첩된 소리의 인식과 영향 분석)

  • Kim, Young Eon;Park, Gooman
    • Journal of Broadcast Engineering
    • /
    • v.23 no.3
    • /
    • pp.421-430
    • /
    • 2018
  • Many voice recognition systems use methods such as MFCC and HMM to recognize the human voice. These methods are designed to analyze only a targeted sound, typically the speech exchanged between a human and a device. However, their recognition capability is limited for overlapped sounds that span a wider frequency range, such as dog barking and indoor sounds. The frequency content of overlapped sound covers a wide range, up to 20 kHz, which is higher than that of a voice. This paper proposes a new recognition method that covers this wider frequency range by combining a Wideband Sound Spectrogram (WSS) with a DNN-based Keras Sequential Model (KSM). The wideband sound spectrogram is adopted to analyze diverse sounds over a wide frequency range and to extract features for classification. The KSM is employed for pattern recognition using the features extracted from the WSS to improve sound recognition quality. The experiments verified that the proposed WSS and KSM classify the targeted sound well in noisy environments with overlapped sounds such as dog barking and indoor sounds. Furthermore, the paper presents a stage-by-stage analysis and comparison of the factors influencing recognition and its characteristics at various noise levels.
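
As a hedged sketch of the overall pipeline described above, the Python snippet below computes a wide-band (short-window) log spectrogram with SciPy and builds a small Keras Sequential classifier over the flattened features. The window length, layer sizes, and other hyperparameters are placeholders, not the configuration reported in the paper.

```python
import numpy as np
from scipy.signal import spectrogram
import tensorflow as tf

def wideband_spectrogram(wave, fs, win_ms=5):
    """Short analysis window (a few ms) yields a wide-band spectrogram;
    window length and overlap here are illustrative."""
    nperseg = int(fs * win_ms / 1000)
    _, _, sxx = spectrogram(wave, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    return np.log1p(sxx)                                  # log-compressed power spectrogram

def build_classifier(n_features, n_classes):
    """A small Keras Sequential DNN over flattened spectrogram features (placeholder sizes)."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```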

Design and Implementation of a Real-Time Lipreading System Using PCA & HMM (PCA와 HMM을 이용한 실시간 립리딩 시스템의 설계 및 구현)

  • Lee chi-geun;Lee eun-suk;Jung sung-tae;Lee sang-seol
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.11
    • /
    • pp.1597-1609
    • /
    • 2004
  • Many lipreading systems have been proposed to compensate for the drop in speech recognition rates in noisy environments. Previous lipreading systems work only under specific conditions such as artificial lighting and a predefined background color. In this paper, we propose a real-time lipreading system that allows speaker motion and relaxes the restrictions on color and lighting conditions. The proposed system extracts the face and lip regions, together with the essential visual information, in real time from an input video sequence captured with a common PC camera, and recognizes uttered words from this visual information in real time. It uses a hue histogram model to extract the face and lip regions, the mean shift algorithm to track the face of a moving speaker, PCA (Principal Component Analysis) to extract the visual features for training and testing, and an HMM (Hidden Markov Model) as the recognition algorithm. The experimental results show that our system achieves a recognition rate of 90% for speaker-dependent lipreading and increases the speech recognition rate to 40~85%, depending on the noise level, when combined with audio speech recognition.
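
A minimal PCA-plus-HMM word recognizer in the spirit of the pipeline above can be sketched with scikit-learn and hmmlearn as follows; the face and lip extraction and mean-shift tracking stages are omitted, and the component counts and data layout are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn import hmm

def train_word_models(word_sequences, n_components=8, n_states=5):
    """word_sequences: dict mapping each word to a list of (frames, pixels) arrays of
    vectorized lip-region images. Returns a fitted PCA and one GaussianHMM per word."""
    all_frames = np.vstack([seq for seqs in word_sequences.values() for seq in seqs])
    pca = PCA(n_components=n_components).fit(all_frames)        # low-dimensional visual features
    models = {}
    for word, seqs in word_sequences.items():
        feats = [pca.transform(s) for s in seqs]
        lengths = [len(f) for f in feats]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(np.vstack(feats), lengths)                        # one HMM per vocabulary word
        models[word] = m
    return pca, models

def recognize(frames, pca, models):
    """Pick the word whose HMM gives the highest log-likelihood for the frame sequence."""
    feats = pca.transform(frames)
    return max(models, key=lambda w: models[w].score(feats))
```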


Dilated convolution and gated linear unit based sound event detection and tagging algorithm using weak label (약한 레이블을 이용한 확장 합성곱 신경망과 게이트 선형 유닛 기반 음향 이벤트 검출 및 태깅 알고리즘)

  • Park, Chungho;Kim, Donghyun;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.5
    • /
    • pp.414-423
    • /
    • 2020
  • In this paper, we propose a Dilated Convolution Gated Linear Unit (DCGLU) to mitigate the lack of sparsity and the small receptive field caused by the segmentation map extraction process in sound event detection with weak labels. With the advent of deep learning frameworks, segmentation map extraction approaches have shown improved performance in noisy environments. However, these methods are forced to maintain the size of the feature map in order to extract the segmentation map, as the model must be constructed without a pooling operation. As a result, their performance deteriorates due to a lack of sparsity and a small receptive field. To mitigate these problems, we utilize a GLU to control the flow of information and Dilated Convolutional Neural Networks (DCNNs) to increase the receptive field without additional learning parameters. For the performance evaluation, we employ the URBAN-SED dataset and a self-organized bird sound dataset. The experiments show that our proposed DCGLU model outperforms the other baselines. In particular, our method is shown to be robust against natural sound noise at three Signal-to-Noise Ratio (SNR) levels (20 dB, 10 dB, and 0 dB).
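
To illustrate the building block named above, the following Keras sketch pairs a dilated 1-D convolution with a gated linear unit, where one convolution path is passed through a sigmoid and gates the other; stacking blocks with growing dilation widens the receptive field without pooling. The layer sizes, dilation schedule, and input shape are illustrative, not the DCGLU configuration of the paper.

```python
import tensorflow as tf

def dilated_glu_block(x, filters, kernel_size=3, dilation_rate=2):
    """Dilated 1-D convolution followed by a gated linear unit (GLU):
    a linear path multiplied element-wise by a sigmoid-gated path."""
    a = tf.keras.layers.Conv1D(filters, kernel_size, padding="same",
                               dilation_rate=dilation_rate)(x)
    b = tf.keras.layers.Conv1D(filters, kernel_size, padding="same",
                               dilation_rate=dilation_rate)(x)
    gate = tf.keras.layers.Activation("sigmoid")(b)
    return tf.keras.layers.Multiply()([a, gate])          # GLU: linear path * gate

# Example: stack blocks with growing dilation to enlarge the receptive field.
inp = tf.keras.Input(shape=(None, 64))                    # (time, feature) input, e.g. mel bands
h = dilated_glu_block(inp, 64, dilation_rate=1)
h = dilated_glu_block(h, 64, dilation_rate=2)
h = dilated_glu_block(h, 64, dilation_rate=4)
model = tf.keras.Model(inp, h)
```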