• Title/Summary/Keyword: speech signal processing


A Contrast Enhancement Method using the Contrast Measure in the Laplacian Pyramid for Digital Mammogram (디지털 맘모그램을 위한 라플라시안 피라미드에서 대비 척도를 이용한 대비 향상 방법)

  • Jeon, Geum-Sang;Lee, Won-Chang;Kim, Sang-Hee
    • Journal of the Institute of Convergence Signal Processing / v.15 no.2 / pp.24-29 / 2014
  • Digital mammography is the most common technique for the early detection of breast cancer. To diagnose breast cancer in its early stages and treat it efficiently, many image enhancement methods have been developed. This paper presents a multi-scale contrast enhancement method for digital mammograms based on the Laplacian pyramid. The proposed method decomposes the image with Gaussian and Laplacian pyramids, and the pyramid coefficients of the decomposed multi-resolution image are defined as frequency-limited local contrast measures, i.e., the ratio of high-frequency to low-frequency components. The decomposed pyramid coefficients are modified by the contrast measure to enhance contrast, and the final enhanced image is obtained by recomposing the pyramid from the modified coefficients. Compared with other existing methods, the proposed method is demonstrated to have quantitatively good performance in terms of the contrast measure.
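As a rough illustration (not the authors' implementation), the decomposition and coefficient boosting can be sketched in one dimension; the 2-D mammogram case applies the same idea at each pyramid level, and the filter and gain used here are simplifying assumptions:

```python
# Illustrative 1-D, single-level sketch of pyramid-style contrast enhancement.
def smooth(x):
    # Simple [1 2 1]/4 low-pass filter with edge replication
    # (stands in for the Gaussian pyramid filter).
    padded = [x[0]] + list(x) + [x[-1]]
    return [(padded[i - 1] + 2 * padded[i] + padded[i + 1]) / 4.0
            for i in range(1, len(padded) - 1)]

def enhance(signal, gain=1.5):
    low = smooth(signal)                         # low-frequency (Gaussian) component
    lap = [s - l for s, l in zip(signal, low)]   # high-frequency (Laplacian) component
    # Local contrast measure: ratio of high- to low-frequency components.
    contrast = [h / l if l != 0 else 0.0 for h, l in zip(lap, low)]
    # Amplify the detail coefficients, then recompose base + boosted detail.
    boosted = [h * gain for h in lap]
    return [l + b for l, b in zip(low, boosted)], contrast
```

Running `enhance([1, 1, 4, 1, 1])` sharpens the central peak because the Laplacian detail there is amplified before recomposition.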

A study on performance improvement of neural network using output probability of HMM (HMM의 출력확률을 이용한 신경회로망의 성능향상에 관한 연구)

  • Pyo Chang Soo;Kim Chang Keun;Hur Kang In
    • Journal of the Institute of Convergence Signal Processing / v.1 no.1 / pp.1-6 / 2000
  • In this paper, a hybrid system of an HMM (Hidden Markov Model) and a neural network is proposed; by adding a post-processing stage that minimizes recognition errors, it achieves a better recognition rate than the HMM alone. After the HMM is trained on the training data, test data that took no part in the training are fed to the HMM. The output probabilities the HMM produces for these data are then used as training data for the neural-network post-processor. Once the neural network is trained, the hybrid system is complete. The hybrid system improves the recognition rate by about 4.5% with an MLP and about 2% with an RBFN, and it addresses both the long training time of conventional hybrid systems and the drop in recognition rate caused by the lack of training data in real-time speech recognition systems.
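The two-stage pipeline described above can be sketched minimally; the vocabulary size and log-likelihood values below are invented for illustration, and a trained MLP or RBFN would consume the normalized score vector rather than the bare argmax:

```python
import math

def softmax(scores):
    # Normalize HMM log-likelihoods into a probability-like feature vector.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical per-word HMM log-likelihoods for one test utterance
# (three vocabulary words; values are made up for illustration).
hmm_loglik = [-42.0, -39.5, -45.1]

# Stage 1: the HMM alone would pick the argmax of the raw log-likelihoods.
hmm_choice = max(range(len(hmm_loglik)), key=lambda k: hmm_loglik[k])

# Stage 2: the normalized score vector becomes the input feature of the
# neural-network post-processor, which is trained to correct the HMM's
# systematic recognition errors.
post_features = softmax(hmm_loglik)
```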


Multi-Modal Instruction Recognition System using Speech and Gesture (음성 및 제스처를 이용한 멀티 모달 명령어 인식 시스템)

  • Kim, Jung-Hyun;Rho, Yong-Wan;Kwon, Hyung-Joon;Hong, Kwang-Seok
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2006.06a / pp.57-62 / 2006
  • With the miniaturization and growing intelligence of portable terminals, and rising interest in next-generation PC-based ubiquitous computing, research on Multi-Modal Interaction (MMI), which provides multiple dialogue modes such as pen, speech, and multimedia input, has recently been active. Accordingly, this paper proposes and implements a Multi-Modal Instruction Recognition System (MMIRS) that integrates a Voice-XML-based speech recognizer with an embedded sign-language recognizer on a Wearable Personal Station (WPS), aiming at clear communication in noisy environments and integrated speech-gesture recognition on portable terminals. Because the proposed MMIRS recognizes not only speech but also the speaker's sign-language gesture commands for sentence- and word-level instruction models corresponding to the Korean Standard Sign Language (KSSL), improved recognition performance for the prescribed instruction models can be expected even in noisy environments. To evaluate the recognition performance of MMIRS, 15 subjects continuously produced speech and sign-language gestures for 62 sentence-level and 104 word-level recognition models, and the average recognition rates of the individual recognizers and of MMIRS were compared and analyzed; for the sentence-level instruction models, MMIRS achieved average recognition rates of 93.45% in noisy environments and 95.26% in noise-free environments.


A Study on the Automatic Speech Control System Using DMS model on Real-Time Windows Environment (실시간 윈도우 환경에서 DMS모델을 이용한 자동 음성 제어 시스템에 관한 연구)

  • 이정기;남동선;양진우;김순협
    • The Journal of the Acoustical Society of Korea / v.19 no.3 / pp.51-56 / 2000
  • In this paper, we study an automatic speech control system for a real-time Windows environment using voice recognition. The reference pattern is the variable DMS model, proposed to speed up execution, and the one-stage DP algorithm using this model is applied for recognition. The recognition vocabulary consists of control commands frequently used in the Windows environment. An automatic speech-period detection algorithm for on-line voice processing in the Windows environment is implemented. The variable DMS model, which varies the number of sections according to the duration of the input signal, is proposed. Since unnecessary recognition target words are sometimes generated, the model is reconstructed on-line to handle this efficiently. The Perceptual Linear Predictive analysis method, which generates feature vectors from the extracted speech features, is applied. According to the experimental results, recognition is faster with the proposed model because of its small computational load. The multi-speaker-independent and multi-speaker-dependent recognition rates are 99.08% and 99.39%, respectively, and the recognition rate in a noisy environment is 96.25%.
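The dynamic-programming matching underlying this kind of template recognizer can be sketched as plain DTW over 1-D feature sequences; this is a simplified isolated-word variant, not the paper's one-stage DP over variable-section DMS models:

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    # Dynamic time warping cost between two 1-D feature sequences,
    # using the standard (insert, delete, match) recurrence.
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j],      # insertion
                D[i][j - 1],      # deletion
                D[i - 1][j - 1],  # match
            )
    return D[n][m]
```

A reference whose sequence is a time-stretched copy of the query matches at zero cost, which is exactly the invariance the recognizer needs for variable-duration commands.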


Modified AWSSDR method for frequency-dependent reverberation time estimation (주파수 대역별 잔향시간 추정을 위한 변형된 AWSSDR 방식)

  • Min Sik Kim;Hyung Soon Kim
    • Phonetics and Speech Sciences / v.15 no.4 / pp.91-100 / 2023
  • Reverberation time (T60) is a typical acoustic parameter that provides information about reverberation. Since the impacts of reverberation vary depending on the frequency bands even in the same space, frequency-dependent (FD) T60, which offers detailed insights into the acoustic environments, can be useful. However, most conventional blind T60 estimation methods, which estimate the T60 from speech signals, focus on fullband T60 estimation, and a few blind FDT60 estimation methods commonly show poor performance in the low-frequency bands. This paper introduces a modified approach based on Attentive pooling based Weighted Sum of Spectral Decay Rates (AWSSDR), previously proposed for blind T60 estimation, by extending its target from fullband T60 to FDT60. The experimental results show that the proposed method outperforms conventional blind FDT60 estimation methods on the acoustic characterization of environments (ACE) challenge evaluation dataset. Notably, it consistently exhibits excellent estimation performance in all frequency bands. This demonstrates that the mechanism of the AWSSDR method is valuable for blind FDT60 estimation because it reflects the FD variations in the impact of reverberation, aggregating information about FDT60 from the speech signal by processing the spectral decay rates associated with the physical properties of reverberation in each frequency band.
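The core cue named above, the spectral decay rate, can be illustrated with a toy calculation: fit a least-squares slope to one band's log-energy over time, then invert the textbook relation that a free decay loses 60 dB in T60 seconds. This is a didactic sketch, not the AWSSDR attentive-pooling model:

```python
def decay_rate(band_energies_db, frame_step_s):
    # Least-squares slope (dB/s) of one frequency band's log-energy
    # over time: the spectral decay rate used as the raw cue.
    n = len(band_energies_db)
    t = [i * frame_step_s for i in range(n)]
    mt = sum(t) / n
    me = sum(band_energies_db) / n
    num = sum((ti - mt) * (ei - me) for ti, ei in zip(t, band_energies_db))
    den = sum((ti - mt) ** 2 for ti in t)
    return num / den

def t60_from_slope(slope_db_per_s):
    # A free decay drops 60 dB in T60 seconds, i.e. slope = -60 / T60.
    return -60.0 / slope_db_per_s
```

For a band decaying linearly by 6 dB per 100 ms frame, the slope is -60 dB/s and the implied T60 is 1.0 s.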

Fast Speech Recognition System using Classification of Energy Labeling (에너지 라벨링 그룹화를 이용한 고속 음성인식시스템)

  • Han Su-Young;Kim Hong-Ryul;Lee Kee-Hee
    • Journal of the Korea Society of Computer and Information / v.9 no.4 s.32 / pp.77-83 / 2004
  • In this paper, classification by energy labeling is proposed. Energy parameters extracted from each phoneme of the input signal are labeled, and labeling groups are formed according to the detected energies of the input signals. DTW is then performed only within the selected labeling group, which makes DTW processing faster than the previous algorithm. Because this method assumes accurate parameter detection in the speech-period detection and energy-parameter detection steps, variable windows whose sizes are decided by the pitch period are used: the pitch period is detected first, and the window size is then decided between 200 and 300 frames. The proposed method cancels the influence of the window and reduces the computational complexity by 25%.
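The grouping step can be sketched as follows; the four-group quantization and the threshold rule are assumptions for illustration, not the paper's exact labeling scheme:

```python
def energy_label(frames, threshold):
    # Coarse label: fraction of frames above an energy threshold,
    # quantized into one of four groups (0..3).
    ratio = sum(1 for f in frames if f > threshold) / len(frames)
    return min(int(ratio * 4), 3)

def candidates_in_group(query_frames, references, threshold):
    # references: list of (name, frames) pairs. Only references sharing
    # the query's energy label are kept, so the expensive DTW match is
    # run against a fraction of the vocabulary.
    label = energy_label(query_frames, threshold)
    return [name for name, frames in references
            if energy_label(frames, threshold) == label]
```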


Simulation of the Loudness Recruitment using Sensorineural Hearing Impairment Modeling (감음신경성 난청의 모델링을 통한 라우드니스 누가현상의 시뮬레이션)

  • Kim, D.W.;Park, Y.C.;Kim, W.K.;Doh, W.;Park, S.J.
    • Proceedings of the KOSOMBE Conference / v.1997 no.11 / pp.63-66 / 1997
  • With the advent of high-speed digital signal processing chips, new digital techniques have been introduced to the hearing instrument. This advanced hearing instrument circuitry has led to the need for, and the development of, new fitting approaches. A number of different fitting approaches have been developed over the past few years, yet there has been little agreement on which approach is the "best" or most appropriate to use. Moreover, developing not only a new hearing aid but also its fitting method necessarily involves intensive subject-based clinical tests. In this paper, we present an objective method to evaluate and predict the performance of hearing aids without the help of such subject-based tests. In the hearing impairment simulation (HIS) algorithm, a sensorineural hearing impairment model is established from auditory test data of the impaired subject being simulated. In the simulation system, the abnormal loudness relationships created by recruitment are transposed to the normal dynamic span of hearing. The nonlinear behavior of loudness recruitment is defined using hearing loss functions generated from the measurements. The recruitment simulation was validated in an experiment with two impaired listeners, who compared processed speech in the normal ear with unprocessed speech in the impaired ear. To assess its performance, the HIS algorithm was implemented in real time on a floating-point DSP.
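One way to picture the recruitment transposition is a level-domain expansion: levels below the impaired threshold become inaudible, while the narrow residual range up to the uncomfortable level (UCL) is stretched back over the normal range, mimicking abnormally rapid loudness growth. The linear dB mapping and the 100 dB UCL below are simplifying assumptions, not the paper's measured hearing-loss functions:

```python
def simulate_recruitment(level_db, impaired_threshold_db, ucl_db=100.0):
    # Expansive level mapping for recruitment simulation:
    # below the impaired threshold -> inaudible (map to 0 dB);
    # [threshold, UCL] -> stretched linearly over [0, UCL].
    if level_db <= impaired_threshold_db:
        return 0.0
    return ucl_db * (level_db - impaired_threshold_db) / (ucl_db - impaired_threshold_db)
```

With a 40 dB threshold, a 70 dB input sits halfway through the residual range and maps to 50 dB, so a 30 dB input change produces a 50 dB perceived change: the steep loudness growth characteristic of recruitment.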


Adaptation Mode Controller for Adaptive Microphone Array System (마이크로폰 어레이를 위한 적응 모드 컨트롤러)

  • Jung Yang-Won;Kang Hong-Goo;Lee Chungyong;Hwang Youngsoo;Youn Dae Hee
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.11C / pp.1573-1580 / 2004
  • In this paper, an adaptation mode controller for an adaptive microphone array system is proposed for high-quality speech acquisition in real environments. To ensure proper adaptation of the adaptive array algorithm, the proposed adaptation mode controller uses not only temporal information but also spatial information. The controller is constructed with two processing stages: an initialization stage and a running stage. A sound source localization technique is adopted in the initialization stage, and a signal correlation characteristic is used in the running stage. For the adaptive array algorithm, a generalized sidelobe canceller with an adaptive blocking matrix is used. The proposed adaptation mode controller can be used even when the adaptive blocking matrix is not adapted, and it is much more stable than the power ratio method. The proposed algorithm was evaluated in a real environment, and the results show a 13 dB SINR improvement with the speaker sitting 2 m from the array.
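A minimal sketch of a correlation-driven mode decision in the running stage might look like the following; the threshold value and the specific adapt/freeze policy are assumptions for illustration, not the controller specified in the paper:

```python
def normalized_correlation(x, y):
    # Pearson correlation between two microphone channel frames.
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def adaptation_mode(mic1, mic2, threshold=0.8):
    # High inter-channel correlation suggests coherent target speech is
    # present, so the blocking matrix may adapt; otherwise the frame is
    # treated as noise-dominated and the interference canceller adapts.
    if normalized_correlation(mic1, mic2) > threshold:
        return "adapt_blocking_matrix"
    return "adapt_canceller"
```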

The Effect of Helium Gas Intake on the Characteristics Change of the Acoustic Organs for Voice Signal Analysis Parameter Application (음성신호 분석 요소의 적용으로 헬륨가스 흡입이 음성 기관의 특성 변화에 미치는 영향)

  • Kim, Bong-Hyun;Cho, Dong-Uk
    • The KIPS Transactions:PartB / v.18B no.6 / pp.397-404 / 2011
  • In this paper, we carried out experiments applying voice-analysis parameters to measure how the characteristics of the articulators change when helium gas is inhaled. Helium gas is used by divers to overcome the air embolism with which nitrogen gas can deal a fatal blow to the body. However, helium produces a squeaky voice with low articulation, which has made interpreting a diver's abnormal voice very difficult. Therefore, we carried out pitch and spectrogram measurements and analyses of the influence on the acoustic organs before and after helium gas inhalation.
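Pitch measurement of the kind used here is commonly done with an autocorrelation peak search; the sketch below is a generic illustration of that idea (the paper does not specify its pitch algorithm), with the search range parameters chosen as plausible defaults:

```python
import math

def autocorr_pitch(samples, sample_rate, min_hz=50, max_hz=500):
    # Pick the lag with the largest autocorrelation inside the
    # plausible pitch-lag range, and convert it back to Hz.
    n = len(samples)
    best_lag, best_val = 0, float("-inf")
    for lag in range(int(sample_rate / max_hz), int(sample_rate / min_hz) + 1):
        val = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if val > best_val:
            best_val, best_lag = val, lag
    return sample_rate / best_lag
```

On a pure 200 Hz tone sampled at 8 kHz, the peak lands at lag 40 and the detector returns exactly 200 Hz; helium mainly shifts formants (via the speed of sound) rather than pitch, which is why such pitch/spectrogram comparisons are informative.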

Recognizing Five Emotional States Using Speech Signals (음성 신호를 이용한 화자의 5가지 감성 인식)

  • Kang Bong-Seok;Han Chul-Hee;Woo Kyoung-Ho;Yang Tae-Young;Lee Chungyong;Youn Dae-Hee
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.101-104 / 1999
  • In this paper, three systems for recognizing a speaker's emotion from speech signals are built and their performance is compared. The target emotions are happiness, sadness, anger, fear, boredom, and a neutral state, and an emotional speech database was constructed for each emotion. Pitch and energy information is used as the feature set for emotion recognition, and the recognition algorithms are an MLB (Maximum-Likelihood Bayes) classifier, an NN (Nearest Neighbor) classifier, and an HMM (Hidden Markov Model) classifier. The MLB and NN classifiers use statistical information such as the mean, standard deviation, and maximum of pitch and energy as feature vectors, while the HMM classifier uses temporal information such as the delta and delta-delta pitch and the delta and delta-delta energy in each frame. The experiments were speaker-dependent and sentence-independent, and the recognition results were 68.9% with MLB, 66.7% with NN, and 89.30% with the HMM classifier.
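The statistical feature vector and the NN classifier described above can be sketched directly; the feature ordering and the toy contour values are assumptions for illustration:

```python
import math

def features(pitch_track, energy_track):
    # Statistical descriptors used as the feature vector:
    # mean, standard deviation, and maximum of each contour.
    def stats(xs):
        m = sum(xs) / len(xs)
        sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
        return [m, sd, max(xs)]
    return stats(pitch_track) + stats(energy_track)

def nearest_neighbor(query, labeled_examples):
    # 1-NN decision over Euclidean distance in feature space;
    # labeled_examples is a list of (feature_vector, label) pairs.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(labeled_examples, key=lambda ex: dist(query, ex[0]))[1]
```

A high-pitch, high-energy query lands nearer a stored "angry" exemplar than a "neutral" one, which is the intuition behind using pitch/energy statistics for emotion.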
