• Title/Summary/Keyword: 음성다중


Performance Analysis of a Statistical Packet Voice/Data Multiplexer (통계적 패킷 음성 / 데이터 다중화기의 성능 해석)

  • 신병철;은종관
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.11 no.3
    • /
    • pp.179-196
    • /
    • 1986
  • In this paper, the performance of a statistical packet voice/data multiplexer is studied. In this study we assume that in the packet voice/data multiplexer two separate finite queues are used for voice and data traffic, and that voice traffic gets priority over data. For the performance analysis we divide the output link of the multiplexer into a sequence of time slots. The voice signal is modeled as an (M+1)-state Markov process, M being the packet generation period in slots. As for the data traffic, it is modeled by a simple Poisson process. In our discrete time domain analysis, the queueing behavior of voice traffic is little affected by the data traffic since the voice signal has priority over data. Therefore, we first analyze the queueing behavior of voice traffic, and then, using the result, we study the queueing behavior of data traffic. For the packet voice multiplexer, both the input state and the voice buffer occupancy are formulated by a two-dimensional Markov chain. For the integrated voice/data multiplexer we use a three-dimensional Markov chain that represents the input voice state and the buffer occupancies of voice and data. With these models, the numerical results for the performance have been obtained by the Gauss-Seidel iteration method. The analytical results have been verified by computer simulation. From the results we have found that there exist tradeoffs among the number of voice users, output link capacity, voice queue size and overflow probability for the voice traffic, and tradeoffs among traffic load, data queue size and overflow probability for the data traffic. Also, there exists a tradeoff between the performance of voice and data traffic for given input traffic and link capacity. In addition, it has been found that the average queueing delay of data traffic is longer than the maximum buffer size when the gain of time assignment speech interpolation (TASI) is more than two and the number of voice users is small.

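As an illustration of the numerical technique mentioned in the entry above, here is a minimal sketch of a Gauss-Seidel sweep for the stationary distribution of a small Markov chain; the 3-state transition matrix is an arbitrary stand-in, not the paper's voice/data model.

```python
import numpy as np

# Arbitrary 3-state transition matrix (rows sum to 1); a stand-in for the
# multi-dimensional Markov chain of input state and buffer occupancy.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

def stationary_gauss_seidel(P, tol=1e-10, max_iter=10000):
    """Solve the balance equations pi = pi @ P one component at a time (Gauss-Seidel sweeps)."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        old = pi.copy()
        for j in range(n):
            # pi_j * (1 - P_jj) = sum_{i != j} pi_i * P_ij, using already-updated components
            pi[j] = sum(pi[i] * P[i, j] for i in range(n) if i != j) / (1.0 - P[j, j])
        pi /= pi.sum()                       # renormalise to a probability vector
        if np.max(np.abs(pi - old)) < tol:
            return pi
    return pi

print(stationary_gauss_seidel(P))            # approx. [0.28, 0.41, 0.30]
```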

Design of a Low Bit-rate Speech Coder Based on Mixed Multi-band Excitation Model (혼합 다중대역 여기모델에 기반한 저 전송률 음성 부호화기의 설계)

  • 한우진;오영환
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.6
    • /
    • pp.510-521
    • /
    • 2002
  • An MBE (multi-band excitation) coder can achieve high quality synthetic speech below 4.0 kbps. There are, however, significant differences in the fine structure between the original spectrum and the synthetic spectrum. They are mainly due to the exclusive partition of voiced and unvoiced regions in the frequency domain and the decision procedure based on an experimental threshold. This paper proposes the MMBE (mixed multi-band excitation) speech model to overcome the drawbacks of an MBE coder. In addition, two analysis methods, which do not need any decision procedure based on a threshold, are presented. Both voiced and unvoiced components can be mixed over the entire frequency axis in the MMBE speech model. To illustrate the potential of the proposed speech model, we develop a 2.6 kbps MMBE coder and compare it with a 2.9 kbps MBE coder by both objective and subjective methods. The results show that the proposed coder has better performance even at a lower bit-rate compared with the MBE coder.
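
As a rough illustration of the mixed multi-band excitation idea described in the entry above, the sketch below softly mixes a pulse train and white noise band by band; the sampling rate, band edges, pitch and voicing weights are assumptions for illustration only, not the coder's parameters.

```python
import numpy as np

fs = 8000                 # sampling rate (Hz), assumed
frame = 160               # 20 ms frame
f0 = 120.0                # pitch (Hz), assumed known from the analysis stage

# Illustrative per-band voicing weights in [0, 1]: 1 = fully voiced, 0 = fully unvoiced.
band_edges = np.array([0, 500, 1000, 2000, 3000, 4000])      # Hz
voicing = np.array([0.9, 0.8, 0.5, 0.3, 0.1])

# Periodic pulse train and white noise, each one frame long.
pulses = np.zeros(frame)
pulses[::int(round(fs / f0))] = 1.0
noise = np.random.randn(frame)

# Mix voiced and unvoiced components band by band in the frequency domain.
PUL, NOI = np.fft.rfft(pulses), np.fft.rfft(noise)
freqs = np.fft.rfftfreq(frame, 1.0 / fs)
EXC = np.zeros_like(PUL)
for lo, hi, v in zip(band_edges[:-1], band_edges[1:], voicing):
    band = (freqs >= lo) & (freqs < hi)
    EXC[band] = v * PUL[band] + (1.0 - v) * NOI[band]

excitation = np.fft.irfft(EXC, n=frame)   # mixed excitation to drive the synthesis filter
```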

Voice Personality Transformation Using a Multiple Response Classification and Regression Tree (다중 응답 분류회귀트리를 이용한 음성 개성 변환)

  • 이기승
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.3
    • /
    • pp.253-261
    • /
    • 2004
  • In this paper, a new voice personality transformation method is proposed, which modifies speaker-dependent feature variables in the speech signals. The proposed method takes the cepstrum vectors and pitch as the transformation parameters, which represent the vocal tract transfer function and excitation signals, respectively. To transform these parameters, a multiple response classification and regression tree (MR-CART) is employed. MR-CART is the vector-extended version of a conventional CART, whose response is given in vector form. We evaluated the performance of the proposed method by comparing it with a previously proposed codebook mapping method. We also quantitatively analyzed the performance of voice transformation and the complexities according to various observations. From the experimental results for 4 speakers, the proposed method objectively outperforms a conventional codebook mapping method, and we also observed that the transformed speech sounds closer to the target speech.
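
Below is a minimal sketch of a multiple-response regression tree, using scikit-learn's multi-output DecisionTreeRegressor as a stand-in for MR-CART; the random "cepstral" data and the toy source-to-target mapping are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Stand-in data: source-speaker and target-speaker cepstral vectors (order 12),
# assumed to be time-aligned frame pairs from parallel utterances.
rng = np.random.default_rng(0)
X_src = rng.normal(size=(2000, 12))                               # source cepstra
Y_tgt = 0.8 * X_src + rng.normal(scale=0.1, size=X_src.shape)     # target cepstra (toy mapping)

# A regression tree whose leaf response is a whole vector (multi-output),
# i.e. the "multiple response" extension of CART.
tree = DecisionTreeRegressor(max_depth=8, min_samples_leaf=20)
tree.fit(X_src, Y_tgt)

# Transformation: map an unseen source cepstral vector into the target speaker's space.
x_new = rng.normal(size=(1, 12))
y_transformed = tree.predict(x_new)       # shape (1, 12)
```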

A Speech Synthesis System based on Cepstral Parameters and Multiband Excitation Signal (켑스트럼 파라미터와 다중대역 여기신호를 사용한 음성 합성 시스팀)

  • 김기순
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1995.06a
    • /
    • pp.211-215
    • /
    • 1995
  • To generate clear and natural Korean speech, we propose a speech synthesis system that uses a multiband excitation signal. On the analysis side, we propose an automatic voiced/unvoiced segmentation method based on a voiced/unvoiced discrimination spectrum derived from cepstral parameters. On the synthesis side, to overcome the quality limitations of synthetic speech driven only by a source of simple impulses and white noise with a simple voiced/unvoiced decision, we introduce a multiband excitation signal for voiced segments as a quality-improvement measure and use it during synthesis. Listening tests on the proposed method confirmed that, particularly in voiced regions such as noisy voiced fricatives and vowel transitions, speech synthesized with the multiband excitation signal is considerably more intelligible than speech synthesized with the commonly used simple voiced/unvoiced parameters.

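The entry above relies on a cepstrum-based voiced/unvoiced decision; the following is a minimal sketch of such a decision via cepstral peak picking, where the pitch range and threshold are illustrative assumptions rather than the paper's discrimination spectrum.

```python
import numpy as np

def voiced_unvoiced(frame, fs=8000, f0_range=(60.0, 400.0), threshold=0.25):
    """Toy voiced/unvoiced decision: look for a dominant real-cepstrum peak in the pitch range.
    `frame` should be a 20-40 ms window (e.g. 160-320 samples at 8 kHz); the threshold is arbitrary."""
    frame = frame * np.hamming(len(frame))
    log_spectrum = np.log(np.abs(np.fft.rfft(frame)) + 1e-12)
    cepstrum = np.fft.irfft(log_spectrum)
    lo = int(fs / f0_range[1])            # quefrency bin of the highest pitch
    hi = int(fs / f0_range[0])            # quefrency bin of the lowest pitch
    peak = np.max(cepstrum[lo:hi])
    return peak > threshold               # True -> voiced, False -> unvoiced
```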

Robust Speech Enhancement By Multi $H_\infty$ Filter (다중 $H_\infty$ 필터에 의한 강인한 음성향상)

  • Kim Jun Il;Lee Ki Yong
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.85-88
    • /
    • 2004
  • Conventional speech enhancement algorithms such as the Kalman/Wiener filter require a priori knowledge of the noise and focus on minimizing the error variance between the speech signal and its estimate. Consequently, errors in the statistical estimation of the noise can degrade the results. The $H_\infty$ filter, in contrast, requires no assumptions or a priori knowledge about the noise. Because it applies a least upper bound and selects, among all estimated signals, the best estimate with the smallest error, it is more robust to noise variations than the Kalman/Wiener filter. In this paper, the parameters of a hidden Markov model are estimated from training signals; the corrupted signal is then passed through a fixed number of $H_\infty$ filters, and the enhanced speech signal is obtained as a weighted sum of their outputs. By combining a hidden Markov model, whose parameters are estimated from the statistical characteristics of speech, with the $H_\infty$ algorithm, which is robust to noise variations, we propose a robust speech enhancement method based on multiple $H_\infty$ filters.

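The enhancement structure described above, a weighted sum over a fixed bank of filters, can be sketched as follows; simple moving-average smoothers and fixed weights stand in for the $H_\infty$ filters and the HMM-derived weights, so this is only a structural illustration, not the paper's algorithm.

```python
import numpy as np

def enhance_multifilter(noisy, filters, weights):
    """Weighted sum of the outputs of a fixed bank of filters. In the paper the filters are
    H-infinity filters and the weights come from HMM state posteriors; here both are stand-ins."""
    outputs = np.stack([f(noisy) for f in filters])       # (n_filters, n_samples)
    return np.sum(weights[:, None] * outputs, axis=0)

def moving_average(n):
    # Illustrative stand-in filter: length-n moving-average smoother.
    return lambda x: np.convolve(x, np.ones(n) / n, mode="same")

filters = [moving_average(n) for n in (3, 7, 15)]
weights = np.array([0.5, 0.3, 0.2])                        # assumed fixed weights summing to one

noisy = np.random.randn(16000)                             # stand-in noisy speech
enhanced = enhance_multifilter(noisy, filters, weights)
```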

Target Speaker Speech Restoration via Spectral bases Learning (주파수 특성 기저벡터 학습을 통한 특정화자 음성 복원)

  • Park, Sun-Ho;Yoo, Ji-Ho;Choi, Seung-Jin
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.3
    • /
    • pp.179-186
    • /
    • 2009
  • This paper proposes a target speech extraction method that restores the speech signal of a target speaker from a noisy convolutive mixture of speech and an interference source. We assume that the target speaker is known and that his/her utterances are available at training time. Incorporating the additional information extracted from the training utterances into the separation, we combine convolutive blind source separation (CBSS) and non-negative decomposition techniques, e.g., a probabilistic latent variable model. The non-negative decomposition is used to learn a set of bases from the spectrogram of the training utterances, where the bases represent the spectral information corresponding to the target speaker. Based on the learned spectral bases, our method provides two post-processing steps for CBSS. The channel selection step finds a desirable output channel from CBSS, which dominantly contains the target speech. The reconstruction step recovers the original spectrogram of the target speech from the selected output channel so that the remaining interference source and background noise are suppressed. Experimental results show that our method substantially improves the separation results of CBSS and, as a result, successfully recovers the target speech.
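
A minimal sketch of the bases-learning and channel-selection steps, using scikit-learn's NMF as the non-negative decomposition; the spectrogram shapes and random data are stand-ins for real utterances, and the reconstruction step is omitted.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Magnitude spectrogram of the target speaker's training utterances
# (frames x frequency bins); random non-negative data is a stand-in for real speech.
V_train = np.abs(rng.normal(size=(400, 257)))

# Non-negative decomposition: components_ holds the target speaker's spectral bases.
nmf = NMF(n_components=40, init="nndsvda", max_iter=300)
nmf.fit(V_train)

def basis_fit_error(V):
    """Residual when a spectrogram (frames x bins) is encoded with the learned bases held fixed."""
    H = nmf.transform(V)                                  # activations with fixed bases
    return np.linalg.norm(V - H @ nmf.components_)

# Channel selection: pick the CBSS output whose spectrogram the target bases explain best.
channels = [np.abs(rng.normal(size=(200, 257))) for _ in range(2)]   # stand-in CBSS outputs
target_channel = min(channels, key=basis_fit_error)
```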

An efficient video multiplexer for the transmission of the DMB multimedia data (DMB 멀티미디어 데이터의 전송을 위한 효율적인 비디오 다중화기)

  • Na Nam-Woong;Baek Sun-Hye;Hong Sung-Hoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2003.11a
    • /
    • pp.183-186
    • /
    • 2003
  • DMB (Digital Multimedia Broadcasting) is a new broadcasting standard for providing multimedia services, including video, audio, and text data, based on the Eureka-147 DAB (Digital Audio Broadcasting) transmission system, the European digital audio broadcasting standard. Accordingly, a DMB system adds, on top of the Eureka-147 DAB transmission part, a media compression (de)coding part that compresses video and audio and a video (de)multiplexing part that multiplexes the compressed media streams. Through an analysis of the video multiplexer in the DMB standard, this paper presents a new video multiplexing structure that can provide extended transport functions and higher transport efficiency. To evaluate the standard video multiplexer and the proposed one, we analyze them functionally and measure their transport efficiency through simulation.

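As a generic illustration of the multiplexing idea in the entry above, the sketch below interleaves compressed elementary streams into fixed-size transport packets; the packet size, header layout and round-robin scheduling are assumptions for illustration and do not reproduce the DMB or MPEG-2 TS packet format.

```python
from dataclasses import dataclass
from itertools import cycle
from typing import Iterator

PACKET_SIZE = 188                  # fixed transport packet size, assumed
PAYLOAD_SIZE = PACKET_SIZE - 4     # leave room for a small notional header

@dataclass
class Packet:
    stream_id: int                 # which elementary stream the payload belongs to
    payload: bytes                 # PAYLOAD_SIZE bytes, zero-padded if shorter

def multiplex(streams: dict) -> Iterator[Packet]:
    """Round-robin a set of elementary streams (video/audio/data) into fixed-size packets."""
    buffers = {sid: memoryview(data) for sid, data in streams.items()}
    for sid in cycle(list(buffers)):
        if not buffers:
            return
        if sid not in buffers:
            continue
        chunk, buffers[sid] = buffers[sid][:PAYLOAD_SIZE], buffers[sid][PAYLOAD_SIZE:]
        if len(buffers[sid]) == 0:
            del buffers[sid]
        yield Packet(sid, bytes(chunk).ljust(PAYLOAD_SIZE, b"\x00"))

packets = list(multiplex({0: b"video" * 100, 1: b"audio" * 40, 2: b"data" * 10}))
```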

A Design of Multi-channel Speech Pickup Embedded System for Hands-free Communication (핸즈프리 통신을 위한 다중채널 음성픽업 임베디드 시스템 설계)

  • Ju, Hyng-Jun;Park, Chan-Sub;Jeon, Jae-Kuk;Kim, Ki-Man
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.2
    • /
    • pp.366-373
    • /
    • 2007
  • In this paper, we propose a multi-channel speech pickup system for call quality enhancement in hands-free communication using the ALTERA Nios-II processor. The multi-channel speech pickup system uses a delay-and-sum beamformer with a zero-padding interpolator. This paper implements the speech pickup system on the Nios-II processor with real-time I/O data processing speed. The proposed speech pickup embedded system shows good agreement with the results of computer simulation (MATLAB) and a conventional DSP processor (TMS320C6711). The proposed method is more effective than previous methods in cost and design time. As a result, the hardware uses 3,649 of 5,980 logic elements (LEs), 61% of the chip.
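
A minimal sketch of a delay-and-sum beamformer that realises fractional steering delays by interpolating each channel to a finer grid (scipy's FFT-based resampling, i.e. zero padding in the frequency domain); the microphone count, sampling rate and delays are illustrative assumptions, not the paper's hardware configuration.

```python
import numpy as np
from scipy.signal import resample

def delay_and_sum(signals, delays_samples, up=4):
    """Delay-and-sum beamformer: interpolate each channel, shift it by its steering delay,
    and average. `delays_samples` are per-channel delays in original samples (may be fractional)."""
    n = signals.shape[1]
    out = np.zeros(n * up)
    for x, d in zip(signals, delays_samples):
        x_up = resample(x, n * up)                  # zero-padding interpolation to a finer grid
        shift = int(round(d * up))                  # fractional delay -> integer at the high rate
        out += np.roll(x_up, -shift)                # advance the channel to align it
        # (np.roll wraps around; a real system would pad instead, kept simple here)
    out /= signals.shape[0]
    return resample(out, n)                         # back to the original rate

# Illustrative use: 4 microphones, 16 kHz, assumed known steering delays.
mics = np.random.randn(4, 1600)
delays = np.array([0.0, 1.25, 2.5, 3.75])           # in samples, assumed
beamformed = delay_and_sum(mics, delays)
```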

Enhancing Multimodal Emotion Recognition in Speech and Text with Integrated CNN, LSTM, and BERT Models (통합 CNN, LSTM, 및 BERT 모델 기반의 음성 및 텍스트 다중 모달 감정 인식 연구)

  • Edward Dwijayanto Cahyadi;Hans Nathaniel Hadi Soesilo;Mi-Hwa Song
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.617-623
    • /
    • 2024
  • Identifying emotions through speech poses a significant challenge due to the complex relationship between language and emotions. Our paper aims to take on this challenge by employing feature engineering to identify emotions in speech through a multimodal classification task involving both speech and text data. We evaluated two classifiers, Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), both integrated with a BERT-based pre-trained model. Our assessment covers various performance metrics (accuracy, F-score, precision, and recall) across different experimental setups. The findings highlight the impressive proficiency of the two models in accurately discerning emotions from both text and speech data.
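
A minimal sketch of a speech/text late-fusion architecture in PyTorch, with BERT encoding the transcript and an LSTM encoding MFCC frames; the layer sizes, number of emotion classes and fusion scheme are illustrative assumptions, not necessarily the paper's configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class SpeechTextEmotion(nn.Module):
    """Late fusion: BERT encodes the transcript, an LSTM encodes the MFCC sequence,
    and the two embeddings are concatenated for emotion classification."""
    def __init__(self, n_mfcc=40, n_classes=4):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.lstm = nn.LSTM(n_mfcc, 128, batch_first=True)
        self.head = nn.Linear(self.bert.config.hidden_size + 128, n_classes)

    def forward(self, input_ids, attention_mask, mfcc):
        text_emb = self.bert(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        _, (h, _) = self.lstm(mfcc)                     # h: (1, batch, 128)
        fused = torch.cat([text_emb, h[-1]], dim=-1)    # concatenate text and speech embeddings
        return self.head(fused)                         # emotion logits

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["I am so happy today"], return_tensors="pt", padding=True)
mfcc = torch.randn(1, 200, 40)                          # stand-in MFCC sequence (batch, frames, coeffs)
logits = SpeechTextEmotion()(batch["input_ids"], batch["attention_mask"], mfcc)
```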

Channel-attentive MFCC for Improved Recognition of Partially Corrupted Speech (부분 손상된 음성의 인식 향상을 위한 채널집중 MFCC 기법)

  • 조훈영;지상문;오영환
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.4
    • /
    • pp.315-322
    • /
    • 2003
  • We propose a channel-attentive Mel frequency cepstral coefficient (CAMFCC) extraction method to improve the recognition performance of speech that is partially corrupted in the frequency domain. This method introduces weighting terms both at the filter bank analysis step and at the output probability calculation of decoding step. The weights are obtained for each frequency channel of filter bank such that the more reliable channel is emphasized by a higher weight value. Experimental results on TIDIGITS database corrupted by various frequency-selective noises indicated that the proposed CAMFCC method utilizes the uncorrupted speech information well, improving the recognition performance by 11.2% on average in comparison to a multi-band speech recognition system.