• Title/Abstract/Keyword: speech communication

Search results: 888

An Encrypted Speech Retrieval Scheme Based on Long Short-Term Memory Neural Network and Deep Hashing

  • Zhang, Qiu-yu;Li, Yu-zhou;Hu, Ying-jie
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 6 / pp. 2612-2633 / 2020
  • Due to the explosive growth of multimedia speech data, how to protect the privacy of speech data and how to retrieve it efficiently have become hot research topics in recent years. In this paper, we propose an encrypted speech retrieval scheme based on a long short-term memory (LSTM) neural network and deep hashing. The scheme not only achieves efficient retrieval of massive speech collections in a cloud environment, but also effectively avoids the risk of sensitive information leakage. First, a novel speech encryption algorithm based on a 4D quadratic autonomous hyperchaotic system is proposed to preserve the privacy and security of speech data in the cloud. Second, an integrated LSTM network model and a deep hashing algorithm are used to extract high-level features of speech data; this addresses the high dimensionality and temporal nature of speech and increases the retrieval efficiency and accuracy of the proposed scheme. Finally, matching is performed using the normalized Hamming distance. Compared with existing algorithms, the proposed scheme offers good discrimination and robustness, with high recall, precision, and retrieval efficiency under various content-preserving operations. Moreover, the proposed speech encryption algorithm has a large key space and can effectively resist brute-force attacks.
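The final matching step of the abstract above, ranking stored items by normalized Hamming distance over binary deep-hash codes, can be sketched as follows; the hash codes and clip names are illustrative, not taken from the paper:

```python
def normalized_hamming(a, b):
    """Normalized Hamming distance between two equal-length binary hash codes."""
    if len(a) != len(b):
        raise ValueError("hash codes must have equal length")
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Rank stored (encrypted) items by distance to the query hash; smaller = closer.
query = [1, 0, 1, 1, 0, 0, 1, 0]
index = {
    "clip_a": [1, 0, 1, 1, 0, 0, 1, 1],   # differs in 1 of 8 bits
    "clip_b": [0, 1, 0, 0, 1, 1, 0, 1],   # differs in all 8 bits
}
ranked = sorted(index, key=lambda name: normalized_hamming(query, index[name]))
```

Because only the binary codes are compared, the cloud server can rank encrypted items without ever decrypting the speech itself.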

잡음환경에서의 음성인식 성능 향상을 위한 이중채널 음성의 CASA 기반 전처리 방법 (CASA-based Front-end Using Two-channel Speech for the Performance Improvement of Speech Recognition in Noisy Environments)

  • 박지훈;윤재삼;김홍국
    • IEIE Conference Proceedings / Proceedings of the 2007 IEIE Summer Conference / pp. 289-290 / 2007
  • In order to improve the performance of a speech recognition system in the presence of noise, we propose a noise-robust front-end that uses two-channel speech signals, separating speech from noise based on computational auditory scene analysis (CASA). The main cues for the separation are the interaural time difference (ITD) and the interaural level difference (ILD) between the two channel signals. From the separated speech components, 39 cepstral coefficients are extracted. Speech recognition experiments show that the proposed front-end outperforms the ETSI front-end operating on single-channel speech.
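The ITD/ILD cues described above can be illustrated with a minimal frame-level estimator (cross-correlation lag for ITD, log energy ratio for ILD); this is a generic sketch, not the paper's implementation:

```python
import math

def itd_ild(left, right, max_lag=5):
    """Per-frame cues: ITD as the lag of maximum cross-correlation (a negative
    lag means the right channel is delayed), ILD as the log energy ratio in dB."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        c = sum(left[i] * right[i - lag]
                for i in range(max(0, lag), min(len(left), len(right) + lag)))
        if c > best_corr:
            best_corr, best_lag = c, lag
    e_left = sum(x * x for x in left) or 1e-12
    e_right = sum(x * x for x in right) or 1e-12
    return best_lag, 10.0 * math.log10(e_left / e_right)

# A pulse reaching the right microphone two samples later than the left one.
itd, ild = itd_ild([0, 0, 1, 2, 1, 0, 0, 0], [0, 0, 0, 0, 1, 2, 1, 0])
```

In a CASA front-end, time-frequency units whose (ITD, ILD) pair matches the target direction are kept as speech and the rest are treated as noise.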


음성통신 중 웨이브렛 계수 양자화를 이용한 비밀정보 통신 방법 (Secret Data Communication Method using Quantization of Wavelet Coefficients during Speech Communication)

  • 이종관
    • KIISE Conference Proceedings / Proceedings of the 2006 KIISE Fall Conference, Vol. 33, No. 2 (D) / pp. 302-305 / 2006
  • In this paper, we propose a novel method for secret data communication that uses quantization of wavelet coefficients. First, the speech signal is partitioned into short time frames, and each frame is transformed into the frequency domain using a wavelet transform (WT). We quantize the wavelet coefficients and embed the secret data into the quantized coefficients. The receiver recovers the secret data from the quantization errors of the received speech. Like most speech watermarking techniques, our method faces a trade-off between noise robustness and speech quality; we address it with partial quantization and a noise-level-dependent threshold. In addition, we improve speech quality with a wavelet-based de-noising method, which is easy to incorporate because the signal is already processed in the wavelet domain. Simulation results in various noisy environments show that the proposed method is reliable for secret communication.
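Embedding secret bits by quantizing coefficients, as described above, is commonly realized with quantization index modulation (QIM); the sketch below is a generic QIM scheme with an assumed step size, not the paper's exact algorithm:

```python
def qim_embed(coeff, bit, step=0.5):
    """Snap a wavelet coefficient onto the quantization lattice for `bit`:
    even multiples of `step` carry a 0, odd multiples carry a 1."""
    q = round(coeff / step)
    if q % 2 != bit:
        q += 1 if coeff / step >= q else -1   # move to the nearest valid level
    return q * step

def qim_extract(coeff, step=0.5):
    """Recover the hidden bit from the received (possibly noisy) coefficient."""
    return round(coeff / step) % 2
```

A perturbation of up to step/2 leaves the extracted bit intact, which illustrates the robustness-versus-quality trade-off the abstract mentions: a larger step survives more noise but distorts the speech more.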


PESQ-Based Selection of Efficient Partial Encryption Set for Compressed Speech

  • Yang, Hae-Yong;Lee, Kyung-Hoon;Lee, Sang-Han;Ko, Sung-Jea
    • ETRI Journal / Vol. 31, No. 4 / pp. 408-418 / 2009
  • Adopting an encryption function in a voice over Wi-Fi service incurs problems such as additional power consumption and degradation of communication quality. To overcome these problems, a partial encryption (PE) algorithm for compressed speech was recently introduced. From the security point of view, however, the partial encryption sets (PESs) of the conventional PE algorithm still leave much room for improvement. This paper proposes a new selection method for finding a smaller PES while maintaining the security level of the encrypted speech. The proposed PES selection method employs the perceptual evaluation of speech quality (PESQ) algorithm to objectively measure speech distortion. The method is applied to the ITU-T G.729 speech codec, and its content-protection capability is verified by a range of tests and a reconstruction attack. The experimental results show that encrypting only 20% of the compressed bitstream is sufficient to effectively hide the entire content of the speech.
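A PESQ-guided selection like the one above can be sketched as a greedy search over codec bitstream fields; here `distortion` stands in for a real PESQ measurement, and the field names, bit counts, and scores are hypothetical:

```python
def select_partial_encryption_set(fields, distortion, budget):
    """Greedily add the bitstream field whose encryption lowers the simulated
    quality score the most, while the encrypted bit count stays within budget."""
    chosen, used = [], 0
    remaining = dict(fields)                 # field name -> bits per frame
    while remaining:
        best = min(remaining, key=lambda f: distortion(chosen + [f]))
        if used + remaining[best] > budget:
            break
        chosen.append(best)
        used += remaining.pop(best)
    return chosen, used

# Toy stand-in for PESQ: lower score = more perceptual damage when encrypted.
def toy_pesq(encrypted_fields):
    return 4.5 - 2.0 * ("lsp" in encrypted_fields) \
               - 1.0 * ("pitch" in encrypted_fields) \
               - 0.5 * ("gain" in encrypted_fields)

chosen, used = select_partial_encryption_set(
    {"lsp": 18, "pitch": 8, "gain": 14}, toy_pesq, budget=30)
```

The greedy loop mirrors the paper's goal: spend the bit budget on the fields whose encryption hides the content most effectively.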

디지털 통신 시스템에서의 음성 인식 성능 향상을 위한 전처리 기술 (Pre-Processing for Performance Enhancement of Speech Recognition in Digital Communication Systems)

  • 서진호;박호종
    • The Journal of the Acoustical Society of Korea / Vol. 24, No. 7 / pp. 416-422 / 2005
  • Speech recognition in digital communication systems suffers severe performance degradation because speech codecs distort the speech signal. In this paper, we analyze the spectral distortion introduced by speech codecs and propose a pre-processing method that improves recognition performance by compensating the distorted frequency information. The distortion caused by three widely used standard codecs, IS-127 EVRC, ITU-T G.729 CS-ACELP, and IS-96 QCELP, is analyzed, and a compensation method applicable to all three codecs is developed. Applying the proposed compensation to each of the three codecs yielded a recognition-rate improvement of up to 15.6% over recognition on the distorted speech.
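One simple form of the compensation described above is to estimate the codec's average per-band log-spectral bias offline and subtract it before recognition; this two-function sketch is an assumption about the general approach, not the paper's method:

```python
def estimate_bias(clean_frames, coded_frames):
    """Average per-band difference between coded and clean log-spectra,
    estimated offline over paired training frames."""
    n, bands = len(clean_frames), len(clean_frames[0])
    return [sum(coded_frames[i][b] - clean_frames[i][b] for i in range(n)) / n
            for b in range(bands)]

def compensate(coded_frame, bias):
    """Pre-processing step: remove the estimated codec bias before recognition."""
    return [s - b for s, b in zip(coded_frame, bias)]

# Two toy 2-band log-spectral frames, clean vs. codec-processed.
bias = estimate_bias([[1.0, 2.0], [3.0, 4.0]], [[1.5, 1.0], [3.5, 3.0]])
```

A single bias vector per codec keeps the method codec-agnostic, matching the abstract's claim that one compensation scheme applies to all three standard codecs.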

뇌성마비 성인의 일상발화와 명료한 발화에서의 모음의 음향적 특성 (Acoustic properties of vowels produced by cerebral palsic adults in conversational and clear speech)

  • 고현주;김수진
    • Proceedings of the 2006 Spring Conference of the Korean Society of Phonetic Sciences and Speech Technology / pp. 101-104 / 2006
  • The present study examined two acoustic characteristics (duration and intensity) of vowels produced by four adults with cerebral palsy and four nondisabled adults in conversational and clear speech. In this study, clear speech means: (1) slowing one's speech rate slightly, and (2) articulating all phonemes accurately while increasing vocal volume. The speech material comprised 10 bisyllabic real words in frame sentences. Temporal-acoustic analysis showed that vowels produced by both speaker groups in clear speech (that is, more accurate and louder speech) were significantly longer than vowels in conversational speech. In addition, the intensity of vowels produced by the speakers with cerebral palsy was higher in clear speech than in conversational speech.
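The two measures analyzed above, vowel duration and intensity, can be computed from a segmented vowel as follows; the sampling rate and the full-scale dB reference are assumptions of this sketch:

```python
import math

def vowel_metrics(samples, rate):
    """Duration in ms and RMS intensity in dB (relative to full scale)
    for one segmented vowel."""
    duration_ms = 1000.0 * len(samples) / rate
    rms = math.sqrt(sum(x * x for x in samples) / len(samples)) or 1e-12
    return duration_ms, 20.0 * math.log10(rms)

# Half-scale constant segment, half a second at 16 kHz.
duration_ms, intensity_db = vowel_metrics([0.5] * 8000, 16000)
```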


마이크로폰의 이득 특성에 강인한 위치 추적 (A new sound source localization method robust to microphones' gain)

  • 최지성;이지연;정상배;한민수
    • Proceedings of the 2006 Spring Conference of the Korean Society of Phonetic Sciences and Speech Technology / pp. 127-130 / 2006
  • This paper proposes an algorithm that estimates the direction of a sound source using three microphones arranged on a circle. The algorithm is robust to differences in the microphones' gains because it uses only the time differences between microphones. To this end, a cost function that normalizes the microphone gains is employed, and a procedure for detecting the rough position of the sound source is also proposed. Our experiments show significant performance improvement over an energy-based localizer.
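A gain-robust, time-difference-only direction search of the kind described above might look like this sketch; the array radius, speed of sound, and far-field plane-wave model are assumptions, not details from the paper:

```python
import math

def predict_tdoas(angle, mic_angles, radius=0.1, c=343.0):
    """Expected pairwise delays (s) for a far-field plane wave from `angle`,
    for microphones placed on a circle at `mic_angles` (radians)."""
    d = [-radius * math.cos(angle - a) / c for a in mic_angles]
    return [d[1] - d[0], d[2] - d[0], d[2] - d[1]]

def localize(observed, mic_angles, steps=360):
    """Grid-search the direction whose predicted delays best match the observed
    ones; only time differences enter the cost, so microphone gain cancels."""
    best_ang, best_err = 0.0, float("inf")
    for k in range(steps):
        ang = 2.0 * math.pi * k / steps
        err = sum((p - o) ** 2
                  for p, o in zip(predict_tdoas(ang, mic_angles), observed))
        if err < best_err:
            best_err, best_ang = err, ang
    return best_ang

# Three microphones spaced 120 degrees apart; source at 60 degrees.
mics = [0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0]
estimate = localize(predict_tdoas(math.pi / 3.0, mics), mics)
```

Since amplitudes never appear in the cost, a miscalibrated microphone gain changes nothing, which is the robustness property the abstract claims.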


Maximum Likelihood Training and Adaptation of Embedded Speech Recognizers for Mobile Environments

  • Cho, Young-Kyu;Yook, Dong-Suk
    • ETRI Journal / Vol. 32, No. 1 / pp. 160-162 / 2010
  • For the acoustic models of embedded speech recognition systems, hidden Markov models (HMMs) are usually quantized, and the original full-space distributions are represented by combinations of a few quantized distribution prototypes. We propose a maximum likelihood objective function for training the quantized distribution prototypes. The experimental results show that the new training algorithm and the link structure adaptation scheme for the quantized HMMs reduce the word recognition error rate by 20.0%.
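Re-estimating shared distribution prototypes for quantized HMMs can be illustrated in one dimension with a Lloyd-style loop; this occupancy-weighted update is a simplified stand-in for the paper's maximum likelihood objective:

```python
def train_prototypes(means, counts, prototypes, iters=10):
    """Lloyd-style loop: assign each (1-D) Gaussian mean to its nearest shared
    prototype, then re-estimate every prototype as the occupancy-weighted
    average of the means assigned to it."""
    protos = list(prototypes)
    for _ in range(iters):
        assign = [min(range(len(protos)), key=lambda j: (m - protos[j]) ** 2)
                  for m in means]
        for j in range(len(protos)):
            total = sum(c for c, a in zip(counts, assign) if a == j)
            if total:
                protos[j] = sum(c * m for m, c, a in zip(means, counts, assign)
                                if a == j) / total
    return protos

# Four state means collapse onto two shared prototypes.
protos = train_prototypes([0.0, 0.2, 1.0, 1.1], [1, 1, 1, 1], [0.0, 1.0])
```

Sharing a few prototypes across many states is what makes the quantized models small enough for embedded devices.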

Adaptive Speech Streaming Based on Packet Loss Prediction Using Support Vector Machine for Software-Based Multipoint Control Unit over IP Networks

  • Kang, Jin Ah;Han, Mikyong;Jang, Jong-Hyun;Kim, Hong Kook
    • ETRI Journal / Vol. 38, No. 6 / pp. 1064-1073 / 2016
  • An adaptive speech streaming method that improves the perceived speech quality of a software-based multipoint control unit (SW-based MCU) over IP networks is proposed. First, the proposed method predicts whether the speech packet to be transmitted will be lost: it learns the pattern of packet losses in the IP network and then predicts the loss of each packet to be transmitted over that network. It also classifies each speech frame as silence, unvoiced, speech onset, or voiced. Based on the results of packet loss prediction and speech classification, the method determines the proper amount and bitrate of redundant speech data (RSD) sent alongside the primary speech data (PSD) to help the speech decoder restore the speech signals of lost packets. Specifically, when a packet is predicted to be lost, the amount and bitrate of the RSD are increased by reducing the bitrate of the PSD. The effectiveness of the proposed method for learning the packet loss pattern and assigning different speech coding rates is demonstrated using a support vector machine and adaptive multirate-narrowband (AMR-NB) coding, respectively. The results show that, compared with conventional methods that restore lost speech signals, the proposed method remarkably improves the perceived speech quality of an SW-based MCU under various packet loss conditions in an IP network.
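The PSD/RSD bit trade-off described above can be sketched as follows; the loss predictor is a simple heuristic standing in for the paper's SVM, and the AMR-NB mode split is a hypothetical example:

```python
def predict_loss(history, window=8, threshold=0.25):
    """Heuristic stand-in for the SVM: predict a loss when the last packet was
    lost (bursty channel) or the recent loss rate exceeds the threshold."""
    recent = history[-window:]
    return bool(recent) and (recent[-1] == 1
                             or sum(recent) / len(recent) >= threshold)

def pick_rates(loss_predicted):
    """Shift bits from primary speech data (PSD) to redundant speech data (RSD)
    within a fixed budget when a loss is predicted (AMR-NB modes assumed)."""
    if loss_predicted:
        return {"psd_kbps": 6.7, "rsd_kbps": 5.15}   # protect the next packet
    return {"psd_kbps": 12.2, "rsd_kbps": 0.0}       # all bits to primary
```

Because the redundancy is sent only when a loss is predicted, the channel budget stays fixed while the decoder gains material for concealing the lost packet.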