• Title/Summary/Keyword: Speech signal processing

An Adaptive Utterance Verification Framework Using Minimum Verification Error Training

  • Shin, Sung-Hwan;Jung, Ho-Young;Juang, Biing-Hwang
    • ETRI Journal
    • /
    • v.33 no.3
    • /
    • pp.423-433
    • /
    • 2011
  • This paper introduces an adaptive and integrated utterance verification (UV) framework using minimum verification error (MVE) training as a new set of solutions suitable for real applications. UV is traditionally considered an add-on procedure to automatic speech recognition (ASR) and is thus treated separately from the design of the ASR system model. This traditional two-stage approach often fails to cope with a wide range of variations, such as a new speaker or a new environment that does not match the original speaker population or acoustic environment the ASR system was trained on. In this paper, we propose an integrated solution to enhance the overall UV system performance in such real applications. The integration is accomplished by adapting and merging the target model for UV with the acoustic model for ASR based on the common MVE principle at each iteration of the recognition stage. The proposed iterative procedure for UV model adaptation also involves revision of the data segmentation and the decoded hypotheses. Under this new framework, remarkable enhancement in not only recognition performance but also verification performance has been obtained.
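The two-stage baseline that the paper improves on can be sketched as a likelihood-ratio test: the decoded hypothesis is accepted only when the log-likelihood under a target model exceeds that under an anti-model by a threshold. The toy diagonal-Gaussian models, scores, and threshold below are illustrative assumptions, not the paper's MVE-trained system.

```python
import math

def log_likelihood(frames, model):
    # Toy diagonal-Gaussian scorer standing in for an HMM/GMM score.
    mean, var = model
    ll = 0.0
    for x in frames:
        ll += -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)
    return ll

def verify_utterance(frames, target_model, anti_model, threshold=0.0):
    # Classic UV decision: accept if the log-likelihood ratio between
    # the target model and the anti (filler) model clears a threshold.
    llr = log_likelihood(frames, target_model) - log_likelihood(frames, anti_model)
    return llr > threshold, llr

frames = [0.9, 1.1, 1.0, 0.8]   # hypothetical per-frame features
target = (1.0, 0.1)             # (mean, variance) of the claimed word
anti = (0.0, 1.0)               # broad anti-model
accepted, llr = verify_utterance(frames, target, anti)
```

An MVE-trained system would additionally adjust the two models' parameters to minimize a smoothed count of false acceptances and false rejections, rather than training them independently.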

Audio Signal Processing and System Design for improved intelligibility in Conference Room (회의실의 명료성(STI) 향상을 위한 오디오신호 처리 및 시스템 설계)

  • Kang, Cheolyong;Lee, Seokjoo;Jo, Kwangyeon;Lee, Seonhee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.17 no.2
    • /
    • pp.225-232
    • /
    • 2017
  • Recently, digital transmission technology for audio signals has advanced, and audio network equipment based on it has been introduced. As a result, audio network technology and equipment are actively applied to the design and construction of audio systems. A meeting room is a place where many participants exchange opinions and communicate with one another. This paper shows, through a design example, how electro-acoustic devices such as microphones and loudspeakers combined with an audio network can improve the intelligibility (STI) of a conference room.

Signal Processing and Implementation of Transmitter for Cochlear Implant (인공 와우를 위한 신호 처리 및 전달부의 구현)

  • Chae, D.;Choi, D.;Byun, J.;Baeck, S.;Kong, H.;Park, S.
    • Proceedings of the KIEE Conference
    • /
    • 1993.07a
    • /
    • pp.284-286
    • /
    • 1993
  • Software and hardware for a cochlear implant system have been developed to create a speech signal processing system which, in real time, extracts model parameters including formants, pitch, and amplitude information. The system is based on the Texas Instruments TMS320 family. In hardware, a computer interface has been designed and implemented that allows presentation of biphasic pulse stimuli to hearing-impaired patients. The host computer sends a stream of bytes to the parallel port. Upon receipt of the data, the interface generates the appropriate burst sequence, which is delivered to the patient's external transmitter coil. The coded information is interpreted by the Nucleus-22 internal receiver, which delivers the pulses to the specified electrodes at the specified amplitude and pulse width.

Emotion Recognition of Low Resource (Sindhi) Language Using Machine Learning

  • Ahmed, Tanveer;Memon, Sajjad Ali;Hussain, Saqib;Tanwani, Amer;Sadat, Ahmed
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.8
    • /
    • pp.369-376
    • /
    • 2021
  • One of the most active areas of research in affective computing and signal processing is emotion recognition. This paper proposes emotion recognition for the low-resource Sindhi language. The work's uniqueness is that it examines the emotions of a language for which there is currently no publicly accessible dataset. The proposed effort provides a dataset named MAVDESS (Mehran Audio-Visual Database of Emotional Speech in Sindhi) for the academic community of the Sindhi language, which is mainly spoken in Pakistan and for which, with few exceptions, no generic machine learning data is accessible. Furthermore, various emotions of the Sindhi language in MAVDESS have been analyzed and annotated using features such as pitch, volume, and base, toolkits such as openSMILE and scikit-learn, and classification schemes such as LR, SVC, DT, and KNN, implemented in Python for training. The dataset can be accessed via https://doi.org/10.5281/zenodo.5213073.
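The classification step described above (utterance-level features such as pitch and volume fed to a simple classifier like KNN) can be sketched with a tiny nearest-neighbour classifier. The feature values and labels below are made up for illustration; a real system would extract features with openSMILE and use scikit-learn's KNeighborsClassifier.

```python
import math

def knn_predict(train, query, k=3):
    # train: list of ((pitch, volume), label); query: (pitch, volume).
    # Plain k-nearest-neighbour vote on Euclidean distance, standing in
    # for scikit-learn's KNeighborsClassifier.
    dists = sorted((math.dist(feat, query), label) for feat, label in train)
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical utterance-level features: (mean pitch in Hz, mean volume in dB).
train = [
    ((210.0, 68.0), "angry"), ((205.0, 70.0), "angry"), ((215.0, 66.0), "angry"),
    ((120.0, 45.0), "sad"),   ((115.0, 47.0), "sad"),   ((125.0, 44.0), "sad"),
]
print(knn_predict(train, (208.0, 67.0)))  # nearest neighbours are all "angry"
```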

Implementation of Packet Voice Protocol (패킷음성 프로토콜의 구현)

  • 이상길;신병철;김윤관
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.12
    • /
    • pp.1841-1854
    • /
    • 1993
  • In this paper, a packet voice protocol for the transmission of voice signals over Ethernet is implemented on a personal computer (PC). The packet voice protocol used is a modification of the CCITT G.764 packetized voice protocol. The hardware that enables voice communication over Ethernet is divided into a telephone interface, speech processing, a PC interface, and controllers. The software structure of the protocol is designed according to the OSI seven-layer architecture and is divided into three routines: the Ethernet device driver, the telephone interface, and the packet voice protocol processing routine. Experiments over Ethernet with the telephone interface show that this packet voice communication achieves satisfactory quality when the network traffic is light.
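The packetization step of such a protocol can be sketched as follows: the encoded speech stream is split into fixed-size frames, each prefixed with a small header (sequence number and timestamp) so the receiver can reorder frames and schedule playout. The 4-byte header layout below is an illustrative assumption, not the modified G.764 format used in the paper.

```python
import struct

HEADER = struct.Struct(">HH")  # hypothetical header: 16-bit seq, 16-bit timestamp

def packetize(speech, frame_bytes=4, start_ts=0):
    # Split the encoded speech stream into frames and prepend a header.
    packets = []
    for seq, off in enumerate(range(0, len(speech), frame_bytes)):
        frame = speech[off:off + frame_bytes]
        packets.append(HEADER.pack(seq, start_ts + off) + frame)
    return packets

def depacketize(packets):
    # Reassemble in sequence-number order, tolerating network reordering.
    frames = {}
    for p in packets:
        seq, _ts = HEADER.unpack(p[:HEADER.size])
        frames[seq] = p[HEADER.size:]
    return b"".join(frames[s] for s in sorted(frames))

voice = bytes(range(10))
pkts = packetize(voice)
```

A real G.764-style protocol additionally carries noise/speech discrimination and drops packets that arrive past their playout deadline; this sketch covers only sequencing.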

Distance Functions to Detect Changes in Data Streams

  • Bud Ulziitugs;Lim, Jong-Tae
    • Journal of Information Processing Systems
    • /
    • v.2 no.1
    • /
    • pp.44-47
    • /
    • 2006
  • One of the critical issues in a sensor network concerns the detection of changes in data streams. Recently presented change detection schemes primarily use a sliding window model to detect changes. In such a model, a distance function is used to compare two sliding windows; the performance of the change detection scheme is therefore greatly influenced by the distance function. With regard to sensor nodes, however, energy consumption constitutes a critical design concern because the change detection scheme is implemented in a sensor node, which is a small battery-powered device. In this paper, we present a comparative study of various distance functions in terms of execution time, energy consumption, and detection accuracy through simulation on speech signal data. The simulation results demonstrate that the Euclidean distance function has the highest performance while consuming a low amount of power. We believe our work is the first attempt to undertake a comparative study of distance functions in terms of execution time, energy consumption, and detection accuracy.
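The sliding-window scheme the paper evaluates can be sketched as: two adjacent windows slide over the stream, a distance function compares them, and a change is flagged when the distance exceeds a threshold. The window size, threshold, and data below are arbitrary choices for illustration.

```python
import math

def euclidean(w1, w2):
    # The distance function the study found fastest and most power-efficient.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(w1, w2)))

def detect_change(stream, window=4, threshold=2.0):
    # Slide two adjacent windows over the stream; return the boundary index
    # at the first position where their distance exceeds the threshold,
    # or None if no change is found.
    for i in range(len(stream) - 2 * window + 1):
        ref = stream[i:i + window]
        cur = stream[i + window:i + 2 * window]
        if euclidean(ref, cur) > threshold:
            return i + window
    return None

stream = [0.1, 0.0, 0.2, 0.1, 0.1, 0.0, 3.1, 3.0, 3.2, 3.1]
```

Swapping `euclidean` for another distance (Manhattan, cosine, etc.) changes only the comparison cost, which is exactly the trade-off the paper measures.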

Design of digital DBNN for pattern recognition (패턴인식을 위한 디지탈 DBNN의 설계)

  • 송창영;문성룡;김환용
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.11
    • /
    • pp.3001-3011
    • /
    • 1996
  • In this paper, a digital DBNN circuit is designed using the DBNN algorithm, which is used in binary pattern classification and speech signal processing; the circuit can be expanded according to the size of the input data and the pattern type. The processing element (PE) of the proposed network consists of synapse and MAXNET circuits for measuring the similarity between the reference and input patterns. The global MAXNET selects the global winner among the local winners selected in each PE. Several simulations show that each PE and the global MAXNET find the reference pattern most similar to the input pattern.
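The two-level winner selection described above can be sketched in software: each PE scores the input against its own reference patterns and reports a local winner, and the global MAXNET picks the best among those. Hamming similarity on binary patterns is used here as a simple stand-in for the synapse circuit's similarity measure, and the patterns are illustrative.

```python
def hamming_similarity(a, b):
    # Number of matching bits between a reference and an input pattern.
    return sum(1 for x, y in zip(a, b) if x == y)

def local_winner(references, pattern):
    # Synapse + local MAXNET of one PE: best-matching reference in this PE.
    return max(references, key=lambda r: hamming_similarity(r, pattern))

def global_maxnet(pes, pattern):
    # Global MAXNET: pick the best among the local winners of all PEs.
    winners = [local_winner(refs, pattern) for refs in pes]
    return max(winners, key=lambda r: hamming_similarity(r, pattern))

pes = [
    [[0, 0, 0, 0], [1, 1, 0, 0]],   # references held by PE 0
    [[1, 1, 1, 1], [0, 1, 1, 0]],   # references held by PE 1
]
print(global_maxnet(pes, [1, 1, 0, 0]))
```

The hardware realization differs (MAXNET is an iterative mutual-inhibition circuit rather than a `max` call), but the selection logic is the same.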

Vector Quantization of Image Signal using Learning Count Control Neural Networks (학습 횟수 조절 신경 회로망을 이용한 영상 신호의 벡터 양자화)

  • 유대현;남기곤;윤태훈;김재창
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.1
    • /
    • pp.42-50
    • /
    • 1997
  • Vector quantization has been shown to be useful for compressing data in a wide range of applications such as image processing, speech processing, and weather satellite imagery. This paper proposes an efficient neural network learning algorithm, called the learning count control algorithm, based on the frequency-sensitive learning algorithm. With this algorithm, more codewords can be assigned to the regions to which the human visual system is sensitive, and the quality of the reconstructed image can be improved. We use a human visual system model that is a cascade of a nonlinear intensity mapping function and a modulation transfer function with a bandpass characteristic.
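The frequency-sensitive learning rule that the proposed algorithm builds on can be sketched as: the winner is the codeword with the smallest distance scaled by its win count, so heavily used codewords gradually yield to under-used ones, spreading the codebook over the data. A learning-count control scheme would additionally cap or reweight those counts. The codebook size, learning rate, and data below are illustrative.

```python
import math

def train_codebook(vectors, codebook, rate=0.5, epochs=3):
    # Frequency-sensitive competitive learning: the winner minimizes
    # (win count) * distance, which spreads codewords over the data.
    counts = [1] * len(codebook)
    for _ in range(epochs):
        for v in vectors:
            win = min(
                range(len(codebook)),
                key=lambda i: counts[i] * math.dist(codebook[i], v),
            )
            counts[win] += 1
            # Move the winning codeword toward the training vector.
            codebook[win] = [c + rate * (x - c) for c, x in zip(codebook[win], v)]
    return codebook

data = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]]  # two clusters
codebook = train_codebook(data, [[0.5, 0.5], [0.6, 0.5]])
```

After training, each codeword has migrated to one of the two clusters; plain competitive learning (without the count factor) can instead leave one codeword permanently unused.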

A Study on Speech Recognition for Neck-Microphone Input Signal (넥마이크로 입력된 음성 신호에 대한 인식 연구)

  • Lee, Yeon-Chul;Lee, Sahng-Woon;Hong, Hun-Sop;Han, Mun-Sung;Ma, Pyong-Soo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2002.11a
    • /
    • pp.747-750
    • /
    • 2002
  • Because the microphones in common use are sensitive to noise and thus degrade speech recognition performance, this paper examines the characteristics of speech signals captured by a highly directional neck microphone, which is unaffected by noise, and evaluates its recognition performance on an existing recognition system trained on speech from conventional microphones. The neck microphone is worn on the neck and captures speech on the same principle as a conventional microphone. Experiments show that speech captured by the neck microphone yields lower recognition performance than speech from a conventional microphone, making it a subject for future research on new interfaces.

A Comparative Study of the Speech Signal Parameters for the Consonants of Pyongyang and Seoul Dialects - Focused on "ㅅ/ㅆ" (평양 지역어와 서울 지역어의 자음에 대한 음성신호 파라미터들의 비교 연구 - "ㅅ/ ㅆ"을 중심으로)

  • So, Shin-Ae;Lee, Kang-Hee;You, Kwang-Bock;Lim, Ha-Young
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.8 no.6
    • /
    • pp.927-937
    • /
    • 2018
  • In this paper, a comparative study of the consonants of the Pyongyang and Seoul dialects of Korean is performed from the perspective of signal processing, which can be regarded as the basis of engineering applications. Until now, most speech signal studies have focused primarily on vowels, which play an important role in language evolution. In any language, however, the number of consonants is greater than the number of vowels, so research on consonants is also important. In this paper, building on the phonological and experimental phonetic studies of the vowels of the Pyongyang dialect, the consonants are analyzed from an engineering standpoint. The alveolar consonant, which shows many differences in phonetic value between the Pyongyang and Seoul dialects, was used as the experimental data. The major parameters of speech signal analysis (formant frequency, pitch, and spectrogram) were measured, and the phonetic values of the two dialects were compared with respect to Korean /시/ and /씨/. This study can serve as a basis for future voice recognition and voice synthesis.
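Of the parameters measured above (formant frequency, pitch, spectrogram), pitch is the simplest to sketch: a short-time autocorrelation peak gives the fundamental period, and the sample rate divided by that lag gives the pitch. The synthetic 120 Hz test signal and search range below are illustrative assumptions; the paper's measurements come from recorded speech.

```python
import math

def estimate_pitch(samples, sample_rate, fmin=80, fmax=300):
    # Autocorrelation pitch estimator: the lag with the strongest
    # self-similarity inside the plausible pitch range wins.
    lo = int(sample_rate / fmax)
    hi = int(sample_rate / fmin)
    best_lag, best_corr = lo, float("-inf")
    for lag in range(lo, hi + 1):
        corr = sum(samples[i] * samples[i - lag] for i in range(lag, len(samples)))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sample_rate / best_lag

sr = 8000
tone = [math.sin(2 * math.pi * 120 * t / sr) for t in range(800)]  # 120 Hz "voice"
print(round(estimate_pitch(tone, sr)))
```

Formant measurement needs more machinery (LPC or spectral-envelope peak picking), which is why studies like this one typically use a speech-analysis tool rather than a hand-rolled estimator.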