• Title/Summary/Keyword: Speech Database

Search Result 331

Sound event classification using deep neural network based transfer learning (깊은 신경망 기반의 전이학습을 이용한 사운드 이벤트 분류)

  • Lim, Hyungjun;Kim, Myung Jong;Kim, Hoirin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.35 no.2
    • /
    • pp.143-148
    • /
    • 2016
  • Deep neural networks that effectively capture the characteristics of data have been widely used in various applications. However, the amount of available sound data is often insufficient to train a deep neural network properly, resulting in overfitting. In this paper, we propose a transfer learning framework that can effectively train a deep neural network even with insufficient sound event data by exploiting rich speech or music data. A series of experimental results verifies that the proposed method performs significantly better than a baseline deep neural network trained only on the small sound event dataset.
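The abstract gives no implementation details; as a rough illustration of the transfer-learning idea it describes (pre-train a DNN on abundant speech or music data, then fine-tune the transferred layers on scarce sound event data), a minimal Keras sketch might look as follows. The layer sizes, feature dimension, and synthetic stand-in data are assumptions, not the authors' configuration.

```python
# Minimal transfer-learning sketch (assumed architecture, not the authors' exact network).
import numpy as np
from tensorflow.keras import layers, models

FEAT_DIM, N_SPEECH_CLASSES, N_EVENT_CLASSES = 40, 10, 5

# Stand-in data: replace with features from a large speech/music corpus (source task)
# and a small sound-event set (target task).
X_src, y_src = np.random.randn(2000, FEAT_DIM), np.random.randint(0, N_SPEECH_CLASSES, 2000)
X_tgt, y_tgt = np.random.randn(200, FEAT_DIM), np.random.randint(0, N_EVENT_CLASSES, 200)

# 1) Pre-train the hidden layers on the large source task.
base = models.Sequential([
    layers.Input(shape=(FEAT_DIM,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(256, activation="relu"),
])
src_model = models.Sequential([base, layers.Dense(N_SPEECH_CLASSES, activation="softmax")])
src_model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
src_model.fit(X_src, y_src, epochs=3, verbose=0)

# 2) Transfer the hidden layers and fine-tune on the small sound-event set.
tgt_model = models.Sequential([base, layers.Dense(N_EVENT_CLASSES, activation="softmax")])
tgt_model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
tgt_model.fit(X_tgt, y_tgt, epochs=10, verbose=0)
```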

Emotion Recognition of Low Resource (Sindhi) Language Using Machine Learning

  • Ahmed, Tanveer;Memon, Sajjad Ali;Hussain, Saqib;Tanwani, Amer;Sadat, Ahmed
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.8
    • /
    • pp.369-376
    • /
    • 2021
  • One of the most active areas of research in affective computing and signal processing is emotion recognition. This paper proposes emotion recognition for a low-resource (Sindhi) language. The uniqueness of this work is that it examines the emotions of a language for which no publicly accessible dataset currently exists. The proposed effort provides a dataset named MAVDESS (Mehran Audio-Visual Database of Emotional Speech in Sindhi) for the academic community of the Sindhi language, which is mainly spoken in Pakistan and for which very little generic machine learning data is available. Furthermore, the various emotions in MAVDESS were analyzed and annotated using features such as pitch, volume, and base, with toolkits such as openSMILE and scikit-learn and classification schemes such as LR, SVC, DT, and KNN implemented in Python. The dataset can be accessed via https://doi.org/10.5281/zenodo.5213073.
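As a hedged sketch of the classifier comparison mentioned in this abstract (LR, SVC, DT, and KNN applied to acoustic features with scikit-learn), the snippet below cross-validates the four models on a placeholder feature matrix; in practice X and y would come from openSMILE features extracted from the MAVDESS recordings.

```python
# Compare the four classifiers named in the abstract on placeholder acoustic features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.randn(300, 88)          # placeholder for per-utterance acoustic functionals
y = np.random.randint(0, 6, 300)      # placeholder emotion labels

classifiers = {
    "LR": LogisticRegression(max_iter=1000),
    "SVC": SVC(),
    "DT": DecisionTreeClassifier(),
    "KNN": KNeighborsClassifier(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```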

Human Laughter Generation using Hybrid Generative Models

  • Mansouri, Nadia;Lachiri, Zied
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.5
    • /
    • pp.1590-1609
    • /
    • 2021
  • Laughter is one of the most important nonverbal sounds that humans generate; it is a means of expressing emotion. The acoustic and contextual features of this specific sound differ from those of speech, and many difficulties arise when modeling it. In this work, we propose an audio laughter generation system based on unsupervised generative models: the autoencoder (AE) and its variants. The procedure combines three main sub-processes: (1) analysis, which extracts the log-magnitude spectrogram from the laughter database, (2) training of the generative models, and (3) synthesis, which involves an intermediate mechanism, the vocoder. To improve synthesis quality, we suggest three hybrid models (LSTM-VAE, GRU-VAE, and CNN-VAE) that combine the representation learning capacity of the variational autoencoder (VAE) with the temporal modelling ability of long short-term memory (LSTM) RNNs and the ability of CNNs to learn invariant features. To assess the performance of the proposed audio laughter generation process, an objective evaluation (RMSE) and a perceptual audio quality test (listening test) were conducted. According to these evaluation metrics, the GRU-VAE outperforms the other VAE models.
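A compact PyTorch sketch of one of the hybrid models named above (a GRU-VAE over log-magnitude spectrogram frames) is shown below. The layer sizes, KL weight, and training step are assumptions, and a vocoder such as Griffin-Lim would still be needed to turn generated spectrograms into audio.

```python
# Rough GRU-VAE sketch over spectrogram frames; dimensions and loss weighting are assumed.
import torch
import torch.nn as nn

class GRUVAE(nn.Module):
    def __init__(self, n_bins=80, hidden=128, latent=16):
        super().__init__()
        self.enc = nn.GRU(n_bins, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.dec_in = nn.Linear(latent, hidden)
        self.dec = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_bins)

    def forward(self, x):                      # x: (batch, time, n_bins)
        h, _ = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        d, _ = self.dec(self.dec_in(z))
        return self.out(d), mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = nn.functional.mse_loss(recon, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + 1e-3 * kld                    # small KL weight (an assumption)

model = GRUVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 100, 80)                    # placeholder batch of log-spectrograms
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
loss.backward()
opt.step()
```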

A Study on Detection of Abnormal Patterns Based on AI·IoT to Support Environmental Management of Architectural Spaces (건축공간 환경관리 지원을 위한 AI·IoT 기반 이상패턴 검출에 관한 연구)

  • Kang, Tae-Wook
    • Journal of KIBIM
    • /
    • v.13 no.3
    • /
    • pp.12-20
    • /
    • 2023
  • Deep learning-based anomaly detection is used in fields such as computer vision, speech recognition, and natural language processing, with applications including monitoring manufacturing equipment, detecting financial fraud, detecting network intrusions, and finding anomalies in medical images. In construction and architecture, however, research on deep learning-based anomaly detection is difficult because of the late digital transition of the domain, the resulting lack of digitized domain knowledge and training data, and the difficulty of collecting and processing field data in real time. From the viewpoint of monitoring for environmental management of architectural spaces, this study acquires the necessary data through IoT (Internet of Things) devices, stores it in a database, trains a deep learning model, and proposes an implementation process that detects abnormal patterns using AI (Artificial Intelligence) deep learning-based anomaly detection. The results suggest an effective solution architecture for detecting environmental anomaly patterns in architectural spaces and demonstrate its feasibility. The proposed method enables quick response through real-time processing and analysis of the data collected from IoT devices. To confirm its effectiveness, a performance analysis is carried out on a prototype implementation.
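The paper does not spell out its network here; one common deep-learning anomaly-detection pattern for IoT sensor streams, given as an illustration only, is a reconstruction autoencoder trained on normal readings, with high reconstruction error flagging abnormal patterns. The architecture and threshold rule below are assumptions.

```python
# Illustrative reconstruction-autoencoder anomaly detector for IoT sensor readings.
import numpy as np
from tensorflow.keras import layers, models

N_SENSORS = 8                                    # e.g. temperature, humidity, CO2, ...
normal = np.random.randn(5000, N_SENSORS)        # placeholder for logged normal readings

ae = models.Sequential([
    layers.Input(shape=(N_SENSORS,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(4, activation="relu"),          # bottleneck
    layers.Dense(16, activation="relu"),
    layers.Dense(N_SENSORS),
])
ae.compile("adam", "mse")
ae.fit(normal, normal, epochs=5, verbose=0)      # train on normal data only

# Threshold chosen from the training reconstruction error (99th percentile, an assumption).
train_err = np.mean((normal - ae.predict(normal, verbose=0)) ** 2, axis=1)
threshold = np.percentile(train_err, 99)

def is_anomalous(batch):
    """Flag readings whose reconstruction error exceeds the learned threshold."""
    err = np.mean((batch - ae.predict(batch, verbose=0)) ** 2, axis=1)
    return err > threshold
```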

Identification and Detection of Emotion Using Probabilistic Output SVM (확률출력 SVM을 이용한 감정식별 및 감정검출)

  • Cho, Hoon-Young;Jung, Gue-Jun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.8
    • /
    • pp.375-382
    • /
    • 2006
  • This paper addresses how to identify emotional information and how to detect a specific emotion from speech signals. For the emotion identification and detection tasks, we use long-term acoustic feature parameters and select the optimal parameters with a feature selection technique based on the F-score. We transform a conventional SVM into a probabilistic-output SVM for our emotion identification and detection system. We propose three approximation methods for the log-likelihoods in a hypothesis test and compare their performance. Experimental results on the SUSAS database show the effectiveness of both the feature selection and the probabilistic-output SVM in the emotion identification task. The proposed methods detected the anger emotion with 91.3% correctness.
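A rough scikit-learn analogue of the described pipeline (F-score based feature selection followed by an SVM with probabilistic outputs) is sketched below. It uses ANOVA F-scores and Platt scaling as stand-ins and does not reproduce the paper's three log-likelihood approximation methods.

```python
# F-score feature selection + probabilistic-output SVM, as a scikit-learn analogue.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.randn(400, 60)              # placeholder long-term acoustic features
y = np.random.randint(0, 4, 400)          # placeholder emotion labels

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=20),         # keep the 20 highest F-score features
    SVC(probability=True),                # probabilistic output via Platt scaling
)
clf.fit(X, y)
posteriors = clf.predict_proba(X[:5])     # per-class emotion probabilities
print(posteriors.round(3))
```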

Optimization of Memristor Devices for Reservoir Computing (축적 컴퓨팅을 위한 멤리스터 소자의 최적화)

  • Kyeongwoo Park;HyeonJin Sim;HoBin Oh;Jonghwan Lee
    • Journal of the Semiconductor & Display Technology
    • /
    • v.23 no.1
    • /
    • pp.1-6
    • /
    • 2024
  • Recently, artificial neural networks have been playing a crucial role and advancing across various fields. They are typically categorized into feedforward neural networks and recurrent neural networks. Feedforward neural networks are primarily used for processing static spatial patterns, such as image recognition and object detection, and are not suitable for handling temporal signals, while recurrent neural networks face the challenges of complex training procedures and significant computational requirements. In this paper, we propose memristors suitable for reservoir computing systems, an advanced form of recurrent neural network that utilizes a mask processor. Using the characteristic equations of Ti/TiOx/TaOy/Pt, Pt/TiOx/Pt, and Ag/ZnO-NW/Pt memristors, we generated current-voltage curves and verified their memristive behavior by confirming hysteresis. We then trained and evaluated reservoir computing systems based on these memristors using the NIST TI-46 database. Among them, the reservoir computing system based on the Ti/TiOx/TaOy/Pt memristor reached 99% accuracy, confirming that this memristor structure is suitable for speech recognition tasks.
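The device equations themselves are not reproduced here; the snippet below is only a generic software sketch of the mask-based reservoir idea, with a leaky nonlinear state standing in for the memristor dynamics and only a linear readout being trained.

```python
# Generic mask-based reservoir sketch (not the paper's memristor device equations).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T, N_VIRTUAL = 500, 50
u = rng.standard_normal(T)                      # placeholder 1-D input signal
targets = np.roll(u, 1)                         # toy task: recall the previous sample

mask = rng.choice([-1.0, 1.0], size=N_VIRTUAL)  # fixed random input mask (virtual nodes)
states = np.zeros((T, N_VIRTUAL))
x = np.zeros(N_VIRTUAL)
for t in range(T):
    # leaky nonlinear update as a stand-in for the memristor state dynamics
    x = 0.7 * x + 0.3 * np.tanh(mask * u[t] + 0.1 * np.roll(x, 1))
    states[t] = x

# Only the linear readout is trained, as in reservoir computing.
readout = Ridge(alpha=1e-3).fit(states[:400], targets[:400])
print("test R^2:", readout.score(states[400:], targets[400:]))
```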


Design and Implementation of a Real-time Bio-signal Obtaining, Transmitting, Compressing and Storing System for Telemedicine (원격 진료를 위한 실시간 생체 신호 취득, 전송 및 압축, 저장 시스템의 설계 및 구현)

  • Jung, In-Kyo;Kim, Young-Joon;Park, In-Su;Lee, In-Sung
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.45 no.4
    • /
    • pp.42-50
    • /
    • 2008
  • A real-time bio-signal monitoring system based on ZigBee and SIP/RTP has previously been proposed and implemented for telemedicine, but it has stability problems when transmitting bio-signals from the sensors to the other side. In this paper, we design and implement a real-time bio-signal monitoring system focused on the reliability and efficiency of real-time bio-signal transmission. The system has an enhanced architecture and improved performance in the ubiquitous sensor network, SIP/RTP real-time transmission, and database management. A Bluetooth network is combined with the ZigBee network to distribute the traffic of the ECG and the other bio-signals. Modified and multiplied RTP sessions are used to ensure real-time transmission of the ECG, other bio-signals, and speech information over the Internet. A modified ECG compression method based on DWLT and MSVQ reduces the data rate for storing the ECG in the database. The implemented system improves the performance of bio-signal transmission from the sensors to the monitoring console and database, and it enables various U-health care service applications.
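As a simplified illustration of wavelet-based ECG compression in the spirit of the DWLT stage mentioned above (the MSVQ quantization stage is not reproduced), the sketch below keeps only the largest wavelet coefficients and measures the reconstruction error. It assumes the PyWavelets package and a synthetic stand-in signal.

```python
# Simplified wavelet-thresholding compression of an ECG-like signal (illustration only).
import numpy as np
import pywt

fs = 360
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)   # stand-in ECG signal

coeffs = pywt.wavedec(ecg, "db4", level=5)
flat, slices = pywt.coeffs_to_array(coeffs)

keep = 0.05                                        # keep the top 5% of coefficients
thresh = np.quantile(np.abs(flat), 1 - keep)
flat_compressed = np.where(np.abs(flat) >= thresh, flat, 0.0)

recon = pywt.waverec(
    pywt.array_to_coeffs(flat_compressed, slices, output_format="wavedec"), "db4"
)
prd = np.linalg.norm(ecg - recon[:ecg.size]) / np.linalg.norm(ecg) * 100
print(f"PRD after discarding 95% of coefficients: {prd:.2f}%")
```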

Front-End Processing for Speech Recognition in the Telephone Network (전화망에서의 음성인식을 위한 전처리 연구)

  • Jun, Won-Suk;Shin, Won-Ho;Yang, Tae-Young;Kim, Weon-Goo;Youn, Dae-Hee
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.4
    • /
    • pp.57-63
    • /
    • 1997
  • In this paper, we study efficient feature vector extraction methods and front-end processing to improve the performance of a speech recognition system using the KT (Korea Telecommunication) database collected over various telephone channels. First, we compare the recognition performance of feature vectors known to be robust to noise and environmental variation and verify the performance gains from weighted cepstral distance measures. The experimental results show that the recognition rate is increased by using either PLP (Perceptual Linear Prediction) or MFCC (Mel-Frequency Cepstral Coefficient) features compared with the LPC cepstrum used in the KT recognition system. For the cepstral distance measure, weighted functions such as RPS (Root Power Sums) and BPL (Band-Pass Lifter) improve recognition. Applying spectral subtraction decreases the recognition rate because of the distortion it introduces, whereas RASTA (RelAtive SpecTrAl) processing, CMS (Cepstral Mean Subtraction), and SBR (Signal Bias Removal) enhance recognition performance; the CMS method in particular is simple yet yields a large improvement. Finally, we compare modified methods for real-time implementation of CMS and suggest an improved method that prevents performance degradation.
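A minimal sketch of the front-end steps reported to help most above, MFCC extraction followed by cepstral mean subtraction (CMS), is given below using librosa; the signal is a synthetic placeholder and the parameter choices are assumptions.

```python
# MFCC extraction followed by cepstral mean subtraction (CMS) on a placeholder signal.
import numpy as np
import librosa

sr = 8000                                            # telephone-band sampling rate
y = np.random.randn(2 * sr).astype(np.float32)       # stand-in for a telephone utterance
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: (13, n_frames)

# CMS: subtract the per-utterance mean of each cepstral coefficient; this removes a
# stationary convolutional component such as the telephone channel response.
mfcc_cms = mfcc - mfcc.mean(axis=1, keepdims=True)
```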


A Study on a Landscape Color Analysis according to Regional Environment - Centering on Damyang County, Jeollanamdo - (지역 환경에 따른 경관 색채분석에 관한 연구 - 전라남도 담양군을 중심으로 -)

  • Choi, Seong-Kyung;Moon, Jung-Min
    • Korean Institute of Interior Design Journal
    • /
    • v.21 no.4
    • /
    • pp.146-154
    • /
    • 2012
  • As Damyang has preserved both its beautiful natural environment and its traditions very well, it needs colors that can coexist with and preserve the existing landscape rather than colorful, overly refined colors. At present, however, chaotic use of color degrades the quality of Damyang's beautiful natural scenery. Therefore, colors that can represent its symbolism, based on the colors currently present in Damyang, should be used so that everyone can be pleased with them. Finally, the selected basic colors were classified into main, supplementary, and highlight colors in consideration of the characteristics of each scene and were arranged accordingly. If such colors and color schemes are properly applied according to the characteristics of each scene, the ecological, historical, cultural, and traditional landscapes of Damyang can be preserved consistently.


Research on Classification of Human Emotions Using EEG Signal (뇌파신호를 이용한 감정분류 연구)

  • Zubair, Muhammad;Kim, Jinsul;Yoon, Changwoo
    • Journal of Digital Contents Society
    • /
    • v.19 no.4
    • /
    • pp.821-827
    • /
    • 2018
  • Affective computing has gained increasing interest in recent years with the development of potential applications in human-computer interaction (HCI) and healthcare. Although considerable research has been done on human emotion recognition, less attention has been paid to physiological signals than to speech and facial expressions. In this paper, electroencephalogram (EEG) signals from different brain regions were investigated using modified wavelet energy features. To minimize redundancy and maximize relevance among features, the mRMR algorithm was applied. EEG recordings from the publicly available DEAP database were used to classify four classes of emotions with a multi-class Support Vector Machine. The proposed approach shows significant performance gains compared to existing algorithms.
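A simplified sketch of the described pipeline is given below: wavelet-band energy features per EEG channel, a mutual-information ranking used as a stand-in for mRMR, and a multi-class SVM. The trial and channel counts and the labels are placeholders, not the DEAP protocol.

```python
# Wavelet-band energy features + feature ranking + multi-class SVM (illustration only).
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_energies(signal, wavelet="db4", level=4):
    """Energy of each wavelet sub-band of a single-channel EEG segment."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(0)
trials, channels, samples = 200, 32, 512
eeg = rng.standard_normal((trials, channels, samples))   # placeholder EEG segments
y = rng.integers(0, 4, trials)                           # 4 placeholder emotion classes

# One feature vector per trial: concatenated sub-band energies of all channels.
X = np.array([np.concatenate([wavelet_energies(ch) for ch in trial]) for trial in eeg])

clf = make_pipeline(StandardScaler(),
                    SelectKBest(mutual_info_classif, k=40),   # stand-in for mRMR
                    SVC(decision_function_shape="ovr"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```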