• Title/Summary/Keyword: Speech Database


Node-Link Development for Pedestrian Navigation System (PNS 네트워크 Node-Link 구성체계)

  • Nam, Doo-Hee;Kim, Young-Shin
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.7 no.5 / pp.26-32 / 2008
  • A pedestrian navigation system consists of a portable terminal, an information delivery server, and a program that guides the user naturally (for example, by speech) at intersections. The information delivery server contains a map database holding the nodes that constitute intersections, the links between them, and the costs of those links. This node-link structure is the most important part of a pedestrian navigation system, and the functional requirements on the road map database differ across navigation phases. Although various road network models exist, their traditional node-link structures unfortunately do not meet these requirements well. This paper proposes a node-link structure for pedestrian navigation systems and presents a network topology for the pedestrian network that matches practical walking behavior better than the traditional approach of treating the entire road network uniformly.

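As a concrete aside to the entry above: a minimal sketch of a node-link map with link costs and a shortest-path query over it. The node names, costs, and schema are illustrative assumptions, not the structure proposed in the paper.

```python
import heapq

# Hypothetical node-link map: nodes are intersection/crosswalk points,
# links carry a traversal cost (e.g., walking time in seconds).
links = {
    "N1": [("N2", 30), ("N3", 45)],   # node -> list of (neighbor, link cost)
    "N2": [("N1", 30), ("N4", 20)],
    "N3": [("N1", 45), ("N4", 25)],
    "N4": [("N2", 20), ("N3", 25)],
}

def shortest_path(start, goal):
    """Dijkstra over the node-link graph; returns (total cost, node sequence)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in links.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []

print(shortest_path("N1", "N4"))   # (50, ['N1', 'N2', 'N4'])
```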

Establishment of the Korean Standard Vocal Sound into Character Conversion Rule (한국어 음가를 한글 표기로 변환하는 표준규칙 제정)

  • 이계영;임재걸
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.2 / pp.51-64 / 2004
  • The purpose of this paper is to establish the Standard Korean Vocal Sound into Character Conversion Rule (Standard VSCC Rule) by reversely applying the Korean Standard Pronunciation Rule, which regulates how written Hangeul sentences are read aloud. The Standard VSCC Rule plays a crucial role in Korean speech recognition. The general method of speech recognition is to find, among the standard voice patterns, the one most similar to the input voice pattern, where each standard voice pattern is an average of several sample voice patterns. If the unit of the standard voice pattern were a word, the number of entries would exceed a few million (taking inflections and postpositional particles into account); so many entries would require a huge database and an impractically large number of comparisons when searching for the most similar pattern. Therefore, the unit of the standard voice pattern should be a syllable, which raises the problem that Korean vocal sounds differ from the written characters. Converting a sequence of Korean vocal sounds into a sequence of characters requires our Standard VSCC Rule, and using it we have implemented a Korean vocal sound to Hangeul character conversion system. The Korean Standard Pronunciation Rule consists of 30 items. To show the soundness and completeness of our Standard VSCC Rule, we tested the conversion system with data sets reflecting all 30 items; the test results are presented in this paper.
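
A toy illustration of the reverse-pronunciation idea described above, assuming a small lookup table of pronounced forms; the actual Standard VSCC Rule with its 30 items is not reproduced here.

```python
# Toy reverse-pronunciation table (illustrative entries, not the paper's rule set).
# Maps a pronounced (vocal sound) form back to its standard written Hangeul form.
REVERSE_RULES = {
    "가치": "같이",   # palatalization: 같이 is pronounced [가치]
    "궁물": "국물",   # nasalization: 국물 is pronounced [궁물]
}

def sounds_to_spelling(pronounced: str) -> str:
    """Return the written form for a pronounced form; fall back to the input."""
    return REVERSE_RULES.get(pronounced, pronounced)

for sound in ["가치", "궁물", "나무"]:
    print(sound, "->", sounds_to_spelling(sound))
```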

English Phoneme Recognition using Segmental-Feature HMM (분절 특징 HMM을 이용한 영어 음소 인식)

  • Yun, Young-Sun
    • Journal of KIISE: Software and Applications / v.29 no.3 / pp.167-179 / 2002
  • In this paper, we propose a new acoustic model for characterizing segmental features, and an algorithm built on the general framework of hidden Markov models (HMMs), to compensate for the weaknesses of the HMM assumptions. The segmental features are represented as a trajectory of observed vector sequences by a polynomial regression function, because a single frame feature cannot effectively represent the temporal dynamics of speech signals. To apply the segmental features to pattern classification, we adopt the segmental HMM (SHMM), which is known to be an effective way of representing the trend of speech signals. The SHMM separates the observation probability of a given state into extra- and intra-segmental variations, which capture long-term and short-term variability, respectively. To incorporate segmental characteristics into the acoustic model, we present the segmental-feature HMM (SFHMM) by modifying the SHMM: the SFHMM represents the external and internal variation as, respectively, the observation probability of the trajectory in a given state and the trajectory estimation error for the given segment. We conducted several experiments on the TIMIT database to establish the effectiveness of the proposed method and the characteristics of the segmental features. From the experimental results, we conclude that although the proposed method uses more parameters than the conventional HMM, it is valuable for its flexible and informative feature representation and its performance improvement.
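
A minimal numerical sketch of the trajectory representation mentioned above: a segment of frames is fitted with a per-dimension polynomial over normalized time, and the residual plays the role of the trajectory estimation error. The segment, dimensions, and polynomial order are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, order = 12, 13, 2                 # frames per segment, feature dim, poly order
segment = rng.normal(size=(T, D))       # stand-in for a segment of acoustic features

t = np.linspace(0.0, 1.0, T)
V = np.vander(t, order + 1)             # (T, order+1) design matrix [t^2, t, 1]
coeffs, *_ = np.linalg.lstsq(V, segment, rcond=None)   # per-dimension regression
trajectory = V @ coeffs                 # (T, D) fitted trajectory
residual = segment - trajectory         # analogue of the trajectory estimation error

print(coeffs.shape, float(np.mean(residual ** 2)))
```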

A study on end-to-end speaker diarization system using single-label classification (단일 레이블 분류를 이용한 종단 간 화자 분할 시스템 성능 향상에 관한 연구)

  • Jaehee Jung;Wooil Kim
    • The Journal of the Acoustical Society of Korea / v.42 no.6 / pp.536-543 / 2023
  • Speaker diarization, which labels "who spoke when" in speech with multiple speakers, has been studied with deep neural network-based end-to-end methods that label overlapping speech and optimize the diarization model directly. Most such systems treat diarization as a multi-label classification problem that predicts the labels of all speakers active in each frame, but the performance of multi-label models varies greatly depending on how the decision threshold is set. In this paper we study a speaker diarization system that uses single-label classification, so that diarization can be performed without a threshold. The proposed model converts the speaker labels into a single label and estimates that label from the model output. To handle speaker label permutations during training, the model uses a combination of permutation invariant training (PIT) loss and cross-entropy loss. We also study how to add residual connections for effective learning of diarization models with deep structures. The experiments used simulated noisy two-speaker data generated from the LibriSpeech database. Compared with the baseline model in terms of Diarization Error Rate (DER), the proposed method performs labeling without a threshold and improves performance by about 20.7 %.
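
A small sketch of the single-label (powerset-style) conversion and permutation handling described above, for two speakers; the encoding and the toy PIT cross-entropy are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Frame-level multi-labels for two speakers: columns = (spk1 active, spk2 active).
multi = np.array([[1, 0], [1, 1], [0, 1], [0, 0]])

# Single-label encoding: 0 = silence, 1 = spk1, 2 = spk2, 3 = overlap (assumed here).
single = multi[:, 0] + 2 * multi[:, 1]          # [1 3 2 0]

# Decoding is an argmax over the four class posteriors, so no threshold is needed.
posteriors = np.array([[0.10, 0.70, 0.10, 0.10],
                       [0.00, 0.20, 0.10, 0.70],
                       [0.20, 0.10, 0.60, 0.10],
                       [0.80, 0.10, 0.05, 0.05]])
decoded = posteriors.argmax(axis=1)             # [1 3 2 0]

def pit_cross_entropy(post, multi_labels):
    """Toy permutation-invariant CE: take the better of the two speaker orderings."""
    losses = []
    for a, b in [(0, 1), (1, 0)]:
        labels = multi_labels[:, a] + 2 * multi_labels[:, b]
        losses.append(-np.log(post[np.arange(len(labels)), labels] + 1e-9).mean())
    return min(losses)

print(single, decoded, round(pit_cross_entropy(posteriors, multi), 3))
```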

Review of Clinical Researches about Korean Medicine Treatment on Language Disorder of Cerebral Palsy (뇌성마비 언어장애에 대한 한의 치료 연구 동향)

  • Kim, Lakhyung;Yu, Gyung
    • The Journal of Pediatrics of Korean Medicine / v.26 no.4 / pp.32-37 / 2012
  • Objectives: The purpose of this study was to gain an understanding of Korean medicine treatment for language disorder in cerebral palsy from the clinical studies, to inform future practice and research. Methods: The literature was searched using the China Academic Journals (CAJ) database. Clinical studies of Korean medicine treatment for language disorder in cerebral palsy, including randomized controlled trials (RCTs), case-control studies, case series, and case reports, were analyzed. Results: Fifteen clinical studies met our inclusion criteria: one case study, six case series, one non-randomized controlled trial, and seven RCTs. Acupuncture, especially head acupuncture, was the major treatment for language disorder of cerebral palsy, being used in fourteen studies; acupoint massage, tuina, and acupoint injection were also employed. In many studies, acupuncture was combined with language therapy and other rehabilitation treatments. In all RCTs, the effectiveness in the treatment groups, regardless of treatment method, was higher than in the control groups. Conclusions: The results of this study could inform practice and future research on language disorder of cerebral palsy.

SVM Based Speaker Verification Using Sparse Maximum A Posteriori Adaptation

  • Kim, Younggwan;Roh, Jaeyoung;Kim, Hoirin
    • IEIE Transactions on Smart Processing and Computing / v.2 no.5 / pp.277-281 / 2013
  • Modern speaker verification systems based on support vector machines (SVMs) use Gaussian mixture model (GMM) supervectors as their input feature vectors, and maximum a posteriori (MAP) adaptation is the conventional method for generating speaker-dependent GMMs by adapting a universal background model (UBM). MAP adaptation requires an appropriate amount of input speech because of the number of model parameters to be estimated; with limited utterances, MAP adaptation can be unreliable and introduce adaptation noise, even though the Bayesian priors used in MAP adaptation smooth the movement between the UBM and the speaker-dependent GMM. This paper proposes a sparse MAP adaptation method, which is known to perform well in automatic speech recognition. By introducing sparse MAP adaptation into the GMM-SVM-based speaker verification system, the adaptation noise can be mitigated effectively. The proposed method uses the L0 norm as a regularizer to induce sparsity. Experimental results on the TIMIT database showed that the sparse MAP-based GMM-SVM speaker verification system yields a 42.6% relative reduction in equal error rate with few additional computations.

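A numerical sketch of mean-only MAP adaptation followed by a sparsifying step, in the spirit of the entry above. The relevance factor, statistics, and the hard threshold (used here as a stand-in for the L0 regularizer) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, D = 4, 3                                   # toy UBM: K Gaussians, D-dim features
ubm_means = rng.normal(size=(K, D))

# Sufficient statistics assumed collected from a short adaptation utterance:
# soft counts per component and the mean of the frames assigned to each component.
n = np.array([5.0, 0.4, 12.0, 0.1])
data_means = ubm_means + rng.normal(scale=0.3, size=(K, D))

tau = 10.0                                    # MAP relevance factor
alpha = (n / (n + tau))[:, None]
map_means = alpha * data_means + (1.0 - alpha) * ubm_means

# Sparsifying step: keep only the larger mean shifts, zero out the rest
# (a hard-threshold stand-in for the L0 penalty, to curb adaptation noise).
shift = map_means - ubm_means
sparse_means = ubm_means + np.where(np.abs(shift) > 0.05, shift, 0.0)

# Concatenating the adapted means gives the GMM supervector fed to the SVM.
supervector = sparse_means.reshape(-1)
print(supervector.shape)                      # (12,)
```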

Preliminary Analysis of Language Styles between South and North Korean Broadcastings (남북한 방송언어의 차이에 대한 기초 분석)

  • Lee, Chang-H.;Kim, Kyung-Il;Park, Jong-Min
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.9 / pp.3311-3317 / 2010
  • This study compared South and North Korean broadcasting language to measure the differences that have arisen from the long separation, and to provide a fundamental database on language use in South and North Korea. Text selected from news clips of South and North Korean broadcasting agencies was analyzed with the KLIWC. The results showed that North Korean broadcast language differed significantly from that of the South in affective, cognitive, and social words; in addition, North Korean broadcasting used more personal pronouns and certain parts of speech than South Korean broadcasting. Psychological interpretations are provided based on these language differences.
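
A toy word-category count in the style of a LIWC/KLIWC analysis; the category word lists below are made up for illustration and are not the KLIWC dictionaries.

```python
# Illustrative category lexicons (not the actual KLIWC dictionaries).
CATEGORIES = {
    "affective": {"행복", "기쁨", "슬픔"},
    "cognitive": {"생각", "이유", "판단"},
    "social":    {"우리", "인민", "국민"},
}

def category_rates(tokens):
    """Share of tokens that fall into each category."""
    total = len(tokens) or 1
    return {name: sum(tok in words for tok in tokens) / total
            for name, words in CATEGORIES.items()}

sample = "우리 인민 은 행복 을 생각 한다".split()
print(category_rates(sample))
```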

Monophone and Biphone Compound Unit for Korean Vocabulary Speech Recognition (한국어 어휘 인식을 위한 혼합형 음성 인식 단위)

  • 이기정;이상운;홍재근
    • Journal of the Korea Computer Industry Society / v.2 no.6 / pp.867-874 / 2001
  • In this paper, considering the pronunciation characteristics of Korean, we suggest recognition units that can shorten recognition time and reflect the coarticulation effect at the same time. The units are a mixture of monophone and biphone units: monophone units are applied to vowels, which show stable characteristics, and biphone units are used for consonants, which vary with the adjacent vowel. In word recognition experiments on the PBW445 database, the compound units achieve recognition accuracy comparable to triphone units with a 57% speed-up, and better accuracy than units of similar speed. In addition, the memory size can be reduced because fewer units are needed.

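A toy sketch of the compound-unit idea above: monophone units for the (stable) vowels and context-dependent biphone units for the consonants. The romanized phone set and the exact context convention are assumptions.

```python
VOWELS = {"a", "e", "i", "o", "u"}   # toy vowel inventory (romanized)

def compound_units(phones):
    """Monophone units for vowels, consonant+next-phone biphone units otherwise."""
    units = []
    for idx, ph in enumerate(phones):
        if ph in VOWELS:
            units.append(ph)                              # stable vowel -> monophone
        else:
            nxt = phones[idx + 1] if idx + 1 < len(phones) else "sil"
            units.append(f"{ph}+{nxt}")                   # consonant -> biphone
    return units

print(compound_units(["h", "a", "n", "g", "u", "k"]))
# ['h+a', 'a', 'n+g', 'g+u', 'u', 'k+sil']
```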

Faster User Enrollment for Neural Speaker Verification Systems (신경망 기반 화자증명 시스템에서 더욱 향상된 사용자 등록속도)

  • Lee, Tae-Seung;Park, Sung-Won;Hwang, Byong-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.10a / pp.1021-1026 / 2003
  • While multilayer perceptrons (MLPs) show great promise for speaker verification, they suffer from slow learning. To be attractive to users, MLP-based speaker verification systems must achieve a reasonable enrollment speed, which depends entirely on fast MLP learning. To attain real-time enrollment, two previous studies addressed this problem and each met the objective. In this paper, the two methods are combined and applied to the system, on the assumption that each operates on a different optimization principle. Experiments with an MLP-based speaker verification system using the combined method on a real speech database verify the feasibility of the combination.

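A minimal sketch of MLP-based enrollment and verification as the base system described above; the features, network size, and decision rule are placeholders, and the paper's actual fast-learning techniques are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
enrollee = rng.normal(loc=0.5, size=(200, 13))      # frames of the enrolling speaker
background = rng.normal(loc=-0.5, size=(200, 13))   # frames from background speakers

X = np.vstack([enrollee, background])
y = np.array([1] * len(enrollee) + [0] * len(background))

# Enrollment = training a small MLP to separate the enrollee from the background.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
mlp.fit(X, y)

def verify(frames, threshold=0.5):
    """Accept the identity claim when the mean frame posterior exceeds the threshold."""
    return mlp.predict_proba(frames)[:, 1].mean() > threshold

print(verify(rng.normal(loc=0.5, size=(50, 13))))    # enrollee-like -> likely True
print(verify(rng.normal(loc=-0.5, size=(50, 13))))   # impostor-like -> likely False
```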

Rapid Speaker Adaptation Based on Eigenvoice Using Weight Distribution Characteristics (가중치 분포 특성을 이용한 Eigenvoice 기반 고속화자적응)

  • 박종세;김형순;송화전
    • The Journal of the Acoustical Society of Korea / v.22 no.5 / pp.403-407 / 2003
  • Recently, the eigenvoice approach has been widely used for rapid speaker adaptation. However, even with eigenvoices, the performance improvement from a very small amount of adaptation data is relatively small compared with that from a somewhat larger amount, because the eigenvoice weights are hard to estimate reliably. In this paper, we propose a rapid speaker adaptation method based on eigenvoices that uses the weight distribution characteristics to improve performance with small adaptation sets. In experiments on a vocabulary-independent word recognition task (using the PBW 452 database), the weight threshold method alleviates the problem of relatively low performance with a tiny amount of adaptation data: when a single adaptation word is used, the word error rate is reduced by about 9-18%.
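
A numerical sketch of eigenvoice adaptation with a simple weight-threshold step, as a rough reading of the entry above; the eigenvoice basis, the least-squares weight estimate, and the 0.1 cut-off are assumptions, not the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(3)
D, K = 20, 6                                  # supervector dimension, eigenvoices
mean_voice = rng.normal(size=D)
eigenvoices = np.linalg.qr(rng.normal(size=(D, K)))[0]   # orthonormal basis columns

# Pretend scarce adaptation data yields a noisy target supervector.
true_w = np.array([1.5, -0.9, 0.05, 0.02, -0.03, 0.8])
target = mean_voice + eigenvoices @ true_w + rng.normal(scale=0.05, size=D)

# Eigenvoice adaptation: estimate weights in the eigenvoice subspace.
w, *_ = np.linalg.lstsq(eigenvoices, target - mean_voice, rcond=None)

# Weight-threshold step: drop weights too small to be estimated reliably
# from a tiny adaptation set.
w_kept = np.where(np.abs(w) > 0.1, w, 0.0)
adapted = mean_voice + eigenvoices @ w_kept

print(np.round(w, 2), "->", np.round(w_kept, 2))
```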