• Title/Summary/Keyword: sentence recognition test

Language Model Adaptation Based on Topic Probability of Latent Dirichlet Allocation

  • Jeon, Hyung-Bae;Lee, Soo-Young
    • ETRI Journal
    • /
    • v.38 no.3
    • /
    • pp.487-493
    • /
    • 2016
  • Two new methods are proposed for unsupervised adaptation of a language model (LM) with a single sentence for automatic transcription tasks. In the training phase, training documents are clustered by latent Dirichlet allocation (LDA), and a domain-specific LM is then trained for each cluster. In the test phase, an adapted LM is formed as a linear mixture of the trained domain-specific LMs. Unlike previous adaptation methods, the proposed methods fully utilize the trained LDA model to estimate the weight values assigned to the domain-specific LMs; therefore, the clustering and weight-estimation algorithms of the trained LDA model are reliable. In continuous speech recognition benchmark tests, the proposed methods outperform other unsupervised LM adaptation methods based on latent semantic analysis, non-negative matrix factorization, and LDA with n-gram counting.
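
To make the mixture idea concrete, here is a minimal sketch of topic-probability-based LM interpolation, assuming the gensim library; the toy corpus, topic count, and the `domain_lms` mapping are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch: LDA topic probabilities as interpolation weights for
# domain-specific LMs. Illustrative only; assumes gensim is installed.
from gensim import corpora, models

# Toy training documents; in the paper these are clustered by LDA topics.
train_docs = [
    ["stocks", "market", "trading", "index"],
    ["patients", "hearing", "speech", "noise"],
    ["speech", "noise", "hearing", "test"],
    ["market", "index", "stocks", "economy"],
]

dictionary = corpora.Dictionary(train_docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in train_docs]
lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary, random_state=0)

def adapted_prob(word, history, test_sentence, domain_lms):
    """P_adapted(word | history) = sum_k w_k * P_k(word | history),
    where w_k is the LDA topic probability of the test sentence."""
    bow = dictionary.doc2bow(test_sentence)
    weights = dict(lda.get_document_topics(bow, minimum_probability=0.0))
    return sum(weights[k] * domain_lms[k](word, history) for k in weights)

# domain_lms would map each topic id to the conditional-probability
# function of a domain-specific LM trained on that LDA cluster.
```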

Speech perception difficulties and their associated cognitive functions in older adults (노년층의 말소리 지각 능력 및 관련 인지적 변인)

  • Lee, Soo Jung;Kim, HyangHee
    • Phonetics and Speech Sciences
    • /
    • v.8 no.1
    • /
    • pp.63-69
    • /
    • 2016
  • The aims of the present study are two-fold: 1) to explore differences in speech perception between younger and older adults according to noise conditions; and 2) to investigate which cognitive domains are correlated with speech perception. Data were acquired from 15 younger adults and 15 older adults. A sentence recognition test was conducted in four noise conditions (in quiet, +5 dB SNR, 0 dB SNR, and -5 dB SNR). All participants completed auditory and cognitive assessments. After controlling for hearing thresholds, the older group showed significantly poorer performance than the younger group only under the highest noise condition of -5 dB SNR. For the older group, performance on the Seoul Verbal Learning Test (immediate recall) was significantly correlated with speech perception performance after controlling for hearing thresholds. In older adults, working memory and verbal short-term memory are the best predictors of speech-in-noise perception. The current study suggests that cognitive function should be considered when assessing speech perception in older adults, given its adverse effect on speech perception under background noise.

Factors affecting Diabetic Eye disease and Kidney disease Screening in Diabetic Patients (당뇨병 환자의 당뇨성 안질환 및 신장질환 합병증 검사 수검 여부에 영향을 주는 요인)

  • Kang, Jeong-Hee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.4
    • /
    • pp.226-235
    • /
    • 2020
  • This study was undertaken to investigate factors that affect screening for diabetic eye disease and kidney disease complications. Data were obtained from the 2017 National Community Health Survey. The subjects were 25,829 respondents who had been diagnosed with diabetes. Logistic regression analysis was applied to determine the factors affecting screening for diabetic eye disease (fundus examination) and kidney disease (microalbuminuria examination) complications. The rate of screening for diabetic eye disease complications was 35.6%, and that for diabetic kidney disease complications was 39.8%. Respondents reporting walking activity (OR=1.03, OR=1.02), hemoglobin A1c (HbA1c) awareness (OR=2.33, OR=2.33), blood glucose level awareness (OR=1.61, OR=1.71), diabetes drug therapy (OR=2.67, OR=3.05), and diabetes management education (OR=1.45, OR=1.47) were more likely to be screened for eye and kidney disease complications. Our results indicate that, to increase the rate of screening for diabetic complications, it is necessary to develop a diabetes management system that covers the type and timing of diabetic complication screening, along with promotional methods that raise awareness of HbA1c and blood glucose levels. In addition, it is essential to develop a guideline for the management of diabetes mellitus and to incorporate screening tests for diabetic complications into the national screening system.

Readability of Health Messages and Its Communicative Effect (건강 메시지의 독이성과 소통 효과)

  • You, Myoung Soon;Ju, Young Kee
    • Korean Journal of Health Education and Promotion
    • /
    • v.29 no.5
    • /
    • pp.27-36
    • /
    • 2012
  • Objectives: Developing efficient health messages is important for improving health behaviors at a societal level. This study tests several variables that could constitute the elements for measuring the readability of health messages. The number of subject-verb relationships in a sentence, the placement of explanations before or after each piece of jargon, and the number of less familiar Chinese characters were manipulated to differentiate readability. Methods: In a 2×2 mixed factorial experiment, 152 college students read two health messages regarding the side effects of health functional foods and energy drinks. The participants' perceived readability was assessed, and eight questions were developed to measure their recognition of the health information. Results: Those who read messages manipulated to have high readability rated the messages significantly higher than those who read messages with low readability. The former also answered the questions more correctly than the latter, implying an association between readability and the acquisition of health knowledge. Conclusions: Readability is suggested as a factor determining how effectively health messages shape the public's health risk perception and related behaviors. Further studies are suggested to refine the measurement itself and to examine the effect of actual public messages with different levels of readability.

Language Model based on VCCV and Test of Smoothing Techniques for Sentence Speech Recognition (문장음성인식을 위한 VCCV 기반의 언어모델과 Smoothing 기법 평가)

  • Park, Seon-Hee;Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.11B no.2
    • /
    • pp.241-246
    • /
    • 2004
  • In this paper, we propose VCCV units as the processing unit of a language model and compare them with the clause and morpheme units used previously. Clause and morpheme units have large vocabularies and high perplexity, whereas VCCV units have low perplexity because of their small lexicon and limited vocabulary. Constructing a language model also raises the issue of smoothing: smoothing techniques are used to better estimate probabilities when there is insufficient data to estimate them accurately. We built language models of morpheme, clause, and VCCV units and calculated their perplexity. The perplexity of VCCV units is lower than that of morpheme and clause units. We constructed n-grams of VCCV units with low perplexity and tested the language model using Katz, absolute, and modified Kneser-Ney smoothing, among others. In the experimental results, modified Kneser-Ney smoothing proved to be the most suitable smoothing technique for VCCV units.
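
As a rough illustration of this kind of smoothing and perplexity comparison, the following is a minimal bigram model with absolute discounting written from scratch; the romanized toy tokens and the discount value are assumptions for illustration, not the paper's VCCV units or data. Swapping the tokenizer between morpheme, clause, or VCCV segmentation is what changes the unit inventory.

```python
# Minimal sketch: bigram LM with interpolated absolute discounting and a
# perplexity computation. Illustrative only, not the authors' code.
from collections import Counter
import math

def train_bigram_absolute_discount(sentences, d=0.75):
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens[:-1], tokens[1:]))
    total = sum(unigrams.values())

    def prob(w, prev):
        # Discounted bigram probability interpolated with the unigram estimate.
        continuation = len([b for b in bigrams if b[0] == prev])
        backoff_mass = d * continuation / unigrams[prev] if unigrams[prev] else 1.0
        p_uni = unigrams[w] / total if total else 0.0
        p_bi = max(bigrams[(prev, w)] - d, 0) / unigrams[prev] if unigrams[prev] else 0.0
        return p_bi + backoff_mass * p_uni

    return prob

def perplexity(prob, sentences):
    log_sum, n = 0.0, 0
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        for prev, w in zip(tokens[:-1], tokens[1:]):
            log_sum += -math.log(max(prob(w, prev), 1e-12))
            n += 1
    return math.exp(log_sum / n)

# Lower perplexity indicates that the unit inventory and smoothing fit
# the held-out data better.
train = [["na", "neun", "hak", "gyo"], ["hak", "gyo", "e", "gan", "da"]]
test = [["na", "neun", "hak", "gyo", "e"]]
p = train_bigram_absolute_discount(train)
print(perplexity(p, test))
```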

Effect of Digital Noise Reduction of Hearing Aids on Music and Speech Perception

  • Kim, Hyo Jeong;Lee, Jae Hee;Shim, Hyun Joon
    • Journal of Audiology & Otology
    • /
    • v.24 no.4
    • /
    • pp.180-190
    • /
    • 2020
  • Background and Objectives: Although many studies have evaluated the effect of the digital noise reduction (DNR) algorithm of hearing aids (HAs) on speech recognition, there are few studies on the effect of DNR on music perception. Therefore, we aimed to evaluate the effect of DNR on music perception, in addition to speech perception, using objective and subjective measurements. Subjects and Methods: Sixteen HA users participated in this study (58.00±10.44 years; 3 males and 13 females). The objective assessment of speech and music perception was based on the Korean version of the Clinical Assessment of Music Perception test and on word and sentence recognition scores. For the subjective assessment, quality ratings of speech and music as well as self-reported HA benefits were evaluated. Results: DNR did not improve performance on the objective assessments of speech and music perception. Pitch discrimination at 262 Hz was better in the DNR-off condition than in the unaided condition (p=0.024); however, the unaided and DNR-on conditions did not differ. In the Korean music background questionnaire, responses regarding ease of communication were better in the DNR-on condition than in the DNR-off condition (p=0.029). Conclusions: Speech and music perception and sound quality did not improve with the activation of DNR. However, DNR positively influenced the listener's subjective listening comfort. The DNR-off condition in HAs may be beneficial for pitch discrimination at some frequencies.

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large and increases model complexity. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit composing Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted with Old Testament texts using the deep learning package Keras with a Theano backend. After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly improved and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model.
Although there were slight differences between the models in the completeness of the generated sentences, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean characters, and the use of postpositions and the conjugation of verbs were almost perfectly grammatical. The results of this study are expected to be widely used for processing Korean in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
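
The setup described above (a 20-symbol input window predicting the 21st symbol, stacked LSTM layers, and perplexity as the evaluation metric) can be sketched as follows in tf.keras. The toy English string stands in for phoneme-decomposed Korean text, and the layer sizes, epoch count, and Adam optimizer are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch: symbol-level LSTM language model with a 20-symbol
# context window. Illustrative only; assumes TensorFlow is installed.
import numpy as np
import tensorflow as tf

text = "in the beginning god created the heaven and the earth"  # toy corpus
chars = sorted(set(text))
char2idx = {c: i for i, c in enumerate(chars)}
seq_len = 20

# Build (20-symbol input, next-symbol target) pairs.
X = np.array([[char2idx[c] for c in text[i:i + seq_len]]
              for i in range(len(text) - seq_len)])
y = np.array([char2idx[text[i + seq_len]] for i in range(len(text) - seq_len)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 64),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128),                      # 3 stacked LSTM layers
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.15)

# Perplexity is the exponential of the average cross-entropy loss.
loss = model.evaluate(X, y, verbose=0)
print("perplexity:", float(np.exp(loss)))
```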

Spatialization of Unstructured Document Information Using AI (AI를 활용한 비정형 문서정보의 공간정보화)

  • Sang-Won YOON;Jeong-Woo PARK;Kwang-Woo NAM
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.26 no.3
    • /
    • pp.37-51
    • /
    • 2023
  • Spatial information is essential for interpreting urban phenomena. Methodologies for spatializing urban information, especially when it lacks location details, have been consistently developed. Typical methods include geocoding using structured address information or place names, spatial integration with existing geospatial data, and manual work using reference data. However, a vast number of documents produced by administrative agencies have not been dealt with in depth because of their unstructured nature, even when there is demand for spatialization. This research utilizes the natural language processing model BERT to spatialize public documents related to urban planning. It focuses on extracting sentence elements containing addresses from documents and converting them into structured data. The study used 18 years of urban planning public announcement documents as training data to train the BERT model and enhanced its performance by manually adjusting its hyperparameters. After training, the test results showed accuracy rates of 96.6% for classifying urban planning facilities, 98.5% for address recognition, and 93.1% for address cleaning. When the resulting data were mapped in GIS, the change history of specific urban planning facilities could be displayed effectively. This research provides a deep understanding of the spatial context of urban planning documents, and it is hoped that it will help stakeholders make more effective decisions.
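
The address-extraction step can be framed as token classification over announcement sentences. The following is a minimal sketch using the HuggingFace transformers library; the `klue/bert-base` checkpoint, the BIO label scheme, and the example sentence are assumptions, and a real system would fine-tune the model on labeled announcement data before use.

```python
# Minimal sketch: BERT token classification for pulling address spans out
# of planning-announcement sentences. Illustrative only, not the authors' code.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "klue/bert-base"  # assumed Korean BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
# In practice the model would first be fine-tuned on announcement sentences
# labeled with BIO tags such as B-ADDR / I-ADDR / O.
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=3)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
sentence = "도시계획시설(도로) 변경: 서울특별시 ○○구 ○○동 일원"
for entity in ner(sentence):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```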

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing volume of content is becoming more important as content generation continues to grow. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields in which text data analysis is expected to be useful, because new information is constantly generated and the earlier the information is, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the flow of information is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, it becomes more difficult for people to produce labeled text data as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate their performance. Unlike other studies, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. This study therefore makes the following three contributions. First, it presents a practical and simple automatic knowledge extraction method that can be applied directly. Second, it demonstrates the possibility of performance evaluation through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, which account for about 55% of the total, are designated as the training set, and the remaining 45% as the test set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, the same number of score functions as stocks are trained.
Thus, when a new entity from the test set appears, its score can be calculated with every score function, and the stock of the function with the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its predictive power and determine whether the score functions are well constructed by calculating the hit ratio for all reports in the test set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the test set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show much lower performance than average. This result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, that are necessary to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, there are also limitations and points to complement. Most notably, the especially poor performance of the model for a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
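
For reference, the bilinear score function at the core of a neural tensor network can be sketched as follows; the vector dimensions, random initialization, and one-hot entity vector are illustrative assumptions, not the study's trained parameters.

```python
# Minimal sketch of a neural tensor network score function:
#   score(e, s) = u^T tanh(e^T W[1:k] s + V [e; s] + b)
# where e is a (one-hot or embedded) entity vector and s a stock vector.
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 4            # entity/stock vector size, number of tensor slices

W = rng.normal(scale=0.1, size=(k, d, d))   # bilinear tensor
V = rng.normal(scale=0.1, size=(k, 2 * d))  # linear term
b = rng.normal(scale=0.1, size=k)
u = rng.normal(scale=0.1, size=k)

def ntn_score(e, s):
    """Score how strongly entity vector e relates to stock vector s."""
    bilinear = np.array([e @ W[i] @ s for i in range(k)])
    linear = V @ np.concatenate([e, s])
    return u @ np.tanh(bilinear + linear + b)

# One score function per stock is trained; at test time a new entity is
# scored against every stock and assigned to the highest-scoring one.
entity = np.eye(d)[7]        # toy one-hot entity vector
stock = rng.random(d)
print(ntn_score(entity, stock))
```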