• Title/Summary/Keyword: Word Reduction

Input Dimension Reduction based on Continuous Word Vector for Deep Neural Network Language Model (Deep Neural Network 언어모델을 위한 Continuous Word Vector 기반의 입력 차원 감소)

  • Kim, Kwang-Ho; Lee, Donghyun; Lim, Minkyu; Kim, Ji-Hwan
    • Phonetics and Speech Sciences, v.7 no.4, pp.3-8, 2015
  • In this paper, we investigate an input dimension reduction method using continuous word vectors in a deep neural network language model. In the proposed method, continuous word vectors were generated from a large training corpus using Google's Word2Vec, in line with the distributional hypothesis. The 1-of-|V| coded discrete word vectors were then replaced with their corresponding continuous word vectors. In our implementation, the input dimension was successfully reduced from 20,000 to 600 when a tri-gram language model was used with a vocabulary of 20,000 words. Total training time was reduced from 30 days to 14 days on the Wall Street Journal training corpus (corpus length: 37M words).
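  • Illustrative sketch (not the paper's implementation): replacing 1-of-|V| coded inputs with continuous Word2Vec vectors could look roughly as follows in Python with gensim. The corpus file name, the 300-dimensional vectors (two history words × 300 = 600 inputs), and all hyperparameters are assumptions for illustration.

```python
import numpy as np
from gensim.models import Word2Vec

# Hypothetical tokenized training corpus, one sentence per line.
with open("wsj_train.txt", encoding="utf-8") as f:
    sentences = [line.split() for line in f]

# Train continuous word vectors (assumed 300 dimensions per word).
w2v = Word2Vec(sentences, vector_size=300, window=5, min_count=1, workers=4)

def trigram_history_input(w1, w2):
    """Concatenate the continuous vectors of the two history words of a
    tri-gram into a single 600-dim input, instead of a 20,000-dim
    1-of-|V| coding per word."""
    return np.concatenate([w2v.wv[w1], w2v.wv[w2]])

x = trigram_history_input("stock", "market")
print(x.shape)  # (600,)
```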

Affixation effects on word-final coda deletion in spontaneous Seoul Korean speech

  • Kim, Jungsun
    • Phonetics and Speech Sciences, v.8 no.4, pp.9-14, 2016
  • This study investigated the patterns of coda deletion in spontaneous Seoul Korean speech. More specifically, it focused on three factors promoting coda deletion, namely word position, consonant type, and morpheme type. The results revealed that, first, coda deletion occurred frequently when affixes were attached to the ends of words, rather than in affixes in word-internal positions or in roots. Second, the alveolar consonants [n] and [l] in the coda positions of the high-frequency affixes [nɨn] and [lɨl] were the most likely to be deleted. Additionally, regarding affix reduction in word-final position, all subjects seemed to rely on this articulatory strategy to a similar degree. In sum, the current study found that affixes without primary semantic content tend to undergo reduction in spontaneous speech, favoring the occurrence of specific pronunciation variants.

Effects of Word Frequency on a Lenition Process: Evidence from Stop Voicing and /h/ Reduction in Korean

  • Choi, Tae-Hwan; Lim, Nam-Sil; Han, Jeong-Im
    • Speech Sciences, v.13 no.3, pp.35-48, 2006
  • The present study examined whether words with higher frequency are more subject to lenition processes such as intervocalic stop voicing or /h/ reduction in the production of Korean speakers. Experiments 1 and 2 tested whether word-internal intervocalic voicing and /h/ reduction, respectively, occur more often in higher-frequency words than in less frequent words. Results showed that the rate of voicing was not significantly different between the high frequency group and the low frequency group; rather, both high and low frequency words were shown to be fully voiced in this prosodic position. However, intervocalic /h/ was deleted more often in high frequency words than in low frequency words. In low frequency words, other phonetic variants such as [h] and [w] were found more often than in the high frequency group. Thus, with the data at hand, the results of the present study are inconclusive as to the relationship between word frequency and lenition.

Patterns of consonant deletion in the word-internal onset position: Evidence from spontaneous Seoul Korean speech

  • Kim, Jungsun; Yun, Weonhee; Kang, Ducksoo
    • Phonetics and Speech Sciences, v.8 no.1, pp.45-51, 2016
  • This study examined the deletion of onset consonants in word-internal position in spontaneous Seoul Korean speech. It used the data of speakers in their 20s extracted from the Korean Corpus of Spontaneous Speech (Yun et al., 2015). The proportion of deleted word-internal onset consonants was analyzed using a linear mixed-effects regression model. The factors that promoted the deletion of onsets were primarily the type of consonant and its phonetic context. The results showed that onset deletion was more likely to occur for the lenis velar stop [k] than for the other consonants and, with respect to phonetic context, when the preceding vowel was the low central vowel [a]. Moreover, some speakers tended to delete onset consonants (e.g., [k] and [n]) more frequently than others, reflecting individual differences. This study implies that word-internal onsets undergo a process of gradient reduction within individuals' articulatory strategies.
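  • Illustrative sketch (not the authors' analysis code): a linear mixed-effects model of onset deletion with consonant type and preceding vowel as fixed effects and speaker as a random effect could be fit as below; the data file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per word-internal onset token, with a binary
# 'deleted' outcome, the consonant type, the preceding vowel, and the speaker.
df = pd.read_csv("onset_deletion.csv")

model = smf.mixedlm(
    "deleted ~ C(consonant) + C(preceding_vowel)",  # fixed effects
    data=df,
    groups=df["speaker"],                           # random intercept per speaker
)
result = model.fit()
print(result.summary())
```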

Coordinative movement of articulators in bilabial stop /p/

  • Son, Minjung
    • Phonetics and Speech Sciences, v.10 no.4, pp.77-89, 2018
  • Speech articulators are coordinated for the purpose of segmental constriction in terms of a task. In particular, vertical jaw movements repeatedly contribute to consonantal as well as vocalic constriction. The current study explores vertical jaw movements in conjunction with bilabial constriction in the bilabial stop /p/ in the context /a/-to-/a/. Revisiting kinematic data of /p/ collected with the electromagnetic midsagittal articulometer (EMMA) method from seven (four female and three male) speakers of Seoul Korean, we examined maximum vertical jaw position, its relative timing with respect to the upper and lower lips, and lip aperture minima. The results for these dependent variables are summarized in terms of linguistic (different word boundaries) and paralinguistic (different speech rates) factors as follows. Firstly, maximum jaw height was lower in the across-word boundary condition (across-word < within-word), but it did not differ as a function of speech rate (comfortable = fast). Secondly, more reduction of the lip aperture (LA) gesture occurred at the fast rate, while word-boundary effects were absent. Thirdly, jaw raising was still in progress after the lips' positional extrema were achieved in the within-word condition, whereas it was completed before the lips' extrema in the across-word condition. Lastly, relative temporal lags between the jaw and the lips (UL and LL) were more synchronous at the fast rate than at the comfortable rate. Taken together, these results suggest that speakers do not tolerate lenition to the extent that /p/ would be realized as a labial approximant in either word-boundary condition, even though jaw position was lower in the across-word boundary condition. Early termination of the vertical jaw maximum before the vertical lower lip maximum in the across-word condition may be partly responsible for the spatial reduction of jaw raising movements. This may come about as a consequence of an excessive number of factors (e.g., upper lip height (UH), lower lip height (LH), jaw angle (JA)) representing a vector with two degrees of freedom (x, y) engaged in a gesture-based task (e.g., lip aperture (LA)). In the task-dynamic application toolkit, the jaw angle parameter can be assigned a greater weight in the across-word boundary condition, which in turn gives rise to a lower jaw position. Speech rate-dependent spatial reduction in lip aperture may be resolved by manipulating the activation time of the active tract variable at the level of the gestural score.

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok; Yang, Seok Woo; Lee, Hong Joo
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.105-122, 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. When reducing dimensionality, we should consider the density of the data, which has a significant influence on the performance of sentence classification: higher-dimensional data requires more computation, which eventually leads to high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve model performance. Diverse methods have been proposed, ranging from simply reducing noise in the data (such as misspellings or informal text) to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, we assume that words similar to them also have little impact on sentence classification. This study proposes two ways to achieve more accurate classification, which conduct selective word elimination under specific rules and construct word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and build word embeddings. Second, we additionally eliminate words that are similar to the low-information-gain words and build word embeddings. Finally, the filtered text and word embeddings are fed to two deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of the Kindle on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes, with a ratio of helpful votes over 70%, were classified as helpful reviews. Since Yelp only shows the number of helpful votes, we randomly sampled 100,000 reviews that received more than five helpful votes from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text data, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe word embeddings that used all the words. One of the proposed methods performed better than the embeddings built with all the words: removing unimportant words yields better performance, although removing too many words lowers it.
For future research, diverse preprocessing methods and an in-depth analysis of word co-occurrence for measuring similarity among words should be considered. In addition, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, making it possible to identify useful combinations of word embedding and elimination methods.
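  • Illustrative sketch (not the paper's code): the two elimination steps described above, dropping low information-gain words and then also dropping words whose Word2Vec vectors are cosine-similar to them, could look roughly like this. The toy reviews, the bottom-20% cutoff, and the 0.7 similarity threshold are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

texts = ["great battery and screen", "battery died fast", "love this kindle"]  # toy reviews
labels = [1, 0, 1]                                                             # helpful / not helpful

# Step 1: information gain (mutual information) per vocabulary word.
vec = CountVectorizer()
X = vec.fit_transform(texts)
ig = mutual_info_classif(X, labels, discrete_features=True)
vocab = np.array(vec.get_feature_names_out())
low_ig = set(vocab[ig <= np.percentile(ig, 20)])   # assumed cutoff: bottom 20%

# Step 2: also remove words cosine-similar to the low-IG words.
w2v = Word2Vec([t.split() for t in texts], vector_size=50, min_count=1)
similar = {s for w in low_ig if w in w2v.wv
             for s, sim in w2v.wv.most_similar(w, topn=3) if sim > 0.7}

to_remove = low_ig | similar
filtered = [" ".join(w for w in t.split() if w not in to_remove) for t in texts]
print(filtered)
```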

A Study on the Reduction of Common Words to Classify Causes of Marine Accidents (해양사고 원인을 분류하기 위한 공통단어의 축소에 관한 연구)

  • Yim, Jeong-Bin
    • Journal of Navigation and Port Research, v.41 no.3, pp.109-118, 2017
  • Key words (KW) are sets of words that clearly express the important causes of marine accidents; they are determined by judges in the Korean Maritime Safety Tribunal. The selection of KW currently has two main issues: one is maintaining consistency given the differing subjective opinions of individual judges, and the other is the large number of KW currently in use. To overcome these issues, the systematic framework used to construct KW needs to be optimized so that a minimal number of KW can be derived from a set of Common Words (CW). The purpose of this study is to identify a set of CW with which to develop such a systematic KW construction framework. To this end, a word reduction method that finds the minimum number of CW is proposed, using the Pareto distribution function and the Pareto index. A total of 2,642 KW were compiled, and 56 baseline CW were identified in the data sets. These CW, along with their frequency of use across all KW, are reported. Through the word reduction experiments, an average reduction rate of 58.5% was obtained. The CW estimated at the various reduction rates were verified using a Pareto chart. Based on this analysis, the development of a systematic KW construction framework is expected to be possible.
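  • Illustrative sketch (a simplified stand-in for the paper's Pareto-based method): reduce a key word list to a smaller common-word set by keeping the most frequent words up to a cumulative-frequency cutoff. The toy keyword list and the 80% cutoff are assumptions.

```python
from collections import Counter

keywords = [
    "lookout negligence", "engine failure", "lookout error",
    "steering failure", "negligence of navigation rules", "engine overheating",
]
words = [w for kw in keywords for w in kw.split()]

counts = Counter(words).most_common()          # words sorted by frequency
total = sum(c for _, c in counts)

common, cumulative = [], 0
for word, count in counts:
    common.append(word)
    cumulative += count
    if cumulative / total >= 0.8:              # assumed Pareto-style cutoff (80%)
        break

reduction_rate = 1 - len(common) / len(set(words))
print(common, f"reduction rate: {reduction_rate:.1%}")
```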

Measuring Acoustical Parameters of English Words by the Position in the Phrases (영어어구의 위치에 따른 단어의 음향 변수 측정)

  • Yang, Byung-Gon
    • Speech Sciences, v.14 no.4, pp.115-128, 2007
  • The purposes of this paper were to develop an automatic script to collect acoustic parameters such as duration, intensity, pitch, and the first two formant values of English words produced by two native Canadian speakers, either alone or in a two-word phrase at normal speed, and to compare those values by position in the phrase. A Praat script was proposed to obtain comparable parameters at evenly divided time points of the target word. Results showed that the total duration of a word in a phrase was shorter than that of the same word produced alone. This was attributed to the native speakers' pronunciation style of generally placing the primary word stress on the first word of the phrase. Also, the reduction ratio of the male speaker depended on the word's position in the phrase, while the female speaker's did not. Moreover, the contours of intensity and pitch differed by the position of the target word in the phrase, while almost the same formant patterns were observed. Further studies would be desirable to examine these parameters for words in authentic speech materials.
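  • Illustrative sketch (the original work used a Praat script; this is a rough Python analogue via the parselmouth library): extract duration, intensity, pitch, F1, and F2 at evenly divided time points of a recorded word. The file name and the number of sampling points are assumptions.

```python
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("target_word.wav")   # hypothetical recording of one word
duration = snd.duration

pitch = snd.to_pitch()
intensity = snd.to_intensity()
formant = snd.to_formant_burg()

n_points = 10                                # assumed number of measurement points
for i in range(1, n_points + 1):
    t = duration * i / (n_points + 1)        # evenly divided time points
    f0 = call(pitch, "Get value at time", t, "Hertz", "Linear")
    db = call(intensity, "Get value at time", t, "Cubic")
    f1 = call(formant, "Get value at time", 1, t, "Hertz", "Linear")
    f2 = call(formant, "Get value at time", 2, t, "Hertz", "Linear")
    print(f"{t:.3f}s  F0={f0}  dB={db}  F1={f1}  F2={f2}")
```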

An effect of dictionary information in the handwritten Hangul word recognition (필기한글 단어 인식에서 사전정보의 효과)

  • 김호연; 임길택; 남윤석
    • Proceedings of the IEEK Conference, 1999.11a, pp.1019-1022, 1999
  • In this paper, we analyze the effect of a dictionary on a handwritten Hangul word recognition problem in terms of the dictionary's size and the length of the words it contains. Our experimental results show that the word recognition rate depends not only on character recognition performance, but also largely on the amount of information the dictionary contains, as well as on the reduction rate of the dictionary.
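  • Illustrative toy example (not the paper's recognizer): word recognition rescored against a dictionary, showing how the dictionary constrains the choice among per-character candidates; all candidates, scores, and dictionary entries are invented.

```python
def recognize_word(char_candidates, dictionary):
    """char_candidates: list (one per character position) of {char: score}.
    Returns the dictionary word with the highest summed character score."""
    best_word, best_score = None, float("-inf")
    for word in dictionary:
        if len(word) != len(char_candidates):
            continue
        score = sum(cands.get(ch, 0.0) for ch, cands in zip(word, char_candidates))
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# Hypothetical character-recognizer output for a two-syllable word.
candidates = [{"서": 0.6, "사": 0.4}, {"울": 0.7, "올": 0.3}]
small_dictionary = ["서울", "사발"]          # a reduced dictionary constrains the search
print(recognize_word(candidates, small_dictionary))  # -> 서울
```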

Phonological Error Patterns of Korean Children With Specific Phonological Disorders (정상 아동과 기능적 음운장애 아동의 음운 오류 비교)

  • Kim, Min-Jung; Pae, So-Yeong
    • Speech Sciences, v.7 no.2, pp.7-18, 2000
  • The purpose of this study was to investigate the phonological error patterns of Korean children with and without specific phonological disorders (SPD). In this study, 29 normally developing children and 10 children with SPD were involved. The children were matched on the percentage of consonants correct (PCC). Twenty-two picture cards were used to elicit Korean consonants in word-initial syllable-initial, word-medial syllable-initial, word-medial syllable-final, and word-final syllable-final positions. The findings were as follows. First, the phonological error patterns of children with SPD were 1) similar to those of normal children with the same PCC, 2) similar to those of normal children with a lower PCC, or 3) unusual compared to those of normal children. Second, Korean children showed phonological processes reflecting Korean phonological characteristics: tensification and reduction of the word-medial syllable-final consonant. This study suggests that both the PCC and error patterns should be considered when assessing children's phonological abilities.
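  • Illustrative sketch of the percentage of consonants correct (PCC) measure used to match the groups: consonants produced correctly divided by consonants attempted, times 100. The sample transcription is invented.

```python
def pcc(target_consonants, produced_consonants):
    """Percentage of consonants correct: both arguments are aligned lists of
    consonants for the same utterances (None marks an omitted consonant)."""
    correct = sum(1 for t, p in zip(target_consonants, produced_consonants) if t == p)
    return 100.0 * correct / len(target_consonants)

# e.g. target /k, m, s, n/ produced as [k, m, t, omitted] -> 50.0
print(pcc(["k", "m", "s", "n"], ["k", "m", "t", None]))
```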
