• Title/Summary/Keyword: Words Error

Search Results: 260

A Stochastic Word-Spacing System Based on Word Category-Pattern (어절 내의 형태소 범주 패턴에 기반한 통계적 자동 띄어쓰기 시스템)

  • Kang, Mi-Young;Jung, Sung-Won;Kwon, Hyuk-Chul
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.11
    • /
    • pp.965-978
    • /
    • 2006
  • This paper implements an automatic Korean word-spacing system based on word recognition using morpheme unigrams and the pattern that the categories of those morpheme unigrams form within a candidate word. Although previous Korean word-spacing models offer the advantages of easy construction and time efficiency, problems such as data sparseness and excessive memory requirements remain, arising from the morpho-typological characteristics of Korean. To cope with both problems, our implementation uses the stochastic information of morpheme unigrams and their category patterns instead of word unigrams. A word's probability in a sentence is obtained from the morpheme probabilities and the weight of each morpheme's category within the category pattern of the candidate word. The category weights are trained so as to minimize the mean error between the observed probabilities of words and those estimated from the words' individual morpheme probabilities, weighted according to the influence of their categories in a given word's category pattern.
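As a rough illustration of the scoring idea described above, the sketch below combines morpheme unigram probabilities with per-category weights attached to a word's category pattern. All names, probability values, and weights (`morpheme_prob`, `category_weight`, the NOUN/JOSA pattern) are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: scoring a candidate word (eojeol) from morpheme unigram
# probabilities weighted by the categories in the word's category pattern.
# Probability tables and weights below are illustrative only.
import math

morpheme_prob = {          # P(morpheme) estimated from a corpus (made-up values)
    "하": 0.012, "늘": 0.003, "하늘": 0.0021, "이": 0.045,
}
category_weight = {        # trained weight per category within a pattern (made-up)
    ("NOUN", "JOSA"): {"NOUN": 0.7, "JOSA": 0.3},
}

def word_score(morphemes, categories):
    """Combine morpheme unigram log-probabilities using the weights assigned
    to each category in the candidate word's category pattern."""
    pattern = tuple(categories)
    uniform = 1.0 / len(categories)
    weights = category_weight.get(pattern, {c: uniform for c in categories})
    score = 0.0
    for m, c in zip(morphemes, categories):
        p = morpheme_prob.get(m, 1e-8)        # smoothing for unseen morphemes
        score += weights.get(c, uniform) * math.log(p)
    return score

# Candidate word "하늘이" analysed as noun "하늘" + particle "이"
print(word_score(["하늘", "이"], ["NOUN", "JOSA"]))
```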

Radiation-Induced Soft Error Detection Method for High Speed SRAM Instruction Cache (고속 정적 RAM 명령어 캐시를 위한 방사선 소프트오류 검출 기법)

  • Kwon, Soon-Gyu;Choi, Hyun-Suk;Park, Jong-Kang;Kim, Jong-Tae
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.6B
    • /
    • pp.948-953
    • /
    • 2010
  • In this paper, we propose a multi-bit soft error detection method that can be used in the instruction cache of a superscalar CPU architecture. The proposed method is applied to the high-speed static RAM used as the instruction cache. Using 1D parity and interleaving, it has less memory overhead and detects more multi-bit errors than other methods. It only detects the occurrence of soft errors in the static RAM; error correction is treated like a cache-miss situation. When soft errors occur, they are detected by the 1D parity, and the instruction cache simply fetches the affected words from lower-level memory to correct the errors. This method can detect multi-bit errors within a maximum 4×4 window.
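The sketch below is a minimal illustration of 1D parity with interleaving over a 4×4 window, assuming one parity group per column; the paper's actual bit layout and group assignment are not specified here, so the grouping and window size are assumptions used only to show why spatially adjacent flips land in different parity groups.

```python
# Illustrative sketch of 1D parity with interleaving over a 4x4 window.
# Group assignment by column index is an assumption: adjacent bits in a row
# fall into different parity groups, so clustered flips are still caught.

ROWS, COLS = 4, 4  # assumed window size

def parity_bits(block):
    """One even-parity bit per interleaved group (group = column index)."""
    parities = [0] * COLS
    for r in range(ROWS):
        for c in range(COLS):
            parities[c] ^= block[r][c]
    return parities

def detect(block, stored_parities):
    """Return True if any interleaved parity group signals an error.
    Detection only: correction is handled by refetching from lower-level memory."""
    return parity_bits(block) != stored_parities

data = [[0, 1, 1, 0], [1, 0, 0, 1], [0, 0, 1, 1], [1, 1, 0, 0]]
saved = parity_bits(data)
data[1][1] ^= 1   # two adjacent soft-error flips in the same row...
data[1][2] ^= 1   # ...land in different parity groups, so both are detectable
print(detect(data, saved))   # True -> treat as a cache miss and refetch
```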

Augmented Quantum Short-Block Code with Single Bit-Flip Error Correction (단일 비트플립 오류정정 기능을 갖는 증강된 Quantum Short-Block Code)

  • Park, Dong-Young;Suh, Sang-Min;Kim, Baek-Ki
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.1
    • /
    • pp.31-40
    • /
    • 2022
  • This paper proposes an augmented QSBC (Quantum Short-Block Code) that preserves the function of the existing QSBC and adds single bit-flip error correction for Pauli X and Y errors. The augmented QSBC provides diagnosis and automatic correction of a single Pauli X error by inserting additional auxiliary qubits and Toffoli gates, as many as the number of information words, into the existing QSBC. This paper also presents a general expansion method for the augmented QSBC using a seed vector, and a realization method for the Toffoli gates that perform automatic single bit-flip error correction while reflecting scalability. Due to the insertion of auxiliary qubits, the augmented QSBC proposed in this paper trades off a coding rate of at least 1/3 and at most 1/2.
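As a hedged illustration of the general pattern the abstract describes (redundancy plus Toffoli-controlled flips that repair a single bit-flip error), the classical sketch below corrects one flipped copy of a logical bit. This is not the augmented QSBC construction itself; the encoding, syndrome bits, and gate wiring are a simplified stand-in.

```python
# Not the augmented QSBC: a minimal classical stand-in for the general idea of
# diagnosing a single bit-flip with auxiliary (syndrome) bits and undoing it
# with Toffoli-style (AND-controlled) flips.

def encode(bit):
    return [bit, bit, bit]            # three redundant copies of one logical bit

def toffoli(a, b, target):
    """Classical Toffoli: flip the target when both controls are set."""
    return target ^ (a & b)

def correct(code):
    # Syndrome bits: do neighbouring copies disagree?
    s1 = code[0] ^ code[1]
    s2 = code[1] ^ code[2]
    # Toffoli-style corrections controlled by the syndrome
    code[1] = toffoli(s1, s2, code[1])          # middle copy was flipped
    code[0] = toffoli(s1, 1 - s2, code[0])      # first copy was flipped
    code[2] = toffoli(1 - s1, s2, code[2])      # third copy was flipped
    return code

word = encode(1)
word[2] ^= 1                 # single bit-flip error (classical analogue of Pauli X)
print(correct(word))         # -> [1, 1, 1]
```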

Perceptual, Acoustical, and Physiological Tools in Ataxic Dysarthria Management: A Case Report (운동실조형 마비성구음장애에 적용되는 지각적, 음향학적, 생리학적 도구에 관하여 - 환자사례를 중심으로 -)

  • Kim Hyang Hui
    • Proceedings of the KSPS conference
    • /
    • 1996.02a
    • /
    • pp.9-22
    • /
    • 1996
  • Among the various dysarthric subtypes, a diagnosis of ataxic dysarthria is rendered when the speech characteristics include imprecise and irregular articulatory breakdowns, a marked degree of speech-rate impairment, overall monopitch and monoloudness, and respiratory-articulatory incoordination. Traditionally, speech pathologists have relied only upon their ‘ears’ to describe and evaluate dysarthric speech. A statement of the percentage of correct words identified by a listener does not provide much more than an index of severity. Within the same perceptual dimension, a carefully constructed speech intelligibility test can specify patterns of errors. These patterns can carry diagnostic value as well as provide strategies for remediation. Phonetically transcribed texts of single words and a standard passage, 'kail', produced by a speaker with ataxic dysarthria are presented in this report, with an emphasis on the articulatory error analysis. Furthermore, acoustic tools [e.g., spectrography to measure formant transitions, segment durations, consonant spectra, etc.] are utilized as basic measures that objectively document the patient's speech intelligibility. Finally, treatment methods [e.g., spectrography as visual feedback, gestural reorganization using a pacing method, DAF (Delayed Auditory Feedback)] to modify the dysarthric behaviors are presented.

Orthographic and phonological links in Korean lexical processing (한국어 어휘 처리 과정에서 글짜 정보와 발음 정보의 연결성)

  • Kim, Jee-Sun;Taft, Marcus
    • Annual Conference on Human and Language Technology
    • /
    • 1995.10a
    • /
    • pp.211-214
    • /
    • 1995
  • At what level of orthographic representation is phonology linked in the lexicon? Is it at the whole-word level, the syllable level, the letter level, etc.? This question can be addressed by comparing the two scripts used in Korean, logographic Hanmoon and alphabetic/syllabic Hangul, on a task where judgements must be made about the phonology of a visually presented word. Four experiments are reported using a "homophone decision task" and manipulating the sub-lexical relationship between orthography and phonology in Hanmoon and Hangul, as well as the lexical status of the stimuli. Hangul words showed a much higher error rate in judging whether there was another word pronounced identically than did both Hangul nonwords and Hanmoon words. It is concluded that the relationship between orthography and phonology in the lexicon differs according to the type of script owing to the availability of sub-lexical information: the process of making a homophone decision is based on a spread of activation exclusively among lexical entries, from orthography to phonology and vice versa (called "Orthography-Phonology-Orthography Rebound" or "OPO Rebound"). The results are explained within the multilevel interactive activation model, with orthographic units linked to phonological units at each level.

wheelchair system design on speech recognition function (음성인식 기능을 탑재한 다기능 휠체어 시스템 설계 및 구현)

  • 김정훈;류홍석;강재명;강성인;김관형;이상배
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2002.05a
    • /
    • pp.1-5
    • /
    • 2002
  • The purpose of this paper is to develop a speech recognition module in a wheelchair for the convenience of the disabled. For this system, we used a TMS320C32 as the main processor; eliminated noise in the pre-processing stage by applying a Wiener filter, considering the characteristics of the noise environment; and extracted 12 feature patterns per frame using LPC & Cepstrum. Then, in the recognition part, we implemented a hybrid form combining DTW (Dynamic Time Warping), which is generally used for isolated words in conventional algorithms, with an NN (Neural Network) to prevent recognition errors. In this research, we achieved a recognition rate of more than 96% on isolated words when the DTW and hybrid forms were tested individually in a noisy environment.
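For reference, the sketch below is a minimal DTW distance between two sequences of per-frame feature vectors (standing in for the 12 LPC-cepstrum coefficients per frame mentioned above). The toy 2-dimensional frames are illustrative, and the neural-network stage of the hybrid recognizer is not shown.

```python
# Minimal DTW (Dynamic Time Warping) sketch between two sequences of
# per-frame feature vectors (e.g., cepstral coefficients per frame).
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dtw_distance(seq_a, seq_b):
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = euclidean(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Toy 2-dimensional "cepstral" frames standing in for real 12-dim features
template = [[0.1, 0.2], [0.4, 0.5], [0.9, 0.8]]
utterance = [[0.1, 0.2], [0.2, 0.3], [0.5, 0.5], [0.9, 0.7]]
print(dtw_distance(template, utterance))
```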

A Study on Lexical Ambiguity Resolution of Korean Morphological Analyzer (형태소 분석기의 어휘적 중의성 해결에 관한 연구)

  • Park, Yong-Uk
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.7 no.4
    • /
    • pp.783-787
    • /
    • 2012
  • It is not easy to find syntactic errors with a Korean spelling-checker system, because a spelling checker generally corrects each phrase on its own and cannot check errors involving contextually ill-matched words: it tests for errors on a word-by-word basis. Resolving lexical ambiguities is important in natural language processing, and its output is used in syntactic analysis. For accurate analysis of a sentence, a syntactic analysis system must identify the ambiguity of the morphemes within a word. In this paper, we suggest several rules to resolve the ambiguities of morphemes in a word. Using these methods, we can reduce many lexical ambiguities in Korean.
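The paper's actual rules are not given in the abstract, so the sketch below only illustrates the general shape of rule-based filtering over competing morphological analyses of one word. The word "나는", its two readings, the part-of-speech tags, and the single context rule are all hypothetical examples.

```python
# Hypothetical illustration: filtering competing morphological analyses of a
# word with a simple context rule. The analyses and rule are illustrative only.

# Two competing analyses of the Korean word "나는":
#   (a) 나/pronoun (NP) + 는/topic particle (JX)       -> "I ..."
#   (b) 날/verb stem (VV) + 는/adnominal ending (ETM)  -> "flying ..."
ANALYSES = {
    "나는": [
        [("나", "NP"), ("는", "JX")],
        [("날", "VV"), ("는", "ETM")],
    ],
}

def disambiguate(word, next_word_pos):
    """Keep only analyses compatible with a simple context rule:
    an adnominal ending (ETM) must be followed by a noun."""
    kept = []
    for analysis in ANALYSES.get(word, []):
        last_pos = analysis[-1][1]
        if last_pos == "ETM" and next_word_pos != "NOUN":
            continue                      # rule fires: drop this analysis
        kept.append(analysis)
    return kept

print(disambiguate("나는", next_word_pos="VERB"))   # keeps only the pronoun reading
print(disambiguate("나는", next_word_pos="NOUN"))   # both readings remain
```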

Sentiment Analysis of User-Generated Content on Drug Review Websites

  • Na, Jin-Cheon;Kyaing, Wai Yan Min
    • Journal of Information Science Theory and Practice
    • /
    • v.3 no.1
    • /
    • pp.6-23
    • /
    • 2015
  • This study develops an effective method for sentiment analysis of user-generated content on drug review websites, which has not been investigated as extensively as other general domains, such as product reviews. A clause-level sentiment analysis algorithm is developed, since each sentence can contain multiple clauses discussing multiple aspects of a drug. The method adopts a pure linguistic approach of computing the sentiment orientation (positive, negative, or neutral) of a clause from the prior sentiment scores assigned to words, taking into consideration the grammatical relations and semantic annotation (such as disorder terms) of words in the clause. Experimental results with 2,700 clauses show the effectiveness of the proposed approach, and it performed significantly better than baseline approaches based on machine learning. Various challenging issues were identified and discussed through error analysis. The application of the proposed sentiment analysis approach will be useful not only for patients, but also for drug makers and clinicians in obtaining valuable summaries of public opinion. Since sentiment analysis is domain specific, domain knowledge in drug reviews is incorporated into the sentiment analysis algorithm to provide more accurate analysis. In particular, MetaMap is used to map various health and medical terms (such as disease and drug names) to semantic types in the Unified Medical Language System (UMLS) Semantic Network.
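The sketch below shows a much-simplified version of clause-level scoring from prior word sentiment scores with basic negation handling. The lexicon, negator list, and scoring rule are illustrative assumptions; the paper's use of grammatical relations and UMLS/MetaMap semantic types is not reproduced.

```python
# Simplified sketch of clause-level sentiment from prior word scores.
# Lexicon and negation handling below are illustrative only.

PRIOR = {"effective": 1.0, "relief": 0.8, "nausea": -0.7, "severe": -0.6}
NEGATORS = {"no", "not", "without"}

def clause_sentiment(tokens):
    score, negate = 0.0, False
    for tok in tokens:
        t = tok.lower()
        if t in NEGATORS:
            negate = True                 # flip the polarity of what follows
            continue
        s = PRIOR.get(t, 0.0)
        score += -s if negate else s
        if s != 0.0:
            negate = False                # negation scope ends at a sentiment word
    if score > 0:
        return "positive"
    return "negative" if score < 0 else "neutral"

print(clause_sentiment("the drug was effective".split()))        # positive
print(clause_sentiment("but it caused severe nausea".split()))   # negative
print(clause_sentiment("no relief after two weeks".split()))     # negative
```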

Efficient Keyword Extraction from Social Big Data Based on Cohesion Scoring

  • Kim, Hyeon Gyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.10
    • /
    • pp.87-94
    • /
    • 2020
  • Social reviews such as SNS feeds and blog articles have been widely used to extract keywords reflecting opinions and complaints from the users' perspective, and they often include proper nouns or new words reflecting recent trends. In general, these words are not included in a dictionary, so conventional morphological analyzers may not detect and extract them from the reviews properly. In addition, due to their high processing time, such analyzers are inadequate for providing analysis results in a timely manner. This paper presents a method for efficient keyword extraction from social reviews based on the notion of cohesion scoring. Cohesion scores can be calculated from word frequencies, so keyword extraction can be performed without a dictionary. On the other hand, their accuracy can degrade when input data with poor spacing is given. To address this, an algorithm is presented which improves the existing cohesion scoring mechanism using the structure of a word tree. Our experimental results show that the proposed method took only 0.008 seconds to extract keywords from 1,000 reviews, with a 15.5% error ratio, which is better than existing morphological analyzers.
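As a reference point, the sketch below computes a frequency-based cohesion score for candidate keywords without a dictionary. The exact formula and the word-tree refinement of the paper are not reproduced; the sketch follows a commonly used definition, cohesion(c1..cn) = (freq(c1..cn) / freq(c1))^(1/(n-1)), and the toy review corpus is illustrative.

```python
# Sketch of frequency-based cohesion scoring for dictionary-free keyword
# extraction, using left-anchored substring counts over tokens.
#   cohesion(c1..cn) = (freq(c1..cn) / freq(c1)) ** (1 / (n - 1))
from collections import Counter

def substring_counts(texts, max_len=6):
    counts = Counter()
    for text in texts:
        for token in text.split():
            for n in range(1, min(len(token), max_len) + 1):
                counts[token[:n]] += 1    # count left-anchored substrings
    return counts

def cohesion(word, counts):
    if len(word) < 2 or counts[word] == 0:
        return 0.0
    return (counts[word] / counts[word[0]]) ** (1 / (len(word) - 1))

reviews = ["맛집 추천해요", "여기 맛집 맞아요", "맛집이 최고", "맛없어요 정말"]
counts = substring_counts(reviews)
for cand in ["맛집", "맛없"]:
    print(cand, round(cohesion(cand, counts), 3))   # higher score = more cohesive
```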

An Iterative Approach to Graph-based Word Sense Disambiguation Using Word2Vec (Word2Vec을 이용한 반복적 접근 방식의 그래프 기반 단어 중의성 해소)

  • O, Dongsuk;Kang, Sangwoo;Seo, Jungyun
    • Korean Journal of Cognitive Science
    • /
    • v.27 no.1
    • /
    • pp.43-60
    • /
    • 2016
  • Recently, unsupervised word sense disambiguation research has focused on graph-based disambiguation, which builds a semantic graph from words collocated in a context or sentence. However, building such a graph over all ambiguous words leads to the unnecessary addition of edges and nodes (and hence increases the error). In contrast, our work uses Word2Vec to consider the words most similar to an ambiguous word in the context or sentence, and rebuilds a graph from the matched words. As a result, we achieve a higher F1-measure than previous methods by using Word2Vec.
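The sketch below only illustrates the similarity-filtering step: using Word2Vec to keep the context words nearest to the ambiguous target before a disambiguation graph is built. The use of gensim, the toy corpus, and the training parameters are assumptions; the graph-based WSD step itself is not shown.

```python
# Sketch of the similarity-filtering idea: keep only context words ranked among
# a target word's nearest Word2Vec neighbours. gensim is an assumed tool choice;
# corpus and parameters are illustrative, not taken from the paper.
from gensim.models import Word2Vec

corpus = [
    ["bank", "river", "water", "shore"],
    ["bank", "money", "loan", "deposit"],
    ["deposit", "money", "account", "bank"],
    ["river", "water", "flow", "shore"],
]

model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=200, seed=1)

def context_filter(target, context, topn=3):
    """Keep only context words ranked among the target's nearest neighbours."""
    neighbours = {w for w, _ in model.wv.most_similar(target, topn=topn)}
    return [w for w in context if w in neighbours]

# Words surviving the filter would become nodes of the rebuilt graph
print(context_filter("bank", ["money", "river", "loan", "flow"]))
```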
