Title/Summary/Keyword: part of speech

e-Learning Course Reviews Analysis based on Big Data Analytics (빅데이터 분석을 이용한 이러닝 수강 후기 분석)

  • Kim, Jang-Young; Park, Eun-Hye
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.2 / pp.423-428 / 2017
  • These days, a large and varied volume of educational information is rapidly produced and spread through Internet and smart device usage. As e-Learning use increases, instructors and students (learners) need to maximize learning outcomes and the efficiency of the education system through big data analytics applied to the education history recorded online. In this paper, the author applies the Word2Vec algorithm (a neural network algorithm) to find similarities among education-related words and classifies them with a clustering algorithm, in order to recognize and analyze the recorded data objectively. When the Word2Vec algorithm is applied to education words, semantically related words acquire similar vector values through repeated training and can be found and classified. In addition, the experimental results show that words of the same part of speech (noun, verb, adjective and adverb) share the shortest distance from the centroid found by the clustering algorithm; a minimal sketch of such an embedding-and-clustering pipeline follows below.
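
A minimal sketch of the word-embedding-plus-clustering idea described above, assuming gensim 4.x and scikit-learn; the tokenized review sentences, the cluster count and all hyperparameters are illustrative placeholders, not the paper's data or settings.

```python
# Word2Vec over tokenized course reviews, then k-means clustering of the learned vectors.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
import numpy as np

# Each review is pre-tokenized into a list of words (e.g. by a Korean morphological analyzer).
reviews = [
    ["강의", "내용", "좋다"],
    ["설명", "쉽다", "좋다"],
    ["과제", "어렵다"],
]

# Learn word vectors; semantically related words converge to nearby vectors.
w2v = Word2Vec(reviews, vector_size=100, window=5, min_count=1, sg=1, epochs=50)

# Find words most similar to a query word.
print(w2v.wv.most_similar("좋다", topn=3))

# Cluster the learned vectors and inspect each word's distance to its cluster centroid.
vectors = w2v.wv.vectors
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
for word, vec, label in zip(w2v.wv.index_to_key, vectors, kmeans.labels_):
    dist = np.linalg.norm(vec - kmeans.cluster_centers_[label])
    print(f"{word}\tcluster={label}\tdistance_to_centroid={dist:.3f}")
```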

A Method to Classify and Recognize Spelling Changes between Morphemes of a Korean Word (한국어 어절의 철자변화 현상 분류와 인식 방법)

  • 김덕봉
    • Journal of KIISE: Software and Applications / v.30 no.5_6 / pp.476-486 / 2003
  • Part-of-speech tagged corpora of Korean contain no explicit information about spelling changes. This makes it difficult to acquire data for the study of Korean morphology, that is, to construct a dictionary for morphological analysis automatically and to collect spelling-change phenomena from the corpora systematically. To solve this problem, this paper presents a method that recognizes spelling changes between the morphemes of a Korean word in tagged corpora using only string matching, without a dictionary or phonological rules. Because it uses no phonological rules, the method recognizes spelling changes robustly, and it can be implemented at little cost. The method was tested on a large tagged corpus of Korean and recognized 100% of the spelling changes in the corpus accurately; a simplified string-matching sketch of the idea follows below.
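
A simplified illustration, not the paper's exact procedure: compare the surface word form with the concatenation of its tagged morphemes and isolate the region where they differ by plain string matching.

```python
def spelling_change(surface, morphemes):
    """Return (surface_segment, morpheme_segment) where the two strings diverge, or None."""
    restored = "".join(morphemes)
    if restored == surface:
        return None  # the morphemes concatenate to the surface form unchanged

    # Longest common prefix.
    p = 0
    while p < min(len(surface), len(restored)) and surface[p] == restored[p]:
        p += 1
    # Longest common suffix that does not overlap the prefix.
    s = 0
    while (s < min(len(surface), len(restored)) - p
           and surface[-1 - s] == restored[-1 - s]):
        s += 1
    return surface[p:len(surface) - s], restored[p:len(restored) - s]

# 갔다 tagged as 가/VV + 았/EP + 다/EF: the changed region is 갔 vs. 가았.
print(spelling_change("갔다", ["가", "았", "다"]))    # ('갔', '가았')
print(spelling_change("먹었다", ["먹", "었", "다"]))  # None
```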

Light Weight Korean Morphological Analysis Using Left-longest-match-preference model and Hidden Markov Model (좌최장일치법과 HMM을 결합한 경량화된 한국어 형태소 분석)

  • Kang, Sangwoo; Yang, Jaechul; Seo, Jungyun
    • Korean Journal of Cognitive Science / v.24 no.2 / pp.95-109 / 2013
  • With the rapid evolution of the personal device environment, the demand for natural language applications is increasing. This paper proposes a morpheme segmentation and part-of-speech tagging model, the first-step module of natural language processing for many applications, designed for mobile devices with limited hardware resources. To reduce the number of morpheme candidates in morphological analysis, the proposed model adds highly probable morpheme candidates to the original outputs of a conventional left-longest-match-preference method. To reduce computational cost and memory usage, the model simplifies the calculation of the observation probability of a word consisting of one or more morphemes in a conventional hidden Markov model; a simplified decoding sketch follows below.

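The HMM side of such a tagger can be illustrated with a small Viterbi decoder over per-word candidate analyses. This is a generic sketch with assumed log-probabilities and hand-written candidates; it is not the paper's simplified observation model or its left-longest-match candidate generator.

```python
# Viterbi decoding over candidate analyses per word position (log-probability arithmetic).
def viterbi(candidates, trans, init):
    """candidates: list (per word) of lists of (analysis, log observation probability)."""
    # best[i][a] = (best score ending in analysis a at position i, backpointer)
    best = [{a: (init.get(a, -9.0) + p, None) for a, p in candidates[0]}]
    for i in range(1, len(candidates)):
        layer = {}
        for a, p in candidates[i]:
            prev_a, prev_score = max(
                ((pa, s) for pa, (s, _) in best[i - 1].items()),
                key=lambda x: x[1] + trans.get((x[0], a), -9.0),
            )
            layer[a] = (prev_score + trans.get((prev_a, a), -9.0) + p, prev_a)
        best.append(layer)
    # Backtrack from the best final analysis.
    a = max(best[-1], key=lambda k: best[-1][k][0])
    path = [a]
    for i in range(len(best) - 1, 0, -1):
        a = best[i][a][1]
        path.append(a)
    return list(reversed(path))

# Example: two eojeol positions with hypothetical (morpheme/tag) candidates and scores.
cands = [
    [("나/NP+는/JX", -0.5), ("나/VV+는/ETM", -1.5)],
    [("학교/NNG+에/JKB", -0.3)],
]
trans = {("나/NP+는/JX", "학교/NNG+에/JKB"): -0.2}
init = {"나/NP+는/JX": -0.1, "나/VV+는/ETM": -0.8}
print(viterbi(cands, trans, init))
```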

Lip Shape Synthesis of the Korean Syllable for Human Interface (휴먼인터페이스를 위한 한글음절의 입모양합성)

  • 이용동; 최창석; 최갑석
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.4 / pp.614-623 / 1994
  • Synthesizing speech and facial images is necessary for a human interface in which man and machine converse as naturally as humans do. The target of this paper is synthesizing the facial images. A three-dimensional (3-D) shape model of the face is used to realize variations in facial expression and lip shape. Various facial expressions and lip shapes harmonized with the syllables are synthesized by deforming the 3-D model on the basis of facial muscular actions. Combinations of the consonants and vowels make 14,364 syllables. The vowels dominate most lip shapes, while the consonants affect a part of them. To determine the lip shapes, this paper investigates all the syllables and classifies the lip shape patterns according to the vowels and consonants. As a result, the lip shapes are classified into 8 patterns for the vowels and 2 patterns for the consonants, and synthesis rules are determined for the classified patterns. This method makes it possible to obtain natural facial images with various facial expressions and lip shape patterns; a sketch of a syllable-to-lip-pattern lookup is given below.

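An illustrative syllable-to-lip-shape lookup: the Hangul decomposition follows the standard Unicode formula, but the grouping of vowels and consonants into pattern classes here is a placeholder, not the 8 vowel patterns and 2 consonant patterns determined in the paper.

```python
# Decompose a precomposed Hangul syllable into initial consonant and vowel, then map to
# hypothetical lip-shape pattern classes.
CHOSEONG = ["ㄱ","ㄲ","ㄴ","ㄷ","ㄸ","ㄹ","ㅁ","ㅂ","ㅃ","ㅅ","ㅆ","ㅇ","ㅈ","ㅉ","ㅊ","ㅋ","ㅌ","ㅍ","ㅎ"]
JUNGSEONG = ["ㅏ","ㅐ","ㅑ","ㅒ","ㅓ","ㅔ","ㅕ","ㅖ","ㅗ","ㅘ","ㅙ","ㅚ","ㅛ","ㅜ","ㅝ","ㅞ","ㅟ","ㅠ","ㅡ","ㅢ","ㅣ"]

# Hypothetical vowel groups standing in for the paper's vowel-driven patterns.
VOWEL_PATTERN = {"ㅏ": "open", "ㅐ": "open", "ㅓ": "open",
                 "ㅗ": "rounded", "ㅜ": "rounded", "ㅛ": "rounded", "ㅠ": "rounded",
                 "ㅣ": "spread", "ㅡ": "spread", "ㅔ": "spread"}
# Bilabial initials close the lips at onset (standing in for a consonant-driven pattern).
BILABIAL = {"ㅁ", "ㅂ", "ㅃ", "ㅍ"}

def lip_pattern(syllable):
    code = ord(syllable) - 0xAC00
    if not 0 <= code < 11172:
        raise ValueError("not a precomposed Hangul syllable")
    cho = CHOSEONG[code // (21 * 28)]
    jung = JUNGSEONG[(code % (21 * 28)) // 28]
    vowel_shape = VOWEL_PATTERN.get(jung, "other")
    onset_shape = "closed-onset" if cho in BILABIAL else "neutral-onset"
    return onset_shape, vowel_shape

print(lip_pattern("마"))  # ('closed-onset', 'open')
print(lip_pattern("구"))  # ('neutral-onset', 'rounded')
```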

Expansion of Word Representation for Named Entity Recognition Based on Bidirectional LSTM CRFs (Bidirectional LSTM CRF 기반의 개체명 인식을 위한 단어 표상의 확장)

  • Yu, Hongyeon; Ko, Youngjoong
    • Journal of KIISE / v.44 no.3 / pp.306-313 / 2017
  • Named entity recognition (NER) seeks to locate and classify named entities in text into pre-defined categories such as names of persons, organizations, locations, expressions of time, etc. Recently, many state-of-the-art NER systems have been implemented with bidirectional LSTM CRFs. Deep learning models based on long short-term memory (LSTM) generally depend on word representations as input. In this paper, we propose an approach that expands the word representation by combining pre-trained word embeddings, part-of-speech (POS) tag embeddings, syllable embeddings and named entity dictionary feature vectors. Our experiments show that the proposed approach creates useful word representations as input to bidirectional LSTM CRFs: the final results are 8.05%p higher than baseline NERs that use only the pre-trained word embedding vector. A sketch of the expanded representation is given below.
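
A minimal sketch of the expanded word representation: concatenate a pre-trained word vector, a POS-tag embedding, a syllable-level vector and a named-entity dictionary feature vector. All lookup tables, dimensions and the syllable-averaging choice are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
word_emb = {"서울": rng.normal(size=100)}           # pre-trained word embeddings
pos_emb = {"NNP": rng.normal(size=10)}              # POS tag embeddings
syl_emb = {s: rng.normal(size=5) for s in "서울"}    # syllable embeddings
NE_DICT = {"서울": "LOC"}                            # named-entity dictionary
NE_TYPES = ["PER", "LOC", "ORG", "DAT"]

def represent(word, pos):
    w = word_emb.get(word, np.zeros(100))
    p = pos_emb.get(pos, np.zeros(10))
    # Average the syllable vectors as a simple syllable-level summary.
    s = np.mean([syl_emb.get(c, np.zeros(5)) for c in word], axis=0)
    # One-hot dictionary feature: which NE type (if any) the word matches.
    d = np.zeros(len(NE_TYPES))
    if word in NE_DICT:
        d[NE_TYPES.index(NE_DICT[word])] = 1.0
    return np.concatenate([w, p, s, d])              # input vector for the BiLSTM-CRF

print(represent("서울", "NNP").shape)  # (119,)
```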

Computational Processing of Korean Dialogue and the Construction of Its Representation Structure Based on Situational Information (상황정보에 기반한 한국어대화의 전산적 처리와 표상구조의 구축)

  • Lee, Dong-Young
    • The KIPS Transactions: Part B / v.9B no.6 / pp.817-826 / 2002
  • In Korean dialogue, honorification may occur, an honorific pronoun may be used, and a subject or object may be omitted entirely when it can be recovered from context. This paper proposes that, in order to process Korean dialogue in which such distinct linguistic phenomena occur and to construct its representation structure, the following information should be marked and used explicitly rather than implicitly: information about the dialogue participants, the speech act of each utterance, the relative order of social status among the people involved in the dialogue, and the information flow among the utterances. The paper presents a method of marking and using this situational information and an appropriate representation structure for Korean dialogue, set up by modifying and extending DRT (Discourse Representation Theory) and SDRT (Segmented Discourse Representation Theory). Furthermore, the paper shows how to process Korean dialogue computationally and construct its representation structure in the Prolog programming language, and applies the representation structure to spontaneous Korean dialogue to verify its validity; a record of such situational information is sketched below.
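
A Python analogue (the paper itself works in Prolog) of explicitly marked situational information for a dialogue; the field names and example values are illustrative assumptions, not the paper's representation structure.

```python
from dataclasses import dataclass, field

@dataclass
class Utterance:
    speaker: str
    addressee: str
    speech_act: str                  # e.g. "inform", "request", "question"
    honorific: bool                  # whether honorification is marked in the utterance
    content: str
    recovered_args: dict = field(default_factory=dict)  # omitted subject/object filled from context

@dataclass
class DialogueContext:
    participants: list
    status_order: list               # participants ordered from higher to lower social status
    utterances: list = field(default_factory=list)

    def add(self, utt):
        # Omitted arguments would be recovered from the preceding utterances before storing.
        self.utterances.append(utt)

ctx = DialogueContext(participants=["교수", "학생"], status_order=["교수", "학생"])
ctx.add(Utterance(speaker="학생", addressee="교수", speech_act="question",
                  honorific=True, content="언제 오십니까?",
                  recovered_args={"subject": "교수"}))
print(ctx.utterances[0].speech_act, ctx.utterances[0].recovered_args)
```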

Context-sensitive Word Error Detection and Correction for Automatic Scoring System of English Writing (영작문 자동 채점 시스템을 위한 문맥 고려 단어 오류 검사기)

  • Choi, Yong Seok; Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering / v.4 no.1 / pp.45-56 / 2015
  • In this paper, we present a method that detects context-sensitive word errors and generates correction candidates. Spelling error detection is one of the most widespread research topics; however, the approach proposed in this paper is adjusted for an automated English scoring system. A common strategy in context-sensitive word error detection is to use a pre-defined confusion set to generate correction candidates. We generate a confusion set automatically in order to reflect the characteristics of sentences written by second-language learners. We also define a class of word errors that a conventional grammar checker cannot detect because of part-of-speech ambiguity, and propose how to detect such errors and generate correction candidates for them. An experiment is performed on English writings composed by junior high school students whose mother tongue is Korean. The F1 value of the proposed method is 70.48%, which shows that our method is promising compared with the current state of the art; a simplified confusion-set sketch follows below.
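
A simplified confusion-set sketch of context-sensitive error detection: each word that belongs to a confusion set is re-scored against its alternatives with a context model, and it is flagged when an alternative scores clearly higher. The confusion sets and the bigram scores below are placeholders; the paper builds its confusion sets automatically from learner data.

```python
CONFUSION_SETS = [{"advice", "advise"}, {"affect", "effect"}]
BIGRAM_LOGPROB = {("my", "advice"): -2.0, ("my", "advise"): -6.5,
                  ("to", "advise"): -2.5, ("to", "advice"): -5.0}

def score(prev_word, word):
    return BIGRAM_LOGPROB.get((prev_word, word), -8.0)

def detect_errors(tokens, margin=1.0):
    """Yield (position, original word, best correction candidate) for suspected errors."""
    for i, word in enumerate(tokens):
        prev = tokens[i - 1] if i > 0 else "<s>"
        for conf in CONFUSION_SETS:
            if word in conf:
                best = max(conf, key=lambda w: score(prev, w))
                if best != word and score(prev, best) - score(prev, word) > margin:
                    yield i, word, best

print(list(detect_errors(["my", "advise", "is", "simple"])))  # [(1, 'advise', 'advice')]
```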

VOC Summarization and Classification based on Sentence Understanding (구문 의미 이해 기반의 VOC 요약 및 분류)

  • Kim, Moonjong; Lee, Jaean; Han, Kyouyeol; Ahn, Youngmin
    • KIISE Transactions on Computing Practices / v.22 no.1 / pp.50-55 / 2016
  • To understand customers' opinions of or demands on a company's products or services, it is important to analyze VOC (Voice of Customer) data; however, it is difficult to grasp the context of VOC because of segmented and duplicated sentences and a variety of dialogue contexts. In this article, POS (part of speech) tags and morphemes were selected as language resources because of their semantic importance in the documents, and based on these we defined LSPs (Lexico-Semantic Patterns) to understand the structure and semantics of the sentences and to extract a summary of key sentences; furthermore, the LSPs were used to connect segmented sentences and to remove contextual repetition. We also defined LSPs by category and classified the documents according to the categories of the main sentences matched by the LSPs. In the experiment, we classified the VOC documents, created summaries, and compared the results with previous methodologies; an illustrative LSP match is sketched below.
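
An illustrative lexico-semantic-pattern match over (morpheme, POS) pairs. The patterns, the wildcard notation and the category labels are placeholders, not the LSPs defined in the paper.

```python
# A pattern element is either a literal morpheme ("환불") or a POS wildcard ("*VV").
LSP_CATEGORIES = {
    "refund-request": [["환불", "*VV"], ["취소", "하", "*EF"]],
    "quality-complaint": [["*NNG", "고장"], ["불량", "*VV"]],
}

def matches(pattern, tagged):
    """True if the pattern occurs as a contiguous subsequence of the tagged sentence."""
    n, m = len(tagged), len(pattern)
    for start in range(n - m + 1):
        ok = True
        for (morph, pos), elem in zip(tagged[start:start + m], pattern):
            ok = (pos == elem[1:]) if elem.startswith("*") else (morph == elem)
            if not ok:
                break
        if ok:
            return True
    return False

def classify(tagged_sentence):
    return [cat for cat, patterns in LSP_CATEGORIES.items()
            if any(matches(p, tagged_sentence) for p in patterns)]

# Example: "환불해 주세요" analyzed as morpheme/POS pairs.
sentence = [("환불", "NNG"), ("하", "VV"), ("주", "VX"), ("세요", "EF")]
print(classify(sentence))  # ['refund-request']
```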

Audio Segmentation and Classification Using Support Vector Machine and Fuzzy C-Means Clustering Techniques (서포트 벡터 머신과 퍼지 클러스터링 기법을 이용한 오디오 분할 및 분류)

  • Nguyen, Ngoc; Kang, Myeong-Su; Kim, Cheol-Hong; Kim, Jong-Myon
    • The KIPS Transactions: Part B / v.19B no.1 / pp.19-26 / 2012
  • The rapid increase of information imposes new demands on content management. The purpose of automatic audio segmentation and classification is to meet this rising need for efficient content management. For this reason, this paper proposes a high-accuracy algorithm that segments audio signals and classifies them into classes such as speech, music, silence, and environmental sounds. The proposed algorithm uses a support vector machine (SVM) to detect audio cuts, the boundaries between different kinds of sound, from the parameter sequence. We then extract feature vectors composed of statistical data and use them as input to a fuzzy c-means (FCM) classifier to partition the audio segments into the different classes. To evaluate the proposed SVM-FCM based algorithm, we consider precision and recall rates for segmentation and accuracy for classification, and we compare the proposed algorithm with other methods, including binary and FCM classifiers, in terms of segmentation performance. Experimental results show that the proposed algorithm outperforms the other methods in both precision and recall rates; a compact sketch of the two stages follows below.
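
A compact sketch of the two stages described above, assuming scikit-learn for the SVM and a hand-written fuzzy c-means loop; the random feature vectors and labels stand in for real statistical audio features and are not the paper's data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# --- Stage 1: SVM-based audio-cut detection (synthetic frame features and labels). ---
X_train = rng.normal(size=(200, 8))
y_train = (X_train[:, 0] > 0.8).astype(int)         # 1 = frame at a boundary between sound types
cut_detector = SVC(kernel="rbf").fit(X_train, y_train)
frames = rng.normal(size=(50, 8))
cut_flags = cut_detector.predict(frames)

# --- Stage 2: fuzzy c-means over per-segment statistical feature vectors. ---
def fcm(X, c=4, m=2.0, iters=100):
    """Return (memberships u of shape (c, n), cluster centers of shape (c, d))."""
    n = len(X)
    u = rng.random((c, n))
    u /= u.sum(axis=0)
    for _ in range(iters):
        um = u ** m
        centers = um @ X / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)
    return u, centers

segments = rng.normal(size=(30, 8))                  # one feature vector per audio segment
u, centers = fcm(segments, c=4)                      # e.g. speech / music / silence / environment
print(cut_flags[:10], u.argmax(axis=0)[:10])
```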

An Implementation of Interactive Voice Recognition Stock Trading System Using VoiceXML (VoiceXML을 이용한 대화형 음성 인식 증권 거래 시스템 구현)

  • Cho, Chang-Su; Shin, Jeong-Hoon; Hong, Kwang-Seok
    • The KIPS Transactions: Part B / v.11B no.4 / pp.517-526 / 2004
  • In this paper, we implement a practical application service using VoiceXML. Developers can exploit the advantages of VoiceXML, such as reduced development time and the sharing of contents between applications. Until now, speech-related services have been developed with APIs and programming languages such as C/C++ or with proprietary development tools, methods that depend on the system architecture; for this reason, reuse of contents and resources has been very difficult, and when developers want to change the scenario of an application service or change the platform, they have to edit and recompile their program sources. To solve these problems, companies now develop their applications using VoiceXML, but there is little understanding of the practical problems that can occur when it is used. To address this, we implemented a stock trading system using VoiceXML, identified the problems that arose while developing the service, proposed solutions to them, and analyzed the strengths and weaknesses of applications built with the suggested system.