• Title/Abstract/Keyword: Natural language process

Search results: 248 (processing time: 0.076 seconds)

Ternary Decomposition and Dictionary Extension for Khmer Word Segmentation

  • Sung, Thaileang; Hwang, Insoo
    • Journal of Information Technology Applications and Management / Vol. 23, No. 2 / pp.11-28 / 2016
  • In this paper, we propose a dictionary extension and a ternary decomposition technique to improve the effectiveness of Khmer word segmentation. Most word segmentation approaches depend on a dictionary; however, the dictionary in use is not fully reliable and cannot cover all the words of the Khmer language, which leads to unknown or out-of-vocabulary words. Our approach extends the original dictionary with new words to make it more reliable, and we use ternary decomposition for the segmentation process. We also exploit the invisible space character of Khmer Unicode (U+200B) to segment our training corpus. With our segmentation algorithm, based on ternary decomposition and the invisible space, we can extract new words from the training text and add them to the dictionary. We then used the extended wordlist and a segmentation algorithm that does not rely on the invisible space to test unannotated text. Our results markedly outperformed other approaches, achieving precision, recall, and F-measure of 88.8%, 91.8%, and 90.6%, respectively.
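The abstract above includes no code; the following is a minimal, hypothetical sketch of two reusable ideas it describes: harvesting new dictionary entries from a training corpus delimited with the Khmer invisible space (U+200B), and segmenting unannotated text against the extended dictionary. The greedy longest-match segmenter merely stands in for the paper's ternary decomposition, and all names (`extend_dictionary`, `segment`) are illustrative.

```python
# Illustrative sketch only, not the paper's algorithm.
ZWSP = "\u200b"  # zero-width space used as a word delimiter in the training corpus

def extend_dictionary(base_dictionary, training_corpus):
    """Add every token delimited by U+200B in the corpus to the dictionary."""
    extended = set(base_dictionary)
    for line in training_corpus:
        for token in line.split(ZWSP):
            token = token.strip()
            if token:
                extended.add(token)
    return extended

def segment(text, dictionary, max_len=20):
    """Greedy longest-match segmentation against the (extended) dictionary."""
    words, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in dictionary or j == i + 1:  # fall back to a single character
                words.append(text[i:j])
                i = j
                break
    return words

corpus = ["ខ្ញុំ\u200bទៅ\u200bសាលា"]          # toy U+200B-annotated sample
lexicon = extend_dictionary({"ទៅ"}, corpus)  # hypothetical base dictionary
print(segment("ខ្ញុំទៅសាលា", lexicon))
```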

Design and Implementation of Typing Practice Application for Learning Using Web Contents (웹 콘텐츠를 활용한 학습용 타자 연습 어플리케이션의 설계와 구현)

  • Kim, Chaewon; Hwang, Soyoung
    • Journal of Korea Multimedia Society / Vol. 24, No. 12 / pp.1663-1672 / 2021
  • Various typing practice applications exist, and research on learning applications that support typing practice has been reported. These services usually rely on their own built-in text, whereas learners gather a great deal of their study material from web content. Therefore, this paper proposes a learning application that increases the learning effect by collecting large amounts of web content and applying it to typing practice. The proposed application is implemented with Tkinter, Python's GUI module; the BeautifulSoup module is used to extract information from the web; and the extracted data are processed with the NLTK module for English and the KoNLPy module for Korean. The operation of the proposed functions is verified through the implementation and experimental results.
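As a rough illustration of the kind of pipeline the abstract outlines, the sketch below fetches a page, strips the markup with BeautifulSoup, and tokenizes the text with NLTK (English) or KoNLPy's Okt analyzer (Korean). The URL and function names are placeholders, not taken from the paper's implementation.

```python
import requests
from bs4 import BeautifulSoup
from nltk.tokenize import word_tokenize   # requires a one-time nltk.download("punkt")
from konlpy.tag import Okt

def fetch_text(url):
    """Download a page and return its visible text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return soup.get_text(separator=" ", strip=True)

def tokenize(text, lang="en"):
    """Tokenize English text with NLTK or Korean text with KoNLPy's Okt."""
    if lang == "ko":
        return Okt().morphs(text)
    return word_tokenize(text)

if __name__ == "__main__":
    text = fetch_text("https://example.com")   # placeholder URL
    print(tokenize(text)[:20])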

The Effect of the Sentence Location on Arabic Sentiment Analysis

  • Alotaibi, Saud S.
    • International Journal of Computer Science & Network Security / Vol. 22, No. 5 / pp.317-319 / 2022
  • A morphologically rich language such as Arabic requires further investigation and new methods to improve the sentiment analysis task. Using all parts of a document in sentiment analysis may feed unnecessary information to the classifier. This paper therefore presents ongoing work on using sentence location as a feature in Arabic sentiment analysis. The proposed method employs supervised sentiment classification and enriches the feature space model with positional information from the document. The experiments and evaluations conducted in this work show that the proposed feature improves the performance of the classifier for Arabic sentiment analysis compared to the baseline model.
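A minimal sketch of the general idea of sentence location as a classification feature: each token is tagged with the position (first, middle, or last) of the sentence it occurs in before a standard bag-of-words classifier is trained. The tagging scheme, the English toy data, and the logistic regression classifier are assumptions for illustration only; the paper's Arabic setup is not reproduced.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def location_tagged_tokens(document):
    """Split a document into sentences and tag each token with its sentence's
    position in the document (FIRST, MID, or LAST)."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    tagged = []
    for i, sentence in enumerate(sentences):
        tag = "FIRST" if i == 0 else "LAST" if i == len(sentences) - 1 else "MID"
        tagged.extend(f"{tag}_{tok}" for tok in sentence.split())
    return tagged

docs = ["good product. arrived late. still recommend it",
        "looked fine. broke quickly. very disappointed"]
labels = [1, 0]

vectorizer = CountVectorizer(analyzer=location_tagged_tokens)  # custom analyzer callable
X = vectorizer.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vectorizer.transform(["works well. happy with it"])))
```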

Discussions on Auditory-Perceptual Evaluation Performed in Patients With Voice Disorders (음성장애 환자에서 시행되는 청지각적 평가에 대한 논의)

  • Lee, Seung Jin
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / Vol. 32, No. 3 / pp.109-117 / 2021
  • Auditory-perceptual evaluation by speech-language pathologists (SLPs) of patients with voice disorders is often regarded as a touchstone among multi-dimensional voice evaluation procedures and provides important information not available from other assessment modalities. SLPs therefore need to conduct a comprehensive, in-depth evaluation not only of the voice but of the overall speech production mechanism, and they often encounter various difficulties in the evaluation process. In addition, SLPs should strive to avoid bias during the evaluation and to maintain a wide and consistent severity spectrum for each voice quality parameter. Lastly, it is very important for SLPs to take a team approach by documenting and delivering important information on auditory-perceptual characteristics appropriately and efficiently through close communication with laryngologists.

Enhancing the Text Mining Process by Implementation of Average-Stochastic Gradient Descent Weight Dropped Long-Short Memory

  • Annaluri, Sreenivasa Rao; Attili, Venkata Ramana
    • International Journal of Computer Science & Network Security / Vol. 22, No. 7 / pp.352-358 / 2022
  • Text mining is an important process for analyzing data collected from different sources such as video, audio, and social media, and tools such as natural language processing (NLP) are widely used in real-time applications. Earlier research implemented text mining approaches using long short-term memory (LSTM) networks. In this paper, text mining is performed using an averaged stochastic gradient descent weight-dropped (AWD) LSTM to obtain better accuracy and performance. The proposed model is demonstrated on the Internet Movie Database (IMDB) reviews, and Python was used for the implementation because of its adaptability and flexibility with massive datasets. The results show that the proposed LSTM with weight dropping and embedding achieved an accuracy of 88.36%, compared with 85.64% for the previous AWD-LSTM model and 85.16% for a plain LSTM model. Finally, the loss decreased from 0.341 to 0.299 with the proposed model.
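For readers unfamiliar with the weight-dropped part of AWD-LSTM, the PyTorch sketch below shows the core mechanism in isolation: DropConnect applied to the hidden-to-hidden weights of an LSTM cell, trained with averaged SGD (torch.optim.ASGD). It is an illustrative reduction of the technique, not the paper's model or its IMDB pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropLSTMCell(nn.Module):
    """LSTM cell with DropConnect applied to the hidden-to-hidden weights."""

    def __init__(self, input_size, hidden_size, weight_drop=0.5):
        super().__init__()
        self.weight_drop = weight_drop
        self.w_ih = nn.Parameter(torch.randn(4 * hidden_size, input_size) * 0.1)
        self.w_hh = nn.Parameter(torch.randn(4 * hidden_size, hidden_size) * 0.1)
        self.bias = nn.Parameter(torch.zeros(4 * hidden_size))

    def forward(self, x, state):
        h, c = state
        # DropConnect: randomly zero recurrent weights, re-sampled on every call.
        w_hh = F.dropout(self.w_hh, p=self.weight_drop, training=self.training)
        gates = x @ self.w_ih.t() + h @ w_hh.t() + self.bias
        i, f, g, o = gates.chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

cell = WeightDropLSTMCell(input_size=100, hidden_size=128)
optimizer = torch.optim.ASGD(cell.parameters(), lr=0.01)  # the "A" in AWD: averaged SGD
h = c = torch.zeros(4, 128)                               # batch of 4 toy sequences
for _ in range(20):                                       # 20 time steps of random input
    h, c = cell(torch.randn(4, 100), (h, c))
```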

Word-Level Embedding to Improve Performance of Representative Spatio-temporal Document Classification

  • Byoungwook Kim; Hong-Jun Jang
    • Journal of Information Processing Systems / Vol. 19, No. 6 / pp.830-841 / 2023
  • Tokenization is the process of segmenting input text into smaller units, and it is a preprocessing task mainly performed to improve the efficiency of the machine learning process. Various tokenization methods have been proposed for natural language processing, but studies have primarily focused on segmenting text efficiently, and few have explored which tokenization methods are suitable for document classification tasks in Korean. In this paper, an exploratory study was performed to find the tokenization method best suited to improving the performance of a representative spatio-temporal document classifier for Korean. A convolutional neural network model was used for the experiments, and for the final performance comparison, document classification tasks were selected whose performance depends largely on the tokenization method. Commonly used jamo, character, and word units were adopted as the tokenization methods for the comparative experiments. The results confirm that word-level tokenization performs best on the representative spatio-temporal document classification task, where the semantic embedding ability of the token itself is important.
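The three tokenization units compared in the paper can be illustrated with a short, self-contained sketch: jamo tokens obtained by decomposing Hangul syllables with standard Unicode arithmetic, character (syllable) tokens, and whitespace word tokens. The example sentence is arbitrary, and the CNN classifier itself is not shown.

```python
# Hangul jamo tables (standard Unicode layout for composed syllables).
CHOSEONG = [chr(0x1100 + i) for i in range(19)]            # initial consonants
JUNGSEONG = [chr(0x1161 + i) for i in range(21)]           # medial vowels
JONGSEONG = [""] + [chr(0x11A8 + i) for i in range(27)]    # final consonants (index 0 = none)

def jamo_tokens(text):
    """Decompose each Hangul syllable (U+AC00-U+D7A3) into its jamo."""
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:
            idx = code - 0xAC00
            out.append(CHOSEONG[idx // 588])
            out.append(JUNGSEONG[(idx % 588) // 28])
            if idx % 28:
                out.append(JONGSEONG[idx % 28])
        elif not ch.isspace():
            out.append(ch)
    return out

def char_tokens(text):
    return [ch for ch in text if not ch.isspace()]

def word_tokens(text):
    return text.split()

sentence = "서울에서 열린 회의"
print(jamo_tokens(sentence), char_tokens(sentence), word_tokens(sentence), sep="\n")
```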

A Generation-based Text Steganography by Maintaining Consistency of Probability Distribution

  • Yang, Boya; Peng, Wanli; Xue, Yiming; Zhong, Ping
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 11 / pp.4184-4202 / 2021
  • Text steganography combined with natural language generation has become increasingly popular. Existing methods usually embed secret information in the generated words by controlling the sampling during text generation: a candidate pool is constructed by a greedy strategy and only high-probability words are encoded, which damages the statistical properties of the text and seriously affects the security of the steganography. To reduce the influence of the candidate pool on the statistical imperceptibility of the steganography, we propose a steganography method based on a new sampling strategy. Instead of consisting only of high-probability words, the candidate pool is built from words that differ relatively little from an actual sample of the language model, thereby staying consistent with the language model's probability distribution. Moreover, we encode the candidate words according to their probability similarity with the target word, which further maintains the probability distribution. Experimental results show that the proposed method outperforms state-of-the-art steganographic methods in terms of security.
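A hedged, toy-scale sketch of the sampling idea described above: the candidate pool is built from words whose probabilities are closest to that of an actual sample drawn from the model's distribution, and the secret bits then select one candidate. The fixed probability vector stands in for a real language model, and the bit-to-candidate mapping is a simplification of the paper's probability-similarity encoding.

```python
import numpy as np

def build_pool(probs, pool_size=4, rng=None):
    """Sample a word from the model distribution, then keep the pool_size words
    whose probabilities differ least from the sampled word's probability."""
    rng = rng or np.random.default_rng()
    sampled = rng.choice(len(probs), p=probs)
    order = np.argsort(np.abs(probs - probs[sampled]))
    return sorted(order[:pool_size].tolist())

def embed_bits(probs, bits, rng=None):
    """Encode len(bits) secret bits by choosing one of 2**len(bits) candidates."""
    pool = build_pool(probs, pool_size=2 ** len(bits), rng=rng)
    return pool[int("".join(map(str, bits)), 2)]

vocab_probs = np.array([0.30, 0.25, 0.18, 0.12, 0.08, 0.05, 0.02])  # toy next-word distribution
token_id = embed_bits(vocab_probs, bits=[1, 0])
print("emitted token id:", token_id)
```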

Applying document routing mode of information access in nursing diagnosis process (문서 라우팅 기법을 이용한 간호진단 과정에서의 정보접근)

  • Paik, Woo-Jin
    • Proceedings of the Korean Society for Information Management Conference / 13th Annual Conference, 2006 / pp.163-168 / 2006
  • The nursing diagnosis process can be described as nurses assessing patients' conditions by reasoning and looking for patterns that fit the defining characteristics of one or more diagnoses. This process resembles using a typical document retrieval system if we treat patients' conditions as queries, nursing diagnoses as documents, and the defining characteristics as the documents' index terms. However, a typical hospital setting involves a small, fixed number of nursing diagnoses and an effectively unlimited variety of patients' conditions. This situation is better suited to the document routing mode of information access, in which incoming items are matched against a set of archived profiles rather than against individual documents. In this paper, we describe a ROUting-based Nursing Diagnosis (ROUND) system and its natural language processing-based query processing component, which converts the defining characteristics of nursing diagnoses into query representations.
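To make the routing analogy concrete, the sketch below stores each nursing diagnosis as a standing profile of defining characteristics and routes an incoming description of a patient's condition to the most similar profiles using TF-IDF cosine similarity. The example profiles and the similarity measure are hypothetical; they are not the ROUND system's actual knowledge base or matching logic.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical standing profiles: diagnosis name -> defining characteristics.
profiles = {
    "Ineffective airway clearance": "dyspnea diminished breath sounds cough sputum",
    "Acute pain": "guarding behavior self-report of pain grimacing restlessness",
    "Risk for infection": "invasive procedure leukopenia chronic disease wound",
}

vectorizer = TfidfVectorizer()
profile_matrix = vectorizer.fit_transform(profiles.values())

def route(condition, top_k=2):
    """Return the top_k diagnosis profiles most similar to the condition text."""
    scores = cosine_similarity(vectorizer.transform([condition]), profile_matrix)[0]
    ranked = sorted(zip(profiles.keys(), scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]

print(route("patient reports pain and shows guarding and restlessness"))
```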


Fuzzy Theory based Electronic Commerce Navigation Agent that can Process Natural Language (자연어 처리가 가능한 퍼지 이론 기반 전자상거래 검색 에이전트)

  • 김명순; 정환묵
    • Journal of the Korean Institute of Intelligent Systems / Vol. 11, No. 3 / pp.246-251 / 2001
  • In this paper, we propose an intelligent navigation agent model for successful electronic commerce system management. Fuzzy theory is a useful method when keywords carry vague conditions that the system must process, so we propose a model that uses fuzzy theory to handle vague keywords effectively. Through this, we verified that the model yields more appropriate navigation results than any crisp retrieval keyword condition.
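A minimal sketch of how a fuzzy-theoretic agent can handle a vague query term such as "cheap": the term is mapped to a membership function over price, and products are ranked by membership degree rather than filtered by a crisp cutoff. The membership function, thresholds, and product list are illustrative assumptions, not the paper's model.

```python
def cheap_membership(price, low=20000, high=60000):
    """Degree (0..1) to which a price counts as 'cheap' (linear ramp)."""
    if price <= low:
        return 1.0
    if price >= high:
        return 0.0
    return (high - price) / (high - low)

products = [("A", 15000), ("B", 35000), ("C", 48000), ("D", 90000)]

# Rank products by their membership in the fuzzy set "cheap".
ranked = sorted(((name, cheap_membership(price)) for name, price in products),
                key=lambda item: item[1], reverse=True)
print(ranked)   # products with higher membership degrees come first
```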


Out-Of-Domain Detection Using Hierarchical Dirichlet Process

  • Jeong, Young-Seob
    • Journal of the Korea Society of Computer and Information / Vol. 23, No. 1 / pp.17-24 / 2018
  • With improvements in speech recognition and natural language processing, dialog systems have recently been adapted to various service domains. It has become possible to obtain the desired services through conversation with a dialog system, but separate modules such as domain detection, intention detection, named entity recognition, and out-of-domain detection still need to be improved to offer a stable service. When an in-domain sentence of a conversation is misclassified as out-of-domain, the result is poor customer satisfaction and, ultimately, lost business. As there have been relatively few studies on out-of-domain detection, in this paper we introduce a new method based on a hierarchical Dirichlet process and demonstrate its effectiveness through experimental results on a Korean dataset.
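One hedged way to realize this idea with off-the-shelf tools is sketched below: fit an HDP topic model (gensim's HdpModel) on in-domain sentences and flag a new sentence as out-of-domain when its mass on the learned topics falls below a threshold. The toy corpus, threshold, and decision rule are assumptions for illustration; the paper's actual procedure and its Korean dataset are not reproduced.

```python
from gensim.corpora import Dictionary
from gensim.models import HdpModel

# Toy in-domain corpus (restaurant reservations).
in_domain = [
    "book a table for two at seven".split(),
    "reserve a seat near the window".split(),
    "cancel my dinner reservation tonight".split(),
]

dictionary = Dictionary(in_domain)
corpus = [dictionary.doc2bow(sent) for sent in in_domain]
hdp = HdpModel(corpus, id2word=dictionary)   # HDP infers the number of topics itself

def is_out_of_domain(sentence, threshold=0.3):
    """Flag a sentence whose mass on the in-domain topics is below the threshold."""
    bow = dictionary.doc2bow(sentence.split())
    topic_mass = sum(prob for _, prob in hdp[bow])
    return topic_mass < threshold

print(is_out_of_domain("reserve a table for four"))
print(is_out_of_domain("what is the weather on mars"))
```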