• Title/Summary/Keyword: Natural language

Search results: 1,519

Development of Fuzzy Inference Mechanism for Intelligent Data and Information Processing (지능적 정보처리를 위한 퍼지추론기관의 구축)

  • 송영배
    • Spatial Information Research
    • /
    • v.7 no.2
    • /
    • pp.191-207
    • /
    • 1999
  • The data and information needed to solve spatial decision-making problems are often imperfect or inaccurate, and most are described in natural language. For a computer to process this kind of information, vague linguistic values must be described quantitatively so that the machine can interpret the natural language used by humans. Fuzzy set theory and fuzzy logic are the representative methodologies for this purpose. This paper therefore describes the construction of a language model based on natural language that users can easily understand, together with the logical concepts and the construction process for building the fuzzy inference mechanism. This makes it possible to solve spatially related decision-making problems intelligently through computer-based structuring and inference, even when the evaluation criteria or decision problems are stated imprecisely and rest on inaccurate or indistinct data and information (a minimal membership-function sketch follows this entry).

  • PDF
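
The paper's actual language model and rule base are not reproduced in the abstract, so the sketch below only illustrates the general idea it relies on: a vague linguistic value such as "near" is quantified by a membership function, and rules over such values are combined with fuzzy min/max operators. The terms, breakpoints, and the single rule here are hypothetical.

```python
# Minimal fuzzy-inference sketch. The linguistic terms ("near",
# "cheap"), their breakpoints, and the rule are illustrative only;
# the paper's actual membership functions are not in the abstract.

def near(distance_km):
    """Degree to which a site counts as 'near': 1 at 0 km, 0 beyond 5 km."""
    return max(0.0, min(1.0, (5.0 - distance_km) / 5.0))

def cheap(price):
    """Degree to which a price counts as 'cheap': 1 below 100, 0 above 300."""
    return max(0.0, min(1.0, (300.0 - price) / 200.0))

def suitability(distance_km, price):
    """Fuzzy rule 'IF near AND cheap THEN suitable', with AND = min."""
    return min(near(distance_km), cheap(price))

if __name__ == "__main__":
    for site, (d, p) in {"A": (1.0, 120.0), "B": (4.0, 280.0)}.items():
        print(site, round(suitability(d, p), 2))   # A: 0.8, B: 0.1
```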

Syntactic Structured Framework for Resolving Reflexive Anaphora in Urdu Discourse Using Multilingual NLP

  • Nasir, Jamal A.;Din, Zia Ud.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.4
    • /
    • pp.1409-1425
    • /
    • 2021
  • In the wide-ranging information society, fast and easy access to information in the language of one's choice is indispensable, and it may be provided by various multilingual Natural Language Processing (NLP) applications. Natural language text contains references among different language elements, called anaphoric links. Resolving anaphoric links is a key problem in NLP, and anaphora resolution is an essential part of NLP applications. Anaphoric links need to be properly interpreted for a clear understanding of natural languages, so a mechanism is desirable for identifying and resolving these naturally occurring links. In this paper, a framework based on Hobbs' syntactic approach and the system developed by Lappin & Leass is proposed for resolving reflexive anaphoric links in Urdu text documents. Generally, the anaphora resolution process takes three main steps: identification of the anaphor, location of the candidate antecedent(s), and selection of the appropriate antecedent. The proposed framework explores the syntactic structure of reflexive anaphors to find features for constructing heuristic rules, from which an algorithm for resolving these anaphoric references is developed (a skeleton in that three-step shape is sketched below). The system takes Urdu text containing reflexive anaphors as input and outputs Urdu text with the reflexive anaphoric links resolved. Despite the scarcity of Urdu resources, our results are encouraging. The proposed framework can be utilized in multilingual NLP (m-NLP) applications.
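
The abstract names the three canonical steps but not the concrete rules, so the following is only a hypothetical skeleton of a rule-based reflexive resolver in that three-step shape; the token format, the romanized reflexive forms, and the "nearest preceding subject" heuristic are all assumptions, not the paper's algorithm.

```python
# Hypothetical three-step reflexive-anaphora resolver skeleton.
# Tokens are (word, pos, role) triples; nothing here mirrors the
# paper's actual Urdu rules, which the abstract does not spell out.

REFLEXIVES = {"khud", "apne aap"}   # assumed romanized Urdu reflexive forms

def identify_anaphors(tokens):
    """Step 1: find positions of reflexive anaphors."""
    return [i for i, (w, _, _) in enumerate(tokens) if w in REFLEXIVES]

def candidate_antecedents(tokens, i):
    """Step 2: collect nouns preceding the anaphor."""
    return [j for j in range(i) if tokens[j][1] == "NOUN"]

def select_antecedent(tokens, candidates):
    """Step 3: heuristic rule - prefer the nearest preceding subject."""
    subjects = [j for j in candidates if tokens[j][2] == "SUBJ"]
    pool = subjects or candidates
    return pool[-1] if pool else None

def resolve(tokens):
    links = {}
    for i in identify_anaphors(tokens):
        links[i] = select_antecedent(tokens, candidate_antecedents(tokens, i))
    return links

# "Ali ne khud ko dekha" -> "Ali saw himself"
tokens = [("Ali", "NOUN", "SUBJ"), ("ne", "PRT", "-"),
          ("khud", "PRON", "OBJ"), ("ko", "PRT", "-"), ("dekha", "VERB", "-")]
print(resolve(tokens))   # {2: 0} -> "khud" links back to "Ali"
```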

Building Hybrid Stop-Words Technique with Normalization for Pre-Processing Arabic Text

  • Atwan, Jaffar
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.7
    • /
    • pp.65-74
    • /
    • 2022
  • In natural language processing, commonly used words such as prepositions are referred to as stop-words; they have no inherent meaning and are therefore ignored in indexing and retrieval tasks. Removing stop-words from Arabic text has a significant impact on reducing the size of a corpus, which in turn improves the effectiveness and performance of Arabic-language processing systems. This study investigated the effectiveness of applying stop-word list elimination with normalization as a preprocessing step. The idea was to merge a statistical method with a linguistic method to attain the best efficacy, and to compare the effects of this two-pronged approach on reducing corpus size for Arabic natural language processing systems. Three stop-word lists were considered: an Arabic Text Lookup Stop-list, a frequency-based stop-list using Zipf's law, and a Combined Stop-list. An experiment was conducted using a selected file from the Arabic Newswire data set, comparing the size of the corpus after removing the words contained in each list. The results showed that the best reduction in size was achieved by the Combined Stop-list with normalization, with a word-count reduction of 452,930 and a compression rate of 30% (a minimal preprocessing sketch follows below).
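
The stop-lists themselves are not given in the abstract, so the sketch below only illustrates the preprocessing shape it describes: normalize common Arabic letter variants, drop words found in a combined stop-list (here a tiny, hypothetical one), and report the resulting compression rate.

```python
# Hypothetical Arabic preprocessing sketch: normalization, stop-word
# removal, and compression rate. The real stop-lists and the Arabic
# Newswire setup are not reproduced here.

STOP_WORDS = {"في", "من", "على", "إلى"}   # tiny illustrative list only

def normalize(word):
    """Fold common Arabic letter variants into one canonical form."""
    for src, dst in (("أ", "ا"), ("إ", "ا"), ("آ", "ا"), ("ة", "ه"), ("ى", "ي")):
        word = word.replace(src, dst)
    return word

def remove_stop_words(text):
    """Return the kept words and the achieved compression rate."""
    stop = {normalize(s) for s in STOP_WORDS}
    words = [normalize(w) for w in text.split()]
    kept = [w for w in words if w not in stop]
    rate = 1 - len(kept) / len(words) if words else 0.0
    return kept, rate

kept, rate = remove_stop_words("ذهب الولد إلى المدرسة في الصباح")
print(kept, f"compression: {rate:.0%}")
```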

Analysis of the Korean Tokenizing Library Module (한글 토크나이징 라이브러리 모듈 분석)

  • Lee, Jae-kyung;Seo, Jin-beom;Cho, Young-bok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.78-80
    • /
    • 2021
  • Currently, research on natural language processing (NLP) is evolving rapidly. Natural language processing is a technology that allows computers to analyze the meaning of language used in everyday life, and it is applied in fields such as speech recognition, spell checking, and text classification. The most commonly used NLP library at present is NLTK, which is based on English and is therefore at a disadvantage for Korean language processing. After introducing KoNLPy and soynlp, two Korean tokenizing libraries, we analyze their morphological analysis and processing techniques, compare the modules of soynlp that complement KoNLPy's shortcomings, and discuss their use in natural language processing models (a brief usage sketch follows this entry).

  • PDF
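
As a concrete illustration of the two libraries compared above, a minimal sketch assuming both packages are installed (KoNLPy additionally needs a Java runtime). The sample sentence is arbitrary, and the cohesion-score tokenizer setup follows the libraries' documented usage rather than anything specific to the paper.

```python
# Minimal sketch of the two Korean tokenizers compared in the paper.
# Assumes `pip install konlpy soynlp` and a Java runtime for KoNLPy.
from konlpy.tag import Okt
from soynlp.word import WordExtractor
from soynlp.tokenizer import LTokenizer

text = "자연어 처리는 컴퓨터가 언어를 분석하는 기술이다"

# KoNLPy: dictionary-based morphological analysis via the Okt tagger.
okt = Okt()
print(okt.morphs(text))        # morphemes
print(okt.nouns(text))         # nouns only

# soynlp: unsupervised; learns word scores from a raw corpus instead of
# a dictionary. A one-sentence "corpus" is far too small for meaningful
# scores and is used here only to keep the sketch self-contained.
extractor = WordExtractor()
extractor.train([text])
scores = {w: s.cohesion_forward for w, s in extractor.extract().items()}
print(LTokenizer(scores=scores).tokenize(text))
```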

Subword Neural Language Generation with Unlikelihood Training

  • Iqbal, Salahuddin Muhammad;Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.2
    • /
    • pp.45-50
    • /
    • 2020
  • A language model based on neural networks is commonly trained with a likelihood loss so that the model learns the sequence of human text. State-of-the-art results have been achieved in various language generation tasks, e.g., text summarization, dialogue response generation, and text generation, by utilizing the language model's next-token output probabilities. Monotonous and repetitive outputs are a well-known problem of such models, yet only a few solutions have been proposed to address it. Several decoding techniques have been proposed to suppress repetitive tokens. Unlikelihood training approaches this problem by penalizing the probabilities of candidate tokens that have already been seen in previous steps. While this method successfully yields less repetitive generated text, it has a large memory consumption because training requires a large vocabulary. We effectively reduce the memory footprint by encoding words as sequences of subword units. Finally, we report results competitive with token-level unlikelihood training in several automatic evaluations compared to the previous work (a minimal loss sketch follows below).
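
The abstract builds on token-level unlikelihood training (penalizing candidates already seen in the context); a minimal PyTorch sketch of that loss, in the form introduced by Welleck et al., is below. The toy logits, the batch shape, and the candidate choice (all previous context tokens except the current target) are illustrative assumptions.

```python
# Minimal token-level unlikelihood loss sketch (Welleck et al. style):
# the usual likelihood term on the target, plus a penalty -log(1 - p(c))
# for each candidate token c already seen in the preceding context.
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, targets, alpha=1.0):
    """logits: (T, V) next-token logits; targets: (T,) gold token ids."""
    log_probs = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_probs, targets)                 # likelihood term

    probs = log_probs.exp()
    penalty = torch.zeros(())
    for t in range(1, targets.size(0)):
        candidates = set(targets[:t].tolist()) - {targets[t].item()}
        for c in candidates:                             # previously seen tokens
            p = probs[t, c].clamp(max=1 - 1e-6)          # avoid log(0)
            penalty = penalty - torch.log1p(-p)          # -log(1 - p(c))
    return nll + alpha * penalty / targets.size(0)

logits = torch.randn(5, 100)                             # toy: T=5, vocab=100
targets = torch.tensor([3, 7, 3, 9, 7])                  # repeats get penalized
print(unlikelihood_loss(logits, targets))
```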

Why Korean Is Not a Regular Language: A Proof

  • No, Yong-Kyoon
    • Language and Information
    • /
    • v.5 no.2
    • /
    • pp.1-8
    • /
    • 2001
  • Natural language string sets are known to require a grammar with a generative capacity slightly beyond that of Context Free Grammars. Proofs regarding the complexity of natural language have involved particular properties of languages like English, Swiss German, and Bambara. While it is not very difficult to prove that Korean is more complex than the simplest of the many infinite sets, no proof of this has been given in the literature. I identify two types of center embedding in Korean and use them in proving that Korean is not a regular set, i.e. that no FSA can recognize its string set. The regular language {i salam i (i salam ul)^j michi (key ha)^k essta | j, k ≥ 0} is intersected with Korean, to give {i salam i (i salam ul)^j michi (key ha)^k essta | j, k ≥ 0 and j ≤ k}. This latter language is proved to be nonregular. As the class of regular sets is closed under intersection, Korean cannot be regular (the closure argument is restated in a short math block after this entry).

  • PDF
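
To make the closure argument explicit, a short restatement in LaTeX, abbreviating the fixed Korean words as abstract symbols. The pumping step shown is the standard one for languages of the shape {a^j b^k : j ≤ k} and is an assumption about how the paper's nonregularity proof proceeds; the intersection identity itself is taken from the abstract.

```latex
% Abbreviate u = "i salam i", a = "i salam ul", m = "michi",
% b = "key ha", e = "essta". The regular language from the abstract is
% R = { u a^j m b^k e | j, k >= 0 }, and K denotes Korean.
\[
  K \cap R \;=\; \{\, u\,a^{j}\,m\,b^{k}\,e \;\mid\; j,k \ge 0,\ j \le k \,\}.
\]
% Nonregularity of the intersection (standard pumping step, assumed):
% if K \cap R were regular with pumping length p, take s = u a^p m b^p e
% and write s = xyz with |xy| <= |u| + p and |y| >= 1, y inside the
% prefix u a^p. Pumping y up either corrupts the fixed prefix u or
% inflates a^p so that j > k; either way x y^2 z \notin K \cap R, a
% contradiction. Since regular sets are closed under intersection with
% regular sets, K itself cannot be regular.
```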

A Survey on Deep Learning-based Pre-Trained Language Models (딥러닝 기반 사전학습 언어모델에 대한 이해와 현황)

  • Sangun Park
    • The Journal of Bigdata
    • /
    • v.7 no.2
    • /
    • pp.11-29
    • /
    • 2022
  • Pre-trained language models are the most important and widely used tools in natural language processing tasks. Because they have been pre-trained on a large corpus, high performance can be expected even when fine-tuning with a small amount of data. Since the elements necessary for implementation, such as a pre-trained tokenizer and a deep learning model with pre-trained weights, are distributed together, the cost and time of natural language processing have been greatly reduced. Transformer variants are the most representative pre-trained language models providing these advantages, and they are also being actively used in other fields such as computer vision and audio applications. To make it easier for researchers to understand pre-trained language models and apply them to natural language processing tasks, this paper defines the language model and the pre-trained language model, and discusses the development of pre-trained language models, focusing on the representative Transformer variants (a minimal fine-tuning sketch follows below).
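
The workflow the survey describes (a pre-trained tokenizer plus pre-trained weights, adapted with little data) looks roughly like the following Hugging Face transformers sketch; the checkpoint name and the two-example dataset are placeholders, not drawn from the paper.

```python
# Minimal fine-tuning sketch with a pre-trained tokenizer and weights.
# Assumes `pip install torch transformers`; checkpoint and data are toy.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)      # pre-trained body + new head

texts = ["great movie", "terrible movie"]   # tiny placeholder dataset
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                          # a few fine-tuning steps
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
print(float(loss))
```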

A Korean Mobile Conversational Agent System (한국어 모바일 대화형 에이전트 시스템)

  • Hong, Gum-Won;Lee, Yeon-Soo;Kim, Min-Jeoung;Lee, Seung-Wook;Lee, Joo-Young;Rim, Hae-Chang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.6
    • /
    • pp.263-271
    • /
    • 2008
  • This paper presents a Korean conversational agent system for a mobile environment using natural language processing techniques. The aim of a conversational agent in a mobile environment is to provide a natural language interface and enable more natural interaction between a human and an agent. Constructing such an agent requires developing various natural language understanding components and effective utterance generation methods. To understand spoken-style utterances, we perform morphosyntactic analysis and shallow semantic analysis, including modality classification and predicate-argument structure analysis; to generate a system utterance, we perform example-based search that considers lexical similarity, syntactic similarity, and semantic similarity (a minimal retrieval sketch follows this entry).

  • PDF
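
The abstract specifies an example-based search combining three similarity signals but not their definitions, so the sketch below is a hypothetical weighted combination over a toy example base; a single Jaccard word-overlap scorer stands in for all three components, which in the real system would be distinct lexical, syntactic, and semantic similarities.

```python
# Hypothetical example-based response retrieval: score each stored
# (utterance, response) pair with a weighted mix of similarity signals
# and return the response of the best-matching example.

EXAMPLES = [("hello there", "hi, how can I help?"),
            ("what time is it", "it is noon."),
            ("play some music", "playing your playlist.")]

WEIGHTS = (0.5, 0.3, 0.2)   # lexical, syntactic, semantic (assumed)

def jaccard(a, b):
    """Word-overlap similarity between two utterances."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b) if a | b else 0.0

def score(query, example):
    # Placeholder: one scorer reused three times; a real system would
    # plug in distinct lexical, syntactic, and semantic similarities.
    sims = (jaccard(query, example),) * 3
    return sum(w * s for w, s in zip(WEIGHTS, sims))

def respond(query):
    return max(EXAMPLES, key=lambda ex: score(query, ex[0]))[1]

print(respond("what time is it now"))   # -> "it is noon."
```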

Natural Language Processing and Cognition (자연언어처리와 인지)

  • 이정민
    • Korean Journal of Cognitive Science
    • /
    • v.3 no.2
    • /
    • pp.161-174
    • /
    • 1992
  • The present discussion is concerned with showing the development of natural language processing and how it is related to information and cognition. On the basis of the computational model, in which humans are viewed as processors of linguistic structures that use stored knowledge (grammar, lexicon, and structures representing encyclopedic information about the world), programs for natural language understanding such as Winograd's SHRDLU came out. However, pragmatic factors such as contexts and the speaker's beliefs, interests, goals, and intentions are not yet easy to process. Language, information, and cognition are argued to be closely interrelated, and their study, the paper argues, can lead to the development of science in general.

Form-based Natural Language Dialogue Interface in a Restricted Domain (제한된 영역에서의 폼 기반 자연언어 대화 인터페이스)

  • Kim, Yong-Jae;Seo, Jung-Yun;Park, Jae-Duk
    • Annual Conference on Human and Language Technology
    • /
    • 1997.10a
    • /
    • pp.463-468
    • /
    • 1997
  • Natural language dialogue is the most natural means of communication that people use, so a natural language dialogue interface lets a user and a system exchange information in a convenient and natural way. This paper explains the need for a dialogue interface and describes a form-based dialogue interface technique. In a form-based interface, the constraints for a database query are represented as a form: the form's information is extracted through dialogue with the user, and the completed form is then used to generate the query. This paper proposes a dialogue model using forms and recursive dialogue transition networks, so that the system can appropriately lead the dialogue and respond to the user's answers and questions (a minimal form-filling sketch follows this entry).

  • PDF
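
The form-filling loop the abstract describes (ask for missing slots through dialogue, then build a query from the completed form) can be sketched as follows; the flight-booking slots and the SQL template are hypothetical stand-ins for the paper's restricted domain, and the recursive dialogue transition networks are reduced here to a flat ask-until-filled loop.

```python
# Hypothetical form-based dialogue sketch: collect missing slots via
# dialogue, then generate a database query from the completed form.
# Domain, slots, and SQL template are illustrative only.

FORM = {"departure": None, "destination": None, "date": None}
PROMPTS = {"departure": "Where are you departing from?",
           "destination": "Where are you going?",
           "date": "On what date?"}

def next_prompt(form):
    """System leads the dialogue by asking for the first empty slot."""
    for slot, value in form.items():
        if value is None:
            return slot, PROMPTS[slot]
    return None, None

def to_query(form):
    """Generate a database query once every slot in the form is filled."""
    return ("SELECT * FROM flights WHERE departure='{departure}' "
            "AND destination='{destination}' AND date='{date}'").format(**form)

if __name__ == "__main__":
    answers = iter(["Seoul", "Jeju", "1997-10-11"])   # scripted user turns
    while True:
        slot, prompt = next_prompt(FORM)
        if slot is None:
            break
        print("SYSTEM:", prompt)
        FORM[slot] = next(answers)
    print(to_query(FORM))
```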