• Title/Summary/Keyword: computational linguistic research

Search Results: 9

An Algorithm for Predicting the Relationship between Lemmas and Corpus Size

  • Yang, Dan-Hee;Gomez, Pascual Cantos;Song, Man-Suk
    • ETRI Journal
    • /
    • v.22 no.2
    • /
    • pp.20-31
    • /
    • 2000
  • Much research on natural language processing (NLP), computational linguistics and lexicography has relied on linguistic corpora. In recent years, many organizations around the world have been constructing their own large corpora to achieve corpus representativeness and/or linguistic comprehensiveness. However, there is no reliable guideline as to how large machine-readable corpus resources should be compiled to develop practical NLP software and/or complete dictionaries for human and computational use. In order to shed some new light on this issue, we reveal the flaws of several previous studies aiming to predict corpus size, especially those using pure regression or curve-fitting methods. To overcome these flaws, we devise a new mathematical tool, a piecewise curve-fitting algorithm, and then suggest how to determine the tolerance error of the algorithm for good prediction, using a specific corpus. Finally, we illustrate experimentally that the algorithm presented is valid, accurate and very reliable. We are confident that this study can contribute to solving some inherent problems of corpus linguistics, such as corpus predictability, compiling methodology, corpus representativeness and linguistic comprehensiveness.
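The piecewise curve-fitting idea described in the abstract can be illustrated with a minimal sketch: vocabulary growth is often modeled by a Heaps'-law-style power curve, and a piecewise approach fits such a curve on successive segments of the data rather than globally. The segmentation, the fitted form, and all data below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def fit_segment(sizes, lemmas):
    """Least-squares fit of log V = log k + beta * log N on one segment."""
    beta, logk = np.polyfit(np.log(sizes), np.log(lemmas), 1)
    return np.exp(logk), beta

def piecewise_fit(sizes, lemmas, n_segments=3):
    """Split the observations into contiguous segments and fit each one."""
    fits = []
    for seg_sizes, seg_lemmas in zip(np.array_split(sizes, n_segments),
                                     np.array_split(lemmas, n_segments)):
        fits.append(fit_segment(seg_sizes, seg_lemmas))
    return fits

# Synthetic lemma counts roughly following Heaps' law with beta = 0.7
sizes = np.linspace(1e4, 1e6, 30)
lemmas = 40 * sizes ** 0.7
for k, beta in piecewise_fit(sizes, lemmas):
    print(f"k = {k:.1f}, beta = {beta:.3f}")
```

A tolerance check in the spirit of the paper would compare each segment's fitted curve against held-out counts and accept the extrapolation only when the error stays below a chosen threshold.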


Computational Linguistics Study of the Construction of the Auxiliary Verb 'hada' (보조용언 '하다' 구성의 전산언어학적 연구)

  • 홍혜란
    • Language Facts and Perspectives
    • /
    • v.47
    • /
    • pp.495-535
    • /
    • 2019
  • The purpose of this study is to investigate the distributional characteristics of the auxiliary verb construction 'hada' in morphological and semantic terms, using a corpus composed of four registers (academic prose, newspaper, fiction, and spoken language) and applying computational linguistic methodology. It also aims to analyze how the discourse function of the 'hada' construction expressing 'condition/assumption' is performed and, from that, to investigate the mechanism by which the 'hada' construction performs various semantic functions at the discourse level. The study shows that the 'hada' construction realizes its primary grammatical meaning by combining with linguistic features such as the connective ending of the preceding verb, final endings, particles, and formulaic expressions, and that this meaning performs various discourse functions according to contextual conditions such as formality, the relationship between the participants, the content of the utterance, and the speaker's attitude. From this it can be seen that discourse function is not fixed: it is an additional meaning obtained at the discourse level from various contexts, and it is characterized by contextual dependency, so it can change if any of these conditions differ.

Differentiation of Aphasic Patients from the Normal Control Via a Computational Analysis of Korean Utterances

  • Kim, HyangHee;Choi, Ji-Myoung;Kim, Hansaem;Baek, Ginju;Kim, Bo Seon;Seo, Sang Kyu
    • International Journal of Contents
    • /
    • v.15 no.1
    • /
    • pp.39-51
    • /
    • 2019
  • Spontaneous speech provides rich information defining the linguistic characteristics of individuals. As such, computational analysis of speech would enhance the efficiency of evaluating patients' speech. This study aims to provide a method to differentiate persons with and without aphasia based on language usage. Ten aphasic patients and their counterpart normal controls participated, and all were tasked with describing a set of given words. Their utterances were linguistically processed and compared to each other. Computational analyses from PCA (Principal Component Analysis) to machine learning were conducted to select the relevant linguistic features and, consequently, to classify the two groups based on the features selected. It was found that functional words, not content words, were the main differentiator of the two groups. The most viable discriminators were demonstratives, function words, sentence-final endings, and postpositions. The machine learning classification model was found to be quite accurate (90%) and impressively stable. This study is noteworthy as it is the first attempt to use computational analysis to characterize word usage patterns in Korean aphasic patients, thereby discriminating them from the normal group.
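The PCA-then-classify pipeline the abstract describes can be sketched with NumPy alone: project feature-count vectors onto the top principal components and classify in the reduced space. The feature values, group sizes, and the nearest-centroid classifier below are invented for illustration; the paper's actual features and model may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
# 10 "aphasic" and 10 "control" samples over 6 hypothetical
# function-word frequency features (synthetic data)
aphasic = rng.normal(loc=[5, 1, 2, 1, 3, 1], scale=0.5, size=(10, 6))
control = rng.normal(loc=[5, 4, 2, 4, 3, 4], scale=0.5, size=(10, 6))
X = np.vstack([aphasic, control])
y = np.array([0] * 10 + [1] * 10)

# PCA: project onto the two eigenvectors of the covariance matrix
# with the largest eigenvalues
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc.T))
components = eigvecs[:, ::-1][:, :2]   # largest eigenvalues first
Z = Xc @ components

# Nearest-centroid classification in the reduced space
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(Z[:, None] - centroids, axis=2), axis=1)
print("training accuracy:", (pred == y).mean())
```

With well-separated synthetic groups the reduced space cleanly splits the two classes; real clinical data would of course require held-out evaluation.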

Fuzzy Reliability Analysis Models for Maintenance of Bridge Structure Systems (교량구조시스템의 유지관리를 위한 퍼지 신뢰성해석 모델)

  • 김종길;손용우;이증빈;이채규;안영기
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2003.10a
    • /
    • pp.103-114
    • /
    • 2003
  • This paper aims to propose a method that helps maintenance engineers evaluate the damage states of bridge structure systems by using Fuzzy Fault Tree Analysis. Fuzzy Fault Tree Analysis can be very useful for systematic and rational fuzzy reliability assessment of real bridge structure systems, because the approach is able to deal effectively with all the related structural element damages in terms of linguistic variables that systematically incorporate experts' experience and subjective judgement. This paper considers these uncertainties by providing a fuzzy reliability-based framework and shows that identifying the optimum maintenance scenario is a straightforward process. This is achieved by using a computer program, LIFETIME. The program can consider the effects of various types of actions on the fuzzy reliability index profile of deteriorating structures. Only the effect of maintenance interventions is considered in this study; however, any environmental or mechanical action affecting the fuzzy reliability index profile can be considered in LIFETIME. Numerical examples of deteriorating bridges are presented to illustrate the capability of the proposed approach. Further development and implementation of this approach are recommended for future research.
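The linguistic-variable idea behind the fuzzy fault tree can be sketched in a few lines: triangular membership functions grade a damage rating, and the standard fuzzy fault-tree operators (AND = min, OR = max) combine element-level possibilities into a top-event possibility. The damage grades, element names, and gate layout below are illustrative assumptions, not the paper's LIFETIME program.

```python
def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic damage grades on a 0-1 scale
grades = {
    "slight":   (0.0, 0.2, 0.4),
    "moderate": (0.3, 0.5, 0.7),
    "severe":   (0.6, 0.8, 1.0),
}

def fuzzy_and(*mu):   # all basic events must occur
    return min(mu)

def fuzzy_or(*mu):    # any basic event suffices
    return max(mu)

# Expert-rated damage x = 0.45 for two elements feeding an OR gate
mu_deck = triangular(0.45, *grades["moderate"])
mu_pier = triangular(0.45, *grades["slight"])
print("top event possibility:", fuzzy_or(mu_deck, mu_pier))
```

Stacking such gates bottom-up over all structural elements yields the fuzzy possibility of system-level failure, which is what the maintenance scenario comparison would act on.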


Robust Syntactic Annotation of Corpora and Memory-Based Parsing

  • Hinrichs, Erhard W.
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2002.02a
    • /
    • pp.1-1
    • /
    • 2002
  • This talk provides an overview of current work in my research group on the syntactic annotation of the Tübingen corpus of spoken German and of the German Reference Corpus (Deutsches Referenzkorpus: DEREKO) of written texts. Morpho-syntactic and syntactic annotation, as well as annotation of function-argument structure, for these corpora is performed automatically by a hybrid architecture that combines robust symbolic parsing using finite-state methods ("chunk parsing" in the sense of Abney) with memory-based parsing (in the sense of Daelemans). The resulting robust annotations can be used by theoretical linguists, who are interested in large-scale empirical data, and by computational linguists, who need training material for a wide range of language technology applications. To aid retrieval of annotated trees from the treebank, a query tool, VIQTORYA, with a graphical user interface and a logic-based query language has been developed. VIQTORYA allows users to query the treebanks for linguistic structures at the word level, at the level of individual phrases, and at the clausal level.


A Study of the Interface between Korean Sentence Parsing and Lexical Information (한국어 문장분석과 어휘정보의 연결에 관한 연구)

  • 최병진
    • Language and Information
    • /
    • v.4 no.2
    • /
    • pp.55-68
    • /
    • 2000
  • The efficiency and stability of an NLP system depend crucially on how its lexicon is organized. The lexicon ought to encode linguistic generalizations and their exceptions. Nowadays many computational linguists tend to construct such lexical information in an inheritance hierarchy, and DATR is well suited for this purpose. In this research I construct a DATR lexicon in order to parse Korean sentences using QPATR, which is implemented on the basis of a unification-based grammar developed in Düsseldorf. In this paper I show the interface between a syntactic parser (QPATR) and the DATR formalism representing lexical information. The QPATR parser can extract lexical information from the DATR lexicon, which is organized hierarchically.
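The core mechanism of a DATR-style lexicon, default inheritance with local overrides, can be sketched without the DATR formalism itself: each node names a parent and states only its exceptional features, and lookup walks up the chain until a value is found. The entries below are invented toy examples, not the paper's actual lexicon.

```python
# Hypothetical inheritance lexicon: nodes inherit feature values from a
# parent node and override only their exceptions (DATR-style defaults).
LEXICON = {
    "VERB":      {"parent": None,    "cat": "verb", "ending": "-da"},
    "TRANS":     {"parent": "VERB",  "subcat": ["subj", "obj"]},
    "mekda":     {"parent": "TRANS", "gloss": "eat"},   # fully regular
    "irregular": {"parent": "VERB",  "ending": "-ta"},  # local override
}

def lookup(node, feature):
    """Walk up the inheritance chain until the feature is found."""
    while node is not None:
        entry = LEXICON[node]
        if feature in entry:
            return entry[feature]
        node = entry["parent"]
    return None

print(lookup("mekda", "ending"))       # inherited default from VERB
print(lookup("irregular", "ending"))   # local exception wins
```

A parser interface in the spirit of the paper would call such a lookup to fill in the feature structures that the unification grammar needs for each word.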


A Study on the Development of Programming Education Model Applying English Subject in Elementary School (초등학교 영어교과를 적용한 프로그래밍 교육 모델 개발)

  • Heo, Miyun;Kim, Kapsu
    • Journal of The Korean Association of Information Education
    • /
    • v.21 no.5
    • /
    • pp.497-507
    • /
    • 2017
  • Research on linking and converging software education with other subjects has mainly focused on mathematics and science. This does not accommodate students' varied preferences and learning styles, which causes learning gaps, and it is also undesirable considering the range of convergence problems to which computational thinking can be applied. Linking software education with the English subject, a linguistic approach that departs from the existing mathematical and scientific approaches, can embrace students' diverse tendencies and preferences by drawing on the similarities between learning a new language in English education and in software education. For this purpose, based on an analysis of teaching-learning models for elementary English and for software education, we developed a class model by modifying the existing models to suit this linkage. We then extracted learning elements applicable to software education from the elementary English curriculum, designed a program applying the developed class model, and explored practical ways to apply it in learning.

The Stream of Uncertainty in Scientific Knowledge using Topic Modeling (토픽 모델링 기반 과학적 지식의 불확실성의 흐름에 관한 연구)

  • Heo, Go Eun
    • Journal of the Korean Society for information Management
    • /
    • v.36 no.1
    • /
    • pp.191-213
    • /
    • 2019
  • Scientific knowledge is obtained through research: researchers confront the uncertainty of science and establish the certainty of scientific knowledge. In other words, dealing with uncertainty is an essential step in obtaining scientific knowledge. Existing studies have predominantly been hedging studies taking linguistic approaches, or have manually constructed corpora of uncertainty words in computational linguistics, and they have only identified the characteristics of uncertainty in particular research fields based on simple frequencies. In this study, we therefore examine the pattern of scientific knowledge based on uncertainty words over time in the biomedical literature, where claims expressed in sentences play an important role. For this purpose, biomedical propositions are analyzed based on the semantic predications provided by UMLS, and DMR topic modeling, a useful method for identifying patterns within disciplines, is applied to understand trends in entity-based topics with uncertainty. The analysis confirms that, as research develops over time, uncertainty in scientific knowledge follows a decreasing pattern.
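The simplest version of the frequency-based approach the abstract contrasts itself with can be sketched directly: count hedging cues per publication year and inspect the trend. The hedge list and the toy abstracts below are invented; the paper itself goes further, using UMLS semantic predications and DMR topic modeling rather than raw counts.

```python
from collections import Counter

# Hypothetical hedging cues and toy per-year abstracts (invented data)
HEDGES = {"may", "might", "suggest", "possibly", "appear"}

abstracts = {
    2000: "results suggest the gene may possibly play a role",
    2010: "the protein might regulate the pathway",
    2018: "the gene regulates the pathway",
}

def hedge_count(text):
    """Count occurrences of hedging cues in a whitespace-tokenized text."""
    counts = Counter(text.split())
    return sum(counts[h] for h in HEDGES)

trend = {year: hedge_count(text) for year, text in abstracts.items()}
print(trend)
```

A decreasing count over the years in this toy data mirrors the decreasing-uncertainty pattern the study reports at the topic level.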

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common practice [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a predefined shape. These kinds of functions are able to cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the values of the membership functions at such points [3,10,14,15]. Such a solution provides a satisfying computational speed and a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are typical of fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets describing the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, Length = 3 × (5 + 3) = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all the values of the membership functions, we would have needed to store on each memory row the membership value of every fuzzy set, i.e. 8 × 5 bits per row, and the dimension of the memory would then have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on the elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then a non-null weight derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on the increase in memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
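The memory-sizing arithmetic in the abstract can be checked with a short calculation using the stated example parameters (128-element universe, 8 fuzzy sets, 32 discretization levels, at most nfm = 3 non-null memberships per element). The variable names mirror the abstract's notation; this is a restatement of its arithmetic, not additional hardware detail.

```python
import math

universe = 128   # elements of the universe of discourse (memory rows)
n_sets = 8       # fuzzy sets in the term set
levels = 32      # discretization levels for membership values
nfm = 3          # max non-null memberships per element

dm_m = math.ceil(math.log2(levels))    # bits per membership value: 5
dm_fm = math.ceil(math.log2(n_sets))   # bits for the set index: 3

length = nfm * (dm_m + dm_fm)          # word length: 3 * (5 + 3) = 24
proposed = universe * length           # 128 * 24 bits
vectorial = universe * n_sets * dm_m   # 128 * 40 bits, all values stored

print(f"proposed: {proposed} bits, vectorial: {vectorial} bits")
```

The comparison reproduces the abstract's figures: 128 × 24 bits for the proposed sparse scheme versus 128 × 40 bits for full vectorial memorization.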
