• Title/Summary/Keyword: head noun

14 search results

A Study of Relative Clauses in Korean Used by Korean Learners (한국어 학습자들의 관계절 사용 양상 연구)

  • Jo, Su Hyun
    • Cross-Cultural Studies
    • /
    • v.19
    • /
    • pp.359-388
    • /
    • 2010
  • This study investigates how relative clauses in Korean are used by learners. The data were extracted from Korean textbooks for foreign students and from Korean compositions written by Chinese learners at the early-intermediate stage. The analysis yielded the following findings: i) the majority of relative clauses in Korean are left-branching; ii) subject relative clauses outnumbered object relative clauses in both data sets, and in the learners' own usage subject relatives were used even more frequently than object ones, a result consistent with the earlier claim that subject relative clauses are acquired before object ones; iii) relative clauses whose head noun functions as the subject of the main clause appeared in a higher proportion than those whose head noun functions as the object; that is, subjects were used more frequently than objects in the relative clauses found in the compositions. Finally, this study analyzed the errors in adnominal ending usage occurring in the compositions: more errors occurred when adjectives ending in '-hada' were converted into their adnominal forms.

Simple and effective neural coreference resolution for Korean language

  • Park, Cheoneum;Lim, Joonho;Ryu, Jihee;Kim, Hyunki;Lee, Changki
    • ETRI Journal
    • /
    • v.43 no.6
    • /
    • pp.1038-1048
    • /
    • 2021
  • We propose an end-to-end neural coreference resolution model for the Korean language that uses an attention mechanism to point to the same entity. Because Korean is a head-final language, we focused on a method that uses a pointer network based on the head. The key idea is to consider all nouns in the document as candidates, based on the head-final characteristics of Korean, and to learn distributions over the referenced entity positions for each noun. Given the recent success of applications using bidirectional encoder representations from transformers (BERT) in natural language processing tasks, we employed BERT in the proposed model to create word representations based on contextual information. The experimental results indicated that the proposed model achieved state-of-the-art performance in Korean coreference resolution.
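
    The head-pointing idea above, treating every preceding noun as a candidate antecedent and learning a distribution over positions, can be pictured with a minimal dot-product attention sketch. The vectors below are hand-built toy stand-ins, not the authors' BERT-based representations or trained pointer network:

    ```python
    import numpy as np

    def point_to_antecedent(encodings: np.ndarray, query_idx: int) -> int:
        """Toy pointer step: score every position before `query_idx` with
        dot-product attention and return the argmax as the predicted
        antecedent position. Illustrative only; the paper uses BERT-based
        contextual representations and a trained pointer network."""
        query = encodings[query_idx]
        scores = encodings[:query_idx] @ query        # one score per candidate
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                      # softmax over candidates
        return int(np.argmax(weights))

    # Hand-built "contextual" vectors for 5 tokens: token 4 is constructed
    # to resemble token 1, so the pointer should select position 1.
    enc = np.eye(5, 8)
    enc[4] = 2.0 * enc[1] + 0.5 * enc[3]
    print(point_to_antecedent(enc, 4))  # prints 1
    ```

    In the real model the softmax distribution itself is the training target; the argmax here only illustrates decoding.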

Mention Detection Using Pointer Networks for Coreference Resolution

  • Park, Cheoneum;Lee, Changki;Lim, Soojong
    • ETRI Journal
    • /
    • v.39 no.5
    • /
    • pp.652-661
    • /
    • 2017
  • A mention has a noun or noun phrase as its head and constitutes a chunk that conveys a meaning, including any modifiers. Mention detection refers to the extraction of mentions from a document, and coreference resolution refers to determining which mentions refer to the same entity. Pointer networks, models based on a recurrent neural network encoder-decoder, output a list of elements corresponding to an input sequence. In this paper, we propose mention detection using pointer networks. This approach can solve the problem of overlapped mention detection, which cannot be solved by a sequence-labeling approach. The experimental results show that the proposed mention detection approach achieves an F1 of 80.75%, 8%p higher than rule-based mention detection, and that coreference resolution using it achieves a CoNLL F1 of 56.67% (mention boundary), 7.68%p higher than coreference resolution using rule-based mention detection.
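
    To see why pointing at span boundaries escapes the sequence-labeling limitation, here is a toy decoder in which two heads yield nested spans, something a single BIO tagging pass cannot emit. The hand-crafted score matrices stand in for the trained encoder-decoder:

    ```python
    import numpy as np

    def detect_mentions(start_scores, end_scores):
        """Toy pointer decoding: each candidate head token independently
        points to a start and an end position, so two heads may produce
        nested or overlapping spans, which one BIO sequence-labeling pass
        cannot represent."""
        mentions = []
        for head, (s_row, e_row) in enumerate(zip(start_scores, end_scores)):
            start, end = int(np.argmax(s_row)), int(np.argmax(e_row))
            if start <= head <= end:        # keep spans that contain their head
                mentions.append((start, end))
        return mentions

    # 4 tokens; head 1 points to span (0, 1) and head 3 to span (0, 3),
    # so the two mentions overlap. Heads 0 and 2 point outside themselves
    # and are filtered out.
    S = np.full((4, 4), -1e9)
    E = np.full((4, 4), -1e9)
    S[1, 0] = E[1, 1] = 5.0
    S[3, 0] = E[3, 3] = 5.0
    S[0, 2] = E[0, 0] = 5.0
    S[2, 3] = E[2, 2] = 5.0
    print(detect_mentions(S, E))  # prints [(0, 1), (0, 3)]
    ```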

The Extraction of Head words in Definition for Construction of a Semi-automatic Lexical-semantic Network of Verbs (동사 어휘의미망의 반자동 구축을 위한 사전정의문의 중심어 추출)

  • Kim Hae-Gyung;Yoon Ae-Sun
    • Language and Information
    • /
    • v.10 no.1
    • /
    • pp.47-69
    • /
    • 2006
  • Recently, there has been a surge of interest in the construction and utilization of a Korean thesaurus. In this paper, a semi-automatic method for generating a lexical-semantic network of Korean '-ha' verbs is presented through an analysis of the lexical definitions of these verbs. Initially, using several tools that can filter out and coordinate lexical data, word-definition pairs were prepared for treatment in a subsequent step. While inspecting the various definitions of each verb, we extracted and coordinated the head words from the sentences that constitute the definition of each word. These words are taken to be the main conceptual words that represent the sense of the verb in question. Using these head words and related information, this paper shows that a thesaurus can be created without difficulty in a semi-automatic fashion.
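
    The head-extraction step can be pictured in miniature: since Korean is head-final, the head predicate of a definition sentence tends to sit at the right edge. The stop-list and example lemmas below are hypothetical stand-ins for the filtering and coordination tools the paper describes:

    ```python
    def extract_head(definition_lemmas,
                     stop_lemmas=frozenset({"이나", "을", "를", "하다"})):
        """Toy head-word extraction: scan the lemmatized definition from
        the right (Korean is head-final) and return the first lemma that
        is not a function word. A real system would rely on a
        morphological analyzer and hand-tuned filtering rules, not this
        tiny stop-list."""
        for lemma in reversed(definition_lemmas):
            if lemma not in stop_lemmas:
                return lemma
        return None

    # 공부하다 'to study', defined roughly as "to learn knowledge or skills":
    # the rightmost content lemma 배우다 'to learn' becomes the head concept.
    print(extract_head(["학문", "이나", "기술", "을", "배우다"]))  # prints 배우다
    ```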

Two-Level Machine Learning Approach to Identify Maximal Noun Phrase in Chinese (두 단계 학습을 통한 중국어 최장명사구 자동식별)

  • Yin, Chang-Hao;Lee, Yong-Hun;Jin, Mei-Xun;Kim, Dong-Il;Lee, Jong-Hyeok
    • Annual Conference on Human and Language Technology
    • /
    • 2004.10d
    • /
    • pp.53-61
    • /
    • 2004
  • Chinese noun phrases are generally classified into base noun phrases, maximal noun phrases, and so on. Accurate identification of maximal noun phrases plays an important role in grasping the overall structure of a sentence and finding the correct parse tree. This paper identifies maximal noun phrases automatically using a two-level learning model. First, eight types of base phrases are identified: base noun phrases, base verb phrases, base adjective phrases, base adverb phrases, base quantifier phrases, base simple-clause phrases, base prepositional phrases, and base directional phrases. Next, the head of each base phrase is extracted, and this information is used to identify maximal noun phrases. Unlike previous word-level approaches, the proposed method learns at the phrase level, which makes it a highly effective approach to maximal noun phrase identification, a task that requires taking a large amount of surrounding context into account. Without any post-processing, the method achieved an average F-measure of 96% over the 25 base-phrase tags in base phrase identification, and an average F-measure of 92.5% over 4 tags in identifying maximal noun phrases of average length 7.
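
    The two-level idea can be sketched in miniature: level one groups tokens into base chunks, and level two assembles maximal NPs from chunk-level information. Both levels below are hand-written toy rules (greedy, head-final assumptions), standing in for the paper's two learned models:

    ```python
    def base_chunks(tagged):
        """Level 1 (toy): group consecutive tokens sharing the same
        base-chunk tag (bNP, bVP, bADJP, ...) into chunks; the paper
        learns these tags instead."""
        chunks, start = [], 0
        for i in range(1, len(tagged) + 1):
            if i == len(tagged) or tagged[i][1] != tagged[start][1]:
                words = [w for w, _ in tagged[start:i]]
                chunks.append((tagged[start][1], words))
                start = i
        return chunks

    def maximal_nps(chunks):
        """Level 2 (toy): grow a run of chunks rightward and close it as
        a maximal NP at a bNP head chunk; a verb chunk breaks the run.
        A stand-in for the learned phrase-level classifier."""
        spans, run = [], []
        for tag, words in chunks:
            run.extend(words)
            if tag == "bNP":
                spans.append(run)
                run = []
            elif tag == "bVP":
                run = []
        return spans

    # Toy sentence "(红 苹果) 吃 (大 房子)": two maximal NPs, each a
    # modifier chunk plus its head noun chunk.
    tagged = [("红", "bADJP"), ("苹果", "bNP"), ("吃", "bVP"),
              ("大", "bADJP"), ("房子", "bNP")]
    print(maximal_nps(base_chunks(tagged)))  # prints [['红', '苹果'], ['大', '房子']]
    ```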

Syntactic Attraction of Subject-Verb Agreement (주어-동사 일치의 통사적 유인)

  • Jang, Soyeong;Kim, Yangsoon
    • The Journal of the Convergence on Culture Technology
    • /
    • v.7 no.3
    • /
    • pp.353-358
    • /
    • 2021
  • This study provides a syntactic analysis of agreement attraction by proposing three types of syntactic subject-verb agreement. Because subject-verb number agreement codifies the link between a predicate and its subject, it must be a purely syntactic process of head-to-head agreement or feature percolation, in which the relevant agreement features percolate upward or downward through the hierarchical syntactic structure. Agreement errors are not affected by linear proximity or minimal interference, but instead by the hierarchical relationship between an agreement target and a local attractor. The data in this paper include complex noun phrases with a modifier PP or a relative clause CP. The [+PL] feature is suggested to be a strong feature acting as a local attractor for subject-verb agreement errors. Therefore, speakers tend to erroneously produce plural agreement for a singular subject in a main clause due to a plural NP in a modifier PP, or plural agreement for a singular subject in a relative clause due to a plural main-clause subject.

The Lexical Structure of Korean Compound Verbal Nouns and Multiple Verbal Noun Constructions (한국어 합성 동사성 명사의 어휘구조와 다중 동사성명사 구문)

  • Ryu, Byong-Rae
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2001.06a
    • /
    • pp.141-144
    • /
    • 2001
  • The aim of this paper is to examine, in a theory-neutral way, the argument realization patterns of Multiple Verbal Noun Constructions, and to formalize, describe, and explain the argument realization process within the recent framework of Head-driven Phrase Structure Grammar, a constraint-based grammatical theory, in particular on the basis of a constraint-based lexicon that assumes a multiple inheritance hierarchy. First, building on the generalization about case realization in Grimshaw & Mester (1988), which analyzed a similar phenomenon in Japanese, we show that the argument realization patterns of Korean verbal noun constructions can be formalized using the theoretical device of 'argument transfer', and we propose 'argument composition' as a device for building the argument structure of compound verbal nouns. Furthermore, to account for the multiple case marking observed in the argument realization of multiple verbal noun constructions, we propose 'case copying': when the case marker of a verbal noun is separated from the compound noun and realized at the sentence level, the same case is copied and realized. To support this claim, we show with examples that the case alternation of subcategorized elements under changes of grammatical function, such as passive and active, is not arbitrary. Since Grimshaw & Mester's (1988) analysis of Japanese light verbs, similar constructions in Korean have been actively reexamined (see Ryu (1993b), Chae (1996), Chae (1997), among others). Previous discussions of Korean Verbal Noun Constructions, formed by combining a verbal noun with 'hada', have mostly focused on the grammatical phenomenon in which a single verbal noun combines with function-changing 'light verbs' such as 'hada' or 'doeda' to form a complex predicate. By comparison, there has been little discussion of complex predicates in which two or more verbal noun roots combine into a compound noun, and that compound verbal noun in turn combines with a function-changing light verb. This point is especially uncontroversial within the HPSG framework. The subject of this paper is precisely the argument structure of such compound verbal nouns and the grammatical realization of the arguments they subcategorize for.

Multiple Case Marking Constructions in Korean Revisited

  • Ryu, Byong-Rae
    • Language and Information
    • /
    • v.17 no.2
    • /
    • pp.1-27
    • /
    • 2013
  • This paper presents a unified approach to multiple nominative and accusative constructions in Korean. We identify 16 semantic relations holding between two consecutive noun phrases (NPs) in multiple case marking constructions and propose each semantic relation as a licensing condition on double case marking. We argue that multiple case marking constructions are merely sequences of double case marking, formed by dextrosinistrally sequencing pairs of same-case-marked NPs of the same or different type. Some appealing consequences of this proposal include a new comprehensive classification of the sequences of same-case NPs and a straightforward account of long-standing problems such as how the additional same-case NPs are licensed, and in what respects multiple nominative marking and multiple accusative marking are alike and different from each other.

Constructional Constraints in English Free Relative Constructions

  • Kim, Jong-Bok
    • Language and Information
    • /
    • v.5 no.1
    • /
    • pp.35-53
    • /
    • 2001
  • As a subtype of English relative clause constructions, free relative constructions, like 'what John ate' in 'I ate what John ate', exhibit complicated syntactic and semantic properties. In particular, the constructions have mixed nominal and verbal properties: they have the internal syntax of a sentence and the external syntax of a noun phrase. This paper provides a constraint-based approach to these mixed constructions and shows that simple constructional constraints are enough to capture their complexities. The paper begins by surveying the properties of the constructions. It discusses two types (specific and nonspecific) of free relatives, their lexical restrictions, nominal properties, and behavior with respect to extraposition, pied piping, and stacking. It then sketches the basic framework of HPSG (Head-driven Phrase Structure Grammar) relevant to this paper. As the main part, the paper presents a constraint-based analysis in which tight interactions between grammatical constructions and a rich network of inheritance relations play important roles in accounting for the basic as well as complex properties of the constructions in question.

Mention Detection with Pointer Networks (포인터 네트워크를 이용한 멘션탐지)

  • Park, Cheoneum;Lee, Changki
    • Journal of KIISE
    • /
    • v.44 no.8
    • /
    • pp.774-781
    • /
    • 2017
  • A mention detection system takes a noun or noun phrase as a head and constructs a chunk of text that conveys a meaning, including any modifiers. Mention detection refers to the extraction of mentions from a document, and coreference resolution determines which of these mentions refer to the same entity. A pointer network is a model based on a recurrent neural network (RNN) encoder-decoder that outputs a list of elements corresponding to the input sequence. In this paper, we propose mention detection using pointer networks. The proposed model can solve the problem of overlapped mention detection, an issue that cannot be solved by sequence labeling. In our experiments, the proposed mention detection model achieved an F1 of 80.07%, 7.65%p higher than rule-based mention detection; coreference resolution using this mention detection model achieved a CoNLL F1 of 52.67% (mention boundary) and a CoNLL F1 of 60.11% (head boundary), which are 7.68%p and 1.5%p higher, respectively, than coreference resolution using rule-based mention detection.