• Title/Summary/Keyword: Syntactic Complexity


Phrase-Chunk Level Hierarchical Attention Networks for Arabic Sentiment Analysis

  • Abdelmawgoud M. Meabed;Sherif Mahdy Abdou;Mervat Hassan Gheith
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.9
    • /
    • pp.120-128
    • /
    • 2023
  • In this work, we present ATSA, a hierarchical attention deep learning model for Arabic sentiment analysis. ATSA addresses several challenges and limitations that arise when classical models are applied to opinion mining in Arabic. Arabic-specific challenges, including morphological complexity and language sparsity, are handled by modeling semantic composition at the level of Arabic morphological analysis after tokenization. ATSA performs phrase-chunk sentiment embedding to provide a broader feature set covering syntactic, semantic, and sentiment information. We used a phrase structure parser to generate syntactic parse trees that serve as a reference for ATSA, allowing semantic and sentiment composition to be modeled in the natural order in which words and phrase chunks combine in a sentence. The proposed model was evaluated on three Arabic corpora corresponding to different genres (newswire, online comments, and tweets) and different writing styles (MSA and dialectal Arabic). Experiments showed that each of the proposed contributions in ATSA achieved a significant improvement. The combination of all contributions, which makes up the complete ATSA model, improved classification accuracy by 3% and 2% on the Tweets and Hotel Reviews datasets, respectively, compared to existing models.
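The two-level composition the abstract describes (attention over words to form phrase-chunk vectors, then attention over chunks to form a sentence vector) can be sketched roughly as below. The toy embeddings and query vectors are illustrative assumptions, not the paper's actual model, which learns these parameters:

```python
import math

def attention_pool(vectors, query):
    """Weight vectors by softmax similarity to a query, then combine them."""
    scores = [sum(a * b for a, b in zip(v, query)) for v in vectors]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(vectors[0])
    return [sum(w * v[i] for w, v in zip(weights, vectors)) for i in range(dim)]

# Toy sentence: two phrase chunks, each a list of 3-d word embeddings.
chunks = [
    [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],                    # chunk 1
    [[0.1, 0.9, 0.3], [0.0, 0.8, 0.5], [0.2, 0.7, 0.4]],   # chunk 2
]
word_query = [1.0, 0.0, 0.0]   # learned parameters in a real model
chunk_query = [0.0, 1.0, 0.0]

# Level 1: words -> chunk vectors; level 2: chunks -> sentence vector.
chunk_vecs = [attention_pool(c, word_query) for c in chunks]
sentence_vec = attention_pool(chunk_vecs, chunk_query)
print(len(sentence_vec))  # 3
```

A real hierarchical attention network would learn the queries jointly with the embeddings; this only shows how the two pooling levels compose.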

Generalized LR Parser with Conditional Action Model(CAM) using Surface Phrasal Types (표층 구문 타입을 사용한 조건부 연산 모델의 일반화 LR 파서)

  • 곽용재;박소영;황영숙;정후중;이상주;임해창
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.1_2
    • /
    • pp.81-92
    • /
    • 2003
  • Generalized LR parsing is one of the enhanced LR parsing methods: it overcomes the limit of the one-way linear stack of the traditional LR parser by using a graph-structured stack, and it has served as a firm starting point for generating other variations of NL parsing equipped with various mechanisms. In this paper, we propose a Conditional Action Model that can solve the problems of conventional probabilistic GLR methods. Previous probabilistic GLR parsers have used relatively limited contextual information for disambiguation due to the high complexity of the internal GLR stack. Our proposed model uses Surface Phrasal Types, which represent the structural characteristics of the parse, as additional contextual information, so that more specific structural preferences can be reflected in the parser. Experimental results show that our GLR parser with the proposed Conditional Action Model outperforms previous methods by about 6-7% without any lexical information, and that our model can utilize the rich stack information for syntactic disambiguation in a probabilistic LR parser.

Visualization Techniques for Massive Source Code (대용량 소스코드 시각화기법 연구)

  • Seo, Dong-Su
    • The Journal of Korean Association of Computer Education
    • /
    • v.18 no.4
    • /
    • pp.63-70
    • /
    • 2015
  • Program source code is a set of complex syntactic information expressed in text form, and it contains complex logical structures. The structural and logical complexity inside source code becomes a barrier to applying the visualization techniques found in traditional big-data approaches once the volume of source code exceeds ten thousand lines. This paper suggests a procedure for visualizing the structural characteristics of source code. For this purpose, the paper defines internal data structures as well as inter-procedural relationships among functions. It also suggests a means of outlining the structural characteristics of source code by visualizing it in network form. The result of this research can be used as a means of controlling and understanding massive volumes of source code.
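The idea of rendering inter-procedural relationships as a network can be sketched as below; the function names are hypothetical, and Graphviz DOT output stands in for whatever visualization backend such a tool would actually use:

```python
# Hypothetical inter-procedural relationships: caller -> callees.
calls = {
    "main":    ["parse_args", "run"],
    "run":     ["load", "process", "save"],
    "process": ["validate", "transform"],
}

def to_dot(graph):
    """Emit a Graphviz DOT description of the call graph for visualization."""
    lines = ["digraph calls {"]
    for caller, callees in graph.items():
        for callee in callees:
            lines.append(f'  "{caller}" -> "{callee}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(calls))
```

Feeding the emitted DOT text to a tool such as Graphviz produces the network picture; a production system would extract the call relationships by parsing the source rather than hand-listing them.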

Stability of Early Language Development of Verbally-Precocious Korean Children from 2 to 3 Year-old (조기언어발달 아동의 초기 언어능력의 안정성)

  • Lee, Kwee-Ock
    • The Korean Journal of Community Living Science
    • /
    • v.19 no.4
    • /
    • pp.673-684
    • /
    • 2008
  • The purpose of this study is to compare language complexity between verbally precocious and typically developing children from 2 to 3 years old. Participants were 15 children classified as verbally precocious, who scored a mean of 56.85 (expressive language) and 88.82 (receptive language) on the MCDI-K, and another 15 children classified as typically developing, who scored a mean of 33.51 (expressive language) and 58.01 (receptive language). Each child's spontaneous utterances in interaction with his or her caregiver were collected at three different times at six-month intervals. All utterances were transcribed and analyzed for MLU and lexical diversity using KCLA. Summarizing the overall results, verbally precocious children had significantly higher language abilities than typically developing children at each time point, and there were significant differences between the two groups in syntactic and semantic language development, with verbally precocious children showing distinctive MLU and lexical diversity. These results suggest a high degree of stability in precocious verbal status, with variations in language complexity during conversations contributing to later differences in language ability.

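The two measures the study computes over transcripts, MLU and lexical diversity, are simple to state; a minimal sketch over English word tokens (the study itself used KCLA over Korean utterances) might look like:

```python
def mlu(utterances):
    """Mean length of utterance, here counted in words."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

def type_token_ratio(utterances):
    """A simple lexical-diversity measure: distinct words / total words."""
    tokens = [w for u in utterances for w in u.split()]
    return len(set(tokens)) / len(tokens)

sample = ["mommy look", "big dog there", "dog run fast now"]
print(round(mlu(sample), 2))               # 3.0
print(round(type_token_ratio(sample), 2))  # 0.89
```

Child-language work typically counts morphemes rather than words for MLU, and uses diversity measures less sensitive to sample size than the raw type-token ratio; the sketch only shows the shape of the computation.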

Using Small Corpora of Critiques to Set Pedagogical Goals in First Year ESP Business English

  • Wang, Yu-Chi;Davis, Richard Hill
    • Asia Pacific Journal of Corpus Research
    • /
    • v.2 no.2
    • /
    • pp.17-29
    • /
    • 2021
  • The current study explores small corpora of critiques written by Chinese and non-Chinese university students and how the strategies used by these writers compare with those of high-rated L1 students. Data collection includes three small corpora of student writing: 20 student critiques from 2017, 23 student critiques from 2018, and 23 critiques from the online Michigan MICUSP collection at the University of Michigan. The researchers employ Text Inspector and Lexical Complexity to identify university students' vocabulary knowledge and awareness of syntactic complexity. In addition, WMatrix4® is used to identify and support the comparison of lexical and semantic differences among the three corpora. The findings indicate that gaps exist between Chinese and non-Chinese writers in the same university classes in students' knowledge of grammatical features and interactional metadiscourse. Critiques by Chinese writers are more likely to contain shorter clauses and sentences, and the mean value of complex nominals and coordinate phrases is smaller for Chinese students than for non-Chinese and MICUSP writers. Finally, in terms of lexical bundles, Chinese student writers prefer clausal bundles to phrasal bundles, which, according to previous studies, are more often found in the texts of skilled writers. The findings suggest incorporating implicit and explicit instruction through the use of corpora in language classrooms to advance the skills and strategies of all writers, but particularly of Chinese writers of English.

Efficient Analysis of Korean Dependency Structures Using Beam Search Algorithms (Beam Search 알고리즘을 이용한 효율적인 한국어 의존 구조 분석)

  • Kim, Hark-Soo;Seo, Jung-Yun
    • Annual Conference on Human and Language Technology
    • /
    • 1998.10c
    • /
    • pp.281-286
    • /
    • 1998
  • Syntactic analysis is a stage of natural language processing that takes morphologically analyzed text as input and determines the relations between syntactic units. However, parsing results contain many ambiguities, and these ambiguities introduce considerable complexity into subsequent natural language processing steps. Various studies have addressed this problem, one approach being the use of statistics extracted from large amounts of data. However, assigning statistical scores to every generated parse tree and ranking them all is a very time-consuming job, so a method is needed that effectively reduces the number of candidate trees. To solve this problem, this paper proposes an improved beam search algorithm and compares it with existing methods. A parser using the proposed beam search algorithm achieved the same syntactic-structure accuracy while generating only about one third of the trees produced by a parser without beam search.

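The pruning idea behind the abstract, keeping only the top-scoring partial analyses at each step instead of scoring every complete tree, is generic beam search. A minimal sketch, with toy additive attachment scores standing in for the paper's corpus statistics:

```python
import heapq

def beam_search(start, expand, score, steps, beam_width=3):
    """Generic beam search: at each step, expand every candidate in the
    beam and keep only the beam_width highest-scoring successors."""
    beam = [start]
    for _ in range(steps):
        candidates = [s for state in beam for s in expand(state)]
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return beam

# Toy example: states are tuples of attachment choices scored additively.
choices = [(0.6, "a"), (0.3, "b"), (0.1, "c")]
expand = lambda state: [state + (c,) for c in choices]
score = lambda state: sum(p for p, _ in state)

best = beam_search((), expand, score, steps=2, beam_width=3)
print(score(best[0]))  # 1.2 -- two 0.6 attachments
```

With beam width 3, step two scores 9 candidates instead of the full search's exponential frontier; in a dependency parser the states would be partial dependency structures and the scores corpus-derived probabilities.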

Component-Based VHDL Analyzer for Reuse and Embedment (재사용 및 내장 가능한 구성요소 기반 VHDL 분석기)

  • 박상헌;손영석
    • Proceedings of the IEEK Conference
    • /
    • 2003.07b
    • /
    • pp.1015-1018
    • /
    • 2003
  • As the size and complexity of hardware and software systems increase, more efficient design methodologies have been developed. In particular, the design-reuse technique enables fast system development by integrating existing hardware and software. For this technique, available hardware and software should be prepared as component-based parts adaptable to various systems. This paper introduces a component-based VHDL analyzer that can be embedded in other applications, such as a simulator, a synthesis tool, or a smart editor. The VHDL analyzer parses VHDL description input, performs lexical, syntactic, and semantic checking, and finally generates intermediate-form data as the result. VHDL has the full features of an object-oriented language, such as data abstraction, inheritance, and polymorphism; to support these features, a special analysis algorithm and intermediate form are required. This paper summarizes practical issues in implementing a high-performance, high-quality VHDL analyzer and provides solutions based on extensive experience in VHDL analyzer development.

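The lexical phase the abstract mentions, splitting source text into classified tokens before syntactic and semantic checking, can be sketched as below. The token classes and the VHDL-like fragment are illustrative only; a real VHDL lexer covers the full reserved-word set and literal forms:

```python
import re

# A toy lexer for a VHDL-like fragment; token classes are illustrative.
TOKEN_SPEC = [
    ("KEYWORD", r"\b(?:entity|is|end|port)\b"),
    ("IDENT",   r"[A-Za-z][A-Za-z0-9_]*"),
    ("SYMBOL",  r"[();:]"),
    ("SKIP",    r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(text):
    """Lexical phase: split source text into (class, lexeme) pairs."""
    return [(m.lastgroup, m.group()) for m in MASTER.finditer(text)
            if m.lastgroup != "SKIP"]

print(tokenize("entity adder is end;"))
```

The ordering of the alternatives matters: the keyword pattern must precede the identifier pattern so that reserved words are not classified as identifiers.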

Parsing Korean Comparative Constructions in a Typed-Feature Structure Grammar

  • Kim, Jong-Bok;Yang, Jae-Hyung;Song, Sang-Houn
    • Language and Information
    • /
    • v.14 no.1
    • /
    • pp.1-24
    • /
    • 2010
  • The complexity of comparative constructions in each language has posed challenges to both theoretical and computational analyses. This paper first identifies the types of comparative constructions in Korean and discusses their main grammatical properties. It then builds a syntactic parser based on the typed feature structure grammar HPSG and proposes a context-dependent interpretation for comparison. To check the feasibility of the proposed analysis, we have implemented the grammar in the existing Korean Resource Grammar. The results show that the grammar developed here is feasible enough to parse Korean comparative sentences and yield proper semantic representations, though further development is needed for a finer model of contextual information.

  • PDF

Case Frames of the Old English Impersonal Construction: Conceptual Semantic Analysis

  • Jun, Jong-Sup
    • Language and Information
    • /
    • v.9 no.2
    • /
    • pp.107-126
    • /
    • 2005
  • The impersonal or psych-predicate construction in Old English (=OE) poses a special challenge for most case theories in generative linguistics. In the OE impersonal construction, the experiencer argument is marked dative, accusative, or nominative, whereas the theme is marked nominative, genitive, or accusative, or appears as a PP. The combinations of possible cases for experiencer and theme are not random, bringing about daunting complexity in the possible and impossible case frames. In this paper, I develop a conceptual semantic case theory (a la Jackendoff 1990, 1997, 2002; Yip, Maling, and Jackendoff 1987) to provide a unified account of the complicated case frames of the OE impersonal construction. In the conceptual semantic case theory, syntax and semantics have their own independent case assignment principles. For impersonal verbs in OE, I propose that UG leaves lexical items the option of determining either syntactic or semantic case. This proposal opens a new window onto the OE impersonal construction, in that it naturally explains both the possible and the impossible case frames of the construction.


Prediction of Prosodic Boundary Strength by means of Three POS(Part of Speech) sets (품사셋에 의한 운율경계강도의 예측)

  • Eom Ki-Wan;Kim Jin-Yeong;Kim Seon-Mi;Lee Hyeon-Bok
    • MALSORI
    • /
    • no.35_36
    • /
    • pp.145-155
    • /
    • 1998
  • This study aimed to determine the most appropriate POS (Part of Speech) sets for predicting prosodic boundary strength efficiently. We used the 3-level POS sets devised by Kim (1997), one of the authors. The three POS sets differ from each other in how much grammatical information they carry: the first set has the maximal syntactic and morphological information that can affect prosodic phrasing, and the third set has the minimal information. We hand-labelled 150 sentences using each of the three POS sets and conducted a perception test. Based on the results of the test, a stochastic language modeling method was used to predict prosodic boundary strength. The results showed that the three POS sets did not differ greatly in prediction efficiency, but the second set was slightly more efficient than the other two. As far as the complexity of stochastic language modeling is concerned, however, the third set may also be preferable.

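The core of such a stochastic approach, estimating boundary strength from the POS context around each word juncture, can be sketched as below. The POS labels, toy counts, and most-frequent-label decision rule are illustrative assumptions; the paper's model is a richer stochastic language model over its 3-level POS sets:

```python
from collections import Counter, defaultdict

# Hypothetical labelled data: (POS left of juncture, POS right, boundary strength 0-2).
labelled = [
    ("NOUN", "VERB", 1), ("NOUN", "VERB", 1), ("NOUN", "VERB", 0),
    ("VERB", "CONJ", 2), ("VERB", "CONJ", 2), ("ADJ", "NOUN", 0),
]

counts = defaultdict(Counter)
for left, right, strength in labelled:
    counts[(left, right)][strength] += 1

def predict_boundary(left, right):
    """Pick the most frequent boundary strength seen for this POS pair."""
    c = counts.get((left, right))
    return c.most_common(1)[0][0] if c else 0

print(predict_boundary("NOUN", "VERB"))  # 1
print(predict_boundary("VERB", "CONJ"))  # 2
```

A coarser POS set (like the paper's third set) shrinks the table of POS pairs, trading contextual detail for lower model complexity and less sparse counts.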