• Title/Summary/Keyword: Lexical complexity


Effect of Using QuillBot on the Writing Quality of EFL College Students

  • Hye Kyung Kim
    • International Journal of Advanced Culture Technology
    • /
    • v.11 no.4
    • /
    • pp.42-47
    • /
    • 2023
  • The majority of research on Automated Writing Evaluation (AWE) programs has focused primarily on Grammarly, whereas QuillBot and its use in English as a Foreign Language (EFL) classrooms remain underexplored. This study examined the effectiveness of using QuillBot on the writing quality of college students. A total of 26 participants took pre- and post-writing tests, and four analytical tools were applied to assess their writing quality in terms of syntactic complexity, lexical diversity, lexical richness, and readability. The syntactic complexity analysis across the four indices demonstrates that the syntactic complexity of EFL writing increased significantly, and substantial differences were also observed in lexical richness and readability. These results suggest that QuillBot can compensate for the drawbacks of Grammarly and assist EFL writers in improving their overall writing quality.
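
For readers unfamiliar with these measures, the following is a minimal, hypothetical sketch of two of the measure families named above: lexical diversity (as a simple type-token ratio) and a length-based proxy often reported alongside syntactic complexity (mean sentence length in words). The paper's actual analytical tools are not named here and may compute these quite differently.

```python
import re

def lexical_diversity_and_length(text):
    """Toy sketch: type-token ratio (lexical diversity) and mean sentence
    length (a crude length-based complexity proxy). Illustration only."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    types = set(tokens)
    ttr = len(types) / len(tokens) if tokens else 0.0
    mean_sent_len = len(tokens) / len(sentences) if sentences else 0.0
    return {"tokens": len(tokens), "types": len(types),
            "type_token_ratio": round(ttr, 3),
            "mean_sentence_length": round(mean_sent_len, 2)}

print(lexical_diversity_and_length(
    "QuillBot paraphrased my draft. The revised draft reads more clearly."))
```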

The Study of Convergence on Lexical Complexity, Syntax Complexity, and Correlation among Language Variables (한국어 학습자의 어휘복잡성, 구문복잡성 및 언어능력 변인들 간의 상관에 관한 융합 연구)

  • Kyung, Lee-Mi;Noh, Byungho;Kang, Anyoung
    • Journal of the Korea Convergence Society
    • /
    • v.8 no.4
    • /
    • pp.219-229
    • /
    • 2017
  • This study investigated the lexical complexity and syntactic complexity of Korean learners' speech, elicited by having them tell stories about pictures. The results were as follows. First, there was no significant difference according to nationality. Second, when differences in lexical and syntactic complexity were examined according to the period of studying Korean, only the number of different words showed a significant difference among the lexical complexity sub-variables, while no differences were found among the syntactic complexity sub-variables. Third, correlations were examined among length of stay in Korea, period of studying Korean, and other language-related variables; length of stay in Korea correlated significantly with the other language-related variables, except for the period of studying Korean and TTR. Based on these results, directions for teaching Korean learners were suggested from a convergence perspective.
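
As a rough illustration of the lexical-complexity sub-variables mentioned above, here is a hypothetical sketch computing NDW (number of different words) and TTR (type-token ratio) from picture-description utterances. Whether the study computed them exactly this way is an assumption; for Korean, counts are often morpheme-based rather than space-delimited.

```python
def lexical_complexity_subvariables(utterances):
    """Illustrative computation of NDW and TTR from transcribed utterances.
    Splitting on spaces is a simplification for the sake of the example."""
    tokens = [w for utt in utterances for w in utt.split()]
    ndw = len(set(tokens))                          # number of different words (types)
    ttr = ndw / len(tokens) if tokens else 0.0      # type-token ratio
    return {"NTW": len(tokens), "NDW": ndw, "TTR": round(ttr, 3)}

sample = ["아이가 공을 던져요", "강아지가 공을 물어요"]  # picture-description utterances
print(lexical_complexity_subvariables(sample))
```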

Predicting CEFR Levels in L2 Oral Speech, Based on Lexical and Syntactic Complexity

  • Hu, Xiaolin
    • Asia Pacific Journal of Corpus Research
    • /
    • v.2 no.1
    • /
    • pp.35-45
    • /
    • 2021
  • With the wide spread of the Common European Framework of Reference (CEFR) scales, many studies attempt to apply them in routine teaching and rater training, while more evidence regarding criterial features at different CEFR levels is still urgently needed. The current study aims to explore complexity features that distinguish and predict CEFR proficiency levels in oral performance. Using a quantitative, corpus-based approach, this research analyzed lexical and syntactic complexity features over 80 transcriptions (covering the A1, A2, and B1 CEFR levels as well as native speakers) based on an interview test, the Standard Speaking Test (SST). ANOVA and correlation analysis were conducted to exclude insignificant complexity indices before the discriminant analysis. Distinct differences in complexity between CEFR speaking levels were observed, and with a combination of six major complexity features as predictors, 78.8% of the oral transcriptions were classified into the appropriate CEFR proficiency levels. This further confirms the possibility of predicting the CEFR level of L2 learners from their objective linguistic features. The study can serve as an empirical reference in language pedagogy, especially for L2 learners' self-assessment and teachers' prediction of students' proficiency levels. It also offers implications for the validation of rating criteria and the improvement of rating systems.
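
The classification step described above can be illustrated with a small, hypothetical discriminant-analysis sketch; the feature columns and values below are invented for illustration and do not reproduce the paper's six predictors or its corpus.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Assumed toy features per transcription:
# [mean length of clause, lexical diversity, verbs per clause]
X = np.array([
    [5.1, 0.42, 1.1], [5.4, 0.45, 1.2], [5.0, 0.40, 1.0],   # A1-like
    [6.3, 0.51, 1.4], [6.6, 0.53, 1.5], [6.1, 0.50, 1.3],   # A2-like
    [7.8, 0.60, 1.8], [8.1, 0.63, 1.9], [7.6, 0.58, 1.7],   # B1-like
])
y = np.array(["A1"] * 3 + ["A2"] * 3 + ["B1"] * 3)

# Fit a linear discriminant model and classify an unseen speaker's features.
lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[6.0, 0.49, 1.3]]))   # expected to fall near the A2-like group
```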

Aspects of Chinese Korean learners' production of Korean aspiration at different prosodic boundaries (운율 층위에 따른 중국인학습자들의 한국어 유기음화 적용 양상)

  • Yune, Youngsook
    • Phonetics and Speech Sciences
    • /
    • v.9 no.4
    • /
    • pp.9-17
    • /
    • 2017
  • The aim of this study is to examine whether Chinese Korean learners (CKL) can correctly produce aspiration in a 'lenis obstruent /k/, /t/, /p/, /ʧ/ + /h/' sequence at the lexical and post-lexical levels. For this purpose, 4 Korean native speakers (KNS), 10 advanced CKL, and 10 intermediate CKL participated in a production test. The material analyzed consisted of 10 Korean sentences in which aspiration can apply at different prosodic boundaries (syllable, word, accentual phrase). The results showed that, for both KNS and CKL, the rate of application of aspiration differed according to prosodic boundary. Aspiration was applied more frequently at the lexical level than at the post-lexical level, and more frequently at the word boundary than at the accentual phrase boundary. For CKL, pronunciation errors were either non-application of aspiration or coda obstruent omission. In the case of non-application, CKL produced the target syllable in its underlying form without transforming it into the surface form. In the case of coda obstruent omission, most of the errors were caused by the inherent complexity of the phonological process.
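
A toy sketch of the phonological rule under study (not taken from the paper): a lenis obstruent /k, t, p, ʧ/ followed by /h/ surfaces as the corresponding aspirated consonant. The phoneme-string notation is an invented convention for illustration only.

```python
# Underlying lenis obstruent + /h/ -> aspirated surface form.
ASPIRATION = {("k", "h"): "kʰ", ("t", "h"): "tʰ", ("p", "h"): "pʰ", ("ʧ", "h"): "ʧʰ"}

def apply_aspiration(phonemes):
    """Map an underlying phoneme list to its surface form across a boundary."""
    out, i = [], 0
    while i < len(phonemes):
        pair = tuple(phonemes[i:i + 2])
        if pair in ASPIRATION:
            out.append(ASPIRATION[pair])
            i += 2
        else:
            out.append(phonemes[i])
            i += 1
    return out

# /k/ + /h/ across a syllable boundary, as in 국화 /kuk+hwa/ -> [ku.kʰwa]
print(apply_aspiration(["k", "u", "k", "h", "w", "a"]))   # ['k', 'u', 'kʰ', 'w', 'a']
```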

Using Small Corpora of Critiques to Set Pedagogical Goals in First Year ESP Business English

  • Wang, Yu-Chi;Davis, Richard Hill
    • Asia Pacific Journal of Corpus Research
    • /
    • v.2 no.2
    • /
    • pp.17-29
    • /
    • 2021
  • The current study explores small corpora of critiques written by Chinese and non-Chinese university students and how the strategies used by these writers compare with those of high-rated L1 students. The data comprise three small corpora of student writing: 20 student critiques from 2017, 23 student critiques from 2018, and 23 critiques from the online Michigan MICUSP collection at the University of Michigan. The researchers employ Text Inspector and Lexical Complexity to identify university students' vocabulary knowledge and awareness of syntactic complexity, and WMatrix4® is used to identify and compare lexical and semantic differences among the three corpora. The findings indicate that gaps exist between Chinese and non-Chinese writers in the same university classes in their knowledge of grammatical features and interactional metadiscourse. Critiques by Chinese writers are also more likely to contain shorter clauses and sentences, and the mean values for complex nominals and coordinate phrases are smaller for Chinese students than for non-Chinese and MICUSP writers. Finally, in terms of lexical bundles, Chinese student writers prefer clausal bundles over phrasal bundles, which, according to previous studies, are more often found in the texts of skilled writers. The findings suggest incorporating implicit and explicit instruction through the use of corpora in language classrooms to advance the skills and strategies of all students, but particularly of Chinese writers of English.
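
To make the "lexical bundles" measure concrete, here is a hedged sketch that extracts recurrent 4-grams from a small corpus; the span length and frequency threshold are conventional choices, not necessarily the settings used in the study.

```python
from collections import Counter
import re

def lexical_bundles(texts, n=4, min_freq=2):
    """Return recurrent n-grams ("lexical bundles") and their frequencies."""
    counts = Counter()
    for text in texts:
        tokens = re.findall(r"[a-z']+", text.lower())
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return [(" ".join(gram), c) for gram, c in counts.most_common() if c >= min_freq]

corpus = [
    "It should be noted that the results are limited.",
    "It should be noted that the sample is small.",
]
print(lexical_bundles(corpus))   # [('it should be noted', 2), ('should be noted that', 2), ...]
```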

Deep Lexical Semantics: The Ontological Ascent

  • Hobbs, Jerry R.
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.29-41
    • /
    • 2007
  • Concepts of greater and greater complexity can be constructed by building systems of entities, by relating other entities to that system with a figure-ground relation, by embedding concepts of figure-ground in the concept of change, by embedding that in causality, and by coarsening the granularity and beginning the process over again. This process can be called the Ontological Ascent. It pervades natural language discourse, and suggests that to do lexical semantics properly, we must carefully axiomatize abstract theories of systems of entities, the figure-ground relation, change, causality, and granularity. In this paper, I outline what these theories should look like.

Stability of Early Language Development of Verbally-Precocious Korean Children from 2 to 3 Year-old (조기언어발달 아동의 초기 언어능력의 안정성)

  • Lee, Kwee-Ock
    • The Korean Journal of Community Living Science
    • /
    • v.19 no.4
    • /
    • pp.673-684
    • /
    • 2008
  • The purpose of this study is to compare the complexity of language between verbally precocious and typically developing children from 2 to 3 years of age. Participants were 15 children classified as verbally precocious, who scored a mean of 56.85 (expressive language) and 88.82 (receptive language) on the MCDI-K, and 15 children classified as typically developing, who scored a mean of 33.51 (expressive language) and 58.01 (receptive language). Each child's spontaneous utterances in interaction with her caregiver were collected at three different times at 6-month intervals. All utterances were transcribed and analyzed for MLU and lexical diversity using KCLA. Overall, verbally precocious children had significantly higher language abilities than typically developing children at each time point, and there were significant differences between the two groups in syntactic and semantic language development, with verbally precocious children showing distinctively higher MLU and lexical diversity. These results suggest a high degree of stability in precocious verbal status, with variations in language complexity during conversations contributing to later differences in language ability.
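
As a rough illustration of the two measures analyzed with KCLA, the sketch below computes word-based approximations of MLU and lexical diversity; KCLA itself works on Korean morphemes, so splitting on spaces here is only a simplification.

```python
def mlu_and_diversity(utterances):
    """Word-based approximations of MLU (mean length of utterance) and
    lexical diversity (type-token ratio). Illustration only."""
    lengths = [len(u.split()) for u in utterances]
    tokens = [w for u in utterances for w in u.split()]
    mlu = sum(lengths) / len(lengths) if lengths else 0.0
    ttr = len(set(tokens)) / len(tokens) if tokens else 0.0
    return {"MLU": round(mlu, 2), "TTR": round(ttr, 2)}

print(mlu_and_diversity(["엄마 물", "아기가 물을 마셔요", "또 줘"]))
```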

Multi-stage Speech Recognition Using Confidence Vector (신뢰도 벡터 기반의 다단계 음성인식)

  • Jeon, Hyung-Bae;Hwang, Kyu-Woong;Chung, Hoon;Kim, Seung-Hi;Park, Jun;Lee, Yun-Keun
    • MALSORI
    • /
    • no.63
    • /
    • pp.113-124
    • /
    • 2007
  • In this paper, we propose the use of a confidence vector as an intermediate input feature for a multi-stage speech recognition architecture to improve recognition accuracy. A multi-stage structure is introduced as a way to reduce the computational complexity of the decoding procedure and thus achieve faster speech recognition. Conventional multi-stage speech recognition is usually composed of three stages: acoustic search, lexical search, and acoustic re-scoring. In this paper, we focus on improving the accuracy of the lexical decoding by introducing a confidence vector as the input feature instead of the single phoneme typically used. Experiments on a 220K Korean Point-of-Interest (POI) domain show that the proposed method improves accuracy.
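
The idea of feeding the lexical search a confidence vector rather than a single 1-best phoneme can be illustrated with the toy sketch below; the phoneme inventory, lexicon, and scores are invented, and a real system would also handle alignment, insertions, and deletions.

```python
import math

def score_word(confidence_vectors, pronunciation):
    """Sum of log-confidences of a word's phonemes along the decoded positions."""
    if len(pronunciation) != len(confidence_vectors):
        return float("-inf")        # toy constraint: lengths must match
    return sum(math.log(cv.get(p, 1e-6)) for cv, p in zip(confidence_vectors, pronunciation))

confidences = [                      # one phoneme posterior per decoded position
    {"s": 0.7, "z": 0.2, "t": 0.1},
    {"eo": 0.6, "o": 0.3, "u": 0.1},
    {"ul": 0.8, "un": 0.2},
]
lexicon = {"서울": ["s", "eo", "ul"], "수원": ["s", "u", "un"]}   # invented entries
best = max(lexicon, key=lambda w: score_word(confidences, lexicon[w]))
print(best)   # 서울
```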

Intra-Sentence Segmentation using Maximum Entropy Model for Efficient Parsing of English Sentences (효율적인 영어 구문 분석을 위한 최대 엔트로피 모델에 의한 문장 분할)

  • Kim Sung-Dong
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.5
    • /
    • pp.385-395
    • /
    • 2005
  • Analysis of long sentences has been a critical problem in machine translation because of its high complexity. Intra-sentence segmentation methods have been proposed to reduce parsing complexity. This paper presents an intra-sentence segmentation method based on a maximum entropy probability model that increases the coverage and accuracy of segmentation. We construct rules for choosing candidate segmentation positions by a learning method that uses the lexical context of words tagged as segmentation positions, and we build a model that assigns a probability value to each candidate segmentation position. The lexical contexts are extracted from a corpus tagged with segmentation positions and are incorporated into the probability model. We construct training data using sentences from the Wall Street Journal and evaluate intra-sentence segmentation on sentences from four different domains. The experiments show about 88% segmentation accuracy and about 98% coverage. The proposed method also improves parsing efficiency by a factor of 4.8 in speed and 3.6 in space.
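
A minimal, hypothetical sketch of the modeling approach described above: a maximum-entropy model (multinomial logistic regression) over lexical-context features decides whether a candidate position is a segmentation point. The features and training examples are invented; the paper's feature templates are richer.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented lexical-context features around candidate segmentation positions.
train_contexts = [
    {"prev_word": "said", "curr_word": "that", "prev_pos": "VBD"},
    {"prev_word": "reported", "curr_word": "that", "prev_pos": "VBD"},
    {"prev_word": "the", "curr_word": "company", "prev_pos": "DT"},
    {"prev_word": "a", "curr_word": "sharp", "prev_pos": "DT"},
]
labels = ["SEGMENT", "SEGMENT", "NO_SEGMENT", "NO_SEGMENT"]

# Logistic regression is the standard maximum-entropy classifier formulation.
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_contexts, labels)

test = {"prev_word": "said", "curr_word": "that", "prev_pos": "VBD"}
print(model.predict([test]), model.predict_proba([test]).round(2))
```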

Component-Based VHDL Analyzer for Reuse and Embedment (재사용 및 내장 가능한 구성요소 기반 VHDL 분석기)

  • 박상헌;손영석
    • Proceedings of the IEEK Conference
    • /
    • 2003.07b
    • /
    • pp.1015-1018
    • /
    • 2003
  • As the size and complexity of hardware and software systems increase, more efficient design methodologies have been developed. In particular, design-reuse techniques enable fast system development by integrating existing hardware and software. For this technique, available hardware/software should be prepared as component-based parts adaptable to various systems. This paper introduces a component-based VHDL analyzer that can be embedded in other applications, such as a simulator, synthesis tool, or smart editor. The VHDL analyzer parses VHDL description input, performs lexical, syntactic, and semantic checking, and finally generates intermediate-form data as the result. VHDL has the full features of an object-oriented language, such as data abstraction, inheritance, and polymorphism; to support these features, a special analysis algorithm and intermediate form are required. This paper summarizes practical issues in implementing a high-performance, high-quality VHDL analyzer and provides solutions based on intensive experience with VHDL analyzer development.
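
As a hedged illustration of only the lexical-analysis stage mentioned above, the sketch below tokenizes a small VHDL fragment into keywords, identifiers, and punctuation; a full analyzer would add syntactic and semantic checking and emit an intermediate form.

```python
import re

# Minimal token set for the example; a real VHDL lexer covers far more.
KEYWORDS = {"entity", "is", "port", "in", "out", "end"}
TOKEN_RE = re.compile(r"\s*(?:(?P<id>[A-Za-z][A-Za-z0-9_]*)|(?P<sym>[();:]))")

def tokenize(source):
    """Split a VHDL fragment into (kind, text) tokens."""
    tokens, pos = [], 0
    while pos < len(source):
        m = TOKEN_RE.match(source, pos)
        if not m:
            raise SyntaxError(f"unexpected character at {pos}: {source[pos]!r}")
        if m.group("id"):
            word = m.group("id")
            tokens.append(("KEYWORD" if word.lower() in KEYWORDS else "IDENT", word))
        else:
            tokens.append(("SYM", m.group("sym")))
        pos = m.end()
    return tokens

print(tokenize("entity counter is port (clk : in bit); end counter;"))
```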
