• Title/Summary/Keyword: Corpus-based Lexical Development

Search Results: 4

Novice Corpus Users' Gains and Views on Corpus-based Lexical Development: A Case Study of COVID-19-related Expressions

  • Chen, Mei-Hua
    • Asia Pacific Journal of Corpus Research, v.2 no.1, pp.1-11, 2021
  • Recently, corpus-assisted vocabulary instruction has attracted a great deal of interest. Most studies have focused on language learners' receptive vocabulary knowledge, while limited attention has been paid to learners' productive competence. To fill this gap, this study examined learners' productive lexical development in terms of form, meaning, and use. It introduced EFL learners to corpus-based language pedagogy for learning COVID-19 theme-based vocabulary. To investigate the gains and views of 33 first-year EFL college students, a sentence completion task and a questionnaire were developed, targeting learners' productive performance in the three aspects of lexical knowledge (i.e., form, meaning, and use). The results revealed that the students achieved significant gains in all aspects regardless of proficiency level; in particular, the less proficient students achieved greater knowledge retention than their highly proficient counterparts. Students also showed positive attitudes towards the corpus-based approach to vocabulary learning.

A Study on the Development of English Inflectional Morphemes Based on the CHILDES Corpus (CHILDES 코퍼스를 기반으로 한 아동의 영어 굴절형태소 발달 연구)

  • Min, Myung Sook;Jun, Jongsup;Lee, Sun-Young
    • Korean Journal of Cognitive Science, v.24 no.3, pp.203-235, 2013
  • The goal of this paper is to test findings in the literature about English-speaking children's acquisition of inflectional morphemes using a large-scale database. For this, we obtained a 4.7-million-word corpus from the CHILDES (Child Language Data Exchange System) database and analyzed 1,630 British and American children's uses of English inflectional morphemes up to age 7. We analyzed the type and token frequencies, the type-token ratio (TTR), and lexical diversity (D) for such inflectional morphemes as the present progressive -ing, the past tense -(e)d, and the comparative and superlative -er/-est, with reference to children's nationality and age group. To sum up our findings, the correlation between the D value and children's age varied from morpheme to morpheme: we found no correlation for -ing, a marginal correlation for -(e)d, and a strong correlation for -er/-est. These findings are consistent with Brown's (1973) classic observation that children learn progressive forms earlier than the past tense marker. In addition, overgeneralization errors were frequent for -(e)d but rare for -ing, showing a U-shaped developmental pattern at ages 2-3. Finally, American children showed higher D scores than British children, indicating that American children used inflectional morphemes with more word types than British children did. The significance of the present study lies in testing earlier findings in the literature with a well-defined methodology for analyzing the entire CHILDES database.
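The type-token ratio used in this study is simple to compute once the relevant tokens are extracted. A minimal Python sketch follows; the utterance sample and the crude `-ing` filter are invented for illustration, and the D measure requires a curve-fitting procedure not shown here:

```python
def ttr(tokens):
    """Type-token ratio: number of distinct word forms (types)
    divided by the total number of occurrences (tokens)."""
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# Invented mini-sample of child utterances (illustration only)
utterance = "I going going outside she running fast running".split()
ing_tokens = [w for w in utterance if w.endswith("ing")]  # 4 tokens, 2 types
print(ttr(ing_tokens))  # → 0.5
```

In practice TTR falls as token counts grow, which is why the study also reports D, a diversity measure less sensitive to sample size.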


Development and Evaluation of a Document Summarization System using Features and a Text Component Identification Method (텍스트 구성요소 판별 기법과 자질을 이용한 문서 요약 시스템의 개발 및 평가)

  • Jang, Dong-Hyun;Myaeng, Sung-Hyon
    • Journal of KIISE: Software and Applications, v.27 no.6, pp.678-689, 2000
  • This paper describes an automatic summarization approach that constructs a summary by extracting sentences likely to represent the main theme of a document. To select summary sentences, the system uses a model that takes into account lexical and statistical information obtained from a document corpus. As such, the system consists of two parts: a training part and a summarization part. The former processes sentences that have been manually tagged as summary sentences and extracts the necessary statistical information of various kinds; the latter uses that information to calculate the likelihood that a given sentence should be included in the summary. There are at least three unique aspects of this research. First, the system uses a text component identification model to categorize sentences into text components, which allows us to eliminate parts of the text unlikely to contain summary sentences. Second, although our statistically based model stems from an existing one developed for English texts, it applies the framework to individual features separately and computes the final score for each sentence by combining the pieces of evidence using the Dempster-Shafer combination rule. Third, not only were new features introduced, but all features were also tested for their effectiveness in the summarization framework.
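The Dempster-Shafer combination mentioned in the abstract merges per-feature evidence for a sentence into a single score. A minimal sketch, assuming a two-element frame {summary, non-summary} and invented mass values for two hypothetical features:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal
    elements are frozensets over the same frame of discernment."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty set
    # Renormalize by 1 - K, where K is the total conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

S = frozenset({"summary"})
N = frozenset({"non-summary"})
FRAME = S | N  # mass on the whole frame models uncertainty

# Hypothetical per-feature evidence for one sentence (values invented)
m_cue_words = {S: 0.6, FRAME: 0.4}
m_position = {S: 0.5, N: 0.2, FRAME: 0.3}
belief = dempster_combine(m_cue_words, m_position)
```

Discounting the conflicting mass (here the product of one feature voting "summary" and the other "non-summary") is what lets independent, partially uncertain features be combined without forcing each to commit to a full probability distribution.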


Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.119-138, 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and monetary damages occur more frequently. In this study, we propose a method to analyze which sentences and documents posted to SNS are related to financial fraud. First, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management. We also suggested an emergency management process consisting of pre-cybercriminality steps (e.g. risk identification) and post-cybercriminality steps; of these, this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing these words from SNS such as Twitter. The collected data were given to two researchers to decide whether or not they were related to cybercriminality, particularly financial fraud. We then selected as keywords the vocabulary items related to nominals and symbols. With the selected keywords, we searched and collected data from web sources such as Twitter, news sites, and blogs; more than 820,000 articles were collected. The collected articles were refined through preprocessing and turned into learning data. The preprocessing consists of a morphological analysis step, a stop-word removal step, and a valid part-of-speech selection step. In the morphological analysis step, sentences are segmented into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the part-of-speech selection step, only two categories are retained: nouns and symbols. Since nouns refer to things, they express the intent of a message better than other parts of speech; moreover, the more illegal a text is, the more frequently symbols are used. To turn the selected data into learning data, each item had to be classified as legitimate or not, so each was labeled 'legal' or 'illegal'. The processed data were then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning data set and a test data set; we set the learning data at 70% and the test data at 30%. SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in typical cases. To show the feasibility of the idea proposed in this paper, we compared the proposed method with MLE (Maximum Likelihood Estimation), Term Frequency, and a Collective Intelligence method, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal visit (door-to-door) sales, clearly superior to Term Frequency, MLE, and the other baselines. Hence, the results suggest that the proposed method is valid and practically usable. In this paper, we propose a framework for crisis management caused by abnormalities in unstructured data sources such as SNS. We hope this study will contribute to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and to practitioners in the fields of brand management and opinion mining.
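The document-term matrix and 70/30 split described in the abstract can be sketched in a few lines of Python. The sample posts below are invented stand-ins for tokens surviving morphological analysis, stop-word removal, and noun/symbol selection; the SVM step (gamma 0.5, cost 10) would then take these count vectors as input via a library such as scikit-learn:

```python
from collections import Counter

def build_dtm(docs):
    """Build a document-term matrix (rows = documents, columns = sorted
    vocabulary, cells = term counts) from tokenized documents."""
    vocab = sorted({t for doc in docs for t in doc})
    index = {t: i for i, t in enumerate(vocab)}
    dtm = []
    for doc in docs:
        row = [0] * len(vocab)
        for term, count in Counter(doc).items():
            row[index[term]] = count
        dtm.append(row)
    return vocab, dtm

# Invented tokenized posts (illustration only)
docs = [["loan", "fast", "loan"], ["visit", "sales"], ["loan", "visit"]]
vocab, dtm = build_dtm(docs)

# 70/30 learning/test split as in the study (shuffling omitted here)
split = int(len(dtm) * 0.7)
learn, test = dtm[:split], dtm[split:]
```

Each row is a fixed-length count vector over the shared vocabulary, which is the representation a kernel classifier like SVM expects.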