• Title/Summary/Keyword: Corpus-based

Search Result 568

A Corpus-based Analysis of EFL Learners' Use of Hedges in Cross-cultural Communication

  • Min, Su-Jung
    • English Language & Literature Teaching / v.16 no.4 / pp.91-106 / 2010
  • This study examines the use of hedges in cross-cultural communication between EFL learners in an e-learning environment. It analyzes the use of hedges in a corpus collected from an interactive website with a bulletin board system, through which college students of English at Japanese and Korean universities discussed local and global issues with each other, and compares the use of hedges in the students' corpus to that in a native English speakers' corpus. The results show that EFL learners tend to use a relatively smaller number of hedges than native speakers in terms of total token frequencies. They further reveal that the learners' overuse of a single versatile high-frequency hedging item, I think, results in the relative underuse of other hedging devices. This indicates that, because of their small repertoire of hedges, EFL learners' overuse of a limited number of hedging items may make their speech or writing less competent. Based on the results and interviews with the learners, the study also argues that hedging should be understood in its social contexts, not simply as a lack of conviction or a mark of low proficiency. Suggestions are made for using computer corpora to understand EFL learners' language difficulties and to help them develop communicative and pragmatic competence.
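The token-frequency comparison described in this abstract can be sketched as a simple count over a hedge inventory. The hedge list and the two sample texts below are illustrative stand-ins, not the study's actual coding scheme or data.

```python
import re
from collections import Counter

# Illustrative hedge inventory (a stand-in, not the study's actual list)
HEDGES = ["i think", "maybe", "perhaps", "probably", "sort of", "kind of"]

def hedge_frequencies(text):
    """Count occurrences of each hedging item in a text, case-insensitively."""
    text = text.lower()
    return Counter({h: len(re.findall(r"\b" + re.escape(h) + r"\b", text))
                    for h in HEDGES})

# Toy learner vs. native samples: the learner text leans on "I think"
learner = "I think it is good. I think maybe we should try. I think so."
native = "Perhaps it works. It is probably fine, sort of."

print(hedge_frequencies(learner).most_common(2))
print(sum(hedge_frequencies(native).values()))
```

Comparing the totals and the per-item distribution across the two samples mirrors the abstract's finding: a similar or lower overall hedge count for learners, concentrated on one versatile item.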


In My Opinion: Modality in Japanese EFL Learners' Argumentative Essays

  • Pemberton, Christine
    • Asia Pacific Journal of Corpus Research / v.1 no.2 / pp.57-72 / 2020
  • This study seeks to add to the current understanding of learners' use of modality in argumentative writing. A learner corpus of argumentative essays on four topics was created and compared to native English speaker data from the International Corpus Network of Asian Learners of English (ICNALE). The relationship between learners' use of modal devices (MDs) and the devices' appearance in the school's curriculum was also examined. The results showed that learners relied on a very narrow range of MDs compared to those in previous studies. The frequency of MD use varied by topic and did not seem to be driven by cultural factors, as has been previously suggested. Learners used more hedges than boosters on all topics, contradicting most previous studies. Curriculum was found to correlate directly with MD use, and other important factors may include perception of topic and overreliance on certain MDs over others (the One-to-One principle). This research implies that learners' perception of topic should be explored further as a variable affecting MD use. Curricula should be designed around the frequency of MD use by native English speakers, and learners should receive instruction in the norms of MD use in academic writing. The methodology used to determine correlations between MD use and the curriculum has a wide range of potential applications in Contrastive Interlanguage Analysis.

Corpus-based Analysis on Vocabulary Found in 『Donguibogam』 (코퍼스 분석방법을 이용한 『동의보감(東醫寶鑑)』의 어휘 분석)

  • Jung, Ji-Hun;Kim, Dongryul
    • The Journal of Korean Medical History / v.28 no.1 / pp.135-141 / 2015
  • The purpose of this study is to analyze the vocabulary of "Donguibogam", one of the medical books of mid-Chosun, using corpus-based analysis, one of the text analysis methods. The analysis shows that "Donguibogam" contains a total of 871,000 words written with 5,130 distinct Chinese characters, of which 2,430 characters account for 99% of the entire text. The twenty most frequent Chinese characters are mainly function words, which shows that "Donguibogam" is written in complete sentences, like other books. Comparing the chapters of "Donguibogam", the Remedies and Acupuncture chapters show lower frequencies of function words than the Internal Medicine, External Medicine, and Miscellaneous Diseases chapters. "Yixuerumen (Introduction to Medicine)", which strongly influenced "Donguibogam", has lower frequencies of function words among its most frequent characters than "Donguibogam" does; this may be because "Yixuerumen" keeps to the Chileonjeolgu form (verse lines of seven Chinese characters) and adds footnotes below it. Corpus-based analysis reveals the words mainly used in a medical book by measuring their frequencies, so the authors suggest that these results can be used in teaching Chinese characters at colleges of Korean Medicine.
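The "2,430 characters cover 99% of the text" figure is a cumulative-frequency computation, which can be sketched as follows. The sample string is a toy stand-in, not the actual text of "Donguibogam".

```python
from collections import Counter

def chars_for_coverage(text, threshold=0.99):
    """Return how many distinct characters, taken from most to least
    frequent, are needed to cover `threshold` of all character tokens."""
    chars = [c for c in text if not c.isspace()]
    total, running, rank = len(chars), 0, 0
    for rank, (_, count) in enumerate(Counter(chars).most_common(), start=1):
        running += count
        if running / total >= threshold:
            return rank
    return rank

sample = "氣血氣痛氣血風"  # toy string: 氣 x3, 血 x2, 痛 x1, 風 x1
print(chars_for_coverage(sample, 0.5))
```

Running the same computation over the full text with `threshold=0.99` would yield the study's 2,430-character figure.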

Development of a Lipsync Algorithm Based on Audio-visual Corpus (시청각 코퍼스 기반의 립싱크 알고리듬 개발)

  • 김진영;하영민;이화숙
    • The Journal of the Acoustical Society of Korea / v.20 no.3 / pp.63-69 / 2001
  • A corpus-based lip sync algorithm for synthesizing natural face animation is proposed in this paper. To obtain the lip parameters, markers were attached to the speaker's face and their positions were extracted with image processing methods. The spoken utterances were labeled with HTK, and prosodic information (duration, pitch, and intensity) was analyzed. An audio-visual corpus was constructed by combining the speech and image information, with the syllable as the basic unit. Based on this audio-visual corpus, lip information represented by the marker positions is synthesized: the best syllable units are selected from the corpus, and the visual information of the selected units is concatenated. Obtaining the best units involves two steps: selecting the N-best candidates for each syllable, and then selecting the smoothest unit sequence with the Viterbi decoding algorithm. For these steps, two distance measures between syllable units are proposed: a phonetic environment distance and a prosody distance. Computer simulation results showed that the proposed algorithm performs well; in particular, pitch and intensity information proved as important as duration information for lip sync.
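The two-stage selection described above (N-best candidates per syllable, then the smoothest path by Viterbi decoding) can be sketched as a shortest-path search. The cost functions below are hypothetical placeholders for the paper's phonetic-environment and prosody distances.

```python
def select_units(candidates, target_cost, concat_cost):
    """candidates: one list of candidate unit ids per syllable.
    Returns the unit sequence minimizing the summed target and
    concatenation costs (Viterbi / dynamic programming)."""
    # best maps each current candidate to (cumulative cost, path so far)
    best = {u: (target_cost(0, u), [u]) for u in candidates[0]}
    for t in range(1, len(candidates)):
        new_best = {}
        for u in candidates[t]:
            prev = min(best, key=lambda p: best[p][0] + concat_cost(p, u))
            cost = best[prev][0] + concat_cost(prev, u) + target_cost(t, u)
            new_best[u] = (cost, best[prev][1] + [u])
        best = new_best
    return min(best.values(), key=lambda cp: cp[0])[1]

# Toy example: prefer low unit ids (target cost) and small jumps
# between consecutive units (concatenation cost)
path = select_units([[0, 1], [0, 1], [2, 0]],
                    target_cost=lambda t, u: u,
                    concat_cost=lambda p, u: abs(p - u))
print(path)
```

The same structure accommodates the paper's setup by making `target_cost` the phonetic-environment distance of a candidate to the target syllable and `concat_cost` the prosody distance between adjacent units.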


A New Fine-grain SMS Corpus and Its Corresponding Classifier Using Probabilistic Topic Model

  • Ma, Jialin;Zhang, Yongjun;Wang, Zhijian;Chen, Bolun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.2 / pp.604-625 / 2018
  • Nowadays, SMS spam is rampant in many countries, and the standards for filtering it differ from country to country. However, current technologies and research on SMS spam filtering all focus on dividing SMS messages into two classes, legitimate and illegitimate, which does not reflect the actual situation and need. Furthermore, they face several difficulties: (1) high-quality, large-scale SMS spam corpora are very scarce, and finely categorized SMS spam corpora do not exist at all, which seriously handicaps researchers' studies; (2) the limited length of SMS messages leads to a lack of features. These factors seriously degrade the performance of traditional classifiers (such as SVM, K-NN, and Bayes). In this paper, we present a new finely categorized SMS spam corpus which, as far as we know, is unique and the largest of its kind. In addition, we propose a classifier based on a probabilistic topic model, which can alleviate the feature sparsity problem in SMS spam filtering. Moreover, we compare the approach with three typical classifiers on the new SMS spam corpus. The experimental results show that the proposed approach is more effective for the task of SMS spam filtering.
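A minimal sketch of probability-based scoring for short, feature-sparse texts is shown below. This is a generic smoothed word-category model for illustration only, not the authors' actual topic-model classifier, and the fine-grained categories and training messages are invented.

```python
import math
from collections import Counter, defaultdict

def train(labeled):
    """labeled: (category, tokens) pairs. Returns smoothed per-category
    word probabilities (Laplace smoothing over a shared vocabulary)."""
    counts = defaultdict(Counter)
    for cat, tokens in labeled:
        counts[cat].update(tokens)
    vocab = {w for c in counts.values() for w in c}
    return {cat: {w: (c[w] + 1) / (sum(c.values()) + len(vocab))
                  for w in vocab}
            for cat, c in counts.items()}

def classify(model, tokens):
    """Assign the category with the highest log-probability score."""
    return max(model, key=lambda cat: sum(
        math.log(model[cat].get(w, 1e-9)) for w in tokens))

# Invented fine-grained spam categories and toy training messages
data = [("loan_spam", ["cheap", "loan", "low", "rate"]),
        ("gambling_spam", ["casino", "win", "bonus", "jackpot"])]
model = train(data)
print(classify(model, ["loan", "rate"]))
```

The smoothing step is what softens the sparsity problem for short messages: unseen words contribute a small nonzero probability instead of zeroing out a category.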

Detecting and correcting errors in Korean POS-tagged corpora (한국어 품사 부착 말뭉치의 오류 검출 및 수정)

  • Choi, Myung-Gil;Seo, Hyung-Won;Kwon, Hong-Seok;Kim, Jae-Hoon
    • Journal of Advanced Marine Engineering and Technology / v.37 no.2 / pp.227-235 / 2013
  • The quality of part-of-speech (POS) annotation in a corpus plays an important role in developing POS taggers. However, Korean POS-tagged corpora such as the Sejong Corpus contain several kinds of errors: annotation errors, spelling errors, and the insertion or deletion of unexpected characters. In this paper, we propose a method for detecting annotation errors using error patterns, and we develop a tool for correcting them efficiently. Using the proposed method and tool, we hand-corrected the annotation errors in the Sejong POS-tagged corpus; correction was at least nine times faster than without the tool. We therefore conclude that the proposed method is effective for correcting annotation errors in POS-tagged corpora.
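Pattern-based error detection of the kind described can be sketched over a "word/TAG" line format. The tag conventions, patterns, and example lines below are hypothetical, not the Sejong Corpus's actual format or the paper's pattern set.

```python
import re

# Hypothetical error patterns for a "word/TAG" annotation format
ERROR_PATTERNS = [
    (re.compile(r"//"), "empty morpheme or tag"),
    (re.compile(r"/[a-z]"), "lowercase tag"),
    (re.compile(r"(^|\s)[^/\s]+(?=\s|$)"), "token without a tag"),
]

def detect_errors(lines):
    """Yield (line_number, description) for every pattern match."""
    for i, line in enumerate(lines, start=1):
        for pattern, description in ERROR_PATTERNS:
            if pattern.search(line):
                yield (i, description)

corpus = ["나는/NP 학교/NNG", "간다", "책//NNG"]
print(list(detect_errors(corpus)))
```

Flagged lines would then be routed to a correction tool for hand-editing, as in the paper's workflow.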

Effects of Corpus Use on Error Identification in L2 Writing

  • Yoshiho Satake
    • Asia Pacific Journal of Corpus Research / v.4 no.1 / pp.61-71 / 2023
  • This study examines the effects of data-driven learning (DDL), an approach employing corpora for inductive language pattern learning, on error identification in second language (L2) writing. The data consist of error identification instances from fifty-five participants, compared across different reference materials: the Corpus of Contemporary American English (COCA), dictionaries, and no reference materials. There are three significant findings. First, COCA was effective for identifying collocational and form-related errors, thanks to inductive inference drawn from multiple example sentences. Second, dictionaries were beneficial for identifying lexical errors, where meaning information was helpful. Finally, the participants often took a strategic approach, identifying many simple errors without reference materials; while this strategy maximized error identification, it also led to mislabeling correct expressions as errors. The author concludes that the strategic selection of reference materials can significantly enhance the effectiveness of error identification in L2 writing. A corpus offers advantages such as easy access to target phrases and frequency information, features especially useful given that most errors were collocational and form-related. The findings suggest that teachers should guide learners to use appropriate reference materials to identify errors according to error type.

Lexical Bundles in Computer Science Research Articles: A Corpus-Based Study

  • Lee, Je-Young;Lee, Hye Jin
    • International Journal of Contents / v.14 no.4 / pp.70-75 / 2018
  • The purpose of this corpus-based study was to identify 4-word lexical bundles in computer science (CS) research articles. As the demand for research articles (RAs) for international publication increases, acquiring the field-specific writing conventions of this academic genre has become a pressing issue. One area of burgeoning interest in the examination of the rhetorical structures and linguistic features of RAs is the use of lexical bundles, the indispensable building blocks of academic discourse: different academic discourses rely on distinctive repertoires of lexical bundles. Because lexical bundles are often acquired as wholes, these recurring multi-word sequences can be retrieved automatically to make written discourse more fluent and natural, and the proper use of rhetorical devices specific to a particular discipline can be a vital indicator of success within its discourse communities. To identify the linguistic features that make up this register, the study examines the types and usage frequencies of lexical bundles in CS, one of the most in-demand fields worldwide. Given that lexical bundles are empirically derived formulaic multi-word units, identifying the core bundles used in RAs may provide insights into the specificity of particular CS text types, which in turn provides empirical evidence of register specificity and technicality within the academic discourse of computer science. Based on the results, pedagogical implications and suggestions for future research are discussed.
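The automatic retrieval of 4-word bundles can be sketched as n-gram counting with a frequency cutoff. Real bundle studies normalize frequencies by corpus size and apply dispersion thresholds; the cutoff and sample sentences here are illustrative only.

```python
from collections import Counter

def lexical_bundles(texts, n=4, min_freq=2):
    """Return n-word sequences occurring at least min_freq times
    across the given texts, with their raw frequencies."""
    counts = Counter()
    for text in texts:
        tokens = text.lower().split()
        counts.update(tuple(tokens[i:i + n])
                      for i in range(len(tokens) - n + 1))
    return {" ".join(gram): freq
            for gram, freq in counts.items() if freq >= min_freq}

docs = ["the results of the experiment are shown in table one",
        "the results of the simulation are shown in figure two"]
print(lexical_bundles(docs))
```

Only the sequence shared by both toy sentences survives the cutoff, mirroring how recurrent bundles emerge from a discipline's RAs while one-off phrasings are filtered away.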

On the Analysis of Natural Language Processing Morphology for the Specialized Corpus in the Railway Domain

  • Won, Jong Un;Jeon, Hong Kyu;Kim, Min Joong;Kim, Beak Hyun;Kim, Young Min
    • International Journal of Internet, Broadcasting and Communication / v.14 no.4 / pp.189-197 / 2022
  • Today we are exposed to various text-based media such as newspapers, Internet articles, and SNS, and the amount of text data we encounter has increased exponentially with the recent availability of Internet access on mobile devices such as smartphones. Extracting useful information from large amounts of text is called text analysis, and with the recent development of artificial intelligence it is performed using technologies such as Natural Language Processing (NLP). For this purpose, morpheme analyzers based on everyday language have been released and are in use. Pre-trained language models, which acquire natural language knowledge through unsupervised learning on large corpora, have recently become a very common component of natural language processing, but conventional morpheme analyzers are of limited use in specialized fields. In this paper, as preliminary work toward a natural language analysis model specialized for the railway field, we present the procedure for constructing a railway-domain corpus.

A Corpus-Based Longitudinal Study of Diction in Chinese and British News Reports on Chang'e Project

  • Lu, Rong;Xie, Xue;Qi, Jiashuang;Ali, Afida Mohamad;Zhao, Jie
    • Asia Pacific Journal of Corpus Research / v.3 no.1 / pp.1-20 / 2022
  • As a milestone in China's space exploration history, the Chang'e Project has attracted much media attention since its first launch. This study examines and compares the similarities and differences between the Chinese and British media in their use of nouns, verbs, and adjectives to report on the Chang'e Project. After categorising the documents by project phase, we created two diachronic corpora to explore the linguistic shifts and the similarities and differences in the diction employed by the Chinese and British media regarding Chang'e Project ideology. The longitudinal study was performed with LancsBox and the CLAWS web tagger, with critical discourse analysis as the theoretical framework. The findings show that Chang'e Project coverage in both media increased annually, especially after 2019. In contrast to the objectivity and positivity of the Chinese media, the British media appeared more subjective, with more appraisal adjectives in their news reports; nonetheless, both tried to be objective and formal in their choice of nouns and verbs. Ideology-wise, the Chinese news reports portrayed domestic circumstances more positively, while their British counterparts were typically more critical. Notably, the study's outcomes could catalyse future research on the Chang'e Project and inform diplomatic policy.