• Title/Summary/Keyword: Korean language training


Translating English By-Phrase Passives into Korean: A Parallel Corpus Analysis (영한 병렬 코퍼스에 나타난 영어 수동문의 한국어 번역)

  • Lee, Seung-Ah
    • Journal of English Language & Literature / v.56 no.5 / pp.871-905 / 2010
  • This paper is motivated by Watanabe's (2001) observation that English by-phrase passives are sometimes translated into Japanese object topicalization constructions. That is, the original English sentence in the passive may be translated into the active voice with the logical object topicalized. A number of scholars, including Chomsky (1981) and Baker (1992), have remarked that languages have various ways to avoid focusing on the logical subject. The aim of the present study is to examine the translation equivalents of English by-phrase passives in an English-Korean parallel corpus compiled by the author. A small sample of articles from Newsweek magazine and its published Korean translation reveals that there are indeed many ways to translate English by-phrase passives, including object topicalization (12.5%). Among the 64 translated sentences analyzed and classified, 12 (18.8%) examples were problematic in terms of agent defocusing, which is the primary function of passives. Of these 12 instances, five cases were identified where an alternative translation would be more suitable. The results suggest that the functional characteristics of English by-phrase passives should be highlighted in translator training as well as language teaching.

Europass and the CEFR: Implications for Language Teaching in Korea

  • Finch, Andrew Edward
    • English Language & Literature Teaching / v.15 no.2 / pp.71-92 / 2009
  • Europass was established in 2005 by the European Parliament and the Council of Europe as a single framework for language qualifications and competences, helping citizens to gain accreditation throughout the European Community. In addition, the 1996 Common European Framework of Reference for Languages: Learning, Teaching, Assessment (CEFR) provides a common basis for language syllabi, curriculum guidelines, examinations, and textbooks in Europe. This framework describes the required knowledge and skills, the cultural context, and the levels of proficiency that learners should achieve. In combination, Europass and the CEFR provide employers and educational institutes with internationally recognized standards. This paper proposes that current trends such as globalization and international mobility require a similar approach to accreditation in Asia. As jobs and workers become independent of national boundaries and restrictions, it becomes necessary to educate students as multilingual world citizens, using standards that are accepted around the world. It is suggested, therefore, that assessment models such as Europass and the CEFR, along with successful language teaching models in Europe and Canada, present opportunities for adaptation by the Korean education system. Finally, rigorous teacher training to internationally recognized levels is recommended if Korea is to produce a workforce of highly skilled, plurilingual world citizens.


A Study of Fine Tuning Pre-Trained Korean BERT for Question Answering Performance Development (사전 학습된 한국어 BERT의 전이학습을 통한 한국어 기계독해 성능개선에 관한 연구)

  • Lee, Chi Hoon;Lee, Yeon Ji;Lee, Dong Hee
    • Journal of Information Technology Services / v.19 no.5 / pp.83-91 / 2020
  • Language models such as BERT have become an important factor in deep learning-based natural language processing. Pre-training transformer-based language models is computationally expensive, since they consist of deep and wide attention-based architectures and require huge amounts of training data. Hence, fine-tuning large pre-trained language models, released by Google or other companies that can afford the resources and cost, has become standard practice. There are various techniques for fine-tuning language models; this paper examines three: data augmentation, hyperparameter tuning, and partially reconstructing the neural network. For data augmentation, we use no-answer augmentation and back-translation. Useful combinations of hyperparameters are identified by conducting a number of experiments. Finally, we add GRU and LSTM networks on top of the pre-trained BERT model to boost performance. By fine-tuning the pre-trained Korean language model with the methods above, we push the F1 score from the baseline up to 89.66. Moreover, some failed attempts yield important lessons and point out directions for further work.
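
    The F1 score reported in this abstract is the standard token-overlap F1 used in SQuAD/KorQuAD-style machine reading comprehension evaluation. A minimal sketch of that metric, independent of any particular model:

    ```python
    from collections import Counter

    def qa_f1(prediction: str, ground_truth: str) -> float:
        """Token-overlap F1 between a predicted and a gold answer span."""
        pred_tokens = prediction.split()
        gold_tokens = ground_truth.split()
        common = Counter(pred_tokens) & Counter(gold_tokens)
        num_same = sum(common.values())
        if num_same == 0:
            return 0.0
        precision = num_same / len(pred_tokens)
        recall = num_same / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    # partial overlap: precision 1/2, recall 1/1 -> F1 = 2/3
    print(qa_f1("서울 특별시", "서울"))
    ```

    Averaging this per-question score over the evaluation set gives the dataset-level F1 that the paper pushes to 89.66.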

A Qualitative Study on Early Childhood Teachers' Experiences in Teaching Young Children with Language Development Delays (보육교사의 언어발달지체 유아 지원 경험에 관한 질적 연구)

  • Younwoo Lee;Sohee Kim
    • Korean Journal of Childcare and Education / v.20 no.3 / pp.85-106 / 2024
  • Objective: The purpose of this study is to explore the experiences of early childhood teachers in teaching young children with language development delays. Methods: Eight early childhood teachers with experience teaching children with language development delays were interviewed. The collected data were analyzed through transcription, coding, and theme generation processes, resulting in three main themes and seven sub-themes. Results: First, early childhood teachers mentioned difficulties in communication due to language development delays, the need for communication support with peers, and a lack of support from families. Second, the guidance for young children with language development delays was provided by considering the characteristics of these children and through collaboration among various stakeholders. Third, early childhood teachers requested tailored training for teaching young children with language development delays. They also called for the establishment of a cooperative system among early childhood education institutions, families, and specialized agencies. Conclusion/Implications: Based on the research findings, a discussion was conducted on the support needed for guiding young children with language development delays, and suggestions were made for further research in this area.

Figure Identification Method By KoNLPy And Image Object Analysis (KoNLPy와 이미지 객체 분석을 통한 그림 식별 방법)

  • Jihye Kim;Mikyeong Moon
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.697-698 / 2023
  • With recent advances in deep learning, natural language processing technologies such as ChatGPT and Google Bard are spreading, and image object analysis technologies such as CLIP and BLIP are also advancing. However, in art domains such as exhibitions, the use of deep learning-based image data remains limited. This paper uses image object analysis to analyze the object data inside paintings at an exhibition hall, and presents a method that, based on natural language processing, identifies the painting in question when a visitor enters a query about a specific painting. This allows visitors to select and view the paintings they are looking for.
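
    The matching step this paper describes can be sketched as noun-overlap scoring between the visitor's question and each painting's detected object labels. In practice the nouns would come from a KoNLPy morphological analyzer (e.g. `Okt().nouns()`) and the labels from an image object analyzer such as CLIP/BLIP; both are replaced here with pre-computed values, and the painting names and labels are hypothetical, to keep the sketch self-contained:

    ```python
    def identify_figure(question_nouns, paintings):
        """Return the painting whose detected object labels best
        overlap the nouns extracted from the visitor's question."""
        def score(labels):
            return len(labels & set(question_nouns))
        return max(paintings, key=lambda pid: score(paintings[pid]))

    # hypothetical exhibition data: painting id -> detected object labels
    paintings = {
        "starry_night": {"별", "밤", "하늘", "마을"},
        "sunflowers": {"해바라기", "꽃병", "꽃"},
    }
    # question "해바라기가 있는 그림은 어디 있나요?" -> nouns ["해바라기", "그림"]
    print(identify_figure(["해바라기", "그림"], paintings))  # sunflowers
    ```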


Necessity of Intercultural Training Program in MET

  • Choe, Jin-Cheol;Dayna, Nollan
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2015.10a / pp.224-226 / 2015
  • Outwardly, people in the shipping industry are aware that multicultural working environments and conditions can strongly influence the operation of ships. With a lack of cultural awareness and limited foreign language skills among crew members, there are many misunderstandings and miscommunications within cross-cultural crews. More and more maritime accidents on the world's oceans are caused by human error. Nevertheless, research on cultural diversity and human interaction on ships is still in its infancy. Given the rapidly changing demographic make-up of crews, not only teaching and training in technical skills but also education in non-technical skills such as cultural awareness, cultural sensitivity, and intercultural competence is urgently needed. This study deals with intercultural issues on ships. It aims to emphasize the necessity of intercultural training in MET.


AI-based language tutoring systems with end-to-end automatic speech recognition and proficiency evaluation

  • Byung Ok Kang;Hyung-Bae Jeon;Yun Kyung Lee
    • ETRI Journal / v.46 no.1 / pp.48-58 / 2024
  • This paper presents the development of language tutoring systems for nonnative speakers by leveraging advanced end-to-end automatic speech recognition (ASR) and proficiency evaluation. Given the frequent errors in non-native speech, high-performance spontaneous speech recognition must be applied. Our systems accurately evaluate pronunciation and speaking fluency and provide feedback on errors by relying on precise transcriptions. End-to-end ASR is implemented and enhanced by using diverse non-native speaker speech data for model training. For performance enhancement, we combine semisupervised and transfer learning techniques using labeled and unlabeled speech data. Automatic proficiency evaluation is performed by a model trained to maximize the statistical correlation between the fluency score manually determined by a human expert and a calculated fluency score. We developed an English tutoring system for Korean elementary students called EBS AI Peng-Talk and a Korean tutoring system for foreigners called KSI Korean AI Tutor. Both systems were deployed by South Korean government agencies.
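
    The proficiency model above is trained to maximize the statistical correlation between expert-assigned and machine-computed fluency scores; the usual measure of that agreement is the Pearson correlation coefficient. A minimal sketch with hypothetical score vectors:

    ```python
    import math

    def pearson_r(xs, ys):
        """Pearson correlation between machine-computed and
        expert-assigned fluency scores."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    human   = [3.0, 4.5, 2.0, 5.0]  # hypothetical expert fluency scores
    machine = [2.8, 4.2, 2.1, 4.9]  # hypothetical model fluency scores
    print(round(pearson_r(human, machine), 3))
    ```

    A value near 1.0 means the automatic scorer ranks speakers the way the human expert does, which is the training objective the abstract describes.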

DeNERT: Named Entity Recognition Model using DQN and BERT

  • Yang, Sung-Min;Jeong, Ok-Ran
    • Journal of the Korea Society of Computer and Information / v.25 no.4 / pp.29-35 / 2020
  • In this paper, we propose DeNERT, a new named entity recognition model. Recently, natural language processing research has actively used language representation models pre-trained on large corpora. In particular, named entity recognition, one subfield of natural language processing, uses supervised learning, which requires a large training dataset and substantial computation. Reinforcement learning is a method that learns through trial-and-error experience without initial data; it is closer to the human learning process than other machine learning methodologies, but it has rarely been applied to natural language processing, being used instead in simulation environments such as Atari games and AlphaGo. BERT is a general-purpose language model developed by Google, pre-trained on large corpora at great computational cost; it shows high performance in natural language processing research and high accuracy on many downstream tasks. Here, we combine two deep learning models, DQN and BERT, to build the DeNERT named entity recognition model. The proposed model is trained by constructing a reinforcement learning environment on top of the language representations that are the strength of a general-purpose language model. The resulting DeNERT model achieves faster inference and higher performance with a small training dataset. We validate the model's named entity recognition performance through experiments.
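
    At the core of the DQN component is the Q-learning update, which DQN approximates with a neural network instead of a table. A minimal tabular sketch of that update (the NER-as-RL framing below, with tokens as states and tags as actions, is illustrative only and not the paper's exact environment):

    ```python
    def q_update(Q, state, action, reward, next_state, actions,
                 alpha=0.1, gamma=0.9):
        """One tabular Q-learning step:
        Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
        DQN replaces the table Q with a neural network."""
        best_next = max(Q.get((next_state, a), 0.0) for a in actions)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

    # hypothetical tagging episode: reward 1.0 for a correct tag
    actions = ["PER", "LOC", "O"]
    Q = {}
    q_update(Q, "Seoul", "LOC", 1.0, "is", actions)
    print(Q[("Seoul", "LOC")])  # 0.1
    ```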

Implementation and Evaluation of an HMM-Based Speech Synthesis System for the Tagalog Language

  • Mesa, Quennie Joy;Kim, Kyung-Tae;Kim, Jong-Jin
    • MALSORI / v.68 / pp.49-63 / 2008
  • This paper describes the development and assessment of a hidden Markov model (HMM) based Tagalog speech synthesis system, where Tagalog is the most widely spoken indigenous language of the Philippines. Several aspects of the design process are discussed here. In order to build the synthesizer, a speech database was recorded and phonetically segmented. The constructed speech corpus contains approximately 89 minutes of Tagalog speech organized in 596 spoken utterances. Furthermore, contextual information is determined. The quality of the synthesized speech is assessed by subjective tests employing 25 native Tagalog speakers as respondents. Experimental results show that the new system obtains a mean opinion score (MOS) of 3.29, which indicates that it can produce highly intelligible, neutral Tagalog speech with stable quality even when a small amount of speech data is used for HMM training.
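
    The MOS of 3.29 reported above is simply the mean of the 1-5 quality ratings collected from the 25 listeners. A sketch of that aggregation, with hypothetical ratings:

    ```python
    def mean_opinion_score(ratings):
        """Average of all listeners' 1-5 quality ratings for one system."""
        flat = [r for listener in ratings for r in listener]
        return sum(flat) / len(flat)

    # hypothetical data: each inner list is one listener's ratings
    ratings = [[3, 4, 3], [4, 3, 3], [3, 3, 4]]
    print(round(mean_opinion_score(ratings), 2))
    ```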


COMPUTER AND INTERNET RESOURCES FOR PRONUNCIATION AND PHONETICS TEACHING

  • Makarova, Veronika
    • Proceedings of the KSPS conference / 2000.07a / pp.338-349 / 2000
  • Pronunciation teaching is once again coming into the foreground of ELT. Japan is, however, lagging far behind many countries in the development of pronunciation curricula and in the actual speech performance of Japanese learners of English. The reasons for this can be found in the prevalence of communicative methodologies unfavorable for pronunciation teaching, in the lack of trained professionals, and in the large numbers of students in Japanese foreign language classes. This paper offers a way to promote foreign language pronunciation teaching in Japan and other countries by means of employing computer and internet facilities. The paper outlines the major directions of using modern speech technologies in pronunciation classes, like EVF (electronic visual feedback) training at segmental and prosodic levels, and automated error detection, testing, grading, and fluency assessment. The author discusses the applicability of some specific software packages (CSLU, SUGIspeech, Multispeech, Wavesurfer, etc.) for the needs of pronunciation teaching. Finally, the author talks about the globalization of pronunciation education via internet resources, such as computer corpora and web pages related to speech and pronunciation training.
