• Title/Summary/Keyword: Language Models

Designing Cultural Syllabus and Lesson Plan Based on Developmental Stages of Acculturation of Intercultural Communicative Competence

  • Jang, Eun-Suk
    • English Language & Literature Teaching / v.17 no.1 / pp.37-52 / 2011
  • The purposes of this study were to review the developmental stages of acculturation, to establish the dimensions and components of intercultural communicative competence, and to suggest teaching methods for the elementary school based on the dimensions and components of those stages. To achieve these purposes, theoretical research on the nature of intercultural communicative competence and on teaching methods for its dimensions and components was carried out in terms of the developmental stages of acculturation. The stages of acculturation are related to the cognitive domain, the affective domain, and cultural awareness. For cognitive development, models such as those of Cummins (1981), Wong-Fillmore (1983), and Ausubel (1968) were presented. For the affective domain of second language research, the models of Gardner and Lambert (1972), Maslow (1954), and Bloom (1974) were discussed. By modifying the models of Ausubel, Cummins, and Wong-Fillmore, the dimensions and components of intercultural communicative competence were established. In addition, it was suggested that a cultural syllabus and lesson plan based on the tourist and survivor stages should be considered.

Effective Models of English Team Teaching in Korean Middle Schools

  • Kim, Jeong-Ok
    • English Language & Literature Teaching / v.15 no.3 / pp.105-127 / 2009
  • This study investigates effective models of team teaching (TT) in Korean middle school classrooms based on a questionnaire survey and two English listening tests. Data from 349 first-year students at three different middle schools were collected and compared across TT types in terms of the participants' background language learning methods and their opinions about TT. The findings indicate that students hold different opinions about TT depending on the TT type. The results of the English listening tests also show significant differences between students who received TT and those who did not. This study offers both native English teachers (NETs) and Korean English teachers (KETs) perspectives on effective TT types and an opportunity to reconsider their own TT practices in order to develop students' English communicative competence more successfully.

DEVSIF Composer: A Synthesis Tool for Fast Interpretation of Simulation Models

  • Lee, Wan-Bok
    • Journal of Information and Communication Convergence Engineering / v.6 no.1 / pp.59-63 / 2008
  • Methods and algorithms that can accelerate simulation have become increasingly important, as the modeling and simulation methodology for discrete event systems is used in many areas such as model validation/verification and performance evaluation. This paper proposes a tool named DEVSIF Composer. The tool is based on automated compiled-simulation technology: it builds a new composed model that can be executed much faster by merging the component models together. Models are described in our new specification language, DEVSIF, which is compatible with object-oriented languages and supports the representation of hierarchical model structures. Experimental results demonstrate that, on average, DEVSIF Composer makes a transformed DEVS model run five times faster than the original.
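
The abstract does not show code, but the core idea of compiled, composed simulation can be sketched loosely: instead of traversing a hierarchy of component models at every step, a composer flattens them into a single step function. All names below (`Atomic`, `compose`, `step`) are hypothetical illustrations, not the actual DEVSIF Composer API, and no DEVS-specific semantics are modeled.

```python
# Minimal sketch of the composition idea (hypothetical names; this is
# not the actual DEVSIF Composer API).

class Atomic:
    """A trivial 'atomic model': a state plus a transition function."""
    def __init__(self, state, transition):
        self.state = state
        self.transition = transition  # state -> state

def compose(models):
    """Flatten the component models into one step function, removing
    the per-step hierarchy traversal an interpreter would perform."""
    transitions = [m.transition for m in models]
    def step(states):
        return [t(s) for t, s in zip(transitions, states)]
    return step

models = [Atomic(0, lambda s: s + 1), Atomic(1, lambda s: s * 2)]
step = compose(models)
states = [m.state for m in models]
for _ in range(3):
    states = step(states)
print(states)  # [3, 8]
```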

Document Summarization Model Based on General Context in RNN

  • Kim, Heechan;Lee, Soowon
    • Journal of Information Processing Systems / v.15 no.6 / pp.1378-1391 / 2019
  • In recent years, automatic document summarization has been widely studied in the field of natural language processing, thanks to the remarkable progress of deep learning models. To decode a word, existing models for abstractive summarization usually represent the context of a document using the weighted hidden states of each input word. Because the weights change at each decoding step, they reflect only the local context of the document, which makes it difficult to generate a summary that reflects its overall context. To solve this problem, we introduce the notion of a general context and propose a summarization model based on it. The general context reflects the overall context of the document and is independent of the decoding step. Experimental results on the CNN/Daily Mail dataset show that the proposed model outperforms existing models.
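
The notion of a "general context" can be sketched as a fixed document-level vector fed into each decoding step alongside the usual step-wise attention context. The PyTorch sketch below is a loose illustration under that assumption (here the general context is simply the mean of encoder states), not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class GeneralContextDecoderStep(nn.Module):
    """One decoding step combining step-wise attention (local context)
    with a fixed document-level vector (general context). A sketch,
    not the paper's model."""
    def __init__(self, hidden):
        super().__init__()
        self.attn = nn.Linear(hidden, hidden)
        self.out = nn.Linear(3 * hidden, hidden)

    def forward(self, dec_h, enc_hs):
        # Local context: attention weights change at every decoding step.
        scores = torch.matmul(self.attn(enc_hs), dec_h)          # (T,)
        weights = torch.softmax(scores, dim=0)
        local_ctx = (weights.unsqueeze(1) * enc_hs).sum(dim=0)   # (H,)
        # General context: independent of the decoding step; one simple
        # choice is the mean of all encoder hidden states.
        general_ctx = enc_hs.mean(dim=0)                         # (H,)
        return torch.tanh(self.out(torch.cat([dec_h, local_ctx, general_ctx])))

enc_hs = torch.randn(7, 16)   # 7 input words, hidden size 16
dec_h = torch.randn(16)
step = GeneralContextDecoderStep(16)
print(step(dec_h, enc_hs).shape)  # torch.Size([16])
```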

A Study on the Performance Analysis of Entity Name Recognition Techniques Using Korean Patent Literature

  • Gim, Jangwon
    • Journal of Advanced Information Technology and Convergence / v.10 no.2 / pp.139-151 / 2020
  • Entity name recognition is a part of information extraction that extracts entity names from documents and classifies their types. Entity name recognition technologies are widely used in natural language processing tasks such as information retrieval, machine translation, and question answering. Various deep learning-based models exist to improve entity name recognition performance, but comparative studies of these models on Korean data remain scarce. In this paper, we compare and analyze the performance of CRF, LSTM-CRF, BiLSTM-CRF, and BERT, which are actively used for entity name recognition, on Korean data. We also evaluate whether the embedding models widely used in recent natural language processing tasks can improve the performance of entity name recognition models. Experiments on patent data and a Korean corpus confirmed that BiLSTM-CRF with FastText embeddings showed the highest performance.
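
A minimal sketch of the best-performing configuration (BiLSTM-CRF over pretrained FastText-style vectors) is shown below, assuming the third-party `pytorch-crf` package for the CRF layer; the paper's exact hyperparameters and data pipeline are not reproduced here.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party pytorch-crf package (assumed installed)

class BiLSTMCRF(nn.Module):
    """Sketch of a BiLSTM-CRF tagger with pretrained (e.g., FastText)
    vectors fed in as fixed embeddings."""
    def __init__(self, emb_dim, hidden, num_tags):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.emit = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, embs, tags):
        emissions = self.emit(self.lstm(embs)[0])
        return -self.crf(emissions, tags)   # negative log-likelihood

    def predict(self, embs):
        emissions = self.emit(self.lstm(embs)[0])
        return self.crf.decode(emissions)   # best tag sequence per sentence

# Toy usage: one sentence of 5 tokens, 300-dim FastText-style vectors.
model = BiLSTMCRF(emb_dim=300, hidden=64, num_tags=4)
embs = torch.randn(1, 5, 300)               # stand-in for FastText lookups
tags = torch.randint(0, 4, (1, 5))
print(model.loss(embs, tags).item(), model.predict(embs))
```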

Technical Trends in Hyperscale Artificial Intelligence Processors (초거대 인공지능 프로세서 반도체 기술 개발 동향)

  • W. Jeon;C.G. Lyuh
    • Electronics and Telecommunications Trends / v.38 no.5 / pp.1-11 / 2023
  • The emergence of generative hyperscale artificial intelligence (AI) has enabled new services, such as image-generating AI and conversational AI based on large language models. Such services are likely to attract an influx of users that conventional AI models cannot handle. Furthermore, the exponential growth in training data, computation, and user demand has led to intensive hardware resource consumption, highlighting the need to develop domain-specific semiconductors for hyperscale AI. In this technical report, we describe development trends in hyperscale AI processor technologies pursued by domestic and foreign semiconductor companies, such as NVIDIA, Graphcore, Tesla, Google, Meta, SAPEON, FuriosaAI, and Rebellions.

Zero-shot Lexical Semantics based on Perplexity of Pretrained Language Models (사전학습 언어모델의 Perplexity에 기반한 Zero-shot 어휘 의미 모델)

  • Choi, Heyong-Jun;Na, Seung-Hoon
    • Annual Conference on Human and Language Technology / 2021.10a / pp.473-475 / 2021
  • Implementing synonym recommendation requires computing the similarity between words. However, existing methods for computing word similarity cannot handle words that do not appear in the dataset. To address this, this paper computes word similarity using the perplexity (PPL) of a language model; synonym recommendation based on this similarity achieved an MRR of 41.31%.
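
The underlying idea can be sketched roughly: a candidate is a more plausible substitute for a target word if inserting it into a context sentence yields a lower language-model perplexity. The sketch below uses GPT-2 from Hugging Face `transformers` purely for illustration; the paper's actual model and scoring details may differ.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(sentence: str) -> float:
    """Perplexity of a sentence under the pretrained LM."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return math.exp(loss.item())

# Rank candidate substitutes: lower perplexity ~ more plausible word.
context = "The movie was absolutely {} from start to finish."
candidates = ["wonderful", "terrible", "refrigerator"]
ranked = sorted(candidates, key=lambda w: perplexity(context.format(w)))
print(ranked)  # a fluent word should outrank an implausible one
```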

Machine Learning Based Domain Classification for Korean Dialog System (기계학습을 이용한 한국어 대화시스템 도메인 분류)

  • Jeong, Young-Seob
    • Journal of Convergence for Information Technology / v.9 no.8 / pp.1-8 / 2019
  • Dialog systems are becoming a dominant new way for humans and computers to interact, allowing people to access various services through natural language. A dialog system typically has a pipeline structure consisting of several modules (e.g., speech recognition, natural language understanding, and dialog management). In this paper, we tackle the task of domain classification for the natural language understanding module by employing machine learning models such as convolutional neural networks and random forests. On our dataset of seven service domains, the random forest model achieved the best performance (F1 score 0.97). As future work, we will keep searching for better approaches to domain classification by investigating other machine learning models.
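
As a rough illustration of the winning approach, the scikit-learn sketch below trains a random forest on TF-IDF features for a toy domain-classification task. The utterances and domain labels are invented for illustration; the paper used seven Korean service domains and its own features.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy utterances labeled with service domains (illustrative only).
utterances = [
    "turn off the living room lights", "dim the bedroom lamp",
    "what is the weather tomorrow", "will it rain this weekend",
    "play some jazz music", "skip to the next song",
]
domains = ["home", "home", "weather", "weather", "music", "music"]

clf = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=100))
clf.fit(utterances, domains)
print(clf.predict(["is it going to snow tonight"]))  # expected: ['weather']
```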

Properties and Quantitative Analysis of Bias in Korean Language Models: A Comparison with English Language Models and Improvement Suggestions (한국어 언어모델의 속성 및 정량적 편향 분석: 영어 언어모델과의 비교 및 개선 제안)

  • Jaemin Kim;Dong-Kyu Chae
    • Annual Conference on Human and Language Technology / 2023.10a / pp.558-562 / 2023
  • With the recent emergence of ChatGPT, interest in text generation models has grown, and metrics for evaluating text generation tasks are being actively studied. Traditional word-frequency-based metrics cannot capture semantic similarity, so BERTScore, a metric based on pretrained language models, has been widely adopted. However, this approach raises fairness concerns because of the biases present in the data on which the pretrained language models were trained. Analyses of bias in Korean pretrained language models are therefore needed, but existing studies have the limitation of not covering the diverse attribute-specific biases that arise in society, and research comparing attribute-specific biases across pretrained language models based on different languages is likewise scarce. Accordingly, this paper compares and analyzes the attribute-specific biases of Korean pretrained language models, compares them with those of English pretrained language models, and constructs a dataset that makes such comparisons possible. In addition, through a bias analysis by model type and size, we provide a guide for selecting an appropriate Korean pretrained language model.
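
One simple way to probe the kind of attribute-specific bias analyzed here is to compare a masked language model's predictions across sentences that differ only in the attribute word. The sketch below uses the Hugging Face fill-mask pipeline with `bert-base-uncased` for an English illustration; the paper works with Korean models and a purpose-built dataset, and its bias measures are more elaborate.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def token_prob(template: str, token: str) -> float:
    """Probability the masked LM assigns to `token` at the [MASK] slot."""
    results = fill(template, targets=[token])
    return results[0]["score"]

# Probe a gender attribute: identical contexts, swapped attribute words.
for pronoun in ["He", "She"]:
    p = token_prob(f"{pronoun} worked as a [MASK].", "engineer")
    print(pronoun, p)
# A large gap between the two probabilities suggests attribute bias.
```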

Leveraging LLMs for Corporate Data Analysis: Employee Turnover Prediction with ChatGPT (대형 언어 모델을 활용한 기업데이터 분석: ChatGPT를 활용한 직원 이직 예측)

  • Sungmin Kim;Jee Yong Chung
    • Knowledge Management Research / v.25 no.2 / pp.19-47 / 2024
  • An organization's ability to analyze and utilize data plays an important role in knowledge management and decision-making. This study investigates the potential application of large language models in corporate data analysis. Focusing on the field of human resources, it examines the data analysis capabilities of these models. Using the widely studied IBM HR dataset, the study reproduces machine learning-based employee turnover prediction analyses from previous research through ChatGPT and compares the predictive performance. Unlike past approaches that required advanced programming skills, ChatGPT-based machine learning analysis, conducted through the analyst's natural language requests, is much easier and faster, and its prediction accuracy was found to be competitive with previous studies. This suggests that large language models can serve as effective and practical alternatives in corporate data analysis, which has traditionally demanded advanced programming capabilities, and the approach is expected to contribute to the popularization of data analysis and the spread of data-driven decision-making (DDDM). The prompts used during the analysis and the program code generated by ChatGPT are included in the appendix for verification, providing a foundation for future data analysis research using large language models.
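
The conventional workflow that the study reproduces through ChatGPT looks roughly like the scikit-learn sketch below. The file name and the `Attrition` column follow the public IBM HR Analytics dataset, but treat them as assumptions and adjust to your copy; the paper's exact models and settings are not reproduced.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# File and column names follow the public IBM HR Analytics dataset
# (an assumption; adjust to the copy you have).
df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")
y = (df["Attrition"] == "Yes").astype(int)
X = pd.get_dummies(df.drop(columns=["Attrition"]))  # one-hot categoricals

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "F1:", f1_score(y_te, pred))
```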