• Title/Summary/Keyword: language performance

Search results: 1,539

Null Subjects and Objects in Child English

  • Han, Ho; Choe, Soon-Gwon; Park, Yeon-Sook
    • English Language & Literature Teaching, v.10 no.2, pp.25-42, 2004
  • This paper explores possible interpretations of null subjects and objects in child language, pointing out potential problems in recent work within the minimalist framework and suggesting alternative views. In particular, we focus on how objects are identified and/or licensed, since most studies of this issue have accounted for subjects only. Discussing the results of studies on child language data, we show that previous syntactic explanations of subjects, however attractive and refined they may seem, may not hold when accounting for objects and for various aspects and properties of arguments in child language. In doing so, we suggest and support a performance-based account, a discourse-based account, and a markedness account.


A Research on Test Suites for Machine Translation Systems. (기계번역 시스템 측정 장치 연구)

  • Lee, Min-Haeng; Jee, Kwang-Sin; Chung, So-Woo
    • Language and Information, v.2 no.2, pp.185-220, 1998
  • The purpose of this research is to propose basic guidelines for constructing English and Korean test suites that can be used to objectively evaluate the performance of machine translation systems. To this end, we constructed 650 English test sentences and 650 Korean test sentences, and developed statistical methods and tools for the comparative evaluation of English-Korean machine translation systems. The research also evaluates existing commercial English-Korean machine translation systems. Its importance lies in promoting an awareness of the importance of, and need for, testing machine translation systems within the natural language processing community. It will also contribute substantially to the development of evaluation methods, techniques, and appropriate test suites for Korean information processing systems. The results of this research can be used by the natural language community to test the performance and guide the development of their information processing or machine translation systems.
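
The comparative evaluation described in this entry can be approximated with a small harness that scores each system's output against reference translations over a shared test suite. The sketch below is a minimal illustration, not the authors' actual statistical tools; the system names, sample sentences, and the character-level similarity metric are assumptions made for the example.

```python
from difflib import SequenceMatcher
from statistics import mean, stdev

def similarity(hypothesis: str, reference: str) -> float:
    """Character-level similarity in [0, 1] between a system output and a reference."""
    return SequenceMatcher(None, hypothesis, reference).ratio()

def evaluate_system(outputs: list[str], references: list[str]) -> tuple[float, float]:
    """Return mean and standard deviation of per-sentence similarity scores."""
    scores = [similarity(h, r) for h, r in zip(outputs, references)]
    return mean(scores), stdev(scores)

# Hypothetical test suite: reference translations plus outputs from two MT systems.
references = ["The committee approved the budget.", "She has lived in Seoul since 2010."]
system_outputs = {
    "system_A": ["The committee approved budget.", "She has lived in Seoul since 2010."],
    "system_B": ["Committee approve the budget.", "She lives in Seoul from 2010."],
}

for name, outputs in system_outputs.items():
    avg, sd = evaluate_system(outputs, references)
    print(f"{name}: mean similarity = {avg:.3f} (sd = {sd:.3f})")
```

In a real setting the 650-sentence suites would be read from files and a stronger sentence-level metric would replace the simple similarity ratio; the structure of the comparison stays the same.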


Syntactic Analysis and Keyword Expansion for Performance Enhancement of Information Retrieval System (정보 검색 시스템의 성능 향상을 위한 구문 분석과 검색어 확장)

  • 윤성희
    • Journal of the Korea Academia-Industrial cooperation Society, v.5 no.4, pp.303-308, 2004
  • Natural language queries are the best user interface for users of information retrieval systems. This paper proposes a retrieval system that expands keywords from the syntactically analyzed structure of a user's natural language query, based on natural language processing techniques. By combining or splitting compound nouns through syntactic tree traversal, and by expanding variant or abbreviated keywords into multiple keywords, the system's performance improved by up to 11.3% in precision and 4.7% in correctness.
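
As a rough illustration of the keyword-expansion step, the sketch below splits and recombines compound nouns taken from a parsed query and adds variant or expanded forms from a small lookup table. The noun list and the variant dictionary are hypothetical; the paper's actual system works on full syntactic trees of natural language queries.

```python
from itertools import combinations

# Hypothetical variant/abbreviation table (built from query analysis in practice).
VARIANTS = {
    "ir": ["information retrieval"],
    "information retrieval": ["ir"],
    "db": ["database"],
}

def expand_compound(nouns: list[str]) -> set[str]:
    """Generate split forms (single nouns) and contiguous combined forms of a compound noun."""
    keywords = set(nouns)
    for i, j in combinations(range(len(nouns) + 1), 2):
        if j - i >= 2:                      # contiguous spans of two or more nouns
            keywords.add(" ".join(nouns[i:j]))
    return keywords

def expand_variants(keywords: set[str]) -> set[str]:
    """Add known variant or expanded forms for each keyword."""
    expanded = set(keywords)
    for kw in keywords:
        expanded.update(VARIANTS.get(kw, []))
    return expanded

# Example: compound noun phrase extracted from a parsed query.
query_nouns = ["ir", "system", "performance"]
print(sorted(expand_variants(expand_compound(query_nouns))))
```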


AI-based language tutoring systems with end-to-end automatic speech recognition and proficiency evaluation

  • Byung Ok Kang; Hyung-Bae Jeon; Yun Kyung Lee
    • ETRI Journal, v.46 no.1, pp.48-58, 2024
  • This paper presents the development of language tutoring systems for nonnative speakers by leveraging advanced end-to-end automatic speech recognition (ASR) and proficiency evaluation. Given the frequent errors in non-native speech, high-performance spontaneous speech recognition must be applied. Our systems accurately evaluate pronunciation and speaking fluency and provide feedback on errors by relying on precise transcriptions. End-to-end ASR is implemented and enhanced by using diverse non-native speaker speech data for model training. For performance enhancement, we combine semisupervised and transfer learning techniques using labeled and unlabeled speech data. Automatic proficiency evaluation is performed by a model trained to maximize the statistical correlation between the fluency score manually determined by a human expert and a calculated fluency score. We developed an English tutoring system for Korean elementary students called EBS AI Peng-Talk and a Korean tutoring system for foreigners called KSI Korean AI Tutor. Both systems were deployed by South Korean government agencies.
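
A simplified version of the proficiency-evaluation idea is sketched below: fit a scorer on utterance-level features and check how well its outputs correlate with human fluency ratings. The feature set, the linear model, and the sample data are assumptions made for illustration; the deployed systems use far richer models trained on ASR transcriptions.

```python
import numpy as np

# Hypothetical per-utterance features: [speaking rate, pause ratio, ASR confidence].
features = np.array([
    [3.1, 0.20, 0.91],
    [2.4, 0.35, 0.80],
    [3.8, 0.10, 0.95],
    [1.9, 0.45, 0.70],
    [3.3, 0.18, 0.88],
])
human_scores = np.array([4.0, 2.5, 4.5, 1.5, 3.5])  # expert fluency ratings

# Fit a simple linear scorer (least squares) as a stand-in for the trained model.
X = np.hstack([features, np.ones((len(features), 1))])   # add a bias column
weights, *_ = np.linalg.lstsq(X, human_scores, rcond=None)
predicted = X @ weights

# Evaluate agreement with the human ratings via Pearson correlation.
correlation = np.corrcoef(predicted, human_scores)[0, 1]
print(f"Pearson correlation with human fluency scores: {correlation:.3f}")
```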

Implementation and Analysis of Multi-Precision Multiplication for Public Key Cryptography Based on Android Platform (안드로이드 기반 공개키 암호를 위한 곱셈기 구현 및 분석)

  • Seo, Hwa-Jeong; Kim, Ho-Won
    • The Journal of Korean Institute of Communications and Information Sciences, v.37C no.10, pp.940-948, 2012
  • Android programs are developed with the Java SDK and executed on a virtual machine. For this reason, programming is easier than in traditional C, but execution speed suffers. To enhance performance, the NDK development tool, which provides a C and assembly language programming environment, was introduced. Furthermore, with the NEON extension provided by ARM, vector operations can be used to enhance performance further. In this paper, we explore the effectiveness of the NDK and then propose an improved multiplication structure using NEON.
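
The core operation being accelerated is multi-precision (multi-limb) multiplication. The Python sketch below shows the schoolbook algorithm over 32-bit limbs purely to clarify what the NEON-vectorized C/assembly kernel computes; the paper's proposed structure reorders and vectorizes these limb products, which this sketch does not attempt.

```python
MASK32 = (1 << 32) - 1

def schoolbook_multiply(a: list[int], b: list[int]) -> list[int]:
    """Multiply two little-endian arrays of 32-bit limbs (schoolbook method)."""
    result = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            acc = result[i + j] + ai * bj + carry
            result[i + j] = acc & MASK32     # low 32 bits stay in this limb
            carry = acc >> 32                # high bits propagate to the next limb
        result[i + len(b)] += carry
    return result

def limbs_to_int(limbs: list[int]) -> int:
    return sum(limb << (32 * k) for k, limb in enumerate(limbs))

# Two 128-bit operands as four 32-bit limbs each (little-endian).
x = [0x89ABCDEF, 0x01234567, 0xDEADBEEF, 0x0BADF00D]
y = [0x12345678, 0x9ABCDEF0, 0xCAFEBABE, 0x00C0FFEE]
product = schoolbook_multiply(x, y)
assert limbs_to_int(product) == limbs_to_int(x) * limbs_to_int(y)
print([hex(limb) for limb in product])
```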

A Study of Pre-trained Language Models for Korean Language Generation (한국어 자연어생성에 적합한 사전훈련 언어모델 특성 연구)

  • Song, Minchae; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.28 no.4, pp.309-328, 2022
  • This study empirically analyzed Korean pre-trained language models (PLMs) designed for natural language generation. The performance of two PLMs, BART and GPT, on the task of abstractive text summarization was compared. To investigate how performance depends on the characteristics of the inference data, ten different document types, spanning six types of informational content as well as creative content, were considered. BART (which can both generate and understand natural language) was found to perform better than GPT (which can only generate). On closer examination of the effect of inference data characteristics, GPT's performance was found to be proportional to the length of the input text. However, even for the longest documents (where GPT performs best), BART still outperformed GPT, suggesting that the greatest influence on downstream performance is not the size of the training data or of the model's parameters but the structural suitability of the PLM for the downstream task at hand. The performance of the PLMs was also compared by analyzing parts-of-speech (POS) shares. BART's performance was inversely related to the proportion of prefixes, adjectives, adverbs, and verbs, but positively related to that of nouns. This result emphasizes the importance of taking the inference data's characteristics into account when fine-tuning a PLM for its intended downstream task.
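
For readers who want to reproduce this kind of comparison, the sketch below generates an abstractive summary with a Korean BART-style checkpoint via the Hugging Face transformers library; a parallel run with a Korean GPT checkpoint (generation-only, prompted with the document and a summary cue) would give the other side of the comparison. The model identifier and the input document are placeholders, not the checkpoints or corpus used in the study.

```python
# Requires: pip install transformers torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical Korean BART summarization checkpoint; substitute the checkpoint you use.
MODEL_NAME = "some-org/korean-bart-summarization"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

document = "..."  # a Korean source document to be summarized (placeholder)

inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(
    **inputs,
    max_length=128,      # cap on the generated summary length
    num_beams=4,         # beam search, a common default for abstractive summarization
    early_stopping=True,
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```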

Classification Performance Analysis of Cross-Language Text Categorization using Machine Translation (기계번역을 이용한 교차언어 문서 범주화의 분류 성능 분석)

  • Lee, Yong-Gu
    • Journal of the Korean Society for Library and Information Science, v.43 no.1, pp.313-332, 2009
  • Cross-language text categorization (CLTC) can classify documents automatically using a training set from another language. In this study, collections appropriate for CLTC were extracted from KTSET, and the classification performance of various CLTC methods was compared using an SVM classifier together with machine translation. The results showed that classification performance decreased in the order of the poly-lingual training method, training-set translation, and test-set translation. However, training-set translation can be regarded as the most practical CLTC method, because it is efficient in terms of machine translation and is easily adapted to general environments. The lower performance observed was attributed to feature reduction and to features lacking subject characteristics, both of which arise during the machine translation step of CLTC.
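
The training-set translation setup that the study found most practical can be outlined as follows: machine-translate the training documents into the target language, then train a standard TF-IDF + SVM classifier and apply it to untranslated target-language test documents. The tiny in-memory corpus below is a placeholder standing in for the KTSET-derived collections, and the "translated" texts are assumed rather than produced by an actual MT system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Training documents after machine translation into the target language (placeholders),
# paired with their category labels.
train_docs = [
    "stock market prices rose sharply after the announcement",
    "the striker scored twice in the championship final",
    "central bank raised interest rates to curb inflation",
    "the team advanced to the semifinal after a penalty shootout",
]
train_labels = ["economy", "sports", "economy", "sports"]

# Test documents written natively in the target language (no translation needed).
test_docs = [
    "bond yields fell as investors expected a rate cut",
    "the goalkeeper saved three shots in the opening match",
]

# TF-IDF features + linear SVM, the classifier family used in the study.
classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(train_docs, train_labels)
print(classifier.predict(test_docs))
```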

Study on Legato in Vocal Music Performance (성악 발성에서의 레가토(Legato)에 대한 연구)

  • Lu, Xiaozhou
    • Journal of Korea Entertainment Industry Association, v.13 no.4, pp.17-26, 2019
  • To study the role of legato for singers in vocal music performance, this paper adopts the methods of demonstration, induction, and comparison to analyze three factors affecting the creation of legato in performance and to propose solutions. By considering the influence of singing technique on legato, singers can address issues such as breathing, articulation, and vocal register, thereby improving their singing skills. By considering the influence of the singing language on legato, singers can study the phonetic and structural features of the language in depth so as to achieve legato in language, thereby deepening their study of the singing language. By considering the influence of singing emotion on legato, singers can solve the problem of emotional coherence in both thought and feeling to create legato of emotion, thereby promoting the expression of the singer's emotion in the work. Legato thus plays an important role in improving singers' technique, language, and emotional expression. The aim of this paper is to encourage singers to pay closer attention to legato in vocal music performance and to make better use of it in singing.

A Study on Recognition of Citation Metadata using Bidirectional GRU-CRF Model based on Pre-trained Language Model (사전학습 된 언어 모델 기반의 양방향 게이트 순환 유닛 모델과 조건부 랜덤 필드 모델을 이용한 참고문헌 메타데이터 인식 연구)

  • Ji, Seon-yeong; Choi, Sung-pil
    • Journal of the Korean Society for Information Management, v.38 no.1, pp.221-242, 2021
  • This study applied reference metadata recognition using a bidirectional GRU-CRF model based on a pre-trained language model. The experimental dataset consists of 161,315 references extracted, using rule-based methods, from 53,562 academic documents in PDF format collected from 40 journals published in 2018. The study was conducted to automatically extract references from academic literature in PDF format. Through this study, the language model with the highest performance was identified, and additional experiments were conducted on that model to compare recognition performance according to the size of the training set. Finally, the recognition performance for each metadata field was confirmed.
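
The tagging architecture can be outlined as below: token representations from a pre-trained language model feed a bidirectional GRU, whose per-token scores are decoded by a CRF layer into metadata labels (author, title, journal, year, and so on). This is a minimal PyTorch sketch assuming the pytorch-crf package; the hidden sizes, tag set, and the way PLM embeddings are obtained are illustrative choices, not the study's exact configuration.

```python
# Requires: pip install torch pytorch-crf
import torch
import torch.nn as nn
from torchcrf import CRF

class BiGRUCRFTagger(nn.Module):
    """Bidirectional GRU encoder with a CRF decoding layer for sequence labeling."""

    def __init__(self, embedding_dim: int, hidden_dim: int, num_tags: int):
        super().__init__()
        self.gru = nn.GRU(embedding_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.emission = nn.Linear(2 * hidden_dim, num_tags)  # per-token tag scores
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, embeddings, tags, mask):
        """Negative log-likelihood of the gold tag sequences."""
        emissions = self.emission(self.gru(embeddings)[0])
        return -self.crf(emissions, tags, mask=mask)

    def decode(self, embeddings, mask):
        """Best tag sequence per reference string (Viterbi decoding in the CRF)."""
        emissions = self.emission(self.gru(embeddings)[0])
        return self.crf.decode(emissions, mask=mask)

# Toy batch: 2 reference strings, 6 tokens each, with 768-dim PLM embeddings (assumed).
embeddings = torch.randn(2, 6, 768)
tags = torch.randint(0, 5, (2, 6))        # 5 hypothetical tags: author/title/journal/year/other
mask = torch.ones(2, 6, dtype=torch.bool)

model = BiGRUCRFTagger(embedding_dim=768, hidden_dim=128, num_tags=5)
print(model.loss(embeddings, tags, mask))  # training objective
print(model.decode(embeddings, mask))      # predicted tag sequences
```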

A BERT-Based Automatic Scoring Model of Korean Language Learners' Essay

  • Lee, Jung Hee; Park, Ji Su; Shon, Jin Gon
    • Journal of Information Processing Systems, v.18 no.2, pp.282-291, 2022
  • This research applies a pre-trained bidirectional encoder representations from transformers (BERT) handwriting recognition model to predict foreign Korean-language learners' writing scores. A corpus of 586 answers to midterm and final exams written by foreign learners at the Intermediate 1 level was acquired and used for pre-training, resulting in consistent performance even with small datasets. The test data were pre-processed, the model was fine-tuned, and the results were produced as score predictions; the difference between the predicted and actual scores was then calculated. An accuracy of 95.8% was demonstrated, indicating that the prediction results were strong overall; hence, the tool is suitable for the automatic scoring of Korean written test answers, including those containing grammatical errors, written by foreign learners. These results are particularly meaningful in that the data consisted of written text produced by foreign learners rather than native speakers.
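
A common way to set up this kind of score prediction is to fine-tune BERT with a single regression output, as sketched below using the Hugging Face transformers API. The checkpoint name, learning rate, and sample essay/score pair are assumptions for illustration, not the study's actual data, model, or hyperparameters.

```python
# Requires: pip install transformers torch
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Multilingual BERT as a stand-in; the study would use a checkpoint suited to learner Korean.
MODEL_NAME = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=1,                  # single continuous output = predicted essay score
    problem_type="regression",     # use MSE loss instead of cross-entropy
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

essay = "저는 주말에 친구와 같이 영화를 봤습니다. ..."   # learner essay (placeholder)
gold_score = torch.tensor([[72.0]])                      # human-assigned score (placeholder)

inputs = tokenizer(essay, return_tensors="pt", truncation=True, max_length=512)
outputs = model(**inputs, labels=gold_score)             # loss computed internally (MSE)

outputs.loss.backward()                                  # one fine-tuning step
optimizer.step()
optimizer.zero_grad()

print(f"predicted score: {outputs.logits.item():.1f}, loss: {outputs.loss.item():.3f}")
```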