• Title/Summary/Keyword: language performance


A Study on Finger Language Translation System using Machine Learning and Leap Motion (머신러닝과 립 모션을 활용한 지화 번역 시스템 구현에 관한 연구)

  • Son, Da Eun;Go, Hyeong Min;Shin, Haeng yong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.552-554
    • /
    • 2019
  • People who are deaf or have speech disorders communicate using sign language, because communicating by voice is difficult for them. However, sign language is only effective with people who know it, since most people do not use sign language when they communicate. In this paper, a finger language (finger-spelling) translation system is proposed and implemented as a means for disabled and non-disabled people to communicate without difficulty. The proposed algorithm recognizes finger-spelling data captured by a Leap Motion sensor and trains on the data using machine learning to increase the recognition rate. Simulation results show the resulting performance improvement.
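
The recognition step described above can be sketched as a simple classifier over hand-landmark feature vectors. The sketch below uses a k-nearest-neighbour classifier on synthetic data; the feature layout (5 fingertips × 3 coordinates, as a Leap Motion-style sensor might report) and the two gesture classes are invented for illustration, not taken from the paper.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Return the majority label among the k nearest training vectors."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

rng = np.random.default_rng(0)
# Two synthetic gesture classes, each a cluster of 15-dim feature vectors
# (5 fingertips x 3 coordinates per frame).
class_a = rng.normal(loc=0.0, scale=0.1, size=(20, 15))
class_b = rng.normal(loc=1.0, scale=0.1, size=(20, 15))
X = np.vstack([class_a, class_b])
y = np.array([0] * 20 + [1] * 20)

sample = rng.normal(loc=1.0, scale=0.1, size=15)  # resembles class 1
print(knn_predict(X, y, sample))
```

A production system would replace the synthetic clusters with labelled sensor frames and likely a stronger model, but the pipeline shape — feature vector in, letter label out — is the same.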

Spatial Big Data Query Processing System Supporting SQL-based Query Language in Hadoop (Hadoop에서 SQL 기반 질의언어를 지원하는 공간 빅데이터 질의처리 시스템)

  • Joo, In-Hak
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.10 no.1
    • /
    • pp.1-8
    • /
    • 2017
  • In this paper we present a spatial big data query processing system that can store spatial data in Hadoop and query it with an SQL-based query language. The system stores large-scale spatial data in HDFS-based storage and supports spatial queries expressed in an SQL-based query language extended for spatial data processing. The query language supports the standard spatial data types and functions defined in the OGC simple feature model. This paper presents the development of the system's core functions, including query language parsing, query validation, query planning, and connection with the storage system. We compare the performance of the proposed system with an existing system; our experiments show about a 58% improvement in query execution time over the existing system when executing a region query on spatial data stored in Hadoop.
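
A region query of the kind benchmarked above selects the objects whose geometry falls inside a query window. The SQL string below is illustrative OGC-style syntax (not the paper's exact query language), and the Python function is a naive single-machine equivalent for point data, to make the semantics concrete.

```python
# Illustrative OGC-style spatial SQL; table and column names are invented.
ILLUSTRATIVE_QUERY = """
SELECT id FROM poi
WHERE ST_Within(geom, ST_GeomFromText('POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))'));
"""

def region_query(points, xmin, ymin, xmax, ymax):
    """Return ids of points inside the axis-aligned query rectangle."""
    return [pid for pid, (x, y) in points.items()
            if xmin <= x <= xmax and ymin <= y <= ymax]

poi = {"a": (1.0, 2.0), "b": (11.0, 3.0), "c": (5.0, 9.5)}
print(sorted(region_query(poi, 0, 0, 10, 10)))  # → ['a', 'c']
```

The distributed system does the same filtering, but pushes it down over partitioned HDFS storage instead of a Python dictionary.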

Natural Language based Video Retrieval System with Event Analysis of Multi-camera Image Sequence in Office Environment (사무실 환경 내 다중카메라 영상의 이벤트분석을 통한 자연어 기반 동영상 검색시스템)

  • Lim, Soo-Jung;Hong, Jin-Hyuk;Cho, Sung-Bae
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.384-389
    • /
    • 2008
  • Recently, the need for systems that effectively store and retrieve video data has increased. Conventional video retrieval systems retrieve data using menus or text-based keywords. Because such queries carry little information, many video clips are returned at once, and the user must have a certain level of knowledge to use the system. In this paper, we propose a natural language based conversational video retrieval system that reflects users' intentions and carries more information than keyword-based queries. The system can also retrieve events and people together with their movements. First, an event database is constructed from metadata generated by domain analysis of video collected in an office environment. Then, a script database is constructed based on query pre-processing and analysis. On this basis, a method to retrieve video by matching natural language queries against answers is proposed and validated through performance and process evaluation with 10 users. The natural language based retrieval system showed better efficiency in performance and user satisfaction than a menu-based retrieval system.
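
The query-to-event matching step can be illustrated with a minimal sketch. The event records and the keyword-overlap scoring below are invented for illustration; the paper's system uses richer query analysis and a script database rather than bare word overlap.

```python
# Toy event database, standing in for metadata extracted from office video.
events = [
    {"id": 1, "desc": "person enters office through front door"},
    {"id": 2, "desc": "person sits at desk and uses computer"},
    {"id": 3, "desc": "two people talk near the printer"},
]

def retrieve(query, events):
    """Rank events by word overlap with the query; drop zero-score events."""
    q = set(query.lower().split())
    scored = [(len(q & set(e["desc"].split())), e["id"]) for e in events]
    scored.sort(reverse=True)
    return [eid for score, eid in scored if score > 0]

print(retrieve("who enters the office", events))  # → [1, 3]
```

A conversational system would add query analysis (time, actor, action slots) on top of this retrieval core so that follow-up questions can refine the result set.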


Early Linguistic Developments of Simultaneous Bilateral Cochlear Implantees (양이 동시 인공와우 사용자의 조기 언어발달)

  • Suh, Michelle J.;Lee, Hyun-Jin;Choi, Hyun Seung
    • Korean Journal of Otorhinolaryngology-Head and Neck Surgery
    • /
    • v.61 no.12
    • /
    • pp.650-657
    • /
    • 2018
  • Background and Objectives The present study aimed to compare receptive and expressive language development in children who have undergone simultaneous bilateral cochlear implantation (SCI) and those who have undergone bimodal stimulation (unilateral CI + hearing aid). Subjects and Method In a retrospective analysis of clinical data, 15 pediatric patients who had received SCI and nine patients who had received bimodal stimulation (BM group) were enrolled. CI was performed for all patients at 24 months of age. Category of Auditory Performance (CAP) scores, Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) scores, and developmental quotients (DQ) for expressive and receptive language were compared between the groups at 12 months of follow-up. The Percentage of Consonants Correct (PCC) of children evaluated at 4 years old was also compared. Results At 12 months of follow-up, significantly greater improvements in CAP scores (Δ4.25±0.5) were noted in the SCI group compared to the BM group (Δ3.56±0.88, p=0.041). Significantly greater improvements in IT-MAIS scores were also noted in the SCI group (Δ36.17±4.09) than in the BM group (Δ30.17±2.91, p=0.004). The DQ of receptive language was higher in the SCI group than in the BM group (87.6±15.4% vs. 75.5±12.0%, p=0.023) at 12 months of follow-up. Moreover, earlier SCI was associated with better receptive language skills. The PCC index of children at 4 years old was higher in the SCI group than in the BM group (88.5±13.2% vs. 62±15.8%, p=0.014). Earlier SCI was associated with even greater improvements. Conclusion Bilateral SCI is associated with significant improvements in language development when compared with bimodal stimulation, and earlier SCI was associated with better outcomes.

A Survey on Open Source based Large Language Models (오픈 소스 기반의 거대 언어 모델 연구 동향: 서베이)

  • Ha-Young Joo;Hyeontaek Oh;Jinhong Yang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.4
    • /
    • pp.193-202
    • /
    • 2023
  • In recent years, the outstanding performance of large language models (LLMs) trained on extensive datasets has become a hot topic. Since many LLMs have been released as open source, the ecosystem is expanding rapidly. Task-specific, lightweight, and high-performing models are being actively disseminated by applying additional training techniques to pre-trained LLMs used as foundation models. On the other hand, the performance of LLMs for Korean is subpar because English comprises a significant proportion of the training data of existing LLMs. Therefore, research is being carried out on Korean-specific LLMs that allow further training with Korean language data. This paper identifies trends in open source based LLMs, introduces research on Korean-specific large language models, and describes the applications and limitations of large language models.

The Effects of the Provision of Topical Knowledge on EFL Learners' Listening Performance

  • Huh, Jin-Hee
    • English Language & Literature Teaching
    • /
    • v.17 no.1
    • /
    • pp.1-16
    • /
    • 2011
  • Listening has been a neglected research area despite the crucial role it plays. The present investigation examined how the provision of topical knowledge and learners' listening proficiency level affect listening performance under four different preparatory activity conditions: topical knowledge, vocabulary list, language structure, and no activity. A total of 134 participants, assigned to the four activity groups, took part in the study. The results revealed that the learners who were provided with topical knowledge before listening performed significantly better than the other learners, followed by the vocabulary list group and the language structure group, which might be attributed to the activation of their content schemata. The learners who did not perform any preparatory activity achieved the lowest scores. As for the impact of listening proficiency, learners' proficiency level had a significant influence on their listening performance, and there was a significant interaction between proficiency level and preparatory activity. Providing relevant knowledge was effective for both higher-level and lower-level learners, whereas teaching vocabulary before listening was effective for higher-level learners but not for lower-level ones. Based on the results, pedagogical implications and suggestions for future research are discussed.


The Role and Importance of Gesture in Science Exploration (과학 탐구에서 몸짓의 역할과 중요성)

  • Han Jae young;Choi Jung hoon;Shin Young Joo;Son Jeong woo;Cha Jeong Ho;Hong Jun Euy
    • Journal of Korean Elementary Science Education
    • /
    • v.25 no.1
    • /
    • pp.51-58
    • /
    • 2006
  • The language and the gestures of a teacher generally have a great influence on the effect of a lesson, because subject content is conveyed to students through them. In science lessons that focus on experiments, the language and gestures of both students and teachers support the learning of scientific content. However, the role of gestures, despite its importance, has rarely been investigated in science education research, and the gestures of students and teachers remain a much-needed area of study. This study investigated the gestures observed in the experimental process performed by students who participated in a science exploration activity. Students' gestures played an essential role in the successful performance of the experiment, and they could function as a means of resolving contradictory situations. In addition, the demonstration and communication of gestures should be performed very cautiously. The findings carry a number of implications for the long-standing problem of the relation between the understanding of science concepts and the performance of experiments.


Static Dalvik Bytecode Optimization for Android Applications

  • Kim, Jeehong;Kim, Inhyeok;Min, Changwoo;Jun, Hyung Kook;Lee, Soo Hyung;Kim, Won-Tae;Eom, Young Ik
    • ETRI Journal
    • /
    • v.37 no.5
    • /
    • pp.1001-1011
    • /
    • 2015
  • Since just-in-time (JIT) compilation has considerable overhead to detect hot spots and compile them at runtime, using sophisticated optimization techniques on embedded devices means that any resulting performance improvements will be limited. In this paper, we introduce a novel static Dalvik bytecode optimization framework, as a complementary compilation path for the Dalvik virtual machine, to improve the performance of Android applications. Our system generates optimized Dalvik bytecode using the Low Level Virtual Machine (LLVM). A major obstacle in using LLVM for optimizing Dalvik bytecode is determining how to represent the high-level language features of Dalvik bytecode in LLVM IR and how to optimize LLVM IR in a way that conforms to that language information. To this end, we annotate the high-level language features of Dalvik bytecode onto LLVM IR and successfully optimize Dalvik bytecode through instruction selection. Our experimental results show that our system combined with JIT improves the performance of Android applications by up to 6.08 times, and surpasses JIT alone by up to 4.34 times.
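
To make the idea of ahead-of-time bytecode optimization concrete, here is a hedged sketch of one classic static pass, constant folding, over a toy stack bytecode. The instruction set is invented for illustration; the actual framework lowers Dalvik bytecode to LLVM IR and reuses LLVM's optimization passes rather than rewriting bytecode directly.

```python
def fold_constants(code):
    """Replace CONST/CONST/ADD triples with a single folded CONST.

    `code` is a list of toy instructions such as ("CONST", 2) or ("ADD",).
    The pass runs before execution, so the addition never happens at runtime.
    """
    out = []
    for op in code:
        if (op[0] == "ADD" and len(out) >= 2
                and out[-1][0] == "CONST" and out[-2][0] == "CONST"):
            b = out.pop()[1]
            a = out.pop()[1]
            out.append(("CONST", a + b))  # fold the two constants statically
        else:
            out.append(op)
    return out

program = [("CONST", 2), ("CONST", 3), ("ADD",), ("RETURN",)]
print(fold_constants(program))  # → [('CONST', 5), ('RETURN',)]
```

The payoff mirrors the paper's motivation: work done once at build time is work a resource-constrained device never pays for at runtime.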

A Basic Study on the Development of a Grading Scale of Discourse Competence in Korean Speaking Assessment -Focusing on the Scale of 'REFUSAL' Task (한국어 말하기 평가에서 '담화 능력' 등급 기술을 위한 기초 연구 -'부탁'에 대한 '거절하기' 과제를 중심으로-)

  • Lee, Haeyong;Lee, Hyang
    • Journal of Korean language education
    • /
    • v.29 no.3
    • /
    • pp.255-292
    • /
    • 2018
  • Most grading scales of Korean language proficiency tests are based on existing grading scales that have not been empirically verified. The purpose of this study is to develop an empirically verified scale descriptor. The 'performance data-driven approach' suggested by Fulcher (1987) was used to develop the detailed description of characteristics for each level of performance. This study focuses on analyzing the functional phases of speech samples (coding data) in order to create explanatory categories of discourse skills against which individual observations of speech phenomena can be scored. The speech samples collected through this study demonstrated stages of speech that can serve as the foundation of a grading scale. The data used in the study were collected from 23 native speakers of Korean. Speech samples were recorded from simulated speaking tests using the 'REFUSAL' task and transcribed for analysis. The transcripts were examined using discourse analysis. The results showed that the 'REFUSAL' task passes through four functional phases in actual communication. Furthermore, this study found specific and detailed explanatory categories of discourse competence based on actual native speaker speech data. These findings are expected to contribute to the development of more valid and reliable speaking assessment.

A Survey on Deep Learning-based Pre-Trained Language Models (딥러닝 기반 사전학습 언어모델에 대한 이해와 현황)

  • Sangun Park
    • The Journal of Bigdata
    • /
    • v.7 no.2
    • /
    • pp.11-29
    • /
    • 2022
  • Pre-trained language models are the most important and widely used tools in natural language processing tasks. Since they have been pre-trained on a large corpus, high performance can be expected even after fine-tuning with a small amount of data. Because the elements necessary for implementation, such as a pre-trained tokenizer and a deep learning model with pre-trained weights, are distributed together, the cost and time required for natural language processing have been greatly reduced. Transformer variants are the most representative pre-trained language models that provide these advantages, and they are being actively used in other fields such as computer vision and audio applications. To make it easier for researchers to understand pre-trained language models and apply them to natural language processing tasks, this paper defines the language model and the pre-trained language model, and discusses the development of pre-trained language models, in particular the representative Transformer variants.
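
The fine-tuning idea in the abstract above can be illustrated in miniature: keep a pre-trained encoder frozen and train only a small task head on its output features. In this sketch the "encoder" is a fixed random projection standing in for a real pre-trained language model, and the task, sizes, and data are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
W_frozen = rng.normal(size=(8, 4))     # stands in for pre-trained weights

def encode(x):
    """Frozen feature extractor (never updated during fine-tuning)."""
    return np.tanh(x @ W_frozen)

X = rng.normal(size=(100, 8))          # tiny synthetic dataset
y = (X[:, 0] > 0).astype(float)        # binary labels for the downstream task

def nll(p, y):
    """Negative log-likelihood of binary labels under predictions p."""
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

H = encode(X)                          # features computed once: encoder is frozen
w, b = np.zeros(4), 0.0                # trainable task head
initial_loss = nll(1 / (1 + np.exp(-(H @ w + b))), y)
for _ in range(500):                   # plain gradient descent on the head only
    p = 1 / (1 + np.exp(-(H @ w + b)))
    w -= 0.1 * (H.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)
final_loss = nll(1 / (1 + np.exp(-(H @ w + b))), y)
print(final_loss < initial_loss)       # the head learned; the encoder stayed fixed
```

Real fine-tuning usually updates (some of) the encoder weights too, but the economics the abstract describes come from the same split: the expensive pre-training is done once, and only a cheap task-specific stage runs per application.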