• Title/Summary/Keyword: 언어TEXT (language text)

Search results: 756

Study on Knowledge Augmented Prompting for Text to SPARQL (Text to SPARQL을 위한 지식 증강 프롬프팅 연구)

  • Yeonjin Lee;Jeongjae Nam;Wooyoung Kim;Wooju Kim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.185-189 / 2023
  • Text to SPARQL is a form of knowledge-graph-based question answering: the task of converting a natural-language question into a query over a knowledge graph. Because a SPARQL query must be written against the information actually in the knowledge graph, conventional code generation with existing language models does not work well. We therefore propose a method that augments the prompt with knowledge-graph information so that a large language model can solve Text to SPARQL. In addition, to examine the effect of using multilingual information, we cross-tested Korean and English labels. For Korean Text to SPARQL experiments we also translated QALD-10, a representative Text to SPARQL benchmark dataset, into Korean and released it. Using these data, we experimentally demonstrate the effectiveness of knowledge-augmented prompting (a sketch of such prompting follows this entry).

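The knowledge-augmentation idea in the entry above can be illustrated with a small sketch: candidate entity and relation labels retrieved from the knowledge graph are placed directly in the prompt ahead of the question, so the model grounds its query in real identifiers instead of guessing them. The function name, prompt layout, and Wikidata-style identifiers below are illustrative assumptions rather than the authors' actual prompts, and the LLM call itself is left abstract.

```python
# A minimal sketch of knowledge-augmented prompting for Text to SPARQL.
# The prompt layout and the Wikidata-style identifiers are assumptions,
# not the paper's implementation; the LLM call is left abstract.

def build_knowledge_augmented_prompt(question: str,
                                     entities: dict[str, str],
                                     relations: dict[str, str]) -> str:
    """Augment the prompt with knowledge-graph labels so the model can
    ground its SPARQL query in real URIs instead of inventing them."""
    entity_block = "\n".join(f"{label}: {uri}" for label, uri in entities.items())
    relation_block = "\n".join(f"{label}: {uri}" for label, uri in relations.items())
    return (
        "Write a SPARQL query for the question below.\n"
        "Use only the entities and relations listed.\n\n"
        f"Entities:\n{entity_block}\n\n"
        f"Relations:\n{relation_block}\n\n"
        f"Question: {question}\n"
        "SPARQL:"
    )

prompt = build_knowledge_augmented_prompt(
    "Who is the author of Le Petit Prince?",
    entities={"Le Petit Prince": "wd:Q25338"},
    relations={"author": "wdt:P50"},
)
print(prompt)  # this prompt would then be sent to a large language model
```

The model's completion would be parsed as a SPARQL query and run against the knowledge graph; the Korean/English cross-tests in the paper amount to changing which language's labels appear in the entity and relation blocks.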

Evaluation of Large Language Models' Korean-Text to SQL Capability (대형 언어 모델의 한국어 Text-to-SQL 변환 능력 평가)

  • Jooyoung Choi;Kyungkoo Min;Myoseop Sim;Haemin Jung;Minjun Park;Stanley Jungkyu Choi
    • Annual Conference on Human and Language Technology / 2023.10a / pp.171-176 / 2023
  • Natural-language generation models pretrained on large-scale data have recently shown impressive performance in dialogue and code generation tasks, so this paper evaluates how well large language models (LLMs) convert Korean questions into SQL queries (Text-to-SQL). We first built a Korean Text-to-SQL dataset by translating the English questions of an English Text-to-SQL benchmark into Korean. We evaluate large generative models (GPT-3 davinci, GPT-3 turbo) in a few-shot setting and confirm that, even without fine-tuning, these models deliver competitive Korean Text-to-SQL performance. We also conduct an error analysis of the problems that arise when converting Korean sentences into database queries and suggest possible remedies based on prompting techniques (a sketch of the few-shot setting follows this entry).

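A rough sketch of the few-shot setting described above: a table schema and a few (Korean question, SQL) demonstrations are concatenated ahead of the new question, and the model is asked to complete the query. The schema, the demonstrations, and the prompt layout are hypothetical; the paper's exact prompts and the GPT-3 API call are not reproduced here.

```python
# A minimal sketch of few-shot prompting for Korean Text-to-SQL.
# Schema, demonstrations, and prompt layout are illustrative assumptions.

FEW_SHOT_EXAMPLES = [
    ("가수는 모두 몇 명인가요?",                    # "How many singers are there?"
     "SELECT count(*) FROM singer"),
    ("나이가 많은 순서대로 가수의 이름을 보여줘.",  # "List singer names from oldest to youngest."
     "SELECT name FROM singer ORDER BY age DESC"),
]

def build_few_shot_prompt(schema: str, question: str) -> str:
    """Concatenate the table schema, a few (question, SQL) demonstrations,
    and the new Korean question into one completion-style prompt."""
    demos = "\n".join(f"-- 질문: {q}\n{sql};" for q, sql in FEW_SHOT_EXAMPLES)
    return (
        f"-- 스키마: {schema}\n"   # schema description
        f"{demos}\n"
        f"-- 질문: {question}\n"   # the new question
        "SELECT"                   # the model continues from here
    )

prompt = build_few_shot_prompt(
    schema="singer(singer_id, name, age, country)",
    question="프랑스 출신 가수의 평균 나이는 얼마인가요?",  # "What is the average age of French singers?"
)
print(prompt)  # send to an LLM; prepend "SELECT" to its completion to get the query
```

The completed queries can then be compared against gold SQL for accuracy measurement and error analysis.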

The Effects of Paralanguage Utilization Training for Audiobook Text Shaping - Professor's Friendly Behavior as a Parameters - (유사언어 활용 훈련이 오디오북 텍스트 형상화에 미치는 영향 연구 - 교수자의 우호적 행동을 매개변수로 -)

  • Cho, Ye-Shin
    • Journal of Korea Entertainment Industry Association / v.14 no.2 / pp.141-153 / 2020
  • The purpose of this study is to examine the mediating role of the professor's friendly behavior in paralanguage utilization training, which uses pronunciation, stress, voice tone, speed, pauses, and the expression of feelings to shape audiobook texts. The results can serve as a reference for training in the use of paralanguage for the dynamic shaping of audiobook texts and for recognizing the necessity and influence of the professor's friendly behavior as a mediating variable. The results are as follows. First, training in the use of paralanguage had a positive effect on the shaping of audiobook texts and served as a key factor in conveying the original meaning of the text. Therefore, examining the significance and content of paralanguage training and continuing such training will genuinely help shape audiobook texts. Second, the professor's friendly behavior partially mediated the relationship between paralanguage training and audiobook text shaping. The professor's friendly behavior helped shape audiobook texts by providing a sense of trust and raises the level of completion of the training. Thus, paralanguage utilization training can lead to more effective audiobook text shaping when conducted in conjunction with the professor's friendly actions. In short, the ability to use paralanguage and the professor's caring, friendly behavior were most effective when they influenced audiobook text shaping simultaneously.

A Content Analysis of Journal Articles Using the Language Network Analysis Methods (언어 네트워크 분석 방법을 활용한 학술논문의 내용분석)

  • Lee, Soo-Sang
    • Journal of the Korean Society for information Management / v.31 no.4 / pp.49-68 / 2014
  • The purpose of this study is to analyze the content of Korean research articles that use the language network analysis method and to identify the basic characteristics of the method. Six analytical categories are used for the content analysis: the type of language text, the method of keyword selection, the method of forming co-occurrence relations, the method of constructing the network, the network analysis tools, and the analytic indexes. The content analysis revealed the following features. The major types of language text are research articles and interview transcripts. Keywords were selected from words extracted from the text content. Co-occurrence counts were used to form the co-occurrence relations between keywords. The constructed networks are mostly multiple-type networks rather than single-type ones. Network analysis tools such as NetMiner, UCINET/NetDraw, NodeXL, and Pajek were used. The major analytic indexes include density, centralities, and sub-networks. These features can form the basis of the language network analysis method.
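
The workflow summarized above, from keyword co-occurrence through network construction to density and centrality indexes, can be sketched in a few lines of Python. The toy documents are invented, and the networkx library stands in for tools such as NetMiner, UCINET/NetDraw, NodeXL, or Pajek purely for illustration.

```python
# A minimal sketch of a language (keyword co-occurrence) network analysis.
# The three toy "documents" below are invented for illustration.
from collections import Counter
from itertools import combinations

import networkx as nx

documents = [
    ["network", "analysis", "text"],
    ["network", "centrality", "text"],
    ["centrality", "density", "network"],
]

# Form co-occurrence relations: count keyword pairs appearing in the same document.
cooccurrence = Counter()
for keywords in documents:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

# Construct the (single-type, weighted) co-occurrence network.
G = nx.Graph()
for (a, b), weight in cooccurrence.items():
    G.add_edge(a, b, weight=weight)

# Analytic indexes mentioned in the article: density and centrality.
print("density:", nx.density(G))
print("degree centrality:", nx.degree_centrality(G))
```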

Benford's Law in Linguistic Texts: Its Principle and Applications (언어 텍스트에 나타나는 벤포드 법칙: 원리와 응용)

  • Hong, Jung-Ha
    • Language and Information / v.14 no.1 / pp.145-163 / 2010
  • This paper proposes that Benford's Law, the non-uniform distribution of leading digits in lists of numbers drawn from many real-life sources, also appears in linguistic texts. The first digits of the morpheme frequency lists from the Sejong Morphologically Analyzed Corpora follow the non-uniform distribution predicted by Benford's Law, while also showing the kind of complexity found in numerical data from complex systems such as earthquakes. Benford's Law in texts reflects the regular distribution of low-frequency linguistic types, known as LNRE (large number of rare events), and holds for texts, corpora, or sample texts relatively independently of text size and the number of types. Although texts share a similar distribution pattern under Benford's Law, the distribution varies slightly from text to text, which makes it useful for evaluating the randomness of text distributions with a focus on low-frequency types (a sketch of the leading-digit check follows this entry).

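The leading-digit comparison at the heart of the paper can be sketched directly: take a frequency list, read off the first digit of each frequency, and compare the observed proportions with Benford's expected proportion log10(1 + 1/d) for digit d. The frequency list below is invented; the Sejong corpora themselves are not used here.

```python
# A minimal sketch of a Benford's-Law check on a (made-up) frequency list.
import math
from collections import Counter

frequencies = [3021, 1984, 1502, 977, 804, 512, 343, 210, 198, 120, 97, 88, 43, 21, 12, 9]

# Observed share of each leading digit 1-9.
leading = Counter(int(str(f)[0]) for f in frequencies)
total = sum(leading.values())

for d in range(1, 10):
    observed = leading[d] / total
    expected = math.log10(1 + 1 / d)  # Benford's expected proportion
    print(f"digit {d}: observed {observed:.3f}, expected {expected:.3f}")
```

On a real morpheme frequency list, the gap between the observed and expected columns is the kind of text-to-text variation the abstract refers to.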

Comparison Between Optimal Features of Korean and Chinese for Text Classification (한중 자동 문서분류를 위한 최적 자질어 비교)

  • Ren, Mei-Ying;Kang, Sinjae
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.4 / pp.386-391 / 2015
  • This paper proposes optimal features for text classification based on the linguistic characteristics of Korean and Chinese. The experiments were designed to discover which feature performs best among n-grams, which are language independent, morphemes, which are language dependent, and combined feature sets consisting of n-grams and morphemes. We used an SVM classifier and Internet news articles for text classification. As a result, bi-grams were the best feature for Korean text categorization, with the highest F1-measure of 87.07%, while for Chinese document classification the combined feature set 'uni-gram+noun+verb+adjective+idiom' showed the best performance, with the highest F1-measure of 82.79%.
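
A minimal sketch of the bi-gram + SVM setting reported above, using scikit-learn. Character (syllable) bi-grams, TF-IDF weighting, and a linear SVM are assumptions about the configuration, and the four-document toy corpus is invented; the paper's Internet-news data and exact settings are not reproduced.

```python
# A minimal sketch: character bi-gram features + SVM for Korean text classification.
# The toy corpus, TF-IDF weighting, and linear kernel are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "경제 성장률이 상승했다",          # "The economic growth rate rose."
    "축구 대표팀이 승리했다",          # "The national football team won."
    "물가 상승 압력이 커졌다",         # "Inflationary pressure increased."
    "야구 경기가 연장전에 돌입했다",   # "The baseball game went into extra innings."
]
train_labels = ["economy", "sports", "economy", "sports"]

# Bi-grams over characters: a language-independent feature, as in the n-gram setting above.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 2)),
    LinearSVC(),
)
model.fit(train_texts, train_labels)
print(model.predict(["수출 증가로 경제가 회복세를 보였다"]))  # likely ['economy'] on this toy data
```

The Chinese setting differs only in the feature set: the combined 'uni-gram+noun+verb+adjective+idiom' features would require a Chinese morphological analyzer in place of the character n-grams.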

The Frequency Analysis of Teacher's Emotional Response in Mathematics Class (수학 담화에서 나타나는 교사의 감성적 언어 빈도 분석)

  • Son, Bok Eun;Ko, Ho Kyoung
    • Communications of Mathematical Education / v.32 no.4 / pp.555-573 / 2018
  • The purpose of this study is to identify the emotional language of mathematics teachers in mathematics classes using text mining techniques. For this purpose, we collected teachers' classroom discourse from videos of exemplary lessons. The analysis of the extracted unstructured data proceeded in three stages: data collection, data preprocessing, and text mining analysis. According to the text mining analysis, there was little emotional language in teachers' responses in mathematics classes. From this result, the characteristics of mathematics classes can be inferred with respect to the affective domain.
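
The three-stage pipeline described above (data collection, preprocessing, text mining) can be sketched as a simple emotion-word count over a class transcript. The tiny emotion lexicon and the sample utterances are invented; the paper's actual lexicon, morphological preprocessing, and lesson videos are not reproduced.

```python
# A minimal sketch: counting emotional language in a (made-up) class transcript.
import re
from collections import Counter

transcript = [
    "이 문제를 다시 한 번 풀어 봅시다.",  # "Let's solve this problem once more."
    "잘했어요, 정말 멋진 풀이네요.",      # "Well done, that is a really nice solution."
    "다음 단계로 넘어가겠습니다.",        # "Let's move on to the next step."
]

EMOTION_WORDS = {"잘했어요", "멋진", "훌륭해요", "고마워요"}  # hypothetical emotion lexicon

# Preprocessing: whitespace tokenization and punctuation stripping.
tokens = [re.sub(r"[^\w가-힣]", "", tok) for line in transcript for tok in line.split()]
counts = Counter(tok for tok in tokens if tok in EMOTION_WORDS)

print(f"emotional tokens: {sum(counts.values())} / {len(tokens)} total")
print(counts)
```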

Tool Utilization Strategy for Using Block Programming Language as a Preceding Organizer for Text Programming Language Learning (텍스트 프로그래밍 언어 학습을 위한 블록 프로그래밍 언어를 선행조직자로 활용할 수 있는 도구 활용 전략)

  • Go, HakNeung;Lee, Youngjun
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.395-396 / 2022
  • This paper studies a tool-utilization strategy for learning a text programming language with a block programming language as an advance organizer. Python was chosen as the text programming language, Entry as the block programming language, and Jupyter Notebook as the tool. In this strategy, the IPython.display.IFrame class is used in a code cell to load the Entry workspace into the output area and present it as an advance organizer; after learning with Entry, the learner then studies Python in a code cell. By presenting the block programming language as an advance organizer in Jupyter Notebook before the text programming language, we expect the cognitive load of learning the text programming language to decrease and positive transfer to occur, leading to more effective learning (a sketch of the notebook setup follows this entry).

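The tool strategy above hinges on IPython.display.IFrame, which Jupyter provides for embedding a web page in a cell's output area. A minimal sketch is shown below; the Entry workspace URL and the follow-up Python cell are assumptions standing in for the materials actually used in class.

```python
# Cell 1 - advance organizer: embed the Entry block-coding workspace in the output area.
# The URL is an assumption; substitute the Entry workspace address used in class.
from IPython.display import IFrame

IFrame(src="https://playentry.org/ws", width=900, height=600)
```

```python
# Cell 2 - after working with the blocks, the learner writes the same idea in Python.
for i in range(1, 6):
    print("반복", i)  # e.g., mirroring an Entry "repeat" block
```

Keeping the block environment and the equivalent Python in adjacent cells of the same notebook is what lets the block language act as an advance organizer for the text language.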