• Title/Summary/Keyword: NLP

A Study on the Construction of Financial-Specific Language Model Applicable to the Financial Institutions (금융권에 적용 가능한 금융특화언어모델 구축방안에 관한 연구)

  • Jae Kwon Bae
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.3
    • /
    • pp.79-87
    • /
    • 2024
  • Recently, the importance of pre-trained language models (PLMs) has been emphasized for natural language processing (NLP) tasks such as text classification, sentiment analysis, and question answering. Korean PLMs show high performance on general-purpose NLP tasks but are weak in specialized domains such as finance, medicine, and law. The main goal of this study is to propose a language model training process and method for building a financial-specific language model that performs well not only in the financial domain but also in general-purpose domains. The five steps of building the financial-specific language model are (1) financial data collection and preprocessing, (2) selection of a model architecture such as a PLM or foundation model, (3) domain data learning and instruction tuning, (4) model verification and evaluation, and (5) model deployment and utilization. Through this process, the study presents a method for constructing pretraining data that exploits the characteristics of the financial domain, along with efficient LLM training methods, namely adaptive learning and instruction tuning techniques.
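
As a hedged illustration of step (3), the sketch below runs domain-adaptive pretraining of a causal PLM on a financial text corpus with the Hugging Face Trainer API. The model name, corpus file, and hyperparameters are placeholder assumptions, not the paper's choices.

```python
# Minimal sketch of step (3): continued pretraining of a general-purpose
# Korean PLM on financial text (Hugging Face API; model and file names
# are hypothetical placeholders).
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

model_name = "skt/kogpt2-base-v2"           # assumed base PLM, for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token   # GPT-style models lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Step (1) output: financial data already collected and preprocessed
# into a plain-text file (hypothetical path).
corpus = load_dataset("text", data_files={"train": "financial_corpus.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

# Causal-LM objective adapts the model to the financial domain; instruction
# tuning would follow the same pattern with instruction-response pairs.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fin-plm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```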

The Influence and Impact of syntactic-grammatical knowledge on the Phonetic Outputs of a 'Reading Machine' (통사문법적 지식이 '독서기계'의 음성출력에 미치는 영향과 중요성)

  • Hong, Sungshim
    • The Journal of the Convergence on Culture Technology
    • /
    • v.6 no.4
    • /
    • pp.225-230
    • /
    • 2020
  • This paper highlights the influence and importance of syntactic-grammatical knowledge on "the reading machine" that appears in Jackendoff (1999). Because his research lacked detailed testing and implementation, this paper tests an extensive array of data using a component of Google Translate, currently the most widely and freely available translation service on the internet. Although dated, Jackendoff's paper, "Why can't Computers use English?", argues that syntactic-grammatical knowledge plays a key role in the outputs of computers and computer-based reading machines. The current research implements tests of his thought-provoking examples in order to find out whether Google Translate can handle the same problems some two decades later. As a result, it is argued that in the field of NLP, I-language in the sense of Chomsky (1986, 1995, etc.) is real, and that syntactic, grammatical, and categorial knowledge is essential to the faculty of language. This paper therefore reaffirms that, when it comes to human language, even the most advanced "machine" is still no match for the human faculty of language: syntactic-grammatical knowledge.

Consolidation of Subtasks for Target Task in Pipelined NLP Model

  • Son, Jeong-Woo;Yoon, Heegeun;Park, Seong-Bae;Cho, Keeseong;Ryu, Won
    • ETRI Journal
    • /
    • v.36 no.5
    • /
    • pp.704-713
    • /
    • 2014
  • Most natural language processing tasks depend on the outputs of other tasks and thus involve those tasks as subtasks. The main problem with this type of pipelined model is that the optimality of the subtasks, which are trained on their own data, is not guaranteed for the final target task, since the subtasks are not optimized with respect to the target task. As a solution to this problem, this paper proposes a consolidation of subtasks for a target task (CST²). In CST², all parameters of a target task and its subtasks are optimized to fulfill the objective of the target task. CST² finds such optimized parameters through a backpropagation algorithm. In experiments in which text chunking is the target task and part-of-speech tagging is its subtask, CST² outperforms a traditional pipelined text chunker. The experimental results prove the effectiveness of optimizing subtasks with respect to the target task.
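
The consolidation idea can be pictured as one network in which the target task consumes the subtask's predictions and a single backward pass updates both. The PyTorch sketch below is a hedged reconstruction under assumed layer sizes and loss weighting; it is not the paper's architecture.

```python
# Hedged PyTorch sketch of CST²-style joint optimization: the chunking
# target consumes the POS subtask's output, and one backward pass
# optimizes both sets of parameters for the target objective.
import torch
import torch.nn as nn

class JointTagger(nn.Module):
    def __init__(self, vocab=10000, emb=100, hid=128, n_pos=45, n_chunk=23):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.pos_head = nn.Linear(hid, n_pos)              # subtask: POS tagging
        self.chunk_head = nn.Linear(hid + n_pos, n_chunk)  # target: chunking

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))
        pos_logits = self.pos_head(h)
        # The target sees the subtask's output, as in a pipeline, but
        # gradients flow back through pos_logits into the subtask.
        feats = torch.cat([h, pos_logits.softmax(-1)], dim=-1)
        return pos_logits, self.chunk_head(feats)

model = JointTagger()
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 10000, (2, 12))      # toy batch of 2 sentences
pos_gold = torch.randint(0, 45, (2, 12))
chunk_gold = torch.randint(0, 23, (2, 12))

pos_logits, chunk_logits = model(tokens)
loss = (loss_fn(chunk_logits.flatten(0, 1), chunk_gold.flatten())
        + 0.5 * loss_fn(pos_logits.flatten(0, 1), pos_gold.flatten()))
loss.backward()    # one backward pass reaches both tasks' parameters
opt.step()
```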

A Collaborative Framework for Discovering the Organizational Structure of Social Networks Using NER Based on NLP (NLP기반 NER을 이용해 소셜 네트워크의 조직 구조 탐색을 위한 협력 프레임 워크)

  • Elijorde, Frank I.;Yang, Hyun-Ho;Lee, Jae-Wan
    • Journal of Internet Computing and Services
    • /
    • v.13 no.2
    • /
    • pp.99-108
    • /
    • 2012
  • Many methods have been developed to improve the accuracy of extracting information from vast amounts of data. This paper combines a number of natural language processing methods, such as NER (named entity recognition), sentence extraction, and part-of-speech tagging, to carry out text analysis. The data source comprises texts obtained from the web using a domain-specific data extraction agent. A framework for extracting information from unstructured data was developed using the aforementioned natural language processing methods. We simulated the performance of our work in the extraction and analysis of texts for the detection of organizational structures. Simulations show that our approach outperformed other NER classifiers, such as those of MUC and CoNLL, on information extraction.
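
The paper's own components and data-extraction agent are not public, so the sketch below uses NLTK as a stand-in to show the kind of pipeline it combines: sentence extraction, part-of-speech tagging, then NER over the tagged tokens. Resource names follow classic NLTK; newer releases may require the `*_tab`/`*_eng` variants.

```python
# Hedged NLTK stand-in for the framework's pipeline: sentence extraction,
# POS tagging, and named entity recognition over web-derived text.
import nltk

for pkg in ("punkt", "averaged_perceptron_tagger", "maxent_ne_chunker", "words"):
    nltk.download(pkg, quiet=True)

text = ("Alice Smith chairs the research division of Acme Corporation. "
        "Bob Jones reports to her from the Seoul office.")

entities = []
for sent in nltk.sent_tokenize(text):                # sentence extraction
    tagged = nltk.pos_tag(nltk.word_tokenize(sent))  # part-of-speech tagging
    for chunk in nltk.ne_chunk(tagged):              # named entity recognition
        if hasattr(chunk, "label"):                  # Tree nodes are entities
            name = " ".join(tok for tok, _ in chunk.leaves())
            entities.append((name, chunk.label()))

# (name, type) pairs like these would feed the organizational-structure step.
print(entities)
```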

A Distance Approach for Open Information Extraction Based on Word Vector

  • Liu, Peiqian;Wang, Xiaojie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.6
    • /
    • pp.2470-2491
    • /
    • 2018
  • Web-scale open information extraction (Open IE) plays an important role in NLP tasks such as acquiring common-sense knowledge, learning selectional preferences, and automatic text understanding. A large number of Open IE approaches have been proposed in the last decade, and the majority of them are based on supervised learning or dependency parsing. In this paper, we present a novel method for web-scale open information extraction that employs cosine distance over Google word vectors as the confidence score of an extraction. The proposed method is a purely unsupervised learning algorithm that requires no hand-labeled training data or dependency-parse features. We also present a mathematically rigorous proof for the new method using Bayesian inference and artificial neural network theory. It turns out that the proposed algorithm is equivalent to maximum likelihood estimation of the joint probability distribution over the elements of a candidate extraction. The proof itself also theoretically suggests a typical usage of word vectors for other NLP tasks. Experiments show that the distance-based method leads to further improvements over recently presented Open IE systems on three benchmark datasets, in terms of both effectiveness and efficiency.
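
The core scoring idea is simple enough to show directly: score a candidate (arg1, relation, arg2) triple by the cosine similarity of its elements' word vectors. The toy vectors and the min-aggregation below are illustrative assumptions; the paper uses Google word2vec embeddings.

```python
# Sketch of confidence scoring for an Open IE triple via cosine similarity
# of word vectors (toy 3-d vectors; real use would load word2vec embeddings,
# e.g. with gensim's KeyedVectors).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings for the elements of ("Einstein", "born in", "Ulm").
vec = {
    "arg1": np.array([0.9, 0.1, 0.3]),
    "rel":  np.array([0.7, 0.2, 0.4]),
    "arg2": np.array([0.8, 0.1, 0.5]),
}

# Assumed aggregation: the weakest pairwise link bounds the confidence.
confidence = min(cosine(vec["arg1"], vec["rel"]), cosine(vec["rel"], vec["arg2"]))
print(f"extraction confidence: {confidence:.3f}")
```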

YDK : A Thesaurus Developing System for Korean Language (한국어 통합정보사전 시스템)

  • Hwang, Do-Sam;Choi, Key-Sun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.9
    • /
    • pp.2885-2893
    • /
    • 2000
  • Dictionaries are indispensable for NLP (natural language processing) systems. Sophisticated algorithms in NLP systems can be fully exploited only with matching dictionaries that are built systematically on the basis of computational linguistics. Only a few dictionaries have been developed for natural language processing, and the available ones fall far short of the complete specifications required for practical use. It is therefore necessary to develop an integrated information dictionary that includes the lexical information needed for processing and understanding natural language, such as morphological, syntactic, and semantic information. In this paper, we propose a method for building such an integrated dictionary and introduce a dictionary development system.
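
To make "integrated information dictionary" concrete, the sketch below bundles morphological, syntactic, and semantic information into one entry record. The field names are illustrative guesses, not YDK's actual schema.

```python
# Hedged sketch of an integrated dictionary entry: one record carrying
# morphology, syntax, semantics, and thesaurus links (field names assumed).
from dataclasses import dataclass, field

@dataclass
class DictEntry:
    lemma: str
    pos: str                                          # morphology: part of speech
    subcat: list[str] = field(default_factory=list)   # syntax: subcategorization frames
    senses: list[str] = field(default_factory=list)   # semantics: sense glosses
    related: list[str] = field(default_factory=list)  # thesaurus relations

entry = DictEntry(
    lemma="먹다", pos="verb",
    subcat=["NP-가 NP-를 먹다"],
    senses=["to eat", "to consume"],
    related=["마시다", "삼키다"],
)
print(entry.lemma, entry.senses)
```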

Attention-based Unsupervised Style Transfer by Noising Input Sentences (입력 문장 Noising과 Attention 기반 비교사 한국어 문체 변환)

  • Noh, Hyungjong;Lee, Yeonsoo
    • Annual Conference on Human and Language Technology
    • /
    • 2018.10a
    • /
    • pp.434-439
    • /
    • 2018
  • One of the greatest difficulties in training a style transfer system is the scarcity of parallel corpora. Many recent studies have attempted to solve the style transfer problem with large non-parallel corpora alone, but it remains hard to achieve both content preservation of the source sentence and style transfer at the same time. Preserving content while transferring style is especially difficult given the nature of unsupervised learning. With an attention-based seq2seq network, the source content can be preserved so strongly that style transfer ability suffers, and the OOV (out-of-vocabulary) problem also arises. This paper proposes a method that uses an attention-based seq2seq network to keep word-unit (eojeol) content preservation as high as possible while injecting noise into the input sentence effectively, thereby preventing the excessive content preservation that hampers style transfer, helping style-bearing word units get properly transformed, and also mitigating the OOV problem. Comparative experiments confirm that the proposed methods improve on state-of-the-art systems for English as well as Korean sentences.
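
The noising step can be sketched on its own: perturb the source sentence at the eojeol (word-unit) level so the attention model cannot simply copy its input. The noise types and rates below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of input noising for unsupervised style transfer: randomly
# drop or mask eojeol so an attention-based seq2seq model cannot over-copy.
import random

def noise_sentence(sentence, p_drop=0.1, p_mask=0.1, mask_token="<MASK>"):
    noised = []
    for eojeol in sentence.split():
        r = random.random()
        if r < p_drop:
            continue                   # drop the unit entirely
        elif r < p_drop + p_mask:
            noised.append(mask_token)  # replace with a mask token
        else:
            noised.append(eojeol)
    return " ".join(noised)

random.seed(0)
print(noise_sentence("오늘 날씨가 정말 좋다"))
```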

Understanding recurrent neural network for texts using English-Korean corpora

  • Lee, Hagyeong;Song, Jongwoo
    • Communications for Statistical Applications and Methods
    • /
    • v.27 no.3
    • /
    • pp.313-326
    • /
    • 2020
  • Deep learning is the most important key to the development of artificial intelligence (AI). There are several distinguishable neural network architectures, such as MLPs, CNNs, and RNNs. Among them, we try to understand one of the main architectures, the recurrent neural network (RNN), which differs from other networks in handling sequential data, including time series and texts. As one of the main recent tasks in natural language processing (NLP), we consider neural machine translation (NMT) using RNNs. We also summarize the fundamental structures of recurrent networks and some approaches to representing natural words as reasonable numeric vectors. We organize the topics needed to understand the estimation procedure, from representing input source sequences to predicting target translated sequences. In addition, we apply multiple translation models with gated recurrent units (GRUs) in Keras to English-Korean sentences containing about 26,000 pairwise sequences in total, drawn from two different corpora, colloquialism and news. We verified some crucial factors that influence the quality of training. We found that loss decreases with more recurrent dimensions and with a bidirectional RNN in the encoder when dealing with short sequences. We also computed BLEU scores, the main measure of translation performance, and compared them with the scores from Google Translate on the same test sentences. We sum up some difficulties in training a proper translation model as well as in dealing with the Korean language. Using Keras in Python for the overall task, from processing raw texts to evaluating the translation model, also lets us draw on useful functions and vocabulary libraries.
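
A minimal Keras model in the spirit of the paper's setup is sketched below: a bidirectional GRU encoder feeding a GRU decoder, trained with teacher forcing. Vocabulary sizes and dimensions are placeholders, not the values used on the 26,000-pair corpora.

```python
# Hedged Keras sketch of a GRU encoder-decoder for English-Korean NMT,
# with a bidirectional encoder (dimensions are illustrative placeholders).
from tensorflow import keras
from tensorflow.keras import layers

src_vocab, tgt_vocab, emb, hid = 8000, 8000, 128, 256

# Encoder: embed source tokens, run a bidirectional GRU, sum both directions.
enc_in = keras.Input(shape=(None,), name="source")
enc_emb = layers.Embedding(src_vocab, emb, mask_zero=True)(enc_in)
enc_state = layers.Bidirectional(layers.GRU(hid), merge_mode="sum")(enc_emb)

# Decoder: a GRU initialized with the encoder state predicts target tokens.
dec_in = keras.Input(shape=(None,), name="target")
dec_emb = layers.Embedding(tgt_vocab, emb, mask_zero=True)(dec_in)
dec_seq = layers.GRU(hid, return_sequences=True)(dec_emb, initial_state=enc_state)
logits = layers.Dense(tgt_vocab, activation="softmax")(dec_seq)

model = keras.Model([enc_in, dec_in], logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```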

Linguistic Analysis of Bumwoo KIM Chi Young's Cogitation on Mathematics (범우 김치영선생의 수학에 대한 사유의 언어적 분석)

  • Lee, Kang Sup;Lee, Hyun Soo
    • Communications of Mathematical Education
    • /
    • v.32 no.2
    • /
    • pp.207-223
    • /
    • 2018
  • In this study, we examined Bumwoo KIM Chi Young's cogitation on mathematics and analyzed three of his representative essays on mathematics using KoNLP. Approximately 80% of Bumwoo's sentences consist of fewer than 30 words, and his writing became clearer over the years, as verified by the decreasing mean and standard deviation of the number of words per sentence. Bumwoo emphasized structure in mathematics and was a strong advocate of the importance of axioms, topology, and categories as the characteristics of modern mathematics. In particular, it can be seen that the relations among 'mathematics', 'axiom', 'structure', 'Euclid', 'axiomatic system', and 'set' were his main topics.
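
The sentence-length side of the analysis is easy to reproduce in outline. The original study used KoNLP (an R package); the Python stand-in below computes the same statistics (mean, standard deviation, and the share of sentences under 30 words) with a deliberately naive sentence split.

```python
# Hedged Python stand-in for the study's sentence-length statistics
# (the original analysis used KoNLP in R; the splitting here is naive).
import statistics

def sentence_stats(text):
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]    # words per sentence
    short = sum(1 for n in lengths if n < 30) / len(lengths)
    return statistics.mean(lengths), statistics.stdev(lengths), short

text = "수학은 구조의 학문이다. 공리계는 현대 수학의 특징이다. 집합은 그 기초가 된다."
mean, sd, short_ratio = sentence_stats(text)
print(f"mean={mean:.1f} words, sd={sd:.1f}, under-30 ratio={short_ratio:.0%}")
```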