• Title/Summary/Keyword: Natural language processing (NLP)

OryzaGP: rice gene and protein dataset for named-entity recognition

  • Larmande, Pierre; Do, Huy; Wang, Yue
    • Genomics & Informatics, v.17 no.2, pp.17.1-17.3, 2019
  • Text mining has become an important research method in biology, its original purpose being to extract biological entities, such as genes, proteins, and phenotypic traits, to extend knowledge from scientific papers. However, few thorough studies on text mining and application development for plant molecular biology data have been performed, especially for rice, resulting in a lack of datasets available for named-entity recognition tasks for this species. Because benchmarks for rice are scarce, we faced various difficulties in exploiting advanced machine learning methods for accurate analysis of the rice literature. To evaluate several approaches to automatically extracting information about gene/protein entities, we built a new dataset for rice as a benchmark. This dataset is composed of titles and abstracts extracted from scientific papers focusing on the rice species and downloaded from PubMed. During the 5th Biomedical Linked Annotation Hackathon, a portion of the dataset was uploaded to PubAnnotation for sharing. Our ultimate goal is to offer a shared task on rice gene/protein name recognition through the BioNLP Open Shared Tasks framework using the dataset, to facilitate an open comparison and evaluation of different approaches to the task.
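
To make the dataset's use concrete, here is a minimal sketch of producing PubAnnotation-style JSON (a "text" field plus "denotations" with character spans) for gene mentions in an abstract. The gene names and the "Gene" label are illustrative assumptions, not the actual OryzaGP annotation scheme.

```python
import json
import re

# Hypothetical rice gene names; OryzaGP's actual vocabulary is much larger.
GENE_NAMES = ["OsMADS1", "Hd1", "Ehd1"]

def annotate(text: str) -> dict:
    """Emit a PubAnnotation-style record with gene/protein denotations."""
    denotations = []
    t = 1
    for name in GENE_NAMES:
        for m in re.finditer(re.escape(name), text):
            denotations.append({
                "id": f"T{t}",
                "span": {"begin": m.start(), "end": m.end()},
                "obj": "Gene",
            })
            t += 1
    return {"text": text, "denotations": denotations}

if __name__ == "__main__":
    abstract = "Hd1 and Ehd1 regulate flowering time in rice."
    print(json.dumps(annotate(abstract), indent=2, ensure_ascii=False))
```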

LitCovid-AGAC: cellular and molecular level annotation data set based on COVID-19

  • Ouyang, Sizhuo; Wang, Yuxing; Zhou, Kaiyin; Xia, Jingbo
    • Genomics & Informatics, v.19 no.3, pp.23.1-23.7, 2021
  • Currently, the coronavirus disease 2019 (COVID-19) literature is increasing dramatically, and the growing volume of text makes large-scale text mining and knowledge discovery possible. Curation of these texts has therefore become a crucial issue for the biomedical natural language processing (BioNLP) community in order to retrieve important information about the mechanism of COVID-19. PubAnnotation is an aligned annotation system that provides an efficient platform for biological curators to upload their annotations or merge external annotations. Motivated by the value of integrating multiple useful COVID-19 annotations, we merged three annotation resources into the LitCovid data set and constructed a cross-annotated corpus, LitCovid-AGAC. This corpus covers 50,018 COVID-19 abstracts in LitCovid and consists of 12 labels: Mutation, Species, Gene, and Disease from PubTator; GO and CHEBI from OGER; and Var, MPA, CPA, NegReg, PosReg, and Reg from AGAC. The corpus contains rich enough information to help unveil hidden knowledge about the pathological mechanism of COVID-19.
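
A minimal sketch of the kind of merge described above: denotations from several sources are combined into one cross-annotated record per document ID. The input structure and example spans are assumptions for illustration; the actual LitCovid-AGAC pipeline works with annotations hosted on PubAnnotation.

```python
from collections import defaultdict

# Hypothetical per-source annotations keyed by document ID.
sources = {
    "PubTator": {"doc001": [{"span": (10, 18), "obj": "Gene"}]},
    "OGER":     {"doc001": [{"span": (25, 31), "obj": "CHEBI"}]},
    "AGAC":     {"doc001": [{"span": (40, 46), "obj": "PosReg"}]},
}

# Merge every source's denotations into one record per document,
# tagging each denotation with where it came from.
merged = defaultdict(list)
for source_name, docs in sources.items():
    for doc_id, denotations in docs.items():
        for d in denotations:
            merged[doc_id].append({**d, "source": source_name})

print(dict(merged))
```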

Development and Evaluation of Information Extraction Module for Postal Address Information (우편주소정보 추출모듈 개발 및 평가)

  • Shin, Hyunkyung; Kim, Hyunseok
    • Journal of Creative Information Culture, v.5 no.2, pp.145-156, 2019
  • In this study, we developed and evaluated an information extraction module based on the named-entity recognition technique. The module was designed for the problem of extracting postal address information from arbitrary documents without any prior knowledge of the document layout. From the perspective of information processing practice, our approach can be described as a probabilistic n-gram (bi- or tri-gram) method, which generalizes uni-gram-based keyword matching. The main difference between our approach and conventional natural language processing methods is that sentence detection, tokenization, and POS tagging are applied recursively rather than sequentially. Test results on approximately two thousand documents are presented in this paper.
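
A minimal sketch of the probabilistic bi-gram idea, assuming hypothetical token classes (CITY, DISTRICT, ROAD, NUM) and hand-set transition probabilities; the actual module estimates such statistics from labelled documents and combines them with recursive sentence detection, tokenization, and POS tagging.

```python
import math

# P(next_class | current_class); illustrative values only.
bigram_prob = {
    ("CITY", "DISTRICT"): 0.8, ("DISTRICT", "ROAD"): 0.7,
    ("ROAD", "NUM"): 0.9, ("CITY", "ROAD"): 0.2,
}

def span_score(token_classes):
    """Log-probability of a candidate span under the bi-gram model."""
    score = 0.0
    for prev, cur in zip(token_classes, token_classes[1:]):
        score += math.log(bigram_prob.get((prev, cur), 1e-6))
    return score

# A class sequence that looks like an address scores higher than one that does not.
print(span_score(["CITY", "DISTRICT", "ROAD", "NUM"]))
print(span_score(["CITY", "ROAD", "DISTRICT", "NUM"]))
```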

A Study on the Development Methodology for User-Friendly Interactive Chatbot (사용자 친화적인 대화형 챗봇 구축을 위한 개발방법론에 관한 연구)

  • Hyun, Young Geun; Lim, Jung Teak; Han, Jeong Hyeon; Chae, Uri; Lee, Gi-Hyun; Ko, Jin Deuk; Cho, Young Hee; Lee, Joo Yeoun
    • Journal of Digital Convergence, v.18 no.11, pp.215-226, 2020
  • Chatbots are emerging as an important interface for business. This change is driven by the continued development of chatbot-related research from NLP to NLU and NLG. However, methodological studies on extracting domain knowledge and developing it into a user-friendly interactive interface remain weak in the chatbot development process. In this paper, to establish process criteria for chatbot development, we applied the methodology presented in our previous paper to an actual project and improved the development methodology. As a result, the productivity of the test phase, the most important step, improved by 33.3%, and the number of iterations was reduced to 37.5%. Based on these results, a "3 Phase and 17 Tasks Development Methodology" is presented, which is expected to dramatically reduce trial and error in chatbot development.

A Spelling Error Correction Model in Korean Using a Correction Dictionary and a Newspaper Corpus (교정사전과 신문기사 말뭉치를 이용한 한국어 철자 오류 교정 모델)

  • Lee, Se-Hee; Kim, Hark-Soo
    • The KIPS Transactions: Part B, v.16B no.5, pp.427-434, 2009
  • With the rapid evolution of the Internet and mobile environments, text containing spelling errors, such as newly coined words and abbreviations, is widely used. These spelling errors decrease the readability of texts and make it difficult to develop NLP (natural language processing) applications. To resolve this problem, we propose a spelling error correction model that uses a spelling error correction dictionary and a newspaper corpus. The proposed model has the advantage that the cost of data construction is low because it uses a newspaper corpus, which is easy to obtain, as a training corpus. In addition, it does not require additional external modules such as a morphological analyzer or a word-spacing error correction system because it uses a simple string-matching method based on the correction dictionary. In experiments with a newspaper corpus and a short-message corpus collected from real mobile phones, the proposed model showed good performance on various evaluation measures (a miss-correction rate of 7.3%, an F1-measure of 97.3%, and a false positive rate of 1.1%).
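
A minimal sketch of the correction-dictionary string-matching step, assuming a tiny hypothetical dictionary of error/correction pairs; the paper's model additionally uses newspaper-corpus statistics to rank candidate corrections.

```python
# Hypothetical error -> correction pairs; a real dictionary is far larger.
correction_dict = {
    "어의없다": "어이없다",
    "들어나다": "드러나다",
    "문안하다": "무난하다",
}

def correct(text: str) -> str:
    """Replace known error strings with their corrections, longest match first."""
    # Longest-first matching avoids a short entry masking a longer one.
    for wrong in sorted(correction_dict, key=len, reverse=True):
        text = text.replace(wrong, correction_dict[wrong])
    return text

print(correct("그 결과는 정말 어의없다."))  # -> "그 결과는 정말 어이없다."
```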

Automated Story Generation with Image Captions and Recursive Calls (이미지 캡션 및 재귀호출을 통한 스토리 생성 방법)

  • Isle Jeon; Dongha Jo; Mikyeong Moon
    • Journal of the Institute of Convergence Signal Processing, v.24 no.1, pp.42-50, 2023
  • The development of technology has brought digital innovation throughout the media industry, including production and editing technologies, and has diversified how consumers view content through OTT services and streaming. The convergence of big data and deep learning networks has enabled automatic generation of text in formats such as news articles, novels, and scripts, but few studies have reflected the author's intention and generated contextually smooth stories. In this paper, we describe the flow of pictures in a storyboard with image caption generation techniques and automatically generate story-tailored scenarios with a language model. Using image captioning based on a CNN and an attention mechanism, we generate sentences describing the pictures on the storyboard and feed the generated sentences into the Korean natural language processing model KoGPT-2 to automatically generate scenarios that meet the planning intention. In this way, scenarios customized to the author's intention and story can be created in large quantities to ease the burden of content creation, and artificial intelligence participates in the overall process of digital content production, advancing media intelligence.
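
A minimal sketch of the caption-to-scenario stage, assuming the Hugging Face transformers library and the public skt/kogpt2-base-v2 checkpoint (the paper does not name a specific KoGPT-2 release); the CNN-plus-attention captioning stage is represented by a placeholder caption string.

```python
from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel

# Assumed checkpoint; swap in whichever KoGPT-2 release is available.
tokenizer = PreTrainedTokenizerFast.from_pretrained(
    "skt/kogpt2-base-v2", bos_token="</s>", eos_token="</s>", pad_token="<pad>"
)
model = GPT2LMHeadModel.from_pretrained("skt/kogpt2-base-v2")

# Placeholder for the CNN + attention captioning stage described in the abstract.
caption = "한 소년이 해질녘 바닷가를 걷고 있다."  # "A boy walks along the beach at sunset."

# Continue the caption into a short scenario passage.
input_ids = tokenizer.encode(caption, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=128,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.2,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The recursive-call idea in the title can then be approximated by feeding each generated passage back in as the next prompt until the desired story length is reached.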

Assignment Semantic Category of a Word using Word Embedding and Synonyms (워드 임베딩과 유의어를 활용한 단어 의미 범주 할당)

  • Park, Da-Sol; Cha, Jeong-Won
    • Journal of KIISE, v.44 no.9, pp.946-953, 2017
  • Semantic role decision defines the semantic relationship between a predicate and its arguments in natural language processing (NLP) tasks. Both semantic role information and semantic category information are needed to make semantic role decisions. The Sejong Electronic Dictionary contains frame information used to determine semantic roles. In this paper, we propose a method to extend the Sejong Electronic Dictionary using word embedding and synonyms. The same experiment is performed using existing word-embedding vectors and retrofitted vectors. For words that do not appear in the Sejong Electronic Dictionary, the system performance of semantic category assignment is 32.19% and that of extended semantic category assignment is 51.14% using word embedding, and 33.33% and 53.88%, respectively, using retrofitted vectors. We also show that it is helpful to extend the Sejong Electronic Dictionary by assigning semantic categories to new words that do not yet have them.
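
A minimal sketch of assigning a semantic category to an out-of-dictionary word by majority vote over its nearest neighbours in an embedding space, assuming gensim, a pretrained vector file, and a tiny hypothetical word-to-category lexicon standing in for the Sejong Electronic Dictionary.

```python
from collections import Counter
from typing import Optional

from gensim.models import KeyedVectors

# Assumed pretrained Korean word vectors in word2vec binary format.
vectors = KeyedVectors.load_word2vec_format("ko_vectors.bin", binary=True)

# Hypothetical subset of a Sejong-style word -> semantic-category lexicon.
category_of = {"사과": "식물", "배": "식물", "호랑이": "동물", "고양이": "동물"}

def assign_category(word: str, topn: int = 10) -> Optional[str]:
    """Vote over the categories of the word's nearest embedding neighbours."""
    votes = Counter()
    for neighbour, _similarity in vectors.most_similar(word, topn=topn):
        if neighbour in category_of:
            votes[category_of[neighbour]] += 1
    return votes.most_common(1)[0][0] if votes else None
```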

Coreference Resolution for Korean Pronouns using Pointer Networks (포인터 네트워크를 이용한 한국어 대명사 상호참조해결)

  • Park, Cheoneum; Lee, Changki
    • Journal of KIISE, v.44 no.5, pp.496-502, 2017
  • Pointer Networks is a deep-learning model based on a recurrent neural network (RNN) that uses an attention mechanism to output a list of elements corresponding to positions in the input sequence. Coreference resolution for pronouns is the natural language processing (NLP) task of finding the antecedents in a document that correspond to the pronouns, defining a single entity. In this paper, a pronoun coreference-resolution method that finds the relation between antecedents and pronouns using Pointer Networks is proposed; furthermore, input methods for the Pointer Networks, that is, chaining orders between the words in an entity, are proposed. Among the proposed methods, the chaining order Coref2 showed the best performance, with a MUC F1 of 81.40%. This is 31.00 and 19.28 percentage points higher than the rule-based (50.40%) and statistics-based (62.12%) coreference resolution systems, respectively, for Korean pronouns.
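
A minimal sketch of the pointer-style attention that scores input positions from a decoder state, in the spirit of Pointer Networks; the dimensions, batch layout, and random inputs are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PointerAttention(nn.Module):
    """Additive attention whose output is a distribution over input positions."""

    def __init__(self, hidden: int):
        super().__init__()
        self.w_enc = nn.Linear(hidden, hidden, bias=False)
        self.w_dec = nn.Linear(hidden, hidden, bias=False)
        self.v = nn.Linear(hidden, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, seq_len, hidden); dec_state: (batch, hidden)
        scores = self.v(torch.tanh(
            self.w_enc(enc_states) + self.w_dec(dec_state).unsqueeze(1)
        )).squeeze(-1)                        # (batch, seq_len)
        return torch.softmax(scores, dim=-1)  # pointer distribution

# Usage: encode the document, then point from a pronoun's state to its antecedent.
enc = torch.randn(1, 20, 128)   # 20 encoded tokens
dec = torch.randn(1, 128)       # decoder state for a pronoun
probs = PointerAttention(128)(enc, dec)
antecedent_position = probs.argmax(dim=-1)
```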

A Comparative Study on Deep Learning Topology for Event Extraction from Biomedical Literature (생의학 분야 학술 문헌에서의 이벤트 추출을 위한 심층 학습 모델 구조 비교 분석 연구)

  • Kim, Seon-Wu; Yu, Seok Jong; Lee, Min-Ho; Choi, Sung-Pil
    • Journal of the Korean Society for Library and Information Science, v.51 no.4, pp.77-97, 2017
  • A recent sharp increase in the biomedical literature makes it hard for researchers to grasp current research trends and conduct creative studies based on previous results. To help them keep up with the latest scholarly trends, numerous attempts have been made to develop specialized analytic services that provide direct, intuitive, and formalized scholarly information using text mining technologies such as information extraction and event detection. This paper introduces a total of eight convolutional neural network (CNN) models for extracting biomedical events from academic abstracts, applying various feature utilization approaches, and compares their performance. As a result of the comparison, we confirmed that the Entity-Type-Fully-Connected model, one of the introduced models, showed the most promising performance in the event classification task (72.09% in F-score), while it achieved a relatively low but comparable result (21.81%) on the entire event extraction process due to the imbalance of the training collections and the low performance of the event identification model.
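
A minimal sketch, in PyTorch, of a CNN text classifier that concatenates an entity-type feature only at the fully connected layer, in the spirit of the Entity-Type-Fully-Connected model; layer sizes, kernel widths, and the one-hot entity-type encoding are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EntityTypeFCCNN(nn.Module):
    def __init__(self, vocab_size, num_entity_types, num_classes,
                 emb_dim=100, num_filters=128, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, k) for k in kernel_sizes])
        # Entity-type feature joins the network only at the fully connected layer.
        self.fc = nn.Linear(num_filters * len(kernel_sizes) + num_entity_types,
                            num_classes)

    def forward(self, token_ids, entity_type_onehot):
        x = self.embed(token_ids).transpose(1, 2)          # (batch, emb, seq)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = torch.cat(pooled + [entity_type_onehot], dim=1)
        return self.fc(features)                            # event-class logits

# Shape check with random inputs: 2 abstracts of 40 tokens, 4 entity types.
model = EntityTypeFCCNN(vocab_size=5000, num_entity_types=4, num_classes=10)
logits = model(torch.randint(0, 5000, (2, 40)), torch.eye(4)[[1, 3]])
```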

A Word Embedding used Word Sense and Feature Mirror Model (단어 의미와 자질 거울 모델을 이용한 단어 임베딩)

  • Lee, JuSang; Shin, JoonChoul; Ock, CheolYoung
    • KIISE Transactions on Computing Practices, v.23 no.4, pp.226-231, 2017
  • Word representation, an important area in natural language processing (NLP) using machine learning, is a method that represents a word not as text but as a distinguishable symbol. Existing word embedding methods employ large corpora so that words appearing near each other in text are positioned nearby in vector space. However, corpus-based word embedding needs several corpora because of word-occurrence frequency and the growing number of words. In this paper, word embedding is performed using dictionary definitions and semantic relationship information (hypernyms and antonyms). Words are trained using the feature mirror model (FMM), a modified Skip-Gram (Word2Vec). Words with similar senses have similar vectors, and the vectors of antonymous words can be distinguished.
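
A minimal sketch of dictionary-definition-based embedding in a Skip-Gram spirit, where each training "sentence" pairs a headword with the content words of its definition and its hypernyms; gensim's Word2Vec here is a stand-in for the feature mirror model rather than the authors' exact training objective, and the tiny dictionary is hypothetical.

```python
from gensim.models import Word2Vec

# Hypothetical dictionary entries: headword -> (definition, hypernyms).
dictionary = {
    "사과": ("과일 나무 열매", ["과일"]),
    "호랑이": ("고양이과 맹수", ["동물"]),
}

# Build pseudo-sentences so that each headword co-occurs with its definition words.
sentences = []
for headword, (definition, hypernyms) in dictionary.items():
    sentences.append([headword] + definition.split() + hypernyms)

# Skip-Gram training (sg=1) over the definition-based pseudo-sentences.
model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("사과", topn=3))
```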