• Title/Summary/Keyword: Named entity dictionary


Expansion of Word Representation for Named Entity Recognition Based on Bidirectional LSTM CRFs (Bidirectional LSTM CRF 기반의 개체명 인식을 위한 단어 표상의 확장)

  • Yu, Hongyeon; Ko, Youngjoong
    • Journal of KIISE / v.44 no.3 / pp.306-313 / 2017
  • Named entity recognition (NER) seeks to locate and classify named entities in text into pre-defined categories such as names of persons, organizations, locations, and expressions of time. Recently, many state-of-the-art NER systems have been implemented with bidirectional LSTM CRFs. Deep learning models based on long short-term memory (LSTM) generally depend on word representations as input. In this paper, we propose an approach to expand word representation by using pre-trained word embedding, part-of-speech (POS) tag embedding, syllable embedding, and named entity dictionary feature vectors. Our experiments show that the proposed approach creates useful word representations as input to bidirectional LSTM CRFs. Our final system performs 8.05%p better than a baseline NER that uses only the pre-trained word embedding vector.
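The expanded word representation described above (pre-trained word embedding concatenated with POS-tag, syllable, and dictionary-feature vectors) can be sketched roughly as follows. All lookup tables, dimensions, and tokens are toy assumptions for illustration, not the paper's actual configuration.

```python
# Illustrative toy lookup tables (assumed for the sketch, not from the paper)
word_emb = {"seoul": [0.2, 0.7, 0.1]}   # pre-trained word embedding
pos_emb  = {"NNP": [1.0, 0.0]}          # POS tag embedding
syl_emb  = {"seoul": [0.5, 0.5]}        # syllable-level embedding
ne_dict  = {"seoul"}                    # named entity dictionary

def expand_representation(token, pos):
    """Concatenate all feature vectors into one input vector for the BiLSTM-CRF."""
    dict_feat = [1.0] if token in ne_dict else [0.0]  # dictionary membership flag
    return word_emb[token] + pos_emb[pos] + syl_emb[token] + dict_feat

vec = expand_representation("seoul", "NNP")
print(len(vec))  # 8 dimensions: 3 + 2 + 2 + 1
```

The concatenated vector is what would be fed, per token, into the bidirectional LSTM CRF layer.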

Automatic Construction of a Named Entity Dictionary for Named Entity Recognition (개체명 인식을 위한 개체명 사전 자동 구축)

  • Jeon, Wonpyo; Song, Yeongkil; Choi, Maengsik; Kim, Harksoo
    • Annual Conference on Human and Language Technology / 2013.10a / pp.82-85 / 2013
  • A named entity dictionary is essential for research on named entity recognizers. However, since almost no named entity dictionaries are publicly available, this paper proposes a method to automatically construct one by effectively extracting named entities from DBpedia data. The proposed method uses the 'name' and 'category' information of each entry. The entry's 'name' is used as the named entity, and the entry's 'category' is scored against each named entity class by computing the mutual information between them. Using these scores, named entities are mapped to named entity classes. The result showed an average precision of 76.7%.
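The category-to-class scoring step described above might look like the following pure-Python sketch using pointwise mutual information; the co-occurrence counts, category names, and class labels are made-up assumptions.

```python
import math
from collections import Counter

# Assumed toy co-occurrence counts between DBpedia categories and NE classes
pair_counts = Counter({("City", "LOC"): 90, ("City", "ORG"): 10,
                       ("Company", "ORG"): 80, ("Company", "LOC"): 5})
total = sum(pair_counts.values())

def pmi(category, ne_class):
    """Pointwise mutual information between a DBpedia category and an NE class."""
    p_xy = pair_counts[(category, ne_class)] / total
    p_x = sum(c for (cat, _), c in pair_counts.items() if cat == category) / total
    p_y = sum(c for (_, cls), c in pair_counts.items() if cls == ne_class) / total
    return math.log(p_xy / (p_x * p_y)) if p_xy > 0 else float("-inf")

def best_class(category, classes=("LOC", "ORG")):
    """Map a category (and hence its entries' names) to the best-scoring NE class."""
    return max(classes, key=lambda c: pmi(category, c))

print(best_class("City"))     # LOC
print(best_class("Company"))  # ORG
```

Each DBpedia entry name would then inherit the class assigned to its category.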


KONG-DB: Korean Novel Geo-name DB & Search and Visualization System Using Dictionary from the Web (KONG-DB: 웹 상의 어휘 사전을 활용한 한국 소설 지명 DB, 검색 및 시각화 시스템)

  • Park, Sung Hee
    • Journal of the Korean Society for Information Management / v.33 no.3 / pp.321-343 / 2016
  • This study aimed to design a semi-automatic web-based pilot system to 1) build a Korean novel geo-name database, 2) update the database using automatic geo-name extraction so that it remains scalable, and 3) retrieve and visualize the usage of old geo-names on a map. In particular, extracting novel geo-names that are now obsolete is difficult because obtaining a corpus to use as training data is burdensome. To build such a corpus, an admin tool with an HTML crawler and parser written in Python crawled geo-names and usages from a vocabulary dictionary for Korean New Novels, enough to train a named entity tagger that can extract even novel geo-names not appearing in the training corpus. By means of this training corpus and an automatic extraction tool, the geo-name database was made scalable. In addition, the system can visualize geo-names on the map. The study also designed and implemented the prototype and empirically verified the validity of the pilot system. Lastly, items to be improved were addressed.

Detection of Adverse Drug Reactions Using Drug Reviews with BERT+ Algorithm (BERT+ 알고리즘 기반 약물 리뷰를 활용한 약물 이상 반응 탐지)

  • Heo, Eun Yeong; Jeong, Hyeon-jeong; Kim, Hyon Hee
    • KIPS Transactions on Software and Data Engineering / v.10 no.11 / pp.465-472 / 2021
  • In this paper, we present an approach for detecting adverse drug reactions from drug reviews to compensate for the limitations of the spontaneous adverse drug reaction reporting system. Since negative reviews usually contain adverse drug reactions, sentiment analysis was performed on drug reviews to extract the negative ones. Then, the MedDRA dictionary and named entity recognition were applied to the negative reviews to detect adverse drug reactions. For the experiment, drug reviews of Celecoxib, Naproxen, and Ibuprofen were collected from 5 drug review sites and analyzed. Our results showed that this detection of adverse drug reactions can compensate for the under-reporting limitation of the spontaneous adverse drug reaction reporting system.
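The two-stage pipeline described above (filter negative reviews, then match dictionary terms) can be sketched as follows. A crude keyword filter stands in for the BERT+ sentiment classifier, and a tiny term list stands in for the MedDRA dictionary; all cues and terms are invented for illustration.

```python
# Toy stand-ins (assumed): keyword sentiment filter instead of BERT+,
# a tiny term list instead of the MedDRA dictionary.
NEGATIVE_CUES = {"terrible", "worse", "pain", "awful"}
MEDDRA_TERMS = {"headache", "nausea", "dizziness"}

def is_negative(review):
    """Crude keyword sentiment filter standing in for the BERT+ model."""
    return any(cue in review.lower() for cue in NEGATIVE_CUES)

def detect_adrs(reviews):
    """Keep negative reviews, then match adverse-reaction terms against them."""
    found = set()
    for review in reviews:
        if is_negative(review):
            found |= {t for t in MEDDRA_TERMS if t in review.lower()}
    return found

reviews = ["This drug is terrible, constant nausea and headache.",
           "Works great, no side effects at all."]
print(sorted(detect_adrs(reviews)))  # ['headache', 'nausea']
```

The positive review contributes nothing, mirroring the paper's assumption that adverse reactions concentrate in negative reviews.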

Title Named Entity Recognition based on Automatically Constructed Context Patterns and Entity Dictionary (자동 구축된 문맥 패턴과 개체명 사전에 기반한 제목 개체명 인식)

  • Lee, Joo-Young; Song, Young-In; Rim, Hae-Chang
    • Annual Conference on Human and Language Technology / 2004.10d / pp.40-45 / 2004
  • This paper describes a new method for recognizing title named entities such as movie, book, and music titles. Unlike general named entities such as the person, location, and organization names classified in MUC, title named entities make it hard to use internal features such as spelling features, and since no corpus annotated with title named entities exists, methods that performed well in previous work are difficult to apply. To solve this problem, this paper proposes a method that recognizes title named entities using context patterns automatically built from a raw corpus together with a named entity dictionary. To construct the patterns and the title entity dictionary, we repeatedly alternate between pattern expansion using the dictionary information and dictionary expansion using the constructed patterns, gradually growing both the context patterns and the title entity dictionary, and we experimentally demonstrate that this information helps title named entity recognition.
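The alternating pattern/dictionary expansion described above can be sketched as a minimal bootstrapping loop; the corpus, seed title, and single-slot pattern format are toy assumptions, not the paper's actual representation.

```python
# Minimal bootstrapping sketch (assumed data): patterns and a title
# dictionary grow each other over a raw corpus.
corpus = ["watched Inception last night", "watched Oldboy last night",
          "finished reading Dune yesterday"]
dictionary = {"Inception"}  # tiny seed title dictionary
patterns = set()

def extract_pattern(sentence, title):
    """Turn a sentence containing a known title into a context pattern."""
    return sentence.replace(title, "<TITLE>")

def apply_pattern(sentence, pattern):
    """If the sentence matches the pattern, return the slot filler as a new title."""
    prefix, suffix = pattern.split("<TITLE>")
    if sentence.startswith(prefix) and sentence.endswith(suffix):
        return sentence[len(prefix):len(sentence) - len(suffix)] or None
    return None

for _ in range(2):  # alternate pattern expansion and dictionary expansion
    for s in corpus:
        for title in list(dictionary):
            if title in s:
                patterns.add(extract_pattern(s, title))
    for s in corpus:
        for p in patterns:
            t = apply_pattern(s, p)
            if t:
                dictionary.add(t)

print(sorted(dictionary))  # ['Inception', 'Oldboy']
```

"Oldboy" is harvested because it fills the same context pattern as the seed, while "Dune" is not, since its context never matches a learned pattern.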


A Study on the Integration of Information Extraction Technology for Detecting Scientific Core Entities based on Large Resources (대용량 자원 기반 과학기술 핵심개체 탐지를 위한 정보추출기술 통합에 관한 연구)

  • Choi, Yun-Soo; Cheong, Chang-Hoo; Choi, Sung-Pil; You, Beom-Jong; Kim, Jae-Hoon
    • Journal of Information Management / v.40 no.4 / pp.1-22 / 2009
  • Large-scale information extraction plays an important role in advanced information retrieval as well as question answering and summarization. Information extraction can be defined as a process of converting unstructured documents into formalized, tabular information, and it consists of named-entity recognition, terminology extraction, coreference resolution, and relation extraction. Since all of these elementary technologies have so far been studied independently, integrating the necessary processes of information extraction is not trivial, due to the diversity of their input/output formats and operating environments. As a result, it is difficult to process scientific documents so as to extract both named entities and technical terms at once. In this study, we define scientific core entities as a set of 10 types of named entities and technical terminologies in the biomedical domain. In order to automatically extract these entities from scientific documents at once, we develop a framework for scientific core entity extraction that embraces all the pivotal language processors: named-entity recognizer, coreference resolver, and terminology extractor. Each module of the integrated system has been evaluated with various corpora as well as KEEC 2009. The system will be utilized in various information service areas such as information retrieval, question answering (Q&A), document indexing, and dictionary construction.

A Method to Solve the Entity Linking Ambiguity and NIL Entity Recognition for efficient Entity Linking based on Wikipedia (위키피디아 기반의 효과적인 개체 링킹을 위한 NIL 개체 인식과 개체 연결 중의성 해소 방법)

  • Lee, Hokyung; An, Jaehyun; Yoon, Jeongmin; Bae, Kyoungman; Ko, Youngjoong
    • Journal of KIISE / v.44 no.8 / pp.813-821 / 2017
  • Entity linking finds the meaning of an entity mention, which may refer to an entity using different expressions, in a user's query by linking the mention to an entity in a knowledge base. This task presents four challenges: the difficulty of knowledge base construction, multiple surface forms of an entity mention, ambiguity in entity linking, and NIL entity recognition. In this paper, we first construct an entity name dictionary based on Wikipedia to build the knowledge base and to handle multiple surface forms. We then propose several methods for NIL entity recognition, and resolve entity linking ambiguity by training a support vector machine on several features, including context similarity, semantic relevance, a clue word score, the named entity type similarity of the mention, an entity name matching score, and an entity popularity score. We apply the two proposed methods sequentially over the constructed knowledge base to obtain good entity linking performance. In our experiments, the system achieved F1 scores of 83.66% for NIL entity recognition and 90.81% for entity linking disambiguation.
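The disambiguation step described above can be sketched as scoring each candidate entity by a feature vector; here a hand-weighted linear scorer stands in for the trained SVM, and the feature values, weights, threshold, and entity names are all invented for illustration.

```python
# Sketch of entity linking disambiguation: each candidate gets a feature
# vector; a linear scorer (standing in for the trained SVM) picks the link.
WEIGHTS = {"context_sim": 0.3, "semantic_rel": 0.2, "clue_word": 0.15,
           "type_sim": 0.15, "name_match": 0.1, "popularity": 0.1}
NIL_THRESHOLD = 0.35  # below this, the mention is judged NIL (no KB entry)

def score(features):
    """Weighted sum of the per-candidate feature values."""
    return sum(WEIGHTS[k] * v for k, v in features.items())

def link(candidates):
    """Return the best-scoring candidate, or 'NIL' if none clears the threshold."""
    best = max(candidates, key=lambda c: score(c[1]), default=None)
    if best is None or score(best[1]) < NIL_THRESHOLD:
        return "NIL"
    return best[0]

candidates = [
    ("Apple_Inc.", {"context_sim": 0.9, "semantic_rel": 0.8, "clue_word": 1.0,
                    "type_sim": 1.0, "name_match": 1.0, "popularity": 0.9}),
    ("Apple_(fruit)", {"context_sim": 0.2, "semantic_rel": 0.1, "clue_word": 0.0,
                       "type_sim": 0.0, "name_match": 1.0, "popularity": 0.4}),
]
print(link(candidates))  # Apple_Inc.
print(link([]))          # NIL
```

The NIL branch mirrors the paper's NIL entity recognition: a mention with no sufficiently strong candidate is left unlinked.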

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo; Park, Byeonghwa
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.1-13 / 2015
  • As opinion mining in big data applications has been highlighted, a great deal of research on unstructured data has been carried out. Social media on the Internet generate unstructured or semi-structured data every second, often in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which can yield incorrect results far from users' intentions. Even though much progress has been made over recent years in enhancing the performance of search engines to provide users with appropriate results, there is still much room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, avoiding expensive sense-tagging processes. It tests the effectiveness of the method with a Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary has approximately 57,000 sentences; the Sejong Corpus has about 790,000 sentences tagged with both part of speech and sense. For the experiment, the two resources were used both in combination and separately, with cross validation. Only nouns, the target subjects of word sense disambiguation, were selected: 93,522 word senses among 265,655 nouns, and 56,914 sentences from related proverbs and examples were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it was tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating the sense vectors were added to the named entity dictionary of a Korean morphological analyzer. Using the extended named entity dictionary, term vectors were extracted from the input sentences, and term vectors for the sentences were created. Given the extracted term vector and the sense vector model made during the pre-processing stage, sense-tagged terms were determined by vector-space-model-based word sense disambiguation. In addition, this study shows the effectiveness of the corpus merged from the examples in the Korean standard unabridged dictionary and the Sejong Corpus: the experiment shows better precision and recall with the merged corpus. This study suggests the method can practically enhance the performance of Internet search engines and help us understand the meaning of sentences more accurately in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem, under the assumption that all senses are independent. Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations and/or partial combinations of the senses in a sentence. Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
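The supervised Naïve Bayes word sense disambiguation described above can be sketched in pure Python; the sense labels, example sentences, and add-one smoothing choice are toy assumptions standing in for the corpus auto-built from dictionary examples.

```python
import math
from collections import Counter, defaultdict

# Toy sense-tagged examples (assumed), mimicking a corpus built
# automatically from dictionary example sentences.
training = [
    ("bank_finance", ["deposit", "money", "loan"]),
    ("bank_finance", ["money", "interest", "account"]),
    ("bank_river", ["river", "water", "shore"]),
]

sense_counts = Counter(s for s, _ in training)
word_counts = defaultdict(Counter)
for sense, words in training:
    word_counts[sense].update(words)
vocab = {w for _, ws in training for w in ws}

def disambiguate(context):
    """Pick the sense maximizing log P(sense) + sum log P(word|sense), add-one smoothed."""
    def log_post(sense):
        total = sum(word_counts[sense].values()) + len(vocab)
        lp = math.log(sense_counts[sense] / len(training))
        for w in context:
            lp += math.log((word_counts[sense][w] + 1) / total)
        return lp
    return max(sense_counts, key=log_post)

print(disambiguate(["money", "loan"]))   # bank_finance
print(disambiguate(["river", "shore"]))  # bank_river
```

Note the naive independence assumption the abstract mentions: each context word contributes its log-probability independently of the others.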

Information Extraction Using Context and Position (문맥과 위치정보를 사용한 정보추출)

  • Min Kyungkoo; Sun Choong-Nyoung; Seo Jungyun
    • Proceedings of the Korean Information Science Society Conference / 2005.07b / pp.490-492 / 2005
  • With the growth of the Internet and the increase in electronic documents, the importance of information extraction technology has also grown. Information extraction (IE) is a document processing technology that extracts only the necessary content from documents of various forms and stores it in a structured format. SIES (Sogang Information Extraction System) is an information extraction system that combines machine learning with high-precision hand-crafted rules. For robust analysis of ungrammatical sentences and other noisy input, it uses Lexico-Semantic Patterns (LSP) and a named entity dictionary. To improve the performance of SIES's machine learning, in addition to the widely used context information, it uses features and a scoring method that take the position information of candidate words into account.
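The idea of combining context features with candidate position can be sketched as a simple scoring function; the weighting scheme and the choice of sentence index as the position signal are invented assumptions, not SIES's actual formula.

```python
# Sketch: score extraction candidates by context-cue matches plus a small
# bonus for position in the document (weights are invented assumptions).
def score_candidate(candidate, sentence_index, total_sentences, context_hits):
    """Context matches dominate; candidates appearing earlier get a small bonus."""
    context_score = 0.8 * context_hits  # number of matched context cues
    position_score = 0.2 * (1 - sentence_index / total_sentences)
    return context_score + position_score

# Same candidate and context evidence, different positions in a 10-sentence doc
early = score_candidate("ACME Corp", 0, 10, 2)
late = score_candidate("ACME Corp", 9, 10, 2)
print(early > late)  # True: earlier position scores higher
```

This reflects the abstract's point that position information supplements, rather than replaces, context information.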


Integrated Char-Word Embedding on Chinese NER using Transformer (트랜스포머를 이용한 중국어 NER 관련 문자와 단어 통합 임배딩)

  • Jin, ChunGuang; Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.415-417 / 2021
  • Since words in Chinese sentences are written without delimiters and the vocabulary is huge, Chinese NER (Named Entity Recognition) has usually been based on character representations. In recent years, much Chinese NER research has reconsidered how to integrate word information into the model. However, traditional sequence models have complex structures and slow inference speed, and they need additional dictionary information, which makes them difficult to deploy in industry. The approach in this paper is state of the art and parallelizable: it integrates char-word embeddings so that the model learns word information. The proposed model is easy to implement, outperforms traditional models in terms of speed and efficiency, and improves F1-score on two datasets.
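The char-word integration described above can be sketched as building, for each character, an input that concatenates its character vector with the vector of the lexicon word containing it. The lookup tables, dimensions, and greedy two-character matching are toy assumptions, not the paper's actual method.

```python
# Sketch of integrated char-word inputs for Chinese NER (toy tables assumed).
char_emb = {"北": [0.1, 0.2], "京": [0.3, 0.1]}
word_emb = {"北京": [0.7, 0.4, 0.9], "<NONE>": [0.0, 0.0, 0.0]}
lexicon = {"北京"}

def char_word_inputs(sentence):
    """Build per-character inputs: char vector + matched lexicon-word vector."""
    inputs = []
    for i, ch in enumerate(sentence):
        # Greedy 2-char lookup standing in for real lexicon matching
        word = sentence[i:i + 2] if sentence[i:i + 2] in lexicon else \
               sentence[i - 1:i + 1] if i > 0 and sentence[i - 1:i + 1] in lexicon else \
               "<NONE>"
        inputs.append(char_emb[ch] + word_emb[word])
    return inputs

vecs = char_word_inputs("北京")
print(len(vecs), len(vecs[0]))  # 2 characters, each a 5-dim (2+3) input
```

In the paper's setting, these per-character integrated vectors would feed a Transformer encoder rather than the recurrent models it criticizes.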