• Title/Summary/Keyword: Korean document classification (한국어 문서분류)


Using Dynamic Programming for Word Segmentation in OCR (동적 프로그래밍을 이용한 OCR에서의 띄어쓰기 교정)

  • Park, Ho-Min;Kim, Chang-Hyun;Noh, Kyung-Mok;Cheon, Min-Ah;Kim, Jae-Hoon
    • Annual Conference on Human and Language Technology / 2016.10a / pp.243-245 / 2016
  • Word-spacing errors occur when the characters of a document are recognized through optical character recognition (OCR). To address this, this paper proposes a segmentation-based word-spacing error correction system that uses dynamic programming as an OCR post-processing step. The proposed system corrects spacing errors as follows. First, all spaces inside word segments classified as containing spacing errors are removed. Second, the space-stripped string is segmented with dynamic programming to find every possible spacing candidate for the input string. Third, the spacing probability of each candidate is computed with reference to a news-article corpus and a spacing probability model built from that corpus. Finally, the candidate with the highest probability among the spacing candidates is presented as the correction result. Using the proposed system, we were able to resolve OCR spacing errors. In future work, we plan to study improving the final OCR recognition rate through a spacing correction system extended with the linguistic rules needed for spacing-error correction.

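The segmentation and scoring steps described in this abstract can be illustrated with a small sketch. The following Python snippet is a minimal illustration, not the authors' implementation: it scores spacing candidates for a space-stripped string with a toy unigram probability model (the word list and probabilities are invented) and recovers the best candidate by dynamic programming.

```python
import math

# Toy unigram "spacing probability model"; in the paper this would be
# estimated from a news-article corpus, here it is invented for illustration.
WORD_LOGPROB = {
    "학교": math.log(0.02),
    "에": math.log(0.05),
    "갑니다": math.log(0.01),
    "학": math.log(0.001),
    "교에": math.log(0.0005),
}
UNKNOWN_LOGPROB = math.log(1e-8)  # penalty for out-of-vocabulary chunks
MAX_WORD_LEN = 4

def segment(chars: str):
    """Return the highest-probability segmentation of a space-stripped string."""
    n = len(chars)
    # best[i] = (best log-probability of chars[:i], backpointer to previous cut)
    best = [(-math.inf, -1)] * (n + 1)
    best[0] = (0.0, -1)
    for i in range(1, n + 1):
        for j in range(max(0, i - MAX_WORD_LEN), i):
            word = chars[j:i]
            score = best[j][0] + WORD_LOGPROB.get(word, UNKNOWN_LOGPROB)
            if score > best[i][0]:
                best[i] = (score, j)
    # Recover the segmentation from the backpointers.
    words, i = [], n
    while i > 0:
        j = best[i][1]
        words.append(chars[j:i])
        i = j
    return list(reversed(words))

if __name__ == "__main__":
    print(segment("학교에갑니다"))  # -> ['학교', '에', '갑니다']
```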

Record Information Question-Answering System Using Question Rules (질문 규칙을 이용한 기록정보 질의-응답 시스템)

  • Oh, Su-Hyun;Ahn, Young-Min;Park, Hee-Geun;Lee, Chung-Hee;Seo, Young-Hoon
    • Annual Conference on Human and Language Technology / 2006.10e / pp.228-232 / 2006
  • This paper describes a system that handles queries about Guinness-style record information, that is, information with record-setting value. Record-information queries generally appear in a standardized form, and this form is used as a rule to extract the answer to a query. In sentences with record-setting value, the adverbs that mark a sentence as a record sentence are defined as record adverbs; examples include 가장 (most), 제일 (best), 최고의 (the highest), 최대의 (the largest), 최소의 (the smallest), 최초의 (the first), and 최초로 (for the first time). For record-information queries, record adverbs fall into two types according to whether a predicate is included. A record adverb, together with the location information and answer type in the query, serves as a key factor for answer extraction, and predicate information is used as an answer-extraction factor depending on the type of record adverb and whether the query contains a predicate. The proposed system finds clues for answer extraction through query analysis, uses them to retrieve candidate documents and candidate sentences, and then extracts the answer with answer-extraction rules.

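As a rough illustration of the record-adverb idea, the sketch below flags sentences that contain one of the adverbs listed in the abstract. It is a deliberately simplified stand-in for the paper's rules, which also use location information, answer types, and predicate information.

```python
import re

# Illustrative record adverbs from the abstract; the paper's actual rule set
# also covers query patterns, location information, and answer types.
RECORD_ADVERBS = ["가장", "제일", "최고의", "최대의", "최소의", "최초의", "최초로"]
_PATTERN = re.compile("|".join(map(re.escape, RECORD_ADVERBS)))

def is_record_sentence(sentence: str) -> bool:
    """Return True if the sentence contains a record adverb."""
    return _PATTERN.search(sentence) is not None

if __name__ == "__main__":
    print(is_record_sentence("세계에서 가장 높은 산은 에베레스트이다."))  # True
    print(is_record_sentence("에베레스트는 히말라야에 있다."))  # False
```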

Decision of the Korean Speech Act using Feature Selection Method (자질 선택 기법을 이용한 한국어 화행 결정)

  • 김경선;서정연
    • Journal of KIISE: Software and Applications / v.30 no.3_4 / pp.278-284 / 2003
  • A speech act is the speaker's intention conveyed through an utterance. It is important for understanding natural language dialogues and generating responses. This paper proposes a two-stage method that improves the performance of Korean speech act decision. The first stage selects features from the part-of-speech results of the sentence and from the context, which uses previous speech acts. We use the χ² statistic (CHI) for feature selection, which has shown high performance in text categorization. The second stage determines the speech act with the selected features and a neural network. The proposed method shows the possibility of automatic speech act decision using only POS results, achieves good performance by using the most informative features, and speeds up the decision by decreasing the number of features. We tested the proposed method on a Korean dialogue corpus transcribed from recordings in real settings, consisting of 10,285 utterances and 17 speech acts. We trained on 8,349 utterances and tested on 1,936 utterances, obtaining the correct speech act for 1,709 utterances (88.3%). This result is about 8% higher in accuracy than without feature selection.
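A minimal sketch of the two-stage idea, using scikit-learn rather than the authors' system: features are drawn from POS-tagged utterances, the most informative ones are kept by the chi-square (χ²) statistic, and a small neural network makes the decision. The toy utterances, tags, and parameters are invented for illustration; the paper's corpus has 10,285 utterances and 17 speech acts.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy POS-tagged utterances and speech-act labels, purely for illustration.
utterances = [
    "안녕하세요/NNG",
    "예약/NNG 하/XSV 고/EC 싶/VX 어요/EF",
    "감사/NNG 합니다/XSV",
    "방/NNG 있/VA 나요/EF",
]
labels = ["greeting", "request", "thanks", "ask-if"]

# 1) turn POS results into features, 2) keep the k most informative features
#    by the chi-square statistic, 3) decide the speech act with a small network.
model = make_pipeline(
    CountVectorizer(token_pattern=r"\S+"),
    SelectKBest(chi2, k=10),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(utterances, labels)
print(model.predict(["예약/NNG 가능/NNG 한가요/EF"]))
```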

A Suggestion of the Direction of Construction Disaster Document Management through Text Data Classification Model based on Deep Learning (딥러닝 기반 분류 모델의 성능 분석을 통한 건설 재해사례 텍스트 데이터의 효율적 관리방향 제안)

  • Kim, Hayoung;Jang, YeEun;Kang, HyunBin;Son, JeongWook;Yi, June-Seong
    • Korean Journal of Construction Engineering and Management / v.22 no.5 / pp.73-85 / 2021
  • This study proposes an efficient management direction for Korean construction accident cases through a deep learning-based text data classification model. A deep learning model was developed that categorizes construction accidents into five representative KOSHA accident types: fall, electric shock, flying object, collapse, and narrowness. In initial model tests, the classification accuracy for fall disasters was relatively high, while cases of the other types tended to be classified as fall disasters. From these results, it was analyzed that 1) specific accident-causing behaviors, 2) similar sentence structures, and 3) complex accidents corresponding to multiple types affect the results. Two accuracy-improvement experiments were then conducted: 1) reclassification and 2) elimination. As a result, the classification performance improved by 185.7% when complex accidents were eliminated. This resolved the multicollinearity caused by complex accidents, which contain the contents of multiple accident types. In conclusion, this study suggests the need to manage complex accidents independently while preparing a system for describing the circumstances of future accidents in detail.
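The "elimination" experiment can be sketched as follows, with a deliberately simplified stand-in (TF-IDF plus logistic regression instead of the paper's deep model) and invented toy accident reports: cases labeled with more than one accident type play the role of complex accidents and are dropped before training.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy accident reports; each case lists every accident type it mentions.
# Cases tagged with more than one type play the role of "complex accidents".
cases = [
    ("작업자가 비계에서 추락하였다", ["fall"]),
    ("전선 접촉으로 감전되었다", ["electric shock"]),
    ("자재가 낙하하여 작업자를 가격하였다", ["flying object"]),
    ("거푸집이 붕괴되며 작업자가 추락하였다", ["collapse", "fall"]),  # complex accident
]

# "Elimination" experiment: drop complex accidents before training.
simple = [(text, types[0]) for text, types in cases if len(types) == 1]
texts, labels = zip(*simple)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["작업자가 개구부에서 추락하였다"]))  # expected to lean toward "fall"
```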

Extracting Supporting Evidence with High Precision via Bi-LSTM Network (양방향 장단기 메모리 네트워크를 활용한 높은 정밀도의 지지 근거 추출)

  • Park, ChaeHun;Yang, Wonsuk;Park, Jong C.
    • Annual Conference on Human and Language Technology / 2018.10a / pp.285-290 / 2018
  • For an argument to be highly persuasive, sufficient supporting evidence is required. Automating the extraction of evidence that can logically support the claims in an argument is essential for developing and commercializing applications such as automated debate systems and decision-support aids for policy voting. However, a system that extracts supporting evidence from web documents requires two problems to be solved first, and they make a high-performance system difficult to build: 1) a broad search scope that secures information which is only loosely related to the topic of the argument but can still serve as supporting evidence, and 2) the ability to identify, within the collected information, evidence that clearly supports the claims of the argument. For supporting-evidence extraction with high precision and scalability, this work proposes the following staged extraction system: 1) selection of related documents by TF-IDF similarity, 2) first-pass extraction of supporting evidence via semantic similarity, and 3) second-pass extraction of supporting evidence with a neural network classifier. To validate the proposed system, we ran an experiment extracting supporting evidence from 845,675 web news articles for the claims in 4,008 editorials. Evaluated against annotated claims and supporting evidence, the proposed staged system achieved precisions of 0.41 and 0.70 in the first- and second-pass extraction, respectively. An analysis of the extracted supporting evidence confirmed that extraction based on an adequate understanding of the argument is possible.

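The first stage (document selection by TF-IDF similarity) can be sketched as below. The claim, articles, and cutoff are invented for illustration, and the later semantic-similarity and neural-classifier stages are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

claim = "미세먼지 저감을 위해 노후 경유차 운행을 제한해야 한다"
news_articles = [
    "정부가 노후 경유차의 도심 운행 제한을 확대하기로 했다",
    "프로야구 정규 시즌이 다음 주에 개막한다",
    "연구진은 경유차 배출가스가 미세먼지의 주요 원인이라고 밝혔다",
]

# Stage 1: rank candidate documents by TF-IDF cosine similarity to the claim.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([claim] + news_articles)
similarities = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

top_k = 2  # keep only the most related documents for the later stages
ranked = sorted(zip(similarities, news_articles), key=lambda p: p[0], reverse=True)
for score, article in ranked[:top_k]:
    print(f"{score:.3f}  {article}")
```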

Re-defining Named Entity Type for Personal Information De-identification and A Generation method of Training Data (개인정보 비식별화를 위한 개체명 유형 재정의와 학습데이터 생성 방법)

  • Choi, Jae-hoon;Cho, Sang-hyun;Kim, Min-ho;Kwon, Hyuk-chul
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.206-208 / 2022
  • As the big data industry has recently developed significantly, interest in privacy violations caused by personal information leakage has increased. There have been attempts to automate de-identification through named entity recognition in natural language processing. In this paper, named entity recognition data is constructed semi-automatically by identifying sentences that contain de-identification targets in Korean Wikipedia. Compared with using general named entity recognition data, this reduces the cost of learning about information that is not subject to de-identification. It also has the advantage of minimizing the additional rule-based and statistical systems needed to classify de-identification information in the output. The named entity recognition data proposed in this paper is classified into twelve categories, which include de-identification targets such as medical records and family relationships. In an experiment on the generated dataset, KoELECTRA achieved a score of 0.87796 and RoBERTa a score of 0.88.

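A minimal sketch of the semi-automatic data construction described above: sentences from an article that mention a known de-identification target are kept together with the target's span and category. The target lists, category names, and example text are invented; the paper defines twelve categories.

```python
import re

# Invented de-identification targets by category; the paper defines twelve
# categories, including medical records and family relationships.
TARGETS = {
    "PERSON": ["홍길동"],
    "DISEASE": ["당뇨병"],
}

document = "홍길동은 1990년에 태어났다. 그는 2015년에 당뇨병 진단을 받았다. 취미는 등산이다."

def make_examples(text):
    """Semi-automatic example generation: keep sentences that mention a known
    target and record the target's character span and category."""
    examples = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for label, names in TARGETS.items():
            for name in names:
                start = sentence.find(name)
                if start != -1:
                    examples.append(
                        {"sentence": sentence, "span": (start, start + len(name)), "label": label}
                    )
    return examples

for example in make_examples(document):
    print(example)
```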

A School-tailored High School Integrated Science Q&A Chatbot with Sentence-BERT: Development and One-Year Usage Analysis (인공지능 문장 분류 모델 Sentence-BERT 기반 학교 맞춤형 고등학교 통합과학 질문-답변 챗봇 -개발 및 1년간 사용 분석-)

  • Gyeongmo Min;Junehee Yoo
    • Journal of The Korean Association For Science Education / v.44 no.3 / pp.231-248 / 2024
  • This study developed a chatbot for first-year high school students, employing open-source software and a Korean Sentence-BERT model for AI-powered document classification. The chatbot uses the Sentence-BERT model to find the six Q&A pairs most similar to a student's query and presents them in a carousel format. The initial dataset, built from online resources, was refined and expanded based on student feedback and usability over the operational period. By the end of the 2023 academic year, the chatbot had integrated a total of 30,819 data entries and recorded 3,457 student interactions. Analysis revealed students' inclination to use the chatbot when prompted by teachers during classes and primarily during self-study sessions after school, with an average of 2.1 to 2.2 inquiries per session, mostly via mobile phones. Text mining identified student input terms covering not only science-related queries but also aspects of school life such as assessment scope. Topic modeling with BERTopic, which is based on Sentence-BERT, categorized 88% of student questions into 35 topics, shedding light on common student interests. A year-end survey confirmed the efficacy of the carousel format and the chatbot's role in addressing curiosities beyond the integrated science learning objectives. This study underscores the importance of developing chatbots tailored for student use in public education and highlights their educational potential through long-term usage analysis.
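A minimal sketch of the retrieval step, using the sentence-transformers library: the student's query is embedded with a Korean Sentence-BERT model and the most similar stored questions are returned. The checkpoint name and the toy Q&A bank are assumptions for illustration, not the study's actual model or dataset.

```python
from sentence_transformers import SentenceTransformer, util

# Example Korean Sentence-BERT checkpoint; the study's exact model is not
# specified here, so treat the model name as an assumption.
model = SentenceTransformer("jhgan/ko-sroberta-multitask")

# Toy Q&A bank; the deployed chatbot used 30,819 curated Q&A entries.
qa_pairs = [
    ("지구의 자전 주기는 얼마인가요?", "약 24시간입니다."),
    ("광합성은 어디에서 일어나나요?", "엽록체에서 일어납니다."),
    ("중간고사 범위가 어디까지인가요?", "1단원부터 3단원까지입니다."),
    ("산과 염기의 차이는 무엇인가요?", "수용액에서 내놓는 이온이 다릅니다."),
]

questions = [q for q, _ in qa_pairs]
corpus_embeddings = model.encode(questions, convert_to_tensor=True)

def answer(query: str, top_k: int = 6):
    """Return the Q&A pairs most similar to the student's query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
    top = scores.topk(min(top_k, len(qa_pairs)))
    return [(float(s), qa_pairs[int(i)]) for s, i in zip(top.values, top.indices)]

for score, (question, ans) in answer("식물은 어떻게 양분을 만들어요?"):
    print(f"{score:.2f}  Q: {question}  A: {ans}")
```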

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • Development of artificial intelligence technologies has been accelerating with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Thanks to recent interest in the technology and research on various algorithms, the field of artificial intelligence has achieved more technological advances than ever. The knowledge-based system is a sub-domain of artificial intelligence; it aims to enable artificial intelligence agents to make decisions using machine-readable and processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, for example the question-answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task and still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence uses DBpedia, one of the largest knowledge bases, which aims to extract structured content from the varied information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences appropriate for extracting triples, and selecting values and transforming them into RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify an input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert the knowledge into triples.
To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort of experts to construct instances according to the ontology schema.
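The training-data generation step (adding BIO tags to sentences that realize infobox values) can be sketched as follows. The infobox attributes, the pre-tokenized sentence, and the tag names are illustrative assumptions, not the paper's actual schema.

```python
# Generate BIO-tagged training data by aligning infobox values with sentences,
# in the spirit of the method described above; all names here are illustrative.
infobox = {"birthPlace": "서울", "occupation": "소설가"}  # attribute -> value
sentence = "그 는 서울 에서 태어난 소설가 이다"  # pre-tokenized for simplicity

def bio_tag(tokens, infobox):
    """Label each token with B-/I-<attribute> if it matches an infobox value, else O."""
    tags = ["O"] * len(tokens)
    for attribute, value in infobox.items():
        value_tokens = value.split()
        for start in range(len(tokens) - len(value_tokens) + 1):
            if tokens[start:start + len(value_tokens)] == value_tokens:
                tags[start] = f"B-{attribute}"
                for k in range(start + 1, start + len(value_tokens)):
                    tags[k] = f"I-{attribute}"
    return tags

tokens = sentence.split()
for token, tag in zip(tokens, bio_tag(tokens, infobox)):
    print(f"{token}\t{tag}")
# The tagged sentences can then train a CRF or Bi-LSTM-CRF tagger, and matched
# spans can be emitted as RDF triples such as (subject, birthPlace, "서울").
```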