• Title/Summary/Keyword: Text data


Automated Text Categorization using high quality Bigrams (효율적인 바이그램을 이용한 자동문서 범주화)

  • Choi, Joon-Young; Lee, Chan-Do
    • Annual Conference of KIPS / 2003.05a / pp.261-264 / 2003
  • This study aims to develop an algorithm that improves the performance of automated text categorization by using bigrams. We compare the strengths and weaknesses of existing text categorization algorithms and implement an improved bigram extraction algorithm. In experiments with this algorithm, using words plus bigrams improved BEP by 2.07% and F1 by 1.40% over using individual words on the Reuters-21579 data set, and improved BEP by 8.12% and F1 by 6.25% on the Korea-web data set. These results show that an automated text categorization system using words plus bigrams is more effective than one using words alone.

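As a rough illustration of the word-plus-bigram feature idea described in the abstract above, the following minimal Python sketch compares unigram-only against unigram+bigram features on a few made-up documents; the classifier and data are illustrative assumptions, not the paper's algorithm or its data sets.

```python
# Minimal sketch: words-only vs. words+bigrams features for text categorization.
# The sample documents and the Naive Bayes classifier are illustrative choices.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["grain prices rise sharply", "oil prices fall", "wheat grain exports grow"]
labels = ["grain", "oil", "grain"]

for ngrams in [(1, 1), (1, 2)]:  # (1, 1) = words only, (1, 2) = words + bigrams
    model = make_pipeline(CountVectorizer(ngram_range=ngrams), MultinomialNB())
    model.fit(docs, labels)
    print(ngrams, model.predict(["grain exports rise"]))
```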

Matching Algorithm for Hangul Recognition Based on PDA

  • Kim Hyeong-Gyun; Choi Gwang-Mi
    • Journal of information and communication convergence engineering / v.2 no.3 / pp.161-166 / 2004
  • Electronic ink is data stored in the form of handwritten text or script, without conversion into ASCII by handwriting recognition, on pen-based computers and Personal Digital Assistants (PDAs) to support natural and convenient data input. One of the most important issues is searching the electronic ink in order to use it. We propose and implement a script matching algorithm for electronic ink. The proposed matching algorithm separates the input stroke into a set of primitive strokes using the curvature of the stroke curve. After determining the type of each separated stroke, it produces a stroke feature vector. It then calculates the distance between the stroke feature vectors of the input strokes and those of the strokes in the database using a dynamic programming technique.
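
The distance computation described above (a dynamic-programming match between stroke feature sequences) might look roughly like the following sketch; the per-stroke feature representation (direction angle, normalized length) is an assumption for illustration, not the paper's actual feature set.

```python
# Hedged sketch: DTW-style dynamic-programming distance between two sequences
# of per-stroke feature vectors.
import numpy as np

def stroke_distance(a, b):
    """Alignment distance between two lists of stroke feature vectors."""
    n, m = len(a), len(b)
    dp = np.full((n + 1, m + 1), np.inf)
    dp[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
            dp[i, j] = cost + min(dp[i - 1, j], dp[i, j - 1], dp[i - 1, j - 1])
    return dp[n, m]

# Each tuple: (direction angle in radians, normalized length) of one primitive stroke.
query = [(0.0, 1.0), (1.57, 0.5)]
stored = [(0.1, 0.9), (1.60, 0.6), (3.1, 0.2)]
print(stroke_distance(query, stored))
```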

Bioinformatics and Genomic Medicine (생명정보학과 유전체의학)

  • Kim, Ju-Han
    • Journal of Preventive Medicine and Public Health / v.35 no.2 / pp.83-91 / 2002
  • Bioinformatics is a rapidly emerging field of biomedical research. A flood of large-scale genomic and postgenomic data means that many of the challenges in biomedical research are now challenges in computational sciences. Clinical informatics has long developed methodologies to improve biomedical research and clinical care by integrating experimental and clinical information systems. The informatics revolutions both in bioinformatics and clinical informatics will eventually change the current practice of medicine, including diagnostics, therapeutics, and prognostics. Postgenome informatics, powered by high-throughput technologies and genomic-scale databases, is likely to transform our biomedical understanding forever, much the same way that biochemistry did a generation ago. The paper describes how these technologies will impact biomedical research and clinical care, emphasizing recent advances in biochip-based functional genomics and proteomics. Basic data preprocessing with normalization, primary pattern analysis, and machine learning algorithms will be presented. Use of integrated biochip informatics technologies, text mining of factual and literature databases, and integrated management of biomolecular databases will be discussed. Each step will be given with real examples in the context of clinical relevance. Issues of linking molecular genotype and clinical phenotype information will be discussed.

A Structural Analysis of Dictionary Text for the Construction of Lexical Data Base (어휘정보구축을 위한 사전텍스트의 구조분석 및 변환)

  • 최병진
    • Language and Information / v.6 no.2 / pp.33-55 / 2002
  • This research aims at transforming the definition text of an English-English-Korean Dictionary (EEKD), which is encoded in EST files for publishing purposes, into a structured format for a Lexical Data Base (LDB). The construction of an LDB is very time-consuming and expensive work. In order to save time and effort in building new lexical information, the present study extracts useful linguistic information from an existing printed dictionary. In this paper, the process of extracting and structuring lexical information from a printed dictionary (EEKD) as a lexical resource is described. The extracted information is represented in XML format, which can be transformed into other representations for different application requirements.

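A minimal sketch of representing one extracted dictionary entry in XML, in the spirit of the transformation described above; the element names, attributes, and example entry are illustrative assumptions, not the paper's actual schema.

```python
# Hedged sketch: building a structured XML record for a single dictionary entry.
import xml.etree.ElementTree as ET

entry = ET.Element("entry", headword="bank")
sense = ET.SubElement(entry, "sense", pos="noun")
ET.SubElement(sense, "definition").text = "a financial institution"
ET.SubElement(sense, "translation", lang="ko").text = "은행"

print(ET.tostring(entry, encoding="unicode"))
```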

An Automatic Spam e-mail Filter System Using χ2 Statistics and Support Vector Machines (카이 제곱 통계량과 지지벡터기계를 이용한 자동 스팸 메일 분류기)

  • Lee, Songwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.05a / pp.592-595 / 2009
  • We propose an automatic spam mail classifier for e-mail data using Support Vector Machines (SVM). We use the lexical form of a word and its part-of-speech (POS) tag as features. We select useful features with χ² statistics and represent each feature with its term frequency (TF) and inverse document frequency (IDF) values. After training the SVM with these features, it classifies each e-mail as spam or not. In experiments, we achieved 82.7% accuracy on e-mail data collected from a web mail system.

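A minimal scikit-learn sketch of the pipeline the abstract outlines (TF-IDF features, chi-square feature selection, linear SVM); the toy e-mails, the number of selected features, and the omission of the POS-tag features are illustrative simplifications, not the author's setup.

```python
# Hedged sketch: TF-IDF features -> chi-square feature selection -> linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

mails = ["win free prize now", "meeting at noon tomorrow",
         "free lottery winner claim prize", "project report attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

clf = make_pipeline(TfidfVectorizer(),
                    SelectKBest(chi2, k=5),   # keep the 5 most informative features
                    LinearSVC())
clf.fit(mails, labels)
print(clf.predict(["claim your free prize"]))
```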

Causal model analysis between quantity and quality for deriving ranking model of Online reviews (온라인리뷰의 랭킹모델링을 위한 양과 질의 인과모형 분석)

  • Lee, Changyong; Kim, Keunhyung
    • The Journal of Information Systems / v.28 no.1 / pp.1-16 / 2019
  • Purpose The purpose of this study is to analyze the causal relationship between quantity and quality in order to derive a ranking model for online reviews. We thus propose implications for deriving a ranking model that retrieves online reviews more effectively. Design/methodology/approach We collected online reviews from Tripadvisor, a world-famous tourism web site. We transformed the natural-text reviews into quantified data consisting of quantified positive opinions, quantified negative opinions, quantified modification opinions, review lengths, and grade scores, using opinion mining technologies in an R package. We performed correlation and regression analysis on the data. Findings According to the empirical analysis, this study confirmed that review length influences positive, negative, and modification opinions. We also confirmed that negative and modification opinions influence the grade score.
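
A minimal sketch of the kind of regression analysis described above, fitting grade scores against quantified opinion counts; the numbers are invented for illustration and do not come from the study (which used opinion mining in R on Tripadvisor reviews).

```python
# Hedged sketch: regressing grade scores on quantified opinion counts.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: positive opinions, negative opinions, modification opinions (made-up data).
X = np.array([[5, 1, 2], [2, 4, 1], [8, 0, 3], [1, 6, 2], [4, 2, 2]])
grade = np.array([5, 2, 5, 1, 4])  # grade scores on a 1-5 scale

model = LinearRegression().fit(X, grade)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
```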

The Next Generation of Energy News Big Data Analytics (차세대 에너지 관련 뉴스 빅데이터 분석)

  • Lee, YeChan; Cho, HaeChan; Ban, ChaeHoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.10a / pp.451-453 / 2016
  • In the information age, in which large volumes of data are produced and stored, the importance of big data, which makes it possible to anticipate the future and identify directions based on current and past data, is being emphasized. Unstructured large-scale data are analyzed and structured on a statistical basis using R, a big data analysis tool. In this paper, we use R to analyze big data on next-generation energy appearing in the news. We collect next-generation energy data from news articles and use the collected keywords to predict the emergence of efficient next-generation energy in the near future. By presenting the flow and direction of the energy industry and deriving technical tasks for decision making, this work is expected to support flexible management and decision making and to predict and prevent the sources of technical problems in advance.

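As a rough illustration of the keyword analysis described above (performed with R in the paper), the following Python sketch counts keyword frequencies over a few made-up headlines; the headlines and the simple tokenization are assumptions for illustration only.

```python
# Hedged sketch: keyword-frequency counting over news headlines.
from collections import Counter
import re

headlines = [
    "solar power capacity expands",
    "hydrogen fuel cell research accelerates",
    "solar and wind lead next generation energy investment",
]
tokens = [w for line in headlines for w in re.findall(r"[a-z]+", line.lower())]
print(Counter(tokens).most_common(5))
```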

Algorithm Design to Judge Fake News based on Bigdata and Artificial Intelligence

  • Kang, Jangmook; Lee, Sangwon
    • International Journal of Internet, Broadcasting and Communication / v.11 no.2 / pp.50-58 / 2019
  • The clear and specific objective of this study is to design a false-news discrimination algorithm for text-based news articles and an architecture that builds it into a system (an H/W configuration with Hadoop-based in-memory technology and a deep learning S/W design for big data and SNS linkage). Based on training data from actual news, advanced "fake news" test data are submitted as a result, and the theoretical research is completed on that basis. The need for the research proposed by this study lies in the social cost of rumors (including malicious comments) and written false news amid the flood of fake news, false reports, rumors, and smears, among other social challenges. In addition, fake news can distort normal communication channels, undermine mutual trust, and reduce social capital. The final purpose of the study is to extend the work to cases that are difficult to distinguish: the false and the exaggerated, the fake and the hypocritical, the sincere and the false, fraud and error, truth and falsehood.

Enhancement of Text Classification Method (텍스트 분류 기법의 발전)

  • Shin, Kwang-Seong; Shin, Seong-Yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2019.05a / pp.155-156 / 2019
  • Traditional machine learning based emotion analysis methods such as Classification and Regression Trees (CART), Support Vector Machines (SVM), and k-nearest neighbor (kNN) classification are relatively inaccurate. In this paper, we propose an improved kNN classification method. The improved method, combined with data normalization, achieves the goal of improving accuracy. The three classification algorithms and the improved algorithm are then compared on experimental data.

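A minimal sketch of the combination the abstract credits for the accuracy gain, data normalization followed by kNN classification; the toy data, the choice of min-max scaling, and the value of k are illustrative assumptions rather than the authors' improved method.

```python
# Hedged sketch: min-max normalization + kNN classification.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline

# Made-up two-feature data with very different scales, so normalization matters.
X = np.array([[1.0, 200.0], [1.2, 180.0], [3.0, 20.0], [2.8, 30.0]])
y = [0, 0, 1, 1]

knn = make_pipeline(MinMaxScaler(), KNeighborsClassifier(n_neighbors=3))
knn.fit(X, y)
print(knn.predict([[1.1, 190.0]]))
```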

A BERT-Based Automatic Scoring Model of Korean Language Learners' Essay

  • Lee, Jung Hee; Park, Ji Su; Shon, Jin Gon
    • Journal of Information Processing Systems / v.18 no.2 / pp.282-291 / 2022
  • This research applies a pre-trained bidirectional encoder representations from transformers (BERT) handwriting recognition model to predict foreign Korean-language learners' writing scores. A corpus of 586 answers to midterm and final exams written by foreign learners at the Intermediate 1 level was acquired and used for pre-training, resulting in consistent performance even with small datasets. The test data were pre-processed, the model was fine-tuned, and the results were produced in the form of score predictions. The difference between the predicted and actual scores was then calculated. An accuracy of 95.8% was demonstrated, indicating that the prediction results were strong overall; hence, the tool is suitable for the automatic scoring of Korean written test answers, including those with grammatical errors, written by foreigners. These results are particularly meaningful in that the data consisted of written text produced by foreign learners, not native speakers.
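
A minimal sketch of framing essay scoring as regression on top of a pre-trained BERT encoder with Hugging Face transformers; the checkpoint name, the single-output regression head, and the example sentence are assumptions for illustration, not the authors' fine-tuned model or data.

```python
# Hedged sketch: a BERT encoder with a single-output regression head for score prediction.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"  # assumed checkpoint, not the authors' model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=1, problem_type="regression")

essay = "저는 주말에 친구와 함께 영화를 보러 갔습니다."  # illustrative learner sentence
inputs = tokenizer(essay, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # untrained head: placeholder value
print(score)
```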