• Title/Summary/Keyword: Text data

Strategies on Text Screen Design Of The Electronic Textbook For Focused Attention Using Automatic Text Scroll (자동 스크롤 가능을 이용한 주의력 집중을 위한 웹기반 전자교과서 텍스트 화면 설계전략)

  • Kwon, Hyunggyu
    • The Journal of Korean Association of Computer Education
    • /
    • v.5 no.4
    • /
    • pp.134-145
    • /
    • 2002
  • The purpose of this study is to present functional and technical solutions for text learning in a web-based electronic textbook in which each letter has its own focal point. The solutions help learners keep their focus when the eye moves to the next letter or line. The text screen of the electronic textbook automatically scrolls the text up and down or left and right, in directions preassigned by the learner; no mouse or keyboard operation is needed, and the learner can change the scroll speed and type at any time during scrolling. The automatic text scroll function is a means of controlling data and screen to reflect personal preference and ability. It covers the content structure of the text (characteristics, categorization, etc.), the appearance of the text (density, size, font, etc.), scroll options (scroll type, speed, etc.), the program control type (RAM-resident program, etc.), and the application of screen design principles (legibility, etc.). To address these functional requirements, eight technical phases are provided: environment setting, scroll option setting, copy, data analysis, scroll coding, centered-focus coding, left-and-right-focus coding, and implementation. Learners can focus on the text without distraction because the focal points stay in a fixed area of the screen, and they read the text following their preferences for font, size, line spacing, and so on.

Topic Model Analysis of Research Trend on Spatial Big Data (공간빅데이터 연구 동향 파악을 위한 토픽모형 분석)

  • Lee, Won Sang;Sohn, So Young
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.41 no.1
    • /
    • pp.64-73
    • /
    • 2015
  • The recent emergence of spatial big data has attracted the attention of various research groups. This paper analyzes the research trend on spatial big data by text-mining the related Scopus database. We apply a topic model and network analysis to the extracted abstracts of articles related to spatial big data. Optics, astronomy, and computer science were observed to be the major areas of spatial big data analysis, and the major topics discovered from the articles relate to mobile/cloud/smart services of spatial big data in urban settings. Trends of the discovered topics over time are provided, along with the resulting topic network. We expect that uncovered areas of spatial big data research can be further explored.

Automated Data Extraction from Unstructured Geotechnical Report based on AI and Text-mining Techniques (AI 및 텍스트 마이닝 기법을 활용한 지반조사보고서 데이터 추출 자동화)

  • Park, Jimin;Seo, Wanhyuk;Seo, Dong-Hee;Yun, Tae-Sup
    • Journal of the Korean Geotechnical Society
    • /
    • v.40 no.4
    • /
    • pp.69-79
    • /
    • 2024
  • Field geotechnical data are obtained from various field and laboratory tests and are documented in geotechnical investigation reports. For efficient design and construction, digitizing these geotechnical parameters is essential. However, current practices involve manual data entry, which is time-consuming, labor-intensive, and prone to errors. Thus, this study proposes an automatic data extraction method from geotechnical investigation reports using image-based deep learning models and text-mining techniques. A deep-learning-based page classification model and a text-searching algorithm were employed to classify geotechnical investigation report pages with 100% accuracy. Computer vision algorithms were utilized to identify valid data regions within report pages, and text analysis was used to match and extract the corresponding geotechnical data. The proposed model was validated using a dataset of 205 geotechnical investigation reports, achieving an average data extraction accuracy of 93.0%. Finally, a user-interface-based program was developed to enhance the practical application of the extraction model. It allowed users to upload PDF files of geotechnical investigation reports, automatically analyze these reports, and extract and edit data. This approach is expected to improve the efficiency and accuracy of digitizing geotechnical investigation reports and building geotechnical databases.
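
The text-searching side of the page-classification step above can be illustrated with a minimal sketch; the header phrases and sample page text below are invented, and the paper additionally pairs this with an image-based deep-learning page classifier:

```python
# Hypothetical header phrases that mark geotechnical data pages
HEADER_PHRASES = ["boring log", "standard penetration test", "soil profile"]

def is_data_page(page_text):
    """Flag a report page whose extracted text contains a header phrase."""
    text = page_text.lower()
    return any(p in text for p in HEADER_PHRASES)

flagged = is_data_page("BORING LOG  Project: ...  Depth (m)  N-value")
```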

Application of text-mining technique and machine-learning model with clinical text data obtained from case reports for Sasang constitution diagnosis: a feasibility study (자연어 처리에 기반한 사상체질 치험례의 텍스트 마이닝 분석과 체질 진단을 위한 머신러닝 모델 선정)

  • Jinseok Kim;So-hyun Park;Roa Jeong;Eunsu Lee;Yunseo Kim;Hyundong Sung;Jun-sang Yu
    • The Journal of Korean Medicine
    • /
    • v.45 no.3
    • /
    • pp.193-210
    • /
    • 2024
  • Objectives: We analyzed Sasang constitution case reports using text mining to derive network analysis results and designed a classification algorithm using machine learning to select a model suitable for classifying Sasang constitution based on text data. Methods: Case reports on Sasang constitution published from January 1, 2000, to December 31, 2022, were searched. As a result, 343 papers were selected, yielding 454 cases. Extracted texts were preprocessed and tokenized with the Python-based KoNLPy package, and each morpheme was vectorized using TF-IDF values. Word cloud visualization and centrality analysis identified the keywords mainly used for classifying Sasang constitution in clinical practice. To select the most suitable classification model for diagnosing Sasang constitution, the performance of five models (XGBoost, LightGBM, SVC, logistic regression, and random forest classifier) was evaluated using accuracy and F1-score. Results: Through word cloud visualization and centrality analysis, specific keywords for each constitution were identified. Logistic regression showed the highest accuracy (0.839416), while the random forest classifier showed the lowest (0.773723). Based on F1-score, XGBoost scored the highest (0.739811), and the random forest classifier scored the lowest (0.643421). Conclusions: This is the first study to analyze constitution classification by applying text mining and machine learning to case reports, providing a concrete research model for follow-up research. The keywords selected through text mining were confirmed to effectively reflect the characteristics of each Sasang constitution type. Based on text data from case reports, the most suitable machine learning models for diagnosing Sasang constitution are logistic regression and XGBoost.
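
The TF-IDF-plus-classifier pipeline described above can be sketched with scikit-learn; the toy English texts and constitution labels below are invented stand-ins for the KoNLPy-tokenized Korean case texts:

```python
# Minimal sketch, assuming scikit-learn: TF-IDF vectorization followed by
# one of the paper's five classifiers, scored by accuracy and F1.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

texts = ["cold limbs weak digestion", "hot temper strong digestion",
         "cold limbs pale face", "hot flushed face strong voice"]
labels = ["soeum", "soyang", "soeum", "soyang"]  # toy constitution labels

# Each token's TF-IDF weight becomes one feature dimension
X = TfidfVectorizer().fit_transform(texts)

clf = LogisticRegression().fit(X, labels)
pred = clf.predict(X)  # evaluated on training data only in this toy sketch

acc = accuracy_score(labels, pred)
f1 = f1_score(labels, pred, pos_label="soyang")
```

The study's comparison would repeat the fit/score step for XGBoost, LightGBM, SVC, and a random forest classifier on held-out data.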

Methodology Using Text Analysis for Packaging R&D Information Services on Pending National Issues (텍스트 분석을 활용한 국가 현안 대응 R&D 정보 패키징 방법론)

  • Hyun, Yoonjin;Han, Heejun;Choi, Heeseok;Park, Junhyung;Lee, Kyuha;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Information Technology Applications and Management
    • /
    • v.20 no.3_spc
    • /
    • pp.231-257
    • /
    • 2013
  • The recent rise in unstructured data generated by social media has resulted in an increasing need to collect, store, search, analyze, and visualize it. These data cannot be managed effectively with traditional data analysis methodologies because of their vast volume and unstructured nature. Therefore, many attempts are being made to analyze such unstructured data (e.g., text files and log files) using commercial and noncommercial analytical tools. In particular, attempts to discover meaningful knowledge through text mining are being made in business and in other areas such as politics, economics, and cultural studies; for instance, several studies have examined pending national issues by analyzing large volumes of text on various social issues. However, it is difficult to create satisfactory information services that can identify R&D documents on specific national issues from among the various R&D resources. In other words, although users specify words related to pending national issues as search keywords, they usually fail to retrieve the R&D information they are looking for, usually because of the discrepancy between the terms defining pending national issues and the corresponding terms used in R&D documents. We need a mediating logic to overcome this discrepancy so that we can identify and package appropriate R&D information on specific pending national issues. In this paper, we use association analysis and social network analysis to devise a mediator for bridging the gap between the keywords defining pending national issues and those used in R&D documents. Further, we propose a methodology for packaging R&D information services for pending national issues by using the devised mediator. Finally, to evaluate the practical applicability of the proposed methodology, we apply it to the NTIS (National Science & Technology Information Service) system and summarize the results in the case study section.
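
The mediating idea above can be illustrated with a minimal standard-library sketch of association analysis: the confidence of a rule linking an issue keyword to an R&D-document term, computed over invented keyword-term "transactions":

```python
# Each "transaction" pairs issue keywords with terms from one R&D document
# (all keywords and terms below are invented for illustration).
transactions = [
    {"fine_dust", "aerosol", "air_quality_model"},
    {"fine_dust", "particulate_matter", "aerosol"},
    {"flood", "runoff_model", "rainfall"},
]

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated over the transactions."""
    has_a = [t for t in transactions if antecedent in t]
    if not has_a:
        return 0.0
    return sum(consequent in t for t in has_a) / len(has_a)

# A high-confidence rule lets the mediator expand an issue keyword
# ("fine_dust") into the R&D vocabulary ("aerosol") at search time.
expansion = confidence("fine_dust", "aerosol")
```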

An Efficient Damage Information Extraction from Government Disaster Reports

  • Shin, Sungho;Hong, Seungkyun;Song, Sa-Kwang
    • Journal of Internet Computing and Services
    • /
    • v.18 no.6
    • /
    • pp.55-63
    • /
    • 2017
  • One purpose of Information Technology (IT) is to support the human response to natural and social problems such as natural disasters and the spread of disease, and to improve the quality of human life. Recent climate change has occurred worldwide, natural disasters threaten the quality of life, and human safety is no longer guaranteed. IT must be able to support tasks related to disaster response and, more importantly, should be used to predict and minimize future damage. In South Korea, damage data are compiled by each local government and then aggregated by the central government. These data are included in the disaster reports that the government discloses for each disaster case, but it is difficult to obtain the raw damage data even for research purposes. To obtain the data, information extraction may be applied to the disaster reports. In the field of information extraction, most extraction targets are web documents, commercial reports, SNS text, and so on; there is little research on information extraction from government disaster reports. They are mostly text, but the structure of each sentence is very different from that of news articles and commercial reports, so the features of government disaster reports must be carefully considered. In this paper, an information extraction method for South Korean government reports in word-processor format is presented. The method is based on patterns and dictionaries and provides some additional ideas for tokenizing the damage expressions in the text. The experimental result is an F1 score of 80.2 on the test set, which is close to the cutting-edge information extraction performance achieved before the recent deep learning algorithms.
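
A rough standard-library sketch of the pattern-and-dictionary extraction described above, with an invented pattern dictionary and sample sentence (the paper targets Korean report text), followed by the F1-style evaluation:

```python
import re

# Dictionary of damage categories mapped to trigger patterns (invented)
patterns = {
    "casualties": re.compile(r"(\d+)\s*(?:people\s+)?(?:dead|killed)"),
    "flooded_houses": re.compile(r"(\d+)\s*houses?\s+flooded"),
}

def extract_damage(text):
    """Return {category: number} pairs found in one report sentence."""
    found = {}
    for label, pat in patterns.items():
        m = pat.search(text)
        if m:
            found[label] = int(m.group(1))
    return found

report = "Heavy rain left 3 people dead and 120 houses flooded."
result = extract_damage(report)

# Score against a hand-labeled gold set, as in the paper's evaluation
gold = {"casualties": 3, "flooded_houses": 120}
tp = sum(1 for k, v in result.items() if gold.get(k) == v)
precision = tp / max(len(result), 1)
recall = tp / max(len(gold), 1)
f1 = 2 * precision * recall / max(precision + recall, 1e-9)
```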

An Investigation on the Periodical Transition of News related to North Korea using Text Mining (텍스트마이닝을 활용한 북한 관련 뉴스의 기간별 변화과정 고찰)

  • Park, Chul-Soo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.63-88
    • /
    • 2019
  • The goal of this paper is to investigate changes in North Korea's domestic and foreign policies through automated text analysis of North Korea as represented in South Korean mass media. Based on these data, we analyze the status of text mining research, using text mining techniques to find the topics, methods, and trends of such research, and we examine the characteristics and analysis methods of the text mining techniques confirmed by the data analysis. The R program, free software for statistical computing and graphics, was used to apply the text mining techniques. Text mining methods highlight the most frequently used keywords in a body of text, from which one can create a word cloud (also referred to as a text cloud or tag cloud). This study proposes a procedure to find meaningful tendencies based on a combination of word clouds and co-occurrence networks, and aims to explore more objectively the images of North Korea represented in South Korean newspapers by quantitatively reviewing patterns of language use related to North Korea in newspaper big data from November 1, 2016 to May 23, 2019. Considering recent inter-Korean relations, we divided the corpus into three periods. The period before January 1, 2018 was set as the phase before peace building. From January 1, 2018 to February 24, 2019, we set a peace-building phase, during which Kim Jong-un's New Year's message and the PyeongChang Olympics formed an atmosphere of peace on the Korean peninsula. The third period, after the Hanoi summit, reflected the silence in the relationship between North Korea and the United States and was therefore called the depression phase of peace building. This study analyzes news articles related to North Korea from the Korea Press Foundation database (www.bigkinds.or.kr) through text mining to investigate the characteristics of the Kim Jong-un regime's South Korea policy and unification discourse.
The main results of this study show that trends in the North Korean national policy agenda can be discovered based on clustering and visualization algorithms. In particular, the study examines changes in international circumstances, domestic conflicts, living conditions in North Korea, the South's aid projects for the North, inter-Korean conflicts, the North Korean nuclear issue, and the North Korean refugee problem through co-occurrence word analysis, and offers an analysis of South Korean attitudes toward North Korea in terms of semantic prosody. In the phase before peace building, the analysis yielded, in order, 'Missile', 'North Korea Nuclear', 'Diplomacy', 'Unification', and 'South-North Korean'. The peace-building phase yielded 'Panmunjom', 'Unification', 'North Korea Nuclear', 'Diplomacy', and 'Military'. The depression phase of peace building yielded 'North Korea Nuclear', 'North and South Korea', 'Missile', 'State Department', and 'International'. Sixteen words were adopted in all three periods, in the following order: 'Missile', 'North Korea Nuclear', 'Diplomacy', 'Unification', 'North and South Korea', 'Military', 'Kaesong Industrial Complex', 'Defense', 'Sanctions', 'Denuclearization', 'Peace', 'Exchange and Cooperation', and 'South Korea'. We expect these results to contribute to analyzing trends in news content on North Korea associated with North Korea's provocations, and future research on North Korean trends will build on them. We will continue to develop a model for measuring North Korea risk that can anticipate and respond to North Korea's behavior in advance, and we expect the text mining analysis method and the scientific data analysis techniques used here to be applied to North Korea and unification research. We hope these academic efforts lead to many studies that make important contributions to the nation.
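
The frequency and co-occurrence counting behind the word-cloud and network analysis above can be sketched as follows (the study used R; the toy tokenized headlines below stand in for the Korean news corpus):

```python
from collections import Counter
from itertools import combinations

# Each headline is pre-tokenized into keywords (invented examples)
headlines = [
    ["missile", "north_korea_nuclear", "diplomacy"],
    ["unification", "diplomacy", "missile"],
    ["missile", "sanctions", "north_korea_nuclear"],
]

# Keyword frequencies: the input to a word cloud
freq = Counter(w for doc in headlines for w in doc)

# Co-occurrence counts within each headline: the network's edge weights
cooc = Counter()
for doc in headlines:
    for a, b in combinations(sorted(set(doc)), 2):
        cooc[(a, b)] += 1

print(freq.most_common(3))
```

Running these counts per period, as the study does, yields the ranked keyword lists reported for each phase.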

Development of transmission program for Radio Text in Radio Data System (Radio Data System에서의 문자정보(Radio Text)전송 프로그램 개발)

  • 채영석;왕수현;권대복
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1996.06a
    • /
    • pp.101-105
    • /
    • 1996
  • Recently, in addition to the video and audio of existing television and radio broadcasts, supplementary broadcasting services that make the most efficient use of the allocated frequencies have been actively developed. KBS has long been working on data broadcasting, and in radio, data broadcasting using RDS is entering the stage of practical use. Since the second half of 1995, KBS has been conducting nationwide RDS trial broadcasting through KBS Radio 1 (standard FM), and in 1997 it plans to launch a text information service based on a new RDS Hangul radio-text standard tailored to Korean conditions.

An Improved K-means Document Clustering using Concept Vectors

  • Shin, Yang-Kyu
    • Journal of the Korean Data and Information Science Society
    • /
    • v.14 no.4
    • /
    • pp.853-861
    • /
    • 2003
  • An improved K-means document clustering method is presented, in which a concept vector is maintained for each cluster on the basis of the cosine similarity of text documents. The concept vectors are unit vectors normalized on the n-dimensional sphere. Because the standard K-means method is sensitive to its initial starting condition, our improvement focuses on the starting condition for estimating the modes of a distribution. The improved K-means clustering algorithm was applied to a set of text documents, called Classic3, to test the efficiency and correctness of the clustering result, and showed a 7% improvement in its worst case.
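
The concept-vector idea above can be sketched in a few lines: document vectors are normalized to unit length, each cluster's concept vector is the normalized mean of its members, and assignment uses cosine similarity (the toy two-dimensional vectors below are illustrative only):

```python
import math

def normalize(v):
    """Project a vector onto the unit sphere."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v))  # unit vectors: dot = cosine

def concept_vector(docs):
    """Normalized mean of a cluster's unit document vectors."""
    mean = [sum(col) / len(docs) for col in zip(*docs)]
    return normalize(mean)

# Toy unit document vectors forming one cluster
cluster = [normalize([1.0, 0.1]), normalize([0.9, 0.2])]
c = concept_vector(cluster)

# A new document is assigned to the cluster with the highest cosine
doc = normalize([1.0, 0.0])
sim = cosine(doc, c)
```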

A multi-channel CNN based online review helpfulness prediction model (Multi-channel CNN 기반 온라인 리뷰 유용성 예측 모델 개발에 관한 연구)

  • Li, Xinzhe;Yun, Hyorim;Li, Qinglong;Kim, Jaekyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.171-189
    • /
    • 2022
  • Online reviews play an essential role in the consumer's purchasing decision-making process, and thus providing helpful and reliable reviews is essential to consumers. Previous online review helpfulness prediction studies mainly predicted review helpfulness based on the consistency of the text and rating information of online reviews. However, such studies are limited in their representation capacity for review text and in modeling the interaction between review text and rating. We propose a CNN-RHP model that effectively learns the interaction between review text and rating information to address the limitations of previous studies. Multi-channel CNNs were applied to extract the semantic representation of the review text. We also converted ratings into independent high-dimensional embedding vectors of the same dimension as the text vector. The consistency between the review text and the rating information is learned through element-wise operations between the review text and star rating vectors. To evaluate the performance of the proposed CNN-RHP model, we used online reviews collected from Amazon.com. Experimental results show that the CNN-RHP model achieves excellent performance compared to several benchmark models. The results of this study can provide practical implications for services related to review helpfulness on online e-commerce platforms.
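
The element-wise text-rating interaction described above can be illustrated with toy numbers; the vectors below are invented, and the multi-channel CNN feature extractor itself is omitted:

```python
# Hypothetical CNN output for one review and an embedding for its rating,
# deliberately of the same dimension so they can be combined element-wise.
text_vec = [0.2, -0.5, 0.8, 0.1]      # assumed text representation
rating_embed = [0.3, -0.4, 0.7, 0.0]  # assumed embedding for a 4-star rating

# The element-wise product is large where text and rating agree in sign,
# which is how text-rating consistency is exposed to the prediction head.
interaction = [t * r for t, r in zip(text_vec, rating_embed)]
score = sum(interaction)  # stand-in for the final helpfulness score
```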