• Title/Summary/Keyword: Text Data Analysis (텍스트 데이터 분석)


A Study on Extracting the Document Text for Unallocated Areas of Data Fragments (비할당 영역 데이터 파편의 문서 텍스트 추출 방안에 관한 연구)

  • Yoo, Byeong-Yeong;Park, Jung-Heum;Bang, Je-Wan;Lee, Sang-Jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.20 no.6 / pp.43-51 / 2010
  • It is meaningful to investigate data in unallocated space because deleted data can be recovered there. File carving can completely recover a file stored contiguously in unallocated space, but it fails when the data are non-contiguous or incomplete. Such data fragments nevertheless need analysis because they may contain large amounts of information. Microsoft Word, Excel, PowerPoint, and PDF files store their text using compression or a format-specific encoding, so if part of such a document remains in an unallocated data fragment, its text can still be extracted by exploiting the document format. In this paper, we propose a method of extracting document text from unallocated data fragments.
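
The format-aware extraction described above can be illustrated for the DOCX case: a DOCX file is a ZIP container whose `word/document.xml` member is stored as a raw-deflate stream, with visible text inside `<w:t>` elements. The following sketch is an illustration of that idea, not the paper's actual algorithm:

```python
import re
import zlib

def extract_docx_text(fragment: bytes) -> str:
    """Try to recover document text from a raw data fragment that contains
    the start of a DOCX 'word/document.xml' stream (stored as raw deflate
    inside the ZIP container). A hypothetical sketch for illustration."""
    d = zlib.decompressobj(-15)  # negative wbits = raw deflate, no header
    try:
        xml = d.decompress(fragment)
    except zlib.error:
        return ""  # fragment does not start a valid deflate stream
    # DOCX stores visible text inside <w:t> elements.
    runs = re.findall(rb"<w:t[^>]*>([^<]*)</w:t>", xml)
    return "".join(r.decode("utf-8", "replace") for r in runs)
```

A real carver would additionally scan the fragment for candidate deflate start offsets rather than assume the stream begins at byte zero.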

Stock Prediction Using News Text Mining and Time Series Analysis (뉴스 텍스트 마이닝과 시계열 분석을 이용한 주가예측)

  • Ahn, Sung-Won;Cho, Sung-Bae
    • Proceedings of the Korean Information Science Society Conference / 2010.06c / pp.364-369 / 2010
  • In this paper, we perform news text mining on four years of news data (January 2005 to December 2008) to learn whether each article is good or bad news for a stock price, and on that basis propose an algorithm that predicts whether a newly published article will drive the price up or down. For the news text mining we use a modified Bag of Words model and a Naive Bayesian classifier; in particular, unlike previous work that relied on news mining alone, we additionally apply RSI, a time-series indicator for stock prices, to improve prediction accuracy. In an experiment on 42,355 news articles over four months (November 2009 to February 2010), we obtained a prediction success rate of 55.01%, a meaningful result compared with previous studies.

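
The two ingredients named in the abstract, a bag-of-words Naive Bayesian classifier and the RSI time-series indicator, can be sketched as follows. This is a generic illustration of the techniques, not the authors' modified model; the class and function names are invented for the example:

```python
import math
from collections import Counter

def rsi(prices, period=14):
    """Relative Strength Index over a closing-price series."""
    gains, losses = [], []
    for prev, cur in zip(prices, prices[1:]):
        delta = cur - prev
        gains.append(max(delta, 0.0))
        losses.append(max(-delta, 0.0))
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)

class NaiveBayesNews:
    """Bag-of-words Naive Bayes: score(label) = log P(label) + sum log P(word|label).
    Assumes both labels ('up', 'down') are seen during training."""
    def __init__(self):
        self.word_counts = {"up": Counter(), "down": Counter()}
        self.doc_counts = Counter()

    def train(self, words, label):
        self.word_counts[label].update(words)
        self.doc_counts[label] += 1

    def predict(self, words):
        total_docs = sum(self.doc_counts.values())
        vocab = len({w for c in self.word_counts.values() for w in c})
        scores = {}
        for label, counts in self.word_counts.items():
            total = sum(counts.values())
            score = math.log(self.doc_counts[label] / total_docs)
            for w in words:
                # Laplace smoothing so unseen words do not zero out the score
                score += math.log((counts[w] + 1) / (total + vocab))
            scores[label] = score
        return max(scores, key=scores.get)
```

A combined predictor might, for instance, act on the classifier's signal only when the RSI also indicates an overbought or oversold condition.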

A Personalized Learning System Using Social Data and Text Classification Techniques (소셜 데이터와 텍스트 분류 기술을 이용한 개인 맞춤형 학습 시스템)

  • Kim, Sun-Pyo;Kim, Eun-Sang;Jeon, Young-Ho;Lee, Ki-Hoon
    • Proceedings of the Korea Information Processing Society Conference / 2014.11a / pp.718-720 / 2014
  • With the development of information and communication devices, education is evolving toward smart learning. In smart learning, it is essential to provide content that matches each learner's interests. In this paper, we propose a system that uses text classification to automatically identify a learner's interests from his or her SNS data. For the text classification, we collected data already labeled by category and performed machine learning on it. To improve classification accuracy, we measured and analyzed accuracy while varying the granularity of the category units, achieving 82.5% accuracy, a level we judge applicable to a real service.
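
One simple way to realize the described idea, matching a learner's SNS posts against categories learned from pre-labeled data, is a term-frequency profile per category compared by cosine similarity. This is a hypothetical sketch, not the paper's trained model:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify_interest(post_tokens, category_profiles):
    """Assign an SNS post to the closest category profile.
    category_profiles: {category: Counter of tokens from pre-labeled data}."""
    post = Counter(post_tokens)
    return max(category_profiles, key=lambda c: cosine(post, category_profiles[c]))
```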

A Study on an Unstructured Text Data Post-processing Methodology Using a Stopword Thesaurus (불용어 시소러스를 이용한 비정형 텍스트 데이터 후처리 방법론에 관한 연구)

  • Won-Jo Lee
    • The Journal of the Convergence on Culture Technology / v.9 no.6 / pp.935-940 / 2023
  • Most text data collected through web scraping for artificial intelligence and big data analysis is large and unstructured, so it must be refined before analysis. The refining process turns raw text into analyzable structured data through a heuristic pre-processing step and a machine-driven post-processing step. In this study, the post-processing step uses a Korean dictionary and a stopword dictionary to extract the vocabulary for the frequency analysis behind word cloud generation. To efficiently remove stopwords that survive this step, we propose a methodology that applies a "user-defined stopword thesaurus". Through a case analysis using R's word cloud technique, we examine the pros and cons of the proposed refinement, compare it against the existing "stopword dictionary" approach whose shortcomings it is meant to complement, and demonstrate the effectiveness of applying the methodology in practice.
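
The post-processing idea, expanding a stopword dictionary with a user-defined thesaurus so that variant spellings of a stopword are removed as well, can be sketched as follows. The thesaurus structure here is an assumption for illustration; the paper works with Korean text and R's word cloud:

```python
def build_stop_set(base_stopwords, thesaurus):
    """Expand a stopword list with a user-defined thesaurus that maps each
    stopword to its variant spellings/synonyms (hypothetical structure)."""
    stop = set(base_stopwords)
    for head, variants in thesaurus.items():
        stop.add(head)
        stop.update(variants)
    return stop

def refine_tokens(tokens, stop_set):
    """Post-processing step: drop stopwords before the frequency count
    that feeds the word cloud."""
    return [t for t in tokens if t not in stop_set]
```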

Measuring the Valence and Activation Dimensions of Korean Emotion Terms Used in Social Media (소셜 미디어에서 사용되는 한국어 정서 단어의 정서가, 활성화 차원 측정)

  • Rhee, Shin-Young;Ko, Il-Ju
    • Science of Emotion and Sensibility / v.16 no.2 / pp.167-176 / 2013
  • User-created text data are increasing rapidly with the growth of social media. Opinion mining extracts users' opinions by analyzing their text, and sentiment analysis, a branch of opinion mining, requires a list of emotion terms to do so. In this paper, we built a list of emotion terms for analyzing the sentiment of social media, using Facebook as a representative platform. We collected data from Facebook, selected emotion terms, and measured their valence and activation dimensions through a survey. As a result, we built a list of 267 emotion terms annotated with valence and activation values.

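
A lexicon of this kind is typically used by looking up each token and averaging the dimension values over the matches. The entries and numbers below are invented placeholders, not the paper's 267 measured Korean terms:

```python
# Hypothetical entries: (valence, activation), both example values only.
EMOTION_LEXICON = {
    "happy": (0.9, 0.7),
    "calm":  (0.6, 0.2),
    "angry": (0.1, 0.9),
}

def score_text(tokens):
    """Average valence/activation of the emotion terms found in a token list;
    returns None when the text contains no known emotion term."""
    hits = [EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON]
    if not hits:
        return None
    valence = sum(h[0] for h in hits) / len(hits)
    activation = sum(h[1] for h in hits) / len(hits)
    return valence, activation
```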

Design of Twitter data collection system for regional sentiment analysis (지역별 감성 분석을 위한 트위터 데이터 수집 시스템 설계)

  • Choi, Kiwon;Kim, Hee-Cheol
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.10a / pp.506-509 / 2017
  • Opinion mining analyzes the emotions in text and is used to identify an author's emotional state and to gauge public opinion. Since opinion mining can analyze individual emotions, analyzing text by region reveals the emotional state of each region. Regional sentiment analysis yields information that individual-level analysis cannot, and when a region shows a particular emotion, it can help explain the cause. Regional sentiment analysis requires text data created in each region, which must be collected by crawling Twitter. This paper therefore designs a Twitter data collection system for regional sentiment analysis. A client requests tweet data for a specific region and time, and the server collects the requested tweets and transmits them to the client. Using a region's latitude and longitude values, the system collects the area's tweets, and the collected data can be managed by region and time. We expect this design to support efficient data collection and management for sentiment analysis.

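
The server-side behavior described, returning tweets that fall inside a requested bounding box and time window, can be sketched as a simple filter. This is a toy illustration of the design, not the authors' implementation, which would collect live data through Twitter's API:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    lat: float
    lon: float
    ts: int  # unix timestamp

def collect_region(tweets, lat_range, lon_range, time_range):
    """Keep tweets inside the requested bounding box and time window,
    ready for per-region sentiment analysis."""
    (lo_lat, hi_lat), (lo_lon, hi_lon), (t0, t1) = lat_range, lon_range, time_range
    return [t for t in tweets
            if lo_lat <= t.lat <= hi_lat
            and lo_lon <= t.lon <= hi_lon
            and t0 <= t.ts <= t1]
```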

Analysis of Seasonal Importance of Construction Hazards Using Text Mining (텍스트마이닝을 이용한 건설공사 위험요소의 계절별 중요도 분석)

  • Park, Kichang;Kim, Hyoungkwan
    • KSCE Journal of Civil and Environmental Engineering Research / v.41 no.3 / pp.305-316 / 2021
  • Construction accidents occur for a number of reasons: worker carelessness, failure to use safety equipment, and failure to comply with safety rules are some examples. Because much construction work is done outdoors, weather conditions can also be a factor. Past accident data are useful for accident prevention, but since construction accident reports are typically natural-language text, extracting construction hazards from them can take considerable time and cost. In this study, we therefore extracted construction hazards from 2,026 domestic construction accident reports using text mining and analyzed their seasonal importance through frequency analysis and centrality analysis. Of the 254 construction hazards defined by Korea's Ministry of Land, Infrastructure and Transport, we extracted 51 from the accident data. The results showed that a significant hazard was "Formwork" in spring and autumn, "Scaffold" in summer, and "Crane" in winter. The proposed method would enable construction safety managers to prepare better safety measures against outdoor construction accidents according to weather, season, and climate.
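
The frequency and centrality analyses described can be sketched on a hazard co-occurrence network, where two hazards are linked when they appear in the same report and centrality is approximated by node degree. This is a simplification of the study's method, using plain substring matching for illustration:

```python
from collections import Counter
from itertools import combinations

def hazard_statistics(reports, hazard_terms):
    """Frequency of each hazard term across accident reports, plus degree
    centrality in the co-occurrence network (unique co-occurring partners)."""
    freq = Counter()
    degree = Counter()
    edges = set()
    for report in reports:
        found = sorted({h for h in hazard_terms if h in report})
        freq.update(found)
        for a, b in combinations(found, 2):
            if (a, b) not in edges:
                edges.add((a, b))
                degree[a] += 1
                degree[b] += 1
    return freq, degree
```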

Analysis of the Unstructured Traffic Report from Traffic Broadcasting Network by Adapting the Text Mining Methodology (텍스트 마이닝을 적용한 한국교통방송제보 비정형데이터의 분석)

  • Roh, You Jin;Bae, Sang Hoon
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.17 no.3 / pp.87-97 / 2018
  • The traffic accident reports generated by the Traffic Broadcasting Network (TBN) are unstructured data. They are nonetheless valuable as real-time traffic information generated from the viewpoint of the drivers and pedestrians who were on the road at the time and place of the accident, rather than that of the offender or victim who caused it. However, these reports, despite being big data, have rarely been applied to traffic accident analysis or related research. By adopting a text mining technique, this study provides a clue for utilizing them to assess the impacts of traffic accidents. Seven years of traffic reports were examined. The analysis identified road names, accident locations, and times, as well as the factors with the greatest influence on other drivers following an accident. The authors plan to combine unstructured accident data with traffic reports in further study.

A Semantic Text Model with Wikipedia-based Concept Space (위키피디어 기반 개념 공간을 가지는 시멘틱 텍스트 모델)

  • Kim, Han-Joon;Chang, Jae-Young
    • The Journal of Society for e-Business Studies / v.19 no.3 / pp.107-123 / 2014
  • Current text mining techniques suffer from the problem that conventional text representation models cannot express the semantic or conceptual information of documents written in natural language. Conventional models, including the vector space model, Boolean model, statistical model, and tensor space model, represent documents as bags of words: they express documents only through term literals for indexing and frequency-based term weights, ignoring the semantic, sequential-order, and structural information of terms. Most text mining techniques have been developed assuming that documents are represented in such "bag-of-words" models. Confronting the big data era, however, a new text representation paradigm is required that can analyze huge document collections more precisely. Our text model treats the "concept" as an independent space on a par with the "term" and "document" spaces of the vector space model, and expresses the relatedness among the three spaces. To build the concept space we use Wikipedia, each article of which defines a single concept. A document collection is consequently represented as a 3rd-order tensor carrying semantic information, and we call the proposed model the text cuboid model. Through experiments on the popular 20NewsGroup corpus, we demonstrate the superiority of the proposed model in document clustering and concept clustering.
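
A sparse 3rd-order (term x document x concept) tensor of the kind described can be sketched as follows. The toy construction rule, counting a term occurrence only when the term also belongs to a concept's Wikipedia-derived term set, is an assumption for illustration, not the paper's exact weighting:

```python
def build_text_cuboid(docs, concept_terms):
    """Represent a corpus as a sparse 3rd-order tensor indexed by
    (term, document, concept). docs: list of token lists; concept_terms:
    list of term sets, one per (Wikipedia-derived) concept. Entry = term
    frequency in the document when the term belongs to the concept."""
    tensor = {}
    for d, doc in enumerate(docs):
        for term in doc:
            for c, terms in enumerate(concept_terms):
                if term in terms:
                    key = (term, d, c)
                    tensor[key] = tensor.get(key, 0) + 1
    return tensor  # dict-of-keys sparse representation
```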

Analysis of Factors Affecting Surge in Container Shipping Rates in the Era of Covid19 Using Text Analysis (코로나19 판데믹 이후 컨테이너선 운임 상승 요인분석: 텍스트 분석을 중심으로)

  • Rha, Jin Sung
    • Journal of Korea Society of Industrial Information Systems / v.27 no.1 / pp.111-123 / 2022
  • In the era of Covid-19, container shipping rates have surged. Many studies have attempted to investigate the factors behind this surge, but there is little literature that applies text mining to its underlying causes. This study aims to identify the factors behind the unprecedented surge in shipping rates using network text analysis and LDA topic modeling. For the analysis, we collected data and keywords from articles in Lloyd's List over the past two years (2020-2021). The text analysis showed that the current surge is mainly due to the "US-China trade war", "rising blank sailings", "port congestion", "container shortage", and "unexpected events such as the Suez Canal blockage".
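
LDA topic modeling of the kind applied to the Lloyd's List articles can be illustrated with a minimal collapsed Gibbs sampler. This is a toy sketch; a real study would use a library implementation on a far larger corpus:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, K, iters=50, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA: resample each token's topic
    from p(k) ~ (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta)."""
    rng = random.Random(seed)
    V = len({w for doc in docs for w in doc})
    z = [[rng.randrange(K) for _ in doc] for doc in docs]
    ndk = [[0] * K for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(K)]  # topic-word counts
    nk = [0] * K                                # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(K)]
                r = rng.random() * sum(weights)
                for t, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = t
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return nkw  # top words per topic describe each topic
```

The top-weighted words of each returned topic-word distribution are what a study like this would read off as themes such as "port congestion" or "container shortage".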