• Title/Summary/Keyword: Document Frequency

Search Results: 297

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.123-138
    • /
    • 2017
  • Since the stock market is driven by traders' expectations, many studies have tried to predict stock price movements by analyzing various sources of text data. Research has examined not only the relationship between text data and stock price fluctuations, but also the trading of stocks based on news articles and social media responses. Studies that predict stock price movements have applied classification algorithms to a term-document matrix, as in other text mining approaches. Because a document contains many words, it is better to select the words that contribute most when building the term-document matrix: words with too little frequency or importance are removed, and words are also selected by measuring how much each contributes to classifying a document correctly. The conventional approach collects all the documents to be analyzed and selects the words that influence classification. In this study, we instead analyze the documents for each individual stock and select the words that are irrelevant to all categories as neutral words. We then extract the words surrounding each neutral word and use them to generate the term-document matrix. The idea is that the presence of a neutral word itself has little relation to stock movements, while the words around it are more likely to affect stock price movements; the generated term-document matrix is then fed to an algorithm that classifies stock price fluctuations. We first removed stop words and selected neutral words for each stock, and excluded selected words that also appeared in news articles about other stocks.
Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. Three months of articles were used as training data, and the remaining month was applied to the model to predict the next day's stock price movements. We built the models with SVM, Boosting, and Random Forest. The market was open for 80 days during the four months (2016/02/01 ~ 2016/05/31); the first 60 days served as the training set and the remaining 20 days as the test set. The neutral-word-based algorithm proposed in this study showed better classification performance than word selection based on sparsity. In summary, this study predicted stock price movements by collecting and analyzing news articles on the top 10 stocks by market capitalization, estimated the fluctuations with a term-document-matrix-based classification model, and compared the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs in that it uses not only the news articles about the corresponding stock but also news about other stocks to determine which words to extract; that is, it removes not only words that appear in both rises and falls but also words common in the news about other stocks. When prediction accuracy was compared, the suggested method was more accurate. The limitations of this study are that prediction was framed as classifying rises and falls, and that the experiment covered only ten stocks, which do not represent the entire market; it is also difficult to assess investment performance, because price fluctuations and rates of return may differ.
Therefore, further research should use more stocks and predict returns through trading simulation.
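The neutral-word idea above can be sketched in a few lines: for each occurrence of a neutral word, collect the words within a small window around it and count those context words per document. This is a minimal illustration, not the paper's implementation; the window size, the toy sentences, and the choice of "said" as a neutral word are assumptions.

```python
from collections import Counter

def context_terms(tokens, neutral_words, window=2):
    """Collect words within `window` positions of any neutral word."""
    selected = []
    for i, tok in enumerate(tokens):
        if tok in neutral_words:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            selected.extend(t for j, t in enumerate(tokens[lo:hi], lo)
                            if j != i and t not in neutral_words)
    return selected

def term_document_matrix(docs, neutral_words, window=2):
    """Rows: documents; columns: vocabulary of context terms only."""
    doc_counts = [Counter(context_terms(d, neutral_words, window)) for d in docs]
    vocab = sorted(set(t for c in doc_counts for t in c))
    return vocab, [[c[t] for t in vocab] for c in doc_counts]

docs = [
    "the firm said profit rose sharply".split(),
    "the firm said losses widened again".split(),
]
vocab, matrix = term_document_matrix(docs, neutral_words={"said"}, window=2)
```

The resulting matrix would then be handed to an off-the-shelf classifier (SVM, Boosting, Random Forest) as in the study.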

Trend Forecasting and Analysis of Quantum Computer Technology (양자 컴퓨터 기술 트렌드 예측과 분석)

  • Cha, Eunju;Chang, Byeong-Yun
    • Journal of the Korea Society for Simulation
    • /
    • v.31 no.3
    • /
    • pp.35-44
    • /
    • 2022
  • In this study, we analyze and forecast quantum computer technology trends. Previous research on quantum computer trend analysis has focused mainly on application fields centered on the technology itself. This paper therefore analyzes important quantum computer technologies and performs future-signal detection and prediction for a more market-driven technical analysis, examining the words used in news articles to identify rapidly changing market conditions and public interest. This paper extends the conference presentation of Cha & Chang (2022). The research was conducted by collecting domestic news articles from 2019 to 2021. First, we organize the main keywords through text mining. Next, we explore future quantum computer technologies through Term Frequency-Inverse Document Frequency (TF-IDF), Key Issue Map (KIM), and Key Emergence Map (KEM) analysis. Finally, the relationship between future technologies and supply and demand is identified through random forests, decision trees, and correlation analysis. In the frequency, keyword diffusion, and visibility analyses, interest in artificial intelligence was highest. For cyber-security, the rate of mention in news articles is becoming overwhelmingly higher than that of other technologies. Quantum communication, quantum-resistant cryptography, and augmented reality also showed high growth in interest. These results show high expectations for applying these trend technologies in the market. The results of this study can be applied to identifying areas of interest in the quantum computer market and establishing a response system for technology investment.
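The TF-IDF weighting that recurs throughout these entries can be computed from scratch in a few lines: term frequency within a document, discounted by how many documents contain the term. A minimal sketch; the toy documents are invented for illustration and use the unsmoothed log(N/df) variant.

```python
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists -> per-document {term: tf-idf} scores."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency
    scores = []
    for d in docs:
        tf = Counter(d)
        scores.append({t: (tf[t] / len(d)) * math.log(n / df[t]) for t in tf})
    return scores

docs = [
    "quantum computer quantum error".split(),
    "quantum security cryptography".split(),
    "augmented reality market".split(),
]
scores = tf_idf(docs)
```

Note that a term appearing in every document (here "quantum" would, if it occurred in all three) gets weight zero, which is exactly why TF-IDF surfaces distinctive rather than merely frequent keywords.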

Decision Method of Importance of E-Mail based on User Profiles (사용자 프로파일에 기반한 전자 메일의 중요도 결정)

  • Lee, Samuel Sang-Kon
    • The KIPS Transactions:PartB
    • /
    • v.15B no.5
    • /
    • pp.493-500
    • /
    • 2008
  • Although modern users gather a great deal of data from the network, they want only the information they need; with this technology, users can extract the data that satisfy a query. Because previous studies used a single attribute of a document, such as word frequency, they cannot be considered effective data clustering methods. What is needed is an effective clustering technology that can process electronic network documents, such as e-mail or XML, that contain tags of various formats. This paper describes a study on extracting information from user queries based on multiple attributes. It proposes a method of extracting data such as the sender, text type, deadline expressions in the body, and title from an e-mail and using these data for filtering. It also describes an experiment verifying that the multi-attribute-based clustering method is more accurate than existing clustering methods that use word frequency alone.
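A multi-attribute importance score of the kind described can be sketched as a weighted combination of attributes. Everything below — the attribute set, the cue phrases, and the weights — is an invented illustration, not the paper's actual profile model.

```python
def email_importance(email, vip_senders, title_keywords):
    """Toy multi-attribute score combining sender, deadline cues, and title.
    All weights are illustrative assumptions."""
    score = 0.0
    if email["sender"] in vip_senders:              # sender attribute
        score += 0.5
    body = email["body"].lower()
    if any(cue in body for cue in ("deadline", "due", "by friday")):
        score += 0.3                                # time-limit expression
    title = email["title"].lower()
    score += 0.2 * sum(k in title for k in title_keywords)
    return score

urgent = {"sender": "boss@corp.com", "title": "project report",
          "body": "The deadline is tomorrow."}
chatty = {"sender": "list@news.com", "title": "weekly digest",
          "body": "Here is this week's news."}
```

The point of combining attributes is visible even in this toy: the word-frequency signal alone would not separate these two messages, but sender plus deadline cues do.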

Automatic Classification of Documents Using Word Correlation (단어의 연관성을 이용한 문서의 자동분류)

  • Sin, Jin-Seop;Lee, Chang-Hun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.9
    • /
    • pp.2422-2430
    • /
    • 1999
  • In this paper, we propose a new method for automatic classification of web documents using the degree of correlation between words. First, we select keywords by term frequency and inverse document frequency (TF*IDF) and compute the degree of relevance between the keywords across the whole document collection using a probability model; the words most closely connected with the keywords are then used to create a profile that characterizes each class. By repeating this process until the relevance falls below a threshold, we obtain several profiles that match the user's interests. Finally, we classify each document with these profiles and compare the results with those of other automatic classification methods.


Influencer Attribute Analysis based Recommendation System (인플루언서 속성 분석 기반 추천 시스템)

  • Park, JeongReun;Park, Jiwon;Kim, Minwoo;Oh, Hayoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.11
    • /
    • pp.1321-1329
    • /
    • 2019
  • With the development of social information networks, marketing methods are changing in various ways. Unlike traditional marketing built on celebrities and large budgets, influencer-based marketing has become a major trend. In this paper, we first extract influencer features from more than 54 YouTube channels using multi-dimensional qualitative analysis based on YouTube meta information and comment data, and model representative themes to maximize personalized video satisfaction. A further purpose of this study is to support successful promotion and marketing when creating and distributing videos for new items by referring to existing influencer features. Treating all the comments on each channel's videos as one document, we apply the TF-IDF (Term Frequency-Inverse Document Frequency) and LDA (Latent Dirichlet Allocation) algorithms to maximize the performance of the proposed scheme. The performance evaluation shows that the proposed scheme outperforms other schemes.

Study on the resonant HF DC/DC Converter for the weight reduction of the Auxiliary Power Supply of MAGLEV (자기부상열차 보조전원장치 경량화를 위한 공진형 HF DC/DC Converter 연구)

  • Lee, Kyoung-Bok;Lim, Ji-Young;Jo, Jeong-Min;Kim, Jin-Su;Han, Young-Jae;Choi, Sung-Ho
    • Proceedings of the KSR Conference
    • /
    • 2011.10a
    • /
    • pp.1825-1831
    • /
    • 2011
  • One of the major trends in traction power electronics is increasing the switching frequency. Advances in frequency elevation have made it possible to reduce the total size and weight of passive components such as capacitors, inductors, and transformers in the DC/DC converter, and hence to increase the power density; traction dynamic performance is also improved. This document describes several aspects of the design of a resonant DC/DC converter operating at high frequency (10 kHz) and discusses the converter topologies and control method for MAGLEV that result in soft switching.


A Study on the Integration of Similar Sentences in Automatic Summarizing of Document (자동초록 작성시에 발생하는 유사의미 문장요소들의 통합에 관한 연구)

  • Lee, Tae-Young
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.34 no.2
    • /
    • pp.87-115
    • /
    • 2000
  • The effects of case, part of speech, word and clause location, word frequency, etc. were studied for discriminating similar sentences in Korean text. Word frequency was strongly related to discriminating similarity, title words and functional clauses were weakly related, and the other features were not. The cosine coefficient and Salton's similarity measure are used to measure the similarity between sentences. The change of clauses between sentences is also used to unify similar sentences into a representative sentence.
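The cosine coefficient used here to compare sentences can be written directly over word-count vectors: the dot product of the two count vectors divided by the product of their norms. A minimal sketch; the example sentences are invented.

```python
import math
from collections import Counter

def cosine_similarity(tokens_a, tokens_b):
    """Cosine of the angle between two word-count vectors."""
    a, b = Counter(tokens_a), Counter(tokens_b)
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

s1 = "the committee approved the budget".split()
s2 = "the budget was approved".split()
s3 = "rainfall increased in spring".split()
```

Sentences scoring above some threshold would then be merged into one representative sentence, as the study describes.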


Automatic Classification of Product Data for Natural General-purpose O2O Application User Interface (자연스러운 범용 O2O 애플리케이션 사용자 인터페이스를 위한 상품 정보 자동 분류)

  • Lee, Hana;Lim, Eunsoo;Cho, Youngin;Yoon, Young
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2016.10a
    • /
    • pp.382-385
    • /
    • 2016
  • To provide the currently fragmented O2O (Online to Offline) services in an integrated way, this paper develops a natural-language NUI (Natural User Interface) that automatically classifies the product information items specified by the user. For this, we use the Naive Bayes classifier algorithm, which is well suited to learning e-commerce domain information. Training uses the product information and classification scheme of the U.S. e-commerce site Groupon; we analyze the characteristics of the training data, refine the data specifically for product information, and apply per-word weighting via TF-IDF (Term Frequency-Inverse Document Frequency) to improve the accuracy of the algorithm.
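The Naive Bayes classifier mentioned above can be sketched as a multinomial model with Laplace smoothing; the product categories and tokens below are invented stand-ins, not the Groupon data, and the sketch omits the TF-IDF weighting the paper layers on top.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Multinomial Naive Bayes with Laplace smoothing.
    docs: list of token lists; labels: parallel list of class labels."""
    class_count = Counter(labels)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in zip(docs, labels):
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_count, word_counts, vocab, len(docs)

def predict_nb(model, tokens):
    class_count, word_counts, vocab, n = model
    best, best_lp = None, float("-inf")
    for label, doc_count in class_count.items():
        lp = math.log(doc_count / n)  # log prior
        total = sum(word_counts[label].values())
        for t in tokens:
            # Laplace-smoothed log likelihood of each token
            lp += math.log((word_counts[label][t] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

train_docs = [
    "pizza restaurant deal".split(),
    "sushi dinner coupon".split(),
    "hotel stay discount".split(),
    "flight ticket sale".split(),
]
train_labels = ["food", "food", "travel", "travel"]
model = train_nb(train_docs, train_labels)
```

Working in log space avoids floating-point underflow when a description contains many tokens, which is why the products of probabilities appear as sums of logs.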

A Query Classification Method for Question Answering on a Large-Scale Text Data (대규모 문서 데이터 집합에서 Q&A를 위한 질의문 분류 기법)

  • 엄재홍;장병탁
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2000.04b
    • /
    • pp.253-255
    • /
    • 2000
  • When one wants a concrete answer to a question, the problem with general information retrieval is that the result is a set of documents that may (or may not) contain the answer, rather than the answer the user is looking for. For users to obtain the desired information quickly without reading all the candidate documents, a system is needed that returns the actual answer rather than a document set. To this end, question classification using natural language processing and prior tagging of documents can be combined with conventional TF-IDF (Term Frequency-Inverse Document Frequency) based retrieval. In this study, using the experimental environment of the 'Question & Answer' task of TREC-8 (the 1999 edition of the Text REtrieval Conference, held annually under the sponsorship of NIST (National Institute of Standards & Technology) and DARPA (Defense Advanced Research Projects Agency)), we show experimentally that an information retrieval system that classifies questions by their characteristics and uses prior tagging of the target documents can present answers to users' questions more accurately and efficiently.
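Query classification of the kind described — routing a question to an expected answer type before retrieval — can be illustrated with a simple cue-word rule. The cue words and type labels below are assumptions for illustration, not the taxonomy actually used in the TREC-8 work.

```python
# Longest cues first, so "how many" matches before any bare "how" rule would.
CUES = [
    ("how many", "NUMBER"),
    ("how much", "NUMBER"),
    ("where", "LOCATION"),
    ("when", "DATE"),
    ("who", "PERSON"),
    ("what", "DEFINITION"),
]

def classify_question(question):
    """Return a coarse expected-answer type for a question string."""
    q = question.lower().strip()
    for cue, qtype in CUES:
        if q.startswith(cue):
            return qtype
    return "OTHER"
```

A retrieval system can then restrict answer extraction to spans pre-tagged with the matching type (e.g. only person names for a PERSON question), which is the efficiency gain the abstract points to.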


Optimization Model on the World Wide Web Organization with respect to Content Centric Measures (월드와이드웹의 내용기반 구조최적화)

  • Lee Wookey;Kim Seung;Kim Hando;Kang Sukho
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.30 no.1
    • /
    • pp.187-198
    • /
    • 2005
  • The structure of a Web site can prevent search robots or crawling agents from becoming confused in the huge forest of Web pages. We formalize a view of the World Wide Web and generalize it as a hierarchy of Web objects: the Web as a set of Web sites, and a Web site as a directed graph of Web nodes and Web edges. Our approach yields the optimal hierarchical structure that maximizes the tf-idf (term frequency and inverse document frequency) weight, one of the most widely accepted content-centric measures in the information retrieval community, so that the measure can embody the semantics of a search query. The experimental results show that the optimization model is an effective alternative to conventional heuristic approaches in the dynamically changing Web environment.