• Title/Summary/Keyword: StopWords


Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.123-138
    • /
    • 2017
  • Since the stock market is driven by traders' expectations, studies have attempted to predict stock price movements by analyzing various sources of text data. Research has examined not only the relationship between text data and stock price fluctuations, but also trading strategies based on news articles and social media responses. Studies that predict stock price movements have applied classification algorithms to a term-document matrix, as in other text mining approaches. Because documents contain many words, it is preferable to select the words that contribute most when building the term-document matrix. Based on word frequency, words with too little frequency or importance are removed; words can also be selected by measuring how much each word contributes to correctly classifying a document. The conventional approach to constructing a term-document matrix is to collect all the documents to be analyzed and select the words that influence classification. In this study, we instead analyze the documents for each individual stock and designate the words that are irrelevant to all categories as neutral words. We then extract the words surrounding each selected neutral word and use them to generate the term-document matrix. The underlying idea is that the presence of a neutral word itself is only weakly related to stock movements, whereas the words surrounding it are more likely to affect price movements. The generated term-document matrix is then fed to an algorithm that classifies stock price fluctuations. We first removed stop words and selected neutral words for each stock, and excluded words that also appeared in news articles about other stocks.
Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We used three months of news data for training and applied the remaining month of articles to the model to predict the next day's stock price movements. We built models with SVM, Boosting, and Random Forest. The stock market was open for a total of 80 days during the four months (2016/02/01 ~ 2016/05/31); the first 60 days served as the training set and the remaining 20 days as the test set. The neutral-word-based algorithm proposed in this study showed better classification performance than word selection based on sparsity. We estimated stock price fluctuations with a term-document-matrix-based classification model and compared the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs in that it uses not only the news articles for the corresponding stock but also news for other stocks to determine which words to remove: it discards words that appeared in both rises and falls, as well as words common in the news for other stocks. When prediction accuracy was compared, the suggested method was more accurate. A limitation of this study is that price prediction was framed as classifying rises and falls, and the experiment covered only the top ten stocks, which do not represent the entire stock market. In addition, investment performance is difficult to demonstrate because stock price fluctuations and rates of return may differ.
Future work should therefore use more stocks and predict returns through trading simulation.
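The pipeline described above — build a term-document matrix from news text with stop words removed, then classify price movements — can be sketched as follows. This is a minimal illustration with invented English snippets and labels, not the paper's Korean data or its neutral-word selection step:

```python
# Toy sketch of the abstract's pipeline: term-document matrix from news
# snippets (stop words removed) + SVM classifying next-day movement.
# Documents and labels are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

docs = [
    "earnings beat forecast strong demand",
    "profit growth record quarter",
    "lawsuit recall production halt",
    "losses widen weak outlook",
]
labels = [1, 1, 0, 0]  # 1 = price up next day, 0 = down

# Stop-word removal happens inside the vectorizer; the paper additionally
# drops category-neutral words and keeps only their surrounding words.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

clf = SVC(kernel="linear").fit(X, labels)
pred = clf.predict(vectorizer.transform(["record profit strong quarter"]))
```

The paper compares SVM against Boosting and Random Forest; in scikit-learn those would be `GradientBoostingClassifier` and `RandomForestClassifier` trained on the same matrix.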

An Experimental Study on Braking Thermal Damage of Brake Disk Cover (브레이크 디스크 커버의 제동 열손상에 대한 실험적 연구)

  • Ko, Kwang-Ho;Moon, Byung-Koo
    • Journal of Digital Convergence
    • /
    • v.13 no.11
    • /
    • pp.171-178
    • /
    • 2015
  • The disk cover is installed to protect the brake disk and caliper and is removed right before delivery to customers. In this study, the temperature of the disk cover was measured while driving test vehicles (2000 cc, diesel). The highest temperature measured during the driving test (120 km/h - braking (0.3 G) - stop - 120 km/h - braking (0.5 G) - stop) was 260~270℃ in the upper part of the disk cover, and the temperature varied considerably around the cover. It can be inferred from this temperature distribution that the major heat transfer from the hot disk to the cover was by convection; in other words, hot air generated by braking friction moved up to the upper part of the disk cover. Accordingly, only the upper area of the disk cover melted during this driving test. When the thickness of the disk cover was increased from 0.7 mm to 1.0 mm and one layer of masking tape was applied to the upper region, the cover withstood the heated air generated by braking friction during the driving test.

Recognizing the Two Faces of Gambling: The Lived Experiences of Korean Women Gamblers

  • Kim, Sungjae;Kim, Wooksoo;Dickerson, Suzanne S.
    • Journal of Korean Academy of Nursing
    • /
    • v.46 no.5
    • /
    • pp.753-765
    • /
    • 2016
  • Purpose: The aim of this study was to explore the lived experiences of women problem gamblers, focusing on what gambling means to them, how and why these women continue to gamble or stop gambling, and their needs and concerns. To help women problem gamblers effectively, practical in-depth knowledge is necessary for developing intervention programs for prevention, treatment, and recovery. Methods: A hermeneutic phenomenology approach was used to guide in-depth interviews and team interpretation of data. Sixteen women gamblers who chose to live in the casino area were recruited through snowball sampling with help from a counseling center. Participants were interviewed individually from February to April 2013 and asked to tell their stories of gambling. Transcribed interviews provided the data for interpretive analysis. Results: The analysis identified one constitutive pattern: moving beyond addiction by recognizing the two faces of gambling in their lives. Four related themes emerged: gambling as alluring; gambling as 'ugly'; living in contradictions; and moving beyond. Conclusion: Loneliness and isolation play a critical role in the gambling experiences of women gamblers in Korea. In other words, they are motivated to gamble in order to escape loneliness, to stop gambling for fear of being lonely as they grow older, and to stay in the casino area so as not to be alone. The need for acceptance is one of the important factors that should be considered in developing intervention programs for women.

An Effective Incremental Content Clustering Method for the Large Documents in U-learning Environment (U-learning 환경의 대용량 학습문서 관리를 위한 효율적인 점진적 문서)

  • Joo, Kil-Hong;Choi, Jin-Tak
    • Journal of the Korea Computer Industry Society
    • /
    • v.5 no.9
    • /
    • pp.859-872
    • /
    • 2004
  • With the rapid advance of computer and communication technology, the recent trend in education is developing in the direction of ubiquitous learning (u-learning), in which learners select and organize the contents, time, and order of learning by themselves. Since the amount of educational information on the internet is increasing rapidly and is managed in document form, research into methods for managing a large number of documents effectively is necessary. Document clustering integrates documents by subject, grouping a set of documents according to the similarity among them. Accordingly, document clustering can be used in exploring and searching for documents and can increase the accuracy of search. This paper proposes an efficient incremental clustering method for a set of documents that grows gradually. The incremental document clustering algorithm assigns each set of new documents to the legacy clusters that have been identified in advance. In addition, to improve the correctness of the clustering, removal of stop words is proposed.
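The incremental step described above — assign each new document to an existing (legacy) cluster when it is similar enough, otherwise start a new cluster — can be sketched as below. The threshold, vectors, and function names are illustrative assumptions, not the paper's actual algorithm:

```python
# Hedged sketch of incremental document clustering: new term vectors
# join the nearest legacy cluster if similarity passes a threshold,
# otherwise they seed a new cluster. Threshold and data are invented.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def assign_incrementally(centroids, new_docs, threshold=0.5):
    """centroids: legacy cluster term vectors; new_docs: term vectors."""
    clusters = [[] for _ in centroids]
    for i, doc in enumerate(new_docs):
        sims = [cosine(doc, c) for c in centroids]
        best = int(np.argmax(sims))
        if sims[best] >= threshold:
            clusters[best].append(i)   # join an existing legacy cluster
        else:
            centroids.append(doc)      # seed a new cluster
            clusters.append([i])
    return clusters

centroids = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
new_docs = [np.array([0.9, 0.1, 0.0]), np.array([0.0, 0.0, 1.0])]
clusters = assign_incrementally(centroids, new_docs)
```

Here the first new document joins the first legacy cluster, while the second matches nothing and starts a third cluster.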


An Effective Incremental Text Clustering Method for the Large Document Database (대용량 문서 데이터베이스를 위한 효율적인 점진적 문서 클러스터링 기법)

  • Kang, Dong-Hyuk;Joo, Kil-Hong;Lee, Won-Suk
    • The KIPS Transactions:PartD
    • /
    • v.10D no.1
    • /
    • pp.57-66
    • /
    • 2003
  • With the development of the internet and computers, the amount of information on the internet is increasing rapidly and is managed in document form. For this reason, research into methods for managing a large number of documents effectively is necessary. Document clustering integrates documents by subject, grouping a set of documents according to the similarity among them. Accordingly, document clustering can be used in exploring and searching for documents and can increase the accuracy of search. This paper proposes an efficient incremental clustering method for a set of documents that grows gradually. The incremental document clustering algorithm assigns each set of new documents to the legacy clusters that have been identified in advance. In addition, to improve the correctness of the clustering, removal of stop words is proposed and the weight of each word is calculated by the proposed TF×NIDF function.
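The abstract weights terms with a TF×NIDF function but does not define it here. As a sketch only, assume NIDF is inverse document frequency normalized to [0, 1] by dividing by log N; the paper's actual formula may differ:

```python
# Assumed TF x NIDF weighting: NIDF = log(N / df) / log(N), so a term in
# one document gets NIDF = 1 and a term in every document gets NIDF = 0.
# This normalization is an illustrative guess, not the paper's definition.
import math

def tf_nidf(tf, df, n_docs):
    """tf: term frequency in a document; df: documents containing the
    term; n_docs: total documents in the collection."""
    nidf = math.log(n_docs / df) / math.log(n_docs)
    return tf * nidf

rare = tf_nidf(3, 1, 10)    # term in 1 of 10 docs keeps full TF weight
common = tf_nidf(5, 10, 10) # term in every doc is weighted to zero
```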

Comparison of error characteristics of final consonant at word-medial position between children with functional articulation disorder and normal children (기능적 조음장애아동과 일반아동의 어중자음 연쇄조건에서 나타나는 어중종성 오류 특성 비교)

  • Lee, Ran;Lee, Eunju
    • Phonetics and Speech Sciences
    • /
    • v.7 no.2
    • /
    • pp.19-28
    • /
    • 2015
  • This study investigated the error characteristics of final consonants at word-medial position in children with functional articulation disorder. Data were collected from 11 children with functional articulation disorder and 11 normal children, ages 4 to 5. Speech samples were collected with a naming test using seventy-five words covering every possible consonant sequence at word-medial position. The results were as follows. First, the percentage of correct word-medial final consonants was lower in children with functional articulation disorder than in normal children. Second, there were significant differences between the two groups in omission, substitution, and assimilation errors. Children with functional articulation disorder showed a high frequency of omission and regressive assimilation errors, with alveolarization the most frequent regressive assimilation error, whereas normal children showed a high frequency of regressive assimilation errors, with bilabialization the most frequent. Finally, when errors were analyzed by the articulation manner, articulation place, and phonation type of the word-medial initial consonant, both groups showed a high error rate in the stop-stop condition. The error rate of word-medial final consonants was high when the word-medial initial consonant was an alveolar or alveopalatal sound. Furthermore, more errors occurred when the initial sound was a fortis or aspirated sound than when it was a lenis sound. These results provide practical error characteristics of word-medial final consonants in children with speech sound disorders.

Acoustic Characteristics of Patients with Maxillary Complete Dentures (상악 총의치 장착 환자 언어의 음향학적 특성 연구)

  • Ko, Sok-Min;Hwang, Byung-Nam
    • Speech Sciences
    • /
    • v.8 no.4
    • /
    • pp.139-156
    • /
    • 2001
  • Speech intelligibility in patients with complete dentures is an important clinical problem that depends on the material used. The objective of this study was to investigate the speech of edentulous subjects fitted with complete maxillary prostheses made of two different palatal materials: chrome-cobalt alloy and acrylic resin. Three patients with complete dentures in the experimental group and ten people in the control group participated in the experiment. CSL and Visi-Pitch were used to measure speech characteristics. The test words consisted of the simple vowel /e/, meaningless three-syllable words containing fricative, affricate, and stop sounds, and the sustained fricatives /s/ and /ʃ/. The analyzed speech parameters were vowel and lateral formants, VOT, sound durations, sound pressure level, and fricative frequency. Data analysis was conducted with a series of paired t-tests. The findings were as follows: (1) The first vowel formant of patients with complete dentures was higher than that of the control group (p<0.05), while the third lateral formant was lower than that of the control group (p<0.01). (2) Patients with complete dentures produced lower speech intelligibility, with a lower fricative frequency (/ʃ/), than the control group (p<0.0). The speech intelligibility of patients with a metal prosthesis was higher than that of those with a resin prosthesis (p<0.05). (3) Fricative, lateral, and stop sound durations of patients with complete dentures were longer than those of the control group (p<0.01 and p<0.05, respectively). Total sound durations of patients with a metal prosthesis were similar to those of the control group (p<0.05), while those with a resin prosthesis had shorter durations (p<0.01), implying that patients with a metal prosthesis had higher speech intelligibility than those with a resin prosthesis. (4) Patients with complete dentures had higher sound pressure levels for /t/ and /c/ than the control group (p<0.01).
However, the sound pressure level for /c/ of patients with a metal or resin prosthesis was similar to that of the control group (p<0.05). (5) Patients with complete dentures had a higher fundamental frequency than the control group (p<0.01).
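The comparisons above rest on paired t-tests between conditions. A minimal sketch with invented duration data (the measurements below are illustrative, not from the study):

```python
# Paired t-test sketch: hypothetical sound durations (seconds) for the
# same word list produced under two prosthesis conditions. Data invented.
from scipy.stats import ttest_rel

metal = [0.21, 0.19, 0.23, 0.20, 0.22]
resin = [0.26, 0.24, 0.27, 0.25, 0.28]

# ttest_rel pairs the observations item by item, as the study's design
# (same words, different conditions) requires.
t_stat, p_value = ttest_rel(metal, resin)
```

A significant negative t statistic here would mean the first condition's durations are systematically shorter.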


A Study on Development of Patent Information Retrieval Using Textmining (텍스트 마이닝을 이용한 특허정보검색 개발에 관한 연구)

  • Go, Gwang-Su;Jung, Won-Kyo;Shin, Young-Geun;Park, Sang-Sung;Jang, Dong-Sik
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.8
    • /
    • pp.3677-3688
    • /
    • 2011
  • The patent information retrieval system can serve a variety of purposes. In general, patent information is retrieved using a limited set of keywords, and identifying earlier technology and priority rights requires repeated effort. This study proposes a method of content-based retrieval using text mining. Using the proposed algorithm, each document is assigned a characteristic value, and these values are used to compute similarities between query documents and database documents. The text analysis consists of three steps: stop-word removal, keyword analysis, and weight calculation. In the test results, general retrieval and the proposed algorithm were compared using accuracy measurements. Because the system ranks result documents by their similarity to the query document, users can improve efficiency by reviewing the most similar documents first. Also, because the full text of a patent document can be used as the query, users unfamiliar with search can use the system easily and quickly. Using content-based retrieval instead of keyword-based retrieval can also reduce the amount of missing data while extending the scope of the search.
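The three steps above map naturally onto a TF-IDF retrieval sketch: stop-word removal and keyword weighting give each document a characteristic vector, and database documents are ranked by similarity to a full-text query. The documents below are invented examples, and TF-IDF is an assumed stand-in for the paper's characteristic-value function:

```python
# Content-based retrieval sketch: vectorize database documents (stop
# words removed, terms weighted), then rank by cosine similarity to a
# full-text query. Documents are invented patent-style snippets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

database = [
    "battery electrode coating method for lithium cells",
    "wireless charging circuit for mobile devices",
    "method of coating a metal electrode surface",
]
query = ["electrode coating method for battery manufacturing"]

vec = TfidfVectorizer(stop_words="english")      # stop-word removal + weighting
db_matrix = vec.fit_transform(database)
scores = cosine_similarity(vec.transform(query), db_matrix)[0]
ranking = scores.argsort()[::-1]                 # most similar first
```

Reviewing `ranking` from the top gives the "similar documents first" behavior the abstract describes.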

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.119-138
    • /
    • 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and even monetary damage occurs more frequently. In this study, we propose a method to analyze which sentences and documents posted to SNS are related to financial fraud. First, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management, and suggested an emergency management process consisting of pre-cybercriminality (e.g., risk identification) and post-cybercriminality steps. Among these, this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing them from SNS such as Twitter. The collected data were given to two researchers to decide whether each item is related to cybercriminality, particularly financial fraud. We then selected as keywords the vocabulary items related to nominals and symbols. With the selected keywords, we searched and collected data from web sources such as Twitter, news, and blogs, gathering more than 820,000 articles. The collected articles were refined through preprocessing and turned into learning data. Preprocessing is divided into a morphological analysis step, a stop-word removal step, and a valid part-of-speech selection step. In the morphological analysis step, a complex sentence is decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text.
In the part-of-speech selection step, only nouns and symbols are considered: nouns refer to things and so express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. Each selected item is labeled 'legal' or 'illegal'; this classification is necessary to turn the selected data into learning data. The processed data are then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning data set (70%) and a test data set (30%). SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in typical cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), term frequency, and a collective intelligence method, using overall accuracy as the metric. The overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, apparently superior to term frequency, MLE, and the other methods. Hence, the results suggest that the proposed method is valid and practically usable. In this paper, we propose a framework for managing crises caused by abnormalities in unstructured data sources such as SNS. We hope this study contributes to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and to practitioners in the fields of brand management and opinion mining.
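The classification stage described above — a document-term matrix, a 70/30 split, and an SVM with gamma = 0.5 and cost = 10 — can be sketched as follows. The texts and labels are invented English stand-ins for the study's Korean SNS data:

```python
# Sketch of the study's classification setup: document-term matrix,
# 70/30 train/test split, RBF-kernel SVM with gamma=0.5, cost=10
# (scikit-learn's C parameter). Texts and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

texts = [
    "quick loan no credit check call now", "private loan same day cash wire",
    "easy money loan approval guaranteed", "instant cash loan contact number",
    "city library opens new reading room", "weather will be sunny this weekend",
    "local team wins the championship game", "museum exhibit extended to june",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = illegal ad, 0 = legitimate

X = CountVectorizer().fit_transform(texts)  # document-term matrix
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, random_state=0, stratify=labels)

clf = SVC(kernel="rbf", gamma=0.5, C=10).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # overall accuracy, the study's metric
```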

A Study on the Coupling Coefficient between ATP Antenna and ATS Antenna in Combined On-Board System (차상통합신호시스템에서 ATP 안테나와 ATS 안테나 사이의 결합계수에 관한 연구)

  • Kim, Doo-Gyum;Kim, Min-Seok;Kim, Min-Kyu;Lee, Jong-Woo
    • Proceedings of the KSR Conference
    • /
    • 2011.10a
    • /
    • pp.211-225
    • /
    • 2011
  • Railroad signalling systems control the intervals and routes of trains. They include the ATS (Automatic Train Stop), ATP (Automatic Train Protection), ATO (Automatic Train Operation), and ATC (Automatic Train Control) systems. Because various signalling systems are used in Korea, trains can operate only in sections equipped with a matching signalling system; in other words, trains cannot operate in sections that use a different signalling system. To solve this problem, a combined on-board system has recently been developed, designed by combining the ATS, ATP, and ATC systems. Information signals are received by magnetic sensors in the ATC system and by antennas in the ATS and ATP systems, so transmission problems can arise from magnetic coupling. In this paper, an electrical model of the ATS and ATP antennas is suggested, and the interference frequency caused by magnetic coupling between the ATS and ATP antennas is estimated numerically. As a result, the value of the magnetic coupling is presented without magnetic induction.
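The magnetic coupling between two antennas is conventionally summarized by the coupling coefficient k = M / √(L₁·L₂), where M is the mutual inductance and L₁, L₂ the self-inductances. A small sketch with invented values (not measurements from the paper):

```python
# Standard coupling-coefficient formula for two magnetically coupled
# coils: k = M / sqrt(L1 * L2). The inductance values are invented for
# illustration, not taken from the ATS/ATP antenna measurements.
import math

def coupling_coefficient(m, l1, l2):
    """m: mutual inductance [H]; l1, l2: self-inductances [H]."""
    return m / math.sqrt(l1 * l2)

k = coupling_coefficient(m=2e-6, l1=100e-6, l2=400e-6)
```

A value of k near zero indicates weak coupling between the ATS and ATP antennas, i.e. little risk of mutual interference.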
