• Title/Summary/Keyword: Text data

Search results: 2,953

Discovering Meaningful Trends in the Inaugural Addresses of United States Presidents Via Text Mining (텍스트마이닝을 활용한 미국 대통령 취임 연설문의 트렌드 연구)

  • Cho, Su Gon; Cho, Jaehee; Kim, Seoung Bum
    • Journal of Korean Institute of Industrial Engineers / v.41 no.5 / pp.453-460 / 2015
  • Identification of meaningful patterns and trends in large volumes of text data is an important task in various research areas. In the present study, we propose a procedure for finding meaningful tendencies based on a combination of text mining, cluster analysis, and low-dimensional embedding. To demonstrate the applicability and effectiveness of the proposed procedure, we analyzed the inaugural addresses of the presidents of the United States from 1789 to 2009. The main results of this study show that trends in the national policy agenda can be discovered through clustering and visualization algorithms.
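The clustering-and-embedding procedure described above can be sketched with off-the-shelf tools. The snippet below is an illustration only, using invented speech fragments and scikit-learn, not the authors' exact pipeline:

```python
# Illustrative sketch: cluster a tiny corpus with TF-IDF features,
# then project the documents into 2-D for visualization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

speeches = [
    "the union of states must be preserved",
    "we face a crisis of the economy and jobs",
    "liberty and union for every citizen",
    "jobs, industry, and economic renewal",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(speeches)

# Group the documents into two thematic clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Low-dimensional embedding (2 components) for plotting trends over time.
coords = PCA(n_components=2).fit_transform(X.toarray())
print(labels, coords.shape)
```

Plotting `coords` in chronological order is what makes the cluster trends visible.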

Amazon product recommendation system based on a modified convolutional neural network

  • Yarasu Madhavi Latha; B. Srinivasa Rao
    • ETRI Journal / v.46 no.4 / pp.633-647 / 2024
  • On e-commerce platforms, sentiment analysis of an enormous number of user reviews efficiently enhances user satisfaction. In this article, an automated product recommendation system is developed based on machine- and deep-learning models. In the initial step, the text data are acquired from the Amazon Product Reviews (APR) dataset, which includes 60,000 customer reviews: 14,806 neutral, 19,567 negative, and 25,627 positive. The text data are then denoised using techniques such as stop-word removal, stemming, segregation, lemmatization, and tokenization. Removing stop words and other noise (duplicate and inconsistent text) improves classification performance and decreases the model's training time. Next, vectorization is accomplished using the term frequency-inverse document frequency technique, which converts the denoised text into numerical vectors for faster processing. The resulting feature vectors are fed to the modified convolutional neural network model for sentiment analysis. The empirical results show that the proposed model obtained a mean accuracy of 97.40% on the APR dataset.
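The denoising and vectorization steps described in this abstract can be sketched as follows. The reviews and the stop-word list are invented for illustration, and scikit-learn's TF-IDF stands in for the paper's implementation:

```python
# Minimal sketch of stop-word removal + tokenization, then TF-IDF vectorization.
import re
from sklearn.feature_extraction.text import TfidfVectorizer

reviews = [
    "The battery life is great and the screen is sharp",
    "Terrible battery, the charger stopped working",
    "Okay product, nothing special about it",
]

STOP_WORDS = {"the", "is", "and", "it", "about", "a"}  # toy lexicon

def denoise(text: str) -> str:
    tokens = re.findall(r"[a-z']+", text.lower())              # tokenization
    return " ".join(t for t in tokens if t not in STOP_WORDS)  # stop-word removal

cleaned = [denoise(r) for r in reviews]
X = TfidfVectorizer().fit_transform(cleaned)  # documents -> numerical vectors
print(X.shape)  # (number of reviews, vocabulary size)
```

The resulting sparse matrix `X` is what a downstream classifier (here, the paper's modified CNN) would consume.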

Research on Construction Quality Problem Prevention

  • Shaohua Jiang; Jingqi Zhang
    • International conference on construction engineering and project management / 2024.07a / pp.846-854 / 2024
  • Preventing construction quality problems is a direct guarantee of a project's success. Nevertheless, quality-problem prevention frequently overlooks how issues are coupled with one another, which can result in a domino effect of quality issues. To address this, this work first preprocesses unstructured text data on coupled quality problems. The preprocessed data are then used to build a knowledge base for the prevention of construction quality problems. Next, a text similarity algorithm is used to mine the coupling relationships between quality problems and enrich the information in the database. Finally, sample texts are used as test objects to verify the validity of the method. This study enriches research on the prevention of construction quality problems.
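The text-similarity mining step can be illustrated with cosine similarity over TF-IDF vectors. The quality-problem records below are hypothetical, and scikit-learn stands in for whatever similarity algorithm the paper uses:

```python
# Sketch: pairwise cosine similarity between quality-problem descriptions,
# used here as a proxy for their coupling strength.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

problems = [
    "concrete cracking on the second floor slab",
    "slab cracking caused by early formwork removal",
    "leaking pipe joint in the basement plumbing",
]

X = TfidfVectorizer().fit_transform(problems)
sim = cosine_similarity(X)

# Problems 0 and 1 share vocabulary ("cracking", "slab"), so they come out
# more strongly coupled than either is to problem 2.
print(sim.round(2))
```

Thresholding such a matrix is one simple way to decide which problem pairs to record as coupled in the knowledge base.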

Construction of an Internet of Things Industry Chain Classification Model Based on IRFA and Text Analysis

  • Zhimin Wang
    • Journal of Information Processing Systems / v.20 no.2 / pp.215-225 / 2024
  • With the rapid development of the Internet of Things (IoT) and big data technology, a large amount of data is generated during the operation of related industries. How to classify these data accurately has become central to research on data mining and processing in the IoT industry chain. This study constructs a classification model of the IoT industry chain based on an improved random forest algorithm (IRFA) and text analysis, aiming to achieve efficient and accurate classification of IoT industry chain big data by improving traditional algorithms. The accuracy, precision, recall, and AUC values of the traditional random forest algorithm and the proposed algorithm are compared on different datasets. The experimental results show that the proposed model performs better across datasets: its accuracy and recall exceed the traditional algorithm's on four datasets, and its accuracy on two datasets, P-I Diabetes and Loan Default, exceeds the random forest model's, yielding better final classification results. This model enables accurate classification of the massive data generated in the IoT industry chain, providing more research value for its data mining and processing technology.
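The evaluation protocol (accuracy, recall, and AUC on held-out data) can be sketched with a plain scikit-learn random forest. Note that this uses synthetic data and the baseline algorithm, not the paper's improved variant or its datasets:

```python
# Sketch of the comparison metrics: train a random forest on a synthetic
# binary classification task and report accuracy, recall, and AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]  # class-1 probabilities for AUC

print("accuracy:", accuracy_score(y_te, pred))
print("recall:  ", recall_score(y_te, pred))
print("AUC:     ", roc_auc_score(y_te, proba))
```

Running the same protocol with two models on the same splits is what makes the paper's side-by-side comparison meaningful.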

Full-text databases as a means for resource sharing (자원공유 수단으로서의 전문 데이터베이스)

  • 노진구
    • Journal of Korean Library and Information Science Society / v.24 / pp.45-79 / 1996
  • Rising publication costs and declining financial resources have renewed librarians' interest in resource sharing. Although the idea of sharing resources is not new, there is a sense of urgency not seen in the past. Driven by rising publication costs and static, often shrinking budgets, librarians are embracing resource sharing as an idea whose time may finally have come. Resource sharing in electronic environments is shifting the concept of the library from a warehouse of print-based collections to a point of access to needed information. Much of the library's material will be delivered in electronic form or in print. In this new paradigm, libraries cannot be expected to support research solely from their own collections. These changes, along with improved communications, computerization of administrative functions, fax and digital delivery of articles, and advances in data storage technologies, are improving the procedures and means for delivering needed information to library users. In short, for resource sharing to be truly effective and efficient, automation and data communication are essential. This article examines the possibility of using full-text online databases as a supplement to interlibrary loan for document delivery. The findings of the study can be summarized as follows. First, the turnaround time and cost of obtaining a hard copy of a journal article from online full-text databases were comparable to those of other document delivery services. Second, the use of full-text online databases should be considered as a method for promoting interlibrary loan services, as it is more cost-effective and labour-saving. Third, for full-text databases to work as a document delivery system, they must contain as many periodicals as possible and be loaded on as many systems as possible. Fourth, for full-text databases to include many scholarly research journals, guidelines covering electronic document delivery and electronic reserves are needed. Fifth, to realize truly comprehensive full-text databases, more advanced information technologies are needed.


Bankruptcy Prediction Modeling Using Qualitative Information Based on Big Data Analytics (빅데이터 기반의 정성 정보를 활용한 부도 예측 모형 구축)

  • Jo, Nam-ok; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.33-56 / 2016
  • Many researchers have focused on developing bankruptcy prediction models to secure enhanced performance, using modeling techniques such as statistical methods, including multiple discriminant analysis (MDA) and logit analysis, or artificial intelligence techniques, including artificial neural networks (ANN), decision trees, and support vector machines (SVM). Most bankruptcy prediction models in academic studies have used financial ratios as their main input variables. The bankruptcy of firms is associated with a firm's financial state and the external economic situation. However, the inclusion of qualitative information, such as the economic atmosphere, has not been actively discussed, despite the fact that relying only on financial ratios has drawbacks. Accounting information, such as financial ratios, is based on past data and is usually determined one year before bankruptcy. Thus, a time lag exists between the point of closing financial statements and the point of credit evaluation. In addition, financial ratios do not capture environmental factors, such as external economic situations. Therefore, using only financial ratios may be insufficient for constructing a bankruptcy prediction model, because they essentially reflect past corporate internal accounting information while neglecting recent information. Thus, qualitative information must be added to the conventional bankruptcy prediction model to supplement the accounting information. Due to the lack of an analytic mechanism for obtaining and processing qualitative information from various information sources, previous studies have rarely used qualitative information. Recently, however, big data analytics, such as text mining techniques, have been drawing much attention in academia and industry, with an increasing amount of unstructured text data available on the web. A few previous studies have sought to adopt big data analytics in business prediction modeling.
Nevertheless, the use of qualitative information on the web for business prediction modeling is still in its early stages, restricted to limited applications such as stock prediction and movie revenue prediction. Thus, it is necessary to apply big data analytics techniques, such as text mining, to various business prediction problems, including credit risk evaluation. Analytic methods are required for processing qualitative information represented in unstructured text form, owing to the complexity of managing and processing unstructured text data. This study proposes a bankruptcy prediction model for Korean small- and medium-sized construction firms using both quantitative information, such as financial ratios, and qualitative information acquired from economic news articles. The performance of the proposed method depends on how well the qualitative information is transformed into quantitative information suitable for incorporation into the bankruptcy prediction model. We employ big data analytics techniques, especially text mining, as a mechanism for processing qualitative information. A sentiment index is provided at the industry level, extracted from a large amount of text data to quantify the external economic atmosphere represented in the media. The proposed method involves keyword-based sentiment analysis using a domain-specific sentiment lexicon to extract sentiment from economic news articles. The generated sentiment lexicon is designed to represent sentiment for the construction business by considering the relationship between an occurring term and the actual economic condition of the industry, rather than the inherent semantics of the term. The experimental results showed that incorporating qualitative information based on big data analytics into the traditional accounting-based bankruptcy prediction model is effective for enhancing predictive performance.
The sentiment variable extracted from economic news articles had an impact on corporate bankruptcy prediction. In particular, a negative sentiment variable improved the accuracy of corporate bankruptcy prediction, because the bankruptcy of construction firms is sensitive to poor economic conditions. The bankruptcy prediction model using qualitative information based on big data analytics contributes to the field in that it reflects not only relatively recent information but also environmental factors, such as external economic conditions.
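The keyword-based sentiment scoring described above can be sketched in a few lines. The lexicon entries and articles here are invented for illustration; they are not the paper's domain-specific lexicon:

```python
# Toy sketch of keyword-based sentiment scoring with a polarity lexicon.
LEXICON = {"boom": 1, "growth": 1, "recovery": 1,
           "slump": -1, "default": -1, "downturn": -1}

articles = [
    "construction orders show recovery and steady growth",
    "industry faces a slump as defaults rise",
]

def sentiment_index(text: str) -> int:
    # Sum the lexicon polarities of the terms occurring in the article.
    return sum(LEXICON.get(tok, 0) for tok in text.lower().split())

scores = [sentiment_index(a) for a in articles]
print(scores)  # prints [2, -1]
```

Averaging such scores over all articles in a period gives an industry-level sentiment index of the kind the paper feeds into its prediction model.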

Identifying Research Trends in Big data-driven Digital Transformation Using Text Mining (텍스트마이닝을 활용한 빅데이터 기반의 디지털 트랜스포메이션 연구동향 파악)

  • Minjun, Kim
    • Smart Media Journal / v.11 no.10 / pp.54-64 / 2022
  • A big data-driven digital transformation is defined as a process that aims to innovate companies by triggering significant changes to their capabilities and designs through the use of big data and various technologies. For a successful big data-driven digital transformation, reviewing the related literature is necessary, as it enhances understanding of the research status and helps identify key research topics and the relationships among them. However, understanding and describing the literature is challenging, considering its volume and variety, and establishing a common ground for central concepts is essential for science. To clarify the key research topics on big data-driven digital transformation, we carry out a comprehensive literature review by performing text mining of 439 articles. Text mining is applied to learn and identify specific topics, and the suggested key references are manually reviewed to develop a state-of-the-art overview. A total of 10 key research topics, and the relationships among them, are identified. This study contributes a systematized view of studies on big data-driven digital transformation dispersed across multiple disciplines and encourages further academic discussion and industrial transformation.

A Study on Incremental Learning Model for Naive Bayes Text Classifier (Naive Bayes 문서 분류기를 위한 점진적 학습 모델 연구)

  • 김제욱; 김한준; 이상구
    • The Journal of Information Technology and Database / v.8 no.1 / pp.95-104 / 2001
  • In the text classification domain, labeling the training documents is an expensive process because it requires human expertise and is a tedious, time-consuming task. Therefore, it is important to reduce the manual labeling of training documents while improving the text classifier. Selective sampling, a form of active learning, reduces the number of training documents that need to be labeled by examining the unlabeled documents and selecting the most informative ones for manual labeling. We apply this methodology to Naive Bayes, a text classifier renowned as a successful method in text classification. One of the most important issues in selective sampling is determining the criterion for selecting training documents from the large pool of unlabeled documents. In this paper, we propose two measures for this criterion: the mean absolute deviation (MAD) and the entropy measure. The experimental results, using the Reuters-21578 corpus, show that the proposed learning method improves the Naive Bayes text classifier more than existing methods do.
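Entropy-based selective sampling, one of the two criteria the paper considers, can be sketched as follows. The toy labeled set and unlabeled pool stand in for the Reuters-21578 corpus:

```python
# Sketch: pick the unlabeled document the current Naive Bayes model is
# least certain about (highest posterior entropy) for manual labeling.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

labeled = ["grain prices rise", "oil output falls"]
labels = ["grain", "oil"]
pool = ["wheat and grain exports", "crude oil pipeline", "grain or oil futures"]

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(labeled), labels)

# Posterior class probabilities for each unlabeled document.
P = clf.predict_proba(vec.transform(pool))

# Entropy measure: higher entropy means the classifier is less certain.
entropy = -(P * np.log(P + 1e-12)).sum(axis=1)
most_informative = int(entropy.argmax())
print(pool[most_informative])  # the mixed "grain or oil" document
```

The selected document would be labeled by a human and added to the training set, and the loop repeats.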


A New Steroidal Glycoside from Allium macrostemon Bunge

  • Kim, Yun Sik; Cha, Joon Min; Kim, Dong Hyun; Lee, Tae Hyun; Lee, Kang Ro
    • Natural Product Sciences / v.24 no.1 / pp.54-58 / 2018
  • A phytochemical investigation of Allium macrostemon Bunge (Liliaceae) afforded a new pregnane steroidal glycoside, named allimacroside F (1), along with three known glycosides: benzyl-O-α-L-rhamnopyranosyl-(1→6)-β-D-glucopyranoside (2), phenylethyl-O-α-L-rhamnopyranosyl-(1→6)-β-D-glucopyranoside (3), and (Z)-3-hexenyl-O-α-L-rhamnopyranosyl-(1→6)-β-D-glucopyranoside (4). The identification and structural elucidation of the new compound (1) were carried out based on spectroscopic data analyses (¹H-NMR, ¹³C-NMR, ¹H-¹H COSY, HSQC, HMBC, and NOESY) and HR-FAB-MS.

An Optimal Weighting Method in Supervised Learning of Linguistic Model for Text Classification

  • Mikawa, Kenta; Ishida, Takashi; Goto, Masayuki
    • Industrial Engineering and Management Systems / v.11 no.1 / pp.87-93 / 2012
  • This paper discusses a new weighting method for text analysis from the viewpoint of supervised learning. The term frequency-inverse document frequency (tf-idf) measure is a well-known weighting method for information retrieval, and it can be used for text analysis as well. However, it is a heuristic weighting method whose effectiveness has not been clarified theoretically, so a more effective weighting measure may exist for document classification problems. In this study, we propose an optimal weighting method for document classification from the viewpoint of supervised learning. The proposed measure is more suitable for the text classification problem than the tf-idf measure because it exploits the training data. The effectiveness of our proposal is demonstrated by simulation experiments on text classification problems involving newspaper articles and customer reviews posted on websites.
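For reference, the classic tf-idf weighting that the paper takes as its baseline can be computed directly. This is one common variant of the formula (raw term frequency normalized by document length, natural-log idf), not the authors' proposed supervised weights:

```python
# Reference sketch of tf-idf weighting over a toy tokenized corpus.
import math

docs = [["cat", "sat"], ["cat", "ran"], ["dog", "ran"]]
N = len(docs)

def tf_idf(term: str, doc: list[str]) -> float:
    tf = doc.count(term) / len(doc)          # normalized term frequency
    df = sum(term in d for d in docs)        # document frequency
    return tf * math.log(N / df)             # idf = log(N / df)

print(round(tf_idf("cat", docs[0]), 3))
```

The paper's point is that such weights are fixed a priori, whereas a supervised scheme can learn weights that better separate the training classes.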