• Title/Summary/Keyword: term-document matrix

Search Results: 46

Review of Wind Energy Publications in Korea Citation Index using Latent Dirichlet Allocation (잠재디리클레할당을 이용한 한국학술지인용색인의 풍력에너지 문헌검토)

  • Kim, Hyun-Goo;Lee, Jehyun;Oh, Myeongchan
    • New & Renewable Energy
    • /
    • v.16 no.4
    • /
    • pp.33-40
    • /
    • 2020
  • The research topics of more than 1,900 wind energy papers registered in the Korean Journal Citation Index (KCI) were modeled into 25 topics using latent Dirichlet allocation (LDA), and their consistency was cross-validated through principal component analysis (PCA) of the document-word matrix. Key research topics in the wind energy field were identified as "offshore, wind farm," "blade, design," "generator, voltage, control," "dynamic, load, noise," and "performance test." As a new way to determine the similarity between the research topics of journals, a systematic evaluation method was proposed that analyzes the correlation between topics by constructing a journal-topic matrix (JTM) and clustering journals by topic similarity. By evaluating 24 journals that each published more than 20 wind energy papers, it was confirmed that they fall into meaningful clusters of mechanical engineering, electrical engineering, marine engineering, and renewable energy. The proposed systematic method is expected to be applicable to evaluating the specialization of journals in subsequent studies.
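The clustering step described above can be sketched in a few lines. This is an illustrative sketch, not the paper's code: journals are compared by the cosine similarity of their rows in a journal-topic matrix, and similar journals are grouped greedily. The journal names, topic counts, and the 0.9 threshold below are all hypothetical.

```python
import math

# Hypothetical journal-topic matrix (JTM): rows are journals,
# columns are topic frequencies.
jtm = {
    "Journal A": [12, 0, 3],
    "Journal B": [10, 1, 2],
    "Journal C": [0, 8, 9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def cluster(matrix, threshold=0.9):
    # Greedy single pass: a journal joins the first cluster whose
    # representative row is similar enough, else it starts a new cluster.
    clusters = []  # list of (representative_vector, [journal names])
    for name, vec in matrix.items():
        for rep, members in clusters:
            if cosine(rep, vec) >= threshold:
                members.append(name)
                break
        else:
            clusters.append((vec, [name]))
    return [members for _, members in clusters]

print(cluster(jtm))
```

With these toy rows, the first two journals share a topic profile and fall into one cluster, while the third stands alone. The paper's actual pipeline (25 LDA topics, 24 journals) would use the same idea at larger scale.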

Increasing Accuracy of Classifying Useful Reviews by Removing Neutral Terms (중립도 기반 선택적 단어 제거를 통한 유용 리뷰 분류 정확도 향상 방안)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.129-142
    • /
    • 2016
  • Customer product reviews have become one of the important factors in purchase decision making. Customers believe that reviews written by others who have already experienced a product offer more reliable information than that provided by sellers. However, when there are too many products and reviews, the advantage of e-commerce can be overwhelmed by increasing search costs. Reading all of the reviews to find out the pros and cons of a certain product can be exhausting. To help users find the most useful information about products without much difficulty, e-commerce companies provide various ways for customers to write and rate product reviews. Different methods have been developed to classify and recommend useful reviews to customers, primarily using feedback provided by customers about the helpfulness of reviews. Most shopping websites provide customer reviews and offer the following information: the average preference for a product, the number of customers who have participated in preference voting, and the preference distribution. Most information on the helpfulness of product reviews is collected through a voting system. Amazon.com asks customers whether a review of a certain product is helpful, and it places the most helpful favorable review and the most helpful critical review at the top of the list of product reviews. Some companies also predict the usefulness of a review based on attributes such as its length, author(s), and the words used, publishing only reviews that are likely to be useful. Text mining approaches have been used to classify useful reviews in advance. To apply a text mining approach to all reviews of a product, we need to build a term-document matrix: we extract all words from the reviews and build a matrix of the number of occurrences of each term in each review.
Since there are many reviews, the term-document matrix becomes very large, which makes it difficult to apply text mining algorithms. Researchers therefore delete sparse terms, since sparse words have little effect on classification or prediction. The purpose of this study is to suggest a better way of building the term-document matrix by deleting terms that are useless for review classification. We propose a neutrality index for selecting the words to be deleted. Many words appear in both classes - useful and not useful - and these words have little or even a negative effect on classification performance. We therefore define such words as neutral terms and delete terms that appear similarly in both classes. After deleting sparse words, we selected additional words to delete based on neutrality. We tested our approach with Amazon.com review data from five product categories: Cell Phones & Accessories; Movies & TV; Automotive; CDs & Vinyl; and Clothing, Shoes & Jewelry. We used reviews that received more than four votes, with a 60% ratio of useful votes to total votes as the threshold for classifying reviews as useful or not useful. We randomly selected 1,500 useful reviews and 1,500 not-useful reviews for each product category, then applied Information Gain and Support Vector Machine algorithms to classify the reviews and compared the classification performance in terms of precision, recall, and F-measure. Although performance varies by product category and data set, deleting terms by both sparsity and neutrality showed the best F-measure for the two classification algorithms. However, deleting terms by sparsity alone showed the best recall for Information Gain, and using all terms showed the best precision for SVM. Thus, term-deletion methods and classification algorithms need to be chosen carefully based on the data set.
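A neutrality-style score of the kind this abstract describes can be sketched as follows. The abstract does not give the paper's exact formula, so this is a hypothetical scoring: a term is "neutral" when its document frequency is nearly balanced between the useful and not-useful classes, and such terms are dropped before building the term-document matrix. All counts and the 0.8 cutoff are invented for illustration.

```python
# Hypothetical (useful, not_useful) document frequencies per term.
term_counts = {
    "battery": (120, 30),
    "the":     (900, 880),
    "refund":  (15, 140),
    "product": (400, 390),
}

def neutrality(useful, not_useful):
    # 1.0 = perfectly balanced between classes; 0.0 = fully one-sided.
    total = useful + not_useful
    if total == 0:
        return 1.0
    return 1.0 - abs(useful - not_useful) / total

def filter_neutral(counts, max_neutrality=0.8):
    # Keep only terms that discriminate between the two classes.
    return sorted(t for t, (u, n) in counts.items()
                  if neutrality(u, n) <= max_neutrality)

print(filter_neutral(term_counts))
```

Here "the" and "product" occur almost equally in both classes and are dropped, while "battery" and "refund" lean strongly toward one class and are kept.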

Enhancing Document Clustering using Important Term of Cluster and Wikipedia (군집의 중요 용어와 위키피디아를 이용한 문서군집 향상)

  • Park, Sun;Lee, Yeon-Woo;Jeong, Min-A;Lee, Seong-Ro
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.2
    • /
    • pp.45-52
    • /
    • 2012
  • This paper proposes a new method for enhancing document clustering that uses the important terms of clusters and Wikipedia. The proposed method represents the concepts of cluster topics well by selecting the important terms of each cluster via the semantic features of NMF. It addresses the "bag of words" problem, in which meaningful relationships between documents and clusters are not considered, by expanding the important terms of a cluster with synonyms from Wikipedia. It also improves the quality of document clustering by using the expanded important terms to refine the initial clusters through re-clustering. Experimental results demonstrate that the proposed method achieves better performance than other document clustering methods.

Creation and clustering of proximity data for text data analysis (텍스트 데이터 분석을 위한 근접성 데이터의 생성과 군집화)

  • Jung, Min-Ji;Shin, Sang Min;Choi, Yong-Seok
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.3
    • /
    • pp.451-462
    • /
    • 2019
  • The document-term frequency matrix is a type of data used in text mining. This matrix is often built from various documents produced by the objects to be analyzed. When analyzing objects using this matrix, researchers generally select as keywords only terms that are common across the documents belonging to one object, and those keywords are used to analyze the object. However, this method misses the unique information of individual documents and removes potential keywords that occur frequently in a specific document. In this study, we define data that can overcome this problem as proximity data. We introduce twelve methods for generating proximity data and cluster the objects with two clustering methods, multidimensional scaling and k-means cluster analysis. Finally, we choose the method best suited to clustering the objects.
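One simple instance of proximity data of the kind discussed above is a cosine-distance matrix between objects' rows of a document-term frequency matrix, which can then feed multidimensional scaling or k-means. This is an illustrative sketch, not one of the paper's twelve methods; the toy counts are hypothetical.

```python
import math

# Hypothetical document-term frequency matrix: rows = objects,
# columns = term frequencies.
doc_term = [
    [3, 0, 1],
    [2, 1, 0],
    [0, 4, 4],
]

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - (dot / norm if norm else 0.0)

def proximity_matrix(rows):
    # Symmetric matrix of pairwise cosine distances, rounded for display.
    n = len(rows)
    return [[round(cosine_distance(rows[i], rows[j]), 3) for j in range(n)]
            for i in range(n)]

for row in proximity_matrix(doc_term):
    print(row)
```

The diagonal is zero (each object is identical to itself) and off-diagonal entries grow as term profiles diverge, which is exactly the structure distance-based clustering methods expect.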

A Tensor Space Model based Semantic Search Technique (텐서공간모델 기반 시멘틱 검색 기법)

  • Hong, Kee-Joo;Kim, Han-Joon;Chang, Jae-Young;Chun, Jong-Hoon
    • The Journal of Society for e-Business Studies
    • /
    • v.21 no.4
    • /
    • pp.1-14
    • /
    • 2016
  • Semantic search is known as a set of activities and techniques for improving search accuracy by clearly understanding users' search intent without large cognitive effort. Usually, semantic search engines require an ontology and semantic metadata to analyze user queries. However, building an ontology and semantic metadata for large amounts of data is a very time-consuming and costly task, which is why commercialization of semantic search remains limited. To resolve this problem, we propose a novel semantic search method that takes advantage of our previous semantic tensor space model. Since each term is represented as a 2nd-order 'document-by-concept' tensor (i.e., a matrix), and each concept as a 2nd-order 'document-by-term' tensor in the model, the proposed semantic search method does not require building an ontology. Through extensive experiments using the OHSUMED document collection and SCOPUS journal abstract data, we show that the proposed method outperforms the vector space model-based search method.

Association between Efficiency and Quality of Health Care in South Korea Long-term Care Hospitals: Using the Data Envelopment Analysis and Matrix Analysis (요양병원의 효율성과 의료서비스 질의 관련성: 자료포락분석과 매트릭스 분석)

  • Sohn, Minsung;Choi, Mankyu
    • Journal of Korean Academy of Nursing
    • /
    • v.44 no.4
    • /
    • pp.418-427
    • /
    • 2014
  • Purpose: The objectives of this study were to investigate the association between efficiency and quality of health care in long-term care hospitals (LTCH) and to determine related factors that would enable achieving both high efficiency and high quality at the same time. Methods: The major data source was the "2012 Korean Assessment of Propriety by Long-term Care Hospitals" obtained from the Health Insurance Review & Assessment Service. Cost variables were supplemented by National Tax Service accounting documents. First, data envelopment analysis was performed to generate an efficiency score for each LTCH. Second, matrix analysis was conducted to ascertain the association between efficiency and quality. Lastly, Kruskal-Wallis and Mann-Whitney tests were conducted to identify related factors. Results: First, efficiency and quality of care are not in a trade-off relationship; thus, LTCH can be confident that high efficiency and high quality can be achieved together. Second, LTCH with a large number of beds, longer tenure of medical personnel, and adequate levels of investment were more likely to have improved quality as well as efficiency. Conclusion: It is essential to enforce legal standards appropriate to the facilities, reduce turnover of nursing staff, and invest properly in human resources. This will help LTCH maintain the balance of high efficiency and high quality in the long run.

Analysis of Consumer Awareness of Cycling Wear Using Web Mining (웹마이닝을 활용한 사이클웨어 소비자 인식 분석)

  • Kim, Chungjeong;Yi, Eunjou
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.5
    • /
    • pp.640-649
    • /
    • 2018
  • This study analyzed consumer awareness of cycling wear using web mining, one of the big data analysis methods. The texts of postings and comments related to cycling wear from 2006 to 2017 at the Naver cafe 'people who commute by bicycle' were collected and analyzed using R packages. A total of 15,321 documents were used for the data analysis. Keywords for cycling wear were extracted using a Korean morphological analyzer (KoNLP) and converted to a term-document matrix (TDM) and a co-occurrence matrix to calculate keyword frequencies. The most frequent keyword for cycling wear was 'tights', including the opinion that wearers feel embarrassed because they are too tight. When purchasing cycling wear, consumers appeared to consider 'price', 'size', and 'brand'. 'Low price' and 'cost-effectiveness' have appeared more frequently since 2016, which indicates that consumers increasingly prefer practical products. Moreover, the findings showed that it is necessary to improve not only the design and wearability but also material functionality, such as sweat absorbance and quick drying, and the function of the pad. These results are similar to those of previous questionnaire-based studies. The approach is therefore expected to serve as an objective indicator that can be reflected in product development through real-time analysis of consumer opinions and requirements using web mining.
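The TDM-to-co-occurrence conversion mentioned above can be sketched directly: for each pair of terms, count the documents in which both appear. This is an illustrative sketch, not the study's R code; the terms and binary incidence values are hypothetical.

```python
# Hypothetical binary term-document matrix:
# rows = terms, columns = documents (1 = term appears in document).
terms = ["tights", "price", "size"]
tdm = [
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 0, 1, 1],
]

def cooccurrence(matrix):
    # co[i][j] = number of documents containing both term i and term j.
    n = len(matrix)
    co = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                co[i][j] = sum(a and b for a, b in zip(matrix[i], matrix[j]))
    return co

print(cooccurrence(tdm))
```

The resulting matrix is symmetric with a zeroed diagonal; high entries flag keyword pairs (say, 'price' with 'tights') that tend to be discussed together.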

A Re-Ranking Retrieval Model based on Two-Level Similarity Relation Matrices (2단계 유사관계 행렬을 기반으로 한 순위 재조정 검색 모델)

  • 이기영;은희주;김용성
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.11
    • /
    • pp.1519-1533
    • /
    • 2004
  • When Web-based specialized retrieval systems for scientific fields severely restrict the expression of users' information requests, the process of content analysis and that of information acquisition become inconsistent. In this paper, we apply a fuzzy retrieval model to reduce the high time complexity of the retrieval system by constructing a reduced term set based on the terms' relative importance. Furthermore, we perform cluster retrieval to reflect the user's query exactly, through a similarity relation matrix satisfying the properties of a fuzzy compatibility relation. We demonstrate the performance of the proposed re-ranking model, which is based on the similarity union of the fuzzy retrieval model and the document cluster retrieval model.
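One common way to turn a similarity relation matrix into document clusters, in the spirit of the abstract above, is an alpha-cut: documents whose pairwise similarity meets a threshold fall into the same cluster. This sketch is not the paper's model; the similarity values and the 0.8 threshold are hypothetical.

```python
# Hypothetical symmetric similarity relation matrix over 3 documents.
sim = [
    [1.0, 0.9, 0.2],
    [0.9, 1.0, 0.3],
    [0.2, 0.3, 1.0],
]

def alpha_cut_clusters(matrix, alpha):
    # Greedy grouping: document j joins document i's cluster
    # when sim[i][j] >= alpha.
    n = len(matrix)
    clusters = []
    assigned = set()
    for i in range(n):
        if i in assigned:
            continue
        group = [i] + [j for j in range(n)
                       if j != i and j not in assigned and matrix[i][j] >= alpha]
        assigned.update(group)
        clusters.append(sorted(group))
    return clusters

print(alpha_cut_clusters(sim, 0.8))
```

Documents 0 and 1 (similarity 0.9) cluster together at alpha = 0.8, while document 2 remains alone; lowering alpha would merge the clusters.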

Patent Keyword Analysis for Forecasting Emerging Technology : GHG Technology (부상기술 예측을 위한 특허키워드정보분석에 관한 연구 - GHG 기술 중심으로)

  • Choe, Do Han;Kim, Gab Jo;Park, Sang Sung;Jang, Dong Sik
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.9 no.2
    • /
    • pp.139-149
    • /
    • 2013
  • As technology forecasting becomes increasingly important for countries and companies managing R&D projects, forecasting methodologies have diversified. One such method is patent analysis. This research proposes a quick forecasting process for emerging technology based on a keyword approach using text mining. The forecasting process is as follows. First, a term-document matrix is extracted from patent documents using text mining. Second, emerging-technology keywords are extracted by analyzing the importance of each term, using the mean and standard deviation of its frequency, and its emerging trend, discovered from its time series information. Next, the association between terms is measured using cosine similarity. Finally, the keywords of emerging technologies are selected from the synthesized results, and the emerging technologies are forecast accordingly. The technology forecasting process described in this paper can be applied to a computerized technology forecasting system that integrates the results of other patent analyses for decision makers in companies and countries.
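The mean/standard-deviation step of the process above can be sketched as follows. This is an illustrative sketch, not the paper's method: a term whose yearly frequency is both volatile (high standard deviation) and rising is flagged as an emerging-technology candidate. The terms, yearly frequencies, and the 1.0 cutoff are hypothetical.

```python
import statistics

# Hypothetical yearly patent-keyword frequencies (e.g., 2009-2012).
yearly_freq = {
    "carbon capture": [2, 5, 9, 15],
    "steam turbine":  [12, 11, 12, 11],
}

def importance(series):
    # Mean and population standard deviation of a term's time series.
    return statistics.mean(series), statistics.pstdev(series)

def is_emerging(series):
    mean, std = importance(series)
    # High variability plus a final value above the mean suggests a rising term.
    return std > 1.0 and series[-1] > mean

emerging = sorted(t for t, s in yearly_freq.items() if is_emerging(s))
print(emerging)
```

Here "carbon capture" climbs steadily and is flagged, while the flat "steam turbine" series is not; the paper additionally links candidate terms via cosine similarity before the final selection.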

Variational Expectation-Maximization Algorithm in Posterior Distribution of a Latent Dirichlet Allocation Model for Research Topic Analysis

  • Kim, Jong Nam
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.7
    • /
    • pp.883-890
    • /
    • 2020
  • In this paper, we propose a variational expectation-maximization algorithm that computes posterior probabilities for a Latent Dirichlet Allocation (LDA) model. The algorithm approximates the intractable posterior distribution of a document-term matrix generated from a corpus of 50 papers. It approximates the posterior by searching for local optima of a lower bound on the true posterior distribution. Specifically, it maximizes the lower bound of the log-likelihood by minimizing the KL divergence (relative entropy) between the approximate and true posterior distributions. The experimental results indicate that documents clustered to image classification and segmentation are correlated at 0.79, while those clustered to object detection and image segmentation are highly correlated at 0.96. The proposed variational inference algorithm runs efficiently and faster than Gibbs sampling, at a computational time of 0.029 s.
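The KL divergence that variational inference minimizes can be computed directly for discrete distributions. This is an illustrative sketch, not the paper's implementation; the two probability vectors (standing in for an approximate vs. true topic distribution) are hypothetical.

```python
import math

def kl_divergence(p, q):
    # KL(p || q) = sum_i p_i * log(p_i / q_i), with 0 * log(0/q) taken as 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical approximate and true posterior topic distributions.
approx = [0.5, 0.3, 0.2]
true_post = [0.4, 0.4, 0.2]

print(round(kl_divergence(approx, true_post), 4))
```

The divergence is zero only when the two distributions coincide, so driving it down tightens the lower bound on the log-likelihood, which is exactly the mechanism the variational EM algorithm exploits.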