• Title/Summary/Keyword: LDA Extension (LDA 확장)

Extensions of LDA by PCA Mixture Model and Class-wise Features (PCA 혼합 모형과 클래스 기반 특징에 의한 LDA의 확장)

  • Kim Hyun-Chul; Kim Daijin; Bang Sung-Yang
    • Journal of KIISE: Software and Applications, v.32 no.8, pp.781-788, 2005
  • LDA (Linear Discriminant Analysis) is a data discrimination technique that seeks a transformation maximizing the ratio of the between-class scatter to the within-class scatter. While it has been successfully applied to several applications, it has two limitations, both concerning the underfitting problem. First, it fails to discriminate data with complex distributions, since all data in each class are assumed to be distributed in a Gaussian manner; second, it can lose class-wise information, since it produces only one transformation over the entire range of classes. We propose three extensions of LDA to overcome these problems. The first extension addresses the first problem by modeling the within-class scatter with a PCA mixture model, which can represent more complex distributions. The second extension addresses the second problem by taking a different transformation for each class in order to provide class-wise features. The third extension combines these two modifications by representing each class in terms of the PCA mixture model and taking a different transformation for each mixture component. All of the proposed extensions are shown to outperform LDA in terms of classification error for handwritten digit recognition and alphabet recognition.
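
To make the criterion being extended concrete, here is a minimal numpy sketch of standard LDA via the between-/within-class scatter matrices. It is not the paper's code; the proposed extensions would replace the single within-class scatter model and the single transform W with a PCA mixture model and class-wise (or component-wise) transforms.

```python
import numpy as np
from scipy.linalg import eigh

def lda_transform(X, y, n_components):
    """Standard LDA: maximize |W^T Sb W| / |W^T Sw W| via a generalized
    eigenproblem.  The paper's extensions replace the single Gaussian
    within-class model (Sw) with a PCA mixture model and learn a separate
    transform per class (or per mixture component)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))          # within-class scatter
    Sb = np.zeros((d, d))          # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * diff @ diff.T
    # Generalized eigenproblem Sb w = lambda Sw w; keep leading eigenvectors.
    eigvals, eigvecs = eigh(Sb, Sw + 1e-6 * np.eye(d))  # small ridge for stability
    order = np.argsort(eigvals)[::-1]
    W = eigvecs[:, order[:n_components]]
    return X @ W, W
```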

On Optimizing LDA-extensions Using a Pre-Clustering (사전 클러스터링을 이용한 LDA-확장법들의 최적화)

  • Kim, Sang-Woon; Koo, Byum-Yong; Choi, Woo-Young
    • Journal of the Institute of Electronics Engineers of Korea CI, v.44 no.3, pp.98-107, 2007
  • In high-dimensional pattern recognition tasks such as face classification, the small number of training samples leads to the Small Sample Size (SSS) problem, which arises when the number of pattern samples is smaller than the dimensionality. Recently, various approaches, including LDA, PCA+LDA, and Direct-LDA, have been developed to address this problem. This paper proposes a method of improving classification efficiency by increasing the number of (sub)classes through pre-clustering of the training set prior to the execution of Direct-LDA. In LDA (or Direct-LDA), the number of classes in the training set limits the dimensionality to which the data can be reduced, so this number is increased to the number of sub-classes obtained through clustering, improving the classification performance of LDA extensions. In other words, the eigenspace of the training set consists of the range space and the null space, and the dimensionality of the range space increases as the number of classes increases. Therefore, when constructing the transformation matrix, minimizing the null space minimizes the loss of discriminative information caused by that space. Experimental results on artificial X-OR data as well as the benchmark face databases AT&T and Yale demonstrate that the proposed method improves classification efficiency.
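
The pre-clustering step can be illustrated with a short sketch (not the paper's implementation, which applies the idea to Direct-LDA): each class is split into sub-classes with k-means, and LDA is then fitted on the sub-class labels, so the usable discriminant dimensionality grows with the number of (sub)classes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def preclustered_lda(X, y, subclusters_per_class=2, n_components=None):
    """Split each class into sub-classes via k-means, then fit LDA on the
    sub-class labels.  Plain sklearn LDA is used here only to illustrate the
    relabeling step; the paper uses Direct-LDA."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    sub_labels = np.empty(len(y), dtype=int)
    next_label = 0
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        km = KMeans(n_clusters=subclusters_per_class, n_init=10).fit(X[idx])
        sub_labels[idx] = km.labels_ + next_label
        next_label += subclusters_per_class
    lda = LinearDiscriminantAnalysis(n_components=n_components)
    return lda.fit_transform(X, sub_labels), sub_labels
```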

Topic Expansion based on Infinite Vocabulary Online LDA Topic Model using Semantic Correlation Information (무한 사전 온라인 LDA 토픽 모델에서 의미적 연관성을 사용한 토픽 확장)

  • Kwak, Chang-Uk; Kim, Sun-Joong; Park, Seong-Bae; Kim, Kweon Yang
    • KIISE Transactions on Computing Practices, v.22 no.9, pp.461-466, 2016
  • Topic expansion is a method that reflects external data to improve the quality of learned topics. An online-learning topic model is not appropriate for topic expansion using external data, because it cannot reflect unseen words in the learned topic model. In this study, we propose a topic expansion method using infinite-vocabulary online LDA. When an unseen word appears during learning, the proposed method allocates it to a topic after calculating the semantic correlation between the unseen word and each topic. To evaluate the proposed method, we compared it with an existing topic expansion method. The results indicate that, by reflecting external documents, the proposed method captures additional information not contained in the broadcasting scripts, and that it outperforms the existing method in a coherence evaluation.
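
The allocation step for unseen words might look like the following sketch; the embedding source and the averaging rule are assumptions made for illustration, and the infinite-vocabulary online LDA machinery itself is not reproduced.

```python
import numpy as np

def assign_unseen_word(word_vec, topic_top_word_vecs):
    """Assign an out-of-vocabulary word to the topic whose top words are,
    on average, most similar to it (cosine similarity).  `word_vec` and
    `topic_top_word_vecs` are assumed to come from any pretrained word
    embedding; one list of vectors is given per topic."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = [np.mean([cos(word_vec, v) for v in vecs])
              for vecs in topic_top_word_vecs]
    return int(np.argmax(scores)), scores
```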

Query Expansion based on Knowledge Extraction and Latent Dirichlet Allocation for Clinical Decision Support (의학 문서 검색을 위한 지식 추출 및 LDA 기반 질의 확장)

  • Jo, Seung-Hyeon; Lee, Kyung-Soon
    • Annual Conference on Human and Language Technology, 2015.10a, pp.31-34, 2015
  • In this paper, we propose an LDA-based query expansion method for clinical decision support that extracts knowledge from UMLS and Wikipedia and uses query-type information. Each query consists of the symptoms a patient is experiencing. Using UMLS and Wikipedia, we extract disease names together with their associated symptoms, diagnostic tests, and treatments. This extracted medical knowledge is then used to identify disease names related to the query, and the additional symptoms, tests, and treatments of those diseases are selected as expansion terms. In addition, after running LDA, we extract the word-topic clusters related to the query and the document-topic clusters highly related to the initial retrieval results, match the clusters that share the same topic number, and select the medical terms in the matched word-topic clusters as expansion terms. To verify the effectiveness of the proposed method, we evaluate it on the TREC Clinical Decision Support (CDS) 2014 test collection.
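
A rough sketch of the topic-matching step described in the abstract; the input structures (topic_word_scores, query_topic_ids, feedback_topic_ids) are hypothetical names, and the UMLS/Wikipedia knowledge-extraction side is omitted.

```python
def select_expansion_terms(topic_word_scores, query_topic_ids, feedback_topic_ids,
                           terms_per_topic=5):
    """Intersect the topics related to the query with the topics related to
    the initial retrieval results (by topic index) and take the top words of
    the shared topics as expansion terms.  `topic_word_scores` is assumed to
    be {topic_id: [(word, prob), ...]} from any trained LDA model."""
    shared = set(query_topic_ids) & set(feedback_topic_ids)
    expansion = []
    for t in shared:
        ranked = sorted(topic_word_scores[t], key=lambda wp: wp[1], reverse=True)
        expansion.extend(w for w, _ in ranked[:terms_per_topic])
    return list(dict.fromkeys(expansion))     # dedupe while keeping order
```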

The MeSH-Term Query Expansion Models using LDA Topic Models in Health Information Retrieval (MeSH 기반의 LDA 토픽 모델을 이용한 검색어 확장)

  • You, Sukjin
    • Journal of Korean Library and Information Science Society, v.52 no.1, pp.79-108, 2021
  • Information retrieval in the health field presents several challenges. Health terminology is difficult for consumers (laypeople) to understand, and formulating a query with professional terms is not easy for them because health-related terms are more familiar to health professionals. If health terms related to a query are automatically added, consumers can find relevant information more easily. The proposed query expansion (QE) models show how to expand a query using MeSH terms. Documents were represented by the MeSH terms found in the full-text articles (i.e., Bag-of-MeSH), and these MeSH terms were then used to train LDA (Latent Dirichlet Allocation) topic models. A query and the top k retrieved documents were used to find MeSH terms as topic words related to the query. LDA topic words were filtered by threshold values of topic probability (TP) and word probability (WP). These thresholds were effective, for an LDA model with a specific number of topics, at increasing IR performance in terms of infAP (inferred Average Precision) and infNDCG (inferred Normalized Discounted Cumulative Gain), common IR metrics for large collections with incomplete judgments. The top k words were chosen by a word score based on TP × WP and the retrieved-document ranking in an LDA model with specific thresholds. The QE model with specific TP and WP thresholds improved the mean infAP and infNDCG scores compared with the baseline.
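
The TP/WP thresholding and TP × WP scoring can be sketched as follows; the data layouts and default threshold values are assumptions, not the paper's settings.

```python
def mesh_query_expansion(doc_topic_probs, topic_word_probs,
                         tp_threshold=0.1, wp_threshold=0.01, top_k=10):
    """Keep topics whose probability TP passes tp_threshold, keep MeSH terms
    whose within-topic probability WP passes wp_threshold, and rank candidate
    terms by TP * WP.  Input shapes are assumptions:
    doc_topic_probs = {doc_id: {topic_id: TP}},
    topic_word_probs = {topic_id: {mesh_term: WP}}."""
    scores = {}
    for doc, topics in doc_topic_probs.items():
        for t, tp in topics.items():
            if tp < tp_threshold:
                continue
            for term, wp in topic_word_probs.get(t, {}).items():
                if wp < wp_threshold:
                    continue
                scores[term] = max(scores.get(term, 0.0), tp * wp)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [term for term, _ in ranked[:top_k]]
```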

Feature Expansion based on LDA Word Distribution for Performance Improvement of Informal Document Classification (비격식 문서 분류 성능 개선을 위한 LDA 단어 분포 기반의 자질 확장)

  • Lee, Hokyung; Yang, Seon; Ko, Youngjoong
    • Journal of KIISE, v.43 no.9, pp.1008-1014, 2016
  • Data such as Twitter posts, Facebook posts, and customer reviews belong to the informal document group, whereas newspapers, which undergo a grammar correction step, belong to the formal document group. Finding consistent rules or patterns in informal documents is difficult compared to formal documents, so additional approaches are needed to improve informal document analysis. In this study, we classified Twitter data, a representative informal document type, into ten categories. To improve performance, we revised and expanded features based on the LDA (Latent Dirichlet Allocation) word distribution. Using the top-ranked LDA words, the other words were separated or bundled, and the feature set was thus expanded repeatedly. Finally, we conducted document classification with the expanded features. Experimental results indicate that the proposed method improved the micro-averaged F1-score by 7.11 percentage points compared to the results before the feature expansion step.
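
A toy sketch of topic-based feature bundling; adding a TOPIC_k feature when a token is among topic k's top LDA words is one possible reading of the abstract, not the paper's exact procedure.

```python
def expand_features(tokens, topic_top_words):
    """Tokens that appear among a topic's top-ranked LDA words get an extra
    bundled feature for that topic, so sparse informal-text features are
    grouped by topic.  `topic_top_words` is assumed to be
    {topic_id: set_of_top_words} from a trained LDA model."""
    expanded = list(tokens)
    for t, top_words in topic_top_words.items():
        if any(tok in top_words for tok in tokens):
            expanded.append(f"TOPIC_{t}")      # bundled topic-level feature
    return expanded
```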

Topic Model Augmentation and Extension Method using LDA and BERTopic (LDA와 BERTopic을 이용한 토픽모델링의 증강과 확장 기법 연구)

  • Kim, SeonWook; Yang, Kiduk
    • Journal of the Korean Society for Information Management, v.39 no.3, pp.99-132, 2022
  • The purpose of this study is to propose AET (Augmented and Extended Topics), a novel method of synthesizing LDA and BERTopic results, and to analyze recently published LIS articles with it as an experimental approach. To this end, 55,442 abstracts from 85 LIS journals in the WoS database, spanning January 2001 to October 2021, were analyzed. AET first constructs a WORD2VEC-based cosine similarity matrix between the LDA and BERTopic results, extracts AT (Augmented Topics) by repeating matrix reordering and segmentation procedures as long as their semantic relations remain valid, and finally determines ET (Extended Topics) by removing any LDA-related residual subtopics from the matrix and ordering the rest by F1 (BERTopic topic size rank, inverse cosine similarity rank). Compared with the baseline LDA result, AT effectively concretized the original LDA topic model, and ET discovered new meaningful topics that LDA did not. In the qualitative performance evaluation, AT performed better than LDA, while ET showed similar performance except in a few cases.
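
The first step of AET, the WORD2VEC-based cosine similarity matrix between LDA and BERTopic topics, might look like the following sketch; representing each topic by the mean embedding of its top words is an assumption, and the reordering/segmentation steps are not shown.

```python
import numpy as np

def topic_similarity_matrix(lda_topic_vecs, bert_topic_vecs):
    """Cosine-similarity matrix between LDA topics and BERTopic topics, each
    topic represented by one vector (e.g. the average word-embedding of its
    top words)."""
    A = np.asarray(lda_topic_vecs, dtype=float)       # (n_lda, dim)
    B = np.asarray(bert_topic_vecs, dtype=float)      # (n_bert, dim)
    A = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-12)
    B = B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-12)
    return A @ B.T                                    # (n_lda, n_bert) cosine matrix
```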

Transformation Technique for Null Space-Based Linear Discriminant Analysis with Lagrange Method (라그랑지 기법을 쓴 영 공간 기반 선형 판별 분석법의 변형 기법)

  • Hou, Yuxi; Min, Hwang-Ki; Song, Iickho; Choi, Myeong Soo; Park, Sun; Lee, Seong Ro
    • The Journal of Korean Institute of Communications and Information Sciences, v.38C no.2, pp.208-212, 2013
  • Due to the singularity of the within-class scatter, linear discriminant analysis (LDA) becomes ill-posed for small sample size (SSS) problems. An extension of LDA, null space-based LDA (NLDA), provides good discriminant performance for SSS problems. In this paper, by applying the Lagrange technique, we derive a procedure that transforms the problem of finding the NLDA feature extractor into a linear equation problem.
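
For reference, a minimal numpy sketch of the null space-based LDA projection that the paper reformulates; it uses a plain SVD/eigendecomposition route rather than the Lagrange-based linear-equation derivation proposed in the paper.

```python
import numpy as np

def nlda_transform(Sb, Sw, n_components):
    """Null space-based LDA: search for discriminant directions inside the
    null space of the within-class scatter Sw, where the between-class
    scatter Sb is then maximized by an ordinary eigendecomposition."""
    # Null space of Sw via SVD: right singular vectors with ~zero singular values.
    U, s, Vt = np.linalg.svd(Sw)
    null_basis = Vt[s < 1e-10 * s.max()].T            # (d, r) basis of null(Sw)
    Sb_null = null_basis.T @ Sb @ null_basis          # project Sb into null(Sw)
    eigvals, eigvecs = np.linalg.eigh(Sb_null)
    order = np.argsort(eigvals)[::-1][:n_components]
    return null_basis @ eigvecs[:, order]             # columns: NLDA directions
```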

A Study on the Document Topic Extraction System for LDA-based User Sentiment Analysis (LDA 기반 사용자 감정분석을 위한 문서 토픽 추출 시스템에 대한 연구)

  • An, Yoon-Bin; Kim, Hak-Young; Moon, Yong-Hyun; Hwang, Seung-Yeon; Kim, Jeong-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.21 no.2, pp.195-203, 2021
  • Recently, big data, a major technology in the IT field, has been expanding into various industrial sectors, and research on how to utilize it is actively underway. In most Internet industries, user reviews help users decide whether to purchase a product, but screening positive, negative, and helpful reviews from a vast number of product reviews takes a great deal of time. Therefore, this paper designs and implements a system that analyzes and aggregates keywords using LDA, a big data analysis technique, to provide meaningful information to users. For document topic extraction, this study crawls data from the domestic book industry by domain and conducts big data analysis. The system helps buyers by providing comprehensive information on products based on user review topics and appraisal words, and furthermore, a product's outlook can be identified through analysis of the review status.
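
A generic sketch of the LDA keyword-aggregation part of such a pipeline, using scikit-learn; the crawling and sentiment-aggregation stages are omitted and the parameter values are placeholders, not the paper's settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def review_topic_keywords(reviews, n_topics=5, n_top_words=10):
    """Fit an LDA topic model on raw review texts and return the top keywords
    per topic, as a building block for review summarization."""
    vectorizer = CountVectorizer(max_features=5000)
    X = vectorizer.fit_transform(reviews)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(X)
    vocab = vectorizer.get_feature_names_out()
    topics = []
    for comp in lda.components_:                      # word weights per topic
        top = comp.argsort()[::-1][:n_top_words]
        topics.append([vocab[i] for i in top])
    return topics
```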

Systemic Analysis of Research Activities and Trends Related to Artificial Intelligence(A.I.) Technology Based on Latent Dirichlet Allocation (LDA) Model (Latent Dirichlet Allocation (LDA) 모델 기반의 인공지능(A.I.) 기술 관련 연구 활동 및 동향 분석)

  • Chung, Myoung Sug; Lee, Joo Yeoun
    • Journal of Korea Society of Industrial Information Systems, v.23 no.3, pp.87-95, 2018
  • Recently, with the technological development of artificial intelligence, the related market has been expanding rapidly. In the artificial intelligence field, which is still at an early stage yet continues to expand, it is important to reduce uncertainty about research directions and investment areas. Therefore, this study examined technology trends using text mining and topic modeling, among other big data analysis methods, and identified trends in core technologies and their future growth potential. We hope that the results of this study will give researchers an understanding of artificial intelligence technology trends and new implications for future research directions.