• Title/Summary/Keyword: latent space model

Search results: 45

Sequential Adaptation Algorithm Based on Transformation Space Model for Speech Recognition (음성인식을 위한 변환 공간 모델에 근거한 순차 적응기법)

  • Kim, Dong-Kook;Chang, Joo-Hyuk;Kim, Nam-Soo
    • Speech Sciences
    • /
    • v.11 no.4
    • /
    • pp.75-88
    • /
    • 2004
  • In this paper, we propose a new approach to sequential linear regression adaptation of continuous density hidden Markov models (CDHMMs) based on a transformation space model (TSM). The proposed TSM, which characterizes the a priori knowledge of the training speakers associated with the maximum likelihood linear regression (MLLR) matrix parameters, is effectively described in terms of latent variable models. The TSM provides several sources of information useful for rapid adaptation, such as the correlation structure, the prior distribution, and prior knowledge of the regression parameters. A quasi-Bayes (QB) estimation algorithm is formulated to incrementally update the hyperparameters of the TSM and the regression matrices simultaneously. Experimental results show that the proposed TSM approach outperforms the conventional quasi-Bayes linear regression (QBLR) algorithm when only a small amount of adaptation data is available (a minimal sketch of the underlying MLLR transform follows this entry).

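As a rough illustration of the MLLR transform that the abstract builds on (not the paper's TSM prior or quasi-Bayes update, which are more involved), the sketch below applies an affine regression matrix W = [b | A] to a set of Gaussian mean vectors; the dimensions and values are arbitrary stand-ins.

```python
import numpy as np

# Illustrative only: apply an MLLR-style affine transform to Gaussian means.
# The transform W = [b | A] maps an extended mean [1, mu] to an adapted mean.
# Dimensions and values below are arbitrary stand-ins, not from the paper.

rng = np.random.default_rng(0)
dim = 4          # feature dimension (hypothetical)
n_gauss = 3      # number of Gaussian components in one regression class

means = rng.normal(size=(n_gauss, dim))                # original HMM mean vectors
A = np.eye(dim) + 0.05 * rng.normal(size=(dim, dim))   # rotation/scaling part
b = 0.1 * rng.normal(size=dim)                         # bias part
W = np.hstack([b[:, None], A])                         # MLLR matrix, shape (dim, dim+1)

ext = np.hstack([np.ones((n_gauss, 1)), means])        # extended means [1, mu]
adapted = ext @ W.T                                    # adapted means A @ mu + b

print("original means:\n", means)
print("adapted means:\n", adapted)
```

In the paper, a prior over such regression matrices, described with a latent variable model, is what gets updated sequentially as adaptation data arrives.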

A New Method to Retrieve Sensible Heat and Latent Heat Fluxes from the Remote Sensing Data

  • Liou Yuei-An;Chen Yi-Ying;Chien Tzu-Chieh;Chang Tzu-Yin
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.415-417
    • /
    • 2005
  • In order to retrieve the latent and sensible heat fluxes, high-resolution airborne imagery with visible, near-infrared, and thermal-infrared bands and ground-based meteorological measurements are utilized in this paper. The retrieval scheme is based on the surface energy budget balance and the momentum equations. Three basic surface parameters are used: surface albedo $(\alpha)$, the normalized difference vegetation index (NDVI), and the surface kinetic temperature $(T_0)$. The LOWTRAN 7 code is used to correct for atmospheric effects. The imagery was acquired on 28 April and 5 May 2003. From the scatter plot of the data set, we identified the extreme dry and wet pixels and fitted the dry and wet control lines, respectively. The sensible and latent heat fluxes are then derived through a partitioning factor A. The retrieved latent and sensible heat fluxes are compared with in situ measurements, including eddy-correlation and porometer measurements. It is shown that the retrieved fluxes from our scheme match the measurements better than those derived from the S-SEBI model (a simplified partitioning example follows this entry).

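The partitioning step can be pictured with a simplified evaporative-fraction calculation in the spirit of S-SEBI-type schemes; the paper derives its own dry and wet control lines and partitioning factor from the imagery, so the linear interpolation and all numbers below are hypothetical.

```python
# Simplified surface-energy-budget partitioning (hypothetical numbers).
# Available energy Rn - G is split into sensible (H) and latent (LE) heat
# using an evaporative fraction estimated from a pixel's position between
# dry and wet control lines.

def evaporative_fraction(T_pixel, T_dry, T_wet):
    """Fraction of available energy going to latent heat (0 = dry, 1 = wet)."""
    return (T_dry - T_pixel) / (T_dry - T_wet)

Rn = 550.0   # net radiation [W m-2] (assumed)
G = 60.0     # ground heat flux [W m-2] (assumed)

# Hypothetical dry/wet line temperatures at this pixel's albedo, and the
# pixel's own surface kinetic temperature, all in kelvin.
T_dry, T_wet, T_pixel = 318.0, 298.0, 306.0

lam = evaporative_fraction(T_pixel, T_dry, T_wet)   # partitioning factor
LE = lam * (Rn - G)            # latent heat flux [W m-2]
H = (1.0 - lam) * (Rn - G)     # sensible heat flux [W m-2]

print(f"evaporative fraction = {lam:.2f}, LE = {LE:.0f} W/m2, H = {H:.0f} W/m2")
```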

Non-Conservatism of Bonferroni-Adjusted Test

  • Jeon, Cyeong-Bae;Lee, Sung-Duck
    • Communications for Statistical Applications and Methods
    • /
    • v.8 no.1
    • /
    • pp.219-227
    • /
    • 2001
  • Another approach (a multi-parameter measurement method) to interlaboratory studies of test methods is presented. When the unrestricted normal likelihood for the fixed latent variable model is unbounded, we propose a means of restricting the parameter space by formulating a realistic alternative hypothesis under which the likelihood is bounded. A simulation study examined the claimed conservatism of the significance level, a claim that rests on the assumption of centrally chi-square distributed test statistics and on Bonferroni approximations. We show that a randomization approach furnishing empirical significance levels performs better than a Bonferroni adjustment (both ideas are illustrated in the sketch after this entry).

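To make the comparison concrete, the sketch below contrasts a Bonferroni-adjusted per-test level with an empirical significance level obtained by randomization for a simple simulated two-sample problem; this is not the paper's interlaboratory model, just the two adjustment ideas side by side.

```python
import numpy as np

# Bonferroni adjustment vs. a randomization (permutation) test on simulated
# two-sample data. Illustrative only; not the paper's interlaboratory setup.

rng = np.random.default_rng(1)
m = 5                      # number of simultaneous tests
alpha = 0.05
alpha_bonf = alpha / m     # Bonferroni-adjusted per-test level
print(f"Bonferroni-adjusted level for {m} tests: {alpha_bonf:.3f}")

# Randomization test for one comparison: empirical p-value for a mean difference.
x = rng.normal(0.0, 1.0, size=30)
y = rng.normal(0.4, 1.0, size=30)
observed = abs(x.mean() - y.mean())

pooled = np.concatenate([x, y])
n_perm = 10_000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = abs(pooled[:30].mean() - pooled[30:].mean())
    if diff >= observed:
        count += 1

p_empirical = (count + 1) / (n_perm + 1)
print(f"randomization p-value: {p_empirical:.4f}")
```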

Study on Designing and Installation Effect of Fresh Air Load Reduction System by using Underground Double Floor Space - Proposal of Numerical Model coupled Heat and Moisture Simultaneous Transfer in Hygroscopic - (지열을 이용한 공조외기부하저감(空調外氣負荷低減) 시스템의 설계 및 도입 효과에 관한 연구 - 증기 확산지배에 의한 열수분 동시 이동 수치모델의 제안 -)

  • Son, Won-tug;Choi, Young-sik
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.7 no.4
    • /
    • pp.331-340
    • /
    • 2004
  • This paper presents a feasibility study of a fresh-air load reduction system that uses an underground double-floor space. The fresh air is introduced into the double-slab space and passes through an opening bored into the footing beam. The air is cooled by heat exchange with the inside surface of the double-slab space in summer and heated in winter. This system not only reduces the sensible heat load of the fresh air by heat exchange with the earth, but also reduces its latent heat load through adsorption and desorption by the underground double-slab concrete. We use a model that evaluates the reduction of the fresh-air latent heat load due to the hygroscopic behavior of the air-to-earth exchange system, taking into account the coupled heat and moisture transfer in the underground double-floor space. In conclusion, the study shows the validity of the proposed method as a design tool and quantifies the effect of the system (a rough load estimate is sketched after this entry).

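A back-of-the-envelope version of the sensible and latent load reductions can be written from standard psychrometric relations; the flow rate, temperatures, and humidity ratios below are assumed values, and the paper's coupled heat-and-moisture numerical model is not reproduced.

```python
# Rough estimate of sensible and latent fresh-air load reduction when outdoor
# air is pre-conditioned in an underground double-floor space (assumed values).

rho = 1.2        # air density [kg/m3]
cp = 1.006       # specific heat of air [kJ/(kg K)]
h_fg = 2501.0    # latent heat of vaporization of water [kJ/kg]
V_dot = 0.5      # ventilation air flow rate [m3/s] (assumed)

# Summer condition (assumed): outdoor air vs. air leaving the double-slab space.
T_out, T_supply = 32.0, 27.0      # dry-bulb temperature [deg C]
w_out, w_supply = 0.019, 0.016    # humidity ratio [kg water / kg dry air]

m_dot = rho * V_dot                            # mass flow rate [kg/s]
Q_sensible = m_dot * cp * (T_out - T_supply)   # [kW]
Q_latent = m_dot * h_fg * (w_out - w_supply)   # [kW]

print(f"sensible load reduction: {Q_sensible:.2f} kW")
print(f"latent load reduction:   {Q_latent:.2f} kW")
```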

Variational Auto Encoder Distributed Restrictions for Image Generation (이미지 생성을 위한 변동 자동 인코더 분산 제약)

  • Yong-Gil Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.3
    • /
    • pp.91-97
    • /
    • 2023
  • Recent research shows that latent directions can be used to edit images toward certain attributes. However, controlling the generation process of a generative model is very difficult. Although latent directions can be used to edit images for certain attributes, many restrictions are required so that the attributes encoded in the latent vectors are enhanced according to given text prompts while other attributes remain largely unaffected. This study presents a generative model with certain restrictions on the latent vectors for image generation and manipulation. The suggested method requires only a few minutes per manipulation, and extensive simulation results obtained with a TensorFlow variational autoencoder show the effectiveness of the suggested approach (a minimal VAE sketch follows this entry).
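
As a rough sketch of the kind of model involved, the code below defines a minimal variational autoencoder in TensorFlow/Keras and adds a simple L2 penalty on part of the latent vector as a stand-in for a latent restriction; the paper's actual restriction mechanism is not reproduced, and all layer sizes and data are placeholders.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

LATENT_DIM = 8      # placeholder latent dimensionality
RESTRICTED = 4      # number of latent dimensions to penalize (hypothetical)

class Sampling(layers.Layer):
    """Reparameterization trick: z = mu + sigma * eps."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

# Encoder and decoder for flat 784-dimensional inputs (e.g. 28x28 images).
enc_in = keras.Input(shape=(784,))
h = layers.Dense(256, activation="relu")(enc_in)
z_mean = layers.Dense(LATENT_DIM)(h)
z_log_var = layers.Dense(LATENT_DIM)(h)
z = Sampling()([z_mean, z_log_var])
encoder = keras.Model(enc_in, [z_mean, z_log_var, z], name="encoder")

dec_in = keras.Input(shape=(LATENT_DIM,))
h = layers.Dense(256, activation="relu")(dec_in)
dec_out = layers.Dense(784, activation="sigmoid")(h)
decoder = keras.Model(dec_in, dec_out, name="decoder")

class RestrictedVAE(keras.Model):
    def __init__(self, encoder, decoder, penalty=0.1, **kwargs):
        super().__init__(**kwargs)
        self.encoder, self.decoder, self.penalty = encoder, decoder, penalty

    def train_step(self, data):
        with tf.GradientTape() as tape:
            z_mean, z_log_var, z = self.encoder(data)
            recon = self.decoder(z)
            recon_loss = tf.reduce_mean(tf.reduce_sum(tf.square(data - recon), axis=-1))
            kl = -0.5 * tf.reduce_mean(tf.reduce_sum(
                1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1))
            # Hypothetical "restriction": keep the first RESTRICTED latent
            # coordinates small so that edits along them stay bounded.
            restriction = tf.reduce_mean(tf.reduce_sum(tf.square(z[:, :RESTRICTED]), axis=-1))
            loss = recon_loss + kl + self.penalty * restriction
        grads = tape.gradient(loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        return {"loss": loss, "recon": recon_loss, "kl": kl}

# Tiny synthetic run just to show that the pieces fit together.
x_train = np.random.rand(256, 784).astype("float32")
vae = RestrictedVAE(encoder, decoder)
vae.compile(optimizer=keras.optimizers.Adam(1e-3))
vae.fit(x_train, epochs=1, batch_size=64, verbose=0)
```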

Reputation Analysis of Document Using Probabilistic Latent Semantic Analysis Based on Weighting Distinctions (가중치 기반 PLSA를 이용한 문서 평가 분석)

  • Cho, Shi-Won;Lee, Dong-Wook
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.58 no.3
    • /
    • pp.632-638
    • /
    • 2009
  • Probabilistic Latent Semantic Analysis (PLSA) has many applications in information retrieval and filtering, natural language processing, machine learning from text, and related areas. In this paper, we propose an algorithm that uses a weighted Probabilistic Latent Semantic Analysis model to find contextual phrases and opinions in documents. Traditional keyword search is unable to find the semantic relations of phrases; overcoming this obstacle requires techniques for automatically classifying the semantic relations of phrases. Through experiments, we show that the proposed algorithm discovers semantic relations of phrases well and represents them in the vector-space model. The proposed algorithm supports a variety of analyses, including document classification, online reputation analysis, and collaborative recommendation (the basic PLSA estimation loop is sketched after this entry).
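
For readers unfamiliar with PLSA, a minimal EM loop for the standard (unweighted) aspect model is sketched below on a toy term-document matrix; the paper's term-weighting scheme is not reproduced.

```python
import numpy as np

# Minimal PLSA (aspect model) fitted with EM on a toy term-document matrix.
# Illustrative only; the paper's weighting of terms is not reproduced here.

rng = np.random.default_rng(0)
n_docs, n_words, n_topics = 6, 8, 2
N = rng.integers(0, 5, size=(n_docs, n_words)).astype(float)  # word counts n(d, w)

# Random initialization of P(z), P(d|z), P(w|z).
Pz = np.full(n_topics, 1.0 / n_topics)
Pd_z = rng.random((n_topics, n_docs)); Pd_z /= Pd_z.sum(axis=1, keepdims=True)
Pw_z = rng.random((n_topics, n_words)); Pw_z /= Pw_z.sum(axis=1, keepdims=True)

for _ in range(50):
    # E-step: responsibilities P(z | d, w), shape (topics, docs, words).
    joint = Pz[:, None, None] * Pd_z[:, :, None] * Pw_z[:, None, :]
    Pz_dw = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)

    # M-step: re-estimate parameters from expected counts.
    weighted = N[None, :, :] * Pz_dw                  # n(d,w) * P(z|d,w)
    Pw_z = weighted.sum(axis=1)
    Pw_z /= Pw_z.sum(axis=1, keepdims=True) + 1e-12
    Pd_z = weighted.sum(axis=2)
    Pd_z /= Pd_z.sum(axis=1, keepdims=True) + 1e-12
    Pz = weighted.sum(axis=(1, 2))
    Pz /= Pz.sum() + 1e-12

print("P(z):", np.round(Pz, 3))
print("P(w|z):\n", np.round(Pw_z, 3))
```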

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been conducted actively and is showing remarkable results in fields such as classification, summarization, and generation. Among the various text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. Multi-label classification in particular requires a training method different from binary and multi-class classification because each instance can carry several labels. In addition, since the number of labels to be predicted grows with the number of labels and classes, the prediction problem becomes harder and performance improvement becomes difficult. To overcome these limitations, research on label embedding is being actively conducted: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformations, they cannot capture non-linear relationships between labels and therefore cannot create a latent label space that sufficiently preserves the information of the original labels. Recently, attempts to improve performance by applying deep learning to label embedding have increased; label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space, which can be traced to the vanishing gradient problem that occurs during backpropagation. The skip connection was devised to solve this problem: by adding a layer's input to its output, gradients are preserved during backpropagation, so efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies that use them in autoencoders or in the label embedding process are still scarce. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder, forming a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive the high-dimensional keyword label space and the low-dimensional latent label space.
Using these spaces, we conducted an experiment that predicts the compressed keyword vector in the latent label space from the paper abstract and evaluates multi-label classification after restoring the predicted keyword vector to the original label space. The accuracy, precision, recall, and F1 score used as performance indicators were far superior for multi-label classification based on the proposed methodology compared with traditional multi-label classification methods. This suggests that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improves the performance of multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and across the dimensionality of the latent label space (a minimal skip-connection autoencoder is sketched after this entry).
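
A minimal version of the described architecture, an autoencoder over the label space whose encoder and decoder each carry a skip connection, can be sketched as follows; the layer widths, the synthetic multi-label data, and the training settings are placeholders rather than the paper's configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_LABELS = 100   # size of the original (high-dimensional) label space (placeholder)
LATENT = 16      # size of the latent label space (placeholder)

def dense_block_with_skip(x, units):
    """Dense layer whose input is added back to its output (skip connection).
    A linear projection is used when the widths differ."""
    h = layers.Dense(units, activation="relu")(x)
    shortcut = layers.Dense(units, use_bias=False)(x) if x.shape[-1] != units else x
    return layers.Add()([h, shortcut])

# Encoder: label vector -> latent label vector.
lab_in = keras.Input(shape=(N_LABELS,))
e = dense_block_with_skip(lab_in, 64)
z = layers.Dense(LATENT, name="latent_labels")(e)

# Decoder: latent label vector -> reconstructed label probabilities.
d = dense_block_with_skip(z, 64)
lab_out = layers.Dense(N_LABELS, activation="sigmoid")(d)

autoencoder = keras.Model(lab_in, lab_out)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Synthetic multi-label matrix (each row holds several active labels).
rng = np.random.default_rng(0)
Y = (rng.random((512, N_LABELS)) < 0.05).astype("float32")
autoencoder.fit(Y, Y, epochs=2, batch_size=64, verbose=0)

# The trained encoder maps labels into the latent label space; a separate
# classifier (not shown) would predict these latent vectors from the text.
label_encoder = keras.Model(lab_in, z)
print(label_encoder.predict(Y[:3], verbose=0).shape)   # (3, LATENT)
```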

Finding the best suited autoencoder for reducing model complexity

  • Ngoc, Kien Mai;Hwang, Myunggwon
    • Smart Media Journal
    • /
    • v.10 no.3
    • /
    • pp.9-22
    • /
    • 2021
  • Machine learning models use input data to produce results, but sometimes the input data is too complicated for the models to learn useful patterns. Feature engineering is therefore a crucial preprocessing step for constructing a feature set that improves model performance. One of the most efficient methods for automating feature engineering is the autoencoder, which transforms the data from its original space into a latent space. However, certain factors should be considered carefully when using an autoencoder, including the dataset, the machine learning model, and the number of dimensions of the latent space (denoted by k). In this study, we design a framework to compare two data preprocessing approaches, with and without an autoencoder, and to observe the impact of these factors on the autoencoder. We then conduct experiments using autoencoders with classifiers on popular datasets. The empirical results provide a perspective on the best-suited autoencoder under these factors (a toy version of this comparison follows this entry).
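
The comparison the abstract describes can be mocked up on a small public dataset: train a classifier on raw features and on features compressed by an autoencoder with latent dimension k, then compare accuracy. The dataset, network sizes, and classifier below are arbitrary choices, not the authors' experimental setup.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Classifier on raw features vs. on autoencoder latent features (toy setup).
X, y = load_digits(return_X_y=True)
X = X.astype("float32") / 16.0
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

k = 16  # latent dimensionality, one of the factors varied in such a study

inp = keras.Input(shape=(64,))
z = layers.Dense(k, activation="relu", name="latent")(layers.Dense(32, activation="relu")(inp))
out = layers.Dense(64, activation="sigmoid")(layers.Dense(32, activation="relu")(z))
ae = keras.Model(inp, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(X_tr, X_tr, epochs=20, batch_size=64, verbose=0)

encoder = keras.Model(inp, ae.get_layer("latent").output)
Z_tr = encoder.predict(X_tr, verbose=0)
Z_te = encoder.predict(X_te, verbose=0)

clf_raw = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
clf_lat = LogisticRegression(max_iter=2000).fit(Z_tr, y_tr)
print("accuracy, raw features:   ", round(clf_raw.score(X_te, y_te), 3))
print("accuracy, latent features:", round(clf_lat.score(Z_te, y_te), 3))
```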

A Bayesian Model-based Clustering with Dissimilarities

  • Oh, Man-Suk;Raftery, Adrian
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2003.10a
    • /
    • pp.9-14
    • /
    • 2003
  • A Bayesian model-based clustering method is proposed for clustering objects on the basis of dissimilarities. It combines two basic ideas. The first is that the objects have latent positions in a Euclidean space and that the observed dissimilarities are measurements of the Euclidean distances with error. The second is that the latent positions are generated from a mixture of multivariate normal distributions, each corresponding to a cluster. We estimate the resulting model in a Bayesian way using Markov chain Monte Carlo. The method carries out multidimensional scaling and model-based clustering simultaneously, and yields good object configurations and good clustering results with reasonable measures of clustering uncertainty. In the examples we studied, the clustering results based on low-dimensional configurations were almost as good as those based on high-dimensional ones; thus the method can serve as a tool for dimension reduction when clustering high-dimensional objects, which may be especially useful for visual inspection of clusters. We also propose a Bayesian criterion for choosing the dimension of the object configuration and the number of clusters simultaneously, which is easy to compute and works reasonably well in simulations and real examples (a simplified two-stage analogue is sketched after this entry).

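The two ideas the paper combines, latent Euclidean positions recovered from dissimilarities and a Gaussian mixture over those positions, can be approximated with an ordinary two-stage pipeline: metric MDS followed by a Gaussian mixture fit. This is a simplified non-Bayesian stand-in, not the joint MCMC procedure the authors propose, and the dissimilarity data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances
from sklearn.manifold import MDS
from sklearn.mixture import GaussianMixture

# Two-stage stand-in for the paper's joint Bayesian procedure:
# (1) embed observed dissimilarities as latent Euclidean positions (MDS),
# (2) cluster the positions with a Gaussian mixture model.

X_true, labels_true = make_blobs(n_samples=120, centers=3, random_state=0)
D = pairwise_distances(X_true) + np.random.default_rng(0).normal(0, 0.05, (120, 120))
D = np.abs((D + D.T) / 2.0)     # noisy, symmetric dissimilarities
np.fill_diagonal(D, 0.0)

positions = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(D)
gmm = GaussianMixture(n_components=3, random_state=0).fit(positions)
clusters = gmm.predict(positions)

print("cluster sizes:", np.bincount(clusters))
```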

Thermal Vacuum Test of the Phase Change Material Thermal Control Unit Loaded on the Satellite Flight Model and Thermal Model Correlation with Test Results (위성에 탑재된 상변화물질 열제어장치 비행모델의 열진공시험 및 이를 통한 열해석 모델 보정)

  • Cho, Yeon;Kim, Taig Young;Seo, Joung-Ki;Jang, Tae Seong;Park, Hong-Young
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.50 no.10
    • /
    • pp.729-737
    • /
    • 2022
  • The melting and solidification processes of the PCM inside the PCMTCU (Phase Change Material Thermal Control Unit) installed on the NEXTSat-2, which is scheduled to be launched in the second half of the year, were investigated through the results of a satellite-level TVT (thermal vacuum test). The test confirmed that the latent heat of the PCM contributes to the temperature stabilization of the heat-generating components. The thermal model for numerical analysis of the PCMTCU was correlated to a reasonable degree of accuracy using the temperature measurements collected during the TVT. The periodic temperature variation of the PCMTCU in normal on-orbit operation was predicted with the correlated thermal model, and the quantitative contribution of the PCM to thermal energy management was evaluated through the liquid fraction. Flight telemetry will be received from the NEXTSat-2 after launch to complete the space verification of the PCMTCU (the role of the liquid fraction is illustrated in the sketch after this entry).
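
The role of the liquid fraction in such an analysis can be illustrated with a lumped enthalpy-method energy balance: heat absorbed beyond the melting point first raises the liquid fraction (latent storage) before the temperature rises again. All material properties and the on/off heat load below are arbitrary placeholders and do not describe the PCMTCU.

```python
# Lumped enthalpy-method sketch of a PCM under an on/off heat load.
# All properties and the load profile are arbitrary placeholders.

m = 0.5          # PCM mass [kg]
cp = 2.0e3       # specific heat, taken equal for solid and liquid [J/(kg K)]
L = 2.0e5        # latent heat of fusion [J/kg]
T_melt = 28.0    # melting temperature [deg C]
T0 = 20.0        # initial temperature [deg C]

H = 0.0                                    # enthalpy above the initial state [J]
H_melt_start = m * cp * (T_melt - T0)      # enthalpy when melting begins
H_melt_end = H_melt_start + m * L          # enthalpy when melting completes

dt = 1.0
log = []
for step in range(7200):                   # two hours, 1 s steps
    t = step * dt
    Q = 40.0 if (t % 3600.0) < 1800.0 else -20.0   # heater on/off [W]
    H = max(0.0, H + Q * dt)

    if H <= H_melt_start:                  # all solid: sensible heating only
        T, f = T0 + H / (m * cp), 0.0
    elif H <= H_melt_end:                  # melting: temperature held, f grows
        T, f = T_melt, (H - H_melt_start) / (m * L)
    else:                                  # all liquid: sensible heating again
        T, f = T_melt + (H - H_melt_end) / (m * cp), 1.0
    log.append((t, T, f))

t_end, T_end, f_end = log[-1]
print(f"after {t_end + dt:.0f} s: T = {T_end:.1f} deg C, liquid fraction = {f_end:.2f}")
```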