• Title/Summary/Keyword: baseline model


ML estimation using Poisson HGLM approach in semi-parametric frailty models

  • Ha, Il Do
    • Journal of the Korean Data and Information Science Society / v.27 no.5 / pp.1389-1397 / 2016
  • The semi-parametric frailty model with a nonparametric baseline hazard has been widely used for the analysis of clustered survival-time data. The frailty model can be fitted via an auxiliary Poisson hierarchical generalized linear model (HGLM). For inference in the frailty model, the marginal likelihood, which gives the MLE, is often used. The marginal likelihood is usually obtained by integrating out the random effects, but this often requires an intractable integral. In this paper, we propose obtaining the MLE via a Laplace approximation using a Poisson HGLM approach for the semi-parametric frailty model. The proposed HGLM approach uses the hierarchical likelihood (h-likelihood), which avoids the integration altogether. The proposed method is illustrated with a numerical study.
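The auxiliary Poisson connection referred to above comes from the piecewise-exponential representation of survival data: with a piecewise-constant baseline hazard, each (subject, interval) record contributes a Poisson likelihood whose response is the event indicator and whose offset is the log time at risk. The sketch below illustrates only that representation on simulated data with statsmodels; it omits the frailty (cluster random effect) and the h-likelihood/Laplace step that are the paper's actual contribution, and all variable names are hypothetical.

```python
# Minimal sketch (not the authors' code): the auxiliary Poisson representation of a
# survival model with a piecewise-constant baseline hazard. Each (subject, interval)
# row contributes a Poisson likelihood with the event indicator as the response and
# log(time at risk) as an offset; interval dummies play the role of the baseline hazard.
# The frailty term and the h-likelihood/Laplace approximation are omitted here.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical survival data: one covariate, random right censoring.
n = 200
x = rng.normal(size=n)
t_true = rng.exponential(scale=np.exp(-0.5 * x))
t_cens = rng.exponential(scale=2.0, size=n)
time = np.minimum(t_true, t_cens)
event = (t_true <= t_cens).astype(int)

# Split follow-up into intervals defined by cut points (piecewise-constant hazard).
cuts = np.quantile(time, [0.25, 0.5, 0.75])
rows = []
for ti, di, xi in zip(time, event, x):
    start = 0.0
    for k, c in enumerate(np.append(cuts, np.inf)):
        stop = min(ti, c)
        if stop > start:
            rows.append({"interval": k, "exposure": stop - start,
                         "event": int(di and ti <= c), "x": xi})
        if ti <= c:
            break
        start = c

aux = pd.DataFrame(rows)

# Poisson GLM with interval dummies (baseline hazard) and a log-exposure offset.
X = pd.get_dummies(aux["interval"], prefix="h", dtype=float)
X["x"] = aux["x"]
fit = sm.GLM(aux["event"], X, family=sm.families.Poisson(),
             offset=np.log(aux["exposure"])).fit()
print(fit.params)   # exp(coefficient of x) approximates the hazard ratio
```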

Customer Satisfaction Measurement Using QFD in the College (QFD를 이용한 전문대학의 고객만족평가)

  • Woo, Tae-Hee
    • Journal of the Korea Safety Management & Science / v.8 no.3 / pp.171-187 / 2006
  • Modern management considers customer satisfaction a baseline standard of performance and a possible standard of excellence for any business organization, including the college. Quality function deployment (QFD) is a structured approach to seeking out the voice of customers, understanding their needs, and ensuring that those needs are met. The strategy value proposed by Chien et al. combines importance, satisfaction, performance, and ability to enhance decision-making effectiveness. In their model, however, the correlation among the strategic alternatives is not considered in the decision chain and is therefore eliminated. This paper proposes a way to calculate new column weights that account for the various strength levels in the correlation matrix, which represents the correlation among the strategic alternatives, using a normalization procedure. The paper then presents an original customer satisfaction survey conducted in the college and demonstrates the practical usage of the design model by comparing it with Chien's model.
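The abstract does not spell out the proposed weighting formula, so the following is only a generic illustration of the idea of folding a correlation ("roof") matrix among strategic alternatives into the QFD column weights and renormalizing; the matrix, the weights, and the particular adjustment rule are assumptions made for illustration.

```python
# Illustrative sketch only: one common way to adjust QFD column weights with a
# correlation ("roof") matrix among strategic alternatives and then normalize.
# All numbers are hypothetical; this is not the formula proposed in the paper.
import numpy as np

w = np.array([0.30, 0.25, 0.25, 0.20])        # initial column weights

# Symmetric correlation matrix among four strategic alternatives
# (1 on the diagonal; off-diagonal entries encode weak/medium/strong correlation).
R = np.array([[1.0, 0.3, 0.0, 0.1],
              [0.3, 1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0, 0.3],
              [0.1, 0.0, 0.3, 1.0]])

w_adj = R @ w                                  # spread weight along correlated alternatives
w_new = w_adj / w_adj.sum()                    # normalization step
print(np.round(w_new, 3))
```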

Text-Independent Speaker Verification Using Variational Gaussian Mixture Model

  • Moattar, Mohammad Hossein; Homayounpour, Mohammad Mehdi
    • ETRI Journal / v.33 no.6 / pp.914-923 / 2011
  • This paper concerns robust and reliable speaker model training for text-independent speaker verification. The baseline speaker modeling approach is the Gaussian mixture model (GMM). In text-independent speaker verification, the amount of speech data may differ across speakers; however, we still want the modeling approach to perform equally well for all speakers. In addition, the modeling technique should be as robust as possible against unseen data. The traditional approach to GMM training is the expectation-maximization (EM) method, which is known for its overfitting problem and its weakness in handling insufficient training data. To tackle these problems, variational approximation is proposed. Variational approaches are known to be robust against overtraining and data insufficiency. We evaluated the proposed approach on two different databases, namely KING and TFarsdat. The experiments show that the proposed approach improves performance on the TFarsdat and KING databases by 0.56% and 4.81%, respectively. They also show that the variationally optimized GMM is more robust against noise, and the verification error rate in noisy environments for the TFarsdat dataset decreases by 1.52%.
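As a rough illustration of the EM-versus-variational contrast described above (not the authors' implementation), the sketch below trains both an EM-fitted and a variationally fitted GMM on a small simulated feature matrix standing in for a speaker's MFCC frames, using scikit-learn's GaussianMixture and BayesianGaussianMixture; the feature dimension and mixture size are arbitrary choices.

```python
# Sketch (not the paper's code): contrasting EM-trained and variationally trained GMMs
# on a small, hypothetical feature matrix standing in for a speaker's MFCC frames.
# scikit-learn's BayesianGaussianMixture uses variational inference, which tends to be
# less prone to overfitting when training data are scarce.
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(0)
frames = rng.normal(size=(300, 13))            # pretend MFCC frames for one speaker

em_gmm = GaussianMixture(n_components=16, covariance_type="diag",
                         reg_covar=1e-3, random_state=0).fit(frames)
vb_gmm = BayesianGaussianMixture(n_components=16, covariance_type="diag",
                                 weight_concentration_prior_type="dirichlet_process",
                                 reg_covar=1e-3, max_iter=200, random_state=0).fit(frames)

test = rng.normal(size=(50, 13))               # unseen frames
print("EM  avg log-likelihood:", em_gmm.score(test))
print("VB  avg log-likelihood:", vb_gmm.score(test))
```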

Availability of a Maintained System

  • Jung, Hai-Sung
    • International Journal of Reliability and Applications / v.3 no.4 / pp.185-198 / 2002
  • In the traditional life-testing model, it is assumed that a certain number of identical items are tested under identical conditions; this is due to statistical rather than practical considerations. The proportional hazards model can be used to develop a realistic approach to determining the performance of an item, and it can also model the failure rates in accelerated life testing when the covariates are the applied stresses. The proportional hazards model is typically applied to a group of items to assess the importance of factors that may influence the reliability of an item. In this paper we consider the interarrival times of an item, rather than the time to first failure for grouped items, and provide an availability estimate for determining the maintenance policy and overhaul time. An example is presented to demonstrate the proposed approach.
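A minimal sketch of the modeling ingredient described above, assuming hypothetical interarrival data and an applied-stress covariate: a Cox proportional hazards fit with the lifelines library. This is not the paper's analysis; it only shows how such a model is typically fitted before the results feed into an availability calculation.

```python
# Minimal sketch, not the paper's analysis: fitting a proportional hazards model to
# hypothetical interarrival (time-between-failures) data with an applied-stress covariate,
# using the lifelines library. The fitted hazard ratios could then inform an availability
# calculation such as MTBF / (MTBF + MTTR).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "interarrival": [120, 95, 60, 150, 80, 45, 130, 70],   # hours between failures
    "failed":       [1,   1,  1,  0,   1,  1,  1,   0],    # 0 = censored interval
    "stress":       [0.8, 1.0, 1.4, 0.7, 1.1, 1.5, 0.8, 0.9],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="interarrival", event_col="failed")
cph.print_summary()            # exp(coef) of "stress" is the estimated hazard ratio
```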

Analyzing Survival Data by Proportional Reversed Hazard Model

  • Gupta, Ramesh C.; Wu, Han
    • International Journal of Reliability and Applications / v.2 no.1 / pp.1-26 / 2001
  • The purpose of this paper is to introduce a proportional reversed hazard rate model, in contrast to the celebrated proportional hazards model, and to study some of its structural properties. Some ageing criteria are presented, and the inheritance of the ageing notions of the baseline distribution by the proposed model is studied. Two important data sets are analyzed: one uncensored and the other containing some censored observations. In both cases, confidence bands for the failure rate and the survival function are investigated. In one case the failure rate is bathtub shaped and in the other it is upside-down bathtub shaped, so the failure rates are non-monotonic even though the baseline failure rate is monotonic. In addition, estimates of the turning points of the failure rates are provided.
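For readers unfamiliar with the construction, the proportional reversed hazards model raises the baseline distribution function to a power, F(t) = F0(t)^θ, which multiplies the reversed hazard rate f/F by θ. The short sketch below verifies this proportionality numerically for an exponential baseline; it is an illustration only, not the paper's data analysis.

```python
# Sketch: the proportional reversed hazards (PRH) model sets F(t) = F0(t)**theta, so the
# reversed hazard rate r(t) = f(t)/F(t) becomes theta * r0(t). This checks the relation
# numerically for an exponential baseline (illustration only, not the paper's analysis).
import numpy as np

theta = 2.5
t = np.linspace(0.1, 5.0, 50)

F0 = 1.0 - np.exp(-t)                     # baseline CDF (exponential, rate 1)
f0 = np.exp(-t)                           # baseline density
r0 = f0 / F0                              # baseline reversed hazard rate

F = F0 ** theta                           # PRH model CDF
f = theta * F0 ** (theta - 1.0) * f0      # PRH model density
r = f / F                                 # PRH model reversed hazard rate

assert np.allclose(r, theta * r0)         # proportionality holds exactly
print(r[:3], theta * r0[:3])
```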

Classical and Bayesian studies for a new lifetime model in presence of type-II censoring

  • Goyal, Teena; Rai, Piyush K; Maury, Sandeep K
    • Communications for Statistical Applications and Methods / v.26 no.4 / pp.385-410 / 2019
  • This paper proposes a new class of distributions based on exponentiating the distribution function, which provides a more flexible model than the baseline model. It also proposes a new lifetime distribution that accommodates different types of hazard rates, such as decreasing, increasing, and bathtub shaped. After studying some basic statistical properties and the parameter estimation procedure for complete samples, we study point and interval estimation procedures in the presence of type-II censored samples under both classical and Bayesian paradigms. In the Bayesian paradigm, we consider a Gibbs sampler with Metropolis-Hastings steps for estimation under two different loss functions. After simulation studies, three real datasets of varying nature are considered to show the suitability of the proposed model.
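The exponentiated construction mentioned above sets G(x) = F(x)^α for a baseline CDF F, giving density g(x) = α f(x) F(x)^(α-1). The sketch below writes out that family for an exponential baseline together with a type-II censored log-likelihood; it is a generic illustration under those assumptions, not the authors' specific distribution or estimation code.

```python
# Generic illustration (not the authors' specific distribution): an "exponentiated"
# family sets G(x) = F(x)**alpha for a baseline CDF F, with density
# g(x) = alpha * f(x) * F(x)**(alpha - 1). The log-likelihood below is for a type-II
# censored sample in which only the r smallest of n observations are observed.
import numpy as np

def exp_g_logpdf(x, alpha, lam):
    """Log-density of the exponentiated-exponential model (baseline rate lam)."""
    F = 1.0 - np.exp(-lam * x)
    return np.log(alpha) + np.log(lam) - lam * x + (alpha - 1.0) * np.log(F)

def exp_g_logsf(x, alpha, lam):
    """Log of the survival function 1 - G(x)."""
    F = 1.0 - np.exp(-lam * x)
    return np.log1p(-(F ** alpha))

def type2_loglik(x_order, n, alpha, lam):
    """Type-II censored log-likelihood: x_order holds the r smallest order statistics."""
    r = len(x_order)
    # the constant term log(n!/(n-r)!) is omitted; it does not affect maximization
    return exp_g_logpdf(x_order, alpha, lam).sum() + (n - r) * exp_g_logsf(x_order[-1], alpha, lam)

x_order = np.sort(np.array([0.4, 0.7, 1.1, 1.6, 2.3]))   # hypothetical r = 5 smallest failures
print(type2_loglik(x_order, n=10, alpha=1.8, lam=0.9))
```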

A Robust Bayesian Probabilistic Matrix Factorization Model for Collaborative Filtering Recommender Systems Based on User Anomaly Rating Behavior Detection

  • Yu, Hongtao; Sun, Lijun; Zhang, Fuzhi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.9 / pp.4684-4705 / 2019
  • Collaborative filtering recommender systems are vulnerable to shilling attacks, in which malicious users may inject biased profiles to promote or demote a particular item being recommended. To tackle this problem, many robust collaborative recommendation methods have been presented. Unfortunately, most methods improve robustness at the expense of prediction accuracy. In this paper, we construct a robust Bayesian probabilistic matrix factorization model for collaborative filtering recommender systems by incorporating the detection of anomalous user rating behaviors. We first detect the anomalous rating behaviors of users with a modified K-means algorithm and a target-item identification method to generate an indicator matrix of attack users. We then incorporate the indicator matrix of attack users to construct a robust Bayesian probabilistic matrix factorization model, on which a robust collaborative recommendation algorithm is based. Experimental results on the MovieLens and Netflix datasets show that our model can significantly improve robustness and recommendation accuracy compared with three baseline methods.
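The following is only a rough sketch of the overall pipeline described above, under simplifying assumptions: suspicious users are flagged with plain k-means on per-user rating statistics (standing in for the paper's modified K-means and target-item identification), and the resulting indicator is used to down-weight those users inside an ordinary matrix-factorization loop rather than the full Bayesian PMF.

```python
# Rough sketch of the idea, not the paper's model: flag suspicious users from per-user
# rating statistics with k-means, then down-weight their ratings inside a plain
# matrix-factorization loop. The data are synthetic and the weighting rule is a stand-in
# for the paper's indicator-matrix construction.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_users, n_items, k = 60, 40, 8
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
obs = rng.uniform(size=R.shape) < 0.3                     # observed entries

# Per-user features: mean rating and fraction of extreme ratings (1 or 5).
def user_feats(u):
    r = R[u, obs[u]]
    return [r.mean(), np.mean((r == 1) | (r == 5))]

X = np.array([user_feats(u) for u in range(n_users)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
attack = labels == np.argmin(np.bincount(labels))         # smaller cluster = suspected attackers
weight = np.where(attack, 0.1, 1.0)                       # indicator -> per-user weight

# Weighted matrix factorization by SGD (plain MAP version, not the Bayesian PMF).
P = 0.1 * rng.standard_normal((n_users, k))
Q = 0.1 * rng.standard_normal((n_items, k))
lr, reg = 0.01, 0.05
for _ in range(30):
    for u, i in zip(*np.nonzero(obs)):
        pu = P[u].copy()
        e = R[u, i] - pu @ Q[i]
        g = weight[u] * e
        P[u] += lr * (g * Q[i] - reg * pu)
        Q[i] += lr * (g * pu - reg * Q[i])

rmse = np.sqrt(np.mean([(R[u, i] - P[u] @ Q[i]) ** 2 for u, i in zip(*np.nonzero(obs))]))
print("training RMSE:", round(rmse, 3))
```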

Optimized Chinese Pronunciation Prediction by Component-Based Statistical Machine Translation

  • Zhu, Shunle
    • Journal of Information Processing Systems / v.17 no.1 / pp.203-212 / 2021
  • To eliminate ambiguities in the existing methods for simplifying Chinese pronunciation learning, we propose a model that can predict the pronunciation of Chinese characters automatically. The proposed model relies on a statistical machine translation (SMT) framework. In particular, we treat the components of Chinese characters as the basic unit and cast pronunciation prediction as a machine translation procedure, with the component sequence as the source sentence and the pronunciation (pinyin) as the target sentence. In addition to traditional features such as bidirectional word translation and the n-gram language model, we also implement a component-similarity feature to overcome typos encountered in practical use. We incorporate these features into a log-linear model. The experimental results show that our approach significantly outperforms other baseline models.
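A toy sketch of the log-linear decision rule described above: each candidate pinyin sequence is scored as a weighted sum of feature functions (translation probability, language-model probability, component similarity), and the best-scoring candidate is selected. The feature values and weights below are invented for illustration; the real system derives them from bidirectional translation tables, an n-gram LM, and a component-similarity measure.

```python
# Toy sketch of a log-linear scoring rule: score(c) = sum_i lambda_i * h_i(c), with the
# highest-scoring candidate chosen. All feature values and weights here are made up.
from math import log

def loglinear_score(features, weights):
    return sum(weights[name] * value for name, value in features.items())

weights = {"p_trans": 1.0, "p_lm": 0.7, "component_sim": 0.4}

candidates = {
    "ma3": {"p_trans": log(0.6), "p_lm": log(0.30), "component_sim": 0.9},
    "ma1": {"p_trans": log(0.3), "p_lm": log(0.45), "component_sim": 0.9},
    "mo4": {"p_trans": log(0.1), "p_lm": log(0.25), "component_sim": 0.2},
}

best = max(candidates, key=lambda c: loglinear_score(candidates[c], weights))
print(best, loglinear_score(candidates[best], weights))
```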

Analyzing DNN Model Performance Depending on Backbone Network (백본 네트워크에 따른 사람 속성 검출 모델의 성능 변화 분석)

  • Chun-Su Park
    • Journal of the Semiconductor & Display Technology / v.22 no.2 / pp.128-132 / 2023
  • Recently, with the development of deep learning technology, research on pedestrian attribute recognition using deep neural networks has been actively conducted. Existing pedestrian attribute recognition techniques can be categorized as global-based, regional-area-based, visual-attention-based, sequential-prediction-based, and newly-designed-loss-function-based, depending on how the pedestrian attributes are detected. The performance of these pedestrian attribute recognition technologies is known to vary greatly depending on the type of backbone network that constitutes the deep neural network model. Therefore, in this paper, several backbone networks are applied to a baseline pedestrian attribute recognition model and the resulting performance changes are analyzed. The analysis is conducted using ResNet34, ResNet50, ResNet101, Swin-tiny, and SwinV2-tiny, which are representative backbone networks used in fields such as image classification and object detection. Furthermore, this paper analyzes the change in inference time when running each backbone network on a CPU and on a GPU.
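A hedged sketch of the backbone-swapping setup, not the paper's model: each torchvision backbone is placed under the same multi-label attribute head and a single forward pass is timed on the CPU. The attribute count and input size are arbitrary, and the Swin variants require a reasonably recent torchvision release.

```python
# Sketch (not the paper's model): swapping torchvision backbones under the same
# multi-label attribute head and timing one forward pass on the CPU.
# num_attrs and the input size are arbitrary choices for illustration.
import time
import torch
import torch.nn as nn
import torchvision.models as tvm

num_attrs = 26                                      # e.g., number of pedestrian attributes

def build_model(name):
    backbone = getattr(tvm, name)(weights=None)     # "resnet34", "resnet50", "swin_t", ...
    if hasattr(backbone, "fc"):                     # ResNet family
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
    else:                                           # Swin family exposes .head
        feat_dim = backbone.head.in_features
        backbone.head = nn.Identity()
    head = nn.Linear(feat_dim, num_attrs)           # shared multi-label attribute head
    return nn.Sequential(backbone, head)

x = torch.randn(1, 3, 224, 224)
for name in ["resnet34", "resnet50", "swin_t"]:
    model = build_model(name).eval()
    with torch.no_grad():
        t0 = time.perf_counter()
        logits = model(x)
        dt = time.perf_counter() - t0
    print(f"{name:10s} output {tuple(logits.shape)}  {dt*1000:.1f} ms (CPU)")
```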

A Study on Improving Performance of the Deep Neural Network Model for Relational Reasoning (관계 추론 심층 신경망 모델의 성능개선 연구)

  • Lee, Hyun-Ok; Lim, Heui-Seok
    • KIPS Transactions on Software and Data Engineering / v.7 no.12 / pp.485-496 / 2018
  • Deep learning, a field of artificial intelligence, has achieved remarkable results in solving problems involving unstructured data. However, it still struggles to judge situations as comprehensively as humans do, and it has not reached the level of intelligence that deduces relations among entities and predicts the next situation. Recently, deep neural networks have shown that artificial intelligence can possess powerful relational reasoning, a core intellectual ability of human beings. In this paper, to analyze and observe the performance of Relation Networks (RN) among the neural networks for relational reasoning, two types of RN-based deep neural network models were constructed and compared with the baseline model: a visual question answering RN model using Sort-of-CLEVR and a text-based question answering RN model using the bAbI task. To maximize the performance of the RN-based models, various performance improvement experiments, such as hyperparameter tuning, were proposed and performed. The effectiveness of the proposed performance improvement methods was verified by applying them to the visual QA RN model, the text-based QA RN model, and a new domain model using the dialogue-based LL dataset. As a result of the various experiments, the initial learning rate was found to be a key factor in determining the performance of both types of RN models. We observed that the optimal initial learning rate found by the proposed random search method can improve the performance of the model up to 99.8%.
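A minimal sketch of the random search over the initial learning rate described above: learning rates are sampled log-uniformly, each sample is evaluated by a short training run, and the best one is kept. The training routine below is a placeholder that returns a fake score, and the search range is an arbitrary choice.

```python
# Sketch of random search over the initial learning rate: sample log-uniformly, run a
# short trial per sample, keep the best. train_and_evaluate is a placeholder for training
# an RN model and returning validation accuracy; the range [1e-5, 1e-2] is arbitrary.
import math
import random

def sample_learning_rate(low=1e-5, high=1e-2):
    """Log-uniform sample between low and high."""
    return math.exp(random.uniform(math.log(low), math.log(high)))

def train_and_evaluate(lr):
    # Placeholder: in practice, train the RN model for a few epochs with initial
    # learning rate `lr` and return validation accuracy. Here we return a fake score.
    return 1.0 - abs(math.log10(lr) + 3.5) * 0.1

random.seed(0)
results = []
for _ in range(20):
    lr = sample_learning_rate()
    results.append((lr, train_and_evaluate(lr)))

best_lr, best_acc = max(results, key=lambda t: t[1])
print(f"best initial learning rate ~ {best_lr:.2e} (val score {best_acc:.3f})")
```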