• Title/Summary/Keyword: Conditional Mean Models


Determinants of student course evaluation using hierarchical linear model (위계적 선형모형을 이용한 강의평가 결정요인 분석)

  • Cho, Jang Sik
    • Journal of the Korean Data and Information Science Society / v.24 no.6 / pp.1285-1296 / 2013
  • The fundamental concern of this paper is to analyze the determinants of student course evaluations using subject-characteristic and student-characteristic variables. Because the data structure of these variables is multilevel, we use a two-level hierarchical linear model. We consider four models: (1) a null model, (2) a random-coefficient model, (3) a means-as-outcomes model, and (4) an intercepts-and-slopes-as-outcomes model. The results are as follows. First, the null model showed that subject characteristics had a much larger effect on course evaluations than student characteristics. Second, the conditional model specifying subject- and student-level predictors revealed that class size, grade, tenure, mean GPA of the class, and native class at level 1, and sex, department category, admission method, and mean GPA of the student at level 2 had statistically significant effects on course evaluation. The explained variance was 13% at the subject level and 13% at the student level.
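The null model's comparison of subject-level and student-level variance is usually summarized by the intraclass correlation coefficient (ICC). A minimal sketch, with illustrative variance components rather than the paper's estimates:

```python
# Two-level null model: y_ij = gamma_00 + u_0j + e_ij,
# with Var(u_0j) = tau00 (level-2) and Var(e_ij) = sigma2 (level-1).
def icc(tau00, sigma2):
    """Share of total variance attributable to the level-2 (group) units."""
    return tau00 / (tau00 + sigma2)

# Illustrative variance components (not taken from the paper):
tau00, sigma2 = 0.42, 0.18
rho = icc(tau00, sigma2)
print(f"ICC = {rho:.2f}")  # a large ICC means group-level differences dominate
```

A large ICC is exactly the situation the abstract describes: most of the evaluation variance sits at the subject level, which is what justifies the multilevel model over ordinary regression.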

Quadratic inference functions in marginal models for longitudinal data with time-varying stochastic covariates

  • Cho, Gyo-Young;Dashnyam, Oyunchimeg
    • Journal of the Korean Data and Information Science Society / v.24 no.3 / pp.651-658 / 2013
  • For the marginal model and the generalized estimating equations (GEE) method, the full covariates conditional mean (FCCM) assumption pointed out by Pepe and Anderson (1994) is important. With longitudinal data with time-varying stochastic covariates, this assumption may not hold, and if it is violated, biased estimates of the regression coefficients may result. However, if a diagonal working correlation matrix is used, the resulting estimates are (nearly) unbiased regardless of whether the assumption is violated (Pan et al., 2000). The quadratic inference functions (QIF) method proposed by Qu et al. (2000) is based on the generalized method of moments (GMM) applied to GEE. The QIF yields a substantial improvement in efficiency for the estimator of ${\beta}$ when the working correlation is misspecified, and equal efficiency to the GEE when the working correlation is correct (Qu et al., 2000). In this paper, we are interested in whether the QIF can improve on the GEE method when the FCCM is violated. We show that in this case the QIF with exchangeable and AR(1) working correlation matrices is neither consistent nor asymptotically normal, and may be less efficient than the GEE with an independence working correlation. Our simulation studies verify this result.
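The role of the independence (diagonal) working correlation is easy to see in the linear case: for a marginal model with an identity link, GEE with an independence working correlation reduces to pooled least squares on the stacked observations, which is why it stays (nearly) unbiased even when the FCCM fails. A minimal sketch with one covariate plus an intercept, on toy data:

```python
# GEE with identity link and independence working correlation
# = pooled least squares on the stacked data: beta = (X'X)^{-1} X'y.
def gee_independence(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept, slope

# Stacked longitudinal toy data; the subject structure is deliberately
# ignored, which is exactly what the diagonal working correlation does:
x = [0, 1, 2, 0, 1, 2]
y = [1.0, 3.0, 5.0, 1.2, 3.2, 5.2]
b0, b1 = gee_independence(x, y)
```

The efficiency question the paper studies is precisely whether QIF can beat this simple estimator once the FCCM is violated; the abstract's answer is that it cannot.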

Generalized methods of moments in marginal models for longitudinal data with time-dependent covariates

  • Cho, Gyo-Young;Dashnyam, Oyunchimeg
    • Journal of the Korean Data and Information Science Society / v.24 no.4 / pp.877-883 / 2013
  • The quadratic inference functions (QIF) method proposed by Qu et al. (2000) and the generalized method of moments (GMM) approach for marginal regression analysis of longitudinal data with time-dependent covariates proposed by Lai and Small (2007) are both based on the generalized method of moments introduced by Hansen (1982), and both use generalized estimating equations (GEE). Lai and Small (2007) divided time-dependent covariates into three types: Type I, Type II and Type III. In this paper, we compare these methods for Type II and Type III covariates, for which the full covariates conditional mean (FCCM) assumption is violated, and ask whether they can improve on the GEE with an independence working correlation. We show that in the marginal regression model with Type II time-dependent covariates, the GMM Type II estimator of Lai and Small (2007) is more efficient than the QIF, while for Type III time-dependent covariates, the QIF with an independence working correlation and the GMM Type III method give the same results. Our simulation study confirms these findings.
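Both methods rest on Hansen's (1982) idea of stacking more moment conditions than parameters and minimizing a weighted quadratic form. A toy one-parameter illustration of that objective (not Lai and Small's estimator; the sample and the identity weight are illustrative):

```python
# GMM objective Q(theta) = g(theta)' W g(theta), here with one parameter
# mu and two moment conditions: E[x - mu] = 0 and E[x^2 - mu^2 - 1] = 0
# (variance known to be 1), with W = identity.
def gmm_objective(mu, xs):
    n = len(xs)
    g1 = sum(xs) / n - mu
    g2 = sum(v * v for v in xs) / n - mu * mu - 1.0
    return g1 * g1 + g2 * g2  # identity-weighted quadratic form

xs = [1.0, 3.0, 2.0, 2.0]  # toy sample
# crude grid search for the minimizer over [1, 3]
grid = [i / 1000 for i in range(1000, 3001)]
mu_hat = min(grid, key=lambda m: gmm_objective(m, xs))
```

Because the second moment condition misstates the sample variance here, the GMM estimate is pulled slightly away from the sample mean of 2.0; this trade-off across moment conditions is what distinguishes the Type I/II/III estimators, which differ only in which moment conditions they treat as valid.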

An Empirical Study on the Asymmetric Correlation and Market Efficiency Between International Currency Futures and Spot Markets with Bivariate GJR-GARCH Model (이변량 GJR-GARCH모형을 이용한 국제통화선물시장과 통화현물시장간의 비대칭적 인과관계 및 시장효율성 비교분석에 관한 연구)

  • Hong, Chung-Hyo
    • The Korean Journal of Financial Management / v.27 no.1 / pp.1-30 / 2010
  • This paper tests the lead-lag relationship as well as the symmetric and asymmetric volatility spillover effects between international currency futures markets and spot markets. We use daily closing prices from September 15, 2003 to July 30, 2009 for five currency spot and futures markets: the British pound, Australian dollar, Canadian dollar, Brazilian real and Korean won/dollar. For this purpose we employ dynamic time series models, namely Granger causality based on a VAR and a time-varying MA(1)-GJR-GARCH(1,1)-M model. The main empirical results are as follows. First, the Granger causality tests find a bilateral lead-lag relationship between the five countries' currency spot and futures markets; the price discovery effect from futures markets to spot markets is relatively stronger than that from spot to futures markets. Second, based on the time-varying GARCH model, we find bilateral conditional mean spillover effects between the five currency spot and futures markets. Third, we also find bilateral asymmetric volatility spillover effects between the British pound, Canadian dollar, Brazilian real and won/dollar spot and futures markets; for the Australian dollar, however, there is only a unilateral asymmetric volatility spillover effect from the futures market to the spot market, and not vice versa. From these results we infer that most currency futures markets have a much better price discovery function than the corresponding spot markets, which appear informationally inefficient.
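The asymmetry the paper tests for lives in the GJR term of the variance equation: negative shocks receive an extra leverage coefficient. A minimal sketch of the GJR-GARCH(1,1) conditional variance recursion, with illustrative parameters rather than the paper's estimates:

```python
# GJR-GARCH(1,1) variance recursion (Glosten, Jagannathan and Runkle, 1993):
#   sigma2_t = omega + (alpha + gamma * I[e_{t-1} < 0]) * e_{t-1}^2
#              + beta * sigma2_{t-1}
# The indicator term gamma is what captures asymmetric volatility.
def gjr_garch_variance(shocks, omega, alpha, gamma, beta, sigma2_0):
    sigma2 = [sigma2_0]
    for e in shocks[:-1]:
        lev = gamma if e < 0 else 0.0
        sigma2.append(omega + (alpha + lev) * e * e + beta * sigma2[-1])
    return sigma2

# Illustrative parameters and shocks (not estimates from the paper):
shocks = [0.01, -0.02, 0.015, -0.01]
sig2 = gjr_garch_variance(shocks, omega=1e-6, alpha=0.05,
                          gamma=0.10, beta=0.90, sigma2_0=1e-4)
# A negative shock raises next-period variance more than a positive
# shock of the same magnitude, because gamma > 0.
```

In the bivariate version the paper uses, lagged squared shocks from the futures market also enter the spot-market variance equation (and vice versa), which is what "volatility spillover" means operationally.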


Assessing Infinite Failure Software Reliability Model Using SPC (Statistical Process Control) (통계적 공정관리(SPC)를 이용한 무한고장 소프트웨어 신뢰성 모형에 대한 접근방법 연구)

  • Kim, Hee Cheul;Shin, Hyun Cheul
    • Convergence Security Journal / v.12 no.6 / pp.85-92 / 2012
  • Many software reliability models are based on the times of occurrence of errors during the debugging of software. We show that asymptotic likelihood inference is possible for software reliability models based on an infinite failure model and non-homogeneous Poisson processes (NHPP). For someone deciding when to market software, the conditional failure rate is an important variable. Infinite failure models are used in a wide variety of practical situations; their use in characterization problems, detection of outliers, linear estimation, the study of system reliability, life testing, survival analysis, data compression and many other fields can be seen in numerous studies. Statistical process control (SPC) can monitor forecasts of software failures and thereby contribute significantly to the improvement of software reliability; control charts are widely used for software process control in the software industry. In this paper, we propose a control mechanism based on an NHPP using the mean value functions of the log Poisson, log-linear and Pareto models.
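For an NHPP, the failure count on (0, t] is Poisson with mean m(t), so a Shewhart-style chart can be built directly from the mean value function. A sketch of two of the mean value functions named above with 3-sigma style limits; parameter names and values are illustrative, not the paper's notation:

```python
import math

# Mean value functions m(t) of infinite-failure NHPPs.
def m_log_poisson(t, lam, theta):
    """Log Poisson (Musa-Okumoto): m(t) = (1/theta) * ln(1 + lam*theta*t)."""
    return math.log(1.0 + lam * theta * t) / theta

def m_log_linear(t, a, b):
    """Log-linear (Cox-Lewis): intensity exp(a + b*t),
    so m(t) = (e^a / b) * (e^{b*t} - 1)."""
    return math.exp(a) / b * (math.exp(b * t) - 1.0)

def shewhart_limits(m):
    """3-sigma style limits for a Poisson count with mean m(t)."""
    sd = math.sqrt(m)
    return max(0.0, m - 3 * sd), m + 3 * sd

m = m_log_poisson(t=100.0, lam=0.5, theta=0.05)
lcl, ucl = shewhart_limits(m)
m2 = m_log_linear(t=100.0, a=-3.0, b=0.02)
```

An observed cumulative failure count falling outside (lcl, ucl) at time t signals that the process has drifted from the fitted reliability model.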

The Assessing Comparative Study for Statistical Process Control of Software Reliability Model Based on polynomial hazard function (다항 위험함수에 근거한 NHPP 소프트웨어 신뢰모형에 관한 통계적 공정관리 접근방법 비교연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.8 no.5 / pp.345-353 / 2015
  • Many software reliability models are based on the times of occurrence of errors during the debugging of software. We show that parameter inference is possible for software reliability models based on a finite failure model and non-homogeneous Poisson processes (NHPP). For someone deciding when to market software, the conditional failure rate is an important variable. Finite failure models are used in a wide variety of practical situations; their use in characterization problems, detection of outliers, linear estimation, the study of system reliability, life testing, survival analysis, data compression and many other fields can be seen in numerous studies. Statistical process control (SPC) can monitor forecasts of software failures and thereby contribute significantly to the improvement of software reliability; control charts are widely used for software process control in the software industry. In this paper, we propose a control mechanism based on an NHPP using the mean value function of a polynomial hazard function.

The Assessing Comparative Study for Statistical Process Control of Software Reliability Model Based on Musa-Okumoto and Power-law Type (Musa-Okumoto와 Power-law형 NHPP 소프트웨어 신뢰모형에 관한 통계적 공정관리 접근방법 비교연구)

  • Kim, Hee-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.8 no.6 / pp.483-490 / 2015
  • Many software reliability models are based on the times of occurrence of errors during the debugging of software. We show that likelihood inference is possible for software reliability models based on an infinite failure model and non-homogeneous Poisson processes (NHPP). For someone deciding when to market software, the conditional failure rate is an important variable. Infinite failure models are used in a wide variety of practical situations; their use in characterization problems, detection of outliers, linear estimation, the study of system reliability, life testing, survival analysis, data compression and many other fields can be seen in numerous studies. Statistical process control (SPC) can monitor forecasts of software failures and thereby contribute significantly to the improvement of software reliability; control charts are widely used for software process control in the software industry. In this paper, we propose a control mechanism based on an NHPP using the mean value functions of the Musa-Okumoto and power-law type models.
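The power-law (Duane) model has mean value function m(t) = a·t^b, and because it inverts in closed form, the chart can equivalently be drawn in the time domain: the time at which a given expected failure count is reached. A sketch with illustrative parameters:

```python
# Power-law (Duane) NHPP: m(t) = a * t^b.
def m_power_law(t, a, b):
    return a * t ** b

def time_for_count(c, a, b):
    """Invert m(t) = c for the power-law model: t = (c/a)^(1/b)."""
    return (c / a) ** (1.0 / b)

# Illustrative parameters (not the paper's estimates):
a, b = 2.0, 0.5
t10 = time_for_count(10.0, a, b)  # time at which 10 failures are expected
```

Crossing times like t10 are what a time-domain control chart tracks: observing the tenth failure much earlier than t10 signals reliability worse than the fitted model predicts.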

Generation of He I 1083 nm Images from SDO/AIA 19.3 and 30.4 nm Images by Deep Learning

  • Son, Jihyeon;Cha, Junghun;Moon, Yong-Jae;Lee, Harim;Park, Eunsu;Shin, Gyungin;Jeong, Hyun-Jin
    • The Bulletin of The Korean Astronomical Society / v.46 no.1 / pp.41.2-41.2 / 2021
  • In this study, we generate He I 1083 nm images from Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) images using a novel deep learning method (pix2pixHD) based on conditional generative adversarial networks (cGAN). He I 1083 nm images from the National Solar Observatory (NSO)/Synoptic Optical Long-term Investigations of the Sun (SOLIS) are used as target data. We build three models: Model I takes a single SDO/AIA 19.3 nm image as input, Model II a single 30.4 nm image, and Model III both 19.3 and 30.4 nm images. We use data from October 2010 to July 2015 for training, except for June and December, which are reserved for testing. The major results of our study are as follows. First, the models successfully generate He I 1083 nm images with high correlations. Second, the model with two input images performs better than those with one input image in terms of metrics such as the correlation coefficient (CC) and root mean squared error (RMSE); for Model III with 4 by 4 binning, the CC and RMSE between real and AI-generated images are 0.84 and 11.80, respectively. Third, the AI-generated images reproduce observational features such as active regions, filaments, and coronal holes well. This work is meaningful in that our model can produce He I 1083 nm images at higher cadence and without data gaps, which would be useful for studying the time evolution of the chromosphere and coronal holes.
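The two evaluation metrics reported above are standard; written out for flat pixel lists (a pure-Python sketch on toy values, not the study's data):

```python
# Pearson correlation coefficient and RMSE between a real and a
# generated image, both flattened to 1-D pixel lists.
def pearson_cc(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

real      = [1.0, 2.0, 3.0, 4.0]  # toy pixel values
generated = [1.1, 1.9, 3.2, 3.8]
cc = pearson_cc(real, generated)
err = rmse(real, generated)
```

Note that CC is scale-free while RMSE is in the image's intensity units, which is why the study reports both.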


The effect of blood cadmium levels on hypertension in male firefighters in a metropolitan city

  • Ye-eun Jeon;Min Ji Kim;Insung Chung;Jea Chul Ha
    • Annals of Occupational and Environmental Medicine / v.34 / pp.37.1-37.15 / 2022
  • Background: This study investigated the effect of dispatch frequency on blood cadmium levels and the effect of blood cadmium levels on hypertension in male firefighters in a metropolitan city. Methods: We conducted a retrospective longitudinal study of male firefighters who completed the regular health checkups, including a health examination survey and blood cadmium measurements. We followed them for 3 years. To investigate the effect of dispatch frequency on blood cadmium levels and the effect of blood cadmium levels on hypertension, we estimated the short-term (model 1) and long-term (model 2) effects of exposure and hypothesized a reversed causal pathway model (model 3) for sensitivity analysis. Sequential conditional mean models were fitted using generalized estimating equations, and the odds ratios (ORs) and the respective 95% confidence intervals (CIs) were calculated for hypertension for log-transformed (base 2) blood cadmium levels and quartiles. Results: Using the lowest category of dispatch frequency as a reference, we observed that the highest category showed an increase in blood cadmium levels of 1.879 (95% CI: 0.673, 3.086) ㎍/dL and 0.708 (95% CI: 0.023, 1.394) ㎍/dL in models 2 and 3, respectively. In addition, we observed that doubling the blood cadmium level significantly increased the odds of hypertension in model 1 (OR: 1.772; 95% CI: 1.046, 3.003) and model 3 (OR: 4.288; 95% CI: 1.110, 16.554). Using the lowest quartile of blood cadmium levels as a reference, the highest quartile showed increased odds of hypertension in model 1 (OR: 2.968; 95% CI: 1.121, 7.861) and model 3 (OR: 33.468; 95% CI: 1.881, 595.500). Conclusions: We found that dispatch frequency may affect blood cadmium levels in male firefighters, and high blood cadmium levels may influence hypertension in a dose-response manner.
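Because blood cadmium enters the model as log2(level), the logistic coefficient translates directly into the odds ratio for a doubling of exposure: OR = exp(beta). A small sketch reproducing that translation from the doubling OR of 1.772 reported for model 1:

```python
import math

# With a base-2 log-transformed exposure, exp(beta) is the odds ratio
# for a 2-fold increase in blood cadmium.
beta = math.log(1.772)            # coefficient implied by the reported OR
or_doubling = math.exp(beta)      # odds ratio for one doubling
or_fourfold = math.exp(2 * beta)  # two doublings = a 4-fold increase
```

This is why a base-2 (rather than natural) log transform is convenient for exposure data: each unit increase on the transformed scale has the plain reading "exposure doubled".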

A Sparse Data Preprocessing Using Support Vector Regression (Support Vector Regression을 이용한 희소 데이터의 전처리)

  • Jun, Sung-Hae;Park, Jung-Eun;Oh, Kyung-Whan
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.6 / pp.789-792 / 2004
  • In fields such as web mining, bioinformatics and statistical data analysis, missing values of many kinds are found, and they make training data sparse. Usually, missing values are replaced by predicted values such as the mean or mode. More advanced imputation methods can also be used, such as the conditional mean, tree-based methods, and the Markov chain Monte Carlo algorithm. However, general imputation models share the property that their predictive accuracy decreases as the missing ratio of the training data increases, and the number of usable imputation methods shrinks at high missing ratios. To address this problem, we propose a preprocessing method for missing values based on statistical learning theory, namely Vapnik's support vector regression. The proposed method can be applied to sparse training data. We verify the performance of our model using data sets from the UCI machine learning repository.
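In practice one would use a library SVR (e.g. scikit-learn) for this kind of imputation, but the idea can be conveyed with a minimal pure-Python linear SVR fitted by subgradient descent on the ε-insensitive loss. Everything below (data, hyperparameters, the linear kernel) is an illustrative sketch, not the paper's implementation:

```python
# Linear epsilon-insensitive SVR via subgradient descent:
# minimize lam/2 * w^2 + mean(max(0, |y - (w*x + b)| - eps)).
def fit_linear_svr(xs, ys, eps=0.05, lam=1e-3, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw, gb = lam * w, 0.0
        for x, y in zip(xs, ys):
            r = (w * x + b) - y
            if r > eps:          # prediction above the eps-tube
                gw += x / n; gb += 1.0 / n
            elif r < -eps:       # prediction below the eps-tube
                gw -= x / n; gb -= 1.0 / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Complete rows train the regressor; the row with x = 3.0 has a
# missing target, which we impute from the fitted model.
xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 5.0]   # toy relation y = 2x + 1
w, b = fit_linear_svr(xs, ys)
imputed = w * 3.0 + b  # predicted value for the missing entry
```

The ε-tube is what makes SVR attractive for sparse, noisy training data: points already fitted within ε contribute nothing to the gradient, so the solution depends only on a few support points.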