• Title/Summary/Keyword: variable coefficients

TWO VARIABLE HIGHER-ORDER FUBINI POLYNOMIALS

  • Kim, Dae San;Kim, Taekyun;Kwon, Hyuck-In;Park, Jin-Woo
    • Journal of the Korean Mathematical Society / v.55 no.4 / pp.975-986 / 2018
  • A new family of Fubini-type numbers and polynomials associated with Apostol-Bernoulli numbers and polynomials was recently introduced by Kilar and Simsek ([5]), and we study the two variable Fubini polynomials as Appell polynomials whose coefficients are the Fubini polynomials. In this paper, we use umbral calculus to study two variable higher-order Fubini polynomials. We derive some of their properties, explicit expressions, and recurrence relations. In addition, we express the two variable higher-order Fubini polynomials in terms of several families of special polynomials, and vice versa.
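
For orientation, the two variable higher-order Fubini polynomials studied in this paper are usually defined by the generating function below. This is the standard convention in the Fubini-polynomial literature, added here as background rather than quoted from the paper itself:

```latex
% Two variable Fubini polynomials of order r (standard convention):
\left(\frac{1}{1-y(e^{t}-1)}\right)^{r} e^{xt}
  = \sum_{n=0}^{\infty} F_{n}^{(r)}(x,y)\,\frac{t^{n}}{n!}
% For r = 1 and x = 0 this reduces to the classical Fubini (ordered Bell)
% polynomials F_n(y); F_n(1) counts the ordered set partitions of {1,...,n}.
```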

H-likelihood approach for variable selection in gamma frailty models

  • Ha, Il-Do;Cho, Geon-Ho
    • Journal of the Korean Data and Information Science Society / v.23 no.1 / pp.199-207 / 2012
  • Recently, variable selection methods using a penalized likelihood with a shrinkage penalty function have been widely studied in various statistical models, including generalized linear models and survival models. In particular, they select important variables and estimate the coefficients of the covariates simultaneously. In this paper, we develop a penalized h-likelihood method for variable selection in gamma frailty models. For this we use the smoothly clipped absolute deviation (SCAD) penalty function, which has good properties for variable selection. The proposed method is illustrated with a simulation study and a practical data set.
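
The SCAD penalty of Fan and Li (2001) that is plugged into the h-likelihood here has a simple closed form; below is a minimal Python sketch of it (the function name and the schematic objective in the comment are mine, not the paper's):

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001), evaluated elementwise.

    Near zero it behaves like the LASSO (lam * |theta|); for large
    coefficients it flattens out, so big effects are not over-shrunk.
    a = 3.7 is the value recommended by Fan and Li.
    """
    t = np.abs(theta)
    small = lam * t                                            # |theta| <= lam
    mid = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))    # lam < |theta| <= a*lam
    large = (a + 1) * lam**2 / 2                               # |theta| > a*lam
    return np.where(t <= lam, small, np.where(t <= a * lam, mid, large))

# Penalized h-likelihood, schematically:
#   maximize  h(beta, v) - n * sum(scad_penalty(beta, lam))
print(scad_penalty(np.array([0.1, 1.0, 5.0]), lam=0.5))
```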

Variable selection in Poisson HGLMs using h-likelihood

  • Ha, Il Do;Cho, Geon-Ho
    • Journal of the Korean Data and Information Science Society / v.26 no.6 / pp.1513-1521 / 2015
  • Selecting relevant variables for a statistical model is very important in regression analysis. Recently, variable selection methods using a penalized likelihood have been widely studied in various regression models. The main advantage of these methods is that they select important variables and estimate the regression coefficients of the covariates simultaneously. In this paper, we propose a simple procedure based on a penalized h-likelihood (HL) for variable selection in Poisson hierarchical generalized linear models (HGLMs) for correlated count data. For this we consider three penalty functions (LASSO, SCAD and HL) and derive the corresponding variable-selection procedures. The proposed method is illustrated using a practical example.
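
Of the three penalties the paper considers, the LASSO case is the easiest to sketch. Below is a fixed-effects illustration fitted by penalized iteratively reweighted least squares; the hierarchical random-effect part of the HGLM (and the h-likelihood itself) is omitted, and all names are illustrative:

```python
import numpy as np

def soft_threshold(z, lam):
    """LASSO soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def poisson_lasso(X, y, lam, n_iter=100):
    """Coordinate-descent LASSO for a Poisson log-linear model.

    Fixed-effects sketch only: the paper works with hierarchical GLMs
    (random effects handled via h-likelihood), which this omits.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                  # Poisson mean
        w = mu                            # IRLS weights: Var(y) = mu
        z = eta + (y - mu) / mu           # working response
        for j in range(p):
            r_j = z - X @ beta + X[:, j] * beta[j]   # partial residual
            num = np.sum(w * X[:, j] * r_j) / n
            den = np.sum(w * X[:, j] ** 2) / n
            beta[j] = soft_threshold(num, lam) / den
    return beta
```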

An Economic Design of a Screening and Process Monitoring Procedure for a Normal Model (정규모형하에서의 선별검사 및 공정감시 절차의 경제적 설계)

  • Kwon, Hyuck-Moo;Hong, Sung-Hoon;Lee, Min-Koo;Kim, Sang-Boo
    • Journal of Korean Institute of Industrial Engineers / v.26 no.3 / pp.200-205 / 2000
  • An economic process monitoring procedure using a surrogate variable is presented for the case where the performance variable is dichotomous. Every item is inspected on the surrogate variable and accepted or rejected. When an item is rejected, the number of immediately preceding, consecutively accepted items is compared with a predetermined number r to decide whether the fraction nonconforming has shifted. The conditional distribution of the surrogate variable given the performance variable is assumed to be normal. A cost model is constructed that includes the costs of inspection, misclassification, illegal signals, undetected out-of-control states, and correction. Methods for finding the optimal number r and the screening limit are provided. Numerical studies on the effects of the cost coefficients are also performed.
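
The monitoring rule itself is easy to state in code. The sketch below implements the run-count logic described in the abstract on simulated normal surrogate scores; the screening limit and r are arbitrary here, whereas the paper chooses them to minimize expected cost:

```python
import numpy as np

rng = np.random.default_rng(1)

def monitor(surrogate, limit, r):
    """Screen each item on its surrogate value and flag a possible shift.

    Items with surrogate <= limit are accepted. On each rejection, the
    run of consecutively accepted items just before it is compared with
    the threshold r: a short run suggests the fraction nonconforming has
    shifted upward. Illustrative logic only; the cost-optimal choice of
    `limit` and `r` from the paper is not reproduced here.
    """
    run, signals = 0, []
    for i, x in enumerate(surrogate):
        if x <= limit:
            run += 1
        else:                      # rejected item
            if run < r:
                signals.append(i)  # signal: investigate the process
            run = 0
    return signals

# Surrogate ~ normal given the (unobserved) dichotomous performance variable
x = rng.normal(0.0, 1.0, size=500)
print(monitor(x, limit=1.5, r=10))
```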

Context-based coding of inter-frame DCT coefficients for video compression (비디오 압축을 위한 영상간 차분 DCT 계수의 문맥값 기반 부호화 방법)

  • Lee, Jin-Hak;Kim, Jae-Kyoon
    • Proceedings of the IEEK Conference / 2000.09a / pp.281-285 / 2000
  • This paper proposes context-based coding methods for variable length coding of inter-frame DCT coefficients. The proposed methods classify run-level symbols depending on the preceding coefficients. No extra overhead needs to be transmitted, since the information in the previously transmitted coefficients is used for classification. Two entropy coding methods, arithmetic coding and Huffman coding, are used for the proposed context-based coding. For Huffman coding, there is no complexity increase over the current standards, because the existing inter/intra VLC tables are reused. Experimental results show that the proposed methods give about a 19% bit saving and a 0.8 dB PSNR improvement with adaptive inter/intra VLC table selection, and about a 37% bit saving and a 2.7 dB PSNR improvement with arithmetic coding, relative to the current standards MPEG-4 and H.263. The proposed methods also achieve larger gains for small quantization parameters and for sequences with fast, complex motion, and are therefore particularly advantageous for high-quality video coding.
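
As a toy illustration of the idea, the sketch below selects a VLC table from the context of the previously coded run-level symbol, so a decoder can reproduce the choice without any side information. The thresholds and table names are invented for illustration and are not taken from the paper:

```python
def pick_vlc_table(prev_run, prev_level):
    """Choose a VLC table from the context of the preceding coefficient.

    Toy stand-in for context-based classification of run-level symbols:
    the context (here, the previous symbol's run and level) selects one
    of the existing inter/intra tables, so no overhead is transmitted.
    The decision rule below is made up for illustration.
    """
    if abs(prev_level) > 2 or prev_run == 0:
        return "intra_table"   # locally busy block: large levels dominate
    return "inter_table"       # sparse residual: long zero-runs dominate

symbols = [(0, 5), (2, 1), (0, 8)]          # (run, level) pairs
prev_run, prev_level = 0, 0
for run, level in symbols:
    table = pick_vlc_table(prev_run, prev_level)
    print(f"code (run={run}, level={level}) with {table}")
    prev_run, prev_level = run, level
```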

Functional Data Classification of Variable Stars

  • Park, Minjeong;Kim, Donghoh;Cho, Sinsup;Oh, Hee-Seok
    • Communications for Statistical Applications and Methods / v.20 no.4 / pp.271-281 / 2013
  • This paper considers the problem of classifying variable stars based on functional data analysis. For a better understanding of galaxy structure and stellar evolution, various approaches to the classification of variable stars have been studied. Several features that characterize variable stars (such as color index, amplitude, period, and Fourier coefficients) have usually been used for classification. Setting other factors aside and focusing only on the shapes of the light curves, Deb and Singh (2009) proposed a classification procedure using multivariate principal component analysis. However, this approach cannot fully accommodate light curve data that are unequally spaced in the phase domain and have functional properties. In this paper, we propose a light curve estimation method that is suitable for functional data analysis, and provide a classification procedure for variable stars that combines light curve features with existing functional data analysis methods. To evaluate its practical applicability, we apply the proposed classification procedure to data sets of variable stars from the STellar Astrophysics and Research on Exoplanets (STARE) project.
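
A common way to handle unequally spaced phase-folded observations, in the spirit of the light-curve estimation step described here, is a least-squares fit in a periodic (Fourier) basis. The sketch below is a generic stand-in, not the authors' estimator:

```python
import numpy as np

def fit_light_curve(phase, mag, n_harmonics=4):
    """Least-squares Fourier fit to a phase-folded light curve.

    Handles unequally spaced phases, the situation the paper highlights
    for variable-star data. The fitted coefficients (or the curve
    evaluated on a common grid) can then feed a functional classifier.
    """
    k = np.arange(1, n_harmonics + 1)
    B = np.column_stack([np.ones_like(phase)]
                        + [np.cos(2 * np.pi * m * phase) for m in k]
                        + [np.sin(2 * np.pi * m * phase) for m in k])
    coef, *_ = np.linalg.lstsq(B, mag, rcond=None)
    return coef, B @ coef       # basis coefficients and fitted values

rng = np.random.default_rng(0)
phase = np.sort(rng.uniform(0, 1, 60))          # unequally spaced phases
mag = 12 + 0.3 * np.sin(2 * np.pi * phase) + rng.normal(0, 0.02, 60)
coef, fitted = fit_light_curve(phase, mag)
```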

Level of Lead in Air and Blood Zinc Protoporphyrin of Workers in Lead Plants (연 취급 노동자의 연 폭로 수준 및 혈중 Zinc Protoporphyrin 농도)

  • 김창영
    • Journal of Environmental Health Sciences / v.17 no.1 / pp.95-103 / 1991
  • To assess the working environment and the relationship between airborne lead concentration and the zinc protoporphyrin (ZPP) level in whole blood, airborne lead concentrations and ZPP levels were measured at 26 plants handling lead, from October 5 to November 5, 1988. Airborne lead was analyzed by NIOSH Method 7082, and the ZPP level was measured with a hematofluorometer. The results were as follows. 1. The average airborne lead concentration was 0.025 mg/m³ at the lead battery manufacturers and 0.023 mg/m³ at the secondary lead smelters, with no significant difference between the industries (p>0.1). 2. At the lead battery manufacturers, the lead powder production process showed the highest concentration, 0.034 mg/m³, but there were no significant differences among processes (p>0.1). At the secondary lead smelters, the process of dismantling waste batteries showed the highest concentration, 0.141 mg/m³, and the differences among processes were highly significant (p<0.005). 3. The whole-blood ZPP level differed significantly between the industries (p<0.10): the average was 133.0 ± 106.3 μg/100 ml at the lead battery manufacturers and 149.6 ± 110.9 μg/100 ml at the secondary lead smelters. 4. The correlation coefficients between airborne lead concentration and ZPP level were 0.426 (p<0.001) for the lead battery manufacturers and 0.484 (p<0.001) for the secondary lead smelters; between work duration (in months) and ZPP level they were 0.238 (p<0.001) and 0.075 (p>0.10), respectively. 5. For the 26 plants handling lead, the linear regression of ZPP level (Y) on airborne lead concentration (X) is Y = 96.84 + 1300.34X (r=0.448, p<0.001), and on work duration in months it is Y = 127.28 + 0.49X (r=0.162, p<0.05). 6. The correlation coefficients between the amount of inhaled lead and the ZPP level were 0.349 (p<0.001) for the lead battery manufacturers and 0.318 (p<0.001) for the secondary lead smelters; the corresponding regression for the 26 plants, with the amount of inhaled lead as the independent variable, is Y = 123.63 + 18.82X (r=0.335, p<0.001).
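
As a quick arithmetic check of the reported fit (my calculation, using only numbers quoted in the abstract): at the battery manufacturers' average airborne concentration of 0.025 mg/m³, the regression in item 5 predicts Ŷ = 96.84 + 1300.34 × 0.025 ≈ 129.3 μg/100 ml, close to the observed industry average of 133.0 ± 106.3 μg/100 ml.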

Concurrent Equalizer with Squared Error Weight-Based Tap Coefficients Update (오차 제곱 가중치기반 탭 계수 갱신을 적용한 동시 등화기)

  • Oh, Kil-Nam
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.3C / pp.157-162 / 2011
  • For blind equalization of communication channels, concurrent equalization is useful for improving convergence characteristics. However, concurrent equalization yields only a limited performance gain if the two algorithms keep adapting concurrently after the equalizer has converged to the steady state. In this paper, to improve the convergence characteristics and steady-state performance of concurrent equalization, a new concurrent equalization technique with a variable step-size parameter and weight-based tap coefficient updates is proposed. The proposed concurrent vsCMA+DD equalization computes weight factors from the error signals of the variable step-size CMA (vsCMA) and the decision-directed (DD) algorithm, and then updates the two equalizers according to these weights. The proposed method first improves the error performance of the CMA through the vsCMA, and then further enhances both the steady-state performance and the convergence speed through the weight-based tap coefficient updates. The performance improvement of the proposed scheme is verified through simulations.
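
The sketch below shows one generic way to weight concurrent CMA and DD tap updates by their squared errors, which is the flavor of update the abstract describes. The exact weighting rule and the paper's variable step-size (vsCMA) schedule are not reproduced; everything here is illustrative:

```python
import numpy as np

def concurrent_update(w, x, R2, mu_cma, mu_dd, constellation):
    """One weighted concurrent CMA + DD tap update (illustrative).

    The squared errors of the two branches set the mixing weights, so
    the DD branch takes over as decisions become reliable. Convention:
    equalizer output z = w^H x.
    """
    z = np.vdot(w, x)                          # equalizer output
    d = constellation[np.argmin(np.abs(constellation - z))]  # decision
    e_cma = z * (np.abs(z) ** 2 - R2)          # CMA error
    e_dd = z - d                               # decision-directed error
    s = np.abs(e_cma) ** 2 + np.abs(e_dd) ** 2 + 1e-12
    g_cma = np.abs(e_dd) ** 2 / s              # large DD error -> trust CMA
    g_dd = np.abs(e_cma) ** 2 / s              # small CMA error -> trust DD
    w -= mu_cma * g_cma * np.conj(e_cma) * x   # gradient step, CMA branch
    w -= mu_dd * g_dd * np.conj(e_dd) * x      # gradient step, DD branch
    return w, z

# Example: 8-tap equalizer over a unit-modulus QPSK constellation (R2 = 1)
rng = np.random.default_rng(0)
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
w = np.zeros(8, dtype=complex); w[3] = 1.0     # center-tap initialization
x = (rng.normal(size=8) + 1j * rng.normal(size=8)) / np.sqrt(2)
w, z = concurrent_update(w, x, R2=1.0, mu_cma=1e-3, mu_dd=1e-3,
                         constellation=const)
```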

Study of estimated model of drift through real ship (실선에 의한 표류 예측모델에 관한 연구)

  • Chang-Heon LEE;Kwang-Il KIM;Sang-Lok YOO;Min-Son KIM;Seung-Hun HAN
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.60 no.1 / pp.57-70 / 2024
  • In order to develop a predictive drift model, an experiment of about 11 hours and 40 minutes was carried out with Jeju National University's training ship, and 81 samples, taken from the full record at ten-minute intervals, were subjected to regression analysis after checking for outliers and influential points. In that analysis, although the DFBETAS (difference in betas) value for wind direction exceeds 1 in one part, the CV (cumulative variable) value is 6%, close to 1, so it was judged that a multiple regression analysis of the samples would be unproblematic. The standardized regression coefficients show how strongly the current and the wind affect the dependent variables: current speed and current direction were the most important variables for drift speed and drift direction, with values of 47.1% and 58.1%, respectively. The test statistics indicated that the model fits at the 0.05 significance level for the multiple regression analysis. The multiple correlation coefficients, indicating the degree of influence on the dependent variables, were 83.2% and 89.0%; the coefficients of determination were 69.3% and 79.3%, and the adjusted coefficients of determination were 67.6% and 78.3%, respectively. Because this study identifies outliers and influential points in the sample data before the multiple regression analysis, it yields a more quantitative prediction model, and we expect it to be combined with other approaches in future studies.
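
The order of steps described here (screen influential points with DFBETAS, then fit the multiple regression) can be sketched with statsmodels. The data below are made up; only the sample size of 81 and the |DFBETAS| > 1 rule of thumb come from the abstract:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

rng = np.random.default_rng(7)
n = 81                                        # matches the paper's sample count
X = np.column_stack([rng.uniform(0, 1.5, n),  # current speed (synthetic)
                     rng.uniform(0, 360, n),  # current direction
                     rng.uniform(0, 12, n),   # wind speed
                     rng.uniform(0, 360, n)]) # wind direction
drift_speed = 0.5 * X[:, 0] + 0.02 * X[:, 2] + rng.normal(0, 0.05, n)

model = sm.OLS(drift_speed, sm.add_constant(X)).fit()
dfbetas = OLSInfluence(model).dfbetas             # one row per observation
suspect = np.any(np.abs(dfbetas) > 1.0, axis=1)   # flag influential points
clean = sm.OLS(drift_speed[~suspect],
               sm.add_constant(X[~suspect])).fit()
print(clean.rsquared, clean.rsquared_adj)         # R^2 and adjusted R^2
```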

A Penalized Principal Components using Probabilistic PCA

  • Park, Chong-Sun;Wang, Morgan
    • Proceedings of the Korean Statistical Society Conference / 2003.05a / pp.151-156 / 2003
  • A variable selection algorithm for principal component analysis using a penalized likelihood method is proposed. We adopt the probabilistic principal component formulation to obtain a likelihood function for the problem, and use the HARD penalty function to force the coefficients of irrelevant variables in each component to zero. Consistency and sparsity of the coefficient estimates are demonstrated with small simulated examples and an illustrative real example.
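
Schematically, the HARD penalty keeps a coefficient untouched when it is large and sets it exactly to zero otherwise, which is what yields sparse components. The sketch below applies that thresholding directly to PCA loadings as a stand-in for the paper's penalized-likelihood estimation; the names and threshold are illustrative:

```python
import numpy as np

def hard_threshold_loadings(W, lam):
    """Hard-threshold loadings to zero out irrelevant variables.

    Mimics the effect of the HARD penalty: large loadings survive
    unchanged, small ones are set exactly to zero. A schematic stand-in
    for the paper's penalized probabilistic-PCA estimation.
    """
    return np.where(np.abs(W) > lam, W, 0.0)

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 6))
X[:, :2] += 3 * rng.normal(size=(100, 1))     # two informative variables
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:2].T * s[:2] / np.sqrt(len(X))        # loadings, first 2 components
print(hard_threshold_loadings(W, lam=0.5))    # sparse loading matrix
```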
