• Title/Summary/Keyword: S-principal

Search Results: 2,185

Processing of Downhole S-wave Seismic Survey Data by Considering Direction of Polarization

  • Kim, Jin-Hoo;Park, Choon-B.
    • Journal of the Korean Geophysical Society
    • /
    • v.5 no.4
    • /
    • pp.321-328
    • /
    • 2002
  • Difficulties encountered in downhole S-wave (shear wave) surveys include the precise determination of shear-wave travel times and of geophone orientation relative to the direction of polarization caused by the seismic source. In this study, an S-wave enhancing method and principal component analysis were adopted as tools for determining S-wave arrivals and the direction of polarization from downhole S-wave survey data. The S-wave enhancing method can almost double the amplitudes of S-waves, and the angle between the direction of polarization and a geophone axis can be obtained by principal component analysis. Once the angle is obtained, data recorded by two horizontal geophones are transformed to the principal axes, yielding so-called scores. The scores gathered along depth are all in phase; consequently, the accuracy of S-wave arrival picking could be remarkably improved. Applying this processing method to the field data reveals that the test site consists of a layered earth structure.
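
The polarization-angle step described above can be sketched with a small principal component analysis. The two-channel record below is synthetic; the 35° polarization angle, wavelet, and noise level are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Synthetic two-component downhole record: an S-wave polarized at 35 degrees
# recorded on two orthogonal horizontal geophones (illustrative values only).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
wavelet = np.exp(-((t - 0.5) ** 2) / 0.002) * np.sin(2 * np.pi * 30 * (t - 0.5))
theta_true = np.deg2rad(35.0)
h1 = wavelet * np.cos(theta_true) + 0.05 * rng.standard_normal(t.size)
h2 = wavelet * np.sin(theta_true) + 0.05 * rng.standard_normal(t.size)

# PCA of the 2 x 2 covariance matrix: the eigenvector with the largest
# eigenvalue points along the direction of polarization.
X = np.vstack([h1, h2])                      # 2 x N data matrix
eigvals, eigvecs = np.linalg.eigh(np.cov(X))
v = eigvecs[:, np.argmax(eigvals)]           # principal axis
theta_est = np.arctan2(v[1], v[0])

# Rotating the two horizontal channels onto the principal axes yields the
# "scores"; the first score concentrates the S-wave energy on one trace.
R = np.array([[np.cos(theta_est),  np.sin(theta_est)],
              [-np.sin(theta_est), np.cos(theta_est)]])
scores = R @ X
```

Because an eigenvector is defined only up to sign, the recovered angle is determined modulo 180°.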


An Efficient Method to Compute a Covariance Matrix of the Non-local Means Algorithm for Image Denoising with the Principal Component Analysis (영상 잡음 제거를 위한 주성분 분석 기반 비 지역적 평균 알고리즘의 효율적인 공분산 행렬 계산 방법)

  • Kim, Jeonghwan;Jeong, Jechang
    • Journal of Broadcast Engineering
    • /
    • v.21 no.1
    • /
    • pp.60-65
    • /
    • 2016
  • This paper introduces the non-local means (NLM) algorithm for image denoising, as well as an improved algorithm based on principal component analysis (PCA). To perform the PCA, a covariance matrix of a given image must be evaluated first. If we let the neighborhood patch size of the NLM be S × S and the number of pixels be Q, a matrix multiplication of size S² × Q is required to compute the covariance matrix. Given the characteristics of images, such computation is inefficient. Therefore, this paper proposes an efficient method that computes the covariance matrix by sampling the pixels. After sampling, the covariance matrix can be computed with matrices of size S² × (⌊Width/l⌋ × ⌊Height/l⌋).
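
The sampling idea can be sketched as follows; the stride l, patch size, and reflect-padding below are illustrative assumptions rather than the paper's exact settings:

```python
import numpy as np

def sampled_patch_covariance(img, S=5, l=4):
    """Estimate the S^2 x S^2 covariance matrix of S x S neighborhood
    patches using only pixels sampled on a grid with stride l."""
    H, W = img.shape
    pad = S // 2
    padded = np.pad(img, pad, mode="reflect")
    patches = []
    for i in range(0, H, l):                 # every l-th row ...
        for j in range(0, W, l):             # ... and every l-th column
            patches.append(padded[i:i + S, j:j + S].ravel())
    P = np.asarray(patches, dtype=float)     # (H/l * W/l) x S^2 matrix
    P -= P.mean(axis=0)
    return P.T @ P / (P.shape[0] - 1)        # S^2 x S^2 covariance

rng = np.random.default_rng(1)
noisy = rng.standard_normal((64, 64))        # stand-in for a noisy image
C = sampled_patch_covariance(noisy)          # built from 256 patches, not 4096
```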

A Criterion for the Selection of Principal Components in the Robust Principal Component Regression (로버스트주성분회귀에서 최적의 주성분선정을 위한 기준)

  • Kim, Bu-Yong
    • Communications for Statistical Applications and Methods
    • /
    • v.18 no.6
    • /
    • pp.761-770
    • /
    • 2011
  • Robust principal components regression is suggested to deal with both the multicollinearity and outlier problem. A main aspect of the robust principal components regression is the selection of an optimal set of principal components. Instead of the eigenvalue of the sample covariance matrix, a selection criterion is developed based on the condition index of the minimum volume ellipsoid estimator which is highly robust against leverage points. In addition, the least trimmed squares estimation is employed to cope with regression outliers. Monte Carlo simulation results indicate that the proposed criterion is superior to existing ones.
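
A condition-index selection rule of this kind can be sketched as follows; note that this sketch uses the ordinary sample covariance, whereas the paper builds the criterion on the robust minimum volume ellipsoid estimator:

```python
import numpy as np

def select_by_condition_index(X, cutoff=10.0):
    """Keep the principal components whose condition index
    sqrt(lambda_max / lambda_j) stays below the cutoff."""
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]  # descending
    cond_index = np.sqrt(eigvals[0] / eigvals)
    return np.where(cond_index < cutoff)[0]  # indices of retained components

# Illustrative collinear data: the third regressor is nearly x1 + x2, so the
# smallest component has a huge condition index and is dropped.
rng = np.random.default_rng(2)
x1 = rng.standard_normal(200)
x2 = rng.standard_normal(200)
x3 = x1 + x2 + 0.01 * rng.standard_normal(200)
keep = select_by_condition_index(np.column_stack([x1, x2, x3]))
```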

Shrinkage Structure of Ridge Partial Least Squares Regression

  • Kim, Jong-Duk
    • Journal of the Korean Data and Information Science Society
    • /
    • v.18 no.2
    • /
    • pp.327-344
    • /
    • 2007
  • Ridge partial least squares regression (RPLS) is a regression method obtained by combining ridge regression and partial least squares regression; it is intended to provide better predictive ability and to be less sensitive to overfitting. In this paper, explicit expressions for the shrinkage factor of RPLS are developed. The structure of the shrinkage factor is explored and compared with those of other biased regression methods, such as ridge regression, principal component regression, ridge principal component regression, and partial least squares regression, using a near-infrared data set.
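
In the eigenvalue (SVD) basis of X'X, each of the biased methods being compared multiplies the i-th least squares component by a shrinkage factor: ridge regression uses the smooth factor λᵢ/(λᵢ + k), while principal component regression uses an all-or-nothing 0/1 factor. The sketch below contrasts these two; the design matrix, ridge constant k, and the number of retained components are illustrative assumptions (the RPLS factors themselves require the paper's explicit expressions):

```python
import numpy as np

# Illustrative design matrix with one weak direction (last column).
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 4)) @ np.diag([3.0, 2.0, 1.0, 0.1])
_, s, _ = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
eigvals = s ** 2                               # lambda_i of X'X, descending

k = 1.0                                        # illustrative ridge constant
ridge_factors = eigvals / (eigvals + k)        # smooth shrinkage toward 0
pcr_factors = (np.arange(4) < 2).astype(float) # PCR: keep 2 PCs, drop the rest
```

Ridge shrinks every component a little, most strongly along the weak directions; PCR either keeps a component untouched or discards it entirely.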


Principal Components Regression in Logistic Model (로지스틱모형에서의 주성분회귀)

  • Kim, Bu-Yong;Kahng, Myung-Wook
    • The Korean Journal of Applied Statistics
    • /
    • v.21 no.4
    • /
    • pp.571-580
    • /
    • 2008
  • Logistic regression analysis is widely used in the areas of customer relationship management and credit risk management. It is well known that maximum likelihood estimation is not appropriate when multicollinearity exists among the regressors. Thus we propose logistic principal components regression to deal with the multicollinearity problem. In particular, a new method is suggested to select proper principal components. The selection method is based on the condition index instead of the eigenvalue. When a condition index is larger than the upper cutoff limit, the principal component corresponding to that index is removed from the estimation. A hypothesis test is then sequentially employed to eliminate a principal component whose condition index lies between the upper and lower limits. The limits are obtained from a linear model constructed on the basis of conjoint analysis. The proposed method is evaluated by means of the variance of the estimates and the correct classification rate. The results indicate that the proposed method is superior to the existing method in terms of efficiency and goodness of fit.
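
A minimal sketch of logistic principal components regression: PCA scores replace the collinear regressors, and the logistic MLE is fit by Newton-Raphson. The data, the fixed number of components, and the tiny Hessian ridge are illustrative assumptions; the paper's actual contribution, condition-index-based selection with sequential testing, is not reproduced here:

```python
import numpy as np

def logistic_pcr(X, y, n_components, n_iter=25):
    """Logistic regression fit on the leading principal component scores."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = np.column_stack([np.ones(len(Xc)), Xc @ Vt[:n_components].T])
    beta = np.zeros(Z.shape[1])
    for _ in range(n_iter):                    # Newton-Raphson for the MLE
        p = 1.0 / (1.0 + np.exp(-Z @ beta))
        W = p * (1.0 - p)
        H = Z.T @ (Z * W[:, None]) + 1e-8 * np.eye(Z.shape[1])
        beta += np.linalg.solve(H, Z.T @ (y - p))
    return beta, Vt[:n_components]

# Two nearly collinear regressors: the ordinary logistic MLE is unstable here,
# but a single principal component carries almost all of the information.
rng = np.random.default_rng(4)
x1 = rng.standard_normal(300)
x2 = x1 + 0.01 * rng.standard_normal(300)
X = np.column_stack([x1, x2])
y = (x1 + 0.5 * rng.standard_normal(300) > 0).astype(float)
beta, V = logistic_pcr(X, y, n_components=1)
```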

Equivalence study of canonical correspondence analysis by weighted principal component analysis and canonical correspondence analysis by Gaussian response model (가중주성분분석을 활용한 정준대응분석과 가우시안 반응 모형에 의한 정준대응분석의 동일성 연구)

  • Jeong, Hyeong Chul
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.6
    • /
    • pp.945-956
    • /
    • 2021
  • In this study, we considered the algorithm of Legendre and Legendre (2012), which derives canonical correspondence analysis from weighted principal component analysis, and proved that canonical correspondence analysis based on weighted principal component analysis is exactly the same as Ter Braak's (1986) canonical correspondence analysis based on the Gaussian response model. Ter Braak's (1986) canonical correspondence analysis, derived from a Gaussian response curve that explains the abundance of species in ecology well, relies on the basic assumption of the species packing model and is obtained by combining a generalized linear model with canonical correlation analysis. The algorithm of Legendre and Legendre (2012), however, is computed in a manner quite similar to Benzécri's correspondence analysis without such assumptions. Therefore, if canonical correspondence analysis based on weighted principal component analysis is used, the results can be applied with some flexibility. In conclusion, this study shows that the two methods, though starting from different models, yield the same site scores, species scores, and species-environment correlations.
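
The weighted-PCA route can be illustrated on plain (non-canonical) correspondence analysis, which both derivations contain as a core step: an SVD of the standardized residuals of the site-by-species table. The abundance table below is made up for illustration:

```python
import numpy as np

# Made-up site (rows) by species (columns) abundance table.
N = np.array([[10.0, 0.0, 3.0],
              [2.0, 8.0, 1.0],
              [0.0, 5.0, 9.0],
              [4.0, 1.0, 6.0]])
P = N / N.sum()
r = P.sum(axis=1)                              # site (row) weights
c = P.sum(axis=0)                              # species (column) weights

# Standardized residuals: a weighted PCA of this matrix (via SVD) gives the
# correspondence-analysis axes; the trivial axis vanishes by construction.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, d, Vt = np.linalg.svd(S, full_matrices=False)
site_scores = (U[:, :2] / np.sqrt(r)[:, None]) * d[:2]   # principal coordinates
species_scores = Vt[:2].T / np.sqrt(c)[:, None]
```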

A Research on the Job Analysis of the Principal of Vocational High School using DACUM Method (데이컴(DACUM) 기법을 활용한 직업계고등학교 학교장의 직무 분석)

  • Hyun, Su
    • 대한공업교육학회지
    • /
    • v.44 no.1
    • /
    • pp.114-140
    • /
    • 2019
  • The purpose of this research is to analyze a principal's job at a vocational high school using the DACUM task analysis method. The contents of this research are to derive and order the duties and tasks of the principal; to verify the importance, difficulty, and frequency of each task; and to indicate whether each is an essential capability in the early stages of one's duty. Finally, based on the job analysis results, a DACUM chart was developed for the principal of the vocational high school. The DACUM task analysis workshop was attended by one DACUM analyst with a LEVEL-1 license, seven DACUM members with more than four years of experience, one secretary, and two administrative assistants over a two-day period. The results of the research are as follows. First, the vocational high school principal is defined as a school administrator who operates the vocational education curriculum in specialized and customized high schools developed for industrial demand. The analysis derived 11 duties and 95 tasks of the principal. Second, the importance, difficulty, and frequency of each task were each rated as high (A), moderate (B), or low (C), and expert consensus determined whether each core capability should be acquired early on the job. Third, based on the analysis results, a DACUM task analysis chart for vocational high school principals was presented. In addition, a list of 49 items of general knowledge and ability, 16 tools, integrated data, and fixtures required while engaged in the job of the vocational high school principal was presented, along with 28 attitudes and 33 future prospects and characteristics of the vocational high school principal.

Evaluation of Current Coding Practices in 3 University Hospitals (3개 대학병원의 주 진단 코딩사례 평가)

  • Seo, Sun Won;Kim, Kwang Hwan;Pu, Yoo Kyung;Suh, Jin Sook;Seo, Jeong-Don;Park, Woo-Sung;Yoon, Seok Jun;Lee, Young Sung;Lee, Moo-Sik;Chung, Hee-Ung
    • Quality Improvement in Health Care
    • /
    • v.9 no.1
    • /
    • pp.52-64
    • /
    • 2002
  • Background : Coding of the principal diagnosis is an essential component for producing reliable health statistics. We performed this study to evaluate the current practice of principal diagnosis determination and coding, and to provide basic data for improving the coding of principal diagnoses. Method : Nineteen medical record administrators (MRAs) of 3 university hospitals participated in coding principal diagnoses from August 1, 2001 to August 31, 2001. From each hospital, 10 medical records of patients with high-frequency diseases were selected randomly. Each set of 10 medical records was grouped into three (A, B, C). Then, these 30 medical records were given to each MRA for coding. At the same time, a questionnaire was given to each of them. The questions asked how they decide and code the principal diagnosis among many current diagnoses; how they decide and code the principal diagnosis when they see an irrelevant diagnosis recorded as the principal diagnosis in a medical record, when only tentative diagnoses were recorded without a final diagnosis, and when different diagnoses were recorded in different sheets of the same record. Agreement of coding among the 3 hospitals was compared, and survey results were analysed with SAS 6.12. Results : Agreement of coding was found in 5-6 of each 10 medical records. Causes of disagreement were as follows: differences in clinicians' opinions among hospitals; mixed use of the KCD-3 guideline and the DRG guideline; differences in 4th-digit classification according to the absence of a pathology report in the medical record; and differences in abbreviations among hospitals. 57.9% of MRAs selected the principal diagnosis recorded by the physician, while 42.1% of MRAs decided the principal diagnosis after consulting the KCD-3 guideline. When there were difficulties in determining the principal diagnosis, 42.1% of MRAs decided it after discussion with the physician, and 26.3% after discussion with fellow MRAs.
Conclusion : There were differences in coding among hospitals. To minimize these differences, we suggest the development of disease-specific coding guidelines in addition to the current general guideline such as KCD-3. To do this, a Coding Clinic which can produce such guidelines is needed.


Magnetocardiogram Topography with Automatic Artifact Correction using Principal Component Analysis and Artificial Neural Network

  • Ahn C.B.;Kim T.H.;Park H.C.;Oh S.J.
    • Journal of Biomedical Engineering Research
    • /
    • v.27 no.2
    • /
    • pp.59-63
    • /
    • 2006
  • Magnetocardiogram (MCG) topography is a useful diagnostic technique that employs multi-channel magnetocardiograms. Measurement of artifact-free MCG signals is essential to obtain an MCG topography or map for a diagnosis of the human heart. Principal component analysis (PCA) combined with an artificial neural network (ANN) is proposed to remove a pulse-type artifact in the MCG signals. The algorithm is composed of a PCA module, which decomposes the obtained signal into its principal components, followed by an ANN module for the automatic classification of the components. In the experiments with volunteer subjects, 97% of the decisions made by the ANN were identical to those made by the human experts. Using the proposed technique, the MCG topography was successfully obtained without the artifact.
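
The decompose-classify-reconstruct pipeline can be sketched as follows. Everything here is illustrative: synthetic sinusoids stand in for the cardiac signals, and a simple kurtosis threshold stands in for the trained neural network that the paper uses to classify components:

```python
import numpy as np

# Synthetic 8-channel recording: sinusoidal "cardiac" signals plus a sharp
# pulse-type artifact shared (with random gains) across all channels.
rng = np.random.default_rng(5)
t = np.linspace(0.0, 2.0, 1000)
n_ch = 8
signals = np.array([np.sin(2 * np.pi * 1.2 * t + 0.3 * ch) for ch in range(n_ch)])
artifact = np.zeros_like(t)
artifact[480:500] = 20.0
X = (signals + np.outer(rng.uniform(0.5, 1.0, n_ch), artifact)
     + 0.01 * rng.standard_normal((n_ch, t.size)))

# Decompose into principal components, flag impulsive components by their
# kurtosis (the stand-in classifier), and reconstruct without them.
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
comps = np.diag(s) @ Vt                        # component time courses
dev = comps - comps.mean(axis=1, keepdims=True)
kurt = (dev ** 4).mean(axis=1) / comps.var(axis=1) ** 2
keep = kurt < 10.0                             # pulses are highly kurtotic
X_clean = U[:, keep] @ comps[keep] + X.mean(axis=1, keepdims=True)
```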

A Numerical Taxonomic Study of Calystegia in Korea by the Cluster Analysis and Principal Component Analysis (류집분석과 주성분분석에 의한 한국산 메꽃과의 수량분류학적 연구)

  • Kim, Yun Shik
    • Journal of Plant Biology
    • /
    • v.27 no.1
    • /
    • pp.33-41
    • /
    • 1984
  • The relationships and character variations of 5 taxa of Calystegia were examined by cluster analysis and principal component analysis. Thirteen Calystegia population samples from the middle part of Korea were observed. Although minor differences were noted, essentially similar results were obtained from the phenograms by the UPGMA, UPGMC, and Ward's clustering methods, and these results were in accordance with those obtained from the ordination plots by principal component analysis. C. soldanella is distantly connected with the other taxa mainly because of its morphologically different leaf organs. Based on the difference in the first principal component, C. hederacea is kept apart from the remaining 3 taxa. In the relationships among C. japonica, C. sepium var. americana, and C. davurica, minor differences were obtained from the 3 clustering methods. As for the character variations among different populations within a taxon, they are slight in C. soldanella and C. sepium var. americana, but remarkable in C. hederacea and C. davurica.
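
The UPGMA step used for the phenograms can be sketched as follows; the distance matrix and taxon labels below are made up for illustration (they are not the paper's character data):

```python
import numpy as np

def upgma(D, labels):
    """Average-linkage (UPGMA) clustering: repeatedly merge the two clusters
    with the smallest mean pairwise distance, recording each merge."""
    clusters = [[i] for i in range(len(labels))]
    merges = []
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.mean([D[i, j] for i in clusters[a] for j in clusters[b]])
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((sorted(labels[i] for i in clusters[a] + clusters[b]), d))
        clusters[a] += clusters[b]
        del clusters[b]
    return merges

# Toy distances among four taxa; "soldanella" is deliberately distant,
# mimicking its isolated position in the phenograms.
labels = ["japonica", "sepium", "davurica", "soldanella"]
D = np.array([[0, 2, 3, 9],
              [2, 0, 3, 9],
              [3, 3, 0, 9],
              [9, 9, 9, 0]], dtype=float)
merges = upgma(D, labels)
```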
