• Title/Summary/Keyword: predictive performance


A Study on the Revitalization of the Competency Assessment System in the Public Sector : Compare with Private Sector Operations (공공부문 역량평가제도의 활성화 방안에 대한 연구 : 민간부분의 운영방식과의 비교 연구)

  • Kwon, Yong-man;Jeong, Jang-ho
    • Journal of Venture Innovation / v.4 no.1 / pp.51-65 / 2021
  • HR policy in the public sector was a closed system operated mainly around written tests, but in 2006 a new competency-based evaluation, promotion, and education system was introduced into the promotion and selection of civil servants. In particular, the seniority-oriented promotion system was supplemented with competency-based evaluation by operating an Assessment Center for promotion. Competency assessment is known to be the most reliable and valid of the evaluation methods used to date and is also known to have high predictive validity for performance. In 2001, 19 government standard competency models were designed, and in 2006 competency assessment was implemented together with the introduction of the senior civil service system. In the public sector, the purpose of competency assessment is mainly to select third-grade civil servants, assign fourth-grade civil servants, and promote fifth-grade civil servants. However, competency assessments in the public sector differ from those in the private sector in their objectives, assessment processes, and assessment programs. In terms of purpose, the public sector uses competency assessment mainly for promotion decisions, whereas the private sector focuses on career development and fostering talent. As a result, the public sector develops competencies less continuously than the private sector, and the assessment is not used to enhance job performance. Regarding evaluation items, the public sector generally operates a pass/fail system requiring a score of 2.5 out of 5 across six competencies and provides little feedback on which competencies are lacking, whereas the private sector uses each individual's competency scores. Regarding the selection and operation of assessors, the public sector focuses on fairness of evaluation while the private sector focuses on usability; the public-sector approach is inconsistent with developing competencies and placing human resources where they fit best. Therefore, the public sector should also improve its measures for identifying and motivating outstanding people through competency assessment, and should change the operation of the system so that assessees can grow into better managers through accurate reports and individual feedback.

Prediction of Life Expectancy for Terminally Ill Cancer Patients Based on Clinical Parameters (말기 암 환자에서 임상변수를 이용한 생존 기간 예측)

  • Yeom, Chang-Hwan;Choi, Youn-Seon;Hong, Young-Seon;Park, Yong-Gyu;Lee, Hye-Ree
    • Journal of Hospice and Palliative Care / v.5 no.2 / pp.111-124 / 2002
  • Purpose : Although the average life expectancy has increased due to advances in medicine, mortality due to cancer is on an increasing trend. Consequently, the number of terminally ill cancer patients is also on the rise. Predicting the survival period is an important issue in the treatment of terminally ill cancer patients, since the choice of treatment by patients, their families, and physicians varies significantly according to the expected survival. Therefore, we investigated the prognostic factors for increased mortality risk in terminally ill cancer patients to help treat these patients by predicting the survival period. Methods : We investigated 31 clinical parameters in 157 terminally ill cancer patients admitted to the Department of Family Medicine, National Health Insurance Corporation Ilsan Hospital between July 1, 2000 and August 31, 2001. We confirmed the patients' survival as of October 31, 2001 based on medical records and personal data. Survival rates and median survival times were estimated by the Kaplan-Meier method, and the log-rank test was used to compare differences in survival rates according to each clinical parameter. Cox's proportional hazard model was used to determine the most predictive subset of prognostic factors among the many clinical parameters that affect the risk of death. We predicted the mean, median, first-quartile, and third-quartile values of the expected lifetimes with a Weibull proportional hazard regression model. Results : Of the 157 patients, 79 were male (50.3%). The mean age was 65.1±13.0 years in males and 64.3±13.7 years in females. The most prevalent cancer was gastric cancer (36 patients, 22.9%), followed by lung cancer (27, 17.2%) and cervical cancer (20, 12.7%). Survival time decreased with the following factors: mental change, anorexia, hypotension, poor performance status, leukocytosis, neutrophilia, elevated serum creatinine level, hypoalbuminemia, hyperbilirubinemia, elevated SGPT, prolonged prothrombin time (PT), prolonged activated partial thromboplastin time (aPTT), hyponatremia, and hyperkalemia. Among these factors, poor performance status, neutrophilia, and prolonged PT and aPTT were significant prognostic factors for death risk according to Cox's proportional hazard model. The predicted median life expectancy was 3.0 days when all four of these factors were present, 5.7~8.2 days when three were present, 11.4~20.0 days when two were present, 27.9~40.0 days when one was present, and 77 days when none were present. Conclusions : In terminally ill cancer patients, the prognostic factors related to reduced survival time were poor performance status, neutrophilia, prolonged PT, and prolonged aPTT. These four prognostic factors enabled prediction of life expectancy in terminally ill cancer patients.
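
As a minimal sketch of the survival workflow described above (Kaplan-Meier estimation, log-rank comparison, Cox proportional hazards screening, and Weibull regression for life-expectancy quantiles), the following Python snippet uses the lifelines package on synthetic stand-in data; the variable names, effect sizes, and data are illustrative assumptions, not the study's records.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter, WeibullAFTFitter
from lifelines.statistics import logrank_test

# Synthetic stand-in data: four binary prognostic factors and survival in days.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "poor_performance": rng.integers(0, 2, n),
    "neutrophilia": rng.integers(0, 2, n),
    "prolonged_pt": rng.integers(0, 2, n),
    "prolonged_aptt": rng.integers(0, 2, n),
})
risk = df.sum(axis=1)                                    # more factors -> shorter survival
df["days"] = rng.exponential(scale=60 * np.exp(-0.5 * risk))
df["died"] = (rng.random(n) < 0.9).astype(int)           # 1 = death observed, 0 = censored

# Kaplan-Meier estimate and a log-rank comparison for one factor
km = KaplanMeierFitter().fit(df["days"], event_observed=df["died"])
grp = df["poor_performance"] == 1
lr = logrank_test(df.loc[grp, "days"], df.loc[~grp, "days"],
                  event_observed_A=df.loc[grp, "died"],
                  event_observed_B=df.loc[~grp, "died"])

# Cox proportional hazards model to screen the prognostic factors
cox = CoxPHFitter().fit(df, duration_col="days", event_col="died")

# Weibull regression for median and quartile life-expectancy predictions
aft = WeibullAFTFitter().fit(df, duration_col="days", event_col="died")
median_days = aft.predict_median(df)
q1_days = aft.predict_percentile(df, p=0.75)  # time at which predicted survival falls to 75%
print(km.median_survival_time_, lr.p_value)
```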


Predicting hospital bankruptcy in Korea (병원도산 예측에 관한 연구)

  • Lee, Moo-Sik;Seo, Young-Joon
    • Journal of Preventive Medicine and Public Health / v.31 no.3 s.62 / pp.490-502 / 1998
  • This study purports to find predictors of hospital bankruptcy in Korea and to examine the predictive power of a discriminant function model of hospital bankruptcy. Data on 17 financial and 4 non-financial indicators of 31 bankrupt and 31 profitable hospitals for 1, 2, and 3 years before bankruptcy were obtained from the hospital performance databank of the Korea Institute of Health Services Management. Significant variables were identified through mean comparisons of each indicator between bankrupt and profitable hospitals, and the discriminant function model of hospital bankruptcy was developed. The major findings are as follows. 1. Among profitability indicators, net worth to total assets, operating profit to total capital, operating profit ratio to gross revenues, normal profit to total assets, normal profit to gross revenues, and net profit to total assets differed significantly in the mean comparison test in 1, 2, and 3 years before hospital bankruptcy. Among liquidity indicators, the current ratio and quick ratio were significant 1 year before bankruptcy. Among activity indicators, patients receivable turnover was significant 2 and 3 years before bankruptcy, and added value per adjusted inpatient day was significant 3 years before bankruptcy. 2. The discriminant functions in 1, 2, and 3 years before bankruptcy were: Z = -0.0166 × quick ratio - 0.1356 × normal profit to total assets - 1.545 × total assets turnover (1 year before bankruptcy); Z = -0.0119 × quick ratio - 0.1433 × operating profit to total assets - 0.0227 × value added to total assets (2 years before bankruptcy); and Z = -0.3533 × net profit to total assets - 0.1336 × patients receivable turnover - 0.04301 × added value per adjusted patient + 0.00119 × average daily inpatient census (3 years before bankruptcy). 3. The discriminant power of the functions in 1, 2, and 3 years before bankruptcy was 77.42%, 79.03%, and 82.25%, respectively.
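
For illustration, the one-year-ahead discriminant function quoted above can be written directly as a small Python function; the coefficients come from the abstract, while the indicator values in the example call are hypothetical.

```python
def bankruptcy_z_1yr(quick_ratio: float,
                     normal_profit_to_total_assets: float,
                     total_assets_turnover: float) -> float:
    """Z one year before bankruptcy, using the coefficients quoted in the abstract:
    Z = -0.0166*quick ratio - 0.1356*normal profit to total assets
        - 1.545*total assets turnover."""
    return (-0.0166 * quick_ratio
            - 0.1356 * normal_profit_to_total_assets
            - 1.545 * total_assets_turnover)

# Example with hypothetical financial-indicator values
z = bankruptcy_z_1yr(quick_ratio=85.0,
                     normal_profit_to_total_assets=-3.2,
                     total_assets_turnover=1.1)
print(z)
```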


NIRS AS AN ESSENTIAL TOOL IN FOOD SAFETY PROGRAMS: FEED INGREDIENTS PREDICTION IN COMMERCIAL COMPOUND FEEDING STUFFS

  • Varo, Ana-Garrido; Maria Dolores Perez Marin; Cabrera, Augusto-Gomez; Jose Emilio Guerrero Ginel; Felix de Paz; Natividad Delgado
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1153-1153 / 2001
  • Directive 79/373/EEC on the marketing of compound feeding stuffs provided for a flexible declaration arrangement confined to the indication of the feed materials without stating their quantity, and the possibility was retained to declare categories of feed materials instead of declaring the feed materials themselves. However, the BSE (Bovine Spongiform Encephalopathy) and dioxin crises have demonstrated the inadequacy of the current provisions and the need for detailed qualitative and quantitative information. On 10 January 2000 the Commission submitted to the Council a proposal for a Directive related to the marketing of compound feeding stuffs, and the Council adopted a Common Position (EC No. 6/2001) published in the Official Journal of the European Communities of 2.2.2001. According to the Common Position (EC No. 6/2001), the feed materials contained in compound feeding stuffs intended for animals other than pets must be declared according to their percentage by weight, in descending order of weight and within the following brackets (I: >30%; II: >15 to 30%; III: >5 to 15%; IV: 2% to 5%; V: <2%). For practical reasons, it shall be allowed that the declarations of feed materials included in compound feeding stuffs are provided on an ad hoc label or accompanying document. However, documents alone will not be sufficient to restore public confidence in the animal feed industry. The objective of the present work is to obtain calibration equations for the instantaneous and simultaneous prediction of the chemical composition and the percentage of ingredients of unground compound feeding stuffs. A total of 287 samples of unground compound feeds marketed in Spain were scanned in a FOSS-NIR Systems 6500 monochromator using a rectangular cup with a quartz window (16 × 3.5 cm). Calibration equations were obtained for the prediction of moisture (R² = 0.84, SECV = 0.54), crude protein (R² = 0.96, SECV = 0.75), fat (R² = 0.86, SECV = 0.54), crude fiber (R² = 0.97, SECV = 0.63), and ash (R² = 0.86, SECV = 0.83). The same set of spectroscopic data was used to predict the ingredient composition of the compound feeds. The preliminary results show that NIRS has an excellent ability (r² ≥ 0.9; RPD ≥ 3) for the prediction of the percentage of inclusion of alfalfa, sunflower meal, gluten meal, sugar beet pulp, palm meal, poultry meal, total meat meal (meat and bone meal plus poultry meal), and whey. Other equations with a good predictive performance (R² ≥ 0.7; 2 ≤ RPD ≤ 3) were obtained for the prediction of soya bean meal, corn, molasses, animal fat, and lupin meal. The equations obtained for the prediction of the other constituents (barley, bran, rice, manioc, meat and bone meal, fish meal, calcium carbonate, ammonium chloride, and salt) are accurate enough to fulfil the requirements laid down by the Common Position (EC No. 6/2001). NIRS technology should be considered an essential tool in Food Safety Programs.
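
The abstract reports calibration statistics (R², SECV, RPD) but does not name the regression algorithm; partial least squares (PLS) regression is a common choice for NIRS calibration, so the sketch below shows how such statistics might be computed with scikit-learn on hypothetical spectra and reference values.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(287, 700))                     # hypothetical NIR spectra (absorbances)
y = rng.normal(loc=18.0, scale=2.0, size=287)       # hypothetical reference values, e.g. crude protein (%)

pls = PLSRegression(n_components=10)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()  # cross-validated predictions

residuals = y - y_cv
secv = np.sqrt(np.mean(residuals ** 2))             # SECV approximated as cross-validated RMSE
r2 = 1 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)
rpd = y.std(ddof=1) / secv                          # RPD >= 3 is treated above as excellent
print(r2, secv, rpd)
```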


A Study on Relationship between Physical Elements and Tennis/Golf Elbow

  • Choi, Jungmin;Park, Jungwoo;Kim, Hyunseung
    • Journal of the Ergonomics Society of Korea / v.36 no.3 / pp.183-196 / 2017
  • Objective: The purpose of this research was to assess the agreement between job physical risk factor analysis by ergonomists using ergonomic methods and physical examinations made by occupational physicians on the presence of musculoskeletal disorders of the upper extremities. Background: Ergonomics is the systematic application of principles concerned with the design of devices and working conditions for enhancing human capabilities and optimizing working and living conditions. Proper ergonomic design is necessary to prevent injuries and physical and emotional stress. The major types of ergonomic injuries and incidents are cumulative trauma disorders (CTDs), acute strains, sprains, and system failures. Minimizing the use of excessive force and awkward postures can help to prevent such injuries. Method: Initial data were collected as part of a larger study by the University of Utah Ergonomics and Safety program field data collection teams and medical data collection teams from the Rocky Mountain Center for Occupational and Environmental Health (RMCOEH). Subjects included 173 male and female workers: 83 at Beehive Clothing (a clothing plant), 74 at Autoliv (a plant making air bags for vehicles), and 16 at Deseret Meat (a meat-processing plant). Posture and effort levels were analyzed using a software program developed at the University of Utah (Utah Ergonomic Analysis Tool). The Ergonomic Epicondylitis Model (EEM) was developed to assess the risk of epicondylitis from observable job physical factors. The model considers five job risk factors: (1) intensity of exertion, (2) forearm rotation, (3) wrist posture, (4) elbow compression, and (5) speed of work. Qualitative ratings of these physical factors were determined during video analysis. Personal variables were also investigated to study their relationship with epicondylitis. Logistic regression models were used to determine the association between risk factors and symptoms of epicondyle pain. Results: The results of this study indicate that gender, smoking status, and BMI have an effect on the risk of epicondylitis, but there is not a statistically significant relationship between the EEM and epicondylitis. Conclusion: This research studied the relationship between the Ergonomic Epicondylitis Model (EEM) and the occurrence of epicondylitis. The model was not predictive of epicondylitis. However, it is clear that epicondylitis was associated with some individual risk factors such as smoking status, gender, and BMI. Based on the results, future research may identify factors that increase the risk of epicondylitis. Application: Although this research used a combination of questionnaire, ergonomic job analysis, and medical job analysis to verify risk factors related to epicondylitis, there are limitations. The sample size was not very large, as only 173 subjects were available for this study. Also, it was conducted in only 3 facilities: a plant making air bags for vehicles, a meat-processing plant, and a clothing plant in Utah. If working conditions in other kinds of facilities are considered, results may improve. Therefore, future research should perform the analysis with additional subjects in different kinds of facilities. Repetition and duration of a task were not considered as risk factors in this research. These two factors could be associated with epicondylitis, so it could be important to include them in future research.
Psychosocial data and workplace conditions (e.g., low temperature) were also noted during data collection, and could be used to further study the prevalence of epicondylitis. Univariate analysis methods could be used for each variable of EEM. This research was performed using multivariate analysis. Therefore, it was difficult to recognize the different effect of each variable. Basically, the difference between univariate and multivariate analysis is that univariate analysis deals with one predictor variable at a time, whereas multivariate analysis deals with multiple predictor variables combined in a predetermined manner. The univariate analysis could show how each variable is associated with epicondyle pain. This may allow more appropriate weighting factors to be determined and therefore improve the performance of the EEM.
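
As a hedged sketch of the modelling approach described above (logistic regression of epicondyle-pain symptoms on the EEM score and personal variables), the snippet below fits such a model with statsmodels on synthetic data; the variable names, sample values, and generating coefficients are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 173                                   # same sample size as the study, but synthetic data
df = pd.DataFrame({
    "eem_score": rng.uniform(1, 5, n),    # hypothetical EEM rating
    "bmi": rng.normal(26, 4, n),
    "smoker": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
})
# Synthetic outcome: personal factors drive risk, EEM deliberately uninformative,
# loosely mirroring the reported result.
logit = 0.08 * df["bmi"] + 0.9 * df["smoker"] + 0.5 * df["female"] - 3.5
df["epicondylitis"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["eem_score", "bmi", "smoker", "female"]])
result = sm.Logit(df["epicondylitis"], X).fit(disp=0)
print(result.summary())                   # odds ratios: np.exp(result.params)
```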

Factors Predicting the Development of Radiation Pneumonitis in the Patients Receiving Radiation Therapy for Lung Cancer (방사선 치료를 시행 받은 폐암 환자에서 방사선 폐렴의 발생에 관한 예측 인자)

  • An, Jin Yong;Lee, Yun Sun;Kwon, Sun Jung;Park, Hee Sun;Jung, Sung Soo;Kim, Jin whan;Kim, Ju Ock;Jo, Moon Jun;Kim, Sun Young
    • Tuberculosis and Respiratory Diseases / v.56 no.1 / pp.40-50 / 2004
  • Background : Radiation pneumonitis (RP) is the major serious complication of thoracic irradiation treatment. In this study, we attempted to retrospectively evaluate the long-term prognosis of patients who experienced acute RP and to identify factors that might allow prediction of RP. Methods : Of the 114 lung cancer patients who underwent thoracic radiotherapy between December 2000 and December 2002, we performed the analysis on a database of the 90 patients who could be evaluated. Results : Of the 44 patients (48.9%) who experienced clinical RP in this study, the RP was mild in 33 (36.6%) and severe in 11 (12.3%). All cases of severe RP were treated with corticosteroids. The median starting corticosteroid dose was 34 mg (30~40) and the median treatment duration was 68 days (8~97). The median survival time of the 11 patients who experienced severe RP was significantly poorer than that of the mild RP group (p=0.046). A higher total radiation dose (≥60 Gy) was significantly associated with the development of RP (p=0.001). The incidence of RP did not correlate with ECOG performance status, pulmonary function tests, age, cell type, smoking history, radiotherapy combined with chemotherapy, or once-daily radiotherapy dose fraction. Also, serum albumin level and uric acid level at the onset of RP did not influence the risk of severe RP in our study. Conclusion : Only a higher total radiation dose (≥60 Gy) was a significant risk factor predictive of RP. Severe RP was also an adverse prognostic factor.
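
The abstract does not state which test linked total dose to RP development; as one plausible illustration, a chi-square test on a 2×2 table of dose group versus RP occurrence could look like the following, with purely hypothetical counts.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: dose group; columns: RP, no RP (hypothetical counts, illustration only)
table = np.array([[30, 18],   # total dose >= 60 Gy
                  [14, 28]])  # total dose <  60 Gy
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```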

Validation of QF-PCR for Rapid Prenatal Diagnosis of Common Chromosomal Aneuploidies in Korea

  • Han, Sung-Hee;Ryu, Jae-Song;An, Jeong-Wook;Park, Ok-Kyoung;Yoon, Hye-Ryoung;Yang, Young-Ho;Lee, Kyoung-Ryul
    • Journal of Genetic Medicine / v.7 no.1 / pp.59-66 / 2010
  • Purpose: Quantitative fluorescent polymerase chain reaction (QF-PCR) allows for the rapid prenatal diagnosis of common aneuploidies. The main advantages of this assay are its low cost, speed, and automation, allowing for large-scale application. However, despite these advantages, it is not a routine method for prenatal aneuploidy screening in Korea. Our objective in the present study was to validate the performance of QF-PCR using short tandem repeat (STR) markers in a Korean population as a means of rapid prenatal diagnosis. Material and Methods: A QF-PCR assay using an Elucigene kit (Gen-Probe, Abingdon, UK), containing 20 STR markers located on chromosomes 13, 18, 21, X, and Y, was performed on 847 amniotic fluid (AF) samples referred for prenatal aneuploidy screening from 2007 to 2009. The results were then compared with those obtained using conventional cytogenetic analysis. To evaluate the informativity of the STR markers, the heterozygosity index of each marker was determined in all samples. Results: Aneuploidies of three autosomes (13, 18, and 21) and of the X and Y chromosomes were detected in 19 cases (2.2%, 19/847) after QF-PCR analysis of the 847 AF samples. These results are identical to those of conventional cytogenetic analysis, with a 100% positive predictive value. However, cytogenetic analysis found 7 cases (0.8%, 7/847) with 5 balanced and 2 unbalanced chromosomal abnormalities that were not detected by QF-PCR. The STR markers had a slightly lower heterozygosity index (average: 0.76) than that reported in Caucasians (average: 0.80). Submicroscopic duplication of the D13S634 marker, which might be a unique finding in Koreans, was detected in 1.4% (12/847) of the samples in the present study. Conclusion: The QF-PCR assay for prenatal aneuploidy screening was validated in our institution and proved to be efficient and reliable. However, we suggest that each laboratory must perform an independent validation test for each STR marker in order to develop guidelines for interpreting the results, and must integrate QF-PCR into the routine cytogenetic laboratory workflow.
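
The study computes a heterozygosity index per STR marker to gauge informativity; the exact estimator is not specified, so the sketch below assumes observed heterozygosity (the proportion of samples with two distinct alleles) and uses hypothetical genotype calls.

```python
def heterozygosity_index(genotypes):
    """Proportion of samples carrying two distinct alleles at one STR marker."""
    heterozygous = sum(1 for allele1, allele2 in genotypes if allele1 != allele2)
    return heterozygous / len(genotypes)

# Hypothetical genotype calls (repeat numbers) for one chromosome-21 STR marker
calls = [(28, 30), (29, 29), (28, 31), (30, 32), (29, 30)]
print(heterozygosity_index(calls))   # 0.8 in this toy example
```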

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.239-251 / 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction. However, these statistical methods have not produced superior performance. In recent years, machine learning techniques have been widely used in stock market prediction, including artificial neural networks, SVM, and genetic algorithms. In particular, a case-based reasoning method known as k-nearest neighbor is also widely used for stock price prediction. Case-based reasoning retrieves several similar cases from previous cases when a new problem occurs and combines the class labels of the similar cases to create a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space and always selects the same number of neighbors rather than the best similar neighbors for the target case, so it may have to take more cases into account even when fewer applicable cases exist for the subject. Second, it may select neighbors that are far away from the target case. Thus, case-based reasoning does not guarantee an optimal pseudo-neighborhood for various target cases, and predictability can be degraded by deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability through k-nearest neighbor and compares the predictability of k-nearest neighbor with that of the random walk model according to the size of the learning data and the number of neighbors. In this study, Samsung Electronics stock prices were predicted by dividing the learning dataset into two types. For the prediction of the next day's closing price, we used four variables: opening price, daily high, daily low, and daily close. In the first experiment, data from January 1, 2000 to December 31, 2017 were used for the learning process. In the second experiment, data from January 1, 2015 to December 31, 2017 were used for the learning process. The test data covered January 1, 2018 to August 31, 2018 for both experiments. We compared the performance of k-NN with the random walk model using the two learning datasets. The mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN in the first experiment, when the learning data was small. However, the MAPE for the random walk model was 1.3497 and for k-NN was 1.2928 in the second experiment, when the learning data was large. These results show that the prediction power is higher when more learning data are used than when less learning data are used. This paper also shows that k-NN generally produces better predictive power than the random walk model for larger learning datasets but not when the learning dataset is relatively small. Future studies need to consider macroeconomic variables related to stock price forecasting in addition to opening price, low price, high price, and closing price. Also, to produce better results, it is recommended that the k-nearest neighbor method find nearest neighbors using a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
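
A minimal sketch of the comparison described above, assuming a k-NN regressor on (open, high, low, close) features to predict the next day's close and a random-walk benchmark evaluated by MAPE; the price series below is synthetic, not the Samsung Electronics data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(3)
close = 50_000 + np.cumsum(rng.normal(0, 300, 1_000))        # synthetic closing prices
features = np.column_stack([
    close * rng.uniform(0.99, 1.01, 1_000),                  # open (synthetic)
    close * 1.01,                                            # high
    close * 0.99,                                            # low
    close,                                                   # close
])
target = np.roll(close, -1)[:-1]                             # next day's close
features = features[:-1]

split = 800                                                  # chronological train/test split
knn = KNeighborsRegressor(n_neighbors=5).fit(features[:split], target[:split])
pred_knn = knn.predict(features[split:])
pred_rw = features[split:, 3]                                # random walk: predict today's close

print("k-NN MAPE:", mean_absolute_percentage_error(target[split:], pred_knn))
print("RW   MAPE:", mean_absolute_percentage_error(target[split:], pred_rw))
```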

Comparative assessment and uncertainty analysis of ensemble-based hydrologic data assimilation using airGRdatassim (airGRdatassim을 이용한 앙상블 기반 수문자료동화 기법의 비교 및 불확실성 평가)

  • Lee, Garim;Lee, Songhee;Kim, Bomi;Woo, Dong Kook;Noh, Seong Jin
    • Journal of Korea Water Resources Association / v.55 no.10 / pp.761-774 / 2022
  • Accurate hydrologic prediction is essential to analyze the effects of drought, flood, and climate change on flow rates, water quality, and ecosystems. Disentangling the uncertainty of the hydrological model is one of the important issues in hydrology and water resources research. Hydrologic data assimilation (DA), a technique that updates the states or parameters of a hydrological model to produce the most likely estimates of the model's initial conditions, is one way to minimize uncertainty in hydrological simulations and improve predictive accuracy. In this study, two ensemble-based sequential DA techniques, the ensemble Kalman filter and the particle filter, are comparatively analyzed for daily discharge simulation at the Yongdam catchment using airGRdatassim. The results showed that the Kling-Gupta efficiency (KGE) improved from 0.799 in the open-loop simulation to 0.826 with the ensemble Kalman filter and to 0.933 with the particle filter. In addition, we analyzed the effects of hyper-parameters related to the data assimilation methods, such as the precipitation and potential evaporation forcing error parameters and the selection of perturbed and updated states. Under the forcing error conditions, the particle filter was superior to the ensemble Kalman filter in terms of the KGE index. The size of the optimal forcing noise was relatively smaller for the particle filter than for the ensemble Kalman filter. In addition, with more state variables included in the updating step, the performance of data assimilation improved, indicating that adequate selection of updated states can be considered a hyper-parameter. The simulation experiments in this study imply that DA hyper-parameters need to be carefully optimized to exploit the potential of DA methods.
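
The evaluation metric used above, the Kling-Gupta efficiency (KGE), follows a standard formula; a minimal Python implementation on hypothetical observed and simulated discharge is sketched below (airGRdatassim itself is an R package, so this is not its API).

```python
import numpy as np

def kling_gupta_efficiency(sim: np.ndarray, obs: np.ndarray) -> float:
    """KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2), where r is the
    correlation, alpha = std(sim)/std(obs), and beta = mean(sim)/mean(obs)."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

obs = np.array([12.0, 15.5, 30.2, 22.1, 18.4])   # hypothetical observed discharge (m^3/s)
sim = np.array([11.2, 16.0, 27.5, 23.0, 17.1])   # hypothetical simulated discharge
print(kling_gupta_efficiency(sim, obs))
```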

Development of Deep-Learning-Based Models for Predicting Groundwater Levels in the Middle-Jeju Watershed, Jeju Island (딥러닝 기법을 이용한 제주도 중제주수역 지하수위 예측 모델개발)

  • Park, Jaesung;Jeong, Jiho;Jeong, Jina;Kim, Ki-Hong;Shin, Jaehyeon;Lee, Dongyeop;Jeong, Saebom
    • The Journal of Engineering Geology / v.32 no.4 / pp.697-723 / 2022
  • Data-driven models to predict groundwater levels 30 days in advance were developed for 12 groundwater monitoring stations in the middle-Jeju watershed, Jeju Island. Stacked long short-term memory (stacked-LSTM), a deep learning technique suitable for time series forecasting, was used for model development. Daily time series data from 2001 to 2022 for precipitation, groundwater usage amount, and groundwater level were considered. Various models were proposed that used different combinations of the input data types and varying lengths of previous time series data for each input variable. A general procedure for deep-learning-based model development is suggested based on consideration of the comparative validation results of the tested models. A model using precipitation, groundwater usage amount, and previous groundwater level data as input variables outperformed any model neglecting one or more of these data categories. Using extended sequences of these past data improved the predictions, possibly owing to the long delay time between precipitation and groundwater recharge, which results from the deep groundwater level in Jeju Island. However, limiting the range of considered groundwater usage data that significantly affected the groundwater level fluctuation (rather than using all the groundwater usage data) improved the performance of the predictive model. The developed models can predict the future groundwater level based on the current amount of precipitation and groundwater use. Therefore, the models provide information on the soundness of the aquifer system, which will help to prepare management plans to maintain appropriate groundwater quantities.
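
As a hedged sketch of the stacked-LSTM setup described above (stacked LSTM layers mapping a window of past precipitation, groundwater usage, and groundwater level to the level 30 days ahead), the following Keras snippet uses illustrative layer sizes, window length, and random placeholder data rather than the study's configuration.

```python
import numpy as np
import tensorflow as tf

seq_len, n_features = 180, 3    # e.g. 180 past days of precipitation, usage, groundwater level
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, n_features)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # first LSTM layer of the stack
    tf.keras.layers.LSTM(32),                           # second LSTM layer
    tf.keras.layers.Dense(1),                           # groundwater level 30 days ahead
])
model.compile(optimizer="adam", loss="mse")

# Random placeholder data in place of the monitoring-station time series
X = np.random.rand(256, seq_len, n_features).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```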