• Title/Summary/Keyword: Database Quality

1,373 search results (processing time: 0.027 seconds)

A proposal of spirometry reference equations for Korean workers

  • Yonglim Won;Hwa-Yeon Lee
    • Annals of Occupational and Environmental Medicine
    • /
    • v.34
    • /
    • pp.14.1-14.14
    • /
    • 2022
  • Background: Although spirometry results can be interpreted differently depending on the reference equation used, there are no established criteria for selecting reference equations as part of the special health examinations for Korean workers. Thus, it is essential to examine the current use of reference equations in Korea, quantify their impact on result interpretation, and propose reference equations suitable for Korean workers, while also considering the environmental conditions of special health examination facilities. Methods: A total of 213,640 results from the special health examination database were analyzed to identify how the ratio of measured to reference lung capacity values in Korean workers changes with age or height, and how the agreement of interpretations changes with the reference equation used. Data from 238 organizations that participated in the 2018-2019 quality control assessment by the Korea Occupational Safety and Health Agency were used to identify the spirometer model and reference equations used in each special health examination facility. Results: Korean special health examination facilities used six reference equations, and the rate of normal or abnormal ventilatory diagnoses varied with the reference equation used. The prediction curve of the Global Lung Function Initiative 2012-Northeast Asian (GLI2012) equation most closely resembled that of the normal group, but the spirometer model most commonly used by examination facilities was not compliant with the GLI2012 equation. With a scaling factor of 0.95 applied to the Dr. Choi equation, the agreement with the GLI2012 equation was > 0.81 for both men and women. Conclusions: We propose the GLI2012 equation as the reference equation for spirometry in Korean workers. The GLI2012 equation exhibited the most suitable prediction curve against the normal lung function group. For devices that cannot use the GLI2012 equation, we recommend applying a scaling factor of 0.95 to the Dr. Choi equation.
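The interpretation shift this abstract describes (a scaled reference value can flip a borderline result) can be sketched as follows. This is a minimal illustration: the linear form, its coefficients, and the 0.8 lower-limit ratio are hypothetical placeholders, not the published Dr. Choi or GLI2012 equations.

```python
def predicted_fvc(age_years, height_cm, a=0.05, b=-0.02, c=-3.0):
    """Illustrative linear reference equation: FVC = a*height + b*age + c.
    The coefficients are hypothetical, not the published values."""
    return a * height_cm + b * age_years + c

def interpret(measured, predicted, lower_limit_ratio=0.8):
    """Classify as 'normal' when measured/predicted meets the cutoff."""
    return "normal" if measured / predicted >= lower_limit_ratio else "abnormal"

# Scaling the prediction by 0.95 lowers the reference value, which can
# flip a borderline interpretation from abnormal to normal.
pred = predicted_fvc(age_years=45, height_cm=170)   # 4.6 L with these coefficients
print(interpret(3.6, pred))          # abnormal (3.6/4.6 is below 0.8)
print(interpret(3.6, 0.95 * pred))   # normal   (3.6/4.37 is above 0.8)
```

This is why the paper quantifies agreement between equations: the same measured value can be read differently under each reference curve.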

Effects on Clinical Judgement after Domestic Simulation (국내 시뮬레이션 실습 후 임상해결력에 대한 효과)

  • Seung-Ok Shin
    • Journal of the Health Care and Life Science
    • /
    • v.9 no.2
    • /
    • pp.267-273
    • /
    • 2021
  • The purpose of this study was to conduct a systematic literature review of the effect of simulation practice on clinical judgment among nursing students in Korea. The subjects were papers published from 2011 to February 2021; the Korean databases KmBase, Korea Research Information (KISS), and Science and Technology Information (NDSL) were searched using terms including 'nursing debriefing'. Of a total of 279 studies, 3 were finally selected. All three were non-randomized quasi-experimental studies, and the quality of the literature was confirmed. The review found that debriefing after simulation-based classes had statistically significant effects on clinical judgment. To establish the effect of simulation-based education, systematically designed intervention studies are needed in the future, and comparative studies of field practice versus simulation are also suggested.

Development and validation of novel simple prognostic model for predicting mortality in Korean intensive care units using national insurance claims data

  • Ah Young Leem;Soyul Han;Kyung Soo Chung;Su Hwan Lee;Moo Suk Park;Bora Lee;Young Sam Kim
    • The Korean journal of internal medicine
    • /
    • v.39 no.4
    • /
    • pp.625-639
    • /
    • 2024
  • Background/Aims: Intensive care unit (ICU) quality is largely determined by the mortality rate. Therefore, we aimed to develop and validate a novel prognostic model for predicting mortality in Korean ICUs using national insurance claims data. Methods: Data were obtained from the health insurance claims database maintained by the Health Insurance Review and Assessment Service of South Korea. From patients who underwent the third ICU adequacy evaluation, 42,489 cases were enrolled and randomly divided into derivation and validation cohorts. We analyzed whether models derived from the derivation cohort accurately predicted death in the validation cohort. The models were verified using data from one general and two tertiary hospitals. Results: Two severity correction models were created from the derivation cohort by applying variables selected through statistical analysis and clinical consensus and performing multiple logistic regression analysis. Model 1 included six categorical variables (age, sex, Charlson comorbidity index, ventilator use, hemodialysis or continuous renal replacement therapy, and vasopressor use). Model 2 additionally included the presence or absence of ICU specialists and nursing grades. In external validation, the performance of models 1 and 2 for predicting in-hospital and ICU mortality was not inferior to that of pre-existing scoring systems. Conclusions: The novel, simple models could predict in-hospital and ICU mortality and were not inferior to pre-existing scoring systems.
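The Model-1-style score described above can be sketched as a multiple logistic regression over 0/1 indicator variables. The coefficient values below are hypothetical placeholders (the abstract does not report the fitted coefficients); only the functional form is standard.

```python
import math

# Hypothetical coefficients for a Model-1-style mortality score; the
# paper's fitted values are not given in the abstract.
COEFS = {
    "intercept": -3.0,
    "age_over_65": 0.8,
    "male": 0.1,
    "charlson_high": 0.7,
    "ventilator": 1.2,
    "rrt": 0.9,         # hemodialysis or continuous renal replacement therapy
    "vasopressor": 1.1,
}

def mortality_probability(patient):
    """Multiple logistic regression: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))),
    with each x_i a 0/1 indicator for one categorical variable."""
    z = COEFS["intercept"] + sum(
        COEFS[k] * patient.get(k, 0) for k in COEFS if k != "intercept")
    return 1.0 / (1.0 + math.exp(-z))

low_risk = mortality_probability({"male": 1})
high_risk = mortality_probability({"age_over_65": 1, "male": 1,
                                   "ventilator": 1, "vasopressor": 1})
```

With indicator predictors like these, the model reduces to a simple additive score passed through the logistic function, which is what makes such claims-data models easy to apply at the bedside.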

CNN-ViT Hybrid Aesthetic Evaluation Model Based on Quantification of Cognitive Features in Images (이미지의 인지적 특징 정량화를 통한 CNN-ViT 하이브리드 미학 평가 모델)

  • Soo-Eun Kim;Joon-Shik Lim
    • Journal of IKEEE
    • /
    • v.28 no.3
    • /
    • pp.352-359
    • /
    • 2024
  • This paper proposes a CNN-ViT hybrid model that automatically evaluates the aesthetic quality of images by combining local and global features. In this approach, a CNN is used to extract local features such as color and object placement, while a ViT analyzes the aesthetic value of the image by reflecting global features. Color composition is derived by extracting the primary colors from the input image, creating a color palette, and passing it through the CNN. The rule of thirds is quantified by calculating how closely objects in the image are positioned to the thirds intersection points. These values provide the model with critical information about the color balance and spatial harmony of the image. The model then analyzes the relationship between these factors to predict scores that align closely with human judgment. Experimental results on the AADB image database show that the proposed model achieved a Spearman rank correlation coefficient (SRCC) of 0.716, indicating more consistent rank predictions, and a Pearson linear correlation coefficient (LCC) of 0.72, which is 2-4% higher than existing models.

Changes and Improvements of the Standardized Eddy Covariance Data Processing in KoFlux (표준화된 KoFlux 에디 공분산 자료 처리 방법의 변화와 개선)

  • Kang, Minseok;Kim, Joon;Lee, Seung-Hoon;Kim, Jongho;Chun, Jung-Hwa;Cho, Sungsik
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.20 no.1
    • /
    • pp.5-17
    • /
    • 2018
  • The standardized eddy covariance flux data processing in KoFlux has been updated, and its database has been amended accordingly. KoFlux data users have not been informed properly regarding these changes and the likely impacts on their analyses. In this paper, we have documented how the current structure of data processing in KoFlux has been established through the changes and improvements to ensure transparency, reliability and usability of the KoFlux database. Due to increasing diversity and complexity of flux site instrumentation and organization, we have re-implemented the previously ignored or simplified procedures in data processing (e.g., frequency response correction, stationarity test), and added new methods for CH₄ flux gap-filling and CO₂ flux correction and partitioning. To evaluate the effects of the changes, we processed the data measured at a flat and homogeneous paddy field (i.e., HPK) and a deciduous forest in complex and heterogeneous topography (i.e., GDK), and quantified the differences. Based on the results from our overall assessment, it is confirmed that (1) the frequency response correction (HPK: 11~18% of biases for annually integrated values, GDK: 6~10%) and the stationarity test (HPK: 4~19% of biases for annually integrated values, GDK: 9~23%) are important for quality control and (2) the minimization of the missing data and the choice of the appropriate driver (rather than the choice of the gap-filling method) are important to reduce the uncertainty in gap-filled fluxes. These results suggest the future directions for the data processing technology development to ensure the continuity of the long-term KoFlux database.
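The gap-filling step discussed above can be sketched with the simplest common approach, mean diurnal variation: a gap is filled with the mean of available values at the same hour of day. This stand-in only illustrates the idea; KoFlux's actual gap-filling methods (e.g., for CH₄ fluxes) are more elaborate.

```python
def mean_diurnal_gap_fill(series, period=24):
    """Fill missing values (None) in an hourly flux series with the mean of
    the available values at the same hour of day. A minimal stand-in for
    the gap-filling step in eddy covariance processing."""
    filled = list(series)
    for hour in range(period):
        vals = [v for i, v in enumerate(series)
                if i % period == hour and v is not None]
        mean = sum(vals) / len(vals) if vals else None
        for i in range(hour, len(series), period):
            if filled[i] is None:
                filled[i] = mean
    return filled

# Two "days" with a 3-step period to keep the example small:
print(mean_diurnal_gap_fill([1, None, 3, 5, 2, None], period=3))
```

The abstract's conclusion, that minimizing missing data and choosing the right driver matters more than the particular gap-filling method, is about exactly this step: whatever fills the gap, its uncertainty grows with the fraction of data that had to be reconstructed.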

Reconstruction of Metabolic Pathway for the Chicken Genome (닭 특이 대사 경로 재확립)

  • Kim, Woon-Su;Lee, Se-Young;Park, Hye-Sun;Baik, Woon-Kee;Lee, Jun-Heon;Seo, Seong-Won
    • Korean Journal of Poultry Science
    • /
    • v.37 no.3
    • /
    • pp.275-282
    • /
    • 2010
  • The chicken is an important livestock species, both as a valuable biomedical model and as food for humans, and there is a strong rationale for improving our understanding of the metabolism and physiology of this organism. The first draft of the chicken genome assembly was released in 2004, enabling elaboration of the linkage between genetic and metabolic traits of the chicken. The objectives of this study were thus to reconstruct the metabolic pathways of the chicken genome and to construct a chicken-specific pathway genome database (PGDB). We developed a comprehensive genome database for the chicken by integrating all known annotations for chicken genes and proteins using a pipeline written in Perl. Based on these comprehensive genome annotations, metabolic pathways of the chicken genome were reconstructed using the PathoLogic algorithm in the Pathway Tools software. We identified a total of 212 metabolic pathways, 2,709 enzymes, 71 transporters, 1,698 enzymatic reactions, 8 transport reactions, and 1,360 compounds in the current chicken genome build, Gallus_gallus-2.1. Comparative metabolic analysis with the human, mouse, and cattle genomes revealed that core metabolic pathways are highly conserved in the chicken genome. The results indicated that the quality of the assembly and annotations of the chicken genome needs to be improved and that more research is required to improve our understanding of the function of genes and metabolic pathways of avian species. We conclude that the chicken PGDB is useful for studies on avian and chicken metabolism and provides a platform for comparative genomic and metabolic analysis in animal biology and biomedicine.

A Comprehensive Computer Program for Monitor Unit Calculation and Beam Data Management: Independent Verification of Radiation Treatment Planning Systems (방사선치료계획시스템의 독립적 검증을 위한 선량 계산 및 빔데이터 관리 프로그램)

  • Kim, Hee-Jung;Park, Yang-Kyun;Park, Jong-Min;Choi, Chang-Heon;Kim, Jung-In;Lee, Sang-Won;Oh, Heon-Jin;Lim, Chun-Il;Kim, Il-Han;Ye, Sung-Joon
    • Progress in Medical Physics
    • /
    • v.19 no.4
    • /
    • pp.231-240
    • /
    • 2008
  • We developed a user-friendly program to independently verify monitor units (MUs) calculated by radiation treatment planning systems (RTPS), as well as to manage the beam database in the clinic. The off-axis factor, beam hardening effect, inhomogeneity correction, and depth corrections were incorporated into the program algorithm to improve the accuracy of calculated MUs. The beam database in the program was designed to use measured data from routine quality assurance (QA) processes for timely updates. To enhance user convenience, a graphical user interface (GUI) was developed using Visual Basic for Applications. To evaluate the accuracy of the program for various treatment conditions, MU comparisons were made for 213 phantom cases and for 108 cases from 17 patients treated by 3D conformal radiation therapy. The MUs calculated by the program and by the RTPS agreed within ±3% for the phantom and ±5% for the patients, except in cases of extreme inhomogeneity. By using Visual Basic for Applications and a Microsoft Excel worksheet interface, the program can automatically generate a beam data book for clinical reference and a comparison template for beam data management. The program developed in this study can be used to verify the accuracy of an RTPS under various treatment conditions and thus can serve as a tool for routine RTPS QA as well as for independent MU checks. In addition, its beam database management interface can update beam data periodically and thus can be used to monitor multiple beam databases efficiently.


A Dynamic Management Method for FOAF Using RSS and OLAP cube (RSS와 OLAP 큐브를 이용한 FOAF의 동적 관리 기법)

  • Sohn, Jong-Soo;Chung, In-Jeong
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.39-60
    • /
    • 2011
  • Since the introduction of web 2.0 technology, social network services have been recognized as a foundation of important future information technology. The advent of web 2.0 changed who creates content: in the earlier web, content creators were service providers, whereas in the recent web they are service users. Users share experiences with other users and improve content quality, which has increased the importance of social networks. As a result, diverse forms of social network services have emerged from the relations and experiences of users. A social network is a network that constructs and expresses social relations among people who share interests and activities. Today's social network services are not merely confined to showing user interactions; they have developed to a level at which content generation and evaluation interact with each other. As the volume of content generated from social network services and the number of connections between users have drastically increased, social network extraction has become more complicated, and the following problems arise. First, the representational power of objects in the social network is insufficient. Second, the diverse connections among users cannot be fully expressed. Third, it is difficult to reflect dynamic changes in the social network caused by changes in user interests. Lastly, there is no method capable of integrating and processing data efficiently in a heterogeneous distributed computing environment. The first and last problems can be solved by using FOAF, a tool for describing ontology-based user profiles for the construction of social networks. Solving the second and third problems, however, requires a novel technique to reflect dynamic changes in user interests and relations.
In this paper, we propose a novel method that overcomes the above problems of existing social network extraction methods by applying FOAF (a vocabulary for describing user profiles) and RSS (a web content syndication format) to an OLAP system in order to dynamically update and manage FOAF. We exploit data interoperability, an important characteristic of FOAF, and use RSS to reflect changes over time and in user interests. RSS provides a standard vocabulary for distributing web site contents in the form of RDF/XML. We collect personal information and relations of users via FOAF, and user contents via RSS. Finally, the collected data is inserted into the database using a star schema. The proposed system generates an OLAP cube from the data in the database, which is processed by the 'Dynamic FOAF Management Algorithm'. The algorithm consists of two functions: find_id_interest() and find_relation(). Find_id_interest() extracts user interests during the input period, and find_relation() extracts users matching those interests. Finally, the proposed system reconstructs FOAF by reflecting the extracted relationships and interests of users. To justify the suggested idea, we present the implemented result together with its analysis. We used the C# language and an MS-SQL database, with FOAF and RSS data collected from livejournal.com as input. The results show that users' foaf:interest entries increased by an average of 19 percent over four weeks. In proportion to this change, the number of users' foaf:knows relations grew by an average of 9 percent over four weeks.
Because FOAF and RSS are widely supported base data formats in web 2.0 and social network services, the method has a definite advantage in utilizing user data distributed across diverse web sites and services, regardless of language and type of computer. Using the method suggested in this paper, services can better cope with rapid changes in user interests through the automatic application of FOAF.
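The two functions of the Dynamic FOAF Management Algorithm can be sketched as follows. The real implementation operates on an OLAP cube in MS-SQL; this flat Python version, with hypothetical item and user structures, only illustrates what each function extracts.

```python
from collections import Counter

def find_id_interest(rss_items, start, end, top_n=3):
    """Sketch of find_id_interest(): extract a user's dominant interests
    during the input period by counting tags on their RSS items.
    (The paper's version queries an OLAP cube; this is a flat stand-in.)"""
    counts = Counter()
    for item in rss_items:
        if start <= item["published"] <= end:
            counts.update(item["tags"])
    return [tag for tag, _ in counts.most_common(top_n)]

def find_relation(user_interests, other_users):
    """Sketch of find_relation(): other users sharing at least one
    interest with the given user, candidates for foaf:knows links."""
    return [uid for uid, interests in other_users.items()
            if set(interests) & set(user_interests)]

items = [{"published": 1, "tags": ["jazz", "databases"]},
         {"published": 2, "tags": ["jazz"]},
         {"published": 9, "tags": ["golf"]}]
print(find_id_interest(items, 0, 5))                      # interests in the period
print(find_relation(["jazz"], {"a": ["rock"], "b": ["jazz", "film"]}))
```

Rebuilding foaf:interest and foaf:knows from these two extractions on each pass is what lets the FOAF profile track interest drift over time, which static FOAF files cannot do.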

Development of processed food database using Korea National Health and Nutrition Examination Survey data (국민건강영양조사 자료를 이용한 가공식품 데이터베이스 구축)

  • Yoon, Mi Ock;Lee, Hyun Sook;Kim, Kirang;Shim, Jae Eun;Hwang, Ji-Yun
    • Journal of Nutrition and Health
    • /
    • v.50 no.5
    • /
    • pp.504-518
    • /
    • 2017
  • Purpose: The objective of this study was to develop a processed food database (DB) for estimating processed food intake in the Korean population using data from the Korea National Health and Nutrition Examination Survey (KNHANES). Methods: Analytical values of processed foods were collected from food composition tables of national institutions (Development Institute, Rural Development Administration), the US Department of Agriculture, and previously reported scientific journals. Missing or unavailable values were substituted, calculated, or imputed. The nutrient data covered 14 nutrients: energy, protein, carbohydrates, fat, calcium, phosphorus, iron, sodium, potassium, vitamin A, thiamin, riboflavin, niacin, and vitamin C. The processed food DB covered a total of 4,858 food items used in the KNHANES. Each analytical value per food item was selected systematically based on the priority criteria of the data sources. Results: The level 0 DB was developed from a list of 8,785 registered processed foods with recipes of ready-to-eat processed foods, one food composition table published by a national institution, and nutrition facts obtained directly from manufacturers or indirectly via web search. The level 1 DB included information on the 14 nutrients, and missing or unavailable values were substituted, calculated, or imputed at level 2. The level 3 DB evaluated the newly constructed nutrient DB for processed foods using the 2013 KNHANES. Mean intakes of total food and processed food were 1,551.4 g (males 1,761.8 g, females 1,340.8 g) and 129.4 g (males 169.9 g, females 88.8 g), respectively. Processed foods contributed from 5.0% (fiber) to 12.3% (protein) of nutrient intakes in the Korean population. Conclusion: The newly developed nutrient DB for processed foods contributes to accurate estimation of nutrient intakes in the Korean population. Consistent and regular updates and quality control of the DB are needed to obtain accurate estimates of usual intakes using KNHANES data.

Regeneration of a defective Railroad Surface for defect detection with Deep Convolution Neural Networks (Deep Convolution Neural Networks 이용하여 결함 검출을 위한 결함이 있는 철도선로표면 디지털영상 재 생성)

  • Kim, Hyeonho;Han, Seokmin
    • Journal of Internet Computing and Services
    • /
    • v.21 no.6
    • /
    • pp.23-31
    • /
    • 2020
  • This study was carried out to generate various images of railroad surfaces with random defects as training data to improve defect detection. Defects on the surface of railroads are caused by various factors, such as friction between track binding devices and adjacent tracks, and can cause accidents such as broken rails, so railroad maintenance for defects is necessary. Therefore, various studies on defect detection and inspection using image processing or machine learning on railway surface images have been conducted to automate railroad inspection and reduce maintenance costs. In general, the performance of image processing analysis methods and machine learning technology is affected by the quantity and quality of data. For this reason, some studies require specific devices or vehicles to acquire images of the track surface at regular intervals to obtain a database of various railway surface images. In contrast, in this study, to reduce the operating cost of image acquisition, we constructed a 'Defective Railroad Surface Regeneration Model' by applying methods presented in related studies of generative adversarial networks (GANs). We thus aimed to detect defects on railroad surfaces even without a dedicated database. The constructed model is designed to learn to generate railroad surfaces by combining different railroad surface textures with the original surface, considering the ground truth of the railroad defects. The generated images of the railroad surface were used as training data in a defect detection network based on a fully convolutional network (FCN). To validate its performance, we clustered and divided the railroad data into three subsets: one subset of original railroad texture images and the remaining two subsets of other railroad surface texture images. In the first experiment, we used only original texture images as the training set for the defect detection model. In the second experiment, we trained on generated images produced by combining the original images with a few railroad textures from the other images. Each defect detection model was evaluated in terms of intersection over union (IoU) and F1-score against the ground truths. As a result, the scores increased by about 10~15% when the generated images were used, compared to when only the original images were used. This proves that it is possible to detect defects using the existing data and a few different texture images, even for railroad surface images for which a dedicated training database has not been constructed.
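The two evaluation measures named in the abstract, IoU and F1-score, are standard for segmentation-style defect detection and can be computed directly from predicted and ground-truth defect pixel sets:

```python
def iou(pred, truth):
    """Intersection over union between predicted and ground-truth
    defect pixel sets."""
    if not pred and not truth:
        return 1.0
    return len(pred & truth) / len(pred | truth)

def f1(pred, truth):
    """F1 = 2TP / (2TP + FP + FN) over the same pixel sets."""
    tp = len(pred & truth)
    fp = len(pred - truth)
    fn = len(truth - pred)
    return 2 * tp / (2 * tp + fp + fn) if (tp or fp or fn) else 1.0

# Toy example: pixel coordinates flagged as defective.
pred = {(0, 0), (0, 1), (1, 1)}
truth = {(0, 1), (1, 1), (1, 2)}
print(iou(pred, truth))   # 2 shared pixels of 4 total -> 0.5
print(f1(pred, truth))    # 2*2 / (2*2 + 1 + 1) -> 0.666...
```

Note that F1 is always at least as large as IoU for the same prediction, so the two metrics moving together by 10~15%, as reported, is the expected pattern when detection genuinely improves.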