• Title, Summary, Keyword: logistic curve


Association between Sleep Duration, Dental Caries, and Periodontitis in Korean Adults: The Korea National Health and Nutrition Examination Survey, 2013~2014 (한국 성인에서 수면시간과 영구치 우식증 및 치주질환과의 관련성: 2013~2014 국민건강영양조사)

  • Lee, Da-Hyun;Lee, Young-Hoon
    • Journal of dental hygiene science
    • /
    • v.17 no.1
    • /
    • pp.38-45
    • /
    • 2017
  • We evaluated the association between sleep duration, dental caries, and periodontitis using representative nationwide data. We examined 8,356 subjects aged ≥19 years who participated in the sixth Korea National Health and Nutrition Examination Survey (2013~2014). Sleep duration was grouped into ≤5, 6, 7, 8, and ≥9 hours. The presence of dental caries was defined as caries in ≥1 permanent tooth on dental examination. Periodontal status was assessed using the community periodontal index (CPI), and a CPI code of ≥3 was defined as periodontitis. A chi-square test and multiple logistic regression analysis were used to determine statistical significance. Model 1 was adjusted for age and sex; model 2 for household income, educational level, and marital status in addition to model 1; and model 3 for smoking status, alcohol consumption, blood pressure level, fasting blood glucose level, total cholesterol level, and body mass index in addition to model 2. The prevalence of dental caries according to sleep duration showed a U-shaped curve of 33.4%, 29.4%, 28.4%, 29.4%, and 31.8% with ≤5, 6, 7, 8, and ≥9 hours of sleep, respectively. In the fully adjusted model 3, the risk of developing dental caries was significantly higher with ≤5 than with 7 hours of sleep (odds ratio, 1.23; 95% confidence interval, 1.06~1.43). The prevalence of periodontitis according to sleep duration showed a U-shaped curve of 34.4%, 28.6%, 28.1%, 31.3%, and 32.5%, respectively. The risk of periodontitis was significantly higher with ≥9 than with 7 hours of sleep in models 1 and 2, whereas this association lost significance in model 3. In this nationally representative sample, sleep duration was significantly associated with dental caries formation and weakly associated with periodontitis. Adequate sleep is required to prevent oral diseases such as dental caries and periodontitis.
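The sequential adjustment described above (models 1–3) is a standard covariate-adjusted logistic regression. The following is a minimal sketch, not the study's actual code, of how the fully adjusted model 3 and its odds ratios could be fitted; the file name and column names are illustrative assumptions.

```python
# Hedged sketch of a covariate-adjusted logistic regression (model 3 in the abstract):
# sleep-duration categories predicting dental caries, with 7 hours as the reference.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("knhanes_2013_2014.csv")          # hypothetical analysis file
df["sleep_cat"] = pd.Categorical(df["sleep_cat"],
                                 categories=["<=5", "6", "7", "8", ">=9"])

# Sleep category plus demographic, socioeconomic, and clinical covariates.
formula = ("caries ~ C(sleep_cat, Treatment(reference='7')) + age + sex + "
           "income + education + marital + smoking + alcohol + sbp + "
           "glucose + cholesterol + bmi")
fit = smf.logit(formula, data=df).fit()

# Exponentiate coefficients to obtain odds ratios with 95% confidence intervals.
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```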

The Prognostic Role of B-type Natriuretic Peptide in Acute Exacerbation of Chronic Obstructive Pulmonary Disease (만성폐쇄성폐질환의 급성 악화시 예후 인자로서의 혈중 B-type Natriuretic Peptide의 역할)

  • Lee, Ji Hyun;Oh, So Yeon;Hwang, Iljun;Kim, Okjun;Kim, Hyun Kuk;Kim, Eun Kyung;Lee, Ji-Hyun
    • Tuberculosis and Respiratory Diseases
    • /
    • v.56 no.6
    • /
    • pp.600-610
    • /
    • 2004
  • Background: The plasma B-type natriuretic peptide (BNP) concentration increases with the degree of pulmonary hypertension in patients with chronic respiratory disease. The aim of this study was to examine the prognostic role of BNP in acute exacerbations of chronic obstructive pulmonary disease (COPD). Method: We selected 67 patients who were admitted to our hospital because of an acute exacerbation of COPD. Their BNP levels were checked on admission at the Emergency Department, and their medical records were analyzed retrospectively. The patients were divided into two groups according to in-hospital mortality, and their medical history, comorbidity, exacerbation type, blood gas analysis, pulmonary function, APACHE II severity score, and plasma BNP level were compared. Results: Multiple logistic regression analysis identified three independent predictors of mortality: FEV1, APACHE II score, and plasma BNP level. The decedent group showed a lower FEV1 (28±7 vs. 37±15%, p=0.005), a higher APACHE II score (22.4±6.1 vs. 15.8±4.7, p=0.000), and a higher BNP level (201±116 vs. 77±80 pg/mL, p=0.000) than the survivor group. When the BNP cut-off level was set to 88 pg/mL using the receiver operating characteristic curve, the sensitivity was 90% and the specificity was 75% in differentiating between survivors and decedents. On Fisher's exact test, the odds ratio for mortality was 21.2 (95% CI 2.49 to 180.4) in the patients with a BNP level > 88 pg/mL. Conclusion: The plasma BNP level, like FEV1 and the APACHE II score, might be a predictor of mortality in acute exacerbations of COPD.
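Choosing a cut-off such as the 88 pg/mL threshold above is typically done by scanning the ROC curve for the point that best trades off sensitivity against specificity. A minimal sketch under that assumption, using Youden's J and toy data rather than the study's measurements:

```python
# Hedged sketch: selecting a biomarker cutoff from an ROC curve by maximizing
# Youden's J (sensitivity + specificity - 1). Arrays are illustrative, not study data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

bnp = np.array([45, 60, 95, 150, 310, 70, 220, 160, 40, 180], dtype=float)
died = np.array([0,  0,  0,   1,   1,  0,   1,   0,  0,   1])

fpr, tpr, thresholds = roc_curve(died, bnp)
j = tpr - fpr                                  # Youden's J at each candidate threshold
best = np.argmax(j)
print(f"AUC = {roc_auc_score(died, bnp):.2f}")
print(f"cutoff = {thresholds[best]:.0f} pg/mL, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```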

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and utilization of the GPU. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three frameworks that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs consisting of nodes and edges. Partial derivatives on each edge of the computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. The criterion is simply the length of the code; the learning curve and the ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that those frameworks provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or any new search method we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU. We nevertheless concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers is also important, and for a user who is learning deep learning models, the availability of sufficient examples and references matters as well.
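The automatic-differentiation idea described in the abstract (local partial derivatives on each graph edge, combined by the chain rule) can be illustrated with a toy reverse-mode example in plain Python; this is a conceptual sketch, not the implementation used by Theano, Tensorflow, or CNTK.

```python
# Toy reverse-mode automatic differentiation over a small computational graph.
# Each node records the local partial derivative on every incoming edge;
# a backward pass multiplies and sums them along the paths (the chain rule).
import math

class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents          # list of (parent_node, local_partial)
        self.grad = 0.0

def add(a, b):  return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])
def mul(a, b):  return Node(a.value * b.value, [(a, b.value), (b, a.value)])
def tanh(a):
    t = math.tanh(a.value)
    return Node(t, [(a, 1.0 - t * t)])

def backward(output):
    # Simple traversal; assumes each node feeds a single downstream node (a tree),
    # which holds for the expression below.
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += node.grad * local
            stack.append(parent)

# y = tanh(w * x + b), a one-neuron "model"
x, w, b = Node(0.5), Node(-1.2), Node(0.3)
y = tanh(add(mul(w, x), b))
backward(y)
print(y.value, w.grad, x.grad, b.grad)   # dy/dw, dy/dx, dy/db
```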

Dynamic forecasts of bankruptcy with Recurrent Neural Network model (RNN(Recurrent Neural Network)을 이용한 기업부도예측모형에서 회계정보의 동적 변화 연구)

  • Kwon, Hyukkun;Lee, Dongkyu;Shin, Minsoo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.139-153
    • /
    • 2017
  • Corporate bankruptcy can cause great losses not only to stakeholders but also to many related sectors of society. Through successive economic crises, bankruptcies have increased and bankruptcy prediction models have become more and more important, so corporate bankruptcy has been regarded as one of the major research topics in business management, and many studies are also in progress in industry. Previous studies attempted to utilize various methodologies to improve prediction accuracy and to resolve the overfitting problem, such as Multivariate Discriminant Analysis (MDA) and the Generalized Linear Model (GLM), both of which are based on statistics. More recently, researchers have used machine learning methodologies such as the Support Vector Machine (SVM) and the Artificial Neural Network (ANN), and fuzzy theory and genetic algorithms have also been applied. Because of this shift, many bankruptcy models have been developed and performance has improved. In general, a company's financial and accounting information changes over time, and the market situation changes as well, so there are many difficulties in predicting bankruptcy with information from only a single point in time. Even though traditional research has this problem of not taking the time effect into account, dynamic models have not been studied much. When the time effect is ignored, the results are biased, so a static model may not be suitable for predicting bankruptcy; with a dynamic model, there is a possibility that the bankruptcy prediction model can be improved. In this paper, we propose the RNN (Recurrent Neural Network), one of the deep learning methodologies; the RNN learns time series data and its performance is known to be good. For the estimation of the bankruptcy prediction model and the comparison of forecasting performance, we selected non-financial firms listed on the KOSPI, KOSDAQ, and KONEX markets from 2010 to 2016. In order to avoid the mistake of predicting bankruptcy with financial information that already reflects the deterioration of the company's financial condition, the financial information was collected with a lag of two years, and the default period was defined as January to December of the year. We defined bankruptcy as delisting due to sluggish earnings, confirmed through KIND, a corporate stock information website. We then selected variables from previous papers. The first set of variables are Z-score variables, which have become traditional variables in predicting bankruptcy; the second set is a dynamic variable set. We selected 240 normal companies and 226 bankrupt companies for the first variable set, and 229 normal companies and 226 bankrupt companies for the second variable set. We created a model that reflects dynamic changes in time-series financial data, and by comparing the suggested model with existing bankruptcy prediction models, we found that it could help to improve the accuracy of bankruptcy predictions. We used financial data from KIS Value (a financial database) and selected Multivariate Discriminant Analysis (MDA), the Generalized Linear Model known as logistic regression (GLM), the Support Vector Machine (SVM), and the Artificial Neural Network (ANN) as benchmarks. The experiment showed that the RNN's performance was better than that of the comparative models: the accuracy of the RNN was high for both sets of variables, and the Area Under the Curve (AUC) value was also high. In the hit-ratio table, the rate at which the RNN correctly predicted distressed companies as bankrupt was higher than that of the other comparative models. A limitation of this paper is that an overfitting problem occurs during RNN learning, but we expect to be able to solve it by selecting more learning data and appropriate variables. From these results, it is expected that this research will contribute to the development of bankruptcy prediction by proposing a new dynamic model.
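The kind of model described above (a recurrent network over a firm's sequence of yearly accounting ratios, ending in a bankrupt / not-bankrupt output) can be sketched as follows. This is a hedged illustration with placeholder data and hyperparameters, not the paper's architecture.

```python
# Minimal sketch of an RNN classifier over firm-level time series, assuming a
# tensor of shape (firms, years, features); data here is random placeholder only.
import numpy as np
import tensorflow as tf

n_firms, n_years, n_features = 466, 5, 8            # e.g. Z-score-style ratios per year
X = np.random.rand(n_firms, n_years, n_features).astype("float32")
y = np.random.randint(0, 2, size=n_firms)            # 1 = bankrupt, 0 = normal

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_years, n_features)),
    tf.keras.layers.SimpleRNN(32),                    # recurrent layer over the yearly sequence
    tf.keras.layers.Dropout(0.3),                     # one way to fight the overfitting noted above
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
```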

Correlation between Carotid Intima-media Thickness and Risk Factors for Atherosclerosis (경동맥 내중막 두께에 따른 죽상경화반의 위험요인과의 상관관계)

  • An, Hyun;Lee, Hyo Yeong
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.3
    • /
    • pp.339-348
    • /
    • 2019
  • The purpose of this study was to investigate risk factors for atherosclerosis using carotid artery ultrasound. The carotid intima-media thickness (IMT) is known to have a significant correlation with cardiovascular disease and cerebrovascular disease. We investigated the relationship between carotid intima-media thickness and body mass index, waist circumference, blood lipid values, fasting blood glucose, glycated hemoglobin, and blood pressure using carotid artery ultrasound. On carotid ultrasound, an IMT over 0.8 mm was considered abnormal, and the presence or absence of atherosclerotic plaque was evaluated. Serological tests were used to compare the lipid values, fasting blood glucose level, and glycated hemoglobin. As a result, waist circumference (p=.022), low-density cholesterol (p=.004), fasting blood glucose level (p=.019), and glycated hemoglobin (p=.002) were identified as predictors of atherosclerosis. In the ROC curve analysis, the sensitivity was 87.80% (95% CI: 73.8~95.9) with a specificity of 41.67% (95% CI: 30.2~53.9); for low-density lipoprotein, the sensitivity was 78.05% (95% CI: 62.4~89.4) with a specificity of 50.00% (95% CI: 38.0~62.0); the sensitivity was 73.11% (95% CI: 57.1~85.8) with a specificity of 61.11% (95% CI: 48.9~72.4); and the sensitivity was 82.93% (95% CI upper bound: 91.8) with a specificity of 43.06% (95% CI: 31.4~55.3). In the logistic regression analysis, the risk of atherosclerosis was 0.248 times at waist circumference (WC) > 76 cm, 3.475 times at low-density lipoprotein (LDL-C) ≥ 124 mg/dL, and 0.618 times at HbA1c > 5.4%. We suggest that a prospective study of carotid artery ultrasound should be performed for the effective prevention of cardiovascular diseases.

Obstructive Ventilatory Impairment as a Risk Factor of Lung Cancer (폐암의 위험인자로서의 폐쇄성 환기장애)

  • Kim, Yeon-Jae;Park, Jae-Yong;Chae, Sang-Cheol;Won, Jun-Hee;Kim, Jeong-Seok;Kim, Chang-Ho;Jung, Tae-Hoon
    • Tuberculosis and Respiratory Diseases
    • /
    • v.45 no.4
    • /
    • pp.746-753
    • /
    • 1998
  • Background: Cigarette smoking is closely related to both lung cancer and chronic obstructive pulmonary disease. The incidence of lung cancer is higher in patients with obstructive ventilatory impairment than in patients without it, regardless of smoking, so obstructive ventilatory impairment is suspected to be an independent risk factor for lung cancer. Methods: To evaluate the role of obstructive ventilatory impairment as a risk factor for lung cancer, a total of 73 cases of solitary pulmonary nodule, comprising 47 malignant and 26 benign nodules, were analyzed retrospectively. Forced expiratory volume curves and the frequency of obstructive ventilatory impairment were compared between the cases with malignant and benign nodules. Results: In the comparison of vital capacity and parameters derived from the forced expiratory volume curve between the two groups, VC, FVC, and FEV1 were not significantly different, whereas FEV1/FVC% and FEF25-75% showed a significant decrease in the cases with malignant nodules. The frequency of obstructive ventilatory impairment determined by pulmonary function testing was significantly higher in the cases with malignant nodules (23.4%) than in those with benign nodules (3.8%). When the risk for lung cancer was examined by the presence or absence of obstructive ventilatory impairment using logistic regression analysis, the unadjusted relative risk of obstructive ventilatory impairment for lung cancer was 17.17; when the effects of smoking and age were considered, the relative risk was 8.13. Conclusion: These findings suggest that obstructive ventilatory impairment is a risk factor for lung cancer.
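An unadjusted association of the kind discussed above (impairment present or absent, malignant versus benign nodule) can also be expressed as an odds ratio from a 2×2 contingency table. The sketch below uses clearly illustrative counts, not the paper's data, and the paper itself derives its unadjusted and adjusted risks from logistic regression rather than from a table.

```python
# Hedged sketch: unadjusted odds ratio and Fisher's exact test from a 2x2 table.
# Rows = malignant / benign nodule, columns = impairment present / absent.
# Counts are placeholders for illustration only.
from scipy.stats import fisher_exact

table = [[11, 36],
         [ 1, 25]]
odds_ratio, p_value = fisher_exact(table)
print(f"unadjusted OR = {odds_ratio:.2f}, Fisher's exact p = {p_value:.3f}")
```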


Diagnostic Value of Serum Procalcitonin in Febrile Infants Under 6 Months of Age for the Detection of Bacterial Infections (발열이 있는 6개월 미만의 영아에서 세균성 감염에 대한 procalcitonin의 진단적 가치)

  • Kim, Nam Hyo;Kim, Ji Hee;Lee, Taek Jin
    • Pediatric Infection and Vaccine
    • /
    • v.16 no.2
    • /
    • pp.142-149
    • /
    • 2009
  • Purpose: The aim of this study was to determine the diagnostic value of serum procalcitonin (PCT) compared with that of C-reactive protein (CRP) and the total white blood cell count (WBC) in predicting bacterial infections in febrile infants <6 months of age. Methods: A prospective study was performed with infants <6 months of age who were admitted to the Department of Pediatrics with a fever of uncertain source between July and September 2008. Spinal taps were performed according to clinical symptoms and physical examination. Serum PCT levels were measured using an enzyme-linked fluorescent assay. Results: Seventy-one infants (mean age, 2.62 months) were studied. Twenty-six infants (36.6%) had urinary tract infections (UTIs), and 22 infants (31.0%) had viral meningitis. The remaining infants had acute pharyngitis (n=1), herpangina (n=1), upper respiratory tract infections (n=7), acute bronchiolitis (n=8), acute gastroenteritis (n=4), and bacteremia (n=2). The median WBC and CRP levels were significantly higher in infants with UTIs than in infants with viral meningitis. However, there were no differences in the median PCT levels between the groups (0.14 ng/mL vs. 0.11 ng/mL, P=0.419). The area under the receiver operating characteristic curve was 0.792 (95% CI, 0.65-0.896) for WBC, 0.77 (95% CI, 0.626-0.879) for CRP, and 0.568 (95% CI, 0.417-0.710) for PCT. An elevated WBC count (>11,920/µL) and an increased CRP level (>1.06 mg/dL) were significant predictors of UTIs based on multiple logistic regression analysis. Conclusion: Serum PCT concentrations should be interpreted with caution in infants <6 months of age with a fever of uncertain source.
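Comparing markers by the area under the ROC curve, as in the abstract above, amounts to scoring each marker against the same outcome. A minimal sketch under the assumption of a hypothetical data file and column names:

```python
# Hedged sketch: comparing the discriminative value of several markers by AUC.
# The file and columns (wbc, crp, pct, uti) are illustrative, not the study data.
import pandas as pd
from sklearn.metrics import roc_auc_score

data = pd.read_csv("febrile_infants.csv")
for marker in ["wbc", "crp", "pct"]:
    auc = roc_auc_score(data["uti"], data[marker])
    print(f"{marker.upper()}: AUC = {auc:.3f}")
```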


Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.137-154
    • /
    • 2018
  • Animal infectious diseases, such as avian influenza and foot-and-mouth disease, occur almost every year and cause huge economic and social damage to the country. To prevent this, the quarantine authorities have made various human and material efforts, but the infectious diseases have continued to occur. Avian influenza has been known since 1878 and rose to a national issue due to its high lethality. Foot-and-mouth disease is considered the most critical animal infectious disease internationally; in nations free of the disease, it is recognized as an economic or political disease because it restricts international trade by complicating the import of processed and unprocessed livestock, and because quarantine is costly. In a society where the whole nation is connected within a single zone of daily life, there is no way to fully prevent the spread of infectious disease, so there is a need to detect the occurrence of the disease and to take action before it spreads. As soon as a case of either human or animal infectious disease is confirmed, an epidemiological investigation of the diagnosed subject is carried out and measures are taken to prevent the spread of the disease according to the results. The foundation of an epidemiological investigation is figuring out where a subject has been and whom he or she has met. From a data perspective, this can be defined as collecting and analyzing geographic data and relational data to predict the cause of a disease outbreak, the outbreak location, and future infections. Recently, attempts have been made to develop infectious-disease prediction models using Big Data and deep learning technology, but there is little active research on model building or case reports. KT and the Ministry of Science and ICT have been carrying out big data projects since 2014, as part of national R&D projects, to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis using vehicle movement data. After that, more accurate prediction models were constructed using machine learning algorithms such as Logistic Regression, Lasso, Support Vector Machine, and Random Forest. In particular, the prediction model for 2017 added the risk of diffusion to facilities, and the performance of the model was improved by tuning the hyperparameters of the model in various ways. The confusion matrix and ROC curve show that the model constructed in 2017 is superior to the earlier machine learning model. The differences between the 2016 model and the 2017 model are that the later model uses visiting information on facilities such as feed factories and slaughterhouses, and that the information on poultry, which had been limited to chicken and duck, was expanded to goose and quail. In addition, in 2017 an explanation of the results was added to help the authorities make decisions and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system constructed on the basis of hazardous vehicle movement, farm, and environmental Big Data. The significance of this study is that it describes the evolution of a prediction model using Big Data in the field, and the model is expected to become more complete if the type of virus is taken into consideration. This will contribute to data utilization and analysis model development in related fields. In addition, we expect that the system constructed in this study will support more effective and preventive disease control.
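Benchmarking the classifier families named above (logistic regression, Lasso, SVM, random forest) with a confusion matrix and ROC AUC can be sketched as follows. This is an illustrative setup with synthetic placeholder data standing in for the vehicle-movement and farm features, not the project's pipeline.

```python
# Hedged sketch: comparing several classifiers on held-out data using ROC AUC
# and a confusion matrix. Data is synthetic; class imbalance mimics rare outbreaks.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "lasso-logistic": LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
    "svm": SVC(probability=True),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    prob = clf.predict_proba(X_te)[:, 1]
    print(name, "AUC =", round(roc_auc_score(y_te, prob), 3))
    print(confusion_matrix(y_te, clf.predict(X_te)))
```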