• Title/Summary/Keyword: Performance parameter


Modified Traditional Calibration Method of CRNP for Improving Soil Moisture Estimation (산악지형에서의 CRNP를 이용한 토양 수분 측정 개선을 위한 새로운 중성자 강도 교정 방법 검증 및 평가)

  • Cho, Seongkeun;Nguyen, Hoang Hai;Jeong, Jaehwan;Oh, Seungcheol;Choi, Minha
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.5_1
    • /
    • pp.665-679
    • /
    • 2019
  • Mesoscale soil moisture measurement from the promising Cosmic-Ray Neutron Probe (CRNP) is expected to bridge the gap between large-scale microwave remote sensing and point-based in-situ soil moisture observations. The traditional calibration based on the $N_0$ method is used to convert neutron intensity measured at the CRNP into field-scale soil moisture. However, the static calibration parameter $N_0$ used in the traditional technique is insufficient to quantify long-term soil moisture variation and is easily influenced by various time-variant factors, contributing to high uncertainties in the CRNP soil moisture product. Consequently, in this study we proposed a modified traditional calibration method, the so-called Dynamic-$N_0$ method, which takes into account the temporal variation of $N_0$ to improve CRNP-based soil moisture estimation. In particular, a nonlinear regression method was developed to estimate the time series of $N_0$ directly from the corrected neutron intensity. The $N_0$ time series were then reapplied to generate soil moisture. We evaluated the performance of the Dynamic-$N_0$ method against the traditional one using a weighted in-situ soil moisture product. The results indicated that the Dynamic-$N_0$ method outperformed the traditional calibration technique: the correlation coefficient increased from 0.70 to 0.72, RMSE decreased from 0.036 to 0.026 $m^3m^{-3}$, and bias decreased from -0.006 to -0.001 $m^3m^{-3}$. The superior performance of the Dynamic-$N_0$ calibration method revealed that the temporal variability of $N_0$ was caused by hydrogen pools surrounding the CRNP. Although the uncertainty sources contributing to the variation of $N_0$ were not fully identified, the proposed calibration method gives new insight into improving field-scale soil moisture estimation from the CRNP.
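For readers unfamiliar with the $N_0$ calibration, the sketch below illustrates the idea using the standard CRNP calibration function of Desilets et al. (2010) with its commonly used default coefficients; the function names and the bulk density value are illustrative assumptions, not taken from this paper. Inverting the relation per time step, as in the second function, is one simple way to obtain an $N_0$ time series of the kind the Dynamic-$N_0$ method works with.

```python
# Standard CRNP calibration shape (Desilets et al., 2010); a0, a1, a2 are the
# commonly used default coefficients, not values fitted in this paper.
A0, A1, A2 = 0.0808, 0.372, 0.115

def soil_moisture(N, N0, bulk_density=1.4):
    """Volumetric soil moisture (m^3 m^-3) from corrected neutron counts N."""
    return (A0 / (N / N0 - A1) - A2) * bulk_density

def n0_from_observation(N, theta, bulk_density=1.4):
    """Invert the same relation: recover N0 from a weighted in-situ theta.
    Applied per time step, this yields an N0 time series instead of a static N0."""
    return N / (A0 / (theta / bulk_density + A2) + A1)

# Round-trip check: a static N0 of 3000 counts and theta = 0.25 m^3 m^-3
N = 3000 * (A0 / (0.25 / 1.4 + A2) + A1)
print(round(n0_from_observation(N, 0.25), 1))  # 3000.0
```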

A review on the design requirement of temperature in high-level nuclear waste disposal system: based on bentonite buffer (고준위폐기물처분시스템 설계 제한온도 설정에 관한 기술현황 분석: 벤토나이트 완충재를 중심으로)

  • Kim, Jin-Seop;Cho, Won-Jin;Park, Seunghun;Kim, Geon-Young;Baik, Min-Hoon
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.21 no.5
    • /
    • pp.587-609
    • /
    • 2019
  • Short- and long-term stabilities of bentonite, the favored buffer material in geological repositories for high-level waste, were reviewed in this paper, together with alternative buffer design concepts that mitigate the thermal load from the decay heat of SF (Spent Fuel) and further increase disposal efficiency. It is generally reported that irreversible changes in structure, hydraulic behavior, and swelling capacity are produced by temperature increase and vapor flow between $150{\sim}250^{\circ}C$. Provided that the maximum temperature of the bentonite stays below $150^{\circ}C$, however, the effects of temperature on material, structural, and mineralogical stability seem to be minor. The maximum temperature in the disposal system constrains the amount of waste that can be disposed per unit area, and is therefore an important design parameter influencing the availability of a disposal site. Thus, it is necessary to identify the effects of high temperature on buffer performance and to allow for a thermal constraint greater than $100^{\circ}C$. In addition, the development of high-performance EBS (Engineered Barrier System) concepts, such as composite bentonite buffers mixed with graphite or silica and multi-layered buffers (i.e., a highly thermal-conductive layer or an insulating layer), should be taken into account to enhance disposal efficiency, in parallel with the development of a multilayer repository. This will contribute to increasing reliability and securing public acceptance of high-level waste disposal.

Performance Evaluation of Reconstruction Algorithms for DMIDR (DMIDR 장치의 재구성 알고리즘 별 성능 평가)

  • Kwak, In-Suk;Lee, Hyuk;Moon, Seung-Cheol
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.23 no.2
    • /
    • pp.29-37
    • /
    • 2019
  • Purpose: DMIDR (Discovery Molecular Imaging Digital Ready, General Electric Healthcare, USA) is a PET/CT scanner designed to allow application of PSF (Point Spread Function), TOF (Time of Flight), and the Q.Clear algorithm. In particular, Q.Clear is a reconstruction algorithm that can overcome the limitations of OSEM (Ordered Subset Expectation Maximization) and reduce image noise at the voxel level. The aim of this paper is to evaluate the performance of the reconstruction algorithms and to optimize the algorithm combination for accurate SUV (Standardized Uptake Value) measurement and lesion detectability. Materials and Methods: A PET phantom was filled with $^{18}F$-FDG at hot-to-background radioactivity concentration ratios of 2:1, 4:1, and 8:1. Scans were performed using the NEMA protocols. Scan data were reconstructed using the following combinations: (1) VPFX (VUE Point FX (TOF)), (2) VPHD-S (VUE Point HD+PSF), (3) VPFX-S (TOF+PSF), (4) QCHD-S-400 (VUE Point HD+Q.Clear(β-strength 400)+PSF), (5) QCFX-S-400 (TOF+Q.Clear(β-strength 400)+PSF), (6) QCHD-S-50 (VUE Point HD+Q.Clear(β-strength 50)+PSF), and (7) QCFX-S-50 (TOF+Q.Clear(β-strength 50)+PSF). CR (Contrast Recovery) and BV (Background Variability) were compared, as were SNR (Signal to Noise Ratio) and RC (Recovery Coefficient) of counts and SUV, respectively. Results: VPFX-S showed the highest CR value for sphere sizes of 10 and 13 mm, and QCFX-S-50 showed the highest value for spheres greater than 17 mm. In the comparison of BV and SNR, QCFX-S-400 and QCHD-S-400 showed good results. The measured SUVs were proportional to the H/B ratio. RC for SUV was inversely proportional to the H/B ratio, and QCFX-S-50 showed the highest value. In addition, the Q.Clear reconstruction using a β-strength of 400 showed lower values. Conclusion: When a higher β-strength was applied, Q.Clear showed better image quality by reducing noise. Conversely, when a lower β-strength was applied, Q.Clear showed increased sharpness and decreased PVE (Partial Volume Effect), making it possible to measure SUV with a high RC compared to conventional reconstruction conditions. An appropriate choice among these reconstruction algorithms can improve accuracy and lesion detectability. For this reason, it is necessary to optimize the algorithm parameters according to the purpose.
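As a side note, the CR and BV figures of merit used above follow the NEMA image-quality definitions; a minimal sketch, assuming hot-sphere and background ROI means have already been extracted from the reconstructed images (variable names are illustrative):

```python
import numpy as np

def contrast_recovery(mean_hot, mean_bkg, activity_ratio):
    """Percent contrast of a hot sphere, NEMA NU 2 style:
    CR = (C_hot/C_bkg - 1) / (a_hot/a_bkg - 1) * 100."""
    return 100.0 * (mean_hot / mean_bkg - 1.0) / (activity_ratio - 1.0)

def background_variability(bkg_roi_means):
    """Percent background variability: SD over mean of the background ROIs."""
    rois = np.asarray(bkg_roi_means, dtype=float)
    return 100.0 * rois.std(ddof=1) / rois.mean()

# e.g. a 17 mm sphere at the 4:1 hot-to-background ratio
print(contrast_recovery(mean_hot=2.9, mean_bkg=1.0, activity_ratio=4.0))  # ~63.3
print(background_variability([1.02, 0.97, 1.01, 0.99, 1.00]))             # ~1.9
```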

Design and Implementation of a Web Application Firewall with Multi-layered Web Filter (다중 계층 웹 필터를 사용하는 웹 애플리케이션 방화벽의 설계 및 구현)

  • Jang, Sung-Min;Won, Yoo-Hun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.12
    • /
    • pp.157-167
    • /
    • 2009
  • Recently, leakage of confidential and personal information has been taking place on the Internet more frequently than ever before. Most such online security incidents are caused by attacks on vulnerabilities in carelessly developed web applications. It is impossible to detect an attack on a web application with existing firewalls and intrusion detection systems, and signature-based detection has a limited capability for detecting new threats. Therefore, much research on detecting attacks against web applications employs anomaly-based detection methods that rely on web traffic analysis. Research on anomaly-based detection through normal web traffic analysis focuses on three problems: how to accurately analyze the given web traffic; the system performance needed to inspect the application payload of packets, which is required to detect attacks on the application layer; and the maintenance and cost of the many newly installed network security devices. The UTM (Unified Threat Management) system, a solution suggested for these problems, aimed to resolve all security problems at once, but it is not widely used due to its low efficiency and high cost. Moreover, the web filter that performs one of the functions of the UTM system cannot adequately detect the variety of recent sophisticated attacks on web applications. To resolve such problems, studies are being carried out on web application firewalls as a new class of network security system. As such studies focus on speeding up packet processing with high-priced hardware, the cost of deploying a web application firewall is rising. In addition, current anomaly-based detection technologies that do not take the characteristics of the web application into account cause many false positives and false negatives. To reduce false positives and false negatives, this study proposes a realtime anomaly detection method based on analyzing the length of the parameter values contained in web clients' requests. It also designs and proposes a WAF (Web Application Firewall) that can be applied to a low-priced or legacy system to process application data without dedicated hardware, and suggests a method to resolve the sluggish performance attributed to copying packets into the application area for application data processing. Consequently, this study makes it possible to deploy an effective web application firewall at low cost, at a time when deploying an additional security system is considered burdensome because of the many network security systems already in use.
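To make the core idea concrete, here is a minimal sketch of length-based anomaly detection over request parameters, assuming a profile is first learned from attack-free traffic; the class name, threshold rule, and example URLs are illustrative, not the paper's exact design:

```python
from collections import defaultdict
from statistics import mean, stdev
from urllib.parse import parse_qsl, urlsplit

class ParamLengthDetector:
    """Flag requests whose parameter-value lengths deviate from a learned profile.
    A minimal sketch of length-based anomaly detection; the per-parameter
    statistics and threshold are illustrative, not the paper's exact model."""
    def __init__(self, k=3.0):
        self.k = k
        self.lengths = defaultdict(list)

    def train(self, url):
        for name, value in parse_qsl(urlsplit(url).query):
            self.lengths[name].append(len(value))

    def is_anomalous(self, url):
        for name, value in parse_qsl(urlsplit(url).query):
            history = self.lengths.get(name)
            if not history or len(history) < 2:
                continue  # unseen parameter: defer to other filter layers
            mu, sigma = mean(history), stdev(history)
            if abs(len(value) - mu) > self.k * max(sigma, 1.0):
                return True  # e.g. an oversized value hiding injected SQL
        return False

det = ParamLengthDetector()
for u in ["/board?id=42&name=kim", "/board?id=7&name=lee"]:
    det.train(u)
print(det.is_anomalous("/board?id=1%20OR%201=1--&name=" + "A" * 200))  # True
```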

Prediction of Life Expectancy for Terminally Ill Cancer Patients Based on Clinical Parameters (말기 암 환자에서 임상변수를 이용한 생존 기간 예측)

  • Yeom, Chang-Hwan;Choi, Youn-Seon;Hong, Young-Seon;Park, Yong-Gyu;Lee, Hye-Ree
    • Journal of Hospice and Palliative Care
    • /
    • v.5 no.2
    • /
    • pp.111-124
    • /
    • 2002
  • Purpose: Although average life expectancy has increased due to advances in medicine, mortality due to cancer is on an increasing trend, and consequently the number of terminally ill cancer patients is also on the rise. Predicting the survival period is an important issue in the treatment of terminally ill cancer patients, since the choice of treatment varies significantly for the patients, their families, and physicians according to the expected survival. Therefore, we investigated the prognostic factors for increased mortality risk in terminally ill cancer patients, to help treat these patients by predicting the survival period. Methods: We investigated 31 clinical parameters in 157 terminally ill cancer patients admitted to the Department of Family Medicine, National Health Insurance Corporation Ilsan Hospital, between July 1, 2000 and August 31, 2001. We confirmed the patients' survival as of October 31, 2001 based on medical records and personal data. Survival rates and median survival times were estimated by the Kaplan-Meier method, and the log-rank test was used to compare differences between survival rates according to each clinical parameter. Cox's proportional hazard model was used to determine the most predictive subset among the many clinical parameters affecting the risk of death. We predicted the mean, median, first quartile, and third quartile of the expected lifetimes with a Weibull proportional hazard regression model. Results: Of the 157 patients, 79 were male (50.3%). The mean age was $65.1{\pm}13.0$ years in males and $64.3{\pm}13.7$ years in females. The most prevalent cancer was gastric cancer (36 patients, 22.9%), followed by lung cancer (27, 17.2%) and cervical cancer (20, 12.7%). Survival time decreased with the following factors: mental change, anorexia, hypotension, poor performance status, leukocytosis, neutrophilia, elevated serum creatinine level, hypoalbuminemia, hyperbilirubinemia, elevated SGPT, prolonged prothrombin time (PT), prolonged activated partial thromboplastin time (aPTT), hyponatremia, and hyperkalemia. Among these, poor performance status, neutrophilia, prolonged PT, and prolonged aPTT were significant prognostic factors of death risk according to Cox's proportional hazard model. We predicted a median life expectancy of 3.0 days when all four of these factors were present, $5.7{\sim}8.2$ days when three were present, $11.4{\sim}20.0$ days when two were present, $27.9{\sim}40.0$ days when one was present, and 77 days when none were present. Conclusions: In terminally ill cancer patients, the prognostic factors related to reduced survival time were poor performance status, neutrophilia, prolonged PT, and prolonged aPTT. These four prognostic factors enabled the prediction of life expectancy in terminally ill cancer patients.
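For illustration, the survival machinery used here (Kaplan-Meier curves compared by a log-rank test) can be sketched in a few lines; this assumes the Python lifelines library, and the toy data and column names are invented for the example, not taken from the study:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Illustrative data; column names are assumptions, not the study's data file.
df = pd.DataFrame({
    "days":    [3, 6, 9, 12, 20, 28, 40, 77, 60, 90],
    "died":    [1, 1, 1, 1, 1, 1, 1, 0, 1, 0],
    "poor_ps": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],  # poor performance status
})

# Kaplan-Meier median survival per group, as in the paper's univariate step
km = KaplanMeierFitter()
for flag, group in df.groupby("poor_ps"):
    km.fit(group["days"], group["died"], label=f"poor_ps={flag}")
    print(km.median_survival_time_)

# Log-rank test compares the two survival curves
a, b = df[df.poor_ps == 1], df[df.poor_ps == 0]
print(logrank_test(a["days"], b["days"], a["died"], b["died"]).p_value)
```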


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can occur: for each given fuzzy set, many elements of the universe of discourse may have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computation time for the membership values and optimizes function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by $Length = n_{fm} \cdot (dm(m) + dm(f_m))$, where $n_{fm}$ is the maximum number of non-null membership values on any element of the universe of discourse, $dm(m)$ is the dimension (in bits) of a membership value, and $dm(f_m)$ is the dimension of the word representing the index of the membership function. In our case, Length = 3 × (5 + 3) = 24, so the memory dimension is 128×24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the word dimension would then be 8×5 bits, and the memory dimension would have been 128×40 bits. Coherently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets; the elements 32, 64, and 96 of the universe of discourse are memorized accordingly. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions ($n_{fm}$) having a non-null value on each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically $n_{fm} \le 3$ and there are at most 16 membership functions; at any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited. This constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
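To illustrate the word layout implied by $Length = n_{fm} \cdot (dm(m) + dm(f_m))$ = 3 × (5 + 3) = 24 bits, here is a small bit-packing sketch; the packing order and the convention that an unused slot reads as (0, 0) are illustrative choices, not the paper's exact circuit:

```python
# Per universe element, store only the (up to) three non-null memberships as
# (3-bit fuzzy-set index, 5-bit membership value) pairs: a 24-bit word.
NFM, IDX_BITS, VAL_BITS = 3, 3, 5

def pack_word(entries):
    """entries: up to NFM (fuzzy_set_index, membership_value) pairs, value in 0..31."""
    word = 0
    for slot, (idx, val) in enumerate(entries[:NFM]):
        field = (idx << VAL_BITS) | val              # 8 bits per slot
        word |= field << (slot * (IDX_BITS + VAL_BITS))
    return word

def unpack_word(word):
    out = []
    for slot in range(NFM):
        field = (word >> (slot * (IDX_BITS + VAL_BITS))) & 0xFF
        out.append((field >> VAL_BITS, field & 0x1F))  # unused slot reads (0, 0)
    return out

# e.g. element 64 of the universe: non-null only on fuzzy sets 3 and 4
w = pack_word([(3, 17), (4, 9)])
print(f"{w:06x}", unpack_word(w))
```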


Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review
    • /
    • v.16 no.3
    • /
    • pp.161-177
    • /
    • 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive, since domain experts have to be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs, and the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset selection, we use the genetic algorithm (GA). GA is known as an efficient and effective search method that simulates the phenomenon of biological evolution. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results; in particular, the mutation operator prevents GA from falling into local optima, so the globally optimal or near-optimal solution can be found. GA has been widely applied to search for optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we also adopt GA as the optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using various statistical methods, including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, i.e. the credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). For each class, 80 percent of the data was used for training and the remaining 20 percent for validation, and to overcome the small sample size we applied five-fold cross validation. To examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM.
In the case of MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches, because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source software library, and Evolver 5.5, a commercial software package that provides GA. The other comparative models were run with various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed GAMSVM model outperformed all the comparative models. In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting the corporate credit ratings. The values of the finally selected kernel parameters, however, were found to be almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
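A compact sketch of the GAMSVM idea (a chromosome encodes a feature mask plus kernel parameters, and a GA with selection, crossover, and mutation maximizes cross-validated accuracy) is shown below; the toy data, encoding, and GA settings are illustrative assumptions rather than the paper's exact configuration, and scikit-learn's SVC (which wraps LIBSVM and uses one-against-one internally) stands in for the original LIBSVM/Evolver setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Toy stand-in for the 14-ratio, 4-class credit data.
X, y = make_classification(n_samples=300, n_features=14, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
rng = np.random.default_rng(0)

def fitness(chrom):
    """Chromosome = 14 feature bits + log2(C) + log2(gamma)."""
    mask, log_c, log_g = chrom[:14].astype(bool), chrom[14], chrom[15]
    if not mask.any():
        return 0.0
    clf = SVC(C=2.0 ** log_c, gamma=2.0 ** log_g)  # one-against-one internally
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

pop = np.column_stack([rng.integers(0, 2, (20, 14)),
                       rng.uniform(-5, 15, (20, 1)),    # log2(C) range
                       rng.uniform(-15, 3, (20, 1))])   # log2(gamma) range
for gen in range(10):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]              # selection: keep best half
    kids = parents[rng.permutation(10)].copy()
    cut = rng.integers(1, 15)                            # one-point crossover
    kids[:5, :cut], kids[5:, :cut] = parents[5:, :cut], parents[:5, :cut]
    flip = rng.random((10, 14)) < 0.05                   # bit-flip mutation
    kids[:, :14] = np.where(flip, 1 - kids[:, :14], kids[:, :14])
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(c) for c in pop])]
print("selected features:", np.flatnonzero(best[:14]), "CV accuracy:", fitness(best))
```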

Implementation Strategy for the Elderly Care Solution Based on Usage Log Analysis: Focusing on the Case of Hyodol Product (사용자 로그 분석에 기반한 노인 돌봄 솔루션 구축 전략: 효돌 제품의 사례를 중심으로)

  • Lee, Junsik;Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.117-140
    • /
    • 2019
  • As the aging phenomenon accelerates and various social problems related to the vulnerable elderly are raised, the need for effective elderly care solutions to protect the health and safety of the elderly generation is growing. Recently, more and more people are using smart toys equipped with ICT technology for elderly care. In particular, the log data collected through smart toys are highly valuable as quantitative and objective indicators in areas such as policy-making and service planning. However, research related to smart toys has been limited to topics such as smart toy development and the validation of smart toy effectiveness. In other words, there is a dearth of research that derives insights from log data collected through smart toys and uses them for decision making. This study analyzes log data collected from a smart toy and derives insights for improving the quality of life of elderly users. Specifically, a user-profiling-based analysis and the elicitation of a quality-of-life-change mechanism based on behavior were performed. First, in the user profiling analysis, two important dimensions for classifying the elderly groups were derived from five factors of the elderly users' living management: 'Routine Activities' and 'Work-out Activities'. Based on the derived dimensions, hierarchical cluster analysis and K-Means clustering were performed to classify the elderly users into three groups. Through the profiling analysis, the demographic characteristics of each group and their smart toy usage behavior were identified. Second, stepwise regression was performed to elicit the mechanism of change in quality of life. The effects of interaction, content usage, and indoor activity on the improvement of depression and lifestyle for the elderly were identified, and the analysis identified the users' evaluation of smart toy performance and their satisfaction with the smart toy as parameters mediating the relationship between usage behavior and quality-of-life change. The specific mechanisms are as follows. First, the interaction between the smart toy and the elderly was found to improve depression, mediated by the attitude toward the smart toy: 'Satisfaction toward Smart Toy', the variable that affects the improvement of the elderly's depression, changes with how users evaluate smart toy performance, and it is the interaction with the smart toy that positively affects this satisfaction. These results can be interpreted as follows: the elderly with a desire for emotional stability interact actively with the smart toy and assess it positively, greatly appreciating its effectiveness. Second, content usage was confirmed to have a direct effect on improving lifestyle without going through other variables. The elderly who use much of the content provided by the smart toy improved their lifestyle, and this effect occurred regardless of the user's attitude toward the smart toy. Third, the log data show that a high degree of indoor activity improves both the lifestyle and the depression of the elderly. The more indoor activity, the better the lifestyle of the elderly, and these effects occur regardless of the user's attitude toward the smart toy. In addition, the elderly with a high degree of indoor activity are satisfied with the smart toy, which leads to improvement of their depression. However, it can be interpreted that the elderly who prefer outdoor to indoor activities, or those who are less active due to health problems, find it hard to be satisfied with the smart toy and thus do not obtain the depression-improving effect. In summary, three groups of elderly were identified based on their activities, and the important characteristics of each type were identified. In addition, this study sought to identify the mechanism by which the elderly's smart toy usage behavior affects their actual lives, and to derive user needs and insights.
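As a sketch of the profiling step (hierarchical clustering to choose the number of groups, then K-Means to assign users), under the assumption of standardized scores on the two derived dimensions; the data here are synthetic stand-ins, not the Hyodol logs:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for per-user scores on the two derived dimensions,
# 'Routine Activities' and 'Work-out Activities'.
rng = np.random.default_rng(42)
X = StandardScaler().fit_transform(rng.normal(size=(90, 2)))

# Ward hierarchical clustering suggests a cut into three groups ...
labels_h = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")

# ... and K-Means with k=3 gives the final group assignment per elderly user.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for g in range(3):
    print(f"group {g}: n={(labels == g).sum()}, centroid={X[labels == g].mean(axis=0)}")
```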

Assessment of nutritional status of patients with chronic obstructive pulmonary disease (만성 폐쇄성 폐질환 환자의 영양상태 평가)

  • Park, Kwang Joo;Ahn, Chul Min;Kim, Hyung Jung;Chang, Joon;Kim, Sung Kyu;Lee, Won Young
    • Tuberculosis and Respiratory Diseases
    • /
    • v.44 no.1
    • /
    • pp.93-103
    • /
    • 1997
  • Background: Malnutrition is a common finding in patients with chronic obstructive pulmonary disease, especially in the emphysema group. Although the mechanism of malnutrition has not been confirmed, it is believed to be a relative deficiency caused by hypermetabolism due to the increased energy requirements of the respiratory muscles, rather than a dietary deficiency. Malnutrition in chronic obstructive pulmonary disease is not a merely coincidental finding: nutritional status is known to correlate with physiologic parameters including pulmonary function, muscular power, and exercise performance, and is one of the important and independent prognostic factors of the disease. Methods: Patients with chronic obstructive pulmonary disease at Yongdong Severance Hospital from May 1995 to March 1996 and an age-matched healthy control group were studied. A survey of nutritional intake, anthropometric measurements, and biochemical tests were done to assess nutritional status. The relationship between nutritional status and FEV1 (forced expiratory volume in one second), a significant functional parameter, was assessed. Results: 1) The patient group consisted of 25 males with a mean age of 66.1 years and an FEV1 of $42{\pm}14%$ of the predicted value. The control group consisted of 26 healthy males with normal pulmonary function, whose mean age was 65.0 years. 2) The ratio of daily calorie intake to calorie requirement was $107{\pm}28%$ in the patient group and $94{\pm}14%$ in the control group, showing a tendency toward higher nutritional intake in the patient group (p=0.06). 3) There were significant differences between the patient group and the control group in percent ideal body weight (92.8% vs 101.6%, p=0.024), body mass index ($20.0kg/m^2$ vs $21.9kg/m^2$, p=0.015), and handgrip strength (29.0 kg vs 34.3 kg, p=0.003). However, there were no significant differences in triceps skinfold thickness, mid-arm muscle circumference, albumin, or total lymphocyte count between the two groups. The percentage of underweight subjects was 40% (10/25) in the patient group and 15% (4/26) in the control group. 4) Percent ideal body weight, triceps skinfold thickness, and mid-arm muscle circumference correlated significantly with FEV1. Conclusion: Patients with chronic obstructive pulmonary disease showed significant depletion in nutritional parameters such as body weight and peripheral muscle strength, while the absolute amount of dietary intake was not insufficient. The nutritional parameters correlated well with FEV1.
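As a small aside, the two anthropometric indices compared above are simple to compute; the sketch below uses the Broca index for ideal body weight as one common convention (the study's exact IBW reference is not stated here), so treat it as illustrative:

```python
def bmi(weight_kg, height_m):
    """Body mass index, kg/m^2."""
    return weight_kg / height_m ** 2

def percent_ideal_body_weight(weight_kg, height_cm):
    """Percent ideal body weight with the modified Broca index,
    IBW = (height_cm - 100) * 0.9; this reference formula is an
    illustrative assumption, not the study's stated one."""
    ibw = (height_cm - 100) * 0.9
    return 100.0 * weight_kg / ibw

print(round(bmi(58.0, 1.70), 1))                         # 20.1 kg/m^2
print(round(percent_ideal_body_weight(58.0, 170.0), 1))  # 92.1 %
```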


Estimation of Heritability and Genetic Parameter for Growth and Body Traits of Pig (종돈의 성장 및 체형 형질에 대한 유전력 및 유전모수 추정에 관한 연구)

  • Kang, Hyun-Sung;Nam, Ki-Chang;Kim, Kyung-Tai;Na, Chong-Sam;Seo, Kang-Seok
    • Journal of Animal Science and Technology
    • /
    • v.54 no.2
    • /
    • pp.83-87
    • /
    • 2012
  • The purpose of this study was to estimate genetic parameters for productive traits in swine. Productive traits were considered on average daily gain (ADG), body height (BH) and body length (BL). Genetic analysis was consisted of 18,668 heads for productive traits which were based on on-farm performance tested from May, 2007 to Apr, 2011. For estimating genetic parameters on productive traits, single best model was fitted after finding source of variance on fixed and random effects and estimated with a multiple trait animal model by using DF-REML (Derivative-Free Restricted Maximum Likelihood). The estimated heritabilities of Duroc, Berkshire, Landrace and Yorkshire 0.22-0.58 for the average daily gain, 0.34-0.41 for the body height and 0.4-0.52 for the body length, respectively. Phenotypic correlations of average daily gain with body height and body length for the four breeds were 0.42-0.48, 0.53-0.58, 0.34-0.46 and 0.47-0.56, respectively. Phenotypic correlations of body height with body length were 0.41, 0.57, 0.52, 0.59, respectively. The estimated genetic correlation coefficients of average daily gain with body height and body length estimated for the four breeds were 0.34-0.47, 0.70-0.75, 0.17-0.38 and 0.50-0.53, respectively. The estimated genetic correlation coefficients of body height with body length were 0.57, 0.69, 0.61 and 0.71, respectively.