• Title/Summary/Keyword: Contrast Scale

Evaluation of the Measurement Uncertainty from the Standard Operating Procedures(SOP) of the National Environmental Specimen Bank (국가환경시료은행 생태계 대표시료의 채취 및 분석 표준운영절차에 대한 단계별 측정불확도 평가 연구)

  • Lee, Jongchun;Lee, Jangho;Park, Jong-Hyouk;Lee, Eugene;Shim, Kyuyoung;Kim, Taekyu;Han, Areum;Kim, Myungjin
    • Journal of Environmental Impact Assessment / v.24 no.6 / pp.607-618 / 2015
  • Five years have passed since the first environmental samples were collected in 2011 to represent various ecosystems, so that future generations can trace the state of today's environment. These samples are preserved cryogenically in the National Environmental Specimen Bank (NESB) at the National Institute of Environmental Research. Although a strict standard operating procedure (SOP) governs the entire sampling process to ensure that each sample represents its sampling area, the SOP has never been validated, and this question must be answered to dispel any doubts about the representativeness and quality of the samples. To address it and to verify the sampling practice set out in the SOP, every step leading to a measurement, from field sampling to chemical analysis in the laboratory, was broken down so that the uncertainty at each level could be evaluated. Of the eight species currently collected for cryogenic preservation in the NESB, pine tree samples from two sites were selected for this study. Duplicate samples were taken at each site according to the sampling protocol, and duplicate analyses were then carried out on each discrete sample. The uncertainties were evaluated by robust ANOVA; the two components, the uncertainty from the sampling practice and that from the analytical process, were then combined to give the measurement uncertainty of the measured concentration of the measurand. The results confirmed that the sampling practice, not the analytical process, accounts for most of the measurement uncertainty. Following this top-down approach, the efficient way to ensure the representativeness of a sample is to increase the quantity of each discrete sample that goes into the composite sample rather than to increase the number of discrete samples across the site. Likewise, the cost-effective way to raise the confidence level of a measurement is to reduce the sampling uncertainty, not the analytical uncertainty. For a composite sample to represent a sampling area, the variance across the site must be smaller than the variance from duplicate sampling; hence the criterion $s^2_{geochem}$ (variance across the site) < $s^2_{samp}$ (variance at the sampling location) was proposed. Under this criterion, the representative samples from the two study areas both met the requirement. Conversely, whenever the variance among the sampling locations (i.e., across the site) is larger than the sampling variance, more sample increments must be added within the sampling area until the requirement for representativeness is achieved.
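
As an illustration of how a duplicate design separates these uncertainty components, the sketch below partitions the variance with classical ANOVA on synthetic data. The paper itself uses robust ANOVA, which down-weights outliers; the design sizes and variances here are assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical balanced duplicate design:
# 8 sampling locations x 2 duplicate samples x 2 duplicate analyses.
n_loc = 8
true_geo, true_samp, true_anal = 4.0, 2.0, 0.5        # assumed variances
loc = rng.normal(100, np.sqrt(true_geo), (n_loc, 1, 1))
samp = rng.normal(0, np.sqrt(true_samp), (n_loc, 2, 1))
x = loc + samp + rng.normal(0, np.sqrt(true_anal), (n_loc, 2, 2))

# Analytical variance from duplicate analyses: E[(a1 - a2)^2] = 2*s2_anal
s2_anal = np.mean((x[:, :, 0] - x[:, :, 1]) ** 2) / 2

# Sampling variance from duplicate samples (means of their two analyses):
# E[(m1 - m2)^2] = 2*s2_samp + s2_anal
m = x.mean(axis=2)
s2_samp = np.mean((m[:, 0] - m[:, 1]) ** 2) / 2 - s2_anal / 2

# Between-location ("geochemical") variance from location means:
# Var(L_i) = s2_geo + s2_samp/2 + s2_anal/4
L = x.mean(axis=(1, 2))
s2_geo = L.var(ddof=1) - s2_samp / 2 - s2_anal / 4

u_meas = np.sqrt(s2_samp + s2_anal)       # combined measurement uncertainty
print(f"s2_samp={s2_samp:.2f}, s2_anal={s2_anal:.2f}, s2_geo={s2_geo:.2f}")
print(f"expanded uncertainty U (k=2): {2 * u_meas:.2f}")
print("representativeness criterion s2_geo < s2_samp:", s2_geo < s2_samp)
```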

A Comparative Study of the Handicaps in and Satisfaction with the Ordinary Life before and after the Plastic Operation for Artificial Joint Replacement-Centering around Those Who suffer from Joint Diseases (인공관절 전치환 성형 수술 전후의 일상활동 장애정도 및 삶의 만족도 비교연구 - 관절 질환 환자를 중심으로 -)

  • Kang, Shin-Hwa
    • Journal of muscle and joint health / v.3 no.1 / pp.37-49 / 1996
  • Joint diseases threaten modern people's healthy life. They bring long-lasting pain, anasarca, loss of joint function, and even joint deformation and rigidity, severely limiting ordinary activities. Patients with chronic joint disease may also suffer hypochondria caused by anxiety about their life, social isolation, financial problems, and physical disability. This population therefore needs continuing care from medical personnel; nurses in particular should help these patients recover and improve their health through suitable adaptations. With these basic concerns in mind, this study reviewed the patients' condition in ordinary life before and after total artificial joint replacement surgery, as well as their satisfaction with life. Patients who underwent artificial joint replacement at university hospitals in Seoul from January 2, 1993 to June 30, 1995 were selected as the study population, and 87 of them were randomly sampled to answer a specially designed questionnaire. Jette's (1980) scale was used to measure the inconveniences experienced and the support received in ordinary life, while the scale of Wood, Wylie & Sheafer was used to measure satisfaction with life. The collected data were analyzed with percentiles, means, standard deviations, t-tests, and Pearson's correlations. The results can be summarized as follows. A t-test of the frequency of others' support before and after surgery showed that patients received support less frequently after the operation, and a t-test of life satisfaction showed that the operation increased satisfaction significantly. For artificial knee replacement, only the inconveniences were significantly reduced after the operation; in contrast, for artificial hip replacement, only the frequency of others' support was significantly reduced. When the variables were compared 6 months, one year, and two years after surgery, inconvenience, frequency of support, and life satisfaction showed no improvement at 6 months; the frequency of support decreased significantly after one year, and inconvenience and life satisfaction improved significantly after two years. Pearson's correlation analysis showed that inconvenience and frequency of support were significantly and negatively correlated with life satisfaction. In conclusion, artificial joint replacement significantly reduced patients' inconveniences in daily living, reduced their reliance on others' support, and enhanced their satisfaction with life. Broken down by joint, knee replacement improved patients' inconveniences, while hip replacement both improved inconveniences and reduced the frequency of others' support.
Finally, the finding that the improvement appeared two years after surgery suggests that this period is needed for patients to adapt to post-operative conditions.

Study of the Residential Environment and Accessibility of Rehabilitation for Patients with Cerebral Palsy (뇌성마비 환자의 주거 환경과 재활 접근성에 관한 연구)

  • Cho, Gyeong Hee;Chung, Chin Youb;Lee, Kyoung Min;Sung, Ki Hyuk;Cho, Byung Chae;Park, Moon Seok
    • Journal of the Korean Orthopaedic Association / v.54 no.4 / pp.309-316 / 2019
  • Purpose: This study examined the residential environment and rehabilitation accessibility of patients with cerebral palsy (CP) to identify problems in the residential laws pertaining to the disabled and to provide basic data for health legislation on the rights of the disabled. Materials and Methods: The literature was searched using three keywords: residence, rehabilitation, and accessibility. Two topics were selected: residential environment and rehabilitation accessibility. The questionnaire included 51 items; 24 were scored on a Likert scale and 27 were multiple-choice questions. Results: This study included 100 subjects, of whom 93 lived at home and seven lived in a facility. Of the 93 subjects living at home, 65% lived in apartments, usually two or more floors above ground, and 40% of them lived without elevators. According to the Gross Motor Function Classification System, subjects at levels I to III formed the ambulatory group, and those at levels IV and V the non-ambulatory group. Subjects from both groups who lived at home found it most difficult to visit a rehabilitation center by themselves. Among those who lived in the facility, the ambulatory group found it most difficult to leave the facility alone, while the non-ambulatory group found it most difficult to use the toilet alone. Moreover, 83% of respondents thought that rehabilitation was necessary for CP, yet only 33% were receiving rehabilitation services. Rehabilitation was performed for an average of 3.6 sessions per week, 39 minutes per session. Conclusion: No law ensures secure and convenient access to upper floors for patients with CP, and the laws on access routes into rooms are insufficient. The act on persons with disabilities and the health act for persons with disabilities were to take effect in December 2017; laws that actually reflect the difficulties of people with disabilities need to be enacted. Based on the results of this study, an investigation of the housing and rehabilitation of patients with CP through a large-scale questionnaire will be necessary.

Evaluation of Application Possibility for Floating Marine Pollutants Detection Using Image Enhancement Techniques: A Case Study for Thin Oil Film on the Sea Surface (영상 강화 기법을 통한 부유성 해양오염물질 탐지 기술 적용 가능성 평가: 해수면의 얇은 유막을 대상으로)

  • Soyeong Jang;Yeongbin Park;Jaeyeop Kwon;Sangheon Lee;Tae-Ho Kim
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1353-1369 / 2023
  • In the event of a disaster or accident at sea, the scale of damage varies with weather conditions such as wind, currents, and tides, so the damage must be minimized by establishing an appropriate response plan through rapid on-site identification. Among the pollutants discharged into the sea, those that form a thin film on the sea surface are particularly difficult to identify because of their relatively low viscosity and surface tension. This study therefore aims to develop an algorithm that detects floating pollutants on the sea surface in RGB images from imaging equipment that can be readily used in the field, and to evaluate its performance with input data obtained from real waters. The developed algorithm uses image enhancement techniques to improve the contrast between the intensity values of pollutants and the general sea surface; a background threshold is then found through histogram analysis, suspended solids other than pollutants are removed, and the pollutants are finally classified. To evaluate the performance of the developed algorithm, a real sea trial using substitute materials was performed. Most of the floating marine pollutants were detected, although false detections occurred in areas with strong waves. Nevertheless, the detection results were about three times better than those of the existing method, which uses a single threshold. The results of this R&D are expected to support on-site response and control activities by detecting floating marine pollutants that were previously difficult to identify with the naked eye.
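
A minimal sketch of the detection pipeline the abstract describes: contrast enhancement followed by a histogram-derived background threshold and removal of small suspended objects. The paper's exact enhancement and thresholding methods are not specified in the abstract, so CLAHE and Otsu's method stand in for them here (OpenCV; the input path and area cutoff are hypothetical).

```python
import cv2
import numpy as np

def detect_film(rgb_path: str, min_area: int = 200) -> np.ndarray:
    bgr = cv2.imread(rgb_path)                  # field RGB image (hypothetical path)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # Enhance local contrast between the thin film and the sea surface.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)

    # Histogram analysis: Otsu picks the background/foreground split.
    _, mask = cv2.threshold(enhanced, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Remove small connected components (floating debris, wave glint).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            mask[labels == i] = 0
    return mask
```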

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.47-67 / 2017
  • Steel plate faults are among the important factors affecting the quality and price of steel plates. To date, many steelmakers have generally relied on visual inspection based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the plates. However, the accuracy of this method is critically low, with judgment error rates above 30%. An accurate steel plate fault diagnosis system has therefore long been required by the industry. To meet this need, this study proposes a new steel plate fault diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify the various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification because of its low accuracy there: only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification because it establishes an individual Mahalanobis space for each class; 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed diagnosis system was developed in four main stages. In the first stage, after the reference groups and related variables are defined, steel plate fault data are collected and used to establish an individual Mahalanobis space for each reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated from the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the distances. In the third stage, orthogonal arrays and the dynamic-type signal-to-noise (SN) ratio are applied for variable optimization, and the overall SN ratio gain is derived from the SN ratio and the SN ratio gain: a variable with a negative overall gain should be removed, while a variable with a positive gain is worth keeping. Finally, in the fourth stage, the measurement scale is reconstructed from the selected useful variables, and an experimental test verifies the multi-class classification ability and yields the classification accuracy; if the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study was taken from the University of California at Irvine (UCI) machine learning repository. The proposed S-MTS-based diagnosis system achieved a classification accuracy of 90.79%, which is 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance to be applied in the industry. In addition, because of the variable optimization process, the proposed system can reduce the number of measurement sensors installed in the field. These results show that the proposed system not only diagnoses steel plate faults well but can also reduce operation and maintenance costs. In future work, the system will be applied in the field to validate its actual effectiveness, and its accuracy will be improved based on the results.
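
The core S-MTS idea, one Mahalanobis space per reference class with a "simultaneous" comparison of the distances, can be sketched as follows. The orthogonal-array/SN-ratio variable optimization stage is omitted, and the class and method names are illustrative, not the paper's implementation.

```python
import numpy as np

class SimpleSMTS:
    """Per-class Mahalanobis spaces; assign by smallest scaled distance."""

    def fit(self, X: np.ndarray, y: np.ndarray) -> "SimpleSMTS":
        self.spaces = {}
        for c in np.unique(y):
            Xc = X[y == c]
            mean = Xc.mean(axis=0)
            std = Xc.std(axis=0, ddof=1)
            Z = (Xc - mean) / std                 # standardize within the class
            corr = np.corrcoef(Z, rowvar=False)   # class correlation matrix
            self.spaces[c] = (mean, std, np.linalg.pinv(corr), Z.shape[1])
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        preds = []
        for x in X:
            dists = {}
            for c, (mean, std, inv_corr, k) in self.spaces.items():
                z = (x - mean) / std
                dists[c] = float(z @ inv_corr @ z) / k   # MD^2 scaled by dimension
            preds.append(min(dists, key=dists.get))      # simultaneous comparison
        return np.array(preds)
```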

An Overview of the Rationale of Monetary and Banking Intervention: The Role of the Central Bank in Money and Banking Revisited (화폐(貨幣)·금융개입(金融介入)의 이론적(理論的) 근거(根據)에 대한 고찰(考察) : 중앙은행(中央銀行)의 존립근거(存立根據)에 대한 개관(槪觀))

  • Jwa, Sung-hee
    • KDI Journal of Economic Policy / v.12 no.3 / pp.71-94 / 1990
  • This paper reviews the rationale for monetary and banking intervention by an outside authority, either the government or the central bank, and seeks to delineate the optimal limits of the monetary and banking deregulation currently underway in Korea as well as on a global scale. It also seeks to establish an objective and balanced view of the role of the central bank, especially in light of the current discussion on restructuring Korea's central bank, which has been severely contaminated by interest-group politics. The discussion begins with the recognition that the modern free banking school and the new monetary economics pose formidable challenges to the traditional role of the government or the central bank in the monetary and banking sector. The paper reviews six arguments traditionally presented to support intervention: (1) the possibility of an over-issue of bank notes under free banking instead of central banking; (2) externalities in, and the public-good nature of, the use of money; (3) economies of scale and natural monopoly in producing money; (4) the need for macro stabilization policy due to the instability of the real sector; (5) the external effects of bank failure due to the inherent instability of the existing banking system; and (6) protection for small banknote users and depositors. Based on an analysis of these arguments, the paper speculates on the optimal role of the government or central bank in the monetary and banking system and the optimal degree of monetary and banking deregulation. In contrast to the arguments for free banking or laissez-faire monetary systems, which have become fashionable in recent years, monopoly and intervention by the government or central bank in the outside money system can be both necessary and optimal. In this case, of course, an over-issue of fiat money may be possible due to political considerations, but this issue is beyond the scope of this paper. On the other hand, inside monies based on outside money could indeed be provided optimally by private institutions under market competition. A competitive system of issuing inside monies would help realize, to the maximum extent possible, the external economies generated by using a single outside money. By this reasoning, free banking activities will prevail in the inside money system, while a government monopoly will prevail in the outside money system. This speculation also implies that the monetary and banking deregulation currently underway should, and most likely will, be limited to the inside money system, which could be liberalized to the fullest degree, and that it would be impractical to deregulate the outside money system and allow market competition to provide outside money, as the free banking school and the new monetary economics argue. Furthermore, the role of the government or central bank in this new environment will not differ significantly from its current role. As long as the supply of fiat money continues to be monopolized by the government, control of the supply of base money and such related responsibilities as monetary policy (argument (4)) and the lender of last resort (argument (5)) will naturally be assigned to the outside money supplier. However, a mechanism for controlling an over-issue of fiat money by a monopolistic supplier will definitely be called for (argument (1)).
A monetary policy based on a policy rule could be one possibility. More importantly, deregulation of the inside money system would further increase the systemic risk inherent in the current fractional banking system, even while enhancing the system's efficiency (argument (5)). In this context, the lender of last resort would again become an instrument of paramount importance for alleviating liquidity crises in their early stages, forestalling the possibility of a widespread bank run. Similarly, prudential banking supervision would help maintain the safety and soundness of a fully deregulated banking system. These functions would also help protect depositors from losses due to bank failures (argument (6)). Finally, these speculations suggest that government and central bank authorities have probably been too conservative on financial deregulation, beyond the caution necessary to preserve system safety; rather, only the fullest deregulation of the inside money system seems to guarantee the maximum enjoyment of the external economies of a single outside money system.

Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.77-97 / 2019
  • The large amount of data emerging from the hyper-connected environment of the Fourth Industrial Revolution is a major factor that distinguishes it from existing production environments. This environment has the two-sided feature of producing data while using it, and the data so produced creates further value. Because of this massive scale, future information systems must process more data, in quantity, than existing systems; in quality, they must also be able to extract the needed information from that large volume of data. In a small information system, a person can understand the system accurately and obtain the necessary information, but in complex systems that are difficult to understand accurately, acquiring the desired information becomes increasingly hard. In other words, more accurate processing of large amounts of data has become a basic requirement of future information systems. This efficiency problem can be addressed by building a semantic web, which enables varied information processing by expressing the collected data as an ontology that can be understood by computers as well as people. The military, like most other organizations, has introduced IT, and most work is now done through information systems. As existing systems accumulate ever larger amounts of data, efforts are needed to make them easier to use through better data utilization. An ontology-based system gains a large semantic data network through connections with other systems, offers a wide range of usable databases, and can search more precisely and quickly through the relationships between predefined concepts. This paper proposes a defense ontology as a method for effective data management and decision support. To judge its applicability and effectiveness in a real system, the existing Air Force logistics situation management system was rebuilt as an ontology-based system. The original system was constructed to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, because the complicated logistics information system had become difficult to use with its large amount of data. It takes pre-specified information from the existing logistics system and displays it as web pages, but little can be checked beyond the few pre-specified items, extending it with additional functions is time-consuming, and it is organized by category with no search function; it therefore shares the drawback of the legacy system in that it can be used easily only by those who already know the system well. The ontology-based logistics situation management system is designed to visualize the complex information of the existing logistics information system intuitively through the ontology. To construct it, useful functions such as performance-based logistics contract management and a component dictionary were additionally identified and included in the ontology.
To confirm that the constructed ontology can support decisions, meaningful analysis functions were implemented, such as calculating aircraft utilization rates and querying performance-based logistics contracts. In particular, unlike past ontology studies that built static ontology databases, this study represents time-series data whose values change over time, such as the daily state of each aircraft, in the ontology, and through it the utilization rate can be computed against various criteria. In addition, the data related to performance-based logistics contracts, introduced as a new maintenance approach for aircraft and other munitions, can be queried in various ways, and the performance indexes used in such contracts are easy to calculate through the ontology's reasoning and functions. The paper also proposes a new performance index that complements the limitations of the currently applied indicators and calculates it through the ontology, further confirming the ontology's usability. Finally, the failure rate or reliability of each component can be calculated, including from the MTBF data of selected items based on actual part consumption, and the mission and system reliability are derived from these. To confirm the usability of the constructed ontology-based logistics situation management system, the Technology Acceptance Model (TAM), a representative model for measuring technology acceptance, showed the proposed system to be more useful and convenient than the existing system.
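
A minimal sketch of the time-series ontology idea: daily aircraft status recorded as individuals, then a utilization rate computed by a SPARQL query. This uses rdflib; the namespace, class, and property names are hypothetical, not the paper's defense ontology.

```python
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

MIL = Namespace("http://example.org/defense-ontology#")  # hypothetical namespace
g = Graph()

# Daily status records for one aircraft (made-up data).
records = [("A001", "2019-05-01", "available"),
           ("A001", "2019-05-02", "maintenance"),
           ("A001", "2019-05-03", "available")]
for i, (tail, date, status) in enumerate(records):
    r = MIL[f"statusRecord/{i}"]
    g.add((r, RDF.type, MIL.DailyStatus))
    g.add((r, MIL.aircraft, MIL[tail]))
    g.add((r, MIL.onDate, Literal(date, datatype=XSD.date)))
    g.add((r, MIL.status, Literal(status)))

# Utilization rate = available days / total recorded days for this aircraft.
q = """
PREFIX mil: <http://example.org/defense-ontology#>
SELECT (COUNT(?r) AS ?total)
       (SUM(IF(?s = "available", 1, 0)) AS ?avail)
WHERE { ?r a mil:DailyStatus ;
           mil:aircraft mil:A001 ;
           mil:status ?s . }
"""
for row in g.query(q):
    print(f"utilization rate: {int(row.avail) / int(row.total):.2f}")
```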

Tectonic Structures and Hydrocarbon Potential in the Central Bransfield Basin, Antarctica (남극 브랜스필드 해협 중앙분지의 지체구조 및 석유부존 가능성)

  • Huh Sik;Kim Yeadong;Cheong Dae-Kyo;Jin Young Keun;Nam Sang Heon
    • The Korean Journal of Petroleum Geology / v.5 no.1_2 s.6 / pp.9-15 / 1997
  • The study area is located in the Central Bransfield Basin, Antarctica. To analyze the seafloor morphology, basement structure, and seismic stratigraphy of the sedimentary layers, we acquired, processed, and interpreted multi-channel seismic data. The northwest-southeast back-arc extension dramatically changes the seafloor morphology, the distribution of volcanoes and faults, and the basin structure along the spreading ridges. The northern continental shelf shows a narrow, steep topography; in contrast, the continental shelf and slope in the south, which connect to the Antarctic Peninsula, have a gentle gradient. Volcanic activity formed large volcanoes and basement highs near the spreading center and small-scale volcanic diapirs on the shelf. A very long, continuous normal fault characterizes the northern shelf, whereas in the south several basinward synthetic faults probably detach into the master fault. Four transfer faults, deep structures parallel to the northwest-southeast trend, controlled the complex distribution of the volcanoes, normal faults, depocenters, and possibly hydrocarbon provinces in the study area; they have also deformed the basement structure and depositional pattern. Even though the Bransfield Basin is believed to have formed in the Late Cenozoic (about 4 Ma), its hydrocarbon potential may be very high owing to thick sediment accumulation, high organic content, high heat flow resulting from the active tectonics, and adequate traps.

Optimization of the Flip Angle and Scan Timing in Hepatobiliary Phase Imaging Using T1-Weighted, CAIPIRINHA GRE Imaging

  • Kim, Jeongjae;Kim, Bong Soo;Lee, Jeong Sub;Woo, Seung Tae;Choi, Guk Myung;Kim, Seung Hyoung;Lee, Ho Kyu;Lee, Mu Sook;Lee, Kyung Ryeol;Park, Joon Hyuk
    • Investigative Magnetic Resonance Imaging / v.22 no.1 / pp.1-9 / 2018
  • Purpose: This study was designed to optimize the flip angle (FA) and scan timing of the hepatobiliary phase (HBP) using 3D T1-weighted gradient-echo (GRE) imaging with the controlled aliasing in parallel imaging results in higher acceleration (CAIPIRINHA) technique on gadoxetic acid-enhanced 3T liver MR imaging. Materials and Methods: Sixty-two patients who underwent gadoxetic acid-enhanced 3T liver MR imaging were included. Four 3D T1-weighted GRE studies using the CAIPIRINHA technique with FAs of $9^{\circ}$ and $13^{\circ}$ were acquired during the HBP at 15 and 20 min after intravenous injection of gadoxetic acid. Two abdominal radiologists, blinded to the FA and the timing of image acquisition, assessed the sharpness of the liver edge, hepatic vessel clarity, lesion conspicuity, artifact severity, and overall image quality on a five-point scale. Quantitative analysis was performed by another radiologist to estimate the relative liver enhancement (RLE) and the signal-to-noise ratio (SNR). Statistical analyses used the Wilcoxon signed rank test and one-way analysis of variance. Results: At the same delay time, the scores of the HBP with an FA of $13^{\circ}$ were significantly higher than those with an FA of $9^{\circ}$ for all assessment items (P < 0.01). Regarding delay time, images at the same FA obtained with a 20-min HBP showed better quality than those obtained with a 15-min HBP. There was no significant difference in qualitative scores between the 20-min and 15-min HBP images in the non-liver cirrhosis (non-LC) group, except for the hepatic vessel clarity score at a $9^{\circ}$ FA. In the quantitative analysis, a statistically significant difference was found in the degree of RLE across the four HBP images (P = 0.012); in the subgroup analysis, however, no significant difference in RLE was found in either the LC or the non-LC group. The SNR did not differ significantly across the four HBP images. In the subgroup analysis, 20-min HBP imaging with a $13^{\circ}$ FA showed the highest SNR in the LC group, whereas 15-min HBP imaging with a $13^{\circ}$ FA showed the highest SNR in the non-LC group. Conclusion: A moderately high FA improves image quality and lesion conspicuity in 3D T1-weighted GRE imaging using the CAIPIRINHA technique on gadoxetic acid-enhanced 3T liver MR imaging. In patients with normal liver function, a 15-min HBP with a $13^{\circ}$ FA is a feasible option without a significant decrease in image quality.
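
For reference, the two quantitative metrics named in the abstract are commonly computed as below. The paper's exact ROI protocol is not given in the abstract, so these formulas are assumptions based on the usual published definitions.

```python
import numpy as np

def relative_liver_enhancement(si_pre: float, si_hbp: float) -> float:
    """RLE (%) from mean liver signal intensity before and during the HBP."""
    return (si_hbp - si_pre) / si_pre * 100.0

def signal_to_noise_ratio(liver_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """SNR as mean liver ROI signal over the standard deviation of a
    background/noise ROI (one common single-image estimate)."""
    return float(liver_roi.mean() / noise_roi.std(ddof=1))

# Example with made-up signal intensities: pre-contrast 310, 20-min HBP 545.
print(f"RLE = {relative_liver_enhancement(310, 545):.1f}%")
```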

Prevalence of Hypertension and Related Risk Factors of the Older Residents in Andong Rural Area (안동 농촌지역 중년 및 노인 주민의 고혈압 유병율과 위험요인 분석)

  • Lee, Hye-Sang;Kwun, In-Sook;Kwon, Chong-Suk
    • Journal of the Korean Society of Food Science and Nutrition / v.38 no.7 / pp.852-861 / 2009
  • This study was performed from January to February 2003 to assess the risk factors associated with hypertension. The subjects were 1,296 people (496 males, 800 females) aged 40 years and over living in the Andong rural area. The hypertensive group comprised 602 people (272 males, 330 females) who were diagnosed with hypertension ($SBP{\geq}140\;mmHg$ or $DBP{\geq}90\;mmHg$) for the first time at this health examination. The mean anthropometric values of body weight, body fat (%), body mass index (BMI), and waist circumference were significantly higher in the hypertensive group than in the normal group. However, biochemical measurements such as total cholesterol (TC), triglyceride (TG), HDL-C, LDL-C, and fasting blood glucose (FBG) levels did not differ between the two groups, except for TG in females. The risk factors of interest in the development of hypertension were analyzed using multiple logistic regression and expressed as odds ratios (OR) with 95% confidence intervals (CI). The results showed that age, sex, obesity, waist circumference, alcohol drinking, and meat intake were risk factors for hypertension. In contrast, cigarette smoking, exercise, increased consumption of fish, fruit, and vegetables (except kimchi), blood lipid levels, and FBG were not linked with the development of hypertension; nutrient intakes were not associated with hypertension either. In conclusion, this study cannot assert a cause-and-effect relationship between nutrient intakes and the risk of hypertension in these subjects, but it does suggest a question worth investigating further with a larger-scale case-control study to determine how past exposure to particular nutrients or dietary components relates to the development of the disease.
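
A minimal sketch of the reported analysis: multiple logistic regression summarized as odds ratios with 95% confidence intervals, run on synthetic data (the variable names and effect sizes are illustrative only, not the study's).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1296  # same sample size as the study, data itself synthetic

# Hypothetical predictors: age, BMI, and a binary drinking indicator.
age = rng.integers(40, 80, n)
bmi = rng.normal(24, 3, n)
drink = rng.integers(0, 2, n)
logit = -10 + 0.08 * age + 0.15 * bmi + 0.4 * drink
hyper = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([age, bmi, drink]))
res = sm.Logit(hyper, X).fit(disp=0)

or_ = np.exp(res.params)        # odds ratios
ci = np.exp(res.conf_int())     # 95% CI for the ORs
for name, o, (lo, hi) in zip(["const", "age", "BMI", "drinking"], or_, ci):
    print(f"{name:8s} OR={o:5.2f}  95% CI=({lo:5.2f}, {hi:5.2f})")
```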