• Title/Summary/Keyword: output measurement (산출측정)

Search results: 2,525 (processing time: 0.035 seconds)

The Patterns of Change in Arterial Oxygen Saturation and Heart Rate and Their Related Factors during Voluntary Breath holding and Rebreathing (자발적 호흡정지 및 재개시 동맥혈 산소포화도와 심박수의 변동양상과 이에 영향을 미치는 인자)

  • Lim, Chae-Man;Kim, Woo-Sung;Choi, Kang-Hyun;Koh, Youn-Suck;Kim, Dong-Soon;Kim, Won-Dong
    • Tuberculosis and Respiratory Diseases / v.41 no.4 / pp.379-388 / 1994
  • Background: In sleep apnea syndrome, arterial oxygen saturation ($SaO_2$) decreases at a variable rate and to a variable degree for a given apneic period from patient to patient, and various kinds of cardiac arrhythmia are known to occur. Factors thought to affect arterial oxygen desaturation during apnea are the duration of apnea, the lung volume at which apnea occurs, and the subject's oxygen consumption rate. The lung serves as the preferential oxygen source during apnea, and many reports have addressed the influence of lung volume on $SaO_2$ during apnea, but there are few, if any, studies on the influence of an individual's oxygen consumption rate on $SaO_2$ during breath holding, or on the profile of arterial oxygen resaturation after breathing resumes. Methods: To investigate the changes in $SaO_2$ and heart rate (HR) during breath holding (BH) and rebreathing (RB), and to evaluate the physiologic factors responsible for these changes, lung volume measurements and arterial blood gas analyses were performed in 17 healthy subjects. Nasal airflow by thermistor, $SaO_2$ by pulse oximeter, and the ECG tracing were recorded on a polygraph (TA 4000, Gould, U.S.A.) during voluntary BH and RB at total lung capacity (TLC), at functional residual capacity (FRC), and at residual volume (RV), respectively, for each subject. Each subject's basal metabolic rate (BMR) was estimated using the Harris-Benedict equation (see the sketch following this entry). Results: The time needed for $SaO_2$ to drop 2% from the basal level during breath holding (T2%) was $70.1{\pm}14.2$ sec (mean${\pm}$standard deviation) at TLC, $44.0{\pm}11.6$ sec at FRC, and $33.2{\pm}11.1$ sec at RV (TLC vs. FRC, p<0.05; FRC vs. RV, p<0.05). On rebreathing after $SaO_2$ had decreased 2%, a further decrement in $SaO_2$ was observed, and it was significantly greater at RV ($4.3{\pm}2.1%$) than at TLC ($1.4{\pm}1.0%$) (p<0.05) or at FRC ($1.9{\pm}1.4%$) (p<0.05). The time required for $SaO_2$ to return to the basal level after RB (Tr) at TLC was not significantly different from that at FRC or at RV. T2% showed no significant correlation with lung volumes or with BMR taken separately. On the other hand, T2% correlated significantly with TLC/BMR (r=0.693, p<0.01) and FRC/BMR (r=0.615, p<0.025), but not with RV/BMR (r=0.227, p>0.05). The differences between maximal and minimal HR (${\Delta}HR$) during the BH-RB maneuver were $27.5{\pm}9.2/min$ at TLC, $26.4{\pm}14.0/min$ at RV, and $19.1{\pm}6.0/min$ at FRC, the last being significantly smaller than at TLC (p<0.05) or at RV (p<0.05). The means of five P-P intervals before and after RB were $0.80{\pm}0.10$ sec and $0.72{\pm}0.09$ sec at TLC (p<0.001), $0.82{\pm}0.11$ sec and $0.73{\pm}0.09$ sec at FRC (p<0.025), and $0.77{\pm}0.09$ sec and $0.72{\pm}0.09$ sec at RV (p<0.05). Conclusion: Healthy subjects showed arterial desaturation of varying rate and extent during breath holding at different lung volumes. When the breath was held at a lung volume greater than FRC, the rate of arterial desaturation correlated significantly with lung volume/basal metabolic rate, but when the breath was held at RV, the rate of arterial desaturation did not correlate linearly with RV/BMR. Sinus arrhythmia occurred during the breath holding and rebreathing maneuver irrespective of the lung volume at which breath holding started; the change was smallest when the breath was held at FRC, and the change in vagal tone induced by the alteration in respiratory movement appears to be the major factor responsible for the sinus arrhythmia.

  • PDF
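The abstract above normalizes lung volumes by an estimated basal metabolic rate. As a minimal sketch, assuming the original Harris-Benedict equations (the paper does not state which form of the equation or which units were used), the following illustrates the kind of BMR estimate and lung volume/BMR ratio involved; the subject values are hypothetical.

```python
# Illustrative sketch (not from the paper): estimating basal metabolic rate with the
# original Harris-Benedict equations and forming a lung-volume/BMR ratio, the kind of
# normalization the abstract correlates with T2%. Subject values are hypothetical.

def harris_benedict_bmr(sex: str, weight_kg: float, height_cm: float, age_yr: float) -> float:
    """Return estimated BMR in kcal/day using the original Harris-Benedict equations."""
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    elif sex == "female":
        return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr
    raise ValueError("sex must be 'male' or 'female'")

if __name__ == "__main__":
    # Hypothetical subject
    bmr = harris_benedict_bmr("male", weight_kg=70.0, height_cm=175.0, age_yr=30.0)
    tlc_liters = 6.0          # hypothetical total lung capacity
    ratio = tlc_liters / bmr  # lung volume normalized by metabolic rate (L per kcal/day)
    print(f"BMR ~ {bmr:.0f} kcal/day, TLC/BMR ~ {ratio:.5f} L per kcal/day")
```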

A Structural Relationship among Job Requirements, Job Resources and Job Burnout, and Organizational Effectiveness of Private Security Guards (민간경비원의 직무요구 직무자원과 소진, 조직유효성의 구조적 관계)

  • Kim, Sung-Cheol;Kim, Young-Hyun
    • Korean Security Journal / no.48 / pp.9-33 / 2016
  • The purpose of the present study was to examine the cause-and-effect relationships of job requirements and job resources with job burnout as a mediating variable, and the effects of these variables on organizational effectiveness. The population of the study was private security guards employed by 13 private security companies in the Seoul and Gyeonggi-do areas, and a survey was conducted on 500 security guards selected using a purposive sampling technique. Of the 460 questionnaires distributed, 429 responses were used for data analysis after excluding 31 outliers or insincere responses. The data were coded and analyzed with SPSS 18.0 and AMOS 18.0. Descriptive analyses were performed to describe the sociodemographic characteristics of the respondents. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were used to test the validity of the measurement tool, and Cronbach's alpha coefficients were calculated to test its reliability. To determine the significance of relationships among variables, Pearson's correlation analysis was performed. Covariance structure analysis (CSA) was performed to test the relationships among the latent factors of a model of job requirements, job resources, job burnout, and organizational effectiveness of private security guards, and the fitness of the model was judged by goodness-of-fit indices ($\chi^2$, df, p, RMR, GFI, CFI, TLI, RMSEA; a sketch of the RMSEA computation follows this entry). The level of significance was set at .05, and the following results were obtained. First, although the effect of job requirements on job burnout was not statistically significant, it was positive overall; this can be interpreted to mean that the higher the members' perception of job requirements, the higher their perception of job burnout. Second, the influence of job resources on job burnout was negative, suggesting that the higher the perception of job resources, the lower the perception of job burnout. Third, although the influence of job requirements on organizational effectiveness was not statistically significant, it was negative overall, suggesting that the higher the perception of job requirements, the lower the perception of organizational effectiveness. Fourth, job resources had a positive influence on organizational effectiveness, suggesting that the higher the perception of job resources, the higher the perception of organizational effectiveness. Fifth, although the influence of job burnout on organizational effectiveness was not statistically significant, it had partial negative influences on sublevels of organizational effectiveness, which may suggest that the higher the members' perception of job burnout, the lower the organizational effectiveness. Sixth, in the analysis of the mediating role in the relationship between job requirements and organizational effectiveness, job burnout played a partial mediating role. These results suggest that by managing job requirements so as to reduce job burnout, organizational effectiveness, reflected in job satisfaction, organizational commitment, and turnover intention, can be maximized. Seventh, in the analysis of the mediating role in the relationships among job requirements, job resources, and organizational effectiveness, job burnout again played a partial mediating role. These results suggest that organizational effectiveness can be maximized either by lowering job requirements or by managing burnout through the reorganization of job resources.

  • PDF
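As a small illustration of one of the fit indices listed above, the sketch below computes RMSEA from a model chi-square, its degrees of freedom, and the sample size, using the standard formula $RMSEA=\sqrt{\max(\chi^2-df,0)/(df(N-1))}$. The chi-square and df values are hypothetical; only the sample size (429 usable responses) comes from the abstract.

```python
# Illustrative sketch: computing RMSEA from chi-square, degrees of freedom, and sample
# size, one of the goodness-of-fit indices the study reports. Input values are hypothetical.
import math

def rmsea(chi_square: float, df: int, n: int) -> float:
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))

if __name__ == "__main__":
    value = rmsea(chi_square=250.0, df=120, n=429)  # n = 429 usable responses
    print(f"RMSEA = {value:.3f}  (values below ~0.08 are conventionally taken as acceptable)")
```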

The Comparison of Results Among Hepatitis B Test Reagents Using National Standard Substance (국가 표준물질을 이용한 B형 간염 검사 시약 간의 결과 비교)

  • Lee, Young-Ji;Sim, Seong-Jae;Back, Song-Ran;Seo, Mee-Hye;Yoo, Seon-Hee;Cho, Shee-Man
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.2 / pp.203-207 / 2010
  • Purpose: Hepatitis B is an infection caused by the hepatitis B virus (HBV). Currently, several methods, kits, and instruments are in use for hepatitis B testing, and because these methods are not uniform, their results can differ. To manage these differences, the performance of each test system and reagent needs to be evaluated with a common standard substance. The aim of this study was to investigate the behavior of the RIA reagents used at Asan Medical Center by comparing them with several other test reagents using national standard substances. Materials and Methods: The national standard substances for biological medicines from the National Institute of Food and Drug Safety Evaluation consist of five materials: four antigens and one antibody. We tested reagents from companies A and B according to each kit's test method, and all tests were measured repeatedly to obtain accurate results. Results: For the "HBsAg Mixed Titer Performance Panel," the match rates in S/CO units of the RIA method against three EIA reagents and two CIA reagents were 94.4% (17/18) and 83.3% (15/18) for company A's reagent, and 88.9% (16/18) and 77.8% (14/18) for company B's. For the "HBsAg Low Titer Performance Panel" of 13 members, the EIA reagents (2) gave 7 positive results and the CIA reagents (3) gave 11, while with the RIA method company A's reagent gave 3 and company B's gave 2. The "HBV surface antigen 86.76 IU/vial" standard was tested in serial dilutions: company A's reagent remained positive down to a 600-fold dilution (0.14 IU/mL) and company B's down to a 300-fold dilution (0.29 IU/mL). For the "HBV human immunoglobulin 95.45 IU/vial" standard, company A's reagent remained positive down to a 10,000-fold dilution (9.5 mIU/mL) and company B's down to a 4,000-fold dilution (24 mIU/mL). For the "HBsAg Working Standards 0.02~11.52 IU/mL," company A's kit gave positive results at concentrations of 0.38 IU/mL and above, and company B's at 2.23 IU/mL and above. (The arithmetic behind these dilution figures is sketched after this entry.) Conclusion: When the various test reagents and the RIA method were compared using the national standard substances for hepatitis B testing, no notable trend differences between reagents were observed. Positive results could be obtained for antigen diluted up to 600-fold and for antibody diluted up to 10,000-fold. Therefore, we confirmed that the reagents and system used for hepatitis B virus testing at Asan Medical Center perform reliably.

  • PDF
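The dilution and agreement figures in the abstract follow from simple arithmetic. The sketch below reproduces it, assuming each vial value corresponds to 1 mL so that an N-fold dilution gives (IU per vial)/N, which matches the abstract's own numbers.

```python
# Sketch reproducing the simple arithmetic behind the abstract's figures: the concentration
# at an end-point dilution and the match rate between reagents. Input values are taken from
# the abstract; the 1 mL-per-vial assumption is ours and matches its arithmetic.

def endpoint_concentration(stock_iu_per_vial: float, dilution_factor: int) -> float:
    """Concentration (IU/mL, assuming 1 mL per vial) after an N-fold dilution of the standard."""
    return stock_iu_per_vial / dilution_factor

def match_rate(matches: int, total: int) -> float:
    """Percent agreement between reagents over a performance panel."""
    return 100.0 * matches / total

if __name__ == "__main__":
    # "HBV surface antigen 86.76 IU/vial": company A positive down to a 600-fold dilution
    print(f"{endpoint_concentration(86.76, 600):.2f} IU/mL")                 # ~0.14 IU/mL
    # "HBV human immunoglobulin 95.45 IU/vial": company A positive down to 10,000-fold
    print(f"{endpoint_concentration(95.45, 10_000) * 1000:.1f} mIU/mL")      # ~9.5 mIU/mL
    # Mixed Titer Performance Panel: 17 of 18 members concordant
    print(f"{match_rate(17, 18):.1f} %")                                     # ~94.4 %
```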

The Comparison of Image Quality and Quantitative Indices by Wide Beam Reconstruction Method and Filtered Back Projection Method in Tl-201 Myocardial Perfusion SPECT (Tl-201 심근관류 SPECT 검사에서 광대역 재구성(Wide Beam Reconstruction: WBR) 방법과 여과 후 역투영법에 따른 영상의 질 및 정량적 지표 값 비교)

  • Yoon, Soon-Sang;Nam, Ki-Pyo;Shim, Dong-Oh;Kim, Dong-Seok
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.2 / pp.122-127 / 2010
  • Purpose: Xpress3.$Cardiac^{TM}$ (UltraSPECT, Haifa, Israel), a wide beam reconstruction (WBR) method, enables acquisition in a quarter of the usual time while maintaining image quality. The purpose of this study was to investigate the usefulness of the WBR method for decreasing scan times and to compare it with filtered back projection (FBP), the routinely used method. Materials and Methods: Phantom and clinical studies were performed. An anthropomorphic torso phantom was prepared to give counts equivalent to those from a patient's body. The Tl-201 concentrations in the compartments were 74 kBq (2 ${\mu}Ci$)/cc in the myocardium, 11.1 kBq (0.3 ${\mu}Ci$)/cc in soft tissue, and 2.59 kBq (0.07 ${\mu}Ci$)/cc in the lung. Non-gated Tl-201 myocardial perfusion SPECT data were acquired with the phantom: one acquisition was scanned for 50 seconds per frame and reconstructed with FBP, and the other was acquired for 13 seconds per frame and reconstructed with WBR. Using Xeleris ver. 2.0551, the full width at half maximum (FWHM) and the average image contrast were compared. In the clinical study, we analyzed 30 patients who underwent Tl-201 gated myocardial perfusion SPECT in the department of nuclear medicine at Asan Medical Center from January to April 2010. The patients were imaged at full time (50 seconds per frame) with the FBP algorithm and again at quarter time (13 seconds per frame) with the WBR algorithm. Using the 4D MSPECT (4DM), Quantitative Perfusion SPECT (QPS), and Quantitative Gated SPECT (QGS) software, the summed stress score (SSS), summed rest score (SRS), summed difference score (SDS), end-diastolic volume (EDV), end-systolic volume (ESV), and ejection fraction (EF) were analyzed for their correlations and compared statistically by paired t-test (a sketch of this kind of paired comparison follows this entry). Results: In the phantom study, the WBR method improved FWHM by about 30% compared with the FBP method (WBR 5.47 mm, FBP 7.07 mm), and the WBR method's average image contrast was also higher than that of FBP. However, for the quantitative indices SSS, SDS, SRS, EDV, ESV, and EF, there were statistically significant differences between WBR and FBP (p<0.01). SSS, SDS, and SRS showed poor agreement between WBR and FBP (correlations of 0.18, 0.34, and 0.08), whereas EDV, ESV, and EF showed good correlation (0.88, 0.89, and 0.71). Conclusion: From the phantom study we confirmed that the WBR method reduces acquisition time while improving image quality compared with the FBP method. However, the significant differences in the quantitative indices must be considered, and further evaluation is needed before clinical application in order to find the cause of the differences between the phantom and clinical results.

  • PDF
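The clinical comparison above relies on a paired t-test and on correlations between WBR and FBP values of the same index in the same patients. A minimal sketch follows, with hypothetical ejection fraction values standing in for study data.

```python
# Illustrative sketch of the statistics used in the abstract: a paired t-test and a Pearson
# correlation between quantitative indices obtained with WBR and with FBP. The EF values
# below are hypothetical placeholders, not data from the study.
from scipy import stats

ef_fbp = [55.0, 62.0, 48.0, 70.0, 65.0, 58.0, 61.0, 52.0]   # ejection fraction, FBP (%)
ef_wbr = [57.0, 60.0, 50.0, 72.0, 63.0, 59.0, 64.0, 51.0]   # ejection fraction, WBR (%)

t_stat, p_value = stats.ttest_rel(ef_fbp, ef_wbr)   # paired t-test across the same patients
r, r_p = stats.pearsonr(ef_fbp, ef_wbr)             # agreement between the two methods

print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"Pearson correlation: r = {r:.2f}, p = {r_p:.3f}")
```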

Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.101-124 / 2018
  • Recently, most technologies have developed in various forms, either through the advancement of a single technology or through interaction with other technologies. In particular, many of these technologies have the character of convergence, arising from the interaction between two or more techniques. In addition, efforts to respond to technological change in advance are continuously increasing, through forecasting of the promising convergence technologies that will emerge in the near future. Accordingly, many researchers are attempting various analyses for forecasting promising convergence technologies. A convergence technology carries the characteristics of several technologies, owing to the way it is generated, so forecasting promising convergence technologies is much more difficult than forecasting general technologies with high growth potential. Nevertheless, some achievements have been made in attempts to forecast promising technologies using big data analysis and social network analysis. Studies of convergence technology through data analysis are actively conducted around the themes of discovering new convergence technologies and analyzing their trends, and information about new convergence technologies is accordingly being provided more abundantly than in the past. However, existing methods of analyzing convergence technology have some limitations. First, most studies analyze convergence technology through predefined technology classifications. Recent technologies tend to have convergence characteristics and thus consist of technologies from various fields, so a new convergence technology may not belong to any predefined class; the existing approach therefore does not properly reflect the dynamic change of the convergence phenomenon. Second, in order to forecast promising convergence technologies, most existing analysis methods use general-purpose indicators, which do not fully exploit the specificity of the convergence phenomenon. A new convergence technology is highly dependent on the existing technologies from which it originates; depending on changes in those technologies, it can grow into an independent field or disappear rapidly. In existing analyses, the growth potential of a convergence technology is judged through traditional, general-purpose indicators, which do not reflect the principle of convergence, namely that new technologies emerge from two or more mature technologies and that grown technologies in turn affect the creation of other technologies. Third, previous studies do not provide objective methods for evaluating the accuracy of models for forecasting promising convergence technologies. In convergence technology research, forecasting promising technologies has received relatively little attention because of the complexity of the field, so it is difficult to find a method to evaluate the accuracy of such forecasting models. To activate this field, it is important to establish a method for objectively verifying and evaluating the accuracy of the model proposed by each study.
To overcome these limitations, we propose a new method for analyzing convergence technologies. First, through topic modeling, we derive a new technology classification in terms of text content, which reflects the dynamic change of the actual technology market rather than a fixed classification standard. Next, we identify the influence relationships between technologies through the topic correspondence weights of each document and structure them into a network (a generic sketch of this step follows this entry). We then devise a centrality indicator, potential growth centrality (PGC), to forecast the future growth of a technology by utilizing the centrality information of each technology; it reflects the convergence characteristics of each technology according to technology maturity and the interdependence between technologies. Along with this, we propose a method to evaluate the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality by period. In this paper, we conduct experiments with 13,477 patent documents dealing with technical content to evaluate the performance and practical applicability of the proposed method. As a result, we confirm that a forecasting model based on the proposed centrality indicator achieves a forecast accuracy up to about 2.88 times higher than that of forecasting models based on currently used network indicators.
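The sketch below is a generic illustration of the network-construction step described above: technology topics that co-occur in a document are linked with weights derived from the document's topic weights, and a standard weighted centrality is computed. It is not the authors' PGC indicator, whose exact definition is not given in the abstract, and the topic weights are hypothetical.

```python
# Generic illustration (not the authors' PGC indicator): build a weighted network of
# technology topics from per-document topic weights and compute a standard centrality.
# The topic weights below are hypothetical; the paper derives its weights from topic
# modeling over 13,477 patent documents.
import itertools
import networkx as nx

# Hypothetical document-topic weight matrix: document -> {topic: weight}
doc_topic = {
    "patent_001": {"T1": 0.6, "T2": 0.3, "T3": 0.1},
    "patent_002": {"T2": 0.5, "T3": 0.5},
    "patent_003": {"T1": 0.2, "T3": 0.8},
}

G = nx.Graph()
for topics in doc_topic.values():
    # Connect every pair of topics that co-occur in a document; accumulate co-occurrence weight.
    for (t1, w1), (t2, w2) in itertools.combinations(topics.items(), 2):
        w = w1 * w2
        if G.has_edge(t1, t2):
            G[t1][t2]["weight"] += w
        else:
            G.add_edge(t1, t2, weight=w)

# Weighted degree (node strength) as a simple stand-in for a growth-potential indicator.
strength = dict(G.degree(weight="weight"))
print(sorted(strength.items(), key=lambda kv: kv[1], reverse=True))
```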

Comparison and evaluation of volumetric modulated arc therapy and intensity modulated radiation therapy plans for postoperative radiation therapy of prostate cancer patient using a rectal balloon (직장풍선을 삽입한 전립선암 환자의 수술 후 방사선 치료 시 용적변조와 세기변조방사선치료계획 비교 평가)

  • Jung, Hae Youn;Seok, Jin Yong;Hong, Joo Wan;Chang, Nam Jun;Choi, Byeong Don;Park, Jin Hong
    • The Journal of Korean Society for Radiation Therapy / v.27 no.1 / pp.45-52 / 2015
  • Purpose: The dose distribution to organs at risk (OARs) and normal tissue is affected by the treatment technique in postoperative radiation therapy for prostate cancer. The aim of this study was to compare dose distribution characteristics and to evaluate treatment efficiency by devising VMAT plans with different numbers of arcs and an IMRT plan for postoperative prostate cancer radiation therapy in patients using a rectal balloon. Materials and Methods: Ten patients who received postoperative prostate radiation therapy at our hospital were compared. CT images of patients with a rectal balloon inserted were acquired at 3 mm slice thickness, and plans were generated in Eclipse (version 11.0, Varian, Palo Alto, USA) for a 10 MV beam on a TrueBeam STx equipped with an HD120 MLC (Varian, Palo Alto, USA). One-arc (1A-VMAT) and two-arc (2A-VMAT) VMAT plans and a 7-field IMRT (7F-IMRT) plan were devised for each patient, with the same dose-volume constraints and plan normalization. To evaluate these plans, PTV coverage, the conformity index (CI), and the homogeneity index (HI) were compared, and $R_{50%}$ was calculated to assess low-dose spillage for each treatment plan (one common formulation of these indices is sketched after this entry). For the OARs, rectum $D_{25%}$ and bladder mean dose (Dmean) were compared. To evaluate treatment efficiency, total monitor units (MU) and delivery time were considered. Each result was analyzed as the average over the 10 patients. Additionally, portal dosimetry was carried out to verify the accuracy of beam delivery. Results: There was no significant difference in PTV coverage or HI among the three plans. CI and $R_{50%}$ were highest for 7F-IMRT, at 1.230 and 3.991, respectively (p=0.00). Rectum $D_{25%}$ was similar between 1A-VMAT and 2A-VMAT, but values approximately 7% higher were observed for 7F-IMRT compared with the others (p=0.02), and bladder Dmean was similar among all plans (p>0.05). Total MU were 494.7, 479.7, and 757.9 for 1A-VMAT, 2A-VMAT, and 7F-IMRT, respectively (p=0.00), being highest for 7F-IMRT. Delivery times were 65.2 sec, 133.1 sec, and 145.5 sec, respectively (p=0.00), with 1A-VMAT clearly the shortest. All plans showed gamma pass rates above 99.5% (2 mm, 2%) in portal dosimetry quality assurance (p=0.00). Conclusion: In postoperative prostate cancer radiation therapy for patients using a rectal balloon, there was no significant difference in PTV coverage, but 1A-VMAT and 2A-VMAT were more efficient in reducing dose to normal tissue and OARs. Between the VMAT plans, $R_{50%}$ and MU were slightly lower for 2A-VMAT, but 1A-VMAT had the shortest delivery time, so it is regarded as an effective plan that can also reduce intra-fractional patient motion.

  • PDF
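Several definitions of the conformity and homogeneity indices exist and the abstract does not state which were used. As a sketch under that caveat, the following uses one common formulation (prescription-isodose-volume/PTV for CI, (D2%-D98%)/D50% for HI, and half-prescription-isodose-volume/PTV for $R_{50%}$), with hypothetical volumes and doses.

```python
# Illustrative sketch of plan-quality indices of the kind compared in the abstract.
# Several definitions of CI and HI exist and the abstract does not state which one was used;
# this sketch uses one common formulation. All volumes and doses below are hypothetical.

def conformity_index(v_presc_iso_cc: float, v_ptv_cc: float) -> float:
    """CI = volume covered by the prescription isodose / PTV volume."""
    return v_presc_iso_cc / v_ptv_cc

def homogeneity_index(d2: float, d98: float, d50: float) -> float:
    """HI = (D2% - D98%) / D50%: near-maximum minus near-minimum dose, normalized."""
    return (d2 - d98) / d50

def r50(v_half_presc_iso_cc: float, v_ptv_cc: float) -> float:
    """R50% = volume covered by 50% of the prescription dose / PTV volume (low-dose spillage)."""
    return v_half_presc_iso_cc / v_ptv_cc

if __name__ == "__main__":
    print(f"CI  = {conformity_index(145.0, 120.0):.3f}")
    print(f"HI  = {homogeneity_index(68.0, 63.5, 66.0):.3f}")
    print(f"R50 = {r50(480.0, 120.0):.3f}")
```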

Diagnosis of Obstructive Sleep Apnea Syndrome Using Overnight Oximetry Measurement (혈중산소포화도검사를 이용한 폐쇄성 수면무호흡증의 흡증의 진단)

  • Youn, Tak;Park, Doo-Heum;Choi, Kwang-Ho;Kim, Yong-Sik;Woo, Jong-Inn;Kwon, Jun-Soo;Ha, Kyoo-Seob;Jeong, Do-Un
    • Sleep Medicine and Psychophysiology / v.9 no.1 / pp.34-40 / 2002
  • Objectives: The gold standard for diagnosing obstructive sleep apnea syndrome (OSAS) is nocturnal polysomnography (NPSG). This is rather expensive and somewhat inconvenient, however, and consequently simpler and cheaper alternatives to NPSG have been proposed. Oximetry is appealing because of its widespread availability and ease of application. In this study, we evaluated whether oximetry alone can be used to diagnose or screen for OSAS. The diagnostic performance of an analysis of arterial oxygen saturation ($SaO_2$) based on the 'dip index', the mean $SaO_2$, and CT90 (the percentage of time spent at $SaO_2$<90%) was compared with that of NPSG. Methods: Fifty-six patients referred for NPSG to the Division of Sleep Studies at Seoul National University Hospital were randomly selected. For each patient, NPSG with oximetry was carried out. From the oximetry data we obtained three variables, the dip index most linearly correlated with the respiratory disturbance index (RDI) from NPSG, the mean $SaO_2$, and CT90, and compared them with the NPSG diagnosis. For each criterion, the sensitivity, specificity, and positive and negative predictive values of the oximetry data were calculated (as sketched after this entry). Results: Thirty-nine of the fifty-six patients were diagnosed with OSAS by NPSG. The mean RDI was 17.5, the mean $SaO_2$ was 94.9%, and the mean CT90 was 5.1%. The dip index [4%-4sec] was most linearly correlated with RDI (r=0.861). With dip index [4%-4sec]${\geq}2$ as the diagnostic criterion, we obtained a sensitivity of 0.95, specificity of 0.71, positive predictive value of 0.88, and negative predictive value of 0.86. Using mean $SaO_2{\leq}97%$, we obtained a sensitivity of 0.95, specificity of 0.41, positive predictive value of 0.79, and negative predictive value of 0.78. Using $CT90{\geq}5%$, we obtained a sensitivity of 0.28, specificity of 1.00, positive predictive value of 1.00, and negative predictive value of 0.38. Conclusions: The dip index [4%-4sec] and a mean $SaO_2{\leq}97%$ obtained from nocturnal oximetry data are helpful in the diagnosis of OSAS. CT90${\leq}$5% can also be used in excluding OSAS.

  • PDF
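The four diagnostic metrics reported above follow from a 2x2 table of criterion result versus NPSG diagnosis. The sketch below computes them from illustrative counts chosen to be consistent with the 39/56 diagnosis split and the values reported for the dip-index criterion; the exact counts are not given in the abstract.

```python
# Sketch of the diagnostic metrics reported for each oximetry criterion: sensitivity,
# specificity, and positive/negative predictive values from a 2x2 table of criterion
# result vs. NPSG diagnosis. The counts below are illustrative, chosen to be consistent
# with the reported figures, not taken directly from the study.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # criterion positive among OSAS patients
        "specificity": tn / (tn + fp),   # criterion negative among non-OSAS patients
        "ppv": tp / (tp + fp),           # OSAS among criterion-positive subjects
        "npv": tn / (tn + fn),           # non-OSAS among criterion-negative subjects
    }

if __name__ == "__main__":
    # Illustrative counts for a criterion such as dip index [4%-4sec] >= 2
    # (39 OSAS and 17 non-OSAS subjects, as in the abstract)
    metrics = diagnostic_metrics(tp=37, fp=5, fn=2, tn=12)
    for name, value in metrics.items():
        print(f"{name}: {value:.2f}")
```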

Environmental Pollution in Korea and Its Control (우리나라의 환경오염 현황과 그 대책)

  • 윤명조
    • Proceedings of the KOR-BRONCHOESO Conference / 1972.03a / pp.5-6 / 1972
  • Noise and air pollution, which accompany the development of industry and the growth of population, contribute to the deterioration of the urban environment. The air pollution level in Seoul has gradually increased, and city residents suffer from severe noise pollution. If no measures are taken against pollution, the emission of pollutants into the air will reach 36.7 thousand tons per year per square kilometer in 1975, three times the 1970 level and comparable to that of the United States in 1968. The main sources of air pollution in Seoul are exhaust gas from vehicles and the combustion of Bunker C oil for heating. Thus, it is urgent that an exhaust gas cleaner be installed on every car and that fuels be replaced with lower-sulfur oil to prevent pollution. Transportation noise (vehicular noise and train noise) is the main component of the urban noise problem. The average noise level in the downtown area is about 75 dB, with a maximum of 85 dB, and vehicle horns were measured at around 100 dB. Therefore, reducing the number of bus stops, strictly regulating horn use in downtown areas, and better vehicle maintenance would be effective measures against noise pollution in urban areas. Within 200 metres of a railroad, train noise exceeds the limit specified by the pollution control law in Korea. In particular, the levels of train noise and steam whistles, as measured by the ISO evaluation, can adversely affect the community activities of residents. To prevent environmental destruction, many developed countries have taken more active measures against worsening pollution, and such action is now urgently required in this country.

  • PDF

A Thermal Time-Driven Dormancy Index as a Complementary Criterion for Grape Vine Freeze Risk Evaluation (포도 동해위험 판정기준으로서 온도시간 기반의 휴면심도 이용)

  • Kwon, Eun-Young;Jung, Jea-Eun;Chung, U-Ran;Lee, Seung-Jong;Song, Gi-Cheol;Choi, Dong-Geun;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology / v.8 no.1 / pp.1-9 / 2006
  • Despite the recently observed warmer winters in Korea, more freeze injuries and associated economic losses are being reported in the fruit industry than ever before. Existing freeze-frost forecasting systems employ only the daily minimum temperature to judge the potential damage to dormant flowering buds and cannot accommodate biological responses such as short-term acclimation of plants to severe weather episodes, or annual variation in climate. We introduce 'dormancy depth', in addition to daily minimum temperature, as a complementary criterion for judging the potential damage of freezing temperatures to the dormant flowering buds of grape vines. Dormancy depth can be estimated by a phenology model driven by daily maximum and minimum temperature and is expected to serve as a reasonable proxy for the physiological tolerance of buds to low temperature (a generic sketch of this kind of thermal-time accumulation follows this entry). Dormancy depth at a selected site was estimated for a climatological normal year with this model, and we found a close similarity in the pattern of change over time between the estimated dormancy depth and the known cold tolerance of fruit trees. Inter-annual and spatial variations in dormancy depth were identified by this method, showing the feasibility of using dormancy depth as a proxy indicator of tolerance to low temperature during the winter season. The model was applied to 10 vineyards that were recently damaged by a cold spell, and a temperature-dormancy depth-freeze injury relationship was formulated as an exponential-saturation model that can be used to judge freeze risk under a given combination of temperature and dormancy depth. Based on this model and the expected lowest temperature with a 10-year recurrence interval, a freeze risk probability map was produced for Hwaseong County, Korea. The results seem to explain why the vineyards in the warmer part of Hwaseong County have suffered more freeze damage than those in the cooler part of the county. A dormancy depth-minimum temperature dual-engine freeze warning system was designed for vineyards in major production counties in Korea by combining site-specific dormancy depth and minimum temperature forecasts with the freeze risk model. In this system, the daily accumulation of thermal time since the previous fall yields the current dormancy state (depth). The regional minimum temperature forecast for the next day issued by the Korea Meteorological Administration is converted to a site-specific forecast at 30 m resolution. These data are input to the freeze risk model, and the percent damage probability is calculated for each grid cell and mapped for the entire county. Similar approaches may be used to develop freeze warning systems for other deciduous fruit trees.
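As a rough, generic sketch of thermal-time accumulation from daily maximum and minimum temperatures, the kind of quantity that drives the dormancy-depth model above: this is not the authors' calibrated phenology model, and the base temperature, sign convention, and temperature series are arbitrary placeholders.

```python
# Generic illustration of thermal-time accumulation from daily maximum and minimum
# temperatures. This is NOT the authors' calibrated dormancy-depth model; the base
# temperature and the sample temperature series are hypothetical placeholders.

def daily_thermal_time(t_max: float, t_min: float, t_base: float = 5.0) -> float:
    """Degree-day style thermal time for one day: mean temperature above a base threshold."""
    return max((t_max + t_min) / 2.0 - t_base, 0.0)

def accumulate(daily_max_min: list, t_base: float = 5.0) -> float:
    """Accumulate thermal time over a sequence of (t_max, t_min) days since last fall."""
    return sum(daily_thermal_time(tx, tn, t_base) for tx, tn in daily_max_min)

if __name__ == "__main__":
    # Hypothetical week of daily (max, min) temperatures in degrees Celsius
    week = [(8.0, -2.0), (6.0, -4.0), (10.0, 1.0), (12.0, 3.0), (7.0, -1.0), (5.0, -3.0), (9.0, 0.0)]
    print(f"Accumulated thermal time: {accumulate(week):.1f} degree-days")
```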

Assessment Study on Educational Programs for the Gifted Students in Mathematics (영재학급에서의 수학영재프로그램 평가에 관한 연구)

  • Kim, Jung-Hyun;Whang, Woo-Hyung
    • Communications of Mathematical Education / v.24 no.1 / pp.235-257 / 2010
  • It is widely believed today that the creatively talented can create new knowledge and lead national development, so many countries around the world take an interest in gifted education. The U.S.A., England, Russia, Germany, Australia, Israel, and Singapore enforce laws on gifted education to offer gifted classes, and the Korean government also created a Gifted Education Improvement Act in January 2000; its Enforcement Ordinance was announced in April 2002 and revised in October 2008, making gifted education possible. The main purpose of the revision was to expand the opportunity of gifted education to students with special educational needs; one such measure is to offer gifted education to many more gifted students by establishing special classes at each school. It is also important that the quality of gifted education be improved along with the expansion of opportunity: social opinion holds that it would be reckless only to expand the opportunity for gifted education, and therefore assessment of teaching and learning programs for the gifted is indispensable. In this study, the mathematics teaching and learning programs of gifted classes at three middle schools were selected. The first grade at each school was reviewed and analyzed through comparative tables of the regular and gifted education programs. The content to be taught was also reviewed, and the programs were evaluated against assessment standards revised and modified from the current teaching and learning programs in mathematics. The following research questions were set up to assess the formation of content areas and the appropriateness of teaching and learning programs for the mathematically gifted. A. Does the formation of content areas in the special classes comply with the 7th national curriculum? 1. Which content areas of the regular curriculum are applied in these programs? 2. Between enrichment and selection in the curriculum for the gifted, which is applied in these programs? 3. Are the content areas organized and delivered properly? B. Are the programs for the gifted appropriate? 1. Are the educational goals of the programs aligned with those of gifted education in mathematics? 2. Does the content of each program reflect the characteristics of mathematically gifted students and bring out their mathematical talents? 3. Are the teaching and learning models and methods diverse enough to bring out their talents? 4. Does the assessment in each program reflect the learning goals and content and enhance gifted students' thinking ability? The conclusions are as follows. First, the most suitable content for the mathematically gifted was found to be numeration, arithmetic, geometry, measurement, probability, statistics, and letters and expressions. Enrichment and selection areas within the curriculum for the gifted were also offered in many ways, so that giftedness could be fully developed. Second, the educational goals of the teaching and learning programs for the mathematically gifted were in accordance with the directions and philosophy of mathematics education, and the programs reached the educational goals of improving creativity, thinking ability, and problem-solving ability, all of which are required by the curriculum. To accomplish these goals, visualization, symbolization, phasing, and exploration strategies were used effectively. Many different lecture formats, together with cooperative learning and discovery learning, were applied to meet the goals of the teaching and learning models. For teaching and learning activities, various strategies and models were used so that students could express their talents; these activities included experiments, exploration, application, estimation, guessing, discussion (conjecture and refutation), reconsideration, and so on. There was no mention of evaluation or paper exams to the students; while the program activities were being carried out, the educational goals and assessment methods were reflected in practice, that is, products, performance assessment, and portfolios were mainly used rather than paper-based assessment alone.