• Title/Summary/Keyword: number and quantify


Standardizing GC-FID Measurement of Nonmethane Hydrocarbons in Air for International Intercomparison Using Retention Index and Effective Carbon Number Concept

  • Liaw, Sheng-Ju;Tso, Tai-Ly
    • Analytical Science and Technology / v.8 no.4 / pp.807-814 / 1995
  • Accurate measurements of ozone precursors are required to understand the process and extent of ozone formation in rural and urban areas. Nonmethane hydrocarbons (NMHCs) have been identified as important ozone precursors. Identification and quantification of NMHCs are difficult because of the large number present and the wide molecular weight range encountered in typical air samples. A major plan of the research team of the Climate and Air Quality Taiwan Station (CATs) was the measurement of atmospheric nonmethane hydrocarbons. An analytical method has been developed for the analysis of individual nonmethane hydrocarbons in ambient air at ppb(v) and sub-ppb(v) levels. Whole ambient air samples were collected in canisters and analyzed by GC-FID with an $Al_2O_3$/KCl PLOT column. We targeted for quantitative analysis 43 compounds that may be substantial contributors to ozone formation. The retention indices and molar response factors of some commercially available $C_2{\sim}C_{10}$ hydrocarbons were determined and used to identify and quantify the compounds in air samples. A quality assurance program was instituted to ensure that good measurements were made, by participating in the International Nonmethane Hydrocarbon Intercomparison Experiments (NOMHICE).
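For concreteness, here is a minimal sketch of the two calculations this abstract leans on: a linear (temperature-programmed) retention index interpolated between bracketing n-alkanes, and effective-carbon-number (ECN) quantification against a single reference standard. All retention times, peak areas, and concentrations below are illustrative, not values from the paper.

```python
import numpy as np

def linear_retention_index(t_x, t_n, t_n1, n):
    """Linear (temperature-programmed) retention index: the unknown's
    retention time t_x is interpolated between the n- and (n+1)-carbon
    n-alkane reference peaks."""
    return 100.0 * (n + (t_x - t_n) / (t_n1 - t_n))

def ecn_concentration(area_x, ecn_x, area_ref, ecn_ref, conc_ref_ppbv):
    """FID response is roughly proportional to effective carbon number,
    so the unknown's molar concentration follows from one reference
    standard: C_x = (A_x / ECN_x) / (A_ref / ECN_ref) * C_ref."""
    return (area_x / ecn_x) / (area_ref / ecn_ref) * conc_ref_ppbv

# Illustrative numbers only (not from the paper): an unknown elutes at
# 7.4 min between n-hexane (6.9 min) and n-heptane (8.2 min).
print(linear_retention_index(7.4, 6.9, 8.2, n=6))   # ~638
# Quantify a C6 hydrocarbon against a 10 ppb(v) propane standard (ECN = 3).
print(ecn_concentration(area_x=5200.0, ecn_x=6.0,
                        area_ref=4100.0, ecn_ref=3.0, conc_ref_ppbv=10.0))
```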


Quantitative Analysis of Random Errors of the WRF-FLEXPART Model for Backward-in-time Simulation over the Seoul Metropolitan Area

  • Woo, Ju-Wan;Lee, Jae-Hyeong;Lee, Sang-Hyun
    • Atmosphere / v.29 no.5 / pp.551-566 / 2019
  • Quantitative understanding of the random error associated with Lagrangian particle dispersion modeling is a prerequisite for backward-in-time mode simulations. This study aims to quantify the random error of the WRF-FLEXPART model and suggest an optimum number of Lagrangian particles for backward-in-time simulations over the Seoul metropolitan area. A series of backward-in-time simulations of the WRF-FLEXPART model has been conducted at two receptor points by changing the number of Lagrangian particles, and the relative error, as a quantitative indicator of random error, is analyzed to determine the optimum number of released particles. The results show that in the Seoul metropolitan area a 1-day Lagrangian transport contributes 80~90% of the residence time and ~100% of the atmospheric enhancement of carbon monoxide. The relative errors in both the residence time and the atmospheric concentration enhancement are larger when the particles are released in the daytime than in the nighttime, and in the inland area than in the coastal area. The sensitivity simulations reveal that the relative errors decrease with an increasing number of Lagrangian particles. The use of a small number of Lagrangian particles causes significant random errors, which is attributed to the random number sampling process. For a particle number of 6000, the relative error in the atmospheric concentration enhancement is estimated as -6% ± 10%, with computational time reduced to 21% ± 7% on average. This study emphasizes the importance of quantitative analyses of the random errors in interpreting backward-in-time simulations of the WRF-FLEXPART model and in determining the number of Lagrangian particles.
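The random error being quantified here is the usual Monte Carlo sampling error, which shrinks roughly as 1/sqrt(N) with the number of released particles. A toy sketch of estimating such a relative error by repeated resampling; the exponential "residence time" field is synthetic and stands in for the WRF-FLEXPART output, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_error(n_particles, n_repeats=50):
    """Estimate the relative (random) error of a mean residence-time-like
    quantity computed from n_particles samples, by repeating the release
    n_repeats times and comparing the spread to the true mean."""
    reference = 1.0  # true mean of the synthetic exponential "residence time"
    means = [rng.exponential(reference, n_particles).mean()
             for _ in range(n_repeats)]
    return np.std(means) / reference

for n in (100, 1000, 6000, 20000):
    print(f"N = {n:6d}  relative error ~ {relative_error(n):.3f}")
# The error shrinks roughly as 1/sqrt(N): quadrupling N halves it, which
# is the accuracy-versus-cost trade-off the paper quantifies.
```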

A machine learning informed prediction of severe accident progressions in nuclear power plants

  • JinHo Song;SungJoong Kim
    • Nuclear Engineering and Technology / v.56 no.6 / pp.2266-2273 / 2024
  • A machine learning platform is proposed for the diagnosis of severe accident progression in a nuclear power plant. To predict the key parameters for accident management, including lost signals, a long short-term memory (LSTM) network is proposed, where multiple accident scenarios are used for training. Training and test data were produced by MELCOR simulation of the Fukushima Daiichi Nuclear Power Plant (FDNPP) accident at Unit 3. Feature variables were selected among plant parameters, where the importance ranking was determined by a recursive feature elimination technique using RandomForestRegressor. To answer the question of whether a reduced-order ML model could predict the complex transient response, we performed a systematic sensitivity study over the choices of target variables, the combination of training and test data, the number of feature variables, and the number of neurons to evaluate the performance of the proposed ML platform. The number of sensitivity cases was chosen to guarantee a 95% tolerance limit with a 95% confidence level based on Wilks' formula to quantify the uncertainty of predictions. The results indicate that the proposed ML platform consistently predicts the target variable; the median and mean predictions were close to the true value.
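The feature-ranking step, recursive feature elimination wrapped around a RandomForestRegressor, is available off the shelf in scikit-learn. A minimal sketch on synthetic data; the feature matrix is a placeholder for the MELCOR plant parameters, not the paper's dataset. For reference, the first-order Wilks criterion for a 95% tolerance limit at 95% confidence corresponds to 59 runs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

rng = np.random.default_rng(1)
# Synthetic stand-in for MELCOR snapshots: 500 samples x 20 candidate
# plant parameters; the target actually depends on only three of them.
X = rng.normal(size=(500, 20))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=500)

# Recursively drop the weakest feature, refitting the forest each round.
selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=5, step=1)
selector.fit(X, y)

print("selected feature indices:", np.where(selector.support_)[0])
print("elimination ranking     :", selector.ranking_)
```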

Automatic Estimation of Tillers and Leaf Numbers in Rice Using Deep Learning for Object Detection

  • Hyeokjin Bak;Ho-young Ban;Sungryul Chang;Dongwon Kwon;Jae-Kyeong Baek;Jung-Il Cho;Wan-Gyu Sang
    • Proceedings of the Korean Society of Crop Science Conference / 2022.10a / pp.81-81 / 2022
  • Recently, many studies on big-data-based smart farming have been conducted. Research to quantify morphological characteristics using image data from various crops in smart farming is underway. Rice is one of the most important food crops in the world, and much research has been done to predict and model rice crop yield. The number of productive tillers per plant is one of the important agronomic traits associated with the grain yield of the rice crop. However, modeling the basic growth characteristics of rice requires accurate data measurements. The existing method of measurement by humans is not only labor intensive but also prone to human error. Therefore, conversion to digital data is necessary to obtain accurate phenotyping data quickly. In this study, we present an image-based method to predict leaf number and evaluate tiller number of individual rice plants using the YOLOv5 deep learning network. We trained several YOLOv5 network variants and compared them to determine which yields higher prediction accuracy. We also performed data augmentation, a method used to complement small datasets. Based on the numbers of leaves and tillers actually measured on rice plants, the number of leaves predicted by the model from the image data and an existing regression equation were used to evaluate the number of tillers from the image data.
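Counting leaves and tillers with a detector reduces to running inference and tallying boxes per class. A sketch using the standard ultralytics/yolov5 torch.hub entry point; the weight file best_rice.pt, the image name, and the class labels "leaf" and "tiller" are hypothetical stand-ins for the authors' trained model.

```python
import torch

# Load a YOLOv5 model with custom weights (hypothetical checkpoint
# trained on "leaf" and "tiller" classes; swap in your own file).
model = torch.hub.load("ultralytics/yolov5", "custom", path="best_rice.pt")
model.conf = 0.4  # confidence threshold used when counting

results = model("rice_plant.jpg")       # single-image inference
detections = results.pandas().xyxy[0]   # one row per detected box

# Tally detections per class to get per-plant organ counts.
counts = detections["name"].value_counts().to_dict()
print("leaf count  :", counts.get("leaf", 0))
print("tiller count:", counts.get("tiller", 0))
```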


Selecting Climate Change Scenarios Reflecting Uncertainties

  • Lee, Jae-Kyoung;Kim, Young-Oh
    • Atmosphere / v.22 no.2 / pp.149-161 / 2012
  • Past research indicates that, of all the uncertainties arising in climate change research, the uncertainty caused by the choice of climate change scenario is the largest. Therefore, depending on which climate change scenario one adopts, the projection of future water resources will differ significantly. As a matter of principle, it is highly recommended to utilize all the GCM scenarios offered by the IPCC. However, this can be impractical when a decision has to be made at an action officer's level. Hence, as an alternative, it is necessary to select several scenarios that express the possible range of outcomes to the maximum extent. Objective standards for selecting climate change scenarios have not been properly established, and scenarios have been selected either at random or at the researcher's discretion. This research suggests a new scenario selection process in which a few principal scenarios retain most of the uncertainty of, and thus have the effect of having utilized, all the available scenarios. Cluster analysis and the selection of a representative scenario in each cluster efficiently reduce the number of climate change scenarios. For the cluster analysis, the K-means clustering method, which takes advantage of the statistical features of the scenarios, was employed; for the selection of a representative scenario in each cluster, selection methods were analyzed and reviewed, and the PDF method was used to select both the best scenarios (those with the closest simulation accuracy) and the principal scenarios suggested by this research. In selecting the best scenarios, it was shown that a GCM scenario with a high level of simulation accuracy in the past need not demonstrate a similarly high level of simulation accuracy in the future, and various GCM scenarios were selected as principal scenarios. Secondly, the maximum entropy, which can quantify the uncertainty of climate change scenarios, was used to quantify and compare the uncertainties associated with all the scenarios, the best scenarios, and the principal scenarios. The comparison showed that the principal scenarios maintain, and better explain, the uncertainties of all the scenarios than the best scenarios do. Therefore, through the scenario selection process, it was demonstrated that the principal scenarios have the effect of having utilized all the scenarios and of retaining the associated uncertainty to the maximum extent, while reducing the number of scenarios at the same time. Lastly, the climate change scenarios most suitable for the climate of the Korean peninsula are suggested. Through the scenario selection process, principal climate change scenarios from the IPCC Fourth Assessment Report that are suitable for the Korean peninsula and maintain most of the uncertainty are identified. The use of these scenarios for the future projection of water resources on the Korean peninsula is assessed to preserve more than a 70~80% level of the uncertainty of all scenarios.
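The reduction step, K-means clustering of scenarios followed by picking the member nearest each centroid as that cluster's principal scenario, can be sketched as follows. The feature matrix is synthetic; in the paper each row would be a GCM scenario summarized by its statistical features, and a maximum-entropy comparison would then check how much of the full ensemble's uncertainty the principal set retains.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Synthetic stand-in: 20 GCM scenarios x 4 summary statistics
# (e.g. mean/variance of projected temperature and precipitation change).
scenarios = rng.normal(size=(20, 4))

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(scenarios)

# Principal scenario of each cluster = the member closest to the centroid.
principal = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(scenarios[members] - km.cluster_centers_[c], axis=1)
    principal.append(int(members[np.argmin(d)]))
print("principal scenario indices:", sorted(principal))
```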

A Flow and Pressure Distribution of APR+ Reactor under the 4-Pump Running Conditions with a Balanced Flow Rate

  • Euh, D.J.;Kim, K.H.;Youn, Y.J.;Bae, J.H.;Chu, I.C.;Kim, J.T.;Kang, H.S.;Choi, H.S.;Lee, S.T.;Kwon, T.S.
    • Nuclear Engineering and Technology / v.44 no.7 / pp.735-744 / 2012
  • In order to quantify the flow distribution characteristics of the APR+ reactor, a test was performed at ACOP (APR+ Core Flow & Pressure Test Facility), a facility with a 1/5 length scale relative to the prototype plant. The major parameters are the core inlet flow and outlet pressure distributions and the sectional pressure drops along the major flow path inside the reactor vessel. To preserve the flow characteristics of the prototype plant, the test facility was designed to preserve the geometry of the major flow path. The Euler number is considered the primary dimensionless parameter and is conserved at a Reynolds number scaling ratio of 1/40.9. ACOP simplifies each fuel assembly into a hydraulic simulator having the same axial flow resistance and lateral cross-flow characteristics. In order to supply boundary conditions for estimating thermal margins of the reactor, the distributions of inlet core flow and core exit pressure were measured in each of 257 fuel assembly simulators. In total, 584 points of static and differential pressure were measured with a limited number of differential pressure transmitters by developing a sequential valve operation system. In the current study, reactor flow characteristics under balanced four-cold-leg flow conditions were quantified, which is part of the test matrix composing the APR+ flow distribution test program. The final identification of the reactor flow distribution was obtained by ensemble averaging 15 independent test data sets. The details of the design of the test facility, the experiment, and the data analysis are included in the current paper.
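Euler-number preservation means a pressure drop measured in the scaled model maps to the prototype through $Eu = {\Delta}p/({\rho}u^2)$. A back-of-envelope sketch with made-up densities and velocities, not ACOP test conditions.

```python
def euler_number(dp_pa, rho, u):
    """Euler number Eu = dp / (rho * u^2): the ratio of pressure forces
    to inertial forces that the scaled test preserves."""
    return dp_pa / (rho * u * u)

# Illustrative (not ACOP) numbers: a model-side measurement.
dp_model, rho_model, u_model = 12000.0, 998.0, 3.0   # Pa, kg/m^3, m/s
eu = euler_number(dp_model, rho_model, u_model)

# The same Eu at prototype density and velocity gives the prototype drop.
rho_proto, u_proto = 740.0, 5.0                      # kg/m^3, m/s
dp_proto = eu * rho_proto * u_proto**2
print(f"Eu = {eu:.3f}, prototype dp ~ {dp_proto / 1000:.1f} kPa")
```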

Severity-Adjusted Mortality Rates of Coronary Artery Bypass Graft Surgery Using MedisGroups

  • Kwon, Young-Dae
    • Quality Improvement in Health Care / v.7 no.2 / pp.218-228 / 2000
  • Background : Among 'structure', 'process', and 'outcome' approaches, outcome evaluation is considered the most direct and best approach to assess the quality of health care providers. Risk adjustment is an essential method to compare outcomes across providers. This study aims to judge the performance of hospitals by the severity-adjusted mortality rate of coronary artery bypass graft (CABG) surgery. Methods : Medical records of 584 patients who underwent CABG surgery in 6 general hospitals during 1996 and 1997 were reviewed by trained nurses. The MedisGroups system was used to quantify the severity of patients. The predictive probability of death was calculated for each patient in the sample from a multivariate logistic regression model including the severity score, age, and sex. For the evaluation of hospital performance, we calculated the ratio of the observed to the expected number of deaths and the z score, $z = (O - E)/\sqrt{Var}$, where $O$ and $E$ are the observed and expected numbers of deaths and $Var$ is the variance in the number of deaths, and compared the observed mortality rate with the confidence interval of the adjusted mortality rate for each hospital. Results : The overall in-hospital mortality was 7.0%, ranging from 2.7% to 15.7% by hospital. After severity adjustment, the mortality by hospital ranged from 2.7% to 10.7%. One hospital with poor performance was distinctly separated from the others with good performance. Conclusion : The severity-adjusted mortality rate of CABG surgery might be applied as an indicator for hospital performance evaluation in Korea, but more pilot studies and methodological improvements are needed before it can be used as a quality indicator.
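The z statistic follows directly from the per-patient probabilities of the logistic model: the expected number of deaths is the sum of the predicted probabilities, and the variance of the death count is the sum of $p_i(1-p_i)$. A minimal sketch with made-up probabilities and outcomes, not the study's fitted model.

```python
import numpy as np

# Hypothetical per-patient death probabilities from a severity-adjusted
# logistic model, and the hospital's observed outcomes (1 = died).
p = np.array([0.02, 0.05, 0.10, 0.03, 0.20, 0.08, 0.15, 0.04])
died = np.array([0, 0, 1, 0, 1, 0, 0, 0])

O = died.sum()             # observed deaths
E = p.sum()                # expected deaths under the model
var = (p * (1 - p)).sum()  # variance of the death count

print("O/E ratio:", O / E)
print("z score  :", (O - E) / np.sqrt(var))
```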


Quantitative Detection of Salmonella typhimurium Contamination in Milk, Using Real-Time PCR

  • Jung, Sung Je;Kim, Hyun-Joong;Kim, Hae-Yeong
    • Journal of Microbiology and Biotechnology / v.15 no.6 / pp.1353-1358 / 2005
  • A rapid and quantitative real-time PCR assay was developed to target the invasion A (invA) gene of Salmonella spp. We developed quantitative standard curves based on plasmids containing the invA gene. Based on these curves, we detected Salmonella spp. in artificially contaminated buffered peptone water (BPW) and milk samples. We were able to determine the invA gene copy number per ml of food sample, with a minimum detection limit of $4.1{\times}10^{3}$ copies/ml of BPW and $3.3{\times}10^{3}$ copies/ml of milk. When applied directly to detect and quantify Salmonella spp. in BPW and milk, the present real-time PCR assay was as sensitive as the plate count method; however, copy numbers were one to two logs higher than the colony-forming units obtained by the plate count method. In the present work, the real-time PCR assay was shown to significantly reduce the total time necessary for the detection of Salmonella spp. in foods and to provide an important model for other foodborne pathogens.
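Quantification against a plasmid standard curve is a linear fit of threshold cycle (Ct) on $log_{10}$ copy number, inverted for unknowns. The dilution series and Ct values below are illustrative, not the paper's calibration data.

```python
import numpy as np

# Illustrative standard curve: serial plasmid dilutions (copies/reaction)
# and their measured threshold cycles.
copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
ct = np.array([30.1, 26.8, 23.4, 20.0, 16.7])

# Ct is linear in log10(copies): Ct = slope * log10(N) + intercept.
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1 / slope) - 1  # amplification efficiency from slope

def copies_from_ct(ct_sample):
    """Invert the standard curve to estimate copy number of an unknown."""
    return 10 ** ((ct_sample - intercept) / slope)

print(f"slope {slope:.2f}, efficiency {efficiency:.1%}")
print(f"sample at Ct 24.9 ~ {copies_from_ct(24.9):.2e} copies")
```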

Estimation of Link-Based Traffic-Related Air Pollutant Emissions and the Exposure Intensity on Pedestrian Near Busy Streets

  • Lee, Sangeun;Shin, Myunghwan;Lee, Seokjoo;Hong, Dahee;Jang, Dongik;Keel, Jihoon;Jung, Taekho;Lee, Taewoo;Hong, Youdeog
    • Journal of ILASS-Korea / v.23 no.2 / pp.81-89 / 2018
  • The objective of this study is to estimate the level of exposure to traffic-related air pollutants (TRAPs) for pedestrians in the Seoul area. The road network's link-based pollutant emissions were calculated by using a set of mobile-source emission factors and associated activity information. The population information, namely the number of pedestrians, was analyzed in conjunction with the link-based traffic emissions in order to quantify the exposure level at 23 selected spots. We propose the Exposure Intensity, defined from the amount of traffic emission and the population, to quantify the probability of pedestrian exposure. Link-based traffic NOx and PM emissions vary by up to a factor of four depending on the location of each spot. The hot spots are estimated to have an Exposure Intensity around 1.8 times higher than the average of the 23 selected spots. The Exposure Intensity of each spot allows us to develop localized policies for air quality and health. Even in the same area, the Exposure Intensity varies strongly over time, which offers guidance for establishing site-specific countermeasures.
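The abstract defines Exposure Intensity only loosely, from the amount of traffic emission and the population, so the sketch below assumes the simplest product form, expressed relative to the across-spot average, purely for illustration; the per-spot numbers are invented.

```python
import numpy as np

# Hypothetical per-spot data: link-based emissions (g/h) and pedestrians/h.
emission = np.array([420.0, 980.0, 310.0, 1250.0, 640.0])
pedestrian = np.array([1500, 5200, 900, 4800, 2100])

# Assumed form: Exposure Intensity = emission x pedestrian count,
# expressed relative to the average over all spots (dimensionless).
ei = emission * pedestrian
ei_relative = ei / ei.mean()

for i, r in enumerate(ei_relative):
    tag = "  <- hot spot" if r > 1.5 else ""
    print(f"spot {i}: EI = {r:.2f} x average{tag}")
```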

HI superprofiles of galaxies from THINGS and LITTLE THINGS

  • Kim, Minsu;Oh, Se-Heon
    • The Bulletin of The Korean Astronomical Society / v.46 no.2 / pp.68.3-69 / 2021
  • We present a novel profile stacking technique based on optimal profile decomposition of a 3D spectral line data cube, and a performance test using the HI data cubes of sample galaxies from the HI galaxy surveys THINGS and LITTLE THINGS. Compared to the previous approach, which aligns all the spectra of a cube using central velocities derived from moment analysis, single Gaussian fitting, or Hermite $h_3$ polynomial fitting, the new method decomposes each profile into an optimal number of single Gaussian components. The so-called superprofile, derived by co-adding all the aligned profiles after the other Gaussian models are subtracted, is found to have weaker wings compared to ones constructed in the typical manner. This could be due to the reduced number of asymmetric profiles in the new method. A practical test on the HI data cubes of the THINGS and LITTLE THINGS galaxies shows that our new method can extract more mass of kinematically cold HI components in the galaxies than the previous results. Additionally, we fit a double Gaussian model to the superprofiles, whose S/N is boosted, and not only quantify their profile shapes but also derive the ratios of the Gaussian model parameters, such as the intensity ratio and velocity dispersion ratio of the narrower and broader Gaussian components. We discuss how the superprofile properties of the sample galaxies are correlated with their other physical properties, including star formation rate, stellar mass, metallicity, and gas mass.
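The stacking itself (align each line of sight on the centre of its optimal component, subtract the other Gaussian models, co-add, then fit a double Gaussian to the S/N-boosted superprofile) can be sketched with scipy on synthetic spectra. This is a toy construction, not the authors' decomposition pipeline; a real application would first run the optimal profile decomposition on the cube.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

rng = np.random.default_rng(3)
v = np.linspace(-100.0, 100.0, 401)  # common velocity axis (km/s)

stack = np.zeros_like(v)
n_los = 200                          # synthetic lines of sight
for _ in range(n_los):
    v0 = rng.normal(0.0, 30.0)       # bulk velocity of this profile
    narrow = gauss(v, 1.0, v0, rng.uniform(4.0, 8.0))    # cold HI
    broad = gauss(v, 0.4, v0, rng.uniform(15.0, 25.0))   # warm HI
    # Some sightlines carry an extra, kinematically distinct component.
    extra = gauss(v, 0.5, v0 + 40.0, 10.0) if rng.random() < 0.3 else 0.0
    spectrum = narrow + broad + extra + rng.normal(0.0, 0.05, v.size)
    # Align on the optimal component's centre and subtract the other
    # Gaussian model before co-adding, as described above.
    stack += np.interp(v, v - v0, spectrum - extra)
stack /= n_los

def double_gauss(v, a1, s1, a2, s2):  # both centred at 0 after alignment
    return gauss(v, a1, 0.0, s1) + gauss(v, a2, 0.0, s2)

popt, _ = curve_fit(double_gauss, v, stack, p0=[1.0, 6.0, 0.4, 20.0])
(a_n, s_n), (a_b, s_b) = sorted(
    [(popt[0], popt[1]), (popt[2], popt[3])], key=lambda c: abs(c[1]))
print(f"dispersion ratio (narrow/broad): {abs(s_n) / abs(s_b):.2f}")
print(f"intensity ratio  (narrow/broad): {a_n / a_b:.2f}")
```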
