• Title/Summary/Keyword: Quantitative parameter

Complementary measures for Environmental Performance Evaluation Index of External Space of Green Standard for Energy and Environmental Design for Apartment Complex - Focused on the Respect of Response to Climate Change - (공동주택 녹색건축인증기준의 외부공간 환경성능 평가지표 보완방안 - 기후변화 대응 측면을 중심으로 -)

  • Ye, Tae-Gon;Kim, Kwang-Hyun;Kwon, Young-Sang
    • Journal of the Architectural Institute of Korea Planning & Design / v.34 no.1 / pp.3-14 / 2018
  • An apartment complex is a building type with great potential to contribute to solving problems related to the urban ecological environment and climate change. The first goal of this study is to grasp the current application status and limitations of the ecological area rate, the representative index used to evaluate the environmental performance of the external space of an apartment complex in the Green Standard for Energy and Environmental Design (G-SEED). The second goal is to propose a prototype evaluation index for greenhouse gas (GHG) reduction performance in order to supplement the evaluation of the external space's environmental performance in terms of response to climate change. We analyzed 43 apartment complexes certified under G-SEED, which has been in force since July 1, 2010, and identified the application characteristics of each space type and the limitations of the ecological area rate. We then analyzed overseas green building certification systems such as LEED and BREEAM, derived implications for supplementing the limitations of the ecological area rate, which focuses on evaluating soil and water circulation functions, and set a development direction for complementary measures. Through analysis of previous studies, relevant regulations and standards, and manufacturers' technical documents, the heat island mitigation performance of the pavement and roof surfaces of the apartment complex and the carbon uptake performance of its trees were selected as parameters for calculating the GHG reduction performance of the external space. Finally, a quantitative evaluation method for each parameter and a prototype evaluation index for GHG reduction performance were proposed. Applying the prototype to an apartment complex case demonstrated its feasibility and applicability as a G-SEED evaluation index.
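For illustration only, the sketch below shows one way such GHG-reduction parameters could be combined numerically: per-tree carbon uptake factors are summed, and an albedo-based heat-island credit is computed for paved and roof surfaces. The factor values, threshold, and function names are invented assumptions, not the evaluation method actually proposed in the paper.

```python
# Illustrative sketch only: all coefficients and thresholds are placeholders,
# not the values or weights proposed in the cited study.

def tree_carbon_uptake_kg(tree_counts, uptake_per_tree_kg):
    """Annual carbon uptake (kg CO2/yr) summed over tree species."""
    return sum(tree_counts[s] * uptake_per_tree_kg[s] for s in tree_counts)

def heat_island_credit(surface_areas_m2, albedo, threshold=0.3):
    """Fraction of paved/roof area whose albedo meets an assumed reflectance threshold."""
    total = sum(surface_areas_m2.values())
    cool = sum(a for s, a in surface_areas_m2.items() if albedo[s] >= threshold)
    return cool / total if total else 0.0

# Hypothetical complex data
trees = {"zelkova": 120, "pine": 80}
uptake = {"zelkova": 35.0, "pine": 20.0}           # kg CO2 per tree per year (assumed)
surfaces = {"roof": 4000.0, "pavement": 6000.0}    # m^2
albedo = {"roof": 0.65, "pavement": 0.25}

print(tree_carbon_uptake_kg(trees, uptake))            # 5800.0
print(round(heat_island_credit(surfaces, albedo), 2))  # 0.4
```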

Characteristics of KOMPSAT-3A Key Image Quality Parameters During Normal Operation Phase (정상운영기간동안의 KOMPSAT-3A호 주요 영상 품질 인자별 특성)

  • Seo, DooChun;Kim, Hyun-Ho;Jung, JaeHun;Lee, DongHan
    • Korean Journal of Remote Sensing / v.36 no.6_2 / pp.1493-1507 / 2020
  • LEOP Cal/Val (Launch and Early Operation Phase Calibration/Validation) was carried out for six months after KOMPSAT-3A (Korea Multi-Purpose Satellite-3A) was launched in March 2015. After LEOP Cal/Val was completed successfully, high-resolution KOMPSAT-3A imagery has been distributed to users for the past 8 years. The sub-meter high-resolution satellite image data obtained from KOMPSAT-3A are used as basic data for qualitative and quantitative information extraction in various fields such as mapping, GIS (Geographic Information System), and national land management. KARI (Korea Aerospace Research Institute) periodically checks and manages the quality of KOMPSAT-3A products and the characteristics of the satellite hardware to ensure the accuracy and reliability of the information extracted from KOMPSAT-3A data. To minimize deterioration of image quality due to aging of the satellite hardware, payload, and attitude sensors, image quality has been improved continuously. In this paper, the Cal/Val workflow defined in the KOMPSAT-3A development phase is illustrated for the periods before and after launch. MTF, SNR, and location accuracy are the key parameters for estimating image quality, and the measurement method for each parameter is also described. On the basis of these quality parameters, performance was evaluated and measured for the period after LEOP Cal/Val. The status and characteristics of the MTF, SNR, and location accuracy of KOMPSAT-3A from 2016 to May 2020 are also described.
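As a rough illustration of one of these quality parameters, the sketch below estimates SNR over a radiometrically uniform image patch as the mean signal divided by its standard deviation. This is a generic textbook definition used for illustration; it is not necessarily the exact measurement procedure KARI applies to KOMPSAT-3A.

```python
import numpy as np

def patch_snr(image, row_slice, col_slice):
    """SNR of a uniform patch: mean digital number divided by its standard deviation."""
    patch = np.asarray(image, dtype=float)[row_slice, col_slice]
    return patch.mean() / patch.std(ddof=1)

# Synthetic example: uniform scene (DN ~ 1000) with additive Gaussian noise (sigma = 10)
rng = np.random.default_rng(0)
scene = 1000.0 + rng.normal(0.0, 10.0, size=(512, 512))
print(round(patch_snr(scene, slice(100, 200), slice(100, 200)), 1))  # roughly 100
```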

Realtime Streamflow Prediction using Quantitative Precipitation Model Output (정량강수모의를 이용한 실시간 유출예측)

  • Kang, Boosik;Moon, Sujin
    • KSCE Journal of Civil and Environmental Engineering Research / v.30 no.6B / pp.579-587 / 2010
  • Mid-range streamflow forecasting was performed using NWP (Numerical Weather Prediction) output provided by KMA. The NWP consists of RDAPS for 48-hour forecasts and GDAPS for 240-hour forecasts. To enhance the accuracy of the NWP, QPM was applied to downscale the original output, and quantile mapping was applied to adjust its systematic biases. The applicability of the suggested streamflow prediction system was verified in the Geum River basin. In the system, streamflow was simulated with the long-term continuous SSARR model, with the rainfall prediction input transformed into the format required by SSARR. The 2-day RQPM rainfall predictions for the period January 1 to June 20, 2006 showed reasonable predictability: total RQPM precipitation amounted to 89.7% of the observed precipitation. The streamflow forecast driven by the 2-day RQPM followed the observed hydrograph pattern with high accuracy, even though missed forecasts and false alarms occurred for some rainfall events. However, predictability decreased at the downstream station (e.g., Gyuam) because of difficulties in calibrating the rainfall-runoff model parameters for controlled streamflow and the reduced reliability of the rating curve at a gauging station with a large cross-sectional area. The 10-day precipitation prediction using GQPM significantly underestimated both peak and total amounts, which clearly affected the streamflow prediction. Improving the GDAPS forecast through post-processing appears to have limitations, and efforts to stabilize or reform the original NWP are needed.
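As a rough sketch of the quantile-mapping idea used here for bias adjustment, the snippet below maps each forecast value through the empirical CDF of the training forecasts and back through the inverse empirical CDF of the observations. The construction of the transfer function in the paper may differ; the data here are synthetic.

```python
import numpy as np

def quantile_map(forecast, train_forecast, train_observed):
    """Empirical quantile mapping: F_obs^{-1}(F_fc(x)) built from a training period."""
    fc_sorted = np.sort(train_forecast)
    obs_sorted = np.sort(train_observed)
    # Non-exceedance probability of each forecast value within the forecast climatology
    probs = np.searchsorted(fc_sorted, forecast, side="right") / len(fc_sorted)
    # Map those probabilities onto the observed climatology
    return np.quantile(obs_sorted, np.clip(probs, 0.0, 1.0))

# Synthetic example: forecasts systematically about 20% too dry
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, size=1000)
fc = 0.8 * obs + rng.normal(0.0, 1.0, size=1000)
corrected = quantile_map(fc, fc, obs)
print(round(fc.mean(), 1), round(corrected.mean(), 1), round(obs.mean(), 1))
```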

Quantitative Analysis of X-Ray Fluorescence for Understanding the Effect of Elevated Temperatures on Cement Pastes (XRF (X-ray fluorescence)를 활용한 고온환경에 노출된 시멘트 페이스트 분석의 이해)

  • Kil-Song Jeon;Young-Sun Heo
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.27 no.6 / pp.130-137 / 2023
  • Using XRF (X-ray fluorescence), this study investigates the variation of chemical properties in cement pastes exposed to elevated temperatures. High-temperature conditions were prepared in an electric furnace, with a total of 11 target temperatures ranging from room temperature to 1000 ℃. The Geo-Quant Basic standard library was applied for the analysis of 12 elements in the cement paste: Ca, Si, Al, Fe, S, Mg, Ti, Sr, P, Mn, Zn, and K. The results revealed that, as the temperature increased, the proportion of each element in the cement paste also increased. With the exception of a few elements present in extremely low amounts, the variation in the composition ratio of most elements showed a strong correlation with temperature, with an R-squared value exceeding 0.98. Cement pastes exposed to ambient and high-temperature environments were then compared, and the authors established that the differences observed in this comparison can be explained from the same perspective as a comparison between raw cement and cement paste. Furthermore, the study discusses the parameter that is potentially most dominant when investigating the properties of cement paste with XRF.
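As a small illustration of the kind of relationship reported (element composition ratio versus exposure temperature with R² above 0.98), the sketch below fits a least-squares line and computes R². The data values are invented placeholders, not measurements from the paper.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a least-squares linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)

# Invented example: CaO mass fraction (%) at the 11 target temperatures (deg C)
temperature = np.array([20, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000], float)
cao_percent = np.array([44.1, 44.9, 45.8, 46.9, 47.7, 48.8, 49.9, 50.7, 51.9, 52.8, 53.9])
print(round(r_squared(temperature, cao_percent), 3))  # close to 1.0
```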

The Selection of Appropriate Sampler for the Assessment of Macrobenthos Community in Saemangeum, the West Coast of Korea (새만금 외해역에서 대형 저서동물 군집 조사를 위한 적정 채집기의 선택)

  • 유재원;김창수;박미라;이형곤;이재학;홍재상
    • The Sea: Journal of the Korean Society of Oceanography / v.8 no.3 / pp.285-294 / 2003
  • To select an appropriate sampler for environmental monitoring surveys in the coastal waters of Saemangeum, Jeollabuk-do, macrobenthic sampling was conducted in April 2002. The samplers employed were a dredge (Charcot type) as a semi-quantitative sampler, and a Smith-McIntyre grab (SM) and a van Veen grab (VV) as quantitative samplers. One haul was made with the dredge and three replicates (0.1 m² × 3) were taken with SM and VV at each of 11 stations. Comparisons of sediment volume in the sampler bucket and of the precision of biological parameters (i.e., density, biomass, species number, and diversity index H') were made between SM and VV. Sediment volume differed significantly (SM > VV) at a p-value of 0.0050 (paired t-test), and on average, three replicate samples of SM and VV satisfied a precision level of 0.2 when a fourth-root transformation was applied. Patterns of observed and expected species numbers and H' were compared; dredge-VV samples showed higher affinity than any other pair. Several dominant species in the area were underestimated in dredge samples (e.g., the polychaetes Heteromastus filiformis and Aricidea assimilis). The agreement pattern of multi-species responses was quantified by estimating correlations between similarity matrices. The correlation between dredge and VV was slightly higher, but near-perfect matches were found in general. Different ranks and composition among the principal species lists were presumably linked to the effect of penetration depth, which differs among samplers. The lower abundance of some species in VV samples (ca. 50% of that in SM samples) was explained in this context. It seems appropriate to regard this effect as a probable cause of the relatively higher dredge-VV correlations. Overall, biological features indicated that SM would be the better choice in situations requiring high data quality. The other samplers work well, however, for observing and defining faunal characteristics, and their capability cannot be questioned if first-order quality is not expected.
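For illustration, the sketch below computes the Shannon diversity index H' (one of the biological parameters compared) from species counts and checks the precision of replicate samples after a fourth-root transformation against the 0.2 level mentioned above. The counts and densities are invented, and the precision definition (standard error over mean) is an assumption about the measure used.

```python
import numpy as np

def shannon_h(counts):
    """Shannon diversity index H' from per-species abundance counts (natural log)."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def replicate_precision(values):
    """Precision (standard error / mean) of replicates after a fourth-root transform."""
    t = np.asarray(values, dtype=float) ** 0.25
    return float(t.std(ddof=1) / np.sqrt(len(t)) / t.mean())

# Invented data: abundances of five species, and three replicate densities (ind./0.1 m^2)
print(round(shannon_h([30, 12, 7, 4, 1]), 3))
print(round(replicate_precision([118, 95, 132]), 3))  # <= 0.2 meets the stated level
```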

Study on the Methodology of the Microbial Risk Assessment in Food (식품중 미생물 위해성평가 방법론 연구)

  • 이효민;최시내;윤은경;한지연;김창민;김길생
    • Journal of Food Hygiene and Safety / v.14 no.4 / pp.319-326 / 1999
  • Recently, concern about health risks posed by foodborne microorganisms such as Escherichia coli O157:H7 and Listeria monocytogenes has been rising continuously. Various organizations and regulatory agencies, including the U.S. FDA, USDA, and FAO/WHO, are building methodologies to apply microbial quantitative risk assessment to risk-based food safety programs. Microbial risks are primarily the result of a single exposure, and their health impacts are immediate and serious; therefore, the methodology of risk assessment differs from that of chemical risk assessment. Microbial quantitative risk assessment consists of four steps: hazard identification, exposure assessment, dose-response assessment, and risk characterization. Hazard identification is accomplished by observing and defining the types of adverse health effects in humans associated with exposure to foodborne agents. Epidemiological evidence linking a disease with a particular exposure route is an important component of this identification. Exposure assessment includes quantification of microbial exposure with regard to the dynamics of microbial growth during food processing, transport, and packaging, and the specific time-temperature conditions at various points from animal production to consumption. Dose-response assessment is the process of characterizing the relationship between microbial exposure and disease incidence. Unlike chemical carcinogens, dose-response assessment for microbial pathogens has not focused on animal models for extrapolation to humans. Risk characterization links the exposure and dose-response assessments and involves uncertainty analysis. The methodology of microbial dose-response assessment is classified into nonthreshold and threshold approaches. Nonthreshold models assume that a single organism is capable of producing an infection if it arrives at an appropriate site and that organisms act independently; the Exponential, Beta-Poisson, Gompertz, and Gamma-Weibull models are currently used as nonthreshold models. The Log-normal and Log-logistic models are used as threshold models; the threshold approach assumes that a toxic effect is produced by the interaction of organisms. This study reviewed the detailed assessment process, including calculation of risk values from model parameters and microbial exposure doses. It also suggested a model application methodology for exposure assessment using assumed food microbial data (NaCl, water activity, temperature, pH, etc.) and the commercially available Food MicroModel. We note that data from healthy human volunteers are preferred over epidemiological data for obtaining exact dose-response data, although foreign agencies are studying how to characterize the correlation between human and animal responses. To compare differences in population sensitivity, domestic studies are needed, such as establishing dose-response data for Korean volunteers for each microorganism and assessing microbial exposure in food.
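For reference, the two most common nonthreshold dose-response forms named above (Exponential and Beta-Poisson) can be written as short functions, as sketched below. The parameter values are illustrative only and are not taken from the paper.

```python
import math

def exponential_model(dose, r):
    """Single-hit exponential model: P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def beta_poisson_model(dose, alpha, n50):
    """Approximate Beta-Poisson model: P = 1 - (1 + dose/beta)^(-alpha),
    with beta derived from the median infectious dose N50."""
    beta = n50 / (2.0 ** (1.0 / alpha) - 1.0)
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Illustrative parameters (not from the cited study)
print(round(exponential_model(dose=100, r=0.005), 3))              # ~0.393
print(round(beta_poisson_model(dose=100, alpha=0.25, n50=500), 3)) # ~0.293
```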

Application of Molecular Biological Technique for Development of Stability Indicator in Uncontrolled Landfill (불량매립지 안정화 지표 개발을 위한 분자생물학적 기술의 적용)

  • Park, Hyun-A;Han, Ji-Sun;Kim, Chang-Gyun;Lee, Jin-Young
    • Journal of Korean Society of Environmental Engineers / v.28 no.2 / pp.128-136 / 2006
  • This study was conducted to develop stability parameters for uncontrolled landfills through a biomolecular investigation of the microbial community growing along the leachate plume. Landfill J (in Cheonan) and landfill T (in Wonju) were chosen for this study from a total of 244 closed uncontrolled landfills. The genetic diversity of the microbial community in the leachate was addressed by 16S rDNA gene cloning using PCR, and quantitative analyses of denitrifiers and methanotrophs were compared with conventional water quality parameters. From the BLAST search, 47.6% of the genes in landfill J and 32.5% in landfill T showed more than 97% similarity, with the phylum Proteobacteria observed most frequently. The numbers of denitrification genes (nirS and cnorB) at site J were 7 and 4 times higher, respectively, than those at site T, reflecting the difference in time since closure (7 and 13 years, respectively). In addition, quantitative analysis of the methane formation gene showed that spot J1, immediately bordering the source, had the greatest number of methane-forming bacteria, and the number decreased rapidly toward the outer boundary of the landfill. The comparison between the gene counts (nirS, cnorB, and MCR) and the conventional monitoring parameters (TOC, NH3-N, NO3-N, NO2-N, Cl⁻, and alkalinity) showed correlations of more than 99%, except for NO3-N. It was concluded that the biomolecular investigation was consistent with the conventional monitoring parameters for interpreting the influence and stability of the leachate plume formed downgradient of the uncontrolled sites.
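As a small illustration of the correlation analysis described (gene copy numbers versus conventional water quality parameters), the sketch below computes a Pearson correlation coefficient. The monitoring values are invented and are not data from the study.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Invented monitoring points: nirS gene copies per mL versus TOC (mg/L)
nirs_copies = [1.2e5, 9.5e4, 6.1e4, 3.3e4, 1.8e4, 9.0e3]
toc_mg_l = [85.0, 71.0, 48.0, 27.0, 16.0, 8.0]
print(round(pearson_r(nirs_copies, toc_mg_l), 3))  # close to 1.0
```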

Comparison of Effectiveness about Image Quality and Scan Time According to Reconstruction Method in Bone SPECT (영상 재구성 방법에 따른 Bone SPECT 영상의 질과 검사시간에 대한 실효성 비교)

  • Kim, Woo-Hyun;Jung, Woo-Young;Lee, Ju-Young;Ryu, Jae-Kwang
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.9-14 / 2009
  • Purpose: In nuclear medicine, many studies and efforts are being made to reduce scan time, as well as the waiting time between injection of the radiopharmaceutical and the examination. Several approaches are used clinically, such as developing new radiopharmaceuticals that are absorbed into target organs more quickly and reducing acquisition time by increasing the number of gamma camera detectors. Each equipment manufacturer has also improved image processing techniques to reduce scan time. In this paper, we analyzed the differences in image quality among the commercially available and clinically applied FBP and 3D OSEM reconstruction methods and the Astonish method (a fast iterative reconstruction method from Philips), as well as the effect of scan time on image quality. Materials and Methods: We investigated 32 patients who underwent bone SPECT from June to July 2008 at the Department of Nuclear Medicine, ASAN Medical Center, Seoul. Images at 40 sec/frame and 20 sec/frame were acquired using a Philips PRECEDENCE 16 gamma camera and then reconstructed with the Astonish, 3D OSEM, and FBP methods. For qualitative analysis, a blinded review of all reconstructed images was performed by the interpreting physicians. For quantitative analysis, we computed the target-to-non-target ratio by drawing ROIs centered on the lesions; each image was analyzed with the same ROI location and size. Results: In the qualitative analysis, there was no significant difference in image quality with the change in acquisition time. In the quantitative analysis, images reconstructed with the Astonish method showed good quality, with better sharpness and clearer separation between lesions and surrounding tissue. The mean ± standard deviation of the target-to-non-target ratio for the 40 sec/frame and 20 sec/frame images was: Astonish (40 sec: 13.91 ± 5.62; 20 sec: 13.88 ± 5.92), 3D OSEM (40 sec: 10.60 ± 3.55; 20 sec: 10.55 ± 3.64), and FBP (40 sec: 8.30 ± 4.44; 20 sec: 8.19 ± 4.20). Comparing the 20 sec and 40 sec images, there was no statistically significant difference in image quality with acquisition time for Astonish (t=0.16, p=0.872), 3D OSEM (t=0.51, p=0.610), or FBP (t=0.73, p=0.469), although with FBP some individual images did differ between 40 sec/frame and 20 sec/frame acquisitions owing to various factors. Conclusions: In the search for ways to reduce nuclear medicine scan time, hardware development has slowed while software has advanced rapidly. Advances in computer hardware have reduced image reconstruction time, and the expanded computing capacity enables iterative methods that previously could not be performed because of technical limits. As image processing techniques have developed, scan time can be reduced while image quality remains at a similar level. Maintaining examination quality while reducing scan time can reduce patient discomfort and perceived waiting time, improve the accessibility of nuclear medicine examinations, and provide better service to patients and to the clinicians who order the examinations, ultimately improving the standing of the nuclear medicine department.
Concurrent Imaging: a new function that allows each acquisition parameter to be set up so that images with various parameters can be acquired simultaneously in a single examination.
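As a rough sketch of the quantitative comparison described (target-to-non-target ratios compared across acquisition times with a t-test), the snippet below computes the ratio from ROI mean counts and tests the paired difference. All numbers are invented, and scipy is assumed to be available.

```python
import numpy as np
from scipy import stats

def target_to_nontarget(target_roi_counts, background_roi_counts):
    """Target-to-non-target ratio: mean counts in the lesion ROI over the background ROI."""
    return float(np.mean(target_roi_counts) / np.mean(background_roi_counts))

# Example ROI means from one patient (invented counts)
print(round(target_to_nontarget([520, 540, 515], [41, 38, 44]), 2))  # ~12.8

# Invented per-patient ratios for 40 sec/frame versus 20 sec/frame acquisitions
ratio_40s = np.array([13.2, 15.1, 9.8, 12.5, 16.0, 11.4, 14.3, 10.9])
ratio_20s = np.array([13.0, 15.3, 9.6, 12.7, 15.8, 11.2, 14.5, 10.7])
t, p = stats.ttest_rel(ratio_40s, ratio_20s)  # paired comparison across acquisition times
print(round(t, 2), round(p, 3))               # a large p suggests no significant difference
```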

Usefulness of F-18 FDG PET/CT in Adrenal Incidentaloma: Differential Diagnosis of Adrenal Metastasis in Oncologic Patients (부신 우연종에서 F-18 FDG PET/CT의 유용성: 악성 종양 환자에서 부신 전이의 감별진단)

  • Lee, Hong-Je;Song, Bong-Il;Kang, Sung-Min;Jeong, Shin-Young;Seo, Ji-Hyoung;Lee, Sang-Woo;Yoo, Jeong-Soo;Ahn, Byeong-Cheol;Lee, Jae-Tae
    • Nuclear Medicine and Molecular Imaging / v.43 no.5 / pp.421-428 / 2009
  • Purpose: We evaluated the characteristics of adrenal masses incidentally observed on nonenhanced F-18 FDG PET/CT in oncologic patients and the diagnostic ability of F-18 FDG PET/CT to differentiate malignant from benign adrenal masses. Materials and Methods: Between March 2005 and August 2008, 75 oncologic patients (46 men, 29 women; mean age, 60.8 ± 10.2 years; range, 35-87 years) with 89 adrenal masses incidentally found on PET/CT were enrolled in this study. For quantitative analysis, the size (cm), Hounsfield unit (HU), maximum standardized uptake value (SUVmax), and SUVratio of all 89 adrenal masses were measured. SUVratio was defined as the SUVmax of the adrenal mass divided by SUVliver, the SUVmax of liver segment 8. The final diagnosis of the adrenal masses was based on pathologic confirmation, radiologic evaluation (HU < 0: benign), and clinical decision. Results: Size, HU, SUVmax, and SUVratio all differed significantly between benign and malignant adrenal masses (p < 0.05), and SUVratio was the most accurate parameter. A cut-off value of 1.0 for SUVratio provided 90.9% sensitivity and 75.6% specificity. In small adrenal masses (1.5 cm or less), only SUVratio showed a statistically significant difference between benign and malignant masses; a cut-off value of 1.0 for SUVratio provided 80.0% sensitivity and 86.4% specificity. Conclusion: Compared with nonenhanced CT, F-18 FDG PET/CT with quantitative analysis can offer more accurate information for differentiating malignant from benign adrenal masses incidentally observed in oncologic patients.
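As a small illustration of the quantitative criterion described (SUVratio = lesion SUVmax / liver SUVmax, dichotomized at a cut-off of 1.0), the sketch below computes sensitivity and specificity for a set of labeled cases. The cases are invented examples, not the study data.

```python
def suv_ratio(lesion_suvmax, liver_suvmax):
    """SUVratio: SUVmax of the adrenal mass divided by SUVmax of the liver reference."""
    return lesion_suvmax / liver_suvmax

def sensitivity_specificity(ratios, is_malignant, cutoff=1.0):
    """Sensitivity and specificity when ratio > cutoff is called malignant."""
    tp = sum(r > cutoff and m for r, m in zip(ratios, is_malignant))
    fn = sum(r <= cutoff and m for r, m in zip(ratios, is_malignant))
    tn = sum(r <= cutoff and not m for r, m in zip(ratios, is_malignant))
    fp = sum(r > cutoff and not m for r, m in zip(ratios, is_malignant))
    return tp / (tp + fn), tn / (tn + fp)

# Invented cases: (lesion SUVmax, liver SUVmax, malignant?)
cases = [(4.2, 2.5, True), (3.8, 2.4, True), (2.0, 2.6, True),
         (1.9, 2.4, False), (2.2, 2.5, False), (3.0, 2.3, False)]
ratios = [suv_ratio(lesion, liver) for lesion, liver, _ in cases]
labels = [malignant for _, _, malignant in cases]
print(sensitivity_specificity(ratios, labels))  # (sensitivity, specificity)
```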

Development of Market Growth Pattern Map Based on Growth Model and Self-organizing Map Algorithm: Focusing on ICT products (자기조직화 지도를 활용한 성장모형 기반의 시장 성장패턴 지도 구축: ICT제품을 중심으로)

  • Park, Do-Hyung;Chung, Jaekwon;Chung, Yeo Jin;Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.1-23 / 2014
  • Market forecasting aims to estimate the sales volume of a product or service sold to consumers over a specific selling period. From the perspective of the enterprise, accurate market forecasting assists in determining the timing of new product introduction and product design, and in establishing production plans and marketing strategies, enabling a more efficient decision-making process. Accurate market forecasting also enables governments to organize national budgets efficiently. This study aims to generate market growth curves for ICT (information and communication technology) goods using past time series data; categorize products showing similar growth patterns; understand the markets in the industry; and forecast the future outlook of such products. The study suggests a useful and meaningful process (or methodology) for identifying market growth patterns with quantitative growth models and a data mining algorithm, as follows. At the first stage, past time series data are collected for the target products or services of the categorized industry. The data, such as the volume of sales and domestic consumption for a specific product or service, are collected from the relevant government ministry, the National Statistical Office, and other government organizations. For collected data that cannot be analyzed directly, owing to a lack of past data or changes in code names, pre-processing should be performed. At the second stage, an optimal model for market forecasting is selected. The model may vary depending on the characteristics of each industry. Because this study focuses on the ICT industry, in which new technologies appear frequently and change the market structure, the Logistic, Gompertz, and Bass models are selected. A hybrid model that combines different models can also be considered; the hybrid model considered in this study analyzes the size of the market potential through the Logistic and Gompertz models, and these figures are then used in the Bass model. The third stage is to evaluate which model explains the data most accurately. To do this, the parameters are estimated from the collected time series data, the models' predicted values are generated, and the root-mean-square error (RMSE) is calculated. The model with the lowest average RMSE for every product type is considered the best model. At the fourth stage, based on the parameter values estimated by the best model, a market growth pattern map is constructed with a self-organizing map algorithm. The self-organizing map is trained with the market pattern parameters of all products or services as input data, and the products or services are organized onto an N × N map. The number of clusters increases from 2 to M, depending on the characteristics of the nodes on the map. The clusters are divided into zones, and the clusters that provide the most meaningful explanation are selected. Based on the final selection of clusters, the boundaries between the nodes are set and, ultimately, the market growth pattern map is completed. The last step is to determine the final characteristics of the clusters and the market growth curves. The average of the market growth pattern parameters in each cluster is taken as a representative figure. Using this figure, a growth curve is drawn for each cluster and its characteristics are analyzed. In addition, taking into consideration the product types in each cluster, their characteristics can be described qualitatively. We expect that the process and system suggested in this paper can be used as a tool for forecasting demand in the ICT and other industries.
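As a rough sketch of the model-selection step described above (fit candidate growth models to a sales series and compare RMSE), the snippet below fits Logistic and Gompertz curves with scipy and reports each model's RMSE. The series is synthetic, and the function forms are standard textbook parameterizations that may differ from those used in the paper; the Bass model and the hybrid scheme are omitted for brevity.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, m, k, t0):
    """Cumulative logistic growth: market potential m, growth rate k, inflection time t0."""
    return m / (1.0 + np.exp(-k * (t - t0)))

def gompertz(t, m, b, c):
    """Cumulative Gompertz growth: m * exp(-b * exp(-c * t))."""
    return m * np.exp(-b * np.exp(-c * t))

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

# Synthetic cumulative sales series over 10 periods (roughly S-shaped)
t = np.arange(1, 11, dtype=float)
sales = np.array([3, 8, 20, 45, 90, 150, 200, 235, 255, 265], dtype=float)

for name, f, p0 in [("logistic", logistic, (270.0, 1.0, 5.0)),
                    ("gompertz", gompertz, (270.0, 5.0, 0.5))]:
    params, _ = curve_fit(f, t, sales, p0=p0, maxfev=10000)
    print(name, round(rmse(sales, f(t, *params)), 2))  # lower RMSE -> preferred candidate
```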