• Title/Summary/Keyword: uncertainty of estimation

Toxicity Assessment and Establishment of Acceptable Daily Intake of Fungicide Isotianil (살균제 Isotianil의 독성평가와 일일섭취허용량 설정)

  • Jeong, Mi-Hye;Hong, Soon-Sung;Park, Kyung-Hun;Park, Jae-Eup;Hong, Moo-Ki;Lim, Moo-Hyeog;Kim, Young-Bum;Han, Bum-Sook;Han, Jeung-Sul
    • The Korean Journal of Pesticide Science / v.14 no.4 / pp.490-498 / 2010
  • Isotianil is a fungicide with preventive effects against rice blast disease. To register this new pesticide, a series of animal toxicity data were reviewed to evaluate its hazards to consumers and to determine its acceptable daily intake. Isotianil was almost entirely excreted via urine and feces. It has low acute oral toxicity, shows no dermal toxicity or ocular irritation, and its skin sensitization was evaluated as slight. Genotoxicity of the parent compound and its metabolite was negligible. Chronic toxicity tests on rats and dogs showed changes in hematology, clinical biochemistry, and liver weight. It had no reproductive or teratogenic effects. The estimation of the Acceptable Daily Intake (ADI) is based on the lowest no-observed-adverse-effect level (NOAEL). The lowest NOAEL, 2.83 mg/kg bw/day, was found in the twelve-month rat study, based on increased liver weight and treatment-related effects on clinical chemistry findings at the next higher dose level. It is therefore considered appropriate to apply an uncertainty factor of 100 to the NOAEL of 2.83 mg/kg bw/day from the rat study, resulting in an ADI of 0.028 mg/kg bw/day.
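
A quick check of the ADI arithmetic described above. This is a minimal sketch of the standard NOAEL-to-ADI conversion, not the authors' procedure; the 10 × 10 split of the uncertainty factor is the conventional interpretation and an assumption here.

```python
# ADI = NOAEL / uncertainty factor, using the values given in the abstract.
noael = 2.83                   # mg/kg bw/day, lowest NOAEL (12-month rat study)
uncertainty_factor = 10 * 10   # assumed: 10x interspecies x 10x intraspecies

adi = noael / uncertainty_factor
print(f"ADI = {adi:.3f} mg/kg bw/day")  # -> 0.028 mg/kg bw/day, as reported
```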

A Study on Development of Reliability Assessment of GHG-CAPSS (GHG-CAPSS 신뢰도 평가 방법 개발을 위한 연구)

  • Kim, Hye Rim;Kim, Seung Do;Hong, Yu Deok;Lee, Su Bin;Jung, Ju Young
    • Journal of Climate Change Research / v.2 no.3 / pp.203-219 / 2011
  • Greenhouse gas (GHG) inventories have recently been reported in various fields, but the accuracy and reliability of the inventory results are rarely addressed, and few reliable assessment methods have been introduced to judge them; it is therefore critical to develop an evaluation methodology. This project was designed 1) to develop an evaluation methodology for the reliability of inventory results produced by GHG-CAPSS, 2) to check the feasibility of the developed methodology by applying it to two emission sources, liquid fossil fuel and landfill, and 3) to construct a technical roadmap for the future role of GHG-CAPSS. Qualitative and quantitative assessment methodologies were developed to check the reliability and accuracy of the inventory results. The qualitative methodology was designed to evaluate the estimation methods for GHG emissions from emission and sink sources, the activity data, the emission factors, and the quality-management schemes of the inventory results. The quantitative methodology, on the other hand, was based on the uncertainty assessment of the emission results. Applying both methodologies to the two emission sources showed that they work properly. However, source-specific rating systems still need to be developed, because emission and sink sources exhibit source-specific characteristics of GHG emissions and sinks.
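
The abstract does not spell out the quantitative formula, but IPCC-style error propagation is a common basis for the uncertainty assessment of inventory results; the sketch below illustrates that approach with hypothetical uncertainty percentages for the two emission sources named above, and is not necessarily the GHG-CAPSS method.

```python
import math

# Hypothetical IPCC Tier-1 style error propagation: per source,
# E = activity data x emission factor, so the relative uncertainties combine
# in quadrature; source uncertainties are then emission-weighted to an
# inventory-level figure.
sources = [
    # (name, emissions tCO2e, activity-data unc. %, emission-factor unc. %)
    ("liquid fossil fuel", 120_000.0,  3.0,  7.0),
    ("landfill",            45_000.0, 15.0, 40.0),
]

weighted_sq = 0.0
total = 0.0
for name, e, u_ad, u_ef in sources:
    u_src = math.sqrt(u_ad**2 + u_ef**2)   # combined per-source uncertainty (%)
    print(f"{name}: {u_src:.1f}%")
    weighted_sq += (u_src * e) ** 2
    total += e

print(f"inventory-level uncertainty: {math.sqrt(weighted_sq) / total:.1f}%")
```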

A Study on the Development of National Defense Leadership through the Change of Civil-Military Relationships (민군관계의 변화와 국방리더십의 발전방안에 관한 연구)

  • Lee, Chang-Gi
    • Journal of National Security and Military Science / s.4 / pp.83-118 / 2006
  • This study aims to develop digital leadership in the field of national defense. Korean society today faces a national security crisis, yet national defense leadership has not been demonstrated under these circumstances. National defense leadership is a process that makes use of influence: it converges people's interests and demands, presents the right vision of national defense, and leads people to comply with national security policy. Because of environmental change, our national defense leadership is at a new turning point. First, the post-Cold War international order raises the possibility of guaranteed peace and security in international society, but it also increases multiple uncertainties and small-scale troubles in the security environment. In addition, Korean society has entered a period of democratization and localization, having achieved peaceful transfers of political power about three times. The issue of the political neutralization of the military is nearing settlement, but the negative inheritance of the old military regime remains a concern. In this situation, we cannot expect a rise in the perceived importance of security or of the military's reason for being. The military must therefore attend not only to the internal maintenance of order and control and to the growth of soldiers, but also to developing the external leadership that strengthens its influence on society and its reason for being. As an alternative, I suggest a digital leadership of national defense that fits the digital era: leadership that can accept and understand digital technology and lead a digital organization. Constructing a digital national defense requires practical leadership, that is, digital leadership with the digital competence to set the vision of digital national defense and carry out the policy. A leader with digital leadership can lead the digital society; the ultimate key to constructing digital government, digital corporations, and digital citizens depends on digital leaders with a digital mind. More specifically, digital leadership comprises network leadership, next-generation leadership, knowledge-driven management leadership, and innovation-oriented leadership. To cultivate it, we must build a human-resource development strategy and develop educational training programs.


A Preliminary Quantification of $^{99m}Tc$-HMPAO Brain SPECT Images for Assessment of Volumetric Regional Cerebral Blood Flow ($^{99m}Tc$-HMPAO 뇌혈류 SPECT 영상의 부위별 체적 혈류 평가에 관한 기초 연구)

  • Kwark, Cheol-Eun;Park, Seok-Gun;Yang, Hyung-In;Choi, Chang-Woon;Lee, Kyung-Han;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul;Koh, Chang-Soon
    • The Korean Journal of Nuclear Medicine / v.27 no.2 / pp.170-174 / 1993
  • Quantitative methods for assessing cerebral blood flow with $^{99m}Tc$-HMPAO brain SPECT use the measured count distribution in a specific reconstructed tomographic slice, or in an algebraic summation of a few neighboring slices, rather than the true volumetric distribution, to estimate the relative regional cerebral blood flow, and consequently produce biased estimates of the true regional cerebral blood flow. These biases are thought to originate mainly from the arbitrarily irregular shape of the cerebral regions of interest (ROI) being analyzed. In this study, a semi-automated method for directly quantifying the volumetric regional cerebral blood flow estimate is proposed, and the results are compared with those calculated by the previous planar approaches. Bias factors due to the partial-volume effect and the uncertainty in ROI determination are not considered here, the comparison being confined to the planar versus volumetric assessment protocols.
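
The abstract does not detail the semi-automated method, so the sketch below only illustrates the bias it targets: for a synthetic spherical ROI, the count fraction taken from a single mid-slice differs from the true volumetric count fraction. All numbers are hypothetical.

```python
import numpy as np

# Synthetic single-slice vs. volumetric ROI quantification (illustration only).
rng = np.random.default_rng(0)
vol = rng.poisson(lam=10.0, size=(64, 64, 64)).astype(float)   # background counts
z, y, x = np.ogrid[:64, :64, :64]
roi = (z - 32)**2 + (y - 32)**2 + (x - 32)**2 <= 20**2         # spherical ROI
vol[roi] += rng.poisson(lam=30.0, size=int(roi.sum()))         # "perfused" region

planar = vol[32][roi[32]].sum() / vol[32].sum()   # fraction from one mid-slice
volumetric = vol[roi].sum() / vol.sum()           # fraction from the full volume
print(planar, volumetric)  # the center slice over-weights the ROI's share
```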


The Study on the Elaboration of Technology Valuation Model and the Adequacy of Volatility based on Real Options (실물옵션 기반 기술가치 평가모델 정교화와 변동성 유효구간에 관한 연구)

  • Sung, Tae-Eung;Lee, Jongtaik;Kim, Byunghoon;Jun, Seung-Pyo;Park, Hyun-Woo
    • Journal of Korea Technology Innovation Society / v.20 no.3 / pp.732-753 / 2017
  • Recently, when evaluating technology values in the fields of biotechnology, pharmaceuticals, and medicine, there has been a growing need to estimate those values in consideration of the period and cost required for future commercialization. The existing discounted cash flow (DCF) method has limitations in that it cannot consider consecutive investment and does not reflect the probabilistic nature of the commercialization input cost of technology-applied products. Since the value of technology and investment should be considered an opportunity value, and the information used in decision-making for resource allocation should be taken into account, it is desirable to apply the concept of real options; and in order to carry the characteristics of the target technology's business model into the concept of volatility, which is usually applied to stock prices in firm valuation, we need to consider the 'continuity of stock price (relatively minor change)' and the 'positive condition'. Thus, as discussed in much of the literature, it is necessary to investigate the relationship among volatility, underlying asset value, and commercialization cost in the Black-Scholes model for estimating technology value based on real options. This study is expected to provide a more elaborate real options model by mathematically deriving whether the ratio of the present value of the underlying asset to the present value of the commercialization cost, which reflects the uncertainty in the option pricing model (OPM), falls into the "no action taken" (NAT) area under certain threshold conditions, and by presenting the estimation logic for option values according to the observation variables (or input values).
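
Since the abstract casts the PV of the underlying asset and the PV of the commercialization cost in the roles of stock price and strike in the Black-Scholes model, a minimal sketch of that valuation step is given below; the input values are hypothetical, and the sensitivity loop only illustrates why the valid range of the volatility parameter matters.

```python
import math
from statistics import NormalDist

def bs_call(S, K, sigma, T, r=0.0):
    """Black-Scholes call value with S = PV of expected cash flows (underlying
    asset) and K = PV of the commercialization cost (strike)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

# Hypothetical inputs: the option value grows with volatility, so an
# implausibly wide volatility range inflates the estimated technology value.
for sigma in (0.2, 0.4, 0.6):
    print(sigma, round(bs_call(S=100.0, K=80.0, sigma=sigma, T=3.0, r=0.03), 2))
```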

Level Shifts and Long-term Memory in Stock Distribution Markets (주식유통시장의 층위이동과 장기기억과정)

  • Chung, Jin-Taek
    • Journal of Distribution Science / v.14 no.1 / pp.93-102 / 2016
  • Purpose - This paper studies the static and dynamic sides of long-term memory properties and seeks to increase the explanatory power regarding the long-memory process by examining the long-term memory attributes of the Korea Composite Stock Price Index. The GPH statistic is used to derive a modified statistic for Korea's stock market and to investigate its long-memory process. Research design, data, and methodology - Level shifts were analyzed empirically by applying the GPH method, modified to take into account the daily log returns of the Korea Composite Stock Price Index. The data used to test whether the stock market's behavior is governed by a long-memory process are the daily values and log returns of the Korea Composite Stock Price Index. Long-memory estimators were derived with a semiparametric long-memory method. Chapter 2 examines the prior research; Chapter 3 describes long-memory processes and estimation methods, derives modifications of the GPH statistic, and discusses the Whittle statistic. Chapter 4 uses the Korea Composite Stock Price Index to estimate the parameters of the long-memory process. Chapter 6 presents the conclusions and implications. Results - If the price time series is generated by an anomalous process, it may be identified as a long-memory time series. However, the test results obtained by applying the GPH method to prices do not follow a long-memory or fractional-differencing process. When the time series contains a level shift, the present test method for long-memory processes carries a considerable amount of bias, and there exists a structural change in the stock distribution market; this structural change manifests itself as a level shift. If level shifts are not accounted for, they appear in the stock secondary market as bias, enter the test statistic of a non-long-memory process, and generate errors that can falsely indicate long memory. Conclusions - The changes in long-memory characteristics associated with level shifts suggest two things. First, if an outside shock flows in over a long period, the long-memory process gradually reverts toward the average return; the same reasoning applies to an investor in light of the characteristics of long memory, so it is suggested that investors consider long-memory characteristics, which increase uncertainty and potential, when making investment decisions. Second, these characteristics must be considered separately for each time series: research on the price-earnings ratio and investment risk should incorporate long-memory characteristics, which would give it more predictive power.
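
For readers unfamiliar with the GPH statistic used above, the sketch below shows the textbook log-periodogram form (not the paper's modified statistic): the memory parameter d is minus the slope of a regression of the log periodogram on log(4 sin²(λ/2)) over the low Fourier frequencies.

```python
import numpy as np

def gph_d(x, m=None):
    """Textbook GPH estimate of the long-memory parameter d (illustrative)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m = m or int(n ** 0.5)                     # common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n  # low Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    log_I = np.log(np.abs(dft) ** 2 / (2 * np.pi * n))  # log periodogram
    regressor = np.log(4 * np.sin(lam / 2) ** 2)
    slope = np.polyfit(regressor, log_I, 1)[0]
    return -slope                              # d > 0 suggests long memory

rng = np.random.default_rng(1)
print(gph_d(rng.standard_normal(2048)))        # near 0 for white noise
```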

Principles and Current Trends of Neural Decoding (뉴럴 디코딩의 원리와 최신 연구 동향 소개)

  • Kim, Kwangsoo;Ahn, Jungryul;Cha, Seongkwang;Koo, Kyo-in;Goo, Yong Sook
    • Journal of Biomedical Engineering Research / v.38 no.6 / pp.342-351 / 2017
  • Neural decoding is the procedure of using spike trains fired by neurons to estimate features of the original stimulus. It is a fundamental step toward understanding how neurons talk to each other and, ultimately, how brains manage information. In this paper, neural decoding strategies are classified into three methodologies, which are explained in turn: rate decoding, temporal decoding, and population decoding. Rate decoding is the earliest and simplest decoding method, in which the stimulus is reconstructed from the number of spikes in a given time window (i.e., spike rates). Since the spike count is a discrete number, the spike rate itself is often quantized rather than continuous; therefore, if the stimulus is not static and simple, rate decoding may not provide a good estimate of the stimulus. Temporal decoding reconstructs the stimulus from the timing information of when spikes fire. It can be useful even for rapidly changing stimuli, and our sensory system is believed to use a temporal rather than a rate decoding strategy. Since the use of large numbers of neurons is one of the operating principles of most nervous systems, population decoding has advantages such as reducing the uncertainty due to neuronal variability and being able to represent multiple stimulus attributes simultaneously. This paper introduces the three decoding methods, shows how information theory can be used in the neural decoding area, and finally introduces machine-learning-based algorithms for neural decoding.
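
As an illustration of the rate- and population-decoding ideas above (a toy sketch, not taken from the paper), binned spike counts from a small simulated population can be mapped back to a slowly varying stimulus with a least-squares linear decoder:

```python
import numpy as np

# Toy rate/population decoding: 8 Poisson neurons, 200 ms count bins.
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 200)
stimulus = np.sin(t)                                       # hidden stimulus
gains = rng.uniform(5, 15, size=8)                         # neuron gains (Hz)
rates = np.clip(gains[:, None] * (1 + stimulus), 0, None)  # firing rates
counts = rng.poisson(rates * 0.2)                          # counts per 200 ms bin

# Linear decoder fit by least squares: stimulus_hat = counts.T @ W.
# Pooling many neurons reduces the uncertainty from per-neuron variability.
W, *_ = np.linalg.lstsq(counts.T, stimulus, rcond=None)
stimulus_hat = counts.T @ W
print(np.corrcoef(stimulus, stimulus_hat)[0, 1])  # correlation, true vs. decoded
```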

Prediction of Potential Habitat of Japanese evergreen oak (Quercus acuta Thunb.) Considering Dispersal Ability Under Climate Change (분산 능력을 고려한 기후변화에 따른 붉가시나무의 잠재서식지 분포변화 예측연구)

  • Shin, Man-Seok;Seo, Changwan;Park, Seon-Uk;Hong, Seung-Bum;Kim, Jin-Yong;Jeon, Ja-Young;Lee, Myungwoo
    • Journal of Environmental Impact Assessment / v.27 no.3 / pp.291-306 / 2018
  • This study was designed to predict the potential habitat of the Japanese evergreen oak (Quercus acuta Thunb.) on the Korean Peninsula under climate change, taking its dispersal ability into account. We used a species distribution model (SDM) based on the current species distribution and climatic variables. To reduce the uncertainty of the SDM, we applied nine single-model algorithms and the pre-evaluation weighted ensemble method. Two representative concentration pathways (RCP 4.5 and 8.5) were used to simulate the distribution of the Japanese evergreen oak in 2050 and 2070. The final future potential habitat was determined by considering whether it can be reached by dispersal from the current habitat. Dispersal ability was modeled with MigClim by applying three coefficient values (${\theta}=-0.005$, ${\theta}=-0.001$, and ${\theta}=-0.0005$) to the dispersal-limited function, together with an unlimited-dispersal case. All projections showed that the potential habitat of the Japanese evergreen oak will increase on the Korean Peninsula, except under RCP 4.5 in 2050. However, the future potential habitat was found to be limited once the species' dispersal ability was considered. Estimating dispersal ability is therefore required to understand the effects of climate change on the habitat distribution of this species.
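
The abstract does not reproduce the dispersal-limited function itself; assuming the common negative-exponential form P(d) = exp(θ·d), the sketch below shows how the three θ values used in the study would translate into dispersal probability over distance. Both the functional form and the distance unit are assumptions for illustration.

```python
import math

# Hypothetical reading of the dispersal-limited function: P(d) = exp(theta * d).
thetas = (-0.005, -0.001, -0.0005)        # coefficients applied in the study
for d in (100, 500, 1000, 5000):          # dispersal distance (metres, assumed)
    row = ", ".join(f"theta={t}: {math.exp(t * d):.3f}" for t in thetas)
    print(f"d = {d:>4} -> {row}")         # smaller |theta| -> farther dispersal
```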

Seismic Performance Assessment of Unreinforced Masonry Wall Buildings Using Incremental Dynamic Analysis (증분동적해석을 통한 비보강 조적벽식 건물의 내진성능 평가)

  • Kwon, Ki Hyuk;Kim, Man Hoe;Kim, Hyung Joon
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.17 no.3 / pp.28-39 / 2013
  • The most common housing type in Korea is the low-rise building with unreinforced masonry walls (UMWs), which are known to be a vulnerable seismic-force-resisting system (SFRS) because a UMW's ductility capacity is low compared with its high lateral stiffness. However, there is still little experimental investigation of the shear strength and stiffness of UMWs or of the seismic performance of buildings that use UMWs as their SFRS. In Korea, the shear strength and stiffness of UMWs have been evaluated with the equations suggested in FEMA 356, which cannot reflect the structural and material characteristics, or the workmanship, of domestic UMW construction. This study first demonstrates the differences between the shear strength and stiffness of UMWs obtained from FEMA 356 and from test results. The influence of these differences on the seismic performance of UMW buildings is then discussed using incremental dynamic analysis results for a prototype UMW building, selected from a site survey of more than 200 UMW buildings and from existing UMW test results. The seismic performance of the prototype UMW building is analyzed in terms of collapse margin ratios and beta values representing the uncertainty of seismic capacity. The analysis shows that the seismic performance of the UMW building estimated with the FEMA 356 equations underestimates both the collapse margin ratio and the beta value compared with the performance estimated from test results. Whichever of the two estimates is used, the seismic performance of the prototype building does not meet the criteria prescribed in the current Korean seismic code, and a collapse probability of about 90% is found for UMW buildings more than 30 years old under earthquakes with a 2,400-year return period.
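
The collapse margin ratio and beta value mentioned above can be computed from incremental dynamic analysis output in a standard way; the sketch below uses hypothetical collapse intensities, not the paper's data.

```python
import numpy as np

# Standard IDA post-processing (hypothetical numbers): the collapse margin
# ratio (CMR) is the median collapse intensity over the design-level intensity;
# beta is the dispersion (std. dev. of log collapse intensities), i.e. the
# uncertainty of the seismic capacity.
sa_collapse = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.44, 0.58])  # g
sa_design = 0.30                                   # g, assumed design intensity

median_capacity = np.exp(np.log(sa_collapse).mean())  # lognormal median
cmr = median_capacity / sa_design                     # collapse margin ratio
beta = np.std(np.log(sa_collapse), ddof=1)            # capacity uncertainty
print(f"CMR = {cmr:.2f}, beta = {beta:.2f}")
```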

Occurrence and Estimation Using Monte-Carlo Simulation of Aflatoxin $M_1$ in Domestic Cow’s Milk and Milk Products (국내산 우유 및 유제품에서의 Aflatoxin $M_1$오염수준 및 Monte-Carlo Simulation을 이용한 발생 추정)

  • 박경진;이미영;노우섭;천석조;심추창;김창남;신은하;손동화
    • Journal of Food Hygiene and Safety / v.16 no.3 / pp.200-205 / 2001
  • In this study, the occurrence of aflatoxin $M_1$ (AFM$_1$) in domestic milk and milk products was determined. The level of AFM$_1$ in market milk (0.047 ppb) was lower than that in raw milk (0.083 ppb), but this appears to be due to dilution during the collection process rather than an effect of sterilization. For nonfat dry milk, the AFM$_1$ level appeared high, at 0.24 ppb, but in practice it is thought to be no different from market milk, because nonfat dry milk is diluted at intake. For ice cream, finished products were contaminated with 0.020 ppb of AFM$_1$ and are also subject to possible AFB$_1$ contamination through secondary raw materials such as nuts and almonds. On the basis of the results of this study and previous studies, a Monte-Carlo simulation was conducted to estimate the AFM$_1$ contamination level in domestic market milk. To account for uncertainty and variability, a distribution-fitting procedure was carried out: a beta distribution was used to estimate the prevalence, and a triangular distribution to estimate the concentration, of AFM$_1$ in milk. As a result, the 5th, 50th, and 95th percentiles of the distribution of the AFM$_1$ contamination level in milk are 0.0214, 0.0946, and 0.1888 ppb, respectively. We also estimate that AFM$_1$ in almost all milk was below 0.5 ppb, the American acceptable level, but that 80.4% far exceeded 0.05 ppb, the European standard.
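
A sketch in the spirit of the Monte-Carlo step described above: prevalence drawn from a beta distribution and concentration from a triangular distribution, combined by simulation. The distribution parameters below are hypothetical stand-ins, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
# Hypothetical parameters (the paper's fitted values are not in the abstract).
prevalence = rng.beta(a=40, b=10, size=n)                 # contaminated fraction
concentration = rng.triangular(left=0.0, mode=0.08, right=0.25, size=n)  # ppb
level = prevalence * concentration                        # simulated AFM1 level

print(np.percentile(level, [5, 50, 95]))  # cf. reported 0.0214/0.0946/0.1888 ppb
```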
