• Title/Summary/Keyword: Uncertainty estimation

Fault-Free Process for IT System with TRM (Technical Reference Model)-based Fault Check Point and Event Rule Engine (기술분류체계 기반의 장애 점검포인트와 이벤트 룰엔진을 적용한 무장애체계 구현)

  • Hyun, Byeong-Tag;Kim, Tae-Woo;Um, Chang-Sup;Seo, Jong-Hyen
    • Information Systems Review / v.12 no.3 / pp.1-17 / 2010
  • IT systems based on a Global Single Instance (GSI) can manage a corporation's internal information, resources, and assets effectively, and can raise business efficiency through consolidation of business processes. However, they also carry the risk that an IT system fault can paralyze the business itself, followed by huge monetary loss. Many studies have addressed fault tolerance based on redundant components. The concept of fault tolerance is simple, but designing and adopting a fault-tolerant system is difficult because of uncertainty about the type and frequency of faults. Operational fault management, which takes effect after the IT system has been developed, is therefore increasingly important alongside technical fault management. This study proposes a fault management process that includes a pre-estimation method using TRM (Technical Reference Model) check points and an event rule engine, and demonstrates the effect of the fault-free process through a fault management system built for a representative high-tech company. After adopting the fault-free process, the number of failures decreased by 46%, failure time decreased by 56%, and opportunity loss costs decreased by 77%.
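An event rule engine of the kind the abstract describes can be sketched minimally as follows; the check-point names, event fields, and thresholds here are invented for illustration and are not the paper's TRM classification.

```python
# Minimal event rule engine: each rule ties a fault check point to a
# predicate over incoming monitoring events; matching events raise an
# alert.  Check-point names and thresholds are illustrative only.
RULES = [
    ("HW.CPU", lambda e: e.get("cpu_util", 0) > 90, "CPU saturation"),
    ("SW.DB", lambda e: e.get("db_latency_ms", 0) > 500, "slow queries"),
    ("NW.LINK", lambda e: e.get("packet_loss", 0) > 0.01, "lossy link"),
]

def evaluate(event):
    """Return the (check point, message) alerts triggered by one event."""
    return [(cp, msg) for cp, pred, msg in RULES if pred(event)]

alerts = evaluate({"cpu_util": 97, "db_latency_ms": 120})
print(alerts)  # → [('HW.CPU', 'CPU saturation')]
```

Pre-estimation then amounts to running such rules continuously against the event stream and alerting before a check point degrades into an outright failure.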

Occurrence and Estimation Using Monte-Carlo Simulation of Aflatoxin $M_1$ in Domestic Cow's Milk and Milk Products (국내산 우유 및 유제품에서의 Aflatoxin $M_1$ 오염수준 및 Monte-Carlo Simulation을 이용한 발생 추정)

  • 박경진;이미영;노우섭;천석조;심추창;김창남;신은하;손동화
    • Journal of Food Hygiene and Safety / v.16 no.3 / pp.200-205 / 2001
  • In this study, the occurrence of aflatoxin $M_1$ (AFM$_1$) in domestic milk and milk products was determined. The level of AFM$_1$ in market milk (0.047 ppb) was lower than that in raw milk (0.083 ppb), but this appears to be due to dilution in the collecting process rather than to the effect of sterilization. In nonfat dry milk the AFM$_1$ level appeared high, at 0.24 ppb, but it is thought not to differ from market milk in practice because nonfat dry milk is diluted at intake. In ice cream, finished products were contaminated with AFM$_1$ at 0.020 ppb, and they also carry the possibility of AFB$_1$ contamination from secondary raw materials such as nuts and almonds. On the basis of the results of this study and previous studies, a Monte-Carlo simulation was conducted to estimate the contamination level of AFM$_1$ in domestic market milk. To account for uncertainty and variability, a distribution-fitting procedure was applied: a beta distribution was used to estimate the prevalence and a triangular distribution to estimate the concentration of AFM$_1$ in milk. As a result, the 5%, 50%, and 95% points of the distribution of AFM$_1$ contamination in milk were 0.0214, 0.0946, and 0.1888 ppb, respectively. We also estimate that AFM$_1$ in almost all milk was below 0.5 ppb, the American acceptance level, but that 80.4% far exceeded 0.05 ppb, the European standard.
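The two-stage sampling scheme described above (a beta distribution for prevalence, a triangular distribution for concentration) can be sketched as follows; the distribution parameters are illustrative placeholders, not the values fitted in the paper.

```python
import random

random.seed(42)

N = 100_000  # number of Monte-Carlo iterations

# Illustrative (not fitted) parameters: a beta prior on prevalence,
# a triangular(low, high, mode) concentration distribution in ppb.
samples = []
for _ in range(N):
    prevalence = random.betavariate(25, 10)       # P(sample contaminated)
    if random.random() < prevalence:
        conc = random.triangular(0.0, 0.5, 0.09)  # low, high, mode (ppb)
    else:
        conc = 0.0
    samples.append(conc)

samples.sort()
p05, p50, p95 = (samples[int(N * q)] for q in (0.05, 0.50, 0.95))
print(f"5%: {p05:.4f}  50%: {p50:.4f}  95%: {p95:.4f} ppb")
```

Reading the 5%, 50%, and 95% order statistics off the sorted draws mirrors how the paper reports percentile points of the simulated contamination distribution.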

Prediction of Rock Fragmentation and Design of Blasting Pattern based on 3-D Spatial Distribution of Rock Factor (발파암 계수의 3차원 공간 분포에 기초한 암석 파쇄도 예측 및 발파 패턴 설계)

  • Shim Hyun-Jin;Seo Jong-Seok;Ryu Dong-Woo
    • Tunnel and Underground Space / v.15 no.4 s.57 / pp.264-274 / 2005
  • The optimum blasting pattern for excavating a quarry efficiently and economically can be determined from the minimum production cost, which is generally estimated from rock fragmentation. It is therefore critical to predict the fragment-size distribution of blasted rock over the entire quarry. A comparison of various prediction models shows that the results of the Kuz-Ram model agree relatively well with field measurements. The Kuz-Ram model uses the concept of a rock factor to represent rock-mass conditions such as block size, jointing, strength, and others. To evaluate total production cost, the 3-D spatial distribution of the rock factor must be estimated for the entire quarry. In this study, a sequential indicator simulation technique is adopted to estimate the spatial distribution of the rock factor, because it reproduces spatial variability and distribution models better than Kriging methods; it can also reduce the uncertainty of the predictor by using the distribution information of the sample data. The entire quarry is classified into three types of rock mass, and an optimum blasting pattern is proposed for each type based on the 3-D spatial distribution of the rock factor. In addition, plane maps of the rock-factor distribution at each ground level are provided to estimate production costs for each process and to plan an optimum blasting pattern.
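The role the rock factor plays in Kuz-Ram fragmentation prediction can be sketched with the commonly cited Kuznetsov mean-size equation and Rosin-Rammler size distribution; the input values below are illustrative, not the quarry data from the study.

```python
import math

def kuznetsov_mean_size(A, V0, Q, E=100.0):
    """Kuznetsov estimate of the mean fragment size Xm (cm).

    A  : rock factor describing rock-mass conditions
    V0 : rock volume broken per blasthole (m^3)
    Q  : explosive mass per hole (kg)
    E  : relative weight strength of the explosive (ANFO = 100)
    """
    return A * (V0 / Q) ** 0.8 * Q ** (1 / 6) * (115.0 / E) ** (19.0 / 20.0)

def rosin_rammler_passing(x, x50, n):
    """Fraction of fragments smaller than size x (Rosin-Rammler form)."""
    return 1.0 - math.exp(-0.693 * (x / x50) ** n)

# Illustrative values: medium rock factor, 8 m^3 broken per hole,
# 20 kg of ANFO-equivalent explosive, uniformity index n = 1.5.
xm = kuznetsov_mean_size(A=7.0, V0=8.0, Q=20.0)
print(f"mean fragment size ~ {xm:.1f} cm")
print(f"fraction below 50 cm: {rosin_rammler_passing(50.0, xm, n=1.5):.2f}")
```

Because the predicted size scales directly with A, simulating the 3-D field of the rock factor (as the study does with sequential indicator simulation) directly yields a fragmentation, and hence cost, map of the quarry.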

Estimation of nighttime aerosol optical thickness from Suomi-NPP DNB observations over small cities in Korea (Suomi-NPP위성 DNB관측을 이용한 우리나라 소도시에서의 야간 에어로졸 광학두께 추정)

  • Choo, Gyo-Hwang;Jeong, Myeong-Jae
    • Korean Journal of Remote Sensing / v.32 no.2 / pp.73-86 / 2016
  • In this study, an algorithm to estimate Aerosol Optical Thickness (AOT) over small cities during nighttime was developed using the radiance from artificial light sources measured by the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) aboard the Suomi-National Polar Partnership (Suomi-NPP) satellite. The algorithm is based on Beer's extinction law applied to the artificial lights of small cities. AOT is retrieved for cloud-free pixels over individual cities, with cloud screening performed using measurements from the VIIRS M-bands at infrared wavelengths. The retrieved nighttime AOT is compared with the aerosol products from the MODerate resolution Imaging Spectroradiometer (MODIS) aboard the Terra and Aqua satellites. The correlation coefficients between the retrieved nighttime AOT and MODIS AOT over individual cities range from around 0.6 to 0.7, with Root-Mean-Squared Differences (RMSD) ranging from 0.14 to 0.18. In addition, sensitivity tests were conducted on the factors affecting the nighttime AOT to estimate the range of uncertainty in the retrievals. The results indicate that it is promising to infer AOT from DNB measurements over small cities in Korea at night. After further development and refinement, the retrieval algorithm is expected to produce nighttime aerosol information that is not operationally available over Korea.
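The Beer's-law inversion at the heart of such a retrieval can be sketched in a few lines; the clean-sky reference radiance and viewing geometry below are assumed inputs for illustration, not values from the paper.

```python
import math

def retrieve_aot(l_obs, l_clear, view_zenith_deg):
    """Invert Beer's extinction law for aerosol optical thickness.

    Assumes L_obs = L_clear * exp(-tau * m) with a plane-parallel
    air-mass factor m = 1 / cos(view zenith).  l_clear is the city's
    clean-sky reference radiance, an assumed calibration input here.
    """
    m = 1.0 / math.cos(math.radians(view_zenith_deg))
    return -math.log(l_obs / l_clear) / m

# Illustrative numbers: city lights attenuated to 85% of the clean-sky
# reference, observed at a 30-degree view zenith angle.
aot = retrieve_aot(l_obs=0.85, l_clear=1.0, view_zenith_deg=30.0)
print(f"retrieved AOT ~ {aot:.3f}")
```

Uncertainty in the clean-sky reference propagates directly into the retrieved AOT, which is why the sensitivity tests described in the abstract matter.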

A Preliminary Quantification of $^{99m}Tc$-HMPAO Brain SPECT Images for Assessment of Volumetric Regional Cerebral Blood Flow ($^{99m}Tc$-HMPAO 뇌혈류 SPECT 영상의 부위별 체적 혈류 평가에 관한 기초 연구)

  • Kwark, Cheol-Eun;Park, Seok-Gun;Yang, Hyung-In;Choi, Chang-Woon;Lee, Kyung-Han;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul;Koh, Chang-Soon
    • The Korean Journal of Nuclear Medicine / v.27 no.2 / pp.170-174 / 1993
  • Quantitative methods for assessing cerebral blood flow using $^{99m}Tc$-HMPAO brain SPECT use the measured count distribution in a specific reconstructed tomographic slice, or in an algebraic summation of a few neighboring slices, rather than the true volumetric distribution, to estimate relative regional cerebral blood flow, and consequently produce biased estimates of the true regional cerebral blood flow. These biases are thought to originate mainly from the arbitrarily irregular shape of the cerebral regions of interest (ROIs) being analyzed. In this study, a semi-automated method for direct quantification of the volumetric regional cerebral blood flow estimate is proposed, and the results are compared with those calculated by previous planar approaches. Bias factors due to the partial volume effect and to uncertainty in ROI determination are not considered here, as the aim is a methodological comparison of the planar and volumetric assessment protocols.

Optimal Design of Water Distribution System considering the Uncertainties on the Demands and Roughness Coefficients (수요와 조도계수의 불확실성을 고려한 상수도관망의 최적설계)

  • Jung, Dong-Hwi;Chung, Gun-Hui;Kim, Joong-Hoon
    • Journal of the Korean Society of Hazard Mitigation / v.10 no.1 / pp.73-80 / 2010
  • Optimal design of water distribution systems started as least-cost design of a single objective function using fixed hydraulic variables, e.g., fixed water demand and pipe roughness. A more adequate design, however, considers the uncertainties inherent in a water distribution system, such as uncertain future water demands, and thus better estimates the real network's behavior. Many researchers have therefore suggested approaches that account for uncertainty using uncertainty-quantification methods, and multi-objective optimal design has also been studied. This paper suggests a new multi-objective optimization approach seeking the minimum cost and maximum robustness of the network based on two uncertain variables: nodal demand and pipe roughness. The design procedure consists of two stages: least-cost design, and final optimal design under uncertainty. The uncertainties of demand and roughness are represented by Latin Hypercube sampling with beta probability density functions, and a multi-objective genetic algorithm (MOGA) is used for the optimization. The suggested approach is tested on a real network, the New York Tunnels, to check its applicability. As the computation proceeds, the initial population spreads toward the lower-right section of the solution space and yields Pareto-optimal solutions forming a Pareto front.
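Latin Hypercube sampling with beta marginals, as used above for the demand and roughness uncertainties, can be sketched with the standard library alone; the shape parameters are illustrative, and the stratification here relies on sorted order statistics since the stdlib has no beta inverse CDF.

```python
import random

random.seed(1)

def lhs_beta(n, dims):
    """Latin Hypercube sample with independent beta marginals.

    For each dimension, n beta draws are sorted so that the k-th value
    falls in (approximately) the k-th probability stratum, and the
    stratum order is then shuffled independently per dimension so the
    dimensions are not rank-correlated.  `dims` is a list of
    (alpha, beta) shape pairs -- illustrative, not fitted, values.
    """
    columns = []
    for a, b in dims:
        col = sorted(random.betavariate(a, b) for _ in range(n))
        order = list(range(n))
        random.shuffle(order)
        columns.append([col[k] for k in order])
    # Transpose: one row per sampled scenario.
    return [list(row) for row in zip(*columns)]

# Two uncertain inputs: a demand multiplier and a roughness multiplier.
points = lhs_beta(10, [(2.0, 5.0), (5.0, 2.0)])
for demand, roughness in points[:3]:
    print(f"demand x{demand:.3f}  roughness x{roughness:.3f}")
```

Each sampled scenario would then be fed to a hydraulic solver inside the MOGA's fitness evaluation to score a candidate design's robustness.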

The Study on the Elaboration of Technology Valuation Model and the Adequacy of Volatility based on Real Options (실물옵션 기반 기술가치 평가모델 정교화와 변동성 유효구간에 관한 연구)

  • Sung, Tae-Eung;Lee, Jongtaik;Kim, Byunghoon;Jun, Seung-Pyo;Park, Hyun-Woo
    • Journal of Korea Technology Innovation Society / v.20 no.3 / pp.732-753 / 2017
  • Recently, when evaluating technology value in biotechnology, pharmaceuticals, and medicine, it has become increasingly necessary to account for the period and cost of future commercialization. The existing discounted cash flow (DCF) method is limited in that it cannot consider consecutive investments and does not reflect the probabilistic nature of the commercialization cost of technology-applied products. Since the value of technology and investment should be treated as an opportunity value, and information relevant to resource-allocation decisions should be taken into account, applying the concept of real options is regarded as desirable. To carry the characteristics of the target technology's business model into the concept of volatility, which is usually applied to stock prices in firm valuation, one must consider the continuity of the stock price (relatively small changes) and the positivity condition. Thus, as discussed in much of the literature, it is necessary to investigate the relationship among volatility, underlying asset value, and commercialization cost in the Black-Scholes model when estimating technology value with real options. This study provides a more elaborate real options model by mathematically deriving whether the ratio of the present value of the underlying asset to the present value of the commercialization cost, which reflects the uncertainty in the option pricing model (OPM), falls into a "no action taken" (NAT) region under certain threshold conditions, and by presenting the estimation logic for option values according to the observed variables (input values).
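The Black-Scholes valuation that underlies this real-options reading can be sketched as follows; the mapping of inputs to commercialization quantities follows the standard real-options analogy, and the numbers are illustrative, not the paper's data.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call option.

    In the real-options analogy: S is the present value of the
    commercialization cash flows (underlying asset), K the present
    value of the commercialization cost, T the time until the
    investment decision, r the risk-free rate, sigma the volatility.
    """
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Illustrative inputs: cash flows worth 100, commercialization cost 80,
# a three-year decision window, 3% risk-free rate, 40% volatility.
value = bs_call(S=100.0, K=80.0, T=3.0, r=0.03, sigma=0.4)
print(f"real-option value of the technology ~ {value:.2f}")
```

The S/K ratio in this formula is exactly the underlying-to-cost ratio whose threshold behavior the paper analyzes for the NAT region.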

Principles and Current Trends of Neural Decoding (뉴럴 디코딩의 원리와 최신 연구 동향 소개)

  • Kim, Kwangsoo;Ahn, Jungryul;Cha, Seongkwang;Koo, Kyo-in;Goo, Yong Sook
    • Journal of Biomedical Engineering Research / v.38 no.6 / pp.342-351 / 2017
  • Neural decoding is a procedure that uses the spike trains fired by neurons to estimate features of the original stimulus. This is a fundamental step toward understanding how neurons talk to each other and, ultimately, how brains manage information. In this paper, neural decoding strategies are classified into three methodologies, which are each explained: rate decoding, temporal decoding, and population decoding. Rate decoding is the earliest and simplest method, in which the stimulus is reconstructed from the number of spikes in a given time window (i.e., spike rates). Since the spike count is a discrete number, the spike rate itself is often discontinuous and quantized, so if the stimulus is not static and simple, rate decoding may not estimate it well. Temporal decoding reconstructs the stimulus from the timing of the spikes. It can be useful even for rapidly changing stimuli, and our sensory systems are believed to employ temporal rather than rate decoding. Since the use of large numbers of neurons is an operating principle of most nervous systems, population decoding has advantages such as reducing the uncertainty due to neuronal variability and being able to represent multiple stimulus attributes simultaneously. This paper introduces the three decoding methods, shows how information theory can be used in neural decoding, and finally introduces machine-learning-based algorithms for neural decoding.
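Population decoding can be illustrated with the classic population-vector scheme, a simple instance of the idea that many noisy neurons jointly reduce uncertainty; the tuning directions and firing rates below are invented for illustration.

```python
import math

def population_vector(preferred_deg, rates):
    """Population-vector estimate of a stimulus direction.

    Each neuron votes with its preferred direction weighted by its
    firing rate; the decoded angle is the direction of the summed
    vector, in degrees on [0, 360).
    """
    x = sum(r * math.cos(math.radians(p)) for p, r in zip(preferred_deg, rates))
    y = sum(r * math.sin(math.radians(p)) for p, r in zip(preferred_deg, rates))
    return math.degrees(math.atan2(y, x)) % 360.0

# Four neurons tuned to the cardinal directions; the stimulus mostly
# drives the 90-degree neuron, so the decoded angle lands near 90.
decoded = population_vector([0, 90, 180, 270], [5.0, 40.0, 5.0, 10.0])
print(f"decoded direction ~ {decoded:.1f} deg")
```

Averaging many such weighted votes is what washes out individual neurons' trial-to-trial variability, the advantage the abstract attributes to population decoding.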

Why Do Individuals Postpone Their Enrollments for Military Service under a Conscription System? : Investigating Individuals' Psychological and Demographic Characteristics (징병제하에서 왜 군 입대를 늦추는가? : 심리적, 인구통계학적 특성 검토)

  • Kim, Sang-Hoon;Kim, Jin-Gyo;Jeong, Yong-Gyun
    • Journal of the Military Operations Research Society of Korea / v.32 no.2 / pp.188-211 / 2006
  • This study empirically investigates the effects of individual-level characteristics on enlistment timing decisions, even though military service is a duty under a draft system. The individual characteristics considered include five psychological factors, namely attitude, uncertainty, information search level, future expectation, and perceived risk toward the army, along with several demographic variables. Measurement scales for the psychological variables are developed, and a duration model for individuals' enlistment timing decisions is proposed. The proposed model is fitted to survey data collected from both those who have completed military service and those who have not. The estimation results show that two of the five psychological variables, negative attitude and perceived risk, and several demographic variables, including education level, income level, residence area, and the number of family members who have served in the army, have meaningful impacts on enlistment timing. Specifically, enlistment is more delayed when negative attitude toward the army is stronger, perceived risk is higher, education level is higher, academic performance is better, income level is either low or high, the residence area is Seoul or a big city, and the proportion of enlisted family members is smaller. Several managerial implications for alleviating the problems caused by enrollment postponement are also discussed.

Position Estimation of Autonomous Mobile Robot Using Geometric Information of a Moving Object (이동물체의 기하학적 위치정보를 이용한 자율이동로봇의 위치추정)

  • Jin, Tae-Seok;Lee, Jang-Myung
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.4 / pp.438-444 / 2004
  • The intelligent robots needed in the near future are human-friendly robots able to coexist with humans and support them effectively. To realize this, robots need to recognize their position and posture in both known and unknown environments, and their localization should occur naturally. Estimating the robot's position by resolving uncertainty is one of the most important problems in mobile robot navigation. In this paper, we describe a method for localizing a mobile robot using image information of a moving object. The method combines the position observed from dead-reckoning sensors with the position estimated from images captured by a fixed camera. Using the a priori known path of the moving object in world coordinates and a perspective camera model, we derive geometric constraint equations that relate the image-frame coordinates of the moving object to the estimated robot position. Since the equations are based on the estimated position, measurement error may exist at all times. The proposed method uses the error between the observed and estimated image coordinates to localize the mobile robot, applying a Kalman filter scheme. Its performance is verified by computer simulation and experiment.
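The dead-reckoning/camera fusion described above can be sketched as a one-dimensional Kalman filter; the noise variances and measurement values are illustrative stand-ins, not the paper's experimental setup.

```python
def kalman_1d(x, P, u, z, Q=0.05, R=0.2):
    """One Kalman-filter step fusing odometry with an external fix.

    x, P : prior position estimate and its variance
    u    : dead-reckoned displacement since the last step (predict)
    z    : position inferred from the fixed camera (update)
    Q, R : process and measurement noise variances (illustrative)
    """
    # Predict: integrate the odometry and grow the uncertainty.
    x_pred = x + u
    P_pred = P + Q
    # Update: blend in the camera fix weighted by the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
# The robot commands 1.0 m steps; the camera sees it near 1.1, 2.0, 2.9 m.
for u, z in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    x, P = kalman_1d(x, P, u, z)
print(f"fused position ~ {x:.2f} m, variance {P:.3f}")
```

The shrinking variance P is the filter's quantified residual uncertainty: each camera fix pulls the dead-reckoned estimate back toward the observed position and tightens it.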