• Title/Summary/Keyword: Output Uncertainty


Development of Magnetic Sensor for Measurement of the Cable Tension of Large Scale Bridge (대형교량 케이블 장력 측정을 위한 자기센서 개발)

  • Park, Hae-Won; Ahn, Bong-Young; Lee, Seung-Seok; Kim, Jong-Woo
    • Journal of the Korean Society for Nondestructive Testing / v.27 no.4 / pp.339-344 / 2007
  • The safety of large-scale bridge cables is critical because a cable failure can cause unwanted catastrophic collapse. Although the proof load is considered at the design stage, cable soundness must be monitored continuously because a cable may break without warning under variable external loads. The tension of cables in in-service structures has mainly been measured by the resonance method, whose use has been limited by its relatively large measurement uncertainty. Recently, a magnetic method was developed, and its reliability for evaluating cable tension is known to be good. In this study, a system that can deliver a calibrated load to the cable was developed, and the measurement reliability of the developed magnetic sensor under changing external load was analyzed quantitatively. The effects of magnetization frequency, bias magnetic field, and temperature on the sensor output were also evaluated.
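
For context, the resonance method mentioned above typically infers tension from measured natural frequencies through the taut-string approximation (a standard relation, not taken from this paper):

$$T \approx \frac{4\,m\,L^{2}\,f_{n}^{2}}{n^{2}}$$

where $T$ is the cable tension, $m$ the mass per unit length, $L$ the free cable length, and $f_n$ the $n$-th natural frequency. Because errors in $L$ and $f_n$ enter quadratically, modest identification errors propagate into a relatively large tension uncertainty, which motivates the magnetic alternative.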

Docking Assessment Algorithm for AUVs with Uncertainties (불확실성이 포함된 무인잠수정의 도킹 평가 알고리즘)

  • Chon, Seung-jae; Sur, Joo-no; Jeong, Seong-hoon
    • Journal of Advanced Navigation Technology / v.23 no.5 / pp.352-360 / 2019
  • This paper proposes a docking assessment algorithm for autonomous underwater vehicles (AUVs) with sensor uncertainties. The proposed algorithm consists of two assessments: a state assessment and a probability assessment. The state assessment verifies reachability by comparing the forward distance to the docking station with the distance expected to be needed to reach the station's depth, and it checks whether a route correction is necessary by comparing the inaccessible areas calculated from the AUV's turning radius with the position of the docking station. When the AUV is close enough to the docking station and the state assessment is satisfied, the probability assessment computes the probability of successful docking based on the AUV's direction angle, its position relative to the docking station, and its sensor uncertainties. The final output of the algorithm, whether to attempt docking or to correct the route, is decided by comparing the success probability to a threshold. To validate the suggested algorithm, a scenario in which the AUV approaches the docking station is implemented in a MATLAB simulation.
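
As an illustration only (the abstract does not give the paper's exact formulation), the final decision step might look like the following sketch, where a Monte Carlo estimate folds direction angle, relative position, and sensor noise into a success probability; all names, the Gaussian noise model, and the 0.9 threshold are assumptions:

```python
import numpy as np

def success_probability(rel_pos, heading, pos_sigma, heading_sigma,
                        capture_radius=1.0, max_angle=np.deg2rad(15),
                        n_samples=10_000, seed=0):
    """Monte Carlo estimate of docking success under Gaussian sensor noise.

    rel_pos: (x, y) of the dock in the AUV frame; heading in radians.
    A trial succeeds if the perturbed lateral offset stays within the
    capture radius and the perturbed heading error within max_angle.
    """
    rng = np.random.default_rng(seed)
    pos = rel_pos + rng.normal(0.0, pos_sigma, size=(n_samples, 2))
    hdg = heading + rng.normal(0.0, heading_sigma, size=n_samples)
    lateral_ok = np.abs(pos[:, 1]) < capture_radius
    angle_ok = np.abs(hdg - np.arctan2(pos[:, 1], pos[:, 0])) < max_angle
    return np.mean(lateral_ok & angle_ok)

p = success_probability(rel_pos=np.array([20.0, 0.5]), heading=0.02,
                        pos_sigma=0.3, heading_sigma=np.deg2rad(2))
decision = "try docking" if p >= 0.9 else "correct route"  # assumed threshold
print(f"P(success) = {p:.3f} -> {decision}")
```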

Application of POD reduced-order algorithm on data-driven modeling of rod bundle

  • Kang, Huilun; Tian, Zhaofei; Chen, Guangliang; Li, Lei; Wang, Tianyu
    • Nuclear Engineering and Technology / v.54 no.1 / pp.36-48 / 2022
  • As a valid numerical method for obtaining high-resolution flow fields, computational fluid dynamics (CFD) has been widely used to study coolant flow and heat transfer characteristics in fuel rod bundles. However, the time-consuming, iterative solution of the Navier-Stokes equations makes CFD unsuitable for scenarios that require efficient simulation, such as sensitivity analysis and uncertainty quantification. To solve this problem, a reduced-order model (ROM) based on proper orthogonal decomposition (POD) and machine learning (ML) is proposed to simulate the flow field efficiently. First, a validated CFD model is established to produce the flow-field data set of the rod bundle. Second, the modes of the flow field and their corresponding coefficients are extracted with the POD method. Then, a deep feed-forward neural network, chosen for its efficiency in approximating arbitrary functions and its ability to handle high-dimensional, strongly nonlinear problems, is used to model the nonlinear relationship between the mode coefficients and the boundary conditions. After a certain number of training iterations, a trained surrogate model for mode-coefficient prediction is obtained. Finally, the flow field is reconstructed as the product of the POD basis and the predicted coefficients. An evaluation of the ROM on the test dataset shows that the proposed POD-ROM accurately describes the flow in the rod bundle at high resolution in only a few milliseconds.
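
A minimal sketch of the POD-plus-network pipeline the abstract outlines; the synthetic data, truncation criterion, and scikit-learn regressor are assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# synthetic stand-in data: 50 CFD runs, each a flattened field of 2,000 cells;
# real inputs would come from the validated CFD model
rng = np.random.default_rng(1)
bcs = rng.uniform(0.5, 2.0, size=(50, 2))            # e.g. inlet velocity, heat flux
cells = np.linspace(0.0, 1.0, 2000)
snapshots = np.stack([b[0] * np.sin(np.pi * cells) + b[1] * cells**2
                      for b in bcs], axis=1)         # (n_cells, n_runs)

# 1) POD: mean-subtract and take the SVD of the snapshot matrix
mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999)) + 1
basis = U[:, :r]                                     # dominant POD modes
coeffs = basis.T @ (snapshots - mean)                # (r, n_runs) mode coefficients

# 2) deep feed-forward net: boundary conditions -> mode coefficients
net = MLPRegressor(hidden_layer_sizes=(64, 64, 64), max_iter=5000)
net.fit(bcs, coeffs.T)

# 3) reconstruction: mean + basis @ predicted coefficients, in milliseconds
field = mean.ravel() + basis @ net.predict(bcs[:1]).ravel()
```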

Prediction of ship power based on variation in deep feed-forward neural network

  • Lee, June-Beom; Roh, Myung-Il; Kim, Ki-Su
    • International Journal of Naval Architecture and Ocean Engineering / v.13 no.1 / pp.641-649 / 2021
  • Fuel oil consumption (FOC) must be minimized to determine the economic route of a ship; hence, the ship power must be predicted prior to route planning. For this purpose, a numerical method based on model test results has been widely used. However, predicting ship power with this method is challenging owing to the uncertainty of the model test. An onboard test could solve this problem but requires considerable resources and time. Therefore, in this study, a deep feed-forward neural network (DFN) is used to predict ship power using deep learning methods that involve data pattern recognition. To use data in the DFN, the input data and a label (the output of the prediction) must be configured. In this study, the input data comprise ocean environmental data (wave height, wave period, wave direction, wind speed, wind direction, and sea surface temperature) and the ship's operational data (draft, speed, and heading), and the ship power is selected as the label. In addition, various treatments are used to improve the prediction accuracy. First, the ocean environmental data related to wind and waves are preprocessed into values relative to the ship's velocity. Second, the structure of the DFN is varied based on the characteristics of the input data. Third, the prediction accuracy is analyzed over combinations of five hyperparameters (number of hidden layers, number of hidden nodes, learning rate, dropout, and gradient optimizer). Finally, k-means clustering is performed to analyze the effect of the sea state and ship operational status by categorizing the data into several models. The performances of the various prediction models built with the DFN are compared and analyzed.
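
A hedged sketch of a DFN of the kind described, exposing the five hyperparameters the abstract lists (hidden layers, hidden nodes, learning rate, dropout, gradient optimizer); the feature layout and the choice of PyTorch are assumptions:

```python
import torch
from torch import nn

def build_dfn(n_features=9, n_layers=3, n_nodes=64, dropout=0.2):
    """Deep feed-forward net: 9 inputs (6 ocean env. + 3 operational) -> power."""
    layers, width = [], n_features
    for _ in range(n_layers):
        layers += [nn.Linear(width, n_nodes), nn.ReLU(), nn.Dropout(dropout)]
        width = n_nodes
    layers.append(nn.Linear(width, 1))   # label: ship power
    return nn.Sequential(*layers)

model = build_dfn()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # gradient optimizer
loss_fn = nn.MSELoss()

# one illustrative training step on dummy tensors
x, y = torch.randn(32, 9), torch.randn(32, 1)
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```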

Comparison between Uncertainties of Cultivar Parameter Estimates Obtained Using Error Calculation Methods for Forage Rice Cultivars (오차 계산 방식에 따른 사료용 벼 품종의 품종모수 추정치 불확도 비교)

  • Young Sang Joh; Shinwoo Hyun; Kwang Soo Kim
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.3 / pp.129-141 / 2023
  • Crop models have been used to predict yield under diverse environmental and cultivation conditions, which can support decisions on the management of forage crops. Cultivar parameters are one of the required inputs to crop models, representing the genetic properties of a given forage cultivar. The objective of this study was to compare calibration and ensemble approaches in order to minimize the uncertainty of crop yield estimates from the SIMPLE crop model. Cultivar parameters were calibrated using log-likelihood (LL) and the Generic Composite Similarity Measure (GCSM) as objective functions for the Metropolis-Hastings (MH) algorithm. In total, 20 sets of cultivar parameters were generated for each method. Two types of ensemble approach were compared: the first was the average of the model outputs (Eem) obtained from the individual parameter sets; the second was the model output (Epm) for the cultivar parameters obtained by averaging the 20 parameter sets. Comparisons were made for each cultivar and for each error calculation method. 'Jowoo' and 'Yeongwoo', forage rice cultivars used in Korea, were subjected to the parameter calibration. Yield data were obtained from experiment fields at Suwon, Jeonju, Naju, and Iksan. Data for 2013, 2014, and 2016 were used for parameter calibration; for validation, yield data reported from 2016 to 2018 at Suwon were used. Initial calibration indicated that the genetic coefficients obtained by LL were distributed over a narrower range than those obtained by GCSM. A two-sample t-test comparing the ensemble approaches found no significant difference between them. The uncertainty of GCSM can be neutralized by adjusting the acceptance probability, and the Epm ensemble indicates that the uncertainty can be reduced with less computation.
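
A minimal sketch of the calibration-plus-ensemble workflow, assuming throughout: a Gaussian random-walk proposal, a toy `simulate_yield` stand-in for the SIMPLE crop model, and a Gaussian log-likelihood as one of the two error calculations (GCSM is omitted):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_yield(params):
    """Hypothetical stand-in for the SIMPLE crop model's yield response."""
    a, b, c = params
    env = np.linspace(0.0, 1.0, 12)              # dummy environmental driver
    return a * env**2 + b * env + c

obs_yield = simulate_yield([1.2, 0.4, 0.1]) + rng.normal(0, 0.05, 12)

def log_likelihood(params, obs, sigma=0.05):
    resid = obs - simulate_yield(params)
    return -0.5 * np.sum(resid**2) / sigma**2

def metropolis_hastings(obs, theta0, n_iter=20_000, step=0.05):
    theta = np.asarray(theta0, dtype=float)
    ll = log_likelihood(theta, obs)
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step, size=theta.size)
        ll_prop = log_likelihood(prop, obs)
        if np.log(rng.uniform()) < ll_prop - ll:  # MH acceptance rule
            theta, ll = prop, ll_prop
        samples.append(theta.copy())
    return np.array(samples)

# 20 parameter sets, then the two ensemble variants compared in the paper:
param_sets = metropolis_hastings(obs_yield, [1.0, 0.5, 0.1])[-20:]
e_em = np.mean([simulate_yield(p) for p in param_sets], axis=0)  # mean of outputs
e_pm = simulate_yield(param_sets.mean(axis=0))                   # output of mean params
```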

Dynamic Limit and Predatory Pricing Under Uncertainty (불확실성하(不確實性下)의 동태적(動態的) 진입제한(進入制限) 및 약탈가격(掠奪價格) 책정(策定))

  • Yoo, Yoon-ha
    • KDI Journal of Economic Policy / v.13 no.1 / pp.151-166 / 1991
  • In this paper, a simple game-theoretic entry deterrence model is developed that integrates both limit pricing and predatory pricing. While there have been extensive studies dealing with predation and limit pricing separately, no study so far has analyzed these closely related practices in a unified framework. Treating each practice as if it were an independent phenomenon is, of course, an analytical necessity to abstract from complex realities. However, welfare analysis based on such a model may give misleading policy implications. By analyzing limit and predatory pricing within a single framework, this paper attempts to shed some light on the effects of interactions between these two frequently cited tactics of entry deterrence. Another distinctive feature of the paper is that limit and predatory pricing emerge, in equilibrium, as rational, profit-maximizing strategies in the model. Until recently, the only conclusion from formal analyses of predatory pricing was that predation is unlikely to take place if every economic agent is assumed to be rational. This conclusion rests upon the argument that predation is costly; that is, it inflicts more losses upon the predator than upon the rival producer and, therefore, is unlikely to succeed in driving out the rival, who understands that the price cutting, if it ever takes place, must be temporary. Recently, several attempts have been made to overcome this modelling difficulty by Kreps and Wilson, Milgrom and Roberts, Benoit, Fudenberg and Tirole, and Roberts. With the exception of Roberts, however, these studies, though successful in preserving the rationality of players, still share one serious weakness in that they resort to ad hoc, external constraints in order to generate profit-maximizing predation. The present paper uses a highly stylized model of Cournot duopoly and derives the equilibrium predatory strategy without invoking external constraints except the assumption of asymmetrically distributed information. The underlying intuition behind the model can be summarized as follows. Imagine a firm that is considering entry into a monopolist's market but is uncertain about the incumbent firm's cost structure. If the monopolist has low costs, the rival would rather not enter, because it would be difficult to compete with an efficient, low-cost firm. If the monopolist has high costs, however, the rival will definitely enter the market because it can make positive profits. In this situation, if the incumbent firm unwittingly produces its monopoly output, the entrant can infer the nature of the monopolist's costs by observing the monopolist's price. Knowing this, the high-cost monopolist increases its output level up to what would have been produced by a low-cost firm in an effort to conceal its cost condition. This constitutes limit pricing. The same logic applies when there is a rival competitor in the market. Producing the high-cost duopoly output is self-revealing and thus to be avoided. Therefore, the firm chooses to produce the low-cost duopoly output, consequently inflicting losses on the entrant or rival producer, thus acting in a predatory manner. The policy implications of the analysis are rather mixed. Contrary to the widely accepted hypothesis that predation is, at best, a negative-sum game and thus a strategy unlikely to be played from the outset, this paper concludes that predation can be a real occurrence by showing that it can arise as an effective profit-maximizing strategy.
This conclusion alone may imply that the government can play a role in increasing consumer welfare, say, by banning predation or limit pricing. However, the problem is that it is rather difficult to ascribe any welfare losses to these kinds of entry-deterring practices. This difficulty arises from the fact that if the same practices had been adopted by a low-cost firm, they could not be called entry-deterring. Moreover, the high-cost incumbent in the model is doing exactly what the low-cost firm would have done to keep the market to itself. All in all, this paper suggests that a government injunction against limit and predatory pricing should be applied with great care, evaluating each case on its own merits. Hasty generalization may work to the detriment, rather than the enhancement, of consumer welfare.
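
To make the mimicking logic concrete, a small numerical sketch (the linear demand curve and all cost values are illustrative assumptions, not the paper's specification): with inverse demand $P = a - Q$ and constant marginal costs, the low-cost Cournot quantity exceeds the high-cost one, so a high-cost incumbent producing the low-cost quantity depresses the market price the entrant faces.

```python
# Illustrative Cournot quantities under P = a - (q1 + q2); values are assumptions.
a = 100.0
c_low, c_high, c_entrant = 10.0, 30.0, 20.0

def cournot_q(own_cost, rival_cost):
    # Standard Cournot equilibrium quantity: q_i = (a - 2*c_i + c_j) / 3
    return (a - 2 * own_cost + rival_cost) / 3

q_low = cournot_q(c_low, c_entrant)    # what a low-cost incumbent would produce
q_high = cournot_q(c_high, c_entrant)  # self-revealing high-cost output
print(f"low-cost duopoly output {q_low:.1f} > high-cost output {q_high:.1f}")
# A high-cost incumbent mimicking q_low floods the market and lowers the price,
# which is the limit/predatory behavior described in the abstract.
```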


Evaluation of Agro-Climatic Index Using Multi-Model Ensemble Downscaled Climate Prediction of CMIP5 (상세화된 CMIP5 기후변화전망의 다중모델앙상블 접근에 의한 농업기후지수 평가)

  • Chung, Uran; Cho, Jaepil; Lee, Eun-Jeong
    • Korean Journal of Agricultural and Forest Meteorology / v.17 no.2 / pp.108-125 / 2015
  • The agro-climatic index is one way to assess the climate resources of a particular agricultural area with regard to agricultural production; it can be a key indicator of agricultural productivity, providing the basic information required to apply various farming techniques and to estimate crop growth and yield from climate resources such as air temperature, solar radiation, and precipitation. However, an agro-climatic index is not absolute and can change over time. Recently, many studies that consider the uncertainty of future climate change have adopted the multi-model ensemble (MME) approach, developing and improving dynamical and statistical downscaling of Global Climate Model (GCM) output. In this study, agro-climatic indices for the Korean Peninsula, namely growing degree days based on $5^{\circ}C$, the plant period based on $5^{\circ}C$, the crop period based on $10^{\circ}C$, and frost-free days, were calculated to assess their spatio-temporal variations and uncertainties under climate change; the downscaled historical (1976-2005) and near-future (2011-2040) RCP climate scenarios of AR5 were used in the calculations. The results showed that the four agro-climatic indices calculated from nine individual GCMs, as well as from the MME, agreed with the indices calculated from observed data. It was confirmed that the MME, as well as each individual GCM, reproduced the past climate well over the four major river basins of South Korea (Han, Nakdong, Geum, and Seomjin-Yeongsan). However, the spatial downscaling still needs further improvement, since the indices from some individual GCMs deviated from the observed indices in their spatial distribution across the four river basins. All four agro-climatic indices of the Korean Peninsula are expected to increase in the nine individual GCMs and in the MME under the future climate scenarios. The differences and uncertainties of the indices were not reduced by adding models to the ensemble without limit; the differences began to improve when three or four individual GCMs were combined, but further research is still required. The agro-climatic indices derived and evaluated in this study will serve as the baseline for assessing abnormal agro-climatic indices and agro-productivity indices in future work.
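
The four indices have standard formulations; a sketch using common definitions follows, where only the 5 and 10 degree-C bases come from the abstract and the other conventions (0 degree-C frost threshold, daily day-counting) are assumptions:

```python
import numpy as np

def agro_climatic_indices(t_mean, t_min):
    """Common formulations of the four indices for one year of daily data (deg C)."""
    gdd5 = np.sum(np.maximum(t_mean - 5.0, 0.0))   # growing degree days, base 5
    plant_period = np.sum(t_mean >= 5.0)           # days with mean temp >= 5 deg C
    crop_period = np.sum(t_mean >= 10.0)           # days with mean temp >= 10 deg C
    frost_free = np.sum(t_min > 0.0)               # frost-free days (assumed 0 deg C)
    return gdd5, plant_period, crop_period, frost_free

# dummy daily series for 9 GCMs; real inputs would be downscaled GCM output
rng = np.random.default_rng(0)
doy = np.arange(365)
gcm_t_mean = [12 + 10 * np.sin(2 * np.pi * (doy - 100) / 365) + rng.normal(0, 2, 365)
              for _ in range(9)]
gcm_t_min = [tm - 5.0 for tm in gcm_t_mean]

per_gcm = np.array([agro_climatic_indices(tm, tn)
                    for tm, tn in zip(gcm_t_mean, gcm_t_min)])
mme = per_gcm.mean(axis=0)     # multi-model ensemble as the across-GCM mean
print(mme)
```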

Practical Output Dosimetry with Undefined $N_{dw}{^{Co-60}}$ of Cylindrical Ionization Chamber for High Energy Photon Beams of Linear Accelerator ($N_{dw}{^{Co-60}}$이 정의되지 않은 원통형 이온전리함을 이용한 고에너지 광자선의 임상적 출력선량 결정)

  • Oh, Young-Kee; Choi, Tae-Jin; Song, Ju-Young
    • Progress in Medical Physics / v.23 no.2 / pp.114-122 / 2012
  • To determine the absorbed dose to water from linear accelerator photon beams, an exposure calibration factor $N_x$ or an air-kerma calibration factor $N_k$ of the air ionization chamber is needed. We used the exposure calibration factor $N_x$ to obtain the absorbed-dose-to-water calibration factor for a reference source through the TG-21 and TRS-277 protocols. TG-21 determines the absorbed dose accurately but requires complex calculations involving chamber-dependent factors. To reduce these calculations, the authors obtained the absorbed-dose calibration factor $N_{dw}{^{Co-60}}$ for TM31010 (S/N 1055, 1057) ionization chambers, whose $N_{dw}$ is unknown, from the $N_x$ or $N_k$ calibration factor alone. The results showed that the uncertainty of the calculated $N_{dw}$ of the IC-15 chamber, whose $N_x$ and $N_{dw}$ are both known, is within -0.6% for TG-21 and 1.0% for TRS-277; for TM31010, comparing the calculated $N_{dw}$ against the SSDL and PSDL values gave uncertainties of 0.4% and -2.8%, respectively. These experiments showed good agreement, indicating that the calculated $N_{dw}$ is reliable for cross-checking calibration-factor discrepancies for chambers such as the TM31010 and IC-15.
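
A trivial sketch of the cross-check arithmetic behind the percentages reported above; the reference values are placeholders, not the paper's data:

```python
def percent_deviation(n_dw_calc, n_dw_ref):
    """Relative difference of a calculated N_dw against a reference value."""
    return 100.0 * (n_dw_calc - n_dw_ref) / n_dw_ref

# placeholder Gy/C-scale values, for illustration only
print(f"{percent_deviation(5.37e7, 5.40e7):+.1f}%")   # about -0.6%
```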

Feasibility study of the beating cancellation during the satellite vibration test

  • Bettacchioli, Alain
    • Advances in aircraft and spacecraft science / v.5 no.2 / pp.225-237 / 2018
  • The difficulties of satellite vibration testing are due to the commonly expressed qualification requirements being incompatible with the limited performance of the entire controlled system (satellite + interface + shaker + controller). Two features cause the problem: firstly, the main satellite modes (i.e., the first structural mode and the high and low tank modes) are very weakly damped; secondly, the controller is too basic to achieve the expected performance in such cases. The combination of these two issues results in oscillations around the notching levels and high-amplitude beating immediately after the mode. The beating overshoots are a major risk source because they can result in the test being aborted if the qualification upper limit is exceeded. Although the abort is, in itself, a safety measure protecting the tested satellite, it increases the risk of structural fatigue, firstly because the abort threshold has already been reached, and secondly because the test must restart at the same close-resonance frequency and remain there until the qualification level is reached and the frequency sweep can continue. The beat minima correspond only to small successive frequency ranges in which the qualification level is not reached. Although they are less problematic because they do not cause an inadvertent test shutdown, such situations inevitably result in waiver requests from the client. An analysis of the controlled system indicates an operating principle that cannot provide sufficient stability: the drive calculation (which controls the process) simply multiplies the frequency reference (usually called cola) by a function of the next setpoint, the ratio between the amplitude already reached and the previous setpoint, and the compression factor. This function value changes at each cola interval, but it never takes the sensor signal phase into account. Because of these limitations, we first examined whether it was possible to empirically determine, using a series of tests with a very simple dummy, a controller setting process that significantly improves the results. As the attempt failed, we performed simulations seeking an optimum adjustment by minimizing the mean square of the difference between the reference and response signals. The simulations showed a significant improvement during the notch beat and a small reduction in the beat amplitude. However, this small improvement was not useful because it required changing the reference at each cola interval, sometimes with instructions almost twice the qualification level. Another uncertainty regarding the consequences of such an approach involves the impact of differences between the estimated model (used in the simulation) and the actual system. As the limitations of the current controller were identified in these different approaches, we considered the feasibility of a new controller that takes into account an estimated single-input multi-output (SIMO) model whose parameters are estimated from a very low-level run. Against this backdrop, we analyzed the feasibility of LQG control in cancelling beating, and this article highlights the relevance of such an approach.
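
A sketch of the kind of update law described above, in the generic swept-sine-controller form; the exponent structure and all names are assumptions, not the paper's controller:

```python
def next_drive(drive, setpoint_next, setpoint_prev, measured, compression):
    """Per-cola-interval drive update for a swept-sine vibration controller.

    Generic textbook form: scale the drive toward the next setpoint, with the
    correction softened by the compression factor. The measured amplitude
    enters only through its magnitude; the sensor phase is ignored, which is
    the stability limitation the abstract points out.
    """
    ratio = measured / setpoint_prev          # amplitude already reached vs. target
    correction = (setpoint_next / (ratio * setpoint_prev)) ** (1.0 / compression)
    return drive * correction

drive = 0.1                                   # arbitrary initial drive level
drive = next_drive(drive, setpoint_next=2.0, setpoint_prev=2.0,
                   measured=1.6, compression=2.0)  # illustrative numbers
print(f"{drive:.4f}")
```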

R&D Efficiency Analysis Case of the Machine Tools Industry by Using DEA (DEA를 활용한 민간 기업의 R&D 효율성 분석 사례: 공작기계 A사를 중심으로)

  • Jeon, Soo-Jin; Lee, Jin-Soo; Hong, Jae-Bum
    • Journal of Technology Innovation / v.24 no.4 / pp.27-53 / 2016
  • This case study analyzed the efficiency of 79 R&D projects performed within one private research center in the machine tools industry. DEA was used for the efficiency analysis. The input variables were R&D investment expense and man-months; the output variables were the achievement rate on the target development period and the expected net sales within five years. The samples were divided into product development, prior technology development, and control technology development. The key result is that prior technology development showed the lowest efficiency because of its high uncertainty: it was difficult to set its goals and to make specific plans. With respect to scale, the proportions of CRS (constant returns to scale) were 34.6%, 14.3%, and 38.9% for product development, prior technology, and control technology, respectively. For IRS (increasing returns to scale), they were 53.8%, 85.7%, and 38.9%, and for DRS (decreasing returns to scale), 11.5%, 0%, and 22.2%, respectively. On the whole, insufficient input was more problematic in this case than excessive input, which indicates a lack of investment in R&D. Prior technology development can be a source of a company's future competitiveness. To operate an inefficient DMU efficiently, the optimal input level, derived from comparison with its reference group, should be managed.
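
A minimal sketch of an input-oriented CCR (constant-returns-to-scale) DEA model of the kind used in such studies, solved as one linear program per project; the data values are placeholders, and the abstract does not state which DEA variant the authors ran:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k.

    X: (n_dmus, n_inputs),  e.g. R&D expense, man-months
    Y: (n_dmus, n_outputs), e.g. schedule achievement rate, expected net sales
    min theta  s.t.  sum_j lam_j x_j <= theta x_k,  sum_j lam_j y_j >= y_k,  lam >= 0
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]               # variables: [theta, lambda_1..lambda_n]
    A_in = np.c_[-X[k], X.T]                  # inputs:  X^T lam - theta x_k <= 0
    A_out = np.c_[np.zeros(s), -Y.T]          # outputs: -Y^T lam <= -y_k
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

X = np.array([[100, 12], [80, 10], [120, 20], [60, 9]], float)   # placeholder inputs
Y = np.array([[0.9, 50], [0.8, 55], [0.7, 40], [0.95, 30]], float)
for k in range(len(X)):
    print(f"project {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")
```

An efficiency of 1.0 marks a project on the frontier; for an inefficient DMU, the optimal lambda weights identify the reference group mentioned in the abstract.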