• Title/Summary/Keyword: true random number

45 search results (processing time: 0.026 seconds)

Optimization of Stochastic System Using Genetic Algorithm and Simulation

  • 유지용
    • Proceedings of the Korea Society for Simulation Conference
    • /
    • 1999.10a
    • /
    • pp.75-80
    • /
    • 1999
  • This paper presents a new method for finding an optimal solution for a stochastic system. The method combines a Genetic Algorithm (GA) with simulation: the GA searches for new alternatives, and simulation evaluates them. A stochastic system has one or more random variables as inputs, and random inputs lead to random outputs. Since the outputs are random, they can be considered only as estimates of the true characteristics of the system, and these estimates may differ greatly from the corresponding real characteristics. Multiple replications are therefore needed to obtain reliable information on the system, and the output data must be analyzed to find an optimal solution; this requires too much computation to be practical. We address the problem of reducing that computation. The procedure in this paper exploits the iterative character of the GA to reduce the number of replications: the same chromosomes can exist in both past and present generations, so computation can be reduced by reusing the information already accumulated for chromosomes that recur across generations.

  • PDF
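The computation-saving idea in the abstract above, reusing simulation output for chromosomes that recur across generations, can be sketched as a toy example. Everything below (the bit-string encoding, the noisy fitness function, the population size) is an illustrative assumption, not the paper's actual procedure:

```python
import random

# Hypothetical noisy "simulation": fitness of a bit-string plus Gaussian noise.
def simulate(chromosome):
    return sum(chromosome) + random.gauss(0, 1)

# Replication cache keyed by chromosome: chromosomes that reappear in later
# generations reuse the replications already accumulated for them.
cache = {}

def estimate_fitness(chromosome, replications=5):
    runs = cache.setdefault(tuple(chromosome), [])
    while len(runs) < replications:        # run only the replications still missing
        runs.append(simulate(chromosome))
    return sum(runs) / len(runs)

random.seed(0)
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
for generation in range(3):
    ranked = sorted(population, key=estimate_fitness, reverse=True)
    parents = ranked[:3]
    # one-point crossover; surviving parents hit the cache in the next generation
    children = [p1[:4] + p2[4:] for p1, p2 in zip(parents, parents[1:] + parents[:1])]
    population = parents + children

best = max(population, key=estimate_fitness)
```

Because elite parents survive into the next generation unchanged, their fitness calls are answered from the cache rather than by new simulation runs, which is the replication saving the abstract describes.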

Simulation Optimization for Optimal Design of a Stochastic Manufacturing System Using a Genetic Algorithm (추계적 생산시스템의 최적 설계를 위한 유전자 알고리즘을 이용한 시뮬레이션 최적화 기법 개발)

  • 이영해;유지용;정찬석
    • Journal of the Korea Society for Simulation
    • /
    • v.9 no.1
    • /
    • pp.93-108
    • /
    • 2000
  • A stochastic manufacturing system has one or more random variables as inputs, which lead to random outputs. Since the outputs are random, they can be considered only as estimates of the true characteristics of the system, and these estimates may differ greatly from the corresponding real characteristics. Multiple replications are necessary to get reliable information on the system, and the output data must be analyzed to obtain an optimal solution; in practice this requires too much computation time. In this paper a GA method, named Stochastic Genetic Algorithm (SGA), is proposed and tested to find the optimal solution quickly and efficiently by reducing the number of replications.

  • PDF

Natural frequency characteristics of composite plates with random properties

  • Salim, S.;Iyengar, N.G.R.;Yadav, D.
    • Structural Engineering and Mechanics
    • /
    • v.6 no.6
    • /
    • pp.659-671
    • /
    • 1998
  • Exercising complete control over all aspects of any manufacturing/fabrication process is very difficult, leading to uncertainties in the material properties and geometric dimensions of structural components. This is especially true for laminated composites because of the large number of parameters associated with their fabrication. When the basic parameters such as elastic modulus, density and Poisson's ratio are random, the derived response characteristics such as deflections, natural frequencies, buckling loads, stresses and strains are also random, being functions of the basic random system parameters. In this study the basic elastic properties of a composite lamina are assumed to be independent random variables. A perturbation formulation is used to model the random parameters, assuming the dispersions are small compared to the mean values. The system equations are analyzed to obtain the mean and the variance of the plate natural frequencies. Several application problems of free vibration analysis of composite plates employing the proposed method are discussed. The analysis indicates that at times it may be important to include the effect of randomness in the material properties of composite laminates.
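The perturbation idea in the abstract above, propagating small dispersions in the basic properties to the variance of a natural frequency, can be sketched for a simple isotropic plate. This is an illustrative stand-in: the paper treats laminated composites, and the plate formula, aluminium-like mean values, and 5% dispersions below are assumptions.

```python
import math

# First-order perturbation: for small dispersions,
#   Var(f) ≈ Σ (∂f/∂x_i)² Var(x_i), evaluated at the mean values.
def plate_frequency(E, rho, nu, h=0.005, a=1.0):
    """Fundamental frequency (Hz) of a simply supported square isotropic plate."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))                 # flexural rigidity
    omega = (2.0 * math.pi**2 / a**2) * math.sqrt(D / (rho * h))
    return omega / (2.0 * math.pi)

means = {"E": 70e9, "rho": 2700.0, "nu": 0.33}            # assumed mean properties
cov = 0.05                                                 # 5% coefficient of variation
variances = {k: (cov * v) ** 2 for k, v in means.items()}

f_mean = plate_frequency(**means)
var_f = 0.0
for name, mu in means.items():
    step = 1e-6 * mu                                       # central finite difference
    hi = dict(means); hi[name] = mu + step
    lo = dict(means); lo[name] = mu - step
    dfdx = (plate_frequency(**hi) - plate_frequency(**lo)) / (2.0 * step)
    var_f += dfdx**2 * variances[name]

std_f = math.sqrt(var_f)
```

Since the frequency scales as the square root of E and of 1/rho, a 5% dispersion in each basic property produces a frequency dispersion of roughly 3.6%, which is the kind of effect the abstract argues can matter.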

A Study on the Design of a Beta Ray Sensor for True Random Number Generators (진성난수 생성기를 위한 베타선 센서 설계에 관한 연구)

  • Kim, Young-Hee;Jin, HongZhou;Park, Kyunghwan;Kim, Jongbum;Ha, Pan-Bong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.6
    • /
    • pp.619-628
    • /
    • 2019
  • In this paper, we design a beta-ray sensor for a true random number generator. Instead of biasing the gate of the PMOS feedback transistor to a DC voltage, the current flowing through the PMOS feedback transistor is mirrored through a current-bias circuit designed to be insensitive to PVT fluctuations, thereby minimizing fluctuations in the signal voltage of the CSA. In addition, by using the constant current supplied by the BGR (bandgap reference) circuit, the signal voltage is charged to the VCOM voltage level, reducing the variation in charge time and enabling high-speed sensing. For the beta-ray sensor designed in a 0.18㎛ CMOS process, corner simulation shows that the minimum and maximum signal voltages of the CSA circuit are 205mV and 303mV, respectively, and that the minimum and maximum widths of the pulses, generated by comparing the output of the pulse shaper with the threshold voltage (VTHR) of the comparator, are 0.592㎲ and 1.247㎲, respectively, enabling high-speed detection at 100kHz. The sensor is thus designed to count up to 100 kilo pulses per second.

A Note on Performance of Conditional Akaike Information Criteria in Linear Mixed Models

  • Lee, Yonghee
    • Communications for Statistical Applications and Methods
    • /
    • v.22 no.5
    • /
    • pp.507-518
    • /
    • 2015
  • Selecting a linear mixed model is not easy, since the main interest in model building can differ and the number of parameters in the model may not be clearly defined. In this paper, the performance of the conditional Akaike Information Criterion and its bias-corrected version is compared with that of the marginal Bayesian and Akaike Information Criteria through a simulation study. The results indicate that the bias-corrected conditional Akaike Information Criterion shows promising performance when the candidate models exclude large models containing the true model, but it prefers over-parametrized models more strongly as the set of candidate models grows. The marginal Bayesian and Akaike Information Criteria also have some difficulty selecting the true model when the design for the random effects is nested.

Error propagation in 2-D self-calibration algorithm (2차원 자가 보정 알고리즘에서의 불확도 전파)

  • 유승봉;김승우
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.434-437
    • /
    • 2003
  • Evaluation of the patterning accuracy of e-beam lithography machines requires a high-precision inspection system capable of measuring the true xy-locations of fiducial marks generated by the e-beam machine under test. Fiducial marks are fabricated on a single photomask over the entire working area in the form of equally spaced two-dimensional grids. In performing the evaluation, the principles of self-calibration make it possible to determine the deviations of fiducial marks from their nominal xy-locations precisely, unaffected by the motion errors of the inspection system itself. However, only repeatable motion errors can be eliminated; random motion errors encountered in probing the locations of fiducial marks are not removed. Even worse, a random error in the measurement of a single mark propagates and affects the determination of the locations of other marks, a phenomenon that in fact limits the ultimate calibration accuracy of e-beam machines. In this paper, we describe an uncertainty analysis conducted to investigate how random errors affect the final result of self-calibration of e-beam machines when one uses an optical inspection system equipped with high-resolution microscope objectives and precision xy-stages. The guide to uncertainty analysis recommended by the International Organization for Standardization is faithfully followed, along with the necessary sensitivity analysis. The uncertainty analysis reveals that among the dominant components of the patterning accuracy of e-beam lithography, the rotationally symmetrical component is most significantly affected by random errors, whose propagation becomes more severe in a cascading manner as the number of fiducial marks increases.

  • PDF

Determination of sample size to serological surveillance plan for pullorum disease and fowl typhoid (추백리-가금티푸스의 혈청학적 모니터링 계획수립을 위한 표본크기)

  • Pak, Son-Il;Park, Choi-Kyu
    • Korean Journal of Veterinary Research
    • /
    • v.48 no.4
    • /
    • pp.457-462
    • /
    • 2008
  • The objective of this study was to determine appropriate sample sizes under different assumptions about diagnostic test characteristics and true prevalences when designing a serological surveillance plan for pullorum disease and fowl typhoid in domestic poultry production. The number of flocks and the total number of chickens to be sampled were obtained to provide 95% confidence of detecting at least one infected flock, taking imperfect diagnostic tests into account. Due to a lack of reliable data, the within-infected-flock prevalence (WFP) was assumed to follow a distribution with minimum 1%, most likely 5% and maximum 9%, and the true flock prevalence to be 0.1%, 0.5% and 1%, in order. Sensitivity was modeled using the Pert distribution: minimum 75%, most likely 80% and maximum 90% for the plate agglutination test, and 80%, 85% and 90% for the ELISA test. Similarly, specificity was modeled as 85%, 90% and 95% for the plate agglutination test and 90%, 95% and 99% for the ELISA test. In accordance with the current regulation, flock-level test characteristics were calculated assuming that 30 samples are taken per flock. The model showed that the current annual testing plan of 112,000 samples, which is based on random selection of flocks, is far beyond the sample size estimated in this study. The sample size was further reduced with increased sensitivity and specificity of the test and decreased WFP. The effect of increasing samples per flock on the total number to be sampled, and the optimal combination of test sensitivity and specificity for the purpose of the surveillance, are discussed with regard to cost.
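The sample-size logic in the abstract above can be sketched with the standard freedom-from-disease approximation. The numbers below follow the abstract's "most likely" assumptions (5% within-flock prevalence, 80% sensitivity for the plate test, 0.5% flock prevalence, 30 samples per flock), but the closed-form formula is a simplification of the paper's stochastic simulation model:

```python
import math

# Probability that at least one bird in an infected flock tests positive,
# given within-flock prevalence and test sensitivity.
def flock_sensitivity(within_prev, se, samples_per_flock=30):
    p_pos = within_prev * se                       # P(a random bird tests positive)
    return 1.0 - (1.0 - p_pos) ** samples_per_flock

# Flocks needed so that P(detect at least one infected flock) >= confidence.
def flocks_to_sample(flock_prev, hse, confidence=0.95):
    p_detect = flock_prev * hse                    # P(a sampled flock is detected)
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_detect))

hse = flock_sensitivity(within_prev=0.05, se=0.80)      # "most likely" plate-test case
n_flocks = flocks_to_sample(flock_prev=0.005, hse=hse)  # 0.5% flock prevalence
total_birds = n_flocks * 30
```

Under these assumptions the total comes to roughly 25,000 birds, an order of magnitude below the 112,000 annual tests the abstract says the current plan requires, which illustrates its conclusion.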

A machine learning informed prediction of severe accident progressions in nuclear power plants

  • JinHo Song;SungJoong Kim
    • Nuclear Engineering and Technology
    • /
    • v.56 no.6
    • /
    • pp.2266-2273
    • /
    • 2024
  • A machine learning platform is proposed for the diagnosis of severe accident progression in a nuclear power plant. To predict the key parameters for accident management, including lost signals, a long short-term memory (LSTM) network is proposed, with multiple accident scenarios used for training. Training and test data were produced by MELCOR simulation of the Fukushima Daiichi Nuclear Power Plant (FDNPP) accident at unit 3. Feature variables were selected among plant parameters, with the importance ranking determined by a recursive feature elimination technique using RandomForestRegressor. To answer the question of whether a reduced-order ML model could predict the complex transient response, we performed a systematic sensitivity study over the choices of target variables, the combination of training and test data, the number of feature variables, and the number of neurons, to evaluate the performance of the proposed ML platform. The number of sensitivity cases was chosen to guarantee a 95% tolerance limit with a 95% confidence level based on Wilks' formula, to quantify the uncertainty of the predictions. The results of these investigations indicate that the proposed ML platform consistently predicts the target variable; the median and mean predictions were close to the true value.
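The Wilks' formula criterion mentioned in the abstract, choosing the number of runs so that the sample extreme bounds the 95th percentile with 95% confidence, reduces to a one-line calculation in the first-order, one-sided case:

```python
import math

# First-order, one-sided Wilks' formula: the smallest n such that
# 1 - beta**n >= gamma, i.e. the largest of n runs covers the
# beta-quantile with confidence gamma.
def wilks_sample_size(beta=0.95, gamma=0.95):
    return math.ceil(math.log(1.0 - gamma) / math.log(beta))

n_runs = wilks_sample_size()          # 59 for the 95%/95% one-sided case
```

This is why 59 code runs is the canonical count for 95%/95% nuclear-safety uncertainty studies; raising the confidence to 99% pushes the count to 90.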

A Novel GNSS Spoofing Detection Technique with Array Antenna-Based Multi-PRN Diversity

  • Lee, Young-Seok;Yeom, Jeong Seon;Noh, Jae Hee;Lee, Sang Jeong;Jung, Bang Chul
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.10 no.3
    • /
    • pp.169-177
    • /
    • 2021
  • In this paper, we propose a novel global navigation satellite system (GNSS) spoofing detection technique based on array-antenna direction-of-arrival (DoA) estimation of the satellites and the spoofer. Specifically, we consider a sophisticated GNSS spoofing attack scenario in which the spoofer can accurately mimic multiple pseudo-random number (PRN) signals, since the spoofer has its own GNSS receiver and knows the location of the target receiver in advance. The target GNSS receiver precisely estimates the DoA of all PRN signals using compressed-sensing-based orthogonal matching pursuit (OMP), even with a small number of samples, and performs spoofing detection from the DoA estimation results of all PRN signals. In addition, considering the initial situation of a sophisticated spoofing attack, we designed the algorithm to have high spoofing detection performance regardless of the relative spoofing signal power; we therefore do not adopt the assumption that the power of the spoofing signal is about 3 dB greater than that of the authentic signal. We then introduce design parameters to achieve a high true-detection probability and a low false-alarm probability in tandem, by considering the conditions for the presence of signal sources and the proximity of the DoA between authentic signals. Through computer simulations, we compare the DoA estimation performance of the conventional signal-direction estimation method and the OMP algorithm with only a few samples. Finally, we show that in the sophisticated spoofing attack scenario, the proposed spoofing detection technique using the OMP-based estimated DoA of all PRN signals outperforms the conventional spoofing detection scheme in terms of true-detection and false-alarm probability.
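The OMP step at the heart of the abstract above can be illustrated with a toy sparse-recovery example. The dictionary here is orthonormal, so the least-squares update collapses to a dot product; a real DoA dictionary of steering vectors would require a full least-squares solve at each iteration, and the paper's antenna-array setup is not reproduced here:

```python
# Toy orthogonal matching pursuit: greedily pick the dictionary atoms
# that best explain the signal, subtracting each projection in turn.
def omp(dictionary, signal, sparsity):
    residual = list(signal)
    support = []
    for _ in range(sparsity):
        # select the atom most correlated with the current residual
        scores = [abs(sum(a * r for a, r in zip(atom, residual)))
                  for atom in dictionary]
        best = max(range(len(dictionary)), key=lambda i: scores[i])
        support.append(best)
        coeff = sum(a * r for a, r in zip(dictionary[best], residual))
        residual = [r - coeff * a for a, r in zip(dictionary[best], residual)]
    return sorted(support)

# 4 orthonormal atoms (standard basis of R^4); the signal mixes atoms 1 and 3.
atoms = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
signal = [0.0, 2.0, 0.0, -1.5]
found = omp(atoms, signal, sparsity=2)   # -> [1, 3]
```

In the spoofing-detection setting, the recovered support plays the role of the estimated DoAs of the PRN signals.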

A Study on the Establishment of Entropy Source Model Using Quantum Characteristic-Based Chips (양자 특성 기반 칩을 활용한 엔트로피 소스 모델 수립 방법에 관한 연구)

  • Kim, Dae-Hyung;Kim, Jubin;Ji, Dong-Hwa
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.140-142
    • /
    • 2021
  • Mobile communication technology beyond the 5th generation requires high-speed, hyper-connected, low-latency communication. To meet the technical requirements for secure hyper-connectivity, low-spec IoT devices, which sit at the end of IoT services, must be able to provide the same level of security as high-spec servers. To perform these security functions, cryptographic keys must have the degree of stability that cryptographic algorithms require. Cryptographic keys are usually generated by cryptographic random number generators, which in turn need good noise sources; hardware random number generators (TRNGs) are used because it is difficult to obtain sufficient noise sources in a low-spec device environment. In this paper we use a chip based on the quantum characteristic that the decay of radioactive isotopes is unpredictable, and we present a variety of TRNG methods for obtaining an entropy source in the form of a binary bit series. In addition, we conduct the NIST SP 800-90B test on the entropy of the output values generated by each TRNG to compare the amount of entropy across the methods.

  • PDF
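The NIST SP 800-90B assessment mentioned in the abstract comprises many estimators; the simplest, the most-common-value min-entropy estimate, can be sketched as follows (an illustration of the idea, not the full test suite):

```python
import math
from collections import Counter

# Most-common-value min-entropy estimate in the spirit of NIST SP 800-90B:
# bound the probability of the most frequent symbol from above, then take
# -log2 of that bound to get a conservative min-entropy per sample.
def mcv_min_entropy(samples):
    counts = Counter(samples)
    p_hat = counts.most_common(1)[0][1] / len(samples)
    # 99% upper confidence bound on the most-common-value probability
    p_u = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1.0 - p_hat) / len(samples)))
    return -math.log2(p_u)

# A perfectly balanced toy bit sequence yields close to 1 bit of
# min-entropy per sample; a heavily biased one yields far less.
h_min = mcv_min_entropy([0, 1] * 5000)
```

An ideal TRNG output would score near 1 bit per bit under this estimator, which is the comparison the abstract describes performing across its entropy-source variants.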