• Title/Summary/Keyword: Iteration processes

Measurement of Dynamic Elastic Constants of RPV Steel Weld due to Localized Microstructural Variation (원자로 용접부의 국부적 미세조직 변화에 따른 동적탄성계수 측정)

  • Cheong, Yong-Moo;Kim, Joo-Hag;Hong, Jun-Hwa;Jung, Hyun-Kyu
    • Journal of the Korean Society for Nondestructive Testing / v.20 no.5 / pp.390-396 / 2000
  • The dynamic elastic constants of the simulated weld HAZ (heat-affected zone) of SA 508 Class 3 reactor pressure vessel (RPV) steel were investigated by resonant ultrasound spectroscopy (RUS). The resonance frequencies of rectangular parallelepiped samples were calculated from initial estimates of the elastic stiffnesses $c_{11}$, $c_{12}$, and $c_{44}$ under an assumption of isotropy, together with the sample dimensions and density. By comparing the calculated resonance frequencies with those measured by RUS, very accurate elastic constants of SA 508 Class 3 steel were determined through iteration and convergence processes. Clear differences in Young's modulus and shear modulus were observed between samples with different thermal cycles and microstructures: the Young's modulus and shear modulus of samples with fine-grained bainite were higher than those of samples with coarse-grained tempered martensite. This tendency was confirmed by other results such as micro-hardness tests.
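
A minimal sketch of the iterate-and-converge scheme this abstract describes, assuming a toy forward model in place of a real Rayleigh-Ritz resonance solver: starting from initial guesses of $c_{11}$, $c_{12}$, and $c_{44}$, the constants are adjusted by least squares until the calculated resonance frequencies match the measured ones. The mode factors inside `resonance_freqs` are illustrative assumptions, not the paper's solver.

```python
import numpy as np
from scipy.optimize import least_squares

def resonance_freqs(c, dims, rho):
    # Toy stand-in for the forward solver: a few modes whose frequencies
    # scale as sqrt(stiffness / density) / length.  A real RUS code solves
    # the elastic eigenvalue problem of the parallelepiped instead.
    c11, c12, c44 = c
    lx, ly, lz = dims
    vs = np.sqrt(c44 / rho)                   # shear wave speed
    vl = np.sqrt(c11 / rho)                   # longitudinal wave speed
    vt = np.sqrt((c11 - c12) / (2 * rho))     # second shear combination
    return 0.5 * np.array([vs/lx, vs/ly, vl/lx, vl/lz, vt/ly, vt/lz])

def fit_elastic_constants(f_measured, c0, dims, rho):
    # Iterative least-squares: minimize the misfit between calculated and
    # measured resonance frequencies over (c11, c12, c44) until convergence.
    residual = lambda c: resonance_freqs(c, dims, rho) - f_measured
    return least_squares(residual, x0=np.asarray(c0, float)).x

# Usage: recover constants from synthetic "measurements" and a rough guess.
dims, rho = (0.01, 0.012, 0.015), 7850.0      # m, kg/m^3 (assumed sample)
c_true = (270e9, 110e9, 80e9)
f_meas = resonance_freqs(c_true, dims, rho)
print(fit_elastic_constants(f_meas, (250e9, 100e9, 70e9), dims, rho))
```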

Comparison of Numerical Methods for Obtaining 2-D Impurity Profile in Semiconductor (반도체 내에서의 2차원 불순물 분포를 얻기 위한 수치해법의 비교)

  • Yang, Yeong-Il;Gyeong, Jong-Min;O, Hyeong-Cheol
    • Journal of the Korean Institute of Telematics and Electronics / v.22 no.3 / pp.95-102 / 1985
  • An efficient numerical scheme for solving the two-dimensional diffusion problem used to model impurity profiles in semiconductors is described. A unique combination of the ADI (Alternating Direction Implicit) method and Gauss elimination reduces the CPU time for most diffusion processes by a factor of 3 compared with other iteration schemes such as SOR (Successive Over-Relaxation) or Stone's iterative method, without additional storage requirements. Various numerical schemes were compared for 2-D as well as 1-D diffusion profiles in terms of CPU time while keeping the relative error within 0.001%. Good agreement was obtained between the 1-D and 2-D simulated profiles, as well as between the 1-D simulated profile and experiment.
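
A minimal sketch of the ADI-plus-direct-solve idea, under assumed grid, diffusivity, and step values: each half time step is implicit in one direction only, so the 2-D diffusion update reduces to independent tridiagonal systems solved directly by banded Gauss elimination rather than by SOR-style iteration.

```python
import numpy as np
from scipy.linalg import solve_banded

def adi_step(u, d, dt, h):
    """One Peaceman-Rachford ADI step for du/dt = d*(u_xx + u_yy) on a square
    grid with fixed (Dirichlet) boundary values; returns the updated field."""
    n = u.shape[0]
    r = d * dt / (2 * h * h)
    # Tridiagonal system (1 + 2r on the diagonal, -r off it) in banded form;
    # solve_banded performs the direct elimination (Thomas algorithm).
    ab = np.zeros((3, n - 2))
    ab[0, 1:] = -r
    ab[1, :] = 1 + 2 * r
    ab[2, :-1] = -r

    def half_step(v):
        # Implicit along axis 0, explicit along axis 1.
        w = v.copy()
        for j in range(1, n - 1):
            rhs = v[1:-1, j] + r * (v[1:-1, j-1] - 2*v[1:-1, j] + v[1:-1, j+1])
            rhs[0] += r * v[0, j]      # boundary contributions to the
            rhs[-1] += r * v[-1, j]    # implicit direction
            w[1:-1, j] = solve_banded((1, 1), ab, rhs)
        return w

    return half_step(half_step(u).T).T   # implicit in x, then in y

# Usage: spread an impurity spike for a few time steps.
u = np.zeros((65, 65))
u[32, 32] = 1.0
for _ in range(100):
    u = adi_step(u, d=1.0, dt=1e-4, h=1.0/64)
```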

Mass Transfer Model and Coefficient on Biotrickling Filtration for Air Pollution Control (대기오염제어를 위한 생물살수여과법에서 물질전달 Model과 계수에 관한 연구)

  • Won, Yang-Soo;Jo, Wan-Keun
    • Korean Chemical Engineering Research / v.53 no.4 / pp.489-495 / 2015
  • A fundamental mathematical model of mass transfer processes has been used to understand the air pollution control process in biotrickling filtration and to evaluate the gas/liquid (trickling liquid), gas/solid (biomass), and liquid/solid mass transfer coefficients from experimental results and model calculations for selected operating conditions. Mass transfer models using a steady-state mass balance for the gas/liquid system and dynamic mass balances for the gas/solid and liquid/solid systems in biotrickling filters were established and discussed. The model treats the reactor as a series of finite sections, for each of which the dynamic gas/solid and liquid/solid mass balances were solved by a numerical analysis code (numerical iteration). To determine the mass transfer coefficients ($K_La$) for gas/liquid, gas/solid, and liquid/solid transfer in a biotrickling filter, the calculated results of the mass balance equations were optimized to coincide with the experimental results for the selected operating conditions. Finally, this study contributes experimental methods and a mathematical model for determining the mass transfer coefficients in biotrickling filtration for air pollution control.
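
A minimal sketch of the fitting step described above: a single-section dynamic mass balance dC/dt = KLa (C* - C) is integrated and KLa is tuned until the model output coincides with measured concentrations. The measured data and the one-section model are illustrative assumptions, not the paper's full multi-section reactor model.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize_scalar

t = np.linspace(0, 60, 13)                 # min, sampling times (assumed)
c_meas = 8.0 * (1 - np.exp(-0.05 * t))     # synthetic "measured" data
c_star = 8.0                               # equilibrium concentration

def model(kla):
    # Dynamic mass balance for one finite section of the filter bed.
    dcdt = lambda c, _t: kla * (c_star - c)
    return odeint(dcdt, y0=0.0, t=t).ravel()

# Optimize KLa so the model output coincides with the measurements.
fit = minimize_scalar(lambda k: np.sum((model(k) - c_meas) ** 2),
                      bounds=(1e-4, 1.0), method="bounded")
print(f"fitted KLa = {fit.x:.4f} 1/min")   # ~0.05 for this synthetic data
```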

Runtime-Guard Coverage Guided Fuzzer Avoiding Deoptimization for Optimized Javascript Functions (최적화 컴파일된 자바스크립트 함수에 대한 최적화 해제 회피를 이용하는 런타임 가드 커버리지 유도 퍼저)

  • Kim, Hong-Kyo;Moon, Jong-sub
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.3 / pp.443-454 / 2020
  • The JavaScript engine is a module that receives JavaScript code as input and processes it, one of the many components loaded into web browsers to display web pages. Many fuzzing studies have been conducted because vulnerabilities in JavaScript engines can threaten the system security of end-users who run JavaScript through browsers. Some of these studies increased fuzzing efficiency by guiding test coverage in JavaScript engines, but no coverage-guided fuzzing of optimized, dynamically generated machine code had been attempted. Optimized JavaScript code is difficult to test iteratively through fuzzing because runtime guards deoptimize and discard the code whenever exceptional control flow occurs. To solve this problem, this paper proposes a method of fuzzing optimized machine code while avoiding deoptimization. In addition, we propose a method to measure runtime-guard coverage by dynamic binary instrumentation and to guide the fuzzer toward increased runtime-guard coverage. In our experiments, our method outperformed the existing method on two measures: runtime-guard coverage and iterations over time.
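
A minimal sketch of the coverage-guided loop the abstract describes: inputs that exercise previously unseen runtime guards are kept as seeds for further mutation. `run_instrumented` is a hypothetical harness standing in for the paper's dynamic-binary-instrumentation step, and the mutation is a toy byte flip.

```python
import random

def run_instrumented(sample: bytes) -> set[int]:
    """Hypothetical stand-in: execute the JS engine on `sample` under DBI and
    return the set of runtime-guard IDs hit without triggering deoptimization."""
    return {b % 16 for b in sample}            # toy guard IDs for illustration

def mutate(sample: bytes) -> bytes:
    data = bytearray(sample or b"seed")
    data[random.randrange(len(data))] ^= 0xFF  # simple byte flip
    return bytes(data)

def fuzz(seed: bytes, iterations: int) -> list[bytes]:
    corpus, seen_guards = [seed], set()
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        guards = run_instrumented(candidate)
        if guards - seen_guards:               # new runtime-guard coverage?
            seen_guards |= guards
            corpus.append(candidate)           # keep it for further mutation
    return corpus

corpus = fuzz(b"function f(x){return x+1}", 1000)
```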

Systolic Arrays for Lattice-Reduction-Aided MIMO Detection

  • Wang, Ni-Chun;Biglieri, Ezio;Yao, Kung
    • Journal of Communications and Networks / v.13 no.5 / pp.481-493 / 2011
  • Multiple-input multiple-output (MIMO) technology provides high data rates and enhanced quality of service for wireless communications. Since the benefits of MIMO come with a heavy computational load in detectors, the design of low-complexity suboptimum receivers is currently an active area of research. Lattice-reduction-aided detection (LRAD) has been shown to be an effective low-complexity method with near-maximum-likelihood performance. In this paper, we advocate the use of systolic array architectures for MIMO receivers, and in particular we exhibit one based on LRAD. The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and the ensuing linear detections or successive spatial-interference cancellations can be located in the same array, which is considerably hardware-efficient. Since the conventional form of the LLL algorithm is not immediately suitable for parallel processing, two modified LLL algorithms are considered here for the systolic array. The LLL algorithm with full-size reduction is one version more suitable for parallel processing. Another variant is the all-swap lattice-reduction (ASLR) algorithm for complex-valued lattices, which processes all lattice basis vectors simultaneously within one iteration. Our novel systolic array can operate both algorithms with different external logic controls. In order to simplify the systolic array design, we replace the Lovász condition in the definition of an LLL-reduced lattice with the looser Siegel condition. Simulation results show that for LR-aided linear detections, the bit-error-rate performance is still maintained with this relaxation. Comparisons between the two algorithms in terms of bit-error-rate performance and average field-programmable gate array processing time in the systolic array show that ASLR is the better choice for a systolic architecture, especially for systems with a large number of antennas.
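
A minimal sketch of LLL lattice reduction with the swap test made pluggable, to illustrate the Lovász-versus-Siegel relaxation mentioned above. This is a plain sequential LLL on real-valued, full-rank bases for clarity, not the paper's parallel/systolic or complex-valued variants; since size reduction keeps |mu| <= 1/2, replacing delta - mu^2 with the constant delta - 1/4 gives the looser Siegel test.

```python
import numpy as np

def lll_reduce(basis, delta=0.75, siegel=False):
    """Sequential LLL; set siegel=True to swap on the looser Siegel test."""
    b = np.array(basis, dtype=float)           # rows are basis vectors
    n = len(b)

    def gram_schmidt():
        g, mu = b.copy(), np.zeros((n, n))
        for i in range(n):
            for j in range(i):
                mu[i, j] = b[i] @ g[j] / (g[j] @ g[j])
                g[i] -= mu[i, j] * g[j]
        return g, mu

    g, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):         # size-reduce b_k
            q = int(round(mu[k, j]))
            if q:
                b[k] -= q * b[j]
                mu[k, :j] -= q * mu[j, :j]
                mu[k, j] -= q
        gk, gk1 = g[k] @ g[k], g[k - 1] @ g[k - 1]
        if siegel:
            swap = gk < (delta - 0.25) * gk1               # Siegel (looser)
        else:
            swap = gk < (delta - mu[k, k - 1] ** 2) * gk1  # Lovász
        if swap:
            b[[k - 1, k]] = b[[k, k - 1]]
            g, mu = gram_schmidt()             # recompute lazily after a swap
            k = max(k - 1, 1)
        else:
            k += 1
    return b

print(lll_reduce([[201, 37], [1648, 297]]))    # toy 2-D basis
```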

Finding the Workflow Critical Path in the Extended Structural Workflow Schema (확장된 구조적 워크플루우 스키마에서 워크플로우 임계 경로의 결정)

  • Son, Jin-Hyeon;Kim, Myeong-Ho
    • Journal of KIISE: Databases / v.29 no.2 / pp.138-147 / 2002
  • The concept of the critical path in a workflow is important because it can be utilized in many issues in workflow systems, e.g., workflow resource management and workflow time management. However, the critical path in the context of workflows has not been much addressed in the past, because control flows in a workflow, which generally include sequence, parallel, alternative, iteration, and so on, are much more complex than those in an ordinary graph or network. In this paper we first describe our workflow model, which has a rich set of workflow control constructs. These provide sufficient expressive power for modeling the growing complexity of most of today's business processes. Then, we propose a method to systematically determine the critical path in a workflow schema built from the workflow control constructs described in our workflow model.
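
A minimal sketch of finding a critical (longest-duration) path in a plain DAG of activities by dynamic programming over a topological order. This handles only sequence/parallel structure; the paper's method additionally treats alternative and iteration constructs, which are not modeled here, and the task durations are illustrative assumptions.

```python
from graphlib import TopologicalSorter

def critical_path(duration, edges):
    """duration: {task: time}; edges: list of (pred, succ) pairs."""
    preds = {t: [] for t in duration}
    for u, v in edges:
        preds[v].append(u)
    finish, choice = {}, {}
    # Visit tasks with all predecessors already finished.
    for t in TopologicalSorter({v: set(p) for v, p in preds.items()}).static_order():
        best = max(preds[t], key=lambda p: finish[p], default=None)
        finish[t] = duration[t] + (finish[best] if best else 0)
        choice[t] = best
    # Walk back from the latest-finishing task to recover the path.
    path, t = [], max(finish, key=finish.get)
    while t is not None:
        path.append(t)
        t = choice[t]
    return path[::-1], max(finish.values())

tasks = {"A": 3, "B": 2, "C": 4, "D": 1}
path, length = critical_path(tasks, [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")])
print(path, length)   # ['A', 'C', 'D'] 8
```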

A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging / v.43 no.5 / pp.459-467 / 2009
  • Purpose: The maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection in the ML-EM algorithm were parallelized with NVIDIA's technology. The time spent per iteration on the projection, on the errors between measured and estimated data, and on the backprojection was measured. Total time included the latency of data transmission between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed improved about 15-fold on the GPU. When the number of iterations was increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively; the improvement was about 135-fold and was caused by slowdown of the CPU-based computation after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in time per iteration due to the use of shared memory. Conclusion: GPU-based parallel computation significantly improved the computing speed and stability of ML-EM. The developed GPU-based ML-EM algorithm could be easily modified for other imaging geometries.
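
A minimal CPU sketch of the ML-EM update the abstract parallelizes on the GPU: each iteration is a forward projection, a measured-to-estimated data ratio, and a backprojection. The system matrix A, sinogram y, and iteration count are illustrative assumptions; a GPU version maps the same matrix products onto CUDA kernels.

```python
import numpy as np

def ml_em(A, y, n_iter=32, eps=1e-12):
    """A: (n_bins, n_voxels) system matrix, y: measured counts."""
    x = np.ones(A.shape[1])                 # uniform initial image
    sens = A.sum(axis=0)                    # sensitivity: backprojection of 1s
    for _ in range(n_iter):
        proj = A @ x                        # forward projection of estimate
        ratio = y / np.maximum(proj, eps)   # measured / estimated data
        x *= (A.T @ ratio) / np.maximum(sens, eps)  # backproject and update
    return x

# Toy usage: a random system and a known phantom.
rng = np.random.default_rng(0)
A = rng.random((256, 64))
x_true = rng.random(64)
x_rec = ml_em(A, A @ x_true, n_iter=32)
```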

Study on CGM-LMS Hybrid Based Adaptive Beam Forming Algorithm for CDMA Uplink Channel (CDMA 상향채널용 CGM-LMS 접목 적응빔형성 알고리듬에 관한 연구)

  • Hong, Young-Jin
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.9C / pp.895-904 / 2007
  • This paper proposes a robust sub-optimal smart antenna for Code Division Multiple Access (CDMA) basestations. It exploits the properties of the Least Mean Square (LMS) algorithm and the Conjugate Gradient Method (CGM) in its beamforming processes. The weight update takes place at symbol level, following the PN correlators of the receiver module, under the assumption that the post-correlation desired signal power is far larger than the power of each interfering signal. The proposed algorithm is simple, with a total computational load per snapshot as low as five times the number of antenna elements, O(5N). The output Signal to Interference plus Noise Ratio (SINR) of the proposed smart antenna system once the weight vector reaches steady state has been examined. Computer simulations show that the proposed beamforming algorithm improves the SINR significantly compared to the single-antenna case. The convergence of the weight vector has also been investigated, showing that the proposed hybrid algorithm performs better than either CGM or LMS alone during the initial stage of the weight-update iteration. The Bit Error Rate (BER) characteristics of the proposed array are also shown as the processor input Signal to Noise Ratio (SNR) varies.
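
A minimal sketch of the CGM-then-LMS hybrid idea: a few conjugate-gradient steps on sample estimates of R = E[x x^H] and p = E[x d*] give fast initial convergence, after which cheap per-snapshot LMS updates track the channel. This illustrates the hybrid concept only; the step size, switch point, and training signal are assumptions, not the paper's exact scheme.

```python
import numpy as np

def hybrid_beamformer(X, d, n_cgm=8, mu=0.01):
    """X: (n_snapshots, N) array snapshots; d: desired (pilot) symbols."""
    n, N = X.shape
    w = np.zeros(N, dtype=complex)
    # --- Stage 1: CGM on R w = p from an initial block of snapshots ---
    Xb, db = X[:n_cgm], d[:n_cgm]
    R = Xb.conj().T @ Xb / n_cgm
    p = Xb.conj().T @ db / n_cgm
    r = p - R @ w
    s = r.copy()
    for _ in range(min(n_cgm, N)):
        Rs = R @ s
        alpha = (r.conj() @ r) / (s.conj() @ Rs)
        w = w + alpha * s
        r_new = r - alpha * Rs
        if np.linalg.norm(r_new) < 1e-12:      # converged early
            break
        beta = (r_new.conj() @ r_new) / (r.conj() @ r)
        s, r = r_new + beta * s, r_new
    # --- Stage 2: LMS tracking on the remaining snapshots ---
    for x, dk in zip(X[n_cgm:], d[n_cgm:]):
        e = dk - np.conj(w) @ x                # error at beamformer output
        w = w + mu * np.conj(e) * x            # LMS weight update
    return w
```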

Control of pH Neutralization Process using Simulation Based Dynamic Programming in Simulation and Experiment (ICCAS 2004)

  • Kim, Dong-Kyu;Lee, Kwang-Soon;Yang, Dae-Ryook
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.620-626 / 2004
  • For general nonlinear processes, it is difficult to achieve good control with a linear model-based control method, so nonlinear controls are considered. Among the numerous approaches suggested, the most rigorous is dynamic optimization. Many general engineering problems like control, scheduling, and planning are expressed as functional optimization problems, and most of them can be recast as dynamic programming (DP) problems. However, DP is used in only a few cases because, as the size of the problem grows, the approach suffers from a computational burden known as the 'curse of dimensionality'. To avoid this problem, the Neuro-Dynamic Programming (NDP) approach was proposed by Bertsekas and Tsitsiklis (1996). Interest in the NDP approach for seriously nonlinear process control has grown, and the NDP algorithm has been applied to diverse areas such as retailing, finance, inventory management, and communication networks, and has been extended to chemical engineering. In the NDP approach, we select the optimal control input policy to minimize a cost computed as the sum of the current stage cost and the cost of future stages starting from the next state; the cost value is a weighted sum of squared error and input movement. If, during the calculation of the optimal input policy, an approximate cost function built from simulation data is used with Bellman iteration, the computational burden can be relieved and the curse of dimensionality of DP can be overcome. How to construct a cost-to-go function with good approximation performance is a very important issue. A neural network is one of the eager learning methods, and it works as a global approximator of the cost-to-go function; in this algorithm, the training of the neural network is the important and difficult part, and it has a significant effect on control performance. To avoid the difficulty of neural network training, a lazy learning method like the k-nearest-neighbor method can be exploited; no training is necessary, but it requires more computation time and greater data storage. The pH neutralization process has long been taken as a representative benchmark problem of nonlinear chemical process control due to its nonlinearity and time-varying nature. In this study, the NDP algorithm was applied to a pH neutralization process. First, control of the pH neutralization process using the NDP algorithm was performed in simulations with various approximators; both global and local approximators were used for the NDP calculation. After that, NDP was verified on the real system in a pH neutralization experiment. The control results of the NDP algorithm were compared with those of the traditionally used PI controller, in both simulations and experiments. The comparison showed that NDP control was faster and better than PI control, and the NDP algorithm also performed well in cases with disturbances and multiple set-point changes.
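
A minimal sketch of the simulation-based DP idea above: cost-to-go values at sampled states are improved by Bellman iteration, with a k-nearest-neighbor (lazy) approximator supplying the cost-to-go at states outside the sample set. The dynamics, cost weights, and discrete input grid are illustrative assumptions, not the paper's pH model.

```python
import numpy as np

rng = np.random.default_rng(0)
states = rng.uniform(-2, 2, size=(200, 1))   # sampled (simulation) states
inputs = np.linspace(-1, 1, 21)              # candidate control moves
J = np.zeros(len(states))                    # cost-to-go at sampled states

def step(x, u):
    return 0.9 * x + 0.5 * u                 # assumed toy dynamics

def stage_cost(x, u):
    return x**2 + 0.1 * u**2                 # weighted error + input movement

def J_knn(x, k=5):
    # Lazy (k-NN) approximation of the cost-to-go at an arbitrary state.
    idx = np.argsort(np.abs(states[:, 0] - x))[:k]
    return J[idx].mean()

for sweep in range(50):                      # Bellman iteration
    J_new = np.empty_like(J)
    for i, x in enumerate(states[:, 0]):
        # current stage cost + approximated cost of future stages
        J_new[i] = min(stage_cost(x, u) + 0.95 * J_knn(step(x, u))
                       for u in inputs)
    J = J_new

# Greedy one-step policy from the learned cost-to-go.
best_u = lambda x: min(inputs, key=lambda u: stage_cost(x, u) + 0.95 * J_knn(step(x, u)))
```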

Seismic interval velocity analysis on prestack depth domain for detecting the bottom simulating reflector of gas-hydrate (가스 하이드레이트 부존층의 하부 경계면을 규명하기 위한 심도영역 탄성파 구간속도 분석)

  • Ko Seung-Won;Chung Bu-Heung
    • The Korean Society for New and Renewable Energy: Conference Proceedings / 2005.06a / pp.638-642 / 2005
  • For gas hydrate exploration, long-offset multichannel seismic data were acquired using a 4 km streamer in the Ulleung basin of the East Sea. The dataset was processed to define the BSRs (Bottom Simulating Reflectors) and to estimate the amount of gas hydrate. Confirming the presence of BSRs and investigating their physical properties on seismic sections are important for gas hydrate detection. In particular, a faster interval velocity overlying a slower interval velocity indicates the likely presence of gas hydrate above a BSR and free gas underneath it. Consequently, estimation of correct interval velocities and analysis of their spatial variations are critical processes for gas hydrate detection using seismic reflection data. Using Dix's equation, Root Mean Square (RMS) velocities can be converted into interval velocities. However, this is not a proper way to investigate interval velocities above and below a BSR, given that RMS velocities have poor resolution and accuracy and that the conversion assumes interval velocities increase with depth. Therefore, we incorporated Migration Velocity Analysis (MVA) software produced by Landmark Co. to estimate correct interval velocities in detail. MVA is a process that yields the velocities of sediments between layers using Common Mid Point (CMP) gathered seismic data. The CMP gathers for MVA should be produced after basic processing steps that enhance the signal-to-noise ratio of the primary reflections. A prestack depth-migrated section is produced using interval velocities, and the interval velocities are the key parameters governing the quality of the prestack depth migration section. The correctness of the interval velocities can be examined from the presence of Residual Move Out (RMO) on the CMP gathers: if there is no RMO, the peaks of the primary reflection events are flat in the horizontal direction for all offsets of the Common Reflection Point (CRP) gathers, which proves that the prestack depth migration was done with a correct velocity field. The tomographic inversion used in this study needs two initial inputs: a dataset obtained from preprocessing, with multiples and noise removed and partial stacking applied, and a depth-domain velocity model built by smoothing and editing the interval velocities converted from RMS velocities. After three iterations of the tomographic inversion, an optimum interval velocity field could be fixed. In conclusion, the final interval velocity around the BSR decreased abruptly from 2500 m/s to 1400 m/s, and the BSR appeared at a depth of about 200 m below the sea bottom.
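
A minimal sketch of the Dix conversion mentioned above: RMS stacking velocities and two-way times are converted layer by layer into interval velocities. The picked velocity/time pairs are illustrative values only, and this is exactly the step the abstract says is too crude near a BSR without further migration velocity analysis.

```python
import numpy as np

def dix_interval_velocities(t, v_rms):
    """t: two-way times (s) of picks, v_rms: RMS velocities (m/s) at those picks."""
    t, v = np.asarray(t, float), np.asarray(v_rms, float)
    num = t[1:] * v[1:] ** 2 - t[:-1] * v[:-1] ** 2
    den = t[1:] - t[:-1]
    return np.sqrt(num / den)          # interval velocity within each layer

t_picks = [0.5, 1.0, 1.5, 2.0]         # s, assumed picks
v_picks = [1500, 1700, 1900, 1950]     # m/s, assumed RMS velocities
print(dix_interval_velocities(t_picks, v_picks))
```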
