• Title/Summary/Keyword: Iteration processes

The Accuracy Design of LM Guide System in Machine Tools (공작기계 직선 베어링 안내면의 정도 설계에 관한 연구)

  • 김경호;박천홍;송창규;이후상;김승우
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2000.05a
    • /
    • pp.692-695
    • /
    • 2000
  • This paper is concerned with the accuracy design of LM guide systems in machine tools. The elastic deformation of the bearing is calculated by Hertz contact theory, and the motion error of the LM block is analyzed. A new algorithm based on block stiffness is proposed for analyzing the motion accuracy of the table; its chief advantage is analysis speed, because no iteration is needed to satisfy the equilibrium equations of the table. Motion errors of the table under an artificial form error of the rail are analyzed theoretically and measured experimentally. One of the two rails is bent by inserting a thickness gauge in the horizontal direction, and the resulting form error is measured with a gap sensor referenced against the other rail. Table motion errors are then predicted by the proposed algorithm and measured by laser interferometer, while varying the preload and the gauge thickness. The results show that the table motion errors are reduced to between 1/2 and 1/60 of the rail form error in its height and width directions, and that the effect of preload is almost negligible.
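
The no-iteration idea in this abstract (linearize each block's Hertzian contact into a stiffness, then solve the table's static equilibrium directly) can be sketched as follows; the block positions and stiffnesses below are hypothetical illustrations, not the paper's model:

```python
import numpy as np

def table_motion_error(x, e, k):
    """Vertical displacement z and pitch theta of a table riding on LM blocks.

    x: block positions along the table (m)
    e: rail form error at each block (m)
    k: linearized (Hertzian) block stiffness at each block (N/m)

    With linearized stiffnesses, force and moment balance give a 2x2
    linear system in (z, theta), so no equilibrium iteration is needed.
    """
    x, e, k = (np.asarray(a, float) for a in (x, e, k))
    # Block compression for table pose (z, theta): d_i = z + theta*x_i - e_i
    # Equilibrium: sum(k_i*d_i) = 0 and sum(k_i*d_i*x_i) = 0
    A = np.array([[k.sum(), (k * x).sum()],
                  [(k * x).sum(), (k * x * x).sum()]])
    b = np.array([(k * e).sum(), (k * e * x).sum()])
    z, theta = np.linalg.solve(A, b)
    return z, theta
```

With uniform stiffness the table settles at the average of the block-wise rail errors, which is one way to see why the table error can be a small fraction of the rail's form error.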

Parameter Estimation of Single and Decentralized Control Systems Using Pulse Response Data

  • Cheres, Eduard;Podshivalov, Lev
    • Bulletin of the Korean Chemical Society
    • /
    • v.24 no.3
    • /
    • pp.279-284
    • /
    • 2003
  • The One Pass Method (OPM), previously presented for the identification of single-input single-output systems, is used to estimate the parameters of a Decentralized Control System (DCS). The OPM is a linear and therefore simple estimation method: all calculations are performed in one pass, and no initial parameter guess, iteration, or powerful search method is required. These features are of particular interest when the parameters of a multi-input multi-output model are estimated. The benefits of the OPM are revealed by comparing its results against those of two recently published methods based on pulse testing. The comparison uses two databases from the literature, which include single- and multi-input-output process transfer functions and relevant disturbances. The closed-loop responses of these processes are only roughly captured by the previous methods, whereas the OPM gives much more accurate results. When the parameters of a DCS are estimated, the OPM yields the same results in a multi- or single-structure implementation. This novel feature indicates that the OPM is a convenient and practical method for the parameter estimation of multivariable DCSs.
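
The one-pass flavor of such estimators can be illustrated with a toy example: integrating a first-order model once makes it linear in its parameters, so a single least-squares solve recovers them from pulse-response data with no initial guess or iteration. This sketches only the general idea, not the published OPM:

```python
import numpy as np

def fit_first_order(t, u, y):
    """Estimate gain K and time constant tau of tau*y' + y = K*u by
    integrating the model once (trapezoidal rule) and solving a single
    linear least-squares problem: no initial guess, no iteration."""
    t, u, y = (np.asarray(a, float) for a in (t, u, y))
    Iu = np.concatenate(([0.0], np.cumsum((u[1:] + u[:-1]) / 2 * np.diff(t))))
    Iy = np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(t))))
    # Integrated model: y(t) - y(0) = (K/tau)*Iu - (1/tau)*Iy,
    # which is linear in the parameters (K/tau, 1/tau).
    Areg = np.column_stack([Iu, -Iy])
    p, *_ = np.linalg.lstsq(Areg, y - y[0], rcond=None)
    return p[0] / p[1], 1.0 / p[1]
```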

Non-grey Radiative Transfer in the Solar Surface Convection

  • Bach, Kie-Hunn;Kim, Yong-Cheol
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.36 no.1
    • /
    • pp.34.1-34.1
    • /
    • 2011
  • Combining a detailed non-grey radiative transfer computation with three-dimensional hydrodynamics, we investigate a reliable numerical scheme for turbulent convection at the solar surface. The solar photosphere is an extremely turbulent region composed of partially ionized, compressible gases at high temperature. In particular, the super-adiabatic layer (SAL) near the photosphere is a shallow transition region where the energy transport changes steeply from convection to radiation. To describe the physical processes accurately, a detailed treatment of radiative transfer must be considered together with a high-resolution computation of the fluid dynamics. For a direct computation of the radiation field, Accelerated Lambda Iteration (ALI) methods have been applied to the hydrodynamical medium, incorporating the Opacity Distribution Function (ODF) as a realistic scheme for non-grey problems. The computational domain is a rectangular box of 42×3 Mm resolved on a 1202×190 grid, which covers several granules horizontally and 8~9 pressure scale heights vertically. Over several convective turnover times, the 3-D snapshots have been compiled with second-order accuracy. In addition, our radiation-hydrodynamic computation has been compared with classical approximations such as grey atmospheres and the Eddington approximation.
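
The acceleration can be demonstrated on a toy scattering problem S = eps*B + (1-eps)*Λ·S: plain Lambda iteration applies the operator directly, while an ALI-style scheme preconditions the correction with an approximate (here diagonal) Λ operator. The kernel below is an arbitrary stand-in, not a real radiative transfer operator:

```python
import numpy as np

def solve_source(Lam, B, eps, accelerate=True, tol=1e-10, max_iter=5000):
    """Toy scattering problem S = eps*B + (1-eps)*Lam@S solved by Lambda
    iteration; with accelerate=True the correction is preconditioned by
    the diagonal of Lam (a Jacobi-type approximate operator, as in ALI)."""
    S = B.copy()
    diag = np.diag(Lam)
    for it in range(1, max_iter + 1):
        r = eps * B + (1 - eps) * (Lam @ S) - S   # source-equation residual
        if accelerate:
            S = S + r / (1.0 - (1 - eps) * diag)  # ALI-style preconditioning
        else:
            S = S + r                             # plain Lambda iteration
        if np.max(np.abs(r)) < tol:
            return S, it
    return S, max_iter
```

For scattering-dominated problems (eps small, diagonal of Λ close to 1) the preconditioned update converges in far fewer iterations than the plain one.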

Adaptive Hard Decision Aided Fast Decoding Method using Parity Request Estimation in Distributed Video Coding (패리티 요구량 예측을 이용한 적응적 경판정 출력 기반 고속 분산 비디오 복호화 기술)

  • Shim, Hiuk-Jae;Oh, Ryang-Geun;Jeon, Byeung-Woo
    • Journal of Broadcast Engineering
    • /
    • v.16 no.4
    • /
    • pp.635-646
    • /
    • 2011
  • In distributed video coding, a low-complexity encoder can be realized by shifting the complex encoder-side processes to the decoder side. However, not only the motion estimation/compensation processes but also the complex LDPC decoding process are then imposed on the Wyner-Ziv decoder, so decoder-side complexity has become an important issue. LDPC decoding consists of numerous iterative passes, and its complexity grows with the number of iterations; since this iterative process accounts for more than 60% of the whole WZ decoding complexity, it is the main target for complexity reduction. The HDA (Hard Decision Aided) method was previously introduced for fast LDPC decoding. For the currently received parity bits, HDA certainly reduces the decoding complexity; however, LDPC decoding is still attempted even when the amount of requested parity is insufficient for successful decoding. Complexity can therefore be reduced further by skipping the decoding attempt when the parity bits are insufficient. In this paper, a parity request estimation method is proposed that uses bit-plane-wise correlation and temporal correlation. The joint use of the HDA method and the proposed method achieves about 72% complexity reduction in the LDPC decoding process, while the rate-distortion performance is degraded by only -0.0275 dB in BDPSNR.
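
The hard-decision idea behind HDA-style early termination can be sketched in a few lines: hard-decide each bit from its LLR and test the parity-check syndrome before running costly message-passing iterations. This is a generic illustration, not the paper's estimator based on bit-plane and temporal correlation:

```python
import numpy as np

def hard_decision_check(H, llr, syndrome):
    """HDA-style early test: hard-decide each bit from its LLR and check
    the parity-check syndrome; iterative decoding can be skipped when the
    hard decisions already satisfy every check."""
    x_hat = (np.asarray(llr) < 0).astype(int)   # LLR < 0 means bit 1
    ok = np.array_equal(H @ x_hat % 2, np.asarray(syndrome))
    return ok, x_hat
```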

Control of pH Neutralization Process using Simulation Based Dynamic Programming (ICCAS 2003)

  • Kim, Dong-Kyu;Yang, Dae-Ryook
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.2617-2622
    • /
    • 2003
  • The pH neutralization process has long served as a representative benchmark problem of nonlinear chemical process control due to its nonlinearity and time-varying nature. General nonlinear processes are difficult to control with linear model-based methods, so nonlinear control must be considered. Among the numerous approaches suggested, the most rigorous is dynamic optimization; however, as the size of the problem grows, the dynamic programming approach suffers from the curse of dimensionality. To avoid this problem, the Neuro-Dynamic Programming (NDP) approach was proposed by Bertsekas and Tsitsiklis (1996). NDP utilizes all the collected data to generate an approximation of the optimal cost-to-go function, which is then used to find the optimal input move in real-time control. The approximator can be any type of function, such as a polynomial or a neural network. In this study, an NDP algorithm was applied to a pH neutralization process to investigate its feasibility and to deepen understanding of its basic characteristics. As the global approximator, a neural network, which requires training, and the k-nearest neighbor method, which requires querying instead of training, are investigated. The global approximator requires an optimal control strategy; if one is not available, a suboptimal strategy can be used, although the laborious Bellman iterations then become necessary. For the pH neutralization process it is rather easy to devise an optimal control strategy, so we used one and did not perform the Bellman iteration. The effects of constraints on control moves are also studied. The simulations show that the NDP method outperforms conventional PID control.
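
The query-based k-nearest-neighbor approximator mentioned in the abstract can be sketched as follows: store (state, cost-to-go) samples, then at control time pick the input minimizing the stage cost plus the approximated cost-to-go of the successor state. The linear toy dynamics and quadratic cost stand in for the pH model and are assumptions of this sketch:

```python
import numpy as np

def knn_cost_to_go(samples_x, samples_j, x, k=5):
    """Approximate the cost-to-go J(x) by averaging the k nearest stored
    samples: a query-based approximator that needs no training step."""
    d = np.linalg.norm(samples_x - x, axis=1)
    idx = np.argsort(d)[:k]
    return samples_j[idx].mean()

def greedy_action(x, actions, step, stage_cost, samples_x, samples_j):
    """One-step lookahead: choose the input minimizing the stage cost
    plus the approximated cost-to-go of the successor state."""
    costs = [stage_cost(x, u) + knn_cost_to_go(samples_x, samples_j, step(x, u))
             for u in actions]
    return actions[int(np.argmin(costs))]
```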

High Density Salt & Pepper Noise Reduction using Lagrange Interpolation and Iteration Process (Lagrange 보간 및 반복 처리를 이용한 고밀도 Salt & Pepper 잡음 제거)

  • Kwon, Se-Ik;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.4
    • /
    • pp.965-972
    • /
    • 2015
  • With the rapid development of the digital era, image media are used on the internet and in computers and digital cameras. However, image deterioration occurs for various external reasons during the acquisition, processing, transmission and recording of digital images, and its major cause is noise. To remove salt & pepper noise, this study therefore proposes an algorithm that leaves a pixel unchanged when it is judged noise-free and restores it by Lagrange interpolation when it is judged noisy. When high-density noise is added and cannot be removed in a single pass, the noise characteristics are improved by processing the noisy pixels repeatedly. For an objective evaluation, the proposed method is compared with existing methods using PSNR (peak signal-to-noise ratio) as the criterion.
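
A minimal sketch of the restoration idea: leave non-noise pixels untouched, rebuild each noisy pixel by Lagrange interpolation through its nearest noise-free row neighbors, and repeat the pass for high-density noise. The exact stencil and neighborhood of the published algorithm may differ:

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange polynomial through points (xs, ys) at x."""
    total = 0.0
    for i in range(len(xs)):
        w = ys[i]
        for j in range(len(xs)):
            if j != i:
                w *= (x - xs[j]) / (xs[i] - xs[j])
        total += w
    return total

def remove_salt_pepper(img, passes=3):
    """Detect salt & pepper pixels (0 or 255), restore each by Lagrange
    interpolation through the 3 nearest noise-free pixels in its row, and
    sweep repeatedly so high-density noise is filled on later passes."""
    out = img.astype(float)
    noisy = (out == 0) | (out == 255)
    for _ in range(passes):
        for r in range(out.shape[0]):
            clean = np.flatnonzero(~noisy[r])
            if len(clean) < 3:
                continue                       # not enough support yet
            for c in np.flatnonzero(noisy[r]):
                near = clean[np.argsort(np.abs(clean - c))[:3]]
                val = lagrange_eval(near.astype(float), out[r, near], float(c))
                out[r, c] = np.clip(val, 0, 255)
                noisy[r, c] = False            # treated as restored next pass
    return out.astype(np.uint8)
```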

Performance of Turbo Coded OFDM Systems in W-CDMA Wireless Communication Channel (W-CDMA 무선통신 채널에서 터보 부호를 적용한 OFDM 시스템의 성능 분석)

  • Shin, Myung-Sik;Yang, Hae-Sool
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.10 no.4
    • /
    • pp.183-191
    • /
    • 2010
  • In recent digital communication systems, the performance of the turbo code used for error correction depends on the interleaver size, which influences the free distance, and on the iterative decoding algorithm of the turbo decoder. Several iterations are needed to obtain good performance, but they introduce a large time delay, so methods of reducing the number of iterations without degrading performance have recently been studied. In this paper, a new method combining the ME (Mean Estimate) stopping criterion with the SDR (Sign Difference Ratio) stopping criterion is proposed, and it is verified that each criterion compensates for the other's missed detections. Adopting this method in both serially concatenated decoders reduces the number of iterations by about 1~2, realizing faster decoding. The system environment was assumed to be a W-CDMA forward link with intense MAI (multiple access interference).
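
The two stopping criteria combined here can be sketched as simple checks on the decoder's LLRs; the formulations and threshold values below are common textbook variants and illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def sdr_stop(extrinsic, posterior, threshold=0.001):
    """SDR (Sign Difference Ratio) criterion: the fraction of bit
    positions where the extrinsic and a-posteriori LLRs disagree in
    sign; iteration can stop once this ratio is small enough."""
    ratio = np.mean(np.sign(extrinsic) != np.sign(posterior))
    return ratio <= threshold

def me_stop(posterior, t_min=10.0):
    """ME (Mean Estimate) criterion: stop when the mean magnitude of
    the a-posteriori LLRs exceeds a reliability threshold."""
    return np.mean(np.abs(posterior)) >= t_min
```

Checking both criteria per half-iteration and stopping when either fires is one way each can cover the other's missed detections.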

A Study on Horizontal Shuffle Scheduling for High Speed LDPC decoding in DVB-S2 (DVB-S2 기반 고속 LDPC 복호를 위한 Horizontal Shuffle Scheduling 방식에 관한 연구)

  • Lim, Byeong-Su;Kim, Min-Hyuk;Jung, Ji-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.10
    • /
    • pp.2143-2149
    • /
    • 2012
  • DVB-S2 employs LDPC codes which approach the Shannon limit; because they have good distance characteristics, no error floor appears, and fully parallel processing is possible. However, high-speed decoding is difficult because of the large block size and the large number of iterations. This paper presents an HSS algorithm that reduces the number of iterations without performance degradation. In the flooding scheme, the decoder waits until all check-to-variable messages are updated at all parity-check nodes before computing the variable metrics and updating the variable-to-check messages. The HSS algorithm instead updates the variable metrics on a check-by-check basis, so that each check benefits from the updates of the preceding ones. As a result, the LDPC decoding speed based on the HSS algorithm improves by 30%~50% compared to the conventional scheme, without performance degradation.
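
The difference between flooding and horizontal-shuffle scheduling can be shown with a small min-sum decoder: in the shuffled (layered) version each check row writes its refined message back into the running LLRs immediately, so later rows in the same sweep already benefit. A (7,4) Hamming parity-check matrix stands in here for the long DVB-S2 codes:

```python
import numpy as np

def layered_min_sum(H, llr_ch, max_iter=20):
    """Horizontal-shuffle (layered) min-sum LDPC decoding: parity checks
    are processed row by row and the posterior LLRs are updated
    immediately, so later rows in the same sweep see refined values."""
    m, n = H.shape
    L = llr_ch.astype(float).copy()    # running a-posteriori LLRs
    R = np.zeros((m, n))               # stored check-to-variable messages
    for _ in range(max_iter):
        for i in range(m):
            cols = np.flatnonzero(H[i])
            q = L[cols] - R[i, cols]   # variable-to-check messages
            for t, c in enumerate(cols):
                others = np.delete(q, t)
                R[i, c] = np.prod(np.sign(others)) * np.min(np.abs(others))
                L[c] = q[t] + R[i, c]  # immediate (shuffled) update
        x = (L < 0).astype(int)
        if not np.any(H @ x % 2):      # all checks satisfied: stop early
            return x, True
    return x, False
```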

A Fully-implicit Velocity Pressure coupling Algorithm-IDEAL and Its Applications

  • SUN, Dong-Liang;QU, Zhi-Guo;He, Ya-Ling;Tao, Wen-Quan
    • Korean Society of Computational Fluids Engineering: Conference Proceedings
    • /
    • 2008.03a
    • /
    • pp.1-13
    • /
    • 2008
  • An efficient segregated algorithm for the coupling of velocity and pressure in incompressible fluid flow, called IDEAL (Inner Doubly-Iterative Efficient Algorithm for Linked-Equations), has been proposed by the present authors. In the algorithm there are two inner iterative processes for the pressure equation at each iteration level, which almost completely overcome the two approximations made in the SIMPLE algorithm. The coupling between velocity and pressure is thus fully guaranteed, greatly enhancing the convergence rate and stability of the solution process. The performance of the IDEAL algorithm for three-dimensional incompressible fluid flow and heat transfer problems is analyzed, and a systematic comparison is made between it and three other widely used algorithms (SIMPLER, SIMPLEC and PISO). The IDEAL algorithm is found to be the most robust and the most efficient of the four. The new algorithm is also used for the velocity prediction of a new interface-capturing method, VOSET, likewise proposed by the present authors. The combination of VOSET and IDEAL is found to appreciably enhance both the interface-capture accuracy and the convergence rate of the computations.

A Fully-implicit Velocity Pressure coupling Algorithm-IDEAL and Its Applications

  • Sun, Dong-Liang;Qu, Zhi-Guo;He, Ya-Ling;Tao, Wen-Quan
    • Korean Society of Computational Fluids Engineering: Conference Proceedings
    • /
    • 2008.10a
    • /
    • pp.1-13
    • /
    • 2008
  • An efficient segregated algorithm for the coupling of velocity and pressure in incompressible fluid flow, called IDEAL (Inner Doubly-Iterative Efficient Algorithm for Linked-Equations), has been proposed by the present authors. In the algorithm there are two inner iterative processes for the pressure equation at each iteration level, which almost completely overcome the two approximations made in the SIMPLE algorithm. The coupling between velocity and pressure is thus fully guaranteed, greatly enhancing the convergence rate and stability of the solution process. The performance of the IDEAL algorithm for three-dimensional incompressible fluid flow and heat transfer problems is analyzed, and a systematic comparison is made between it and three other widely used algorithms (SIMPLER, SIMPLEC and PISO). The IDEAL algorithm is found to be the most robust and the most efficient of the four. The new algorithm is also used for the velocity prediction of a new interface-capturing method, VOSET, likewise proposed by the present authors. The combination of VOSET and IDEAL is found to appreciably enhance both the interface-capture accuracy and the convergence rate of the computations.
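
The doubly-iterative principle (extra inner sweeps on the pressure equation per outer step to tighten the velocity-pressure coupling) can be illustrated on a toy saddle-point system with a segregated, Uzawa-style iteration. This is a schematic analogue under simplified linear dynamics, not the IDEAL algorithm itself:

```python
import numpy as np

def segregated_solve(A, G, D, f, n_inner=1, tol=1e-8, max_outer=500):
    """Segregated velocity-pressure iteration on the toy saddle-point
    system A u + G p = f, D u = 0. Each outer step solves momentum for
    u, then performs n_inner Richardson sweeps on the continuity
    residual: more inner pressure iterations tighten the u-p coupling
    and cut the number of outer iterations."""
    n, m = G.shape
    u, p = np.zeros(n), np.zeros(m)
    Ainv = np.linalg.inv(A)
    S = D @ Ainv @ G                     # pressure Schur complement
    alpha = 1.0 / np.linalg.norm(S, 2)   # safe Richardson step size
    for outer in range(1, max_outer + 1):
        u = Ainv @ (f - G @ p)           # momentum solve with current p
        for _ in range(n_inner):         # inner pressure iterations
            p = p + alpha * (D @ u)      # relax pressure on continuity residual
            u = Ainv @ (f - G @ p)       # refresh velocity
        if np.linalg.norm(D @ u) < tol:
            return u, p, outer
    return u, p, max_outer
```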
