• Title/Summary/Keyword: Iterative technique

Implementation of High-Throughput SHA-1 Hash Algorithm using Multiple Unfolding Technique (다중 언폴딩 기법을 이용한 SHA-1 해쉬 알고리즘 고속 구현)

  • Lee, Eun-Hee;Lee, Je-Hoon;Jang, Young-Jo;Cho, Kyoung-Rok
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.47 no.4
    • /
    • pp.41-49
    • /
    • 2010
  • This paper proposes a new high-speed SHA-1 architecture using multiple unfolding and pre-computation techniques. We unfold the iterative hash operation into two consecutive hash stages and reschedule the computation timing, so that part of the critical path is computed in the previous hash round and the rest is performed in the present round. These techniques reduce the critical path from three additions to two. The design reaches a maximum clock frequency of 118 MHz, which provides a throughput of 5.9 Gbps. The proposed architecture shows 26% higher throughput with a 32% smaller hardware size than its counterparts. This paper also introduces an analytical model of a multiple-SHA-1 architecture at the system level that maps large input data onto SHA-1 blocks in parallel. The model gives the number of SHA-1 blocks required to process large multimedia data, which helps in deciding the hardware configuration. The high-speed SHA-1 is useful for generating a condensed message and may strengthen the security of mobile communication and Internet services.
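
The 2x unfolding can be mirrored in software by processing two hash rounds per loop iteration. The sketch below is a plain Python illustration of that restructuring under the standard SHA-1 round definitions; it is not the authors' hardware design, and all names are ours.

```python
# Sketch: SHA-1 compression with the 80 rounds unfolded two per loop iteration.
def _rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1_compress_unfolded2(h, block):
    """h: list of five 32-bit state words; block: one 64-byte message block."""
    w = [int.from_bytes(block[i:i + 4], "big") for i in range(0, 64, 4)]
    for t in range(16, 80):
        w.append(_rotl(w[t - 3] ^ w[t - 8] ^ w[t - 14] ^ w[t - 16], 1))

    def fk(t, b, c, d):                     # round function and constant
        if t < 20: return (b & c) | (~b & d), 0x5A827999
        if t < 40: return b ^ c ^ d, 0x6ED9EBA1
        if t < 60: return (b & c) | (b & d) | (c & d), 0x8F1BBCDC
        return b ^ c ^ d, 0xCA62C1D6

    a, b, c, d, e = h
    for t in range(0, 80, 2):               # two hash rounds per loop iteration
        f, k = fk(t, b, c, d)
        t1 = (_rotl(a, 5) + f + e + k + w[t]) & 0xFFFFFFFF
        e, d, c, b, a = d, c, _rotl(b, 30), a, t1
        f, k = fk(t + 1, b, c, d)           # second (unfolded) round
        t1 = (_rotl(a, 5) + f + e + k + w[t + 1]) & 0xFFFFFFFF
        e, d, c, b, a = d, c, _rotl(b, 30), a, t1
    return [(x + y) & 0xFFFFFFFF for x, y in zip(h, [a, b, c, d, e])]
```

Pairing rounds this way is what allows part of the second round to be prepared while the first completes, which is the source of the critical-path reduction described in the abstract.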

Review on the Three-Dimensional Inversion of Magnetotelluric Data (MT 자료의 3차원 역산 개관)

  • Kim Hee Joon;Nam Myung Jin;Han Nuree;Choi Jihyang;Lee Tae Jong;Song Yoonho;Suh Jung Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.7 no.3
    • /
    • pp.207-212
    • /
    • 2004
  • This article reviews recent developments in three-dimensional (3-D) magnetotelluric (MT) imaging. The inversion of MT data is fundamentally ill-posed, and therefore the resulting solution is non-unique. A regularizing scheme must be employed to reduce the non-uniqueness while retaining certain a priori information in the solution. The standard approach to nonlinear inversion in geophysics has been the Gauss-Newton method, which solves a sequence of linearized inverse problems. When run to convergence, the algorithm minimizes an objective function over the space of models and in this sense produces an optimal solution of the inverse problem. The general usefulness of iterative, linearized inversion algorithms, however, is greatly limited in 3-D MT applications by the requirement of computing the Jacobian (partial derivative, sensitivity) matrix of the forward problem. This difficulty may be relaxed using conjugate gradient (CG) methods. A linear CG technique is used to solve each step of the Gauss-Newton iteration incompletely, while the method of nonlinear CG is applied directly to the minimization of the objective function. These CG techniques replace the computation of the Jacobian matrix and the solution of a large linear system with computations equivalent to only three forward problems per inversion iteration. Consequently, the algorithms are efficient in computational speed and memory requirements, making 3-D inversion feasible.
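
As a concrete illustration of the matrix-free idea, the sketch below solves one regularized Gauss-Newton step with linear CG using only Jacobian-vector and adjoint products. The callables jvp and vjp, the damping parameter lam, and all other names are assumptions for illustration, not code from the reviewed algorithms, whose exact per-iteration cost accounting differs.

```python
import numpy as np

def gauss_newton_step_cg(jvp, vjp, residual, n_model, lam=1e-2, n_cg=30, tol=1e-8):
    """Solve (J^T J + lam*I) dm = J^T r by CG without ever forming J.

    jvp(v) stands in for a forward-like product J v and vjp(u) for the
    adjoint product J^T u."""
    def normal_op(v):                       # (J^T J + lam*I) v via two products
        return vjp(jvp(v)) + lam * v

    b = vjp(residual)                       # right-hand side J^T r
    dm = np.zeros(n_model)
    r = b.copy()                            # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(n_cg):                   # incomplete linear CG solve
        Ap = normal_op(p)
        alpha = rs / (p @ Ap)
        dm += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return dm                               # model update for this GN iteration
```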

Groundwater Flow Model for the Pollutant Transport in Subsurface Porous Media Theory and Modeling (지하다공질(地下多孔質) 매체(媒體)속에서의 오염물질이동(汚染物質移動) 해석(解析)을 위한 지하수(地下水)흐름 모형(模型))

  • Cho, Won Cheal
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.9 no.3
    • /
    • pp.97-106
    • /
    • 1989
  • This paper presents the modeling of two-dimensional groundwater flow, which is the first step in the development of a Dynamic System Model for groundwater flow and pollutant transport in subsurface porous media. The particular features of the model are its versatility and flexibility in dealing with as many real-world problems as possible. Point as well as distributed sources/sinks are included to represent recharge/pumping and rainfall infiltration. All sources/sinks can be transient or steady state. Prescribed hydraulic heads on Dirichlet boundaries and fluxes on Neumann or Cauchy boundaries can be time-dependent or constant. The source/sink strength over each element and node, the hydraulic head at each Dirichlet boundary node, and the flux at each boundary segment can vary independently of each other. Completely confined, completely unconfined, or partially confined and partially unconfined aquifers can be handled effectively. Discretization of a compound region with very irregular curved boundaries is made easy by including both quadrilateral and triangular elements in the formulation. Large-field problems can be solved efficiently by including a pointwise iterative solution strategy as an optional alternative to the direct elimination solution method for the matrix equation approximating the partial differential equation of groundwater flow. The model also includes transient flow through confining leaky aquifers lying above and/or below the aquifer of interest. The model is verified against three simple cases for which analytical solutions are available. The groundwater flow model will be combined with a model of pollutant transport in subsurface porous media; the combined model, with the application of the eigenvalue technique and dynamic system theory, will then be developed into the Dynamic System Model, which can simulate real groundwater flow and pollutant transport accurately and effectively for analysis and prediction.
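
The optional pointwise iterative strategy can be illustrated with a simple Gauss-Seidel sweep over the nodal equations of an assembled system K h = f; this is a generic sketch under that assumption, not the paper's implementation.

```python
import numpy as np

def gauss_seidel(K, f, h0=None, max_sweeps=500, tol=1e-8):
    """Solve K h = f by pointwise (node-by-node) iteration."""
    n = len(f)
    h = np.zeros(n) if h0 is None else h0.copy()
    for _ in range(max_sweeps):
        max_change = 0.0
        for i in range(n):                  # update one nodal head at a time
            off_diag = K[i, :] @ h - K[i, i] * h[i]
            new_hi = (f[i] - off_diag) / K[i, i]
            max_change = max(max_change, abs(new_hi - h[i]))
            h[i] = new_hi                   # newest values used immediately
        if max_change < tol:                # converged pointwise
            break
    return h
```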

Time-domain Seismic Waveform Inversion for Anisotropic media (이방성을 고려한 탄성매질에서의 시간영역 파형역산)

  • Lee, Ho-Yong;Min, Dong-Joo;Kwon, Byung-Doo;Yoo, Hai-Soo
    • Korean Society of Exploration Geophysicists: Conference Proceedings
    • /
    • 2008.10a
    • /
    • pp.51-56
    • /
    • 2008
  • Waveform inversion for isotropic media has been studied since the 1980s, but there have been few studies for anisotropic media. We present a seismic waveform inversion algorithm for 2-D heterogeneous transversely isotropic structures. A cell-based finite-difference algorithm for anisotropic media in the time domain is adopted. The steepest-descent direction in the nonlinear iterative inversion is obtained by backpropagating the residual errors using a reverse-time migration technique. For scaling the gradient of the misfit function, we use the pseudo-Hessian matrix, which neglects the zero-lag auto-correlation terms of the impulse responses in the approximate Hessian matrix of the Gauss-Newton method. We demonstrate the waveform inversion algorithm by applying it to a two-layer model and to anisotropic Marmousi model data. With numerical examples, we show that it is difficult to converge to the true model when anisotropic media are assumed to be isotropic. Therefore, our waveform inversion algorithm for anisotropic media is expected to be adequate for interpreting real seismic exploration data.
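
The structure of one nonlinear iteration, residual backpropagation followed by pseudo-Hessian scaling of the gradient, can be sketched as follows; forward(), backpropagate(), and pseudo_hessian() are hypothetical placeholders for the anisotropic time-domain modeling engine, not the authors' code.

```python
import numpy as np

def waveform_inversion_step(m, d_obs, forward, backpropagate, pseudo_hessian,
                            step_length=1.0, eps=1e-6):
    """One steepest-descent model update with pseudo-Hessian gradient scaling."""
    d_syn = forward(m)                      # time-domain anisotropic FD modeling
    residual = d_syn - d_obs                # data misfit
    grad = backpropagate(m, residual)       # reverse-time correlation -> gradient
    hess = pseudo_hessian(m)                # zero-lag autocorrelation of impulse responses
    grad_scaled = grad / (hess + eps)       # scale gradient, stabilized by eps
    return m - step_length * grad_scaled    # updated model for the next iteration
```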

Implementation of Stopping Criterion Algorithm using Sign Change Ratio for Extrinsic Information Values in Turbo Code (터보부호에서 외부정보에 대한 부호변화율을 이용한 반복중단 알고리즘 구현)

  • Jeong Dae-Ho;Shim Byong-Sup;Kim Hwan-Yong
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.43 no.7 s.349
    • /
    • pp.143-149
    • /
    • 2006
  • Turbo codes, a class of error-correction codes, have been used in digital mobile communication systems. As the number of iterations increases, a turbo code achieves remarkable BER performance over an AWGN channel. However, in several channel environments, increasing the number of iterations yields very little further improvement while the delay and computation grow in proportion to the number of iterations. To solve this problem, an efficient criterion is needed to stop the iteration process and prevent unnecessary delay and computation. This paper proposes an efficient and simple criterion for stopping the iteration process in turbo decoding. By using the sign change ratio of the extrinsic information values in the turbo decoder, the proposed algorithm can greatly reduce the average number of iterations without BER performance degradation. Simulation results show that the average number of iterations is reduced by about 12.48%~22.22% compared to the CE algorithm and by about 20.43%~54.02% compared to the SDR algorithm.
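
A minimal sketch of the sign-change-ratio idea is given below; siso_decode() is a hypothetical placeholder for one full turbo iteration, and the threshold and the simplified hard decision are illustrative assumptions rather than the paper's exact decision rule.

```python
import numpy as np

def turbo_decode_with_scr(llr_ch, siso_decode, max_iters=8, scr_threshold=0.005):
    """Stop iterating once the extrinsic-information signs stop changing."""
    n = len(llr_ch)
    extrinsic = np.zeros(n)
    prev_sign = np.sign(extrinsic)
    iters_used = max_iters
    for it in range(1, max_iters + 1):
        extrinsic = siso_decode(llr_ch, extrinsic)      # one full turbo iteration
        sign = np.sign(extrinsic)
        scr = np.count_nonzero(sign != prev_sign) / n   # fraction of flipped signs
        prev_sign = sign
        if it > 1 and scr < scr_threshold:              # extrinsic info has settled
            iters_used = it
            break
    bits = (llr_ch + extrinsic > 0).astype(int)         # simplified hard decision
    return bits, iters_used
```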

A Stable Multilevel Partitioning Algorithm for VLSI Circuit Designs Using Adaptive Connectivity Threshold (가변적인 연결도 임계치 설정에 의한 대규모 집적회로 설계에서의 안정적인 다단 분할 방법)

  • 임창경;정정화
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.10
    • /
    • pp.69-77
    • /
    • 1998
  • This paper presents a new efficient and stable multilevel partitioning algorithm for VLSI circuit design. The performance of multilevel partitioning algorithms, which were proposed to improve on earlier iterative-improvement partitioning algorithms for large-scale circuits, depends on how the partition hierarchy is constructed. Because most previous multilevel partitioning algorithms impose empirical constraints on the hierarchy-construction process, their performance is unstable, and this lack of stability causes large variations in the partition results over multiple runs. In this paper, we minimize the use of empirical constraints and propose a new method for constructing the partition hierarchy. The proposed method clusters cells according to the connection status of the circuit. After constructing the partition hierarchy, a partition improvement algorithm, HYIP$^{[11]}$, using a hybrid bucket structure, unclusters the hierarchy to obtain the partition results. Experimental results on the ACM/SIGDA benchmark circuits show improvements of 10-40% in minimum cutsize over previous algorithms$^{[3][4][5][8][10]}$. Our technique also outperforms the ML$^{[10]}$ multilevel partitioning method by about 5% and 20% in minimum and average cutsize, respectively. In addition, the results of our algorithm over 10 runs are better than those of the ML algorithm over 100 runs.
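
One level of connectivity-driven coarsening can be sketched as below; the per-cell adaptive threshold used here (the mean connectivity of a cell's neighbors) is an assumption for illustration and is not the paper's exact threshold rule.

```python
def coarsen_level(num_cells, edges):
    """One coarsening pass: edges maps (u, v) -> connection weight between cells."""
    conn = [dict() for _ in range(num_cells)]       # per-cell connectivity table
    for (u, v), w in edges.items():
        conn[u][v] = conn[u].get(v, 0) + w
        conn[v][u] = conn[v].get(u, 0) + w

    cluster = list(range(num_cells))                # each cell starts alone
    merged = [False] * num_cells
    for u in range(num_cells):
        if merged[u] or not conn[u]:
            continue
        threshold = sum(conn[u].values()) / len(conn[u])   # adaptive, per cell
        v, w = max(conn[u].items(), key=lambda kv: kv[1])  # strongest neighbor
        if w >= threshold and not merged[v]:
            cluster[v] = cluster[u]                 # merge the pair for the next level
            merged[u] = merged[v] = True
    return cluster                                  # cluster id per cell
```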

Improvement of Small Baseline Subset (SBAS) Algorithm for Measuring Time-series Surface Deformations from Differential SAR Interferograms (차분 간섭도로부터 지표변위의 시계열 관측을 위한 개선된 Small Baseline Subset (SBAS) 알고리즘)

  • Jung, Hyung-Sup;Lee, Chang-Wook;Park, Jung-Won;Kim, Ki-Dong;Won, Joong-Sun
    • Korean Journal of Remote Sensing
    • /
    • v.24 no.2
    • /
    • pp.165-177
    • /
    • 2008
  • The small baseline subset (SBAS) algorithm has recently been developed using an appropriate combination of differential interferograms characterized by small baselines in order to minimize spatial decorrelation. The algorithm uses the singular value decomposition (SVD) to measure the time-series surface deformation from differential interferograms that are not temporally connected, and it mitigates atmospheric effects in the time series by applying a spatially low-pass and temporally high-pass filter. Nevertheless, because the algorithm assumes linear surface deformation at the outset, it is not easy to correct the phase-unwrapping error of each interferogram or to mitigate the time-varying noise component of the surface deformation. In this paper, we present an improved SBAS technique to address these problems. Our improved SBAS algorithm uses an iterative approach to minimize the phase-unwrapping error of each differential interferogram, and it uses a finite-difference method to suppress the time-varying noise component of the surface deformation. We tested the improved SBAS algorithm and evaluated its performance using 26 ERS-1/2 images and 21 RADARSAT-1 fine-beam (F5) images at two different locations. A maximum deformation of 40 cm in the radar line of sight (LOS) was estimated from the ERS-1/2 dataset over about 13 years, whereas a 3 cm deformation was estimated from the RADARSAT-1 dataset over about two years.
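
The core SBAS step, recovering a deformation time series from a set of small-baseline interferograms with an SVD-based minimum-norm solution, can be sketched as follows. This simplified version solves directly for the phase at each acquisition rather than for mean velocities between acquisitions, and it is not the authors' improved algorithm.

```python
import numpy as np

def sbas_time_series(ifg_phase, pairs, n_acq):
    """ifg_phase: (M,) unwrapped interferogram phases; pairs: M (i, j) index pairs
    with j > i, each interferogram being phase(t_j) - phase(t_i); n_acq: number of
    acquisitions.  Returns the phase at each acquisition with time 0 as reference."""
    m = len(pairs)
    A = np.zeros((m, n_acq - 1))            # unknowns: phase at acquisitions 1..N-1
    for k, (i, j) in enumerate(pairs):
        if j > 0:
            A[k, j - 1] += 1.0
        if i > 0:
            A[k, i - 1] -= 1.0
    phase = np.linalg.pinv(A) @ ifg_phase   # SVD-based minimum-norm least squares
    return np.concatenate(([0.0], phase))
```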

Implicit Numerical Integration of Two-surface Plasticity Model for Coarse-grained Soils (Implicit 수치적분 방법을 이용한 조립토에 관한 구성방정식의 수행)

  • Choi, Chang-Ho
    • Journal of the Korean Geotechnical Society
    • /
    • v.22 no.9
    • /
    • pp.45-59
    • /
    • 2006
  • The successful performance of any numerical geotechnical simulation depends on the accuracy and efficiency of the numerical implementation of the constitutive model used to simulate the stress-strain (constitutive) response of the soil. The cornerstone of the numerical implementation of constitutive models is the numerical integration of the incremental form of the soil-plasticity constitutive equations over a discrete sequence of time steps. In this paper, a well-known two-surface soil plasticity model is implemented using a generalized implicit return-mapping algorithm for arbitrary convex yield surfaces, referred to as the closest-point-projection method (CPPM). The two-surface model describes the nonlinear behavior of coarse-grained materials by incorporating a bounding-surface concept together with isotropic and kinematic hardening, as well as a fabric formulation to account for the effect of fabric formation on the unloading response. In the course of investigating the performance of the CPPM integration method, it is shown that the algorithm is an accurate, robust, and efficient integration technique useful in finite element contexts. It is also shown that the algorithm produces a consistent tangent operator $\frac{d\sigma}{d\varepsilon}$ during the iterative process, with a quadratic convergence rate of the global iteration process.
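
The Newton-iteration structure of a closest-point projection can be illustrated on a much simpler model, one-dimensional elastoplasticity with linear isotropic hardening; the sketch below is that illustration only, not the two-surface sand model itself.

```python
def cppm_return_map_1d(sigma_trial, alpha, E, H, sigma_y, tol=1e-10, max_iter=25):
    """Implicit (backward-Euler) plastic correction of an elastic trial stress.

    Returns the corrected stress, hardening variable, and plastic multiplier."""
    f = abs(sigma_trial) - (sigma_y + H * alpha)    # trial yield function
    if f <= 0.0:
        return sigma_trial, alpha, 0.0              # elastic step, trial state admissible
    dgamma = 0.0
    for _ in range(max_iter):                       # Newton loop on consistency f = 0
        r = f - dgamma * (E + H)                    # residual at current multiplier
        if abs(r) < tol:
            break
        dgamma += r / (E + H)                       # exact in one step for linear hardening
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    sigma = sigma_trial - E * dgamma * sign         # project back onto the yield surface
    return sigma, alpha + dgamma, dgamma
```

For this linear case the consistent elastoplastic tangent reduces to EH/(E+H); the paper derives the corresponding operator $\frac{d\sigma}{d\varepsilon}$ for the full two-surface model.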

A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.459-467
    • /
    • 2009
  • Purpose: The maximum likelihood-expectation maximization (ML-EM) algorithm is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of the iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection steps of the ML-EM algorithm were parallelized with NVIDIA's technology. The time spent on computing the projection, the errors between measured and estimated data, and the backprojection within one iteration was measured. The total time included the latency of data transfers between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1,024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by the slowdown of the CPU-based computation after a certain number of iterations. In contrast, the GPU-based computation showed very little variation in time per iteration due to the use of shared memory. Conclusion: The GPU-based parallel computation for ML-EM significantly improved computing speed and stability. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
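
The ML-EM update itself is compact; the sketch below is a CPU/NumPy version of the standard multiplicative update, while the paper's contribution, parallelizing the projection and backprojection on the GPU with CUDA, is not reproduced here.

```python
import numpy as np

def ml_em(A, y, n_iters=32, eps=1e-12):
    """A: system matrix (detector bins x image voxels), y: measured counts."""
    x = np.ones(A.shape[1])                 # uniform initial image
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
    for _ in range(n_iters):
        proj = A @ x                        # forward projection of current estimate
        ratio = y / (proj + eps)            # measured / estimated counts
        x *= (A.T @ ratio) / (sens + eps)   # backproject and apply multiplicative update
    return x
```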