• Title/Summary/Keyword: Algorithm decomposition

Bitwise Decomposition Algorithm for Gray Coded M-PSK Signals (Gray 부호화된 M-PSK 신호의 비트 정보 분할 알고리듬)

  • Kim Ki-Seol;Hyun Kwang-Min;Park Sang-Kyu
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.8A / pp.784-789 / 2006
  • In this paper, we propose a bitwise information decomposition algorithm for an M-PSK signal based on the Max-Log-MAP algorithm. To derive the algorithm, we use a coordinate transformation from the M-PSK to the M-PAM signal space. Using the proposed algorithm, we analyze the performance of Turbo iterative decoding. The proposed algorithm is applicable not only to communication systems that combine PSK with iterative decoding but also to adaptive modulation and coding systems.
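
As a rough illustration of the bit-level soft decisions involved, the sketch below computes Max-Log-MAP bit LLRs for a Gray-coded M-PSK constellation by exhaustive search over the symbols; it does not use the paper's M-PSK-to-M-PAM coordinate transformation, and the constellation labeling and noise variance are assumptions.

```python
import numpy as np

def gray_mpsk_constellation(M):
    """M-PSK constellation where the symbol at phase 2*pi*k/M carries the bits
    of the Gray code of k (adjacent symbols differ in one bit)."""
    bits_per_sym = int(np.log2(M))
    labels = np.arange(M)
    gray = labels ^ (labels >> 1)                      # binary-reflected Gray code
    symbols = np.exp(1j * 2 * np.pi * labels / M)
    bit_table = (gray[:, None] >> np.arange(bits_per_sym - 1, -1, -1)) & 1
    return symbols, bit_table

def maxlog_bit_llrs(r, M, noise_var):
    """Max-Log-MAP bit LLRs for one received sample r (exhaustive search)."""
    symbols, bits = gray_mpsk_constellation(M)
    metrics = -np.abs(r - symbols) ** 2 / noise_var    # log-likelihood up to a constant
    llrs = []
    for b in range(bits.shape[1]):
        llrs.append(metrics[bits[:, b] == 0].max() - metrics[bits[:, b] == 1].max())
    return np.array(llrs)

# usage: soft bits for a noisy 8-PSK sample
symbols, _ = gray_mpsk_constellation(8)
r = symbols[3] + (np.random.randn() + 1j * np.random.randn()) * 0.1
print(maxlog_bit_llrs(r, 8, noise_var=0.02))
```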

PDE-based Image Interpolators

  • Cha, Young-Joon;Kim, Seong-Jai
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.12C / pp.1010-1019 / 2010
  • This article presents a PDE-based interpolation algorithm to effectively reproduce high resolution imagery. Conventional PDE-based interpolation methods can produce sharp edges without checkerboard effects; however, they are not interpolators but approximators and tend to weaken fine structures. In order to overcome the drawback, a texture enhancement method is suggested as a post-process of PDE-based interpolation methods. The new method rectifies the image by simply incorporating the bilinear interpolation of the weakened texture components and therefore makes the resulting algorithm an interpolator. It has been numerically verified that the new algorithm, called the PDE-based image interpolator (PII), restores sharp edges and enhances texture components satisfactorily. PII outperforms the PDE-based skeleton-texture decomposition (STD) approach. Various numerical examples are shown to verify the claim.
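
The texture-compensation idea might be sketched as follows: after a non-interpolating upscaler (a Gaussian-smoothed bilinear stand-in here, since the PDE model itself is not reproduced), the residual between the low-resolution input and the resampled result is bilinearly upsampled and added back, which restores the interpolation property at the sample points. All helper names and the smoothing stand-in are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def upscale(img, s, order):
    """Resample img onto an s-times denser grid aligned with the original pixels."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h * s, 0:w * s]
    return map_coordinates(img, [yy / s, xx / s], order=order, mode='nearest')

def pii_like_upscale(low, s):
    # stand-in "approximator": smoothing + bilinear resampling (weakens texture)
    base = upscale(gaussian_filter(low, sigma=1.0), s, order=1)
    # texture the approximator lost at the original sample locations
    texture = low - base[::s, ::s]
    # add back a bilinear interpolation of the lost texture (the post-processing idea)
    return base + upscale(texture, s, order=1)

low = np.random.rand(32, 32)
high = pii_like_upscale(low, 4)
# at the original sample points the result reproduces the input exactly
assert np.allclose(high[::4, ::4], low)
```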

A FASTER LU DECOMPOSITION FOR PARALLEL C PROGRAMS

  • Lee, Sang-Moon;Lee, Chin-Young
    • Journal of applied mathematics & informatics / v.3 no.2 / pp.217-234 / 1996
  • This report introduces a faster parallel LU decomposition algorithm that gives a speedup almost equal to the number of nodes used. The new algorithm takes advantage of an important C feature, the row-major layout of matrices, and is based on the currently widely used LU decomposition algorithm with one major modification that eliminates most of the communication overhead. Empirical results are included in this report. For example, solving a dense matrix with 100,000,000 elements gives a speedup of 50 when executed in parallel on 50 nodes of an Intel Paragon.
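
As background for the row-major point (not the paper's parallel distribution scheme), a minimal right-looking LU factorization whose inner update sweeps whole rows, the access pattern that is contiguous for C-style row-major arrays, might look like this; pivoting is omitted for brevity.

```python
import numpy as np

def lu_row_major(a):
    """Right-looking LU (Doolittle, no pivoting), packed in one array.
    The rank-1 update touches a[k+1:, k+1:] row by row, which is
    contiguous memory for C-style (row-major) storage."""
    a = np.array(a, dtype=float, order='C')
    n = a.shape[0]
    for k in range(n - 1):
        a[k + 1:, k] /= a[k, k]                                    # multipliers (column of L)
        a[k + 1:, k + 1:] -= np.outer(a[k + 1:, k], a[k, k + 1:])  # row-sweeping update
    return a

# usage: verify L @ U reproduces the input
m = np.random.rand(5, 5) + 5 * np.eye(5)   # diagonally dominant, so no pivoting needed
lu = lu_row_major(m)
L = np.tril(lu, -1) + np.eye(5)
U = np.triu(lu)
assert np.allclose(L @ U, m)
```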

A Study of Fast Contingency Analysis Algorithm (신속한 상정사고해석 알고리즘에 관한 연구)

  • Moon, Young-Hyun
    • The Transactions of the Korean Institute of Electrical Engineers / v.34 no.11 / pp.421-429 / 1985
  • With the rapid increase in contingency cases caused by the growing complexity of power systems, reducing the computation time of contingency analysis has become more significant than ever before. This paper deals with the development of a fast contingency analysis algorithm based on a matrix decomposition method. The proposed matrix decomposition method yields an accurate contingency solution using the original triangular factor table. An outstanding feature of this method is that no modification of the factor table is required for network changes caused by contingency outages. The proposed method is also applicable to multiple contingency analysis with a remarkable reduction of computation time. The algorithm has been tested on a number of single and multiple contingencies in 17-bus and 50-bus systems. The numerical results show its applicability to practical power systems.
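
The reuse of the original triangular factors can be illustrated with the standard compensation (Sherman-Morrison) idea: a rank-one network change is absorbed with the unmodified LU factors instead of refactorizing. This is a generic sketch of that idea, not the paper's formulation; the matrix, outage vector, and admittance change are made up.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_with_outage(lu_piv, b, m, c):
    """Solve (Y + c * m m^T) x = b reusing the LU factors of the base matrix Y
    via the Sherman-Morrison compensation formula (no refactorization)."""
    y_b = lu_solve(lu_piv, b)          # base-case solution
    y_m = lu_solve(lu_piv, m)          # one extra solve per rank-1 change
    alpha = c * (m @ y_b) / (1.0 + c * (m @ y_m))
    return y_b - alpha * y_m

# usage with a made-up 4x4 "network" matrix and a rank-1 branch change
Y = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
b = np.array([1., 0., 0., 1.])
m = np.array([1., -1., 0., 0.])        # incidence vector of the changed branch
c = -0.5                               # change in branch admittance
lu_piv = lu_factor(Y)                  # factor the base network once
x = solve_with_outage(lu_piv, b, m, c)
assert np.allclose((Y + c * np.outer(m, m)) @ x, b)
```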

Complexity Estimation Based Work Load Balancing for a Parallel Lidar Waveform Decomposition Algorithm

  • Jung, Jin-Ha;Crawford, Melba M.;Lee, Sang-Hoon
    • Korean Journal of Remote Sensing / v.25 no.6 / pp.547-557 / 2009
  • LIDAR (LIght Detection And Ranging) is an active remote sensing technology which provides 3D coordinates of the Earth's surface by performing range measurements from the sensor. Early small-footprint LIDAR systems recorded multiple discrete returns from the back-scattered energy. Recent advances in LIDAR hardware make it possible to record full digital waveforms of the returned energy. LIDAR waveform decomposition involves separating the return waveform into a mixture of components which are then used to characterize the original data. The most common statistical mixture model used for this process is the Gaussian mixture. Waveform decomposition plays an important role in LIDAR waveform processing, since the resulting components are expected to represent reflection surfaces within waveform footprints. Hence the decomposition results ultimately affect the interpretation of LIDAR waveform data. The computational requirements of the waveform decomposition process result from two factors: (1) estimating the number of components in a mixture and the associated parameters, which are inter-related and cannot be solved separately, and (2) the parameter optimization has no closed-form solution and must therefore be solved iteratively. Current state-of-the-art airborne LIDAR systems acquire more than 50,000 waveforms per second, so decomposing the enormous number of waveforms is challenging on a traditional single-processor architecture. To tackle this issue, four parallel LIDAR waveform decomposition algorithms with different workload-balancing schemes - (1) no weighting, (2) a decomposition results-based linear weighting, (3) a decomposition results-based squared weighting, and (4) a decomposition time-based linear weighting - were developed and tested with varying numbers of processors (8-256). The results were compared in terms of efficiency. Overall, the decomposition time-based linear weighting approach yielded the best performance among the four approaches.
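
The load-balancing step can be sketched separately from the waveform fitting: given an estimated complexity weight per waveform (the paper derives weights from previous decomposition results or decomposition time), a greedy longest-processing-time assignment keeps per-processor loads roughly even. This is a generic sketch under assumed weights, not the paper's exact scheme.

```python
import heapq
import random

def balance_by_weight(weights, n_workers):
    """Greedy longest-processing-time assignment: give each waveform (heaviest
    first) to the currently least-loaded worker."""
    heap = [(0.0, w) for w in range(n_workers)]        # (current load, worker id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_workers)]
    for idx in sorted(range(len(weights)), key=lambda i: -weights[i]):
        load, worker = heapq.heappop(heap)
        assignment[worker].append(idx)
        heapq.heappush(heap, (load + weights[idx], worker))
    return assignment

# usage: assumed per-waveform complexity weights (e.g., estimated number of components)
random.seed(0)
weights = [random.randint(1, 6) for _ in range(10_000)]
parts = balance_by_weight(weights, 8)
loads = [sum(weights[i] for i in p) for p in parts]
print(max(loads) - min(loads))   # the spread stays small relative to the total load
```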

Resynthesis of Logic Gates on Mapped Circuit for Low Power (저전력 기술 매핑을 위한 논리 게이트 재합성)

  • 김현상;조준동
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.11 / pp.1-10 / 1998
  • The advent of deep submicron technologies in the age of portable electronic systems creates a moving target for CAD algorithms, which now need to reduce power as well as delay and area within the existing design methodology. This paper presents a resynthesis algorithm for logic decomposition on mapped circuits. The existing algorithm uses Huffman encoding but does not consider glitches or effects on logic depth. The proposed algorithm generalizes the Huffman encoding algorithm to minimize the switching activity of non-critical subcircuits while preserving a given logic depth. We show how to obtain a transition-optimum binary tree decomposition for an AND tree with zero gate delay. The algorithm is tested using SIS (a logic synthesizer) and Level-Map (an LUT-based FPGA low-power technology mapper) and shows 58% and 8% reductions in power consumption, respectively.
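
For intuition only, the Huffman-style flavor of such a decomposition might be sketched as below: assuming independent inputs with known 1-probabilities, a wide AND is decomposed into a binary tree by greedily pairing the least-probable signals, keeping the switching activity 2p(1-p) of intermediate nodes low. This ignores the paper's logic-depth constraint and glitch handling; all names and numbers are illustrative.

```python
import heapq

def and_tree(probs):
    """Greedily build a binary AND tree over signals with given 1-probabilities.
    Pairing the least-probable signals first keeps intermediate 1-probabilities,
    and hence switching activities 2*p*(1-p), small (independence assumed)."""
    heap = [(p, str(i)) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    total_activity = 0.0
    while len(heap) > 1:
        p1, n1 = heapq.heappop(heap)
        p2, n2 = heapq.heappop(heap)
        p = p1 * p2                          # P(output of AND = 1), independent inputs
        total_activity += 2 * p * (1 - p)    # expected transitions of the new node
        heapq.heappush(heap, (p, f"AND({n1},{n2})"))
    return heap[0][1], total_activity

tree, activity = and_tree([0.9, 0.5, 0.1, 0.7])
print(tree)       # nested AND(...) expression over input indices
print(activity)   # total estimated switching activity of the intermediate gates
```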

Security Analysis of ElGamal-Type Signature Scheme Using Integer Decomposition (정수의 분해를 이용한 ElGamal형 서명기법의 안전성 분석)

  • 이익권;김동렬
    • Journal of the Korea Institute of Information Security & Cryptology / v.14 no.2 / pp.15-22 / 2004
  • For an ElGamal-type signature scheme using a generator g of order q, it is well known that the message nonce should be chosen randomly from the interval (0, q-1) for each message to be signed. In (2), H. Kuwakado and H. Tanaka proposed a polynomial-time algorithm that recovers the private key of the signer if two signatures with message nonces $0 < k_1, k_2 \leq$ O(equation omitted) are available. Recently, R. Gallant, R. Lambert, and S. Vanstone suggested a method to improve the efficiency of elliptic curve cryptosystems using integer decomposition. In this paper, by applying the integer decomposition method to the algorithm proposed by Kuwakado and Tanaka, we extend the algorithm to work in the case $|k_1|, |k_2| \leq$ O(equation omitted) and improve the efficiency and completeness of the algorithm.
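
The kind of integer decomposition referred to (in the GLV sense) can be sketched with the partial extended Euclidean algorithm: given k modulo q, it returns r and t of size roughly sqrt(q) with r ≡ t·k (mod q). This is only the generic decomposition step with toy parameters, not the paper's key-recovery attack.

```python
from math import isqrt

def decompose(k, q):
    """Find (r, t) with r ≡ t*k (mod q) and |r|, |t| about sqrt(q),
    using the partial extended Euclidean algorithm (as in GLV-style methods)."""
    r0, r1 = q, k % q
    t0, t1 = 0, 1
    bound = isqrt(q)
    while r1 > bound:
        quot = r0 // r1
        r0, r1 = r1, r0 - quot * r1
        t0, t1 = t1, t0 - quot * t1
    return r1, t1                      # k ≡ r1 * t1^{-1} (mod q)

# usage with made-up toy parameters
q = 1_000_003
k = 123_456
r, t = decompose(k, q)
assert (r - t * k) % q == 0
print(r, t, isqrt(q))                  # |r| and |t| are on the order of sqrt(q)
```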

Linear Sub-band Decomposition based Pre-processing Algorithm for Perceptual Video Coding (지각적 동영상 부호화를 위한 선형 부 대역 분해 기반 전처리 기법)

  • Choi, Kwang Yeon;Song, Byung Cheol
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.80-87 / 2017
  • This paper proposes a pre-processing algorithm for improving perceptual video coding efficiency: an input frame is decomposed via a sub-band decomposition, and only the high-frequency band(s) with low visual sensitivity are suppressed. First, we decompose the input frame into several frequency sub-bands by a linear sub-band decomposition. Next, the high-frequency sub-band(s) rarely perceived by the human visual system (HVS) are suppressed by applying relatively small gain(s). Finally, the frame with suppressed high frequencies is compressed by a video encoder. Experimental results show no visible difference between sequences encoded with and without the proposed pre-processing. Moreover, the proposed algorithm achieves an average bit saving of 13.12% with an H.264 video encoder.
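
A minimal stand-in for the idea, using a single linear low/high split rather than the paper's particular sub-band decomposition, is sketched below; the Gaussian low-pass and the gain value are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def suppress_high_band(frame, sigma=1.5, gain=0.6):
    """Linear two-band split: the low-pass band is kept as is, the high band is
    attenuated before the frame is handed to the video encoder."""
    low = gaussian_filter(frame.astype(float), sigma=sigma)
    high = frame - low                      # high-frequency band (low visual sensitivity)
    return np.clip(low + gain * high, 0, 255).astype(np.uint8)

# usage on a made-up 8-bit luma frame
frame = (np.random.rand(64, 64) * 255).astype(np.uint8)
pre = suppress_high_band(frame)             # pre-processed frame passed to the encoder
```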

Multi-scale Decomposition tone mapping using Guided Image Filter (가이디드 이미지 필터를 이용한 다중 스케일 분할 톤 매핑 기법)

  • Gao, Ming;Jeong, Jechang
    • Journal of Broadcast Engineering / v.23 no.4 / pp.474-483 / 2018
  • In this paper, we propose a multi-scale high dynamic range (HDR) tone mapping algorithm using the guided image filter (GIF). The GIF is used to divide an image into a base layer and a detail layer, and the range of the detail layer is then reduced with a compression function to enhance the detail information of the image. However, in most cases, an image contains detail and edge information at different scales. That is to say, it is difficult to represent all detail features at a single scale, and a single-scale image decomposition method is not free from artifacts around edges. To solve these problems, a multi-scale image decomposition method is proposed. It utilizes the detail layers of several scales to determine how much edge information is preserved. Experimental results show that the proposed algorithm preserves edges better than the conventional algorithm.
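
A rough single-scale sketch of the base/detail idea with a self-guided filter is given below (box-mean implementation of the guided filter); the compression constant is an assumption, and the paper's multi-scale detail weighting is not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-3):
    """Guided image filter (He et al.) with box means; I is the guide, p the input."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def tonemap(hdr_lum, compression=0.5):
    """Edge-preserving base/detail tone mapping: compress the base layer in the
    log domain, keep the detail layer, and re-expose."""
    log_l = np.log1p(hdr_lum)
    base = guided_filter(log_l, log_l)          # self-guided: edge-preserving base layer
    detail = log_l - base                       # detail layer is left untouched
    out = np.expm1(compression * base + detail)
    return out / out.max()

hdr = np.random.rand(128, 128) * 1000.0         # made-up HDR luminance
ldr = tonemap(hdr)
```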

Space-Frequency Adaptive Image Restoration Using Vaguelette-Wavelet Decomposition (공간-주파수 적응적 영상복원을 위한 Vaguelette-Wavelet분석 기술)

  • Jun, Sin-Young;Lee, Eun-Sung;Kim, Sang-Jin;Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.6 / pp.112-122 / 2009
  • In this paper, we present a novel space-frequency adaptive image restoration approach using vaguelette-wavelet decomposition (VWD). The proposed algorithm classifies a degraded image into flat and edge regions by using the spatial information of the wavelet coefficients. To reduce noise, we perform an adaptive wavelet shrinkage process. At edge-region candidates, we adopt an entropy-based approach to estimate the noise and remove it using the relation between sub-bands. After the wavelet coefficient shrinkage process, we restore the degraded image using the VWD. The proposed algorithm can reduce noise without affecting sharp details. Experimental results show that the proposed algorithm can efficiently restore a degraded image while preserving details.
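
The shrinkage step alone can be sketched with PyWavelets as below; it omits the vaguelette part and the edge/flat classification, and the wavelet choice, decomposition level, and universal threshold are assumptions.

```python
import numpy as np
import pywt

def wavelet_shrink(img, wavelet='db4', level=2):
    """Soft-threshold the detail sub-bands of a 2-D wavelet decomposition
    (universal threshold estimated from the finest sub-band), then reconstruct."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # noise estimate from the finest diagonal detail band (median absolute deviation)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(img.size))
    new_coeffs = [coeffs[0]]                       # keep the approximation band
    for cH, cV, cD in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(c, thresh, mode='soft')
                                for c in (cH, cV, cD)))
    return pywt.waverec2(new_coeffs, wavelet)

# usage on a made-up noisy image
noisy = np.random.rand(64, 64) + np.random.normal(0, 0.1, (64, 64))
restored = wavelet_shrink(noisy)
```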