• Title/Summary/Keyword: Normalized Min-Sum method

A Study on Efficient CNU Algorithm for High Speed LDPC decoding in DVB-S2 (DVB-S2 기반 고속 LDPC 복호를 위한 효율적인 CNU 계산방식에 관한 연구)

  • Lim, Byeong-Su; Kim, Min-Hyuk; Jung, Ji-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.9 / pp.1892-1897 / 2012
  • In this paper, efficient CNU (Check Node Update) algorithms are analyzed for high-speed LDPC decoding in the DVB-S2 standard. Several CNU methods exist; among them, the MP (Min Product) method is quite often used in LDPC decoding. However, MP requires a LUT (Look-Up Table), which lies on the critical path and limits LDPC decoding speed. A new SC-NMS (Self-Corrected Normalized Min-Sum) method is proposed in this paper. NMS needs only a normalization scaling factor instead of a LUT and compensates for the overestimation of the MP approximation. In addition, the SC method gives faster convergence toward a decoded codeword: if a message changes its sign between two iterations, it is considered unreliable, and its magnitude is set to 0 to avoid propagating noisy information. The performance of SC-NMS degrades only slightly, by about 0.1 dB, compared to MP; considering computational complexity and decoding speed, the SC-NMS algorithm is the optimal choice for CNU.
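
A minimal sketch of the self-corrected normalized min-sum check node update described above, assuming a NumPy implementation; the function name, the message-array layout, and the value alpha = 0.8 are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def sc_nms_check_node_update(v2c, v2c_prev, alpha=0.8):
    """One check node update (node degree >= 2).

    v2c      : variable-to-check messages for this iteration (1-D float array)
    v2c_prev : the same messages from the previous iteration
    alpha    : normalization factor, replacing the LUT of the MP method
    """
    # Self-correction: a message whose sign flipped between two iterations
    # is unreliable, so its magnitude is erased (set to 0).
    flipped = np.sign(v2c) * np.sign(v2c_prev) < 0
    v2c = np.where(flipped, 0.0, v2c)

    mags = np.abs(v2c)
    signs = np.where(v2c >= 0, 1.0, -1.0)
    total_sign = np.prod(signs)

    # Each outgoing magnitude is the minimum over all *other* incoming
    # magnitudes: min1 everywhere except at min1's own position (min2 there).
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    c2v_mag = np.full_like(mags, min1)
    c2v_mag[order[0]] = min2

    # The scaling by alpha compensates min-sum's overestimation of sum-product.
    return alpha * total_sign * signs * c2v_mag
```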

Combined Normalized and Offset Min-Sum Algorithm for Low-Density Parity-Check Codes (LDPC 부호의 복호를 위한 정규화와 오프셋이 조합된 최소-합 알고리즘)

  • Lee, Hee-ran; Yun, In-Woo; Kim, Joon Tae
    • Journal of Broadcast Engineering / v.25 no.1 / pp.36-47 / 2020
  • Improved belief-propagation-based algorithms such as the normalized min-sum algorithm (NMSA) and the offset min-sum algorithm (OMSA) are widely used to decode LDPC (Low-Density Parity-Check) codes because they are less computationally complex and work well even at low SNR (Signal-to-Noise Ratio). However, these algorithms perform well only when an appropriate normalization factor or offset value is used. A recently proposed method that uses a CMD (Check Node Message Distribution) chart and the least-squares method has an advantage in computational complexity over other approaches to obtaining optimal coefficients; moreover, it can derive coefficients for each iteration. In this paper, we apply this method and propose an algorithm that derives a combination of normalization factor and offset value for a combined normalized and offset min-sum algorithm, further improving the decoding of LDPC codes. Simulations on the LDPC codes of the next-generation broadcasting standard ATSC 3.0 show that the combined normalized and offset min-sum algorithm using the proposed coefficients as correction coefficients achieves the best BER performance among the decoding algorithms compared.
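
The combined rule applies both corrections to the extrinsic minimum. A hedged NumPy sketch follows; alpha and beta are placeholder values, whereas the paper derives optimized coefficients from the CMD chart and least squares:

```python
import numpy as np

def combined_min_sum_update(v2c, alpha=0.8, beta=0.1):
    """Check node update with both a normalization factor (alpha) and an
    offset (beta), assuming float LLR inputs of degree >= 2."""
    mags = np.abs(v2c)
    signs = np.where(v2c >= 0, 1.0, -1.0)
    total_sign = np.prod(signs)

    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    extrinsic_min = np.full_like(mags, min1)
    extrinsic_min[order[0]] = min2

    # Scale, then subtract the offset, clamping at zero so magnitudes stay
    # non-negative; both corrections shrink the min-sum magnitude toward
    # the sum-product value.
    corrected = np.maximum(alpha * extrinsic_min - beta, 0.0)
    return total_sign * signs * corrected
```

Per-iteration coefficients, as derived in the paper, would simply index alpha and beta by the iteration counter instead of keeping them fixed.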

A FPGA Design of High Speed LDPC Decoder Based on HSS (HSS 기반의 고속 LDPC 복호기 FPGA 설계)

  • Kim, Min-Hyuk; Park, Tae-Doo; Jung, Ji-Won
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.23 no.11 / pp.1248-1255 / 2012
  • LDPC decoder architectures are generally classified into serial, parallel, and partially parallel architectures. Conventional LDPC decoding methods generally entail a large number of computations, high power consumption, and long decoding delay, so it is necessary to reduce the number of iterations and computations without performance degradation. This paper studies the horizontal shuffle scheduling (HSS) algorithm and the self-corrected normalized min-sum (SC-NMS) algorithm. As a result, the number of iterations is halved compared with the conventional algorithm, and the performance of SC-NMS is almost the same as that of sum-product (SP) decoding. Finally, this paper implements a high-speed LDPC decoder on an FPGA, achieving a decoding throughput of 816 Mbps.
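
A compact sketch of how horizontal shuffling changes the decoding schedule, assuming a NumPy parity-check matrix H and channel LLRs llr_ch (both hypothetical inputs); the check node kernel here is plain normalized min-sum, with the paper's self-correction step omitted for brevity:

```python
import numpy as np

def hss_decode(H, llr_ch, max_iter=20, alpha=0.8):
    """Layered (horizontally shuffled) normalized min-sum decoding."""
    m, n = H.shape
    rows = [np.flatnonzero(H[i]) for i in range(m)]
    c2v = [np.zeros(len(r)) for r in rows]     # check-to-variable messages
    post = llr_ch.astype(float)                # posterior LLRs

    for _ in range(max_iter):
        for i in range(m):                     # one "layer" per check row
            cols = rows[i]
            v2c = post[cols] - c2v[i]          # remove old check contribution

            mags = np.abs(v2c)
            signs = np.where(v2c >= 0, 1.0, -1.0)
            order = np.argsort(mags)
            min1, min2 = mags[order[0]], mags[order[1]]
            new_mag = np.full_like(mags, min1)
            new_mag[order[0]] = min2
            new_c2v = alpha * np.prod(signs) * signs * new_mag

            post[cols] = v2c + new_c2v         # immediate posterior update
            c2v[i] = new_c2v

        hard = (post < 0).astype(int)          # LLR convention: > 0 -> bit 0
        if not np.any((H @ hard) % 2):         # all parity checks satisfied
            break
    return hard
```

Because each row consumes posteriors already refreshed by earlier rows in the same sweep, information propagates roughly twice as fast as in a flooding schedule, which matches the halved iteration count reported above.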

Evaluation of Classifiers Performance for Areal Features Matching (면 객체 매칭을 위한 판별모델의 성능 평가)

  • Kim, Jiyoung; Kim, Jung Ok; Yu, Kiyun; Huh, Yong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.31 no.1 / pp.49-55 / 2013
  • In this paper, we propose a classifier for matching different spatial data sets, applying classifier-performance evaluation techniques from data mining and biometrics. For this, we calculated distances between pairs of candidate features for the matching criteria and normalized the distances by the Min-Max method and the Tanh (TH) method. We defined classifiers in which shape similarity is derived by fusing these similarities using the CRiteria Importance Through Intercriteria correlation (CRITIC) method, the Matcher Weighting method, and the Simple Sum (SS) method. Evaluating classifier performance with the Precision-Recall (PR) curve and the area under the PR curve (AUC-PR), we confirmed that the classifier combining TH normalization with the SS method achieves the highest AUC-PR value, 0.893. Therefore, to match different spatial data sets, it is appropriate to use a classifier in which the matching-criteria distances are normalized by the TH method and shape similarity is calculated by the SS method.
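
A small sketch of the normalization and fusion steps, assuming NumPy; the TH formula follows the common biometric score-normalization form, and the column layout of the distance matrix is a hypothetical choice, not the paper's exact criteria:

```python
import numpy as np

def minmax_norm(d):
    """Min-Max normalization of a distance column to [0, 1]."""
    return (d - d.min()) / (d.max() - d.min())

def tanh_norm(d):
    """Tanh (TH) normalization, as commonly defined in score fusion."""
    return 0.5 * (np.tanh(0.01 * (d - d.mean()) / d.std()) + 1.0)

def simple_sum_similarity(distances):
    """Shape similarity by the Simple Sum (SS) rule after TH normalization.

    distances : (pairs x criteria) array of raw distances between candidate
                feature pairs; each column is one matching criterion.
    """
    normed = np.apply_along_axis(tanh_norm, 0, distances)
    # Convert normalized distances to similarities, then fuse with equal
    # weights (the SS rule); CRITIC or Matcher Weighting would instead
    # assign data-driven weights per criterion.
    return (1.0 - normed).mean(axis=1)
```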

A Study on Multi-modal Near-IR Face and Iris Recognition on Mobile Phones (휴대폰 환경에서의 근적외선 얼굴 및 홍채 다중 인식 연구)

  • Park, Kang-Ryoung; Han, Song-Yi; Kang, Byung-Jun; Park, So-Young
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.2 / pp.1-9 / 2008
  • As the security requirements of mobile phones have increased, there has been extensive research on authentication using a single biometric feature (e.g., an iris, a fingerprint, or a face image). Due to the limitations of uni-modal biometrics, we propose a method that combines face and iris images to improve accuracy in mobile environments. This paper presents four advantages and contributions over previous research. First, to capture both face and iris images quickly and simultaneously, we use the phone's built-in conventional megapixel camera, modified to capture NIR (Near-InfraRed) face and iris images. Second, to increase the authentication accuracy of face and iris, we propose a score-level fusion method based on an SVM (Support Vector Machine). Third, to reduce the classification complexity of the SVM and the intra-class variation of the face and iris data, we normalize the input face and iris data, respectively. For the face, an NIR illuminator and an NIR-passing filter on the camera reduce the illumination variance caused by environmental visible lighting, and the saturated facial region produced by the NIR illuminator is normalized by a low-complexity logarithmic algorithm suited to mobile phones. For the iris, transformation into polar coordinates and iris-code shifting are used to obtain robust identification accuracy irrespective of the image-capture conditions. Fourth, to increase processing speed on the mobile phone, we use integer-based face and iris authentication algorithms. Experiments were conducted with face and iris images captured by the megapixel camera of a mobile phone. The results showed that the authentication accuracy using the SVM was better than that of the uni-modal (face or iris), SUM, MAX, MIN, and weighted-SUM rules.
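
A toy illustration of the score-level fusion idea, assuming scikit-learn; the matching scores below are synthetic values invented for the demo, not the paper's data:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic (face_score, iris_score) pairs: label 1 = genuine, 0 = impostor.
rng = np.random.default_rng(0)
genuine = rng.normal(loc=[0.8, 0.75], scale=0.1, size=(200, 2))
impostor = rng.normal(loc=[0.4, 0.35], scale=0.1, size=(200, 2))
scores = np.vstack([genuine, impostor])
labels = np.array([1] * 200 + [0] * 200)

# Score-level fusion: the SVM learns a decision boundary in the 2-D
# (face, iris) score space rather than applying a fixed SUM/MAX/MIN rule.
fusion_svm = SVC(kernel="rbf").fit(scores, labels)

def authenticate(face_score, iris_score):
    """Accept or reject one attempt from its two matching scores."""
    return bool(fusion_svm.predict([[face_score, iris_score]])[0])
```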