• Title/Summary/Keyword: 최우추정방법 (maximum likelihood estimation method)

Search Results: 46

A SPEC-T Viterbi decoder implementation with reduced-comparison operation (비교 연산을 개선한 SPEC-T 비터비 복호기의 구현)

  • Bang, Seung-Hwa;Rim, Chong-Suck
    • Journal of the Institute of Electronics Engineers of Korea SD / v.44 no.7 s.361 / pp.81-89 / 2007
  • The Viterbi decoder, which employs the maximum likelihood decoding method, is a critical component of forward error correction in digital communication systems. However, lowering the power consumption of the Viterbi decoder is difficult, since the number of paths calculated equals the number of distinct states of the decoder and the decoder uses the trace-back method. In this paper, we propose a method that minimizes the number of operations performed by the comparator deployed in a SPEC-T Viterbi decoder implementation. The proposed comparator was applied to the ACSU (Add-Compare-Select Unit) and MPMSU (Minimum Path Metric Search Unit) modules of the decoder. The proposed ACS and MPMS schemes reduce power consumption by 10.7% and 11.5%, respectively, compared to the conventional schemes, and by 6% and 1.5% compared to the SPEC-T schemes. In both experiments, a threshold value of 26 was applied.
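The add-compare-select (ACS) recursion whose comparisons the paper reduces can be sketched generically. This is a textbook ACS step in Python, not the SPEC-T design; the function name and data layout are illustrative.

```python
# Minimal add-compare-select (ACS) step of a Viterbi decoder.
# Each new state's path metric is the smaller of the two candidate
# metrics reaching it; the comparison performed here for every state
# is the operation that comparison-reducing schemes try to avoid.

def acs_step(path_metrics, branch_metrics, predecessors):
    """One trellis stage: returns new path metrics and survivor bits.

    path_metrics   -- metric per state at the previous stage
    branch_metrics -- dict mapping (prev_state, next_state) to a metric
    predecessors   -- per state, the pair of states that can reach it
    """
    new_metrics = []
    survivors = []
    for state, (p0, p1) in enumerate(predecessors):
        m0 = path_metrics[p0] + branch_metrics[(p0, state)]
        m1 = path_metrics[p1] + branch_metrics[(p1, state)]
        if m0 <= m1:            # the "compare" that dominates ACS power
            new_metrics.append(m0)
            survivors.append(0)
        else:
            new_metrics.append(m1)
            survivors.append(1)
    return new_metrics, survivors
```

The survivor bits record which predecessor won each comparison; a trace-back unit later follows them backwards to recover the decoded sequence.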

Parameter Estimation and Analysis of Extreme Highest Tide Level in Marginal Seas around Korea (한국 연안 최극 고조위의 매개변수 추정 및 분석)

  • Jeong, Shin-Taek;Kim, Jeong-Dae;Ko, Dong-Hui;Yoon, Gil-Lim
    • Journal of Korean Society of Coastal and Ocean Engineers / v.20 no.5 / pp.482-490 / 2008
  • For the design of a coastal or harbor structure, one of the most important environmental factors is the appropriate extreme highest tide level condition. In particular, the distribution of extreme highest tide levels is essential for reliability design. In this paper, 23 sets of extreme highest tide level data obtained from the National Oceanographic Research Institute (NORI) were analyzed. The probability distributions considered were the Generalized Extreme Value (GEV), Gumbel, and Weibull distributions. For each distribution, three parameter estimation methods were applied: the method of moments, maximum likelihood, and probability weighted moments. Chi-square and Kolmogorov-Smirnov goodness-of-fit tests were performed, and the assumed distribution was accepted at the 95% confidence level. The Gumbel distribution, which best fit 22 of the tidal stations, was selected as the most probable parent distribution, and optimally estimated parameters and extreme highest tide levels for various return periods were presented. The extreme values for Incheon, Cheju, Yeosu, Pusan, and Mukho estimated by Shim et al. (1992) are lower than those of this study.
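Of the three estimation methods named in the abstract, the method of moments for the Gumbel distribution has a closed form and is simple to sketch. The following minimal Python illustration (not the paper's code) also computes the return-period quantile used for design tide levels:

```python
import math
import statistics

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_method_of_moments(sample):
    """Estimate Gumbel location (mu) and scale (beta) from a sample
    via the method of moments: beta = s*sqrt(6)/pi, mu = mean - gamma*beta."""
    mean = statistics.fmean(sample)
    s = statistics.stdev(sample)          # sample standard deviation
    beta = s * math.sqrt(6) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

def gumbel_return_level(mu, beta, T):
    """Level with return period T: the (1 - 1/T) quantile of the Gumbel CDF."""
    p = 1.0 - 1.0 / T
    return mu - beta * math.log(-math.log(p))
```

Maximum likelihood and probability weighted moments require iterative or order-statistic computations; the moment estimators above are often used as their starting values.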

Trace-Back Viterbi Decoder with Sequential State Transition Control (순서적 역방향 상태천이 제어에 의한 역추적 비터비 디코더)

  • 정차근
    • Journal of the Institute of Electronics Engineers of Korea TC / v.40 no.11 / pp.51-62 / 2003
  • This paper presents novel survivor memory management and decoding techniques with sequential backward state transition control in the trace-back Viterbi decoder. The Viterbi algorithm is a maximum likelihood decoding scheme that estimates the likelihood of encoder states for channel error detection and correction. The scheme is applied to a broad range of digital communication problems such as intersymbol interference removal and channel equalization. To achieve area-efficient, high-throughput VLSI design of the Viterbi decoder, whose operation is inherently recursive, further research is required on simple, systematic parallel ACS architectures and survivor memory management. To address this problem, this paper describes a progressive decoding algorithm with sequential backward state transition control in the trace-back Viterbi decoder. Compared to conventional trace-back decoding techniques, the total memory required can be greatly reduced by the proposed method. Furthermore, the proposed method can be implemented as a simple pipelined, systolic-array-type architecture. No peripheral logic circuit for memory access control is required, and memory access bandwidth can be reduced. Therefore, the proposed method offers high area efficiency and low power consumption with high throughput. Finally, decoding results for received data with channel noise and application results are provided to evaluate the efficiency of the proposed method.
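The trace-back decoding the paper improves can be sketched generically. The following minimal hard-decision Viterbi decoder, for the standard rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal), illustrates plain trace-back over a survivor memory; it is not the proposed sequential-state-transition scheme, and real designs trace back in bounded blocks rather than over the whole sequence.

```python
G = (0b111, 0b101)  # generator polynomials (7, 5 octal)

def encode(bits):
    """Rate-1/2 convolutional encoder; returns one (c0, c1) pair per bit."""
    state = 0
    out = []
    for b in bits:
        reg = (b << 2) | state
        out.append((bin(reg & G[0]).count("1") & 1,
                    bin(reg & G[1]).count("1") & 1))
        state = reg >> 1
    return out

def viterbi_decode(symbols):
    """Hard-decision Viterbi decoding with a full trace-back survivor memory."""
    n_states = 4
    INF = float("inf")
    metrics = [0] + [INF] * (n_states - 1)   # start in the all-zero state
    survivors = []                           # survivor memory: one row per step
    for r0, r1 in symbols:
        new = [INF] * n_states
        row = [0] * n_states
        for prev in range(n_states):
            if metrics[prev] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | prev
                c0 = bin(reg & G[0]).count("1") & 1
                c1 = bin(reg & G[1]).count("1") & 1
                m = metrics[prev] + (c0 != r0) + (c1 != r1)  # Hamming metric
                nxt = reg >> 1
                if m < new[nxt]:
                    new[nxt] = m
                    row[nxt] = prev
        survivors.append(row)
        metrics = new
    # Trace back from the best final state through the survivor memory.
    state = metrics.index(min(metrics))
    bits = []
    for row in reversed(survivors):
        prev = row[state]
        bits.append((state >> 1) & 1)   # input bit that caused prev -> state
        state = prev
    return bits[::-1]
```

The survivor memory here grows with the message length, which is exactly the cost that block-wise trace-back and the memory management schemes discussed above aim to bound.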

A Development of Regional Frequency Model Based on Hierarchical Bayesian Model (계층적 Bayesian 모형 기반 지역빈도해석 모형 개발)

  • Kwon, Hyun-Han;Kim, Jin-Young;Kim, Oon-Ki;Lee, Jeong-Ju
    • Journal of Korea Water Resources Association / v.46 no.1 / pp.13-24 / 2013
  • The main objective of this study was to develop a new regional frequency analysis model based on a hierarchical Bayesian model that allows us to better estimate and quantify model parameters as well as their associated uncertainties. A Monte Carlo experiment was set up to verify the proposed regional frequency analysis. The proposed hierarchical Bayesian regional frequency analysis outperformed the existing L-moment based regional frequency analysis in reducing the biases of the model parameters; in particular, the bias decreases markedly with increasing return period. The proposed model was applied to six weather stations in Jeollabuk-do and compared with the existing L-moment approach. This study also illustrated the shrinkage of the model parameters, a typical behavior of hierarchical Bayesian models. The results of the case study show that the proposed model has the potential to obtain reliable parameter estimates and to quantify their uncertainties.
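The L-moment approach that the study uses as a baseline starts from sample L-moments. A minimal sketch (illustrative, not the study's code) of the first two sample L-moments computed via probability weighted moments:

```python
def sample_l_moments(data):
    """First two sample L-moments (l1, l2) via probability weighted
    moments: l1 = b0 and l2 = 2*b1 - b0, where b_r is computed from
    the ascending order statistics of the sample."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    # b1 weights the i-th order statistic (0-indexed) by i/(n-1)
    b1 = sum((i / (n - 1)) * xi for i, xi in enumerate(x)) / n
    return b0, 2 * b1 - b0
```

l1 is the sample mean and l2 is half the Gini mean difference; in regional frequency analysis, ratios of higher L-moments (L-CV, L-skewness) are then pooled across stations to fit a regional distribution.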

A probabilistic fragility evaluation method of a RC box tunnel subjected to earthquake loadings (지진하중을 받는 RC 박스터널의 확률론적 취약도 평가기법)

  • Huh, Jungwon;Le, Thai Son;Kang, Choonghyun;Kwak, Kiseok;Park, Inn-Joon
    • Journal of Korean Tunnelling and Underground Space Association / v.19 no.2 / pp.143-159 / 2017
  • A probabilistic fragility assessment procedure is developed in this paper to predict the risk of damage from seismic loading to a two-cell RC box tunnel. In particular, the paper focuses on establishing a simplified methodology for deriving fragility curves, an indispensable ingredient of seismic fragility assessment. To account for the soil-structure interaction (SSI) effect, the ground response acceleration method for buried structures (GRAMBS) is used to estimate the dynamic response of the structures. In addition, the damage states of the tunnel are identified by pushover analyses, and the Latin Hypercube sampling (LHS) technique is employed to account for the uncertainties in the design variables. To illustrate the concepts described, a numerical analysis is conducted and fragility curves are developed for a large set of artificially generated ground motions satisfying a design spectrum. The seismic fragility curves are represented by a two-parameter lognormal distribution function whose parameters, the median and log-standard deviation, are estimated by the maximum likelihood estimation (MLE) method.
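The final MLE step can be sketched generically: fit a two-parameter lognormal fragility curve P(damage | im) = Φ(ln(im/θ)/β) to binary damage outcomes by maximizing the Bernoulli log-likelihood. The coarse grid search below stands in for a proper numerical optimizer, and all names and data are illustrative, not the paper's:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fit_fragility_mle(ims, damaged, thetas, betas):
    """Grid-search MLE for the lognormal fragility parameters.

    ims     -- intensity measure of each analysis case (e.g. PGA)
    damaged -- 1 if the case reached the damage state, else 0
    thetas  -- candidate medians; betas -- candidate log-std deviations
    """
    best = (None, None, -math.inf)
    for theta in thetas:
        for beta in betas:
            ll = 0.0
            for im, y in zip(ims, damaged):
                p = norm_cdf(math.log(im / theta) / beta)
                p = min(max(p, 1e-12), 1 - 1e-12)   # guard log(0)
                ll += math.log(p) if y else math.log(1.0 - p)
            if ll > best[2]:
                best = (theta, beta, ll)
    return best[:2]
```

The fitted median θ is the intensity with a 50% probability of damage, and β controls how steeply the curve rises around it.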

The Study for Performance Analysis of Software Reliability Model using Fault Detection Rate based on Logarithmic and Exponential Type (로그 및 지수형 결함 발생률에 따른 소프트웨어 신뢰성 모형에 관한 신뢰도 성능분석 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.3 / pp.306-311 / 2016
  • Software reliability is an important issue in the software development process. Infinite-failure NHPP software reliability models in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, a software reliability model with logarithmic and exponential fault detection rates, based on observations from the software product testing process, was studied. A model that adds a new-fault probability to the widely used Goel-Okumoto model was presented; when the software is corrected or modified, a finite-failure non-homogeneous Poisson process model applies. To analyze the software reliability model with a time-dependent fault detection rate, the parameters were estimated by maximum likelihood from inter-failure time data. It was confirmed that the logarithmic and exponential fault detection models are also efficient in terms of reliability (the coefficient of determination is 80% or more) and can be used as alternatives to conventional models. These results can help software developers identify failure modes by considering the life distribution based on prior knowledge of the software.
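The maximum likelihood step for the underlying Goel-Okumoto NHPP can be sketched as follows. This fits the standard mean value function m(t) = a(1 - e^(-bt)) to failure times observed on [0, T]; it is an illustration of the baseline model, not the paper's extended model, and the search bracket for b is an assumption.

```python
import math

def goel_okumoto_mle(failure_times, T):
    """MLE of (a, b) for the Goel-Okumoto NHPP with mean value
    function m(t) = a*(1 - exp(-b*t)), given failure times on [0, T].
    a is profiled out in closed form; b is found by golden-section
    search on the profile log-likelihood."""
    n = len(failure_times)
    s = sum(failure_times)

    def profile_ll(b):
        a = n / (1.0 - math.exp(-b * T))          # a-hat given b
        return n * math.log(a * b) - b * s - a * (1.0 - math.exp(-b * T))

    lo, hi = 1e-6, 10.0 / T                       # assumed bracket for b
    phi = (math.sqrt(5) - 1) / 2
    for _ in range(200):                          # golden-section maximization
        m1 = hi - phi * (hi - lo)
        m2 = lo + phi * (hi - lo)
        if profile_ll(m1) < profile_ll(m2):
            lo = m1
        else:
            hi = m2
    b = (lo + hi) / 2
    a = n / (1.0 - math.exp(-b * T))
    return a, b
```

Here a estimates the total expected number of faults and b the per-fault detection rate; by construction the fitted m(T) equals the observed failure count.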