• Title/Summary/Keyword: Log-Likelihood Ratio (로그 우도비)

19 search results

Low-Complexity Soft-MIMO Detection Algorithm Based on Ordered Parallel Tree-Search Using Efficient Node Insertion (효율적인 노드 삽입을 이용한 순서화된 병렬 트리-탐색 기반 저복잡도 연판정 다중 안테나 검출 알고리즘)

  • Kim, Kilhwan; Park, Jangyong; Kim, Jaeseok
    • The Journal of Korean Institute of Communications and Information Sciences / v.37A no.10 / pp.841-849 / 2012
  • This paper proposes a low-complexity soft-output multiple-input multiple-output (soft-MIMO) detection algorithm that achieves soft-output maximum-likelihood (soft-ML) performance under the max-log approximation. The proposed algorithm is based on a parallel tree-search (PTS) that applies channel ordering via a sorted-QR decomposition (SQRD) with an altered sort order. The empty-set problem that can occur in the calculation of the log-likelihood ratio (LLR) for each bit is solved by inserting additional nodes at each search level. Since, among the nodes whose bit value is opposite to that of a selected node, only the closest one is inserted, the proposed node-insertion scheme is very efficient in terms of computational complexity. The computational complexity of the proposed algorithm is approximately 37-74% of that of existing algorithms, and simulation results for a 4×4 system show a performance degradation of less than 0.1 dB.
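As a point of reference, the max-log LLR that such tree searches approximate can be computed by exhaustive enumeration; the sketch below (a hypothetical BPSK example, not the paper's PTS algorithm) shows the metric whose empty-set problem the node insertion addresses:

```python
import numpy as np
from itertools import product

def max_log_llr(y, H, sigma2):
    """Max-log LLR per bit by exhaustive search (BPSK: bit 0 -> -1, bit 1 -> +1).

    LLR_k = (min_{s: b_k=0} ||y - Hs||^2 - min_{s: b_k=1} ||y - Hs||^2) / sigma2,
    so a positive value favors bit 1. A pruned tree search can leave one of the
    two candidate sets empty (the empty-set problem); full enumeration cannot.
    """
    nt = H.shape[1]
    cands = list(product([0, 1], repeat=nt))
    dist = {c: np.sum(np.abs(y - H @ (2 * np.array(c) - 1)) ** 2) for c in cands}
    llr = np.empty(nt)
    for k in range(nt):
        d0 = min(d for c, d in dist.items() if c[k] == 0)
        d1 = min(d for c, d in dist.items() if c[k] == 1)
        llr[k] = (d0 - d1) / sigma2
    return llr

llr = max_log_llr(np.array([1.0, -1.0]), np.eye(2), sigma2=1.0)  # → [ 4., -4.]
```

With a noiseless 2×2 identity channel the sign of each LLR matches the transmitted bit; the paper's contribution is reaching near-ML LLRs without this exponential enumeration.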

LDPC-LDPC Product Code Using Modified Log-likelihood Ratio for Holographic Storage System (홀로그래픽 저장장치를 위한 수정된 로그-유사도비를 이용한 LDPC-LDPC 곱부호)

  • Jeong, Seongkwon; Lee, Jaejin
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.6 / pp.17-21 / 2017
  • Since holographic data storage offers high recording density and a high data transfer rate, it is a candidate for next-generation storage systems. However, holographic data storage systems are affected by interpage interference and two-dimensional intersymbol interference, and burst errors occur due to physical shocks. In this paper, we propose an LDPC product code that uses a modified log-likelihood ratio and extrinsic information to correct burst errors and improve the performance of holographic data storage. The performance of the proposed LDPC product code is 0.5 dB better than that of the conventional LDPC code.
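The abstract does not give the modified LLR itself; as background, a minimal sketch of the standard inputs such an iterative product decoder works with (BPSK over AWGN, hypothetical values):

```python
import numpy as np

def channel_llr(y, sigma2):
    # BPSK mapping 0 -> +1, 1 -> -1 over AWGN:
    # log P(b=0 | y) / P(b=1 | y) = 2*y / sigma^2
    return 2.0 * np.asarray(y, dtype=float) / sigma2

def decoder_input(l_channel, l_extrinsic):
    # In a product-code iteration, each component LDPC decoder receives the
    # channel LLR plus the extrinsic information left by the other decoder.
    return l_channel + l_extrinsic

l_ch = channel_llr([0.9, -1.2], sigma2=0.5)   # → [ 3.6, -4.8]
```

The paper's modification acts on these quantities to cope with burst errors; the exact modified form is given in the paper.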

Improvement of Rating Curve Fitting Considering Variance Function with Pseudo-likelihood Estimation (의사우도추정법에 의한 분산함수를 고려한 수위-유량 관계 곡선 산정법 개선)

  • Lee, Woo-Seok; Kim, Sang-Ug; Chung, Eun-Sung; Lee, Kil-Seong
    • Journal of Korea Water Resources Association / v.41 no.8 / pp.807-823 / 2008
  • This paper presents a technique for estimating the parameters of a discharge rating curve. In typical practice, the original non-linear rating curve is transformed into a simple linear regression model by log-transforming the measurements, without examining the effect of the log transformation. A pseudo-likelihood estimation model is developed in this study to deal with the heteroscedasticity of residuals in the original non-linear model. The parameters of the rating curves and the variance functions of the errors are estimated simultaneously by the pseudo-likelihood estimation (P-LE) method. Simulated annealing, a global optimization technique, is adopted to minimize the negative log-likelihood of the weighted residuals. The P-LE model was then applied to a hypothetical site where stage-discharge data were generated with various errors incorporated. Results of the P-LE model show smaller errors and narrower confidence intervals than those of the common log-transformed linear least-squares (LT-LR) model. The water-level threshold for segmenting the rating curve is also estimated within the P-LE procedure using the Heaviside function. Finally, the performance of the conventional log-transformed linear regression and of the developed P-LE model is computed and compared. After the statistical simulation, the developed method is applied to real data sets from five gauge stations in the Geum River basin. The results suggest that the developed strategy can be applied at real sites to determine weights that account for the error distributions of the observed discharge data.
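The iteration between curve fitting and variance-function weighting can be sketched as follows (hypothetical data, variance exponent assumed known, and ordinary weighted least squares standing in for the paper's simulated-annealing optimizer):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical stage-discharge measurements (h in m, Q in m^3/s)
h = np.array([0.5, 0.8, 1.2, 1.7, 2.3, 3.0, 3.8])
Q = np.array([2.1, 5.0, 11.8, 24.5, 46.0, 80.2, 128.0])

def rating(h, a, h0, b):
    # power-law rating curve Q = a * (h - h0)^b
    return a * (h - h0) ** b

bounds = ([0.1, -1.0, 0.5], [100.0, 0.45, 4.0])   # keep h - h0 > 0

# initial unweighted fit
p, _ = curve_fit(rating, h, Q, p0=[5.0, 0.0, 2.0], bounds=bounds)

# pseudo-likelihood-style reweighting: error sd taken as mean^delta
delta = 0.5   # assumed known here; the paper estimates it jointly
for _ in range(3):
    w = rating(h, *p) ** delta
    p, _ = curve_fit(rating, h, Q, p0=p, sigma=w, bounds=bounds)
```

The weights downweight high-flow measurements, whose errors are larger, instead of distorting them through a blanket log transform.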

A Study of Option Pricing Using Variance Gamma Process (Variance Gamma 과정을 이용한 옵션 가격의 결정 연구)

  • Lee, Hyun-Eui; Song, Seong-Joo
    • The Korean Journal of Applied Statistics / v.25 no.1 / pp.55-66 / 2012
  • Option pricing models using Lévy processes have been suggested as an alternative to the Black-Scholes model, since empirical studies showed that the Black-Scholes model cannot reflect the movement of underlying assets. In this paper, we investigate whether the Variance Gamma model reflects the movement of underlying assets in the Korean stock market better than the Black-Scholes model. For this purpose, we estimate parameters and perform likelihood ratio tests using KOSPI 200 data, based on the density of the log return and the option pricing formula proposed in Madan et al. (1998). We also calculate statistics to compare the models and examine, through regression analysis, whether the volatility smile is corrected. The results show that the option price estimated under the Variance Gamma process is closer to the market price than the Black-Scholes price; however, the Variance Gamma model still cannot resolve the volatility smile phenomenon.
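A Monte Carlo sketch of a Variance Gamma option price (illustrative parameters, not the paper's KOSPI 200 estimates; the paper uses the closed-form formula of Madan et al. (1998)):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical contract and VG parameters
S0, K, r, T = 100.0, 100.0, 0.03, 0.5
sigma, nu, theta = 0.2, 0.3, -0.1
n = 200_000

# martingale correction so that E[S_T] = S0 * exp(r * T)
omega = np.log(1.0 - theta * nu - 0.5 * sigma**2 * nu) / nu

# VG increment: Brownian motion with drift, time-changed by a gamma
# subordinator with mean T and variance nu * T
g = rng.gamma(shape=T / nu, scale=nu, size=n)
x = theta * g + sigma * np.sqrt(g) * rng.standard_normal(n)

ST = S0 * np.exp((r + omega) * T + x)
call = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
```

The extra parameters ν (kurtosis) and θ (skewness) are what let the VG density fit fat-tailed log returns better than the Gaussian underlying Black-Scholes.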

Voice-Phishing Detection Algorithm Based on Minimum Classification Error Technique (최소 분류 오차 기법을 이용한 보이스 피싱 검출 알고리즘)

  • Lee, Kye-Hwan; Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.3 / pp.138-142 / 2009
  • We propose an effective voice-phishing detection algorithm based on discriminative weight training. Voice-phishing detection is performed with a Gaussian mixture model (GMM) incorporating the minimum classification error (MCE) technique. The MCE technique is applied to the log-likelihood computed from Selectable Mode Vocoder (SMV) parameters extracted directly from the decoding process in the mobile phone. Experimental results show that the proposed approach is effective for voice-phishing detection.
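Schematically, such a detector scores codec-derived features under two class models and decides by the log-likelihood ratio; a toy one-dimensional version (hypothetical models and feature values, without the MCE-trained discriminative weights):

```python
import numpy as np

def gmm_loglik(x, weights, means, vars_):
    """Total log-likelihood of 1-D samples under a Gaussian mixture."""
    x = np.asarray(x, dtype=float)[:, None]
    comp = (np.log(weights)
            - 0.5 * np.log(2.0 * np.pi * vars_)
            - 0.5 * (x - means) ** 2 / vars_)
    m = comp.max(axis=1, keepdims=True)          # log-sum-exp per frame
    return float((m[:, 0] + np.log(np.exp(comp - m).sum(axis=1))).sum())

# two hypothetical class models
phish = dict(weights=np.array([0.5, 0.5]), means=np.array([-1.0, 1.0]),
             vars_=np.array([0.5, 0.5]))
normal = dict(weights=np.array([1.0]), means=np.array([0.0]),
              vars_=np.array([1.0]))

frames = np.array([0.9, 1.1, -1.0])              # hypothetical feature frames
llr = gmm_loglik(frames, **phish) - gmm_loglik(frames, **normal)
decision = "phishing" if llr > 0 else "normal"
```

MCE training additionally adjusts the decision rule to minimize classification errors rather than just fitting each class density.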

Robust Speech Endpoint Detection in Noisy Environments for HRI (Human-Robot Interface) (인간로봇 상호작용을 위한 잡음환경에 강인한 음성 끝점 검출 기법)

  • Park, Jin-Soo; Ko, Han-Seok
    • The Journal of the Acoustical Society of Korea / v.32 no.2 / pp.147-156 / 2013
  • In this paper, a new speech endpoint detection method for moving robot platforms in noisy environments is proposed. In the conventional method, the endpoint of speech is obtained by applying an edge detection filter that finds abrupt changes in the feature domain. However, since the frame-energy feature is unstable in noisy environments, it is difficult to find the endpoint of speech accurately. Therefore, a novel feature extraction method based on the twice-iterated fast Fourier transform (TIFFT) and statistical models of speech is proposed. The proposed feature is fed to the edge detection filter for effective detection of the speech endpoint. Experiments show a substantial improvement over the conventional method.
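The TIFFT feature, as its name suggests, applies a second FFT to the magnitude spectrum of each frame; a minimal sketch of that feature extraction (an assumed interpretation, with hypothetical sampling details):

```python
import numpy as np

def tifft_feature(frame):
    # first FFT: magnitude spectrum of the frame
    spec = np.abs(np.fft.rfft(frame))
    # second FFT over the spectrum: the harmonic structure of voiced speech
    # concentrates into peaks, which is more robust than raw frame energy
    return np.abs(np.fft.rfft(spec))

# a voiced-like test frame: two harmonics at 100 Hz and 200 Hz, fs = 8 kHz
n = np.arange(512)
voiced = (np.sin(2 * np.pi * 100 * n / 8000)
          + 0.5 * np.sin(2 * np.pi * 200 * n / 8000))
feat = tifft_feature(voiced)
```

The edge detection filter of the conventional method is then run over this feature rather than over frame energy.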

The Assessing Comparative Study for Statistical Process Control of Software Reliability Model Based on Logarithmic Learning Effects (대수형 학습효과에 근거한 소프트웨어 신뢰모형에 관한 통계적 공정관리 비교 연구)

  • Kim, Kyung-Soo; Kim, Hee-Cheul
    • Journal of Digital Convergence / v.11 no.12 / pp.319-326 / 2013
  • There are many software reliability models that are based on the times at which errors occur during the debugging of software. This paper considers learning factors (the testing manager's prior experience in locating errors precisely) together with factors for errors found automatically, and compares the resulting models. It is shown that asymptotic likelihood inference is possible for software reliability models based on an infinite-failure model and non-homogeneous Poisson processes (NHPP). Statistical process control (SPC) can monitor forecasts of software failures and thereby contribute significantly to the improvement of software reliability; control charts are widely used for software process control in the software industry. In this paper, we propose a control mechanism based on an NHPP whose mean value function has a logarithmic-hazard learning-effect property.
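One standard logarithmic NHPP is the Musa-Okumoto model, whose mean value function grows logarithmically as learning accumulates; a sketch of that mean value function with Shewhart-style 3-sigma control limits (the paper's exact model and limits may differ):

```python
import numpy as np

def mean_value(t, lam0, theta):
    # Musa-Okumoto logarithmic NHPP: m(t) = ln(1 + lam0 * theta * t) / theta
    # (the intensity lam0 / (1 + lam0 * theta * t) decays as errors are found)
    return np.log1p(lam0 * theta * t) / theta

def control_limits(t, lam0, theta):
    # 3-sigma limits for a Poisson count with mean m(t); observed cumulative
    # failure counts falling outside signal an out-of-control process
    m = mean_value(t, lam0, theta)
    s = np.sqrt(m)
    return m - 3.0 * s, m + 3.0 * s
```

Plotting observed cumulative failures against these limits is the control-chart mechanism the abstract describes.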

On the Spectral Efficient Physical-Layer Network Coding Technique Based on Spatial Modulation (효율적 주파수사용을 위한 공간변조 물리계층 네트워크 코딩기법 제안)

  • Kim, Wan Ho; Lee, Woongsup; Jung, Bang Chul; Park, Jeonghong
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.5 / pp.902-910 / 2016
  • Recently, the volume of mobile data traffic has increased exponentially due to the emergence of various mobile services, and various new technologies have been devised to cope with this increase. In particular, two-way relay communication, in which two nodes transfer data simultaneously through a relay node, has gained much interest for its ability to improve spectral efficiency. In this paper, we analyze SM-PNC, which combines physical-layer network coding (PNC) and spatial modulation (SM) in a two-way relay environment. Log-likelihood ratio (LLR) detection is considered, and both separate decoding and direct decoding are taken into account in the performance analysis. The evaluation shows that the bit error rate of the proposed scheme improves on that of the conventional PNC scheme, especially when the SNR is high and the number of antennas is large.

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong; Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in the volatility of stock market returns. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1487 observations; 1187 days were used to train the suggested GARCH models and the remaining 300 days as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel, however, shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted direction does not change, hold the existing position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic, because historical volatility values cannot themselves be traded, but the simulation results are still meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based ones in the test period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for the SVR-based version; MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3%. The linear kernel yields higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also trades more frequently. This study has some limitations: our models are based solely on SVR, and other artificial intelligence models should be explored for better performance; we also do not consider costs incurred in the trading process, including brokerage commissions and slippage.
The IVTS trading performance is not fully realistic, since historical volatility values are used as the trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can provide better information for stock market investors.
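The MLE baseline discussed above can be sketched for a plain GARCH(1,1) with Gaussian innovations (simulated data; the paper additionally covers E-GARCH, GJR-GARCH, and the SVR-based alternative):

```python
import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1):
       sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return 1e10                      # penalty outside the stationary region
    s2 = np.empty_like(r)
    s2[0] = r.var()                      # initialize with the sample variance
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t-1] ** 2 + beta * s2[t-1]
    return 0.5 * np.sum(np.log(2.0 * np.pi * s2) + r ** 2 / s2)

# simulate a series with known parameters, then re-estimate them by MLE
rng = np.random.default_rng(0)
omega0, alpha0, beta0 = 0.05, 0.10, 0.85   # unconditional variance = 1.0
n, s2 = 2000, 1.0
r = np.empty(n)
for t in range(n):
    s2 = omega0 + alpha0 * (r[t-1] ** 2 if t else 1.0) + beta0 * s2
    r[t] = np.sqrt(s2) * rng.standard_normal()

fit = minimize(garch11_nll, x0=[0.1, 0.1, 0.8], args=(r,), method="Nelder-Mead")
omega_hat, alpha_hat, beta_hat = fit.x
```

The SVR-based process replaces this likelihood maximization with support vector regression on the recursion's inputs and outputs, which is what the paper evaluates against.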