• Title/Summary/Keyword: Number of Errors

Search results: 1,246 (processing time: 0.024 seconds)

Error Control Scheme for High-Speed DVD Systems

  • Lee, Joon-Yun;Lee, Jae-Jin;Park, Tae-Geun
    • Proceedings of the Korean Society of Information Storage Systems Conference
    • /
    • 2005.10a
    • /
    • pp.103-110
    • /
    • 2005
  • We present a powerful error control decoder that can be used in all commercial DVD systems. The decoder exploits error information from the modulation decoder to increase the error correcting capability. We show that the modulation decoder in a DVD system can detect more than 60% of the total errors when burst errors occur. As a result, for a decoded block, the error correcting capability of the proposed scheme is improved by up to 25% over that of the original error control decoder. Moreover, the longer the burst error, the better the decoder performs. A pipeline-balanced RSPC decoder with low hardware complexity is also designed to maximize throughput. The maximum throughput of the RSPC decoder is 740 Mbps at 100 MHz, and the gate counts are 20.3K for the RS(182, 172, 11) decoder and 30.7K for the RS(208, 192, 17) decoder, respectively.

  • PDF
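The gain reported in this abstract comes from treating modulation-decoder error flags as erasures: a Reed-Solomon code with minimum distance d can correct e errors and f erasures whenever 2e + f <= d - 1, so a flagged position consumes one unit of the distance budget instead of two. A minimal sketch of this standard RS bound, with the code parameters taken from the abstract (the decoder logic itself is not reproduced):

```python
# Sketch: how erasure flags from a modulation decoder raise the
# correcting capability of the DVD RS codes.  Assumption: the standard
# RS bound 2*errors + erasures <= d_min - 1; parameters from the abstract.

def correctable(d_min: int, errors: int, erasures: int) -> bool:
    """An RS code with minimum distance d_min corrects a pattern of
    'errors' (unknown positions) and 'erasures' (known positions)
    iff 2*errors + erasures <= d_min - 1."""
    return 2 * errors + erasures <= d_min - 1

# RS(208, 192, 17): d_min = 17, so at most t = 8 errors without side info.
print(correctable(17, 9, 0))   # 9 unknown errors -> False (uncorrectable)
# If the modulation decoder flags 6 of those 9 positions as erasures,
# only 3 remain as unknown errors: 2*3 + 6 = 12 <= 16 -> True.
print(correctable(17, 3, 6))
```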

Monotone Likelihood Ratio Property of the Poisson Signal with Three Sources of Errors in the Parameter

  • Kim, Joo-Hwan
    • Communications for Statistical Applications and Methods
    • /
    • v.5 no.2
    • /
    • pp.503-515
    • /
    • 1998
  • When a neutral particle beam (NPB) is aimed at an object and a small number of neutron signals is received at the detector, the signal count approximately follows a Poisson distribution. Under four assumptions about the errors and uncertainties in the Poisson parameters, an exact probability distribution of the neutral particles has been derived. The probability distribution for the neutron signals received by a detector, averaged over the three sources of errors, is expressed as a four-dimensional integral. Two of the four integrals can be evaluated analytically, reducing it to a two-dimensional integral. The monotone likelihood ratio (MLR) property of the distribution is proved by using the Cauchy mean value theorem for both the univariate and multivariate distributions. The MLR property can be used to find a criterion for hypothesis testing problems related to the distribution.

  • PDF
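The MLR property this abstract proves for the error-averaged distribution also holds for the plain Poisson family, where it is easy to check numerically: for lam2 > lam1 the ratio p(k; lam2)/p(k; lam1) = exp(lam1 - lam2) * (lam2/lam1)**k is increasing in k. A small sketch with illustrative rate values (not from the paper):

```python
# Numerical check of the monotone likelihood ratio (MLR) property for
# the plain Poisson family, a simpler relative of the error-averaged
# distribution in the paper.  lam1 and lam2 are illustrative values.
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    return exp(-lam) * lam**k / factorial(k)

lam1, lam2 = 2.0, 5.0
ratios = [poisson_pmf(k, lam2) / poisson_pmf(k, lam1) for k in range(15)]
# MLR: the likelihood ratio should be non-decreasing in k.
print(all(r2 >= r1 for r1, r2 in zip(ratios, ratios[1:])))  # True
```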

Common Errors in Endodontic treatment

  • Kim, Jin-Woo
    • Proceedings of the KACD Conference
    • /
    • 2001.05a
    • /
    • pp.257-257
    • /
    • 2001
  • Failures occur in dentistry as a result of many factors, some of which can be controlled by the operator whilst others are unavoidable. The long-term success rate of endodontic treatment has often been thought to be very high, although studies reported in the literature do not support this perception. The number of failures can be reduced by adhering to accepted treatment procedures and by avoiding 'short cuts'. Endodontic disasters are usually related to operator errors, and they may have detrimental effects on the outcome of treatment in the long term, eventually becoming catastrophes. Endodontic disasters require special techniques to salvage them, whereas catastrophes usually result in loss of the tooth, and every effort should be made to prevent such problems from occurring. This presentation will cover common errors in endodontic procedures, especially access opening, canal negotiation, canal irrigation, canal preparation, canal filling, and post preparation.

  • PDF

Performance Evaluation of Bit Error Resilience for Pixel-domain Wyner-Ziv Video Codec with Frame Difference Residual Signal (화면 간 차이 신호에 대한 화소 영역 위너-지브 비디오 코덱의 비트 에러 내성 성능 평가)

  • Kim, Jin-Soo
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.8
    • /
    • pp.20-28
    • /
    • 2012
  • DVC (Distributed Video Coding) is a new paradigm based on the Slepian-Wolf and Wyner-Ziv theorems. DVC offers not only flexible partitioning of the complexity between the encoder and decoder, but also robustness to channel errors thanks to intrinsic joint source-channel coding. Much previous research has focused on a light video encoder and its rate-distortion performance improvement. In this paper, however, we propose a new DVC codec that is effective in error-prone environments. The proposed method adopts a quantiser without a dead zone and a Gray code symmetric around zero. Through computer simulations, the proposed method is evaluated with respect to bit error position as well as the number of burst bit errors. Additionally, it is shown that the maximum and minimum transmission rates for a given application can be determined linearly from the number of bit errors.
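The point of pairing a dead-zone-free quantiser with a Gray-coded index is that a single bit error in the transmitted index usually lands on a nearby reconstruction level. A hedged sketch of that idea (not the paper's exact codec; step size and bit depth are assumptions):

```python
# Sketch of a uniform quantiser without a dead zone plus a Gray-code
# index mapping, so a single bit error in the index tends to hit a
# neighbouring level.  Illustrative only; step size and bit depth are
# assumptions, not values from the paper.

def quantise(x: float, step: float = 4.0, bits: int = 4) -> int:
    """Mid-rise uniform quantiser (no dead zone): every interval,
    including the ones around zero, has the same width."""
    levels = 1 << bits
    idx = int(x // step) + levels // 2      # shift so indices are >= 0
    return max(0, min(levels - 1, idx))     # clamp to the index range

def to_gray(n: int) -> int:
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

idx = quantise(1.5)          # a value near zero still gets its own bin
code = to_gray(idx)
print(idx, code, from_gray(code))  # 8 12 8
```

Note that adjacent indices differ in exactly one bit after Gray coding, which is what limits the damage of an isolated channel bit error.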

Exact Poisson distribution in the use of NPB with aiming errors

  • Kim, Joo-Hwan
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1995.04a
    • /
    • pp.967-973
    • /
    • 1995
  • A neutral particle beam (NPB) is aimed at an object, and a small number of neutron signals is received at the detector to estimate the mass of the object. Since there is uncertainty about the location of the beam axis relative to the object, aiming errors may lead to incorrect information about the object. Under the two assumptions that the neutral particle scattering distribution and the aiming errors each follow a circular normal distribution, we have derived an exact probability distribution of the neutral particles. It becomes a Poisson-power function distribution. We proved the monotone likelihood ratio property of this distribution, which can be used to find a criterion for the hypothesis testing problem.

  • PDF

A LOCAL-GLOBAL STEPSIZE CONTROL FOR MULTISTEP METHODS APPLIED TO SEMI-EXPLICIT INDEX 1 DIFFERENTIAL-ALGEBRAIC EQUATIONS

  • Kulikov, G.Yu;Shindin, S.K.
    • Journal of applied mathematics & informatics
    • /
    • v.6 no.3
    • /
    • pp.697-726
    • /
    • 1999
  • In this paper we develop a new procedure to control stepsize for linear multistep methods applied to semi-explicit index 1 differential-algebraic equations. In contrast to the standard approach, the error control mechanism presented here is based on monitoring and controlling both the local and global errors of the multistep formulas. As a result, such methods with local-global stepsize control solve differential-algebraic equations to any prescribed accuracy (up to round-off errors). For implicit multistep methods we give the minimum number of both full and modified Newton iterations that allows the iterative approximations to be used correctly in the local-global stepsize control procedure. We also discuss the validity of simple iterations for solving differential-algebraic equations to high accuracy. Numerical tests support the theoretical results of the paper.
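For orientation, the classical *local* error based stepsize rule that schemes like this local-global control build upon is h_new = safety * h * (tol/err)**(1/(p+1)) for a method of order p. A minimal sketch of that rule only; the paper's global-error monitoring layer is not reproduced:

```python
# Minimal sketch of the classical local-error stepsize controller.
# Safety factor and growth bounds are conventional illustrative choices.

def new_stepsize(h: float, err: float, tol: float, p: int,
                 safety: float = 0.9, grow_max: float = 5.0) -> float:
    """Propose the next stepsize from a local error estimate 'err'
    for a method of order p."""
    if err == 0.0:
        return grow_max * h                 # error estimate vanished: grow
    factor = safety * (tol / err) ** (1.0 / (p + 1))
    return h * min(grow_max, max(0.1, factor))

# A step whose local error is 16x above tolerance (order p = 1 method)
# is roughly quartered, since (1/16)**(1/2) = 1/4.
print(new_stepsize(0.1, 1.6e-3, 1e-4, 1))
```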

Large Eddy Simulation of Turbulent Channel Flow Using Inhomogeneous Filter (비균질 필터를 사용한 난류 채널 유동의 Large Eddy Simulation)

  • Lee, Sang-Hwan;Kim, Kwang-Jin
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.28 no.9
    • /
    • pp.1022-1031
    • /
    • 2004
  • The commutation errors introduced by the filtering process in large eddy simulation are considered. The conventional filter is compared with an inhomogeneous filter devised to reduce the commutation errors, adopting the weighting factor of the inhomogeneous filter suggested by Vasilyev. In addition, the flow patterns are analyzed using an optimizing function that estimates the test filter width so as to eliminate the dissipations everywhere except in the vicinity of the wall. The approach is evaluated in simulations of turbulent channel flow at a Reynolds number of 1020, based on the friction velocity and channel half height. Results show that the commutation errors can be significantly reduced by using the inhomogeneous filter and the optimized test filter width.

A Study for the Roundness Estimation (진원도 형상 추정 연구)

  • Kim, Soo-Kwang;Jun, Jae-Uhk
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.10 no.2
    • /
    • pp.38-45
    • /
    • 2011
  • The criteria for determining the elements are the minimum zone method (MZM) and the least squares method (LSM). The LSM is deterministic and simple but is limited for measurements whose errors are significant compared with the form errors. For precise conditions, the MZM has been selected to determine the elements. Roundness is the fundamental problem in evaluating form errors. In this paper, a new approach adapting the genius education concept is proposed to obtain accurate results for the MZM and the LSM of roundness. Its computational algorithm is studied on a set of measured sample data. Almost regardless of the specification (the number of points, the standard deviation, etc.) of the sample data, the results show excellent reliability and high accuracy in estimating roundness.
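The LSM reference circle mentioned here is most often computed in the Kåsa formulation, which turns the circle fit into a linear least-squares problem. A pure-Python sketch of that standard fit (the paper's own algorithm differs and is not reproduced):

```python
# Least-squares (LSM) circle fit in the Kaasa formulation: fit
# x^2 + y^2 = a*x + b*y + c, so centre = (a/2, b/2) and
# r = sqrt(c + (a/2)^2 + (b/2)^2).  Illustrative sketch only.
from math import cos, sin, pi, sqrt

def solve3(M, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

def fit_circle(pts):
    """Return (cx, cy, r) of the least-squares circle through pts."""
    n = float(len(pts))
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); syy = sum(y * y for _, y in pts)
    sxy = sum(x * y for x, y in pts)
    z = [x * x + y * y for x, y in pts]
    rhs = [sum(zi * x for zi, (x, _) in zip(z, pts)),
           sum(zi * y for zi, (_, y) in zip(z, pts)),
           sum(z)]
    a, b, c = solve3([[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]], rhs)
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, sqrt(c + cx * cx + cy * cy)

# Points on a circle of radius 3 centred at (1, 2): the fit recovers them.
pts = [(1 + 3 * cos(2 * pi * k / 12), 2 + 3 * sin(2 * pi * k / 12))
       for k in range(12)]
print([round(v, 6) for v in fit_circle(pts)])  # [1.0, 2.0, 3.0]
```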

Numerical Analysis in Heat Transfer of a Triangular Fin (삼각휜 열전달의 수치해석)

  • Chun, Sang-Myung;Kwon, Young-Pil
    • The Magazine of the Society of Air-Conditioning and Refrigerating Engineers of Korea
    • /
    • v.11 no.3
    • /
    • pp.52-57
    • /
    • 1982
  • The one-dimensional approximation for fin problems is widely used in current texts and industrial practice. The errors caused by this approximation are analysed for a longitudinal triangular fin by the numerical solution of the two-dimensional fin equation. The two-dimensional solution is obtained by the finite element method and compared with the one-dimensional exact solution. The results show that total heat transfer and fin efficiency are overestimated by the one-dimensional approximation. The factors governing these errors are the Biot number (Bi) and the ratio of fin length to half the thickness (L/a). When Bi is smaller than 1.0 these errors are smaller than 10%, but when Bi is larger than 5.0 they amount to a few tens of percent. The fin efficiency obtained under the one-dimensional and long-fin assumptions is valid only when Bi is small and L/a is large.

  • PDF
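The one-dimensional model this abstract benchmarks reduces the fin to kA d2T/dx2 = hP(T - Too). For the simplest straight rectangular fin the resulting efficiency is tanh(mL)/(mL) with m = sqrt(h/(k*a)); the triangular-fin formula involves Bessel functions and is not reproduced here. A hedged sketch with illustrative material and geometry values (not from the paper):

```python
# 1-D fin model of the kind the abstract compares against a 2-D
# solution.  For simplicity this sketch uses the straight rectangular
# fin; h, k, L, a below are illustrative values, not from the paper.
from math import sqrt, tanh

def fin_efficiency_1d(h, k, L, a):
    """Rectangular fin of length L, half-thickness a (per unit width):
    m = sqrt(h/(k*a)), efficiency = tanh(m*L)/(m*L)."""
    m = sqrt(h / (k * a))
    return tanh(m * L) / (m * L)

def biot(h, k, a):
    """Biot number h*a/k: the abstract reports <10% 1-D error for Bi < 1."""
    return h * a / k

h, k, L, a = 50.0, 200.0, 0.05, 0.002   # W/m^2K, W/mK, m, m
print(biot(h, k, a), fin_efficiency_1d(h, k, L, a))
```

With these values Bi is far below 1, so by the abstract's criterion the 1-D efficiency should be within about 10% of the 2-D result.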

Investigating the Impact of Random and Systematic Errors on GPS Precise Point Positioning Ambiguity Resolution

  • Han, Joong-Hee;Liu, Zhizhao;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.3
    • /
    • pp.233-244
    • /
    • 2014
  • Precise Point Positioning (PPP) is an increasingly recognized precise GPS/GNSS positioning technique. To improve the accuracy of PPP, the error sources in PPP measurements should be reduced as much as possible and the ambiguities should be correctly resolved. Correct ambiguity resolution requires careful control of the residual errors, which are normally categorized into random and systematic errors. To understand the effects of these two categories of error on PPP ambiguity resolution, two GPS datasets are simulated for locations in South Korea (denoted SUWN) and Hong Kong (PolyU). Two simulation cases are studied for each dataset: in the first case, all the satellites are affected by systematic and random errors, and in the second case, only a few satellites are affected. In the first case with random errors only, when the magnitude of the random errors is increased, the L1 ambiguities have a much higher chance of being incorrectly fixed. However, the size of the ambiguity error is not exactly proportional to the magnitude of the random error; satellite geometry has more impact on L1 ambiguity resolution than the magnitude of the random errors. In the first case, when all the satellites have both random and systematic errors, the accuracy of the fixed ambiguities is considerably affected by the systematic error: a pseudorange systematic error of 5 cm is much more detrimental to ambiguity resolution than a carrier phase systematic error of 2 mm. In the second case, when only a portion of the satellites have systematic and random errors, L1 ambiguity resolution in PPP can still be performed correctly; the number of allowable satellites varies from station to station, depending on the satellite geometry. Through extensive simulation tests under different schemes, this paper sheds light on how PPP ambiguity resolution (more precisely, L1 ambiguity resolution) is affected by the characteristics of the residual errors in PPP observations. The numerical examples remind PPP data analysts how accurate the error correction models must be in order to get all the ambiguities resolved correctly.
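The qualitative finding that larger random errors make integer ambiguities more likely to be fixed wrongly can be illustrated with a deliberately simplified toy model: a float ambiguity equal to the true integer plus Gaussian noise, "fixed" by rounding. This is a stand-in for real PPP ambiguity resolution, not the paper's method, and the noise sigmas (in cycles) are illustrative:

```python
# Toy Monte Carlo: probability that rounding a noisy float ambiguity
# recovers the true integer, as a function of the noise sigma (cycles).
# A deliberately simplified stand-in for PPP ambiguity resolution.
import random

def fix_success_rate(sigma: float, trials: int = 20000, seed: int = 1) -> float:
    rng = random.Random(seed)
    ok = sum(round(rng.gauss(0.0, sigma)) == 0 for _ in range(trials))
    return ok / trials

for sigma in (0.1, 0.3, 0.5):
    print(sigma, fix_success_rate(sigma))
```

The success rate drops steadily as sigma grows, mirroring the abstract's observation, although in real PPP the satellite geometry modulates this relationship.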