Title/Summary/Keyword: error distribution

The Minimum Dwell Time Algorithm for the Poisson Distribution and the Poisson-power Function Distribution

  • Kim, Joo-Hwan
    • Communications for Statistical Applications and Methods
    • /
    • v.4 no.1
    • /
    • pp.229-241
    • /
    • 1997
  • We consider the discrimination curve and the minimum dwell time for the Poisson distribution and the Poisson-power function distribution. Let the random variable X have a Poisson distribution with mean λ. For the hypothesis test H₀: λ = t vs. H₁: λ = d (d < t), we reject H₀ if X ≤ c. Since a critical value c cannot be determined to satisfy both error probabilities α and β, we consider the discrimination curve that gives the maximum d that can be discriminated from t for given α and β. We also give an algorithm to compute the minimum dwell time needed to discriminate at the given α and β for Poisson counts, and prove its convergence. For the Poisson-power function distribution, we reject H₀ if X ≤ c′. Since a critical value c′ cannot be determined to satisfy both α and β either, we likewise consider the discrimination curve and a computational algorithm for the minimum dwell time for the Poisson-power function distribution. We present this algorithm and a computational example. It is found that the minimum dwell time algorithm fails for the Poisson-power function distribution if the aiming-error variance σ₂² is too large relative to the variance σ₁² of the Gaussian distribution of intensity; in other words, if ℓ is too small, the minimum dwell time cannot be found for given α and β.
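The search for the minimum dwell time can be pictured numerically. Below is a minimal sketch, not the paper's convergence-proven algorithm: it assumes rates `lam0 > lam1` per unit time (so a dwell time T gives Poisson means `lam0*T` and `lam1*T`), rejection of H₀ when X ≤ c, and a simple linear scan over T; all names are ours.

```python
# Minimal sketch: smallest dwell time T for which some cutoff c satisfies
# both error bounds for two Poisson rates. These are our assumptions, not
# the paper's algorithm: lam0 > lam1 are counts per unit time; H0 is
# rejected when X <= c.
from scipy.stats import poisson

def min_dwell_time(lam0, lam1, alpha, beta, t_step=0.01, t_max=1000.0):
    t = t_step
    while t <= t_max:
        # largest c with type-I error P(X <= c | lam0 * t) <= alpha
        c = poisson.ppf(alpha, lam0 * t)
        if poisson.cdf(c, lam0 * t) > alpha:
            c -= 1
        # type-II error: H0 accepted (X > c) although the true mean is lam1 * t
        if c >= 0 and poisson.sf(c, lam1 * t) <= beta:
            return t, int(c)
        t += t_step
    return None  # no feasible dwell time found within t_max
```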


Wigner-Ville Distribution Applying the Rotating Window and Its Characteristics (회전 창문함수를 적용한 위그너-빌 분포함수와 그 특성)

  • 박연규;김양한
    • Journal of KSNVE
    • /
    • v.7 no.5
    • /
    • pp.747-756
    • /
    • 1997
  • The Wigner-Ville distribution, a time-frequency analysis, has a fatal drawback when the signal has multiple components: cross-talk, which often produces negative values in the distribution. Since the Wigner-Ville distribution is an expression of power, cross-talk must be avoided. Smoothing the Wigner-Ville distribution by convolving it with a window is the most common way to reduce cross-talk, and there can be an infinite number of distributions depending on the window. But smoothing reduces resolution in the time-frequency plane, which motivates designing a window that is more effective at reducing cross-talk while retaining resolution. The domain in which cross-talk and legitimate components are most easily distinguished is the ambiguity function: there, the legitimate components appear as straight lines passing through the origin, while the cross-talk is widely distributed over the ambiguity plane. Based on these relative distributions, a rotating window can be designed to minimize cross-talk. Applying the rotating window to the ambiguity function corresponds to smoothing the Wigner-Ville distribution, so the effect of the rotating window is estimated in terms of the bias error due to smoothing. Applying the rotating window changes not only the Wigner-Ville distribution but also its properties; the properties of the new distribution are therefore checked to complete the analysis of the rotating window.
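The smoothing-in-the-ambiguity-plane mechanism the abstract relies on can be sketched generically. The sketch below is a minimal illustration, not the paper's rotating window: the Wigner-Ville distribution is built from the instantaneous autocorrelation, a caller-supplied kernel masks the ambiguity function, and transforming back yields the smoothed time-frequency distribution.

```python
# Minimal sketch of ambiguity-domain smoothing of the Wigner-Ville
# distribution; the kernel argument stands in for the paper's rotating window.
import numpy as np

def wvd_smoothed(x, kernel=None):
    """Discrete WVD of a (preferably analytic) signal x, optionally
    smoothed by an N x N kernel defined over the ambiguity plane."""
    N = len(x)
    # instantaneous autocorrelation K[n, m] = x[n+m] * conj(x[n-m])
    K = np.zeros((N, N), dtype=complex)
    for n in range(N):
        mmax = min(n, N - 1 - n)
        for m in range(-mmax, mmax + 1):
            K[n, m % N] = x[n + m] * np.conj(x[n - m])
    A = np.fft.fft(K, axis=0)      # ambiguity function (Doppler vs. lag)
    if kernel is not None:
        A = A * kernel             # smoothing = masking in the ambiguity plane
    K = np.fft.ifft(A, axis=0)
    return np.real(np.fft.fft(K, axis=1))  # FFT over lag -> frequency axis
```

Auto-terms pass through the origin of the ambiguity plane while cross-terms lie away from it, so any kernel that passes the origin region and attenuates elsewhere trades cross-talk against resolution, exactly the trade-off the rotating window is designed to improve.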


Inverse quantization of DCT coefficients using Laplacian pdf (Laplacian pdf를 적용한 DCT 계수의 역양자화)

  • 강소연;이병욱
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.6C
    • /
    • pp.857-864
    • /
    • 2004
  • Many image compression standards such as JPEG, MPEG, or H.263 are based on the discrete cosine transform (DCT) and quantization. Quantization error is the major source of image quality degradation. The current dequantization method assumes a uniform distribution of the DCT coefficients, so the dequantization value is the center of each quantization interval. However, DCT coefficients are regarded as following a Laplacian probability density function (pdf), and the center of each interval is not optimal for reducing the squared error. We use the mean of the quantization interval under a Laplacian pdf and show the effect of this correction on image quality. We also compare the existing quantization error with the corrected quantization error in closed form. The PSNR improvement due to the compensation on real images is in the range of 0.2 to 0.4 dB; the maximum correction value is 1.66 dB.
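The correction described here, replacing the interval midpoint with the interval mean under a Laplacian pdf, has a closed form on the positive half-axis, where the Laplacian reduces to an exponential. A minimal sketch (the symbols `lo`, `hi`, `lam` are ours, not the paper's):

```python
# Centroid (conditional mean) of an Exponential(lam)-tailed Laplacian
# coefficient quantized to [lo, hi), lo >= 0: the optimal reconstruction
# level in the squared-error sense, replacing the midpoint (lo + hi) / 2.
import math

def laplacian_centroid(lo, hi, lam):
    w = hi - lo
    return lo + 1.0 / lam - w * math.exp(-lam * w) / (1.0 - math.exp(-lam * w))
```

Because the pdf decays across the interval, this mean always lies closer to zero than the midpoint, which is the source of the PSNR gain reported in the abstract.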

A Study on Thermal Diffusivity Measurement by Improvement of Laser Flash Uniformity Using an Optical Fiber (광섬유를 이용한 레이저섬광의 균일분포 증진효과에 따른 열확산계수 측정에 관한 고찰)

  • Lee, Won-Sik;Bae, Shin-Chul
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.22 no.8
    • /
    • pp.1073-1082
    • /
    • 1998
  • When thermal diffusivity is measured by the laser flash method, it is calculated under the assumption that the whole surface of the specimen is heated uniformly. It is known that an error of approximately 5% arises from the non-uniform energy distribution of the laser-pulse heat source over the specimen surface. In this study, to obtain a highly uniform laser beam with a low non-uniform-heating error and little energy loss, the beam was delivered through an optical fiber, giving no transmission loss and high repeatability. In addition, the heating error and thermal diffusivity were measured as the measuring position was varied, and the results obtained with the uniform and non-uniform laser beams were compared. Because the uniformized laser beam heated the whole surface of the specimen uniformly, it proved very effective in reducing the variation of the thermal-diffusivity errors with measuring position. When the thermal diffusivity of the NBS standard reference material POCO AXM-5Q1 was measured with both the uniform and the non-uniform laser beams, the dispersion error of the former was 2 to 2.5%, an improvement over that of the latter.
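For context, the flash method converts the rear-face temperature rise into a diffusivity through the standard Parker relation; this is textbook background rather than the paper's contribution, and its derivation assumes exactly the uniform heating whose violation causes the roughly 5% error discussed above.

```python
# Parker (1961) half-rise-time formula for the laser flash method:
#   alpha = 0.1388 * L**2 / t_half
# valid only for a uniformly heated, adiabatic specimen.
def thermal_diffusivity(thickness_m, t_half_s):
    """Thermal diffusivity (m^2/s) from specimen thickness (m) and the
    time (s) for the rear face to reach half its maximum temperature."""
    return 0.1388 * thickness_m ** 2 / t_half_s
```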

A Study on Particle Filter based on KLD-Resampling for Wireless Patient Tracking

  • Ly-Tu, Nga;Le-Tien, Thuong;Mai, Linh
    • Industrial Engineering and Management Systems
    • /
    • v.16 no.1
    • /
    • pp.92-102
    • /
    • 2017
  • In this paper, we consider a typical health care system that uses a Wireless Sensor Network (WSN) for wireless patient tracking. The wireless patient tracking module of this system performs localization from samples of Received Signal Strength (RSS) variations and tracking through a Particle Filter (PF), assisted by multiple transmit-power information. We propose a modified PF, a Kullback-Leibler Distance (KLD)-resampling PF, to ameliorate the effect of RSS variations by generating a sample set near the high-likelihood region, improving the wireless patient tracking. The key idea of this method is to approximate a discrete distribution with an upper-bound error on the KLD, reducing both the location error and the number of particles used. To determine this bound error, an optimal algorithm is proposed based on the maximum gap error between the proposal and Sampling Importance Resampling (SIR) algorithms. With these values set, a number of simulations are run on the health care system's data sets, which contain real RSSI measurements, to evaluate the location error of all methods over various power levels and node densities. Finally, we point out the effect of different power levels versus different node densities on the wireless patient tracking.
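The KLD bound the abstract refers to is conventionally computed with the sample-size formula of Fox (2003): given k occupied histogram bins, it returns the particle count that keeps the KLD between the sampled and true discrete distributions below epsilon with probability 1 - delta. A sketch under that standard formulation (parameter names are ours):

```python
# KLD-based adaptive sample size (Fox, 2003), the bound behind KLD-resampling.
import math
from scipy.stats import norm

def kld_sample_size(k, epsilon=0.05, delta=0.01):
    if k <= 1:
        return 1
    z = norm.ppf(1.0 - delta)       # upper standard-normal quantile
    a = 2.0 / (9.0 * (k - 1))       # Wilson-Hilferty chi-square approximation
    return math.ceil((k - 1) / (2.0 * epsilon) * (1.0 - a + math.sqrt(a) * z) ** 3)
```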

A Study on Optimization of Tooth Micro-geometry for a Helical Gear Pair (헬리컬 기어의 치형최적화에 관한 연구)

  • Zhang, Qi;Kang, Jae-Hwa;Lyu, Sung-Ki
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.10 no.4
    • /
    • pp.70-75
    • /
    • 2011
  • Modern gearboxes are characterized by high torque loads, low running noise, and compact design; durability is also a major issue for the industry. For a gearbox used in a wind turbine, gear transmission error (T.E.) is the excitation that leads to the tonal noise known as gear whine, and radiated gear whine is the dominant noise source of the whole gearbox. In this paper, tooth modification of the high-speed stage is used to compensate for the deformation of the teeth under load and to ensure proper meshing, achieving an optimized tooth contact pattern. The gearbox is first modeled in Romax software; various combinations of tooth modifications are then analyzed with the Windows LDP software, and the transmission error under the loaded torque is predicted for the helical gear pair. The transmission error, contact stress, root stress, and load distribution are also calculated and compared before and after tooth modification under one torque condition. The simulation results show that the transmission error and the stresses under load can be minimized by appropriate tooth modification.
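The optimization loop the abstract describes can be summarized as a search over modification parameters against a predicted transmission error. The sketch below is purely illustrative: `predict_te` is a hypothetical stand-in for the Windows LDP prediction used in the paper, and the two swept parameters are assumed, not taken from the study.

```python
# Illustrative grid search over tooth micro-geometry parameters; predict_te
# is a hypothetical callable returning peak-to-peak transmission error.
import itertools

def optimize_micro_geometry(predict_te, tip_reliefs_um, crownings_um):
    best = None
    for relief, crowning in itertools.product(tip_reliefs_um, crownings_um):
        te = predict_te(relief, crowning)   # predicted peak-to-peak T.E.
        if best is None or te < best[0]:
            best = (te, relief, crowning)
    return best  # (min T.E., relief, crowning)
```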

Consumers' Abductive Inference Error as Cognitive Impairment

  • HAN, Woong-Hee
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.7 no.8
    • /
    • pp.747-752
    • /
    • 2020
  • This study examines cognitive impairment, one of the results of social exclusion, which leads to logical reasoning disorders. It also investigates how cognitive errors called abductive inference errors occur due to cognitive impairment. The study was performed with 81 college students, randomly assigned to a group that had experienced social exclusion or to a group that had not. We analyzed how the degree of abductive inference error differs with the social exclusion experience. The group that had experienced social exclusion showed a higher level of abductive inference error than the group that had not. Within the socially excluded group, the abductive condition inference value was higher when the deduction condition inference value was 90% than when it was 10%, and the difference was significant. This study extends the concepts of cognitive impairment, escape theory, and cognitive narrowing, which are used to explain addictive behavior, to human cognitive bias. It also confirms that the experience of social exclusion increases cognitive impairment and abductive inference error. Future research directions and implications are discussed.

Error Control Protocol and Data Encryption Mechanism in the One-Way Network (일방향 전송 네트워크에서의 오류 제어 프로토콜 및 데이터 암호화 메커니즘)

  • Ha, Jaecheol;Kim, Kihyun
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.26 no.3
    • /
    • pp.613-621
    • /
    • 2016
  • Since error control is a critical and sensitive issue in a one-way network, one can adopt either a forward-error-correction code or a data retransmission method based on the reported reception result. In this paper, we propose an error control method and a continuous data transmission protocol for a one-way network that has a unidirectional data transmission channel and a special channel for receiving only the reception result. Furthermore, we present a data encryption and key update mechanism based on a pre-shared key distribution scheme and suggest some ASDU (Application Service Data Unit) formats for implementing it in the one-way network.
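The retransmission variant can be pictured with a small sketch. Everything below is illustrative: `send_block` and `recv_result` are hypothetical callables standing in for the unidirectional data channel and the special reception-result channel; the paper's actual protocol and ASDU formats are not reproduced.

```python
# Illustrative stop-and-wait retransmission over a one-way data channel with
# a separate reception-result channel; callables are hypothetical stand-ins.
def transmit(blocks, send_block, recv_result, max_retries=3):
    for seq, data in enumerate(blocks):
        for _attempt in range(max_retries):
            send_block(seq, data)
            if recv_result(seq):          # True = block received intact
                break
        else:
            raise RuntimeError(f"block {seq} failed after {max_retries} tries")
```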

Non-Controlling Interests and Proxy of Real Activities Manipulation in Stakeholder-Oriented Corporate Governance

  • FUJITA, Kento;YAMADA, Akihiro
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.9 no.10
    • /
    • pp.105-113
    • /
    • 2022
  • The purpose of this paper is to analyze the relationship between the ratio of non-controlling shareholder interests (minority equity ratio, MER) and the measurement error in real activities manipulation (RM) proxies for Japanese firms. Many Japanese firms practice stakeholder-oriented corporate governance. Previous studies suggest that the higher the MER, the more Japanese businesses tend to employ management techniques aimed at the group's sales growth while reallocating resources inside the group to reduce principal-principal conflicts. Such differences in management strategy could lead to measurement error in the RM proxy. The analysis uses 16,450 firm-years listed on the Tokyo Stock Exchange. The results show a positive relationship between MER and the RM proxy, and high persistence of RM proxies, suggesting that the proxies may contain measurement error. We also find that MER is correlated with variables associated with management strategy and that controlling for these variables can reduce the measurement error of the RM proxy in firms with large MER. This study extends previous research on measurement error in RM proxies by relating it to ownership structure and corporate governance, and it would contribute to researchers examining issues related to RM.

Analysis of Checkpointing Model with Instantaneous Error Detection (즉각적 오류 감지가 가능한 경우의 체크포인팅 모형 분석)

  • Lee, Yutae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.1
    • /
    • pp.170-175
    • /
    • 2022
  • Reactive failure management techniques are required to mitigate the impact of errors in high-performance computing. Checkpointing is the standard recovery technique for coping with errors: an application employing checkpoints periodically saves its state, so that when an error occurs while some task is executing, the application is rolled back to its last checkpointed task and resumes execution from that task onward. In this paper, assuming the times to error are independent of each other and generally distributed, we analyze the checkpointing model with instantaneous error detection. The conventional assumption that two or more errors do not occur between two consecutive checkpoints is removed. Given the checkpointing time, down time, and recovery time, we derive the reliability of the checkpointing model. When the time to error follows an exponential distribution, we obtain the optimal checkpointing interval that maximizes reliability.
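For the exponential case mentioned at the end, the classical first-order reference point is Young's approximation, which balances checkpoint overhead against expected rework; it is shown here as standard background, not as the paper's own optimum.

```python
# Young's approximation of the optimal checkpointing interval for
# exponentially distributed errors: T_opt = sqrt(2 * C / lambda),
# with C the checkpoint cost and lambda the error rate.
import math

def young_interval(checkpoint_cost, error_rate):
    return math.sqrt(2.0 * checkpoint_cost / error_rate)
```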