• Title/Summary/Keyword: Limit Error

554 search results

Error Rate for the Limiting Poisson-power Function Distribution

  • Joo-Hwan Kim
    • Communications for Statistical Applications and Methods
    • /
    • v.3 no.1
    • /
    • pp.243-255
    • /
    • 1996
  • The number of neutron signals from a neutral particle beam (NPB) at the detector, in the absence of errors, obeys a Poisson distribution. Under the assumptions that the NPB scattering distribution and the aiming errors each follow a circular Gaussian distribution, the exact probability distribution of the signals becomes a Poisson-power function distribution. In this paper, we show that the error rate in simple hypothesis testing for the limiting Poisson-power function distribution is not zero. That is, the limit of ${\alpha}+{\beta}$ is zero as the Poisson parameter $\kappa\rightarrow\infty$, but this limit is not zero (i.e., $\rho\ell > 0$) for the Poisson-power function distribution. We also give optimal decision algorithms for a specified error rate.

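The vanishing error rate that the abstract contrasts with its limiting case can be illustrated for the plain Poisson model. The sketch below (hypothetical means and threshold, not the paper's Poisson-power function distribution) computes $\alpha$ and $\beta$ for a simple threshold test and shows their sum shrinking as the two hypothesized means separate:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return exp(-lam) * lam**k / factorial(k)

def error_rates(lam0, lam1, c):
    """Simple test of H0: Poisson(lam0) vs H1: Poisson(lam1), lam1 > lam0.
    Reject H0 when the observed count exceeds the threshold c.
    alpha = P(reject | H0), beta = P(accept | H1)."""
    alpha = 1.0 - sum(poisson_pmf(k, lam0) for k in range(c + 1))
    beta = sum(poisson_pmf(k, lam1) for k in range(c + 1))
    return alpha, beta

# For the pure Poisson model, alpha + beta shrinks toward zero as the
# parameters grow -- the behaviour whose failure the paper establishes
# for the Poisson-power function distribution.
a1, b1 = error_rates(5.0, 10.0, 7)
a2, b2 = error_rates(50.0, 100.0, 70)
assert a2 + b2 < a1 + b1
```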

Analysis of V2V Broadcast Performance Limit for WAVE Communication Systems Using Two-Ray Path Loss Model

  • Song, Yoo-Seung;Choi, Hyun-Kyun
    • ETRI Journal
    • /
    • v.39 no.2
    • /
    • pp.213-221
    • /
    • 2017
  • The advent of wireless access in vehicular environments (WAVE) technology has improved the intelligence of transportation systems and enabled generic traffic problems to be solved automatically. Based on the IEEE 802.11p standard for vehicle-to-everything (V2X) communications, WAVE provides wireless links with latencies less than 100 ms to vehicles operating at speeds up to 200 km/h. To date, most research has been based on field test results. In contrast, this paper presents a numerical analysis of the V2X broadcast throughput limit using a path loss model. First, the maximum throughput and minimum delay limit were obtained from the MAC frame format of IEEE 802.11p. Second, the packet error probability was derived for additive white Gaussian noise and fading channel conditions. Finally, the maximum throughput limit of the system was derived from the packet error rate using a two-ray path loss model for a typical highway topology. The throughput was analyzed for each data rate, which allowed the performance at the different data rates to be compared. The analysis method can be easily applied to different topologies by substituting an appropriate target path loss model.
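The link between packet error rate and achievable throughput that this analysis exploits can be sketched with the textbook independent-bit-error relation. This is an illustrative simplification: the paper derives the error probability from the two-ray path loss model and the IEEE 802.11p frame format rather than assuming a fixed bit error rate.

```python
def packet_error_rate(ber, payload_bytes):
    """Packet error probability under independent bit errors: a packet
    fails if any one of its bits is corrupted."""
    n_bits = 8 * payload_bytes
    return 1.0 - (1.0 - ber) ** n_bits

def effective_throughput(data_rate_mbps, ber, payload_bytes):
    """Goodput after packet losses (MAC overhead and timing ignored)."""
    return data_rate_mbps * (1.0 - packet_error_rate(ber, payload_bytes))

# A higher nominal data rate only pays off while the channel keeps the
# packet error rate low -- the trade-off the paper quantifies per rate.
assert packet_error_rate(1e-6, 500) < packet_error_rate(1e-4, 500)
```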

A Repair-Time Limit Replacement Model with Imperfect Repair (불완전 수리에서의 수리시간한계를 가진 교체모형)

  • Chung, Il Han;Yun, Won Young
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.39 no.4
    • /
    • pp.233-238
    • /
    • 2013
  • This article concerns a profit model for a repair-limit replacement problem with imperfect repair. If a system fails, we must decide whether to repair the failed system (repair option) or replace it with a new one (replacement option, with a lead time). We assume that repair times are random variables that can be estimated, with error, before repair. If the estimated repair time is less than a specified limit (the repair time limit), the failed unit is repaired, but the repaired unit differs from a new one (imperfect repair). Otherwise, we order a new unit to replace the failed one. The long-run average profit (expected profit rate) is used as the optimization criterion, and the optimal repair time limit maximizes the expected profit rate. Some special cases are derived.
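The decision structure of such a policy can be sketched with a small Monte Carlo simulation. All cost, revenue, and lead-time figures below are made-up illustrative parameters, and the model ignores the paper's imperfect-repair effect and estimation error; it only shows how a repair time limit induces a repair-vs-replace rule whose expected profit rate can be maximized:

```python
import random

def simulate_profit_rate(limit, n_failures=2000, seed=1):
    """Monte Carlo sketch of a repair-time-limit policy (illustrative
    parameters, not the paper's model). On each failure a repair time T
    is drawn; if T <= limit the unit is repaired (cost per hour of
    repair), otherwise a new unit is ordered (fixed cost plus lead time)."""
    rng = random.Random(seed)
    revenue_per_hour, repair_cost_per_hour = 50.0, 30.0
    replacement_cost, lead_time, uptime = 500.0, 10.0, 100.0
    profit = time = 0.0
    for _ in range(n_failures):
        t_repair = rng.expovariate(1.0 / 8.0)  # mean repair time: 8 h
        if t_repair <= limit:
            down, cost = t_repair, repair_cost_per_hour * t_repair
        else:
            down, cost = lead_time, replacement_cost
        profit += revenue_per_hour * uptime - cost
        time += uptime + down
    return profit / time  # long-run average profit per hour

# Scanning candidate limits locates the one maximizing the profit rate.
best_limit = max(range(1, 31), key=simulate_profit_rate)
```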

Development and Assessment of Real-Time Quality Control Algorithm for PM10 Data Observed by Continuous Ambient Particulate Monitor (부유분진측정기(PM10) 관측 자료 실시간 품질관리 알고리즘 개발 및 평가)

  • Kim, Sunyoung;Lee, Hee Choon;Ryoo, Sang-Boom
    • Atmosphere
    • /
    • v.26 no.4
    • /
    • pp.541-551
    • /
    • 2016
  • A real-time quality control algorithm for $PM_{10}$ concentration measured by a Continuous Ambient Particulate Monitor (FH62C14, Thermo Fisher Scientific Inc.) has been developed. The quality control algorithm for $PM_{10}$ data consists of five main procedures. The first step is a valid value check: values must lie within the acceptable range, and readings at the upper ($5,000{\mu}g\;m^{-3}$) or lower ($0{\mu}g\;m^{-3}$) instrument detection limits are eliminated as unrealistic. The second step is a valid error check: whenever an unusual condition occurs the instrument saves an error code, and any value carrying an error code is eliminated. The third step is a persistence check, which requires a minimum variability of the data over a certain period: if the $PM_{10}$ data have not varied over the past 60 minutes by more than the specified limit ($0{\mu}g\;m^{-3}$), the current 5-minute value fails the check. The fourth step is a time continuity check, which eliminates gross outliers. The last step is a spike check, in which spikes in the time series are detected; the outlier detection is based on the double-difference time series, using the median. Flags indicating normal and abnormal are added to the raw data after the quality control procedure. The quality control algorithm is applied to $PM_{10}$ data for an Asian dust case and a non-Asian dust case at the Seoul site, and to a dataset for the period 2013~2014 at 26 sites in Korea.
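The five-step chain described above can be sketched as a per-sample flagging routine. The range and persistence thresholds follow the abstract; the continuity/spike step is reduced here to a toy neighbour-difference threshold rather than the paper's median-based double-difference statistic:

```python
def qc_pm10(values, error_codes):
    """Sketch of the five-step QC chain for 5-minute PM10 samples.
    Returns one flag per sample: True = passed, False = rejected."""
    LOWER, UPPER = 0.0, 5000.0  # instrument detection limits (ug m^-3)
    flags = []
    for i, (v, err) in enumerate(zip(values, error_codes)):
        ok = LOWER < v < UPPER              # 1. valid value check
        ok = ok and err == 0                # 2. valid error (code) check
        window = values[max(0, i - 11):i + 1]   # 12 x 5 min = 60 min
        if len(window) == 12 and max(window) - min(window) <= 0.0:
            ok = False                      # 3. persistence check
        if i > 0 and abs(v - values[i - 1]) > 1000.0:
            ok = False                      # 4./5. continuity/spike check
                                            #       (toy threshold)
        flags.append(ok)
    return flags

flags = qc_pm10([30.0, 32.0, 31.0, 6000.0, 33.0], [0, 0, 1, 0, 0])
# sample 2 is rejected by its error code, sample 3 by the range check
```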

Analysis on the optimized depth of 3D displays without an accommodation error

  • Choi, Hee-Jin;Kim, Joo-Hwan;Park, Jae-Byung;Lee, Byoung-Ho
    • Korean Information Display Society: Conference Proceedings
    • /
    • 2007.08b
    • /
    • pp.1811-1814
    • /
    • 2007
  • Accommodation error is one of the main factors that degrade viewer comfort while watching stereoscopic 3D images. We analyze the limit of the expressible 3D depth without an accommodation error, using human factor information and wave-optical calculation under the Fresnel approximation.


Inscribed Approximation based Adaptive Tessellation of Catmull-Clark Subdivision Surfaces

  • Lai, Shuhua;Cheng, Fuhua(Frank)
    • International Journal of CAD/CAM
    • /
    • v.6 no.1
    • /
    • pp.139-148
    • /
    • 2006
  • The Catmull-Clark subdivision scheme provides a powerful method for building smooth and complex surfaces, but the number of faces in the uniformly refined meshes increases exponentially with subdivision depth. Adaptive tessellation reduces the number of faces needed to yield a smooth approximation to the limit surface and, consequently, makes the rendering process more efficient. In this paper, we present a new adaptive tessellation method for general Catmull-Clark subdivision surfaces. Unlike previous approaches based on control mesh refinement, which generate approximate meshes that usually do not interpolate the limit surface, the new method is based on direct evaluation of the limit surface, generating an inscribed polyhedron of the limit surface. With explicit evaluation of general Catmull-Clark subdivision surfaces now available, the new adaptive tessellation method can precisely measure the error at every point of the limit surface. Hence, it has complete control of the accuracy of the tessellation result. Cracks are avoided by a recursive color marking process that ensures adjacent patches or subpatches use the same limit surface points in constructing the shared boundary. The new method performs limit surface evaluation only at points needed for the final rendering process; therefore it is very fast and memory efficient. The method is presented for the general Catmull-Clark subdivision scheme, but it can be used with any subdivision scheme that has an explicit evaluation method for its limit surface.

Error Wire Locating Technology with Breadth-first Search Algorithm (Breadth-first 검색 알고리즘을 이용한 와이어 오류 검출에 관한 연구)

  • Jian, Xu;Lee, Jeung-Pyo;Lee, Jae-Chul;Kim, Eal-Goo;Park, Jae-Hong
    • Proceedings of the KIEE Conference
    • /
    • 2007.04a
    • /
    • pp.258-260
    • /
    • 2007
  • Automotive circuit design has become increasingly complicated: a practical modern car circuit usually contains thousands of wires, so erroneous connections between connectors and pins are increasingly difficult to locate. This paper proposes a general way to locate all error wires in an automotive circuit design. First, we give an exact definition of an error wire to guide the work; this definition also forms the core of our algorithm. We then narrow down the region containing the error wires in several steps, applying breadth-first search to traverse all wires while keeping the time cost low. In addition, we use a bidirectional stack technique to organize the data structure for algorithm optimization. The algorithm reports all error wires and doubtful wires very efficiently, and its analysis shows that the complexity is linear. We also discuss possible improvements to the algorithm.

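The breadth-first traversal at the heart of such a method can be sketched as follows. The wiring-graph layout and pin names here are hypothetical, and the check is reduced to simple reachability; the paper's method applies its own error-wire definition during the traversal:

```python
from collections import deque

def find_unreachable_wires(harness, source):
    """BFS over a wiring graph: `harness` maps each connector pin to the
    pins it is wired to. Pins never reached from `source` belong to
    suspect (possibly mis-connected) wires."""
    visited = {source}
    queue = deque([source])
    while queue:
        pin = queue.popleft()
        for nxt in harness.get(pin, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return set(harness) - visited

# Toy harness: "c9.*" pins are isolated from the battery feed.
harness = {"bat+": ["c1.1"], "c1.1": ["c2.3"], "c2.3": [],
           "c9.9": ["c9.8"], "c9.8": []}
assert find_unreachable_wires(harness, "bat+") == {"c9.9", "c9.8"}
```

Because each pin and each wire is visited at most once, the traversal is linear in the size of the harness, matching the complexity the paper reports.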

Limit speeds and stresses in power law functionally graded rotating disks

  • Madan, Royal;Saha, Kashinath;Bhowmick, Shubhankar
    • Advances in Materials Research
    • /
    • v.9 no.2
    • /
    • pp.115-131
    • /
    • 2020
  • Limit elastic speed analysis of an Al/SiC-based functionally graded annular disk of uniform thickness has been carried out for two cases, namely metal-rich and ceramic-rich. In the present study, the unknown field variable for radial displacement is solved using a variational method, wherein the solution is obtained by Galerkin's error minimization principle. One objective was to identify the variation of induced stress in a functionally graded disk of uniform thickness at the limit elastic speed using a modified rule of mixtures, comparing the induced von Mises stress with the yield stress along the disk radius and thereby locating the yield initiation. Furthermore, the limit elastic speed is reported for combinations of grading index (n) and aspect ratio (a/b). Results indicate that the limit elastic speed increases with increasing grading index. With increasing aspect ratio, the limit elastic speed increases up to a critical value, beyond which it recedes. A further objective was to examine the variation of yield stress with the volume fraction within the disk, which later helps in material tailoring. The study reveals the qualitative variation of yield stress for the FG disk with volume fraction, opening the possibility of material tailoring from the processing standpoint in practice.

A Study on the Performance Management of the Laser Imager

  • Lee, Hyeong-Jin;In, Gyeong-Hwan;Lee, Won-Hong;Kim, Geon-Jung
    • Korean Journal of Digital Imaging in Medicine
    • /
    • v.3 no.1
    • /
    • pp.126-132
    • /
    • 1997
  • Purpose: To incorporate film density control into an automatic processor quality control program by comparing film density variations with the density corrections made using the Check Density and Adjust Density programs in the laser imager's calibration mode. Methods: Check and Adjust density variations were observed on a control chart, against standard steps and values, twice a week for seven months from December 1995 to June 1996. (1) Density values were measured at each step after printing the 17-step sensitometric pattern of the Check Density program. (2) In the same way, density values were measured after correcting the density with the Adjust Density program whenever they exceeded the allowable error limit. Results: With the Check Density program, the rates of exceeding the limit for density difference (DD) and middle density (MD) were: FL-IM3543 DD = 75%, MD = 72.5%; FL-IM D DD = 0%, MD = 30.8% (14.5%). After correcting the density with the Adjust Density program, the exceeding rates for both laser imagers were zero percent. The standard deviations were lower for the FL-IM D than for the FL-IM3543 on the Check Density control chart, but higher on the Adjust Density control chart. Conclusion: (1) For quality control of the laser imager, check density variations by printing the sensitometric pattern at least once a week. (2) In a dusty environment, check the laser beam transmission after cleaning the laser optical unit once a month. (3) Be sure to measure and check density values using the Adjust Density program whenever they exceed the allowable error limit. (4) Maintain better film density by running the Adjust Density program even when the Check Density values lie within the normal limit.


A Study on Development of Structural Health Monitoring System for Steel Beams Using Strain Gauges (변형률계를 이용한 강재보의 건전도 평가 시스템 개발에 관한 연구)

  • Hahn, Hyun Gyu;Ahn, Hyung Joon
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.16 no.1
    • /
    • pp.99-109
    • /
    • 2012
  • This study aimed to develop a structural health monitoring system for steel beams by suggesting and verifying a theoretical formula for displacement estimation using strain gauges, and by estimating the loading points and magnitudes. According to the results of this study, when a load of 160 kN (56% of the yield load) was applied, the error of the deflection obtained with a strain gauge at the point of maximum deflection, compared with the deflection measured by a displacement meter, was within 2%, and the estimates of the magnitude and points of load application showed errors of no more than 1%. This suggests that the displacement and load of steel beams can be measured with strain gauges alone, enabling more cost-effective sensor designs without displacement meters or load cells. The structural health monitoring system program, implemented in LabVIEW, gives graded warnings whenever the measured data exceed a specified range (strength limit state, serviceability limit state, yield strain), and both the serviceability limit state and the strength limit state can be monitored simultaneously with strain gauges alone.
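The idea of recovering deflection from strain readings rests on plain Euler-Bernoulli beam theory: curvature equals strain divided by the gauge's distance from the neutral axis, and deflection follows by integrating curvature twice. The sketch below is a generic numerical version of that idea with simply-supported end conditions, not the paper's specific formula:

```python
def deflection_from_strain(strains, xs, c):
    """Estimate beam deflection from measured surface strains.
    strains[i] is the strain at position xs[i]; c is the distance from
    the neutral axis to the gauge. Curvature kappa(x) = strain(x) / c is
    integrated twice by the trapezoidal rule; the end condition
    v(0) = v(L) = 0 (simply supported) fixes the integration constants."""
    kappas = [e / c for e in strains]
    # first integration: slope (up to an unknown constant, fixed below)
    slope = [0.0]
    for i in range(1, len(xs)):
        h = xs[i] - xs[i - 1]
        slope.append(slope[-1] + 0.5 * (kappas[i] + kappas[i - 1]) * h)
    # second integration: deflection
    v = [0.0]
    for i in range(1, len(xs)):
        h = xs[i] - xs[i - 1]
        v.append(v[-1] + 0.5 * (slope[i] + slope[i - 1]) * h)
    # enforce v(L) = 0 by removing the linear term theta0 * x
    theta0 = v[-1] / (xs[-1] - xs[0])
    return [vi - theta0 * (xi - xs[0]) for vi, xi in zip(v, xs)]

# Uniform curvature kappa0 = 0.01 over a span of L = 10 gives the exact
# midspan magnitude kappa0 * L^2 / 8 = 0.125.
d = deflection_from_strain([0.001] * 11, list(range(11)), 0.1)
assert abs(abs(d[5]) - 0.125) < 1e-9
```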