• Title/Summary/Keyword: 2 of 2 Runs Rule

Statistical design of Shewhart control chart with runs rules (런 규칙이 혼합된 슈와르트 관리도의 통계적 설계)

  • Kim, Young-Bok;Hong, Jung-Sik;Lie, Chang-Hoon
    • Journal of Korean Society for Quality Management, v.36 no.3, pp.34-44, 2008
  • This research proposes a design method based on the statistical characteristics of the Shewhart control chart incorporating the 2 of 2 and 2 of 3 runs rules, respectively. A Markov chain approach is employed to calculate the in-control and out-of-control average run lengths (ARL). Two different control limit coefficients, one for the Shewhart scheme and one for the runs rule scheme, are derived simultaneously so as to minimize the out-of-control average run length subject to a reasonable in-control average run length. Numerical examples show that the statistical performance of the hybrid control scheme is superior to that of the original Shewhart control chart.
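
As an editor's illustration of the Markov chain approach described above (a minimal sketch, not the paper's exact formulation), the ARL of a Shewhart chart combined with a 2 of 2 runs rule can be computed from the transient-state transition matrix $Q$ as $ARL = e_1^\top (I-Q)^{-1}\mathbf{1}$. The limits k1, k2 and the shift size below are assumed values for individual observations.

```python
# Minimal sketch: ARL of a Shewhart chart (action limit +/-k1) combined with a
# 2-of-2 runs rule (two consecutive points beyond the same +/-k2 warning limit),
# via an absorbing Markov chain.  Illustrative limits, not the paper's design.
import numpy as np
from scipy.stats import norm

def arl_shewhart_2of2(k1, k2, delta=0.0):
    """ARL for individual observations with a mean shift `delta` (in sigma units)."""
    p_center = norm.cdf(k2 - delta) - norm.cdf(-k2 - delta)
    p_upper = norm.cdf(k1 - delta) - norm.cdf(k2 - delta)
    p_lower = norm.cdf(-k2 - delta) - norm.cdf(-k1 - delta)
    # Transient states: 0 = last point central, 1 = last point in the upper
    # warning zone, 2 = last point in the lower warning zone.  A repeat of the
    # same warning zone (or any point beyond k1) absorbs, i.e. signals.
    Q = np.array([[p_center, p_upper, p_lower],
                  [p_center, 0.0,     p_lower],
                  [p_center, p_upper, 0.0]])
    # Expected number of steps to absorption, starting from the central state.
    return np.linalg.solve(np.eye(3) - Q, np.ones(3))[0]

if __name__ == "__main__":
    k1, k2 = 3.0, 1.78   # assumed coefficients for illustration only
    print("in-control ARL   :", round(arl_shewhart_2of2(k1, k2, 0.0), 1))
    print("ARL at delta = 1 :", round(arl_shewhart_2of2(k1, k2, 1.0), 1))
```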

$\bar{X}$ Control Chart with Runs Rules: A Review (규칙을 가진 $\bar{X}$ 관리도에 관한 통람)

  • Park, Jin-Young;Seo, Sun-Keun
    • Journal of Korean Society for Quality Management, v.40 no.2, pp.176-185, 2012
  • Following the work of Derman and Ross (1997), which considered simple main runs rules and derived the ARL (Average Run Length) using Markov chain modeling, diverse alternative main and supplementary runs rules have been proposed for the $\bar{X}$ control chart, the most popular chart for monitoring the mean of a process. This paper reviews and discusses the state-of-the-art research on these runs rules and classifies it according to several properties of the rules. The ARL derivation for a proposed runs rule is also illustrated.
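
Alongside the Markov chain derivations surveyed above, the run length of any main/supplementary rule combination can also be estimated by simulation. The sketch below is the editor's illustration, not material from the paper: it estimates the ARL of the classical scheme "one point beyond 3 sigma, or 2 of the last 3 points beyond the same 2 sigma warning limit" for individual observations.

```python
# Monte Carlo ARL estimate for a main rule (|z| > 3) combined with a classical
# 2-of-3 supplementary runs rule.  Editor's illustration only.
import numpy as np

def run_length(delta, rng, k1=3.0, k2=2.0, max_len=100_000):
    recent = []                                   # warning-zone signs of the last 3 points
    for t in range(1, max_len + 1):
        z = rng.normal(delta, 1.0)
        if abs(z) > k1:                           # main (Shewhart) rule
            return t
        recent.append(1 if z > k2 else (-1 if z < -k2 else 0))
        recent = recent[-3:]
        if recent.count(1) >= 2 or recent.count(-1) >= 2:   # 2-of-3 supplementary rule
            return t
    return max_len

def arl(delta, reps=5_000, seed=1):
    rng = np.random.default_rng(seed)
    return np.mean([run_length(delta, rng) for _ in range(reps)])

if __name__ == "__main__":
    print("estimated in-control ARL :", round(arl(0.0), 1))
    print("estimated ARL at delta=1 :", round(arl(1.0), 1))
```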

Economic Analysis for Detection of Out-of-Control of Process Using 2 of 2 Runs Rules (2중 2 런규칙을 사용한 공정이상 감지방법의 경제성 분석)

  • Kim, Young Bok;Hong, Jung Sik;Lie, Chang Hoon
    • Journal of Korean Institute of Industrial Engineers, v.34 no.3, pp.308-317, 2008
  • This research investigates the economic characteristics of the 2 of 2 runs rules under the Shewhart $\bar{X}$ control chart scheme. A Markov chain approach is employed to calculate the in-control average run length (ARL) and the average length of the analysis cycle. States of the process are defined according to the process conditions at sampling times, and transition probabilities are derived from the state definitions. A steady-state cost function is constructed based on the Lorenzen and Vance (1986) model. Numerical examples show that the 2 of 2 runs rules are economically superior to the Shewhart $\bar{X}$ chart in many cases.
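
The Lorenzen and Vance (1986) cost function involves many process and cost parameters; as a rough orientation only, the editor's sketch below uses a much simplified per-hour cost in the same spirit (made-up parameter values, search and repair times ignored). It is not the paper's model, but it shows why a scheme with a smaller out-of-control ARL can be cheaper per hour even when its in-control ARL is shorter.

```python
# Much simplified hourly-cost sketch in the spirit of economic chart design.
# NOT the Lorenzen-Vance model; every parameter value below is made up.
def expected_hourly_cost(arl0, arl1,
                         h=1.0,         # hours between samples
                         lam=0.02,      # assignable-cause occurrence rate (per hour)
                         c_sample=1.0,  # cost per sample
                         c_false=50.0,  # cost per false alarm
                         c_out=100.0):  # cost per hour of out-of-control operation
    t_in = 1.0 / lam                    # expected in-control period length
    t_out = arl1 * h                    # expected time from shift to signal
    cycle = t_in + t_out                # expected cycle length (search/repair ignored)
    n_samples = cycle / h
    n_false = (t_in / h) / arl0         # expected false alarms per cycle
    return (c_sample * n_samples + c_false * n_false + c_out * t_out) / cycle

if __name__ == "__main__":
    # Illustrative ARL values only (not taken from the paper).
    print("Shewhart-like (ARL0=370, ARL1=44):", round(expected_hourly_cost(370, 44), 2))
    print("2-of-2 scheme (ARL0=225, ARL1=20):", round(expected_hourly_cost(225, 20), 2))
```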

Statistical Design of X Control Chart with Improved 2-of-3 Main and Supplementary Runs Rules (개선된 3 중 2 주 및 보조 런 규칙을 가진 X관리도의 통계적 설계)

  • Park, Jin-Young;Seo, Sun-Keun
    • Journal of Korean Society for Quality Management, v.40 no.4, pp.467-480, 2012
  • Purpose: This paper introduces new 2-of-3 main and supplementary runs rules to improve the performance of the classical $\bar{X}$ control chart for detecting small process shifts. Methods: The proposed runs rules are compared with other competitive runs rules by numerical experiments. A nonlinear optimization problem that determines the action and warning limits simultaneously by minimizing the out-of-control ARL at a specified shift of the process mean is formulated, and a procedure to find the two limits is illustrated with a numerical example. Results: The proposed 2-of-3 main and supplementary runs rules demonstrate improved performance over other runs rules in detecting a sudden shift of the process mean caused by simultaneous changes of the mean and standard deviation. Conclusion: To improve the performance in detecting small to moderate shifts, the proposed runs rules can be used with $\bar{X}$ control charts.
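
The idea of fixing both limits at once, minimizing the out-of-control ARL subject to a floor on the in-control ARL, can be sketched with a crude grid search. The editor's sketch below uses the simpler Shewhart-plus-2-of-2 scheme (whose Markov chain ARL is compact) rather than the paper's improved 2-of-3 rules, and the ARL floor of 370 and the grid ranges are assumptions.

```python
# Crude grid search for the action limit k1 and warning limit k2 of a
# Shewhart + 2-of-2 scheme: minimize the ARL at a 1-sigma shift subject to an
# in-control ARL of at least 370.  Illustration only, not the paper's rules.
import numpy as np
from scipy.stats import norm

def arl(k1, k2, delta=0.0):
    """Markov-chain ARL of a Shewhart chart combined with a 2-of-2 runs rule."""
    p_c = norm.cdf(k2 - delta) - norm.cdf(-k2 - delta)
    p_u = norm.cdf(k1 - delta) - norm.cdf(k2 - delta)
    p_l = norm.cdf(-k2 - delta) - norm.cdf(-k1 - delta)
    Q = np.array([[p_c, p_u, p_l],
                  [p_c, 0.0, p_l],
                  [p_c, p_u, 0.0]])
    return np.linalg.solve(np.eye(3) - Q, np.ones(3))[0]

if __name__ == "__main__":
    best = None
    for k1 in np.arange(3.0, 4.01, 0.05):
        for k2 in np.arange(1.0, 3.01, 0.05):
            if k2 >= k1:
                continue
            if arl(k1, k2, 0.0) >= 370.0:      # in-control constraint
                a1 = arl(k1, k2, 1.0)          # objective: ARL at delta = 1
                if best is None or a1 < best[0]:
                    best = (a1, round(k1, 2), round(k2, 2))
    print("minimal ARL(delta=1) and limits (k1, k2):", best)
```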

X Control Charts under the Second Order Autoregressive Process

  • Baik, Jai-Wook
    • Journal of Korean Society for Quality Management, v.22 no.1, pp.82-95, 1994
  • When independent individual measurements are taken, both $S/c_4$ and $\bar{R}/d_2$ are unbiased estimators of the process standard deviation. With dependent data, however, $\bar{R}/d_2$ is not an unbiased estimator of the process standard deviation, whereas $S/c_4$ is an asymptotically unbiased estimator. If correlation exists in the data, positive (negative) correlation tends to increase (decrease) the ARL. The effect of using $\bar{R}/d_2$ is greater than that of $S/c_4$ when the independence assumption is invalid. Supplementary runs rules shorten the ARL of X control charts dramatically in the presence of correlation in the data.
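
The bias issue can be seen directly by simulation. The editor's sketch below uses arbitrary AR(2) coefficients (not the paper's settings), forms rational subgroups of size 5, and compares $\bar{S}/c_4$ and $\bar{R}/d_2$ with the marginal standard deviation estimated from a long realization of the series.

```python
# Sketch: effect of AR(2) autocorrelation on subgroup-based sigma estimates.
# Arbitrary illustrative coefficients, not the paper's settings.
import numpy as np

C4_5, D2_5 = 0.9400, 2.326           # standard constants for subgroup size n = 5

def ar2(phi1, phi2, n, rng, burn=1_000):
    """Simulate a stationary AR(2) series x_t = phi1*x_{t-1} + phi2*x_{t-2} + e_t."""
    x = np.zeros(n + burn)
    e = rng.normal(size=n + burn)
    for t in range(2, n + burn):
        x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + e[t]
    return x[burn:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = ar2(0.6, 0.2, 100_000, rng)         # positively autocorrelated process
    marginal_sd = series.std(ddof=1)             # reference: sd of a long realization

    groups = series[:20_000].reshape(-1, 5)      # rational subgroups of size 5
    s_bar = groups.std(axis=1, ddof=1).mean()
    r_bar = (groups.max(axis=1) - groups.min(axis=1)).mean()

    print("marginal sd (long series):", round(marginal_sd, 3))
    print("S-bar / c4               :", round(s_bar / C4_5, 3))
    print("R-bar / d2               :", round(r_bar / D2_5, 3))
```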

Classification Rule for Optimal Blocking for Nonregular Factorial Designs

  • Park, Dong-Kwon;Kim, Hyoung-Soon;Kang, Hee-Kyoung
    • Communications for Statistical Applications and Methods, v.14 no.3, pp.483-495, 2007
  • In a general fractional factorial design, the n levels of a factor are coded by the $n^{th}$ roots of unity. Pistone and Rogantin (2007) gave a full generalization to mixed-level designs of the theory of the polynomial indicator function using this device. This article discusses optimal blocking schemes for nonregular designs. Following the hierarchical principle, minimum aberration (MA) has been used as an important criterion for selecting blocked regular fractional factorial designs. The MA criterion is mainly based on the defining contrast groups, which exist only for regular designs and not for nonregular designs. Recently, Cheng et al. (2004) adapted the generalized (G-)MA criterion discussed by Tang and Deng (1999) to study optimal $2^p$ blocking schemes for nonregular factorial designs. Their approach is based on the method of replacement, assigning the distinct level combinations in the blocking columns to different blocks. However, when the number of blocks is not a power of two, no method is yet available. As an example, suppose a 12-run Plackett-Burman design is to be run over three days. How can the 12 runs be arranged into the three blocks? To solve this problem, we apply the G-MA criterion to nonregular mixed-level blocking schemes via the mixed-level indicator function and give an answer to the question.
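
The generalized-aberration machinery behind the G-MA criterion can be made concrete with Tang and Deng's J-characteristics, $J_k(s)=\left|\sum_i \prod_{j\in s} x_{ij}\right|$ for a subset $s$ of $\pm 1$ columns. The editor's sketch below builds a 12-run Plackett-Burman design from one standard cyclic generator row (an assumption, and not the paper's blocking construction) and tabulates $|J_3|$ over all column triples.

```python
# Sketch: J-characteristics (Tang & Deng, 1999) of a 12-run Plackett-Burman
# design, the quantities underlying generalized (G-)aberration comparisons.
# The cyclic generator row is one standard PB12 construction; this is an
# editor's illustration, not the paper's blocking method.
from itertools import combinations
import numpy as np

gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]   # standard PB12 generator row
rows = [np.roll(gen, k) for k in range(11)]          # 11 cyclic shifts ...
rows.append(-np.ones(11, dtype=int))                 # ... plus the all-minus row
D = np.vstack(rows)                                  # 12 x 11 design matrix

j3_counts = {}
for s in combinations(range(11), 3):
    j = int(abs(D[:, s].prod(axis=1).sum()))         # J_3(s) = |sum_i prod_{j in s} x_ij|
    j3_counts[j] = j3_counts.get(j, 0) + 1

print("distribution of |J3| over all column triples:", j3_counts)
```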

Travelling Salesman Problem Based on Area Division and Connection Method (외판원 문제의 지역 분할-연결 기법)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.15 no.3, pp.211-218, 2015
  • This paper introduces a 'divide-and-conquer' algorithm for the travelling salesman problem (TSP). The top 10n edges are selected beforehand from the pool of n(n-1) candidate edges, sorted in ascending order of inter-vertex distance. The proposed algorithm first selects partial paths interconnected by the shortest edge $r_1=d\{v_i,v_j\}$ of each vertex $v_i$ and assigns them as individual regions. For $r_2$, it connects all inter-vertex edges within each region, and inter-region edges are connected in accordance with the connection rule. Finally, for $r_3$, it connects only inter-region edges until one whole Hamiltonian cycle is constructed. When tested on TSP-1 (n=26) and TSP-2 (n=42) of real cities and on a randomly constructed TSP-3 (n=50) in the Euclidean plane, the algorithm obtained optimal solutions for the first two and, for the third, a solution improved over that of Valenzuela and Jones. In contrast to the brute-force search algorithm, which runs in $O(n!)$ time, the proposed algorithm performs at most 10n connection steps, with a time complexity of $O(n^2)$.
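
The first stage of the scheme, grouping vertices into regions along each vertex's shortest edge $r_1$, can be sketched with a union-find pass. This is the editor's reading of that stage only, not the authors' code; the paper's $r_2$/$r_3$ connection rules are not reproduced, and the coordinates are made up.

```python
# Sketch of the first stage only: form regions by joining every vertex to its
# nearest neighbour (the r1 edges) with union-find.  Illustrative data; the
# paper's r2/r3 inter-region connection rules are not reproduced.
import numpy as np

def regions_from_nearest_neighbours(points):
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nearest = d.argmin(axis=1)                 # the r1 edge of every vertex

    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i

    for i, j in enumerate(nearest):            # union the endpoints of each r1 edge
        parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pts = rng.uniform(0, 100, size=(20, 2))    # 20 random cities in the plane
    for k, region in enumerate(regions_from_nearest_neighbours(pts)):
        print(f"region {k}: vertices {region}")
```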

Simple Online Multiple Human Tracking based on LK Feature Tracker and Detection for Embedded Surveillance

  • Vu, Quang Dao;Nguyen, Thanh Binh;Chung, Sun-Tae
    • Journal of Korea Multimedia Society, v.20 no.6, pp.893-910, 2017
  • In this paper, we propose a simple online multiple object (human) tracking method, LKDeep (Lucas-Kanade feature and Detection based Simple Online Multiple Object Tracker), which can run online fast enough on a CPU core alone, with tracking performance acceptable for embedded surveillance purposes. The proposed LKDeep is a pragmatic hybrid approach that tracks multiple objects (humans) mainly with LK features but compensates with detection at periodic intervals or when necessary. Compared to other state-of-the-art multiple object tracking methods based on the 'Tracking-By-Detection (TBD)' approach, the proposed LKDeep is faster since it does not have to run detection on every frame and it uses a simple association rule, yet it shows good object tracking performance. Experiments comparing LKDeep with online state-of-the-art MOT methods reported in the MOT challenge [1], all using the public DPM detector, show that the proposed simple online MOT method runs faster while maintaining good tracking performance for surveillance purposes. It is further observed in the single object tracking (SOT) visual tracker benchmark experiment [2] that LKDeep with an optimized deep learning detector can run online fast, with tracking performance comparable to other state-of-the-art SOT methods.
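
The track-with-LK, refresh-with-detection loop can be sketched with OpenCV. The editor's skeleton below is not the LKDeep implementation: `cv2.goodFeaturesToTrack` stands in for the object detector, no track/detection association is performed, and `video.mp4` and `DETECT_EVERY` are placeholders.

```python
# Skeleton of "track with pyramidal Lucas-Kanade, refresh with detection every
# N frames".  NOT the LKDeep implementation: goodFeaturesToTrack stands in for
# the detector and no association between tracks and detections is done.
import cv2

DETECT_EVERY = 10                              # refresh interval (assumed)
cap = cv2.VideoCapture("video.mp4")            # placeholder input path

ok, frame = cap.read()
if not ok:
    raise SystemExit("could not read input video")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frame_idx += 1

    if pts is not None and len(pts) > 0:
        # Track the current feature points with pyramidal Lucas-Kanade flow.
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                      winSize=(21, 21), maxLevel=3)
        pts = new_pts[status.flatten() == 1].reshape(-1, 1, 2)

    if frame_idx % DETECT_EVERY == 0 or pts is None or len(pts) < 50:
        # Periodic (or "on necessity") refresh with the stand-in detector.
        pts = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)

    prev_gray = gray

cap.release()
```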

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 1993.06a, pp.975-976, 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, which involve two distinct approaches. The first approach is to use application specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture; here, we use a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both using a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM memory. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E, and THEN Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach, the approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. They are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process; it usually runs a single program or a small set of programs, so tailoring a microprocessor to create an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds of 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference, and the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches, even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes; an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

Table I. Inference time with 51 rules
                      MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6000 inferences     125 s                  49 s                        0.0038 s
  1 inference         20.8 ms                8.2 ms                      6.4 µs
  FLIPS               48                     122                         156,250
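
The inference mechanism the chips implement, max-min composition with Mamdani implication followed by centroid defuzzification, is easy to mirror in software, and it also shows why dedicated min/max instructions pay off: the inner loop is almost entirely element-wise minimum and maximum. The editor's sketch below is a generic two-rule, two-input/one-output illustration with made-up triangular membership functions over 64-point arrays, not the chip's 51-rule format.

```python
# Generic Mamdani max-min inference with centroid defuzzification, mirroring in
# software the operations the ASIC chips implement.  Membership functions and
# rules are made up; fuzzy sets use 64-element arrays as on the chip.
import numpy as np

u = np.linspace(0.0, 1.0, 64)                        # common universe, 64 samples

def tri(a, b, c):
    """Triangular membership function with corners a, b, c on the universe u."""
    return np.clip(np.minimum((u - a) / (b - a), (c - u) / (c - b)), 0.0, 1.0)

LOW, HIGH = tri(-0.5, 0.0, 0.5), tri(0.5, 1.0, 1.5)  # antecedent/consequent sets

def mamdani(x1, x2):
    # Rule 1: IF x1 is LOW  AND x2 is LOW  THEN y is LOW
    # Rule 2: IF x1 is HIGH AND x2 is HIGH THEN y is HIGH
    w1 = min(np.interp(x1, u, LOW),  np.interp(x2, u, LOW))    # AND = min
    w2 = min(np.interp(x1, u, HIGH), np.interp(x2, u, HIGH))
    agg = np.maximum(np.minimum(w1, LOW),            # Mamdani implication = min,
                     np.minimum(w2, HIGH))           # aggregation = max
    return float((u * agg).sum() / (agg.sum() + 1e-12))  # centroid defuzzification

if __name__ == "__main__":
    print("output for inputs (0.2, 0.3):", round(mamdani(0.2, 0.3), 3))
    print("output for inputs (0.8, 0.9):", round(mamdani(0.8, 0.9), 3))
```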

Analysis on the Driving Safety and Investment Effect using Severity Model of Fatal Traffic Accidents (대형교통사고 심각도 모형에 의한 주행안전성 및 투자효과 분석)

  • Lim, Chang-Sik;Choi, Yang-Won
    • Journal of Korean Society of Transportation, v.29 no.3, pp.103-114, 2011
  • In this study, we discuss a fatal accident severity model obtained from the analysis of 112 crash sites collected since 2000, and the resulting relationship between fatal accidents and roadway geometric design. From 720 computer simulation runs for improving driving safety, we reached the following conclusions. First, the results of cross- and frequency-analyses of the crash sites showed that 43.7% of the accidents occurred on curved roads, 60.7% on vertical curve sections, 57.2% on roadways with a radius of curvature of 0 to 24 m, 83.9% on roads with a superelevation of 0.1 to 2.0%, and 49.1% on one-way 2-lane roads; the vehicle types involved were passenger vehicles (33.0%), trucks (20.5%), and buses (14.3%) in order of frequency. The results also show that superelevation is the most influential factor for fatal accidents. Second, employing the Ordered Probit Model (OPM), we developed a severity model for fatal accidents as a function of various road conditions so that the damage can be predicted. The proposed model can assist practitioners in identifying dangerous roadway segments and in taking appropriate measures in advance. Third, the computer simulation runs show that providing adequate superelevation on the segment where a fatal accident occurred could reduce similar fatal accidents by at least 85%. This result indicates that the regulations specified in the Rule for Road Structure and Facility Standard (description and guidelines) should be enhanced to include more specific requirements for providing superelevation.
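
The Ordered Probit severity model can be reproduced in outline with statsmodels. The editor's sketch below fits `OrderedModel` with a probit link to synthetic data whose covariate names loosely echo the abstract's factors (superelevation, curve radius, lane count); it illustrates the model class only, not the paper's data or estimated coefficients.

```python
# Sketch: fitting an ordered probit severity model with statsmodels.
# Synthetic data; covariate names echo the abstract's factors, but coefficients
# and thresholds are NOT the paper's estimates.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "superelevation": rng.uniform(0.0, 6.0, n),      # percent
    "curve_radius":   rng.uniform(20.0, 500.0, n),   # metres
    "lanes":          rng.integers(1, 4, n),
})
# Latent severity: assumed to worsen with low superelevation and tight curves.
latent = (-0.4 * X["superelevation"] - 0.004 * X["curve_radius"]
          + 0.3 * X["lanes"] + rng.normal(size=n))
severity = pd.cut(latent, bins=[-np.inf, -2.0, -1.0, np.inf],
                  labels=["minor", "serious", "fatal"], ordered=True)

model = OrderedModel(severity, X, distr="probit")    # probit link = ordered probit
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```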