• Title/Summary/Keyword: a Markov chain


A practice on performance testing for web-based systems: Hyperlink testing for web-based system

  • Chang, Wen-Kui;Ron, Shing-Kai
    • International Journal of Quality Innovation
    • /
    • v.1 no.1
    • /
    • pp.64-74
    • /
    • 2000
  • This paper investigates performance testing in web browsing environments. Among the typical non-functional characteristics, the index of link validity is explored in depth, and a framework for certifying link correctness in a web site is proposed. All possible navigation paths are first formulated as a usage model with the Markov chain property, which is then used to generate test script files statistically. By collecting any existing failure information and tracing the browsed test paths, certification analysis can be performed using Markov chain theory. The certification result yields significant information such as test coverage, a reliability measure, and confidence intervals. The proposed mechanism provides a complete and systematic methodology for finding linking errors and other web technology errors. In addition, an actual application of the proposed approach to a web-based system is demonstrated quantitatively through a certification tool. (A minimal sketch of sampling test paths from such a usage model follows this entry.)

  • PDF
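
The following is a minimal sketch, not taken from the paper, of the core idea of a Markov-chain usage model for link testing: each page's outgoing links are assigned transition probabilities and navigation paths are sampled as statistical test cases. The `link_graph` pages, the uniform transition probabilities, and the `sample_path` helper are illustrative assumptions.

```python
import random

# Hypothetical link graph of a small site: page -> list of outgoing links.
link_graph = {
    "home":     ["products", "about", "contact"],
    "products": ["home", "detail"],
    "detail":   ["products", "home"],
    "about":    ["home"],
    "contact":  ["home"],
}

# Usage model with the Markov property: assume uniform transition
# probabilities over outgoing links (a real model would estimate them
# from access logs).
transition = {
    page: {nxt: 1.0 / len(links) for nxt in links}
    for page, links in link_graph.items()
}

def sample_path(start="home", length=6, rng=random.Random(0)):
    """Sample one navigation path from the usage model; each path
    becomes one statistically generated test script."""
    path = [start]
    for _ in range(length - 1):
        pages = list(transition[path[-1]].keys())
        weights = list(transition[path[-1]].values())
        path.append(rng.choices(pages, weights=weights, k=1)[0])
    return path

if __name__ == "__main__":
    for _ in range(3):
        print(" -> ".join(sample_path()))
```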

An Algorithm for Computing the Fundamental Matrix of a Markov Chain

  • Park, Jeong-Soo;Gho, Geon
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.22 no.1
    • /
    • pp.75-85
    • /
    • 1997
  • A stable algorithm for computing the fundamental matrix (I-Q)$^{-1}$ of a Markov chain is proposed, where Q is a substochastic matrix. The proposed algorithm utilizes the GTH algorithm (Grassmann, Taksar and Heyman, 1985), which has been shown to be stable for finding the steady-state distribution of a finite Markov chain. Our algorithm involves no subtractions, so loss of significant digits due to cancellation is ruled out completely, whereas Gaussian elimination involves subtractions and may therefore lose accuracy through cancellation. We present numerical evidence showing that our algorithm achieves higher accuracy than ordinary Gaussian elimination. (A brief sketch of the underlying GTH reduction follows this entry.)

  • PDF
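
As a companion to the abstract, here is a brief sketch of the GTH reduction it builds on, applied to the steady-state distribution of a small chain; the paper's subtraction-free computation of the fundamental matrix (I-Q)$^{-1}$ itself is not reproduced here, and the example matrix is made up.

```python
import numpy as np

def gth_stationary(P):
    """Stationary distribution of a finite irreducible Markov chain via the
    GTH algorithm (Grassmann, Taksar and Heyman, 1985).  The reduction uses
    only additions, multiplications and divisions, so no cancellation occurs."""
    P = np.array(P, dtype=float)
    n = P.shape[0]
    # Reduction: eliminate states n-1, n-2, ..., 1 (0-indexed).
    for k in range(n - 1, 0, -1):
        s = P[k, :k].sum()                            # mass leaving state k downward
        P[:k, k] /= s                                 # scale the column entering state k
        P[:k, :k] += np.outer(P[:k, k], P[k, :k])     # fold state k into the remaining states
    # Back substitution.
    x = np.zeros(n)
    x[0] = 1.0
    for k in range(1, n):
        x[k] = x[:k] @ P[:k, k]
    return x / x.sum()

# Small illustrative chain.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.4, 0.4]]
pi = gth_stationary(P)
print(pi, pi @ np.array(P))   # pi should satisfy pi P = pi
```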

Stochastic simulation based on copula model for intermittent monthly streamflows in arid regions

  • Lee, Taesam;Jeong, Changsam;Park, Taewoong
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2015.05a
    • /
    • pp.488-488
    • /
    • 2015
  • Intermittent streamflow is a common phenomenon in arid and semi-arid regions. Stochastic simulation data are essential for managing the water resources of intermittent streams, but seasonal stochastic modeling of intermittent streamflow is a difficult task. In this study, we simulate intermittent monthly streamflow using a periodic Markov chain model for occurrence and the periodic gamma autoregressive (PGAR) and copula models for amount. The copula models were tested in a previous study for the simulation of yearly streamflow and successfully reproduced the key and operational statistics of the historical data; however, they had never been tested on a monthly time scale. The intermittent models were applied to the Colorado River system in the present study. A few drawbacks of the PGAR model were identified, such as significant underestimation of minimum values on an aggregated yearly time scale and restrictions on the parameter boundaries. Conversely, the copula models do not present such drawbacks and reproduce the key and operational statistics well. We conclude that the periodic Markov chain combined with the copula models is a practicable method for simulating intermittent monthly streamflow time series. (A minimal occurrence-simulation sketch follows this entry.)

  • PDF
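
A minimal sketch of the occurrence part of such a model: a two-state (flow/no-flow) Markov chain whose transition probabilities vary by calendar month, with a simple gamma draw standing in for the PGAR/copula amount models of the paper. All parameter values and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical month-by-month (periodic) parameters; a real application would
# estimate these from the historical record for each calendar month.
p_wet_given_dry = np.full(12, 0.30)   # P(flow occurs | previous month dry)
p_wet_given_wet = np.full(12, 0.70)   # P(flow occurs | previous month wet)
gamma_shape     = np.full(12, 2.0)    # shape of monthly flow amounts
gamma_scale     = np.full(12, 5.0)    # scale of monthly flow amounts

def simulate_monthly_flow(n_years, start_wet=False):
    """Occurrence follows a periodic two-state Markov chain; positive amounts
    are drawn from a month-specific gamma distribution (a stand-in for the
    PGAR/copula amount models discussed in the abstract)."""
    flows, wet = [], start_wet
    for _ in range(n_years):
        for m in range(12):
            p = p_wet_given_wet[m] if wet else p_wet_given_dry[m]
            wet = rng.random() < p
            amount = rng.gamma(gamma_shape[m], gamma_scale[m]) if wet else 0.0
            flows.append(amount)
    return np.array(flows).reshape(n_years, 12)

sim = simulate_monthly_flow(100)
print("fraction of zero-flow months:", (sim == 0).mean())
```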

Economic Adjustment Design For $\bar{X}$ Control Chart: A Markov Chain Approach

  • Yang, Su-Fen
    • International Journal of Quality Innovation
    • /
    • v.2 no.2
    • /
    • pp.136-144
    • /
    • 2001
  • A Markov chain approach is used to develop an economic adjustment model for a process whose quality can be affected by a single special cause, where the process mean may be shifted by incorrect adjustment of a process that is operating according to its capability. The $\bar{X}$ control chart is used to signal the special cause. It is demonstrated that expressions for the expected cycle time and the expected cycle cost are easier to obtain with the proposed approach than with that of Collani, Saniga and Weigang (1994). Furthermore, the approach can easily be extended to derive the expected cycle cost and cycle time for the case of multiple special causes or multiple control charts. A numerical example illustrates the proposed method and its application. (A generic sketch of obtaining expected cycle quantities from a transient-state matrix follows this entry.)

  • PDF
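
For readers unfamiliar with the Markov-chain route to expected cycle quantities, the sketch below shows the generic fundamental-matrix identities t = (I-Q)$^{-1}$1 for expected cycle length and (I-Q)$^{-1}$c for expected cycle cost, where Q is the transient sub-matrix. The two-state matrix and the cost vector are invented for illustration and are not the states or costs of the paper's model.

```python
import numpy as np

# Illustrative transient-state transition matrix Q (states visited during one
# cycle; the absorbing state -- cycle end, e.g. detection and renewal -- is
# left out).  Numbers are made up.
Q = np.array([
    [0.90, 0.05],   # in control     -> {in control, out of control}
    [0.00, 0.70],   # out of control -> {in control, out of control}
])
ones = np.ones(Q.shape[0])

# Expected number of periods before absorption, starting in each state:
# t = (I - Q)^{-1} 1  (the fundamental-matrix identity).
t = np.linalg.solve(np.eye(2) - Q, ones)
print("expected cycle length starting in control:", t[0])

# A per-period cost vector c gives the expected cycle cost the same way.
c = np.array([1.0, 8.0])            # hypothetical per-period costs in each state
expected_cost = np.linalg.solve(np.eye(2) - Q, c)
print("expected cycle cost starting in control:", expected_cost[0])
```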

Improving learning outcome prediction method by applying Markov Chain (Markov Chain을 응용한 학습 성과 예측 방법 개선)

  • Chul-Hyun Hwang
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.4
    • /
    • pp.595-600
    • /
    • 2024
  • As the use of artificial intelligence technologies such as machine learning increases in research on predicting learning outcomes and optimizing learning pathways, the application of artificial intelligence in education is gradually advancing toward more sophisticated methods such as deep learning and reinforcement learning. This study aims to improve the prediction of future learning performance based on a learner's past performance-history data. To improve prediction performance, we propose a conditional-probability adjustment based on the Markov chain method, which improves the classifier's predictions by incorporating the learner's learning-history data in addition to the machine-learning classification itself. To confirm the effectiveness of the proposed method, more than 30 experiments were conducted per algorithm and indicator using empirical data ('teaching-aid-based early childhood education learning performance data'). In all cases, the proposed method yielded higher performance indicators than using the classification algorithm alone.
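
The abstract does not spell out the combination rule, so the following is only one plausible reading, sketched under that assumption: class probabilities from a classifier are re-weighted by first-order transition probabilities estimated from learners' performance histories and then renormalized. The level names, histories, and the `adjusted_prediction` helper are hypothetical.

```python
import numpy as np

# Hypothetical performance levels and learner histories (sequences of levels).
levels = ["low", "mid", "high"]
histories = [
    ["low", "low", "mid", "mid", "high"],
    ["mid", "mid", "mid", "high", "high"],
    ["low", "mid", "low", "mid", "mid"],
]

# Estimate a first-order Markov transition matrix from the histories.
idx = {s: i for i, s in enumerate(levels)}
counts = np.ones((len(levels), len(levels)))        # Laplace smoothing
for h in histories:
    for a, b in zip(h, h[1:]):
        counts[idx[a], idx[b]] += 1
transition = counts / counts.sum(axis=1, keepdims=True)

def adjusted_prediction(classifier_proba, last_level):
    """Re-weight the classifier's class probabilities by the transition
    probabilities out of the learner's most recent performance level,
    then renormalize (one plausible form of the conditional-probability
    adjustment described in the abstract)."""
    combined = classifier_proba * transition[idx[last_level]]
    return combined / combined.sum()

# Example: a classifier thinks the next outcome is probably "mid".
proba = np.array([0.2, 0.5, 0.3])
print(adjusted_prediction(proba, last_level="mid"))
```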

Daily Rainfall Simulation by Rainfall Frequency and State Model of Markov Chain (강우 빈도와 마코프 연쇄의 상태모형에 의한 일 강우량 모의)

  • Jung, Young-Hun;Kim, Buyng-Sik;Kim, Hung Soo;Shim, Myung-Pil
    • Journal of Wetlands Research
    • /
    • v.5 no.2
    • /
    • pp.1-13
    • /
    • 2003
  • In Korea, most rainfall is concentrated in the flood season, and flood studies have received more attention than low-flow analysis. One reason low-flow analysis has received less attention is the lack of required data such as daily rainfall, so stochastic processes such as pulse noise, the exponential distribution, and the state model of a Markov chain have been used to simulate rainfall over short terms such as a day. This study focuses on the state model of the Markov chain. A previous study performed the simulation with a state model that considered neither the flood and non-flood periods nor the frequency of rainfall for the period of a state. This study therefore considers both cases and compares the results with the known state model. The RMSEs of the suggested and known models are similar; however, the relative percentage error (PRE) shows that the suggested model gives better results. (A minimal wet/dry occurrence sketch follows this entry.)

  • PDF
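
A minimal sketch of the wet/dry occurrence component of a daily-rainfall state model: transition probabilities are counted from a binary daily record and then used to simulate occurrence. The rainfall-frequency refinement proposed in the paper is not reproduced, and the synthetic 'observed' record and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical observed daily record (True = wet day); in practice this would
# be the historical daily rainfall series for the season being modelled.
observed_wet = rng.random(3650) < 0.3

# Estimate the two-state (wet/dry) transition probabilities by counting.
pairs = list(zip(observed_wet[:-1], observed_wet[1:]))
p_wet_after_dry = (sum(1 for a, b in pairs if not a and b)
                   / max(1, sum(1 for a, _ in pairs if not a)))
p_wet_after_wet = (sum(1 for a, b in pairs if a and b)
                   / max(1, sum(1 for a, _ in pairs if a)))

def simulate_occurrence(n_days, start_wet=False):
    """Simulate the wet/dry occurrence process with the fitted Markov chain;
    a separate amount model (e.g. an exponential or gamma fit) would then
    assign depths to the wet days."""
    wet, out = start_wet, []
    for _ in range(n_days):
        p = p_wet_after_wet if wet else p_wet_after_dry
        wet = rng.random() < p
        out.append(wet)
    return np.array(out)

sim = simulate_occurrence(3650)
print("observed wet fraction:", observed_wet.mean(), "simulated:", sim.mean())
```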

Exact Cell Loss Probability Analysis of an ATM Multiplexer with Multiple Identical Input Sources

  • Choi, Woo-Yong;Jun, Chi-Hyuck
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1995.04a
    • /
    • pp.435-444
    • /
    • 1995
  • We propose a new approach to calculating the exact cell loss probability in a shared-buffer ATM multiplexer loaded with homogeneous discrete-time ON-OFF sources. Renewal cycles are identified with respect to the state of the input sources, and the buffer state in each renewal cycle is modelled as a K-state Markov chain, where K is the shared buffer size. We also analyze the queue build-up at the shared buffer, whose distribution, together with the steady-state probabilities of the Markov chain, leads to the exact cell loss probability. Our approach appears more efficient than most existing ones because the underlying Markov chain has fewer states. (A generic buffer-occupancy steady-state sketch follows this entry.)

  • PDF
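
The paper's renewal-cycle construction is not reproduced here; the sketch below only illustrates the generic final step of such analyses, solving a small buffer-occupancy Markov chain for its steady-state probabilities and reading off the probability that the buffer is full (a crude congestion indicator, not the exact cell loss probability of the paper). The K = 4 transition matrix is invented.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a finite Markov chain: solve pi (P - I) = 0
    together with the normalization sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical buffer-occupancy chain for a buffer of size K = 4
# (states 0..4 cells queued); entries are made up for illustration only.
P = np.array([
    [0.6, 0.4, 0.0, 0.0, 0.0],
    [0.3, 0.4, 0.3, 0.0, 0.0],
    [0.0, 0.3, 0.4, 0.3, 0.0],
    [0.0, 0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.0, 0.4, 0.6],
])
pi = stationary(P)
print("P(buffer full) =", pi[-1])   # congestion indicator, not the paper's
                                    # exact cell loss probability
```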

Priority MAC based on Multi-parameters for IEEE 802.15.7 VLC in Non-saturation Environments

  • Huynh, Vu Van;Le, Le Nam-Tuan;Jang, Yeong-Min
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.3C
    • /
    • pp.224-232
    • /
    • 2012
  • Priority MAC is an important issue in every communication system when differentiated-service applications are considered. In this paper, we propose a mechanism to support a priority MAC based on multiple parameters for IEEE 802.15.7 visible light communication (VLC). Using three parameters, the number of backoffs (NB), the backoff exponent (BE), and the contention window (CW), we provide priority for multi-level differentiated-service applications. We consider the beacon-enabled VLC personal area network (VPAN) mode with the slotted version of the random access algorithm. Based on a discrete-time Markov chain, we analyze the performance of the proposed mechanism in non-saturation environments. By building a Markov chain model for the multiple parameters, this paper presents the throughput and transmission delay of the VLC system. Numerical results show that the three parameters can be used to control priority in the VLC MAC protocol.
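
As a toy illustration only, the simulation below shows how backoff-related parameters can induce access priority between two traffic classes; it is neither the paper's analytical discrete-time Markov chain model nor the full IEEE 802.15.7 slotted CSMA-CA procedure, and the class parameters are made-up assumptions.

```python
import random

rng = random.Random(1)

# Two hypothetical priority classes; a smaller initial backoff exponent (BE)
# and more allowed backoff attempts (NB) give the high-priority class an edge.
classes = {"high": {"min_be": 2, "max_be": 4, "max_nb": 5},
           "low":  {"min_be": 4, "max_be": 6, "max_nb": 3}}

def contend(node_classes, n_trials=20000):
    """Tiny slotted contention model: each node draws a backoff slot in
    [0, 2^BE - 1]; a unique minimum wins the trial, ties are collisions and
    the colliding nodes double BE (capped) until their NB budget is spent."""
    wins = {c: 0 for c in classes}
    for _ in range(n_trials):
        state = [{"cls": c, "be": classes[c]["min_be"], "nb": 0} for c in node_classes]
        while state:
            draws = [(rng.randrange(2 ** s["be"]), s) for s in state]
            slot = min(d for d, _ in draws)
            contenders = [s for d, s in draws if d == slot]
            if len(contenders) == 1:
                wins[contenders[0]["cls"]] += 1
                break
            for s in contenders:                          # collision: back off again
                s["be"] = min(s["be"] + 1, classes[s["cls"]]["max_be"])
                s["nb"] += 1
            state = [s for s in state if s["nb"] <= classes[s["cls"]]["max_nb"]]
    return wins

print(contend(["high", "low", "low"]))
```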

Approximating Exact Test of Mutual Independence in Multiway Contingency Tables via Stochastic Approximation Monte Carlo

  • Cheon, Soo-Young
    • The Korean Journal of Applied Statistics
    • /
    • v.25 no.5
    • /
    • pp.837-846
    • /
    • 2012
  • Monte Carlo methods have long been used for exact inference in contingency tables; however, they can suffer from poor ergodicity and difficulty in achieving a desired proportion of valid tables. In this paper, we apply the stochastic approximation Monte Carlo (SAMC; Liang et al., 2007) algorithm, an adaptive Markov chain Monte Carlo method, to the exact test of mutual independence in a multiway contingency table. The performance of SAMC is investigated on real datasets and compared with existing Markov chain Monte Carlo methods. The numerical results favor the new method in terms of the quality of the estimates.

Markov Chain Approach to Forecast in the Binomial Autoregressive Models

  • Kim, Hee-Young;Park, You-Sung
    • Communications for Statistical Applications and Methods
    • /
    • v.17 no.3
    • /
    • pp.441-450
    • /
    • 2010
  • In this paper we consider the problem of forecasting binomial time series modelled by the binomial autoregressive model. We consider the model proposed by McKenzie (1985) and extended to a higher order by Weiß (2009). Since the binomial autoregressive model is a Markov chain, we can apply the earlier work of Bu and McCabe (2008) for the integer-valued autoregressive (INAR) model to the binomial autoregressive model. We discuss how to compute the h-step-ahead forecast of the conditional probabilities of $X_{T+h}$ when T periods are used in fitting. We then obtain the maximum likelihood estimator of the binomial autoregressive model and use it to derive the maximum likelihood estimator of the h-step-ahead forecast of the conditional probabilities of $X_{T+h}$. The methodology is illustrated by applying it to a data set previously analyzed by Weiß (2009).
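
A small sketch of the Markov-chain view used here, assuming the standard form of the binomial AR(1) model $X_t = \alpha \circ X_{t-1} + \beta \circ (n - X_{t-1})$ with binomial thinning $\circ$: the one-step transition matrix is the convolution of two binomial pmfs, and the conditional distribution of $X_{T+h}$ given $X_T$ is a row of $P^h$. Parameter values below are hypothetical; in practice they would be the maximum likelihood estimates.

```python
import numpy as np
from scipy.stats import binom

def bar1_transition_matrix(n, alpha, beta):
    """Transition matrix of the binomial AR(1) model
    X_t = alpha o X_{t-1} + beta o (n - X_{t-1}), where 'o' is binomial
    thinning; entry (j, k) is P(X_t = k | X_{t-1} = j)."""
    P = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        surv = binom.pmf(np.arange(j + 1), j, alpha)          # thinned survivors
        recr = binom.pmf(np.arange(n - j + 1), n - j, beta)   # new recruits
        P[j, :] = np.convolve(surv, recr)                     # distribution of their sum
    return P

def h_step_forecast(P, x_T, h):
    """Conditional distribution of X_{T+h} given X_T = x_T: row x_T of P^h."""
    return np.linalg.matrix_power(P, h)[x_T]

# Hypothetical parameter values (in practice, maximum likelihood estimates).
n, alpha, beta = 10, 0.7, 0.2
P = bar1_transition_matrix(n, alpha, beta)
forecast = h_step_forecast(P, x_T=4, h=3)
print(forecast.round(4), forecast.sum())   # a proper probability distribution
```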