• Title/Summary/Keywords: Markov process


A Markov Chain Representation of Statistical Process Monitoring Procedure under an ARIMA(0,1,1) Model

  • 박창순
    • 응용통계연구
    • /
    • Vol. 16, No. 1
    • /
    • pp.71-85
    • /
    • 2003
  • In the economic design of process control procedures in which quality is measured at fixed time intervals, characterizing the procedure is complicated and difficult because of the discreteness of the measurement times. In this paper we develop a representation of the process monitoring procedure as a Markov chain, and use this representation to derive the properties of the monitoring procedure when the process follows an ARIMA(0,1,1) model, which can account for the noise occurring within a process cycle and the effect of assignable causes. The properties of the Markov chain depend on the transition matrix, which is determined by the monitoring procedure and the process distribution. The Markov chain representation derived here can easily be applied to many other types of monitoring procedures and process distributions once the corresponding transition matrix is obtained.
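
The transition-matrix machinery this abstract describes can be sketched in a few lines: given a substochastic matrix Q over the transient (no-signal) states of a monitoring procedure, the expected run lengths solve (I - Q) ARL = 1. The 3-state matrix below is hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical 3-state illustration: once the transition matrix Q over the
# transient (no-signal) states of a monitoring procedure is known, standard
# absorbing-Markov-chain results give its run-length properties.
Q = np.array([
    [0.90, 0.05, 0.02],
    [0.10, 0.80, 0.05],
    [0.05, 0.10, 0.70],
])  # rows need not sum to 1; the deficit is the per-step signal probability

I = np.eye(Q.shape[0])
arl = np.linalg.solve(I - Q, np.ones(Q.shape[0]))  # expected steps to signal
print(arl)  # average run length from each starting state
```

Swapping in the transition matrix of a different control rule or process distribution changes nothing else in the computation, which is the portability the abstract emphasizes.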

Average run length calculation of the EWMA control chart using the first passage time of the Markov process

  • 박창순
    • 응용통계연구
    • /
    • Vol. 30, No. 1
    • /
    • pp.1-12
    • /
    • 2017
  • Many stochastic processes satisfy, or are assumed to approximately satisfy, the Markov property. Of particular interest in a Markov process is the first passage time. Research on first passage times began with Wald's sequential analysis, and many studies of their approximate properties have followed; with the development of computers and the use of statistical computation, approximate results close to the true values can now be calculated. As an example of a Markov process, this paper studies the procedure for calculating the average run length when an exponentially weighted moving average (EWMA) control chart is used, along with computational caveats and problems. The results can be applied to any other Markov process; in particular, approximation by a Markov chain is useful for studying the properties of stochastic processes and facilitates the computational approach.
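
A minimal sketch of the Markov chain approximation the abstract refers to, in the spirit of the classical Brook-Evans construction, assuming standard-normal observations; the smoothing constant, limits, and grid size below are illustrative, not the paper's settings.

```python
import numpy as np
from math import erf, sqrt

def phi_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ewma_arl(lam=0.1, h=0.5, delta=0.0, m=51):
    """Markov chain approximation of the EWMA average run length.

    Illustrative parameters: smoothing constant lam, control limits +-h,
    mean shift delta (in sigma units). The interval (-h, h) is cut into m
    transient states; a signal (|z| > h) is the absorbing state.
    """
    w = 2.0 * h / m
    mid = [-h + w * (i + 0.5) for i in range(m)]
    Q = np.empty((m, m))
    for i in range(m):
        drift = (1.0 - lam) * mid[i]          # carried-over part of the EWMA
        for j in range(m):
            # next EWMA is drift + lam * X with X ~ N(delta, 1)
            up = (mid[j] + w / 2 - drift) / lam
            lo = (mid[j] - w / 2 - drift) / lam
            Q[i, j] = phi_cdf(up - delta) - phi_cdf(lo - delta)
    arl = np.linalg.solve(np.eye(m) - Q, np.ones(m))
    return arl[m // 2]                        # chart started at z = 0
```

The grid size m is one of the computational caveats the paper discusses: the approximation sharpens as m grows, at the cost of solving a larger linear system.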

Parametric Sensitivity Analysis of Markov Process Based RAM Model

  • 김영석;허장욱
    • 시스템엔지니어링학술지
    • /
    • Vol. 14, No. 1
    • /
    • pp.44-51
    • /
    • 2018
  • The purpose of RAM analysis in weapon systems is to reduce life cycle costs and to improve combat readiness by meeting the RAM target values. Using a Markov-process-based model (MPS, Markov Process Simulation) developed for RAM analysis, we analyzed the sensitivity of the RAM analysis parameters (MTBF, MTTR, and ALDT) with respect to the operational availability of the 81 mm mortar. The time required to reach the steady state is about 15,000 h, roughly two years, and availability is most sensitive to ALDT. To improve combat readiness, continuous improvement in ALDT is therefore needed.
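
The authors' MPS tool is not reproduced here; as a generic sketch of the idea, a three-state continuous-time Markov chain (operating, logistics delay, repair) yields the long-run availability, and one-at-a-time perturbation of MTBF, MTTR, and ALDT gives a crude parameter sensitivity. All numbers below are hypothetical.

```python
import numpy as np

def availability(mtbf, mttr, aldt):
    """Steady-state availability of a toy operate -> delay -> repair cycle."""
    Q = np.array([
        [-1/mtbf, 1/mtbf,  0.0   ],   # failure -> enter logistics delay
        [ 0.0,   -1/aldt,  1/aldt],   # delay over -> enter repair
        [ 1/mttr, 0.0,    -1/mttr],   # repair done -> back to operating
    ])
    # stationary distribution: pi Q = 0 with sum(pi) = 1
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi[0]                      # long-run fraction of time operating

base = availability(mtbf=500.0, mttr=5.0, aldt=50.0)
# one-at-a-time sensitivity: effect of a 10% improvement in each parameter
print(availability(550.0, 5.0, 50.0) - base)   # MTBF up 10%
print(availability(500.0, 4.5, 50.0) - base)   # MTTR down 10%
print(availability(500.0, 5.0, 45.0) - base)   # ALDT down 10%
```

In this cyclic sketch the stationary availability reduces to MTBF / (MTBF + MTTR + ALDT), which makes the one-at-a-time deltas easy to check by hand.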

SOME LIMIT THEOREMS FOR POSITIVE RECURRENT AGE-DEPENDENT BRANCHING PROCESSES

  • Kang, Hye-Jeong
    • 대한수학회지
    • /
    • Vol. 38, No. 1
    • /
    • pp.25-35
    • /
    • 2001
  • In this paper we consider an age-dependent branching process whose particles move according to a Markov process with continuous state space. The Markov process is assumed to be stationary with independent increments and positive recurrent. We find some sufficient conditions on the Markov motion process under which the empirical distribution of the positions converges to the limiting distribution of the motion process.

Waiting Times in Polling Systems with Markov-Modulated Poisson Process Arrival

  • Kim, D. W.;Ryu, W.;Jun, K. P.;Park, B. U.;Bae, H. D.
    • Journal of the Korean Statistical Society
    • /
    • Vol. 26, No. 3
    • /
    • pp.355-363
    • /
    • 1997
  • In queueing theory, polling systems have been widely studied as a way of serving several stations in cyclic order. In this paper we consider the Markov-modulated Poisson process, which is useful for approximating a superposition of heterogeneous arrivals. We derive the mean waiting time of each station in a polling system where the arrival process is modeled by a Markov-modulated Poisson process.
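
An MMPP is a Poisson process whose rate is switched by an underlying continuous-time Markov chain. The sketch below generates one arrival stream under made-up rates (not the paper's parameters), which is how such heterogeneous input is typically fed into a polling-system simulation.

```python
import random

def simulate_mmpp(Qrates, lambdas, t_end, seed=0):
    """Simulate arrival times of a Markov-modulated Poisson process.

    Qrates[i][j]: transition rate of the modulating chain (i != j);
    lambdas[i]: Poisson arrival rate while the chain is in phase i.
    All numbers passed in are illustrative.
    """
    rng = random.Random(seed)
    t, phase, arrivals = 0.0, 0, []
    while t < t_end:
        out = sum(Qrates[phase][j] for j in range(len(lambdas)) if j != phase)
        total = out + lambdas[phase]
        t += rng.expovariate(total)        # time to next event of any kind
        if t >= t_end:
            break
        if rng.random() < lambdas[phase] / total:
            arrivals.append(t)             # the event was an arrival
        else:
            r = rng.random() * out         # the event was a phase change
            for j in range(len(lambdas)):
                if j == phase:
                    continue
                r -= Qrates[phase][j]
                if r <= 0:
                    phase = j
                    break
    return arrivals
```

For a two-phase example, `simulate_mmpp([[0.0, 1.0], [2.0, 0.0]], [5.0, 0.5], 100.0)` alternates between a bursty phase (rate 5) and a quiet phase (rate 0.5), mimicking a superposition of heterogeneous sources.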

Stochastic Convexity in Markov Additive Processes

  • 윤복식
    • 한국경영과학회:학술대회논문집
    • /
    • Proceedings of the 1991 Spring Joint Conference of the Korean Institute of Industrial Engineers and the Korean Operations Research and Management Science Society; Chonbuk National University, Jeonju; 26-27 Apr. 1991
    • /
    • pp.147-159
    • /
    • 1991
  • Stochastic convexity (concavity) of a stochastic process is a very useful concept for various stochastic optimization problems. In this study we first establish the stochastic convexity of a certain class of Markov additive processes through a probabilistic construction based on the sample path approach. A Markov additive process is obtained by integrating a functional of the underlying Markov process with respect to time, and its stochastic convexity can be utilized to provide efficient methods for the optimal design or optimal operation schedule of a wide range of stochastic systems. We also clarify the conditions for stochastic monotonicity of the Markov process, which is required for stochastic convexity of the Markov additive process. This result shows that stochastic convexity can be used for the analysis of probabilistic models based on birth-and-death processes, which have a very wide application area. Finally we demonstrate the validity and usefulness of the theoretical results by developing efficient methods for optimal replacement scheduling based on the stochastic convexity property.

On The Mathematical Structure of Markov Process and Markovian Sequential Decision Process

  • 김유송
    • 품질경영학회지
    • /
    • Vol. 11, No. 2
    • /
    • pp.2-9
    • /
    • 1983
  • This paper studies the mathematical structure of the Markov process and of the Markovian sequential decision process (the policy improvement iteration method), and analyzes the logic and behavioral characteristics of the mathematical model of the Markov process. First, in studying the mathematical structure of the Markov process, it distinguishes the forward and backward forms of the Chapman-Kolmogorov equation and of the Kolmogorov differential equation, and then surveys the logic of these equation systems and the question of the existence and uniqueness of their solutions. Second, for the Markovian sequential decision process, it treats the discrete-time and continuous-time parameter cases separately, and then explores the logical system of the behavioral characteristics, the value determination operation, and the policy improvement routine.
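
The value determination operation and policy improvement routine named in the abstract alternate as follows in Howard's policy iteration; the 2-state, 2-action discounted MDP below is made up purely for illustration.

```python
import numpy as np

# Howard-style policy improvement iteration on a toy discounted MDP.
P = {0: [[0.8, 0.2], [0.3, 0.7]],    # P[a][s]: next-state distribution
     1: [[0.9, 0.1], [0.4, 0.6]]}
R = {0: [1.0, 0.0], 1: [2.0, -1.0]}  # R[a][s]: expected one-step reward
gamma, n = 0.9, 2
policy = [0, 0]

while True:
    # value determination operation: solve (I - gamma * P_pi) v = r_pi
    P_pi = np.array([P[policy[s]][s] for s in range(n)])
    r_pi = np.array([R[policy[s]][s] for s in range(n)])
    v = np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)
    # policy improvement routine: greedy one-step lookahead on v
    new_policy = [max(P, key=lambda a: R[a][s] + gamma * np.dot(P[a][s], v))
                  for s in range(n)]
    if new_policy == policy:         # no improvement possible: optimal
        break
    policy = new_policy

print(policy, v)
```

Each pass solves one linear system exactly rather than iterating values to convergence, which is the defining trait of the policy iteration method over value iteration.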

An Improved Reinforcement Learning Technique for Mission Completion

  • 권우영;이상훈;서일홍
    • 대한전기학회논문지:시스템및제어부문D
    • /
    • Vol. 52, No. 9
    • /
    • pp.533-539
    • /
    • 2003
  • Reinforcement learning (RL) has been widely used as a learning mechanism for artificial life systems. However, RL usually suffers from slow convergence to the optimum state-action sequence or to a sequence of stimulus-response (SR) behaviors, and may not work correctly in non-Markov processes. In this paper, first, to cope with the slow-convergence problem, state-action pairs that are considered disturbances to the optimum sequence are eliminated from long-term memory (LTM), where such disturbances are found by a shortest-path-finding algorithm. This process is shown to give the system an enhanced learning speed. Second, to partly solve the non-Markov problem, if a stimulus is frequently met in the searching process, the stimulus is classified as a sequential percept for a non-Markov hidden state, and thus a correct behavior for a non-Markov hidden state can be learned as in a Markov environment. To show the validity of the proposed learning techniques, several simulation results are illustrated.
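
For context, the baseline the paper improves on is plain tabular RL; the sketch below is ordinary Q-learning on a toy 5-state corridor (reward at the right end), not the authors' LTM/shortest-path method, and its slow early episodes illustrate the convergence problem they address.

```python
import random

# Plain tabular Q-learning on a 5-state corridor with reward at state 4.
N, GOAL = 5, 4
ACTIONS = (-1, +1)                     # step left / step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
rng = random.Random(0)
alpha, gamma, eps = 0.5, 0.9, 0.3

for _ in range(500):                   # episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = (rng.choice(ACTIONS) if rng.random() < eps
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s2 = min(max(s + a, 0), N - 1) # walls clamp the position
        r = 1.0 if s2 == GOAL else 0.0
        # one-step temporal-difference backup
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(greedy)  # the learned policy should prefer moving right
```

Because reward information propagates backward only one transition per visit, early episodes wander for a long time, which is exactly the kind of waste that pruning disturbance state-action pairs from LTM is meant to cut.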

Performance Evaluation of the WiMAX Network Based on Combining the 2D Markov Chain and MMPP Traffic Model

  • Saha, Tonmoy;Shufean, Md. Abu;Alam, Mahbubul;Islam, Md. Imdadul
    • Journal of Information Processing Systems
    • /
    • Vol. 7, No. 4
    • /
    • pp.653-678
    • /
    • 2011
  • WiMAX is intended for fourth-generation wireless mobile communications, where a group of users is provided with a connection and a fixed-length queue. In the existing literature, traffic in such a network is analyzed based on the generator matrix of the Markov Arrival Process (MAP). In this paper, a simple analytical technique based on a two-dimensional Markov chain is used to obtain the trajectory of the congestion of the network as a function of a traffic parameter. Finally, a two-state phase-dependent arrival process is considered to evaluate the probability states. The entire analysis is kept independent of modulation and coding schemes.
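
A minimal version of such a two-dimensional chain indexes the state by (arrival phase, queue occupancy): a two-phase process modulates the arrival rate into a finite queue, and the stationary distribution yields a congestion measure such as the blocking probability. The rates and capacity below are illustrative, not WiMAX parameters.

```python
import numpy as np

# 2-D Markov chain over states (phase p, occupancy n), queue capacity K.
K = 10
lam = [4.0, 1.0]        # arrival rate in phase 0 / phase 1 (illustrative)
sigma = [0.5, 0.8]      # phase switching rates 0->1 and 1->0
mu = 3.0                # service rate

states = [(p, n) for p in (0, 1) for n in range(K + 1)]
idx = {s: i for i, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))
for (p, n), i in idx.items():
    if n < K:
        Q[i, idx[(p, n + 1)]] += lam[p]      # arrival joins the queue
    if n > 0:
        Q[i, idx[(p, n - 1)]] += mu          # service completion
    Q[i, idx[(1 - p, n)]] += sigma[p]        # arrival-phase change
    Q[i, i] = -Q[i].sum()                    # diagonal balances the row

# stationary distribution: pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(len(states))])
b = np.zeros(len(states) + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# congestion measure: probability the fixed-length queue is full
p_block = pi[idx[(0, K)]] + pi[idx[(1, K)]]
print(p_block)
```

Sweeping a traffic parameter (e.g. scaling `lam`) and re-solving traces out the congestion trajectory the abstract mentions, with no reference to modulation or coding.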