• Title/Summary/Keyword: Markov process

System Replacement Policy for A Partially Observable Markov Decision Process Model

  • Kim, Chang-Eun
    • Journal of Korean Institute of Industrial Engineers / v.16 no.2 / pp.1-9 / 1990
  • The control of deterioration processes for which only incomplete state information is available is examined in this study. When the deterioration is governed by a Markov process, such processes are known as Partially Observable Markov Decision Processes (POMDPs), which eliminate the assumption that the state or level of deterioration of the system is known exactly. This research investigates a two-state partially observable Markov chain in which only deterioration can occur and for which the only possible actions are to replace the system or to leave it alone. The goal of this research is to develop a new jump algorithm with the potential for solving system problems involving continuous-state-space Markov chains.
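
A minimal sketch of the belief update underlying such a two-state replace/leave-alone model, for orientation only: all probabilities and the 0.5 replacement threshold below are invented, and the paper's jump algorithm is not reproduced here.

```python
import numpy as np

# Two-state deteriorating system: state 0 = good, state 1 = deteriorated.
# Only deterioration can occur, so state 1 persists until replacement.
# All numbers are illustrative, not taken from the paper.
P = np.array([[0.9, 0.1],    # transition matrix: good may deteriorate
              [0.0, 1.0]])   # deteriorated stays deteriorated
Q = np.array([[0.8, 0.2],    # observation model: Q[s, o] = P(observe o | state s)
              [0.3, 0.7]])

def belief_update(b, obs):
    """Bayes update of the belief vector after one transition and one observation."""
    predicted = b @ P                  # push the belief through the dynamics
    posterior = predicted * Q[:, obs]  # weight by the observation likelihood
    return posterior / posterior.sum()

b = np.array([1.0, 0.0])               # known good after a replacement
for obs in [0, 0, 1, 1]:               # a hypothetical observation sequence
    b = belief_update(b, obs)
    action = "replace" if b[1] > 0.5 else "leave alone"  # threshold policy
    print(f"P(deteriorated) = {b[1]:.3f} -> {action}")
```

For two states the belief reduces to the single number P(deteriorated), which is why replace/leave-alone policies of this kind are typically threshold rules on that probability.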

Reliability Analysis of Multi-Component System Considering Preventive Maintenance: Application of Markov Chain Model (예방정비를 고려한 복수 부품 시스템의 신뢰성 분석: 마코프 체인 모형의 응용)

  • Kim, Hun Gil;Kim, Woo-Sung
    • Journal of Applied Reliability / v.16 no.4 / pp.313-322 / 2016
  • Purpose: We introduce ways to employ a Markov chain model to evaluate the effect of a preventive maintenance process. While preventive maintenance decreases the failure rate of each subsystem, it increases the downtime of the system, because the system cannot work during maintenance. The goal of this paper is to introduce ways to analyze this trade-off. Methods: Markov chain models are employed. We use them to derive the availability of a system consisting of N repairable subsystems under various maintenance policies. Results: To validate our methods, we apply our models to real maintenance data reports for military trucks. The error between the model and the data was about 1%. Conclusion: The models developed in this paper fit real data well. These techniques can be applied to calculate availability under various preventive maintenance policies.
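
A minimal sketch of the trade-off being analyzed, assuming a series system of independent two-state (up/down) subsystems; the rates, the PM effect, and the downtime fraction are hypothetical, not the paper's military-truck data.

```python
import numpy as np

# Steady-state availability of one repairable subsystem modeled as an
# up/down Markov chain with failure rate lam and repair rate mu.
def subsystem_availability(lam, mu):
    return mu / (lam + mu)

# Series system of N subsystems: all must be up. Preventive maintenance (PM)
# lowers failure rates but adds planned downtime, which is the trade-off.
def system_availability(lams, mus, pm_downtime_fraction=0.0):
    a = np.prod([subsystem_availability(l, m) for l, m in zip(lams, mus)])
    return a * (1.0 - pm_downtime_fraction)

lams = [0.010, 0.020, 0.005]   # failure rates per hour (hypothetical)
mus  = [0.50, 0.50, 0.25]      # repair rates per hour (hypothetical)
print(system_availability(lams, mus))                    # no PM
print(system_availability([l / 2 for l in lams], mus,    # PM halves failure rates
                          pm_downtime_fraction=0.02))    # but costs 2% planned downtime
```

Whether PM pays off then reduces to comparing the two availabilities, which is exactly the kind of comparison the paper carries out against field data.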

Network Security Situation Assessment Method Based on Markov Game Model

  • Li, Xi;Lu, Yu;Liu, Sen;Nie, Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.5 / pp.2414-2428 / 2018
  • In order to solve the problem that current network security situation assessment methods focus only on attack behaviors, this paper proposes a network security situation assessment method based on the Markov decision process and game theory. The method takes the Markov game model as its core and uses four-level data fusion to evaluate the network security situation. In this process, the Nash equilibrium of the game is used to determine the impact on network security. Experiments show that the results of this method are basically consistent with expert evaluation data. Because the method takes full account of the interaction between attackers and defenders, it is closer to reality and can accurately assess the network security situation.
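
The Nash-equilibrium step can be sketched as solving a zero-sum matrix game by linear programming. The attacker/defender payoff matrix below is invented, and this shows only the equilibrium computation, not the paper's four-level data fusion pipeline.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical zero-sum game: entry A[i, j] is the defender's expected loss
# when the defender plays strategy i and the attacker plays strategy j.
A = np.array([[3.0, 1.0, 2.0],
              [0.5, 2.5, 1.0]])
m, n = A.shape

# Defender's minimax mixed strategy x: minimize v subject to
# (A^T x)_j <= v for every attacker column j, sum(x) = 1, x >= 0.
c = np.zeros(m + 1)
c[-1] = 1.0                                   # objective: minimize v
A_ub = np.hstack([A.T, -np.ones((n, 1))])     # A^T x - v <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])                        # probabilities sum to one
bounds = [(0, None)] * m + [(None, None)]     # x >= 0, v free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
print("defender mixed strategy:", x.round(3), "game value:", round(v, 3))
```

The game value v is the defender's guaranteed expected loss under equilibrium play, which in an assessment setting can be read as an impact score for the current situation.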

LIMIT THEOREMS FOR MARKOV PROCESSES GENERATED BY ITERATIONS OF RANDOM MAPS

  • Lee, Oe-Sook
    • Journal of the Korean Mathematical Society / v.33 no.4 / pp.983-992 / 1996
  • Let p(x, dy) be a transition probability function on $(S, \rho)$, where $S$ is a complete separable metric space. Then a Markov process $X_n$ which has p(x, dy) as its transition probability may be generated by random iterations of the form $X_{n+1} = f(X_n, \varepsilon_{n+1})$, where $\{\varepsilon_n\}$ is a sequence of independent and identically distributed random variables (see, e.g., Kifer (1986), Bhattacharya and Waymire (1990)).
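
For a concrete illustration of the construction, take the contractive affine map f(x, e) = a*x + e with |a| < 1 (our choice, not the paper's): chains started at different points share the same long-run behavior, which is the flavor of the limit theorems being proved.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.5                                # contraction factor, |a| < 1

def f(x, eps):
    # Random map applied at each step: X_{n+1} = f(X_n, eps_{n+1}).
    return a * x + eps

def run_chain(x0, n):
    x = x0
    for _ in range(n):
        x = f(x, rng.normal())         # i.i.d. innovations
    return x

# Two ensembles started far apart converge to the same stationary law,
# here N(0, 1/(1 - a^2)), whose standard deviation is about 1.155.
near = np.array([run_chain(0.0, 200) for _ in range(2000)])
far = np.array([run_chain(50.0, 200) for _ in range(2000)])
print(near.mean(), near.std())
print(far.mean(), far.std())
```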

Optimal control of stochastic continuous discrete systems applied to FMS

  • Boukas, E.K.
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1989.10a / pp.733-743 / 1989
  • This paper deals with the control of systems with controlled jump Markov disturbances. Such a formulation was used by Boukas to model the production and maintenance planning of a flexible manufacturing system (FMS) with failure-prone machines. The optimal control problem for systems with a controlled jump Markov process is addressed. This problem describes the production planning and preventive maintenance of production systems. The optimality conditions for both the finite- and infinite-horizon cases are derived. A numerical example is presented to validate the proposed results.
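
A toy simulation of the kind of model treated, assuming a machine that jumps between up and down states with exponential sojourn times and an invented hedging-point production rule; Boukas's optimality conditions are not derived here.

```python
import numpy as np

rng = np.random.default_rng(1)
lam_fail, lam_repair = 0.1, 0.5           # jump rates of the Markov disturbance
demand, max_rate, hedge = 1.0, 2.0, 5.0   # hypothetical planning parameters

t, T = 0.0, 200.0
state, inventory = 1, 0.0                 # state 1 = machine up, 0 = down
while t < T:
    rate = lam_fail if state == 1 else lam_repair
    dt = min(rng.exponential(1.0 / rate), T - t)  # time to the next jump
    # Hedging-point control: produce at full rate while below the hedge level.
    u = (max_rate if inventory < hedge else demand) if state == 1 else 0.0
    inventory += (u - demand) * dt        # net production over the sojourn
    t += dt
    state = 1 - state                     # the jump of the Markov process
print("final inventory:", round(inventory, 2))
```

The machine state plays the role of the jump Markov disturbance; the control problem is to choose the production rate u as a function of inventory and machine state.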

Development of Stochastic Markov Process Model for Maintenance of Armor Units of Rubble-Mound Breakwaters (경사제 피복재의 유지관리를 위한 추계학적 Markov 확률모형의 개발)

  • Lee, Cheol-Eung
    • Journal of Korean Society of Coastal and Ocean Engineers / v.25 no.2 / pp.52-62 / 2013
  • A stochastic Markov process (MP) model has been developed for evaluating the probability of failure of the armor units of rubble-mound breakwaters as a function of time. The mathematical MP model was formulated by combining a counting process or renewal process (CP/RP) for the load occurrences with a damage process (DP) for the cumulative damage events, and was applied to the armor units of rubble-mound breakwaters. Transition probabilities have been estimated by the Monte Carlo simulation (MCS) technique using the definition of the damage level of armor units, and they satisfy the relevant probabilistic and physical constraints well. The probabilities of failure over time have also been compared and investigated as they vary with the return period and the safety factor, the key variables in the design of armor units of rubble-mound breakwaters. In particular, it can be quantitatively shown how prior damage levels affect the subsequent probabilities of failure. Finally, this study proposes two methods for straightforwardly evaluating the repair times that are indispensable to the maintenance of the armor units of rubble-mound breakwaters, and presents several simulation results, including cost analyses.
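
A compact sketch of the two steps named in the abstract, with an invented damage model: estimate the damage-level transition matrix by Monte Carlo simulation, then propagate it to failure probabilities for different prior damage levels.

```python
import numpy as np

rng = np.random.default_rng(2)
levels = 4                          # damage levels 0..3; level 3 = failure

def one_loading_event(level):
    # Hypothetical cumulative damage per load occurrence; failure is absorbing.
    if level == levels - 1:
        return level
    return min(level + rng.poisson(0.3), levels - 1)

# Step 1: Monte Carlo estimate of the transition probability matrix.
P = np.zeros((levels, levels))
for i in range(levels):
    for _ in range(20_000):
        P[i, one_loading_event(i)] += 1
P /= P.sum(axis=1, keepdims=True)

# Step 2: probability of failure after n load occurrences vs. prior damage.
for prior in range(levels - 1):
    b = np.zeros(levels)
    b[prior] = 1.0
    for _ in range(20):             # twenty load occurrences
        b = b @ P
    print(f"prior damage level {prior}: P(failure) = {b[-1]:.3f}")
```

One natural way to read off a repair time (not necessarily the paper's) is the first number of load occurrences at which the failure probability crosses an allowable threshold.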

CHAIN DEPENDENCE AND STATIONARITY TEST FOR TRANSITION PROBABILITIES OF MARKOV CHAIN UNDER LOGISTIC REGRESSION MODEL

  • Sinha, Narayan Chandra;Islam, M. Ataharul;Ahmed, Kazi Saleh
    • Journal of the Korean Statistical Society / v.35 no.4 / pp.355-376 / 2006
  • To identify whether a sequence of observations follows a chain-dependent process, and whether the chain-dependent or repeated observations follow a stationary process, alternative test procedures are suggested in this paper. These procedures are formulated on the basis of a logistic regression model under the likelihood ratio test criterion and applied to daily rainfall occurrence data for selected stations in Bangladesh. The tests indicate that the daily rainfall occurrences follow a chain-dependent process, and that the different types of transition probabilities and the overall transition probabilities of the Markov chain for the occurrence of rainfall follow a stationary process in the Mymensingh and Rajshahi areas and a non-stationary process in the Chittagong, Faridpur and Satkhira areas.
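
The chain-dependence test can be sketched with a counting-based likelihood ratio, which in this binary case matches a logistic regression of today's occurrence on yesterday's; the rainfall sequence below is simulated, not the Bangladesh station data.

```python
import numpy as np
from scipy.stats import chi2

# Simulate a binary rain/no-rain sequence with some persistence.
rng = np.random.default_rng(3)
p_stay = {0: 0.8, 1: 0.6}           # hypothetical persistence probabilities
x = [0]
for _ in range(2000):
    x.append(x[-1] if rng.random() < p_stay[x[-1]] else 1 - x[-1])
x = np.array(x)

# Transition counts n[i, j] = #(X_t = i, X_{t+1} = j).
n = np.zeros((2, 2))
for a, b in zip(x[:-1], x[1:]):
    n[a, b] += 1

# H1: first-order Markov chain (row-wise transition probabilities).
p1 = n / n.sum(axis=1, keepdims=True)
ll1 = (n * np.log(p1)).sum()
# H0: independent observations (a single marginal distribution).
p0 = n.sum(axis=0) / n.sum()
ll0 = (n * np.log(p0)).sum()

lr = 2.0 * (ll1 - ll0)              # ~ chi-square with 1 df under H0
print(f"LR statistic = {lr:.1f}, p-value = {chi2.sf(lr, df=1):.3g}")
```

A large LR statistic rejects independence in favor of chain dependence; the paper's stationarity tests compare such transition probabilities across time periods in the same likelihood-ratio style.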

Partially Observable Markov Decision Processes (POMDPs) and Wireless Body Area Networks (WBAN): A Survey

  • Mohammed, Yahaya Onimisi;Baroudi, Uthman A.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.5 / pp.1036-1057 / 2013
  • Wireless body area networks (WBANs) are a promising candidate for future health monitoring systems. Nevertheless, the path to mature solutions still faces many challenges that need to be overcome. Energy-efficient scheduling is one of these challenges, given the scarcity of the energy available to biosensors and the lack of portability. Therefore, researchers from academia, industry, and the health sector are working together to realize practical solutions to these challenges. The main difficulty in WBAN is the uncertainty in the state of the monitored system. Intelligent learning approaches such as the Markov Decision Process (MDP) have been proposed to tackle this issue. An MDP is a form of Markov chain in which the transition matrix depends on the action taken by the decision maker (agent) at each time step. The agent receives a reward, which depends on the action and the state. The goal is to find a function, called a policy, which specifies which action to take in each state so as to maximize some utility function (e.g., the mean or expected discounted sum) of the sequence of rewards. A Partially Observable Markov Decision Process (POMDP) is a generalization of the MDP that allows for incomplete information regarding the state of the system; in this case, the state is not visible to the agent. This has many applications in operations research and artificial intelligence. This uncertainty, arising from incomplete knowledge of the system, makes formulating and solving POMDP models mathematically complex and computationally expensive, and limited progress has been made in applying POMDPs to real applications. In this paper, we survey the existing methods and algorithms for solving POMDPs in the general domain and, in particular, in WBANs. In addition, the paper discusses recent real implementations of POMDPs on practical WBAN problems. We believe that this work will provide valuable insights for newcomers who would like to pursue related research in the domain of WBAN.
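
As a reading aid for the MDP definition quoted above, here is a minimal value-iteration sketch; the two-state transition tensor and rewards are invented (loosely, a sleep/transmit energy trade-off) and are not a WBAN model from the survey.

```python
import numpy as np

# P[a, s, s'] = transition probability under action a; R[a, s] = reward.
# Two states (e.g. low/high channel quality), two actions (sleep/transmit).
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.7, 0.3],
               [0.6, 0.4]]])
R = np.array([[0.0, -1.0],
              [2.0,  1.0]])
gamma = 0.95                        # discount factor

V = np.zeros(2)
for _ in range(500):                # iterate the Bellman optimality operator
    Q = R + gamma * (P @ V)         # Q[a, s] = R[a, s] + gamma * E[V(next)]
    V = Q.max(axis=0)               # best action in each state
policy = Q.argmax(axis=0)
print("value function:", V.round(2), "policy (action per state):", policy)
```

The POMDP case replaces the state s with a belief distribution over states, so the effective state space becomes continuous, which is what makes the solution methods the survey covers so expensive.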

A Study on the Parameter Estimation for the Bit Synchronization Using the Gauss-Markov Estimator (Gauss-Markov 추정기를 이용한 비트 동기화를 위한 파라미터 추정에 관한 연구)

  • Ryu, Heung-Gyoon;Ann, Sou-Guil
    • Journal of the Korean Institute of Telematics and Electronics / v.26 no.3 / pp.8-13 / 1989
  • The parameters of a bipolar random square-wave signal process, amplitude and phase, which have unknown probability distributions, are shown to be estimated simultaneously using the Gauss-Markov estimator, so that the transmitted digital data can be recovered in an additive Gaussian noise environment. However, a preprocessing stage using a correlator composed of a multiplier and a running integrator is needed to convert the received process into sampled sequences and to obtain the observed data vectors used for Gauss-Markov estimation.
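
A sketch of the Gauss-Markov (best linear unbiased) estimate on a generic linear observation model standing in for the correlator outputs; the observation matrix, noise covariance, and parameter values are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Observed data vector y = H @ theta + w, with w zero-mean Gaussian noise of
# covariance C; theta stands in for the amplitude/phase-related parameters.
theta_true = np.array([1.5, 0.4])
H = np.column_stack([np.ones(50), np.linspace(0.0, 1.0, 50)])
C = 0.1 * np.eye(50)                # white noise here; any SPD covariance works

y = H @ theta_true + rng.multivariate_normal(np.zeros(50), C)

# Gauss-Markov estimator: theta_hat = (H^T C^-1 H)^-1 H^T C^-1 y.
Ci = np.linalg.inv(C)
theta_hat = np.linalg.solve(H.T @ Ci @ H, H.T @ Ci @ y)
print("estimate:", theta_hat.round(3), "true:", theta_true)
```

The estimator needs only the observed data vectors and the noise covariance, not the parameters' distributions, which is why it suits the unknown-distribution setting the abstract describes.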

Comparison of Perturbation Analysis Estimate and Forward Difference Estimate in a Markov Renewal Process

  • Park, Heung-sik
    • Communications for Statistical Applications and Methods / v.7 no.3 / pp.871-884 / 2000
  • Using simulation, we compare the perturbation analysis estimate and the forward difference estimate for the first and second derivatives of performance measures in a Markov renewal process. We find that the perturbation analysis estimate has much less mean squared error than the traditional forward difference estimate.
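
The mean-squared-error gap can be reproduced on a toy performance measure whose derivative is known exactly; the exponential model below is ours, not the paper's Markov renewal process.

```python
import numpy as np

# Performance measure f(theta) = E[theta * E] with E ~ Exp(1), so the exact
# derivative is df/dtheta = E[E] = 1.
rng = np.random.default_rng(5)
theta, h, n = 2.0, 0.01, 10_000

E1 = rng.exponential(1.0, n)
E2 = rng.exponential(1.0, n)        # independent run for the FD estimate

ipa = E1                            # IPA: d(theta * E)/dtheta = E, per sample path
fd = ((theta + h) * E2 - theta * E1) / h   # forward difference of two runs

print(f"IPA estimate: mean {ipa.mean():.3f}, variance {ipa.var():.3f}")
print(f"FD  estimate: mean {fd.mean():.3f}, variance {fd.var():.1f}")
```

With independent runs the forward-difference variance grows like 1/h^2 as the step h shrinks, while the perturbation analysis estimate differentiates each sample path directly, which is the effect behind the paper's comparison.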
