• Title/Summary/Keyword: 포아송모형 (Poisson model)


Analysis of an M/M/1 Queue with an Attached Continuous-type (s,S)-inventory ((s,S)-정책하의 연속형 내부재고를 갖는 M/M/1 대기행렬모형 분석)

  • Park, Jinsoo;Lee, Hyeon Geun;Kim, Jong Hyeon;Yun, Eun Hyeuk;Baek, Jung Woo
    • Journal of Korea Society of Industrial Information Systems / v.23 no.5 / pp.19-32 / 2018
  • This study focuses on an M/M/1 queue with an attached continuous-type inventory. Customers arrive at the system according to a Poisson process and are served in their arrival order, i.e., first-come-first-served. The service times are assumed to be independent and identically distributed exponential random variables. At a service completion epoch, the customer consumes a random amount of inventory. The inventory is controlled by the traditional (s, S)-inventory policy with a generally distributed lead time. A customer that arrives during a stock-out period is assumed to be lost. For the number of customers and the inventory size, we derive a product-form stationary joint probability distribution and provide some numerical examples. In addition, an operational strategy for the inventory that minimizes the long-run cost is discussed.
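The product-form analysis in the abstract is beyond a short snippet, but the queueing side of the model can be illustrated with a simulation. A minimal sketch, assuming only the M/M/1 FCFS dynamics described above (the inventory component is omitted); the rates and customer count are illustrative, not from the paper:

```python
import random

def simulate_mm1(lam, mu, n_customers, seed=42):
    """Simulate an M/M/1 FCFS queue; return the mean waiting time in queue.

    lam: Poisson arrival rate, mu: exponential service rate (requires lam < mu).
    """
    rng = random.Random(seed)
    t_arrival = 0.0
    t_free = 0.0            # time at which the server next becomes free
    total_wait = 0.0
    for _ in range(n_customers):
        t_arrival += rng.expovariate(lam)      # Poisson arrival stream
        start = max(t_arrival, t_free)         # FCFS: wait if server is busy
        total_wait += start - t_arrival
        t_free = start + rng.expovariate(mu)   # i.i.d. exponential service
    return total_wait / n_customers

mean_wq = simulate_mm1(0.5, 1.0, 200_000)
```

For λ = 0.5 and μ = 1.0, queueing theory gives the mean wait Wq = ρ/(μ − λ) = 1.0, which the simulated average should approach.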

Comparative Analysis on the Performance of NHPP Software Reliability Model with Exponential Distribution Characteristics (지수분포 특성을 갖는 NHPP 소프트웨어 신뢰성 모형의 성능 비교 분석)

  • Park, Seung-Kyu
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.4 / pp.641-648 / 2022
  • In this study, the performance of NHPP software reliability models with exponential-type distribution characteristics (Exponential Basic, Inverse Exponential, Lindley, Rayleigh) was comparatively analyzed, and based on this, the optimal reliability model was presented. To analyze the software failure phenomenon, failure time data collected during system operation were used, and parameter estimation was carried out using the maximum likelihood estimation (MLE) method. Through various comparative analyses (mean square error analysis, predictive power for the true values of the mean value function, intensity function evaluation, and reliability evaluation with mission time applied), the Lindley model was found to be the most efficient, best-performing model. Through this study, the reliability performance of distributions with exponential-type characteristics, for which there were no existing research cases, was newly identified, providing basic design data that software developers can use in the initial stage of development.

Developing the Traffic Accident Prediction Model using Classification And Regression Tree Analysis (CART분석을 이용한 교통사고예측모형의 개발)

  • Lee, Jae-Myung;Kim, Tae-Ho;Lee, Yong-Taeck;Won, Jai-Mu
    • International Journal of Highway Engineering / v.10 no.1 / pp.31-39 / 2008
  • Preventing traffic accidents by accurately predicting them in advance can greatly improve road traffic safety. An accurate traffic accident prediction model requires not only an understanding of the factors that cause accidents but also transferability of the model. This paper therefore suggests a traffic accident prediction model using CART (Classification And Regression Tree) analysis, and the developed model is compared with existing accident prediction models to test its goodness of fit. The results of this study are summarized below. First, a traffic accident prediction model using CART analysis is developed. Second, distance (D), pedestrian shoulder (m), and traffic volume are the geometric factors most influential on traffic accidents. Third, the CART analysis model shows high predictability in a comparative analysis between models. This study suggests basic ideas for evaluating the investment priority of road design and improvement projects at traffic accident blackspots.
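The core of CART is choosing splits that minimise node impurity. A minimal sketch of one Gini-based split on a single feature, using hypothetical traffic-volume data (not the paper's):

```python
def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum of squared class shares."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(xs, ys):
    """Threshold on one feature minimising the weighted Gini impurity of the two children."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# hypothetical data: traffic volume vs. accident occurrence (0/1)
volume = [100, 200, 300, 400, 500, 600, 700, 800]
accident = [0, 0, 0, 0, 1, 1, 1, 1]
t_split, impurity = best_split(volume, accident)
```

On this toy data the split at volume ≤ 400 separates the classes perfectly, so the weighted impurity drops to zero; a full CART recurses this step and then prunes.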


Extreme Quantile Estimation of Losses in KRW/USD Exchange Rate (원/달러 환율 투자 손실률에 대한 극단분위수 추정)

  • Yun, Seok-Hoon
    • Communications for Statistical Applications and Methods / v.16 no.5 / pp.803-812 / 2009
  • The application of extreme value theory to financial data is a fairly recent innovation. The classical annual maximum method fits the generalized extreme value distribution to the annual maxima of a data series. An alternative modern method, the so-called threshold method, fits the generalized Pareto distribution to the excesses over a high threshold from the data series. A more substantial variant takes the point-process viewpoint of high-level exceedances: the exceedance times and excess values over a high threshold are viewed as a two-dimensional point process whose limiting form is a non-homogeneous Poisson process. In this paper, we apply the two-dimensional non-homogeneous Poisson process model to daily losses, i.e., daily negative log-returns, in the KRW/USD exchange rate series collected from January 4th, 1982 until December 31st, 2008. The main question is how to estimate extreme quantiles of losses such as the 10-year or 50-year return level.
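Under the threshold method mentioned above, the m-year return level has a standard closed form once GPD parameters are fitted. A minimal sketch of that formula; all parameter values below are hypothetical, not the paper's estimates:

```python
import math

def gpd_return_level(u, sigma, xi, zeta_u, n_per_year, years):
    """Return level exceeded on average once every `years` years.

    u: threshold, (sigma, xi): fitted GPD scale/shape for excesses over u,
    zeta_u: empirical probability of exceeding u, n_per_year: observations/year.
    x_m = u + (sigma/xi) * (m^xi - 1) with m = years * n_per_year * zeta_u.
    """
    m = years * n_per_year * zeta_u    # expected exceedances in `years` years
    if abs(xi) < 1e-9:                 # xi -> 0 limit (exponential tail)
        return u + sigma * math.log(m)
    return u + (sigma / xi) * (m ** xi - 1.0)

# hypothetical fit for daily losses: threshold 2%, 250 trading days/year
x10 = gpd_return_level(u=0.02, sigma=0.005, xi=0.1, zeta_u=0.05,
                       n_per_year=250, years=10)
x50 = gpd_return_level(u=0.02, sigma=0.005, xi=0.1, zeta_u=0.05,
                       n_per_year=250, years=50)
```

With a positive shape parameter the 50-year level sits strictly above the 10-year level, both above the threshold, reflecting the heavy tail.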

An optimal policy for an infinite dam with exponential inputs of water (비의 양이 지수분포를 따르는 경우 무한 댐의 최적 방출정책 연구)

  • Kim, Myung-Hwa;Baek, Jee-Seon;Choi, Seung-Kyoung;Lee, Eui-Yong
    • Journal of the Korean Data and Information Science Society / v.22 no.6 / pp.1089-1096 / 2011
  • We consider an infinite dam with inputs formed by a compound Poisson process and adopt a $P^M_{\lambda}$-policy to control the level of water, where the water is released at rate M when the level of water exceeds the threshold ${\lambda}$. We obtain interesting stationary properties of the level of water when the amount of each input independently follows an exponential distribution. After assigning several managing costs to the dam, we derive the long-run average cost per unit time and show that there exist unique values of the releasing rate M and the threshold ${\lambda}$ that minimize the long-run average cost per unit time. Numerical results are also illustrated using MATLAB.
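The dynamics described above can be checked numerically by simulation. A minimal sketch, assuming one common reading of the $P^M_{\lambda}$-policy (release at rate M from the moment the level exceeds ${\lambda}$ until the dam empties); the paper's exact variant and costs may differ, and all parameters are illustrative:

```python
import random

def simulate_dam(input_rate, mean_jump, M, lam, horizon, seed=1):
    """Time-average water level of a dam with compound-Poisson inputs.

    Inputs arrive as a Poisson process (rate input_rate) with exponential
    jump sizes (mean mean_jump).  Once the level exceeds lam, water is
    released at constant rate M until the dam is empty.
    """
    rng = random.Random(seed)
    t, level, releasing, area = 0.0, 0.0, False, 0.0
    while t < horizon:
        dt = rng.expovariate(input_rate)          # time to next input
        if releasing:
            drained = min(level / M, dt)          # may empty before next input
            area += level * drained - 0.5 * M * drained ** 2
            level -= M * drained
            if level < 1e-9:                      # empty: stop releasing
                level, releasing = 0.0, False
        else:
            area += level * dt                    # level constant between inputs
        t += dt
        level += rng.expovariate(1.0 / mean_jump) # exponential input amount
        if level > lam:
            releasing = True
    return area / t

avg_level = simulate_dam(input_rate=1.0, mean_jump=1.0, M=2.0,
                         lam=5.0, horizon=10_000.0)
```

With mean inflow rate 1 and release rate 2, the level process is stable and its time average settles near the threshold region.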

The Comparative Study of Software Optimal Release Time of Finite NHPP Model Considering Log Linear Learning Factor (로그선형 학습요인을 이용한 유한고장 NHPP모형에 근거한 소프트웨어 최적방출시기 비교 연구)

  • Cheul, Kim Hee;Cheul, Shin Hyun
    • Convergence Security Journal / v.12 no.6 / pp.3-10 / 2012
  • In this paper, we study the decision problem of determining optimal release policies for a software system that is tested in the development phase and then transferred to the user. For correcting or modifying the software, a finite-failure non-homogeneous Poisson process model considering a learning factor is presented, and release policies are proposed for the log-linear life distribution model, which is used in reliability analysis because of its various shape and scale parameters. We discuss optimal software release policies that minimize the total average software cost of development and maintenance under the constraint of satisfying a software reliability requirement. In a numerical example, the parameters are estimated from failure time data by maximum likelihood estimation, and the optimal software release time is computed.
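The cost trade-off behind an optimal release time can be sketched with a generic finite-failure NHPP. A minimal sketch, using the Goel-Okumoto mean value function as a stand-in for the paper's log-linear learning-factor model; all cost coefficients are hypothetical:

```python
import math

def mvf(t, a=100.0, b=0.05):
    """Goel-Okumoto mean value function (a stand-in for the paper's model)."""
    return a * (1.0 - math.exp(-b * t))

def total_cost(T, c_test=1.0, c_op=5.0, c_time=0.5, life=200.0):
    """Expected cost of releasing at time T: c_test per fault fixed in testing,
    c_op per fault left to surface in operation, c_time per unit testing time."""
    return c_test * mvf(T) + c_op * (mvf(life) - mvf(T)) + c_time * T

# grid search for the cost-minimising release time
grid = [i * 0.1 for i in range(1, 1500)]
T_star = min(grid, key=total_cost)
```

Setting the cost derivative to zero, (c_test − c_op)·a·b·e^{−bT} + c_time = 0, gives T* ≈ 73.8 for these numbers, which the grid search recovers.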

A Software Release Policy with Testing Time and the Number of Corrected Errors (시험시간과 오류수정개수를 고려한 소프트웨어 출시 시점결정)

  • Yoo, Young Kwan
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.7 no.4 / pp.49-54 / 2012
  • In this paper, a software release policy considering testing time and the number of errors corrected is presented. The software is tested until a specified testing time is reached or a specified number of errors is corrected, whichever comes first. The model includes the cost of error correction and software testing during the testing period, and the cost of error correction during operation. It is assumed that the length of the software life cycle is unbounded and that error correction follows a non-homogeneous Poisson process. An expression for the total cost under the policy is derived, and it is shown that the model includes previous models as special cases.
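The "time limit or error count, whichever comes first" stopping rule can be examined by Monte Carlo. A minimal sketch, using a homogeneous Poisson error process as a stand-in for the paper's NHPP; the rate and limits are illustrative:

```python
import random

def stopping_time(rate, T0, N0, rng):
    """Test until time T0 or the N0-th corrected error, whichever is first.

    Errors occur as a Poisson process with the given rate.
    """
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)   # time of the next error
        if t > T0:
            return T0                # time limit hit before the N0-th error
        n += 1
        if n >= N0:
            return t                 # error-count limit hit first

rng = random.Random(7)
mean_stop = sum(stopping_time(1.0, 10.0, 5, rng) for _ in range(20_000)) / 20_000
```

With rate 1 and N0 = 5, the 5th error arrives at a Gamma(5, 1) time with mean 5, and the time cap at 10 is rarely binding, so the expected stopping time sits just below 5.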


Capacity Analysis of Base Stations in CDMA Mobile Communications Systems in the Subway Environment (지하철 환경에서 CDMA 이동통신시스템의 기지국 용량 분석)

  • Yang, Won-Seok;Yang, Eun-Saem;Park, Hyun-Min
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.7B / pp.789-794 / 2011
  • We analyze the capacity of CDMA base stations in the subway environment. We investigate the characteristics of multipath fading, cell structures, and the propagation environment in the subway; analyze the signal-to-noise ratio, sectorization gain, path-loss exponent, and frequency reuse factor; and obtain the link capacity of a base station in the subway. We measure the peakedness factor and reveal that base stations in the subway carry peaked traffic. Accordingly, we use the Neal-Wilkinson model to obtain the Erlang capacity instead of the Erlang-B model, which is based on Poisson traffic.
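The Erlang-B model that the abstract contrasts with Neal-Wilkinson has a well-known stable recursion for the blocking probability under Poisson traffic. A minimal sketch (the channel count and load below are illustrative):

```python
def erlang_b(servers, offered_load):
    """Erlang-B blocking probability for Poisson traffic.

    Uses the stable recursion B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1)),
    where a is the offered load in Erlangs.
    """
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# e.g. 10 channels offered 5 Erlang of Poisson traffic
p_block = erlang_b(10, 5.0)
```

This gives a blocking probability of about 1.8%; for the peaked (higher-variance) subway traffic measured in the paper, Erlang-B understates blocking, which is why the Neal-Wilkinson model is used instead.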

(Continuous-Time Queuing Model and Approximation Algorithm of a Packet Switch under Heterogeneous Bursty Traffic) (이질적 버스트 입력 트래픽 환경에서 패킷 교환기의 연속 시간 큐잉 모델과 근사 계산 알고리즘)

  • Hong, Seok-Won
    • Journal of KIISE:Information Networking / v.30 no.3 / pp.416-423 / 2003
  • This paper proposes a continuous-time queuing model of a shared-buffer packet switch and an approximation algorithm. The N arrival processes have heterogeneous bursty traffic characteristics and are modeled by a Coxian distribution of order 2, which is equivalent to an Interrupted Poisson Process. The service time is modeled by an Erlang distribution with r stages. First, the approximation algorithm aggregates the N arrival processes into a single state variable. Next, the algorithm decomposes the queuing system into N subsystems represented by the aggregated state variables, and the balance equations based on these aggregated state variables are solved by an iterative method. Finally, the algorithm is validated by comparing its results with those of simulation.
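The Coxian order-2 arrival model mentioned above is easy to sample: an exponential first stage, then with some probability a second exponential stage. A minimal sketch with illustrative parameters (not the paper's):

```python
import random

def coxian2(mu1, mu2, p, rng):
    """Sample a Coxian order-2 time: an exponential stage with rate mu1,
    then with probability p an additional exponential stage with rate mu2."""
    x = rng.expovariate(mu1)
    if rng.random() < p:
        x += rng.expovariate(mu2)
    return x

rng = random.Random(3)
n = 100_000
samples = [coxian2(2.0, 0.5, 0.3, rng) for _ in range(n)]
mean = sum(samples) / n   # theory: 1/mu1 + p/mu2 = 0.5 + 0.3/0.5 = 1.1
```

Matching (mu1, mu2, p) to the on/off rates of an Interrupted Poisson Process is exactly the equivalence the paper exploits to represent bursty inputs.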

Using the corrected Akaike's information criterion for model selection (모형 선택에서의 수정된 AIC 사용에 대하여)

  • Song, Eunjung;Won, Sungho;Lee, Woojoo
    • The Korean Journal of Applied Statistics / v.30 no.1 / pp.119-133 / 2017
  • The corrected Akaike's information criterion (AICc) is known to have better finite-sample properties. However, Akaike's information criterion (AIC) is still widely used to select an optimal prediction model among several candidate models, due to a lack of research on the benefits obtained by using AICc. In this paper, we compare the performance of AIC and AICc through numerical simulations and confirm the advantage of using AICc. In addition, we also consider the performance of the quasi Akaike's information criterion (QAIC) and the corrected quasi Akaike's information criterion (QAICc) for binomial and Poisson data under the overdispersion phenomenon.
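The relationship between the two criteria compared above is a simple additive small-sample penalty. A minimal sketch (the log-likelihood values are hypothetical):

```python
def aic(log_lik, k):
    """Akaike's information criterion: -2*logL + 2k for k parameters."""
    return -2.0 * log_lik + 2.0 * k

def aicc(log_lik, k, n):
    """Corrected AIC: AIC plus the small-sample penalty 2k(k+1)/(n-k-1)."""
    return aic(log_lik, k) + 2.0 * k * (k + 1) / (n - k - 1)

# with n >> k the correction vanishes and AICc converges to AIC
delta_small = aicc(-50.0, 3, 20) - aic(-50.0, 3)     # 2*3*4/16 = 1.5
delta_large = aicc(-50.0, 3, 2000) - aic(-50.0, 3)   # near zero
```

The penalty grows as n approaches k + 1, which is precisely the small-sample regime where plain AIC tends to over-select complex models.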