• Title/Summary/Keyword: Poisson problem

Search Results: 201

The Comparative Study for Property of Learning Effect based on Truncated time and Delayed S-Shaped NHPP Software Reliability Model (절단고정시간과 지연된 S-형태 NHPP 소프트웨어 신뢰모형에 근거한 학습효과특성 비교연구)

  • Kim, Hee Cheul
    • Journal of Korea Society of Digital Industry and Information Management / v.8 no.4 / pp.25-34 / 2012
  • In the testing process that precedes the release of a software product, the testing manager should know the testing information in advance. Accordingly, this study examines the learning-effect perspective using NHPP software reliability models. Finite-failure non-homogeneous Poisson process models are presented, and the learning-effect property is applied on the basis of truncated time and the delayed S-shaped software reliability model. Although software error-detection techniques are known in advance, the influencing factors considered are the factor for errors detected autonomously and the learning factor acquired from prior experience, and the problem of the testing manager setting these factors precisely is examined comparatively. As a result, it is confirmed that models in which the learning factor is greater than the autonomous error-detection factor are generally efficient. In this paper, failure data analysis was performed using times between failures for both small and large sample sizes. Parameter estimation was carried out using the maximum likelihood method. Model selection was performed using the mean squared error and the coefficient of determination, after the efficiency of the data was checked through trend analysis.
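As a rough illustration of the kind of model this abstract refers to, the sketch below fits the delayed S-shaped NHPP mean value function m(t) = a(1 - (1 + bt)e^{-bt}) to failure-time data by maximum likelihood. The data values, starting point, and variable names are purely illustrative assumptions; the paper's truncated-time treatment and learning-effect factors are not reproduced.

```python
# Illustrative MLE fit of the delayed S-shaped NHPP model (a sketch, not the paper's code).
# Mean value function: m(t) = a*(1 - (1 + b*t)*exp(-b*t)); intensity: lam(t) = a*b^2*t*exp(-b*t)
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, failure_times, T):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    lam = a * b**2 * failure_times * np.exp(-b * failure_times)   # intensity at each failure time
    m_T = a * (1.0 - (1.0 + b * T) * np.exp(-b * T))               # expected failures by time T
    return -(np.sum(np.log(lam)) - m_T)

# hypothetical time-between-failures data (hours); cumulative failure times follow
tbf = np.array([5.2, 7.1, 3.8, 9.5, 12.0, 6.3, 15.4, 10.2])
t = np.cumsum(tbf)
T = t[-1]

res = minimize(neg_log_likelihood, x0=[2.0 * len(t), 0.05],
               args=(t, T), method="Nelder-Mead")
a_hat, b_hat = res.x
print("a_hat =", a_hat, "b_hat =", b_hat)
```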

Analysis of Determinants of Export of Korean Laver and Tuna: Using the Gravity Model (우리나라 김과 참치의 수출 결정요인 분석 : 중력모형을 이용하여)

  • Kim, Eun-Ji;Kim, Bong-Tae
    • The Journal of Fisheries Business Administration / v.51 no.4 / pp.85-96 / 2020
  • The purpose of this study is to identify the determinants of Korean fishery product exports. For the analysis, laver and tuna, which account for almost half of seafood exports, were selected, and a gravity model widely used in trade analysis was applied. GDP, the number of overseas Koreans, the exchange rate, FTA membership, and WTO membership were used as explanatory variables, and fixed-effect terms were included to account for the multilateral resistance that hinders trade. The analysis period is from 2000 to 2018, and the Poisson pseudo-maximum likelihood (PPML) method was applied to address the zero observations and heteroskedasticity inherent in trade data. As a result of the analysis, GDP was found to have a significant positive effect on both laver and tuna. The number of overseas Koreans was significant for canned tuna exports, but not for laver or the other tuna products. As the exchange rate increased, exports of laver and tuna for sashimi increased. The impact of the FTA was confirmed in the exports of dried laver and raw tuna, which supports the results of previous studies. The WTO variable was not significant for laver or tuna. Based on these results, it is necessary to find ways to make good use of FTAs to expand seafood exports.
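For readers unfamiliar with PPML, the sketch below shows a minimal Poisson pseudo-maximum likelihood gravity regression with statsmodels. The file name, column names, and regressor set are assumptions made for illustration; the authors' dataset and fixed-effect specification are not reproduced (in practice, importer and year dummies would be appended to the regressor matrix).

```python
# Sketch of a PPML gravity estimation (illustrative column names; not the authors' dataset).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("laver_exports.csv")          # hypothetical file: one row per importer-year
df["ln_gdp"] = np.log(df["gdp"])
df["ln_overseas_koreans"] = np.log1p(df["overseas_koreans"])
df["ln_exchange_rate"] = np.log(df["exchange_rate"])

X = sm.add_constant(df[["ln_gdp", "ln_overseas_koreans", "ln_exchange_rate", "fta", "wto"]])
y = df["export_value"]                          # export values in levels; zeros are allowed under PPML

# Poisson pseudo-maximum likelihood: Poisson GLM with heteroskedasticity-robust (HC1) errors
model = sm.GLM(y, X, family=sm.families.Poisson())
result = model.fit(cov_type="HC1")
print(result.summary())
```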

A study on Application of the Rate Quality Control Method of Over-dispersed Traffic Crash Data (과분산된 교통사고자료에 대한 한계사고율법의 적용에 관한 연구)

  • Sung, Nak-Moon
    • Journal of Korean Society of Transportation / v.22 no.1 s.72 / pp.63-72 / 2004
  • In conducting traffic safety programs, it is very important to identify hazardous sites in an appropriate manner. The rate quality control method is generally used to identify hazardous sites because it can interpret sites in statistical terms. The rate quality control method is based on the assumption that the occurrence of traffic crashes follows the Poisson distribution, in which the expected number of crashes equals their variance. However, there is often greater variability than statistically expected; this phenomenon is called overdispersion. This study analyzed the problems of the rate quality control method under overdispersed data and established a methodology to solve them. In a test based on field data, the new approach produced more reasonable results than the Poisson-based rate quality control method.
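The conventional Poisson-based test this paper starts from compares a site's observed crash rate with a critical rate, commonly written as R_c = R_a + k*sqrt(R_a/M) + 1/(2M), where R_a is the systemwide average rate, M the site's exposure, and k a confidence constant. The sketch below implements that baseline test with made-up numbers; the paper's adjustment for overdispersed data is not reproduced here.

```python
# Sketch of the conventional (Poisson-based) rate quality control test for hazardous sites.
# A site is flagged when its observed crash rate exceeds R_c. Values are illustrative only.
import math

def critical_rate(avg_rate, exposure_mev, k=1.645):
    """avg_rate: systemwide crash rate (crashes per million entering vehicles),
    exposure_mev: site exposure in million entering vehicles, k: confidence constant."""
    return avg_rate + k * math.sqrt(avg_rate / exposure_mev) + 1.0 / (2.0 * exposure_mev)

site_crashes, site_mev = 14, 3.2          # hypothetical site: 14 crashes, 3.2 MEV of exposure
observed_rate = site_crashes / site_mev
Rc = critical_rate(avg_rate=2.5, exposure_mev=site_mev)
print("hazardous" if observed_rate > Rc else "not hazardous", observed_rate, Rc)
```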

Determining Checkpoint Intervals of Non-Preemptive Rate Monotonic Scheduling Using Probabilistic Optimization (확률 최적화를 이용한 비선점형 Rate Monotonic 스케줄링의 체크포인트 구간 결정)

  • Kwak, Seong-Woo;Yang, Jung-Min
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.1 / pp.120-127 / 2011
  • Checkpointing is one of the common methods of realizing fault tolerance in real-time systems. This paper presents a scheme to determine checkpoint intervals using probabilistic optimization. The considered real-time system comprises multiple tasks in which transient faults occur according to a Poisson distribution, and the tasks are scheduled by the non-preemptive Rate Monotonic (RM) algorithm. We formulate an optimization problem in which the probability of task completion is expressed in terms of the numbers of checkpoints. The solution to this problem is the optimal set of checkpoint numbers and intervals that maximizes this probability. The probability computation includes a schedulability test for the non-preemptive RM algorithm with respect to given numbers of checkpoint re-executions. A case study is given to show the applicability of the proposed scheme.
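To make the optimization concrete, the following highly simplified single-task sketch chooses the number of checkpoints that maximizes the completion probability when transient faults arrive as a Poisson process. The deadline, checkpoint overhead, fault rate, and the simple slack-based feasibility condition are illustrative assumptions; the paper's multi-task non-preemptive RM schedulability test is not reproduced.

```python
# Highly simplified single-task sketch of choosing a checkpoint count under Poisson faults.
import math

C, D, overhead, lam = 10.0, 16.0, 0.2, 0.01   # execution time, deadline, checkpoint cost, fault rate

def completion_probability(n):
    exec_time = C + n * overhead                       # execution inflated by n checkpoints
    if exec_time > D:
        return 0.0
    # largest number of segment re-executions that still fits before the deadline
    k_max = int((D - exec_time) // (C / n + overhead))
    # faults arrive as a Poisson process over the execution interval
    mu = lam * exec_time
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(k_max + 1))

best_n = max(range(1, 21), key=completion_probability)
print(best_n, completion_probability(best_n))
```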

Joint Optimization Algorithm Based on DCA for Three-tier Caching in Heterogeneous Cellular Networks

  • Zhang, Jun;Zhu, Qi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.7 / pp.2650-2667 / 2021
  • In this paper, we derive an expression for the cache hitting probability under a random caching policy and propose a joint optimization algorithm based on the difference of convex algorithm (DCA) for a three-tier caching heterogeneous cellular network assisted by macro base stations, helpers, and users. Under the constraint of the caching capacity of the caching devices, we establish an optimization problem that maximizes the cache hitting probability of the network. To solve this problem, a convex function is introduced to convert the nonconvex problem into a difference of convex (DC) problem, and DCA is then used to obtain the optimal caching probability of macro base stations, helpers, and users for each content. Simulation results show that when the density of caching devices is relatively low, popular contents should be cached to achieve good performance, whereas when the density is relatively high, contents ought to be cached evenly. The proposed algorithm achieves a higher cache hitting probability at the same density.
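The core of DCA is to linearize the concave part of the objective and solve a convex subproblem at each iteration. The toy one-dimensional sketch below illustrates that linearize-and-minimize loop; it is not the paper's network model, and the objective f(x) = x⁴ − 4x² with DC split g(x) = x⁴, h(x) = 4x² is chosen purely for illustration.

```python
# Generic difference-of-convex algorithm (DCA) iteration on a toy 1-D problem.
# Minimize f(x) = g(x) - h(x) with g(x) = x^4 and h(x) = 4x^2 (both convex).
import numpy as np

def dca(x0, iters=50):
    x = x0
    for _ in range(iters):
        y = 8.0 * x                      # subgradient of h at x_k: h'(x) = 8x
        # convex subproblem: minimize g(x) - y*x  =>  4x^3 = y
        x = np.cbrt(y / 4.0)
    return x

x_star = dca(x0=0.5)
print(x_star, x_star**4 - 4 * x_star**2)   # converges to a stationary point (x = sqrt(2))
```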

Canonical Transformations for Time-Dependent Harmonic Oscillators

  • Park, Tae-Jun
    • Bulletin of the Korean Chemical Society / v.25 no.2 / pp.285-288 / 2004
  • A canonical transformation changes variables such as coordinates and momenta to new variables while preserving either the Poisson bracket or the commutation relations, depending on whether the problem is classical or quantal, respectively. Classically, canonical transformations are well established as a powerful tool for solving differential equations. Quantum canonical transformations have been defined and used relatively recently because of the non-commutativity of quantum variables. Three elementary canonical transformations and their composite transformations have quantum implementations. Quantum canonical transformations have mostly been used for time-independent Schrödinger equations, and a harmonic oscillator with time-dependent angular frequency is probably the only time-dependent problem solved by these transformations. In this work, we apply quantum canonical transformations to a harmonic oscillator in which both the angular frequency and the equilibrium position are time-dependent.
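For reference, the standard definitions alluded to in this abstract are summarized below in generic notation (textbook conditions, not formulas quoted from the paper): a transformation is canonical when it preserves the Poisson bracket classically, or the canonical commutator quantum mechanically, and the system considered is an oscillator whose frequency ω(t) and equilibrium position x₀(t) both depend on time.

```latex
\[
\{Q, P\}_{q,p} \;=\; \frac{\partial Q}{\partial q}\frac{\partial P}{\partial p}
                   - \frac{\partial Q}{\partial p}\frac{\partial P}{\partial q} \;=\; 1,
\qquad
[\hat{Q}, \hat{P}] \;=\; i\hbar,
\]
\[
H(t) \;=\; \frac{p^{2}}{2m} \;+\; \frac{1}{2}\, m\,\omega^{2}(t)\,\bigl(x - x_{0}(t)\bigr)^{2}.
\]
```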

EFFICIENT PARAMETERS OF DECOUPLED DUAL SINGULAR FUNCTION METHOD

  • Kim, Seok-Chan;Pyo, Jae-Hong
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.13 no.4 / pp.281-292 / 2009
  • The solution of an interface problem, or of a Poisson problem with a concave corner, has singular behavior at the interface corners or singular corners. The decoupled dual singular function method (DDSFM), which exploits the singular representations of the solutions, was suggested in [3, 9], and its optimal accuracy was established in [10]. The convergence rates are consistent with the theoretical results even for problems with very strong singularity, with the efficiency depending on the parameters used in the method. However, the errors in the $L^2$- and $L^\infty$-norms display some oscillation when the mesh size is not small enough. In this paper, we present a way to remove this oscillation via numerical experiments. We observe the effects of the parameters in DDSFM and show the consistent efficiency of the method even under strong singularity.
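For context, dual singular function methods rest on the standard splitting of a corner-singular solution into a regular part plus a stress-intensity factor times a known singular function. The block below records that splitting in generic notation for a re-entrant corner of interior angle ω > π; it is background material, not a formula copied from [3, 9, 10].

```latex
% Standard corner-singularity splitting, in polar coordinates (r, theta) at the corner:
\[
-\Delta u = f, \qquad
u \;=\; w \;+\; \lambda\, s, \qquad
s(r,\theta) \;=\; r^{\pi/\omega}\,\sin\!\Bigl(\frac{\pi\theta}{\omega}\Bigr),
\]
% where w is the regular (H^2) part and the stress-intensity factor lambda is extracted
% using the dual singular function s_{-}(r,\theta) = r^{-\pi/\omega}\sin(\pi\theta/\omega).
```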

Fuzzy Reasoning on Computational Fluid Dynamics - Feasibility of Fuzzy Control for Iterative Method - (CFD에로의 Fuzzy 추론 응용에 관한 연구 - 반복계산을 위한 퍼지제어의 유효성 -)

  • Lee, Y.W.;Jeong, Y.O.;Park, W.C.;Lee, D.H.;Bae, D.S.
    • Journal of Power System Engineering / v.2 no.3 / pp.21-26 / 1998
  • Numerical simulations of various fluid flows require enormous computing time for the iterations, and several techniques have been proposed to address this problem. The SOR method is one of the effective methods for solving elliptic equations; however, it is very difficult to find the optimum relaxation factor, and for practical problems its value is usually estimated on the basis of expertise. In this paper, the implications of the relaxation factor are translated into fuzzy control rules based on the expertise of numerical analysts, and the fuzzy controller is incorporated into a numerical algorithm. From two case studies, a Poisson equation and a cavity flow problem, we confirmed the possibility of accelerating numerical simulations with fuzzy logic and qualitative reasoning. Numerical experiments with the fuzzy controller showed good performance.
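To make the role of the relaxation factor concrete, here is a minimal textbook SOR solver for the 2-D Poisson equation on a unit square. The grid size, right-hand side, and fixed ω are illustrative; the paper's fuzzy controller, which adapts ω during the iteration, is not reproduced.

```python
# Minimal SOR solver for -Laplace(u) = f on the unit square with u = 0 on the boundary.
import numpy as np

def sor_poisson(f, omega=1.7, tol=1e-6, max_iter=10_000):
    n = f.shape[0]
    h = 1.0 / (n - 1)
    u = np.zeros_like(f)
    for it in range(max_iter):
        max_change = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1] + h * h * f[i, j])
                new = (1.0 - omega) * u[i, j] + omega * gs   # over-relaxed Gauss-Seidel update
                max_change = max(max_change, abs(new - u[i, j]))
                u[i, j] = new
        if max_change < tol:
            return u, it + 1
    return u, max_iter

n = 33
f = np.ones((n, n))
u, iters = sor_poisson(f, omega=1.8)      # convergence speed depends strongly on omega
print("iterations:", iters)
```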

Analysis of stiffened plates composed by different materials by the boundary element method

  • Fernandes, Gabriela R.;Neto, Joao R.
    • Structural Engineering and Mechanics / v.56 no.4 / pp.605-623 / 2015
  • A formulation of the boundary element method (BEM) based on Kirchhoff's hypothesis is proposed to analyse stiffened plates composed of beams and slabs made of different materials. The stiffened plate is modelled as a zoned plate, where different values of thickness, Poisson ratio, and Young's modulus can be defined for each sub-region. The proposed integral representations can be used to analyse the coupled stretching-bending problem, where membrane effects are taken into account, or to analyse the bending and stretching problems separately. To evaluate the domain integrals in the integral representation of the in-plane displacements, the beam and slab domains are discretized into cells over which the displacements are approximated. As the beam cell nodes are adopted coincident with the element nodes, new independent values arise only in the slab domains. Some numerical examples are presented and compared to a well-known finite element code to show the accuracy of the proposed model.
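For reference, the Kirchhoff bending relation that a zoned-plate model applies sub-region by sub-region is recalled below in standard notation, with each sub-region i carrying its own Young's modulus E_i, Poisson ratio ν_i, and thickness t_i; this is background, not the paper's boundary integral formulation.

```latex
\[
D_i \,\nabla^{4} w \;=\; q,
\qquad
D_i \;=\; \frac{E_i\, t_i^{3}}{12\,\bigl(1-\nu_i^{2}\bigr)}.
\]
```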

Iterative search for a combined pricing and (S-1,S) inventory policy in a two-echelon supply chain with lost sales allowed

  • Sung Chang Sup;Park Sun Hoo
    • Proceedings of the Korean Operations and Management Science Society Conference / 2003.05a / pp.8-13 / 2003
  • This paper considers a continuous-review two-echelon inventory control problem with a one-to-one replenishment policy and lost sales allowed, where demand arrives according to a stationary Poisson process. The problem is formulated using the METRIC approximation in a combined approach of pricing and an (S-1,S) inventory policy, for which an iterative solution algorithm is derived for the corresponding one-warehouse multi-retailer supply chain. Specifically, decisions on retail pricing and warehouse inventory policies are made jointly to maximize total profit in the supply chain. The objective function of the model consists of sub-functions for revenue and cost (holding cost and penalty cost). To test the effectiveness and efficiency of the proposed algorithm, numerical experiments are performed. The computational results show that the proposed algorithm is efficient and derives quite good decisions.
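A basic building block of METRIC-style evaluation of an (S-1,S) policy is the expected number of backorders when lead-time demand at a stocking point is Poisson. The sketch below computes that quantity for illustrative demand and lead-time values; the paper's pricing decision and the two-echelon warehouse/retailer iteration are not reproduced.

```python
# Sketch of a METRIC-style building block for an (S-1,S) base-stock policy:
# expected backorders E[(X - S)^+] when lead-time demand X ~ Poisson(mu).
from math import exp

def expected_backorders(S, mu, tol=1e-12):
    p = exp(-mu)              # P(X = 0)
    total, k = 0.0, 0
    while True:
        if k > S:
            total += (k - S) * p
        k += 1
        p *= mu / k           # P(X = k) computed from P(X = k - 1)
        if k > S and p < tol:
            return total

lam, lead_time = 4.0, 1.5     # hypothetical demand rate (units/period) and replenishment lead time
mu = lam * lead_time
for S in range(12):
    print(S, round(expected_backorders(S, mu), 4))
```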
