• Title/Abstract/Keywords: Initial distribution

Search results: 1,858

개량박막 유한요소법에 의한 두가지 블랭크로부터의 사각컵 딥드로잉 성형해석 (Analysis of Square Cup Deep Drawing from Two Types of Blanks with a Modified Membrane Finite Element Method)

  • 허훈;한수식
    • 대한기계학회논문집
    • /
    • Vol. 18, No. 10
    • /
    • pp.2653-2663
    • /
    • 1994
  • The design of sheet metal working processes is based on knowledge of the deformation mechanism and the influence of the process parameters. Typical geometric process parameters are the die geometry, the initial sheet thickness, the initial blank shape, and so on. The initial blank shape is of vital importance in most sheet metal forming operations, especially in the deep drawing process, since the forming load and the strain distribution are significantly affected by the shape of the initial blank. The influence of the initial blank shape on a square cup deep drawing process is investigated by numerical simulation and experiment. The numerical simulation is carried out with a modified membrane finite element method which takes bending deformation into account. The numerical and experimental results show that the initial blank shape has a strong influence on the forming load and the strain distribution. The numerical results are compared with the experimental results and with other numerical results calculated with the membrane theory.

외래이용빈도 분석의 모형과 기법 (A Poisson Regression Analysis of Physician Visits)

  • 이영조;한달선;배상수
    • 보건행정학회지
    • /
    • Vol. 3, No. 2
    • /
    • pp.159-176
    • /
    • 1993
  • The utilization of outpatient care services involves two sequential decisions. The first decision is whether to initiate utilization; the second is how many more visits to make after the initiation. Presumably, the initiation decision is largely made by the patient and his or her family, while the number of additional visits is decided under a strong influence of the physician. The implication is that the analysis of outpatient care utilization requires specifying each of the two decisions underlying the utilization as a distinct stochastic process. This paper is concerned with the number of physician visits, which is, by definition, a discrete variable that can take only non-negative integer values. Since the initial visit is accounted for in the analysis of whether any physician visit was made at all, it is enough to focus on the number of visits made in addition to the initial one. The number of additional visits, being a kind of count data, could be assumed to exhibit a Poisson distribution. However, the distribution is likely to be overdispersed, since the number of physician visits tends to cluster around a few values but still varies widely. A recently reported study of outpatient care utilization employed an analysis based on the assumption of a negative binomial distribution, a type of overdispersed Poisson distribution. But there is an indication that using a Poisson distribution with adjustments for overdispersion results in less loss of efficiency in parameter estimation than using a particular distribution such as the negative binomial. An analysis of outpatient care utilization data was performed, focusing on an assessment of the appropriateness of available techniques. The data used in the analysis were collected by a community survey in Hwachon Gun, Kangwon Do in 1990. It was observed that a Poisson regression with adjustments for overdispersion is superior to either an ordinary regression or a Poisson regression without adjustments for overdispersion. In conclusion, it seems most appropriate to assume that the number of physician visits made in addition to the initial visit exhibits an overdispersed Poisson distribution when outpatient care utilization is studied with a model which embodies the two-part character of the decision process underlying the utilization.
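
The overdispersion adjustment the abstract recommends can be sketched as a quasi-Poisson fit: estimate a plain Poisson regression, then rescale the standard errors by the Pearson chi-square dispersion estimate. The snippet below is an illustrative sketch on synthetic data (the covariates and values are hypothetical, not the Hwachon survey data):

```python
# A minimal quasi-Poisson sketch for overdispersed visit counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(20, 80, n)            # hypothetical covariates
chronic = rng.integers(0, 2, n)
X = sm.add_constant(np.column_stack([age, chronic]))

# Overdispersed counts: Poisson rate mixed with a gamma frailty.
mu = np.exp(-1.0 + 0.02 * age + 0.5 * chronic)
visits = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

# Poisson GLM with standard errors rescaled by the Pearson chi-square
# dispersion estimate -- the "adjustment for overdispersion".
fit = sm.GLM(visits, X, family=sm.families.Poisson()).fit(scale="X2")
print(fit.summary())
print("estimated dispersion:", fit.scale)  # values > 1 indicate overdispersion
```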


THE EVOLUTION OF A SPIRAL GALAXY: THE GALAXY

  • Lee, See-Woo;Park, Byeong-Gon;Kang, Yong-Hee;Ann, Hong-Bae
    • 천문학회지
    • /
    • Vol. 24, No. 1
    • /
    • pp.25-53
    • /
    • 1991
  • The evolution of the Galaxy is examined with the halo-disk model, using a time-dependent bimodal IMF and constraints such as the cumulative metallicity distribution, the differential metallicity distribution, and the PDMF of main sequence stars. The time scale of Galactic halo formation is about 3 Gyr, during which most of the halo stars and metal abundance are formed and ${\sim}95\%$ of the initial halo mass falls to the disk. The G-dwarf problem can be explained by the time-dependent bimodal IMF, which is suppressed for low mass stars at the early phase (t < 1 Gyr) of the disk evolution. However, the importance of this problem is much weakened by Pagel's differential metallicity distribution, which leads to less initial metal enrichment and many long-lived metal-poor stars with Z < $1/3Z_{\odot}$. The observed distribution of abundance ratios of C, N, and O elements with respect to [Fe/H] can be reproduced by the halo-disk model, including the contribution of iron produced by SNIs of intermediate mass stars. The initial enrichment of elements in the disk can be explained by the halo-disk model, resulting in a slight decrease and then an increase in the slopes of the [N/Fe] and [C/Fe] distributions with increasing [Fe/H] in the range [Fe/H] < -1.
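
For context, the G-dwarf problem mentioned above is usually stated against the textbook closed-box ("simple") model; the relations below are that baseline, not the paper's halo-disk model:

```latex
% Closed-box model with yield p and gas fraction \mu: the gas metallicity
% and the cumulative metallicity distribution of long-lived stars are
\[
  Z(\mu) = -\,p \ln \mu ,
  \qquad
  \frac{s(<Z)}{s(<Z_{\max})} = \frac{1 - e^{-Z/p}}{1 - e^{-Z_{\max}/p}} .
\]
% This predicts far more stars with Z < Z_\odot/3 than are observed in the
% solar neighborhood; a time-dependent bimodal IMF or halo-to-disk infall,
% as in the abstract, suppresses this metal-poor excess.
```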


하천유역의 유사량의 비교연구 (Comparison of Sediment Yield by IUSG and Tank Model in River Basin)

  • 이영화
    • 한국환경과학회지
    • /
    • Vol. 18, No. 1
    • /
    • pp.1-7
    • /
    • 2009
  • In this study, sediment yields computed by the IUSG, the IUSG with a Kalman filter, a tank model, and a tank model with a Kalman filter are compared. The IUSG is the distribution of sediment from an instantaneous burst of rainfall producing one unit of runoff. The IUSG, defined as the product of the sediment concentration distribution (SCD) and the instantaneous unit hydrograph (IUH), is known to depend on the characteristics of the effective rainfall. In the IUSG with a Kalman filter, the state vector of the watershed sediment yield system is constituted by the IUSG. The initial values of the state vector are assumed to be the average of the IUSG values, and the initial sediment yield is estimated from the average IUSG. A tank model consisting of three tanks was developed for prediction of sediment yield. The sediment yield of each tank was computed by multiplying the total sediment yield by the sediment yield coefficients; the yield was obtained as the product of the runoff of each tank and the sediment concentration in the tank. A tank model with a Kalman filter was also developed for prediction of sediment yield; the state vector of its system model represents the parameters of the tank model, and the initial values of the state vector were estimated by trial and error.
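
A serial tank model of the kind described can be sketched in a few lines: each tank spills runoff from a side outlet, infiltrates to the tank below, and contributes sediment as runoff times its concentration. All parameter values below are hypothetical placeholders, not the paper's calibrated coefficients:

```python
# Minimal three-tank runoff/sediment sketch (hypothetical parameters).
from dataclasses import dataclass

@dataclass
class Tank:
    storage: float   # water storage depth [mm]
    k_out: float     # side-outlet runoff coefficient [1/step]
    k_inf: float     # infiltration coefficient to the next tank [1/step]
    conc: float      # sediment concentration of this tank's runoff [g/L]

def step(tanks, rainfall_mm):
    """Advance one time step; return (total_runoff, total_sediment_yield)."""
    inflow = rainfall_mm
    total_q, total_sy = 0.0, 0.0
    for tank in tanks:
        tank.storage += inflow
        runoff = tank.k_out * tank.storage   # side outlet -> stream
        inflow = tank.k_inf * tank.storage   # bottom outlet -> next tank
        tank.storage -= runoff + inflow
        total_q += runoff
        total_sy += runoff * tank.conc       # yield = runoff x concentration
    return total_q, total_sy

tanks = [Tank(10, 0.20, 0.10, 1.5), Tank(20, 0.10, 0.05, 0.8), Tank(40, 0.05, 0.0, 0.3)]
for rain in [0.0, 12.0, 30.0, 5.0, 0.0]:
    print(step(tanks, rain))
```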

A Study on the Efficiency of a Two Stage Shrinkage Testimator for the Mean of an Exponential Distribution

  • Myung-Sang Moon
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 5, No. 1
    • /
    • pp.231-238
    • /
    • 1998
  • A two-stage shrinkage testimator for the mean of an exponential distribution is considered under the assumption that an initial estimate of the mean is available. The mean squared error (MSE) of the testimator and its relative efficiency (with respect to the usual single-sample mean) are briefly reviewed. It is shown that the relative efficiency depends only on the ratio of the true mean to its initial estimate.
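
As a sketch of the generic construction (the paper's exact estimator and test may differ), a two-stage shrinkage testimator shrinks the first-stage mean toward the initial guess when a preliminary test accepts it, and otherwise falls back on the pooled mean:

```latex
% Generic two-stage shrinkage testimator for the exponential mean \theta,
% with initial estimate \theta_0 and first-stage sample mean \bar X_1:
\[
  \hat\theta =
  \begin{cases}
    k\,\bar X_1 + (1-k)\,\theta_0, & \text{if the preliminary test accepts } \theta_0,\\[2pt]
    \bar X, & \text{otherwise (pooled two-stage mean)},
  \end{cases}
  \qquad 0 \le k \le 1 .
\]
% The relative efficiency RE = MSE(\bar X)/MSE(\hat\theta) then depends on
% \theta and \theta_0 only through the ratio \delta = \theta_0/\theta,
% which is the dependence the abstract reports.
```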


Application of In Situ Measurement for Site Remediation and Final Status Survey of Decommissioning KRR Site

  • Hong, Sang Bum;Nam, Jong Soo;Choi, Yong Suk;Seo, Bum Kyoung;Moon, Jei Kwon
    • Journal of Radiation Protection and Research
    • /
    • Vol. 41, No. 2
    • /
    • pp.173-178
    • /
    • 2016
  • Background: In situ gamma spectrometry has been used to measure environmental radiation, and assumptions are usually made about the depth distribution of the radionuclides of interest in the soil. The main limitation of in situ gamma spectrometry lies in determining the depth distribution of radionuclides. The objective of this study is to develop a method for subsurface characterization by in situ measurement. Materials and Methods: The peak-to-valley method, based on the ratio of counting rates between the photoelectric peak and the Compton region, was applied to identify the depth distribution. The peak-to-valley method can be used to establish the relation between the spectrally derived coefficient (Q) and the relaxation mass per unit area (${\beta}$) for various depth distributions in soil. The in situ measurement results were verified by MCNP simulation and the calculated correlation equation. In order to compare the depth distributions and contamination levels at the decommissioning KRR site, in situ measurement and sampling results were compared. Results and Discussion: The in situ measurement results and MCNP simulation results show a good correlation for the laboratory measurement. The simulated relationship between Q and source burial depth is exponential for a variety of depth distributions. We applied the peak-to-valley method to the contaminated decommissioning KRR site to determine the depth distribution and initial activity without sampling. The observed results show a good correlation; the relative error between the in situ measurement and the sampling results is around 7% for the depth distribution and 4% for the initial activity. Conclusion: In this study, the vertical activity distribution and initial activity of $^{137}Cs$ could be identified directly through in situ measurement. The peak-to-valley method therefore shows good potential for assessing residual radioactivity for site remediation at decommissioning and contaminated sites.
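
The peak-to-valley idea can be sketched as follows: form the spectral ratio Q from the $^{137}Cs$ full-energy peak and the Compton valley just below it, then invert a calibration curve for ${\beta}$. The exponential form and the constants (a, b) below are hypothetical stand-ins for the paper's MCNP-derived correlation, and the window limits are illustrative:

```python
# Sketch of peak-to-valley depth profiling for 137Cs (hypothetical calibration).
import numpy as np

def peak_to_valley(spectrum, energies, peak=(655.0, 670.0), valley=(600.0, 640.0)):
    """Q = counts in the 661.7 keV peak window / counts in the valley window."""
    e = np.asarray(energies)
    s = np.asarray(spectrum)
    peak_counts = s[(e >= peak[0]) & (e <= peak[1])].sum()
    valley_counts = s[(e >= valley[0]) & (e <= valley[1])].sum()
    return peak_counts / valley_counts

def beta_from_q(q, a=5.0, b=0.35):
    """Invert a hypothetical calibration Q = a * exp(-b * beta) for beta [g/cm^2]."""
    return -np.log(q / a) / b

q = 3.2  # example measured spectral ratio
print(f"beta ~ {beta_from_q(q):.2f} g/cm^2")
```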

유한요소법에 의한 초고압변압기권선의 충격파전위분포설계에 관한 연구 (Study on Surge Voltage Distribution Design for UHV Transformer Windings by Finite Element Method)

  • 황영문;이일천
    • 전기의세계
    • /
    • Vol. 28, No. 11
    • /
    • pp.45-51
    • /
    • 1979
  • Finite element methods are developed for the initial distribution problems that involve the surge potential circuits of high voltage transformer windings. The initial distribution of surge voltages in transformer windings is useful to work on a practical engineering basis; however, the conventional methods of analyzing it are too complicated for practical designs. In this paper, the ability to solve surge potential field problems underlies the development of discretization methods yielding local capacitive distribution coefficients for determining the surge voltage relationship among a set of transformer coils. A practical example, the modeling of an anti-oscillation shield coil winding and a hisercap winding, is used to illustrate and evaluate these methods.
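
The classical lumped-capacitance result that such FEM formulations refine is worth stating (a textbook sketch, not the paper's derivation):

```latex
% For a winding of length l with series capacitance C_s and capacitance to
% ground C_g, the initial (t = 0^+) surge-voltage distribution under a step
% voltage U with grounded neutral is
\[
  u(x) = U\,\frac{\sinh(\alpha x / l)}{\sinh \alpha},
  \qquad
  \alpha = \sqrt{C_g / C_s},
\]
% with x measured from the neutral. Large \alpha concentrates the initial
% stress at the line end; shield-coil and hisercap windings raise the
% effective series capacitance to reduce \alpha and flatten u(x).
```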


용량성배전변압기에 관한 연구 (A Study on the Capacitive Transformer)

  • 이승원
    • 전기의세계
    • /
    • Vol. 18, No. 2
    • /
    • pp.7-14
    • /
    • 1969
  • From the first customer located right at the substation to the last customer at the end of the line, voltage must be held within close limits, so voltage regulation is more important than the thermal limit. On a typical distribution system during the peak load period, the voltage drop may be serious enough to cause unsatisfactory operation of home appliances in residential areas and to present many problems to manufacturing industries, where the voltage must be maintained within close limits to ensure smooth operation. Among all the factors contributing to voltage drop in the distribution system, the drop in the distribution transformer may account for 30% of this figure. If this factor can be eliminated, power companies can provide better quality electricity to more customers with the existing distribution facilities, thus saving on initial investment costs. Taking all these problems into consideration, the author undertook the design of a capacitive transformer which would give zero voltage drop at rated load and 80% lagging power factor while incorporating overload features to withstand 400% overload for at least 100 seconds. The following are the results obtained through the design, manufacture, and testing of an initial experimental transformer built for these specific purposes.
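
The zero-regulation design target follows from the standard voltage-drop approximation (a sketch of the design condition, not the paper's derivation):

```latex
% Per-unit voltage drop of a transformer with resistance R and reactance X
% at current I and power factor \cos\varphi:
\[
  \Delta V \;\approx\; I\left(R\cos\varphi + X\sin\varphi\right).
\]
% Zero drop at rated current and \cos\varphi = 0.8 lagging
% (\sin\varphi = 0.6) requires an effectively capacitive series reactance
\[
  X \;=\; -\,R\,\frac{\cos\varphi}{\sin\varphi} \;=\; -\,\tfrac{4}{3}\,R,
\]
% which is the condition a capacitive transformer is built to meet.
```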


Analytical Formulation for the Everett Function

  • Hong, Sun-Ki;Kim, Hong-Kyu;Jung, Hyun-Kyo
    • Journal of Magnetics
    • /
    • Vol. 2, No. 3
    • /
    • pp.105-109
    • /
    • 1997
  • The Preisach model needs a density function or Everett function for the hysteresis operator to simulate hysteresis phenomena. To obtain the function, many experimental data for the first-order transition curves are required. However, measuring these curves takes considerable effort, especially for hard magnetic materials. It is well known that the density function has a Gaussian distribution along the interaction axis of the Preisach plane. In this paper, we propose a simple technique to determine the distribution function or Everett function analytically. The initial magnetization curve is used for the distribution of the Everett function along the coercivity axis. A major loop, a minor loop, and the initial curve are used to obtain the Everett function for the interaction axis using the Gaussian distribution function, and acceptable results were obtained.
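
The factorized density the abstract describes can be sketched numerically: a coercivity-axis profile multiplied by a Gaussian along the interaction axis, integrated over the Preisach triangle to give the Everett function. The exponential coercivity profile below is a hypothetical placeholder; the paper extracts that profile from the measured initial magnetization curve:

```python
# Sketch of a factorized Preisach density and its Everett function.
import numpy as np

def preisach_density(h_c, h_u, sigma_u=0.2, hc0=1.0):
    """mu(h_c, h_u) = f(h_c) * g(h_u), with g Gaussian on the interaction axis."""
    f_hc = np.exp(-h_c / hc0) / hc0                       # placeholder coercivity profile
    g_hu = np.exp(-h_u**2 / (2 * sigma_u**2)) / (np.sqrt(2 * np.pi) * sigma_u)
    return f_hc * g_hu

def everett(alpha, beta, n=200):
    """E(alpha, beta): density integrated over the triangle beta <= b <= a <= alpha."""
    total = 0.0
    da = (alpha - beta) / n
    for ai in np.linspace(beta, alpha, n):
        b = np.linspace(beta, ai, n)
        db = (ai - beta) / n
        h_c = (ai - b) / 2.0                              # coercivity coordinate
        h_u = (ai + b) / 2.0                              # interaction coordinate
        total += preisach_density(h_c, h_u).sum() * db * da
    return total

print(everett(1.0, -1.0))
```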


A NEW WAY FOR SOLVING TRANSPORTATION ISSUES BASED ON THE EXPONENTIAL DISTRIBUTION AND THE CONTRAHARMONIC MEAN

  • M. AMREEN;VENKATESWARLU B
    • Journal of applied mathematics & informatics
    • /
    • Vol. 42, No. 3
    • /
    • pp.647-661
    • /
    • 2024
  • This study aims to determine the optimal solution to transportation problems. We propose a novel approach for obtaining the initial basic feasible solution (IBFS), a critical step toward achieving an optimal or near-optimal solution. The transportation problem is the problem of distributing goods from several sources to several destinations. The literature demonstrates many ways to improve the IBFS. In this work, we suggest a new method for the IBFS based on the cumulative distribution function (CDF) of the exponential distribution and the inverse contraharmonic mean (ICHM). A spreadsheet converts the transportation cost values into exponential cost cell values. The stepping-stone method is then used to identify an optimum solution. The results are compared with existing methodologies; the suggested method handles balanced and unbalanced problems, profit maximization, random-valued instances, and case studies, and produces more effective outcomes.
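
The ingredients can be sketched as follows; the precise allocation rules are the paper's, and the choice of the row-wise ICHM (taken here as sum(x)/sum(x^2)) as the exponential rate is an assumption for illustration:

```python
# Sketch: exponential-CDF cost transform plus a greedy least-cost IBFS.
import numpy as np

def transform_costs(cost):
    """Map each cost through F(x) = 1 - exp(-lambda*x), lambda = row-wise ICHM."""
    cost = np.asarray(cost, dtype=float)
    lam = cost.sum(axis=1) / (cost**2).sum(axis=1)   # assumed ICHM per row
    return 1.0 - np.exp(-lam[:, None] * cost)

def least_cost_ibfs(cost, supply, demand):
    """Greedy least-cost allocation on the (transformed) cost matrix."""
    supply, demand = list(supply), list(demand)
    c = np.asarray(cost, dtype=float).copy()
    alloc = np.zeros_like(c)
    while sum(supply) > 1e-9:
        i, j = np.unravel_index(np.argmin(c), c.shape)
        q = min(supply[i], demand[j])
        alloc[i, j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] <= 1e-9:
            c[i, :] = np.inf                         # exhausted source
        if demand[j] <= 1e-9:
            c[:, j] = np.inf                         # satisfied destination
    return alloc

cost = [[4, 6, 8], [5, 3, 7], [6, 9, 4]]             # balanced example instance
print(least_cost_ibfs(transform_costs(cost), supply=[20, 30, 25], demand=[25, 25, 25]))
```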