• Title/Summary/Keyword: cumulative normal distribution

Search Results: 63

The Gains To Bidding Firms' Stock Returns From Merger (기업합병의 성과에 영향을 주는 요인에 대한 실증적 연구)

  • Kim, Yong-Kap
    • Management & Information Systems Review
    • /
    • v.23
    • /
    • pp.41-74
    • /
    • 2007
  • In Korea, corporate merger activity has been active since 1980, and recent changes in domestic and international economic circumstances (particularly since 1986) have given corporate managers a strong interest in mergers. Korea and America have different business environments, and it is easily conceivable that there exist many differences in the motives, methods, and effects of mergers between the two countries. According to recent studies on takeover bids in America, takeover bids have information effects, tax implications, and co-insurance effects, and the form of payment (cash versus securities), the relative size of target and bidder, the leverage effect, Tobin's q, the number of bidders (single versus multiple), the time period (before 1968, 1968-1980, 1981 and later), and the target firm's reaction (hostile versus friendly) are important determinants of the magnitude of takeover gains and their distribution between targets and bidders at the announcement of takeover bids. This study examines the theory of takeover bids and the status quo and problems of mergers in Korea, then investigates how merger announcements are reflected in the common stock returns of bidding firms, and finally explores empirically the factors influencing the abnormal returns of bidding firms' stock prices. The hypotheses of this study are as follows: shareholders of bidding firms benefit from mergers, and the common stock returns of bidding firms at the announcement of takeover bids show significant differences according to the ratio of target size to bidding-firm size, whether the target is a member of the conglomerate to which the bidding firm belongs, whether the target is a listed company, the time period (before 1986, 1986 and later), the number of bidding-firm shares exchanged for a share of the target, whether the merger is a horizontal/vertical merger or a conglomerate merger, and the debt-to-equity ratios of the target and the bidding firm. The data analyzed in this study were drawn from public announcements of proposals to acquire a target firm by means of merger. The sample contains all bidding firms which were listed on the stock market and engaged in successful mergers in the period 1980 through 1992 and for which daily stock returns are available. A merger bid was considered successful if it resulted in a completed merger and the target firm disappeared as a separate entity. The final sample contains 113 acquiring firms. The research hypotheses are tested by applying an event-type methodology similar to that described in Dodd and Warner. The ordinary-least-squares coefficients of the market-model regression were estimated over the period t=-135 to t=-16 relative to the date of the proposal's initial announcement, t=0. Daily abnormal common stock returns were calculated for each firm i over the interval t=-15 to t=+15, and a daily average abnormal return ($AR_t$) was computed for each day t. Average cumulative abnormal returns ($CAR_{T_1,T_2}$) were also derived by summing the $AR_t$'s over various intervals. The expected values of $AR_t$ and $CAR_{T_1,T_2}$ are zero in the absence of abnormal performance. The test statistics of $AR_t$ and $CAR_{T_1,T_2}$ are based on the average standardized abnormal return ($ASAR_t$) and the average standardized cumulative abnormal return ($ASCAR_{T_1,T_2}$), respectively. Assuming that the individual abnormal returns are normal and independent across t and across securities, the statistics $Z_t$ and $Z_{T_1,T_2}$, which follow a unit-normal distribution (Dodd and Warner), are used to test the hypotheses that the average standardized abnormal returns and the average cumulative standardized abnormal returns equal zero. (A minimal sketch of this event-study computation follows this entry.)

  • PDF
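
A minimal sketch of the event-study computation described in the abstract above: market-model OLS over t = -135 to -16, abnormal and standardized abnormal returns over t = -15 to +15, and unit-normal test statistics in the spirit of Dodd and Warner. The toy return series, array shapes, and names such as `est_ret` and `evt_ret` are assumptions for illustration; this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_est, n_evt = 113, 120, 31        # estimation window t=-135..-16, event window t=-15..+15

# Toy daily returns standing in for real data (assumption: rows = firms, cols = days)
mkt_est = rng.normal(0.0005, 0.01, n_est)
mkt_evt = rng.normal(0.0005, 0.01, n_evt)
est_ret = 0.8 * mkt_est + rng.normal(0, 0.02, (n_firms, n_est))
evt_ret = 0.8 * mkt_evt + rng.normal(0, 0.02, (n_firms, n_evt))

AR = np.empty((n_firms, n_evt))
SAR = np.empty((n_firms, n_evt))
for i in range(n_firms):
    # Market-model OLS over the estimation window
    beta, alpha = np.polyfit(mkt_est, est_ret[i], 1)
    resid = est_ret[i] - (alpha + beta * mkt_est)
    s = resid.std(ddof=2)                   # residual standard deviation
    AR[i] = evt_ret[i] - (alpha + beta * mkt_evt)
    SAR[i] = AR[i] / s                      # standardized abnormal returns

AR_bar = AR.mean(axis=0)                    # average abnormal return for each event day
Z_t = SAR.mean(axis=0) * np.sqrt(n_firms)   # unit-normal statistic for each day (Dodd-Warner style)
print("AR and Z at the announcement day:", round(AR_bar[15], 5), round(Z_t[15], 2))

# Cumulative statistics over an interval, e.g. days -1..+1 (indices 14..16 in the 31-day window)
T1, T2 = 14, 17
CAR = AR[:, T1:T2].sum(axis=1).mean()
Z_interval = SAR[:, T1:T2].sum(axis=1).mean() * np.sqrt(n_firms / (T2 - T1))
print(f"CAR(-1,+1) = {CAR:.4f}, Z = {Z_interval:.2f}")
```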

Statistical analysis and probabilistic modeling of WIM monitoring data of an instrumented arch bridge

  • Ye, X.W.;Su, Y.H.;Xi, P.S.;Chen, B.;Han, J.P.
    • Smart Structures and Systems
    • /
    • v.17 no.6
    • /
    • pp.1087-1105
    • /
    • 2016
  • Traffic load and volume are among the most important physical quantities for bridge safety evaluation and the formulation of maintenance strategies. This paper conducts a statistical analysis of traffic volume information and multimodal modeling of gross vehicle weight (GVW) based on the monitoring data obtained from the weigh-in-motion (WIM) system instrumented on the arch Jiubao Bridge located in Hangzhou, China. A genetic algorithm (GA)-based mixture parameter estimation approach is developed for derivation of the unknown mixture parameters in mixed distribution models. The statistical analysis of one year of WIM data is first performed according to vehicle type, single axle weight, and GVW. The probability density function (PDF) and cumulative distribution function (CDF) of the GVW data of selected vehicle types are then formulated by use of three kinds of finite mixed distributions (normal, lognormal and Weibull), with the mixture parameters determined by the proposed GA-based method. The results indicate that the stochastic properties of the GVW data acquired from the field-instrumented WIM sensors are effectively characterized by finite mixture distributions in conjunction with the proposed GA-based mixture parameter identification algorithm. Moreover, the Weibull mixture distribution is relatively superior in modeling the WIM data on the basis of the calculated Akaike information criterion (AIC) values.
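
A hedged sketch of the mixture-fitting step described above: an evolutionary optimizer (SciPy's `differential_evolution`, standing in for the paper's GA) maximizes the log-likelihood of a two-component normal mixture on a toy GVW sample, and AIC is computed for model comparison. The sample data, parameter bounds, and component count are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import norm

rng = np.random.default_rng(1)
# Toy GVW sample (tonnes) standing in for one vehicle type's WIM records (assumption)
gvw = np.concatenate([rng.normal(15, 3, 600), rng.normal(40, 6, 400)])

def neg_loglik(theta):
    """Negative log-likelihood of a two-component normal mixture."""
    w, m1, s1, m2, s2 = theta
    pdf = w * norm.pdf(gvw, m1, s1) + (1 - w) * norm.pdf(gvw, m2, s2)
    return -np.sum(np.log(pdf + 1e-300))

bounds = [(0.05, 0.95), (5, 30), (0.5, 10), (20, 60), (0.5, 15)]
# differential_evolution is an evolutionary optimizer used here in place of the paper's GA
res = differential_evolution(neg_loglik, bounds, seed=1, tol=1e-7)
k = len(bounds)
aic = 2 * k + 2 * res.fun                  # AIC = 2k - 2 ln L
print("fitted mixture parameters:", np.round(res.x, 3))
print("AIC:", round(aic, 1))
```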

Size Distribution and Temperature Dependence of Magnetic Anisotropy Constant in Ferrite Nanoparticles

  • Yoon, Sunghyun
    • Proceedings of the Korean Magnetics Society Conference
    • /
    • 2012.11a
    • /
    • pp.104-105
    • /
    • 2012
  • The temperature dependence of the effective magnetic anisotropy constant K(T) of ferrite nanoparticles is obtained from SQUID magnetometry measurements. To this end, a very simple but intuitive and direct method for determining the temperature dependence of the anisotropy constant K(T) in nanoparticles is introduced in this study. The anisotropy constant at a given temperature is determined by associating the particle size distribution f(r) with the anisotropy energy barrier distribution $f_A(T)$. To estimate the particle size distribution f(r), the first-quadrant part of the hysteresis loop is fitted to the classical Langevin function weight-averaged with the log-normal distribution, slightly modified from Chantrell's original distribution function. To obtain the anisotropy energy barrier distribution $f_A(T)$, the temperature dependence of the magnetization decay $M_{TD}$ of the sample is measured. For this measurement, the sample is cooled from room temperature to 5 K in a magnetic field of 100 G; the applied field is then turned off and the remanent magnetization is measured while the temperature is increased stepwise. The energy barrier distribution $f_A(T)$ is obtained by differentiating the magnetization decay curve at each temperature. It decreases with increasing temperature and finally vanishes when all the particles in the sample are unblocked. As a next step, a relation between r and $T_B$ is determined from the particle size distribution f(r) and the anisotropy energy barrier distribution $f_A(T)$. Under the simple assumption that the superparamagnetic fraction of the cumulative area of the particle size distribution at a given temperature equals the fraction of anisotropy energy barriers overcome at that temperature, a relation between r and $T_B$ is obtained, from which the temperature dependence of the magnetic anisotropy constant is determined, as represented in the inset of Fig. 1. Substituting the values of r and $T_B$ into the Néel-Arrhenius equation, with the attempt time fixed at $10^{-9}s$ and the measuring time taken as 100 s (suitable for conventional magnetic measurements), the anisotropy constant K(T) is estimated as a function of temperature (Fig. 1). As an example, the resultant effective magnetic anisotropy constant K(T) of manganese ferrite decreases with increasing temperature from $8.5{\times}10^4J/m^3$ at 5 K to $0.35{\times}10^4J/m^3$ at 125 K; the value reported for K in the literature is $0.25{\times}10^4J/m^3$. The anisotropy constant in the low-temperature region is more than an order of magnitude larger than that at 125 K, indicative of inter-particle interaction effects, which are more pronounced for smaller particles. (A minimal numerical sketch of the Néel-Arrhenius step follows this entry.)

  • PDF
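
A minimal numerical sketch of the Néel-Arrhenius step described above: given an (r, $T_B$) pair read off from matching f(r) with $f_A(T)$, the anisotropy constant follows from $K = k_B T_B \ln(t_m/\tau_0) / V$ with $\tau_0 = 10^{-9}$ s and $t_m = 100$ s as in the abstract. The (r, $T_B$) pairs below are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Neel-Arrhenius relation: t_m = tau0 * exp(K*V / (kB*T_B))
# => K(T_B) = kB * T_B * ln(t_m / tau0) / V, with V = (4/3) * pi * r^3
kB   = 1.380649e-23   # Boltzmann constant, J/K
tau0 = 1e-9           # attempt time (s), as in the abstract
t_m  = 100.0          # measuring time (s), as in the abstract

def anisotropy_constant(r_nm, T_B):
    """Effective anisotropy constant (J/m^3) for a particle of radius r_nm (nm) blocked at T_B (K)."""
    V = (4.0 / 3.0) * np.pi * (r_nm * 1e-9) ** 3
    return kB * T_B * np.log(t_m / tau0) / V

# Hypothetical (r, T_B) pairs such as those read off from matching f(r) with f_A(T)
for r_nm, T_B in [(3.0, 25.0), (7.0, 70.0), (14.0, 125.0)]:
    print(f"r = {r_nm:4.1f} nm, T_B = {T_B:5.1f} K  ->  K = {anisotropy_constant(r_nm, T_B):.2e} J/m^3")
```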

A Study on an Electrical Biosignal Detection System for the Microbiochip (마이크로바이오칩의 전기신호검출 시스템에 관한 연구)

  • Park Jeong Yeon;Park Jae Jun;Kwon Ki Hwan;Cho Nahm Gyoo;Ahn Yoo Min;Lee Seoung Hwan;Hwang Seung Yong
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.22 no.4
    • /
    • pp.181-187
    • /
    • 2005
  • In this study, a microchip system fabricated with MEMS technology was developed to detect bioelectrical signals. The developed microchip, which exploits the conductivity of gold nanoparticles, can detect the biopotential with high sensitivity. For the design of the microchip, simulations were performed to understand the effects of the size and number of nanoparticles, and of the sensing width between electrodes, on the detection of biosignals. A series of experiments was then performed to validate the simulation results and to assess the feasibility of the proposed microchip design. Both simulation and experimental results showed that the conductivity decreased as the sensing width between electrodes increased and increased as the density of gold nanoparticles increased. In addition, it was found that the conductivity as a function of nanoparticle density could be approximated by a cumulative normal distribution function. The developed microchip system can be applied effectively when biosignals must be measured with high sensitivity.
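
A brief sketch of the fitting idea mentioned above: conductivity versus gold-nanoparticle density is regressed onto a scaled cumulative normal distribution function. The data points, the parameter names, and the saturation scaling are assumptions for illustration, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical measurements: conductivity vs. gold-nanoparticle density (arbitrary units)
density      = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
conductivity = np.array([0.02, 0.05, 0.12, 0.28, 0.50, 0.71, 0.87, 0.95, 0.98, 0.99])

def normal_cdf_model(x, mu, sigma, scale):
    """Cumulative normal distribution scaled to the saturation conductivity."""
    return scale * norm.cdf(x, loc=mu, scale=sigma)

popt, _ = curve_fit(normal_cdf_model, density, conductivity, p0=[2.5, 1.0, 1.0])
mu, sigma, scale = popt
print(f"mu = {mu:.2f}, sigma = {sigma:.2f}, saturation = {scale:.2f}")
```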

Regional Drought Frequency Analysis of Monthly Precipitation with L-Moments Method in Nakdong River Basin (L-Moments법에 의한 낙동강유역 월강우량의 지역가뭄빈도해석)

  • 김성원
    • Journal of Environmental Science International
    • /
    • v.8 no.4
    • /
    • pp.431-441
    • /
    • 1999
  • In this study, regional frequency analysis is used to determine the drought frequency of each subbasin from reliable monthly precipitation, and the L-moments method, which is nearly unbiased and very nearly normally distributed, is used for parameter estimation of the monthly precipitation time series in the Nakdong river basin. The results show that the 1993-1994 period was a more severe drought than any other water year, and the drought frequency is established by comparing the regional frequency analysis of 12-month cumulative precipitation in each subbasin with that of the 12-month durations within the major drought period. A linear regression equation is derived from the regression analysis of drought frequency between the whole Nakdong basin and each subbasin for the same drought duration. On this basis, the proposed method and procedure can be applied to water budget analysis considering safety standards for the design of impounding facilities in large-scale river basins; for this purpose, above all, the reliable precipitation records at watershed rainfall stations need to be expanded. (A minimal sketch of the sample L-moments computation follows this entry.)

  • PDF
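
A minimal sketch of the sample L-moments computation underlying the method above, using the standard unbiased probability-weighted-moment estimators. The synthetic precipitation sample is an assumption and only illustrates the calculation.

```python
import numpy as np

def sample_lmoments(x):
    """Unbiased sample L-moments (l1, l2, L-CV, L-skewness) via probability weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l2 / l1, l3 / l2

# Hypothetical 12-month cumulative precipitation totals (mm) for one subbasin
rng = np.random.default_rng(2)
precip = rng.gamma(shape=8.0, scale=150.0, size=40)
l1, l2, lcv, lskew = sample_lmoments(precip)
print(f"l1 = {l1:.1f} mm, l2 = {l2:.1f} mm, L-CV = {lcv:.3f}, L-skewness = {lskew:.3f}")
```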

Structure Reliability Analysis using 3rd Order Polynomials Approximation of a Limit State Equation (한계상태식의 3차 다항식 근사를 통한 구조물 신뢰도 평가)

  • Lee, Seung Gyu;Kim, Sung Chan;Kim, Tea Uk
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.26 no.3
    • /
    • pp.183-189
    • /
    • 2013
  • In this paper, the uncertainties and failure criteria of a structure are expressed mathematically by random variables and a limit state equation. The limit state equation is approximated by Fleishman's 3rd order polynomial, and the theoretical moments of the approximated limit state equation are calculated. Fleishman introduced a 3rd order polynomial in terms of standard normal random variables only; in this paper, Fleishman's polynomial is extended to various random variables, including beta, gamma, and uniform distributions. Cumulants and a normalized limit state equation are used to calculate the theoretical moments of the limit state equation, and the cumulative distribution function of the normalized limit state equation is approximated by a Pearson system.
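
A hedged sketch of the moment calculation for a Fleishman-type 3rd order polynomial of a standard normal variable: raw moments $E[Y^k]$ are obtained from the known moments of Z and converted to central moments, with a Monte Carlo estimate of the failure probability as a check. The polynomial coefficients are hypothetical, and the Pearson-system step of the paper is not reproduced.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def std_normal_moment(n):
    """E[Z^n] for Z ~ N(0,1): 0 for odd n, (n-1)!! for even n."""
    if n % 2 == 1:
        return 0.0
    return float(np.prod(np.arange(n - 1, 0, -2))) if n > 0 else 1.0

def raw_moments(coeffs, k_max=4):
    """Raw moments E[Y^k] of Y = a + b*Z + c*Z^2 + d*Z^3 (coeffs in ascending powers of Z)."""
    moments = []
    for k in range(1, k_max + 1):
        poly = np.array([1.0])
        for _ in range(k):
            poly = P.polymul(poly, coeffs)          # coefficients of Y^k as a polynomial in Z
        moments.append(sum(coef * std_normal_moment(n) for n, coef in enumerate(poly)))
    return moments

# Hypothetical Fleishman coefficients for an approximated limit state function g(X)
a, b, c, d = 2.0, 1.0, 0.10, 0.02
m1, m2, m3, m4 = raw_moments([a, b, c, d])
var = m2 - m1**2
mu3 = m3 - 3*m1*m2 + 2*m1**3
mu4 = m4 - 4*m1*m3 + 6*m1**2*m2 - 3*m1**4
print(f"mean={m1:.4f}  var={var:.4f}  skewness={mu3/var**1.5:.4f}  kurtosis={mu4/var**2:.4f}")

# Failure probability P[g <= 0] estimated directly from the polynomial (Monte Carlo check)
z = np.random.default_rng(3).standard_normal(200_000)
g = a + b*z + c*z**2 + d*z**3
print("P[g <= 0] ~", (g <= 0).mean())
```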

Statistical Life Prediction of Fatigue Crack Growth for SiC Whisker Reinforced Aluminium Composite (SiC 휘스커 보강 Al6061 복합재료의 통계학적 피로균열진전 수명예측)

  • 권재도;안정주;김상태
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.19 no.2
    • /
    • pp.475-485
    • /
    • 1995
  • In this study, a statistical analysis of fatigue data obtained from 24 fatigue cracks each was performed for a SiC whisker reinforced aluminium 6061 composite alloy (SiC$_{w}$/Al6061) and for aluminium 6061 alloy; the SiC volume fraction in the composite alloy is 25%. The analysis shows that the stress intensity factor range and the 0.1 mm fatigue crack initiation life for the SiC$_{w}$/Al6061 composite and the Al6061 matrix follow log-normal distributions. Regression analyses with linear, exponential, and multiplicative models were performed to find the relationship between the fatigue crack growth rate (da/dN) and the stress intensity factor range ($\Delta K$) in the SiC$_{w}$/Al6061 composite and to examine the applicability of Paris' equation to the SiC$_{w}$/Al6061 composite. A computer simulation was also performed for fatigue life prediction of the SiC$_{w}$/Al6061 composite using the statistical results of this study.
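
A minimal sketch of the regression and distribution fits named above: Paris' equation $da/dN = C(\Delta K)^m$ fitted by log-log linear regression, and a log-normal fit of crack-initiation lives. All data values below are hypothetical placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical da/dN (m/cycle) vs. delta-K (MPa*sqrt(m)) pairs standing in for measured data
dK   = np.array([6.0, 7.0, 8.0, 9.0, 10.0, 12.0, 14.0, 16.0])
dadN = np.array([2.1e-9, 4.0e-9, 7.5e-9, 1.2e-8, 1.9e-8, 4.2e-8, 8.0e-8, 1.4e-7])

# Paris' equation da/dN = C * (dK)^m is linear in log-log space: log(da/dN) = log C + m*log(dK)
slope, intercept, r_value, _, _ = stats.linregress(np.log10(dK), np.log10(dadN))
m_exp, C = slope, 10**intercept
print(f"Paris exponent m = {m_exp:.2f}, coefficient C = {C:.3e}, R^2 = {r_value**2:.3f}")

# Log-normal fit of hypothetical 0.1 mm crack-initiation lives (cycles)
lives = np.array([5.2e4, 6.8e4, 7.5e4, 8.1e4, 9.6e4, 1.1e5, 1.3e5, 1.6e5])
mu, sigma = np.log(lives).mean(), np.log(lives).std(ddof=1)
p10 = np.exp(mu + stats.norm.ppf(0.10) * sigma)
print(f"log-normal parameters: mu = {mu:.2f}, sigma = {sigma:.2f}; 10% life = {p10:.3e} cycles")
```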

A statistical framework with stiffness proportional damage sensitive features for structural health monitoring

  • Balsamo, Luciana;Mukhopadhyay, Suparno;Betti, Raimondo
    • Smart Structures and Systems
    • /
    • v.15 no.3
    • /
    • pp.699-715
    • /
    • 2015
  • A modal parameter based damage sensitive feature (DSF) is defined to mimic the relative change in any diagonal element of the stiffness matrix of a model of a structure. The damage assessment is performed in a statistical pattern recognition framework using empirical complementary cumulative distribution functions (ECCDFs) of the DSFs extracted from measured operational vibration response data. Methods are discussed to perform probabilistic structural health assessment with respect to the following questions: (a) "Is there a change in the current state of the structure compared to the baseline state?", (b) "Does the change indicate a localized stiffness reduction or increase?", with the latter representing a situation of retrofitting operations, and (c) "What is the severity of the change in a probabilistic sense?". To identify a range of normal structural variations due to environmental and operational conditions, lower and upper bound ECCDFs are used to define the baseline structural state. Such an approach attempts to decouple "non-damage" related variations from damage induced changes, and account for the unknown environmental/operational conditions of the current state. The damage assessment procedure is discussed using numerical simulations of ambient vibration testing of a bridge deck system, as well as shake table experimental data from a 4-story steel frame.
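
A brief sketch of the ECCDF-based comparison described in the abstract above: empirical complementary CDFs of the DSF are built for two baseline conditions to form a band, and the current-state ECCDF is flagged when it leaves that band. The DSF samples and the exceedance measure are assumptions for illustration, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical DSF samples: two baseline conditions (normal variability) and a current state
baseline_a = rng.normal(0.00, 0.01, 200)     # baseline, environmental condition A
baseline_b = rng.normal(0.01, 0.01, 200)     # baseline, environmental condition B
current    = rng.normal(0.05, 0.01, 200)     # current state (shifted DSF -> stiffness change)

grid = np.linspace(-0.05, 0.10, 301)

def eccdf_on_grid(samples, grid):
    """Empirical complementary CDF P[X > g] evaluated on a grid of thresholds."""
    return np.array([(samples > g).mean() for g in grid])

lower = np.minimum(eccdf_on_grid(baseline_a, grid), eccdf_on_grid(baseline_b, grid))
upper = np.maximum(eccdf_on_grid(baseline_a, grid), eccdf_on_grid(baseline_b, grid))
cur   = eccdf_on_grid(current, grid)

# Flag a change when the current ECCDF leaves the band spanned by the baseline ECCDFs
exceedance = np.maximum(cur - upper, lower - cur).clip(min=0).max()
print("maximum exceedance over the baseline band:", round(exceedance, 3))
```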

An Approximation Approach for Solving a Continuous Review Inventory System Considering Service Cost (서비스 비용을 고려한 연속적 재고관리시스템 해결을 위한 근사법)

  • Lee, Dongju;Lee, Chang-Yong
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.38 no.2
    • /
    • pp.40-46
    • /
    • 2015
  • A modular assembly system makes it possible for a variety of products to be assembled within a short lead time. In this system, necessary components are assembled with optional components tailored to customers' orders. The budget for inventory investment, composed of inventory and purchasing costs, is limited in practice, and the purchasing cost is often paid when an order arrives. Service cost is assumed to be proportional to the service level and is included in the budget constraint. We develop a heuristic procedure to find a good solution for a continuous review inventory system of the modular assembly system with a budget constraint. A regression analysis using a quadratic function based on the exponential function is applied to the cumulative distribution function of the normal distribution. With the regression result, an efficient heuristic is proposed by using an approximation for some complex functions that are composed of exponential functions only. A simple problem is introduced to illustrate the proposed heuristic.
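
A hedged sketch of the regression idea mentioned above. The abstract does not spell out the exact functional form, so the model below, which regresses the standard normal CDF onto a quadratic in $u = e^{-z}$ over a working range, is an assumption for illustration only.

```python
import numpy as np
from scipy.stats import norm

# Fit Phi(z) on z in [0, 3] with a quadratic in u = exp(-z); the exact functional form used in
# the paper is not given in the abstract, so this model is an illustrative assumption.
z = np.linspace(0.0, 3.0, 61)
u = np.exp(-z)
X = np.column_stack([np.ones_like(u), u, u**2])
coef, *_ = np.linalg.lstsq(X, norm.cdf(z), rcond=None)

approx = X @ coef
print("coefficients:", np.round(coef, 4))
print("max abs error on [0, 3]:", round(np.abs(approx - norm.cdf(z)).max(), 4))
```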

Comparison of Statistical Models for Analysis of Fatigue Life of Cable (케이블 피로 수명 해석 통계 모델 비교)

  • Suh, Jeong-In;Yoo, Sung-Won
    • Journal of the Korea Institute for Structural Maintenance and Inspection
    • /
    • v.7 no.4
    • /
    • pp.129-137
    • /
    • 2003
  • Cables in cable-supported structures are long, so it is reasonable to apply models different from those used for general steel elements. This paper compares statistical models against existing cable fatigue data, after deriving the cdf (cumulative distribution function) by modifying the log-normal distribution and the existing extremal distributions so as to include the length effect. The paper presents an appropriate model for analyzing and assessing the fatigue behavior of cables used in actual structures.
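
A minimal sketch of a length-effect modification of a fatigue-life cdf in the spirit of the abstract above: a weakest-link form $F_L(N) = 1 - (1 - F_0(N))^{L/L_0}$ applied to a log-normal base distribution. Both the base distribution parameters and the weakest-link form are assumptions, not the paper's derivation.

```python
import numpy as np
from scipy.stats import lognorm

def fatigue_cdf_with_length_effect(n_cycles, median, cov, L, L0=1.0):
    """Weakest-link modification of a log-normal fatigue-life cdf for a cable of length L.

    F_L(N) = 1 - (1 - F_0(N))**(L / L0), where F_0 is the cdf for a reference length L0.
    The log-normal base and the weakest-link form are illustrative assumptions.
    """
    sigma = np.sqrt(np.log(1.0 + cov**2))          # log-normal shape from coefficient of variation
    F0 = lognorm.cdf(n_cycles, s=sigma, scale=median)
    return 1.0 - (1.0 - F0) ** (L / L0)

N = 2.0e6                                          # cycles at which the failure probability is evaluated
for L in (1.0, 50.0, 200.0):                       # cable lengths in units of the reference length
    p = fatigue_cdf_with_length_effect(N, median=5.0e6, cov=0.4, L=L)
    print(f"L/L0 = {L:5.1f}  ->  P[failure by {N:.0e} cycles] = {p:.4f}")
```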