• Title/Summary/Keyword: Choice prediction

153 search results

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) examined both the new and used car markets at the same time to study the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes that there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage.
The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during the period January 2009 - June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both the calibration and holdout samples. The other comparison model, which assumes choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted employing a modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and restores the status quo. The new car settles down to a lower market share due to the used car's reaction.
The method enables us to find the amount of price discount needed to maintain the status quo, and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reacting cars respond to price promotion so as to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities and therefore suggests a less aggressive used car price discount in response to a new car's rebate than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility, even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification, or due to the data structure generated by typical car dealerships. At a typical dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships that carried both new and used cars of various models, the NUB model might fit the data as well as the BNU model does.
Which model better describes the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
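The two-stage structure the abstract describes (car-model choice first, new-vs-used choice second) can be sketched with standard nested logit share formulas. The utilities and the dissimilarity parameter below are illustrative values, not the paper's estimates:

```python
import math

def nested_logit_shares(utilities, dissimilarity):
    """utilities: {model: {"new": u, "used": u}}; returns P(model, condition)."""
    # Inclusive value of each nest (car model).
    inclusive = {m: math.log(sum(math.exp(u / dissimilarity) for u in conds.values()))
                 for m, conds in utilities.items()}
    denom = sum(math.exp(dissimilarity * iv) for iv in inclusive.values())
    shares = {}
    for m, conds in utilities.items():
        p_model = math.exp(dissimilarity * inclusive[m]) / denom   # first stage
        within = sum(math.exp(u / dissimilarity) for u in conds.values())
        for cond, u in conds.items():                              # second stage
            shares[(m, cond)] = p_model * math.exp(u / dissimilarity) / within
    return shares

utils = {"Jetta": {"new": 1.0, "used": 0.6},
         "Corolla": {"new": 1.2, "used": 0.8}}
shares = nested_logit_shares(utils, dissimilarity=0.5)
```

A dissimilarity parameter of 1 collapses the nesting and reproduces the plain multinomial logit (the IIA model); values above 1, as for the rejected NUB model, indicate mis-specification.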

Development of Vehicle Arrival Time Prediction Algorithm Based on a Demand Volume (교통수요 기반의 도착예정시간 산출 알고리즘 개발)

  • Kim, Ji-Hong;Lee, Gyeong-Sun;Kim, Yeong-Ho;Lee, Seong-Mo
    • Journal of Korean Society of Transportation
    • /
    • v.23 no.2
    • /
    • pp.107-116
    • /
    • 2005
  • Travel time information is one of the most important kinds of data for controlling traffic congestion efficiently when providing traffic information to drivers. In particular, it is a major element in drivers' route choices, on the premise that it reflects the real situation with a high degree of confidence. This study developed a vehicle arrival time prediction algorithm called "VAT-DV" for six corridors totaling 6.1 km in the "Nam-san area traffic information system", in order to provide congestion information to drivers via VMS, ARS, and the Web. The spatial scope of this study is the 2.5 km~3 km section of each corridor, but traffic flow conditions vary widely over short periods because each corridor has signalized intersections at its departure and arrival points, so the corridors exhibit characteristics of both interrupted and uninterrupted traffic flow. The algorithm uses information on demand volume and queue length. The demand volume is estimated from the density at each point based on the Greenberg model, and the queue length from the density and speed at each point. To smooth variation across unit time intervals, the output of the algorithm is strategically regulated using AVI (Automatic Vehicle Identification), one of the number-plate matching methods. In this study, the AVI travel time information is combined in a hybrid model, using ILD to classify the characteristics of the traffic flow along the queue length, so as to serve as the basic parameter for producing a single daily travel time. According to the results of this study, the algorithm achieves more than about 84% accuracy under congested conditions. In particular, the results of operating the "Nam-san area traffic information system" show an availability rate of 72.6% among drivers.
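The Greenberg speed-density relation mentioned above gives the flow (demand volume) implied by a point density. A minimal sketch, in which the optimal speed `v0` and jam density `kj` are assumed parameters rather than values from the paper:

```python
import math

def greenberg_speed(density, v0, kj):
    """Greenberg model: v = v0 * ln(kj / k)."""
    return v0 * math.log(kj / density)

def demand_volume(density, v0, kj):
    """Flow q = k * v (veh/h) implied by the Greenberg relation."""
    return density * greenberg_speed(density, v0, kj)

kj, v0 = 150.0, 30.0        # jam density (veh/km) and optimal speed (km/h), assumed
k_opt = kj / math.e         # under Greenberg, flow peaks at k = kj / e
q_max = demand_volume(k_opt, v0, kj)
```

Setting dq/dk = 0 gives the peak flow v0 * kj / e, which is why densities near kj/e mark the boundary between demand-driven and queue-driven regimes.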

New Development of Methods for Environmental Impact Assessment Facing Uncertainty and Cumulative Environmental Impacts (불확실성과 누적환경영향하에서의 환경영향평가를 위한 방법론의 새로운 개발)

  • Pietsch, Jurgen
    • Journal of Environmental Impact Assessment
    • /
    • v.4 no.3
    • /
    • pp.87-94
    • /
    • 1995
  • At both the international and national levels, such as in the Rio Declaration and the EU's Fifth Environmental Action Programme, governments have committed themselves to the adoption of the precautionary principle (UNCED 1992, CEC 1992). These commitments mean that the existence of uncertainty in appraising policies and proposals for development should be acknowledged. Uncertainties arise both in the prediction of impacts and in the evaluation of their significance, particularly for those cumulative impacts which are individually insignificant but cumulatively damaging. The EC network of EIA experts stated at their last meeting in Athens that indirect effects and the treatment of uncertainty are among the main deficiencies of current EIA practice. Uncertainties in decision-making arise where choices have been made in the development of the policy or proposal, such as the selection of options, the justification for that choice, and the selection of different indicators to comply with different regulatory regimes. It is also likely that a weighting system for evaluating significance will have been used, which may be implicit rather than explicit. Those involved in decision-making may tolerate uncertainty differently than members of the public, for instance in considering the worst-case scenario. Possible methods for dealing with these uncertainties include scenarios, sensitivity analysis, presentation of points of view, decision analysis, postponing decisions, and graphical methods. Understanding the development of cumulative environmental impacts calls for socio-economic as well as ecological investigation. Since cumulative impacts originate mainly in centres of urban or industrial development, an analysis of the future growth effects that might be induced by particular developments is especially important. Not least, it is a matter of sustainability to connect this issue with ecological research.
The serious attempt to reduce the area of uncertainty in environmental planning is a challenge and an important step towards reliable planning and sustainable development.

Analysis of Spatial Variability for Infiltration Rate of Field Soils II. Kriging (토양중(土壤中) 물의 침투속도(浸透速度)의 공간변이성(空間變異性) 분석(分析) II. Kriging)

  • Park, Chang-Seo;Kim, Jai-Joung;Cho, Seong-Jin
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.17 no.1
    • /
    • pp.18-23
    • /
    • 1984
  • The spatial variability of 96 laboratory-measured infiltration rates on the Hwadong SiCL was studied using geostatistical concepts. The measurements were made at the nodes of a regular grid consisting of 12 rows and 8 columns; sample spacing within rows and columns was 3 m and 2 m, respectively. Kriging was used as a means of spatial prediction for the infiltration rate. It is optimal in the sense that it provides estimates at unrecorded places without bias and with minimum and known variance. An attempt was made with the original data to verify the validity of all assumptions (stationarity, variogram models, etc.) by a jack-knifing procedure and frequency distributions. The choice of variogram model made little difference to the calculated kriged values and variances compared with alternatives such as the linear model, justifying the simpler choice. The correlation coefficient for a one-to-one relationship between measured and kriged values was found to be 0.308, which was not statistically significant at the 1% level.
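Ordinary kriging solves a small linear system built from the variogram, with a Lagrange multiplier enforcing unbiasedness (weights summing to 1). A minimal sketch with a linear variogram gamma(h) = b*h; the coordinates, values, and slope `b` are illustrative, not the study's data:

```python
import numpy as np

def ordinary_krige(coords, values, target, b=1.0):
    """Ordinary kriging estimate and kriging variance at `target`."""
    n = len(coords)
    d = lambda p, q: float(np.linalg.norm(np.asarray(p) - np.asarray(q)))
    A = np.zeros((n + 1, n + 1))
    for i in range(n):
        for j in range(n):
            A[i, j] = b * d(coords[i], coords[j])   # gamma between samples
        A[i, n] = A[n, i] = 1.0                     # unbiasedness constraint
    rhs = np.array([b * d(c, target) for c in coords] + [1.0])
    w = np.linalg.solve(A, rhs)                     # weights + Lagrange multiplier
    estimate = float(np.dot(w[:n], values))
    variance = float(np.dot(w, rhs))                # kriging variance
    return estimate, variance

coords = [(0, 0), (3, 0), (0, 2), (3, 2)]   # 3 m x 2 m spacing, as in the grid design
vals = [4.0, 6.0, 5.0, 7.0]
est, var = ordinary_krige(coords, vals, target=(1.5, 1.0))
```

At the centre of this symmetric layout the weights are all 0.25, so the estimate is simply the mean of the four samples; the known variance is what makes kriging "optimal" in the abstract's sense.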

Analysis of Elementary Textbooks and Guidebook for Teacher regarding the Classification of Angles and Triangles in the Constructivist Perspective (구성주의 관점에서 각과 삼각형의 분류에 관한 초등 교과서 및 교사용지도서 분석)

  • Roh, Eun Hwan;Kang, Jeong Gi
    • Communications of Mathematical Education
    • /
    • v.29 no.3
    • /
    • pp.313-330
    • /
    • 2015
  • Classification is an important activity that is directly related to concept formation, so learning to classify needs to be made meaningful through learner-centered teaching. But we doubt whether the teaching and learning of classification reflect the constructivist, 'learner-centered' philosophy well. The purpose of this study was to critically analyze the content of elementary textbooks and the teachers' guidebook relating to the classification of angles and triangles in terms of constructivism. As a result, we found a problem in the classification of angles: learners are not given a reasonable chance to set the criteria by agreement within their communities. There is a problem in the classification of triangles in that it develops too abruptly in terms of diversity. In addition, students' responses are predicted as if they had already acquired the knowledge, and the opportunity to choose between, and discuss, hierarchical and partitional classification is not provided. Based on these findings, the following are proposed: faithful reflection of the 'learner-centered' principle, careful prediction of student responses, and teaching that focuses on process rather than results.

Prediction of Texture Evolution of Aluminum Extrusion Processes using Rigid-Plastic Finite Element Method based on Rate-Independent Crystal Plasticity (강소성 유한 요소 해석에 연계한 Rate-Independent 결정소성학을 이용한 3차원 알루미늄 압출재에서의 변형 집합 조직 예측)

  • Kim K.J.;Yang D.Y.;Yoon J.W.
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2005.06a
    • /
    • pp.485-488
    • /
    • 2005
  • Most metals are polycrystalline materials whose deformation is dominated by slip systems. During deformation, the orientations of the slip systems are rearranged into preferred orientations, leading to a deformation-induced crystallographic texture called deformation texture. Depending on the texture development, the properties of the material can change. Rate-independent crystal plasticity based on the Schmid law as a yield function suffers from non-uniqueness in the choice of active slip systems. In this work, to avoid this slip system ambiguity problem, a rate-independent crystal plasticity model based on a smooth yield surface with rounded-off corners is adopted. In order to simulate polycrystalline material under plastic deformation, we employ the Taylor model of polycrystal behavior, in which all grains are assumed to be subjected to the macroscopic velocity gradient. A rigid-plastic finite element program based on this rate-independent crystal plasticity is developed to predict the grain-level deformation behavior of FCC metals during metal forming processes. In the finite element calculation, each integration point is treated as a crystalline aggregate containing a number of crystals, and the macroscopic behavior of the material is deduced from the behavior of the aggregates. As applications, extrusion processes are simulated and the changes in mechanical properties are predicted.
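The Schmid law the abstract refers to resolves the stress tensor onto each slip system: tau = sigma : sym(s ⊗ n) for slip direction s and plane normal n. A hedged sketch for a uniaxial load and three FCC {111}<110> systems, all illustrative choices:

```python
import numpy as np

def resolved_shear(sigma, slip_dir, plane_normal):
    """Resolved shear stress on a slip system via the Schmid law."""
    s = np.asarray(slip_dir, float);  s /= np.linalg.norm(s)
    n = np.asarray(plane_normal, float);  n /= np.linalg.norm(n)
    m = 0.5 * (np.outer(s, n) + np.outer(n, s))   # symmetric Schmid tensor
    return float(np.tensordot(sigma, m, axes=2))

sigma = np.diag([100.0, 0.0, 0.0])                # uniaxial tension along x (MPa, assumed)
systems = [([0, 1, -1], [1, 1, 1]),
           ([1, 0, -1], [1, 1, 1]),
           ([1, -1, 0], [1, 1, 1])]
taus = [resolved_shear(sigma, s, n) for s, n in systems]
```

Several systems attain the same resolved shear stress under such symmetric loading, which is exactly the active-slip-system ambiguity that the rounded-off smooth yield surface is meant to remove.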

A simplified method for estimating the fundamental period of masonry infilled reinforced concrete frames

  • Jiang, Rui;Jiang, Liqiang;Hu, Yi;Ye, Jihong;Zhou, Lingyu
    • Structural Engineering and Mechanics
    • /
    • v.74 no.6
    • /
    • pp.821-832
    • /
    • 2020
  • The fundamental period is an important parameter for the seismic design and seismic risk assessment of building structures. In this paper, a simplified theoretical method to predict the fundamental period of masonry-infilled reinforced concrete (RC) frames is developed based on basic engineering mechanics. Different configurations of the RC frame as well as the masonry walls are taken into account in the developed method. The fundamental period of the infilled structure is calculated from the integration of the lateral stiffness of the RC frame and masonry walls along the height. A correction coefficient is introduced to control the error of the period estimate, and it is determined by multiple linear regression analysis. The corrected formula is verified against shaking table tests on two masonry-infilled RC frame models, with errors between the estimated and test periods of 2.3% and 23.2%. Finally, a probability-based procedure is proposed for the corrected formula, allowing structural engineers to select an appropriate fundamental period with a certain safety redundancy. The proposed method can be applied quickly and flexibly, can be hand-calculated, and is easily understood; thus it is a good choice for determining the fundamental period of RC frames infilled with masonry walls in engineering practice, as an alternative to existing methods.
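The idea described — combine frame and infill lateral stiffness, reduce to an equivalent oscillator, then apply a regression-based correction coefficient — can be sketched as below. The parallel/series spring reduction, the correction coefficient `alpha`, and all numbers are assumptions for illustration, not the paper's actual formula:

```python
import math

def fundamental_period(storeys, m_eff, alpha=1.0):
    """Equivalent-SDOF period estimate, T = alpha * 2*pi*sqrt(m_eff / k_eff).

    storeys: list of (k_frame, k_infill) lateral stiffnesses per storey (N/m).
    Frame and infill act in parallel within a storey; storeys act in series.
    alpha: regression-based correction coefficient (assumed here).
    """
    k_eff = 1.0 / sum(1.0 / (kf + ki) for kf, ki in storeys)
    return alpha * 2.0 * math.pi * math.sqrt(m_eff / k_eff)

storeys = [(2.0e7, 1.5e7)] * 5                  # (frame, infill) stiffness, assumed
T = fundamental_period(storeys, m_eff=5.0e5, alpha=0.9)  # effective mass in kg
```

The hand-calculable form is the point: adding infill stiffness shortens the period, and the correction coefficient absorbs the systematic error of the simplification.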

The Hybrid Multi-layer Inference Architectures and Algorithms of FPNN Based on FNN and PNN (FNN 및 PNN에 기초한 FPNN의 합성 다층 추론 구조와 알고리즘)

  • Park, Byeong-Jun;O, Seong-Gwon;Kim, Hyeon-Gi
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.49 no.7
    • /
    • pp.378-388
    • /
    • 2000
  • In this paper, we propose Fuzzy Polynomial Neural Networks (FPNN), based on Polynomial Neural Networks (PNN) and Fuzzy Neural Networks (FNN), for model identification of complex and nonlinear systems. The proposed FPNN is generated from a mutually combined structure of FNN and PNN, which serve as the premise part and consequence part of the FPNN structure, respectively. As the consequence part of the FPNN, the PNN is based on the Group Method of Data Handling (GMDH), and its structure is similar to that of neural networks; unlike conventional neural networks, however, the structure of the PNN is not fixed but self-organizing, generated during learning. Through the combination of FNN with PNN, the FPNN handles multiple input variables and high-order polynomials effectively. Accordingly, it is possible to capture the nonlinear characteristics of a process and obtain better output performance with superb predictive ability. As the premise part of the FPNN, the FNN uses simplified fuzzy inference as its inference method and the error back-propagation algorithm as its learning rule. Parameters such as membership function parameters, learning rates, and momentum coefficients are adjusted using genetic algorithms. We use two kinds of FNN structure according to how the fuzzy input space is divided: a basic FNN structure, which uses a fuzzy input space divided by each input variable separately, and a modified FNN structure, which uses a fuzzy input space divided by mutually combined input variables. In order to evaluate the performance of the proposed models, we use a nonlinear function and a traffic route choice process. The results show that the proposed FPNN produces models with higher accuracy and more robustness than previously presented methods. Performance indices related to the approximation and prediction capabilities of the models are also evaluated and discussed.
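A single GMDH-style node of the kind the PNN consequence part is built from fits a low-order polynomial of two inputs by least squares. A minimal sketch with synthetic data (the basis and data are illustrative, not the paper's configuration):

```python
import numpy as np

def gmdh_basis(x1, x2):
    """Second-order two-input polynomial basis used by a GMDH-style node."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

def gmdh_node_fit(x1, x2, y):
    coef, *_ = np.linalg.lstsq(gmdh_basis(x1, x2), y, rcond=None)
    return coef

def gmdh_node_predict(coef, x1, x2):
    return gmdh_basis(x1, x2) @ coef

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
y = 1.0 + 2.0 * x1 - x2 + 0.5 * x1 * x2     # known target within the basis
coef = gmdh_node_fit(x1, x2, y)
pred = gmdh_node_predict(coef, x1, x2)
```

In a full PNN, many such nodes are generated layer by layer and the best-performing ones are kept, which is what makes the network structure self-organizing rather than fixed.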

Demand and Supply Forecast of Milk and the Consumer's Attitude for Milk Purchase (우유수급예측(牛乳需給豫測)과 소비자(消費者)의 우유구매태도(牛乳購買態度))

  • Park, Chong Soo;Ra, Chung Hee
    • Korean Journal of Agricultural Science
    • /
    • v.16 no.1
    • /
    • pp.71-83
    • /
    • 1989
  • The purposes of this research are to forecast the demand and supply of milk in Korea, and to obtain information on the attitudes affecting milk consumption that is needed to plan for increased milk consumption in Korea. The estimation of milk demand and production was made by the multiplicative decomposition method and by a statistical function. Data on consumers were collected from 737 students attending primary school, middle school, and university in Daejeon during the period July 11 to July 21, 1988. The results obtained are as follows: 1. The prediction results showed that milk production will exceed demand by 21,900 tons in 1990 and 70,800 tons in 1995 by the multiplicative decomposition method, and by 45,400 tons in 1990 and -51,500 tons in 1995 by the statistical function. 2. Almost all the students regarded milk as an essential or common foodstuff for Koreans. 3. Quite a few students were apt to believe that milk processors add water to fluid milk. 4. Most students reported obtaining information about the nutritional value of milk from school education and from advertising on TV, radio, and in print media. 5. However, advertising on TV, radio, and in print media hardly influenced consumers' choice of a particular milk brand. Accordingly, the conclusions are as follows: 1. Consumers need to be provided with well-planned education programs on the nutritional value of milk. 2. Heavy brand advertising for fluid milk may mislead consumers, since city milk is not much differentiated in Korea; therefore milk processors should put more effort into generic milk promotion by reducing brand advertising. 3. The milk processors should provide the major portion of the financing for a generic milk promotion program.
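The multiplicative decomposition idea behind the forecast models a series as trend × seasonal index. A minimal sketch on a synthetic quarterly series; the least-squares trend step and the toy data are stand-ins, not the paper's actual data or procedure:

```python
def decompose_multiplicative(xs, period):
    """Split xs into a linear trend and seasonal indices averaging to 1."""
    n = len(xs)
    t = list(range(n))
    tbar, xbar = sum(t) / n, sum(xs) / n
    slope = (sum((ti - tbar) * (xi - xbar) for ti, xi in zip(t, xs))
             / sum((ti - tbar) ** 2 for ti in t))
    intercept = xbar - slope * tbar
    trend = [intercept + slope * ti for ti in t]            # least-squares trend
    ratios = [x / tr for x, tr in zip(xs, trend)]           # detrended ratios
    seasonal = [sum(ratios[i::period]) / len(ratios[i::period])
                for i in range(period)]
    mean_s = sum(seasonal) / period
    return trend, [s / mean_s for s in seasonal]            # normalize to mean 1

# Quarterly toy series with a known 0.9 / 1.1 / 1.0 / 1.0 seasonal pattern.
xs = [(100 + 5 * t) * [0.9, 1.1, 1.0, 1.0][t % 4] for t in range(12)]
trend, seasonal = decompose_multiplicative(xs, 4)
```

A forecast then extrapolates the trend and multiplies by the seasonal index of the target period, which is how a decomposition method produces the year-ahead surplus figures quoted above.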

Stock-based Managerial Compensation and Risk-taking in Bank (은행 임원의 주식기준 보상과 위험추구)

  • Yeo, Eunjung;Yoon, Kyoung-Soo;Lee, Hojun
    • KDI Journal of Economic Policy
    • /
    • v.33 no.2
    • /
    • pp.41-79
    • /
    • 2011
  • This study examines the compensation schemes for executives and risk-taking behavior in Korean banks. Theoretically, shareholders prefer riskier asset choices than the optimal one because of the limited-liability feature of their payoffs, and stock-based executive compensation may induce choices favorable to shareholders. We empirically test this risk-taking hypothesis using Korean banks' data. Since only stock option data are available under the current disclosure system, we limit our analysis to the relationship between compensation through stock options and bank risk. The results provide no evidence that stock option compensation increases bank risk, which is contrary to the theoretical prediction and to preceding studies in the US. This may be due to factors omitted from the executive compensation data, or to regulatory effects on bank management.
