• Title/Summary/Keyword: Logit Models

211 search results

A Study on Commercial Power of Traditional Market

  • Baik, Key-Young; Youn, Myoung-Kil
    • East Asian Journal of Business Economics (EAJBE) / v.4 no.2 / pp.1-11 / 2016
  • This study investigated the commercial power theory of the traditional market through a literature review. Consumers' store-selection models comprise theories based on normative hypotheses, interaction theories, utility-function estimation models, and cognitive-behavioral models. The detailed models are as follows. Normative-hypothesis-based theory divides into Reilly's law of retail gravitation and Converse's revised retail gravitation theory. Interaction theory comprises Huff's probabilistic gravitation model, the MCI model, and the Multinomial Logit Model (MNL). Retail-organization location theory includes four models: central place theory, single-store location theory, the multi-store location-assignment model, and the retail growth-potential model. For single-store location theory, theoretical and empirical techniques have been developed for deciding the optimal single-store location, including the checklist (the simplest and most systematic method), the analog method, and microanalysis techniques. The aforementioned models are theoretical and mathematical measurements and/or models of commercial power. The study has limitations because the factors included in the formulas capture only part of actual commercial power; therefore, further study of commercial-power areas and variables should continue.
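Huff's probabilistic gravitation model mentioned above can be sketched in a few lines: a consumer's probability of patronising store j is its attraction (size over travel time raised to a decay exponent) divided by the total attraction of all stores. The store sizes, travel times, and decay value below are illustrative, not data from the study.

```python
def huff_probabilities(sizes, travel_times, decay=2.0):
    """P_j = (S_j / T_j^decay) / sum_k (S_k / T_k^decay)."""
    attractions = [s / (t ** decay) for s, t in zip(sizes, travel_times)]
    total = sum(attractions)
    return [a / total for a in attractions]

# three competing stores: floor area (m^2) and travel time (minutes)
probs = huff_probabilities(sizes=[1000, 500, 2000], travel_times=[10, 5, 20])
```

With these numbers the small nearby store wins: its attraction 500/5² = 20 beats 1000/10² = 10 and 2000/20² = 5, illustrating how distance decay can outweigh raw size.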

Bankruptcy Prediction Model with AR process (AR 프로세스를 이용한 도산예측모형)

  • 이군희; 지용희
    • Journal of the Korean Operations Research and Management Science Society / v.26 no.1 / pp.109-116 / 2001
  • The detection of corporate failures is a subject that has been particularly amenable to cross-sectional financial ratio analysis. For most firms, however, financial data are available over several past years, so a model utilizing these longitudinal data could provide useful information for bankruptcy prediction. To correctly reflect the longitudinal and firm-specific data, a generalized linear model assuming a first-order autoregressive (AR) process is proposed. The method is motivated by clinical research in which several characteristics are measured repeatedly on individuals over time. The model is compared with several other predictive models to evaluate its performance. Using financial data from manufacturing corporations listed on the Korea Stock Exchange (KSE), we discuss lessons learned about the sampling scheme, variable transformation, imputation, variable selection, and model evaluation. Finally, implications of the repeated-measurement model and future directions of research are discussed.

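The first-order autoregressive structure assumed for a firm's repeated financial-ratio measurements can be illustrated with a short simulation; the mean, persistence, and noise values below are made up, not estimates from the paper.

```python
import random

def simulate_ar1(mu, phi, sigma, n, seed=1):
    """x_t = mu + phi * (x_{t-1} - mu) + e_t, with |phi| < 1 and e_t ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    x = [mu]                      # start the series at its long-run mean
    for _ in range(n - 1):
        x.append(mu + phi * (x[-1] - mu) + rng.gauss(0.0, sigma))
    return x

# ten years of a firm's (simulated) debt-ratio measurements
series = simulate_ar1(mu=0.4, phi=0.8, sigma=0.05, n=10)
```

Because phi is close to 1, consecutive measurements are strongly correlated, which is exactly the within-firm dependence the proposed model accounts for instead of treating each year as independent.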

Patterns of Delinquent Behavior Trajectory and Their Effect Factors (비행행동의 발달궤적 및 영향요인)

  • Kim, Se-Won; Lee, Bong-Joo
    • Korean Journal of Child Studies / v.30 no.5 / pp.103-117 / 2009
  • This study examined patterns of delinquent-behavior trajectories from late childhood to early adolescence and the relationships between trajectory patterns and individual, family, and school factors. Youth delinquent-behavior trajectories were examined with growth mixture models using data from the 2nd to 5th year surveys of the Seoul Panel Study of Children; relationships between patterns and effect factors were examined with multinomial logit models. Four patterns emerged: non-delinquency (80%), rapidly accelerating delinquency (3.3%), decelerating delinquency (6.0%), and moderately accelerating delinquency (10.7%). Contact with delinquent peers had persistent effects on the more serious delinquent-behavior trajectories. Higher levels of self-esteem and school achievement prevented increases in delinquent behaviors, while close relationships with parents and parental supervision led to decreases in delinquent behaviors.

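The multinomial logit step that relates covariates to trajectory-group membership can be sketched as follows. The group labels echo the study, but the coefficients and covariate values are invented for illustration, not the paper's estimates.

```python
import math

def mnl_probabilities(x, coefs):
    """P(group g | x) = exp(b_g . x) / sum_h exp(b_h . x)."""
    scores = [sum(b * xi for b, xi in zip(beta, x)) for beta in coefs]
    m = max(scores)                          # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# covariates: [intercept, self-esteem, delinquent-peer contact]
coefs = [[0.0, 0.0, 0.0],     # non-delinquency (reference group, coefficients fixed at 0)
         [-2.0, -0.5, 1.2],   # rapidly accelerating delinquency
         [-1.5, -0.2, 0.6]]   # decelerating delinquency
probs = mnl_probabilities([1.0, 0.3, 1.0], coefs)
```

Fixing the reference group's coefficients at zero is the usual identification constraint; the other groups' coefficients are then log-odds relative to non-delinquency.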

Forecasting Future Market Share between Online-and Offline-Shopping Behavior of Korean Consumers with the Application of Double-Cohort and Multinomial Logit Models (생잔효과와 다중로짓모형으로 분석한 구매형태별 시장점유율 예측)

  • Lee, Seong-Woo; Yun, Seong-Do
    • Journal of Distribution Research / v.14 no.1 / pp.45-65 / 2009
  • As the number of people using the internet for shopping steadily rises, it is increasingly important for retailers to understand why consumers decide to buy products online or offline. The main purpose of this study is to develop and test a model that enhances our understanding of how consumers will respond to future online and offline channels when purchasing. Rather than merely adopting statistical models like most other studies in this field, the present study develops a model that combines the double-cohort method with a multinomial logit model. It is desirable to adopt an overall encompassing criterion in the study of consumer behavior across diverse sales channels, and this study uses the concept of cohort, or aging, to enable such comparison. It allows us to analyze how consumers respond to online and offline channels as they age, by measuring their shopping behavior toward online and offline retailers and their subsequent purchase intentions. Based on the empirical findings, the study concludes with policy implications and directions for future research.


Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae; Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults have a ripple effect on the local and national economy, beyond stakeholders such as managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government restructured immediately after the global financial crisis, it concentrated only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables driving corporate defaults vary over time: comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of the predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, the time-dependent bias must be compensated by a time-series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009, divided into training, validation, and test data of 7, 2, and 1 years, respectively.
To construct a consistent bankruptcy model across this period, we first train the deep learning time-series models using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time-series algorithms is conducted with validation data covering the financial crisis period (2007~2008); the resulting model shows patterns similar to the training data and excellent predictive power. Each bankruptcy prediction model is then refit on the combined training and validation data (2000~2008), applying the optimal parameters found in validation. Finally, the models trained over nine years are evaluated and compared on the test data (2009), demonstrating the usefulness of a corporate default prediction model based on a deep learning time-series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that deep learning time-series models based on the three variable bundles are useful for robust corporate default prediction. The definition of bankruptcy follows Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups, and the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time-series algorithms are compared. Corporate data pose the problems of nonlinear variables, multicollinearity, and lack of data.
The logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time-series algorithm, with a variable-data generation method, compensates for the lack of data. Big-data technology is moving from simple human analysis to automated AI analysis and, eventually, to intertwined AI applications. Although the study of corporate default prediction models using time-series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for default prediction modeling and offers better predictive power. Governments at home and abroad are working to integrate such systems into everyday life, yet deep learning time-series research for the financial industry remains insufficient. As an initial study of deep learning time-series analysis of corporate defaults, it is hoped that this work will serve as comparative material for non-specialists beginning to combine financial data with deep learning time-series algorithms.
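The Lasso variable-selection step added alongside discriminant analysis and the logit model can be sketched with plain coordinate descent. The toy design matrix, response, and penalty below are illustrative, not the study's financial ratios.

```python
def soft_threshold(rho, lam):
    """Shrink rho toward zero by lam; set it to exactly zero inside [-lam, lam]."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residuals with feature j removed from the current fit
            resid = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                     for i in range(n)]
            rho = sum(X[i][j] * resid[i] for i in range(n)) / n
            denom = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / denom
    return beta

# y depends on the first feature only; the second is pure noise
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.05], [4.0, -0.1]]
y = [2.1, 3.9, 6.0, 8.1]
beta = lasso(X, y, lam=0.5)   # the noise coefficient is shrunk to exactly zero
```

The soft-thresholding step is what makes Lasso a variable-selection method: small partial correlations are set to exactly zero, dropping those variables from the bundle.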

Solution Algorithms for Logit Stochastic User Equilibrium Assignment Model (확률적 로짓 통행배정모형의 해석 알고리듬)

  • 임용택
    • Journal of Korean Society of Transportation / v.21 no.2 / pp.95-105 / 2003
  • Because the basic assumption of deterministic user equilibrium assignment, that all network users have perfect information about network conditions and choose their routes without error, is known to be unrealistic, several stochastic assignment models have been proposed to relax it. However, such stochastic assignment models are not easy to solve because of the probability distributions they assume. Also, to avoid full path enumeration, they restrict the feasible path set and thus cannot precisely describe travel behavior when travel costs vary during a network-loading step. Another problem of stochastic assignment models stems from their use of heuristic approaches to determine the optimal move size, owing to the difficulty of evaluating their objective functions. This paper presents a logit-based stochastic assignment model and a solution algorithm to cope with these problems, and provides the stochastic user equilibrium condition of the model. The model is path-based, with all feasible paths enumerated in advance. This approach demands more computation than link-based methods, but it has advantages: it describes travel behavior more exactly, and the computing time is not as large as one might expect, because the path set is calculated only once, in the initial step. Two numerical examples are given to assess the model and compare it with other methods.
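The logit network-loading step over an enumerated path set can be sketched as follows: origin-destination demand is split across paths in proportion to exp(-theta × cost). The path costs, demand, and dispersion parameter theta are illustrative values, not from the paper's examples.

```python
import math

def logit_loading(path_costs, demand, theta=0.5):
    """Assign OD demand to enumerated paths with logit choice probabilities."""
    m = min(path_costs)
    weights = [math.exp(-theta * (c - m)) for c in path_costs]  # shifted for stability
    total = sum(weights)
    return [demand * w / total for w in weights]

# three enumerated paths between one OD pair, costs in minutes
flows = logit_loading(path_costs=[10.0, 12.0, 15.0], demand=100.0)
```

Larger theta concentrates flow on the cheapest path (approaching deterministic assignment), while theta near zero spreads demand almost evenly, which is how the dispersion parameter captures imperfect information.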

Parameter estimation for the imbalanced credit scoring data using AUC maximization (AUC 최적화를 이용한 낮은 부도율 자료의 모수추정)

  • Hong, C.S.; Won, C.H.
    • The Korean Journal of Applied Statistics / v.29 no.2 / pp.309-319 / 2016
  • For binary classification models, we consider a risk score that is a function of linear scores and estimate the coefficients of the linear scores. There are two estimation methods: obtaining MLEs using logistic models, or estimating by maximizing the AUC. AUC-based estimates are better than logistic-model MLEs in general situations where the logistic assumptions do not hold. This paper considers imbalanced data, which contain fewer observations in the default class than in the non-default class of credit assessment models, and applies the AUC approach to such data. Various logit link functions are used to generate the imbalanced data. It is found that coefficients obtained by the AUC approach are equivalent or superior to those from logistic models for low-default-probability, imbalanced data.
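The AUC criterion being maximized can be computed directly as a Mann-Whitney statistic: the proportion of (default, non-default) pairs in which the default firm receives the higher linear score. The toy covariates and coefficients below are illustrative, not the paper's data.

```python
def linear_score(x, beta):
    return sum(b * xi for b, xi in zip(beta, x))

def auc(scores_pos, scores_neg):
    """P(score of a random default > score of a random non-default); ties count 1/2."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# imbalanced toy sample: 2 defaults vs 4 non-defaults
defaults     = [[2.0, 1.0], [1.5, 2.0]]
non_defaults = [[0.5, 0.2], [1.0, 0.1], [0.2, 0.5], [0.8, 0.3]]
beta = [1.0, 0.5]
a = auc([linear_score(x, beta) for x in defaults],
        [linear_score(x, beta) for x in non_defaults])
```

Because the AUC depends only on the ranking of scores, it is invariant to positive rescaling of beta; only the direction of the coefficient vector is identified, which is why AUC maximization estimates coefficients up to scale.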

A case of corporate failure prediction

  • Shin, Kyung-Shik; Jo, Hongkyu; Han, Ingoo
    • Proceedings of the Korean Operations and Management Science Society Conference / 1996.10a / pp.199-202 / 1996
  • Although numerous studies demonstrate that one technique outperforms the others on a given data set, there is often no way to tell a priori which technique will be most effective for a specific problem. Alternatively, it has been suggested that a better approach to classification problems might be to integrate several different forecasting techniques by combining their results. The issue of interest is how to integrate different modeling techniques to increase prediction performance. This paper proposes a post-model integration method, in which integration is performed after the individual techniques produce their own outputs, by finding the best combination of each method's results. To obtain an optimal or near-optimal combination of the different prediction techniques, Genetic Algorithms (GAs) are applied, which are particularly suitable for multi-parameter optimization problems with an objective function subject to numerous hard and soft constraints. This study applied three individual classification techniques (discriminant analysis, logit, and neural networks) as base models in the corporate failure prediction context. The results of the composite prediction were compared with the individual models. Preliminary results suggest that integrated methods offer improved performance on business classification problems.

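The GA-based post-model integration idea can be sketched as a search for combination weights over the outputs of the three base classifiers. The "base model outputs" below are made-up default probabilities for six firms, and the GA settings are illustrative; this is a sketch of the general technique, not the authors' implementation.

```python
import random

random.seed(0)

# (discriminant, logit, neural-net) default probabilities, plus the true label
base_preds = [(0.9, 0.3, 0.4, 1), (0.1, 0.7, 0.6, 0),
              (0.8, 0.4, 0.6, 1), (0.2, 0.6, 0.5, 0),
              (0.7, 0.6, 0.3, 1), (0.3, 0.5, 0.7, 0)]

def accuracy(weights):
    """Share of firms the weighted-average score classifies correctly at 0.5."""
    total = sum(weights)
    hits = 0
    for p1, p2, p3, label in base_preds:
        score = (weights[0] * p1 + weights[1] * p2 + weights[2] * p3) / total
        hits += int((score >= 0.5) == bool(label))
    return hits / len(base_preds)

def evolve(pop_size=20, generations=30):
    """Evolve positive combination weights that maximize classification accuracy."""
    pop = [[random.random() + 0.01 for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=accuracy, reverse=True)
        parents = pop[:pop_size // 2]                          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]     # uniform crossover
            i = random.randrange(3)
            child[i] = max(0.01, child[i] + random.gauss(0, 0.1))   # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=accuracy)

best_weights = evolve()
```

Because the top half of each generation survives unchanged, the best combination found never degrades across generations; soft constraints (e.g. penalizing extreme weights) could be folded into the fitness function, which is the flexibility the abstract attributes to GAs.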

The Hybrid Systems for Credit Rating

  • Goo, Han-In; Jo, Hong-Kyuo; Shin, Kyung-Shik
    • Journal of the Korean Operations Research and Management Science Society / v.22 no.3 / pp.163-173 / 1997
  • Although numerous studies demonstrate that one technique outperforms the others on a given data set, it is hard to tell a priori which technique will be most effective for a specific problem. It has been suggested that a better approach to classification problems might be to integrate several different forecasting techniques by combining their results. The issue of interest is how to integrate different modeling techniques to increase predictive performance. This paper proposes a post-model integration method, which tries to find the best combination of the results provided by the individual techniques. To obtain an optimal or near-optimal combination of the different prediction techniques, Genetic Algorithms (GAs) are applied, which are particularly suitable for multi-parameter optimization problems with an objective function subject to numerous hard and soft constraints. This study applies three individual classification techniques (discriminant analysis, logit model, and neural networks) as base models for corporate failure prediction. The results of the composite predictions are compared with the individual models. Preliminary results suggest that the use of integrated methods improves the performance of business classification.


Tree Size Distribution Modelling: Moving from Complexity to Finite Mixture

  • Ogana, Friday Nwabueze; Chukwu, Onyekachi; Ajayi, Samuel
    • Journal of Forest and Environmental Science / v.36 no.1 / pp.7-16 / 2020
  • Tree size distribution modelling is an integral part of forest management, and most distribution yield systems rely on flexible probability models. In this study, a simple finite mixture of two two-parameter Weibull components was compared with complex four-parameter distributions in terms of fit to the tree size distributions of teak (Tectona grandis Linn. f.) plantations. A system of equations was also developed using Seemingly Unrelated Regression to predict the size distributions of the stand. The generalized beta, Johnson's SB, Logit-Logistic, and generalized Weibull distributions were the four-parameter distributions considered, and the Kolmogorov-Smirnov test and negative log-likelihood value were used to assess the distributions. The results show that the simple finite mixture outperformed the four-parameter distributions, especially in stands that are bimodal and heavily skewed. Twelve models were developed in the system of equations: one for predicting mean diameter, seven for predicting percentiles, and four for predicting the parameters of the finite mixture distribution. Predictions from the system of equations are reasonable and compare well with the observed stand distributions. This simplified mixture allows for wider application in distribution modelling and can also be integrated as a component model in stand density management diagrams.
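The two-component Weibull mixture favoured above can be sketched directly from its density: f(d) = w·f₁(d) + (1-w)·f₂(d), where each fₖ is a two-parameter Weibull. The mixing weight and the shape/scale values below are illustrative, not the fitted teak-stand estimates.

```python
import math

def weibull_pdf(d, shape, scale):
    """Two-parameter Weibull density at diameter d."""
    return (shape / scale) * (d / scale) ** (shape - 1) * math.exp(-(d / scale) ** shape)

def mixture_pdf(d, w, p1, p2):
    """Finite mixture of two Weibull components with mixing weight w."""
    return w * weibull_pdf(d, *p1) + (1.0 - w) * weibull_pdf(d, *p2)

# a bimodal stand: a cluster of small stems and a cluster of large stems
dens = [mixture_pdf(d, w=0.4, p1=(3.0, 10.0), p2=(4.0, 30.0))
        for d in (5, 10, 20, 30, 40)]
```

With well-separated scale parameters the mixture density dips between the two component modes, which is the bimodal, heavily skewed shape that single four-parameter distributions struggle to capture.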