• Title/Summary/Keyword: Decision Method


SSP Climate Change Scenarios with 1km Resolution Over Korean Peninsula for Agricultural Uses (농업분야 활용을 위한 한반도 1km 격자형 SSP 기후변화 시나리오)

  • Jina Hur;Jae-Pil Cho;Sera Jo;Kyo-Moon Shim;Yong-Seok Kim;Min-Gu Kang;Chan-Sung Oh;Seung-Beom Seo;Eung-Sup Kim
    • Korean Journal of Agricultural and Forest Meteorology / v.26 no.1 / pp.1-30 / 2024
  • The international community has adopted the SSP (Shared Socioeconomic Pathways) scenarios as the new greenhouse gas emission pathways. As part of efforts to reflect these international trends and to support climate change adaptation measures in the agricultural sector, the National Institute of Agricultural Sciences (NAS) produced high-resolution (1 km) climate change scenarios for the Korean Peninsula based on the SSP scenarios, which were certified as a "National Climate Change Standard Scenario" in 2022. This paper introduces the SSP climate change scenarios of the NAS and presents the results of the climate change projections. To produce the future climate change scenarios, global climate data from 18 GCMs participating in CMIP6 were collected for the past (1985-2014) and future (2015-2100) periods and statistically downscaled for the Korean Peninsula using 1 km digital climate maps and the SQM method. By the end of the 21st century (2071-2100), the average annual maximum/minimum temperature of the Korean Peninsula is projected to increase by 2.6~6.1℃/2.5~6.3℃ and annual precipitation by 21.5~38.7%, depending on the scenario. The increases in temperature and precipitation under the low-carbon scenario were smaller than those under the high-carbon scenario. Average wind speed and solar radiation over the analysis region are projected not to change significantly by the end of the 21st century compared to the present. These data are expected to help in understanding future uncertainties due to climate change and to contribute to rational decision-making for climate change adaptation.
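
The downscaling step described above pairs coarse GCM output with the 1 km digital climate maps via quantile mapping. A minimal sketch of that general idea, assuming plain empirical quantiles and illustrative variable names rather than the NAS production SQM procedure, is shown below.

```python
# Minimal sketch of quantile-mapping bias correction, the general idea behind
# SQM-style statistical downscaling. Variable names and the use of plain
# empirical quantiles are illustrative assumptions, not the NAS procedure.
import numpy as np

def quantile_map(gcm_hist, obs_hist, gcm_future):
    """Map future GCM values onto the observed (1 km) distribution.

    gcm_hist   : GCM values for the historical period at one grid cell
    obs_hist   : observation-based values for the same period and cell
    gcm_future : GCM values for the future period to be corrected
    """
    # Non-exceedance probability of each future value in the GCM's own
    # historical distribution.
    probs = np.searchsorted(np.sort(gcm_hist), gcm_future) / len(gcm_hist)
    probs = np.clip(probs, 0.0, 1.0)
    # Transfer those probabilities to the observed distribution.
    return np.quantile(obs_hist, probs)

# Toy usage: a warm-biased GCM corrected toward cooler observation-based values.
rng = np.random.default_rng(0)
gcm_hist = rng.normal(16.0, 2.0, 30 * 365)   # biased-warm historical simulation
obs_hist = rng.normal(14.5, 2.5, 30 * 365)   # 1 km observation-based climatology
gcm_future = rng.normal(18.0, 2.0, 365)      # raw future projection
corrected = quantile_map(gcm_hist, obs_hist, gcm_future)
print(round(corrected.mean() - gcm_future.mean(), 2))  # negative shift: warm bias removed
```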

The Relationship between Financial Constraints and Investment Activities : Evidenced from Korean Logistics Firms (우리나라 물류기업의 재무제약 수준과 투자활동과의 관련성에 관한 연구)

  • Lee, Sung-Yhun
    • Journal of Korea Port Economic Association / v.40 no.2 / pp.65-78 / 2024
  • This study investigates the correlation between financial constraints and investment activities in Korean logistics firms. A sample of 340 companies engaged in the transportation sector, as classified by the 2021 KSIC, was selected for analysis. Financial data obtained from DART were used to compile a panel dataset spanning 1996 to 2021, totaling 6,155 observations. The research model was validated, and tests for heteroscedasticity and autocorrelation in the error terms were conducted in light of the panel data structure. The relationship between investment activities in the previous period and current investment activities was analyzed using the panel Generalized Method of Moments (GMM). The results indicate that Korean logistics firms tend to increase investment activities as their level of financial constraints improves. Specifically, a positive relationship between the level of financial constraints and investment activities was consistently observed across all models. These findings suggest that investment decision-making varies with the financial constraints a company faces, in line with previous research indicating that the investment activities of constrained firms are subdued. Moreover, the model examining lagged investment showed that investment activities in the previous period influence current investment activities, whereas investment activities from two periods ago showed no significant relationship with current investment. Among the control variables, firm size and cash flow showed positive relationships, while debt size and asset diversification showed negative relationships. Thus, larger firm size and smoother cash flows were associated with more proactive investment activities, whereas high debt levels and extensive asset diversification appeared to constrain investment in logistics companies. These results can be interpreted as showing that, under financial constraints, internal funding sources such as cash flows have positive relationships with investment, whereas external capital sources such as debt have negative relationships, consistent with empirical findings from previous research.
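
The dynamic specification above, in which last period's investment helps explain current investment while firm effects are present, is typically handled by differencing and instrumenting the lagged term. The sketch below uses the simpler Anderson-Hsiao instrumental-variable estimator as a stand-in for the paper's panel GMM; the column names and toy panel are assumptions.

```python
# Hedged sketch of the simplest dynamic-panel idea behind panel GMM
# (Anderson-Hsiao style IV): regress the first difference of investment on its
# lagged difference, instrumenting with the twice-lagged level. Data layout and
# variable names are illustrative only.
import numpy as np
import pandas as pd

def anderson_hsiao(df, id_col="firm", t_col="year", y_col="invest"):
    df = df.sort_values([id_col, t_col]).copy()
    df["dy"]   = df.groupby(id_col)[y_col].diff()     # Δy_it
    df["dy_1"] = df.groupby(id_col)["dy"].shift(1)    # Δy_i,t-1 (endogenous regressor)
    df["y_2"]  = df.groupby(id_col)[y_col].shift(2)   # y_i,t-2  (instrument)
    d = df.dropna(subset=["dy", "dy_1", "y_2"])
    z, x, y = d["y_2"].to_numpy(), d["dy_1"].to_numpy(), d["dy"].to_numpy()
    # Just-identified IV estimator: beta = (z'x)^{-1} z'y
    return float(np.dot(z, y) / np.dot(z, x))

# Toy panel with true persistence 0.4
rng = np.random.default_rng(1)
rows = []
for firm in range(200):
    y = 0.0
    for year in range(1996, 2022):
        y = 0.4 * y + rng.normal()
        rows.append({"firm": firm, "year": year, "invest": y})
print(round(anderson_hsiao(pd.DataFrame(rows)), 2))  # roughly 0.4
```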

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.47-67 / 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally relied on visual inspection, which is based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the steel plates. However, the accuracy of this method is critically low, with judgment error rates above 30%. Therefore, an accurate steel plate faults diagnosis system has been continuously demanded by the industry. To meet this need, this study proposed a new steel plate faults diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification due to its low accuracy, because only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed steel plate faults diagnosis system was developed in four main stages. In the first stage, after the reference groups and related variables are defined, steel plate fault data are collected and used to establish an individual Mahalanobis space per reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type Signal-to-Noise (SN) ratio are applied for variable optimization, and the overall SN ratio gain is derived from the SN ratio and the SN ratio gain. If the derived overall SN ratio gain is negative, the variable should be removed; a variable with a positive gain may be considered worth keeping. Finally, in the fourth stage, the measurement scale composed of the selected useful variables is reconstructed, and an experimental test is carried out to verify the multi-class classification ability, from which the classification accuracy is obtained. If the accuracy is acceptable, the diagnosis system can be used for future applications. This study also compared the accuracy of the proposed steel plate faults diagnosis system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California at Irvine (UCI) machine learning repository. As a result, the proposed S-MTS-based steel plate faults diagnosis system shows 90.79% classification accuracy, 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance to be applied in industry.
In addition, the proposed system can reduce the number of measurement sensors installed in the field thanks to the variable optimization process. These results show that the proposed system not only performs well on steel plate faults diagnosis but can also reduce operation and maintenance costs. For future work, it will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on those results.
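
The core of the S-MTS stages described above, one Mahalanobis space per reference group and assignment of a test item to the nearest space, can be sketched as follows. The orthogonal-array/SN-ratio variable optimization is omitted, and all names and toy data are illustrative.

```python
# Minimal sketch of the core S-MTS idea: build one Mahalanobis space per fault
# class from its reference samples, then assign a test plate to the class with
# the smallest (scaled) Mahalanobis distance.
import numpy as np

def fit_spaces(X, y):
    spaces = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mean = Xc.mean(axis=0)
        std = Xc.std(axis=0, ddof=1)
        Z = (Xc - mean) / std                           # standardize within the class
        cov_inv = np.linalg.pinv(np.cov(Z, rowvar=False))
        spaces[c] = (mean, std, cov_inv)
    return spaces

def classify(spaces, x):
    dists = {}
    for c, (mean, std, cov_inv) in spaces.items():
        z = (x - mean) / std
        dists[c] = float(z @ cov_inv @ z) / len(z)      # squared MD scaled by variable count
    return min(dists, key=dists.get), dists

# Toy usage with two synthetic "fault classes"
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
spaces = fit_spaces(X, y)
print(classify(spaces, np.array([2.8, 3.1, 2.9, 3.2]))[0])  # -> 1
```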

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. Their major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables to the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the SVM solution can be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary-class classification. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well in multi-class problems as SVM does in binary-class classification. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used to reduce computation time in multi-class settings, but they can deteriorate classification performance. Third, multi-class prediction problems often suffer from data imbalance, which occurs when the number of instances in one class greatly outnumbers that in another; such data sets often produce a default classifier due to a skewed boundary, reducing classification accuracy. SVM ensemble learning is one way to cope with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques: it constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations, so that observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted examples.
Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, and in this way it can reinforce the training of misclassified observations from the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multi-class prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can take into account the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross validation was performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, results were obtained for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of each classifier across the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
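
The geometric-mean notion that MGM-Boost injects into AdaBoost can be illustrated with the metric itself: the geometric mean of per-class recalls, which collapses to zero if any rating class is missed entirely. The sketch below shows only this evaluation idea, not the MGM-Boost weight-update rule, and the rating labels are made up.

```python
# Hedged sketch of geometric mean-based accuracy: the geometric mean of
# per-class recalls stays low unless every class, including minority rating
# grades, is predicted well.
import numpy as np

def geometric_mean_accuracy(y_true, y_pred):
    classes = np.unique(y_true)
    recalls = []
    for c in classes:
        mask = (y_true == c)
        recalls.append((y_pred[mask] == c).mean())   # recall for class c
    return float(np.prod(recalls) ** (1.0 / len(classes)))

y_true = np.array(["AA"] * 8 + ["A"] * 8 + ["BBB"] * 2)   # imbalanced ratings
y_pred = np.array(["AA"] * 8 + ["A"] * 8 + ["A"] * 2)     # minority class missed
print(geometric_mean_accuracy(y_true, y_pred))  # 0.0 despite ~88.9% plain accuracy
```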

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detecting market timing means determining when to buy and sell in order to gain excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, through its control function, it does not generate a trade signal when the pattern of the market is uncertain. Numeric data for rough set analysis must be discretized, because rough sets accept only categorical data. Discretization searches for proper "cuts" for numeric data that determine intervals, and all values lying within an interval are transformed into the same value. In general, there are four discretization methods in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples fall into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or expert interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds categorical values by naïve scaling of the data and then derives the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance. In this study, we compare stock market timing models based on rough set analysis with the various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market; it is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable method for the validation sample; moreover, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
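
Of the four discretization methods compared above, equal frequency scaling is the simplest to illustrate: choose cuts so that roughly equal numbers of trading days fall into each interval. The sketch below assumes a generic technical indicator and four bins, which are illustrative choices rather than the study's settings.

```python
# Minimal sketch of equal-frequency discretization: cuts are chosen so that
# roughly the same number of observations falls into each interval, and the
# resulting labels are then treated as categorical values by the rough set.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
rsi = pd.Series(rng.uniform(10, 90, 660), name="RSI")  # 660 trading days

bins = pd.qcut(rsi, q=4, labels=["very_low", "low", "high", "very_high"])
print(bins.value_counts())                # ~165 observations per interval
print(pd.qcut(rsi, q=4).cat.categories)   # the learned "cuts" as intervals
```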

Structural features and Diffusion Patterns of Gartner Hype Cycle for Artificial Intelligence using Social Network analysis (인공지능 기술에 관한 가트너 하이프사이클의 네트워크 집단구조 특성 및 확산패턴에 관한 연구)

  • Shin, Sunah;Kang, Juyoung
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.107-129 / 2022
  • It is important to preempt new technology because competition over technology is becoming much tougher. Stakeholders conduct continuous exploration activities to preoccupy new technology at the right time. Gartner's Hype Cycle has significant implications for these stakeholders. The Hype Cycle is an expectation graph for new technologies that combines the technology life cycle (S-curve) with the hype level. Stakeholders such as R&D investors, CTOs (Chief Technology Officers), and technical personnel are very interested in Gartner's Hype Cycle for new technologies, because high expectations for new technologies can create opportunities to maintain investment by securing the legitimacy of R&D spending. However, contrary to this high interest from industry, previous studies faced limitations in their empirical methods and source data (news, academic papers, search traffic, patents, etc.). This study focused on two research questions. The first was, 'Is there a difference in the characteristics of the network structure at each stage of the hype cycle?' To answer it, the structural characteristics of each stage were examined through component cohesion size. The second was, 'Is there a diffusion pattern at each stage of the hype cycle?' This question was addressed through the centralization index and network density. The centralization index is a variance-like concept: a higher centralization index means that the network is centered on a small number of nodes, i.e., a star network structure. In network terms, a star structure is centralized and shows better diffusion performance than a decentralized (circle) structure, because the nodes at the center of information transfer can judge useful information and deliver it to other nodes the fastest. We therefore examined the out-degree and in-degree centralization indices for each stage. For this purpose, we analyzed the structural features of the communities and the expectation diffusion patterns using social network service (SNS) data for the 'Gartner Hype Cycle for Artificial Intelligence, 2021'. Twitter data for 30 technologies (excluding four technologies) listed in the 2021 hype cycle were analyzed using the R program (version 4.1.1) and Cyram NetMiner. From October 31, 2021 to November 9, 2021, 6,766 tweets were retrieved through the Twitter API, and the relationship between each user's tweet (source) and other users' retweets (target) was converted into edges, yielding 4,124 edge lists for analysis. As a result, we confirmed the structural features and diffusion patterns by analyzing the component cohesion size, degree centralization, and density. The study showed that the number of components in each stage's group increased over time while density decreased. Also, the 'Innovation Trigger' group, which is interested in new technologies as early adopters in innovation diffusion theory, had a high out-degree centralization index, whereas the other groups had higher in-degree than out-degree centralization. It can be inferred that the 'Innovation Trigger' group has the biggest influence and that diffusion gradually slows down in the subsequent groups. In this study, network analysis was conducted using social network service data, unlike the methods of previous research.
This is significant in that it provides an idea for expanding the methods of analysis when studying Gartner's Hype Cycle in the future. In addition, applying innovation diffusion theory to the stages of Gartner's Hype Cycle for artificial intelligence can be viewed positively, because the hype cycle has repeatedly been criticized for its theoretical weakness. This study is also expected to provide stakeholders with a new perspective on decision-making for technology investment.
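
The two diffusion measures used in the analysis, network density and degree centralization, can be sketched as follows. The retweet edge list here is synthetic, the NetMiner settings are not reproduced, and the centralization function follows the standard Freeman formulation as an assumption about what the study computed.

```python
# Hedged sketch of network density and Freeman-style out-degree centralization
# (how strongly ties concentrate on the most central node) on a toy retweet graph.
import networkx as nx

def out_degree_centralization(G):
    n = G.number_of_nodes()
    degs = [d for _, d in G.out_degree()]
    max_d = max(degs)
    # Sum of deviations from the maximum, divided by the maximum possible sum,
    # which for directed out-degree is attained by an outward star: (n-1)^2.
    return sum(max_d - d for d in degs) / ((n - 1) ** 2)

# Star-like "Innovation Trigger" pattern: one account whose tweet is retweeted by everyone
G = nx.DiGraph()
G.add_edges_from(("hub", f"user{i}") for i in range(1, 11))
print(round(nx.density(G), 3))                 # 0.091: sparse network
print(round(out_degree_centralization(G), 3))  # 1.0: perfect star structure
```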

An Exploratory Study on the Components of Visual Merchandising of Internet Shopping Mall (인터넷쇼핑몰의 VMD 구성요인에 대한 탐색적 연구)

  • Kim, Kwang-Seok;Shin, Jong-Kuk;Koo, Dong-Mo
    • Journal of Global Scholars of Marketing Science / v.18 no.2 / pp.19-45 / 2008
  • This study empirically examines the primary dimensions of the visual merchandising (VMD) of an internet shopping mall, namely store design, merchandise, and merchandising cues, which make a virtual store attractive to shoppers. The authors reviewed the literature on the major components of VMD from the perspective of the AIDA model, which has mainly been applied to offline store settings. The major purposes of the study are as follows: first, to derive the variables related to the components of visual merchandising from the existing literature, establish hypotheses, and test them empirically; second, to examine the relationships between the components of VMD and the attitude toward the VMD, with more emphasis on identifying the component structure of VMD. VMD needs to be examined from the perspective that an online shopping mall is a virtual self-service or clerkless store, which can reduce the number of employees and help shoppers search, evaluate, and purchase by themselves, and should be explored in terms of the in-store persuasion processes of customers. This study reviewed the literature related to store design, merchandise, and merchandising cues, which correspond to store, product, and promotion, respectively. VMD is a total communication tool, and the AIDA model can explain in-store consumer behavior in online shopping: store design has to do with triggering consumer attention to the online mall, merchandise with product-related interest, and merchandising cues with promotions such as recommendations and links that induce the desire to purchase; these three steps can be seen as the processes leading to purchase action. The theoretical rationale for the relationship between VMD and AIDA can be found in Tyagi (2005), who describes the three elements of consumer-oriented merchandising as the store, the product assortment, and placement; in Omar (1999), whose three types of interior display are architectural design display, commodity display, and point-of-sales (POS) display; and in Davies and Ward (2005), who relate the retail store interior image to atmosphere, merchandise, and in-store promotion. Lee et al. (2000) suggested as web merchandising components the merchandising cues, a shopping metaphor as an assistant tool for search, store design, layout (web design), and product assortment. Store design, which includes differentiation, simplicity, and navigation, is expected to be related to attention to the virtual store. Second, the merchandise dimensions, comprising product assortment, visual information, and product reputation, have to do with interest in the product offerings. Finally, the merchandising cues, which refer to the merchandiser (MD)'s recommendation of products and the provision of hyperlinks to relevant goods, are concerned with inducing the desire to purchase. A questionnaire survey was carried out to collect data from consumers who shop at internet shopping malls frequently. To select the subject malls, mall ranking data announced by a mall rating agency were used to identify the five most popular and five least popular malls. The subjects were instructed to answer the questions after navigating the designated mall for five minutes. Of the 300 questionnaires distributed to consumers, 166 samples were used in the final analysis.
The empirical testing focused on identifying and confirming the dimensionality of VMD and its subdimensions using structural equation modeling. The confirmatory factor analysis for the endogenous and exogenous variables was carried out in four parts: second-order factor analyses for store design, merchandise, and merchandising cues, and a first-order confirmatory factor analysis for the attitude toward the VMD. The model test results show that the chi-square value of the structural equation model is 144.39 (d.f. 49), significant at the 0.01 level, which would mean the proposed model is rejected. However, the ratio of the chi-square value to the degrees of freedom is 2.94, smaller than the acceptable level of 3.0; RMR is 0.087, which is above the generally acceptable level of 0.08; GFI and AGFI turn out to be 0.90 and 0.84, respectively; both NFI and NNFI are 0.94, and CFI is 0.95. The major test results are as follows. First, the second-order factor analysis and structural equation modeling reveal that differentiation, simplicity, and ease of identifying the current status of the transaction are confirmed to be subdimensions of store design and significant predictors of the dependent variable. This implies that, when designing an online shopping mall, it is necessary to differentiate it visually from other malls to improve the effectiveness of store design communication: a differentiated store design raises the contrast stimulus to the sensory organs, promotes memory of the store, and fosters a favorable attitude toward the store's VMD. The finding that navigation, meaning the ease of identifying the current status of shopping, affects the attitude toward VMD can be interpreted as follows: navigating via hyperlinks, a characteristic of internet shopping, is a complex cognitive process, and shoppers are likely to lack a sense of the overall structure of the store; consequently, they may get lost amid shopping, not knowing where to go. An orientation tool enhances the accessibility of information and raises perception of the store environment (Titus & Everett 1995). Second, the primary dimension of merchandise and its subdimensions were confirmed to be unidimensional, to have construct validity, and to have nomological validity, in that the VMD dimensions showed the expected positive correlation with the dependent variable. The subdimensions of product assortment, brand fame, and information provision proved to have a positive effect on the attitude toward the VMD. This can be interpreted to mean that the more plentiful the product and brand assortment of the mall, the more likely shoppers are to favor it; brand fame and information provision also affect the VMD attitude, meaning that the more famous the brand, the more likely shoppers are to trust and feel familiar with the mall, and that plentiful, visually presented information leads to a favorable attitude toward the store VMD. Third, the merchandising cues of product recommendation and hyperlinks affect the VMD attitude. Recommended products can reduce the uncertainty related to the purchase decision, and hyperlinks to relevant products help the shopper save the cognitive effort spent on information search and gathering, which can lead to a favorable attitude toward the VMD.
This study tried to shed new light on the VMD of online stores by reviewing the variables mentioned as relevant to offline VMD in the existing literature and by linking the VMD components from the perspective of the AIDA model. The effect sizes of the VMD dimensions on the attitude were, in order, merchandise, store design, and merchandising cues. It is said that the internet offers unlimited space for display; however, the virtual store is not unlimited, since the consumer has a limited amount of cognitive ability to process external information and internal memory. In particular, shoppers are likely to face difficulties in decision-making on account of too many alternatives and information overload. Therefore, the internet shopping mall manager should take into consideration the consumer's cost of information search when establishing optimal product placements and search routes. An efficient store composition is possible by reducing the psychological burden and cognitive effort spent on information search and alternative evaluation. The store image is for the most part determined by the product categories and brands the store deals in. The results of this study support the proposition that merchandise is more important to the VMD attitude than the other components, so the manager is required to take a strategic approach to VMD. Internet users become more accustomed to and knowledgeable about the internet as a medium, and more likely to accept it as a shopping channel, the longer they have used it to shop. The web merchandiser should be aware that product introduction using moving pictures and bulletin boards becomes more important in order to present interactive product information visually and to communicate with customers more actively, thereby enriching the quantity and quality of product information.
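
The fit-index reasoning above (a rejected chi-square test but an acceptable chi-square/df ratio and incremental indices) can be summarized as a simple screening step. The cutoffs in the sketch below are common rules of thumb rather than values taken from the paper, apart from the reported statistics themselves.

```python
# Hedged sketch of the model-fit screening logic: the chi-square test rejects
# the model, but the chi-square/df ratio and the incremental fit indices still
# support it. Cutoffs are common rules of thumb, not a single standard.
def assess_fit(chi2, df, rmr, gfi, nfi, nnfi, cfi):
    return {
        "chi2/df < 3.0": chi2 / df < 3.0,   # 144.39 / 49 = 2.94 -> pass
        "RMR <= 0.08":   rmr <= 0.08,       # 0.087 -> just above the cutoff
        "GFI >= 0.90":   gfi >= 0.90,
        "NFI >= 0.90":   nfi >= 0.90,
        "NNFI >= 0.90":  nnfi >= 0.90,
        "CFI >= 0.90":   cfi >= 0.90,
    }

# Values reported in the abstract
for rule, ok in assess_fit(144.39, 49, 0.087, 0.90, 0.94, 0.94, 0.95).items():
    print(rule, "pass" if ok else "fail")
```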

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal / v.14 no.1 / pp.83-98 / 2012
  • In a market where new and used cars compete with each other, we run the risk of obtaining biased estimates of the cross elasticity between them if we focus only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions. Purohit (1992) and Sullivan (1990) looked into both the new and used car markets at the same time to examine the effect of new car model launches on used car prices, but their studies are limited in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this data problem. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of the used car in terms of reinforcement of brand equity. The current work also uses actual prices, like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that among new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy, so that new and used cars of different models are all substitutable at the first stage. The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during the period January-June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both calibration and holdout samples. The other comparison model, which assumes choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted using the modified Lanczo's iterative method. This method is intuitively appealing: suppose a new car offers a certain amount of rebate and gains market share at first; in response, the used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo, and the new car then settles down to a lowered market share due to the used car's reaction. The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reacting cars respond to price promotion so as to maintain the status quo. The simulation results showed that the IIA model underestimates the cross elasticities, suggesting less aggressive used car price discounts in response to new cars' rebates than the proposed nested logit model. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure transmitted from a typical car dealership, where both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships carrying both new and used cars of various models, the NUB model might fit the data as well as the BNU model; which model better describes the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture of the BNU and NUB models on a new data set.
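
The decision hierarchy the paper favors, model (brand) choice first and then new versus used within that model, can be sketched with textbook nested-logit probabilities. The utilities and dissimilarity parameter below are invented for illustration and are not the paper's estimates; the point is that a rebate on a new car draws share mostly from the used car in the same nest, which an IIA model cannot represent.

```python
# Hedged sketch of nested-logit choice probabilities (BNU structure): choose
# the car model at the first stage, then new vs. used within the nest.
import numpy as np

def nested_logit_probs(utilities, lam):
    """utilities: {model: {"new": v, "used": v}}; lam: dissimilarity parameter, 0 < lam <= 1."""
    # Inclusive value of each nest (car model)
    inclusive = {m: np.log(sum(np.exp(v / lam) for v in u.values()))
                 for m, u in utilities.items()}
    denom = sum(np.exp(lam * iv) for iv in inclusive.values())
    probs = {}
    for m, u in utilities.items():
        p_model = np.exp(lam * inclusive[m]) / denom           # first stage: model choice
        within = sum(np.exp(v / lam) for v in u.values())
        for cond, v in u.items():
            probs[(m, cond)] = p_model * np.exp(v / lam) / within  # second stage
    return probs

# A rebate raises the new Jetta's utility; share shifts mostly from the used
# Jetta (same nest) rather than from the other model.
base = nested_logit_probs({"Jetta": {"new": 1.0, "used": 0.8},
                           "Corolla": {"new": 1.1, "used": 0.9}}, lam=0.5)
rebate = nested_logit_probs({"Jetta": {"new": 1.3, "used": 0.8},
                             "Corolla": {"new": 1.1, "used": 0.9}}, lam=0.5)
for alt in base:
    print(alt, round(base[alt], 3), "->", round(rebate[alt], 3))
```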

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought machines would not be able to beat a human at Go because, unlike chess, the number of possible game paths is larger than the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has drawn attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems. It shows especially good performance in image recognition, and in high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we tried to find out whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. However, since all network design alternatives cannot be tested given the nature of artificial neural networks, the experiment was conducted under restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values from a specific position and recognizes features, but in typical business data the proximity of fields does not matter because each field is usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields so that it learns the characteristics of the whole record at once, and added a hidden layer to make the decision based on the extracted features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first layer in order to reduce the influence of the position of each field.
For the dropout technique, neurons were set to drop out with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. From the experiments, we obtained several findings. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models, which is interesting because CNN has shown good performance in a binary classification problem to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for this binary classification problem because its training time is too long compared with the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
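
A minimal Keras sketch of the best-performing configuration described above, a 1-D CNN whose filter spans all input fields, followed by a dense layer and dropout with p = 0.5, and evaluated by F1, is shown below. The layer sizes, optimizer, and synthetic data are assumptions, not the paper's exact setup or the Portuguese bank dataset.

```python
# Hedged sketch: 1-D CNN with kernel_size = number of fields, a dense decision
# layer, dropout (p = 0.5), and F1-based evaluation on a toy binary target.
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score

n_features = 16
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features, 1)),
    # kernel spans all fields: each filter reads the whole record at once
    tf.keras.layers.Conv1D(filters=32, kernel_size=n_features, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),   # extra decision layer
    tf.keras.layers.Dropout(0.5),                   # neurons dropped with p = 0.5
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Toy stand-in for the bank telemarketing data (imbalanced binary target)
rng = np.random.default_rng(4)
X = rng.normal(size=(2000, n_features, 1))
y = (X[:, 0, 0] + 0.5 * rng.normal(size=2000) > 1.0).astype(int)
model.fit(X[:1500], y[:1500], epochs=5, batch_size=64, verbose=0)
pred = (model.predict(X[1500:], verbose=0).ravel() > 0.5).astype(int)
print("F1:", round(f1_score(y[1500:], pred), 3))
```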

The detection of collapsible airways contributing to airflow limitation (기류 제한에 영향을 미치는 허탈성 기도의 분석)

  • Kim, Yun Seong;Park, Byung Gyu;Lee, Kyong In;Son, Seok Man;Lee, Hyo Jin;Lee, Min Ki;Son, Choon Hee;Park, Soon Kew
    • Tuberculosis and Respiratory Diseases / v.43 no.4 / pp.558-570 / 1996
  • Background: The detection of collapsible airways has important therapeutic implications in chronic airway disease and bronchial asthma. Distinguishing a purely collapsible airways disease from asthma is important because the treatment of the former may include pursed-lip breathing or nasal positive pressure ventilation, whereas in the latter pharmacologic approaches are used. One form of irreversible airflow limitation is collapsible airways, which has been shown to be a component of asthma or emphysema; it can be assessed by the difference between the volume that exits the lung as determined by a spirometer and the volume compressed as measured by plethysmography. Method: To investigate whether the volume difference between slow and forced vital capacity (SVC-FVC) measured by spirometry may be used as a surrogate index of airway collapse, we prospectively examined pulmonary function parameters before and after bronchodilator inhalation, by spirometry and body plethysmography, in 20 patients with evidence of airflow limitation (chronic obstructive pulmonary disease 12 cases, stable bronchial asthma 7 cases, combined chronic obstructive pulmonary disease with asthma 1 case) and 20 normal subjects without evidence of airflow limitation, referred to the Pusan National University Hospital pulmonary function laboratory from January 1995 to July 1995. Results: 1) The mean and standard deviation of age, height, and weight were 58.3±7.24 yr, 166±8.0 cm, and 59.0±9.9 kg in the patients with airflow limitation and 56.3±12.47 yr, 165.9±6.9 cm, and 64.4±10.4 kg in the normal subjects; the physical characteristics of the two groups did not differ significantly, and the male-to-female ratio was 14:6 in both groups. 2) The difference between slow vital capacity and forced vital capacity was 395±317 ml in the patient group and 154±176 ml in the normal group, a statistically significant difference (p<0.05). Sensitivity and specificity were highest when the cut-off value was 208 ml. 3) After bronchodilator inhalation, reversible airway obstruction was shown in 16 cases of the patient group and 7 cases of the control group (p<0.05) by spirometry or body plethysmography, and the SVC-FVC differences in the bronchodilator response and non-response groups were 300.4±306 ml and 144.7±180 ml, respectively, a statistically significant difference. 4) The SVC-FVC difference before bronchodilator inhalation was correlated with airway resistance before bronchodilator (r=0.307, p=0.05), and the SVC-FVC difference after bronchodilator was correlated with the SVC-FVC difference (r=0.559, p=0.0002) and thoracic gas volume (r=0.488, p=0.002) before bronchodilator, and with airway resistance (r=0.583, p=0.0001) and thoracic gas volume (r=0.375, p=0.0170) after bronchodilator. 5) The SVC-FVC difference in smokers and nonsmokers was 257.5±303 ml and 277.5±276 ml, respectively, and this difference did not reach statistical significance (p>0.05). Conclusion: The difference between slow vital capacity and forced vital capacity measured by spirometry may be useful for detecting collapsible airways and may help in making therapeutic decisions.
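
The choice of the 208 ml cut-off reported above amounts to screening candidate thresholds by sensitivity and specificity. The sketch below illustrates that screening on synthetic data drawn from the reported group means and standard deviations; it does not reproduce the study's measurements.

```python
# Minimal sketch of cut-off screening: compute sensitivity and specificity of
# the SVC-FVC difference at several candidate thresholds. The toy samples use
# the reported group means/SDs (395±317 ml vs. 154±176 ml) but are synthetic.
import numpy as np

def sens_spec(diff_ml, has_airflow_limitation, cutoff):
    pred_positive = diff_ml >= cutoff
    tp = np.sum(pred_positive & has_airflow_limitation)
    fn = np.sum(~pred_positive & has_airflow_limitation)
    tn = np.sum(~pred_positive & ~has_airflow_limitation)
    fp = np.sum(pred_positive & ~has_airflow_limitation)
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(5)
diff = np.concatenate([rng.normal(395, 317, 20), rng.normal(154, 176, 20)]).clip(0)
label = np.array([True] * 20 + [False] * 20)
for cutoff in (150, 208, 300):
    sens, spec = sens_spec(diff, label, cutoff)
    print(cutoff, round(sens, 2), round(spec, 2))
```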
