• Title/Summary/Keyword: hit ratio

A Performance Improvement of Linux TCP/IP Stack based on Flow-Level Parallelism in a Multi-Core System (멀티코어 시스템에서 흐름 수준 병렬처리에 기반한 리눅스 TCP/IP 스택의 성능 개선)

  • Kwon, Hui-Ung;Jung, Hyung-Jin;Kwak, Hu-Keun;Kim, Young-Jong;Chung, Kyu-Sik
    • The KIPS Transactions:PartA
    • /
    • v.16A no.2
    • /
    • pp.113-124
    • /
    • 2009
  • With the spread of multicore systems, much effort has been put into improving the performance of the applications that run on them. Because a multicore system has multiple processing units, its processing power is greater than that of a single-core system. In many cases, however, the advantages of multicore cannot be fully exploited, because existing software and hardware were designed for single-core machines. When existing software runs on a multicore system, its performance gain is limited by bottlenecks on shared resources and by inefficient use of the cache memory; as the number of cores increases, performance may not improve at all and can even drop in the worst case. In this paper we propose a method for improving the performance of a multicore system by applying flow-level parallelism to an existing TCP/IP network application and operating system. The proposed method sets up the execution environment so that each core operates as independently as possible across the network application, the TCP/IP stack of the operating system, the device driver, and the network interface, and it distributes network traffic to the cores through an L2 switch. This minimizes the sharing of application data, data structures, sockets, device drivers, and network interfaces between cores, reduces competition among cores for resources, and increases the cache hit ratio. We implemented the proposed method on an 8-core system and performed experiments. The results show that network access speed and bandwidth increase linearly with the number of cores.
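
The core idea is that every packet of a given flow should be handled by the same core. A minimal sketch of that flow-hashing idea follows; the paper does the distribution in an L2 switch and the NIC/driver path, not in software like this, so the function below is purely illustrative.

```python
# Illustrative sketch of flow-level parallelism: pin each TCP flow to
# one core by hashing its 5-tuple, so per-flow state (socket, TCP
# control block) stays in a single core's cache.
import hashlib

NUM_CORES = 8  # the paper's testbed was an 8-core system

def core_for_flow(src_ip: str, src_port: int,
                  dst_ip: str, dst_port: int, proto: str = "tcp") -> int:
    """Map a flow's 5-tuple to a fixed core index."""
    key = f"{proto}:{src_ip}:{src_port}:{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_CORES

# Every packet of the same flow lands on the same core, so its state
# is never shared or bounced between caches.
assert core_for_flow("10.0.0.1", 40000, "10.0.0.2", 80) == \
       core_for_flow("10.0.0.1", 40000, "10.0.0.2", 80)
```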

Empirical Analysis on Agent Costs against Ownership Structure in Accordance with Verification of Suitability of the Model (모형의 적합성 검증에 따른 소유구조대비 대리인 비용의 실증분석)

  • Kim, Dae-Lyong;Lim, Kee-Soo;Sung, Sang-Hyeon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.8
    • /
    • pp.3417-3426
    • /
    • 2012
  • This study examines, through empirical analysis, how ownership structure (the shareholding ratios of insiders and foreigners) affects agency costs (proxied by asset efficiency and the ratio of non-operating expenses). Existing studies of the relation between ownership structure and agency costs have relied on the pooled OLS model, which is not reliable enough for verifying massive panel data. To improve the credibility and statistical validity of the empirical results, this study therefore first verifies the suitability of the pooled OLS model and additionally formulates a fixed-effects model and a random-effects model, which reflect the time of data formation and firm-specific effects, for comparative analysis. The data cover 10 years, from 1998 to 2007, after the IMF crisis hit the nation, for 331 companies excluding financial institutions. The suitability tests determined that the random-effects model is appropriate for asset efficiency among the agency-cost measures, whereas the fixed-effects model is appropriate for non-operating expenses. Under the appropriate models, none of the hypotheses accepted under the pooled OLS model were supported. This shows that different types of empirical analysis produce different results, and suggests that developing an appropriate model is more important than other factors for generating statistically significant empirical findings.
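
The three competing specifications are standard panel-data models. A hedged sketch of how they can be compared, using the `linearmodels` package on an illustrative firm-year panel (the file and column names are assumptions, not the authors' data), follows:

```python
# A hedged sketch (not the authors' code) of the three panel models
# compared above, using the linearmodels package. The file and column
# names are illustrative assumptions.
import numpy as np
import pandas as pd
from linearmodels.panel import PooledOLS, PanelOLS, RandomEffects

# Firm-year panel indexed by (firm, year): an agency-cost proxy and
# ownership-structure variables.
df = pd.read_csv("panel.csv").set_index(["firm", "year"])
y = df["asset_efficiency"]
X = df[["insider_share", "foreign_share"]].assign(const=1.0)

pooled = PooledOLS(y, X).fit()
fe = PanelOLS(y, X, entity_effects=True).fit()  # fixed-effects model
re = RandomEffects(y, X).fit()                  # random-effects model

# Hausman-style comparison: a large statistic favours fixed effects.
b = (fe.params - re.params).values
V = (fe.cov - re.cov).values
hausman = float(b @ np.linalg.inv(V) @ b)
print(pooled.rsquared, fe.rsquared, re.rsquared, hausman)
```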

Correlation Between the Parameters of Radiosensitivity in Human Cancer Cell Lines (인체 암세포주에서 방사선감수성의 지표간의 상호관계)

  • Park, Woo-Yoon;Kim, Won-Dong;Min, Kyung-Soo
    • Radiation Oncology Journal
    • /
    • v.16 no.2
    • /
    • pp.99-106
    • /
    • 1998
  • Purpose: We conducted clonogenic assays using human cancer cell lines (MKN-45, PC-14, Y-79, HeLa) to investigate the correlation between parameters of radiosensitivity. Materials and Methods: The cell lines were irradiated with single doses of 1, 2, 3, 5, 7 and 10 Gy for the radiosensitivity study, and sublethal damage repair capacity was assessed with two fractions of 5 Gy separated by time intervals of 0, 1, 2, 3, 4, 6 and 24 hours. Surviving fraction was assessed by clonogenic assay using the Spearman-Kärber method, and survival curves were analyzed mathematically with the linear-quadratic (LQ) model, the multitarget-single-hit (MS) model, and the mean inactivation dose ($\bar{D}$). Results: Surviving fractions at 2 Gy (SF2) varied among the cell lines, ranging from 0.174 to 0.85; the SF2 of Y-79 was the lowest and that of PC-14 the highest (p<0.05, t-test). LQ-model analysis showed that the values of $\alpha$ for Y-79, MKN-45, HeLa and PC-14 were 0.603, 0.356, 0.275 and 0.102, respectively, and those of $\beta$ were 0.005, 0.016, 0.025 and 0.027, respectively. Fitting to the MS model showed that the values of $D_0$ for Y-79, MKN-45, HeLa and PC-14 were 1.59, 1.84, 1.88 and 2.52, respectively, and those of n were 0.97, 1.46, 1.52 and 1.69, respectively. The values of $\bar{D}$ calculated by the Gauss-Laguerre method were 1.62, 2.37, 2.01 and 3.95, respectively. Thus SF2 was significantly correlated with $\alpha$, $D_0$ and $\bar{D}$; the Pearson correlation coefficients were -0.953, 0.993 and 0.999, respectively (p<0.05). Sublethal damage repair was saturated at around 4 hours, and recovery ratios (RR) at the plateau phase ranged from 2 to 3.79, but RR was not correlated with SF2, $\alpha$, $\beta$, $D_0$ or $\bar{D}$. Conclusion: Intrinsic radiosensitivity differed widely among the tested human cell lines: Y-79 was the most sensitive and PC-14 the least sensitive. SF2 correlated well with $\alpha$, $D_0$ and $\bar{D}$, whereas RR was high for MKN-45 and HeLa but unrelated to the radiosensitivity parameters. These basic parameters can serve as baseline data for various in vitro radiobiological experiments.
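
For reference, the two survival-curve fits and the mean inactivation dose named above can be reproduced as follows; the dose-response values here are illustrative, not the paper's data.

```python
# Sketch of the survival-curve analysis named above, with illustrative
# (not the paper's) dose-response data: the linear-quadratic model
# SF = exp(-alpha*D - beta*D^2), the multitarget-single-hit model
# SF = 1 - (1 - exp(-D/D0))**n, and the mean inactivation dose as the
# area under the survival curve.
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

doses = np.array([1, 2, 3, 5, 7, 10.0])                 # Gy
sf = np.array([0.62, 0.35, 0.20, 0.06, 0.015, 0.001])   # illustrative

def lq(D, alpha, beta):
    return np.exp(-alpha * D - beta * D**2)

def ms(D, D0, n):
    return 1.0 - (1.0 - np.exp(-D / D0))**n

(alpha, beta), _ = curve_fit(lq, doses, sf, p0=[0.3, 0.02])
(D0, n), _ = curve_fit(ms, doses, sf, p0=[1.5, 1.2])

# Mean inactivation dose D-bar = integral of SF(D) from 0 to infinity.
D_bar, _ = quad(lambda D: lq(D, alpha, beta), 0, np.inf)
print(alpha, beta, D0, n, D_bar)
```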

Dynamic forecasts of bankruptcy with Recurrent Neural Network model (RNN(Recurrent Neural Network)을 이용한 기업부도예측모형에서 회계정보의 동적 변화 연구)

  • Kwon, Hyukkun;Lee, Dongkyu;Shin, Minsoo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.139-153
    • /
    • 2017
  • Corporate bankruptcy can cause great losses not only to stakeholders but also to many related sectors of society. Bankruptcies have increased through successive economic crises, so bankruptcy prediction models have become more and more important, and corporate bankruptcy has been regarded as a major research topic in business management, with many studies also in progress in industry. Previous studies attempted to improve prediction accuracy and resolve the overfitting problem with various statistics-based methodologies, such as Multivariate Discriminant Analysis (MDA) and the Generalized Linear Model (GLM). More recently, researchers have used machine learning methodologies such as the Support Vector Machine (SVM) and Artificial Neural Network (ANN), as well as fuzzy theory and genetic algorithms; through these developments many bankruptcy models have been built and performance has improved. In general, a company's financial and accounting information changes over time, as does the market situation, so it is difficult to predict bankruptcy from information at a single point in time. Even though such static models ignore the time effect and therefore yield biased results, dynamic models have not been studied much, so a dynamic model offers the possibility of improving bankruptcy prediction. In this paper, we propose the RNN (Recurrent Neural Network), a deep learning methodology that learns from time-series data and is known to perform well. For estimating the prediction model and comparing forecasting performance, we selected non-financial firms listed on the KOSPI, KOSDAQ and KONEX markets from 2010 to 2016. To avoid the mistake of predicting bankruptcy from financial information that already reflects the deterioration of a company's financial condition, the financial information was collected with a lag of two years, and the default period was defined as January to December of the year. We defined bankruptcy as the abolition of listing due to sluggish earnings, confirmed at KIND, the corporate stock information website. We then selected variables from previous papers: the first variable set consists of Z-score variables, which have become traditional in bankruptcy prediction, and the second is a dynamic variable set. With the first variable set we selected 240 normal companies and 226 bankrupt companies; with the second, 229 normal companies and 226 bankrupt companies. We created a model that reflects dynamic changes in time-series financial data, and by comparing it with existing bankruptcy prediction models we found that it can help improve the accuracy of bankruptcy prediction. We used financial data from KIS Value (a financial database) and selected Multivariate Discriminant Analysis (MDA), the Generalized Linear Model known as logistic regression (GLM), the Support Vector Machine (SVM), and the Artificial Neural Network (ANN) as benchmark models. The experiment showed that the RNN outperformed the comparative models: its accuracy was high for both variable sets, as was the Area Under the Curve (AUC), and the hit-ratio table showed that the RNN predicted poor companies as bankrupt at a higher rate than the other models. A limitation of this paper is that an overfitting problem occurs during RNN training, but we expect it can be resolved by selecting more training data and appropriate variables. These results suggest that this research will contribute to the development of bankruptcy prediction by proposing a new dynamic model.
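
As an illustration only (not the authors' implementation), a sequence classifier of the kind described, taking several years of financial ratios per firm, can be sketched in Keras; the data, shapes, and hyperparameters below are placeholders.

```python
# Illustration only (not the authors' setup): an RNN that reads a few
# years of financial ratios per firm and outputs bankruptcy probability.
import numpy as np
import tensorflow as tf

T, F = 5, 12  # years of history, financial ratios per year
X = np.random.rand(466, T, F).astype("float32")  # mock firm sequences
y = np.random.randint(0, 2, size=(466,))         # 1 = bankrupt (mock)

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(T, F)),
    tf.keras.layers.Dropout(0.5),  # one common guard against the
                                   # overfitting the paper reports
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
```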

Anticarcinogenic Effects of Sargassum fulvellum Fractions on Several Human Cancer Cell Lines in vitro (모자반 분획물의 in vitro에서의 항발암효과)

  • 배송자
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.33 no.3
    • /
    • pp.480-486
    • /
    • 2004
  • Despite many therapeutic advances in the understanding of carcinogenic processes, overall mortality statistics are unlikely to change without a reorientation of concepts toward the use of natural products as new anticarcinogenic agents. In this study, we investigated the anticarcinogenic activity and the antioxidant and DPPH-scavenging activity of Sargassum fulvellum (SF). SF was extracted with methanol and further fractionated into five partition layers: hexane (SFMH), ethyl ether (SFMEE), ethyl acetate (SFMEA), butanol (SFMB) and aqueous (SFMA). We determined the cytotoxic effect of these layers on human cancer cells by MTT assay. Among the partition layers, at a starting concentration of 100 µg/mL, SFMEE showed very high cytotoxicity of 92, 90 and 84% against the three human cancer cell lines HepG2, HT-29 and HeLa, respectively, and remained high across five concentration levels spaced 100 µg/mL apart. SFMEA showed low cytotoxicity at the starting concentration, but its growth-inhibitory effect on the cancer cell lines increased with concentration and peaked at 500 µg/mL, at 91, 96 and 98% against the same three cell lines. We observed a quinone reductase (QR)-inducing effect in all fraction layers of SF; SFMEE showed a tendency in QR induction similar to that of its cytotoxicity. The QR-inducing effect of SFMEE on HepG2 cells at 25 µg/mL was 3 times the control value of 1.0, and that of SFMH on HepG2 cells tended to be concentration dependent, reaching a ratio 2.5 times the control value at 100 µg/mL. To assess the antioxidative effects of the SF extract and partition layers, reducing activity was screened via the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical-scavenging potential. SFM showed antioxidant activity similar to that of the BHT and vitamin C groups.
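
For context, the cytotoxicity and scavenging percentages quoted above come from two routine control-relative calculations; the sketch below states them explicitly with made-up absorbance values.

```python
# The two control-relative calculations behind the percentages quoted
# above, with made-up absorbance values.
def cytotoxicity_pct(od_treated: float, od_control: float) -> float:
    """MTT assay: growth inhibition as a percentage of the control."""
    return (1.0 - od_treated / od_control) * 100.0

def dpph_scavenging_pct(abs_sample: float, abs_blank: float) -> float:
    """DPPH assay: drop in absorbance as a percentage of the blank."""
    return (1.0 - abs_sample / abs_blank) * 100.0

print(cytotoxicity_pct(0.08, 1.00))    # ~92%, like SFMEE at 100 ug/mL
print(dpph_scavenging_pct(0.35, 1.00))
```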

Evaluation of Biological Characteristics of Neutron Beam Generated from MC50 Cyclotron (MC50 싸이클로트론에서 생성되는 중성자선의 생물학적 특성의 평가)

  • Eom, Keun-Yong;Park, Hye-Jin;Huh, Soon-Nyung;Ye, Sung-Joon;Lee, Dong-Han;Park, Suk-Won;Wu, Hong-Gyun
    • Radiation Oncology Journal
    • /
    • v.24 no.4
    • /
    • pp.280-284
    • /
    • 2006
  • Purpose: To evaluate the biological characteristics of the neutron beam generated by the MC50 cyclotron located at the Korea Institute of Radiological and Medical Sciences (KIRAMS). Materials and Methods: The neutron beam generated by a 35 MeV proton beam hitting a 15 mm beryllium target was used, and dosimetry data were measured before the in-vitro study. We irradiated the EMT-6 cell line with 0, 1, 2, 3, 4 and 5 Gy of the neutron beam and measured the surviving fraction (SF). The SF curve was also examined at the same doses with lead shielding applied to exclude the gamma-ray component. In the X-ray experiment, the SF curve was obtained after irradiation with 0, 2, 5, 10, and 15 Gy. Results: The neutron beam comprised 84% neutrons and 16% gamma component at a depth of 2 cm with a field size of 26×26 cm², a beam current of 20 µA, and a dose rate of 9.25 cGy/min. The X-ray SF curve, fitted to the linear-quadratic (LQ) model, had an $\alpha/\beta$ ratio of 0.611 ($\alpha$=0.0204, $\beta$=0.0334, R²=0.999). The neutron-beam SF curves had shoulders in the low-dose region and fitted the LQ model well, with R² exceeding 0.99 in all experiments; the mean values of $\alpha$ and $\beta$ were -0.315 (range, -0.254 to -0.360) and 0.247 (0.220 to 0.262), respectively. Adding lead shielding did not straighten the SF curve, and the shoulders in the low-dose region remained. The RBE of the neutron beam was in the range 2.07 to 2.19 at SF=0.1 and 2.21 to 2.35 at SF=0.01. Conclusion: The neutron beam from the MC50 cyclotron has a significant gamma component, which may have contributed to the shoulder of the survival curve. The RBE of the neutron beam generated by the MC50 was about 2.2.
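
A short sketch (assumed, not the authors' code) of how an RBE such as the reported ~2.2 is derived from fitted LQ curves: invert each curve for the dose giving a chosen surviving fraction and take the dose ratio.

```python
# Sketch: derive an RBE by inverting each fitted LQ curve,
# SF = exp(-alpha*D - beta*D^2), for the dose D that gives a chosen
# surviving fraction, then take the dose ratio RBE = D_ref / D_neutron.
import numpy as np

def dose_for_sf(alpha: float, beta: float, sf: float) -> float:
    """Positive root of alpha*D + beta*D^2 = -ln(SF)."""
    c = -np.log(sf)
    if beta == 0.0:
        return c / alpha
    return (-alpha + np.sqrt(alpha**2 + 4.0 * beta * c)) / (2.0 * beta)

# X-ray parameters as reported above; the neutron alpha here is only a
# placeholder, not the paper's fitted value.
d_xray = dose_for_sf(0.0204, 0.0334, sf=0.1)
d_neutron = dose_for_sf(0.9, 0.0, sf=0.1)
print(d_xray / d_neutron)  # RBE at SF = 0.1
```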

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil the success factors and unique features of venture companies in order to identify the sources of their competitive advantage over rivals. Such venture companies, generally making the best use of information technology, have tended to give high returns to investors, and many are therefore keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria for the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's and Fitch is a crucial source for pivotal concerns such as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. This paper is therefore built on the following two ideas for classifying which companies are the more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs; recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and it has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems; in this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical theory and has thus far shown good performance, especially in its capacity to generalize in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, i.e., the maximum separation between classes; the support vectors are the points closest to that hyperplane. When the classes cannot be separated linearly, a kernel function can be used: the inputs are transformed into a high-dimensional feature space, that is, the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the estimation of credit ratings. In this study we employed SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel. For the multi-class SVM, we adopted the one-against-one binary classification method and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. We used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, with financial information for 2005 obtained from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within an error of one class when the accurate class is difficult to determine in the actual market; we therefore also present accuracy within 1-class errors, where the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than the binary classification problem, notwithstanding its efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class classification.
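
As a rough illustration of the classification step with mock data: scikit-learn's SVC uses the one-against-one scheme internally (the Weston-Watkins and Crammer-Singer all-together formulations used in the paper are not available there), and the two scores reported above, the exact hit ratio and accuracy within one class, can be computed as follows.

```python
# Mock illustration of the classification step. scikit-learn's SVC is
# one-against-one internally; the all-together variants cited in the
# paper are not implemented here. Data and grades are random.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(154, 10)              # financial ratios (mock)
y = np.random.randint(0, 4, size=154)    # DEA efficiency grade (mock)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", decision_function_shape="ovo")  # Gaussian RBF
clf.fit(scaler.transform(X_tr), y_tr)
pred = clf.predict(scaler.transform(X_te))

hit_ratio = np.mean(pred == y_te)               # exact-class accuracy
within_one = np.mean(np.abs(pred - y_te) <= 1)  # 1-class tolerance
print(hit_ratio, within_one)
```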

A Study on Forecasting Accuracy Improvement of Case Based Reasoning Approach Using Fuzzy Relation (퍼지 관계를 활용한 사례기반추론 예측 정확성 향상에 관한 연구)

  • Lee, In-Ho;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.67-84
    • /
    • 2010
  • In business, forecasting is the task of estimating what is expected to happen in the future in order to make managerial decisions and plans. Accurate forecasting is therefore very important for major managerial decision making and is the basis for various business strategies, but it is very difficult to make unbiased and consistent estimates because of uncertainty and complexity in the future business environment. That is why a scientific forecasting model should be used to support business decision making, with an effort to minimize the model's forecasting error, the difference between observation and estimate. Nevertheless, minimizing the error is not an easy task. Case-based reasoning is a problem-solving method that utilizes similar past cases to solve the current problem. To build successful case-based reasoning models, retrieving not only the most similar case but also the most relevant case is very important, and the measurement of similarity between cases is a key factor in doing so; it is especially difficult to measure distances when the cases contain symbolic data. The purpose of this study is to improve the forecasting accuracy of the case-based reasoning approach using fuzzy relation and composition. Two methods are adopted to measure the similarity between cases containing symbolic data: one derives the similarity matrix by binary logic (a judgment of sameness between two symbolic values), and the other derives the similarity matrix by fuzzy relation and composition. The study proceeds in the following order: data gathering and preprocessing, model building and analysis, validation analysis, and conclusion. First, in data gathering and preprocessing, we collect a cross-sectional data set whose dependent variable is categorical and whose independent variables include several qualitative variables expressed as symbolic data. The research data consist of financial ratios and the corresponding bond ratings of Korean companies; the ratings cover all bonds rated by one of the bond rating agencies in Korea, and our total sample includes 1,816 companies whose commercial papers were rated in the period 1997~2000. Credit grades are defined as outputs and classified into 5 rating categories (A1, A2, A3, B, C) according to credit level. Second, in model building and analysis, we derive the similarity matrices by binary logic and by fuzzy composition to measure the similarity between cases containing symbolic data; the fuzzy composition types used are max-min, max-product, and max-average. The analysis is then carried out by the case-based reasoning approach with the derived similarity matrices. Third, in the validation analysis, we verify the models through McNemar tests based on hit ratio. Finally, we draw conclusions. As a result, the similarity measuring method using fuzzy relation and composition shows good forecasting performance compared to the method using binary logic for similarity measurement between two symbolic values, but the differences in forecasting performance among the types of fuzzy composition are not statistically significant. The contribution of this study is to propose another methodology in which fuzzy relation and fuzzy composition can be applied to the similarity measurement between two symbolic values, which is the most important factor in building a case-based reasoning model.
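
The three fuzzy compositions named above combine two fuzzy relations R and S into a case-to-case similarity matrix by T[i, j] = max_k op(R[i, k], S[k, j]), where op is min, product, or average. A minimal sketch with mock membership values:

```python
# Minimal sketch of the max-min, max-product, and max-average fuzzy
# compositions used to build the similarity matrix. Inputs are mock.
import numpy as np

RULES = {
    "max-min": np.minimum,
    "max-product": np.multiply,
    "max-average": lambda a, b: (a + b) / 2.0,
}

def compose(R, S, rule="max-min"):
    """Fuzzy composition T = R o S: T[i,j] = max_k rule(R[i,k], S[k,j])."""
    op = RULES[rule]
    n, m = R.shape[0], S.shape[1]
    T = np.empty((n, m))
    for i in range(n):
        for j in range(m):
            T[i, j] = np.max(op(R[i, :], S[:, j]))
    return T

R = np.random.rand(4, 3)   # membership of 4 cases in 3 symbolic values
S = R.T                    # relation from symbolic values back to cases
for rule in RULES:
    print(rule, "\n", compose(R, S, rule).round(2))
```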

Detection of Phantom Transaction using Data Mining: The Case of Agricultural Product Wholesale Market (데이터마이닝을 이용한 허위거래 예측 모형: 농산물 도매시장 사례)

  • Lee, Seon Ah;Chang, Namsik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.161-177
    • /
    • 2015
  • With the rapid evolution of technology, the size, number, and types of databases have increased concomitantly, so data mining approaches face many challenging applications. One such application is the discovery of fraud patterns in agricultural product wholesale transactions. The agricultural product wholesale market in Korea is huge, and vast numbers of transactions are made every day. The demand for agricultural products continues to grow, and the use of electronic auction systems raises the operational efficiency of the wholesale market. Certainly, the number of unusual transactions can be assumed to increase in proportion to the trading amount, and an unusual transaction is often the first sign of fraud. However, it is very difficult to identify and detect these transactions and the corresponding frauds in the agricultural product wholesale market, because the types of fraud are more sophisticated than ever before. Fraud can be detected by verifying the overall transaction records manually, but this requires a significant amount of human resources and is ultimately not a practical approach. Frauds can also be revealed by a victim's report or complaint, but there are usually no victims in agricultural product wholesale frauds because they are committed through collusion between an auction company and an intermediary wholesaler. Nevertheless, transaction records must be monitored continuously and effort made to prevent fraud, because fraud not only disturbs the fair trade order of the market but also rapidly reduces the market's credibility. Applying data mining to such an environment is very useful, since it can properly discover unknown fraud patterns or features from a large volume of transaction data. The objective of this research is to empirically investigate the factors necessary to detect fraudulent transactions in an agricultural product wholesale market by developing a data mining-based fraud detection model. One major fraud is the phantom transaction, a colluding transaction between the seller (auction company or forwarder) and the buyer (intermediary wholesaler): they pretend to fulfill the transaction by recording false data in the online transaction processing system without actually selling products, and the seller receives money from the buyer. This leads to overstated sales performance and illegal money transfers, which reduce the credibility of the market. This paper reviews the environment of the wholesale market, such as the types of transactions, the roles of market participants, and the various types and characteristics of frauds, and introduces the whole process of developing the phantom transaction detection model. The process consists of the following 4 modules: (1) data cleaning and standardization, (2) statistical data analysis such as distribution and correlation analysis, (3) construction of a classification model using a decision-tree induction approach, and (4) verification of the model in terms of hit ratio. We collected real data from 6 associations of agricultural producers in metropolitan markets. The final model, built with a decision-tree induction approach, revealed that the monthly average trading price of an item offered by forwarders is a key variable in detecting phantom transactions, and the verification procedure confirmed the suitability of the results. However, even though the performance of the results is satisfactory, sensitive issues remain for improving the classification accuracy and the conciseness of the rules. One such issue is the robustness of the data mining model: data mining is very much data-oriented, so data mining models tend to be very sensitive to changes in data or situations, and this non-robustness evidently requires continuous remodeling as data or situations change. We hope that this paper suggests valuable guidelines to organizations and companies that consider introducing or constructing a fraud detection model in the future.
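
As a toy illustration of modules (3) and (4), a decision-tree classifier can be trained and verified by hit ratio as sketched below; the features and labels are mock stand-ins, with the paper's key variable (the forwarder's monthly average trading price) represented only by name.

```python
# Toy sketch of modules (3)-(4): train a decision tree to flag phantom
# transactions and verify it by hit ratio. Features/labels are mock.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((1000, 4))            # e.g. price and volume ratios (mock)
y = (X[:, 0] > 0.9).astype(int)      # 1 = phantom transaction (mock rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)

hit_ratio = tree.score(X_te, y_te)   # module (4): verification
print(hit_ratio)
print(export_text(tree, feature_names=[
    "avg_price", "volume", "buyer_freq", "seller_freq"]))
```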

An Exploratory Study on Forecasting Sales Take-off Timing for Products in Multiple Markets (해외 복수 시장 진출 기업의 제품 매출 이륙 시점 예측 모형에 관한 연구)

  • Chung, Jaihak;Chung, Hokyung
    • Asia Marketing Journal
    • /
    • v.10 no.2
    • /
    • pp.1-29
    • /
    • 2008
  • The objective of our study is to provide an exploratory model for forecasting the sales take-off timing of a product in the context of multinational markets. We evaluated the usefulness of key predictors such as multiple-market information, product attributes, price, and sales for forecasting take-off timing by applying the suggested model to monthly sales data for PDP and LCD TVs provided by a Korean electronics manufacturer. The empirical analysis yields several important results for global companies. First, innovation coefficients obtained from sales data of a particular product in other markets provide the most useful information on the product's sales take-off timing in a target market; however, imitation coefficients obtained from sales data in the target market and other markets are not useful for this purpose. Second, price and product attributes significantly influence take-off timing; it is noteworthy that the ratio of the target product's price to the average market price is more important than the price of the target product itself. Lastly, the cumulative sales of the product remain useful for predicting take-off timing. Our model outperformed the average model in terms of hit rate.
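
For reference, the innovation (p) and imitation (q) coefficients come from the Bass diffusion model; a hedged sketch of estimating them from a lead market's cumulative sales (synthetic data and illustrative starting values, not the manufacturer's figures) follows.

```python
# Hedged sketch: estimate Bass-model innovation (p) and imitation (q)
# coefficients from a lead market's cumulative sales. Synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, p, q, m):
    """Cumulative Bass adoptions at time t; m is the market potential."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

months = np.arange(1, 25, dtype=float)
rng = np.random.default_rng(0)
cum_sales = bass_cumulative(months, 0.01, 0.35, 1_000_000) \
    * (1.0 + 0.02 * rng.standard_normal(months.size))

(p, q, m), _ = curve_fit(bass_cumulative, months, cum_sales,
                         p0=[0.01, 0.3, cum_sales[-1] * 2])
print(p, q)  # p from a lead market feeds the take-off-timing model
```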
