• Title/Summary/Keyword: Split

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.123-138, 2017
  • Since the stock market is driven by the expectations of traders, studies have been conducted to predict stock price movements through analysis of various sources of text data. To predict stock price movements, research has examined not only the relationship between text data and fluctuations in stock prices, but also trading strategies based on news articles and social media responses. Studies that predict the movements of stock prices have also applied classification algorithms by constructing term-document matrices, in the same way as other text mining approaches. Because a document contains many words, it is better to select the words that contribute most when building a term-document matrix. Based on word frequency, words that show too little frequency or importance are removed. Words can also be selected according to their contribution, by measuring the degree to which a word helps to correctly classify a document. The basic idea of constructing a term-document matrix has been to collect all the documents to be analyzed and to select and use the words that have an influence on the classification. In this study, we analyze the documents for each individual stock and select the words that are irrelevant across all categories as neutral words. We extract the words around the selected neutral words and use them to generate the term-document matrix. The approach starts from the idea that stock movements are less related to the presence of the neutral words themselves, and that the words surrounding a neutral word are more likely to affect stock price movements. The generated term-document matrix is then fed to an algorithm that classifies stock price fluctuations. In this study, we first removed stop words and selected neutral words for each stock, and then excluded from the selected words those that also appear in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We used three months of news articles as training data and applied the remaining one month of articles to the model to predict the stock price movement of the next day. We used SVM, Boosting, and Random Forest for building models and predicting the movements of stock prices. The stock market was open for a total of 80 days during the four months (2016/02/01 ~ 2016/05/31); the initial 60 days were used as a training set and the remaining 20 days as a test set. The neutral-word-based algorithm proposed in this study showed better classification performance than the word selection method based on sparsity. This study predicted stock price volatility by collecting and analyzing news articles of the top 10 stocks by market capitalization. We used a term-document-matrix-based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method and the suggested method of removing words from the term-document matrix. The suggested method differs from the existing word extraction method in that it uses not only the news articles for the corresponding stock but also news items for other stocks to determine the words to extract. In other words, it removed not only the words that appeared in both rising and falling cases, but also the words that commonly appeared in the news for other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy.
The limitation of this study is that stock price prediction was set up as classifying rises and falls, and the experiment was conducted only on the top ten stocks. The 10 stocks used in the experiment do not represent the entire stock market. In addition, it is difficult to demonstrate investment performance because stock price fluctuation and rate of return may differ. Therefore, further research is needed using more stocks and evaluating return prediction through trading simulation.
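
As an illustration of the workflow described in this abstract, the sketch below builds a term-document matrix restricted to words that appear near a set of pre-selected neutral words and then trains SVM and Random Forest classifiers. It is not the authors' code: the documents, labels, neutral-word list, and window size are invented placeholders, and scikit-learn is assumed as the modeling library.

```python
# Sketch of the neutral-word idea (not the authors' code): keep only terms that
# appear within `window` tokens of a pre-selected neutral word, build a
# term-document matrix over that reduced vocabulary, and classify next-day
# up/down movements.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

def context_vocabulary(docs, neutral_words, window=3):
    """Collect words occurring within `window` tokens of any neutral word."""
    neutral, vocab = set(neutral_words), set()
    for doc in docs:
        tokens = doc.lower().split()
        for i, tok in enumerate(tokens):
            if tok in neutral:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                vocab.update(t for t in tokens[lo:hi] if t not in neutral)
    return sorted(vocab)

# Invented example data: one news document per trading day, label 1 if the
# stock rose the next day, 0 otherwise.
train_docs = ["the company said earnings beat estimates today",
              "the company said a class action lawsuit was filed today"]
train_y = [1, 0]
test_docs, test_y = ["analysts said earnings momentum is improving today"], [1]
neutral_words = ["said", "today", "company"]        # illustrative neutral words

vocab = context_vocabulary(train_docs, neutral_words)
vectorizer = CountVectorizer(vocabulary=vocab)      # fixed, reduced vocabulary
X_train = vectorizer.fit_transform(train_docs)
X_test = vectorizer.transform(test_docs)

for model in (SVC(kernel="linear"), RandomForestClassifier(n_estimators=100)):
    model.fit(X_train, train_y)
    print(type(model).__name__, "accuracy:", model.score(X_test, test_y))
```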

Studies on the Cutting Management of Pasture during the Mid Summer Season I. Effect of cutting management on tall fescue dominated pasture (고온기 초지의 예취관리에 관한 연구 I. 고온기 예취방법이 tall fescue 우점초지의 재생, 잡초발생 및 수량에 미치는 영향)

  • Seo, S.;Han, Y.C.;Park, M.S.
    • Journal of The Korean Society of Grassland and Forage Science, v.5 no.1, pp.22-32, 1985
  • Optimum pasture management during the summer season is an important factor in maintaining good regrowth and persistence of pasture in Korea. This experiment was carried out to investigate the effects of cutting management on dead plants, weed appearance, regrowth, carbohydrate reserves in stubble, and dry matter yield of tall fescue dominated pasture during the mid summer season. The experiment used a split plot design with 4 replications, with 2 different third-cutting times (July 12 and Aug. 4) as the main plots and 3 different cutting heights (3, 6 and 9 cm) at the third cut as the subplots, and was conducted at the experimental field of the Livestock Experiment Station, Suweon, in 1984. The results obtained are summarized as follows: 1. Considering the meteorological conditions during the experimental period, the temperature was about 2°C higher than in an average year, especially in the first and second decades of August, and the precipitation in 1984 tended to be lower than in an average year. 2. Temperature of the soil surface and underground tended to increase by 1-3°C as the stubble height decreased during the summer season. 3. Regrowth leaf length and leaf area after the third cut increased significantly with higher cutting height at the third cut. 4. A significantly higher total nonstructural carbohydrate (TNC) content in stubble after the third cut was observed with the high stubble cut on July 12. The results indicate that high stubble height reserves more carbohydrates for the early regrowth stage after the third cut than low stubble. On Aug. 4, however, the recovery of TNC content after the third cut was not effective due to high temperature and rainfall. 5. The percentage of dead plants after the third cut was high with the low cutting height during the mid summer season (p<0.05). 6. With the low stubble height in the July 12 cut, the percentage of weeds increased significantly (p<0.05); the main weeds appearing after the third cut were Echinochloa crusgalli > Digitaria sanguinalis > Cyperus iria > Rumex crispus, and so on. For the Aug. 4 cut, weed appearance did not differ among the three cutting heights. 7. Dry matter yield at the third cut increased in the plots cut on Aug. 4 and with higher stubble height (p<0.05). Yields at the fourth and fifth cuts also increased with high stubble height (p<0.05), regardless of harvest time. 8. In total dry matter yield after the third cut, there was no significant difference between the two cutting times. However, total yield for the July 12 cut increased with the high stubble height (p<0.05). 9. From the above results, it is suggested that a 9 cm cutting height during the mid summer season is the most effective for good regrowth, weed control and forage yield of tall fescue dominated pasture.
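
For readers unfamiliar with the split plot design mentioned above, the short sketch below randomizes such a layout: cutting time as the main-plot factor within each of four replication blocks, and cutting height as the subplot factor within each main plot. It is only an illustration of the design, not the station's actual field plan.

```python
# Illustrative randomization of a split plot layout: main-plot factor
# (cutting time) is randomized within each block, and the subplot factor
# (cutting height) is randomized within each main plot.
import random

def split_plot_layout(blocks, main_levels, sub_levels, seed=0):
    rng = random.Random(seed)
    layout = []
    for block in range(1, blocks + 1):
        mains = list(main_levels)
        rng.shuffle(mains)                    # randomize main plots in the block
        for main in mains:
            subs = list(sub_levels)
            rng.shuffle(subs)                 # randomize subplots within the main plot
            layout.extend((block, main, sub) for sub in subs)
    return layout

for plot in split_plot_layout(4, ["July 12", "Aug. 4"], ["3 cm", "6 cm", "9 cm"]):
    print(plot)
```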

Studies on Dry Matter Yields, Chemical Composition and Net Energy Accumulation in Three Leading Temperate Grass Species I. Influence of meteorological factors on the dry matter productivity and net energy value under different cutting management (주요 북방형목초의 건물수량, 화학성분 및 Net Energy 축적에 관한 연구 I. 기상환경 및 예취관리에 따른 건물 및 에너지 생산성 변화)

  • F. Muhlschlegel;G. Voigtlander
    • Journal of The Korean Society of Grassland and Forage Science, v.6 no.2, pp.103-110, 1986
  • The experiments were carried out to study the influence of meteorological factors and cutting management on dry matter accumulation and net energy value in orchardgrass (Dactylis glomerata L.) cv. Potomac and Baraula, perennial ryegrass (Lolium perenne L.) cv. Reveille and Semperweide, and meadow fescue (Festuca pratensis Huds.) cv. Cosmos 11 and N.F.G. The field trials were laid out as a split plot design with three cutting regimes of 6-7 cuts at grazing stage, 4-5 cuts at silage stage and 3 cuts at hay stage, in Korea and West Germany from 1975 to 1979. The results obtained are summarized as follows: 1. Productivity of orchardgrass, perennial ryegrass and meadow fescue was mainly affected by cutting systems and meteorological factors, especially air temperature, rainfall, solar radiation and their interactions. In West Germany, cutting frequency was found to be the most important factor influencing dry matter yield and net energy value. 2. Orchardgrass, taken as the average of all experimental sites in Korea, produced a high dry matter yield of 875 kg/10 a, which was as much as 32% and 27% higher than those of perennial ryegrass and meadow fescue, respectively. The annual dry matter yields of orchardgrass from 1976 to 1977 showed little variation. Dry matter yields in Freising and Braunschweig in West Germany increased continuously in all grass species. 3. Orchardgrass, perennial ryegrass and meadow fescue showed different responses to cutting frequency. The highest dry matter yields were found under 3 cuts at hay stage for orchardgrass and 4-5 cuts at silage stage for perennial ryegrass and meadow fescue. In West Germany, dry matter yields, averaged over all grass species under the different cutting systems, were 1326 kg, 1175 kg and 1098 kg/10 a for 3 cuts, 4-5 cuts and 6-7 cuts, respectively. 4. Chemical composition and net energy concentration of the temperate grasses were influenced by cutting management. The highest yields of digestible crude protein were obtained under 6-7 cuts at grazing stage both in Korea and West Germany. In net energy yields, the 3-cut system produced the highest yield with 694 (orchardgrass), 665 (perennial ryegrass) and 623 kStE/10 a (meadow fescue). However, frequent cutting at grazing and silage stage produced higher yields than 3 cuts at hay stage in Cheju, Suweon and Taekwalyong.

A Survey on Consumption Behaviors of the Fast-Foods in University Students (대학생의 패스트푸드 소비행태에 관한 연구)

  • Cho, Kyu-Seok;Im, Byoung-Soon;Kim, Seok-Eun;Kim, Gye-Woong
    • Korean Journal of Human Ecology, v.14 no.2, pp.313-319, 2005
  • This survey was conducted in order to obtain basic data for desirable consumption habits through investigation and analysis of university students' fast food consumption behaviors. Questionnaires were collected from a total of 374 male and female students living in big or small and medium-sized cities in August, 2004. The contents surveyed were utilization and expenses of fast foods, choice of fast foods, the relationship between fast foods and diet, and characteristics of fast food restaurants. The results obtained are summarized as follows: 1. The composition of the respondents varied according to gender, residence, and the size of the city they live in. Males accounted for 48.66% of the respondents and females for 51.34%. The shares of residents in apartments and stand-alone houses were 54.81% and 45.19%, respectively. 47.33% of the respondents lived in big cities, while 52.67% lived in small and medium-sized cities. 2. 70.1% of the respondents reported that they eat fast food with friends. There was a highly significant difference between males and females in the type of eating companions (p<0.001). The most common eating frequency was 1 to 2 times a week, reported by 63.7% of the respondents; in this respect, however, there was no significant difference between the two sexes. 3. 64.2% of the respondents paid more than 20,000 won for fast foods per week, with no significant difference between genders. They tended to split the bill rather than have one person pay for everything, and there was a highly significant difference between genders in payment method (p<0.001). 4. 52.1% of the respondents chose the menu themselves. Their most favored food was chicken (26.5%), which showed a statistically significant difference between genders (p<0.001). 46.8% of them preferred Coke as a drink, with no significant difference between genders. 42.2% of the respondents had fast foods between lunch and dinner, which also showed no significant difference between genders. The most important factor in choosing a menu was its taste (62.8%), which indicated a significant difference between males and females (p<0.05). 5. The preference for fast foods was attributed to the influence of western culture (36.4%) and eating-out habits (29.1%), which differed significantly between genders (p<0.05). Those who eat fast foods answered that they have normal weight and a normal body type (49.5%); 24.3% of them were relatively overweight, with a significant difference between genders (p<0.05). 63.4% of the respondents considered themselves not picky about food, and there was a significant difference between genders (p<0.05). 78.3% of them mostly preferred franchise restaurants because they are convenient and cheap.

Herbicidal Phytotoxicity under Adverse Environments and Countermeasures (불량환경하(不良環境下)에서의 제초제(除草劑) 약해(藥害)와 경감기술(輕減技術))

  • Kwon, Y.W.;Hwang, H.S.;Kang, B.H.
    • Korean Journal of Weed Science, v.13 no.4, pp.210-233, 1993
  • Herbicides have become as indispensable as nitrogen fertilizer in Korean agriculture since 1970. It is estimated that in 1991 more than 40 herbicides were registered for the rice crop and treated to an area 1.41 times the rice acreage; more than 30 herbicides were registered for field crops and treated to 89% of the crop area; and the treatment acreage of 3 non-selective foliar-applied herbicides reached 2,555 thousand hectares. During the last 25 years herbicides have benefited Korean farmers substantially in labor, cost and time of farming. Any herbicide which causes crop injury in ordinary use is not allowed to be registered in most countries. Herbicides, however, can cause crop injury when they are misused, abused or used under adverse environments. Herbicide use on more than 100% of the crop acreage implies an increased probability that herbicides are used incorrectly or under adverse conditions. This is evidenced by the authors' nationwide surveys in 1992 and 1993, in which about 25% of farmers had experienced herbicide-caused crop injury more than once during the last 10 years; one-half of the injury incidences involved crop yield losses greater than 10%. Crop injury caused by herbicides had not occurred to a serious extent in the 1960s, when fewer than 5 herbicides were used by farmers on less than 12% of the total acreage. Farmers ascribed about 53% of the herbicide injury incidences at their fields to their own misuse, such as overdose, careless or improper application, off-time application or wrong choice of herbicide, while 47% of the incidences were mainly due to adverse natural conditions. Such misuse can be reduced to a minimum through enhanced education and extension services for correct use and, although undesirable, through farmers' increased experience of phytotoxicity. The most difficult primary problem arises from the lack of countermeasures for farmers to cope with various adverse environmental conditions. At present almost all herbicides carry "Do not use!" instructions on the label to avoid crop injury under adverse environments. These "Do not use!" situations include sandy, highly percolating, or infertile soils; paddies fed by cool gushing water; poorly draining paddies; terraced paddies; too wet or dry soils; days of abnormally cool or high air temperature, etc. Meanwhile, the cultivated lands are in poor condition: the average organic matter content ranges from 2.5 to 2.8% in paddy soil and 2.0 to 2.6% in upland soil; the cation exchange capacity ranges from 8 to 12 m.e.; approximately 43% of paddies and 56% of uplands are of sandy to sandy-gravel soil; and only 42% of paddy and 16% of upland fields are on flat land. The present situation means that about 40 to 50% of soil-applied herbicides are used on fields where the label instructs "Do not use!". Yet no positive effort has been made for 25 years by government or companies to develop countermeasures. It is a genuinely complicated social problem. In the 1960s and 1970s a subsidy program to incorporate hillside red clayish soil into sandy paddies, as well as a campaign for increased application of compost to the field, had been operating. Yet the majority of the sandy soils remain sandy, and the program and campaign have been stopped. With regard to this sandy soil problem the authors have developed a method of "split application of a herbicide onto sandy soil fields". A model case study has been carried out with success and is introduced with its key procedure in this paper. Climate is variable in its nature.
Among the climatic components, a sudden fall or rise in temperature is hardly avoidable for a crop plant. Korea's spring air temperature fluctuates greatly; for example, the daily mean air temperature of Inchon city varied from 6.31 to 16.81°C on April 20, the early seeding time of crops, within a ±2 SD range of the 30-year records. Seeding early in the season means an increased liability to phytotoxicity, and this will be more evident in direct water-seeding of rice. About 20% of farmers depend on cold underground water pumped for rice irrigation. If the well is deeper than 70 m, the fresh water may be as cold as about 10°C. The water should be warmed to about 20°C before irrigation, but this is not well practiced by farmers. In addition to the aforementioned adverse conditions there are many other aspects to be amended. Among them, the worst for liquid-spray herbicides is an almost total lack of proper knowledge of nozzle types and of concern with even spraying among administrative and rural extension officers, companies and farmers. Nozzles and sprayers appropriate for herbicide spraying are not even available in the market. Most people perceive all pesticide sprayers as the same and are concerned with the speed and ease of spraying, not with correct spraying. There are many points to be improved to minimize herbicidal phytotoxicity in Korea and many ways to achieve the goal. First of all it is suggested that 1) the present evaluation of a new herbicide at standard and double doses in registration trials be extended to standard, double and triple doses, to exploit the response slope in making decisions for approval and in recommending different doses for different situations on the label; 2) the government recognize the facts and nature of the present problem, correct the present misperceptions, and develop an appropriate national program for improvement of soil conditions, spray equipment, and extension manpower and services; 3) researchers enhance research on the countermeasures; and 4) herbicide makers/dealers correct their misperceptions and sales policies, develop databases on the detailed use conditions of individual consumers, and serve consumers with direct counsel based on those databases.

The Effect of Common Features on Consumer Preference for a No-Choice Option: The Moderating Role of Regulatory Focus (재몰유선택적정황하공동특성대우고객희호적영향(在没有选择的情况下共同特性对于顾客喜好的影响): 조절초점적조절작용(调节焦点的调节作用))

  • Park, Jong-Chul;Kim, Kyung-Jin
    • Journal of Global Scholars of Marketing Science, v.20 no.1, pp.89-97, 2010
  • This study researches the effects of common features on a no-choice option with respect to regulatory focus theory. The primary interest is in three factors and their interrelationship: common features, the no-choice option, and regulatory focus. Prior studies have compiled a vast body of research in these areas. First, the "common features effect" has been observed by many noted marketing researchers. Tversky (1972) proposed the seminal theory, the EBA (elimination by aspects) model. According to this theory, consumers are prone to focus only on unique features during comparison processing, thereby dismissing any common features as redundant information. Recently, however, more provocative ideas have challenged the EBA model by asserting that common features really do affect consumer judgment. Chernev (1997) first reported that adding common features mitigates the choice gap because of the increasing perception of similarity among alternatives. Later, however, Chernev (2001) published a critically developed study against his prior perspective, proposing that common features may be a cognitive load for consumers and that consumers may therefore prefer heuristic processing to systematic processing. This brings one question to the forefront: Do common features affect consumer choice? If so, what are the concrete effects? This study tries to answer the question with respect to the no-choice option and regulatory focus. Second, some researchers hold that the no-choice option is another best alternative for consumers, who are likely to avoid having to choose in the context of knotty trade-off settings or mental conflicts. Hope for the future may also increase the no-choice option in the context of optimism or the expectancy of a more satisfactory alternative appearing later. Other issues reported in this domain are time pressure, consumer confidence, and the number of alternatives (Dhar and Nowlis 1999; Lin and Wu 2005; Zakay and Tsal 1993). This study casts the no-choice option in yet another perspective: the interactive effects between common features and regulatory focus. Third, regulatory focus theory is a very popular theme in recent marketing research. It suggests that consumers have two opposing focal goals: promotion vs. prevention. A promotion focus deals with the concepts of hope, inspiration, achievement, or gain, whereas a prevention focus involves duty, responsibility, safety, or loss aversion. Thus, while consumers with a promotion focus tend to take risks for gain, the same does not hold true for a prevention focus. Regulatory focus theory predicts consumers' emotions, creativity, attitudes, memory, performance, and judgment, as documented in a vast field of marketing and psychology articles. The perspective of the current study, exploring consumer choice and common features, is a somewhat creative viewpoint in the area of regulatory focus. These reviews inspire this study of the possible interaction between regulatory focus and common features with a no-choice option. Specifically, adding common features rather than omitting them may increase the no-choice ratio in the choice setting only for prevention-focused consumers, whereas the reverse may hold for promotion-focused consumers. The reasoning is that when prevention-focused consumers come in contact with common features, they may perceive higher similarity among the alternatives, and this conflict among similar options would increase the no-choice ratio.
Promotion-focused consumers, however, may perceive common features as a cue for confirmation bias; their confirmatory processing would make their prior preference more robust, so the no-choice ratio may shrink. This logic is verified in two experiments. The first is a 2×2 between-subject design (common features present or absent × regulatory focus) using digital cameras as the stimulus, a product very familiar to young subjects. The regulatory focus variable was median-split based on an eleven-item measure. Common features included zoom, weight, memory, and battery, whereas the other two attributes (pixels and price) were unique features. Results supported our hypothesis that adding common features enhanced the no-choice ratio only for prevention-focused consumers, not for those with a promotion focus. These results confirm our hypothesis of an interactive effect between regulatory focus and common features. Prior research had suggested that including common features has an effect on consumer choice, but this study shows that common features affect choice differently across consumer segments. The second experiment replicated the results of the first. It was identical to the first except for two changes: a priming manipulation and a different stimulus. For the promotion focus condition, subjects had to write an essay using words such as profit, inspiration, pleasure, achievement, development, hedonic, change, pursuit, etc. For prevention, they had to use the words persistence, safety, protection, aversion, loss, responsibility, stability, etc. The rooms for rent used as stimuli had common features (sunshine, facility, ventilation) and unique features (distance time and building state). These attributes implied various levels and valences for replication of the prior experiment. Our hypothesis was supported again, and the interaction effect between regulatory focus and common features was significant. Thus, these studies showed the dual effects of common features on consumer choice of a no-choice option. Adding common features may enhance or mitigate no-choice, contradictory as it may sound. Under a prevention focus, adding common features is likely to enhance the no-choice ratio because of increasing mental conflict; under a promotion focus, it is prone to shrink the ratio, perhaps because of a confirmation bias. The research has practical and theoretical implications for marketers, who may need to consider common features carefully in a practical display context according to consumer segmentation (i.e., promotion vs. prevention focus). Theoretically, the results suggest a meaningful moderator variable between common features and no-choice, in that the effect on the no-choice option partly depends on regulatory focus. This variable corresponds not only to a chronic perspective but also to a situational perspective in our hypothesis domain. Finally, in light of some shortcomings of the research, such as overlooked attribute importance, the low ratio of no-choice, or the external validity issue, we hope it influences future studies to explore the little-known world of the no-choice option.
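
As a hypothetical re-analysis of the 2×2 design described above (simulated responses, not the study's data), the sketch below median-splits a regulatory-focus score and tests the common-features × focus interaction on the probability of selecting the no-choice option with a logistic regression in statsmodels.

```python
# Hypothetical re-analysis of the 2x2 design (simulated data, not the study's):
# median-split a regulatory-focus score and test the common-features x focus
# interaction on the probability of picking the no-choice option.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400
focus_score = rng.normal(size=n)                      # simulated 11-item scale score
prevention = (focus_score > np.median(focus_score)).astype(int)
common = rng.integers(0, 2, size=n)                   # common features shown or not

# Assumed cell probabilities: common features raise no-choice only under a
# prevention focus and slightly lower it under a promotion focus.
p_no_choice = 0.15 + 0.20 * common * prevention - 0.05 * common * (1 - prevention)
no_choice = rng.binomial(1, p_no_choice)

df = pd.DataFrame({"no_choice": no_choice, "common": common, "prevention": prevention})
fit = smf.logit("no_choice ~ common * prevention", data=df).fit(disp=False)
print(fit.summary().tables[1])                        # common:prevention is the test
```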

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services, v.14 no.6, pp.1-8, 2013
  • Data mining techniques are used to find important and meaningful information in huge databases, and pattern mining is one of the significant data mining techniques. Pattern mining is a method of discovering useful patterns from huge databases. Frequent pattern mining, one of the pattern mining techniques, extracts patterns whose frequencies are higher than a minimum support threshold; such patterns are called frequent patterns. Traditional frequent pattern mining is based on a single minimum support threshold for the whole database. This single-support model implicitly assumes that all of the items in the database have the same nature. In real-world applications, however, each item in a database can have its own characteristics, and thus an appropriate pattern mining technique which reflects these characteristics is required. In the framework of frequent pattern mining, where the natures of items are not considered, the single minimum support threshold must be set to a very low value to mine patterns containing rare items; this, however, leads to too many patterns including meaningless items. In contrast, no such pattern can be mined if too high a threshold is used. This dilemma is called the rare item problem. To solve this problem, initial research proposed approximate approaches which split data into several groups according to item frequencies or group related rare items. However, these methods cannot find all of the frequent patterns including rare frequent patterns because they are based on approximate techniques. Hence, a pattern mining model with multiple minimum supports was proposed in order to solve the rare item problem. In this model, each item has a corresponding minimum support threshold, called MIS (Minimum Item Support), which is calculated from item frequencies in the database. By applying the MIS, the multiple minimum supports model finds all of the rare frequent patterns without generating meaningless patterns or losing significant patterns. Meanwhile, candidate patterns are extracted during the process of mining frequent patterns, and in the single minimum support model only the single minimum support is compared with the frequencies of the candidate patterns; therefore, the characteristics of the items that constitute the candidate patterns are not reflected, and the rare item problem occurs. To address this issue, the multiple minimum supports model uses the minimum MIS value among all of the items in a candidate pattern as the minimum support threshold for that candidate pattern, so that its characteristics are considered. To efficiently mine frequent patterns including rare frequent patterns based on this concept, tree-based algorithms of the multiple minimum supports model sort items in the tree in MIS-descending order, in contrast to those of the single minimum support model, where the items are ordered in frequency-descending order. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and conduct a performance evaluation against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple minimum supports based algorithm outperforms the single minimum support based one while demanding more memory for the MIS information. Moreover, both compared algorithms show good scalability.
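
The sketch below illustrates the multiple-minimum-supports idea on a toy transaction set: each item gets its own MIS derived from its frequency (here MIS(i) = max(β·support(i), LS), a common formulation that may differ from the exact one used in the paper), and a candidate pattern is called frequent when its support reaches the minimum MIS among its items.

```python
# Toy illustration of frequent pattern mining with multiple minimum supports:
# a candidate's support is compared against the minimum MIS of its items.
from collections import Counter
from itertools import combinations

transactions = [
    {"bread", "milk"}, {"bread", "diamond"}, {"milk", "bread", "butter"},
    {"diamond"}, {"bread", "milk", "butter"}, {"milk"},
]
n = len(transactions)
support = {item: cnt / n for item, cnt in
           Counter(i for t in transactions for i in t).items()}

BETA, LS = 0.6, 0.1                      # MIS tuning parameter and lowest support
mis = {item: max(BETA * s, LS) for item, s in support.items()}

def pattern_support(pattern):
    return sum(pattern <= t for t in transactions) / n

def is_frequent(pattern):
    """Frequent iff support(pattern) >= min MIS over the pattern's items."""
    return pattern_support(pattern) >= min(mis[i] for i in pattern)

for size in (1, 2):
    for pattern in combinations(sorted(support), size):
        p = frozenset(pattern)
        if is_frequent(p):
            print(set(p), round(pattern_support(p), 2))
```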

Randomized Trial of Early Versus Late Alternating Radiotherapy/Chemotherapy in Limited-Disease Patients with Small Cell Lung Cancer (국한성병기 소세포폐암 환자에서 조기 혹은 지연 교대 방사선-항암제치료의 전향적 비교연구)

  • Lee Chang Geol;Kim Joo Hang;Kim Sung Kyu;Kim Sei Kyu;Kim Gwi Eon;Suh Chang Ok
    • Radiation Oncology Journal, v.20 no.2, pp.116-122, 2002
  • Purpose: A randomized prospective study was conducted to compare the efficacy of early and late alternating schedules of radiotherapy with carboplatin and ifosfamide chemotherapy in patients with limited-disease small cell lung cancer. Materials and Methods: From August 1993 to August 1996, a total of 44 patients with newly diagnosed, limited-disease small cell lung cancer, PS 0~2 and weight loss <10%, were enrolled in a randomized trial which compared early alternating radiotherapy (RT)/chemotherapy (CT) and late alternating RT/CT. The CT regimen included ifosfamide 1.5 g/m² IV on days 1-5 and carboplatin AUC 5/day IV on day 2, performed at 4-week intervals for a total of 6 cycles. RT (54 Gy/30 fr) was started after the first cycle of CT (early arm, N=22) or after the third cycle of CT (late arm, N=22) with a split course of treatment. Results: The pretreatment characteristics of the two arms were well balanced. The response rates in the early (86%) and late (85%) arms were similar. The median survival durations and 2-year survival rates were 15 months and 22.7% in the early arm, and 17 months and 14.9% in the late arm (p=0.47 by the log-rank test). The two-year progression-free survival rates were 19.1% in the early arm and 19.6% in the late arm (p=0.52 by the log-rank test). Acute grade 3 or 4 hematologic and nonhematologic toxicities were similar between the two arms. Eighteen patients (82%) completed 6 cycles of CT in the early arm and 17 (77%) in the late arm. Four patients received less than 45 Gy of RT in the early arm and two in the late arm. There was no significant difference in the failure patterns. The local failure rate was 43% in the early arm and 45% in the late arm. The first site of failure was the brain in 24% of the early arm patients compared to 35% in the late arm (p=0.51). Conclusion: There were no statistical differences in the overall survival rate or the pattern of failure between early and late alternating RT/CT in patients with limited-disease small cell lung cancer.
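
For readers who want to reproduce this kind of two-arm comparison, the sketch below fits a Kaplan-Meier curve and runs a log-rank test with the lifelines package on simulated survival times; the numbers are invented and are not the trial data.

```python
# Illustrative two-arm survival comparison (simulated data, not the trial's).
# Requires the `lifelines` package (pip install lifelines).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Simulated survival in months for two 22-patient arms (median ~15 vs ~17).
early = rng.exponential(15 / np.log(2), 22)
late = rng.exponential(17 / np.log(2), 22)
censor = 36                                   # administrative censoring at 3 years
obs_early, evt_early = np.minimum(early, censor), (early <= censor).astype(int)
obs_late, evt_late = np.minimum(late, censor), (late <= censor).astype(int)

km = KaplanMeierFitter()
km.fit(obs_early, event_observed=evt_early, label="early arm")
print("median survival (early arm):", km.median_survival_time_)

result = logrank_test(obs_early, obs_late,
                      event_observed_A=evt_early, event_observed_B=evt_late)
print("log-rank p-value:", result.p_value)
```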

A New Exploratory Research on Franchisor's Provision of Exclusive Territories (가맹본부의 배타적 영업지역보호에 대한 탐색적 연구)

  • Lim, Young-Kyun;Lee, Su-Dong;Kim, Ju-Young
    • Journal of Distribution Research, v.17 no.1, pp.37-63, 2012
  • In the franchise business, exclusive sales territory (abbreviated EST in the tables) protection is a very important issue from an economic, social and political point of view. It affects the growth and survival of both franchisor and franchisee and often raises issues of social and political conflict. When franchisees are not familiar with the related laws and regulations, the franchisor has a high chance of exploiting this. Exclusive sales territory protection by the manufacturer and distributors (wholesalers or retailers) means a sales area restriction by which only certain distributors have the right to sell products or services. A distributor who has been granted an exclusive sales territory can protect its own territory, whereas it may be prohibited from entering other regions. Even though exclusive sales territory is quite a critical problem in the franchise business, there is not much rigorous research about its reasons, results, evaluation, and future direction based on empirical data. This paper tries to address the problem not only in terms of logical and nomological validity, but also through empirical validation. In pursuing the empirical analysis, we take into account the difficulties of real data collection and of statistical analysis techniques. We use a set of disclosure document data collected by the Korea Fair Trade Commission, instead of the conventional survey method, which is usually criticized for its measurement error. Existing theories about exclusive sales territory can be summarized into two groups as shown in the table below. The first is about the effectiveness of exclusive sales territory from both the franchisor's and the franchisee's point of view. In fact, the output of exclusive sales territory can be positive for franchisors but negative for franchisees; it can also be positive in terms of sales but negative in terms of profit. Therefore, variables and viewpoints should be set properly. The other is about the motive or reason why exclusive sales territory is protected. The reasons can be classified into four groups: industry characteristics, franchise system characteristics, capability to maintain exclusive sales territory, and strategic decision. Within the four groups of reasons, there are more specific variables and theories as below. Based on these theories, we develop nine hypotheses, which are briefly shown in the last table below with the results. In order to validate the hypotheses, data were collected from the government (FTC) homepage, which is an open source. The sample consists of 1,896 franchisors and contains about three years of operating data, from 2006 to 2008. Within the sample, 627 have an exclusive sales territory protection policy, and those with such a policy are not evenly distributed over the 19 representative industries. Additional data were also collected from other government agency homepages, such as Statistics Korea, and we combine data from various secondary sources to create meaningful variables, as shown in the table below. All variables are dichotomized by mean or median split if they are not inherently dichotomous by definition, since each hypothesis is composed of multiple variables and there is no solid statistical technique that incorporates all these conditions to test the hypotheses. This paper uses a simple chi-square test because the hypotheses and theories are built upon quite specific conditions such as industry type, economic conditions, company history and various strategic purposes.
It is almost impossible to find samples that satisfy all of these conditions, and they cannot be manipulated in experimental settings. More advanced statistical techniques work well on clean data without exogenous variables, but not on real, complex data. The chi-square test is applied by grouping samples into four cells with two criteria: whether they protect an exclusive sales territory or not, and whether they satisfy the conditions of each hypothesis. We then test whether the proportion of sample franchisors that satisfy the conditions and protect the exclusive sales territory significantly exceeds the proportion that satisfy the conditions and do not protect it. In fact, the chi-square test is equivalent to a Poisson regression, which allows more flexible application. As a result, only three hypotheses are accepted. When attitude toward risk is high, so that the royalty fee is determined according to sales performance, EST protection produces poor results, as expected. When the franchisor protects EST in order to recruit franchisees easily, EST protection produces better results. Also, when EST protection is intended to improve the efficiency of the franchise system as a whole, it shows better performance. High efficiency is achieved because EST prohibits free riding by franchisees who exploit others' marketing efforts, encourages proper investments, and distributes franchisees evenly across regions. The other hypotheses are not supported by the significance tests. Exclusive sales territory should be protected from proper motives and administered for mutual benefit. Legal restrictions driven by a government agency like the FTC could be misused and cause misunderstandings. Thus, more careful monitoring of real practices and more rigorous studies by both academics and practitioners are needed.
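
The grouping and chi-square test described above can be reproduced in outline as follows; the 2×2 counts are made up for illustration and are not the disclosure-document data.

```python
# Sketch of the cross-classification and chi-square test (made-up counts):
# franchisors grouped by whether they satisfy a hypothesis condition and
# whether they protect an exclusive sales territory (EST).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: condition satisfied / not satisfied; columns: EST protected / not protected.
table = np.array([[180, 220],
                  [447, 1049]])

chi2, p, dof, expected = chi2_contingency(table)
protect_rate_satisfied = table[0, 0] / table[0].sum()
protect_rate_other = table[1, 0] / table[1].sum()
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
print(f"EST protection rate: {protect_rate_satisfied:.2%} vs {protect_rate_other:.2%}")
```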

A study on the Success Factors and Strategy of Information Technology Investment Based on Intelligent Economic Simulation Modeling (지능형 시뮬레이션 모형을 기반으로 한 정보기술 투자 성과 요인 및 전략 도출에 관한 연구)

  • Park, Do-Hyung
    • Journal of Intelligence and Information Systems, v.19 no.1, pp.35-55, 2013
  • Information technology is a critical resource for any company hoping to support and realize its strategic goals, contributing to growth promotion and sustainable development. The selection of information technology and its strategic use are imperative for enhancing every aspect of company management, which has led a wide range of companies to invest continuously in information technology. Despite the keen interest of researchers, managers, and policy makers in how information technology contributes to organizational performance, there is uncertainty and debate about the results of information technology investment. In other words, researchers and managers cannot easily identify the independent factors that impact the investment performance of information technology. This is mainly because many factors, ranging from the internal components of a company to its strategies and external customers, are interconnected with the investment performance of information technology. Using an agent-based simulation technique, this research extracts factors expected to affect investment performance in information technology, simplifies the analysis of their relationships with economic modeling, and examines how performance depends on changes in these factors. In terms of economic modeling, I expand the model that highlights the way in which product quality moderates the relationship between information technology investments and economic performance (Thatcher and Pingry, 2004) by considering the cost of information technology investment and the demand creation resulting from product quality enhancement. For quality enhancement and its consequences for demand creation, I apply the concept of information quality and decision-maker quality (Raghunathan, 1999). This concept implies that investment in information technology improves the quality of information, which, in turn, improves decision quality and performance, thus enhancing the level of product or service quality. Additionally, I consider the effect of word of mouth among consumers, which creates new demand for a product or service through an information diffusion effect. This demand creation is analyzed with an agent-based simulation model of the kind widely used for network analyses. Results show that investment in information technology enhances the quality of a company's product or service, which indirectly affects the economic performance of the company, particularly with regard to factors such as consumer surplus, company profit, and company productivity. Specifically, when a company makes its initial investment in information technology, the resulting increase in the quality of its product or service immediately has a positive effect on consumer surplus, but the investment cost has a negative effect on company productivity and profit. As time goes by, the enhancement of the quality of the company's product or service creates new consumer demand through the information diffusion effect. Finally, the new demand positively affects the company's profit and productivity. In terms of the investment strategy for information technology, the results also reveal that the selection of information technology needs to be based on analysis of the service and the network effect among customers, and demonstrate that information technology implementation should fit the company's business strategy.
Specifically, if a company seeks a short-term enhancement of performance, it needs a one-shot strategy (making a large investment at one time). On the other hand, if a company seeks a long-term sustainable profit structure, it needs a split strategy (making several small investments at different times). The findings from this study make several contributions to the literature. In terms of methodology, the study integrates economic modeling and simulation techniques in order to overcome the limitations of each methodology. It also indicates the mediating effect of product quality on the relationship between information technology and the performance of a company. Finally, it analyzes the effect of information technology investment strategies and of information diffusion among consumers on the investment performance of information technology.
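
To make the word-of-mouth mechanism concrete, the toy agent-based sketch below (an illustration only, not the paper's model) places consumers on a random network and lets the adoption probability grow with product quality and with the share of adopting neighbors, so that higher quality driven by IT investment creates demand faster.

```python
# Toy agent-based word-of-mouth diffusion: consumers sit on a random network,
# and each period a non-adopter adopts with a probability that grows with
# product quality and with the share of neighbors who already adopted.
import random
import networkx as nx

def simulate_diffusion(n=500, quality=0.6, periods=20, seed=1):
    rng = random.Random(seed)
    g = nx.erdos_renyi_graph(n, 0.02, seed=seed)          # consumer network
    adopted = {node: rng.random() < 0.02 for node in g}   # a few early adopters
    demand_path = []
    for _ in range(periods):
        for node in g:
            if adopted[node]:
                continue
            nbrs = list(g.neighbors(node))
            peer_share = sum(adopted[v] for v in nbrs) / len(nbrs) if nbrs else 0.0
            # Higher quality and more adopting neighbors -> higher adoption chance.
            if rng.random() < 0.05 * quality + 0.4 * quality * peer_share:
                adopted[node] = True
        demand_path.append(sum(adopted.values()))
    return demand_path

# Higher product quality (driven by IT investment) speeds up demand creation.
for q in (0.3, 0.6, 0.9):
    print("quality", q, "-> adopters after 20 periods:", simulate_diffusion(quality=q)[-1])
```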