• Title/Summary/Keyword: performance variable


Fragment Analysis for Detection of the FLT3-Internal Tandem Duplication: Comparison with Conventional PCR and Sanger Sequencing (FLT3-ITD 검출을 위한 절편분석법: 일반 중합효소연쇄반응 및 직접염기서열분석법과의 비교)

  • Lee, GunDong;Kim, Jeongeun;Lee, SangYoon;Jang, Woori;Park, Joonhong;Chae, Hyojin;Kim, Myungshin;Kim, Yonggoo
    • Laboratory Medicine Online
    • /
    • v.7 no.1
    • /
    • pp.13-19
    • /
    • 2017
  • Background: We evaluated a sensitive and quantitative method utilizing fragment analysis of the fms-like tyrosine kinase 3-internal tandem duplication (FLT3-ITD), simultaneously measuring mutant allele burden and length, and verified its analytical performance. Methods: The number and allelic burden of FLT3-ITD mutations were determined by fragment analysis. Serial mixtures of mutant and wild-type plasmid DNA were used to calculate the limits of detection of fragment analysis, conventional PCR, and Sanger sequencing. Specificity was evaluated using DNA samples derived from 50 normal donors. Results of fragment analysis were compared with those of conventional PCR using 481 AML specimens. Results: Defined mixtures were consistently and accurately identified by fragment analysis at a 5% relative concentration of mutant to wild-type, and at 10% and 20% ratios by conventional PCR and direct sequencing, respectively. No false positives were identified. Among the 481 AML specimens, 40.1% (193/481) had FLT3-ITD mutations. The mutant allele burden (1.7-94.1%; median, 28.2%) and repeated length of the mutation (14-153 bp; median, 49 bp) were variable. The concordance rate between fragment analysis and conventional PCR was 97.7% (470/481). Fragment analysis was more sensitive than conventional PCR and detected 11 additional cases: seven had mutant allele burdens below 10%, three represented conventional PCR failures, and one showed a false negative because of a short ITD length (14 bp). Conclusions: The new fragment analysis method proved sensitive and reliable for the detection and monitoring of FLT3-ITD in patients with AML, and can be used to simultaneously assess ITD mutant allele burden and length.
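The allele burden and ITD length reported above are derived from the electropherogram peaks produced by fragment analysis. A minimal sketch of that arithmetic (function names and peak values are illustrative assumptions, not from the paper):

```python
def itd_allele_burden(mutant_peak_area, wild_type_peak_area):
    """Mutant allele burden (%) = mutant peak's share of total peak area."""
    total = mutant_peak_area + wild_type_peak_area
    return 100.0 * mutant_peak_area / total

def itd_length(mutant_fragment_bp, wild_type_fragment_bp):
    """ITD length (bp) = size difference between mutant and wild-type fragments."""
    return mutant_fragment_bp - wild_type_fragment_bp

# Hypothetical peaks: a mutant area of 2820 vs. wild-type 7180 reproduces the
# study's median burden of 28.2%; fragment sizes 378 bp vs. 329 bp give the
# median ITD length of 49 bp.
burden = itd_allele_burden(2820, 7180)   # 28.2
length = itd_length(378, 329)            # 49
```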

Associations of Communication Skills, Self-Efficacy on Clinical Performance and Empathy in Trainee Doctors (전공의 의료커뮤니케이션 능력과 진료수행 자기효능감, 공감능력과의 상관관계)

  • Kim, Doehyung;Kim, Min-Jeong;Lee, Haeyoung;Kim, Hyunseuk;Kim, Youngmi;Lee, Sang-Shin
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.29 no.1
    • /
    • pp.49-57
    • /
    • 2021
  • Objectives : This study evaluated the medical communication skills of trainee doctors and analyzed the relationships among medical communication skills, self-efficacy on clinical performance (SECP), and empathy. Methods : A total of 106 trainee doctors from a university hospital participated. The questionnaire comprised self-evaluated medical communication skills, modified SECP, and the Korean version of the Jefferson Scale of Empathy-Health Professionals version. Mean differences in medical communication skills scores according to gender, age, division (intern, internal medicine group, or surgery group), and position (intern, first-/second- and third-/fourth-year residents) were analyzed. Pearson correlation coefficients were determined between medical communication skills, modified SECP, and empathy. The effects of each variable on medical communication skills were verified using a structural equation model. Results : There were no statistically significant mean differences in self-evaluated medical communication skills according to gender, age, division, or position. Medical communication skills had a significant positive correlation with modified SECP (r=0.782, p<0.001) and empathy (r=0.210, p=0.038). Empathy had a direct effect on modified SECP (β=0.30, p<0.01), and modified SECP had a direct effect on medical communication skills (β=0.80, p<0.001). Empathy indirectly influenced medical communication skills, mediated by modified SECP (β=0.26, p<0.05). Conclusions : Medical communication skills are an important core component of residency curricula, as they correlate directly with SECP, which is needed for successful treatment. Moreover, medical communication calls for an understanding that goes beyond empathy alone.
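The mediation result above can be illustrated with the standard product-of-coefficients calculation; a small sketch using the reported path coefficients (the gap from the reported indirect effect of 0.26 reflects rounding of the published paths):

```python
# Product-of-coefficients estimate of the indirect (mediated) effect:
#   empathy -> modified SECP                  (a = 0.30, reported above)
#   modified SECP -> communication skills     (b = 0.80, reported above)
a = 0.30
b = 0.80
indirect = a * b  # 0.24, close to the paper's reported indirect effect of 0.26
```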

Effect of Corn Silage and Soybean Silage Mixture on Rumen Fermentation Characteristics In Vitro, and Growth Performance and Meat Grade of Hanwoo Steers (옥수수 사일리지와 대두 사일리지의 혼합급여가 In Vitro 반추위 발효성상 및 거세한우의 성장과 육질등급에 미치는 영향)

  • Kang, Juhui;Lee, Kihwan;Marbun, Tabita Dameria;Song, Jaeyong;Kwon, Chan Ho;Yoon, Duhak;Seo, Jin-Dong;Jo, Young Min;Kim, Jin Yeoul;Kim, Eun Joong
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.42 no.2
    • /
    • pp.61-72
    • /
    • 2022
  • The present study examined the effect of soybean silage as a crude protein supplement to corn silage in the diet of Hanwoo steers. The first experiment evaluated the effect of replacing corn silage with soybean silage at different levels on rumen fermentation characteristics in vitro. Commercially purchased corn silage was replaced with 0, 4, 8, or 12% soybean silage. Half a gram of substrate was added to 50 mL of buffer and rumen fluid from Hanwoo cows, and then incubated at 39℃ for 0, 3, 6, 12, 24, and 48 h. At 24 h, the pH of the control (corn silage only) was lower (p<0.05) than that of the soybean-supplemented silages, and the pH increased numerically with increasing proportions of soybean silage. Other rumen parameters, including gas production, ammonia nitrogen, and total volatile fatty acids, were variable, but tended to increase with increasing proportions of soybean silage. In the second experiment, 60 Hanwoo steers were allocated to one of three dietary treatments: CON (concentrate with Italian ryegrass), CS (concentrate with corn silage), and CS4% (concentrate with corn silage and 4% soybean silage). Animals were offered the experimental diets for 110 days during the growing period and then finished on commercially available beef diets to evaluate the effect of soybean silage on animal performance and meat quality. With soybean silage, weight gain and feed efficiency were significantly greater than in the other treatments during the growing period (p<0.05). However, the dietary treatments had little effect on meat quality except for meat color. In conclusion, corn silage mixed with soybean silage, even at a low level, improved the ruminal environment and animal performance, particularly carcass weight and feed efficiency during the growing period.

Changing Aspects and Factors of Shaman's Play in Donghaeanbyeolsingut (동해안별신굿 굿놀이의 변화양상과 요인)

  • Kim, Shin-Hyo
    • (The) Research of the performance art and culture
    • /
    • no.38
    • /
    • pp.33-69
    • /
    • 2019
  • In this article, I pay attention to the changes in Shaman's play as part of examining the process of change of Donghaeanbyeolsingut. Currently, Shaman's play can be seen in Baekseok 2-ri, Byeonggok-myeon, Yeongdeok-gun, and in Gugye-ri, Namjeong-myeon, Yeongdeok-gun. Among them, the Shaman Ritual of Baekseok 2-ri has a short cycle, so its changes are easy to observe. In the Shaman Ritual of Baekseok 2-ri, various Shaman's plays are performed, such as Jungdodukjabi (중도둑잡이), Wonnimnori (원님놀이), Talgut (탈굿), Mallori (말놀이), Hotalgut (호탈굿), and Georigut (거리굿). The Shaman's play carried out in Donghaeanbyeolsingut changes every time it is performed. Depending on the degree of change, the changes can be classified as passive or active: passive changes are improvisational, while active changes are intentional. The improvisational changes appear in Jungdodukjabi, Mallori, and Hotalgut; the intentional changes appear in Wonnimnori, Talgut, and Georigut. The biggest change in Shaman's play is the disappearance of the Shaman Ritual itself: rituals are suspended because of a lack of people, financial difficulties, religious conflicts, or accidents. Second, the cycle of the ritual is shortened, or its duration is reduced, which causes parts of Shaman's play to be dropped or compressed. Within the Shaman Ritual, changes in Shaman's play are variable and creative. Change caused by the intervention of spectators is mainly improvisational; change following the prior planning of the performers is intentional, and such change is large. The changing factors of Shaman's play are influenced by the demands of the times and by how the transmitting group perceives the tradition. Changes in the traditional environment can be attributed to a lack of human resources, individualism, changes in the working environment, and time constraints. At the same time, the autonomy given to the performers is a major reason why Shaman's play changes.

Development Strengths of High Strength Headed Bars of RC and SFRC Exterior Beam-Column Joint (RC 및 SFRC 외부 보-기둥 접합부에 대한 고강도 확대머리 철근의 정착강도)

  • Duck-Young Jang;Jae-Won Jeong;Kang-Seok Lee;Seung-Hun Kim
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.27 no.6
    • /
    • pp.94-101
    • /
    • 2023
  • In this study, the development performance of headed bars of SD700 grade was experimentally evaluated in RC (reinforced concrete) and SFRC (steel fiber reinforced concrete) exterior beam-column joints. A total of 10 specimens were tested, with variables including steel fibers, development length, effective depth of the beam, and column stirrups. In the experiments, the specimens showed side-face blowout, concrete breakout, or shear failure depending on the experimental variables. In the RC-series experiments with development length as the variable, the development strength increased by 26.5-42.2% as the development length increased by 25-80%, i.e., not in proportion to the development length. The JD-series experiments, with twice the effective beam depth, showed concrete breakout failure, reducing the maximum strength by 31.5-62% compared with the reference specimen. The S-series experiment, in which the spacing of the shear reinforcement around the headed bars was half that of the reference specimen, increased the maximum strength by 8.4-9.7%. Although the concrete compressive strength of the SFRC was 29.3% lower than that of the RC, the development strength of the SFRC specimens increased by 7.3-12.2%; this confirms that steel-fiber reinforcement greatly improves the development performance of headed bars. Considering that specimens arranged with 92% and 110% of the KDS-based development length reached only 92% and 99% of the expected maximum strength, respectively, a larger safety margin appears necessary. In addition, a design formula that accounts for the effective beam depth relative to the development length is required.

A Data-based Sales Forecasting Support System for New Businesses (데이터기반의 신규 사업 매출추정방법 연구: 지능형 사업평가 시스템을 중심으로)

  • Jun, Seung-Pyo;Sung, Tae-Eung;Choi, San
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.1-22
    • /
    • 2017
  • Analysis of future business or investment opportunities, such as business feasibility analysis and company or technology valuation, necessitates objective estimation of the relevant market and expected sales. While there are various ways to classify the estimation methods for new sales or market size, they can be broadly divided into top-down and bottom-up approaches by their benchmark references. Both methods, however, require considerable resources and time. Therefore, we propose a data-based intelligent demand forecasting system to support the evaluation of new businesses. This study focuses on analogical forecasting, one of the traditional quantitative forecasting methods, to develop a sales forecasting intelligence system for new businesses. Instead of simply estimating sales for a few years, we propose a method of estimating the sales of a new business using the initial sales and sales growth rates of similar companies. To demonstrate the appropriateness of this method, we examined whether the sales performance of recently established companies in the same industry category in Korea can be utilized as a reference variable for analogical forecasting. We examined whether the phenomenon of "mean reversion" is observed in the sales of start-up companies, in order to identify the errors involved in estimating a new business's sales from the industry sales growth rate, and whether differences in the business environment arising from different launch timings affect growth rates. We also conducted analyses of variance (ANOVA) and latent growth modeling (LGM) to identify differences in sales growth rates by industry category. Based on the results, we propose industry-specific range and linear forecasting models.
This study analyzed the sales of about 150,000 start-up companies in Korea over the last 10 years and identified that the average growth rate of Korean start-ups is higher than the industry average in the first few years but soon exhibits mean reversion. In addition, although the timing of founding affects the sales growth rate, the effect is not highly significant, and sales growth rates differ by industry classification. Utilizing both this phenomenon and the performance of start-ups in the relevant industries, we propose two models of new business sales based on the sales growth rate. The method proposed in this study makes it possible to estimate the sales of a new business by industry objectively and quickly, and it is expected to provide reference information for judging whether sales estimated by other methods (top-down or bottom-up) fall outside the bounds of ordinary cases in the relevant industry. In particular, the results of this study can be used practically as reference information for business feasibility analysis or technology valuation when entering a new business. With the existing top-down method, they can be used to set the range of market size or market share; with the bottom-up method, the estimation period may be set according to the mean-reversion period of the growth rate. The two models proposed in this study enable rapid and objective sales estimation for new businesses and are expected to improve the efficiency of business feasibility analysis and technology valuation through the development of an intelligent information system. From an academic perspective, it is an important discovery that mean reversion is found among start-up companies, and not only among general small and medium-sized enterprises (SMEs) and stable companies such as listed ones.
In particular, the significance of this study lies in showing, over large-scale data, that the mean-reverting behavior of start-up firms' sales growth rates differs from that of listed companies, and that it differs by industry. While the linear model, which is useful for estimating the sales of a specific company, is likely to be used in practice, the range model, which can estimate the sales of unspecified firms, is more likely to be used for policy purposes. That is, when analyzing the business activities and performance of a specific industry group or enterprise group, the range model has policy usability in that a data-based start-up sales forecasting system can provide references for comparison.
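The mean-reversion pattern described above (growth above the industry average early on, converging within a few years) can be sketched with a simple partial-adjustment model; the function name, the reversion speed, and all rates below are illustrative assumptions, not estimates from the study:

```python
def project_growth(initial_rate, industry_rate, reversion_speed, years):
    """Project annual sales growth rates that revert toward the industry mean.

    Each year, the gap between the firm's growth rate and the industry
    average shrinks by `reversion_speed` (a fraction between 0 and 1).
    """
    rates, r = [], initial_rate
    for _ in range(years):
        rates.append(r)
        r = industry_rate + (1 - reversion_speed) * (r - industry_rate)
    return rates

# A start-up growing 40%/yr in an industry averaging 5%, with the gap
# halving each year: 0.40, 0.225, 0.1375, ... converging toward 0.05.
path = project_growth(0.40, 0.05, 0.5, 5)
```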

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been recognized in the information society, the use and collection of information have become important. Like an artistic painting, a facial expression contains a wealth of information and can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy; this is inevitable, since MRA can only explain linear relationships between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extension of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold $\varepsilon$) to the model prediction.
Using SVR, we built a model that measures the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data. Preprocessing steps were then taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the $\varepsilon$-insensitive loss function and a grid search to find the optimal values of the parameters C, d, $\sigma^2$, and $\varepsilon$. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We repeated the experiments varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events, and we used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared with MRA and ANN. Regardless of the target variable (the level of arousal, or the level of positive/negative valence), SVR showed the best performance on the hold-out data set. ANN also outperformed MRA but showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
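The $\varepsilon$-insensitive loss that distinguishes SVR from ordinary regression, and the MAE measure used for comparison, are simple to state; a minimal library-free sketch (values are illustrative):

```python
def eps_insensitive_loss(y_true, y_pred, eps):
    """SVR's loss: errors within the eps 'tube' cost nothing; beyond it,
    cost grows linearly with the distance past the tube boundary."""
    err = abs(y_true - y_pred)
    return max(0.0, err - eps)

def mae(y_true, y_pred):
    """Mean Absolute Error, the comparison measure used in the study."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Errors inside the tube are ignored entirely, which is why the fitted
# model depends only on the training points outside the tube:
inside = eps_insensitive_loss(1.0, 1.05, eps=0.1)   # 0.0
outside = eps_insensitive_loss(1.0, 1.3, eps=0.1)   # about 0.2
```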

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having triangular or trapezoidal shapes, or to predefined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides a satisfying computational speed and a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can still occur: it is possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and will be represented by the memory rows. The length of a word of memory is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values for any element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, then, Length = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of each fuzzy set; the fuzzy-set word dimension would then be 8 × 5 bits, and the dimension of the memory would have been 128 × 40 bits. Consistent with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets.
Focusing on elements 32, 64, and 96 of the universe of discourse, they will be memorized as shown. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values for any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
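The word-length arithmetic above can be made concrete; a sketch reproducing the paper's example figures (128-element universe, 8 fuzzy sets, 32 truth levels, at most 3 overlapping sets per element):

```python
import math

def word_length(nfm, truth_levels, n_fuzzy_sets):
    """Bits per memory row: nfm pairs of (membership value, fuzzy-set index)."""
    dm_m = math.ceil(math.log2(truth_levels))    # bits per membership value (5 for 32 levels)
    dm_idx = math.ceil(math.log2(n_fuzzy_sets))  # bits per fuzzy-set index (3 for 8 sets)
    return nfm * (dm_m + dm_idx)

universe = 128
compact = universe * word_length(3, 32, 8)         # 128 x 24 bits, as in the paper
vectorial = universe * 8 * 5                       # 128 x 40 bits if all values are stored
```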


Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than complex analyses, such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized far less than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved so far, long-term forecasting power remains limited, so such methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but these can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate matter. When such studies find a meaningful pattern, they locate points that match the pattern and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, it can diverge considerably from reality. Whereas existing research tries to find patterns with stock price prediction power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Although some patterns were reported to have price predictability, no performance in actual markets had been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy.
In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented by the system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. This is closer to a real situation because performance is measured assuming that both the buy and the sell have been executed. We tested three ways to calculate the turning points. The first method, the minimum-change-rate zig-zag, removes price movements below a certain percentage and calculates the vertices. In the second method, the high-low line zig-zag, the high price that meets the n-day high-price line is taken as the peak, and the low price that meets the n-day low-price line is taken as the valley. In the third method, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in the test results; it appears that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases was too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable solution for finding patterns with high success rates. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the training section and the application section separately, so we were able to respond appropriately to market changes. In this study, we optimize at the level of the stock portfolio, because optimizing the variables for each individual stock risks over-optimization.
Therefore, we set the number of constituent stocks to 20 to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful, and the high-volatility portfolio was second best. This shows that patterns need some price volatility in order to take shape, but that the highest volatility is not necessarily the best.
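The swing wave method described above (a peak is a bar higher than the n bars on each side; a valley is lower than the n bars on each side) can be sketched as follows; the function name and price series are illustrative, not from the paper:

```python
def swing_turning_points(prices, n):
    """Indices of peaks/valleys: bars strictly above/below all n neighbours
    on both their left and right, per the swing wave method."""
    points = []
    for i in range(n, len(prices) - n):
        left = prices[i - n:i]
        right = prices[i + 1:i + n + 1]
        if prices[i] > max(left + right):
            points.append((i, "peak"))
        elif prices[i] < min(left + right):
            points.append((i, "valley"))
    return points

prices = [10, 12, 15, 13, 11, 9, 12, 14]
pts = swing_turning_points(prices, 2)  # [(2, 'peak'), (5, 'valley')]
```

Chaining successive turning points like these yields the five-point sequences that the M & W patterns are matched against.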

Cooperative Sales Promotion in Manufacturer-Retailer Channel under Unplanned Buying Potential (비계획구매를 고려한 제조업체와 유통업체의 판매촉진 비용 분담)

  • Kim, Hyun Sik
    • Journal of Distribution Research
    • /
    • v.17 no.4
    • /
    • pp.29-53
    • /
    • 2012
  • As marketers come to use increasingly diverse sales promotion methods, manufacturers and retailers in a channel often use them too, and diverse issues in sales promotion management arise. One of them is unplanned buying. Consumers' unplanned buying clearly benefits the retailer but not the manufacturer. This asymmetric influence should be dealt with prudently because it can provoke channel conflict. However, there have been few studies on sales promotion management strategy that consider unplanned buying and its asymmetric effect on retailer and manufacturer. In this paper, we look for a better way for a manufacturer to promote channel performance through the retailer's sales promotion efforts when there is potential for an unplanned buying effect. Using game-theoretic modeling, we investigate the optimal cost-sharing level between the manufacturer and retailer when there is an unplanned buying effect. We address the following issues: (1) What cost-sharing mechanism should the manufacturer and retailer in a channel choose when the unplanned buying effect is strong (or weak)? (2) How much payoff do the manufacturer and retailer in a channel get when the unplanned buying effect is strong (or weak)? We focus on the impact of the unplanned buying effect on the optimal cost-sharing mechanism for sales promotions between a manufacturer and a retailer in the same channel, so the game has two players, a manufacturer and a retailer, interacting in one distribution channel. The model is a complete-information game in which the manufacturer is the Stackelberg leader and the retailer is the follower. The variables of the model are listed in the accompanying table. The manufacturer's objective function in the basic game is: $\Pi = \Pi_1 + \Pi_2$, where $\Pi_1 = w_1(1+L-p_1) - \psi^2$ and $\Pi_2 = w_2(1-\epsilon L-p_2)$.
The retailer's objective function is: $\pi = \pi_1 + \pi_2$, where $\pi_1 = (p_1-w_1)(1+L-p_1) - L(L-\psi) + p_u(b+L-p_u)$ and $\pi_2 = (p_2-w_2)(1-\epsilon L-p_2)$. The game has four stages in two periods. (Stage 1) The manufacturer sets the wholesale price of the first period ($w_1$) and the cost-sharing level of the channel sales promotion ($\psi$). (Stage 2) The retailer sets the retail price of the focal brand ($p_1$), the price of the unplanned buying item ($p_u$), and the sales promotion level ($L$). (Stage 3) The manufacturer sets the wholesale price of the second period ($w_2$). (Stage 4) The retailer sets the retail price of the second period ($p_2$). Since the model is a dynamic game, we find a subgame perfect equilibrium to derive theoretical and managerial implications. To obtain the subgame perfect equilibrium, we use backward induction, solving the problems backward from stage 4 to stage 1: by completely knowing the follower's optimal reaction to each of the leader's potential actions, we can fold the game tree backward. The equilibrium value of each variable in the basic game is shown in the accompanying table. We also analyzed an additional game with a nonzero procurement cost for the unplanned buying item. The manufacturer's objective function in the additional game is the same as in the basic game: $\Pi = \Pi_1 + \Pi_2$, where $\Pi_1 = w_1(1+L-p_1) - \psi^2$ and $\Pi_2 = w_2(1-\epsilon L-p_2)$. The retailer's objective function, however, differs: $\pi = \pi_1 + \pi_2$, where $\pi_1 = (p_1-w_1)(1+L-p_1) - L(L-\psi) + (p_u-c)(b+L-p_u)$ and $\pi_2 = (p_2-w_2)(1-\epsilon L-p_2)$. The equilibrium value of each variable in this additional game is shown in the accompanying table. The major findings of the current study are as follows: (1) As the unplanned buying effect gets stronger, the manufacturer and retailer should increase spending on sales promotion.
(2) As the unplanned buying effect gets stronger, the manufacturer should decrease its share of the total sales promotion cost. (3) The manufacturer's profit is an increasing function of the unplanned buying effect. (4) All of results (1)-(3) are attenuated as the retailer's procurement cost for unplanned buying items increases. We discuss the implications of these results for marketers at manufacturers and retailers. This study is among the first to suggest managerial guidance on how a manufacturer should share sales promotion costs with a retailer in a channel, according to whether consumers' unplanned buying potential is high or low.
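The backward induction above can be illustrated at its final stage: the retailer's second-period problem maximizes $\pi_2 = (p_2-w_2)(1-\epsilon L-p_2)$, a concave quadratic whose first-order condition gives $p_2^* = (1-\epsilon L+w_2)/2$. A minimal numeric sketch (the closed form follows from the abstract's $\pi_2$; the parameter values are illustrative):

```python
def stage4_retail_price(w2, eps, L):
    """Stage 4 best response: argmax over p2 of (p2 - w2) * (1 - eps*L - p2).
    Setting the derivative (1 - eps*L) - 2*p2 + w2 to zero gives the closed form."""
    return (1 - eps * L + w2) / 2.0

def stage4_profit(p2, w2, eps, L):
    """Retailer's second-period profit pi_2 from the abstract."""
    return (p2 - w2) * (1 - eps * L - p2)

# The closed form beats small perturbations, confirming the first-order condition:
p_star = stage4_retail_price(w2=0.3, eps=0.5, L=0.2)   # 0.6
best = stage4_profit(p_star, 0.3, 0.5, 0.2)
assert best >= stage4_profit(p_star + 0.01, 0.3, 0.5, 0.2)
assert best >= stage4_profit(p_star - 0.01, 0.3, 0.5, 0.2)
```

Substituting this best response into the stage-3 problem, and so on back to stage 1, is how the subgame perfect equilibrium in the table is derived.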
