Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)
Journal of Intelligence and Information Systems, v.20 no.1, pp.163-176, 2014
Social media is becoming the platform for users to communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs, such as Twitter, have gained in popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a great deal of research into capturing social phenomena by analyzing the chatter of microblogs. However, measuring television ratings has received little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch. In a similar way, microblog users interact with each other while watching television or movies, or visiting a new place. When measuring TV ratings, some features are significant during certain hours of the day, or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change during the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Therefore, modeling the time-related characteristics of features is key when measuring TV ratings through microblogs. We show that capturing the time-dependency of features is vital for improving the accuracy of TV ratings measurement. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. There are about 300 thousand posts in our data set for the experiment. After excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis. The number of tweets reaches its maximum level on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of the public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings. This result implies that a simple tweet rate does not reflect the satisfaction with, or response to, the TV programs. Content-based features extracted from the content of tweets have a relatively high correlation with TV ratings. Further, some emoticons or newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We find that the correlation of features is time-dependent, differing between the periods before and after the broadcast. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectation for the program or disappointment over not being able to watch it. The features that are highly correlated before the broadcast differ from those after the broadcast. This result shows that the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words reach their highest correlation before the broadcasting time, whereas 68 words reach their highest correlation after the broadcast. Interestingly, some words that express the impossibility of watching the program show a high relevance, despite carrying a negative meaning.
Understanding the time-dependency of features can help improve the accuracy of TV ratings measurement. This research provides a basis for estimating the response to, or satisfaction with, broadcast programs using the time dependency of words in Twitter chatter. More research is needed to refine the methodology for predicting or measuring TV ratings.
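The abstract does not include the authors' analysis code. As a rough, hedged illustration of the time-dependent correlation analysis described above, the sketch below computes, for each candidate word, the Pearson correlation between its tweet frequency in a pre-broadcast window, and separately in a post-broadcast window, and the episode ratings. Every name here (the DataFrame columns, the three-hour window) is an illustrative assumption, not taken from the paper.

```python
# Illustrative sketch (not the authors' code): correlate per-word tweet counts
# with episode ratings, separately for pre- and post-broadcast windows.
import pandas as pd
from scipy.stats import pearsonr

def window_counts(tweets, episodes, word, hours, when="before"):
    """Count tweets containing `word` within `hours` of each episode's air time."""
    counts = []
    for _, ep in episodes.iterrows():
        if when == "before":
            lo, hi = ep["air_time"] - pd.Timedelta(hours=hours), ep["air_time"]
        else:
            lo, hi = ep["air_time"], ep["air_time"] + pd.Timedelta(hours=hours)
        in_window = (tweets["created_at"] >= lo) & (tweets["created_at"] < hi)
        counts.append(tweets.loc[in_window, "text"].str.contains(word, regex=False).sum())
    return counts

def time_dependent_correlations(tweets, episodes, words, hours=3):
    """For each word, return its correlation with ratings before and after broadcast."""
    rows = []
    for word in words:
        before = window_counts(tweets, episodes, word, hours, "before")
        after = window_counts(tweets, episodes, word, hours, "after")
        rows.append({"word": word,
                     "r_before": pearsonr(before, episodes["rating"])[0],
                     "r_after": pearsonr(after, episodes["rating"])[0]})
    return pd.DataFrame(rows)
```

Words whose r_before clearly exceeds r_after would correspond to the 145 "before-broadcast" features reported above, and vice versa for the 68 "after-broadcast" features.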
Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is an automated service that produces an optimal asset allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset allocations to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. The model is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data. Corner solutions, in which weight is allocated to only a few assets, are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point. Implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecast of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected return. These new estimates can then produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor does not have any views on the asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. What if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results. Therefore, incorrect views combined with implied equilibrium returns may produce very poor portfolios for Black-Litterman model users. This paper suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. SVM is a discriminative classifier defined by a separating hyperplane. The linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are returns, standard deviations, Stochastic %K, and price parity degree for each asset class. The SVM output gives expected stock price movements and their probabilities, which are used as inputs to the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used. The value-weighted market portfolio and the equal-weighted market portfolio are used as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is from 2008 to 2015, and the testing period is from 2016 to 2018.
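For reference, a minimal sketch of the standard Black-Litterman combination step and the unconstrained mean-variance weights is shown below, under the usual formulation with a scaling parameter tau and a view-uncertainty matrix Omega. The P and Q matrices are whatever the SVM-based views model produces; the variable names, the tau value, and the unconstrained closed form are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of the Black-Litterman posterior returns (standard formulation,
# not the paper's code). pi: implied equilibrium returns, sigma: asset covariance,
# P, Q: view pick matrix and view returns, omega: view uncertainty, tau: scaling.
import numpy as np

def black_litterman_posterior(pi, sigma, P, Q, omega, tau=0.05):
    """Combine implied equilibrium returns with views; return the posterior mean."""
    inv_tau_sigma = np.linalg.inv(tau * sigma)
    inv_omega = np.linalg.inv(omega)
    A = inv_tau_sigma + P.T @ inv_omega @ P          # posterior precision
    b = inv_tau_sigma @ pi + P.T @ inv_omega @ Q     # precision-weighted means
    return np.linalg.solve(A, b)

def mean_variance_weights(mu, sigma, risk_aversion=2.5):
    """Unconstrained Markowitz weights w = (1/lambda) * inv(Sigma) * mu."""
    return np.linalg.solve(risk_aversion * sigma, mu)
```

The posterior returns would then feed the same mean-variance step used for the benchmark portfolios; adding long-only or budget constraints would require a quadratic-programming solver instead of the closed form above.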
Our suggested intelligent views model combined with the implied equilibrium returns produced the optimal Black-Litterman portfolio. In the out-of-sample period, the portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolios. The total return of the Black-Litterman portfolio over the 3-year period is 6.4%, the highest value. The maximum drawdown is -20.8%, which is also the lowest value. The Sharpe ratio, which measures the return-to-risk ratio, shows the highest value at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective views model when practitioners apply Robo-Advisor asset allocation algorithms in real trading.
Korea has stepped up efforts to investigate and catalog its flora and fauna to conserve the biodiversity of the Korean Peninsula and secure biological resources since the ratification of the Convention on Biological Diversity (CBD) in 1992 and the Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits (ABS) in 2010. Thus, after its establishment in 2007, the National Institute of Biological Resources (NIBR) of the Ministry of Environment of Korea initiated a project called the Korean Indigenous Species Investigation Project to investigate indigenous species on the Korean Peninsula. For 15 years since its beginning in 2006, this project has been carried out in five phases: Phase 1 from 2006-2008, Phase 2 from 2009-2011, Phase 3 from 2012-2014, Phase 4 from 2015-2017, and Phase 5 from 2018-2020. Before this project, in 2006, the number of indigenous species surveyed was 29,916. The figure was cumulatively aggregated at the end of each phase as 33,253 species for Phase 1 (2008), 38,011 species for Phase 2 (2011), 42,756 species for Phase 3 (2014), 49,027 species for Phase 4 (2017), and 54,428 species for Phase 5 (2020). The number of indigenous species surveyed grew rapidly, showing an approximately 1.8-fold increase as the project progressed. These statistics show an annual average of 2,320 newly recorded species during the project period. Among the recorded species, a total of 5,242 new species were reported in scientific publications, a great scientific achievement. During this project period, newly recorded species on the Korean Peninsula were identified using recent taxonomic classifications as follows: 4,440 insect species (including 988 new species), 4,333 invertebrate species other than insects (including 1,492 new species), 98 vertebrate (fish) species (including nine new species), 309 plant species (including 176 vascular plant species, 133 bryophyte species, and 39 new species), 1,916 algae species (including 178 new species), 1,716 fungi and lichen species (including 309 new species), and 4,812 prokaryotic species (including 2,226 new species). The number of biological specimens collected in each phase was aggregated as follows: 247,226 for Phase 1 (2008), 207,827 for Phase 2 (2011), 287,133 for Phase 3 (2014), 244,920 for Phase 4 (2017), and 144,333 for Phase 5 (2020). A total of 1,131,439 specimens were obtained, with an annual average of 75,429. More specifically, 281,054 insect specimens, 194,667 invertebrate specimens (other than insects), 40,100 fish specimens, 378,251 plant specimens, 140,490 algae specimens, 61,695 fungi specimens, and 35,182 prokaryotic specimens were collected. The cumulative number of researchers involved in this project, nearly all of whom were professional taxonomists and graduate students majoring in taxonomy from across the country, was around 5,000, with an annual average of 395. The numbers of researchers/assistant researchers (mainly graduate students) participating were 597/268 in Phase 1; 522/191 in Phase 2; 939/292 in Phase 3; 575/852 in Phase 4; and 601/1,097 in Phase 5. During this project period, 3,488 papers were published in major scientific journals. Of these, 2,320 papers were published in domestic journals and 1,168 papers were published in Science Citation Index (SCI) journals.
During the project period, a total of 83.3 billion won (an annual average of 5.5 billion won), or approximately US $75 million (an annual average of US $5 million), was invested in investigating indigenous species and collecting specimens. This project was a large-scale research study led by the Korean government. It is considered a successful example of Korea's compressed development, as it attracted almost all of the taxonomists in Korea and made remarkable achievements with a massive budget in a short time. The results from this project led to the National List of Species of Korea, in which all species are organized by taxonomic classification. Information regarding the National List of Species of Korea is available to experts, students, and the general public (https://species.nibr.go.kr/index.do). The information, including descriptions, DNA sequences, habitats, distributions, ecological aspects, images, and multimedia, has been digitized, contributing to scientific advancement in research fields such as phylogenetics and evolution. The species information also serves as a basis for projects on species distribution and biological monitoring, such as climate-sensitive biological indicator species. Moreover, the species information helps bio-industries search for useful biological resources. Perhaps the most meaningful achievement of this project is its support for nurturing young taxonomists, such as graduate students. This project has continued for the past 15 years and is still ongoing. Efforts to address issues, including species misidentification and invalid synonyms, still have to be made to enhance taxonomic research. Research needs to be conducted to investigate another 50,000 species out of the estimated 100,000 indigenous species on the Korean Peninsula.
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flow with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.

Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70
Purpose: In 3D conformal radiotherapy, delivering the optimum dose to the tumor while limiting the risk to normal tissue and avoiding a marginal miss is restricted by organ motion. For tumors in the thorax and abdomen, the planning target volume (PTV) is defined to include a margin for movement of the tumor volume during treatment due to the patient's breathing. We designed a respiratory gating radiotherapy device (RGRD) for use during CT simulation, dose planning, and beam delivery under identical breathing-period conditions. Using the RGRD, the treatment margin for organ (thorax or abdomen) motion due to breathing can be reduced and the dose distribution for 3D conformal radiotherapy improved. Materials and Methods: The internal organ motion data for lung cancer patients were obtained by examining the diaphragm in the supine position to find the position dependency. We made a respiratory gating radiotherapy device (RGRD) composed of a strip band, drug sensor, micro switch, and a connected on-off switch in a LINAC control box. During the same breathing period controlled by the RGRD, spiral CT scanning, virtual simulation, and 3D dose planning for lung cancer patients were performed without an extended PTV margin for free breathing, and the dose was then delivered at the same positions. We calculated effective volumes and normal tissue complication probabilities (NTCP) using dose-volume histograms for normal lung, and analyzed changes in doses associated with selected NTCP levels and tumor control probabilities (TCP) at these new dose levels. The effects of 3D conformal radiotherapy with the RGRD were evaluated with DVHs (dose-volume histograms), TCP, NTCP, and dose statistics. Results: The average movement of the diaphragm was 1.5 cm in the supine position when patients breathed freely. Depending on the location of the tumor, the PTV margin needs to be extended from 1 cm to 3 cm, which can greatly increase normal tissue irradiation and hence results in an increase of the normal tissue complication probability. The simple and precise RGRD is very easy to set up on patients, is sensitive to length variations (±2 mm), and delivers on-off information to the patient and the LINAC machine. We evaluated the treatment plans of patients who had received conformal partial organ lung irradiation for the treatment of thorax malignancies. Using the RGRD, the PTV margin required for free breathing can be reduced by about 2 cm for organs that move with breathing. TCP values are almost the same
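The abstract does not state which NTCP formulation was used. For orientation only, a commonly used way of computing NTCP from a dose-volume histogram is the Lyman-Kutcher-Burman model (a sketch of that standard model, not necessarily the authors' parameterization), in which the DVH is first reduced to an effective volume and the complication probability is read from a probit curve:

$$
v_{\mathrm{eff}} = \sum_i v_i \left(\frac{D_i}{D_{\max}}\right)^{1/n}, \qquad
\mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^2/2}\,dx, \qquad
t = \frac{D_{\max} - TD_{50}(v_{\mathrm{eff}})}{m\, TD_{50}(v_{\mathrm{eff}})},
$$

where $(D_i, v_i)$ are the DVH bins, $TD_{50}(v) = TD_{50}(1)\, v^{-n}$, and $n$, $m$, and $TD_{50}(1)$ are organ-specific tolerance parameters.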
Purpose: Bone metastases in breast cancer patients are usually assessed by a conventional Tc-99m methylene diphosphonate whole-body bone scan, which has high sensitivity but poor specificity. However, positron emission tomography with
At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the on-line advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost and low-efficiency problems with banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising began to take off, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and started to outdo display advertising as of 2005. Keyword advertising refers to the advertising technique that exposes relevant advertisements at the top of search sites when one searches for a keyword. Instead of exposing advertisements to unspecified individuals like banner advertising, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than previous advertising in that, instead of the seller discovering customers and running advertisements for them as with TV, radio, or banner advertising, it exposes advertisements to customers who visit. Keyword advertising makes it possible for a company to seek publicity on line simply by making use of a single word and to achieve maximum efficiency at minimum cost. The strong point of keyword advertising is that customers are allowed to directly contact the products in question through advertising that is more efficient than that of mass media such as TV and radio. The weak point of keyword advertising is that a company must have its advertisement registered on each and every portal site and finds it hard to exercise substantial supervision over its advertisement, with the possibility of its advertising expenses exceeding its profits. Keyword advertising serves as the most appropriate method of advertising for the sales and publicity of small and medium enterprises, which need maximum advertising effect at a low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as advertising based on the metered-rate system; a company pays according to the number of clicks users make on the searched keyword. This is representatively adopted by Overture, Google's Adwords, Naver's Clickchoice, and Daum's Clicks, etc. CPM advertising depends on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures rather than the number of clicks. This method fixes a price for an advertisement per 1,000 exposures, and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup, etc. At present, the CPC method is most frequently adopted.
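As a small numeric illustration of the two billing models just described (the unit prices and click-through rate below are made-up assumptions, not market figures), the cost of the same campaign under CPC and CPM billing can be compared as follows:

```python
# Illustrative comparison of CPC and CPM billing (all prices and rates are
# made-up assumptions, not market figures).

def cpc_cost(clicks: int, price_per_click: float) -> float:
    """CPC: pay per click on the searched keyword."""
    return clicks * price_per_click

def cpm_cost(impressions: int, price_per_1000: float) -> float:
    """CPM: flat rate per 1,000 exposures, regardless of clicks."""
    return impressions / 1000 * price_per_1000

impressions = 50_000                         # times the ad is shown
clicks = int(impressions * 0.02)             # assumed 2% click-through rate

print(cpc_cost(clicks, price_per_click=500))       # hypothetical 500 won per click
print(cpm_cost(impressions, price_per_1000=2000))  # hypothetical 2,000 won per 1,000 exposures
```

Which model is cheaper depends entirely on the assumed click-through rate and unit prices, which is why the billing model needs to be matched to the advertiser's budget and goals.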
The weak point of the CPC method is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn its visitors into prospective customers. Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want. With this in mind, he or she has to put multiple keywords into use when running ads. When first running an ad, he or she should give priority to which keyword to select. The advertiser should consider how many individuals using a search engine will click the keyword in question and how much money he or she has to pay for the advertisement. As the popular keywords that the users of search engines frequently use are expensive in terms of unit cost per click, advertisers without much money for advertising at the initial phase should pay attention to detailed keywords suitable to their budget. Detailed keywords are also referred to as peripheral keywords or extension keywords, and can be described as combinations of major keywords. Most keywords are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, arousing little antipathy. But it fails to attract much attention precisely because most keyword advertising is in the form of text. Image-embedded advertising is easy to notice due to its images, but it is exposed on the lower part of a web page and regarded as an advertisement, which leads to a low click-through rate. However, its strong point is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that is easy for people to recognize, the company is well advised to make good use of image-embedded advertising so as to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on the events of the sites in question and the composition of products, as a vehicle for monitoring customer behavior in detail. Besides, keyword advertising allows them to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about visitors, such as the number of visitors, page views, and cookie values. A user's IP, the pages used, the times of use, and cookie values are stored in the log files generated by each Web server. The log files contain a huge amount of data. As it is almost impossible to analyze these log files directly, one is supposed to analyze them using log analysis solutions. The generic information that can be extracted by log analysis tools includes the total number of page views, average page views per day, basic page views, page views per visit, the total number of hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours, etc.
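As a minimal sketch of what such a log analysis computes (assuming a common combined-format access log; the field layout, the 30-minute session rule, and the metric definitions are illustrative assumptions, not a specific vendor's solution):

```python
# Minimal sketch of web-server log analysis: page views, visits, and unique
# visitors from a combined-format access log.
import re
from datetime import datetime, timedelta

LINE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\]')   # client IP and timestamp
SESSION_GAP = timedelta(minutes=30)                 # new visit after 30 min idle

def analyze(path):
    page_views, visits = 0, 0
    last_seen = {}                                  # ip -> time of last request
    visitors = set()
    with open(path) as f:
        for line in f:
            m = LINE.match(line)
            if not m:
                continue
            ip, ts = m.groups()
            t = datetime.strptime(ts.split()[0], "%d/%b/%Y:%H:%M:%S")
            page_views += 1
            visitors.add(ip)
            if ip not in last_seen or t - last_seen[ip] > SESSION_GAP:
                visits += 1                         # start of a new visit for this IP
            last_seen[ip] = t
    return {"page_views": page_views, "visits": visits,
            "unique_visitors": len(visitors),
            "page_views_per_visit": page_views / visits if visits else 0}
```

Metrics such as hits, one-time versus returning visitors, and average session duration can be derived in the same pass by keeping per-visitor counts and accumulating session lengths.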
These sites are deemed useful for utilizing data to analyze the situation and current status of rival companies, as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to secure popular keywords in advance is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question after the advertising contract is over. If an advertiser relies on keywords sensitive to seasons and timeliness on sites that give priority to established advertisers, he or she may as well purchase a vacant advertising slot so as not to miss the appropriate timing for advertising. However, Naver does not give priority to existing advertisers for any keyword advertisements. In this case, one can secure keywords in advance by entering into a contract after confirming the contract period for advertising. This study is designed to take a look at marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture. Its strong points are that Overture is based on the CPC charging model and that advertisements are registered at the top of the most representative portal sites in Korea. These advantages make it the most appropriate medium for small and medium enterprises to use. However, the CPC method of Overture has its weak points, too. That is, the CPC method is not the only perfect advertising model among the search advertisements in the on-line market. So it is absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create a point of contact with customers.
In recent years, one of the major reasons for the fierce competition among firms is that they strive to increase their market shares and customer acquisition rates in the same market with similar and apparently undifferentiated products in terms of quality and perceived benefit. Because of this change in the recent marketing environment, differentiated after-sales service and diversified promotion strategies have become more important for gaining competitive advantage. Price promotion is the favorite strategy that most retailers use to achieve short-term sales increases, induce consumers' brand switching, introduce new products into the market, and so forth. However, if marketers apply or copy an identical price promotion strategy without considering the characteristic differences in product and consumer preference, it will cause serious problems, because a discounted price itself can make people skeptical about product quality, and changes in perceived value might appear differently depending on other factors such as consumer involvement or brand attitude. Previous studies showed that price promotion certainly increases sales and that a discounted price compared to the regular price enhances the consumer's perceived value. On the other hand, a discounted price itself can make people depreciate or become skeptical about product quality and reduce consumers' positivity bias, because consumers may be unsure whether the current price promotion is the retailer's best price offer. Moreover, we cannot say that a discounted price absolutely enhances the consumer's perceived value regardless of product category and purchase situation. That is, the factors that affect consumers' value perceptions and buying behavior are so diverse in reality that the results of studies on the same dependent variable come out differently depending on which variables were used or how the experimental conditions were designed. The majority of previous research on the effect of price-comparison advertising has used consumers' buying behavior as the dependent variable. In order to understand consumers' buying behavior theoretically, an analysis of the value perceptions that influence buying intentions is needed. In addition, previous studies did not combine independent variables such as brand loyalty and price discount rate. For this reason, this paper examines the moderating effect of brand loyalty on the relationship between different levels of discount rate and buyers' value perceptions. We also provide theoretical and managerial implications: marketers need to consider variables such as product attributes, brand loyalty, and consumer involvement at the same time, and then establish a differentiated pricing strategy case by case in order to enhance consumers' perceived value properly. Three research concepts were used in our study, and each concept was defined based on past research. The perceived acquisition value in this study was defined as the perceived net gains associated with the products or services acquired. That is, the perceived acquisition value of the product is positively influenced by the benefits buyers believe they are getting by acquiring and using the product, and negatively influenced by the money given up to acquire the product. The perceived transaction value was defined as the perception of psychological satisfaction or pleasure obtained from taking advantage of the financial terms of the price deal.
Lastly, brand loyalty was defined as a favorable attitude toward a purchased product. Thus, a consumer loyal to a brand has an emotional attachment to the brand or firm. Repeat purchasers continue to buy the same brand even though they do not have an emotional attachment to it. We assumed that if the degree of brand loyalty is high, the perceived acquisition value and the perceived transaction value will increase when a higher discount rate is provided. However, the empirical analysis found no significant differences in values between the two different discount rates. This means that the price reduction did not significantly affect consumers' brand choice, because the perceived sacrifice decreased only a little and customers are satisfied with the product's benefits when brand loyalty is high. From this result, we confirmed that consumers with a high degree of brand loyalty to a specific product are less sensitive to price changes. Thus, using a price promotion strategy merely to expect a sales increase is not recommended. Instead of discounting the price, marketers need to strengthen consumers' brand loyalty and maintain the skimming strategy. On the contrary, when the degree of brand loyalty is low, the perceived acquisition value and the perceived transaction value decreased significantly when a higher discount rate was provided. Generally, brands that are considered inferior might be able to draw attention away from the quality of the product by making consumers focus more on the sacrifice component of price. But considering that consumers with a low degree of brand loyalty are known to be unsatisfied with the product's benefits and to have a relatively negative brand attitude, the bigger price reduction offered in the experimental condition of this paper made consumers depreciate the product's quality and benefits even more, and consumers' psychologically perceived sacrifice increased while perceived values decreased accordingly. We infer that, in the case of an inferior brand, a drastic price cut or frequent price promotion may increase consumers' uncertainty about the overall components of the product. Therefore, reinforcing the augmented product, for example through after-sales service, delivery, and credit terms, which is one of the levels that constitute a product, would be more effective in reality than competing with a product that holds high brand loyalty by reducing the sale price. Although this study tried to examine the moderating effect of brand loyalty on the relationship between different levels of discount rate and buyers' value perceptions, there are several limitations. This study was conducted under controlled conditions in which a high-involvement product and two different levels of discount rate were applied. Had a low-involvement product been included, the results reported here might have been different. Thus, these research results explain only the specific situation. Second, the sample selected in this study consisted of university students in their twenties, so we cannot say that the results hold firmly for all generations. Future research that manipulates the level of discount along with consumer involvement might lead to a more robust understanding of the effects of various discount rates. In addition, we used a cellular phone as the product stimulus, so it would be very interesting to analyze the results when the product stimulus is an intangible product such as a service.
It could also be valuable to analyze whether the change in perceived value affects consumers' final buying behavior positively or negatively.