• Title/Summary/Keyword: SELECT model


Status and Implications of Hydrogeochemical Characterization of Deep Groundwater for Deep Geological Disposal of High-Level Radioactive Wastes in Developed Countries (고준위 방사성 폐기물 지질처분을 위한 해외 선진국의 심부 지하수 환경 연구동향 분석 및 시사점 도출)

  • Jaehoon Choi;Soonyoung Yu;SunJu Park;Junghoon Park;Seong-Taek Yun
    • Economic and Environmental Geology, v.55 no.6, pp.737-760, 2022
  • For the geological disposal of high-level radioactive wastes (HLW), an understanding of the deep subsurface environment is essential and is obtained through geological, hydrogeological, geochemical, and geotechnical investigations. Although South Korea plans the geological disposal of HLW, only a few studies have characterized the geochemistry of the deep subsurface environment. To guide the hydrogeochemical research needed for selecting suitable repository sites, this study reviewed the status and trends of hydrogeochemical characterization of deep groundwater for the deep geological disposal of HLW in developed countries. An examination of the site-selection processes in eight countries (the USA, Canada, Finland, Sweden, France, Japan, Germany, and Switzerland) showed that the following parameters are needed for the geochemical characterization of the deep subsurface environment: major and minor elements and isotopes (e.g., ³⁴S and ¹⁸O of SO₄²⁻, ¹³C and ¹⁴C of DIC, ²H and ¹⁸O of water) of both groundwater and pore water (in aquitards), fracture-filling minerals, organic materials, colloids, and oxidation-reduction indicators (e.g., Eh, Fe²⁺/Fe³⁺, H₂S/SO₄²⁻, NH₄⁺/NO₃⁻). A suitable repository was selected based on the integrated interpretation of these geochemical data from the deep subsurface. In South Korea, hydrochemical types and evolutionary patterns of deep groundwater have been identified using artificial neural networks (e.g., the Self-Organizing Map; a toy illustration follows this abstract), and the impact of mixing with shallow groundwater has been evaluated using multivariate statistics (e.g., M3 modeling). The relationship between fracture-filling minerals and groundwater chemistry has also been investigated through reaction-path modeling. However, these previous studies in South Korea were conducted without some important geochemical data, including isotopes, oxidation-reduction indicators, and DOC, mainly because such data were unavailable. Therefore, a detailed nationwide geochemical investigation is required to collect these hydrochemical data so that a geological disposal site can be selected on the basis of scientific evidence.
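As an illustration of the hydrochemical-typing step mentioned above, the sketch below groups groundwater samples with a Self-Organizing Map. It assumes the third-party minisom package, a hypothetical list of major ions, and randomly generated concentrations; it is a minimal sketch, not the workflow used in the cited Korean studies.

```python
# Minimal SOM sketch for grouping groundwater samples into hydrochemical types.
# The ion list and the random sample data are assumptions for illustration.
import numpy as np
from minisom import MiniSom

ions = ["Ca", "Mg", "Na", "K", "HCO3", "SO4", "Cl"]   # major-ion concentrations
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=1.0, sigma=0.8, size=(120, len(ions)))

# standardize so each ion contributes comparably to the map
X = (samples - samples.mean(axis=0)) / samples.std(axis=0)

som = MiniSom(4, 4, input_len=len(ions), sigma=1.0, learning_rate=0.5,
              random_seed=1)
som.random_weights_init(X)
som.train_random(X, num_iteration=2000)

# each sample is assigned to its best-matching map node = a hydrochemical type
types = [som.winner(x) for x in X]
print("distinct hydrochemical types found:", len(set(types)))
```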

Development of Cloud Detection Method Considering Radiometric Characteristics of Satellite Imagery (위성영상의 방사적 특성을 고려한 구름 탐지 방법 개발)

  • Won-Woo Seo;Hongki Kang;Wansang Yoon;Pyung-Chae Lim;Sooahm Rhee;Taejung Kim
    • Korean Journal of Remote Sensing, v.39 no.6_1, pp.1211-1224, 2023
  • Clouds cause many difficulties in observing land-surface phenomena with optical satellites for applications such as national land observation, disaster response, and change detection. In addition, the presence of clouds affects not only the image-processing stage but also the quality of the final data, so clouds must be identified and removed. Therefore, in this study we developed a new cloud-detection technique that automatically performs a series of processes: searching for and extracting the pixels closest to the spectral pattern of clouds in a satellite image, selecting the optimal threshold, and producing a cloud mask based on that threshold. The technique largely consists of three steps (a simplified sketch follows this abstract). In the first step, the Digital Number (DN) image is converted into top-of-atmosphere (TOA) reflectance. In the second step, preprocessing such as Hue-Saturation-Value (HSV) transformation, triangle thresholding, and maximum likelihood classification is applied to the TOA reflectance image, and the threshold for generating the initial cloud mask is determined for each image. In the third step, post-processing removes the noise contained in the initial cloud mask and refines the cloud boundaries and interiors. As experimental data, CAS500-1 L2G images acquired over the Korean Peninsula from April to November, which show diverse spatial and seasonal distributions of clouds, were used. To verify the performance of the proposed method, its results were compared with those generated by a simple thresholding method. In the experiment, the proposed method detected clouds more accurately than the existing method by accounting for the radiometric characteristics of each image through the preprocessing steps. In addition, the results showed that the influence of bright objects other than clouds (panel roofs, concrete roads, sand, etc.) was minimized. The proposed method improved the F1-score by more than 30% compared to the existing method, but showed limitations in certain images containing snow.
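A minimal sketch of this three-step flow is shown below: DN-to-TOA-reflectance conversion, an HSV transform with triangle thresholding, and simple morphological post-processing. The radiometric coefficients, band handling, and post-processing parameters are hypothetical, and the maximum-likelihood refinement described in the paper is omitted.

```python
# Minimal sketch of the three-step cloud-masking flow described above.
# Assumptions (not from the paper): gain/offset radiometric coefficients,
# band order, and morphological post-processing parameters are hypothetical.
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_triangle
from skimage.morphology import remove_small_objects, binary_closing, disk

def dn_to_toa_reflectance(dn, gain, offset, esun, d, sun_elev_deg):
    """Step 1: convert Digital Numbers to top-of-atmosphere reflectance."""
    radiance = gain * dn.astype(np.float64) + offset
    theta = np.deg2rad(90.0 - sun_elev_deg)          # solar zenith angle
    return (np.pi * radiance * d**2) / (esun * np.cos(theta))

def cloud_mask(toa_rgb):
    """Steps 2-3: HSV transform, triangle threshold, simple post-processing."""
    value = rgb2hsv(np.clip(toa_rgb, 0, 1))[..., 2]   # brightness channel
    t = threshold_triangle(value)                     # per-image threshold
    mask = value > t                                  # initial cloud mask
    mask = remove_small_objects(mask, min_size=64)    # drop speckle noise
    return binary_closing(mask, disk(3))              # tidy cloud boundaries

# usage (hypothetical arrays and metadata):
# toa = np.dstack([dn_to_toa_reflectance(dn_b, g, o, e, d, elev)
#                  for dn_b, g, o, e in zip(dn_bands, gains, offsets, esuns)])
# mask = cloud_mask(toa)
```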

A Study on Consumer Eco-friendly Behavior Utilizing the Photovoice Methodology : Focus Group Study (포토보이스(Photovoice) 기법을 활용한 소비자의 친환경 행동에 대한 연구 : Focus Group Study)

  • Lee, Il-han
    • Journal of Venture Innovation, v.6 no.4, pp.63-81, 2023
  • The purpose of this study was to apply the Photovoice qualitative research method to university students in order to understand their perceptions of environmental issues, environmental barriers, and eco-friendly behaviors. By employing the Photovoice methodology, we sought to share the perspectives of university students on eco-friendly behaviors, explore the motivations and manifestations of these behaviors, and reflect on their significance. The ultimate goal was to provide practical suggestions for fostering eco-friendly behaviors through an in-depth examination of the students' visual narratives and reflections. Under the overarching theme of the environment, participants individually selected and explored three specific sub-themes: 'My Concept of the Environment,' 'Environmental Barriers in My Life,' and 'My Eco-friendly Behaviors.' Participants captured photographs from their daily lives related to each theme, expressing their thoughts and perspectives through the selected images. Subsequently, they shared and discussed their insights, actively listening to the opinions of others in the group. The results revealed several key findings. First, participants assigned meaning to the photographs they selected about the environment through notions such as 'waste,' 'discomfort,' 'fine dust = environmental pollution,' and 'indifference.' Second, participants attributed meaning to the photographs related to environmental barriers through concepts such as 'invisibility,' 'apathy,' 'social stigma,' 'inefficiency,' and 'compulsion.' Lastly, participants ascribed significance to the photographs selected in the context of eco-friendly behaviors through themes such as 'recycling,' 'energy conservation,' 'reuse,' and 'reducing the use of disposable items.' Based on these findings, the V-A-B (Values-Attitudes-Behavior) model was confirmed: consumers structure a hierarchical relationship among their personal values, attitudes, and behaviors. The study also identified clear impediments in consumers' daily lives that hinder the practice of eco-friendly behaviors. In light of this, the research highlights the need for strategies to address the discomfort or inconvenience associated with environmentally friendly consumer behaviors. The implications are that interventions or solutions are necessary to alleviate barriers and promote a more seamless integration of eco-friendly practices into consumers' daily routines.

The Ability of Anti-tumor Necrosis Factor Alpha (TNF-α) Antibodies Produced in Sheep Colostrums

  • Yun, Sung-Seob
    • 한국유가공학회 학술대회논문집 (conference proceedings), 2007.09a, pp.49-58, 2007
  • The inflammatory process leads to the well-known mucosal damage and thus to further disturbance of the epithelial barrier function, resulting in abnormal intestinal wall function and further accelerating the inflammatory process [1]. Despite these observations, the etiology and pathogenesis of inflammatory bowel disease (IBD) remain rather unclear. Many studies over the past several years have led to great advances in understanding IBD and its underlying pathophysiologic mechanisms. From the current understanding, it is likely that chronic inflammation in IBD is due to aggressive cellular immune responses, including increased serum concentrations of various cytokines. Targeted molecules can therefore be specifically suppressed directly at the transcriptional level, and promising therapeutic trials are expected against adhesion molecules and pro-inflammatory cytokines such as TNF-α. The future development of immune therapies for IBD therefore holds great promise for better treatment modalities and will also open important new insights into the pathophysiology of inflammation. Cytokine inhibitors such as Immunex (Enbrel) and J&J/Centocor (Remicade), mouse-derived monoclonal antibodies, have been shown in several studies to modulate patients' symptoms; however, these TNF inhibitors also cause adverse immune-related problems, are costly, and must be administered by injection. Because of the eventual development of unwanted side effects, these two products are used in only a select patient population. The present study was performed to elucidate the ability of TNF-α antibodies produced in sheep colostrums to neutralize TNF-α action in a cell-based bioassay and in a small-animal model of intestinal inflammation. In the in vitro study, the inhibitory effect of the anti-TNF-α antibody from sheep was determined by a cell bioassay. The antibody at a 1 in 10,000 dilution completely inhibited TNF-α activity in the cell bioassay. Antibodies from the same sheep but different milkings exhibited some variability in the inhibition of TNF-α activity, but all were greater than the control sample. In the in vivo study, the degree of inflammation was too severe to evaluate; despite the initial pilot trial, main trial 1 was unable to detect any effect of the antibody in reducing the impact of PAF and LPS. Main rat trial 2 produced no significant symptoms of colitis, such as the characteristic acute diarrhea and weight loss. This study suggests that colostrums from sheep immunized against TNF-α significantly inhibited TNF-α bioactivity in the cell-based assay, but the higher-than-anticipated variability in the two animal models precluded assessment of the ability of the antibody to prevent TNF-α-induced intestinal damage in the intact animal. Further study will be required to find an alternative animal model more suitable for testing anti-TNF-α IgA therapy for reducing the impact of inflammation on gut dysfunction. Subsequent pre-clinical and clinical testing will also require the generation of more antibody, as current supplies are low.


A Study on the Forest Yield Regulation by Systems Analysis (시스템분석(分析)에 의(依)한 삼림수확조절(森林收穫調節)에 관(關)한 연구(硏究))

  • Cho, Eung-hyouk
    • Korean Journal of Agricultural Science, v.4 no.2, pp.344-390, 1977
  • The purpose of this paper was to schedule an optimum cutting strategy that maximizes total yield under certain restrictions on periodic timber removals and harvest areas in an industrial forest, based on a linear programming technique. The sensitivity of the regulation model to variations in the restrictions was also analyzed to obtain information on the changes in total yield over the planning period. The regulation procedure was applied to the experimental forest of the Agricultural College of Seoul National University. The forest is composed of 219 cutting units and is characterized by younger age groups, which is very common in Korea. The planning period is divided into 10 cutting periods of five years each, and cutting is permissible only on stands of age groups 5-9. It is also assumed that subsequent forests are established immediately after cutting the existing forests, that non-stocked forest lands are planted in the first cutting period, and that established forests remain fully stocked until the next harvest. All feasible cutting regimes were defined for each unit depending on its age group. The total yield (Vi,k) of each regime expected in the planning period was projected using stand yield tables and forest inventory data, and the regime giving the highest Vi,k was selected as the optimum cutting regime. After calculating periodic yields, cutting areas, and total yield from the optimum regimes selected without any restrictions, the upper and lower limits of periodic yields (Vj-max, Vj-min) and of periodic cutting areas (Aj-max, Aj-min) were decided. The optimum regimes under these restrictions were then selected by linear programming. The results of the study may be summarized as follows. 1. The fluctuations of periodic harvest yields and areas under the cutting regimes selected without restrictions were very large because of the irregular composition of age classes and growing stocks of the existing stands: about 68.8 percent of total yield is expected in period 10, while no yield is expected in periods 6 and 7. 2. After inspection of the above solution, restricted optimum cutting regimes were obtained under the restrictions Amin = 150 ha, Amax = 400 ha, Vmin = 5,000 m³, and Vmax = 50,000 m³, using the LP regulation model. As a result, about 50,000 m³ of stable harvest yield per period and a relatively balanced age-group distribution are expected from period 5. In this case, the loss in total yield was about 29 percent of that of the unrestricted regimes. 3. A thinning schedule could be easily treated by the model presented in the study, and thinnings made it possible to select optimum regimes that were effective for smoothing the wood flows, not to mention increasing total yield over the planning period. 4. The stronger the restrictions become, the earlier the period in which balanced harvest yields and age-group distribution are achieved. There was also a tendency in this particular case for periodic yields to be strongly affected by the constraints, and for the fluctuations of harvest areas to depend on the amount of the periodic yields. 5. Because total yield decreased at an increasing rate as stronger restrictions were imposed, the loss would be very great where a strict sustained yield and a normal age-group distribution are required in the earlier periods. 6. Total yield under the same restrictions in a period was increased by lowering the felling age and extending the range of cutting age groups. Therefore, it seemed advantageous for producing maximum timber yield to adopt a wider range of cutting age groups, with the lower limit set at the age at which the smallest utilizable timber size can be produced. 7. The LP regulation model presented in the study (a toy version is sketched below) seems useful in the Korean situation for the following reasons: (1) the model can provide forest managers with the solution of where, when, and how much to cut in order to best fulfill the owner's objectives; (2) planning is visualized as a continuous process in which new strategies are automatically evolved as changes in the forest environment are recognized; (3) the cost (measured as the decrease in total yield) of imposing restrictions can be easily evaluated; (4) a thinning schedule can be treated without difficulty; (5) the model can be applied to irregular forests; and (6) traditional regulation methods can be reinforced by the model.
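The following toy harvest-scheduling LP illustrates the structure described above: choose one cutting regime per unit so as to maximize total yield, subject to upper and lower bounds on periodic yield and cut area. The unit, regime, yield, and area figures are hypothetical (3 units and 2 periods rather than the paper's 219 units and 10 periods), and the LP relaxation is solved with scipy rather than the original software.

```python
# Toy harvest-scheduling LP in the spirit of the study.  All numbers are
# hypothetical; the structure (assignment + periodic yield/area bounds) is
# what matters.
import numpy as np
from scipy.optimize import linprog

units, regimes, periods = 3, 2, 2
n = units * regimes                        # decision variables x[i, k]

# yield_[i, k, j]: volume cut from unit i under regime k in period j (m^3)
yield_ = np.array([[[500, 0], [0, 700]],
                   [[300, 0], [0, 450]],
                   [[400, 0], [0, 600]]], dtype=float)
# area_[i, k, j]: area cut from unit i under regime k in period j (ha)
area_ = np.array([[[10, 0], [0, 10]],
                  [[ 8, 0], [0,  8]],
                  [[ 9, 0], [0,  9]]], dtype=float)

total_yield = yield_.sum(axis=2).reshape(n)      # objective coefficients
c = -total_yield                                 # linprog minimizes

# each unit must be assigned exactly one regime (LP relaxation: weights in [0,1])
A_eq = np.zeros((units, n))
for i in range(units):
    A_eq[i, i * regimes:(i + 1) * regimes] = 1.0
b_eq = np.ones(units)

# periodic bounds: Vmin <= yield_j <= Vmax and Amin <= area_j <= Amax
Vmin, Vmax, Amin, Amax = 300.0, 1200.0, 5.0, 25.0
Yj = yield_.reshape(n, periods).T                # (periods, n)
Aj = area_.reshape(n, periods).T
A_ub = np.vstack([Yj, -Yj, Aj, -Aj])
b_ub = np.concatenate([np.full(periods, Vmax), np.full(periods, -Vmin),
                       np.full(periods, Amax), np.full(periods, -Amin)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * n, method="highs")
print("total yield:", -res.fun, "regime weights:", res.x.round(2))
```

Tightening Vmin, Vmax, Amin, or Amax in this sketch and re-solving reproduces, in miniature, the sensitivity analysis of total yield to the restrictions described in the abstract.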


The Study of Characteristics of Consumer Purchasing Private Brand Products at Large-Scale Mart (국내 대형마트의 유통업체 브랜드 상품 구매 소비자의 특성 분석에 관한 연구)

  • Hwang, Seong-Huyk;Lee, Jung-Hee;Roh, Eun-Jung
    • Journal of Distribution Research, v.15 no.4, pp.1-19, 2010
  • As domestic large retailers move to develop private brand (PB) goods, they are facing new problems. Studies of PB products are therefore required, including how consumers come to include PB products in their consideration sets. It is also worthwhile for marketing strategy to evaluate the differences among customers who buy PB grocery goods with respect to demographic characteristics and purchasing behaviors. PB has advantages for both customers and retailers. However, according to an AC Nielsen report (2005), PB sales in Asian and emerging markets are only about one fifth of those in Western countries, which suggests that emerging markets have the greatest growth potential. As several other studies have shown, it is necessary not only to raise the share of PB products in sales temporarily, but also to analyze the characteristics of customers who use large retailers and to segment customer groups so that PB products become part of their consideration sets; a variety of marketing actions are also needed. PB-related studies report a prejudice that cheap products have low quality, yet evaluations by customers who have actually used PB products are neutral, and one study indicates that building trust between the retailers selling PB products and the consumers using them is most important for accurate evaluation and purchase intention. Analyses of the characteristics of customers buying PB products also suggest that the higher the income and education level, the stronger the preference for PB products. In particular, according to TNS research, the primary targets of PB products are consumers in their 30s, who seek value for money and have planned spending habits, and those in their 40s, who have teenage children and are interested in encouraging themselves. This paper used a probit model to analyze the characteristics of consumers (an illustrative sketch of this kind of estimation appears after the abstract). The model includes variables representing the demographic characteristics of consumers (gender, age, educational level, occupation, income level, living area) and variables related to purchasing behavior (frequency of visits to large marts, the average amount paid for goods there, and whether customers check which company made the goods). Data were collected through face-to-face interviews and an online survey (89% and 11%, respectively) in Seoul and Gyeonggi Province over about one month from the beginning of February 2008. Contrary to the assumption that people who go shopping more often buy more PB products, the frequently-visiting target group was not significant on its own. Although we expected women to buy more PB products than men, gender was not significant either, whereas married people were inferred to buy more PB goods than singles. The occupation variables were also not significant: although housewives are more often exposed to supermarkets than workers, no relationship was found. Moreover, we could not show that younger consumers prefer large marts more than those in their 50s and 60s, and education level did not affect the purchase of PB products. Regarding living area, a simple Seoul/non-Seoul comparison did not give the statistically expected result. The results show no relationship between preference for retail brands and PB products, which is consistent with the TNS (2008) finding that customers tend to buy PB products impulsively regardless of brand and location, even when their shopping place is a large mart they use often. The variables with significant results were income level and living place: customers earning 3,000,000-6,000,000 KRW per month on average are more willing to buy PB products than customers whose income is over 6,000,000 KRW, and residents outside Seoul prefer PB goods more than those living in Seoul. When visiting frequency is considered on its own, greater exposure to PB products leads to greater purchasing frequency, which brings the important insight that large retailers must find ways to make customers visit them often in order to increase PB sales. In sum, the variables that significantly affect PB purchases are, demographically, marital status, income level, and residential area, and, among purchase habits, the frequency of visiting large marts. Specifically, married couples rather than singles, middle-income rather than high-income customers, and residents outside Seoul rather than customers in Seoul are more likely to purchase PB goods, and when a customer visits twice as often, the purchase rate of PB products increases by more than 5.3%. Retailers therefore would do well to make their stores fun and comfortable places to shop. To overcome the idea that PB products are merely cheap, one-time purchases, retailers need to build loyalty toward PB goods as they do for NB products, make PB products part of consumers' consideration sets, and generate sustainable sales. In particular, as this paper suggests, it is essential to identify the characteristics of customers who prefer PB, to segment those customers, to select the main target, and to position PB products with well-planned marketing strategies. Doing so can yield meaningful insights for marketing strategy by advancing PB research and identifying differences in customers' lifestyles and shopping habits.
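Below is a minimal sketch of that kind of probit model, estimating the probability of PB purchase from demographic and shopping-behavior variables with statsmodels. The data, column names, coefficients, and coding are simulated for illustration only and are not the paper's survey data.

```python
# Hypothetical probit example: probability of buying PB goods as a function of
# marital status, income bracket, residence, and mart-visit frequency.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "married":    rng.integers(0, 2, n),          # 1 = married
    "income_mid": rng.integers(0, 2, n),          # 1 = 3-6 million KRW/month
    "non_seoul":  rng.integers(0, 2, n),          # 1 = lives outside Seoul
    "visits":     rng.poisson(4, n),              # monthly visits to large marts
})
# simulated purchase indicator, only for illustration
latent = (0.4 * df.married + 0.3 * df.income_mid + 0.3 * df.non_seoul
          + 0.05 * df.visits + rng.normal(size=n))
df["buys_pb"] = (latent > 0.8).astype(int)

X = sm.add_constant(df[["married", "income_mid", "non_seoul", "visits"]]).astype(float)
model = sm.Probit(df["buys_pb"].astype(float), X).fit(disp=False)
print(model.summary())
# marginal effects answer "how much does an extra visit change the probability
# of buying PB goods?"-type questions like those reported in the paper
print(model.get_margeff().summary())
```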


A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung;Bae, Junghwan;Han, Namgi;Song, Min
    • Journal of Intelligence and Information Systems, v.21 no.2, pp.69-92, 2015
  • The explosion of social media data has led researchers to apply text-mining techniques to analyze big social media data more rigorously. Although social media text analysis algorithms have improved, previous approaches still have limitations. In sentiment analysis of social media written in Korean, there are two typical approaches. One is the linguistic approach using machine learning, which is the most common; some studies add grammatical factors to the feature sets used to train the classification model. The other adopts semantic analysis for sentiment analysis, but this approach has mainly been applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, an extension of neural network methods, to capture the more extensive semantic features that were underestimated in existing sentiment analysis. The result of adopting the Word2Vec algorithm is compared with the result of co-occurrence analysis to identify the difference between the two approaches. The results show that the Word2Vec algorithm extracts about three times as many words representing emotion about the given keyword as co-occurrence analysis does. This difference arises from Word2Vec's vectorization of semantic features; the Word2Vec algorithm is thus able to catch hidden related words that traditional analysis does not find. In addition, Part-Of-Speech (POS) tagging for Korean is used to detect adjectives as 'emotion words.' The emotion words extracted from the text are converted into word vectors by the Word2Vec algorithm to find related words, and among these related words the nouns are selected because each of them may have a causal relationship with the emotion word in the sentence. The process of extracting these trigger factors of emotion words is named 'Emotion Trigger' in this study (a toy sketch of the idea follows the abstract). As a case study, the datasets were collected by searching with three keywords, professor, prosecutor, and doctor, because these keywords attract rich public emotion and opinion. Preliminary data collection was conducted to choose secondary keywords for data gathering. The secondary keywords used to gather the data for the actual analysis were as follows: Professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor), Doctor (Shin Hae-chul Sky Hospital, drinking and plastic surgery, rebate), Prosecutor (lewd behavior, sponsor). The size of the text data is about 100,000 (Professor: 25,720; Doctor: 35,110; Prosecutor: 43,225), gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and every program used in text processing and analysis was written in Java. The contributions of this study are as follows. First, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches. Second, finding Emotion Triggers can detect hidden connections to public emotion that existing methods cannot. Finally, the approach used in this study can be generalized regardless of the type of text data. A limitation of this study is that it is hard to confirm that the words extracted by Emotion Trigger processing have a significant causal relationship with the emotion words in a sentence. Future work will clarify the causal relationship between emotion words and the words extracted by Emotion Trigger by comparison with manually tagged relationships. Furthermore, the text data used for Emotion Trigger are from Twitter, so they have a number of distinct features that were not dealt with in this study and will be considered in further work.
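The sketch below illustrates the Emotion Trigger idea under simplifying assumptions: a toy pre-tokenized corpus stands in for the Korean tweets and news, gensim's Word2Vec replaces the Java tooling, and a hard-coded noun set replaces Korean POS tagging.

```python
# Toy "Emotion Trigger" sketch: train Word2Vec on tokenized posts, look up the
# words closest to an emotion adjective, and keep the nouns as candidate
# trigger factors.  Corpus, tokenizer, and noun filter are toy stand-ins.
from gensim.models import Word2Vec

# toy corpus of pre-tokenized "posts"; real input would be tokenized tweets/news
corpus = [
    ["professor", "research", "funding", "angry", "scandal"],
    ["professor", "students", "angry", "assault", "investigation"],
    ["doctor", "hospital", "surgery", "angry", "rebate"],
    ["prosecutor", "sponsor", "angry", "investigation", "scandal"],
] * 50  # repeat so the tiny vocabulary gets enough training examples

model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, seed=1, epochs=20)

emotion_word = "angry"                      # stands in for a POS-tagged adjective
related = model.wv.most_similar(emotion_word, topn=8)

# keep only nouns as "Emotion Triggers"; a real pipeline would use a POS tagger
nouns = {"research", "funding", "scandal", "students", "assault",
         "investigation", "hospital", "surgery", "rebate", "sponsor"}
triggers = [(w, round(s, 3)) for w, s in related if w in nouns]
print(triggers)
```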

A Study on Market Expansion Strategy via Two-Stage Customer Pre-segmentation Based on Customer Innovativeness and Value Orientation (고객혁신성과 가치지향성 기반의 2단계 사전 고객세분화를 통한 시장 확산 전략)

  • Heo, Tae-Young;Yoo, Young-Sang;Kim, Young-Myoung
    • Journal of Korea Technology Innovation Society, v.10 no.1, pp.73-97, 2007
  • R&D into future technologies should be conducted in conjunction with technological innovation strategies that are linked to corporate survival within a framework of information- and knowledge-based competitiveness, and future technology strategies should accordingly be pursued through open R&D organizations. The development of future technologies should not be based simply on forecasts; it should take customer needs into account in advance and reflect them in the development of future technologies or services. This research selects as segmentation variables customers' attitudes toward accepting future telecommunication technologies and their value orientation in everyday life, as these factors will have the greatest effect on the demand for future telecommunication services, and uses them to segment the future telecom service market. In this way, the research segments the market from the stage of technology R&D activities and employs the results to formulate technology development strategies. Based on customer attitudes toward accepting new technologies, two groups were derived, and a hierarchical customer segmentation model was built to conduct a second-stage segmentation of the two groups on the basis of their customer value orientation (a simplified sketch of such a two-stage scheme is given below). A survey was conducted in June 2006 on 800 consumers aged 15 to 69, residing in Seoul and five other major South Korean cities, through one-on-one interviews. The sample was divided into two sub-groups according to the level of acceptance of new technology: a sub-group with a high level of technology acceptance (39.4%) and a sub-group with a comparatively lower level (60.6%). Each of these two sub-groups was then divided into five smaller sub-groups (10 in total) in the second round of segmentation. The ten sub-groups were analyzed in detail, including general demographic characteristics, usage patterns of existing telecom services such as mobile service, broadband internet, and wireless internet, ownership of computing or information devices, and the desire or intention to purchase one. Through these steps, we were able to show statistically that each of the 10 sub-groups responds to telecom services as an independent market. Through correspondence analysis, the target segmentation groups were positioned so as to facilitate the entry of future telecommunication services into the market, as well as their diffusion and transferability.
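A simplified two-stage pre-segmentation sketch is shown below: respondents are first split by a technology-acceptance score and then clustered on value-orientation items within each split. The column names, the five-cluster choice per group, the split threshold, and the random survey data are hypothetical stand-ins for the paper's survey.

```python
# Hypothetical two-stage segmentation: acceptance split, then value clustering.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 800
survey = pd.DataFrame(rng.normal(size=(n, 6)),
                      columns=[f"value_item_{i}" for i in range(1, 7)])
survey["tech_acceptance"] = rng.normal(size=n)

# stage 1: high vs. low technology acceptance (threshold is illustrative)
survey["stage1"] = np.where(survey["tech_acceptance"] > 0.27, "high", "low")

# stage 2: cluster each stage-1 group on value orientation into 5 segments
value_cols = [f"value_item_{i}" for i in range(1, 7)]
for label, group in survey.groupby("stage1"):
    km = KMeans(n_clusters=5, n_init=10, random_state=0)
    clusters = km.fit_predict(group[value_cols])
    survey.loc[group.index, "segment"] = [f"{label}-{c}" for c in clusters]

print(survey["segment"].value_counts())   # ten segments in total
```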


The Effects of Self-regulatory Resources and Construal Levels on the Choices of Zero-cost Products (자아조절자원 및 해석수준이 공짜대안 선택에 미치는 영향)

  • Lee, Jinyong;Im, Seoung Ah
    • Asia Marketing Journal, v.13 no.4, pp.55-76, 2012
  • Most people prefer to choose zero-cost products that they can get without paying any money. This 'zero-cost effect' can be explained with a 'zero-cost model' in which consumers attach special value to zero-cost products in a way that departs from standard economic models (Shampanier, Mazar, and Ariely 2007). If two products regularly priced at ₩200 and ₩400 simultaneously offer ₩200 discounts, their prices become ₩0 and ₩200, respectively. Although the price gap between the two products remains ₩200 after the discounts, people are much more likely to select the free alternative than the same product at a price of ₩200. Whereas prior studies have examined the zero-cost effect in isolation from other factors, this study investigates the moderating effects of self-regulatory resources and construal level on the selection of free products. Self-regulatory resources allow people to control or regulate their behavior; however, because these resources are limited, they are easily depleted when exerted (Muraven, Tice, and Baumeister 1998). Without such resources, consumers tend to become less sensitive to price changes and to spend money more extravagantly (Vohs and Faber 2007). Under this condition, they are also likely to invest less effort in information processing and to make more intuitive decisions (Pocheptsova, Amir, Dhar, and Baumeister 2009). Therefore, context effects such as price changes and the zero-cost effect are less likely under resource depletion. In addition, construal levels have profound effects on how information is processed (Trope and Liberman 2003, 2010). At a high construal level, people tend to attune their minds to core features and desirability aspects, whereas at a low construal level they are more likely to process information based on secondary features and feasibility aspects (Khan, Zhu, and Kalra 2010). The perceived value of a product is more related to desirability, whereas a zero cost or a price level is more associated with feasibility. Thus, context effects that rely on feasibility (for instance, the zero-cost effect) should be diminished at a high construal level but may remain at a low construal level. When people make decisions, these two factors can therefore influence the magnitude of the zero-cost effect. This study ran two experiments to investigate the effects of self-regulatory resources and construal levels on the selection of a free product. Kisses and Ferrero-Rocher, the alternatives adopted in the prior study (Shampanier et al. 2007), were also used in Experiments 1 and 2. Experiment 1 was designed to test whether self-regulatory resource depletion moderates the zero-cost effect. The level of self-regulatory resources was manipulated with two different tasks: a Sudoku task in the depletion condition and a diagram-drawing task in the non-depletion condition. Upon completion of the manipulation task, subjects were randomly assigned to either a decision set with a zero-cost option (Kisses ₩0 and Ferrero-Rocher ₩200) or a set without a zero-cost option (Kisses ₩200 and Ferrero-Rocher ₩400). The pairs of alternatives in the two decision sets have the same price gap of ₩200 between the low-priced Kisses and the high-priced Ferrero-Rocher. Subjects in the non-depletion condition selected Kisses over Ferrero-Rocher more often when Kisses was free (71.88%) than when it was priced at ₩200 (34.88%). However, the zero-cost effect disappeared when people did not have self-regulatory resources. Experiment 2 was conducted to investigate whether construal levels influence the magnitude of the zero-cost effect. To manipulate construal levels, four successive 'why' (high construal level condition) or 'how' (low construal level condition) questions about health management were asked. Subjects were presented with four boxes connected by downward arrows; the top box contained the question 'Why do I maintain good physical health?' or 'How do I maintain good physical health?', and subjects inserted a response to it. Similar tasks were repeated for the second, third, and fourth responses. After the manipulation task, subjects were randomly assigned either to a decision set with a zero-cost option or to a set without it, as in Experiment 1. When a low construal level was primed with 'how,' subjects chose free Kisses over Ferrero-Rocher (60.66%) more often than they chose ₩200 Kisses over ₩400 Ferrero-Rocher (42.19%). In contrast, the zero-cost effect was no longer observed when a high construal level was primed with 'why.'


A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science, v.8 no.3, pp.49-56, 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet became a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which crowded onto portal sites, and all of these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising took off, display advertising including banner advertising dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising as of 2005. Keyword advertising refers to the technique of exposing relevant advertisements at the top of search sites when a user searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, as a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers see them; in this context it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier advertising in that, instead of the seller discovering customers and running advertisements for them as with TV, radio, or banner advertising, it exposes advertisements to customers who come looking. Keyword advertising makes it possible for a company to seek publicity online simply by making use of a single word and to achieve maximum efficiency at minimum cost. Its strong point is that customers come into direct contact with the products in question, making it more efficient than advertising in mass media such as TV and radio. Its weak point is that a company must register its advertisement on each portal site and finds it hard to exercise substantial supervision over the advertisement, so advertising expenses may exceed profits. Keyword advertising serves as the most appropriate advertising method for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as metered-rate advertising: a company pays according to the number of clicks users make on a searched keyword. It is representatively adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures rather than the number of clicks; it fixes a price per 1,000 exposures and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted (a small worked comparison of the two pricing schemes follows the abstract). The weak point of the CPC method is that advertising cost can rise through repeated clicks from the same IP. If a company makes good use of strategies that maximize the strong points of keyword advertising and complement its weak points, it is highly likely to turn its visitors into prospective customers. Accordingly, an advertiser should analyze customers' behavior, approach them in a variety of ways, try hard to find out what they want, and use multiple keywords when running ads. When first running an ad, the advertiser should give priority to keyword selection, considering how many search-engine users will click the keyword in question and how much the advertisement will cost. As the popular keywords that search-engine users frequently use are expensive in terms of unit cost per click, advertisers without much money at the initial phase should pay attention to detailed keywords suited to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keyword ads are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, provoking little resistance, but it fails to attract much attention precisely because most keyword advertising is text. Image-embedded advertising is easier to notice because of its images, but it is exposed on the lower part of a web page and is clearly recognized as an advertisement, which leads to a low click-through rate; its strong point is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people easily recognize, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses to site events and product composition as a means of monitoring their behavior in detail. Keyword advertising also allows them to analyze the advertising effect of exposed keywords through log analysis. Log analysis is a close analysis of the current state of a site based on visitor information such as visitor counts, page views, and cookie values. A user's IP address, the pages used, the time of use, and cookie values are stored in the log files generated by each web server. The log files contain a huge amount of data, and because it is almost impossible to analyze them directly, one is supposed to analyze them using log-analysis solutions. The generic information that can be extracted from log-analysis tools includes the total number of page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours. Such data are useful for analyzing the situation and current status of rival companies as well as for benchmarking. Because keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers the chance to purchase the keywords in question once the advertising contract is over. For sites that give priority to established advertisers, an advertiser relying on seasonal or time-sensitive keywords may as well purchase a vacant advertising slot in advance lest the appropriate timing for advertising be missed. Naver, however, does not give priority to existing advertisers for any keyword advertisements; in this case, one can preoccupy keywords by entering into a contract after confirming the contract period for advertising. This study looks at marketing for keyword advertising and presents effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture, whose strong points are that it is based on the CPC charging model and that its advertisements are registered at the top of the most representative portal sites in Korea; these advantages make it the most appropriate medium for small and medium enterprises. However, Overture's CPC method also has weak points, and the CPC method is not the only, or a perfect, advertising model among search advertisements in the online market. It is therefore absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths, so as to increase their sales and create a point of contact with customers.
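Below is a small, illustrative comparison of the CPC (pay-per-click) and CPM (pay-per-thousand-exposures) billing logic described above. All rates, impression counts, and the click-through rate are hypothetical numbers chosen for the example, not figures from the paper.

```python
# Illustration of CPC vs. CPM billing; every figure below is hypothetical.
def cpc_cost(clicks: int, cost_per_click: float) -> float:
    """Metered-rate (CPC) billing: pay only for the clicks received."""
    return clicks * cost_per_click

def cpm_cost(impressions: int, price_per_thousand: float) -> float:
    """Flat-rate (CPM) billing: pay per 1,000 exposures, regardless of clicks."""
    return impressions / 1000 * price_per_thousand

impressions, ctr = 200_000, 0.015            # assumed 1.5% click-through rate
clicks = int(impressions * ctr)              # 3,000 clicks
print("CPC:", cpc_cost(clicks, 150.0), "KRW")         # 450,000 KRW at 150 KRW/click
print("CPM:", cpm_cost(impressions, 1_000.0), "KRW")  # 200,000 KRW at 1,000 KRW per 1,000 views
```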
