• Title/Summary/Keyword: Management Target


Ecological Network on Benthic Diatom in Estuary Environment by Bayesian Belief Network Modelling (베이지안 모델을 이용한 하구수생태계 부착돌말류의 생태 네트워크)

  • Kim, Keonhee;Park, Chaehong;Kim, Seung-hee;Won, Doo-Hee;Lee, Kyung-Lak;Jeon, Jiyoung
    • Korean Journal of Ecology and Environment
    • /
    • v.55 no.1
    • /
    • pp.60-75
    • /
    • 2022
  • The Bayesian algorithm model calculates probabilities based on input data and is mainly used for complex disasters, water quality management, and ecological structures among living things or between living and non-living factors. In this study, we analyzed the main factors affecting change in the Korean Estuary Trophic Diatom Index (KETDI) through Bayesian network analysis, using the diatom community and physicochemical factors in domestic estuarine aquatic ecosystems. For the Bayesian analysis, estuarine diatom habitat data and estuarine aquatic diatom health data (2008~2019) were used. Data were classified into habitat, physical, chemical, and biological factors. Each dataset was input to the Bayesian network model (GeNIe model), and estuary aquatic network analysis was performed nationwide and for each coast. From 2008 to 2019, a total of 625 diatom taxa were identified, consisting of 2 orders, 5 suborders, 18 families, 141 genera, 595 species, 29 varieties, and 1 form. Nitzschia inconspicua had the highest cumulative cell density, followed by Nitzschia palea, Pseudostaurosira elliptica, and Achnanthidium minutissimum. Analyzing the ecological network of the diatom health assessment in the estuary ecosystem with the Bayesian network model showed that the biological factor was the most sensitive factor influencing the health assessment score, whereas the habitat and physicochemical factors had relatively low sensitivity. The diatom taxa most sensitive to the assessment of estuarine aquatic health were Nitzschia inconspicua, N. fonticola, Achnanthes convergens, and Pseudostaurosira elliptica. In addition, the ratio of industrial area and of cattle sheds near the habitat was sensitively linked to the health assessment. The major taxa sensitive to the diatom health evaluation differed by coast. Bayesian network analysis was useful for identifying major variables, including diatom taxa, that affect aquatic health even in ecological structures as complex as estuary ecosystems, and it makes it possible to identify restoration targets accurately when restoring a damaged estuarine aquatic ecosystem.
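
As a rough illustration of the modelling approach described above, the sketch below builds a toy Bayesian network with the open-source pgmpy library rather than the GeNIe software used in the study. The node names, probability tables, and the one-variable-at-a-time sensitivity probe are illustrative assumptions, not the paper's model.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy network: three factor groups feed one binary health-score node.
model = BayesianNetwork([
    ("Habitat", "HealthScore"),
    ("Chemical", "HealthScore"),
    ("DiatomTaxa", "HealthScore"),
])

# Priors for the factor nodes (state 0 = poor, 1 = good); values are made up.
cpd_h = TabularCPD("Habitat", 2, [[0.4], [0.6]])
cpd_c = TabularCPD("Chemical", 2, [[0.5], [0.5]])
cpd_d = TabularCPD("DiatomTaxa", 2, [[0.3], [0.7]])

# Conditional table chosen so that DiatomTaxa (the biological factor) moves
# the score most, echoing the paper's sensitivity finding.
cpd_s = TabularCPD(
    "HealthScore", 2,
    [[0.95, 0.40, 0.90, 0.30, 0.90, 0.25, 0.80, 0.10],   # P(poor | parents)
     [0.05, 0.60, 0.10, 0.70, 0.10, 0.75, 0.20, 0.90]],  # P(good | parents)
    evidence=["Habitat", "Chemical", "DiatomTaxa"],
    evidence_card=[2, 2, 2],
)
model.add_cpds(cpd_h, cpd_c, cpd_d, cpd_s)
assert model.check_model()

# One-variable-at-a-time sensitivity probe: flip each factor and compare the
# posterior probability of a good health score.
infer = VariableElimination(model)
for node in ["Habitat", "Chemical", "DiatomTaxa"]:
    lo = infer.query(["HealthScore"], evidence={node: 0}).values[1]
    hi = infer.query(["HealthScore"], evidence={node: 1}).values[1]
    print(f"{node}: P(good) swing = {abs(hi - lo):.2f}")
```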

Selection and Validation of an Analytical Method for Trifludimoxazin in Agricultural Products with LC-MS/MS (LC-MS/MS를 이용한 농산물 중 Trifludimoxazin의 시험법 선정 및 검증)

  • Sun Young Gu;Su Jung Lee;So eun Lee;Chae Young Park;Jung Mi Lee;Inju Park;Yun Mi Chung;Gui Hyun Jang;Guiim Moon
    • Journal of Food Hygiene and Safety
    • /
    • v.38 no.3
    • /
    • pp.79-88
    • /
    • 2023
  • Trifludimoxazin is a triazinone herbicide that inhibits protoporphyrinogen oxidase (PPO); PPO inhibition damages cell membranes, leading to plant cell death. An official analytical method for the safety management of trifludimoxazin is necessary because it is a newly registered herbicide in Korea. Therefore, this study aimed to develop a residue analysis method to detect trifludimoxazin in five representative agricultural products. The EN method was established as the final extraction method by comparing its recovery and matrix effect with those of the QuEChERS method. Various sorbents were tested to establish the clean-up method; no differences were observed among them, and MgSO4 and PSA were selected as the final clean-up conditions. We used LC-MS/MS, considering the selectivity and sensitivity of the target pesticide, and analyzed the samples in MRM mode. The recovery tests using the established method and the inter-laboratory validation showed recoveries in the valid range of 73.5-100.7%, with a relative standard deviation below 12.6% and a coefficient of variation below 14.5%. Therefore, trifludimoxazin can be analyzed using a modified QuEChERS method that is widely available in Korea, ensuring the safety management of its residues in agricultural products.
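
The recovery and RSD figures quoted above follow from standard method-validation arithmetic. The sketch below shows that arithmetic on hypothetical replicate data; the measured values and the spike level are assumptions, not the paper's measurements.

```python
import statistics

def recovery_stats(measured_ug_kg, spiked_ug_kg):
    """Percent recovery of each replicate plus the relative standard deviation."""
    recoveries = [100.0 * m / spiked_ug_kg for m in measured_ug_kg]
    mean = statistics.mean(recoveries)
    rsd = 100.0 * statistics.stdev(recoveries) / mean  # RSD as % of the mean
    return mean, rsd

# Five hypothetical replicates at a 10 ug/kg spike level.
mean_rec, rsd = recovery_stats([8.1, 8.4, 7.9, 8.6, 8.2], 10.0)
print(f"mean recovery = {mean_rec:.1f}%, RSD = {rsd:.1f}%")
```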

The Influence of Loyalty Program on the Effect of Customer Retention: Focused on Education Service Industry (고객보상 프로그램이 고객 유지에 미치는 효과: 교육 서비스 산업을 중심으로)

  • Jeon, Hoseong
    • Asia Marketing Journal
    • /
    • v.13 no.3
    • /
    • pp.25-53
    • /
    • 2011
  • This study probes the effect of a loyalty program on customer retention using real transaction data (n=2,892) from the education service industry. We examine the outcomes of the reward program with more than a year of data, gathered and analyzed according to a quasi-experimental (before-and-after) design. We adopt this research scheme because previous studies measured the effect of loyalty programs by dividing customers into two groups (members vs. non-members) after the firm or store had started the program, an approach that cannot avoid self-selection bias. The research questions of this study are as follows. First, most research holds that loyalty programs increase customer loyalty and contribute to the sustainable growth of the company, but there is little confirmation that this promotional tool is justified in financial terms; we therefore examine both the retention rate and the financial outcomes of introducing a loyalty program. Second, reward programs mainly target current customers. CRM (customer relationship management) research in particular argues that building positive relationships with current customers is more profitable than pursuing new ones, and that reward programs are an excellent means to this end. We therefore check whether there is an interaction effect between the loyalty program and customer type in retaining customers. Third, dissatisfied customers are said to be more likely to leave the company than satisfied customers. Bolton, Kannan, and Bramlett (2000) claimed that reward programs can minimize the effect of negative service experiences by building an emotional link with customers, but this has not been empirically confirmed; on this view, loyalty programs work as an exit barrier for current customers. This study therefore tests whether there is an interaction effect between the loyalty program and service experience in keeping customers. To this end, we adopt both Kaplan-Meier survival analysis and the Cox proportional hazards model. The results show that the average retention period was 179 days before introducing the loyalty program and increased to 227 days after rewards were given to customers. Since this difference is statistically significant, H1 is supported. In addition, the contribution margin from the increased transaction period is larger than the cost of administering the loyalty program. To address the other research questions, we probe the interaction effects between the loyalty program and the other factors (customer type and service experience). The Cox proportional hazards analysis shows that current customers are more likely than new customers to engage in building a relationship with the company, and the retention rate of satisfied customers is significantly higher than that of dissatisfied customers. Interestingly, the transaction period of dissatisfied customers increased notably after the loyalty program was introduced. Thus, H2, H3, and H4 are also supported. In summary, we find that loyalty programs have value as a promotional tool for forming positive relationships with customers and building exit barriers.
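
For readers who want to see the shape of this analysis, the sketch below sets up Kaplan-Meier curves and a Cox proportional hazards model with the lifelines package on synthetic data; the column names, effect sizes, and interaction terms are assumptions standing in for the study's actual variables, not its data.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "loyalty": rng.integers(0, 2, n),    # 1 = enrolled in the reward program
    "current": rng.integers(0, 2, n),    # 1 = existing customer
    "satisfied": rng.integers(0, 2, n),  # 1 = satisfied with the service
})
# Longer retention for members and satisfied customers, loosely mimicking
# the paper's 179 -> 227 day finding.
base = rng.exponential(179, n)
df["duration"] = base * (1 + 0.27 * df["loyalty"] + 0.15 * df["satisfied"])
df["churned"] = 1  # every spell ends in churn in this toy dataset

# Kaplan-Meier curves by program membership.
km = KaplanMeierFitter()
for flag, grp in df.groupby("loyalty"):
    km.fit(grp["duration"], grp["churned"], label=f"loyalty={flag}")
    print(f"loyalty={flag}: median retention = {km.median_survival_time_:.0f} days")

# Cox model with the interaction terms that H2-H4 would test.
df["loyalty_x_current"] = df["loyalty"] * df["current"]
df["loyalty_x_satisfied"] = df["loyalty"] * df["satisfied"]
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="churned")
cph.print_summary()
```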


Application and Comparative Analysis of River Discharge Estimation Methods Using Surface Velocity (표면유속을 이용한 하천 유량산정방법의 적용 및 비교 분석)

  • Jae Hyun, Song;Seok Geun Park;Chi Young Kim;Hung Soo Kim
    • Journal of Korean Society of Disaster and Security
    • /
    • v.16 no.2
    • /
    • pp.15-32
    • /
    • 2023
  • Measuring discharge by submerging instruments during the flood season involves difficulties such as safety problems and manpower requirements, because rivers carry much floating debris and flow very fast. As an alternative, microwave water surface current meters, which measure discharge in the field without directly contacting the water surface, have been increasingly used. However, this method, too, is hard to apply under sudden, rapidly changing field conditions. Therefore, estimating discharge from the surface velocity in flood conditions requires a theoretically sound and economical approach. In this study, measurements from microwave water surface current meters and rating curves were collected and analyzed with discharge estimation methods based on surface velocity. In general, the measured and converted discharges were similar for all methods at a hydraulic radius of 3 m or more or a mean velocity of 2 m/s or more. The study also computed discharge by the index velocity method and the velocity profile method, using the maximum surface velocity in the cross-section where the maximum velocity occurs, over the high-water-level range of the rating curve at the target locations; the resulting mean relative error against the converted discharge was within 10%. In other words, during the flood season, discharge estimation using a single maximum surface velocity measurement, the index velocity method, and the velocity profile method can be applied to develop high-level extrapolation, and the reliability of the extrapolated range of the rating curve can thereby be improved. The discharge estimation method using surface velocity is therefore expected to become a fast and efficient discharge measurement method during the flood season.
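
The core conversion behind the methods compared above can be written in a few lines. The sketch below applies the simplified index-velocity idea, Q = k·Vs·A; the coefficient 0.85 is the conventional surface-to-mean velocity factor, used here as an assumption rather than the paper's calibrated value.

```python
def discharge_from_surface_velocity(v_surface_max, area_m2, k=0.85):
    """Q = k * Vs * A: index velocity method, simplified.

    v_surface_max -- measured maximum surface velocity (m/s)
    area_m2       -- flow cross-sectional area (m^2)
    k             -- index coefficient converting surface to mean velocity
    """
    return k * v_surface_max * area_m2

# Example: 3.2 m/s surface velocity over a 450 m^2 flow area (made-up values).
q = discharge_from_surface_velocity(3.2, 450.0)
print(f"Q = {q:.0f} m^3/s")
```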

A Study on the Improvement of Flexible Working Hours (유연근로시간제 개선에 대한 연구)

  • Kwon, Yong-man;Seo, Ei-seok
    • Journal of Venture Innovation
    • /
    • v.4 no.2
    • /
    • pp.97-108
    • /
    • 2021
  • Labor contracts take the form of an exchange of labor for wages, but since they transcend simple barter, they can be identified economically as "trade" and legally as a kind of "rental." From a legal point of view, the device that supports and imposes binding force on this commodity exchange relation is the contract. Under a labor contract, the worker receives wages and, as the counterpart to those wages, places a certain amount of time under the direction and supervision of the employer. Because working hours are subordinated hours in which one's labor is placed under the employer's authority of disposition, long working hours can harm the health and safety of workers and can infringe the values they should enjoy as human beings. Working hours need to be shortened for the sake of productivity and workers' enjoyment of culture, so that labor power can be expanded and reproduced, but this must remain compatible with employers' management and production activities, as pursued in other capitalist countries. Working hours can be viewed both as individual time and as time of society as a whole: reducing long hours at the individual level is desirable, while an increase in output due to increased production time at the social level can aid social development. It is necessary to consider working hours in terms of the balance between these individual and social levels. Whereas the traditional approach regulated the total amount of working hours, flexible working hours are a qualitative method of regulation that allows companies to allocate and organize working hours flexibly within a certain range, up to 52 hours per week. Accordingly, it is necessary to shorten working hours while expanding the flexible working hours system according to each company's situation. To this end, the flexible working hours system, currently limited to six months, should be operated more flexibly; selective working hours should be handled by agreement between employers and workers; and the scope of work eligible for discretionary working hours should be expanded in line with the development of information and communication technology and the new types of work arising from the Fourth Industrial Revolution.

The Influence of K-Content Experience on National Image, Tourism Attitude and Visit to Intention: Targeting Chinese (K-콘텐츠 경험이 국가이미지와 관광태도 및 방문의도에 미치는 영향 : 중국인을 대상으로)

  • Park, Heejung
    • Journal of Service Research and Studies
    • /
    • v.14 no.1
    • /
    • pp.91-107
    • /
    • 2024
  • This study empirically examined the potential of K-content among Chinese consumers, a market that has recently slowed due to movement restrictions and political and diplomatic conflicts but remains very meaningful for Korea's content and tourism industries. In the analysis, K-content experience, tourism attitude, and visit intention each loaded on a single factor, while national image split into two factors: "functional image" and "cultural image." Examining the relationships among these factors, established from previous studies, all five research hypotheses were supported. K-content experience had a significant influence on both national-image factors, with a greater influence on the cultural image than on the functional image; the cultural image had the greater influence on tourism attitude; K-content experience had a significant effect on tourism attitude; and tourism attitude had a significant effect on visit intention. These results confirm once again that national image eventually becomes an important factor in linking to actual tourism behavior, and in this respect "culture" is a key factor that can lead to actual tourism and visits. Whereas earlier work on national image emphasized the functional aspects of a country, it is now necessary to focus more on the importance of culture. Since K-content experience significantly affects tourism attitude, it can contribute to forming the positive tourism attitudes that lead to actual tourism behavior, so various efforts will be needed to foster active tourism attitudes using K-content. As the content and target scope of K-content expand and diversify, specific strategies for each sub-market using cultural content in various fields should be established and implemented.

The Effect of Price Promotional Information about Brand on Consumer's Quality Perception: Conditioning on Pretrial Brand (품패개격촉소신식대소비자질량인지적영향(品牌价格促销信息对消费者质量认知的影响))

  • Lee, Min-Hoon;Lim, Hang-Seop
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.3
    • /
    • pp.17-27
    • /
    • 2009
  • Price promotion typically reduces the price for a given quantity or increases the quantity available at the same price, thereby enhancing value and creating an economic incentive to purchase. It is often used to encourage product or service trial among nonusers. Thus, it is important to understand the effects of price promotions on the quality perceptions of consumers who have no prior experience with the promoted brand. If consumers associate the price promotion itself with inferior brand quality, the promotion may not achieve the sales increase that its economic incentives might otherwise have produced: an unfavorable quality perception undercuts the economic and psychological incentives and reduces the likelihood of purchase. It is therefore important for marketers to understand how price promotional information about a brand affects unfavorable quality perceptions of that brand. Previous literature on the effects of price promotions on quality perception offers inconsistent explanations. Some studies focused on the unfavorable effect of price promotion on consumer perception, while others showed that price promotions did not raise unfavorable perceptions of the brand. Prior research related these inconsistent results to the timing of the promotion's exposure and of the quality evaluation relative to trial; whether the consumer has experienced the product's promotions in the past may also moderate the effects, and a few studies considered differences among product categories as a fundamental factor. The purpose of this research is to investigate the effect of price promotional information on unfavorable quality perception under different conditions. The author controlled the timing of the promotional exposure and varied past promotional patterns and information presentation patterns. Unlike previous research, the author examined the effects of price promotions restricted to the pretrial situation, controlling the potentially moderating effect of prior personal experience with the brand. These manipulations make it possible to resolve controversies around this issue, and they are also meaningful for practitioners: price promotion targets not only existing customers but also nonusers, and if nonusers associate the promotion itself with inferior quality, the promotion fails; moreover, when the promotion ends, customers who bought the brand may show sharply decreased repurchasing behavior. Through the literature review, Hypothesis 1 was set as follows to investigate the moderating effect of past price promotions on consumers' quality perception: the influence of a price promotion of an untried brand on quality perception will be moderated by the brand's past price promotion activity; in other words, a price promotion of an untried brand that has not run price promotions in the past will have an unfavorable effect on quality perception.
Hypothesis 2-1 was set as follows: when an untried brand undertakes a price promotion for the first time, the information presentation pattern of the promotion will affect the consumer's attribution of the cause of the promotion. Hypothesis 2-2 was set as follows: the more dispositionally the consumer attributes the cause of the price promotion, the more unfavorable the resulting quality perception will be. In Test 1, subjects were given a brief explanation of the product and the brand and then assigned to a 2×2 factorial design with four price promotion patterns (presence or absence of past price promotion × presence or absence of current price promotion), each cell accompanied by an explanation describing its promotion pattern. The perceived quality of the fictitious brand WAVEX was then evaluated on a 7-point scale. Tennis rackets were chosen because the selected product group needed to have had almost no past price promotions, to eliminate the influence of the average frequency of promotion on the value of price promotional information, as Raghubir and Corfman (1999) pointed out. Test 2 was carried out on students of the same management faculty as Test 1, again with tennis rackets as the product group. As in Test 1, subjects with average familiarity with the product group and low familiarity with the brand were selected. Each subject was assigned to one of two cells representing two information presentation patterns for WAVEX's price promotion (the reason behind the promotion provided vs. not provided). Subjects looked at the promotional information before evaluating the perceived quality of WAVEX on a 7-point scale. The effect of a price promotion for an unfamiliar, untried brand on perceived quality proved to be moderated by the presence or absence of past price promotions. Consistency with past promotional behavior is an important variable that worsens the unfavorable effect on brand evaluations: if a price promotion for the brand has never been carried out before, the promotion may have a more unfavorable effect on quality perception. Second, when the price promotion of an unfamiliar, untried brand is executed for the first time, the presentation of the information affects the consumer's attribution of the cause of the firm's promotion, and the unfavorable effect on quality perception is greater when the consumer makes a dispositional rather than a situational attribution. Unlike previous studies that focused on the presence or absence of favorable or unfavorable motivation arising from situational versus dispositional attribution, this study focused on the fact that a situational attribution can be induced, even where the consumer would otherwise attribute the price promotional behavior dispositionally, if the company provides a persuasive reason. In academic terms, this approach is significant in that it explains anchoring-and-adjustment procedures by applying them to a non-mathematical problem, unlike previous studies that mainly applied them to mathematical problems.
In other words, there is a strong tendency to attribute others' behaviors dispositionally, in line with the fundamental attribution error, and applied to price promotions this implies that consumers are likely to attribute a company's price promotion behavior dispositionally. Even under these circumstances, however, the company can adjust the consumer's anchoring to reduce the possibility of a dispositional attribution. Furthermore, unlike the majority of previous research on the short- and long-term effects of price promotions, which considered only their effect on purchasing behavior, this research measured the effect on perceived quality, one of the many elements that affect consumers' purchasing behavior. These results carry useful implications for practitioners, suggesting a guideline for effectively providing promotional information for a new brand. If a brand is to avoid false inferences of inferior quality while implementing a price promotion strategy, it must provide a clear and acceptable reason for the promotion; this is all the more important for a company with no past price promotions, since inconsistent behavior can cause consumer distrust and anxiety and is one of the major risk factors for endless price wars. Price promotions without prior notice can buy doubt from consumers, not market share.
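
As a minimal sketch of how data from the 2×2 factorial design described above could be analyzed, the code below runs a two-way ANOVA with statsmodels on synthetic 7-point quality ratings; the abstract does not name the statistical test used, and the cell means and sample sizes here are assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
rows = []
for past in (0, 1):          # past price promotion: absent / present
    for current in (0, 1):   # current price promotion: absent / present
        # Assumed pattern: a current promotion lowers quality ratings most
        # when the brand has no promotion history (past=0).
        mu = 5.0 - 1.2 * current * (1 - past)
        for rating in rng.normal(mu, 1.0, 30):
            rows.append({"past": past, "current": current, "quality": rating})
df = pd.DataFrame(rows)

# Two-way ANOVA with the interaction term corresponding to the moderation
# effect hypothesized above.
model = smf.ols("quality ~ C(past) * C(current)", data=df).fit()
print(anova_lm(model, typ=2))
```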


A Study of Equipment Accuracy and Test Precision in Dual Energy X-ray Absorptiometry (골밀도검사의 올바른 질 관리에 따른 임상적용과 해석 -이중 에너지 방사선 흡수법을 중심으로-)

  • Dong, Kyung-Rae;Kim, Ho-Sung;Jung, Woon-Kwan
    • Journal of radiological science and technology
    • /
    • v.31 no.1
    • /
    • pp.17-23
    • /
    • 2008
  • Purpose: Since bone density results vary with the testing environment, the equipment, and the precision/accuracy of the operator, quality management must be carried out systematically. Equipment failures caused by overload, due to aging equipment and growing patient numbers, occur frequently; the resulting equipment replacements and additional purchases of new bone densitometers create a compatibility problem in tracking patients. This study examines whether the clinical changes in a patient's bone density are accurately and precisely reflected when replacement and additional units are used interchangeably with the existing equipment. Materials and methods: Two GE Lunar Prodigy Advance units (P1 and P2) and a HOLOGIC spine phantom (HSP) were used to measure equipment precision. Each unit scanned the phantom 20 times to acquire precision data (Group 1). Operator precision was measured by scanning the same patient twice, 15 patients on each unit, among 120 women (average age 48.78, range 20-60 years) (Group 2). In addition, operator precision and cross-calibration data were obtained by scanning the HSP 20 times on each unit, based on the data from daily morning quality control with the aluminum spine phantom (ASP) (Group 3). Finally, each of 120 women (average age 48.78, range 20-60 years) was scanned once on each unit alternately to obtain operator precision and cross-calibration data (Group 4). Results: According to the daily QC data (0.996 g/cm², %CV 0.08), the equipment was stable. In Group 1, the mean±SD and %CV were P1: 1.064±0.002 g/cm² (%CV 0.190) and P2: 1.061±0.003 g/cm² (%CV 0.192). In Group 2, they were P1: 1.187±0.002 g/cm² (%CV 0.164) and P2: 1.198±0.002 g/cm² (%CV 0.163). In Group 3, the average error±2SD and %CV were P1: spine 0.001±0.03 g/cm² (%CV 0.94), femur 0.001±0.019 g/cm² (%CV 0.96); P2: spine 0.002±0.018 g/cm² (%CV 0.55), femur 0.001±0.013 g/cm² (%CV 0.48). In Group 4, the average error±2SD, %CV, and r were spine: 0.006±0.024 g/cm² (%CV 0.86, r=0.995) and femur: 0±0.014 g/cm² (%CV 0.54, r=0.998). Conclusion: Both the Lunar ASP %CV and the HOLOGIC spine phantom results fall within the normal error range of ±2% defined by the ISCD. BMD measurements kept relatively constant values, showing excellent repeatability. A phantom is homogeneous, however, and has limitations in reflecting clinical variation such as differences in patients' body weight or body fat; nevertheless, quality control using phantoms is useful for detecting miscalibration of the equipment. Comparing the results of Group 3 with Group 4 in Bland-Altman plots, the values measured twice on one unit and the values cross-measured on the two units all fell within 2SD, and r values of 0.99 or higher in the linear regression analysis indicated high precision and correlation. Therefore, the two cross-calibrated units did not affect patient tracking. Regular testing of equipment and operator performance, followed by appropriate calibration, is required to produce reliable BMD values.
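
The %CV values reported above come from standard precision arithmetic over repeated scans. The sketch below shows the usual root-mean-square computation for duplicate measurements on hypothetical BMD values; the numbers are illustrative only, not the study's data.

```python
import math

def rms_sd(paired_scans):
    """RMS standard deviation over duplicate measurements.

    For a pair (a, b), the per-pair variance is (a - b)^2 / 2; the precision
    SD is the square root of the mean of those variances.
    """
    sq = [(a - b) ** 2 / 2 for a, b in paired_scans]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical duplicate BMD scans in g/cm^2.
scans = [(1.064, 1.061), (1.066, 1.063), (1.062, 1.060)]
sd = rms_sd(scans)
mean_bmd = sum(a + b for a, b in scans) / (2 * len(scans))
print(f"precision SD = {sd:.4f} g/cm^2, %CV = {100 * sd / mean_bmd:.2f}%")
```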


Critical Success Factor of Noble Payment System: Multiple Case Studies (새로운 결제서비스의 성공요인: 다중사례연구)

  • Park, Arum;Lee, Kyoung Jun
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.59-87
    • /
    • 2014
  • In the MIS field, research on payment services has focused on adoption factors, using behavioral theories such as TRA (Theory of Reasoned Action), TAM (Technology Acceptance Model), and TPB (Theory of Planned Behavior). Previous studies presented various adoption factors according to the type of payment service, nation, culture, and so on, and even the adoption factors of the same payment service were presented differently by different researchers. The payment service industry has relatively strong path dependence on existing payment methods, so research results on the same payment service differ with the payment culture of each nation. This paper aims to suggest a success factor for the adoption of novel payment services that holds regardless of national culture and payment characteristics, and to support it with evidence. In previous research, the common adoption factors of payment services are convenience, ease of use, security, speed, and the like. But real cases prove that the adoption factors presented by previous research are not always critical to penetrating a market successfully. For example, PayByPhone, an NFC-based parking payment service, successfully penetrated its early market and grew. In contrast, the Google Wallet service failed to be adopted by users despite an NFC-based payment method providing convenience, security, and ease of use. As these cases show, there remains an unexplained aspect. The research question of the present study therefore emerged as: "What is the more essential and fundamental factor that takes precedence over convenience, security, and ease of use for successful market penetration?" This paper analyzes four cases under the following hypothesis and demonstrates it: "To successfully penetrate a market and grow sustainably, a new payment service should find non-customers of the existing payment methods and provide a novel payment method that they can actually use." We give plausible explanations for the hypothesis using multiple case studies. Diners Club, Danal, PayPal, and Square were selected as typical, successful cases in each category of payment service. The discussion of the cases is primarily a non-customer analysis of whom the novel payment service targets, in order to find the most crucial factor in the early market; we do not attempt to consider factors for later business growth. We clarified the three tiers of non-customers of the existing payment method that each new payment service targeted and elaborated how the new payment service satisfies them. In the case of the credit card, the service targeted first-tier non-customers who could not pay because they temporarily had no cash even though they had regular incomes; the credit card gave them the opportunity to engage in economic activity by delaying the date of payment. The case study of wireless phone payment shows that this service targeted second-tier non-customers who could not use online payment because they worried about security or faced a complex process of learning an online payment method; wireless phone payment therefore provided a very convenient payment method and, in particular, enabled young people to pay small amounts without a credit card. The case study of PayPal, an online payment service, shows that it targeted second-tier non-customers who refused to use online payment services out of concern over leaks of sensitive information such as passwords and credit card details.
Accordingly, the PayPal service allows users to pay online without providing sensitive information. Finally, the Square case, a mobile-POS-based payment service, shows that it targeted second-tier non-customers who could not transact offline as individuals when cash was short; Square provides a dongle that functions as a POS terminal when plugged into the earphone jack. As a result, all four cases turned non-customers into customers, penetrated their early markets, and extended their market shares. Consequently, all cases supported the hypothesis, which is highly plausible according to the "analytic generalization" that case study methodology suggests. For judging the quality of the research design, we consider the following criteria. Construct validity, internal validity, external validity, and reliability are common to all social science methods and have been summarized in numerous textbooks (Yin, 2014); in case study methodology they have also served as a framework for assessing large groups of case studies (Gibbert, Ruigrok & Wicki, 2008). Construct validity means identifying correct operational measures for the concepts being studied; to satisfy it, we use multiple sources of evidence such as academic journals, magazines, and articles. Internal validity means seeking to establish causal relationships, whereby certain conditions are believed to lead to other conditions, as distinguished from spurious relationships; to satisfy it, we carry out explanation building through the four case analyses. External validity means defining the domain to which a study's findings can be generalized; to satisfy it, replication logic across the multiple case studies is used. Reliability means demonstrating that the operations of a study, such as the data collection procedures, can be repeated with the same results; to satisfy it, we use a case study protocol. In Korea, competition among stakeholders in the mobile payment industry is intensifying: not only the three main telecom companies but also smartphone makers and service providers like KakaoTalk have announced that they will enter the mobile payment industry. The industry is becoming competitive, but it does not yet have momentum, notwithstanding optimistic forecasts of fast growth. Mobile payment services are categorized into various technology-based services, such as IC mobile cards and application payment services based on the cloud, NFC, sound waves, BLE (Bluetooth Low Energy), biometric recognition, and so on. Mobile payment services are discontinuous innovations: users must change their behavior and new infrastructure must be installed, which requires users to learn how to use the services and imposes installation costs on shopkeepers; in addition, the payment industry has strong path dependence. Despite these obstacles, mobile payment services, which as discontinuous innovations should provide dramatically improved value as products and services, are still focusing on convenience, security, and the like. We suggest the following for mobile payment services to succeed. First, identify the non-customers of the existing payment services. Second, capture their needs. Then the novel payment service should give those who cannot pay by the existing methods a payment method they can use. In conclusion, mobile payment services can create new markets and extend the payment market.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and may cause enormous damage. IT facilities in particular behave irregularly because of their interdependence, which makes causes difficult to determine. Previous studies predicting failure in data centers treated each server as a single, independent state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, user errors, and so on; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures occurring within servers, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved, particularly because server failures do not occur singly: one failure may cause other server failures, or a server may receive something that causes a failure from another server. In other words, while existing studies analyzed failure on the assumption that servers do not affect one another, this study assumes that failures have effects between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center were used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring for each device were sorted in chronological order, and when a failure occurred in one piece of equipment, any failure in another piece of equipment within 5 minutes of that occurrence was defined as simultaneous. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed together within the sequences were selected, and the cases where the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is time-series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning architecture was used to reflect the fact that each server contributes differently to a complex failure: this method increases prediction accuracy by giving more weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets.
In the first experiment, the same collected data were treated once as single-server states and once as multi-server states, and the results were compared. The second experiment improved the prediction accuracy for complex failures by optimizing the threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted to have no failure even though failures actually occurred; under the multi-server assumption, all five servers were correctly predicted to have failed. This result supports the hypothesis that there are effects between servers. The study thus confirmed that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that predicts failures occurring on servers in data centers. Using these results, it is expected that failures can be prevented in advance.
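
To make the architecture concrete, the sketch below shows one plausible reading of the approach in PyTorch: an LSTM encodes each server's metric time series, and an attention layer weights the servers before a failure prediction is made, so that more influential servers get higher weight. The shapes, layer sizes, and metric counts are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ServerAttentionNet(nn.Module):
    """LSTM per-server encoder followed by attention over servers."""

    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)  # one attention score per server
        self.head = nn.Linear(hidden, 1)  # binary failure logit

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_features)
        b, s, t, f = x.shape
        # Encode every server's time series with the shared LSTM.
        _, (h, _) = self.encoder(x.reshape(b * s, t, f))
        server_vecs = h[-1].reshape(b, s, -1)           # (batch, servers, hidden)
        # Attention weights over servers: higher weight = more influence.
        weights = torch.softmax(self.attn(server_vecs), dim=1)
        context = (weights * server_vecs).sum(dim=1)    # weighted server summary
        return self.head(context).squeeze(-1), weights.squeeze(-1)

# 4 samples, 5 servers, 60 time steps, 8 resource metrics (all hypothetical).
model = ServerAttentionNet(n_features=8)
x = torch.randn(4, 5, 60, 8)
logit, server_weights = model(x)
print(logit.shape, server_weights.shape)  # torch.Size([4]) torch.Size([4, 5])
```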