• Title/Summary/Keyword: E-Metrics


Development of an Angle Estimation System Using a Soft Textile Bending Angle Sensor (소프트 텍스타일 굽힘 각 센서를 이용한 각도 추정 시스템 개발 )

  • Seung-Ah Yang;Sang-Un Kim;Joo-Yong Kim
    • Science of Emotion and Sensibility
    • /
    • v.27 no.1
    • /
    • pp.59-68
    • /
    • 2024
  • This study aimed to develop a soft fabric-based elbow-bending angle sensor that can replace conventional hard-type inertial sensors, together with a system for estimating bending angles using it. To enhance comfort during exercise, this study treated four fabrics (Bergamo, E-band, span cushion, and polyester) by single-walled carbon nanotube dip coating to create conductive textiles. Subsequently, one fabric was selected based on performance evaluations, and an elbow flexion angle sensor was fabricated. Gauge factor, hysteresis, and sensing range were employed as performance evaluation metrics. The data obtained using the fabricated sensor showed different trends in sensor values for changes in the angle during bending and extending movements. Because of this divergence, the two movements were separated, and this separation constituted step one of the process. In step two, a multilayer perceptron (MLP) was employed to handle the complex nonlinear relationships and achieve high data accuracy. Consequently, a soft-fabric bending angle sensor was developed, and using the MLP, the nonlinear relationships could be addressed, enabling angle estimation. Based on these results, we anticipate the effective utilization of the developed system in various smart wearable and healthcare domains.
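The two-step pipeline described above (separating movement direction, then regressing the angle with an MLP) can be sketched as follows. This is a minimal illustration on synthetic data; the sensor response model, network size, and angle range are assumptions for demonstration, not the authors' actual data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the textile sensor: a nonlinear, noisy mapping
# from elbow angle (degrees) to a resistance-like sensor reading.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 150, 500)
sensor = 0.01 * angles + 0.00002 * angles**2 + rng.normal(0, 0.01, 500)

# Step two of the pipeline: an MLP learns the inverse (sensor -> angle)
# nonlinear relationship for one movement direction.
model = MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(sensor.reshape(-1, 1), angles)

pred = model.predict(sensor.reshape(-1, 1))
rmse = float(np.sqrt(np.mean((pred - angles) ** 2)))
```

In the paper's setup, a separate fit (or data split) would be used for bending and for extending, since the two movements show different sensor trends.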

Exploring the Role of Preference Heterogeneity and Causal Attribution in Online Ratings Dynamics

  • Chu, Wujin;Roh, Minjung
    • Asia Marketing Journal
    • /
    • v.15 no.4
    • /
    • pp.61-101
    • /
    • 2014
  • This study investigates when and how disagreements in online customer ratings prompt more favorable product evaluations. Among the three metrics of volume, valence, and variance that feature in the research on online customer ratings, volume and valence have exhibited consistently positive patterns in their effects on product sales or evaluations (e.g., Dellarocas, Zhang, and Awad 2007; Liu 2006). Ratings variance, or the degree of disagreement among reviewers, however, has shown rather mixed results, with some studies reporting positive effects on product sales (e.g., Clement, Proppe, and Rott 2007) and others reporting negative effects on product evaluations (e.g., Zhu and Zhang 2010). This study aims to resolve these contradictory findings by introducing preference heterogeneity as a possible moderator and causal attribution as a mediator to account for the moderating effect. The main proposition of this study is that when preference heterogeneity is perceived as high, a disagreement in ratings is attributed more to reviewers' different preferences than to unreliable product quality, which in turn prompts better quality evaluations of a product. Because disagreements mostly result from differences in reviewers' tastes or the low reliability of a product's quality (Mizerski 1982; Sen and Lerman 2007), a greater level of attribution to reviewer tastes can mitigate the negative effect of disagreement on product evaluations. Specifically, if consumers infer that reviewers' heterogeneous preferences result in subjectively different experiences and thereby highly diverse ratings, they would not disregard the overall quality of a product. However, if consumers infer that reviewers' preferences are quite homogeneous and thus the low reliability of the product quality contributes to such disagreements, they would discount the overall product quality.
Therefore, consumers would respond more favorably to disagreements in ratings when preference heterogeneity is perceived as high rather than low. This study furthermore extends this prediction to the various levels of average ratings. The heuristic-systematic processing model so far indicates that the engagement in effortful systematic processing occurs only when sufficient motivation is present (Hann et al. 2007; Maheswaran and Chaiken 1991; Martin and Davies 1998). One of the key factors affecting this motivation is the aspiration level of the decision maker. Only under conditions that meet or exceed his aspiration level does he tend to engage in systematic processing (Patzelt and Shepherd 2008; Stephanous and Sage 1987). Therefore, systematic causal attribution processing regarding ratings variance is likely more activated when the average rating is high enough to meet the aspiration level than when it is too low to meet it. Considering that the interaction between ratings variance and preference heterogeneity occurs through the mediation of causal attribution, this greater activation of causal attribution in high versus low average ratings would lead to more pronounced interaction between ratings variance and preference heterogeneity in high versus low average ratings. Overall, this study proposes that the interaction between ratings variance and preference heterogeneity is more pronounced when the average rating is high as compared to when it is low. Two laboratory studies lend support to these predictions. Study 1 reveals that participants exposed to a high-preference heterogeneity book title (i.e., a novel) attributed disagreement in ratings more to reviewers' tastes, and thereby more favorably evaluated books with such ratings, compared to those exposed to a low-preference heterogeneity title (i.e., an English listening practice book).
Study 2 then extended these findings to the various levels of average ratings and found that this greater preference for disagreement options under high preference heterogeneity is more pronounced when the average rating is high compared to when it is low. This study makes an important theoretical contribution to the online customer ratings literature by showing that preference heterogeneity serves as a key moderator of the effect of ratings variance on product evaluations and that causal attribution acts as a mediator of this moderation effect. A more comprehensive picture of the interplay among ratings variance, preference heterogeneity, and average ratings is also provided by revealing that the interaction between ratings variance and preference heterogeneity varies as a function of the average rating. In addition, this work provides some significant managerial implications for marketers in terms of how they manage word of mouth. Because a lack of consensus creates some uncertainty and anxiety over the given information, consumers experience a psychological burden regarding their choice of a product when ratings show disagreement. The results of this study offer a way to address this problem. By explicitly clarifying that there are many more differences in tastes among reviewers than expected, marketers can allow consumers to speculate that differing tastes of reviewers rather than an uncertain or poor product quality contribute to such conflicts in ratings. Thus, when fierce disagreements are observed in the WOM arena, marketers are advised to communicate to consumers that diverse, rather than uniform, tastes govern reviews and evaluations of products.
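The three ratings metrics at the center of this literature (volume, valence, and variance) are straightforward to compute; the sketch below, with hypothetical rating vectors, shows how two products can share volume and valence while differing only in variance, the quantity whose effect the study moderates.

```python
from statistics import mean, pvariance

def rating_metrics(ratings):
    """Volume, valence, and variance of a list of star ratings."""
    return {
        "volume": len(ratings),          # number of reviews
        "valence": mean(ratings),        # average rating
        "variance": pvariance(ratings),  # disagreement among reviewers
    }

# Hypothetical products: same volume and valence, different disagreement.
consensus = rating_metrics([4, 4, 4, 4, 4])   # no disagreement
polarized = rating_metrics([5, 5, 4, 3, 3])   # high disagreement
```

Under the study's proposition, the `polarized` product would be evaluated more favorably when consumers perceive reviewer preferences as heterogeneous.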


Olympic Advertisers Win Gold, Experience Stock Price Gains During and After the Games (오운선수작위엄고대언인영득금패(奥运选手作为广告代言人赢得金牌), 비새중화비새후적고표개격상양(比赛中和比赛后的股票价格上扬))

  • Tomovick, Chuck;Yelkur, Rama
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.1
    • /
    • pp.80-88
    • /
    • 2010
  • There has been considerable research examining the relationship between stockholder equity and various marketing strategies. These include studies linking stock price performance to advertising, customer service metrics, new product introductions, research and development, celebrity endorsers, brand perception, brand extensions, brand evaluation, company name changes, and sports sponsorships. Another facet of marketing investment that has received heightened scrutiny for its purported influence on stockholder equity is television advertising embedded within specific sporting events such as the Super Bowl. Research indicates that firms which advertise in Super Bowls experience stock price gains. Given this reported relationship between advertising investment and increased shareholder value, for both general and special events, it is surprising that relatively little research attention has been paid to investigating the relationship between advertising in the Olympic Games and its subsequent impact on stockholder equity. While attention has been directed at examining the effectiveness of sponsoring the Olympic Games, much less focus has been placed on the financial soundness of advertising during the telecasts of these Games. Notable exceptions include Peters (2008), Pfanner (2008), Saini (2008), and Keller Fay Group (2009). This paper presents a study of Olympic advertisers who ran TV ads on NBC in the American telecasts of the 2000, 2004, and 2008 Summer Olympic Games. Five hypotheses were tested: H1: The stock prices of firms which advertised on American telecasts of the 2008, 2004, and 2000 Olympics (referred to as O-Stocks) will outperform the S&P 500 during this same period of time (i.e., the Monday before the Games through to the Friday after the Games).
H2: O-Stocks will outperform the S&P 500 during the medium term, that is, for the period of the Monday before the Games through to the end of each Olympic calendar year (December 31st of 2000, 2004, and 2008, respectively). H3: O-Stocks will outperform the S&P 500 in the longer term, that is, for the period of the Monday before the Games through to the midpoint of the following years (June 30th of 2001, 2005, and 2009, respectively). H4: There will be no difference in the performance of these O-Stocks vs. the S&P 500 in the non-Olympic control periods (i.e., three months earlier for each of the Olympic years). H5: The annual revenue of firms which advertised on American telecasts of the 2008, 2004, and 2000 Olympics will be higher for those years than the revenue of those same firms in the years preceding those three Olympics, respectively. In this study, we recorded the stock prices of companies that advertised during the last three Summer Olympic Games (i.e., Beijing in 2008, Athens in 2004, and Sydney in 2000). We identified these advertisers using Google searches as well as with the help of the television network (i.e., NBC) that broadcast the Games. NBC held the American broadcast rights to all three Olympic Games studied. We used Internet sources to verify the parent companies of the brands that were advertised each year. Stock prices of these parent companies were found using Yahoo! Finance. Only companies that were publicly held and traded were used in the study. We identified changes in Olympic advertisers' stock prices over the four-week period from the Monday before through the Friday after the Games. In total, there were 117 advertisers on the telecasts broadcast in the U.S. for the 2008, 2004, and 2000 Olympics. Figure 1 provides a breakdown of those advertisers by industry sector.
Results indicate that the stock of the firms that advertised (O-Stocks) outperformed the S&P 500 during the period of interest and underperformed the S&P 500 during the earlier control periods. These same O-Stocks also outperformed the S&P 500 from the start of the Games through to the end of each Olympic year, and for six months beyond that. Price pressure linkage, signaling theory, high-involvement viewers, and corporate activation strategies are believed to contribute to these positive results. Implications for advertisers and researchers are discussed, as are study limitations and future research directions.
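The event-window comparison behind H1–H4 reduces to computing a holding-period return for each O-Stock and for the S&P 500 over the same window and examining the difference. The sketch below uses made-up prices purely for illustration; the actual study used Yahoo! Finance data for 117 advertisers.

```python
def holding_period_return(prices):
    """Simple return from the first to the last price in a window."""
    return prices[-1] / prices[0] - 1.0

# Hypothetical daily prices over the Monday-before to Friday-after window.
o_stock = [50.0, 51.2, 50.8, 52.5, 53.1]          # an Olympic advertiser
sp500 = [1400.0, 1405.0, 1398.0, 1410.0, 1412.0]  # benchmark index

# A positive excess return means the O-Stock outperformed the S&P 500
# over the Olympic window, as hypothesized in H1.
excess = holding_period_return(o_stock) - holding_period_return(sp500)
```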

A Novel Video Quality Degradation Monitoring Scheme Over an IPTV Service with Packet Loss (IPTV 서비스에서 패킷손실에 의한 비디오품질 열화 모니터링 방법)

  • Kwon, Jae-Cheol;Oh, Seoung-Jun;Suh, Chang-Ryul;Chin, Young-Min
    • Journal of Broadcast Engineering
    • /
    • v.14 no.5
    • /
    • pp.573-588
    • /
    • 2009
  • In this paper, we propose a novel video quality degradation monitoring scheme, titled VR-VQMS (Visual Rhythm based Video Quality Monitoring Scheme), for an IPTV service prone to packet losses during network transmission. The proposed scheme quantifies the amount of quality degradation due to packet losses and can be classified as an RR (reduced-reference) quality measurement scheme, exploiting as feature information the visual rhythm data of H.264-encoded video frames at a media server and of reconstructed frames at a set-top box. Two scenarios, on-line and off-line VR-VQMS, are proposed as practical solutions. We define the NPSNR (Networked Peak-to-peak Signal-to-Noise Ratio), modified from the well-known PSNR, as a new objective quality metric, along with several additional objective and subjective metrics based on it, to obtain statistics on the timing, duration, occurrence, and amount of quality degradation. Simulation results show that the proposed method closely approximates the results obtained from full 2D video frames and gives a good estimation of subjective quality (i.e., MOS, mean opinion score) rated by 10 test observers. We expect that the proposed scheme can serve as a practical solution for monitoring the video quality experienced by individual customers in a commercial IPTV service, and that it can be implemented as a small and light agent program running on a resource-limited set-top box.
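NPSNR is described as a modification of the standard PSNR, so the conventional full-reference PSNR computation it builds on can be sketched as follows (the exact NPSNR definition and visual-rhythm extraction are in the paper and are not reproduced here; the frame data below are hypothetical).

```python
import numpy as np

def psnr(reference, degraded, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two frames."""
    mse = np.mean((reference.astype(float) - degraded.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val**2 / mse)

# Hypothetical 8-bit frames: degradation modeled as a uniform error of 5.
ref = np.full((16, 16), 128, dtype=np.uint8)
deg = (ref + 5).astype(np.uint8)
quality_db = psnr(ref, deg)
```

In an RR scheme such as VR-VQMS, an analogous computation would be applied to the compact visual-rhythm feature rather than to whole frames, which is what makes deployment on a resource-limited set-top box feasible.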

Effects of Consumers' Perceived Service Convenience: Differences between Department Stores and General Super Markets (소매업태의 지각된 서비스 편의성이 서비스 성과에 미치는 영향: 백화점과 종합슈퍼마켓간 차이를 중심으로)

  • Kim, Mi-Jeong;Park, Chul-Ju
    • Journal of Distribution Science
    • /
    • v.13 no.2
    • /
    • pp.85-94
    • /
    • 2015
  • Purpose - This study attempts to examine the impacts of consumers' perceived service convenience of retailers on various service performance metrics such as service quality and customer satisfaction. It also tries to investigate differences in the importance of service convenience dimensions on service performance between a department store and a general super market. Research design, data, and methodology - Four hypotheses were proposed and tested in this study. Two hypotheses concerned the causal relationships between service convenience dimensions and service performances (service quality and customer satisfaction). The other two hypotheses compared the effects of convenience dimensions on service quality and customer satisfaction between department stores and general super markets. To test the hypotheses, three department store chains (Hyundai, Lotte, and Shinsegae) and three general super market chains (E-mart, Homeplus, and Lotte Mart) were involved. Overall, 510 usable responses were used. The data were analyzed using regression analysis. Results - The results largely support the hypothesized relationships of the proposed model. They show that access convenience, transaction convenience, benefit convenience, and post-benefit convenience have positive influences on service quality, whereas decision convenience, access convenience, transaction convenience, benefit convenience, and post-benefit convenience have positive effects on customer satisfaction. Furthermore, the results show that there are differences between department stores and general super markets in the effects of benefit convenience and post-benefit convenience on service quality, as well as in the effects of transaction convenience and post-benefit convenience on customer satisfaction. Conclusions - The concept of service convenience is important in retail environments, but little is known about this topic in the retail literature.
In particular, while service convenience dimensions have different impacts on service performance in distinct retail environments, there has been little investigation or comparison between retail types as regards service convenience. This study is the first to test the differences between distinct retail types (department stores and general super markets) in the service convenience-service performance links. Managerially, the findings of this study suggest that the service convenience management of retailers is an important part of successful service performance management. Because it is most important that both department stores and general super markets enhance benefit convenience to improve service performance, managers of both store types need to invest their resources to reduce consumers' perceived time and effort expenditures in experiencing the retailer's core benefits. Therefore, the results of this study suggest that retail stores should spend human and financial resources to enhance customer perceptions of service convenience, while also considering what constitutes the service outcome in the consumer's mind. Furthermore, the findings suggest that managers need to use different service convenience management tactics in department stores and general super markets. Specifically, managers in general super markets should pay more attention to benefit convenience and transaction convenience to achieve better service performance, whereas managers in department stores should concentrate on post-benefit convenience to create positive customer evaluations.
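The hypothesis tests described above rest on ordinary regression of service-performance scores on the convenience dimensions. The sketch below illustrates that analysis on simulated survey data; the coefficients, scales, and dimension names are invented for demonstration, not the study's estimates.

```python
import numpy as np

# Simulated 7-point survey data: three convenience dimensions
# (e.g., access, transaction, benefit) predicting service quality.
rng = np.random.default_rng(1)
n = 200
X = rng.uniform(1, 7, size=(n, 3))
true_beta = np.array([0.2, 0.3, 0.4])            # assumed true effects
y = 1.0 + X @ true_beta + rng.normal(0, 0.3, n)  # service quality score

# Ordinary least squares with an intercept, via numpy's lstsq.
X1 = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
intercept, effects = coef[0], coef[1:]
```

Comparing the estimated `effects` across two such fits (one per retail type) mirrors the between-store-type comparisons reported in the study.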

Detection of Clavibacter michiganensis subsp. michiganensis Assisted by Micro-Raman Spectroscopy under Laboratory Conditions

  • Perez, Moises Roberto Vallejo;Contreras, Hugo Ricardo Navarro;Herrera, Jesus A. Sosa;Avila, Jose Pablo Lara;Tobias, Hugo Magdaleno Ramirez;Martinez, Fernando Diaz-Barriga;Ramirez, Rogelio Flores;Vazquez, Angel Gabriel Rodriguez
    • The Plant Pathology Journal
    • /
    • v.34 no.5
    • /
    • pp.381-392
    • /
    • 2018
  • Clavibacter michiganensis subsp. michiganensis (Cmm) is a quarantine-worthy pest in México. The implementation and validation of new technologies is necessary to reduce the time for bacterial detection under laboratory conditions, and Raman spectroscopy is a promising technology that has all of the features needed to characterize and identify bacteria. Under controlled conditions, a contagion process was induced with Cmm and the disease epidemiology was monitored. The micro-Raman spectroscopy (532 nm λ laser) technique was evaluated for its performance in assisting Cmm detection through its characteristic Raman spectrum fingerprint. Our experiment was conducted with tomato plants in a completely randomized block experimental design (13 plants × 4 rows). The Cmm infection was confirmed by 16S rDNA, and plants showed symptoms from 48 to 72 h after inoculation; the evolution of the incidence and severity in the plant population varied over time and kept an aggregated spatial pattern. The contagion process reached 79% just 24 days after the epidemic was induced. Micro-Raman spectroscopy proved its speed, efficiency, and usefulness as a non-destructive method for the preliminary detection of Cmm. Carotenoid-specific bands at 1146 and 1510 cm⁻¹ were the distinguishable markers. Chemometric analyses showed the best performance from PCA-LDA supervised classification algorithms applied over the Raman spectrum data, with 100% performance in classifier metrics (sensitivity, specificity, accuracy, and negative and positive predictive value), which allowed us to differentiate Cmm from other endophytic bacteria (Bacillus and Pantoea). The unsupervised K-means algorithm showed good performance (100, 96, 98, 91, and 100%, respectively).
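The PCA-LDA classification step can be reproduced in outline with scikit-learn; the spectra below are synthetic stand-ins (random baselines with extra intensity at channels playing the role of the 1146/1510 cm⁻¹ carotenoid bands), not real Raman measurements.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

def fake_spectra(n, has_marker):
    """600-channel synthetic spectra; marker bands raised for the Cmm-like class."""
    s = rng.normal(0.0, 1.0, (n, 600))
    if has_marker:
        s[:, 280:300] += 3.0  # stand-in for the carotenoid marker bands
    return s

X = np.vstack([fake_spectra(40, True), fake_spectra(40, False)])
y = np.array([1] * 40 + [0] * 40)  # 1 = Cmm-like, 0 = other endophyte

# Dimensionality reduction followed by a supervised linear classifier,
# mirroring the PCA-LDA chemometric pipeline described in the abstract.
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
clf.fit(X, y)
train_acc = clf.score(X, y)
```

With real spectra, the same pipeline would be evaluated with held-out data and the sensitivity/specificity metrics reported in the study, rather than training accuracy.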

User Satisfaction of Mobile Convergence Device: The Expectation and Disconfirmation Approach (모바일 복합 단말기 사용자 만족: 기대-불일치 접근)

  • Lee, Seung-Chang;Suh, Eung-Kyo
    • Journal of Distribution Science
    • /
    • v.10 no.11
    • /
    • pp.89-99
    • /
    • 2012
  • Purpose - Mobile devices, especially mobile terminals capable of telecommunication and wireless connectivity, are leading the advancements in consumer electronics. Digital convergence drives the functions of various devices, such as cellular phones, MP3 players, personal digital assistants, and gaming, into a single device. This trend is expected to continue, and applications such as digital audio and video streaming (including personalized content delivery mechanisms) will soon be on a handheld device. As customers want mobile convergence devices, manufacturers are driving new initiatives in the emerging mobile device market. Given the roles played by device design and service content in user satisfaction with a mobile convergence device, this study focuses on identifying and measuring the constructs for the process by which user satisfaction is achieved. This study synthesizes the expectation-disconfirmation paradigm with empirical theories of user satisfaction. Device and service levels are separated, and nine key constructs for user satisfaction with mobile convergence devices are proposed. Insight into this process could help web-based businesses to improve user satisfaction, thus enhancing the effectiveness of e-commerce for sellers and buyers. Research design, data, and methodology - This study draws on three users of mobile convergence devices as examples. To test the research model and hypotheses, survey questionnaires were sent to 607 mobile device users. Mobile device users were initially identified from several members, and subjects were randomly drawn. Data from 577 survey responses were finally analyzed. The unit of measurement and analysis in this research is the individual user. Results - The measurements for the constructs were developed and tested in a two-phase study. In the first phase, the device and service dimensions were identified, and instruments for measuring them were developed and tested.
In the second phase, using the salient dimensions of the device and service as the formulating first-order factors, instruments were developed and empirically tested to measure satisfaction with the device and service. In measuring satisfaction with mobile convergence devices, the critical tasks are to identify the key constructs of such user satisfaction and to develop validated instruments to measure them. Hence, the results of this study have immediate implications for businesses and for research in user satisfaction with mobile convergence devices. Conclusions - This study provides reliable instruments for operationalizing key constructs in the analysis of user satisfaction with mobile convergence devices within the expectation-disconfirmation paradigm. Hence, convergence device makers will be able to examine whether their products meet customers' expectations by examining customers' expectations and disconfirmation with respect to both the device and service aspects of a mobile convergence device. Moreover, the introduction of the expectation and disconfirmation constructs brings the marketing aspect of convergence devices into focus for such retailers, an aspect crucial to the effective design of websites for online businesses. In addition, this study provides the metrics required to initiate future studies on user satisfaction with mobile convergence devices.


A Systematic Approach Of Construction Management Based On Last Planner System And Its Implementation In The Construction Industry

  • Hussain, SM Abdul Mannan;Sekhar, Dr.T.Seshadri;Fatima, Asra
    • Journal of Construction Engineering and Project Management
    • /
    • v.5 no.2
    • /
    • pp.11-15
    • /
    • 2015
  • The Last Planner System (LPS) has been implemented on construction projects to increase work flow reliability, a precondition for project performance against productivity and progress targets. The LPS encompasses four tiers of planning processes: master scheduling, phase scheduling, lookahead planning, and commitment/weekly work planning. This research highlights deficiencies in the current implementation of the LPS, including poor lookahead planning, which results in poor linkage between weekly work plans and the master schedule. This poor linkage undermines the ability of the weekly work planning process to select for execution tasks that are critical to project success. As a result, percent plan complete (PPC) becomes a weak indicator of project progress. The purpose of this research is to improve lookahead planning (the bridge between weekly work planning and master scheduling), improve PPC, and improve the selection of tasks that are critical to project success by strengthening the link between Should, Can, Will, and Did (components of the LPS), thereby rendering PPC a better indicator of project progress. The research employs the case study research method to describe deficiencies in the current implementation of the LPS and to suggest guidelines for a better application of the LPS in general and lookahead planning in particular. It then introduces an analytical simulation model to analyze the lookahead planning process. This is done by examining the impact on PPC of increasing two lookahead planning performance metrics: tasks anticipated (TA) and tasks made ready (TMR). Finally, the research investigates the importance of the lookahead planning functions: identification and removal of constraints, task breakdown, and operations design. The research findings confirm the positive impact of improving lookahead planning (i.e., TA and TMR) on PPC.
They also recognize the need to perform lookahead planning differently for three types of work involving different levels of uncertainty: stable work, medium-uncertainty work, and highly emergent work. The research confirms the LPS rules for practice, specifically the need to plan in greater detail as the time to perform the work approaches. It highlights the role of the LPS as a production system that incorporates deliberate planning (predetermined and optimized) and situated planning (flexible and adaptive). Finally, the research presents recommendations for production planning improvements in three areas: process-related (suggesting guidelines for practice), technical (highlighting issues with current software programs and advocating the inclusion of collaborative planning capability), and organizational (suggesting transitional steps when applying the LPS).
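The metrics driving the analysis are simple ratios; the sketch below illustrates PPC and TA on a hypothetical week of tasks (task names and counts are invented for demonstration).

```python
def percent_plan_complete(completed, planned):
    """PPC: share of weekly-plan assignments actually completed (%)."""
    return 100.0 * completed / planned if planned else 0.0

def tasks_anticipated(weekly_tasks, lookahead_tasks):
    """TA: fraction of this week's tasks that appeared in the lookahead plan."""
    hits = sum(1 for t in weekly_tasks if t in lookahead_tasks)
    return hits / len(weekly_tasks)

# Hypothetical week: four planned tasks, three completed, three anticipated.
lookahead = {"excavate", "form footings", "pour footings", "strip forms"}
weekly = ["excavate", "form footings", "pour footings", "layout walls"]

ppc = percent_plan_complete(3, len(weekly))  # 75.0
ta = tasks_anticipated(weekly, lookahead)    # 0.75
```

TMR (tasks made ready) would be computed analogously as the fraction of lookahead tasks whose constraints were removed in time; the research's simulation varies TA and TMR and observes the resulting effect on PPC.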

An Experiment in Refactoring an Object-Oriented CASE Tool (객체 지향 CASE 도구에 대한 재구조화 실험)

  • Jo, Jang-U;Kim, Tae-Gyun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.4
    • /
    • pp.932-940
    • /
    • 1999
  • Object-oriented programming is often touted as promoting software reuse. However, it is recognized that object-oriented software often needs to be restructured before it can be reused. Refactoring is the process of changing the software structure to make it more reusable, easier to maintain, and easier to enhance with new functionality. This paper describes experience gained and lessons learned from restructuring OODesigner, a Computer Aided Software Engineering (CASE) tool that supports the Object Modeling Technique (OMT). The tool supports a wide range of features, such as constructing OMT object models, managing an information repository, documenting class resources, automatically generating C++ and Java code, reverse engineering C++ and Java code, searching and reusing classes in the corresponding repository, and collecting metrics data. Although version 1.x was developed using OMT (i.e., the tool itself was designed using OMT) and C++, we recognized a potential maintenance problem originating from the ill-designed class architecture. Thus, this version was totally restructured, resulting in a new version that is easier to maintain than the old one. In this paper, we briefly describe the restructuring process, emphasizing the fact that the refactoring of the tool was conducted using the tool itself. We then discuss lessons learned from this process and present some comparative measurements of the two versions.


Comparison of Opened Rates and Quality Characteristics of Frozen Baby-clam In-shell Tapes philippinarum Prepared by Different Processing Method (제조방법을 달리하여 제조한 껍질붙은 냉동바지락(Tapes philippinarum)의 껍질 개패율 및 품질특성 비교)

  • Park, Si-Young;Kang, Kyung-Hun;Lee, Jae-Dong;Yoon, Moon-Joo;Kang, Young-Mi;Seoung, Tae-Jong;Kweon, Su-Hyun;Choo, Yi-Kwon;Kim, Jeong-Gyun
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.49 no.6
    • /
    • pp.743-749
    • /
    • 2016
  • We compared two different processing methods for preparing high-quality frozen in-shell baby clam products. In the first method, sand and mud were removed from the clams, then they were vacuum packed in polyethylene film, boiled at 97°C for 6 min, and snap frozen in a cold air blast freezer (sample 1). The second processing method was similar, except that the boiling process was excluded (sample 2). Both frozen products were boiled for 4 min, and then shucked and minced. Various quality metrics, such as the shell opening rates, chemical composition, pH, volatile basic nitrogen (VBN), salinity, thiobarbituric acid (TBA), amino-N, total amino acids, and free amino acids, were measured, and a sensory evaluation was conducted. The shell opening rates of sample 1 and sample 2 were 98.3% and 4.67%, respectively. The proximate composition of sample 1 and sample 2 was 75.2% and 78.7% moisture, 19.7% and 16.2% crude protein, 2.45% and 2.2% crude lipid, 2.8% and 2.1% ash, and 2.1% and 1.9% salinity, respectively. The L, a, b, and ΔE values were similar: 48.6 and 49.2, 3.9 and 3.9, 15.7 and 15.5, and 50.7 and 50.1 for sample 1 and sample 2, respectively. The sensory evaluation score of sample 1 was higher than that of sample 2. Sample 1 was deemed superior to sample 2; therefore, we determined that the boiling process is needed for manufacturing high-quality frozen clam products.