• Title/Summary/Keyword: Performance Planning


Is corporate rebranding a double-edged sword? Consumers' ambivalence towards corporate rebranding of familiar brands

  • Phang, Grace Ing
    • Asia Marketing Journal
    • /
    • v.15 no.4
    • /
    • pp.131-159
    • /
    • 2014
  • Corporate rebranding has been evident in qualitative corporate rebranding studies as an imposed organizational change that induces mixed reactions and ambivalent attitudes among consumers. Corporate rebranding of established and familiar corporate brands leads to more ambivalent attitudes, as these companies represent larger targets for disparaging information. Consumers are found to hold both positive and negative reactions toward companies and brands they are familiar with. Nevertheless, the imposed-change assumption and ambivalent attitudes toward corporate rebranding in particular have never been widely explored in quantitative corporate rebranding studies. This paper aims to provide a comprehensive empirical examination of the ambivalence towards rebranding-rebranded brand attitude-purchase intention relationships. The author proposes that corporate rebranding of familiar corporate brands is a double-edged sword that not only raises expectations for better performance, but also induces conflicted and ambivalent attitudes among consumers. These ambivalent attitudes are influenced by both parent brand-related and general attitude factors, which further affect rebranded brand attitude and purchase intention. A total of 156 usable questionnaires were collected from Malaysian working adults, and two established Malaysian airfreight operators were utilized as the focal parent brands. The study found a significant impact of prior parent brand attitudes on ambivalence towards rebranding (ATR). The parent brand attitudes served as anchors influencing how new information was processed (Mazaheri et al., 2011; Sherif & Hovland, 1961) and were closely related to behavioral intention (Prislin & Ouellette, 1996). The ambivalent attitudes experienced were higher when individuals held both positive and negative reactions toward the parent brands. 
Consumers also held higher ambivalent attitudes when they preferred one of the parent brands while disliking the other. The study also found significant relationships between the lead brand and the rebranded brand attitude, and between the partner brand and ATR. The familiar but controversial partner brand contributed significantly to the ambivalent attitudes experienced, while the more established lead brand had a significant impact on the rebranded brand attitude. The lead and partner brands, though both familiar, represented different meanings to consumers. The author attributed these results to prior parent brand attitudes, skepticism, and general ambivalence toward corporate rebranding. Both general attitude factors (i.e., skepticism and general ambivalence towards rebranding) were found to have significant positive impacts on ATR. Skeptical individuals questioned the possibility of a successful rebranding (Chang, 2011) and were more careful in their evaluations of 'too good to be true' or 'made in heaven' pairs of companies. The embedded general ambivalent attitudes that people held toward rebranding could be triggered from the associative network by an ambiguous situation (Prislin & Ouellette, 1996). In addition, an ambivalent rebranded brand attitude was found to lower purchase intention, supporting the studies of Hanze (2001), Lavine (2001), and van Harreveld et al. (2009). Ambivalent individuals were found to prefer delaying decision making by choosing around the mid-range points of the 'willingness to buy' scale. The study provides several marketing implications. Ambivalence management proves important to corporate rebranding in minimizing the ambivalent attitudes experienced. This can be done by carefully managing the parent brand-related and general attitude factors. 
Highly ambivalent individuals are less confident in their own conflicted attitudes and are motivated to get rid of the psychological discomfort these conflicted attitudes cause (Bell & Esses, 2002; Lau-Gesk, 2005; van Harreveld et al., 2009). They tend to process information more deeply (Jonas et al., 1997; Maio et al., 2000; Wood et al., 1985) and pay more attention to messages that provide convincing arguments. Providing strong, favorable, and convincing messages is hence effective in alleviating consumers' ambivalent attitudes. In addition, the brand name heuristic can be utilized, because the rebranding strategy sends an important signal to consumers about the changes that have happened or are going to happen. Ambivalent individuals will pay attention to both the brand name heuristic and the rebranding message in their effort to alleviate the psychological discomfort caused by ambivalent attitudes. The findings also provide insights for Malaysian airline operators toward better planning and implementation of corporate rebranding exercises.


Changes in Soil Physicochemical Properties Over 11 Years in Larix kaempferi Stands Planted in Larix kaempferi and Pinus rigida Clear-Cut Sites (낙엽송과 리기다소나무 벌채지에 조성된 낙엽송 임분의 11년간 토양 물리·화학적 특성 변화)

  • Nam Jin Noh;Seung-hyun Han;Sang-tae Lee;Min Seok Cho
    • Journal of Korean Society of Forest Science
    • /
    • v.112 no.4
    • /
    • pp.502-514
    • /
    • 2023
  • This study was conducted to understand the long-term changes in soil physicochemical properties and seedling growth in Larix kaempferi (larch) stands planted in clear-cut larch and Pinus rigida (pine) forest soils over an 11-year period after reforestation. Two-year-old bare-root larch seedlings were planted in 2009-2010 at a density of 3,000 seedlings ha⁻¹ in clear-cut areas where larch (Chuncheon and Gimcheon) and pine (Wonju and Gapyeong) stands had been harvested. We analyzed the physicochemical properties of the mineral soils sampled at 0-20 cm soil depth in the planting year and in the 3rd, 7th, and 11th years after planting, and we measured seedling height and root collar diameter in those years. We found significant differences in soil silt and clay content, total carbon and nitrogen concentrations, available phosphorus, and cation exchange capacity between the two stand types; however, seedling growth did not differ. The mineral soil was more fertile in Gimcheon than in the other plantations, while early seedling growth was greatest in Gapyeong. Seedling height and diameter at 11 years after planting were largest in Wonju (1,028 trees ha⁻¹) and Chuncheon (1,359 trees ha⁻¹) due to decreases in stand density after tending of the young trees. The soil properties in all plantations were similar 11 years after larch planting. In particular, the high sand content and high available phosphorus levels (caused by soil disturbance during clear-cutting and planting) showed marked decreases, potentially due to soil organic matter input and nutrient uptake, respectively. Thus, early reforestation after clear-cutting could limit nutrient leaching and contribute to soil stabilization. These results provide useful information for nutrient management of larch plantations.

Analysis of research trends for utilization of P-MFC as an energy source for nature-based solutions - Focusing on co-occurring word analysis using VOSviewer - (자연기반해법의 에너지원으로서 P-MFC 활용을 위한 연구경향 분석 - VOSviewer를 활용한 동시 출현단어 분석 중심으로 -)

  • Mi-Li Kwon;Gwon-Soo Bahn
    • Journal of Wetlands Research
    • /
    • v.26 no.1
    • /
    • pp.41-50
    • /
    • 2024
  • Plant microbial fuel cells (P-MFCs) are biomass-based energy technologies that generate electricity from plants and their root microbial communities and are well suited to nature-based solutions that consider environmental sustainability. In order to develop P-MFC technology suitable for domestic waterfront spaces, international research trends must first be analyzed. Therefore, in this study, 700 P-MFC-related research papers were retrieved from Web of Science, core keywords were derived using the word-analysis program VOSviewer, and research trends were analyzed. First, P-MFC-related research has been on the rise since 1998, especially since the mid-to-late 2010s. The countries with the most published papers were China, the U.S., and India. Since the 2010s, interest in P-MFCs has increased, and the number of publications from the Philippines, Ukraine, and Mexico, which have abundant waterfront spaces and wetland environments, is growing. Second, regarding research trends by period, studies from 1998-2015 mainly verified microbial fuel cell performance in different environments. Research from 2016-2020 focused on the specific conditions of microbial fuel cell use and on the structure and development of P-MFCs. From 2021 to 2023, research addressed constraints on P-MFC development and ways to improve efficiency. The international research trends identified through this study can serve as useful data for developing technologies suitable for domestic waterfront spaces in the future. Beyond this study, further research is needed on research trends and levels in subsectors, and to develop and revitalize P-MFC technologies in Korea, research on field applicability should be expanded and policies and systems improved.
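The co-word mapping step described above — counting how often keywords appear together across papers and ranking the strongest links — can be sketched in a few lines. The keyword lists below are illustrative placeholders, not the study's actual Web of Science corpus:

```python
from itertools import combinations
from collections import Counter

# Illustrative keyword lists (not the study's actual corpus).
papers = [
    ["P-MFC", "bioelectricity", "wetland"],
    ["P-MFC", "wetland", "constructed wetland"],
    ["P-MFC", "bioelectricity", "sediment"],
]

# Count co-occurrences of keyword pairs within each paper.
cooc = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1

# Strongest links first, as a VOSviewer-style co-word map would rank them.
for pair, n in cooc.most_common(3):
    print(pair, n)
```

VOSviewer additionally normalizes these raw counts (e.g., by association strength) before laying out the map; the raw pair counts above are only the first step.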

Characteristics and Implications of Sports Content Business of Big Tech Platform Companies : Focusing on Amazon.com (빅테크 플랫폼 기업의 스포츠콘텐츠 사업의 특징과 시사점 : 아마존을 중심으로)

  • Shin, Jae-hyoo
    • Journal of Venture Innovation
    • /
    • v.7 no.1
    • /
    • pp.1-15
    • /
    • 2024
  • This study aims to elucidate the characteristics of big tech platform companies' sports content business in an environment of rapid digital transformation. Specifically, this study examines the market structure of big tech platform companies with a focus on Amazon, reveals the role of sports content within this structure through an analysis of Amazon's sports marketing business, and provides an outlook on the sports content business of big tech platform companies. Based on two-sided market platform business models, big tech platform companies incorporate sports content as a strategy to enhance the value of their platforms. Sports content is thus used as a tool to enhance platform value and consolidate a monopoly position, maximizing profits by increasing the synergy of platform ecosystems such as infrastructure. Amazon acquires popular live sports broadcasting rights on a continental or national basis and supplies them to its platforms, which not only increases new customers and purchasing effects, but also allows Amazon to provide IT solution services to sports organizations and teams while planning and supplying various promotional contents, thus creating synergy across Amazon's platforms, including its advertising business. Amazon also expands its business opportunities and increases its overall value by supplying live sports content to Amazon Prime Video and Amazon Prime, providing technical services to various stakeholders through Amazon Web Services, and offering Amazon Marketing Cloud services for analyzing and predicting advertisers' advertising and marketing performance. This gives rise to a new paradigm in the sports marketing business in the digital era, stemming from the difference in market structure between big tech companies based on two-sided market platforms and legacy global companies based on one-sided markets. 
The core of this new model is the development of diverse content based on live sports streaming rights, and sports content marketing will become a major field of sports marketing alongside traditional broadcasting rights and sponsorship. Big tech global platform companies such as Amazon, Apple, and Google have the potential to become new global sports marketing companies, and current sports marketing and advertising companies, as well as teams and leagues, face both crises and opportunities.

An Analysis of Web Services in the Legal Works of the Metropolitan Representative Library (광역대표도서관 법정업무의 웹서비스 분석)

  • Seon-Kyung Oh
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.58 no.2
    • /
    • pp.177-198
    • /
    • 2024
  • Article 22(1) of the Library Act, completely revised in December 2006, stipulated that regional representative libraries are statutory organizations, and Article 25(1) of the Library Act, revised again in late 2021, renamed them metropolitan representative libraries and expanded their duties. The reason cities and provinces are required to designate or establish and operate metropolitan representative libraries is that, in addition to their role as public libraries for public information use, cultural activities, and lifelong learning as stipulated in Article 23 of the Act, they are also responsible for the legal works of metropolitan representative libraries as stipulated in Article 26, and they lead the development of libraries and knowledge culture by serving as policy libraries, comprehensive knowledge information centers, support and cooperation centers, research centers, and joint preservation libraries for all public libraries in their city or province. It is therefore necessary to analyze and diagnose whether the metropolitan representative libraries have faithfully fulfilled their legal works over the past 15 years (2009-2023), and whether they properly provide the results of their statutory planning and implementation on their websites to meet the digital and mobile era. Accordingly, this study investigated and analyzed the performance of the metropolitan representative libraries over the last two years based on their current statutory tasks, evaluated the extent to which they provide these through their websites, and suggested complementary measures to strengthen their web services. 
As a result, the web services for the legal works that metropolitan representative libraries should perform were found to be quite insufficient and inadequate, so complementary measures were suggested, such as building a section for legal works on the homepage, enhancing accessibility and visibility through an independent website, providing various policy information and web services (portal search, inter-library loan, one-to-one consultation, joint DB construction, data transfer and preservation, etc.), and ensuring digital accessibility of knowledge information for the vulnerable.

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations in future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. 
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. 
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The result of SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. 
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not provide any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. 
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. 
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. 
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
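The SELINK adjustment and the %RMSE evaluation described above reduce to simple ratio arithmetic. The sketch below uses made-up link volumes, zone productions, and zone-link incidence (none of these numbers come from the study): the link adjustment factor is the ground count divided by the assigned volume, it is applied to the trip ends of the zones using that link, and %RMSE compares assigned volumes to ground counts:

```python
import math

# Illustrative data: assigned volumes vs. ground counts on two selected links.
assigned = {"link_a": 900.0, "link_b": 1500.0}
ground_count = {"link_a": 1080.0, "link_b": 1350.0}

# Link adjustment factor = ground count / total assigned volume.
factors = {k: ground_count[k] / assigned[k] for k in assigned}

# Apply each factor to the productions of zones whose trips use that link
# (the zone-to-link incidence here is invented for the example).
productions = {"zone_1": 500.0, "zone_2": 800.0}
uses_link = {"zone_1": "link_a", "zone_2": "link_b"}
adjusted = {z: p * factors[uses_link[z]] for z, p in productions.items()}

# %RMSE between assigned volumes and ground counts, as in the screenline checks.
n = len(assigned)
rmse = math.sqrt(sum((assigned[k] - ground_count[k]) ** 2 for k in assigned) / n)
pct_rmse = 100.0 * rmse / (sum(ground_count.values()) / n)
print(factors, adjusted, round(pct_rmse, 1))
```

In the study this loop is iterated (up to four repetitions) and the GM is re-run between adjustments; the sketch shows only a single pass.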


Risk Factor Analysis for Preventing Foodborne Illness in Restaurants and the Development of Food Safety Training Materials (레스토랑 식중독 예방을 위한 위해 요소 규명 및 위생교육 매체 개발)

  • Park, Sung-Hee;Noh, Jae-Min;Chang, Hye-Ja;Kang, Young-Jae;Kwak, Tong-Kyung
    • Korean journal of food and cookery science
    • /
    • v.23 no.5
    • /
    • pp.589-600
    • /
    • 2007
  • Recently, with the rapid expansion of franchise restaurants, ensuring food safety has become essential for restaurant growth. Consequently, the need for food safety training and related materials is in increasing demand. In this study, we identified potentially hazardous risk factors for ensuring food safety in restaurants through a food safety monitoring tool, and developed training materials for restaurant employees based on the results. The surveyed restaurants, consisting of 6 Korean restaurants and 1 Japanese restaurant, were located in Seoul. Their average check was 15,500 won, ranging from 9,000 to 23,000 won. Their total space ranged from 297.5 to 1,322.4 m², and kitchen space accounted for 4.4 to 30 percent of the total area. The mean score for food safety management performance was 57 out of 100 points, with a range of 51 to 73 points. In the risk factor analysis, the most frequently cited sanitary violations involved handwashing methods/handwashing facility supplies (7.5%), receiving activities (7.5%), checking and recording of frozen/refrigerated food temperatures (0%), holding foods off the floor (0%), washing of fruits and vegetables (42%), planning and supervising facility cleaning and maintenance programs (50%), pest control (13%), and toilets equipped/cleaned (13%). Based on these results, the main points addressed in the hygiene training of restaurant employees comprised 4 principles and 8 concepts. The four principles were personal hygiene, prevention of food contamination, time/temperature control, and refrigerated storage. The eight concepts were: (1) personal hygiene and cleanliness with proper handwashing, (2) approved food sources and receiving management, (3) refrigerator and freezer control, (4) storage management, (5) labeling, (6) prevention of food contamination, (7) cooking and reheating control, and (8) cleaning, sanitation, and plumbing control. 
Finally, a hygiene training manual and poster leaflets were developed as food safety training materials for restaurant employees.

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of research has been conducted to improve firms' short-term performance and to enhance firms' long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. Discovery of promising technologies depends on how a firm evaluates the value of technologies, and thus many evaluation methods have been proposed. Approaches based on experts' opinions have been widely accepted for predicting the value of technologies. Whereas this approach provides in-depth analysis and ensures the validity of analysis results, it is usually cost- and time-ineffective and is limited to qualitative evaluation. Considerable studies attempt to forecast the value of technology by using patent information to overcome the limitations of the experts' opinion based approach. Patent based technology evaluation has served as a valuable assessment approach for technological forecasting because a patent contains a full and practical description of a technology with uniform structure. Furthermore, it provides information that is not divulged in any other source. Although the patent information based approach has contributed to our understanding of the prediction of promising technologies, it has some limitations, because prediction has been made based on past patent information and the interpretations of patent analyses are not consistent. In order to fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach and an artificial intelligence method. The methodology consists of three modules: evaluation of the promise of technologies, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, the promise of technologies is evaluated from three different and complementary dimensions: impact, fusion, and diffusion. 
The impact of technologies refers to their influence on future technology development and improvement, and is also clearly associated with their monetary value. The fusion of technologies denotes the extent to which a technology fuses different technologies, and represents the breadth of search underlying the technology. The fusion of technologies can be calculated based on technology or patent, thus this study measures two types of fusion index: fusion index per technology and fusion index per patent. Finally, the diffusion of technologies denotes their degree of applicability across scientific and technological fields. In the same vein, a diffusion index per technology and a diffusion index per patent are considered respectively. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at different times (e.g., t-n, t-n-1, t-n-2, …) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, this study recommends final promising technologies based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promising index for each technology. The applicability of the proposed methodology is tested by using U.S. patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error for prediction produced by the proposed methodology is lower than that produced by multiple regression analysis in the cases of the fusion indexes. However, for the remaining indexes, the mean absolute error of the proposed methodology is slightly higher than that of multiple regression analysis. 
These unexpected results may be explained, in part, by the small number of patents: since this study only uses patent data in class G06F, the sample of patents is relatively small, leading to incomplete learning for the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technology. This study extends existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial neural network. It helps managers engaged in technology development planning, and policy makers implementing technology policy, by providing a quantitative prediction methodology. In addition, this study could help other researchers by providing a deeper understanding of the complex field of technological forecasting.
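The prediction and recommendation modules described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patent index histories are synthetic, and the network size, learning rate, and AHP weights are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history of the five indexes (impact, fusion/technology,
# fusion/patent, diffusion/technology, diffusion/patent) for 40
# hypothetical technologies over 12 time periods.
T, n_tech, n_idx = 12, 40, 5
history = rng.random((n_tech, T, n_idx))

lags = 3
# Inputs: index values at the three previous periods; targets: values at t.
X = history[:, T - 1 - lags:T - 1, :].reshape(n_tech, lags * n_idx)
y = history[:, T - 1, :]

# One-hidden-layer network trained with plain backpropagation.
h, lr = 16, 0.05
W1 = rng.normal(0, 0.1, (lags * n_idx, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, n_idx));        b2 = np.zeros(n_idx)

def forward(X):
    z = np.tanh(X @ W1 + b1)        # hidden activations
    return z, z @ W2 + b2           # predicted index values

losses = []
for _ in range(500):
    z, pred = forward(X)
    err = pred - y                  # gradient of 0.5 * squared error
    losses.append(float(np.mean(err ** 2)))
    gW2 = z.T @ err / n_tech; gb2 = err.mean(0)
    dz = (err @ W2.T) * (1 - z ** 2)   # backpropagate through tanh
    gW1 = X.T @ dz / n_tech; gb1 = dz.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# AHP-style relative importances (hypothetical) combine the predicted
# indexes into one "promise" score; top-ranked technologies are recommended.
ahp_weights = np.array([0.35, 0.25, 0.10, 0.20, 0.10])
_, pred = forward(X)
promise = pred @ ahp_weights
top3 = np.argsort(promise)[::-1][:3]
```

The AHP weights would in practice come from pairwise comparisons by experts; here they are fixed numbers chosen only to show the final aggregation step.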

Development of an Offline Based Internal Organ Motion Verification System during Treatment Using Sequential Cine EPID Images (연속촬영 전자조사 문 영상을 이용한 오프라인 기반 치료 중 내부 장기 움직임 확인 시스템의 개발)

  • Ju, Sang-Gyu;Hong, Chae-Seon;Huh, Woong;Kim, Min-Kyu;Han, Young-Yih;Shin, Eun-Hyuk;Shin, Jung-Suk;Kim, Jing-Sung;Park, Hee-Chul;Ahn, Sung-Hwan;Lim, Do-Hoon;Choi, Doo-Ho
    • Progress in Medical Physics
    • /
    • v.23 no.2
    • /
    • pp.91-98
    • /
    • 2012
  • Verification of internal organ motion during treatment, and its feedback, is essential for accurate dose delivery to a moving target. We developed an offline internal organ motion verification system (IMVS) using cine EPID images and evaluated its accuracy and availability through a phantom study. For verification of organ motion using live cine EPID images, our self-developed analysis software employs a pattern matching algorithm based on an internal surrogate that is clearly distinguishable and represents organ motion in the treatment field, such as the diaphragm. For the system performance test, we developed a linear motion phantom consisting of a human-body-shaped phantom with a fake tumor in the lung, a linear motion cart, and control software. The phantom was operated with a motion of 2 cm at 4 sec per cycle, and cine EPID images were obtained at rates of 3.3 and 6.6 frames per sec (2 MU/frame) with $1,024{\times}768$ pixels on a linear accelerator (10 MVX). Organ motion of the target was tracked using the self-developed analysis software. To evaluate accuracy, the results were compared with the planned data of the motion phantom and with data from a video-image-based tracking system (RPM, Varian, USA) using an external surrogate. For quantitative analysis, we analyzed the correlation between the two data sets in terms of the average cycle (peak to peak), amplitude, and pattern (RMS, root mean square) of motion. The average motion cycles from the IMVS and the RPM system were $3.98{\pm}0.11$ sec (IMVS 3.3 fps), $4.005{\pm}0.001$ sec (IMVS 6.6 fps), and $3.95{\pm}0.02$ sec (RPM), respectively, in good agreement with the real value (4 sec/cycle). The average amplitude of motion tracked by our system was $1.85{\pm}0.02$ cm (3.3 fps) and $1.94{\pm}0.02$ cm (6.6 fps), differing slightly from the actual value (2 cm) by 0.15 cm (7.5% error) and 0.06 cm (3% error), respectively, due to the time resolution of image acquisition. 
In the analysis of motion pattern, the RMS value from cine EPID images at 3.3 fps (0.1044) was somewhat larger than that at 6.6 fps (0.0480). The organ motion verification system using sequential cine EPID images with an internal surrogate represented the motion well, within 3% error, in this preliminary phantom study. The system can be implemented for clinical purposes, including verification of organ motion during treatment, comparison with 4D treatment planning data, and feedback for accurate dose delivery to the moving target.
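The three quantitative measures used above (cycle, amplitude, and RMS of motion) can be illustrated on a simulated trace. This is a sketch under stated assumptions, not the IMVS software: the 2 cm peak-to-peak, 4 sec/cycle sinusoid follows the phantom setup, while the tracking noise level is hypothetical.

```python
import numpy as np

fps, period, half_amp = 6.6, 4.0, 1.0    # 2 cm peak-to-peak, 4 sec/cycle
t = np.arange(0, 20, 1 / fps)            # 20 sec of samples at 6.6 fps
planned = half_amp * np.sin(2 * np.pi * t / period)

# Hypothetical tracked positions: planned motion plus small tracking noise.
rng = np.random.default_rng(1)
tracked = planned + rng.normal(0, 0.02, t.size)

# Cycle: average spacing between successive upward zero crossings.
up = np.where((tracked[:-1] < 0) & (tracked[1:] >= 0))[0]
cycle = float(np.mean(np.diff(t[up])))

# Amplitude: peak-to-trough excursion of the tracked trace.
amp = float(tracked.max() - tracked.min())

# Pattern agreement: RMS of the residual between tracked and planned traces.
rms = float(np.sqrt(np.mean((tracked - planned) ** 2)))
```

With a finer frame rate the sampled peaks fall closer to the true extrema, which is why the abstract reports smaller amplitude and RMS errors at 6.6 fps than at 3.3 fps.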

A Study of the Reactive Movement Synchronization for Analysis of Group Flow (그룹 몰입도 판단을 위한 움직임 동기화 연구)

  • Ryu, Joon Mo;Park, Seung-Bo;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.79-94
    • /
    • 2013
  • Recently, high-value-added business has been growing steadily in the culture and art area. To generate high value from a performance, audience satisfaction is necessary. Flow is a critical factor for satisfaction, and it should be induced from the audience and measured. To evaluate the audience's interest in and emotion about content, producers or investors need an index for the measurement of flow. But it is neither easy to define flow quantitatively nor to collect the audience's reactions immediately. Previous studies evaluated group flow as the sum of the average value of each person's reaction: the flow, or "good feeling," of each audience member was extracted from his or her face, especially from changes of expression and body movement. But it was not easy to handle the large amount of real-time data from the sensor signals, and it was difficult to set up the experimental devices, in terms of economic and environmental problems, because every participant had to wear a personal sensor to capture physical signals and a camera had to be placed in front of each person's head to capture the face. Therefore a simpler system is needed to analyze group flow. This study provides a method for measuring audience flow through group synchronization at the same time and place. To measure synchronization, we built a real-time processing system using differential images and a Group Emotion Analysis (GEA) system. A differential image is obtained by subtracting the previous camera frame from the present frame, which yields the movement variation of the audience's reaction. We then developed a program, GEA (Group Emotion Analysis), as a flow judgment model. After measuring the audience's reaction, synchronization is divided into Dynamic State Synchronization and Static State Synchronization. 
Dynamic State Synchronization accompanies the audience's active reaction, while Static State Synchronization corresponds to little movement of the audience. Dynamic State Synchronization can be caused by the audience's surprised reactions to scary, creepy, or reversal scenes, while Static State Synchronization is triggered by impressive or sad scenes. We therefore showed participants several short movies containing such scenes, which made them sad, clap, feel creepy, etc. To check the movement of the audience, we defined two critical points, ${\alpha}$ and ${\beta}$: Dynamic State Synchronization was meaningful when the movement value was over critical point ${\beta}$, while Static State Synchronization was effective under critical point ${\alpha}$. ${\beta}$ was derived from the clapping movements of 10 teams instead of the average amount of movement. After checking the reactive movement of the audience, the percentage ratio was calculated by dividing the number of people having a reaction by the total number of people. A total of 37 teams were formed at the "2012 Seoul DMC Culture Open" and took part in the experiments. First, the participants were induced to clap by staff; second, a basic scene was shown to neutralize the audience's emotion; third, a flow scene was displayed; fourth, a reversal scene was introduced. Then 24 of the teams were shown amusing and creepy scenes, and the other 10 teams were shown a sad scene. The audience clapped and laughed at the amusing scene, and shook their heads or hid behind closed eyes at the creepy one, while the sad or touching scene made them silent. If the ratio was over about 80%, the group could be judged as synchronized and the flow as achieved. As a result, the audience showed similar reactions to similar stimulation at the same time and place. 
With additional normalization and experiments, we can find the flow factor through synchronization in much larger groups, which should be useful for planning contents.
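The subtract-threshold-ratio logic described above can be sketched roughly as follows. All specifics here are hypothetical stand-ins: the per-person grid layout, the values of the critical points ${\alpha}$ and ${\beta}$, and the frame data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two consecutive grayscale frames (hypothetical 8-bit camera input).
prev = rng.integers(0, 256, (120, 160)).astype(np.int16)
curr = prev.copy()
# Simulate movement (e.g. clapping) in part of the audience area.
curr[40:80, 40:120] += rng.integers(-60, 60, (40, 80), dtype=np.int16)

diff = np.abs(curr - prev)               # differential image

# Movement value per audience member: mean absolute change in that
# person's region; here the frame is split into a 3x4 grid, one cell
# per (hypothetical) person.
cells = [diff[r:r + 40, c:c + 40].mean()
         for r in range(0, 120, 40)
         for c in range(0, 160, 40)]

ALPHA, BETA = 2.0, 15.0                  # hypothetical critical points
dynamic = [m for m in cells if m > BETA]   # active reaction (over beta)
static = [m for m in cells if m < ALPHA]   # stillness (under alpha)

# Ratio of reacting people to total; flow is judged at roughly 80%.
dynamic_ratio = len(dynamic) / len(cells)
static_ratio = len(static) / len(cells)
group_flow = max(dynamic_ratio, static_ratio) >= 0.8
```

In this toy frame only two of twelve cells move, so the group registers as statically synchronized: most of the audience is still, as in the sad-scene condition.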