
The Study of Dose Distribution according to the Using Linac and Tomotherapy on Total Lymphnode Irradiation (선형가속기와 토모치료기를 이용한 전림프계의 방사선 치료시 선량분포에 관한 연구)

  • Kim, Youngjae;Seol, Gwanguk
    • Journal of the Korean Society of Radiology
    • /
    • v.7 no.4
    • /
    • pp.285-291
    • /
    • 2013
  • In this study, we compared and analyzed the dose distribution and clinical usefulness of radiation therapy when different devices are used for TNI (Total Lymphnode Irradiation). The subjects were 15 patients (7 male, 8 female). CT simulation images of the 15 patients were acquired with a Somatom Sensation Open 16-channel scanner and transferred to each treatment planning system (Pinnacle Ver. 8.0 and the Tomotherapy Planning System), in which the tumor tissue and normal tissues (whole lung, spinal cord, right kidney, left kidney) were contoured. The tumor prescription dose was set to 750 cGy, and dose conformity, absorbed dose in normal tissues, dose distribution, and DVHs were compared. Statistical analysis was performed with SPSS Ver. 18.0 using the paired-sample t-test. The absorbed dose in the tumor tissue was 751.0±4.7 cGy in the tomotherapy plan and 746.9±14.1 cGy in the linac plan; the tomotherapy dose was closer to the prescription, but the difference was not statistically significant (p>0.05). The absorbed doses in the normal tissues were lower in the tomotherapy plan than in the linac plan, and the differences were statistically significant (p<0.05) except for the whole lung. In the DVH analysis, both the tomotherapy and linac plans were appropriate for tumor and normal tissues, but tomotherapy's TER was better than the linac's. In short, in terms of absorbed dose in tumor and normal tissue, dose distribution pattern, and DVH, both devices were appropriate for radiation therapy in terms of TER. The linac offers a short treatment time (about 15-20 minutes) and an open treatment space, whereas the longer, more enclosed procedure can make treatment uncomfortable for infant and pediatric patients; for such patients, restricting treatment to the linac is a reasonable choice, while for cooperative patients, tomotherapy-based radiation therapy can be expected to show a better prognosis.
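The paired-sample t-test the abstract describes (each patient planned on both devices) can be sketched in a few lines of plain Python; the per-patient dose values below are hypothetical placeholders, not the study's data, and `paired_t_test` is an illustrative helper, not part of SPSS:

```python
import math

def paired_t_test(a, b):
    """Paired-sample t statistic for matched measurements on the same patients."""
    assert len(a) == len(b)
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance of differences
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1  # t statistic and degrees of freedom

# Hypothetical per-patient tumor doses (cGy) for the two plans.
tomo = [752.1, 748.9, 751.5, 750.2, 753.0]
linac = [749.8, 741.2, 755.6, 738.9, 747.1]
t, df = paired_t_test(tomo, linac)
```

The t statistic would then be compared against the t distribution with `df` degrees of freedom to obtain the p-value the abstract reports.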

The Effects of LBS Information Filtering on Users' Perceived Uncertainty and Information Search Behavior (위치기반 서비스를 통한 정보 필터링이 사용자의 불확실성과 정보탐색 행동에 미치는 영향)

  • Zhai, Xiaolin;Im, Il
    • Asia pacific journal of information systems
    • /
    • v.24 no.4
    • /
    • pp.493-513
    • /
    • 2014
  • With the development of related technologies, Location-Based Services (LBS) are growing fast and being used in many ways. Past LBS studies have focused on adoption because LBS users have privacy concerns about revealing their location information. Meanwhile, the number of LBS users and LBS revenues are growing rapidly because users gain benefits from revealing their location information. Little research, however, has examined how LBS affects consumers' information search behavior during product purchase. The purpose of this paper is to examine the effect of LBS information filtering on buyers' uncertainty and their information search behavior. When consumers purchase a product, they try to reduce uncertainty by searching for information. Generally, there are two types of uncertainty: knowledge uncertainty and choice uncertainty. Knowledge uncertainty refers to the lack of information on what alternatives are available in the market and/or their important attributes; consumers with knowledge uncertainty therefore have difficulty identifying which alternatives in the market can fulfil their needs. Choice uncertainty refers to the lack of information about consumers' own preferences and which alternative fits their needs; consumers with choice uncertainty therefore have difficulty selecting the best product among the available alternatives. According to the economics of information, consumers narrow the scope of information search when knowledge uncertainty is high, because their information search cost is then high: if people do not know the available alternatives and their attributes, acquiring that information takes time and cognitive effort, so they reduce search breadth. For people with high knowledge uncertainty, the information about products and their attributes is new and of high value.
Therefore, they will search more in depth, because they have an incentive to acquire more information. When people have high choice uncertainty, they tend to search for information about more alternatives, because increased search breadth improves their chances of finding a better alternative. On the other hand, since human cognitive capacity is limited, increased search breadth (more alternatives) reduces the depth of information search for each alternative: consumers with high choice uncertainty spend less time and effort on each alternative because considering more alternatives increases their utility. LBS provides users with the capability to screen alternatives by distance from them, which reduces information search costs. Therefore, LBS is expected to help users consider more alternatives even when they have high knowledge uncertainty. LBS also provides distance information that helps users choose alternatives appropriate for them, so users should perceive lower choice uncertainty when they use LBS. In order to test the hypotheses, we selected 80 students and assigned each to one of two experiment groups. One group was asked to use LBS to search surrounding restaurants, and the other group was asked to search nearby restaurants without LBS. The experimental tasks and measurement items were validated in a pilot experiment; the final measurement items are shown in Appendix A. Each subject was asked to read one of the two scenarios - with or without LBS - and use a smartphone application to pick a restaurant. All behaviors on the smartphone were recorded using a recording application. Search breadth was measured by the number of restaurants clicked by each subject. Search depth was measured by two metrics: the average number of sub-level pages each subject visited and the average time spent on each restaurant. The hypotheses were tested using SPSS and PLS.
The results show that knowledge uncertainty reduces search breadth (H1a). However, there was no significant correlation between knowledge uncertainty and search depth (H1b). Choice uncertainty significantly reduces search depth (H2b), but no significant relationship was found between choice uncertainty and search breadth (H2a). LBS information filtering significantly reduces buyers' choice uncertainty (H4) and weakens the negative relationship between knowledge uncertainty and search breadth (H3). This research provides some important implications for service providers, who should use different strategies depending on their service properties. Service providers that are not well known to consumers (high knowledge uncertainty) should encourage their customers to use LBS, because LBS enlarges buyers' consideration sets when knowledge uncertainty is high; less known services therefore have a chance to be included in consumers' consideration sets with LBS. On the other hand, LBS information filtering decreases choice uncertainty, and nearby service providers are more likely to be selected than they would be without LBS. Hence, service providers should analyze the strengths of geographically proximate competitors and try to reduce the gap so that they can be included in the consideration set.

A Study on the Optimal Angle as Modified Tangential Projection of Knee Bones (무릎뼈의 변형된 접선방향 검사 시 최적의 입사각에 관한 연구)

  • Oh, Wang-Kyun;Kim, Sang-Hyun
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.6
    • /
    • pp.919-926
    • /
    • 2021
  • In this study, we wanted to find the optimal angle for a modified tangential projection of the patella. In the experiment, we used Kyoto Kagaku's PBU-50 phantom. In the supine position, the F-T angle was set to 95°, 105°, 115°, 125°, 135°, and 145°, and patella tangential projection images were obtained by varying the X-ray tube angle in 5° steps so that the angle between the X-ray central beam and the tibia at each F-T angle was 5-20°. Image J was used for image analysis, and the congruence angle, lateral patellofemoral angle, patellofemoral index, and contrast-to-noise ratio (CNR) were measured. SPSS 22 was used for statistical analysis, and the mean values of congruence angle, lateral patellofemoral angle, patellofemoral index, and CNR were compared with the Merchant method using one-way ANOVA and the paired-sample t-test. As a result of the study, the combinations of knee angle and X-ray central beam incidence angle that did not differ significantly from the Merchant method (p > .05) were: for the congruence angle, 105°-72.5° (20° tangential irradiation), 115°-72.5° and 77.5° (15° and 20° tangential irradiation), and 125°-82.5° (20° tangential irradiation); for the lateral patellofemoral angle, 115°-72.5° and 77.5° (15° and 20° tangential irradiation) and 125°-72.5° (10° tangential irradiation); and for the patellofemoral index, 115°-72.5° (15° tangential irradiation) and 125°-72.5° (10° tangential irradiation). For CNR, the results did not differ from the Merchant method at 105°-67.5° and 72.5° (15° and 20° tangential irradiation) and at 115°-67.5°, 72.5°, and 77.5° (10°, 15°, and 20° tangential irradiation) (p > .05). Based on these results, it was confirmed that images of high diagnostic value can be obtained by setting the knee angle and the X-ray tube incidence angle to 115°-72.5° (15° tangential irradiation) during the modified tangential examination of the patella.
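The CNR measurement mentioned above can be illustrated with a minimal sketch. Note that CNR definitions vary between studies; the one below (mean ROI difference divided by background standard deviation) is one common choice, and the ROI pixel values are hypothetical, not taken from the phantom images:

```python
import statistics

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: mean difference between the signal and
    background ROIs divided by the background standard deviation
    (one common definition; conventions differ across studies)."""
    contrast = abs(statistics.mean(signal_roi) - statistics.mean(background_roi))
    noise = statistics.stdev(background_roi)  # sample standard deviation
    return contrast / noise

# Hypothetical pixel values sampled from two Image J ROIs.
patella = [180, 175, 182, 178, 181]
soft_tissue = [120, 118, 123, 119, 121]
value = cnr(patella, soft_tissue)
```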

Empirical Analysis of Accelerator Investment Determinants Based on Business Model Innovation Framework (비즈니스 모델 혁신 프레임워크 기반의 액셀러레이터 투자결정요인 실증 분석)

  • Jung, Mun-Su;Kim, Eun-Hee
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.1
    • /
    • pp.253-270
    • /
    • 2023
  • Research on the investment determinants of accelerators, which are attracting attention for greatly improving the survival rate of startups by providing professional incubation and investment at the same time, is gradually expanding. However, previous studies lack a theoretical basis for developing investment determinants in the early stages; they borrow factors from angel investors or venture capital, which are similar investors, and are still at the stage of analyzing importance and priority through empirical research. Therefore, this study verified, for the first time in Korea, the discriminatory power and effectiveness of accelerator investment determinants developed in previous studies on the basis of the business model innovation framework. To this end, we first set the criteria for success and failure of startup investment based on scale-up theory and conducted a survey of 22 investment experts from 14 accelerators in Korea. Valid data were secured on a total of 97 startups, comprising 52 successful scale-up startups and 45 failed scale-up startups, and an independent-samples t-test was conducted to verify the mean difference between these two groups on each accelerator investment determinant. As a result of the analysis, it was confirmed that accelerator investment determinants based on the business model innovation framework have considerable discriminatory power in finding successful startups and making investment decisions. In addition, when manufacturing-related and service-related startups were analyzed separately to reflect the characteristics of innovation by industry, manufacturing-related startups differed in business model, strategy, and dynamic capability factors, while service-related startups differed only in dynamic capabilities.
This study has great academic implications in that it verified, for the first time in Korea, the practical effectiveness of accelerator investment determinants derived from the business model innovation framework, and it has high practical value in that it can support effective investment by providing theoretical grounds and detailed information for investment decisions.
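The independent-samples t-test used to compare the successful and failed scale-up groups can be sketched in pure Python with a pooled-variance statistic; the expert scores below are hypothetical illustrations, not the survey data:

```python
import math

def independent_t_test(g1, g2):
    """Independent-samples t statistic with pooled variance,
    comparing the group means of two unrelated samples."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = sum(g1) / n1, sum(g2) / n2
    v1 = sum((x - m1) ** 2 for x in g1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in g2) / (n2 - 1)
    # Pooled variance weights each group's variance by its degrees of freedom.
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical expert ratings on one investment determinant.
succeeded = [4.2, 4.5, 3.9, 4.8, 4.1]
failed = [3.1, 3.6, 2.9, 3.4, 3.2]
t, df = independent_t_test(succeeded, failed)
```

A determinant shows discriminatory power when the resulting p-value (from the t distribution with `df` degrees of freedom) falls below the chosen significance level.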


A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we run the risk of obtaining biased estimates of the cross elasticity between them if we focus only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of the conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply the nested logit model, which assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy, so that new and used cars of all models are substitutable at the first stage.
The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas in the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009 - June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both calibration and holdout samples. The other comparison model, which assumes choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share to maintain the status quo. The new car then settles down to a lowered market share due to the used car's reaction.
The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reactive cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and so suggests a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure transmitted from a typical car dealership, where both new and used cars of the same model are displayed. Because of this fact, the BNU model, which assumes brand choice at the first stage and choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there are dealerships that carry both new and used cars of various models, the NUB model might fit the data as well as the BNU model.
Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
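The two-stage structure described above (car model choice at the first stage, new-vs-used choice within the chosen model nest) can be sketched as a standard nested logit probability calculation. The utilities and dissimilarity value below are hypothetical illustrations, not estimates from the paper:

```python
import math

def nested_logit(utilities, dissimilarity):
    """Two-stage nested logit: model choice at the top level, new-vs-used
    within each model nest. `utilities[model]` maps 'new'/'used' to a
    deterministic utility; `dissimilarity` is the nest parameter, which must
    lie in (0, 1] for consistency with utility maximization (values above 1
    signal mis-specification, as the abstract notes)."""
    # Inclusive value (log-sum) of each nest.
    iv = {m: math.log(sum(math.exp(v / dissimilarity) for v in u.values()))
          for m, u in utilities.items()}
    denom = sum(math.exp(dissimilarity * iv[m]) for m in utilities)
    probs = {}
    for m, u in utilities.items():
        p_nest = math.exp(dissimilarity * iv[m]) / denom  # P(model)
        within = sum(math.exp(v / dissimilarity) for v in u.values())
        for cond, v in u.items():
            # P(model) * P(new/used | model)
            probs[(m, cond)] = p_nest * math.exp(v / dissimilarity) / within
    return probs

# Hypothetical utilities for two of the models in the study.
u = {"Jetta": {"new": 1.0, "used": 0.4}, "Elantra": {"new": 0.8, "used": 0.6}}
p = nested_logit(u, dissimilarity=0.5)
```

A dissimilarity parameter below 1 makes new and used cars of the same model closer substitutes than cars of different models, which is exactly the cross-elasticity assumption the proposed model encodes.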


Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.137-148
    • /
    • 2014
  • The recommender system has become one of the most important technologies in e-commerce these days. The ultimate reason to shop online, for many consumers, is to reduce the effort of information search and purchase, and the recommender system is a key technology for serving these needs. Many past studies of recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful one. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. In order to generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users; for new users who have no evaluations or preference information, CF cannot come up with recommendations (cold-start problem). As the numbers of products and customers increase, the scale of the data increases exponentially and most of the data cells are empty; this sparse dataset makes computation for recommendation extremely hard (sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, CF becomes inaccurate if there are many users with rare and unique tastes (gray sheep problem). This study proposes a new algorithm that utilizes Social Network Analysis (SNA) techniques to resolve the gray sheep problem. We utilize 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality in SNA refers to the number of direct links to and from a node. In a network of users who are connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from them. Therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two, gray sheep and others, based on the degree centrality of the users.
Then, different similarity measures and recommendation methods are applied to these two datasets. The detailed algorithm is as follows. Step 1: Convert the initial data, which is a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate the nodes whose degree centrality values are lower than a pre-set threshold. The threshold value is determined by simulations such that the accuracy of CF for the remaining dataset is maximized. Step 3: An ordinary CF algorithm is applied to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them, so a 'popular item' method is used to generate recommendations for these users. The F measures of the two datasets are weighted by the numbers of nodes and summed to form the final performance metric. In order to test the performance improvement from this new algorithm, an empirical study was conducted using a publicly available dataset - the MovieLens data by the GroupLens research team. We used 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm utilizing the 'Best-N-neighbors' and 'Cosine' similarity methods. The empirical results show that the F measure was improved by about 11% on average when the proposed algorithm was used. Past studies to improve CF performance typically used additional information other than users' evaluations, such as demographic data, and some studies applied SNA techniques as a new similarity metric. This study is novel in that it used SNA to separate the dataset, and it shows that the performance of CF can be improved, without any additional information, when SNA techniques are used as proposed. This study has several theoretical and practical implications. It empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, which helps researchers understand the factors affecting CF performance, and it opens a door for future studies applying SNA to CF to analyze dataset characteristics. In practice, this study provides guidelines for improving the performance of CF recommender systems with a simple modification.
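Steps 1 and 2 of the algorithm above (two-mode to one-mode projection, then a degree-centrality split) can be sketched in plain Python; `split_gray_sheep`, the toy ratings, and the threshold below are illustrative assumptions, not the paper's implementation:

```python
from collections import defaultdict

def split_gray_sheep(ratings, threshold):
    """Step 1: project (user, item) ratings - a two-mode network - onto a
    user-user network linking users who rated a common item.
    Step 2: separate users whose degree centrality (number of distinct
    neighbors) falls below `threshold` as gray sheep."""
    item_users = defaultdict(set)
    for user, item in ratings:
        item_users[item].add(user)
    neighbors = defaultdict(set)
    for users in item_users.values():
        for u in users:
            neighbors[u] |= users - {u}
    degree = {u: len(n) for u, n in neighbors.items()}
    gray = {u for u, d in degree.items() if d < threshold}
    return gray, set(degree) - gray

# Toy (user, item) ratings; user 'd' shares no items with anyone else.
ratings = [("a", 1), ("b", 1), ("a", 2), ("c", 2), ("d", 9)]
gray, mainstream = split_gray_sheep(ratings, threshold=1)
```

Ordinary CF would then be run on `mainstream` (Step 3) and a popular-item fallback on `gray` (Step 4), with the threshold tuned by simulation as the abstract describes.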

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. The model estimate is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
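The 18.3% gap quoted above follows directly from the two VMT totals:

```python
gm_vmt = 2.975e9      # 1990 heavy-truck VMT forecast by the GM model
wisdot_vmt = 3.642e9  # WisDOT's computed estimate

shortfall = (wisdot_vmt - gm_vmt) / wisdot_vmt
print(f"{shortfall:.1%}")  # prints 18.3%
```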
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
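Such a differentiated-growth scenario amounts to compounding each zone's trip ends at an area-specific rate. The sketch below illustrates the idea; the zone names, areas, and growth rates are hypothetical, not the study's scenarios.

```python
# Hypothetical zonal productions with area-specific annual growth rates
# (illustrative only; the study's scenario values are not reproduced here).
zones = {
    "rural_zone_1":  {"productions": 120, "area": "rural"},
    "urban_zone_7":  {"productions": 950, "area": "urban"},
    "external_st_3": {"productions": 400, "area": "external"},
}
annual_growth = {"rural": 0.01, "urban": 0.025, "external": 0.02}

def forecast(zones, annual_growth, years):
    """Compound each zone's trip productions at its area's growth rate."""
    return {
        name: round(z["productions"] * (1 + annual_growth[z["area"]]) ** years, 1)
        for name, z in zones.items()
    }

print(forecast(zones, annual_growth, 20))
```

Running the model with several such rate tables yields the alternative target-year scenarios used in the sensitivity analysis.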


    The Effects of Online Service Quality on Consumer Satisfaction and Loyalty Intention -About Booking and Issuing Air Tickets on Website- (온라인 서비스 품질이 고객만족 및 충성의도에 미치는 영향 -항공권 예약.발권 웹사이트를 중심으로-)

    • Park, Jong-Gee;Ko, Do-Eun;Lee, Seung-Chang
      • Journal of Distribution Research
      • /
      • v.15 no.3
      • /
      • pp.71-110
      • /
      • 2010
    • 1. Introduction Today the Internet is recognized as an important channel for transacting products and services. According to data surveyed by the National Statistical Office, online transactions in 2007 totaled 15.7656 trillion won, a 17.1% (2.3060 trillion won) increase over the previous year; of this, the B2C amount increased 12.0% (10.2258 trillion won). Because the entry barrier to Korea's online market is low, many retailers have entered it easily; as the market's scale has grown, its competition has become tougher. In particular, owing to the Internet and IT innovation, the existing market has changed into a perfectly competitive market (Srinivasan, Rolph & Kishore, 2002). In the early years of online business, firms believed that moderate prices were the main reason for success, but tough competition has awakened them to the importance of online service quality. If customers are not sure whether a Web site will provide what they want, or whether they can trust the products they have already bought, they doubt its viability (Parasuraman, Zeithaml & Malhotra, 2005). Customers can directly reserve and issue their air tickets, irrespective of place and time, at the Web sites of travel agencies or airlines, but empirical studies of these Web sites for reserving and issuing air tickets are insufficient. Therefore, this study pursues the following specific objectives. The first is to measure the service quality and service recovery of Web sites for reserving and issuing air tickets. The second is to examine whether this online service quality and online service recovery have an impact on overall service quality. The third is to investigate the relation between overall service quality and customer satisfaction, and then between customer satisfaction and loyalty intention. 2.
Theoretical Background 2.1 On-line Service Quality Barnes & Vidgen (2000; 2001a; 2001b; 2002) developed WebQual, a tool to measure Web site quality, through four iterations. WebQual 1.0, the first step, provided measurement items for information quality based on QFD and was validated by students of a UK business school. WebQual 2.0, the second step, was developed for interaction quality and was judged by customers of an online bookshop. WebQual 3.0, the third step, consolidated WebQual 1.0 for information quality with WebQual 2.0 for interaction quality. It includes three quality dimensions, information quality, interaction quality, and site design, and was assessed and confirmed with auction sites (eBay, Amazon, QXL). Later, drawing on the earlier empirical studies, the authors replaced site design quality with usability, judging that usability is a concept of how customers interact with or perceive Web sites and that it is widely used for assessing Web sites. Through this process, WebQual 4.0 was developed; it consists of three quality dimensions, information quality, interaction quality, and usability, with 22 items. However, because WebQual 4.0 focuses on the technical side, it is useful for evaluating a Web site's design but not the pleasant-experience aspects of a site. Parasuraman, Zeithaml & Malhotra (2002; 2005) developed measures of online service quality in 2002 and 2005. The 2002 study divided online service quality into five dimensions, but these were not well organized, so the measure needed to be reworked in full. Parasuraman, Zeithaml & Malhotra (2005) therefore revisited online service quality measurement based on the 2002 study and developed E-S-QUAL. After developing a preliminary measure of online service quality, they surveyed customers who had purchased at amazon.com and walmart.com and reassessed the measure.
The completed E-S-QUAL consists of 22 items in four dimensions: efficiency, system availability, fulfillment, and privacy. Efficiency measures ease of access to sites and usability; system availability measures the accurate technical functioning of sites; fulfillment measures the promptness of delivering products and the sufficiency of goods; and privacy measures the degree of protection of customer data. 2.2 Service Recovery Service industries tend to minimize losses by coping with service failure promptly. These responses of service providers to service failure constitute service recovery (Kelly & Davis, 1994). Bitner (1990) studied, from the customers' point of view, the provider behaviors that lead customers to recognize their satisfaction or dissatisfaction at the service encounter. According to that work, managing service failure successfully requires exact recognition of the service problem, an apology, a sufficient explanation of the service failure, and some tangible compensation. Parasuraman, Zeithaml & Malhotra (2005) approached service recovery from the perspective of how to measure rather than how to manage it, applied it to the online rather than the offline market, and developed E-RecS-QUAL, a tool for measuring online service recovery. 2.3 Customer Satisfaction The definition of customer satisfaction can be divided into two points of view. The first approaches customer satisfaction as a consumer outcome. Howard & Sheth (1969) defined satisfaction as 'a cognitive condition of feeling rewarded properly or improperly for one's sacrifice,' and Westbrook & Reilly (1983) defined customer satisfaction/dissatisfaction as 'a psychological reaction to the behavior pattern of shopping and purchasing, the display condition of the retail store, and the outcome of purchased goods and services as well as the whole market.' The second approaches customer satisfaction as a process.
Engel & Blackwell (1982) defined satisfaction as 'an assessment of the consistency between the chosen alternative and the beliefs held about it.' Tse & Wilton (1988) defined customer satisfaction as 'a customer's reaction to the discordance between advance expectation and ex post facto outcome.' In this process-oriented view, the central factor is the process of comparing and assessing what consumers expect against the outcome. Unlike the outcome-oriented approach, the process-oriented approach has many advantages. Because it deals with the customer's whole consumption experience, it examines the main process by measuring, one by one, each factor that plays an essential role at each step. This approach also enables us to examine the perceptual and psychological processes that form customer satisfaction. Because of these advantages, many studies now adopt the process-oriented approach (Yi, 1995). 2.4 Loyalty Intention Loyalty has been studied through behavioral approaches, attitudinal approaches, and complex approaches (Dekimpe et al., 1997). Early studies defined loyalty with a focus on the behavioral concept: behavioral approaches regard customer loyalty as 'a tendency to purchase periodically within a certain period of time at a specific retail store.' But the loyalty of behavioral approaches focuses only on the outcome of customer behavior, so some have pointed out the limitation that the customer's decision-making situation or process is neglected (Enis & Paul, 1970; Raj, 1982; Lee, 2002). The attitudinal approaches were therefore suggested. The attitudinal approaches consider that loyalty contains cognitive, emotional, and volitional factors (Oliver, 1997) and define customer loyalty as 'friendly behaviors toward specific retail stores.' However, while these attitudinal approaches can explain how customer loyalty forms and changes, they cannot confirm whether it translates into real purchasing in the future.
This is a kind of shortcoming (Oh, 1995). 3. Research Design 3.1 Research Model Based on the objectives of this study, the research model was derived (research model figure omitted).

      3.2 Hypotheses 3.2.1 The Hypothesis of On-line Service Quality and Overall Service Quality The relation between on-line service quality and overall service quality: I-1. Efficiency of on-line service quality may have a significant effect on overall service quality. I-2. System availability of on-line service quality may have a significant effect on overall service quality. I-3. Fulfillment of on-line service quality may have a significant effect on overall service quality. I-4. Privacy of on-line service quality may have a significant effect on overall service quality. 3.2.2 The Hypothesis of On-line Service Recovery and Overall Service Quality The relation between on-line service recovery and overall service quality: II-1. Responsiveness of on-line service recovery may have a significant effect on overall service quality. II-2. Compensation of on-line service recovery may have a significant effect on overall service quality. II-3. Contact of on-line service recovery may have a significant effect on overall service quality. 3.2.3 The Hypothesis of Overall Service Quality and Customer Satisfaction The relation between overall service quality and customer satisfaction: III-1. Overall service quality may have a significant effect on customer satisfaction. 3.2.4 The Hypothesis of Customer Satisfaction and Loyalty Intention The relation between customer satisfaction and loyalty intention: IV-1. Customer satisfaction may have a significant effect on loyalty intention. 3.2.5 The Hypothesis of a Mediation Variable Wolfinbarger & Gilly (2003) and Parasuraman, Zeithaml & Malhotra (2005) made clear that each dimension of service quality has a significant effect on overall service quality. In addition, those authors showed empirically that each dimension of on-line service quality has a positive effect on customer satisfaction.
With that viewpoint, this study examines whether overall service quality mediates between each dimension of on-line service quality and customer satisfaction, while continuing to look into the relations between on-line service quality and overall service quality, and between overall service quality and customer satisfaction. And as this study understands that each dimension of on-line service recovery also has an effect on overall service quality, it also examines whether overall service quality mediates between each dimension of on-line service recovery and customer satisfaction. Therefore, the following hypotheses are set up to examine whether overall service quality plays its role as the mediation variable. The relation between on-line service quality and customer satisfaction: V-1. Overall service quality may mediate the effect of efficiency of on-line service quality on customer satisfaction. V-2. Overall service quality may mediate the effect of system availability of on-line service quality on customer satisfaction. V-3. Overall service quality may mediate the effect of fulfillment of on-line service quality on customer satisfaction. V-4. Overall service quality may mediate the effect of privacy of on-line service quality on customer satisfaction. The relation between on-line service recovery and customer satisfaction: VI-1. Overall service quality may mediate the effect of responsiveness of on-line service recovery on customer satisfaction. VI-2. Overall service quality may mediate the effect of compensation of on-line service recovery on customer satisfaction. VI-3. Overall service quality may mediate the effect of contact of on-line service recovery on customer satisfaction. 4. Empirical Analysis 4.1 Research design and the characters of data This empirical study targeted customers who have purchased air tickets at Web sites for reservation and issue. A total of 430 questionnaires were distributed, and 400 were collected.
After surveying with the final questionnaire, a frequency test was performed on sex and age, the demographic variables, to analyze the general characteristics of the sample data. The sample consists of 146 males (42.7%) and 196 females (57.3%), so the portion of females is a little higher. By age, 11 respondents are in their teens (3.2%), 199 in their 20s (58.2%), 105 in their 30s (30.7%), 22 in their 40s (6.4%), and 5 in their 50s (1.5%). The higher portions of respondents in their 20s and 30s can be attributed to their frequent Internet use and direct purchase of air tickets. 4.2 Assessment of measuring scales This study used internal consistency analysis to measure reliability, assessed with Cronbach's $\alpha$. As a result of the reliability test, the Cronbach's $\alpha$ value of every component is above 0.6, so the reliability of the measured variables is ensured. After the reliability test, exploratory factor analysis was performed. Factor extraction used principal component analysis (PCA), and factor rotation used Varimax, which is good for verifying mutual independence between factors. Based on the initial factor analysis, items impeding construct validity were removed, and the final factor analysis was performed to verify construct validity. 4.3 Hypothesis Testing 4.3.1 Hypothesis Testing by the Regression Analysis (SPSS) 4.3.2 Analysis of Mediation Effect To verify the mediation effect of overall service quality, this study used the widely used stepwise analysis method proposed by Baron & Kenny (1986). As
    the first analysis shows, Step 1 and Step 2 are significant, and at Step 3 the mediation variable has a significant effect on the dependent variable, as do the independent variables. To establish partial mediation, the independent variables' estimates at Step 3 (standardized coefficients $\beta$: efficiency = .164, system availability = .074, fulfillment = .108, privacy = .107) must be smaller than their estimates at Step 2 (standardized coefficients $\beta$: efficiency = .409, system availability = .227, fulfillment = .386, privacy = .237); since they are, overall service quality plays a role as a partial mediator between on-line service quality and satisfaction. As
    the second analysis shows, Step 1 and Step 2 are significant, and at Step 3 the mediation variable has a significant effect on the dependent variable, as do the independent variables. To establish partial mediation, the independent variables' estimates at Step 3 (standardized coefficients $\beta$: responsiveness = .164, compensation = .117, contact = .113) must be smaller than their estimates at Step 2 (standardized coefficients $\beta$: responsiveness = .409, compensation = .386, contact = .237); since they are, overall service quality plays a role as a partial mediator between on-line service recovery and satisfaction. The verified results of the empirical analysis are as follows. First, hypotheses I-1 to I-4 were all supported: on-line service quality has a positive effect on overall service quality, with fulfillment having the greatest effect, followed by efficiency, system availability, and privacy. Second, hypotheses II-1 to II-3 were all supported: on-line service recovery has a positive effect on overall service quality, with responsiveness having the greatest effect, followed by contact and compensation. Third, hypotheses III-1 and IV-1 were supported: overall service quality has a positive effect on customer satisfaction, and customer satisfaction has a positive effect on loyalty intention. Fourth, hypotheses V-1 to V-4 and VI-1 to VI-3 were supported: overall service quality plays a role as a partial mediator between on-line service quality and customer satisfaction, and between on-line service recovery and customer satisfaction. 5. Conclusion This study measured and analyzed the service quality and service recovery of the Web sites where customers reserve and issue their air tickets, and, by using the results to improve customer satisfaction, its final goal is to identify how to retain loyal customers.
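The Baron & Kenny stepwise test used above can be sketched with plain least squares: regress Y on X (Step 1), M on X (Step 2), and Y on X and M jointly (Step 3), then check that X's coefficient shrinks while M's stays nonzero. The scores below are invented for illustration, not the study's data.

```python
def slope(x, y):
    """OLS slope of y on a single predictor x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def two_predictor_slopes(x, m, y):
    """OLS slopes of y on x and m jointly (normal equations on centered data)."""
    n = len(x)
    cx = [a - sum(x) / n for a in x]
    cm = [a - sum(m) / n for a in m]
    cy = [a - sum(y) / n for a in y]
    sxx = sum(a * a for a in cx); smm = sum(a * a for a in cm)
    sxm = sum(a * b for a, b in zip(cx, cm))
    sxy = sum(a * b for a, b in zip(cx, cy))
    smy = sum(a * b for a, b in zip(cm, cy))
    det = sxx * smm - sxm ** 2
    return (sxy * smm - smy * sxm) / det, (smy * sxx - sxy * sxm) / det

# Hypothetical scores: X = efficiency, M = overall quality, Y = satisfaction
X = [3, 4, 5, 2, 4, 5, 3, 1]
M = [3, 4, 5, 2, 5, 4, 3, 2]
Y = [3, 4, 5, 2, 5, 5, 3, 1]

c = slope(X, Y)                              # Step 1: X must predict Y
a = slope(X, M)                              # Step 2: X must predict M
c_prime, b = two_predictor_slopes(X, M, Y)   # Step 3: Y on X and M together
# Partial mediation: |c_prime| < |c| while b remains nonzero
print(abs(c_prime) < abs(c))  # prints True
```

A full test would also check the significance of each coefficient; this sketch only reproduces the coefficient-shrinkage comparison reported in the text.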
On the basis of the empirical analysis, the implications of this study are as follows. First, this study used E-S-QUAL to measure on-line service quality and E-RecS-QUAL to measure on-line service recovery, thereby overcoming the limit of existing studies that used a modified SERVQUAL to measure the service quality of Web sites. Second, fulfillment and efficiency of on-line service quality have the most significant effects on overall service quality; therefore, Web sites for reserving and issuing air tickets should try harder to improve efficiency and fulfillment. Third, privacy of on-line service quality has the least significant effect on overall service quality, but this may be caused by customers' uncertainty about whether the Web sites safely protect their confidential information, so sites need to communicate that protection to customers clearly. Fourth, customers often do not recognize the importance of on-line service recovery, but because it affects customer satisfaction and loyalty intention, its importance is significant and firms should prepare for it. Fifth, because overall service quality has a positive effect on customer satisfaction and loyalty intention, firms should try harder to improve the service quality and service recovery of Web sites for reserving and issuing air tickets in order to maximize customer satisfaction and secure loyal customers. Sixth, overall service quality was found to play a role as a partial mediator, but studies on this are still rare, so more research is needed.

