A Study of Dose Distribution in Total Lymph Node Irradiation Using a Linac and Tomotherapy (선형가속기와 토모치료기를 이용한 전림프계의 방사선 치료시 선량분포에 관한 연구)
- Journal of the Korean Society of Radiology, v.7 no.4, pp.285-291, 2013
In this study, we compared and analyzed the dose distribution and usefulness of radiation therapy when different devices are used for TNI (Total Lymph Node Irradiation). The subjects were 15 patients (7 male, 8 female). CT simulation images of the 15 patients were acquired on a 16-channel Somatom Sensation Open scanner, and the acquired images were transferred to each treatment planning system, Pinnacle Ver. 8.0 and the Tomotherapy Planning System, in which the tumor tissue and normal tissues (whole lung, spinal cord, right kidney, left kidney) were delineated. The tumor prescription dose was set to 750 cGy. Dose conformity, the absorbed dose in normal tissues, the dose distribution, and DVHs were then compared. Statistical analysis was performed with SPSS Ver. 18.0 using a paired-sample t-test. The absorbed dose in the tumor tissue was
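The paired comparison described above can be illustrated with a minimal sketch in Python (SciPy standing in for SPSS); the dose values below are hypothetical placeholders, not data from this study:

    # Minimal sketch: paired-sample t-test comparing per-patient organ doses
    # from two treatment plans. Dose values are hypothetical, not study data.
    import numpy as np
    from scipy import stats

    linac_dose = np.array([712.0, 698.5, 705.3, 720.1, 709.8])  # cGy, plan A
    tomo_dose = np.array([731.2, 715.4, 722.8, 735.9, 726.3])   # cGy, plan B

    t_stat, p_value = stats.ttest_rel(linac_dose, tomo_dose)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference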
With the development of related technologies, Location-Based Services (LBS) are growing fast and being used in many ways. Past LBS studies have focused on the adoption of LBS, since LBS users have privacy concerns about revealing their location information. Meanwhile, the number of LBS users and revenues from LBS are growing rapidly because users can gain benefits by revealing their location information. Little research has been done on how LBS affects consumers' information search behavior in product purchase. The purpose of this paper is to examine the effect of LBS information filtering on buyers' uncertainty and their information search behavior. When consumers purchase a product, they try to reduce uncertainty by searching for information. Generally, there are two types of uncertainty: knowledge uncertainty and choice uncertainty. Knowledge uncertainty refers to the lack of information on what kinds of alternatives are available in the market and/or their important attributes. Consumers with knowledge uncertainty therefore have difficulty identifying what alternatives exist in the market to fulfil their needs. Choice uncertainty refers to the lack of information about consumers' own preferences and which alternative will fit their needs. Consumers with choice uncertainty therefore have difficulty selecting the best product among the available alternatives. According to the economics of information theory, consumers narrow the scope of information search when knowledge uncertainty is high. This is because information search costs are high when knowledge uncertainty is high: if people do not know the available alternatives and their attributes, acquiring information about them takes time and cognitive effort, so they reduce search breadth. At the same time, for people with high knowledge uncertainty, information about products and their attributes is new and of high value, so they search each alternative in more depth because they have an incentive to acquire more information. When people have high choice uncertainty, they tend to search for information about more alternatives, because increased search breadth improves their chances of finding a better alternative. On the other hand, since human cognitive capacity is limited, increased search breadth (more alternatives) reduces the depth of information search for each alternative: consumers with high choice uncertainty spend less time and effort on each alternative because considering more alternatives increases their utility. LBS provides users with the capability to screen alternatives based on the distance from them, which reduces information search costs. Therefore, LBS is expected to help users consider more alternatives even when they have high knowledge uncertainty. LBS also provides distance information, which helps users choose alternatives appropriate for them; users will therefore perceive lower choice uncertainty when they use LBS. To test the hypotheses, we selected 80 students and assigned them to one of two experimental groups. One group was asked to use LBS to search for surrounding restaurants, and the other group was asked to search for nearby restaurants without LBS. The experimental tasks and measurement items were validated in a pilot experiment. The final measurement items are shown in Appendix A.
Each subject was asked to read one of the two scenarios - with or without LBS - and use a smartphone application to pick a restaurant. All behavior on the smartphone was recorded using a recording application. Search breadth was measured by the number of restaurants clicked by each subject. Search depth was measured by two metrics: the average number of sub-level pages each subject visited and the average time spent on each restaurant. The hypotheses were tested using SPSS and PLS. The results show that knowledge uncertainty reduces search breadth (H1a). However, there was no significant correlation between knowledge uncertainty and search depth (H1b). Choice uncertainty significantly reduces search depth (H2b), but no significant relationship was found between choice uncertainty and search breadth (H2a). LBS information filtering significantly reduces buyers' choice uncertainty (H4) and weakens the negative relationship between knowledge uncertainty and search breadth (H3). This research provides some important implications for service providers, who should use different strategies based on their service properties. Service providers that are not well known to consumers (high knowledge uncertainty) should encourage their customers to use LBS, because LBS increases buyers' consideration sets when knowledge uncertainty is high; less-known services thus have a chance of being included in consumers' consideration sets with LBS. On the other hand, LBS information filtering decreases choice uncertainty, and nearby service providers are more likely to be selected than they would be without LBS. Hence, service providers should analyze the strengths of geographically proximate competitors and try to reduce the gap so that they have a chance of being included in the consideration set.
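For concreteness, the following is a minimal sketch of how search breadth and depth could be computed from recorded click logs; the log fields (restaurant id, sub-page, seconds spent) are illustrative assumptions, not the study's actual recording format:

    # Minimal sketch: search breadth and depth from click logs.
    # Event fields (restaurant_id, sub_page, seconds) are assumed for illustration.
    from collections import defaultdict

    events = [("r1", "menu", 12), ("r1", "reviews", 30),
              ("r2", "menu", 8), ("r3", "menu", 15), ("r3", "photos", 20)]

    pages = defaultdict(set)
    seconds = defaultdict(int)
    for rid, page, sec in events:
        pages[rid].add(page)
        seconds[rid] += sec

    breadth = len(pages)                                      # distinct restaurants clicked
    avg_pages = sum(len(p) for p in pages.values()) / breadth # avg sub-pages per restaurant
    avg_time = sum(seconds.values()) / breadth                # avg seconds per restaurant
    print(breadth, avg_pages, avg_time)                       # 3, ~1.67, ~28.3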
In this study, we sought the optimal angle for a modified tangential projection of the patella. In the experiment, we used Kyoto Kagaku's PBU-50 phantom. In the supine position, the F-T angle was set to 95°, 105°, 115°, 125°, 135°, and 145°, and patellar tangential projection images were obtained by varying the X-ray tube angle in 5° steps so that the angle between the X-ray central beam and the tibia at each knee angle ranged from 5° to 20°. ImageJ was used for image analysis, and the congruence angle, lateral patellofemoral angle, patellofemoral index, and contrast-to-noise ratio (CNR) were measured. SPSS 22 was used for statistical analysis, and the mean values of the congruence angle, patellofemoral angle, patellofemoral index, and CNR were compared with the Merchant method through one-way ANOVA and paired-sample t-tests. As a result, the congruence angle at knee-angle/X-ray incidence combinations of 105°-72.5° (20° tangential irradiation), 115°-72.5° and 77.5° (15° and 20° tangential irradiation), and 125°-82.5° (20° tangential irradiation), the lateral patellofemoral angle at 115°-72.5° and 77.5° (15° and 20° tangential irradiation) and 125°-72.5° (10° tangential irradiation), and the patellofemoral index at 115°-72.5° (15° tangential irradiation) and 125°-72.5° (10° tangential irradiation) were not significantly different from the Merchant method (p > .05). For CNR, there was no difference from the Merchant method at 105°-67.5° and 72.5° (15° and 20° tangential irradiation) and 115°-67.5°, 72.5°, and 77.5° (10°, 15°, and 20° tangential irradiation) (p > .05). Based on these results, it was confirmed that images of high diagnostic value can be obtained by setting the knee angle and the X-ray tube incidence angle to 115°-72.5° (15° tangential irradiation) during the modified tangential examination of the patella.
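As an aside on the CNR metric used above, here is a minimal sketch of one common ROI-based definition; the ROI pixel values and the exact formula variant are assumptions for illustration, and the study may have used a different variant:

    # Minimal sketch: ROI-based contrast-to-noise ratio (CNR).
    # Common variant: CNR = |mean(signal) - mean(background)| / std(background).
    # ROI pixel values are hypothetical, not taken from the study images.
    import numpy as np

    signal_roi = np.array([182.0, 179.5, 185.2, 181.1])      # patella ROI
    background_roi = np.array([120.3, 118.9, 122.4, 119.7])  # soft-tissue ROI

    cnr = abs(signal_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)
    print(f"CNR = {cnr:.2f}")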
Research on the investment determinants of accelerators, which have attracted attention by greatly improving the survival rate of startups through providing professional incubation and investment at the same time, is gradually expanding. However, previous studies lack a theoretical basis for developing investment determinants, borrow factors from angel investors or venture capital firms as similar investor types, and are still at the stage of analyzing importance and priority through empirical research. Therefore, this study verified, for the first time in Korea, the discriminatory power and effectiveness of accelerator investment determinants developed in previous work on the basis of the business model innovation framework. To this end, we first set the criteria for success and failure of startup investment based on scale-up theory and surveyed 22 investment experts from 14 accelerators in Korea, obtaining valid data on a total of 97 startups (52 that succeeded in scaling up and 45 that failed), and an independent-sample t-test was conducted to verify the mean difference between these two groups on each accelerator investment determinant. The analysis confirmed that accelerator investment determinants based on the business model innovation framework have considerable discriminatory power for finding successful startups and making investment decisions. In addition, when manufacturing-related and service-related startups were analyzed separately to account for industry differences in innovation, manufacturing-related startups differed in business model, strategy, and dynamic capability factors, while service-related startups differed in dynamic capabilities. This study has great academic significance in that it verified, for the first time in Korea, the practical effectiveness of accelerator investment determinants derived from the business model innovation framework, and it has high practical value in that it can support effective investment by providing theoretical grounds and detailed information for investment decisions.
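The group comparison described above can be sketched as an independent-sample t-test; the determinant scores below are hypothetical 5-point survey ratings, not the study's data:

    # Minimal sketch: independent-sample t-test comparing the two groups on one
    # investment determinant. Scores are hypothetical, not the study's data.
    import numpy as np
    from scipy import stats

    scaleup_success = np.array([4.2, 3.8, 4.5, 4.0, 4.4, 3.9])
    scaleup_failure = np.array([3.1, 3.4, 2.9, 3.3, 3.0, 3.5])

    # Welch's variant (equal_var=False) is a safe default for unequal group sizes.
    t_stat, p_value = stats.ttest_ind(scaleup_success, scaleup_failure, equal_var=False)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")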
In a market where new and used cars compete with each other, we run the risk of obtaining biased estimates of the cross elasticity between them if we focus only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies are limited in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that among new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes that there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage. The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009 to June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive value or category value parameter) was estimated to be greater than 1.
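For reference, a standard two-level nested logit of the kind described above (generic textbook notation, not necessarily the paper's exact specification) assigns choice probabilities

    P(j) = P(b)\, P(j \mid b), \qquad
    P(j \mid b) = \frac{\exp(V_j/\lambda_b)}{\sum_{k \in b} \exp(V_k/\lambda_b)}, \qquad
    P(b) = \frac{\exp(\lambda_b\, IV_b)}{\sum_{b'} \exp(\lambda_{b'}\, IV_{b'})},
    \qquad IV_b = \ln \sum_{k \in b} \exp(V_k/\lambda_b),

where b indexes nests (here, car models), V_j is the deterministic utility of alternative j (new or used), IV_b is the inclusive value of nest b, and lambda_b is the dissimilarity parameter. Consistency with random utility maximization requires 0 < lambda_b <= 1, which is why an estimate greater than 1 signals mis-specification.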
Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczo's iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share, maintaining the status quo; the new car then settles down to a lower market share due to the used car's reaction. The method enables us to find the amount of price discount that maintains the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reacting cars respond to price promotion so as to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and consequently suggests a less aggressive used-car price discount in response to a new car's rebate than the proposed nested logit model does. In the second simulation, I used Elantra to reconfirm the result for Jetta and came to the same conclusion. In the third simulation, I had Corolla offer a $1,000 rebate to see what the best response would be for Elantra's new and used cars. Interestingly, Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). Future research might explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure induced by the typical car dealership. In a typical dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships carrying both new and used cars of various models, the NUB model might fit the data as well as the BNU model. Which model better describes the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB models on a new data set.
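To make the reaction process concrete, here is a minimal sketch of this kind of iterative price-response loop, with a simple three-alternative logit (new, used, outside option) standing in for the estimated nested logit; all coefficients and prices are hypothetical:

    # Minimal sketch of the iterative price reaction described above. A simple
    # three-alternative logit stands in for the estimated nested logit; all
    # coefficients and prices are hypothetical, not the paper's estimates.
    import math

    BETA = 0.3  # price sensitivity, prices in $1,000s

    def used_share(p_new, p_used, rebate=0.0):
        v_new = 6.0 - BETA * (p_new - rebate)             # hypothetical utilities
        v_used = 4.0 - BETA * p_used
        denom = math.exp(v_new) + math.exp(v_used) + 1.0  # +1.0: outside option
        return math.exp(v_used) / denom

    p_new, p_used = 18.0, 12.0
    target = used_share(p_new, p_used)                    # status-quo share

    rebate = 1.0                                          # new car offers $1,000 rebate
    price = p_used
    while used_share(p_new, price, rebate) < target:
        price -= 0.01                                     # used car cuts price in $10 steps

    print(f"Discount restoring status quo: ${(p_used - price) * 1000:.0f}")
    # Smaller than the $1,000 rebate, since the outside option absorbs some share.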
Recommender systems have become one of the most important technologies in e-commerce today. The ultimate reason to shop online, for many consumers, is to reduce the effort of information search and purchase, and recommender systems are a key technology for serving these needs. Many past studies of recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful one. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. In order to generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users; for new users who do not have any such information, CF cannot come up with recommendations (cold-start problem). As the numbers of products and customers increase, the scale of the data increases exponentially and most of the data cells are empty; this sparse dataset makes computation for recommendation extremely hard (sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, CF becomes inaccurate if there are many users with rare and unique tastes (gray sheep problem). This study proposes a new algorithm that utilizes Social Network Analysis (SNA) techniques to resolve the gray sheep problem. We utilize 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality in SNA refers to the number of direct links to and from a node. In a network of users connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from them; therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two parts, gray sheep and others, based on the degree centrality of the users, and then apply different similarity measures and recommendation methods to the two datasets. The detailed algorithm is as follows:
Step 1: Convert the initial data, which is a two-mode network (user to item), into a one-mode network (user to user).
Step 2: Calculate the degree centrality of each node and separate the nodes whose degree centrality is lower than a pre-set threshold. The threshold value is determined by simulations such that the accuracy of CF on the remaining dataset is maximized.
Step 3: Apply an ordinary CF algorithm to the remaining dataset.
Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them; a 'popular item' method is used to generate recommendations for these users.
The F measures of the two datasets are weighted by the numbers of nodes and summed to serve as the final performance metric. To test the performance improvement of this new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data from the GroupLens research team. We used 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm utilizing the 'Best-N-neighbors' approach and the cosine similarity measure. The empirical results show that the F measure was improved by about 11% on average when the proposed algorithm was used.
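A minimal sketch of the degree-centrality split at the heart of this algorithm might look as follows; the toy ratings matrix, the co-rating rule for building the one-mode network, and the fixed threshold are illustrative assumptions (the paper tunes the threshold by simulation):

    # Minimal sketch of the degree-centrality split (Steps 1-4 above). The toy
    # ratings, the co-rating rule for linking users, and the fixed threshold are
    # assumptions for illustration, not the paper's exact procedure.
    import numpy as np

    # rows = users, cols = items; 0 means "not rated"
    ratings = np.array([
        [5, 4, 0, 0, 0],
        [4, 5, 0, 0, 0],
        [0, 0, 3, 4, 0],
        [0, 0, 4, 5, 0],
        [1, 0, 0, 0, 5],   # user with a rare taste (gray sheep candidate)
    ])

    # Step 1: two-mode (user-item) -> one-mode (user-user): link users who
    # co-rated at least MIN_COMMON items.
    MIN_COMMON = 2
    rated = (ratings > 0).astype(int)
    common = rated @ rated.T           # counts of co-rated items per user pair
    np.fill_diagonal(common, 0)
    adjacency = common >= MIN_COMMON

    # Step 2: degree centrality = number of direct links per user.
    degree = adjacency.sum(axis=1)
    THRESHOLD = 1
    gray_sheep = np.where(degree < THRESHOLD)[0]
    others = np.where(degree >= THRESHOLD)[0]

    # Step 3: an ordinary CF algorithm (e.g., cosine-similarity neighbors)
    # would be applied to `others`.
    # Step 4: recommend globally popular items to the gray sheep.
    popularity = rated.sum(axis=0)
    popular_items = np.argsort(-popularity)[:2]
    print("gray sheep:", gray_sheep, "-> recommend items", popular_items)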