Title/Summary/Keyword: 3-D ANALYSIS


A Study on the Prevention of Long-term Care and Self-reliance Support for the Elderly at Home: Proposal of a Prevention and Self-reliance Support Model (재가노인의 장기요양예방과 자립지원에 관한 연구: 예방·자립지원 모형설계 방안제언)

  • Kim, Hyun-Sil; Hwang, Sung-Ja
    • 한국노년학, v.30 no.4, pp.1359-1375, 2010
  • Anticipating the expansion of the elderly population under long-term home care as society ages, this study aimed to propose a prevention and self-reliance support model and to derive practical implications for minimizing dependency on care benefits and enhancing the effectiveness of prevention and self-reliance support. The research methods employed were: first, reviewing theoretical literature to clarify the concept of prevention and self-reliance support in providing long-term care benefits for the elderly; second, identifying factors hindering prevention and self-reliance support by analyzing standard long-term care use plans and documents related to long-term care benefits at the elderly welfare centers to which the research subjects belonged; and third, surveying care benefit users on the factors hindering their use of prevention and self-reliance support and on their needs in the use of care benefits. Based on the results of these three types of qualitative research, we proposed directions for prevention and self-reliance support modeling and suggested practical implications for enhancing its effectiveness. For this study, we collected documentary materials and conducted in-depth interviews with the participants with the consent and cooperation of managers and professional social workers at day care centers and elderly welfare centers in D City. According to the results, the literature review suggested that long-term care prevention and self-reliance support should be provided by 'strengthening user-centered support systems' that uphold elderly long-term care beneficiaries' right to lead their lives as the subjects of their own lives. The document analysis found an absence of benefits related to health and medicine and a lack of social support systems for prevention and self-reliance support. The in-depth interviews suggested the need to strengthen services related to beneficiaries' prevention and self-reliance; the keen needs of the long-term care elderly included: ① loneliness, anxiety, and fear; ② longing for and worry about children and acquaintances; ③ moving and outings; ④ health and medical services and rehabilitation programs; ⑤ desire to use day care; ⑥ inconvenient house structures; ⑦ preferences regarding meal menus; and ⑧ the occurrence of disuse syndrome. Based on these results, we proposed a base for prevention and self-reliance support modeling with three axes: ① strengthening user-centered support systems; ② strengthening support systems connected to health and medicine; and ③ strengthening social support systems.

Evaluation of Application Possibility for Floating Marine Pollutants Detection Using Image Enhancement Techniques: A Case Study for Thin Oil Film on the Sea Surface (영상 강화 기법을 통한 부유성 해양오염물질 탐지 기술 적용 가능성 평가: 해수면의 얇은 유막을 대상으로)

  • Soyeong Jang; Yeongbin Park; Jaeyeop Kwon; Sangheon Lee; Tae-Ho Kim
    • Korean Journal of Remote Sensing, v.39 no.6_1, pp.1353-1369, 2023
  • In the event of a disaster at sea, the scale of damage varies with weather effects such as wind, currents, and tides, so damage must be minimized by establishing appropriate control plans through quick on-site identification. In particular, pollutants that form a thin film on the sea surface are difficult to identify because of their relatively low viscosity and surface tension compared with other pollutants discharged into the sea. Therefore, this study aims to develop an algorithm that detects floating pollutants on the sea surface in RGB images captured with imaging equipment that can be easily used in the field, and to evaluate the algorithm's performance using input data obtained from actual waters. The developed algorithm uses image enhancement techniques to improve the contrast between the intensity values of pollutants and the general sea surface; through histogram analysis, a background threshold is found, suspended solids other than pollutants are removed, and finally the pollutants are classified. To evaluate the performance of the developed algorithm, a real-sea test using substitute materials was performed: most of the floating marine pollutants were detected, although false detections occurred in places with strong waves. Even so, the detection results are about three times better than those of the existing method that uses a single threshold. The results of this R&D are expected to be useful for on-site control and response activities by detecting floating marine pollutants that were previously difficult to identify with the naked eye.
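The detection pipeline described above (contrast enhancement, histogram-based background thresholding, removal of non-pollutant floaters) can be sketched roughly as follows. The specific operators here (CLAHE, Otsu's threshold, connected-component area filtering) and the minimum-area parameter are illustrative assumptions, not the authors' exact choices.

```python
# Minimal sketch of the described pipeline, under assumed operator choices.
import cv2
import numpy as np

def detect_thin_oil_film(rgb_image: np.ndarray, min_area: int = 500) -> np.ndarray:
    """Return a binary mask of candidate floating-pollutant pixels."""
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2GRAY)

    # 1) Image enhancement: stretch local contrast between film and sea surface.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)

    # 2) Histogram analysis: Otsu's method picks the background threshold.
    _, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 3) Remove small suspended solids: keep only large connected components.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    out = np.zeros_like(mask)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            out[labels == i] = 255
    return out
```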

Development of a Simultaneous Analytical Method for Determination of Insecticide Broflanilide and Its Metabolite Residues in Agricultural Products Using LC-MS/MS (LC-MS/MS를 이용한 농산물 중 살충제 Broflanilide 및 대사물질 동시시험법 개발)

  • Park, Ji-Su; Do, Jung-Ah; Lee, Han Sol; Park, Shin-min; Cho, Sung Min; Kim, Ji-Young; Shin, Hye-Sun; Jang, Dong Eun; Jung, Yong-hyun; Lee, Kangbong
    • Journal of Food Hygiene and Safety, v.34 no.2, pp.124-134, 2019
  • An analytical method was developed for the determination of broflanilide and its metabolites in agricultural products. Sample preparation was conducted using the QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method, with analysis by LC-MS/MS (liquid chromatography-tandem mass spectrometry). The analytes were extracted with acetonitrile and cleaned up using d-SPE (dispersive solid-phase extraction) sorbents such as anhydrous magnesium sulfate, primary secondary amine (PSA), and octadecyl (C18). The limit of detection (LOD) and limit of quantification (LOQ) were 0.004 and 0.01 mg/kg, respectively. Recoveries for broflanilide, DM-8007, and S(PFP-OH)-8007 ranged from 90.7 to 113.7%, 88.2 to 109.7%, and 79.8 to 97.8% at different concentration levels (LOQ, 10LOQ, 50LOQ), with relative standard deviations (RSD) below 8.8%. Inter-laboratory recoveries for broflanilide, DM-8007, and S(PFP-OH)-8007 ranged from 86.3 to 109.1%, 87.8 to 109.7%, and 78.8 to 102.1%, with RSD values below 21%. All values met the criteria of the Codex guidelines (CAC/GL 40-1993, 2003) and the Food and Drug Safety Evaluation guidelines (2016). Therefore, the proposed analytical method is accurate, effective, and sensitive for broflanilide determination in agricultural commodities.
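For reference, the validation arithmetic behind the recovery and RSD figures above is straightforward; a minimal sketch with invented replicate measurements (the numbers and function name below are hypothetical, chosen only to illustrate the computation):

```python
# Percent recovery and RSD from fortified-sample replicates (illustrative data).
import statistics

def recovery_and_rsd(measured: list[float], fortified: float) -> tuple[float, float]:
    """Mean percent recovery and relative standard deviation (RSD, %)."""
    recoveries = [100.0 * m / fortified for m in measured]
    mean_rec = statistics.mean(recoveries)
    rsd = 100.0 * statistics.stdev(recoveries) / mean_rec
    return mean_rec, rsd

# Hypothetical replicates (mg/kg) at the 10*LOQ (0.1 mg/kg) fortification level.
mean_rec, rsd = recovery_and_rsd([0.095, 0.102, 0.098, 0.101, 0.097], fortified=0.1)
print(f"recovery {mean_rec:.1f}%, RSD {rsd:.1f}%")
# Compare against the guideline acceptance ranges for this level (level-dependent).
```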

Color Analyses on Digital Photos Using Machine Learning and KSCA - Focusing on Korean Natural Daytime/nighttime Scenery - (머신러닝과 KSCA를 활용한 디지털 사진의 색 분석 -한국 자연 풍경 낮과 밤 사진을 중심으로-)

  • Gwon, Huieun; Koo, Ja Joon
    • Trans-, v.12, pp.51-79, 2022
  • This study investigates methods for deriving colors that can serve as a reference for users such as designers or content creators who search web portal sites for online images with specific keywords for color planning. Two experiments were conducted. Digital scenery photos within the geographic scope of Korea were downloaded from web portal sites and analyzed to find out which colors describe daytime and nighttime. Machine learning was used to classify colors into daytime and nighttime classes, and KSCA was used to derive the color frequencies of daytime and nighttime photos; the two results were then compared and analyzed. Classifying the colors of daytime and nighttime photos with machine learning showed that, among colors assigned to a class with 51-100% probability, the area of daytime colors was approximately 2.45 times greater than that of nighttime colors. The colors of the daytime class were distributed by brightness around white, while those of the nighttime class were distributed around black. 647 colors belonged to the daytime class with over 70% probability and 252 to the nighttime class with over 70% probability, while the remaining 101 fell in the middle range (31-69%). Since few colors fell in the middle area, most colors were classified relatively clearly into day and night. The resulting color distributions of the daytime and nighttime classes provide the borderline color values between the two classes, which are separated by brightness. The KSCA frequency analysis showed that colors around yellow dominate the generally bright daytime photos, while colors around blue dominate the dark nighttime photos. In the daytime photos, the top 40% most frequent colors had low chroma and were almost achromatic; colors close to white and black showed the highest frequencies, indicating a large difference in brightness. Meanwhile, among the colors ranked 5th to 10th in frequency, yellow-green appeared darkly and navy blue brightly, partially composing a complex harmony. The color band showed various hues, brightnesses, and chromas, including light blue, achromatic colors, and warm colors, and did not compose a generally harmonious arrangement. In the nighttime photos, roughly the top 50% most frequent colors were dark, with a Munsell value of 2; the middle frequencies (50-80%) were relatively brighter (values of 3-4), and the brightness differences among colors were large in the bottom 20%. Colors other than cool colors appeared only intermittently in the bottom 8% of the frequency ranking. The color band showed a generally harmonious arrangement centered on navy blue. As the results of the two methods show, machine learning can classify colors into two or more classes and evaluate how close an image with certain colors is to a certain class; this method cannot be used if an image cannot be assigned to a class. The resulting color distribution can serve as a reference when deciding how close a dominant color, used as the base or background color of a design, is to one of the two classes.
Also, when the analyzed images are divided into several classes, even colors that did not appear in the analyzed images can be assessed for how close they are to a certain class, according to the color distribution of each class. Nevertheless, these results cannot show whether, or how much, a specific color was actually used in a class. To investigate that question, frequency analysis was conducted using KSCA, which measures color frequency within the range of images used in the experiment. The color distribution and frequency values from this study can serve as references for the color planning of digital designs involving natural scenery within the geographic scope of Korea. The two experiments are also meaningful attempts to find methods for deriving reference colors from numerous images for content creators in the relevant field.
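A minimal sketch of the first experiment's two-class setup, under assumed details: pixels sampled from labeled day/night photos, logistic regression as the classifier, and class probability as the "closeness" of a color to a class. The training data below are synthetic stand-ins, not the study's photo corpus.

```python
# Two-class color classification sketch with synthetic day/night pixels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical training pixels: brighter colors for day, darker ones for night.
day_pixels = rng.integers(100, 256, size=(1000, 3))
night_pixels = rng.integers(0, 120, size=(1000, 3))
X = np.vstack([day_pixels, night_pixels]).astype(float) / 255.0
y = np.array([1] * 1000 + [0] * 1000)  # 1 = daytime, 0 = nighttime

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score an arbitrary color: probability that it belongs to the daytime class.
color = np.array([[200, 180, 90]]) / 255.0  # a warm, bright yellow
print(f"P(daytime) = {clf.predict_proba(color)[0, 1]:.2f}")
```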

A Study on the Palsapum (八賜品, Eight-Bestowed Things), Treasure No. 440, in Tong-Yong Shrine to the Loyal Dead in Korea (보물 제440호 통영 충렬사 팔사품(八賜品) 연구)

  • Jang, Kyung-hee
    • Journal of Korean Historical Folklife, no.46, pp.195-237, 2014
  • The Palsapum are ornaments that reveal the authority of the commander of the three naval forces as well as symbols that commemorate the greatness of Admiral Yi, Sun-Shin. In 1966 they were designated as Treasure No. 440 on the basis of this value; however, they have received little attention from academia because they are relics from China. This study compares and analyzes documents, paintings, and relevant references from Korea and China focusing on the Palsapum, identifies their formal characteristics, and examines their historical value, such as the years and places of their creation. As a result, the study determines that five of them are original, while three were newly created by later generations. The five, Dodogin (都督印, commander's seal), Yeongpae (令牌, commander's tablet), Gwido (鬼刀, replica of the devil sword), Chamdo (斬刀, replica of the decapitation sword), and Gognapal (bugle), were created by the Ming Dynasty before 1598 and delivered by the hands of General Chen Lin. The other three, Dokjeongi (督戰旗, battle flag), Hongsoryeonggi (紅小令旗, commander's flag), and Namsoryeonggi (藍小令旗, commander's flag), were created in the 19th century by the Joseon Dynasty. After analyzing the former relics, the study determines that they are not official relics with the dignity of the Ming Dynasty but personal relics with regional characteristics; in other words, the Palsapum are not royal gifts from Emperor Shenzong to Admiral Yi, Sun-Shin but personal mementoes left by General Chen Lin at the Tongjeyoung to celebrate the admiral. The names, variety, numbers, and appurtenances of the Palsapum have changed over time as follows. First, the scholars of Joseon in the 17th century focused only on the Dodogin. It was certainly created in the Ming Dynasty; however, it was a personal stamp, and is therefore considered to have come not from the emperor but from General Chen Lin. Second, in the 18th century they were called Palsamul and consisted of 14 pieces of 8 kinds; this is confirmed in the 「Dosul (圖說, illustrated stories)」 of the 『Yi Chungmugong Literary Collection』. The sizes of the five relics including the Dodogin are similar to the records, but their patterns and shapes are exotic, or cannot be found in Joseon; thus, they reflect the regional characteristics of Guangdong province. Third, in the 19th century they were called Palsapum and consisted of 15 pieces of 8 kinds; this is confirmed by a sixteen-fold folding screen drawn by Shin, Gwan-Ho in 1861. The stamp box, tablet bag, and three flags were newly created, engraving Joseon-style letters and patterns on perishable materials such as leather and cloth; the relics most easily damaged have continued to be renewed even after the 19th century. Last, there are many misunderstandings about the Palsapum owing to governmental indifference and improper management of records, even though they were designated as a treasure very early. The authorities should therefore attend to the Palsapum and provide measures for the stable maintenance of the relics; this will let people remember not only the history of cooperation between Korea and China to stop Japanese ambition, but also Admiral Yi, Sun-Shin and General Chen Lin, who brought victory in the Japanese invasions of Korea.

Effects of Motion Correction for Dynamic [11C]Raclopride Brain PET Data on the Evaluation of Endogenous Dopamine Release in Striatum (동적 [11C]Raclopride 뇌 PET의 움직임 보정이 선조체 내인성 도파민 유리 정량화에 미치는 영향)

  • Lee, Jae-Sung; Kim, Yu-Kyeong; Cho, Sang-Soo; Choe, Yearn-Seong; Kang, Eun-Joo; Lee, Dong-Soo; Chung, June-Key; Lee, Myung-Chul; Kim, Sang-Eun
    • The Korean Journal of Nuclear Medicine, v.39 no.6, pp.413-420, 2005
  • Purpose: Neuroreceptor PET studies require 60-120 minutes to complete, and head motion of the subject during the PET scan increases the uncertainty in the measured activity. In this study, we investigated the effects of data-driven head motion correction on the evaluation of endogenous dopamine release (DAR) in the striatum during a motor task that might have caused significant head motion artifacts. Materials and Methods: [11C]raclopride PET scans of 4 normal volunteers acquired with a bolus-plus-constant-infusion protocol were retrospectively analyzed. Following a 50 min resting period, the participants played a video game with a monetary reward for 40 min. Dynamic frames acquired during the equilibrium condition (pre-task: 30-50 min, task: 70-90 min, post-task: 110-120 min) were realigned to the first frame of the pre-task condition. Intra-condition registrations between the frames were performed, and an average image for each condition was created and registered to the pre-task image (inter-condition registration). The pre-task PET image was then co-registered to each participant's own MRI, and the transformation parameters were reapplied to the other images. Volumes of interest (VOI) for the dorsal putamen (PU), caudate (CA), ventral striatum (VS), and cerebellum were defined on the MRI. Binding potential (BP) was measured, and DAR was calculated as the percent change of BP during and after the task. SPM analyses of the BP parametric images were also performed to explore regional differences in the effects of head motion on BP and DAR estimation. Results: Changes in the position and orientation of the striatum during the PET scans were observed before head motion correction. BP values in the pre-task condition were not changed significantly by the intra-condition registration. However, the BP values during and after the task, and the DAR, changed significantly after the correction. SPM analysis also showed that the extent and significance of the BP differences were changed considerably by the head motion correction, and these changes were prominent at the periphery of the striatum. Conclusion: The results suggest that misalignment between the MRI-based VOI and the striatum in the PET images, and the resulting incorrect DAR estimation due to head motion during the PET activation study, were significant but could be remedied by data-driven head motion correction.
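The DAR quantification described above reduces to a percent change in binding potential relative to the pre-task baseline; a minimal sketch with hypothetical BP values (the function name and numbers are illustrative, not the study's measurements):

```python
# DAR as the percent decrease in binding potential (BP) from baseline.
def dopamine_release(bp_pre: float, bp_condition: float) -> float:
    """Percent decrease in BP relative to the pre-task (resting) baseline."""
    return 100.0 * (bp_pre - bp_condition) / bp_pre

# Hypothetical VOI values for the dorsal putamen before and during the task.
bp_pre, bp_task = 2.8, 2.5
print(f"DAR(task) = {dopamine_release(bp_pre, bp_task):.1f}%")  # ~10.7%
```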

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim; Kwon, Oh-Byung
    • Asia Pacific Journal of Information Systems, v.20 no.2, pp.63-86, 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, and that is the cornerstone of personalized, context-specific services. Yet the danger of overflowing personal information is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also increased. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous treatments of information privacy factors in context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing: existing studies have focused on only a small subset of them, so there has been no mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, most studies have relied on user surveys to identify information privacy factors, despite the limitations of users' knowledge and experience of context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge about context-aware technology even with scenarios, pictures, flash animations, etc. A survey that assumes the participants have sufficient experience or understanding of the technologies shown may therefore not be valid. Moreover, some surveys rest on simplifying and hence unrealistic assumptions (e.g., considering only location information as context data). A better understanding of information privacy concern in context-aware personalized services is thus highly needed. Hence, the purpose of this paper is to identify a generic set of factors for information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. Throughout these rounds, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of mutually exclusive factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to assist them, some of the main factors found in the literature were presented. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor in order to determine the final sub-factors from the candidates; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence; to that end, a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with the highest potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Last, for each factor, it correlated the level of importance with professionals' opinions on the extent to which users have privacy concerns.
The traditional questionnaire method was not selected because users were considered to have an absolute lack of understanding of and experience with this new technology. To understand users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and sensor networks as the most important technological characteristics of context-aware personalized services. In the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology, and of which technologies are needed and in what sequence, to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, along with the development of context-aware technology. However, the results of this study show that, in terms of users' privacy, greater attention must be paid to the activities that acquire context information. Following the evaluation of the sub-factors, additional studies will be needed on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display of services to users in context-aware personalized services, moving toward the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
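The concordance analysis mentioned above is commonly carried out with Kendall's coefficient of concordance (W), which measures agreement among m experts ranking n items (W = 1 means perfect agreement); the paper does not name the exact statistic, so this is an assumed choice, and the rankings below are hypothetical:

```python
# Kendall's W sketch for checking expert agreement across Delphi rounds.
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """ranks: m x n matrix; each row is one expert's ranking of n factors (no ties)."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # spread of the column rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical rankings from 4 experts over 5 privacy-concern factors.
ranks = np.array([[1, 2, 3, 4, 5],
                  [1, 3, 2, 4, 5],
                  [2, 1, 3, 5, 4],
                  [1, 2, 4, 3, 5]])
print(f"Kendall's W = {kendalls_w(ranks):.2f}")  # ~0.81: strong consensus
```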

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal, v.14 no.1, pp.83-98, 2012
  • In a market where new and used cars compete with each other, we run the risk of obtaining biased estimates of the cross elasticity between them if we focus on only new cars or only used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate predictions of reactive pricing in response to competitors' rebates or price discounts. There are, however, some exceptions. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices, but their studies are limited in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply a nested logit model with car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy, i.e., that new and used cars of different models are all substitutable at the first stage. The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009 - June 2009. The new and used cars of the top nine selling models are included: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application showed that the proposed nested logit model outperformed the IIA model in both the calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczo's iterative method. This method is intuitively appealing: suppose a new car offers a certain rebate and gains market share at first. In response, the used car of the same model keeps decreasing its price until it regains the lost market share and restores the status quo; the new car then settles down to a lowered market share due to the used car's reaction. The method enables us to find the amount of price discount that maintains the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reacting cars respond to price promotions so as to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, suggesting a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see the best response for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). Future research might explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility, even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification, or due to the data structure transmitted from a typical car dealership, where both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since in this market environment customers first choose a dealership (brand) and then choose between new and used cars. However, if there were dealerships carrying both new and used cars of various models, the NUB model might fit the data as well as the BNU model; which model better describes the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture of the BNU and NUB models on a new data set.
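To make the two-stage (BNU) structure concrete, a minimal sketch of nested logit choice probabilities follows. The utilities, the dissimilarity parameter value, and the helper name are hypothetical illustrations, not the paper's estimates; the point is only how the within-nest probabilities and inclusive values combine.

```python
# Nested logit sketch: model choice at the top level, new-vs-used within each nest.
import numpy as np

def nested_logit(V: dict[str, dict[str, float]], lam: float):
    """V[model][condition] = utility; lam in (0, 1] is the dissimilarity parameter."""
    within, iv = {}, {}
    for model, utils in V.items():
        expu = {c: np.exp(v / lam) for c, v in utils.items()}
        denom = sum(expu.values())
        iv[model] = lam * np.log(denom)                    # inclusive value of the nest
        within[model] = {c: e / denom for c, e in expu.items()}  # P(condition | model)
    top = {m: np.exp(iv[m]) for m in V}                    # P(model) via inclusive values
    z = sum(top.values())
    # Joint probability P(model, condition) = P(model) * P(condition | model).
    return {m: {c: (top[m] / z) * p for c, p in within[m].items()} for m in within}

V = {"Jetta": {"new": 1.2, "used": 0.8}, "Elantra": {"new": 1.0, "used": 0.9}}
print(nested_logit(V, lam=0.5))  # an estimated lam > 1 would signal mis-specification
```

A smaller lam makes new and used cars of the same model closer substitutes than cars of different models, which is exactly the cross-elasticity asymmetry the study's model assumes.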
