• Title/Summary/Keyword: case tool

Search Results: 2,756 (Processing Time: 0.032 seconds)

Variation of Hospital Costs and Product Heterogeneity

  • Shin, Young-Soo
    • Journal of Preventive Medicine and Public Health
    • /
    • v.11 no.1
    • /
    • pp.123-127
    • /
    • 1978
  • The major objective of this research is to identify those hospital characteristics that best explain cost variation among hospitals and to formulate linear models that can predict hospital costs. Specific emphasis is placed on hospital output, that is, the identification of diagnosis related patient groups (DRGs) which are medically meaningful and demonstrate similar patterns of hospital resource consumption. A casemix index is developed based on the DRGs identified. Considering the common problems encountered in previous hospital cost research, the following study requirements are established for fulfilling the objectives of this research: 1. Selection of hospitals that exercise similar medical and fiscal practices. 2. Identification of an appropriate data collection mechanism from which demographic and medical characteristics of individual patients as well as accurate and comparable cost information can be derived. 3. Development of a patient classification system in which all the patients treated in hospitals can be split into mutually exclusive categories with consistent and stable patterns of resource consumption. 4. Development of a cost finding mechanism through which patient groups' costs can be made comparable across hospitals. A data set of Medicare patients prepared by the Social Security Administration was selected for the study analysis. The data set contained 27,229 record abstracts of Medicare patients discharged from all but one short-term general hospital in Connecticut during the period from January 1, 1971, to December 31, 1972. Each record abstract contained demographic and diagnostic information, as well as charges for specific medical services received. The AUTOGRP System was used to generate 198 DRGs in which the entire range of Medicare patients was split into mutually exclusive categories, each of which shows a consistent and stable pattern of resource consumption. The Departmental Method was used to generate cost information for the groups of Medicare patients that would be comparable across hospitals. To fulfill the study objectives, an extensive analysis was conducted in the following areas: 1. Analysis of DRGs, in which the level of resource use of each DRG was determined, the length of stay or death rate of each DRG in relation to resource use was characterized, and underlying patterns of the relationships among DRG costs were explained. 2. Exploration of resource use profiles of hospitals, in which the magnitude of differences in the resource use or death rates incurred in the treatment of Medicare patients among the study hospitals was explored. 3. Casemix analysis, in which four types of casemix-related indices were generated, and the significance of these indices in the explanation of hospital costs was examined. 4. Formulation of linear models to predict hospital costs of Medicare patients, in which nine independent variables (i.e., casemix index, hospital size, complexity of service, teaching activity, location, casemix-adjusted death rate index, occupancy rate, and casemix-adjusted length of stay index) were used for determining factors in hospital costs. Results from the study analysis indicated that: 1. The system of 198 DRGs for Medicare patient classification was demonstrated to be a strong tool not only for determining the pattern of hospital resource utilization of Medicare patients, but also for categorizing patients by their severity of illness. 2. The weighted mean total case cost (TOTC) of the study hospitals for Medicare patients during the study years was $1,127.02 with a standard deviation of $117.20. The hospital with the highest average TOTC ($1,538.15) was 2.08 times more expensive than the hospital with the lowest average TOTC ($743.45). The weighted mean per diem total cost (DTOC) of the study hospitals for Medicare patients during the study years was $107.98 with a standard deviation of $15.18. The hospital with the highest average DTOC ($147.23) was 1.87 times more expensive than the hospital with the lowest average DTOC ($78.49). 3. The linear models for each of the six types of hospital costs were formulated using the casemix index and the eight other hospital variables as the determinants. These models explained variance to the extent of 68.7 percent of total case cost (TOTC), 63.5 percent of room and board cost (RMC), 66.2 percent of total ancillary service cost (TANC), 66.3 percent of per diem total cost (DTOC), 56.9 percent of per diem room and board cost (DRMC), and 65.5 percent of per diem ancillary service cost (DTANC). The casemix index alone explained approximately one half of interhospital cost variation: 59.1 percent for TOTC and 44.3 percent for DTOC. These results demonstrate that the casemix index is the most important determinant of interhospital cost variation. Future research and policy implications of the results of this study are envisioned in the following three areas: 1. Utilization of casemix-related indices in the Medicare data systems. 2. Refinement of data for hospital cost evaluation. 3. Development of a system for reimbursement and cost control in hospitals.
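As a rough illustration of the kind of linear cost model described in this abstract, the sketch below fits TOTC on a few of the determinants by ordinary least squares. The data, coefficients, and the abridged set of three variables are synthetic placeholders, not values from the study.

```python
import numpy as np

# Synthetic stand-ins for the study's hospital-level variables (illustrative only).
rng = np.random.default_rng(42)
n = 35                                  # hypothetical number of study hospitals
casemix = rng.normal(1.0, 0.15, n)      # casemix index built from the 198 DRGs
beds = rng.integers(50, 600, n)         # hospital size
teaching = rng.integers(0, 2, n)        # teaching activity indicator
totc = 700 + 600 * casemix + 0.4 * beds + 80 * teaching + rng.normal(0, 60, n)

# Ordinary least squares for TOTC on the abridged set of determinants.
X = np.column_stack([np.ones(n), casemix, beds, teaching])
coef, *_ = np.linalg.lstsq(X, totc, rcond=None)
r2 = 1 - ((totc - X @ coef) ** 2).sum() / ((totc - totc.mean()) ** 2).sum()
print("coefficients:", np.round(coef, 2), " R^2:", round(r2, 3))
```

The R^2 printed here plays the same role as the "percent of variance explained" figures reported in the abstract, only on made-up data.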

  • PDF

A Case Study of Software Architecture Design by Applying the Quality Attribute-Driven Design Method (품질속성 기반 설계방법을 적용한 소프트웨어 아키텍처 설계 사례연구)

  • Suh, Yong-Suk;Hong, Seok-Boong;Kim, Hyeon-Soo
    • The KIPS Transactions:PartD
    • /
    • v.14D no.1 s.111
    • /
    • pp.121-130
    • /
    • 2007
  • In software development, designing the architecture prior to implementation is essential for success. This paper presents a case in which we successfully designed the software architecture of a radiation monitoring system (RMS) for the HANARO research reactor currently operating in KAERI, by applying a quality attribute-driven design method modified from the attribute-driven design (ADD) introduced by Bass [1]. The quality attribute-driven design method consists of the following procedures: eliciting functionality and quality requirements of the system as architecture drivers, selecting tactics to satisfy the drivers, determining architectures based on the tactics, and implementing and validating the architectures. Availability, maintainability, and interchangeability were elicited as quality requirements; hot-standby dual servers and weakly coupled modularization were selected as tactics; and a client-server structure and an object-oriented data processing structure were determined as architectures for the RMS. The architecture was implemented using Adroit, a commercial off-the-shelf software tool, and was validated by performing function-oriented testing. We found that the design method in this paper is an efficient method for a project with constraints such as a low budget and a short development period. The architecture will be reused for the development of other RMSs in KAERI. Further work is necessary to quantitatively evaluate the architecture.
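A schematic sketch of the hot-standby availability tactic named above, written in Python purely for illustration; the actual RMS was implemented with the commercial Adroit tool, so the class, timing values, and server interfaces here are hypothetical.

```python
import time

class HotStandbyPair:
    """Route requests to the primary node; fail over to the standby on a missed heartbeat."""

    def __init__(self, primary, standby, heartbeat_timeout=2.0):
        self.servers = [primary, standby]
        self.active = 0                          # index of the node currently serving
        self.timeout = heartbeat_timeout
        self.last_beat = time.monotonic()

    def heartbeat(self):
        """Called periodically by the primary to signal that it is alive."""
        self.last_beat = time.monotonic()

    def handle(self, reading):
        # Switch to the standby if the primary has not sent a heartbeat recently.
        if self.active == 0 and time.monotonic() - self.last_beat > self.timeout:
            self.active = 1
        return self.servers[self.active](reading)

# Toy usage: two stand-in "servers" that tag which node processed a radiation reading.
pair = HotStandbyPair(lambda r: ("primary", r), lambda r: ("standby", r))
print(pair.handle(0.12))   # ('primary', 0.12)
time.sleep(2.1)            # heartbeat deadline passes without pair.heartbeat() being called
print(pair.handle(0.15))   # ('standby', 0.15)
```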

The Development of Theoretical Model for Relaxation Mechanism of Superparamagnetic Nano Particles (초상자성 나노 입자의 자기이완 특성에 관한 이론적 연구)

  • 장용민;황문정
    • Investigative Magnetic Resonance Imaging
    • /
    • v.7 no.1
    • /
    • pp.39-46
    • /
    • 2003
  • Purpose: To develop a theoretical model for the magnetic relaxation behavior of a superparamagnetic nano-particle agent, which demonstrates multi-functionality such as liver- and lymph node-specificity. Based on the developed model, a computer simulation was performed to clarify the relationship between relaxation time and the applied magnetic field strength. Materials and Methods: The ultrasmall superparamagnetic iron oxide (USPIO) was encapsulated with a biocompatible polymer, and a relaxation model was developed based on the outer-sphere mechanism resulting from diffusion and/or electron spin fluctuation. In addition, the Brillouin function was introduced to describe the full magnetization, considering the fact that the low-field approximation adopted in the paramagnetic case is no longer valid. The developed model therefore describes the T1 and T2 relaxation behavior of superparamagnetic iron oxide both at low field and at high field. Based on our model, a computer simulation was performed to test the relaxation behavior of the superparamagnetic contrast agent over various magnetic fields using MathCad (MathCad, U.S.A.), a symbolic computation software. Results: For the T1 and T2 magnetic relaxation characteristics of ultrasmall superparamagnetic iron oxide, the theoretical model showed that at low field (<1.0 MHz), $\tau_{S1}$ ($\tau_{S2}$ in the case of T2), a correlation time in the spectral density function, plays the major role. This suggests that realignment of the nano-magnetic particles is most important at low magnetic field. On the other hand, at high field, $\tau$, another correlation time in the spectral density function, plays the major role. Since $\tau$ is closely related to particle size, this suggests that the difference in R1 and R2 over particle sizes at high field results not from the realignment of the particles but from the particle size itself. Within the normal body temperature range, T1 and T2 relaxation times showed no change with temperature at high field. In particular, T1 showed less temperature dependence than T2. Conclusion: We developed a theoretical model of the magnetic relaxation behavior of ultrasmall superparamagnetic iron oxide (USPIO), which has been reported to show clinical multi-functionality, by utilizing the physical properties of the nano-magnetic particle. In addition, based on the developed model, a computer simulation was performed to investigate the relationship between the relaxation time of USPIO and the applied magnetic field strength.
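For reference, the Brillouin function mentioned above has the standard form below; it replaces the linear low-field (Curie) approximation when the particle magnetization approaches saturation. The particular spin quantum number and prefactors used by the authors are not stated in the abstract.

$$ B_J(x)=\frac{2J+1}{2J}\,\coth\!\left(\frac{(2J+1)\,x}{2J}\right)-\frac{1}{2J}\,\coth\!\left(\frac{x}{2J}\right), \qquad x=\frac{g\,\mu_B\,J\,B_0}{k_B T}, $$

so that the full magnetization is $M = N g \mu_B J\, B_J(x)$. For $x \ll 1$ this reduces to the low-field approximation $M \approx N g^2 \mu_B^2 J(J+1) B_0 / (3 k_B T)$, which is the approximation the abstract notes is no longer valid for superparamagnetic particles.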

  • PDF

A Digital Audio Watermarking Algorithm using 2D Barcode (2차원 바코드를 이용한 오디오 워터마킹 알고리즘)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.97-107
    • /
    • 2011
  • Nowadays there are many issues about copyright infringement in the Internet world, because digital content on the network can be copied and delivered easily, and the copied version has the same quality as the original. So, copyright owners and content providers want a powerful solution to protect their content. A popular solution was DRM (digital rights management), which is based on encryption technology and rights control. However, DRM-free services were launched after Steve Jobs, then CEO of Apple, proposed a new music service paradigm without DRM, and DRM has since disappeared from the online music market. Even though online music services decided not to adopt DRM, copyright owners and content providers are still searching for a solution to protect their content. A solution to replace DRM technology is digital audio watermarking, which can embed copyright information into the music. In this paper, the author proposes a new audio watermarking algorithm with two approaches. First, the watermark information is generated from a two-dimensional barcode, which has an error correction code, so the information can be recovered by itself if the errors fall within the range of the error tolerance. The second is to use the chip sequences of CDMA (code division multiple access). These make the algorithm robust to several malicious attacks. There are many 2D barcodes; in particular, the QR code, one of the matrix barcodes, can express information more freely than the other matrix barcodes. The QR code has nested square finder patterns at three of its corners, and these indicate the boundary of the symbol. This feature of the QR code is suitable for expressing the watermark information. That is, because the QR code is a two-dimensional, nonlinear matrix code, it can be modulated into a spread-spectrum signal and used for the watermarking algorithm. The proposed algorithm assigns a different spread-spectrum sequence to each individual user. When the assigned code sequences are orthogonal, we can identify the watermark information of an individual user from the audio content. The algorithm uses the Walsh code as the orthogonal code. The watermark information is rearranged from the 2D barcode into a 1D sequence and modulated by the Walsh code. The modulated watermark information is embedded into the DCT (discrete cosine transform) domain of the original audio content. For the performance evaluation, three audio samples were used: "Amazing Grace", "Oh! Carol", and "Take Me Home, Country Roads". The attacks for the robustness test were MP3 compression, an echo attack, and a subwoofer boost. The MP3 compression was performed with Cool Edit Pro 2.0, using CBR (constant bit rate) 128 kbps, 44,100 Hz, stereo. The echo attack applied an echo with 70% initial volume, 75% decay, and 100 msec delay. The subwoofer boost attack modified the low-frequency part of the Fourier coefficients. The test results showed that the proposed algorithm is robust to these attacks. In the MP3 attack, the strength of the watermark information was not affected, and the watermark could be detected in all of the sample audios. In the subwoofer boost attack, the watermark was detected when the strength was 0.3. Also, in the case of the echo attack, the watermark could be identified if the strength was greater than or equal to 0.5.
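A minimal sketch of the spread-spectrum embedding and detection idea described above, assuming Walsh-Hadamard rows from scipy as the orthogonal user codes and a single DCT over the whole signal. The paper's frame sizes, coefficient band, embedding strength, and QR-code handling are not specified here, so those choices are placeholders.

```python
import numpy as np
from scipy.linalg import hadamard     # Walsh-Hadamard matrix: orthogonal +/-1 rows
from scipy.fft import dct, idct       # type-II DCT and its inverse

START = 1000                          # placeholder offset into the DCT coefficients

def embed_watermark(audio, bits, user_idx, strength=0.5, order=64):
    """Spread the watermark bits with one user's Walsh row and add them to DCT coefficients."""
    walsh = hadamard(order)[user_idx]
    chips = np.concatenate([(2 * b - 1) * walsh for b in bits])   # BPSK-style spreading
    coeffs = dct(audio, norm='ortho')
    coeffs[START:START + chips.size] += strength * chips
    return idct(coeffs, norm='ortho')

def detect_watermark(audio, n_bits, user_idx, order=64):
    """Correlate DCT coefficients against the user's Walsh row to recover each bit."""
    walsh = hadamard(order)[user_idx]
    coeffs = dct(audio, norm='ortho')
    segs = coeffs[START:START + n_bits * order].reshape(n_bits, order)
    return (segs @ walsh > 0).astype(int)       # sign of the correlation decides the bit

# Toy usage: a random stand-in for an audio signal and an 8-bit payload for user 5.
rng = np.random.default_rng(0)
audio = rng.standard_normal(44100)
payload = rng.integers(0, 2, 8)
marked = embed_watermark(audio, payload, user_idx=5)
print("payload ", payload)
print("detected", detect_watermark(marked, len(payload), user_idx=5))
```

Orthogonality of the Walsh rows is what lets several users' watermarks coexist: correlating with a different user's row averages that user's chips toward zero.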

A Study on the Component-based GIS Development Methodology using UML (UML을 활용한 컴포넌트 기반의 GIS 개발방법론에 관한 연구)

  • Park, Tae-Og;Kim, Kye-Hyun
    • Journal of Korea Spatial Information System Society
    • /
    • v.3 no.2 s.6
    • /
    • pp.21-43
    • /
    • 2001
  • The environment for developing information systems, including GIS, has changed drastically in recent years with respect to the complexity and diversity of software, distributed processing, network computing, and so on. This has shifted the software development paradigm toward CBD (Component Based Development) founded on object-oriented technology. In support of these movements, the OGC has released abstract and implementation standards to enable access to services for heterogeneous geographic information processing. It is also a common trend in the domestic field to develop component-based GIS applications for municipal governments. Therefore, it is imperative to adopt component technology considering these movements, yet little related research has been carried out. This research proposes a component-based GIS development methodology, ATOM (Advanced Technology Of Methodology), and verifies its applicability through a case study. ATOM can be used as a methodology to develop components themselves as well as enterprise GIS, supporting the whole software development life cycle based on conventional reusable components. ATOM defines a stepwise development process comprising the activities and work units of each phase. It also provides inputs and outputs, standardized items and specifications for documentation, and detailed instructions for easy understanding of the development methodology. The major characteristic of ATOM is that it is a component-based development methodology that considers numerous features of the GIS domain to generate components with a simple function, the smallest size, and the maximum reusability. The case study to validate the applicability of ATOM showed that it is an efficient tool for generating components, providing relatively systematic and detailed guidelines for component development. Therefore, ATOM should promote the quality and productivity of developing GIS application software and eventually contribute to the automatic production of GIS software, our final goal.

  • PDF

Effects of Fiscal Policy on Labor Markets: A Dynamic General Equilibrium Analysis (조세·재정정책이 노동시장에 미치는 영향: 동태적 일반균형분석)

  • Kim, Sun-Bin;Chang, Yongsung
    • KDI Journal of Economic Policy
    • /
    • v.30 no.2
    • /
    • pp.185-223
    • /
    • 2008
  • This paper considers a heterogeneous-agent dynamic general equilibrium model and analyzes the effects of an increase in the labor income tax rate on the labor market and aggregate variables in Korea. The fiscal policy regarding how the government uses the additional tax revenue may take two forms: 1) a general transfer and 2) an earned income tax credit (EITC). The model features are as follows: 1) Workers are heterogeneous in their productivity. 2) Labor is indivisible; hence the analysis focuses on the variation in labor supply through the extensive margin in response to a change in fiscal policy. 3) Incomplete markets are introduced, so individual workers cannot perfectly insure themselves against risks related to stochastic changes in income or employment status. 4) The model is of general equilibrium; hence it is equipped to analyze the feedback effect of changes in aggregate variables on individual workers' decisions. In the case of the general transfer policy, the government distributes the additional tax revenue equally to all workers regardless of their employment status. Under this policy, an increase in the labor income tax rate dampens the work incentives of individual workers, so that the aggregate employment rate decreases by 1% compared with the benchmark economy. In the case of the EITC policy, only employed workers whose labor incomes are below a certain EITC ceiling are eligible for the EITC benefits. Unlike the general transfer policy, the EITC induces low-income workers to participate in the labor market in order to be eligible for EITC benefits. Hence, the aggregate employment rate may increase by 2.7% at the maximum. As the EITC ceiling increases, too many workers can collect the EITC but the benefit per worker becomes too small, so that the increase in the employment rate is negligible. By and large, this study demonstrates that the EITC may effectively raise the aggregate employment rate, and that it can be a useful policy tool in response to the decrease in the labor force due to population aging, as observed in Korea recently.
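The dilution effect described at the end of the abstract (raising the EITC ceiling spreads a fixed revenue over more eligible workers) can be seen in a toy calculation like the one below; the wage distribution, employment rate, revenue, and ceilings are made-up numbers, not the paper's calibration.

```python
import numpy as np

def per_worker_benefit(wages, employed, revenue, policy, eitc_ceiling=None):
    """Split a fixed amount of tax revenue under the two policies in the abstract."""
    if policy == "transfer":                        # lump-sum transfer to every worker
        return revenue / wages.size
    eligible = employed & (wages <= eitc_ceiling)   # EITC: employed and below the ceiling
    return revenue / max(eligible.sum(), 1)

rng = np.random.default_rng(1)
wages = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)   # made-up wage distribution
employed = rng.random(10_000) < 0.6                       # made-up employment status
revenue = 50_000.0

print("general transfer:", per_worker_benefit(wages, employed, revenue, "transfer"))
for ceiling in (10, 20, 40):          # a higher ceiling means more claimants, smaller benefit
    print(f"EITC ceiling {ceiling}:",
          per_worker_benefit(wages, employed, revenue, "eitc", ceiling))
```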

  • PDF

A Study on the Positioning for BMD Measurement of the Distal Radius Using a Self-Developed Supporting Device (자체 개발한 보조기구로 원위 요골의 골밀도 측정 자세 연구)

  • Han, Man-Seok;Song, Jae-Yong;Lee, Hyun-Kuk;Yu, Se-Jong;Kim, Yong-Kyun
    • Journal of radiological science and technology
    • /
    • v.32 no.4
    • /
    • pp.419-426
    • /
    • 2009
  • Purpose: The aim of this study was to evaluate the difference in bone mineral density (BMD) according to distal radius rotation and to develop a supporting tool to measure rotation angles. Materials and Methods: CT scanning and measurement of BMD by DXA at the appropriate position of the forearm were performed on 20 males. Twenty healthy volunteers without any history of operations, anomalies, or trauma were enrolled. The CT scan was used to evaluate the cross-sectional structure and the rotation angle on the horizontal plane of the distal radius. The rotational angle was measured with the m-view program on the PACS monitor. DXA was performed on 20 dried radii of cadaveric specimens at five and ten degrees of pronation and supination, as well as at a neutral position (zero degrees), to evaluate the changes of BMD according to the rotation. Results: The mean rotation angle of the distal radius on CT was 7.4 degrees of supination in 16 cases (80%), 3.3 degrees of pronation in three cases (15%), and zero degrees (neutral) in one case (5%). The overall average rotation angle in the 20 subjects was 5.4 degrees of supination. In the cadaveric study, the BMD of the distal radius differed according to the rotational angle. The lowest BMD was obtained at 3.3 degrees of supination. Conclusion: When the BMD of the distal radius is measured in a neutral position, the rotational angle of the distal radius is close to supination. Pronation is needed to measure BMD of the distal radius consistently at the rotation angle giving the lowest BMD, and about five degrees of pronation of the distal radius is recommended.

  • PDF

A Case of Late-onset Episodic Myopathic Form with Intermittent Rhabdomyolysis of Very-long-chain acyl-coenzyme A Dehydrogenase (VLCAD) Deficiency Diagnosed by Multigene Panel Sequencing (유전자패널 시퀀싱으로 진단된 성인형 very-long-chain acyl-coenzyme A dehydrogenase (VLCAD) 결핍증 증례)

  • Sohn, Young Bae;Ahn, Sunhyun;Jang, Ja-Hyun;Lee, Sae-Mi
    • Journal of The Korean Society of Inherited Metabolic disease
    • /
    • v.19 no.1
    • /
    • pp.20-25
    • /
    • 2019
  • Very-long-chain acyl-CoA dehydrogenase (VLCAD) deficiency (OMIM #201475) is an autosomal recessively inherited metabolic disorder of mitochondrial long-chain fatty acid oxidation. The clinical features of VLCAD deficiency are classified into three clinical forms according to severity. Here, we report a case of the late-onset episodic myopathic form of VLCAD deficiency whose diagnosis was confirmed by plasma acylcarnitine analysis and multigene panel sequencing. A 34-year-old female patient visited the genetics clinic for evaluation of a history of recurrent myopathy with intermittent rhabdomyolysis. She suffered her first episode of rhabdomyolysis, with acute renal failure requiring hemodialysis, at twelve years of age. Since then, she has suffered several episodes of recurrent rhabdomyolysis provoked by prolonged exercise or fasting. Physical and neurologic examinations were normal. Serum AST/ALT and creatine kinase (CK) levels were mildly elevated. However, according to her previous medical records, her AST/ALT and CK were highly elevated when she had rhabdomyolysis. Under suspicion of a fatty acid oxidation disorder, multigene panel sequencing and plasma acylcarnitine analysis were performed in a non-fasting, asymptomatic condition for the differential diagnosis. Plasma acylcarnitine analysis revealed elevated levels of C14:1 (1.453 μmol/L; reference, 0.044-0.285) and C14:2 (0.323 μmol/L; 0.032-0.301) and an upper-normal level of C14 (0.841 μmol/L; 0.065-0.920). Two heterozygous mutations in ACADVL were detected by multigene panel sequencing and confirmed by Sanger sequencing: c.[1202G>A(;)1349G>A] (p.[(Ser401Asn)(;)(Arg450His)]). The diagnosis of VLCAD deficiency was confirmed, and frequent meals with a low-fat diet were advised to prevent acute metabolic derangement. Fatty acid oxidation disorders pose diagnostic challenges due to their intermittent clinical and laboratory presentations, especially in the milder late-onset forms. We suggest that multigene panel sequencing could be a useful diagnostic tool for the genetically and clinically heterogeneous fatty acid oxidation disorders.

  • PDF

The Usefulness of 18F-FDG PET to Differentiate Subtypes of Dementia: The Systematic Review and Meta-Analysis

  • Seunghee Na;Dong Woo Kang;Geon Ha Kim;Ko Woon Kim;Yeshin Kim;Hee-Jin Kim;Kee Hyung Park;Young Ho Park;Gihwan Byeon;Jeewon Suh;Joon Hyun Shin;YongSoo Shim;YoungSoon Yang;Yoo Hyun Um;Seong-il Oh;Sheng-Min Wang;Bora Yoon;Hai-Jeon Yoon;Sun Min Lee;Juyoun Lee;Jin San Lee;Hak Young Rhee;Jae-Sung Lim;Young Hee Jung;Juhee Chin;Yun Jeong Hong;Hyemin Jang;Hongyoon Choi;Miyoung Choi;Jae-Won Jang;Korean Dementia Association
    • Dementia and Neurocognitive Disorders
    • /
    • v.23 no.1
    • /
    • pp.54-66
    • /
    • 2024
  • Background and Purpose: Dementia subtypes, including Alzheimer's dementia (AD), dementia with Lewy bodies (DLB), and frontotemporal dementia (FTD), pose diagnostic challenges. This review examines the effectiveness of 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) in differentiating these subtypes for precise treatment and management. Methods: A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines was conducted using databases such as PubMed and Embase to identify studies on the diagnostic utility of 18F-FDG PET in dementia. The search included studies up to November 16, 2022, focusing on peer-reviewed journals and applying the gold-standard clinical diagnosis for dementia subtypes. Results: From 12,815 articles, 14 were selected for the final analysis. For AD versus FTD, the sensitivity was 0.96 (95% confidence interval [CI], 0.88-0.98) and the specificity was 0.84 (95% CI, 0.70-0.92). In the case of AD versus DLB, 18F-FDG PET showed a sensitivity of 0.93 (95% CI, 0.88-0.98) and a specificity of 0.92 (95% CI, 0.70-0.92). Lastly, when differentiating AD from non-AD dementias, the sensitivity was 0.86 (95% CI, 0.80-0.91) and the specificity was 0.88 (95% CI, 0.80-0.91). The studies mostly used case-control designs with visual and quantitative assessments. Conclusions: 18F-FDG PET exhibits high sensitivity and specificity in differentiating dementia subtypes, particularly AD, FTD, and DLB. This method, while not a standalone diagnostic tool, significantly enhances diagnostic accuracy in uncertain cases, complementing clinical assessments and structural imaging.

The Impact of the Internet Channel Introduction Depending on the Ownership of the Internet Channel (도입주체에 따른 인터넷경로의 도입효과)

  • Yoo, Weon-Sang
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.1
    • /
    • pp.37-46
    • /
    • 2009
  • The Census Bureau of the Department of Commerce announced in May 2008 that U.S. retail e-commerce sales for 2006 reached $107 billion, up from $87 billion in 2005 - an increase of 22 percent. From 2001 to 2006, retail e-sales increased at an average annual growth rate of 25.4 percent. The explosive growth of e-commerce has caused profound changes in marketing channel relationships and structures in many industries. Despite the great potential implications for both academicians and practitioners, a great deal of uncertainty still exists about the impact of the Internet channel introduction on distribution channel management. The purpose of this study is to investigate how the ownership of the new Internet channel affects the existing channel members and consumers. To explore these research questions, this study conducts well-controlled mathematical experiments that isolate the impact of the Internet channel by comparing the channel system before and after the Internet channel entry. The model consists of a monopolist manufacturer selling its product through a channel system including one independent physical store before the entry of an Internet store. The addition of the Internet store to this channel system results in a mixed channel comprised of two different types of channels. The new Internet store can be launched by the independent physical store, such as Best Buy. In this case, the physical retailer coordinates the two types of stores to maximize the joint profits from the two stores. The Internet store can also be introduced by an independent Internet retailer, such as Amazon. In this case, retail-level competition occurs between the two types of stores. Although the manufacturer sells only one product, consumers view each product-outlet pair as a unique offering. Thus, the introduction of the Internet channel provides two product offerings for consumers. The channel structures analyzed in this study are illustrated in Fig. 1. It is assumed that the manufacturer acts as a Stackelberg leader, maximizing its own profits with foresight of the independent retailer's optimal responses, as typically assumed in previous analytical channel studies. As a Stackelberg follower, the independent physical retailer or independent Internet retailer maximizes its own profits, conditional on the manufacturer's wholesale price. The price competition between the two independent retailers is assumed to be a Bertrand-Nash game. For simplicity, the marginal cost is set at zero, as typically assumed in this type of study. In order to explore the research questions above, this study develops a game-theoretic model that possesses the following three key characteristics. First, the model explicitly captures the fact that an Internet channel and a physical store exist in two independent dimensions (one in physical space and the other in cyberspace). This enables the model to demonstrate that the effect of adding an Internet store is different from that of adding another physical store. Second, the model reflects the fact that consumers are heterogeneous in their preferences for using a physical store and for using an Internet channel. Third, the model captures the vertical strategic interactions between an upstream manufacturer and a downstream retailer, making it possible to analyze the channel structure issues discussed in this paper. Although numerous previous models capture this vertical dimension of marketing channels, none simultaneously incorporates the three characteristics reflected in this model.
The analysis results are summarized in Table 1. When the new Internet channel is introduced by the existing physical retailer and the retailer coordinates both types of stores to maximize the joint profits from both stores, retail prices increase due to a combination of the coordination of the retail prices and the wider market coverage. The quantity sold does not significantly increase despite the wider market coverage, because the excessively high retail prices alleviate the market coverage effect to a degree. Interestingly, the coordinated total retail profits are lower than the combined retail profits of two competing independent retailers. This implies that when a physical retailer opens an Internet channel, the retailers could be better off managing the two channels separately rather than coordinating them, unless they have foresight of the manufacturer's pricing behavior. It is also found that the introduction of an Internet channel affects the power balance of the channel. Retail competition is strong when an independent Internet store joins a channel with an independent physical retailer. This implies that each retailer in this structure has weak channel power. Due to the intense retail competition, the manufacturer uses its channel power to increase its wholesale price and extract more profits from the total channel profit. However, the retailers cannot increase retail prices accordingly because of the intense retail-level competition, leaving them with lower channel power. In this case, consumer welfare increases due to the wider market coverage and the lower retail prices caused by the retail competition. The model employed for this study is not designed to capture all the characteristics of the Internet channel. The theoretical model in this study can also be applied to any store that is not geographically constrained, such as TV home shopping or catalog sales via mail. The reasons the model in this study is named "Internet" are as follows: First, the most representative example of a store that is not geographically constrained is the Internet. Second, catalog sales usually determine their target markets using pre-specified mailing lists. In this respect, the model used in this study is closer to the Internet than to catalog sales. However, it would be a desirable future research direction to mathematically and theoretically distinguish the core differences among stores that are not geographically constrained. The model is simplified by a set of assumptions to obtain mathematical tractability. First, this study assumes that price is the only strategic tool for competition. In the real world, however, various marketing variables can be used for competition. Therefore, a more realistic model could be designed by incorporating other marketing variables such as service levels or operating costs. Second, this study assumes a market with one monopoly manufacturer; therefore, the results should be interpreted carefully with this limitation in mind. Future research could relax this limitation by introducing manufacturer-level competition. Finally, some of the results are drawn from the assumption that the monopoly manufacturer is the Stackelberg leader. Although this is a standard assumption among game-theoretic studies of this kind, we could gain deeper understanding and generalize our findings beyond this assumption if the model were analyzed under different game rules.
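A toy numerical sketch of the backward-induction logic used in such models (a Stackelberg manufacturer setting a wholesale price, followed by Bertrand-Nash pricing between two retailers); the linear demand system and all parameter values below are illustrative assumptions, not the paper's actual specification.

```python
import numpy as np

# Illustrative linear demand for two differentiated outlets (not the paper's specification).
a, b, d = 10.0, 1.0, 0.5                      # intercept, own-price slope, cross-price effect

def demand(p_own, p_other):
    return max(a - b * p_own + d * p_other, 0.0)

def bertrand_prices(w, iters=200):
    """Iterate the retailers' best responses p_i = (a + d*p_j + b*w) / (2b) to a fixed point."""
    p1 = p2 = w + 1.0
    for _ in range(iters):
        p1 = (a + d * p2 + b * w) / (2 * b)
        p2 = (a + d * p1 + b * w) / (2 * b)
    return p1, p2

def manufacturer_profit(w):
    """Stackelberg leader's profit at wholesale price w (marginal cost is zero, as in the paper)."""
    p1, p2 = bertrand_prices(w)
    return w * (demand(p1, p2) + demand(p2, p1))

# The leader anticipates the retail equilibrium and picks the best wholesale price on a grid.
w_star = max(np.linspace(0.0, a, 2001), key=manufacturer_profit)
p1, p2 = bertrand_prices(w_star)
print(f"wholesale {w_star:.2f}, retail prices {p1:.2f} / {p2:.2f}, "
      f"total quantity {demand(p1, p2) + demand(p2, p1):.2f}")
```

The same two-stage structure (solve the retail stage for any wholesale price, then optimize the leader's choice) is what the abstract refers to as the manufacturer's foresight of the retailers' optimal responses.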

  • PDF