• Title/Summary/Keyword: Compatible


Early Results of Heart Transplantation: A Review of 20 Patients (심장이식술 20례의 조기성적)

  • Park, Chong-Bin;Song, Hyun;Song, Meong-Gun;Kim, Jae-Joong;Lee, Jay-Won;Seo, Dong-Man;Sohn, Kwang-Hyun
    • Journal of Chest Surgery / v.30 no.2 / pp.164-171 / 1997
  • Heart transplantation is now accepted as a definitive therapeutic modality in patients with terminal heart failure. The first successful heart transplantation in humans was done in 1967, and the first case in Korea was performed in November 1992. Since then, more than 50 cases have been performed in Korea. A total of 20 patients underwent orthotopic heart transplantation since November 1992 at Asan Medical Center. The purpose of this study is to evaluate the early results and the follow-up course of these 20 cases of heart transplantation performed at Asan Medical Center. The average age of the 20 patients was 39.9±11.8 years (20~58). The mean follow-up duration was 14.4±11.2 months (1~41). All patients are alive to date. The blood type was identical in 14 and compatible in 6 patients. The original heart disease was dilated cardiomyopathy in 16, valvular heart disease in 2, ischemic cardiomyopathy in 1, and giant cell myocarditis in 1 patient. HLA cross-matching between recipient and donor was done in 18 cases; the results were negative for T-cell and B-cell in 16 patients and positive for warm B-cell in 2 patients. Among the 6 loci of A, B, and DR, one locus was matched in 8 cases, 2 loci in 5 cases, and 3 loci in 1 case. The number of acute allograft rejections averaged 2.8±0.5 (0~6) per case, and the number of acute allograft rejections requiring treatment averaged 1.0±0.9 (1~3) per case. The time interval from operation to the first acute rejection requiring treatment was 35.5±20.4 days (5~60). Acute humoral rejection was strongly suspected in 1 case and was successfully treated. The left ventricular ejection fraction measured by echocardiography and/or MUGA scan increased dramatically from 17.5±6.8 (9~32)% to 58.9±2.0 (55~62)% after heart transplantation. Temporary pacing was needed in 5 patients for over 24 hours, but normal sinus rhythm returned within 7 days in all cases. One patient underwent permanent pacemaker implantation due to complete AV block appearing 140 days after heart transplantation. One patient had cyclosporine-associated neurotoxicity during the immediate postoperative period and recovered after 27 hours. The heart transplantation program of Asan Medical Center is at a developing stage, but the early results are comparable to those of well-established centers in other countries, even though the long-term follow-up results must be reevaluated. We conclude that heart transplantation is a promising therapeutic option in patients with terminal heart failure.


Analysis of Quantitative Indices in Tl-201 Myocardial Perfusion SPECT: Comparison of 4DM, QPS, and ECT Program (Tl-201 심근 관류 SPECT에서 4DM, QPS, ECT 프로그램의 정량적 지표 비교 분석)

  • Lee, Dong-Hun;Shim, Dong-Oh;Yoo, Hee-Jae
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.3 / pp.67-75 / 2009
  • Purpose: The various programs used for quantitative analysis of Tl-201 myocardial perfusion SPECT are reported to differ in their analytical methods, so measurement errors between programs can be expected even when the same patient data are analyzed. Using quantitative indices that represent the extent of myocardial perfusion defect, we aimed to determine the correlation among three myocardial perfusion analysis programs commonly used in nuclear medicine departments: 4DM (4DMSPECT), QPS (Quantitative Perfusion SPECT), and ECT (Emory Cardiac Toolbox). Materials and Methods: We analyzed 145 patients who were examined by Tl-201 gated myocardial perfusion SPECT in the department of nuclear medicine at Asan Medical Center from December 1st 2008 to February 14th 2008. The patients were sorted into a normal group and an abnormal group. The normal group consisted of 80 patients (male/female = 38/42, age 65.1±9.9) with a low probability of cardiovascular disease, and the abnormal group consisted of 65 patients (male/female = 45/20, age 63.0±8.7) who were diagnosed with cardiovascular disease showing a reversible or fixed perfusion defect on myocardial perfusion SPECT. Using the 4DM, QPS, and ECT programs, the total defect extent (TDE) for the LAD, LCX, and RCA territories and the summed stress score (SSS) were analyzed for their correlations, and the quantitative indices from each group were statistically compared with the paired t-test. Results: For all 145 patients, the correlations of 4DM:QPS, QPS:ECT, and ECT:4DM were 0.84, 0.86, and 0.82 for SSS and 0.87, 0.84, and 0.87 for TDE, and both indices showed good correlation. In the paired t-test and Bland-Altman analysis, the QPS:ECT comparison showed no statistically significant difference in mean SSS and TDE, whereas the 4DM:QPS and ECT:4DM comparisons showed statistically significant differences in both SSS and TDE. In the abnormal group (65 patients), the correlations of 4DM:QPS, QPS:ECT, and ECT:4DM were 0.72, 0.72, and 0.70 for SSS and 0.77, 0.70, and 0.77 for TDE, again showing good correlation for both indices. In the abnormal group, the paired t-test and Bland-Altman analysis showed no statistically significant difference for the QPS:ECT comparison of SSS (p=0.89) and TDE (p=0.23), whereas the 4DM:QPS and ECT:4DM comparisons showed statistically significant differences in SSS and TDE (p<0.01). In the normal group (80 patients), the paired t-test and Bland-Altman analysis showed no statistically significant difference for the QPS:ECT comparison of SSS (p=0.95) and TDE (p=0.73), whereas the 4DM:QPS and ECT:4DM comparisons showed statistically significant differences in SSS and TDE (p<0.01). Conclusions: The perfusion defect on Tl-201 myocardial perfusion SPECT was analyzed not only in patients with cardiovascular disease but also in patients with a low probability of cardiovascular disease. Across all groups, the mean TDE and SSS from 4DM were lower than those from QPS and ECT. The programs correlated well with each other, but the comparisons showed statistically significant differences; because the Bland-Altman analysis revealed large differences between programs despite the good correlation, the quantitative values produced by the different programs cannot be used interchangeably in clinical practice. Nevertheless, these results are expected to be useful as reference material for clinical reading.
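
The comparison described in this abstract rests on Pearson correlation, paired t-tests, and Bland-Altman limits of agreement between pairs of programs. The study publishes no code; the following is a minimal illustrative sketch, where the arrays `sss_4dm` and `sss_qps` are hypothetical placeholders for per-patient SSS values from two programs.

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient summed stress scores from two analysis programs.
sss_4dm = np.array([4, 7, 12, 3, 9, 15, 6, 2, 11, 8], dtype=float)
sss_qps = np.array([5, 8, 14, 3, 10, 17, 7, 2, 13, 9], dtype=float)

# Pearson correlation (the "r" reported per program pair).
r, _ = stats.pearsonr(sss_4dm, sss_qps)

# Paired t-test on the two sets of scores.
t_stat, p_value = stats.ttest_rel(sss_4dm, sss_qps)

# Bland-Altman statistics: mean difference (bias) and 95% limits of agreement.
diff = sss_4dm - sss_qps
bias = diff.mean()
loa_low, loa_high = bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1)

print(f"r = {r:.2f}, paired t-test p = {p_value:.3f}")
print(f"Bland-Altman bias = {bias:.2f}, limits of agreement = ({loa_low:.2f}, {loa_high:.2f})")
```

A high r combined with a significant paired difference and wide limits of agreement is exactly the pattern the abstract reports: good correlation, but differences too large for the programs' values to be used interchangeably.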


The relationships between lead exposure indices and urinary δ-ALA by HPLC and colorimetric method in lead-exposed workers (연노출근로자에 있어서 흡광광도법과 HPLC법에 의한 요중 δ-ALA 배설량과 연노출지표들 간의 관련성)

  • Ahn, Kyu-Dong;Lee, Sung-Soo;Hwangbo, Young;Lee, Gab-Soo;Yeon, You-Yong;Kim, Yong-Bae;Lee, Byung-Kook
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.6 no.1 / pp.77-87 / 1996
  • In order to compare measurements of delta-aminolevulinic acid (δ-ALA) in urine between an HPLC method (HALA) and a colorimetric method (CALA), and also to provide useful information toward a possible new diagnostic criterion for urinary δ-ALA in lead poisoning, the authors studied 234 male lead workers selected from 7 storage battery factories, 3 secondary smelting industries, and 2 litharge making industries. Study subjects were selected on the basis of blood zinc protoporphyrin (ZPP) level, from low to high concentration, to cover a wide range of lead exposure. The study variables were δ-ALA measured by the two different methods, blood lead (PbB), and blood ZPP. The results were as follows: 1. There was a very high correlation between δ-ALA measured by the two methods (r = 0.989; HALA = -0.8194 + 0.8110 × CALA), but CALA measured about 2 mg/L higher than HALA. 2. While the correlations of δ-ALA by the two methods with blood lead and blood ZPP were 0.46 and 0.37 respectively, they increased to 0.63 and 0.57 when the δ-ALA values were log-transformed. 3. Simple linear regressions of δ-ALA measured by the two methods on ZPP were: CALA = 2.0421 + 0.0341 × ZPP (R² = 0.1385, p = 0.0001); HALA = 0.8006 + 0.0280 × ZPP (R² = 0.1389, p = 0.0001). 4. Simple linear regressions of δ-ALA measured by the two methods on PbB were: CALA = -0.4134 + 0.1545 × PbB (R² = 0.2085, p = 0.0001); HALA = -1.2893 + 0.1287 × PbB (R² = 0.2154, p = 0.0001). 5. Simple linear regressions of log-transformed δ-ALA by the two methods on ZPP and PbB were: logHALA = 0.3078 + 0.0060 × ZPP (R² = 0.3329, p = 0.0001); logCALA = 1.0189 + 0.0044 × ZPP (R² = 0.3290, p = 0.0001); logHALA = -0.0221 + 0.0246 × PbB (R² = 0.4046, p = 0.0001); logCALA = 0.7662 + 0.0184 × PbB (R² = 0.4108, p = 0.0001). 6. When the cut-off level of δ-ALA for screening of lead poisoning was 5 mg/L, the cumulative percentage of the colorimetric method in detecting lead workers whose PbB and ZPP exceeded screening levels of 40 μg/dl and 100 μg/dl respectively was higher than that of the HPLC method. But when the cut-off level of δ-ALA measured by HPLC was reduced to 3 mg/L, which is compatible with 5 mg/L of δ-ALA measured by the colorimetric method, there was good agreement between the two methods, with a dose-response relationship with the other lead exposure indices such as PbB and ZPP.
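
The claim that an HPLC cut-off of 3 mg/L corresponds to the conventional colorimetric cut-off of 5 mg/L follows directly from the regression between the two methods reported in the abstract. A minimal worked sketch of that conversion, using only the quoted coefficients:

```python
# Reported relation between the two assays: HALA = -0.8194 + 0.8110 * CALA (r = 0.989).
def cala_to_hala(cala_mg_per_l: float) -> float:
    """Convert a colorimetric urinary delta-ALA value to its HPLC-method equivalent."""
    return -0.8194 + 0.8110 * cala_mg_per_l

# The conventional colorimetric screening cut-off of 5 mg/L maps to roughly 3.2 mg/L
# by HPLC, which is why a 3 mg/L HPLC cut-off is proposed as the compatible level.
print(cala_to_hala(5.0))  # ~3.24
```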


The Effect of Retailer-Self Image Congruence on Retailer Equity and Repatronage Intention (자아이미지 일치성이 소매점자산과 고객의 재이용의도에 미치는 영향)

  • Han, Sang-Lin;Hong, Sung-Tai;Lee, Seong-Ho
    • Journal of Distribution Research / v.17 no.2 / pp.29-62 / 2012
  • As the distribution environment changes rapidly and competition in distribution channels intensifies, the importance of retailer image and retailer equity is increasing as a distinct competitive advantage. Consumers are not purely functionally oriented; their behavior is significantly affected by symbols, such as retailer image, that identify the retailer in the marketplace. That is, consumers do not choose products or retailers only for their material utilities but consume the symbolic meaning of those products or retailers as expressed in their self-images. The concept of self-image congruence has been utilized by marketers and researchers as an aid to better understand how consumers identify themselves with the brands they buy and the retailers they patronize. Although self-image congruity theory has been tested across many product categories, it has not been tested extensively in retailing. Therefore, this study investigates the impact of self-image congruence between retailer image and consumer self-image on retailer equity, namely retailer awareness, retailer association, perceived retailer quality, and retailer loyalty. The purpose of this study is to find out whether retailer-self image congruence can be a new antecedent of retailer equity. In addition, this study examines how the four retailer equity constructs (retailer awareness, retailer association, perceived retailer quality, and retailer loyalty) affect customers' repatronage intention. Data were gathered by survey and analyzed by structural equation modeling; the sample size was 254. The reliability of all seven dimensions was estimated with Cronbach's alpha, composite reliability values, and average variance extracted values. We determined whether the measurement model supports convergent validity and discriminant validity by exploratory and confirmatory factor analysis. For each pair of constructs, the square root of the average variance extracted exceeded their correlation, supporting the discriminant validity of the constructs. Hypotheses were tested using AMOS 18.0. As expected, the image congruence hypotheses were supported: the greater the degree of congruence between retailer image and self-image, the more favorable were consumers' retailer evaluations. Both types of retailer-self image congruence (actual self-image congruence and ideal self-image congruence) affected customer-based retailer equity. This result means that retailer-self image congruence is an important cue for customers in evaluating retailer equity; in other words, consumers are more likely to prefer products and retail stores whose images are similar to their own self-image. In particular, the effect of ideal self-image congruence on retailer equity was consistently larger than that of actual self-image congruence, suggesting that consumers prefer or search for stores whose images are compatible with their perception of their ideal self. In addition, this study revealed that customers' evaluations of customer-based retailer equity affected repatronage intention: all four dimensions (retailer awareness, retailer association, perceived retailer quality, and retailer loyalty) had a positive effect on repatronage intention. That is, management and investment aimed at improving image congruence between the retailer and consumers' selves lead to positive customer evaluations of retailer equity, and positive customer-based retailer equity in turn enhances repatronage intention. To conclude, a retailer's image management is an important part of successful retailer performance management, and retailer-self image congruence is an important antecedent of retailer equity; it is therefore important to develop and refine a retailer image similar to consumers' self-images. Given the pressure to provide increased image congruence, it is not surprising that retailers have made significant investments in enhancing the fit between retailer image and consumer self-image. Enhancing such self-image congruence may allow marketers to target customers who are influenced by image appeals in advertising.


Determination of Cost and Measurement of Nursing Care Hours for Hospice Patients Hospitalized in One University Hospital (일 대학병원 호스피스 병동 입원 환자의 간호활동시간 측정과 원가산정)

  • Kim, Kyeong-Uoon
    • Journal of Korean Academy of Nursing Administration / v.6 no.3 / pp.389-404 / 2000
  • This study was designed to measure nursing care hours and determine the cost of nursing care for hospice patients hospitalized in one university hospital. 314 inpatients in the hospice unit and 11 nursing staff were enrolled. The study took place in C University Hospital from November 8th to 28th, 1999. The researcher and an investigator conducted a pilot study to select compatible hospice patient classification indicators. After modifying the patient classification indicators and the nursing care details used for the general ward, their content validity was approved by specialists. Using the hospice patient classification indicators and a 5-minute continuous observation method, the researcher and investigator recorded direct nursing care hours, indirect nursing care hours, and personnel time on a hospice nursing care activities sheet. All patients were classified into Class I (mildly ill), Class II (moderately ill), Class III (acutely ill), and Class IV (critically ill) by a patient classification system (PCS) carefully developed to be suitable for the Korean hospice ward, and then the elements of the nursing care cost were investigated. Based on data from the accounting section (Riccolo, 1988), nursing care hours per patient per day in each class were multiplied by the nursing care cost per patient per hour, and the mean nursing care cost per patient per day in each class was then calculated. Using SAS, the number of patients in each class and the nursing activities in each duty were summarized as percentages, means, and standard deviations, and direct nursing care hours per patient per day for each class were analyzed with ANOVA and the Scheffé test. The results were summarized as follows: 1. Distribution of patient class: class IV (33.5%) was the largest class; the rest were class II (26.1%), class III (22.6%), and class I (17.8%). The nursing care requirements of the inpatients in the hospice ward were greater than those of inpatients in the general ward. 2. Direct nursing care activities: measurement·observation 41.7%, medication 16.6%, exercise·safety 12.5%, education·communication 7.2%, etc. The mean hours of direct nursing care per patient per day per duty were 69.3 min for day duty, 64.7 min for evening duty, 88.2 min for night duty, and 38.7 min for shift duty; the mean for night duty was longer than that of the other duties. Direct nursing care hours per patient per day by class were 3.1 hrs for class I, 3.9 hrs for class II, 4.7 hrs for class III, and 5.2 hrs for class IV; without the PCS the overall mean was 4.1 hours. The mean hours of direct nursing care per patient per day increased significantly with increasing nursing care requirements (F=49.04, p=.0001), and each class differed significantly from the others (p<0.05). The mean hours of several direct nursing care activities in each class also increased with increasing nursing care requirements (p<0.05): class III and class IV for medication and education·communication; class I, class III and class IV for measurement·observation; class I, class II and class IV for elimination·irrigation; and all classes for exercise·safety. 3. Indirect nursing care activities and personnel time: recognition 24.2%, housekeeping activity 22.7%, charting 17.2%, personnel time 11.8%, etc. The mean hours of indirect nursing care and personnel time per nursing staff member were 4.7 hrs, and per duty were 294.8 min for day duty, 212.3 min for evening duty, 387.9 min for night duty, and 143.3 min for shift duty; the mean for night duty was longer than that of the other duties. 4. The mean hours of indirect nursing care and personnel time per patient per day were 2.5 hrs. 5. The mean hours of nursing care per patient per day in each class were 5.6 hrs for class I, 6.4 hrs for class II, 7.2 hrs for class III, and 7.7 hrs for class IV. 6. The elements of the nursing care cost were 2,212 won for direct nursing care cost, 267 won for direct material cost, and 307 won for indirect cost, giving a total of 2,786 won. 7. The mean costs of nursing care per patient per day in each class were 15,601.6 won for class I, 17,830.4 won for class II, 20,259.2 won for class III, and 21,452.2 won for class IV. As shown above, using the modified hospice patient classification indicators and nursing care activity details, many critically ill patients were hospitalized in the hospice unit, and the greater the nursing care requirements of the patients, the more direct nursing care hours were needed. Emotional·spiritual care, pain·symptom control, terminal care, education·communication, narcotics management and delivery, and attending funeral ceremonies, the major nursing care activities, were also independent hospice services, but they are not compensated by the present medical insurance system. Exercise·safety and elimination·irrigation required as many nursing care hours as in intensive care units. The present nursing management fee in the medical insurance system compensates only a part of the nursing care service in the hospice unit, rewarding a lower cost than that of the nursing care actually provided.
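
The per-class daily costs reported above come from multiplying the mean nursing care hours per patient per day in each class by the hourly nursing care cost (2,212 + 267 + 307 = 2,786 won). This is a minimal sketch of that arithmetic using only the figures quoted in the abstract; small differences from the reported values may reflect rounding in the source.

```python
# Hourly nursing care cost, as itemized in the abstract (Korean won).
direct_cost, material_cost, indirect_cost = 2212, 267, 307
cost_per_hour = direct_cost + material_cost + indirect_cost  # 2,786 won

# Mean total nursing care hours per patient per day for each patient class.
hours_per_day = {"I": 5.6, "II": 6.4, "III": 7.2, "IV": 7.7}

# Daily nursing care cost per patient = hours per day x cost per hour.
for cls, hours in hours_per_day.items():
    print(f"Class {cls}: {hours * cost_per_hour:,.1f} won/day")
# Class I ~15,601.6 won and Class IV ~21,452.2 won reproduce the reported figures.
```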


The Concentration of Economic Power in Korea (경제력집중(經濟力集中) : 기본시각(基本視角)과 정책방향(政策方向))

  • Lee, Kyu-uck
    • KDI Journal of Economic Policy / v.12 no.1 / pp.31-68 / 1990
  • The concentration of economic power takes the form of one or a few firms controlling a substantial portion of the economic resources and means in a certain economic area. At the same time, to the extent that these firms are owned by a few individuals, resource allocation can be manipulated by them rather than by the impersonal market mechanism. This will impair allocative efficiency, run counter to a decentralized market system and hamper the equitable distribution of wealth. Viewed from the historical evolution of Western capitalism in general, the concentration of economic power is a paradox in that it is a product of the free market system itself. The economic principle of natural discrimination works so that a few big firms preempt scarce resources and market opportunities. Prominent historical examples include trusts in America, Konzern in Germany and Zaibatsu in Japan in the early twentieth century. In other words, the concentration of economic power is the outcome as well as the antithesis of free competition. As long as judgment of the economic system at large depends upon the value systems of individuals, therefore, the issue of how to evaluate the concentration of economic power will inevitably be tinged with ideology. We have witnessed several different approaches to this problem such as communism, fascism and revised capitalism, and the last one seems to be the only surviving alternative. The concentration of economic power in Korea can be summarily represented by the "jaebol," namely, the conglomerate business group, the majority of whose member firms are monopolistic or oligopolistic in their respective markets and are owned by particular individuals. The jaebol has many dimensions in its size, but to sketch its magnitude, the share of the jaebol in the manufacturing sector reached 37.3% in shipment and 17.6% in employment as of 1989. The concentration of economic power can be ascribed to a number of causes. In the early stages of economic development, when the market system is immature, entrepreneurship must fill the gap inherent in the market in addition to performing its customary managerial function. Entrepreneurship of this sort is a scarce resource and becomes even more valuable as the target rate of economic growth gets higher. Entrepreneurship can neither be readily obtained in the market nor exhausted despite repeated use. Because of these peculiarities, economic power is bound to be concentrated in the hands of a few entrepreneurs and their business groups. It goes without saying, however, that the issue of whether the full exercise of money-making entrepreneurship is compatible with social mores is a different matter entirely. The rapidity of the concentration of economic power can also be traced to the diversification of business groups. The transplantation of advanced technology oriented toward mass production tends to saturate the small domestic market quite early and allows a firm to expand into new markets by making use of excess capacity and of monopoly profits. One of the reasons why the jaebol issue has become so acute in Korea lies in the nature of the government-business relationship. The Korean government has set economic development as its foremost national goal and, since then, has intervened profoundly in the private sector. 
Since most strategic industries promoted by the government required a huge capacity in technology, capital and manpower, big firms were favored over smaller firms, and the benefits of industrial policy naturally accrued to large business groups. The concentration of economic power which occurred along the way was, therefore, not necessarily a product of the market system. At the same time, the concentration of ownership in business groups has been left largely intact as they have customarily met capital requirements by means of debt. The real advantage enjoyed by large business groups lies in synergy due to multiplant and multiproduct production. Even these effects, however, cannot always be considered socially optimal, as they offer disadvantages to other independent firms, for example by foreclosing their markets. Moreover their fictitious or artificial advantages only aggravate the popular perception that most business groups have accumulated their wealth at the expense of the general public and at the behest of the government. Since Korea stands now at the threshold of establishing a full-fledged market economy along with political democracy, the phenomenon called the concentration of economic power must be correctly understood and the roles of business groups must be accordingly redefined. In doing so, we would do better to take a closer look at Japan, which has experienced a demise of family-controlled Zaibatsu and a success with business groups (Kigyoshudan) whose ownership is dispersed among many firms and ultimately among the general public. The Japanese case cannot be an ideal model, but at least it gives us a good point of departure in that the issue of ownership is at the heart of the matter. In setting the basic direction of public policy aimed at controlling the concentration of economic power, one must harmonize efficiency and equity. Firm size in itself is not a problem, if it is dictated by efficiency considerations and if the firm behaves competitively in the market. As long as entrepreneurship is required for continuous economic growth and there is a discrepancy in entrepreneurial capacity among individuals, a concentration of economic power is bound to take place to some degree. Hence, the most effective way of reducing the inefficiency of business groups may be to impose competitive pressure on their activities. Concurrently, unless the concentration of ownership in business groups is scaled down, the seed of social discontent will still remain. Nevertheless, the dispersion of ownership requires a number of preconditions and, consequently, we must make consistent, long-term efforts on many fronts. We can suggest a long list of policy measures specifically designed to control the concentration of economic power. Whatever the policy may be, however, its intended effects will not be fully realized unless business groups abide by the moral code expected of socially responsible entrepreneurs. This is especially true, since the root of the problem of the excessive concentration of economic power lies outside the issue of efficiency, in problems concerning distribution, equity, and social justice.


An Empirical Study on the Determinants of Supply Chain Management Systems Success from Vendor's Perspective (참여자관점에서 공급사슬관리 시스템의 성공에 영향을 미치는 요인에 관한 실증연구)

  • Kang, Sung-Bae;Moon, Tae-Soo;Chung, Yoon
    • Asia pacific journal of information systems / v.20 no.3 / pp.139-166 / 2010
  • Supply chain management (SCM) systems have emerged as strong managerial tools for manufacturing firms in enhancing competitive strength. Despite large investments in SCM systems, many companies are not fully realizing the promised benefits. A review of the literature on the adoption, implementation, and success factors of IOS (inter-organization systems) and EDI (electronic data interchange) systems shows that this issue has been examined from multiple theoretical perspectives, and many researchers have attempted to identify the factors that influence the success of system implementation. However, the existing studies have two drawbacks in revealing the determinants of implementation success. First, previous research raises questions about the appropriateness of the research subjects selected. Most SCM systems operate in the form of private industrial networks, where the participants consist of two distinct groups: focal companies and vendors. The focal companies are the primary actors in developing and operating the systems, while vendors are passive participants connected to the system in order to supply raw materials and parts to the focal companies. Under these circumstances, there are three ways of selecting research subjects: focal companies only, vendors only, or the two parties grouped together. It is hard to find research that uses focal companies exclusively as subjects, probably because of the insufficient sample size for statistical analysis, and most research has been conducted using data collected from both groups. We argue that the SCM success factors cannot be correctly identified in this case. The focal companies and the vendors are in different positions in many areas regarding system implementation: firm size, managerial resources, bargaining power, organizational maturity, and so on. There is no obvious reason to believe that the success factors of the two groups are identical. Grouping the two also raises questions about measuring system success. The benefits from using the systems may not be evenly distributed between the two groups; one group's benefits might be realized at the expense of the other, considering that vendors participating in SCM systems are under continuous pressure from the focal companies with respect to prices, quality, and delivery time. Therefore, by combining the system outcomes of both groups we cannot correctly measure the benefits obtained by each group. Second, the measures of system success adopted in previous research fall short in measuring SCM success. User satisfaction, system utilization, and user attitudes toward the systems are the most commonly used success measures in existing studies. These measures were developed as proxy variables in studies of decision support systems (DSS), where the contribution of the systems to organizational performance is very difficult to measure. Unlike DSS, SCM systems have more specific goals, such as cost saving, inventory reduction, quality improvement, faster throughput, and higher customer service. We maintain that more specific measures can be developed instead of proxy variables in order to measure system benefits correctly. The purpose of this study is to find the determinants of SCM system success from the perspective of vendor companies. In developing the research model, we focused on selecting success factors appropriate for vendors through a review of past research and on developing more accurate success measures. The variables can be classified into technological, organizational, and environmental factors on the basis of the TOE (Technology-Organization-Environment) framework. The model consists of three independent variables (competition intensity, top management support, and information system maturity), one mediating variable (collaboration), one moderating variable (government support), and a dependent variable (system success). The system success measures were developed to reflect the operational benefits of the SCM systems: improvement in planning and analysis capabilities, faster throughput, cost reduction, task integration, and improved product and customer service. The model was validated using survey data collected from 122 vendors participating in SCM systems in Korea. To test for mediation, a hierarchical regression analysis was estimated on collaboration, and the moderating effect was analyzed with moderated multiple regression to examine the effect of government support (see the sketch after this abstract). The results show that information system maturity and top management support are the most important determinants of SCM system success. Supply chain technologies that standardize data formats and enhance information sharing may be adopted by the supply chain leader organization, because of the influence of the focal company in private industrial networks, in order to streamline transactions and improve inter-organizational communication. In particular, the need to develop and sustain information system maturity provides the focus and purpose needed to overcome information system obstacles and resistance to innovation diffusion within the supply chain network organization. The support of top management helps focus efforts toward the realization of inter-organizational benefits and lends credibility to the functional managers responsible for implementation; the active involvement, vision, and direction of high-level executives provide the impetus needed to sustain SCM implementation. The quality of collaboration relationships is also positively related to the outcome variable, and collaboration is found to have a mediating effect between the influencing factors and implementation success. Higher levels of inter-organizational collaboration behaviors, such as shared planning and flexibility in coordinating activities, were found to be strongly linked to the vendors' trust in the supply chain network. Government support moderates the effect of IS maturity, competition intensity, and top management support on collaboration and on SCM implementation success. In general, vendor companies face substantially greater risks in SCM implementation than larger companies do because of severe constraints on financial and human resources and limited education on SCM systems. Besides resources, vendors generally lack computer experience and do not have sufficient internal SCM expertise. For these reasons, government support may establish requirements for firms doing business with the government or provide incentives to adopt and implement SCM systems or practices. Government support provides significant improvements in SCM implementation success when IS maturity, competition intensity, top management support, and collaboration are low. The environmental characteristic of competition intensity has no direct effect on SCM system success from the vendor's perspective, but vendors facing above-average competition intensity will have a greater need for changing technology. This suggests that companies trying to implement SCM systems should set up compatible supply chain networks and high-quality collaboration relationships for implementation and performance.
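
The mediation and moderation tests mentioned above are standard regression procedures reported here only in summary form. The sketch below is illustrative rather than the authors' analysis: the column names (is_maturity, top_mgmt, competition, collaboration, gov_support, success) are hypothetical stand-ins for the survey constructs, and the file vendor_survey.csv is assumed to hold the 122 vendor responses.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed data: one row per vendor with standardized construct scores.
df = pd.read_csv("vendor_survey.csv")  # hypothetical file of 122 vendor responses

# Mediation, step 1: do the antecedents predict the mediator, collaboration?
m1 = smf.ols("collaboration ~ is_maturity + top_mgmt + competition", data=df).fit()

# Mediation, step 2: does the antecedents' effect on success shrink once
# collaboration is added? A substantial drop suggests mediation.
m2 = smf.ols("success ~ is_maturity + top_mgmt + competition", data=df).fit()
m3 = smf.ols("success ~ is_maturity + top_mgmt + competition + collaboration", data=df).fit()

# Moderation: moderated multiple regression with government-support interaction terms.
m4 = smf.ols(
    "success ~ (is_maturity + top_mgmt + competition + collaboration) * gov_support",
    data=df,
).fit()

for name, model in [("mediator", m1), ("direct", m2), ("mediated", m3), ("moderated", m4)]:
    print(name, round(model.rsquared, 3), model.params, sep="\n")
```

Comparing m2 with m3 corresponds to the hierarchical mediation test, and the significance of the gov_support interaction terms in m4 corresponds to the reported moderating effect of government support.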

Leukocyte count and hypertension in the health screening data of some rural and urban residents (일부 농촌과 도시의 건강선별조사 자료로 본 백혈구수와 고혈압과의 관계)

  • Lee, Choong-Won;Yoon, Nung-Ki;Lee, Sung-Kwan
    • Journal of Preventive Medicine and Public Health / v.24 no.3 s.35 / pp.363-372 / 1991
  • We used the health screening data of some rural and urban residents to examine the cross-sectional association between leukocyte count and hypertension. The 206 male and 203 female rural residents were selected by a multi-stage cluster sampling method in the Kyungsan-Kun area of Kyungbuk province in 1985, and 600 urban residents were selected by the same sampling method in Daegu city of the same province in 1986, compatible with the age-sex distribution of Daegu city in the 1985 census; of these, 384 actually responded. The rest of the 600 were replaced, matched by age and sex, with members of the medical insurance plan visiting the health management department of the university hospital for biannual preventive medical checkups. Excluded from the analysis were those with a hypertensive history, diseases, or extreme outlying values on the screening tests, leaving 373 rural and 571 urban residents. Leukocyte count was measured with the ELT-8 laser shadow method in units of cells/mm³. Blood pressures were determined with an aneroid sphygmomanometer using a pre-standardized method, and hypertensives were defined as those showing systolic blood pressure above 140 mmHg and/or diastolic blood pressure above 90 mmHg. The total pooled residents (N=944) showed a significant difference between hypertensives and normotensives (6965.93±1997.01 vs 6490.61±1941.32, P=0.00), and a similar significant difference was noted in rural residents (P=0.03); no significant differences were noted in any stratum stratified by residency and sex. Compared to the lowest quintile of WBC, the 2/5 quintile showed an odds ratio of 0.99 (95% confidence interval, CI 0.62-1.59), the 3/5 quintile 1.41 (95% CI 0.90-2.21), the 4/5 quintile 1.76 (95% CI 1.14-2.72), and the highest quintile 1.80 (95% CI 1.15-2.82) in the total residents. A likelihood ratio test for linear trend indicated a significant trend (χ² for trend = 5.53, df = 1, P<0.05). There were no other significant odds ratios compared to the lowest quintile of WBC in strata stratified by residency and sex. The odds ratios in total residents which had been significant became nonsignificant and of reduced magnitude after controlling for age and frequency of smoking and drinking with multiple logistic regression (see the sketch below). In each stratum, adjustment changed the magnitudes of the odds ratios slightly and unstably, and none of the trend tests showed any significant trend. These results suggest that Friedman et al.'s finding of an association between leukocyte count and hypertension may be due to a statistical type I error resulting from data dredging in an exploratory study in which more than 800 variables were screened as possible predictors of hypertension.
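
The quintile odds ratios above are the kind of estimates obtained from a logistic regression of hypertension status on WBC-quintile indicators, with age, smoking, and drinking added for the adjusted model. This is a minimal illustrative sketch, not the authors' analysis; the file screening.csv and the column names wbc, hypertensive, age, smoking, and drinking are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed data: one row per resident.
df = pd.read_csv("screening.csv")  # hypothetical file

# Split WBC into quintiles; the lowest quintile (Q1) is the reference category.
df["wbc_q"] = pd.qcut(df["wbc"], 5, labels=["Q1", "Q2", "Q3", "Q4", "Q5"])

# Crude model: hypertension on WBC quintile only.
crude = smf.logit("hypertensive ~ C(wbc_q, Treatment('Q1'))", data=df).fit()

# Adjusted model: add age and frequency of smoking and drinking.
adjusted = smf.logit(
    "hypertensive ~ C(wbc_q, Treatment('Q1')) + age + smoking + drinking", data=df
).fit()

# Odds ratios and 95% confidence intervals for each quintile versus the lowest.
for name, model in [("crude", crude), ("adjusted", adjusted)]:
    or_ci = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
    or_ci.columns = ["OR", "2.5%", "97.5%"]
    print(name, or_ci, sep="\n")
```

Attenuation of the quintile odds ratios after adding the covariates is the pattern the abstract describes for the adjusted analysis.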


Application of MicroPACS Using the Open Source (Open Source를 이용한 MicroPACS의 구성과 활용)

  • You, Yeon-Wook;Kim, Yong-Keun;Kim, Yeong-Seok;Won, Woo-Jae;Kim, Tae-Sung;Kim, Seok-Ki
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.51-56 / 2009
  • Purpose: Recently, most hospitals have introduced a PACS system, and use of such systems continues to expand. A small-scale PACS, called a MicroPACS, can already be built from open source programs. The aim of this study is to prove the utility of operating a MicroPACS as a substitute back-up device for conventional storage media such as CDs and DVDs, in addition to the full PACS already in use. This study describes how to set up a MicroPACS with open source programs and assesses its storage capability, stability, compatibility, and performance of operations such as "retrieve" and "query". Materials and Methods: 1. To start with, we searched for open source software meeting the following requirements: (1) it must run on the Windows operating system; (2) it must be freeware; (3) it must be compatible with the PET/CT scanner; (4) it must be easy to use; (5) it must not be limited in storage capacity; (6) it must support DICOM. 2. (1) To evaluate data storage performance, we compared the time spent backing up data with the open source software against optical discs (CDs and DVD-RAMs), and likewise compared the time needed to retrieve data from the system and from optical discs. (2) To estimate work efficiency, we measured the time spent finding data on CDs, DVD-RAMs, and the MicroPACS; 7 technologists participated in this study. 3. To evaluate the stability of the software, we examined whether any data loss occurred while the system was maintained for a year, compared against the number of errors found in 500 randomly selected CDs. Results: 1. We chose the Conquest DICOM Server among 11 open source programs; it uses MySQL as its database management system. 2. (1) Comparison of back-up and retrieval times (min) showed the following: DVD-RAM (5.13, 2.26) vs Conquest DICOM Server (1.49, 1.19) for GE DSTE (p<0.001); CD (6.12, 3.61) vs Conquest (0.82, 2.23) for GE DLS (p<0.001); CD (5.88, 3.25) vs Conquest (1.05, 2.06) for SIEMENS. (2) The time (sec) needed to find data was as follows: CD (156±46), DVD-RAM (115±21), and Conquest DICOM Server (13±6). 3. There was no data loss (0%) over a year, with 12,741 PET/CT studies stored in 1.81 TB of disk space; for the CDs, on the other hand, 14 errors were found among the 500 CDs (2.8%). Conclusions: We found that a MicroPACS can be set up with open source software and that its performance is excellent. The system built with open source proved more efficient and more robust than the back-up process using CDs or DVD-RAMs. We believe the MicroPACS can serve as an effective data storage device as long as its operators continue to develop and systematize it.
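
The workflow described above, archiving PET/CT studies to a Conquest DICOM Server instead of burning optical discs, amounts to sending DICOM C-STORE requests to the server over the network. The sketch below is illustrative only and is not taken from the paper: it uses the pydicom/pynetdicom libraries, and the host address, port 5678, and AE title CONQUESTSRV1 are assumptions based on common Conquest defaults, to be adjusted to the actual server configuration.

```python
from pydicom import dcmread
from pynetdicom import AE, StoragePresentationContexts

# Read a PET/CT image from local disk (the path is a placeholder).
ds = dcmread("PT_slice_0001.dcm")

# Create an application entity and request the standard storage presentation contexts.
ae = AE(ae_title="MICROPACS_TEST")
ae.requested_contexts = StoragePresentationContexts

# Associate with the Conquest DICOM Server (assumed defaults: port 5678, AE title CONQUESTSRV1).
assoc = ae.associate("192.168.0.10", 5678, ae_title="CONQUESTSRV1")
if assoc.is_established:
    status = assoc.send_c_store(ds)  # archive the image on the MicroPACS
    print("C-STORE status: 0x{0:04X}".format(status.Status))
    assoc.release()
else:
    print("Association with the MicroPACS server failed")
```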


Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.55-79 / 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which easily produces the initial GA population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid concept of the GA, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of these centers and secondary markets, only one collection center, one remanufacturing center, one redistribution center, and one secondary market should be opened in the reverse logistics network. Some assumptions are made to implement the RLNCC effectively. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market: for example, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively) and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at the centers and secondary markets opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach (a simplified sketch of the search loop follows this abstract). In the numerical experiments, the proposed HGA and a conventional competing approach are compared with each other using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; this GA approach has no local search technique such as the IHCM used in the proposed HGA approach. As measures of performance, CPU time, optimal solution, and optimal setting are used. Two types of the RLNCC, with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets, are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two types of the RLNCC are programmed in Visual Basic Version 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters used in the HGA and GA approaches are a total of 10,000 generations, a population size of 20, a crossover rate of 0.5, a mutation rate of 0.1, and a search range of 2.0 for the IHCM. A total of 20 iterations were made to eliminate the randomness of the HGA and GA searches. With performance comparisons, network representations by opening/closing decision, and convergence processes for the two types of the RLNCC, the experimental results show that the HGA performs significantly better than the GA in terms of the optimal solution, though the GA is slightly quicker than the HGA in terms of CPU time. Finally, it is shown that the proposed HGA approach is more efficient than the conventional GA approach for the two types of the RLNCC, since the former has a local search process in addition to the GA search process, while the latter has a GA search process alone. In future work, much larger RLNCC instances will be tested to confirm the robustness of our approach.
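
The abstract describes the HGA at a high level: a 0/1 bit-string encoding of open/close decisions, two-point crossover, random mutation, elitist selection in an enlarged sampling space, and an iterative hill-climbing local search inside the GA loop. The following is a simplified, self-contained sketch of that loop under assumed problem data (a toy cost function over bit strings); it is not the authors' implementation and omits the MIP feasibility constraints of the actual RLNCC model.

```python
import random

N_BITS = 12          # open/close decision bits (assumed problem size)
POP_SIZE = 20        # population size quoted in the abstract
P_CROSS, P_MUT = 0.5, 0.1
GENERATIONS = 200    # shortened from 10,000 for the sketch

random.seed(1)
WEIGHTS = [random.uniform(1.0, 10.0) for _ in range(N_BITS)]

def cost(bits):
    """Toy stand-in for the MIP objective (transportation + fixed + handling cost)."""
    return sum(w for w, b in zip(WEIGHTS, bits) if b) + 5.0 * abs(sum(bits) - 4)

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(N_BITS), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def mutate(bits):
    return [1 - b if random.random() < P_MUT else b for b in bits]

def hill_climb(bits, tries=10):
    """Iterative hill climbing: flip single bits and keep any improvement."""
    best = bits[:]
    for _ in range(tries):
        cand = best[:]
        k = random.randrange(N_BITS)
        cand[k] = 1 - cand[k]
        if cost(cand) < cost(best):
            best = cand
    return best

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    offspring = []
    for _ in range(POP_SIZE // 2):
        p1, p2 = random.sample(pop, 2)
        c1, c2 = two_point_crossover(p1, p2) if random.random() < P_CROSS else (p1[:], p2[:])
        offspring += [hill_climb(mutate(c1)), hill_climb(mutate(c2))]
    # Elitist selection in the enlarged sampling space: parents and offspring compete.
    pop = sorted(pop + offspring, key=cost)[:POP_SIZE]

best = min(pop, key=cost)
print("best open/close pattern:", best, "cost:", round(cost(best), 2))
```

Removing the hill_climb calls turns this into the plain GA used as the competing approach, which is the comparison the experiments in the paper report.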