• Title/Summary/Keyword: Accuracy Standard

A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Mi-No;Lee, Ju-Hahn;Kim, Joong-Hyun;Kim, Chan-Hyeong;Lee, Chun-Sik;Lee, Dong-Soo;Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.3
    • /
    • pp.234-240
    • /
    • 2007
  • Purpose: In this study we propose a block-iterative method for reconstructing Compton scattered data. We show that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered-subsets principle, can be applied to the problem of image reconstruction for a Compton camera, and we compare several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered-subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing nonoverlapping subsets were considered: scatter angle-based subsets, detector position-based subsets, and subsets based on both scatter angle and detector position. EM and OSEM with 16 subsets were performed with 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to standard EM with 64 iterations, was approximately 14 times faster in computation time than standard EM. In OSEM, all three schemes for choosing subsets yielded similar results in computation time as well as normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to the problem of image reconstruction for a Compton camera. With properly chosen subset construction methods and a moderate number of subsets, our OSEM algorithm significantly improves computational efficiency while retaining the quality of standard EM reconstruction. The OSEM scheme using both scatter angle- and detector position-based subsets appears to be the most practical choice.
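
To make the ordered-subsets idea concrete, the sketch below shows a generic OSEM update loop in Python. It assumes a dense system matrix `A` (rows = projection bins, columns = image voxels) and nonnegative count data `y`; the interleaved subset grouping is purely illustrative and stands in for the scatter-angle and detector-position schemes compared in the paper.

```python
# Minimal OSEM sketch (illustrative; not the paper's implementation).
# A: (n_bins, n_voxels) system matrix; y: (n_bins,) measured counts.
import numpy as np

def osem(A, y, n_subsets=16, n_iters=4, eps=1e-12):
    n_bins, n_voxels = A.shape
    x = np.ones(n_voxels)                  # uniform initial image
    # Nonoverlapping, ordered subsets of projection bins. Here bins are
    # interleaved by index; the paper instead groups them by scatter
    # angle and/or detector position.
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for idx in subsets:
            As = A[idx]                    # rows for this subset
            ratio = y[idx] / np.maximum(As @ x, eps)
            sens = As.sum(axis=0)          # subset sensitivity image
            x *= (As.T @ ratio) / np.maximum(sens, eps)
    return x
```

Each pass over the 16 subsets applies 16 multiplicative updates, which is why 4 OSEM iterations are roughly equivalent to 64 iterations of standard EM.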

A Study on the Establishment of Acceptable Range for Internal Quality Control of Radioimmunoassay (핵의학 검체검사 내부정도관리 허용범위 설정에 관한 고찰)

  • Lee, Young Ji;Lee, So Young;Lee, Sun Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.26 no.2
    • /
    • pp.43-47
    • /
    • 2022
  • Purpose: Radioimmunoassay laboratories implement quality control by systematizing an internal quality control system to assure the quality of test results. This study aims to contribute to the quality assurance of radioimmunoassay results and to systematic quality control by measuring the average CV of internal and external quality control across multiple institutions, for reference when a laboratory sets its own acceptable range. Materials and Methods: We measured the average CV of internal quality control, and the rate of CV values exceeding 10.0%, for a total of 42 items from October 2020 to December 2021. According to the CV results, we classified and compared an upper group (5.0% or less), a middle group (5.0~10.0%), and a lower group (10.0% or more). The rate of CV values of 10.0% or more was compared by classifying items tested by five or more institutions into tumor markers, thyroid hormones, and other hormones. The average CV was also measured from the overall average and standard deviation of external quality control results for 28 items from the first to the fourth quarter of 2021, and from the overall average and standard deviation of inter-institution proficiency results for 13 items in the first and second halves of 2021. The average CVs of internal and external quality control were compared item by item, so that items well controlled could be distinguished from items requiring attention. Results: Measuring the average precision of internal quality control for 42 items at six institutions, the top group (5.0% or less) included ferritin, HGH, SHBG, and 25-OH-VitD, while the bottom group (10.0% or more) included cortisol, ATA, AMA, renin, and estradiol. Comparing the rate of CV values above 10.0% for tumor markers, CA-125 (6.7%) and CA-19-9 (9.8%) performed well, while SCC-Ag (24.3%) and CA-15-3 (26.7%) required attention. For thyroid hormone tests, free T4 (2.1%) and T3 (9.3%) showed excellent performance, while AMA (39.6%) and ATA (51.6%) required attention. For other hormones, IGF-1 (8.8%), FSH (9.1%), and prolactin (9.2%) showed excellent performance, whereas estradiol (37.3%), testosterone (37.7%), and cortisol (44.4%) required attention. Measuring the average CV across all institutions participating in external quality control for 28 items, HGH and SCC-Ag fell in the top group (10.0% or less), while ATA, estradiol, TSI, and thyroglobulin fell in the bottom group (30.0% or more). Conclusion: Evaluating 42 items at six institutions, the average CV ranged from 3.7% to 12.2%, a 3.3-fold difference between the upper and lower groups. Tests with high CVs, such as cortisol, ATA, AMA, renin, and estradiol, will require continuous improvement activities to improve precision. In addition, we measured and compared the overall average CVs of internal quality control, external quality control, and inter-institution proficiency at the six institutions for 41 items, excluding HBs-Ab. ATA, AMA, renin, and estradiol consistently belonged to the lowest subgroup, so they require attention in quality control, and a higher acceptable range should be considered for them.
It is recommended that the acceptable-range standard for the internal quality control CV be set and managed with the laboratory's own circumstances in mind, since reagents and instruments differ and results vary with the tester's proficiency and the quality control materials. The accuracy and reliability of radioimmunoassay results can be improved if systematic quality control is implemented based on a well-chosen acceptable range.
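
A quick illustration of the statistic being tracked: the sketch below (item names and values are hypothetical) computes the coefficient of variation for repeated QC measurements and flags items whose CV exceeds a laboratory-defined acceptable limit.

```python
# Coefficient of variation for internal QC runs (hypothetical data).
import numpy as np

qc_runs = {
    "ferritin": [101.2, 99.8, 102.5, 100.9],   # illustrative values
    "cortisol": [8.1, 10.4, 7.2, 9.9],
}
ACCEPTABLE_CV = 10.0   # percent; each laboratory sets its own limit

for item, values in qc_runs.items():
    v = np.asarray(values, dtype=float)
    cv = 100.0 * v.std(ddof=1) / v.mean()      # CV(%) = SD / mean x 100
    flag = "REVIEW" if cv > ACCEPTABLE_CV else "ok"
    print(f"{item}: CV = {cv:.1f}% [{flag}]")
```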

The Evaluation of Attenuation Difference and SUV According to Arm Position in Whole Body PET/CT (전신 PET/CT 검사에서 팔의 위치에 따른 감약 정도와 SUV 변화 평가)

  • Kwak, In-Suk;Lee, Hyuk;Choi, Sung-Wook;Suk, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.2
    • /
    • pp.21-25
    • /
    • 2010
  • Purpose: For accurate PET imaging, transmission scanning is required for attenuation correction. Attenuation is affected by the acquisition conditions and the patient's position, so quantitative accuracy in the emission scan may be degraded. This study measures the attenuation produced by different arm positions in whole-body PET/CT and comparatively analyzes the resulting SUV changes. Materials and Methods: A NEMA 1994 PET phantom was filled with $^{18}F$-FDG so that the concentration ratio of the insert cylinder to the background water was 4:1. Phantom images were acquired with a 4 min emission scan after CT-based transmission scanning. To simulate the situation in which the patient's arms are lowered beside the body, images were also acquired with two Teflon inserts additionally fixed to both sides of the phantom. The acquired data were reconstructed with an iterative method (iterations: 2, subsets: 28) and CT-based attenuation correction, and VOIs were drawn on each image plane to measure the CT number and SUV and to analyze the axial uniformity (A.U = standard deviation / average SUV) of the PET images. Results: In the phantom test, comparing the cases with and without the Teflon inserts, the CT number of the cylinder increased from -5.76 HU to 0 HU, while the SUV decreased from 24.64 to 24.29 and the A.U from 0.064 to 0.052. The CT number of the background water increased from -6.14 HU to -0.43 HU, while the SUV decreased from 6.3 to 5.6 and the A.U from 0.12 to 0.10. In the patient images, the CT number increased from 53.09 HU to 58.31 HU and the SUV decreased from 24.96 to 21.81 when the patient's arms were positioned over the head rather than lowered. Conclusion: With the arms-up protocol, the SUVs of the phantom and patient images decreased by 1.4% and 9.2%, respectively. We conclude that, for whole-body PET/CT, the arm position in itself is not highly significant. However, scanning with the arms raised over the head increases the chance of patient motion during the long scan time, increases $^{18}F$-FDG uptake in brown fat around the shoulders, and adds shoulder pain and discomfort for the patient. Considering all of these factors, whole-body PET/CT can reasonably be performed with the patient's arms lowered.
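
The axial uniformity metric used above is a simple ratio of spread to mean; a minimal sketch with a hypothetical array of per-plane VOI means:

```python
# Axial uniformity: A.U = standard deviation / average of per-plane SUVs.
import numpy as np

suv_per_plane = np.array([24.1, 24.6, 24.9, 24.3, 24.5])  # hypothetical VOI means
au = suv_per_plane.std(ddof=1) / suv_per_plane.mean()
print(f"A.U = {au:.3f}")   # smaller values mean a more uniform axial response
```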

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyros, ambient light sensor, proximity sensor, and so on, there has been much research on using these sensors to create valuable applications. Human activity recognition is one such application, motivated by welfare applications such as support for the elderly, measurement of calorie consumption, and analysis of lifestyles and exercise patterns. One challenge in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors is restricted, it is difficult to realize a highly accurate activity recognizer, because subtly different activities are hard to distinguish with only limited information. The difficulty becomes especially severe when the number of activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier can be built to distinguish ten different activities using only a single sensor, the smartphone accelerometer. Our approach to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the tree, the set of all classes is split into two subsets by a binary classifier; at each child node, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way yields a binary tree in which each leaf contains a single class; this tree is a nested dichotomy that can make multi-class predictions. Depending on how the classes are split at each node, the resulting tree differs, and because some classes are correlated, a particular tree may perform better than others. Since the best tree can hardly be identified without deep domain knowledge, END copes with this problem by building multiple dichotomy trees randomly during learning and combining their predictions during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each with a different random subset of features, on bootstrap samples. By combining bagging with random feature-subset selection, a random forest has more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that handles a multi-class problem with high accuracy. The ten activity classes distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'.
The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds. For the experiments comparing END with other methods, accelerometer data were collected every 0.1 second for 2 minutes per activity from 5 volunteers. Of the 5,900 ($5 \times (60 \times 2 - 2) / 0.1$) samples collected per activity (the data for the first 2 seconds are discarded because they lack full time-window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with some similar activities, END classified all ten activities with a fairly high accuracy of 98.4%. By comparison, a decision tree, a k-nearest-neighbor classifier, and a one-versus-rest support vector machine achieved 97.6%, 96.5%, and 97.6%, respectively.
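
Since the END construction is the technical core of the paper, here is a compact sketch of one possible implementation in Python with a random-forest base learner (scikit-learn assumed); the class names and hyperparameters are illustrative, and the simple random class-splitting scheme shown is not necessarily the exact variant used by the authors.

```python
# Ensemble of nested dichotomies (END) sketch with random-forest nodes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class NestedDichotomy:
    """One randomly built dichotomy tree over the class labels."""
    def __init__(self, rng):
        self.rng = rng

    def fit(self, X, y, classes=None):
        self.classes_ = np.unique(y) if classes is None else np.asarray(classes)
        if len(self.classes_) == 1:
            self.leaf_ = self.classes_[0]      # single class left: leaf node
            return self
        self.leaf_ = None
        perm = self.rng.permutation(self.classes_)
        split = int(self.rng.integers(1, len(perm)))
        self.left_, self.right_ = perm[:split], perm[split:]
        is_left = np.isin(y, self.left_)
        # Binary classifier at this node: left class-subset vs. right.
        self.clf_ = RandomForestClassifier(
            n_estimators=50, random_state=int(self.rng.integers(2**31)))
        self.clf_.fit(X, is_left.astype(int))
        self.child_l_ = NestedDichotomy(self.rng).fit(X[is_left], y[is_left], self.left_)
        self.child_r_ = NestedDichotomy(self.rng).fit(X[~is_left], y[~is_left], self.right_)
        return self

    def class_proba(self, X):
        """P(class | x) = product of branch probabilities along the path."""
        if self.leaf_ is not None:
            return {self.leaf_: np.ones(len(X))}
        p_left = self.clf_.predict_proba(X)[:, 1]
        out = {}
        for c, p in self.child_l_.class_proba(X).items():
            out[c] = p_left * p
        for c, p in self.child_r_.class_proba(X).items():
            out[c] = (1.0 - p_left) * p
        return out

class END:
    """Averages class probabilities over randomly generated dichotomy trees."""
    def __init__(self, n_trees=10, seed=0):
        self.n_trees, self.rng = n_trees, np.random.default_rng(seed)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.trees_ = [NestedDichotomy(self.rng).fit(X, y)
                       for _ in range(self.n_trees)]
        return self

    def predict(self, X):
        score = np.zeros((len(X), len(self.classes_)))
        for t in self.trees_:
            probs = t.class_proba(X)
            for i, c in enumerate(self.classes_):
                score[:, i] += probs[c]
        return self.classes_[score.argmax(axis=1)]
```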

Effects of Anti-thyroglobulin Antibody on the Measurement of Thyroglobulin : Differences Between Immunoradiometric Assay Kits Available (면역방사계수법을 이용한 Thyroglobulin 측정시 항 Thyroglobulin 항체의 존재가 미치는 영향: Thyroglobulin 측정 키트에 따른 차이)

  • Ahn, Byeong-Cheol;Seo, Ji-Hyeong;Bae, Jin-Ho;Jeong, Shin-Young;Yoo, Jeong-Soo;Jung, Jin-Hyang;Park, Ho-Yong;Kim, Jung-Guk;Ha, Sung-Woo;Sohn, Jin-Ho;Lee, In-Kyu;Lee, Jae-Tae;Kim, Bo-Wan
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.4
    • /
    • pp.252-256
    • /
    • 2005
  • Purpose: Thyroglobulin (Tg) is a valuable and sensitive marker for the diagnosis and follow-up of several thyroid disorders, especially in the follow-up of patients with differentiated thyroid cancer (DTC). Clinical decisions often rely entirely on the serum Tg concentration, but the Tg assay is one of the most challenging laboratory measurements to perform accurately, owing to anti-thyroglobulin antibody (Anti-Tg). In this study, we compared the degree of Anti-Tg interference with Tg measurement between available Tg assay kits. Materials and Methods: Tg levels in a standard Tg solution were measured with two commercially available immunoradiometric assay kits (kits A and B), in the absence or presence of three different concentrations of Anti-Tg. Tg in patient serum was also measured with the same kits; the patient serum samples were prepared as mixtures of a serum containing a high Tg level and a serum containing a high Anti-Tg concentration. Results: In measurements of the standard Tg solution, the presence of Anti-Tg resulted in falsely low Tg levels with both kits. The underestimation was more prominent with kit A than with kit B; with kit B it was trivial and therefore clinically insignificant, although statistically significant. Adding Anti-Tg to patient serum produced falsely low Tg levels with kit A only. Conclusion: Tg levels can be underestimated in the presence of Anti-Tg, and the effect on Tg measurement varies with the assay kit used. Therefore, an accuracy test must be performed for each individual Tg assay kit.

Peak Expiratory Flow(PEF) Measured by Peak Flow Meter and Correlation Between PEF and Other Ventilatory Parameters in Healthy Children (정상 소아에서 최고호기유량계(peak flow meter)로 측정한 최고호기유량(PEF)와 기타 환기기능검사와의 상관관계)

  • Oak, Chul-Ho;Sohn, Kai-Hag;Park, Ki-Ryong;Cho, Hyun-Myung;Jang, Tae-Won;Jung, Maan-Hong
    • Tuberculosis and Respiratory Diseases
    • /
    • v.51 no.3
    • /
    • pp.248-259
    • /
    • 2001
  • Background: For diagnosing or monitoring airway obstruction in bronchial asthma, measurement of $FEV_1$ is the standard method because of its reproducibility and accuracy. But measuring peak expiratory flow (PEF) with a peak flow meter is much simpler and easier than measuring $FEV_1$, especially in children. Yet there are still no data on predicted normal values of PEF measured by peak flow meter in Korean children. This study was conducted to provide equations predicting the normal value of PEF, and the correlation between PEF and $FEV_1$, in healthy children. Method: PEF was measured with a MiniWright peak flow meter, and the forced expiratory volumes and maximum expiratory flow-volume curves were measured with a Microspiro HI 501 (Chest Co.), in 346 healthy children (age 5-16 years; 194 boys and 152 girls) without any respiratory symptoms during the 2 weeks before the study. Regression equations for the various ventilatory parameters by age and/or height, and regression equations for $FEV_1$ in terms of PEF, were derived. Results: 1. The regression equation for PEF (L/min) was $12.6 \times \text{age(yr)} + 3.4 \times \text{height(cm)} - 263$ ($R^2=0.85$) in boys and $6 \times \text{age(yr)} + 3.9 \times \text{height(cm)} - 293$ ($R^2=0.82$) in girls. 2. The FEFmax (L/sec) derived from the maximum expiratory flow-volume curves was multiplied by 60 for comparison with PEF (L/min); PEF was faster by 125 L/min in boys and 118 L/min in girls. 3. The regression equation for $FEV_1$ (ml) in terms of PEF (L/min) was $7 \times \text{PEF} - 550$ ($R^2=0.82$) in boys and $5.8 \times \text{PEF} - 146$ ($R^2=0.81$) in girls. Conclusion: This study provides regression equations predicting normal PEF values from age and/or height in children, along with equations predicting $FEV_1$, a gold standard of ventilatory function, from PEF. Thus, in the care of children with airway obstruction, PEF measured with a peak flow meter can provide useful information.
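
A small helper applying the reported regression equations (coefficients copied from the abstract; the function names are illustrative):

```python
# Predicted normal PEF (L/min) and FEV1 (ml) from the abstract's equations.
def predicted_pef(age_yr: float, height_cm: float, sex: str) -> float:
    if sex == "boy":
        return 12.6 * age_yr + 3.4 * height_cm - 263   # R^2 = 0.85
    return 6.0 * age_yr + 3.9 * height_cm - 293        # girls, R^2 = 0.82

def predicted_fev1(pef_l_min: float, sex: str) -> float:
    if sex == "boy":
        return 7.0 * pef_l_min - 550                   # R^2 = 0.82
    return 5.8 * pef_l_min - 146                       # girls, R^2 = 0.81

# Example: a 10-year-old boy, 140 cm tall.
pef = predicted_pef(10, 140, "boy")                    # 339 L/min
print(pef, predicted_fev1(pef, "boy"))                 # FEV1 about 1823 ml
```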

Variation on Estimated Values of Radioactivity Concentration According to the Change of the Acquisition Time of SPECT/CT (SPECT/CT의 획득시간 증감에 따른 방사능농도 추정치의 변화)

  • Kim, Ji-Hyeon;Lee, Jooyoung;Son, Hyeon-Soo;Park, Hoon-Hee
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.25 no.2
    • /
    • pp.15-24
    • /
    • 2021
  • Purpose: In the early stages of its dissemination, SPECT/CT was noted for its excellent correction methods and its qualitative functions based on fusion images; with the recent introduction of companion diagnostics and therapy (theranostics), interest in its quantitative functions has been increasing. Unlike PET/CT, SPECT/CT involves various conditions, such as the collimator type and detector rotation, that complicate the image acquisition and reconstruction settings for absolute quantification. In this study, we therefore investigated how the radioactivity concentration estimate is affected when the total acquisition time is increased or decreased, via either the number of projections or the acquisition time per projection. Materials and Methods: After filling a 9,293 ml cylindrical phantom with sterile water and diluting 91.76 MBq of $^{99m}Tc$ into it, a standard image was acquired with a total acquisition time of 600 sec (10 sec/frame × 120 frames, matrix size 128 × 128), and the volume sensitivity and calibration factor were verified. Based on the standard image, comparative images were obtained by increasing or decreasing the total acquisition time to 60 (-90%), 150 (-75%), 300 (-50%), 450 (-25%), 900 (+50%), and 1,200 (+100%) sec. For each condition, either the acquisition time per projection was set to 1.0, 2.5, 5.0, 7.5, 15.0, or 20.0 sec (number of projections fixed at 120 frames), or the number of projections was set to 12, 30, 60, 90, 180, or 240 frames (time per projection fixed at 10 sec). Based on the counts measured in a volume of interest in each image, the percentage change in the contrast-to-noise ratio (CNR) was determined as the qualitative assessment, and the percentage change in the radioactivity concentration estimate as the quantitative assessment. The relationship between the estimated radioactivity concentration (cps/ml) and the actual radioactivity concentration (Bq/ml) was compared and analyzed using the recovery coefficient (RC) as an indicator. Results: The results [CNR, radioactivity concentration, RC] for changes in the number of projections at each total-acquisition-time change (-90%, -75%, -50%, -25%, +50%, +100%) were: [-89.5%, +3.90%, 1.04] at -90%; [-77.9%, +2.71%, 1.03] at -75%; [-55.6%, +1.85%, 1.02] at -50%; [-33.6%, +1.37%, 1.01] at -25%; [-33.7%, +0.71%, 1.01] at +50%; [+93.2%, +0.32%, 1.00] at +100%. The results for changes in the acquisition time per projection were: [-89.3%, -3.55%, 0.96] at -90%; [-73.4%, -0.17%, 1.00] at -75%; [-49.6%, -0.34%, 1.00] at -50%; [-24.9%, +0.03%, 1.00] at -25%; [+49.3%, -0.04%, 1.00] at +50%; [+99.0%, +0.11%, 1.00] at +100%. Conclusion: In SPECT/CT, the total counts obtained and the resulting image quality (CNR) changed in proportion to the increase or decrease in total acquisition time. In contrast, the quantitative evaluations through absolute quantification changed by less than 5% (-3.55 to +3.90%) under all experimental conditions, maintaining quantitative accuracy (RC 0.96 to 1.04).
Since reducing, rather than increasing, the total acquisition time is the practically interesting case, this suggests that a reduced total acquisition time can be applied to quantitative analysis without significant loss and is likely to be clinically useful. The study also shows that, for the same total scan time, changing the acquisition time per projection produces smaller qualitative and quantitative fluctuations than changing the number of projections.
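
For reference, here is a minimal sketch of the two evaluation metrics with hypothetical VOI statistics; the abstract does not spell out its CNR formula, so a common background-noise-normalized definition is assumed.

```python
# Hypothetical VOI statistics from a reconstructed SPECT image.
mean_target, mean_bg, sd_bg = 152.0, 30.0, 4.5
cnr = (mean_target - mean_bg) / sd_bg            # assumed CNR definition

# Recovery coefficient: estimated vs. true radioactivity concentration.
voi_cps_per_ml = 98.5                            # measured count-rate density
cal_factor = 0.0100                              # (cps/ml) per (Bq/ml), illustrative
est_bq_per_ml = voi_cps_per_ml / cal_factor      # estimated concentration
true_bq_per_ml = 9870.0                          # prepared concentration
rc = est_bq_per_ml / true_bq_per_ml              # RC near 1.0 means accurate
print(f"CNR = {cnr:.1f}, RC = {rc:.2f}")
```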

A Study of Six Sigma and Total Error Allowable in Chematology Laboratory (6 시그마와 총 오차 허용범위의 개발에 대한 연구)

  • Chang, Sang-Wu;Kim, Nam-Yong;Choi, Ho-Sung;Kim, Yong-Whan;Chu, Kyung-Bok;Jung, Hae-Jin;Park, Byong-Ok
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.37 no.2
    • /
    • pp.65-70
    • /
    • 2005
  • The CLIA analytical tolerance limits are consistent with the performance goals of Six Sigma quality management. Six sigma analysis determines performance quality from bias and precision statistics, and it shows whether a method meets the criteria for six sigma performance; performance standards calculate the allowable total error from several different criteria. Six sigma means six standard deviations from the target or mean value, or about 3.4 failures per million opportunities for failure. The sigma quality level is an indicator of process centering and process variation relative to the allowable total error. The tolerance specification is replaced by a total error specification, which is a common form of quality specification for a laboratory test. The CLIA criteria for acceptable performance in proficiency testing events are given in the form of an allowable total error, TEa, so there is a published list of TEa specifications for regulated analytes. In terms of TEa, Six Sigma quality management sets a precision goal of TEa/6 and an accuracy goal of 1.5 (TEa/6). This concept is based on the proficiency testing specification of target value ±3s, TEa from reference intervals, biological variation, and peer group median CV surveys; we also found rules to calculate TEa as a fraction of a reference interval and from peer group median CV surveys. We developed allowable total errors from peer group survey results and the US CLIA '88 rules for 19 clinical chemistry items: TP, ALB, T.B, ALP, AST, ALT, CL, LD, K, Na, CRE, BUN, T.C, GLU, GGT, CA, phosphorus, UA, and TG. Sigma level versus TEa, assessed from the peer group median CV of each item with process performance fitting within six sigma tolerance limits, was: TP (6.1σ/9.3%), ALB (6.9σ/11.3%), T.B (3.4σ/25.6%), ALP (6.8σ/31.5%), AST (4.5σ/16.8%), ALT (1.6σ/19.3%), CL (4.6σ/8.4%), LD (11.5σ/20.07%), K (2.5σ/0.39 mmol/L), Na (3.6σ/6.87 mmol/L), CRE (9.9σ/21.8%), BUN (4.3σ/13.3%), UA (5.9σ/11.5%), T.C (2.2σ/10.7%), GLU (4.8σ/10.2%), GGT (7.5σ/27.3%), CA (5.5σ/0.87 mmol/L), IP (8.5σ/13.17%), TG (9.6σ/17.7%). Peer group survey median CVs in the Korean external assessment greater than the CLIA criteria were CL (8.45% vs. 5%), BUN (13.3% vs. 9%), CRE (21.8% vs. 15%), T.B (25.6% vs. 20%), and Na (6.87 mmol/L vs. 4 mmol/L); those less than the criteria were TP (9.3% vs. 10%), AST (16.8% vs. 20%), ALT (19.3% vs. 20%), K (0.39 mmol/L vs. 0.5 mmol/L), UA (11.5% vs. 17%), Ca (0.87 mg/dL vs. 1 mg/dL), and TG (17.7% vs. 25%). The TEa was the same for 14 of 17 items (82.35%). We found that the sigma level rises as the allowable total error increases, and we are confident that the choice of allowable total error affects the evaluation of sigma metrics even when the process itself stays the same.
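
The sigma quality level behind these figures is conventionally computed from the allowable total error, bias, and CV (the standard Westgard-style sigma metric; the abstract states the goals but not the formula):

```python
# Sigma metric from allowable total error (TEa), bias, and precision (CV).
# Six sigma goals quoted above: precision <= TEa/6, accuracy <= 1.5*(TEa/6).
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """All arguments in percent; returns the sigma quality level."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical example: TEa 10%, bias 1.5%, CV 1.4% -> about 6.1 sigma.
print(f"{sigma_metric(10.0, 1.5, 1.4):.1f} sigma")
```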

A Study on Quality Perceptions and Satisfaction for Medical Service Marketing (의료서비스 마케팅을 위한 품질지각과 만족에 관한 연구)

  • Yoo, Dong-Keun
    • Journal of Korean Academy of Nursing Administration
    • /
    • v.2 no.1
    • /
    • pp.97-114
    • /
    • 1996
  • INTRODUCTION: Service quality, unlike goods quality, is an abstract and elusive construct. Service quality and its requirements are not easily understood by consumers, and they also present some critical research problems. However, quality is very important to marketers and consumers in that it has many strategic benefits, contributing to the profitability of marketing activities and to consumers' problem-solving activities. Moreover, despite the phenomenal growth of the medical service sector, few researchers have attempted to define and model medical service quality. In particular, little research has focused on evaluating medical service quality and patient satisfaction from the perspectives of both the provider and the patient. As competition intensifies and patients demand higher-quality medical service, medical service quality and patient satisfaction have emerged as critical research topics. The major purpose of this article is to explore the concept of medical service quality and its evaluation from both the nurse and the patient perspective. The article attempts to achieve this purpose by (1) classifying critical service attributes into three categories (satisfiers, hygiene factors, and performance factors), (2) measuring the relative importance of need criteria, (3) evaluating the SERVPERF and SERVQUAL models in the medical service sector, and (4) identifying the relationship between perceived quality and overall patient satisfaction. METHOD: Data were gathered from a sample of 217 patients and 179 nurses in Seoul-area general hospitals. From a review of the previous literature, 50 survey items representing various facets of medical service quality were developed into a questionnaire. A five-point scale ranging from "Strongly Agree" (5) to "Strongly Disagree" (1) accompanied each statement (expectation, perception, and importance statements). To measure overall satisfaction, a seven-point scale was used, ranging from "Very Satisfied" (7) to "Very Dissatisfied" (1), with no verbal labels for scale points 2 through 6. RESULTS: In explaining the relationship between perceived performance and overall satisfaction, only 31 of the original 50 survey items proved statistically significant. Hence, a penalty-reward analysis was performed on these 31 critical attributes, identifying 17 satisfiers, 8 hygiene factors, and 4 performance factors from the patient perspective. The role (category) of each service quality attribute in relation to patient satisfaction was compared across the two groups, patients and nurses; the roles overlapped little, suggesting that the two groups held different sets of 'perceived quality' attributes. Principal-components factor analyses of the patients' and nurses' responses were performed to identify the underlying dimensions of the set of performance (experience) statements. After deleting three obscure variables, 28 variables were analyzed using a varimax rotation, with the number of factors to extract determined from the eigenvalue scores. Six factors were extracted, accounting for 57.1% of the total variance. Reliability analysis was performed to refine the factors further; coefficient alpha scores of .84 to .65 were obtained, and individual-item analysis indicated that all statements in each factor should remain. For 26 of the 31 critical service quality attributes, there were gaps between the patients' actual importance of need criteria and the nurses' perceptions of them.
These critical attributes could be classified into four categories based on the relative importance of need criteria and perceived performance from the patient perspective, an analysis useful in developing strategic plans for performance improvement: (1) top priorities (high importance, low performance), which in this study were more health-related information, accuracy in billing, quality of food, appointments at my convenience, information about tests and treatments, prompt service of the business office, and adequacy of accommodations (elevators, etc.); (2) current strengths (high importance, high performance); (3) unnecessary strengths (low importance, high performance); and (4) low priorities (low importance, low performance). While 26 service quality attributes of the SERVPERF model were significantly related to patient satisfaction, only 13 attributes of the SERVQUAL model were. This suggests that experience-based norms (the SERVPERF model) are more appropriate than expectations as a benchmark against which service experiences are compared (the SERVQUAL model). It must be noted, however, that the degree of association with overall satisfaction was not consistent. There were also gaps between nurse perceptions and patient perceptions of medical service performance. From the patient's viewpoint, "personal likability", "technical skill/trust", and "cares about me" were the most significant positioning factors contributing to patient satisfaction. DISCUSSION: This study shows that nurse perceptions and patient perceptions of medical service attributes are inconsistent. For service quality improvement, it is most important for nurses to understand, through two-way communication, what the satisfiers, hygiene factors, and performance factors are. Patient satisfaction should be measured, and the problems identified should be resolved, for survival in intensely competitive market conditions. Hence, patient satisfaction monitoring is becoming a standard marketing tool for healthcare providers, and its role is expected to grow.
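
As an illustration of the analysis pipeline described above (factor extraction with varimax rotation, then reliability checking with coefficient alpha), here is a minimal sketch on randomly generated stand-in data; scikit-learn's FactorAnalysis is used as a substitute for whatever statistical package the authors employed.

```python
# Factor analysis with varimax rotation plus Cronbach's alpha (stand-in data).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(217, 28)).astype(float)  # 217 respondents x 28 items

fa = FactorAnalysis(n_components=6, rotation="varimax").fit(X)
loadings = fa.components_.T        # item-by-factor loading matrix

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) scores for one factor's items."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(loadings.shape, cronbach_alpha(X[:, :5]))  # alpha for an item subset
```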

Development of Regularized Expectation Maximization Algorithms for Fan-Beam SPECT Data (부채살 SPECT 데이터를 위한 정칙화된 기댓값 최대화 재구성기법 개발)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Soo-Jin;Kim, Kyeong-Min;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.6
    • /
    • pp.464-472
    • /
    • 2005
  • Purpose: SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods that do not transform the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances compared. Materials and Methods: The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered-subsets EM), and MAP-EM OSL (maximum a posteriori EM using the one-step-late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into parallel data using various interpolation methods, such as nearest-neighbor, bilinear, and bicubic interpolation, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp-Logan phantoms were reconstructed using the above algorithms, and the reconstructed images were compared in terms of a percent error metric. Results: For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best results in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to the parallel geometry when accuracy and computational load were both considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Conclusion: Direct fan-beam reconstruction algorithms were implemented and provided significantly improved reconstructions.
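
As a rough illustration of the one-step-late idea, the sketch below gives a MAP-EM OSL update for a 1-D image with a membrane (first-difference) prior; the system matrix `A`, penalty weight `beta`, and the quadratic prior are assumptions for the sketch, not the paper's ray-tracing implementation or its thin-plate model.

```python
# MAP-EM one-step-late (OSL) sketch with a membrane smoothness prior.
# A: (n_bins, n_voxels) system matrix for a 1-D image; y: measured counts.
import numpy as np

def map_em_osl(A, y, beta=0.1, n_iters=50, eps=1e-12):
    n_bins, n_voxels = A.shape
    x = np.ones(n_voxels)
    sens = A.sum(axis=0)                    # backprojection of ones
    for _ in range(n_iters):
        ratio = y / np.maximum(A @ x, eps)  # data / model in projection space
        # Gradient of the membrane prior, evaluated at the CURRENT estimate;
        # this "one step late" evaluation keeps the multiplicative EM update.
        grad_u = np.zeros(n_voxels)
        grad_u[1:] += x[1:] - x[:-1]
        grad_u[:-1] += x[:-1] - x[1:]
        denom = np.maximum(sens + beta * grad_u, eps)
        x *= (A.T @ ratio) / denom
    return x
```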