• Title/Summary/Keyword: Value evaluation


A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems / v.21 no.2 / pp.89-116 / 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing purposes. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher ranking to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages that are linked to by more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in the active or the passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard for each class is more reasonable. This is similar to human evaluation methods, in which different items are assigned specific weights that are then summed to determine a weighted average. We can check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subject and object. When many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collection. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections.
In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering that in social media the popularity of a topic is temporary, recent data should have more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are dealt with and that can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems to be preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through Twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms. While the multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
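
As a rough illustration of the mutual-interaction style of graph ranking described above, the sketch below iteratively propagates importance scores over a heterogeneous graph whose links carry property weights and are treated symmetrically, so that link direction does not matter. The entity mix, weights, and convergence settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mutual_interaction_rank(adjacency, property_weight, iterations=100, tol=1e-9):
    """Iteratively propagate importance scores over a heterogeneous graph.

    adjacency       : (n, n) nonnegative matrix; adjacency[i, j] > 0 means
                      entity i and entity j are connected by some property.
    property_weight : (n, n) weights reflecting how strongly each link
                      contributes to the importance of the connected entity.
    """
    w = adjacency * property_weight
    # Symmetrize the influence so link direction does not matter, mirroring
    # the "mutual interaction" idea rather than PageRank-style voting.
    w = w + w.T
    scores = np.ones(w.shape[0]) / w.shape[0]
    for _ in range(iterations):
        new_scores = w @ scores
        norm = np.linalg.norm(new_scores)
        if norm == 0:
            break
        new_scores /= norm
        if np.linalg.norm(new_scores - scores) < tol:
            return new_scores
        scores = new_scores
    return scores

# Toy example: 7 entities (users, resources, tags) with arbitrary links.
rng = np.random.default_rng(0)
adj = (rng.random((7, 7)) > 0.6).astype(float)
weights = rng.random((7, 7))
print(mutual_interaction_rank(adj, weights))
```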

Evaluation of Proper Image Acquisition Time by Change of Infusion dose in PET/CT (PET/CT 검사에서 주입선량의 변화에 따른 적정한 영상획득시간의 평가)

  • Kim, Chang Hyeon;Lee, Hyun Kuk;Song, Chi Ok;Lee, Gi Heun
    • The Korean Journal of Nuclear Medicine Technology / v.18 no.2 / pp.22-27 / 2014
  • Purpose: Recent PET/CT examinations tend to use lower doses to reduce patient exposure, in line with advances in equipment. After a GE Discovery 690 PET/CT scanner (GE Healthcare, Milwaukee, USA) was installed at our hospital in 2011, we reduced the injected $^{18}F$-FDG dose to lower patient exposure. Accordingly, we evaluated the proper acquisition time per bed for different injected doses in order to maintain the image quality of the PET/CT scanner. Materials and Methods: We inserted air, Teflon, and hot cylinders into a NEMA NU2-1994 phantom, maintained a 4:1 ratio of hot-cylinder to background activity concentration, and increased the hot-cylinder concentration through 3, 4.3, 5.5, and 6.7 MBq/kg. Images were acquired with the acquisition time per bed increased through 30 seconds, 1 minute, 1 minute 30 seconds, 2 minutes, 2 minutes 30 seconds, 3 minutes, 3 minutes 30 seconds, 4 minutes, 4 minutes 30 seconds, 5 minutes, 5 minutes 30 seconds, 10 minutes, 20 minutes, and 30 minutes, and ROIs were set on the hot cylinder and the background region. After measuring the maximum Standardized Uptake Value ($SUV_{max}$), we computed the Signal-to-Noise Ratio (SNR) and the standard deviation of the background (BKG) and compared their changes against the hot-cylinder concentration and the acquisition time per bed. We also compared $SUV_{max}$, SNR, and the BKG standard deviation for different waiting times before scanning (15 minutes and 1 hour) using the 4.3 MBq/kg phantom. Results: As the radioactivity concentration per unit mass was increased through 3, 4.3, 5.5, and 6.7 MBq/kg, the hot-cylinder $SUV_{max}$ varied widely with concentration for acquisition times per bed between 30 seconds and 2 minutes, from a maximum of 18.3 down to a minimum of 7.3, whereas from 2 minutes 30 seconds to 30 minutes it stayed nearly constant between 5.6 and 8. The SNR stayed between 0.41 and 0.49 at 3 MBq/kg; at 4.3 and 5.5 MBq/kg it rose with increasing acquisition time per bed, from minimums of 0.23 and 0.39 to maximums of 0.59 and 0.54 respectively. At 6.7 MBq/kg the SNR reached 0.59 at 30 seconds but otherwise remained between 0.43 and 0.53. The standard deviation of the background dropped from 0.38 to 0.06 at 3 MBq/kg from 2 minutes 30 seconds onward, from 0.38 to 0 at 4.3 and 5.5 MBq/kg from 1 minute 30 seconds onward, and from 0.33 to 0.05 at 6.7 MBq/kg over the whole range from 30 seconds to 30 minutes. When the waiting time before scanning was changed between 15 minutes and 1 hour with the 4.3 MBq/kg phantom, $SUV_{max}$ showed stable values from an acquisition time per bed of 2 minutes 30 seconds, and SNR showed similar values from 1 minute 30 seconds. Conclusion: When the radioactivity concentration per unit mass was increased through 3, 4.3, 5.5, and 6.7 MBq/kg, the $SUV_{max}$ and SNR values remained stable for acquisition times per bed of 2 minutes 30 seconds or longer. Likewise, for both waiting times before scanning (15 minutes and 1 hour), $SUV_{max}$ and SNR remained stable from 2 minutes 30 seconds onward. From this NEMA NU2-1994 phantom experiment, we found that the minimum acquisition time per bed needed to obtain stable $SUV_{max}$ and SNR values is 2 minutes 30 seconds, regardless of the injected radioactivity concentration.
However, this acquisition time may differ according to the features and performance of the equipment.
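
For reference, the sketch below shows how $SUV_{max}$, one common SNR definition, and the background standard deviation might be computed from ROI pixel values. The exact SNR formula used in the study is not stated in the abstract, so the (signal minus background) over background-noise convention here, and the sample numbers, are assumptions.

```python
import numpy as np

def roi_metrics(hot_roi_values, background_roi_values):
    """Return SUVmax, an assumed SNR, and the background standard deviation.

    SNR here is (hot mean - background mean) / background standard deviation,
    a common convention; the paper's own definition may differ.
    """
    hot = np.asarray(hot_roi_values, dtype=float)
    bkg = np.asarray(background_roi_values, dtype=float)
    suv_max = hot.max()
    bkg_sd = bkg.std(ddof=1)
    snr = (hot.mean() - bkg.mean()) / bkg_sd
    return suv_max, snr, bkg_sd

# Illustrative pixel values (in SUV units), not data from the phantom study.
hot = [7.8, 8.1, 7.5, 8.4, 7.9]
bkg = [1.9, 2.1, 2.0, 1.8, 2.2]
print(roi_metrics(hot, bkg))
```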


DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
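
To make the gravity model step concrete, the sketch below shows a doubly constrained gravity model that distributes zonal productions and attractions using friction factors, the quantity the calibration above adjusts until the model TLF matches the OD TLF. The zone sizes, the negative-exponential friction curve, and the convergence settings are illustrative assumptions, not values from the Wisconsin study.

```python
import numpy as np

def gravity_model(productions, attractions, friction, max_iter=200, tol=1e-8):
    """Doubly constrained gravity model: T_ij = a_i * b_j * P_i * A_j * F_ij,
    with balancing factors a_i, b_j found by iterative proportional fitting."""
    P = np.asarray(productions, dtype=float)
    A = np.asarray(attractions, dtype=float)
    F = np.asarray(friction, dtype=float)
    a = np.ones_like(P)
    b = np.ones_like(A)
    for _ in range(max_iter):
        a_new = 1.0 / (F @ (b * A))        # enforce row sums = productions
        b_new = 1.0 / (F.T @ (a_new * P))  # enforce column sums = attractions
        if np.allclose(a_new, a, rtol=tol) and np.allclose(b_new, b, rtol=tol):
            a, b = a_new, b_new
            break
        a, b = a_new, b_new
    return np.outer(a * P, b * A) * F

# Toy example: 3 zones, friction factors from an assumed negative exponential curve.
cost = np.array([[1.0, 2.0, 3.0],
                 [2.0, 1.0, 2.0],
                 [3.0, 2.0, 1.0]])
F = np.exp(-0.5 * cost)
trips = gravity_model([100, 200, 150], [150, 150, 150], F)
print(trips.round(1), trips.sum())
```

A SELINK link adjustment factor would then simply be the ground count divided by the volume assigned to that link from a trip table like the one above.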


A Study of the shade between maxillary and mandibular anterior teeth in the Korean (한국인의 상하악 전치부 색조에 관한 연구)

  • Kim, Tae-Jin; Kwon, Kung-Rock;Kim, Hyeong-Seob;Woo, Yi-Hyung
    • The Journal of Korean Academy of Prosthodontics / v.46 no.4 / pp.343-350 / 2008
  • Purpose: The purpose of this study was to spectrophotometrically evaluate the shade difference between maxillary and mandibular anterior teeth in Koreans, by the standard of the Vita classical shade guide, using $SpectroShade^{TM}$. Materials and methods: In this study, the shades of healthy anterior teeth were examined and analyzed using the digital shade analysis of $SpectroShade^{TM}$. This study examined 80 individuals in their twenties, thirties, forties, and fifties (40 males and 40 females), each presenting 12 healthy, unrestored maxillary and mandibular anterior teeth. Tooth brushing and oral prophylaxis were performed prior to evaluation. The $SpectroShade^{TM}$ was used to acquire images of the 12 maxillary and mandibular anterior teeth. These images were analyzed using the $SpectroShade^{TM}$ software, and shade maps of each tooth were acquired. The shade differences between upper and lower teeth, between genders, and across ages were investigated and analyzed with the CIE $L^{*}a^{*}b^{*}$ color order system. A one-way ANOVA test was used to determine whether there were significant differences between the groups tested, and Scheffe multiple comparison was used to identify where the differences were. Results: 1. Shade differences were significant (P < .05) between maxillary and mandibular central incisors, lateral incisors, and canines. 2. No significant differences in shade distribution were seen between lateral incisors and central incisors. 3. The shade difference of canines was greater than that of central incisors and lateral incisors. 4. No significant differences in shade distribution were seen between genders for maxillary and mandibular central incisors, lateral incisors, and canines. 5. No significant differences in shade distribution were seen across age groups for maxillary and mandibular central incisors, lateral incisors, and canines. Conclusions: The results of this study show that 1. A shade difference was found between maxillary and mandibular anterior teeth, and the ${\Delta}E^{*}$ value was more than 2.0. 2. The shade difference of canines was greater than that of central incisors and lateral incisors, while the shade difference between central incisors and lateral incisors was not significant. 3. No significant differences in shade distribution were seen between genders for maxillary and mandibular anterior teeth. 4. No significant differences in shade distribution were seen across age groups for maxillary and mandibular anterior teeth.
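
For context, the shade comparison rests on the CIE 1976 color difference ${\Delta}E^{*}_{ab}$, the Euclidean distance in $L^{*}a^{*}b^{*}$ space; the abstract reports values above 2.0, a level often cited as perceptible. The sketch below uses made-up $L^{*}a^{*}b^{*}$ values, not measurements from the study.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference: Euclidean distance in L*a*b* space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Illustrative values only: a maxillary vs. a mandibular central incisor.
maxillary = (72.5, 1.8, 18.2)   # (L*, a*, b*)
mandibular = (70.1, 2.4, 16.9)
print(round(delta_e_ab(maxillary, mandibular), 2))
```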

Analysis on the source characteristics of three earthquakes near the Gyeongju area of South Korea in 1999 (1999년 경주 인근에서 3차례 발생한 지진들의 지진원 특성 분석)

  • Choi, Ho-Seon;Shim, Taek-Mo
    • The Journal of Engineering Geology / v.19 no.4 / pp.509-515 / 2009
  • Three earthquakes with local magnitude ($M_L$) greater than 3.0 occurred on April 24, June 2, and September 12, 1999 near the Gyeongju area. Redetermined epicenters were located within a radius of 1 km. We carried out waveform inversion analysis to estimate the focal mechanism of the June 2 event, and P and S wave polarity and amplitude ratio analysis to estimate the focal mechanisms of the April 24 and September 12 events. The June 2 and September 12 events had similar fault plane solutions. The fault plane solution of the April 24 event included those of the other two events, but its distribution range was relatively broad. The focal mechanisms of these events indicate strike-slip faulting with a small normal component. The P-axes of these events were oriented ENE-WSW, consistent with previous studies on the P-axis of the Korean Peninsula. Considering the distances between epicenters, the similarities of the seismic waves, and the identical polarities of seismic data recorded at common seismic stations, these events might have occurred on the same fault. The seismic moment of the June 2 event was estimated to be $3.9\;{\times}\;10^{14}\;N{\cdot}m$, which corresponds to moment magnitude ($M_W$) 3.7. The moment magnitude estimated by spectral analysis was 3.8, similar to that estimated by waveform inversion analysis. The average stress drop was estimated to be 7.5 MPa. The moment magnitudes of the April 24 and September 12 events were estimated to be 3.2 and 3.4 by comparing the spectra of these events recorded at a common seismic station.
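
For reference, the conversion from seismic moment to moment magnitude quoted above follows the standard Hanks-Kanamori relation; the sketch below reproduces the abstract's $M_W$ of about 3.7 from the stated seismic moment. This is a textbook formula, not a calculation taken from the paper itself.

```python
import math

def moment_magnitude(m0_newton_meters):
    """Hanks-Kanamori relation: Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# The abstract's seismic moment of 3.9e14 N*m gives Mw close to 3.7.
print(round(moment_magnitude(3.9e14), 1))
```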

The Evaluation of TrueX Reconstruction Method in Low Dose (저선량에서의 TrueX 재구성 방법에 의한 유용성 평가)

  • Oh, Se-Moon;Kim, Kye-Hwan;Kim, Seung-Jeong;Lee, Hong-Jae;Kim, Jin-Eui
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.2 / pp.83-87 / 2011
  • Purpose: PET/CT is now used in a variety of diagnostic areas, including oncology as well as cardiology, neurology, and others. As the importance of PET/CT increases, various studies have examined image quality in relation to the reconstruction method. We compared the iterative 2D reconstruction method with the Siemens TrueX reconstruction method through a phantom experiment, to see whether the clinical usefulness of PET/CT can be increased. Materials and Methods: We measured the contrast ratio and FWHM to evaluate images at different doses, using a Biograph 40 TruePoint PET/CT (Siemens, Germany). To obtain the contrast ratio and FWHM, we used a NEMA IEC PET body phantom (Data Spectrum Corp.) and a capillary tube. We applied both the current TrueX and the previous iterative 2D algorithm to all images, each acquired for 10 minutes. The clinically suitable parameters for iterative 2D and the Siemens-recommended parameters for TrueX were applied in the experiment. Results: The FWHM measured with the capillary tube was smaller with TrueX than with iterative 2D, and the difference in FWHM became larger at low doses. The contrast ratio measured with the NEMA IEC PET body phantom was better with TrueX than with iterative 2D, but showed no difference across doses. Conclusion: In this experiment, TrueX yielded a higher contrast ratio and better spatial resolution than iterative 2D. In particular, TrueX retained better resolution than iterative 2D at low doses, whereas the contrast ratio showed no specific difference. In other words, the TrueX reconstruction method has high clinical value in PET/CT because it can reduce patient exposure while providing better image quality.
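
As a rough illustration of the spatial-resolution measurement, the sketch below estimates FWHM by fitting a Gaussian to a one-dimensional line profile drawn across a capillary-tube image. The profile data, the Gaussian model, and the fitting choices are illustrative assumptions, not the analysis software used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma, offset):
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

def fwhm_from_profile(positions_mm, counts):
    """Fit a Gaussian to a 1D profile and return FWHM = 2*sqrt(2*ln 2)*sigma."""
    p0 = [counts.max() - counts.min(), positions_mm[np.argmax(counts)], 1.0, counts.min()]
    params, _ = curve_fit(gaussian, positions_mm, counts, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(params[2])

# Illustrative profile (positions in mm) with sigma = 2.0, i.e. FWHM ~ 4.71 mm.
x = np.linspace(-10, 10, 81)
y = gaussian(x, 1000.0, 0.0, 2.0, 50.0) + np.random.default_rng(1).normal(0, 5, x.size)
print(round(fwhm_from_profile(x, y), 2))
```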


The Evaluation of Clinical Usefulness on Application of Myocardial Extract in Quantitative Perfusion SPECT (QPS 프로그램에서 Myocardial extract 적용에 따른 임상적 유용성 평가)

  • Yun, Jong-Jun;Lim, Yeong-Hyeon;Lee, Mu-Seok;Song, Hyeon-Seok;Jeong, Ji-Uk;Park, Se-Yun;Kim, Jae-Hwan;Kim, Jeong-Uk
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.2 / pp.88-93 / 2011
  • Purpose: Differences have been reported depending on the analytical method used in the AutoQUANT software for quantitative evaluation of myocardial perfusion SPECT. Measurement errors between programs can therefore be expected even when quantitative analysis is performed on data from the same patient. The purpose of this study is to provide a comparative analysis of the myocardial extract method in Quantitative Perfusion SPECT (QPS). Materials and methods: We analyzed 51 patients (34 men, 17 women, mean age $66.5{\pm}9.9$) who underwent Tc-99m MIBI gated myocardial SPECT in the nuclear medicine department of Pusan National University Hospital from June to December 2010. We acquired extracted images with the myocardial extract protocol. The QPS program in the AutoQUANT software was used to measure TID (Transient Ischemic Dilation), ESD (Extent of Stress Defect), and SSS (Summed Stress Score), and the results were analyzed. Results: The correlations obtained when applying the myocardial extract were TID (r=0.98), ESD (r=0.99), and SSS (r=0.99). At the 95% confidence level there was no statistically significant difference (TID p=0.78, ESD p=0.31, SSS p=0.19). A blinded qualitative reading by a physician likewise showed no difference. Conclusion: The quantitative indices in the QPS program showed good correlation, and the results showed no statistically significant differences. The variance between methods was small; therefore, the functional parameters from each method can be used interchangeably. We also expect this to improve patient satisfaction.
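
For reference, the kind of agreement analysis reported above can be sketched as a Pearson correlation plus a paired test between the two measurement conditions; the sample arrays below are fabricated placeholders, not the study's data, and the study's exact statistical procedure is not specified in the abstract.

```python
from scipy import stats

# Placeholder values only: the same index (e.g., SSS) measured with and
# without the myocardial extract protocol for a handful of patients.
standard = [12, 5, 8, 20, 3, 15, 9]
extract = [11, 5, 9, 19, 3, 16, 9]

r, _ = stats.pearsonr(standard, extract)
t_stat, p_value = stats.ttest_rel(standard, extract)
print(f"Pearson r = {r:.2f}, paired t-test p = {p_value:.2f}")
```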


The Evaluation of Proficiency Test between Radioimmunoassay and Chemiluminescence Immunoassay (방사면역측정법과 화학발광면역측정법간의 숙련도 비교평가)

  • Noh, Gyeong-Woon;Kim, Tae-Hoon;Kim, Ji-Young;Kim, Hyun-Joo;Lee, Ho-Young;Choi, Joon-Young;Lee, Byoeng-Il;Choe, Jae-Gol;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.2 / pp.116-124 / 2011
  • Purpose: To establish an accurate external quality assurance (EQA) test, cross-institutional and cross-modality tests were performed using WHO certified reference material (CRM) and the same pooled patient serum. Materials and Methods: Accuracy and precision were evaluated using CRM and pooled patient serum for AFP, CEA, PSA, CA 125, CA 19-9, T3, T4, Tg, and TSH, by measuring recovery and the coefficient of variation. RIA tests were performed in five major RIA laboratories and EIA (CLIA) tests in five major EIA laboratories; the same CRM and pooled serum samples were delivered to each laboratory. Results: In 2009, the mean precision across the tumor markers was $14.8{\pm}4.2%$ for RIA and $19.2{\pm}6.9%$ for EIA (CLIA). In 2010, the mean precision for the five tumor markers and T3, T4, Tg, and TSH was $13.8{\pm}6.1%$ for RIA and $15.5{\pm}7.7%$ for EIA (CLIA); there was no significant difference between RIA and EIA. In RIA, the coefficients of variation (CV) of AFP, CEA, PSA, CA 125, T3, T4, and TSH were within 20%. The CV of CA 19-9 exceeded 20%, but showed no significant difference from EIA (CLIA) (p=0.345). In the recovery test using CRM, AFP, PSA, T4, and TSH showed 92~103% recovery in RIA. In the recovery test using commercial material, CEA, CA 125, and CA 19-9 showed relatively lower recovery than with CRM, but there was no significant difference between RIA and EIA (CLIA). Conclusion: By evaluating the precision and accuracy of each assay, the EQA test can more accurately measure assay quality and laboratory performance.
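
For reference, the two figures of merit used above, the coefficient of variation and the recovery rate, are computed as sketched below. The replicate values and the certified value are placeholders, not data from the proficiency test.

```python
import statistics

def coefficient_of_variation(values):
    """CV(%) = standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

def recovery(measured, certified):
    """Recovery(%) = measured concentration / certified (expected) value * 100."""
    return measured / certified * 100.0

# Placeholder numbers: repeated AFP measurements and one CRM recovery check.
afp_replicates = [9.8, 10.4, 10.1, 9.6, 10.2]   # ng/mL
print(round(coefficient_of_variation(afp_replicates), 1))
print(round(recovery(measured=48.7, certified=50.0), 1))  # certified value 50 ng/mL
```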


Evaluation of SUV Which was Estimated Using Mini PACS by PET/CT Scanners (PET/CT 장비 별 mini PACS에서 측정한 표준섭취계수(SUV)의 유용성 평가)

  • Park, Seung-Yong;Ko, Hyun-Soo;Kim, Jung-Sun;Jung, Woo-Young
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.2 / pp.47-52 / 2011
  • Purpose: Facilities use their own server or a mini PACS system for storage and analysis of PET/CT data. A mini PACS can store scan data as well as measure SUV. Therefore, this study was performed to confirm whether the SUV measured on a mini PACS equals that measured on the PET/CT workstation. Materials and Methods: In February 2011, 30 patients who underwent whole-body $^{18}F$-FDG PET/CT scans on the Biograph 16, Biograph 40, and Discovery Ste 8 were enrolled. First, SUV was measured in the liver and in the aorta at the mediastinal level using each workstation. Second, SUV was measured by the same method using the mini PACS. Results: The correlation coefficients of liver SUV between the PET/CT workstation and the mini PACS were 0.99, 0.98, and 0.64 for the Biograph 16, Biograph 40, and Discovery Ste 8 respectively, and those of aortic SUV were 0.98, 0.98, and 0.66, all showing positive correlations. The difference in SUV between the Biograph workstations and the mini PACS was not statistically significant at the 5% significance level, whereas the difference between the Discovery Ste 8 workstation and the mini PACS was statistically significant at the 5% level. Conclusion: When patients are scanned on different scanners, if the SUV formula in the mini PACS is corrected for each scanner, the mini PACS can be usefully employed to provide consistent quantitative assessment.
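
Differences in how the injected dose is decay-corrected to scan time are one common source of SUV discrepancies between systems; the sketch below shows the standard body-weight SUV with that correction. The numbers are illustrative, and the abstract does not state which correction caused the discrepancy on the Discovery Ste 8.

```python
F18_HALF_LIFE_S = 109.77 * 60  # physical half-life of F-18, in seconds

def decay_corrected_suv(roi_kbq_per_ml, injected_dose_mbq, body_weight_kg, uptake_time_s):
    """Body-weight SUV with the injected dose decay-corrected to scan time.

    SUV = C_roi / (D_injected * 2**(-t / T_half) / weight); with ROI activity in
    kBq/mL, dose in MBq, and weight in kg, the units cancel for unit tissue density.
    """
    dose_at_scan_mbq = injected_dose_mbq * 2.0 ** (-uptake_time_s / F18_HALF_LIFE_S)
    return roi_kbq_per_ml / (dose_at_scan_mbq / body_weight_kg)

# Illustrative numbers only: 5 kBq/mL in a liver ROI, 370 MBq injected,
# 70 kg patient, 60-minute uptake time.
print(round(decay_corrected_suv(5.0, 370.0, 70.0, 60 * 60), 2))
```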


Investigating Dynamic Mutation Process of Issues Using Unstructured Text Analysis (비정형 텍스트 분석을 활용한 이슈의 동적 변이과정 고찰)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.1-18 / 2016
  • Owing to the extensive use of Web media and the development of the IT industry, a large amount of data has been generated, shared, and stored. Nowadays, various types of unstructured data such as image, sound, video, and text are distributed through Web media. Therefore, many attempts have been made in recent years to discover new value through an analysis of these unstructured data. Among these types of unstructured data, text is recognized as the most representative method for users to express and share their opinions on the Web. In this sense, demand for obtaining new insights through text analysis is steadily increasing. Accordingly, text mining is increasingly being used for different purposes in various fields. In particular, issue tracking is being widely studied not only in the academic world but also in industry, because it can be used to extract various issues from text such as news articles and SNS (Social Network Services) posts and to analyze the trends of these issues. Conventionally, issue tracking is used to identify major issues sustained over a long period of time through topic modeling and to analyze the detailed distribution of documents involved in each issue. However, because conventional issue tracking assumes that the content composing each issue does not change throughout the entire tracking period, it cannot represent the dynamic mutation process of detailed issues that can be created, merged, divided, and deleted between these periods. Moreover, because only keywords that appear consistently throughout the entire period can be derived as issue keywords, concrete issue keywords such as "nuclear test" and "separated families" may be concealed by more general issue keywords such as "North Korea" in an analysis over a long period of time. This implies that many meaningful but short-lived issues cannot be discovered by conventional issue tracking. Note that detailed keywords are preferable to general keywords because the former can be clues for providing actionable strategies. To overcome these limitations, we performed an independent analysis on the documents of each detailed period. We generated an issue flow diagram based on the similarity of each issue between two consecutive periods. The issue transition pattern among categories was analyzed by using the category information of each document. We then applied the proposed methodology to a real case of 53,739 news articles and derived an issue flow diagram from the articles. We propose the following useful application scenarios for the issue flow diagram presented in the experiment section. First, we can identify an issue that actively appears during a certain period and promptly disappears in the next period. Second, the preceding and following issues of a particular issue can be easily discovered from the issue flow diagram. This implies that our methodology can be used to discover the association between inter-period issues. Finally, an interesting pattern of one-way and two-way transitions was discovered by analyzing the transition patterns of issues through category analysis. We found that a pair of mutually similar categories induces two-way transitions. In contrast, one-way transitions can be recognized as an indicator that issues in a certain category tend to be influenced by other issues in another category. For practical application of the proposed methodology, high-quality word and stop-word dictionaries need to be constructed.
In addition, not only the number of documents but also additional meta-information, such as the read counts, written time, and comments of documents, should be analyzed. A rigorous performance evaluation or validation of the proposed methodology should be performed in future work.
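
As a rough illustration of the linking step behind the issue flow diagram, the sketch below connects issues from one period to similar issues in the next period using cosine similarity over keyword-weight vectors. The keyword weights, issue names, and the similarity threshold are illustrative assumptions, not the settings used in the paper.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sparse keyword-weight dictionaries."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def link_issues(period_t, period_t_plus_1, threshold=0.3):
    """Connect each issue in period t to sufficiently similar issues in
    period t+1, producing the edges of an issue flow diagram."""
    edges = []
    for name_t, keywords_t in period_t.items():
        for name_next, keywords_next in period_t_plus_1.items():
            sim = cosine_similarity(keywords_t, keywords_next)
            if sim >= threshold:
                edges.append((name_t, name_next, round(sim, 2)))
    return edges

# Illustrative issues with made-up keyword weights (e.g., from per-period topic models).
march = {"issue_A": {"nuclear": 0.5, "test": 0.4, "sanction": 0.1},
         "issue_B": {"election": 0.6, "poll": 0.4}}
april = {"issue_C": {"nuclear": 0.4, "sanction": 0.4, "talks": 0.2},
         "issue_D": {"election": 0.5, "candidate": 0.5}}
print(link_issues(march, april))
```

An issue in period t with no outgoing edge can then be read as one that disappears, while an issue in period t+1 with no incoming edge can be read as newly created.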