• Title/Abstract/Keyword: Subset


The Evaluation of Reconstruction Method Using Attenuation Correction Position Shifting in 3D PET/CT (PET/CT 3D 영상에서 감쇠보정 위치 변화 방법을 이용한 영상 재구성법의 평가)

  • Hong, Gun-Chul;Park, Sun-Myung;Jung, Eun-Kyung;Choi, Choon-Ki;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.2
    • /
    • pp.172-176
    • /
    • 2010
  • Purpose: Patient motion during a PET/CT scan causes a mismatch in attenuation correction (AC), which degrades quantitative accuracy. This study evaluated the utility of a reconstruction method that shifts the AC position when AC is mismatched because the emission scan position differs from that of the transmission scan in 3D PET/CT imaging. Materials and Methods: On a GE Discovery STE16 scanner, insertion spaces for a 1 mL syringe were created at ${\pm}2$, 6, and 10 cm along the x and y axes from the central point of a polystyrene phantom ($20{\times}20{\times}110$ cm). After the transmission scan of the syringe filled with $^{18}F$-FDG at 5 kBq/mL, emission scans were acquired with the syringe position shifted, and images were obtained with AC applied at the shifted position. Images were reconstructed iteratively with 2 iterations and 20 subsets, and decay correction for elapsed time was applied to every emission data set. ROIs were then set at each syringe position, and the radioactivity concentration (kBq/mL) and %Difference (%D) relative to the central point were compared at each position. Results: The radioactivity concentration at the central point of the emission scan was 2.30 kBq/mL; along the +x axis it was 1.95, 1.82, and 1.75 kBq/mL, along the -x axis 2.07, 1.75, and 1.65 kBq/mL, along the +y axis 2.07, 1.87, and 1.90 kBq/mL, and along the -y axis 2.17, 1.85, and 1.67 kBq/mL. %D was 15, 20, and 23% for the +x axis, 9, 23, and 28% for the -x axis, 12, 21, and 20% for the +y axis, and 8, 22, and 29% for the -y axis. With the AC position-shifting method, the concentrations were 2.00, 1.95, and 1.80 kBq/mL along the +x axis, 2.25, 2.15, and 1.90 kBq/mL along the -x axis, 2.07, 1.90, and 1.90 kBq/mL along the +y axis, and 2.10, 2.02, and 1.72 kBq/mL along the -y axis, while %D was 13, 15, and 21% for the +x axis, 2, 6, and 17% for the -x axis, 9, 17, and 17% for the +y axis, and 8, 12, and 25% for the -y axis.
Conclusion: Under AC mismatch, the AC position-shifting method increased the radioactivity concentration by an average of 0.14 and 0.03 kBq/mL along the x and y axes, respectively, and improved %D by 6.1% and 4.2%. The farther a position was from the central point, the lower the measured radioactivity concentration, reflecting the loss of spatial resolution with distance from the center. Since attenuation is greater in actual clinical imaging, the error from AC mismatch is expected to be larger there. Therefore, for lesions in regions where AC is mismatched, applying the AC position-shifting method should reduce the error in radioactivity concentration.
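The %Difference (%D) values above can be reproduced from the reported concentrations. A minimal sketch, assuming %D is the absolute deviation from the central-point concentration expressed as a percentage (the abstract does not state the formula explicitly):

```python
def percent_difference(measured_kbq_ml, central_kbq_ml):
    """Deviation of a measured radioactivity concentration from the
    central-point concentration, as a percentage (assumed %D definition)."""
    return abs(central_kbq_ml - measured_kbq_ml) / central_kbq_ml * 100.0

central = 2.30                 # central-point concentration, kBq/mL
plus_x = [1.95, 1.82, 1.75]    # +x axis measurements at +2, +6, +10 cm

for conc in plus_x:
    print(round(percent_difference(conc, central), 1))
```

With this definition the +x values come out near the reported 15, 20, and 23% (15.2, 20.9, 23.9 before rounding), so the reported figures appear to be rounded or truncated.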


Prediction of Life Expectancy for Terminally Ill Cancer Patients Based on Clinical Parameters (말기 암 환자에서 임상변수를 이용한 생존 기간 예측)

  • Yeom, Chang-Hwan;Choi, Youn-Seon;Hong, Young-Seon;Park, Yong-Gyu;Lee, Hye-Ree
    • Journal of Hospice and Palliative Care
    • /
    • v.5 no.2
    • /
    • pp.111-124
    • /
    • 2002
  • Purpose: Although average life expectancy has increased with advances in medicine, cancer mortality is on an increasing trend, and consequently so is the number of terminally ill cancer patients. Predicting survival is an important issue in their care, since the treatment chosen by patients, their families, and physicians varies significantly with the expected survival. We therefore investigated prognostic factors for mortality risk in terminally ill cancer patients, to help guide treatment by predicting the survival period. Methods: We investigated 31 clinical parameters in 157 terminally ill cancer patients admitted to the Department of Family Medicine, National Health Insurance Corporation Ilsan Hospital, between July 1, 2000 and August 31, 2001. Survival as of October 31, 2001 was confirmed from medical records and personal data. Survival rates and median survival times were estimated by the Kaplan-Meier method, and the log-rank test was used to compare survival rates across each clinical parameter. Cox's proportional hazards model was used to determine the most predictive subset of the prognostic factors among the clinical parameters affecting the risk of death. The mean, median, first quartile, and third quartile of expected lifetime were predicted with a Weibull proportional hazards regression model. Results: Of the 157 patients, 79 (50.3%) were male. The mean age was $65.1{\pm}13.0$ years in males and $64.3{\pm}13.7$ years in females. The most prevalent cancer was gastric cancer (36 patients, 22.9%), followed by lung cancer (27, 17.2%) and cervical cancer (20, 12.7%).
Survival time decreased in the presence of the following factors: mental change, anorexia, hypotension, poor performance status, leukocytosis, neutrophilia, elevated serum creatinine, hypoalbuminemia, hyperbilirubinemia, elevated SGPT, prolonged prothrombin time (PT), prolonged activated partial thromboplastin time (aPTT), hyponatremia, and hyperkalemia. Among these, poor performance status, neutrophilia, and prolonged PT and aPTT were significant prognostic factors for the risk of death according to Cox's proportional hazards model. The predicted median life expectancy was 3.0 days when all four of these factors were present, $5.7{\sim}8.2$ days with three, $11.4{\sim}20.0$ days with two, $27.9{\sim}40.0$ days with one, and 77 days when none were present. Conclusions: In terminally ill cancer patients, the prognostic factors associated with reduced survival time were poor performance status, neutrophilia, prolonged PT, and prolonged aPTT. These four prognostic factors enabled prediction of life expectancy in terminally ill cancer patients.
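The Kaplan-Meier estimator used for the survival rates has a simple product-limit form. A minimal illustrative sketch on hypothetical follow-up data (not the study's data or code):

```python
from collections import Counter

def kaplan_meier(times, events):
    """Product-limit estimate: S(t) is the product over event times t_i <= t
    of (1 - d_i / n_i), where d_i = deaths at t_i, n_i = subjects at risk."""
    deaths = Counter(t for t, e in zip(times, events) if e == 1)
    s, curve = 1.0, []
    for t in sorted(deaths):
        at_risk = sum(1 for ti in times if ti >= t)
        s *= 1.0 - deaths[t] / at_risk
        curve.append((t, s))
    return curve

# Hypothetical follow-up in days; event 1 = death observed, 0 = censored
times  = [3, 5, 5, 8, 12, 16, 16, 20, 27, 30]
events = [1, 1, 0, 1,  1,  1,  0,  1,  1,  0]
curve = kaplan_meier(times, events)
for t, s in curve:
    print(t, round(s, 3))  # survival drops at each observed death time
```

Censored subjects (event = 0) leave the risk set without triggering a drop, which is exactly what distinguishes this estimate from a naive death fraction.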


Quantitative Assessment Technology of Small Animal Myocardial Infarction PET Image Using Gaussian Mixture Model (다중가우시안혼합모델을 이용한 소동물 심근경색 PET 영상의 정량적 평가 기술)

  • Woo, Sang-Keun;Lee, Yong-Jin;Lee, Won-Ho;Kim, Min-Hwan;Park, Ji-Ae;Kim, Jin-Su;Kim, Jong-Guk;Kang, Joo-Hyun;Ji, Young-Hoon;Choi, Chang-Woon;Lim, Sang-Moo;Kim, Kyeong-Min
    • Progress in Medical Physics
    • /
    • v.22 no.1
    • /
    • pp.42-51
    • /
    • 2011
  • Nuclear medicine imaging (SPECT, PET) is a widely used tool for assessing myocardial viability and perfusion; however, it is difficult to define the myocardial infarct region accurately. The purpose of this study was to investigate a methodological approach to automatic measurement of rat myocardial infarct size using a polar map with an adaptive threshold. A rat myocardial infarction model was induced by ligation of the left circumflex artery. PET images were obtained after intravenous injection of 37 MBq $^{18}F$-FDG. After a 60 min uptake period, each animal was scanned for 20 min with ECG gating. PET data were reconstructed using 2D ordered-subset expectation maximization (OSEM). QGS software (Cedars-Sinai Medical Center) was used to delineate the myocardial contour automatically and generate the polar map. The reference infarct size was defined as the infarcted percentage of the total left myocardium on TTC staining. Three threshold methods were compared: a predefined threshold, Otsu's method, and a multi-Gaussian mixture model (MGMM). The predefined threshold method, commonly used in other studies, was applied with threshold values from 10% to 90% in steps of 10%. Otsu's algorithm selects the threshold that maximizes the between-class variance. The MGMM method estimates the distribution of image intensity using multiple Gaussian mixture models (MGMM2, ${\cdots}$, MGMM5) and computes an adaptive threshold. The infarct size in the polar map was calculated as the percentage of the polar map area below the threshold relative to the total polar map area. The infarct sizes measured with the different threshold methods were evaluated against the reference infarct size. The mean differences between the polar map defect size at predefined thresholds of 20%, 30%, and 40% and the reference infarct size were $7.04{\pm}3.44%$, $3.87{\pm}2.09%$, and $2.15{\pm}2.07%$, respectively; for Otsu's method the difference was $3.56{\pm}4.16%$, and for the MGMM methods $2.29{\pm}1.94%$.
The predefined threshold of 30% showed the smallest mean difference from the reference infarct size; however, MGMM was more accurate than the predefined threshold for cases with a reference infarct size under 10% (MGMM: 0.006%, predefined threshold: 0.59%). In this study, we evaluated myocardial infarct size in the polar map using a multi-Gaussian mixture model. The MGMM method provides an adaptive threshold for each subject and should be useful for automatic measurement of infarct size.
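Otsu's criterion referenced above picks the threshold that maximizes the between-class variance of the intensity histogram. An illustrative pure-Python sketch on toy "polar map" intensities (not the study's implementation):

```python
def otsu_threshold(values, bins=256):
    """Return the intensity threshold maximizing between-class variance."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(bins):
        w0 += hist[t]             # class-0 weight: pixels at or below bin t
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return lo + (best_t + 1) * width

# Toy bimodal uptake values: infarcted (low) vs. viable (high) myocardium
values = [0.1, 0.15, 0.2, 0.18, 0.12] * 20 + [0.7, 0.8, 0.75, 0.85, 0.9] * 20
thr = otsu_threshold(values)
infarct_pct = 100 * sum(v < thr for v in values) / len(values)
print(round(thr, 3), infarct_pct)  # threshold lands between the two modes
```

The "infarct size" line mirrors the abstract's definition: the fraction of the map below the threshold, expressed as a percentage of the total area.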

Clinical Implication of Cyclooxygenase-2 Expression for Rectal Cancer Patients with Lymph Node Involvement (림프절 전이를 동반한 직장암 환자들에서 Cyclooxygenase-2 발현의 임상적 의미)

  • Lee, Hyung-Sik;Choi, Young-Min;Hur, Won-Joo;Kim, Su-Jin;Kim, Dae-Cheol;Roh, Mee-Sook;Hong, Young-Seoub;Park, Ki-Jae
    • Radiation Oncology Journal
    • /
    • v.27 no.4
    • /
    • pp.210-217
    • /
    • 2009
  • Purpose: To assess the influence of cyclooxygenase-2 (COX-2) expression on the survival of rectal cancer patients with lymph node metastasis. Materials and Methods: The study included rectal cancer patients treated with radical surgery and postoperative radiotherapy at Dong-A University Hospital from 1998 to 2004; a retrospective analysis was performed on the subset of patients with lymph node metastasis. Eight of the 86 patients were excluded (missing tissue samples in three, malignant melanoma in one, treatment for gastric cancer around one year before diagnosis in one, detection of lung cancer one year after diagnosis in one, liver metastasis in one, and refusal of radiotherapy after 720 cGy in one), leaving 78 patients for analysis. Immunohistochemistry for COX-2 was performed with an autostainer (BenchMark; Ventana, Tucson, AZ, USA). Slides were scanned (ScanScope; Aperio, Vista, CA, USA) and analyzed with an image analyzer (TissueMine; Bioimagene, Cupertino, CA, USA). Survival was analyzed with the Kaplan-Meier method, and significance was evaluated with the log-rank test. Results: COX-2 staining was positive in 62 patients (79.5%) and negative in 16 (20.5%). Six (7.7%), 15 (19.2%), and 41 (52.6%) patients showed COX-2 expression of grades 1, 2, and 3, respectively. No correlation was found between COX-2 positivity and patient characteristics, including age (<60 years vs. $\geq$60), sex, operative method (abdominoperineal resection vs. lower anterior resection), degree of differentiation, tumor size (<5 cm vs. $\geq$5 cm), T stage, N stage, and overall stage (IIIa, IIIb, IIIc). The 5-year overall and disease-free survival rates for the entire cohort were 57.0% and 51.6%, respectively. The 5-year overall survival rates for the COX-2-positive and COX-2-negative patients were 53.0% and 72.9%, respectively (p=0.146).
Likewise, the 5-year disease-free survival rates for the COX-2-positive and COX-2-negative patients were 46.3% and 72.7%, respectively (p=0.118). The 5-year overall survival rates differed significantly (p<0.05) by degree of differentiation, N stage, and stage, whereas the 5-year disease-free survival rates differed significantly by N stage and stage. Conclusion: Neither COX-2 positivity nor the degree of COX-2 expression had a significant influence on the survival of rectal cancer patients with lymph node metastasis, whereas N stage and stage did significantly influence survival. Analysis of a larger sample is necessary to verify the effect of COX-2 expression on the survival of rectal cancer patients with lymph node involvement.
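The log-rank test used here compares two survival curves by tallying observed versus expected deaths at each event time. A small illustrative two-group log-rank statistic on hypothetical data (not the study's analysis code):

```python
def logrank_chi2(times_a, events_a, times_b, events_b):
    """Two-group log-rank chi-square statistic (1 degree of freedom)."""
    data = [(t, e, 0) for t, e in zip(times_a, events_a)] + \
           [(t, e, 1) for t, e in zip(times_b, events_b)]
    event_times = sorted({t for t, e, _ in data if e == 1})
    obs_a = exp_a = var = 0.0
    for t in event_times:
        n_a = sum(1 for ti, _, g in data if ti >= t and g == 0)  # at risk, A
        n_b = sum(1 for ti, _, g in data if ti >= t and g == 1)  # at risk, B
        d_a = sum(1 for ti, e, g in data if ti == t and e == 1 and g == 0)
        d_b = sum(1 for ti, e, g in data if ti == t and e == 1 and g == 1)
        n, d = n_a + n_b, d_a + d_b
        obs_a += d_a
        exp_a += d * n_a / n                 # expected deaths in A under H0
        if n > 1:                            # hypergeometric variance term
            var += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    return (obs_a - exp_a) ** 2 / var

# Hypothetical survival in months; event 1 = death, 0 = censored
chi2 = logrank_chi2([6, 7, 10, 15, 19, 25], [1, 1, 1, 1, 0, 1],
                    [10, 18, 24, 30, 36, 40], [1, 0, 1, 1, 0, 0])
print(round(chi2, 2))  # compare with the chi-square(1) cutoff 3.84 for p = 0.05
```

Group A dies earlier than group B in this toy data, so the statistic comes out near the 5% significance cutoff; censored subjects contribute to the risk sets but never to the death counts.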

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has come to be recognized in the information society, the use and collection of information have become important. A facial expression, like an artistic painting, can carry information that would take thousands of words to describe. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In academia, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy; this is inevitable, since MRA can capture only a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and for the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extension of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ${\varepsilon}$) to the model prediction.
Using SVR, we built a model that measures the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data. Preprocessing steps were then taken to select statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ${\varepsilon}$-insensitive loss function and used a grid search to find the optimal values of the parameters C, d, ${\sigma}^2$, and ${\varepsilon}$. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum of the ANN were set to 10%, and sigmoid functions were used as the transfer functions of the hidden and output nodes. We repeated the experiments while varying the number of hidden nodes over n/2, n, 3n/2, and 2n, where n is the number of input variables, and the stopping condition for the ANN was set to 50,000 learning events. MAE (Mean Absolute Error) was used as the measure for performance comparison. In the experiments, SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN; regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR performed best. ANN also outperformed MRA, but showed considerably lower prediction accuracy than SVR for both target variables. The findings of this research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
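The ${\varepsilon}$-insensitive loss that makes SVR depend only on a subset of the training data can be stated in two lines: errors inside the ${\varepsilon}$ tube contribute nothing, so the corresponding points never become support vectors. A small sketch alongside MAE, the comparison measure used in the study (illustrative values, not the study's data):

```python
def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """SVR loss: only the part of each error exceeding eps is penalized."""
    return sum(max(0.0, abs(t - p) - eps) for t, p in zip(y_true, y_pred))

def mae(y_true, y_pred):
    """Mean Absolute Error, the performance measure used for comparison."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [0.2, 0.5, 0.9, 0.4]
y_pred = [0.25, 0.45, 0.7, 0.4]   # three predictions fall inside the eps tube

print(round(eps_insensitive_loss(y_true, y_pred), 3))  # 0.1: only 0.9 vs 0.7 counts
print(round(mae(y_true, y_pred), 3))                   # 0.075: every error counts
```

The contrast between the two numbers is the point: MAE charges every residual, while the ${\varepsilon}$-insensitive loss zeroes out the three residuals of 0.05 or less.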

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part to provide customers with higher-value services that are context-relevant (e.g., to place and time). Information technologies continue to mature and develop, providing greatly improved performance, and sensor networks and intelligent software can now obtain context data, the cornerstone of personalized, context-specific services. Yet the danger of personal information overflow is increasing, because the data retrieved by sensors usually contain private information, and various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also increased. These privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, the information privacy factors proposed for context-aware applications to date have at least two shortcomings. First, there has been little overview of the technical characteristics of context-aware computing; existing studies have focused on only a small subset of these characteristics. Consequently, there has been no mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications.
Second, most studies have identified information privacy factors through user surveys, despite the limits of users' knowledge of and experience with context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge of context-aware technology even with aids such as scenarios, pictures, or flash animations. A survey that assumes its participants have sufficient experience with or understanding of the technologies it presents may therefore not be valid. Moreover, some surveys rest on simplifying and hence unrealistic assumptions (e.g., considering only location information as context data). A better understanding of information privacy concern in context-aware personalized services is thus needed. Hence, the purpose of this paper is to identify a generic set of factors for information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider the overall technology characteristics in order to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal information privacy concern factors and their priorities. An international panel of researchers and practitioners with expertise in privacy and context-aware systems took part in the research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski.
This involved three general rounds: (1) brainstorming important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. In the first round, experts were treated as individuals rather than as a panel. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a mutually exclusive set of factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; some of the main factors found in the literature were presented to the participants. The second round of the questionnaire took the main factors provided in the first round and fleshed them out with relevant sub-factors drawn from a literature survey; respondents were then asked to evaluate each sub-factor's suitability against its corresponding main factor to determine the final sub-factors from among the candidates. Final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and respondents were asked to assess the importance of each main factor and its sub-factors. Finally, we calculated the mean rank of each item to produce the final result. In analyzing the data, we focused on group consensus rather than individual insistence; to that end, a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identity data were the most important main factor and sub-factor, respectively.
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the information privacy point of view. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helps clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with the greatest potential to increase users' privacy concerns. Second, this study considered privacy issues in service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent of users' privacy concerns. The traditional questionnaire method was not chosen because, for context-aware personalized services, users were considered to lack understanding of and experience with the new technology. For understanding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and sensor networks as the most important of the technical characteristics of context-aware personalized services.
In creating context-aware personalized services, this study demonstrates the importance of determining an optimal methodology, including which technologies are needed and in what sequence, to acquire the needed types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, alongside the development of context-aware technology. The results of this study show, however, that in terms of users' privacy it is necessary to pay greater attention to the activities that acquire context information. Following the evaluation of the sub-factors, additional studies will be needed on approaches to reducing users' privacy concerns about technical characteristics such as a highly identifiable level of identity data, diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users under the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors in qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
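The concordance analysis used to track expert consensus across Delphi rounds is commonly Kendall's coefficient of concordance W; the abstract does not name the exact statistic, so W here is an assumption. A sketch with hypothetical expert rankings:

```python
def kendalls_w(rankings):
    """Kendall's W for m judges ranking the same n items (no ties):
    W = 12 * S / (m**2 * (n**3 - n)), where S is the sum of squared
    deviations of the per-item rank sums from their mean."""
    m, n = len(rankings), len(rankings[0])
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean = sum(rank_sums) / n
    s = sum((rs - mean) ** 2 for rs in rank_sums)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical: 3 experts rank 4 privacy-concern factors (1 = most important)
rounds_3_experts = [[1, 2, 3, 4],
                    [1, 3, 2, 4],
                    [2, 1, 3, 4]]
w = kendalls_w(rounds_3_experts)
print(round(w, 3))  # 0.778: fairly strong agreement (W = 1 is perfect consensus)
```

Tracking W across successive rounds shows whether the panel is converging; a rising W is the usual stopping signal for a Delphi study.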