• Title/Summary/Keyword: Subtraction method

$^{99m}Tc-MAG_3$ Elimination Index on Normal Functioning Transplanted Kidney (Functional evaluation of transplanted kidneys using the $^{99m}Tc-MAG_3$ elimination index)

  • Jeon, Woo-Jin;Kim, Ju-Heon;Park, Mi-Ok;Lee, Hee-Jung;Hyun, Jung-Ae;Zeon, Seok-Kil
    • The Korean Journal of Nuclear Medicine / v.29 no.1 / pp.79-83 / 1995
  • Purpose: We analysed $^{99m}Tc-MAG_3$ renal scans to evaluate the function of transplanted kidneys and to detect various renal transplant complications, measuring the ratio of renal radioactivity at 3 minutes to that at 20 minutes (the elimination index, EI). Materials and Methods: Fifty-seven renal transplant recipients were studied: 50 normally functioning transplanted kidneys (group I) and 7 abnormally functioning transplanted kidneys, comprising 5 cases of acute rejection and 2 cases of acute tubular necrosis (group II). The protocol consisted of: (1) intravenous injection of 740 MBq of $^{99m}Tc-MAG_3$; (2) sequential imaging for 2 min (60 two-second images) followed by 30 min (30 sixty-second images); (3) drawing a region of interest (ROI) on the renal image; (4) generating time-activity curves from the renal ROI after background subtraction, from which the time of maximum activity ($T_{max}$) and the half time of peak radioactivity ($T_{1/2}$) were derived; (5) determining the EI from the renogram curve by the Bischof-Delaloye method. Results: The normal group (I) showed a mean EI of 2.21 (95.0% confidence limits 2.01-2.87), $T_{max}$ of 154 sec, and $T_{1/2}$ of 1,139 sec. The abnormal group (II) showed a mean EI of 0.74, $T_{max}$ of 1,466 sec, and $T_{1/2}$ of 19,224 sec. The EI, $T_{max}$, $T_{1/2}$, BUN, and serum creatinine values differed significantly between the normal group (I) and the abnormal group (II) (p<0.0001). Conclusion: By measuring the EI with $^{99m}Tc-MAG_3$, the function of a transplanted kidney can be easily evaluated and various complications can be detected early.
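The abstract gives the definitions but not an implementation; below is a minimal Python sketch of the renogram metrics, assuming a background-subtracted time-activity curve sampled at known times. The function name `renogram_metrics` and the synthetic example curve are illustrative, not from the paper.

```python
import numpy as np

def renogram_metrics(t, a):
    """Compute elimination index (EI), T_max, and T_1/2 from a
    background-subtracted renal time-activity curve.

    t : 1-D array of sample times in seconds
    a : 1-D array of ROI counts at those times (background subtracted)
    """
    # EI: ratio of renal activity at 3 min to activity at 20 min
    a3 = np.interp(180.0, t, a)
    a20 = np.interp(1200.0, t, a)
    ei = a3 / a20

    # T_max: time of the peak of the renogram curve
    i_peak = int(np.argmax(a))
    t_max = t[i_peak]

    # T_1/2: first time after the peak at which activity falls to
    # half of its peak value (may not be reached in the study window)
    half = a[i_peak] / 2.0
    below = np.nonzero(a[i_peak:] <= half)[0]
    t_half = t[i_peak + below[0]] if below.size else None

    return ei, t_max, t_half

# Example with a synthetic curve: rapid uptake, exponential washout
t = np.arange(0, 1920, 60.0)                      # 32 one-minute samples
a = 1000 * (1 - np.exp(-t / 90.0)) * np.exp(-t / 900.0)
print(renogram_metrics(t, a))
```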

Utility Evaluation of Supportive Devices for Interventional Lower Extremity Angiography

  • Kong, Chang gi;Song, Jong Nam;Jeong, Moon Taek;Han, Jae Bok
    • Journal of the Korean Society of Radiology / v.13 no.4 / pp.613-621 / 2019
  • The purpose of this study is to evaluate the effectiveness of supportive devices designed to minimize patient movement during lower extremity angiography, and to verify phantom image quality by analyzing the SNR and CNR of mask, DSA, and roadmap images. Comparing the SNR and CNR of mask images obtained by the DSA technique with the phantom alone and with the phantom placed on the supportive devices, there was no significant difference: about 0~0.06 for SNR and about 0~0.003 for CNR. For DSA images of the blood-vessel-model water phantom alone versus the water phantom placed on a device, the differences were about 0.11~0.35 for SNR and 0.016~0.031 for CNR. Analyzing the SNR and CNR of the roadmap technique for the water phantom on each supportive device (hardboard paper, pomax, polycarbonate, acrylic) versus the water phantom alone, there was no significant difference: 0.02~0.05 for SNR and 0.002~0.004 for CNR. In conclusion, image quality did not differ significantly whether or not supportive devices made of hardboard paper, pomax, polycarbonate, or acrylic were used. Supportive devices that minimize patient movement may reduce the total amount of contrast, the examination time, and radiation exposure, and eliminate risk factors during angiography. Devices made of hardboard paper can be applied easily during angiography thanks to their reasonable price and simple processing. When an operator makes and uses supportive devices, it is worth considering cost efficiency and the types and properties of materials in accordance with the purpose and method of the examination.
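The abstract reports SNR and CNR differences but does not spell out the formulas. The sketch below assumes the common ROI-based definitions, SNR = mean(signal)/std(background) and CNR = |mean(signal) − mean(background)|/std(background); these definitions, the function name `snr_cnr`, and the toy phantom are assumptions, not taken from the paper.

```python
import numpy as np

def snr_cnr(image, signal_roi, background_roi):
    """ROI-based SNR and CNR for a 2-D image.

    signal_roi, background_roi : boolean masks, same shape as image.
    SNR = mean(signal) / std(background)
    CNR = |mean(signal) - mean(background)| / std(background)
    """
    sig = image[signal_roi]
    bg = image[background_roi]
    snr = sig.mean() / bg.std()
    cnr = abs(sig.mean() - bg.mean()) / bg.std()
    return snr, cnr

# Toy example: bright disc (vessel model) on a noisy background
rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, size=(128, 128))
yy, xx = np.mgrid[:128, :128]
disc = (yy - 64) ** 2 + (xx - 64) ** 2 < 15 ** 2
img[disc] += 40.0

background = yy < 20              # corner strip well away from the disc
print(snr_cnr(img, disc, background))
```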

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationship between regions by extracting each region's features from the overall information of the image. However, a CNN may not be suitable for emotional image data that lack distinctive regional features. To address the difficulty of classifying emotion images, researchers propose CNN-based architectures tailored to emotion images every year. Studies on the relationship between color and human emotion have also been conducted, showing that different emotions are induced by different colors, and some deep learning studies have applied color information to image sentiment classification: using an image's color information in addition to the image itself improves classification accuracy over training on the image alone.

This study proposes two ways to increase accuracy by correcting the result value after the model classifies an image's emotion; both modify the result value based on statistics over the colors of the picture. In the first method, the most prevalent two-color combination is found for every training image before training, and the distribution of color combinations per class is stored in a Python dictionary for use during testing; at test time, the most prevalent two-color combination of each test image is found and the result values are corrected according to the color-combination distribution. The second method weights the result value obtained after the model classifies an image's emotion using expressions based on the log and exponential functions.

Emotion6, classified into six emotions, and ArtPhoto, classified into eight categories, were used as the image data. DenseNet169, MnasNet, ResNet101, ResNet152, and VGG19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black.

Using scikit-learn's clustering, the seven colors most prevalent in each image are extracted; each cluster's RGB coordinates are then compared with the RGB coordinates of the 16 reference colors and converted to the closest one (minimal sketches of the color-extraction and score-correction steps follow the abstract). If combinations of three or more colors are used, too many combinations occur, the distribution becomes scattered, and each combination has too little influence on the result value; therefore, two-color combinations are used and weighted into the model. During the test, the most prevalent two-color combination of each test image is found, its distribution over the training data is checked, and the result is corrected. Several equations were devised to weight the model's result value based on the extracted colors.
The data set was randomly split 80:20, and the model was verified using the 20% held out as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, and the model was trained five times with a different validation fold each time; finally, performance was checked on the held-out test set. Adam was used as the optimizer, and the learning rate was set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five consecutive epochs, the run was stopped; early stopping was set to restore the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN than when the CNN architecture was used alone.
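A minimal sketch of the color-extraction step described above, using scikit-learn's KMeans as the abstract indicates. The reference-palette RGB coordinates and the helper name `dominant_colors` are illustrative assumptions; the abstract does not give the exact values the authors used.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative RGB coordinates for the 16 reference colors; the
# paper's exact coordinate values are not given in the abstract.
PALETTE = {
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "indigo": (75, 0, 130),
    "purple": (128, 0, 128), "turquoise": (64, 224, 208),
    "pink": (255, 192, 203), "magenta": (255, 0, 255),
    "brown": (139, 69, 19), "gray": (128, 128, 128),
    "silver": (192, 192, 192), "gold": (255, 215, 0),
    "white": (255, 255, 255), "black": (0, 0, 0),
}

def dominant_colors(image, k=7):
    """Cluster an H x W x 3 RGB image into k colors (seven in the
    abstract) and map each cluster centre to the nearest named color."""
    pixels = image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    names = list(PALETTE)
    coords = np.array(list(PALETTE.values()), dtype=float)
    nearest = [names[np.argmin(((coords - c) ** 2).sum(axis=1))]
               for c in km.cluster_centers_]
    # order clusters by pixel coverage, most common first
    counts = np.bincount(km.labels_, minlength=k)
    return [nearest[i] for i in np.argsort(-counts)]

img = np.random.randint(0, 256, size=(64, 64, 3))   # stand-in image
print(dominant_colors(img)[:2])                     # two-color combination
```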
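The abstract states that the correction weights the model's output with expressions based on the log and exponential functions, but does not give the equations. The following is one hedged illustration of the general idea, assuming softmax class probabilities and per-class frequencies of the image's two-color combination taken from the training-set dictionary; the function name `correct_scores`, the `alpha` knob, and the specific log1p/exp form are assumptions, not the paper's formulas.

```python
import numpy as np

def correct_scores(probs, combo_freq, alpha=0.5):
    """Illustrative two-stage correction (the paper's exact equations
    are not given in the abstract).

    probs      : softmax class probabilities from the CNN, shape (C,)
    combo_freq : relative frequency of the image's two-color combination
                 in each class over the training data, shape (C,)
    alpha      : strength of the color-based correction (assumed knob)
    """
    # log1p keeps rare combinations from zeroing a class out entirely,
    # and exp turns the additive bonus back into a multiplicative weight
    weights = np.exp(alpha * np.log1p(combo_freq))
    corrected = probs * weights
    return corrected / corrected.sum()

probs = np.array([0.40, 0.35, 0.25])        # CNN output for 3 classes
combo_freq = np.array([0.05, 0.60, 0.35])   # e.g. ("blue", "white") per class
print(correct_scores(probs, combo_freq))
```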