
A New Hyper Parameter of Hounsfield Unit Range in Liver Segmentation

  • Kim, Kangjik (Dept. of Computer Science and Engineering, Kyonggi University) ;
  • Chun, Junchul (Dept. of Computer Science and Engineering, Kyonggi University)
  • Received : 2020.01.31
  • Accepted : 2020.04.21
  • Published : 2020.06.30

Abstract

Liver cancer is among the most fatal cancers worldwide. To diagnose liver cancer, a patient's condition is typically examined with radiation-based CT imaging. Diagnosing the liver on an abdominal CT scan requires segmentation, which radiologists originally performed manually; this consumed enormous time and was prone to human error. To automate the task, researchers applied image segmentation algorithms from computer vision, but these were still time-consuming because they were interactive and required manually tuned parameters. To reduce time and obtain more accurate segmentation, researchers have begun segmenting the liver in CT images using CNNs, which show strong performance across computer vision tasks. The pixel value of a CT image is the Hounsfield Unit (HU), a relative representation of radiation attenuation that usually ranges from about -2000 to 2000. Deep learning researchers commonly reduce or limit this range before training to remove noise and focus on the target organ. We observed that many liver segmentation studies limit the HU range, but each to a different interval, and hypothesized that performance could vary with the chosen range. In this paper, we propose treating the HU value range as a hyper parameter. U-Net and ResUNet were trained under otherwise identical conditions on the CHAOS dataset preprocessed with different HU range limits. The results confirmed that performance differs by HU range. This shows that the HU clipping range itself can act as a hyper parameter, meaning that for a given model there exist HU ranges that yield optimal performance.

Keywords

1. Introduction

Liver cancer is one of the most common cancers in the world and one of the most fatal diseases. The location and size of the liver in the patient's body are important for diagnosis and surgery. Knowing the liver's location is very helpful to doctors during surgery, and knowing its size supports appropriate judgment when resecting and treating the liver. For these reasons, segmentation that determines the location and size of organs such as the liver, and of liver tumors, has long been an issue in the medical field.

In general, the commonly used techniques for imaging a patient's organs are CT (Computed Tomography), which uses radiation, and MRI (Magnetic Resonance Imaging), which uses magnetic resonance. In this paper, we focus on CT images. A CT scan of a patient's body yields the abdomen as a uniformly spaced stack of tomographic slices. Initially, radiologists manually outlined the liver in each slice of the CT volume. However, this manual boundary marking is very time-consuming per slice, and humans are prone to mistakes. Accordingly, methods for segmenting organs in images accurately and automatically with computers have been studied in the field of medical imaging.

Segmentation of the liver in CT images is considered difficult in the medical imaging field for several reasons. First, the size and position of the liver differ greatly between patients. Some patients have small livers, while others have abnormal livers that occupy half of the abdomen. Second, the characteristics of the CT image itself. The images we are familiar with are 3-channel RGB images with values of 0 to 255 per channel. A CT image, by contrast, is a single-channel grayscale image whose values range between about -2000 and 2000, and the exact range may vary from hospital to hospital. Third, the contrast between organs in an abdominal CT image is very low, which makes it hard to distinguish organs from one another. Fourth, organs lie close together. The organs in CT images are so tightly packed that they are difficult to separate; in particular, the heart sits directly against the liver, so the border between the two is often hard to delineate. In Figure 1, the contrast between organs is low, and the heart, located on the right side, is closely attached to the liver on the left.


(Figure 1) CT (Computed Tomography) image of a patient's abdomen. Bones appear white, and the liver is the gray region on the left side.

Previous studies segmented the liver using information such as thresholds, color, and edges in CT images. This saved much of the effort and time of manual delineation. However, the constant values of these algorithms had to be set by hand, and differently for each image. In addition, segmentation performance did not meet expectations, and some algorithms were interactive methods that required human input, so they were not fully automated and remained partly hand crafted.

In recent years, the Convolutional Neural Network (CNN), a deep learning architecture, has solved many problems and improved performance in the field of computer vision. This has shown the potential of applying deep learning to many tasks such as image classification[1], object detection[2], and image segmentation[3]. Following this trend, studies on liver segmentation began to apply deep learning.


(Figure 2) (Left) Target CT image. (Right) Liver mask.

Representative medical image segmentation networks are FCN[4] and U-Net[5], and many researchers have built on U-Net when segmenting the liver. The CT dataset is preprocessed in many ways before training, and among these, limiting the range of the HU (Hounsfield Unit)[6] values is common. The Hounsfield unit is the numerical value of CT, a relative expression of how strongly each tissue attenuates the radiation passing through the body: for example, 0 for water, about 1000 for bone, and -1000 for air. Each organ in the abdomen occupies its own characteristic numerical range. Limiting the HU range makes the organs within that range more visible, which is a common method for analyzing and visualizing CT images, and researchers likewise limit the dataset's HU range before training deep learning networks. We observe that most studies choose the HU range by subjective judgment, so the ranges vary widely between studies. In this paper, we propose treating the range of HU values as a hyper parameter. We assume that there is a specific HU range that is optimal for each deep learning model. Therefore, we applied the HU ranges used in various liver segmentation studies and compared networks trained under otherwise identical conditions. This method can improve the performance of deep learning networks in medical imaging by finding a good range through dataset-aware preprocessing. Experiments with U-Net and ResUNet[7] across HU ranges confirmed that both the training process and the results differ per range. The best range for U-Net was [-150, 250] and the best range for ResUNet was [-130, 220]. This is evidence that the HU range of the CT image affects the model, and the difference in the best range between models means that the optimal HU range differs per model. Limiting the HU range should therefore be treated as an optimization problem, and the method is general because it can be applied to other organs.

2. Related Works

2.1 Previous Studies

Liver cancer is a fatal and dangerous cancer worldwide. Knowing the size and location of the patient's liver greatly helps doctors in diagnosis and surgery. Traditionally, radiologists manually delineated the borders of patients' livers in abdominal CT images. However, this took so much time that researchers began to apply computer vision segmentation algorithms for automation. In [8], the level-set algorithm is applied; in [9], the graph cut algorithm; and in [10], a threshold-based approach. However, these attempts still required considerable human interaction, so they were not fully automated and did not reach the expected accuracy.

2.2 Deep Learning based Method

In recent years, deep learning has solved many image-related problems, and efforts have been made to apply it in the medical imaging field. The most representative network in medical imaging is U-Net[5]. U-Net maintains high-resolution features through long skip connections, yielding good results on many medical image datasets. The original U-Net was a 2D network trained slice by slice. However, the slices of an abdominal CT volume are sequentially related, which a 2D U-Net cannot take into account. Considering that radiologists segment a CT slice by referring to the slices before and after it, 3D U-Net[11] and V-Net[12], which learn in volumetric form, appeared. 3D U-Net has the advantage of learning sequential context volumetrically, but the disadvantage of very high computation cost and memory usage. RA-UNet[13] composed a 3D U-Net from residual blocks and attention modules to cope with the cost of deepening the network. H-DenseUNet[14] proposed a framework in which 2D and 3D information can be used together, along with a DenseUNet that replaces the convolution blocks of U-Net with dense blocks.

We observed that there were studies of various methods based on U-Net, but their HU-range preprocessing differed widely: [15] limited the HU values to [-128, 128], [16] to [-200, 200], [17] to [-100, 400], [18] to [-150, 250], and [19] to [-200, 250]. We assume that performance depends on the HU range, and the comparison suggests treating the HU range as one hyper parameter to optimize. Through this, we also propose a method for improving the performance of a general deep learning network in the medical imaging field.

The experiment varied the HU range while fixing all other hyper parameters of U-Net, and the same experiment was repeated on ResUNet to confirm that performance also differs with the HU range in other models. This suggests that each model has its own best HU range.

3. Method

In this paper, we propose the possibility of optimizing the range of HU values by comparing network performance under preprocessing with various HU ranges. The baseline network is U-Net, and training was conducted under the same conditions throughout.

Section 3.1 describes the dataset used for training and testing, and Section 3.2 describes the Hounsfield Unit (HU). Section 3.3 describes how to preprocess the HU value range, and Section 3.4 describes the deep learning networks used for comparison. Section 3.5 describes the loss function used for training.

3.1 Dataset

Datasets related to liver segmentation include SLiver07, 3DIRCADB, the CHAOS (Combined Healthy Abdominal Organ Segmentation) Challenge, and the LiTS (Liver Tumor Segmentation) Challenge. In this study, we experimented with the dataset provided by the CHAOS Challenge. CHAOS provides both CT and MRI datasets; this study uses the CT dataset. It consists of abdominal DICOM-format data from 40 patients at 512 x 512 resolution, with an average of 90 CT images per patient. We split the dataset into training, validation, and test sets in a 7:2:1 ratio.
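The 7:2:1 split above can be sketched as follows. This is an illustrative sketch only: the helper name and seed are hypothetical, and it assumes the split is made per patient so that all slices of one patient stay in the same subset.

```python
import random

def split_patients(patient_ids, ratios=(0.7, 0.2, 0.1), seed=0):
    """Shuffle patient IDs and split them 7:2:1 into train/val/test.

    Splitting by patient (not by slice) keeps every slice of a patient
    in the same subset, avoiding leakage between train and test.
    """
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)
    n_train = round(len(ids) * ratios[0])
    n_val = round(len(ids) * ratios[1])
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

# 40 CHAOS patients -> 28 train, 8 validation, 4 test
train, val, test = split_patients(range(40))
```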

3.2 Hounsfield Unit (HU)

The CT value indicates the relative degree to which a substance attenuates X-rays during a CT scan. Bone is about 1000, water is 0, air is -1000, and the liver lies between roughly 40 and 60. These CT values are called Hounsfield units, after Godfrey N. Hounsfield, who invented CT. The liver of a given patient may show different values depending on the CT scanner; the liver HU value from one scanner may exceed 100 while from another it may fall below 100. CT images range from -2000 to 2000 and are single-channel grayscale images, giving about 4000 gray levels. Humans cannot recognize single-level differences and can distinguish only about 20 gray levels. Doctors and radiologists therefore often limit the displayed range to their target. Likewise, deep learning researchers working on medical images often limit the HU range, but very subjectively. In this study, we treated the HU range as an optimization variable and compared the ranges adopted by other studies.
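As background on how HU values arise, raw DICOM pixel values are mapped to HU with a scanner-provided linear rescale. The sketch below assumes typical values for the rescale slope and intercept (in real data they come from the DICOM RescaleSlope and RescaleIntercept tags, which vary per scanner):

```python
import numpy as np

def raw_to_hu(raw_pixels, slope=1.0, intercept=-1024.0):
    """Convert raw scanner pixel values to Hounsfield units.

    slope and intercept mirror the DICOM RescaleSlope / RescaleIntercept
    tags; the defaults here are only common illustrative values.
    """
    return np.asarray(raw_pixels, dtype=np.float64) * slope + intercept

# With these defaults: raw 0 -> -1024 HU (air-like),
# raw 1024 -> 0 HU (water), raw 2024 -> 1000 HU (bone-like)
hu = raw_to_hu([0, 1024, 2024])
```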

3.3 Preprocessing

In this paper, we compared HU ranges that were limited in various studies. Six ranges were taken from prior work: [-130, 220], [-128, 128], [-200, 200], [-100, 400], [-150, 250], and [-200, 250]. The training, validation, and test sets were all limited to the same HU range within each experiment. We also observed empirically that reducing the image size produces results similar to the original resolution. Therefore, to speed up training, we reduced the image size from 512 to 256. In addition, the training and validation data were shuffled for smooth training. No further normalization was applied. Figure 3 shows the CT images of each range mapped to 0-255 grayscale for display. Because some HU ranges are similar, the images may look alike and be hard to distinguish, but after preprocessing each image contains only values within its range.
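The HU-limiting step can be sketched as below. `limit_hu` only clips voxels to the chosen window (no further normalization, as stated above); `to_grayscale` is just the display mapping used to render Figure 3. Function names are illustrative.

```python
import numpy as np

def limit_hu(volume, hu_range=(-150, 250)):
    """Clip every voxel to the chosen HU window; out-of-range values
    collapse to the window bounds, suppressing irrelevant tissue."""
    lo, hi = hu_range
    return np.clip(volume, lo, hi)

def to_grayscale(volume, hu_range=(-150, 250)):
    """Map a windowed volume to 0-255 for display, as in Figure 3.
    (For training, the clipped HU values are used without normalization.)"""
    lo, hi = hu_range
    return ((limit_hu(volume, hu_range) - lo) / (hi - lo) * 255).astype(np.uint8)
```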


(Figure 3) Six CT images according to the range of HU values, ordered from top left to bottom right: [-130, 220], [-100, 400], [-128, 128], [-150, 250], [-200, 200], [-200, 250].

3.4 Neural Network

The segmentation network used to compare HU ranges is U-Net[5], the most representative network in medical imaging. U-Net has an encoder-decoder form and, through long skip connections, preserves the resolution that is otherwise lost as the layers deepen. The network is trained end to end. Figure 4 shows a simplified U-Net architecture. When a 256 x 256 image enters the input, the encoder performs downsampling after each convolution block, and the decoder restores the original resolution through convolution and upsampling. The encoder feature map at the same resolution as each decoder stage strengthens the features via the long skip connection. To test the hypothesis of this study, the comparison was also performed with ResUNet[7], which uses residual modules in the U-Net structure.

The ResUNet encoder applies ResNet-18 for feature extraction, using weights pre-trained on ImageNet. The remaining parts are identical to [7].


(Figure 4) Simplified representation of the U-Net architecture. The input size is 256. Blue arrows indicate convolution operations, and blue squares feature maps. The orange arrow is the long skip connection, and the orange square is the encoder feature map that is added to the decoder section.
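The resolution flow described above can be illustrated with a small bookkeeping sketch. The depth of 4 is an assumption for illustration (not taken from the paper): each encoder level halves the resolution, and each decoder level doubles it back and concatenates the encoder map of identical size, which is the long skip connection in Figure 4.

```python
def unet_resolutions(input_size=256, depth=4):
    """Track feature-map resolution through a U-Net of the given depth.

    Returns the encoder sizes (halving per level), the decoder sizes
    (doubling back up), and the matched resolutions joined by skips.
    """
    encoder = [input_size >> i for i in range(depth + 1)]  # e.g. [256, 128, 64, 32, 16]
    decoder = encoder[::-1]                                # [16, 32, 64, 128, 256]
    # Each decoder stage concatenates the encoder map of the same size.
    skips = [(d, d) for d in decoder[1:]]
    return encoder, decoder, skips
```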

3.5 Loss Function

The most commonly used loss in medical image segmentation is the dice coefficient loss[20], which measures the degree of overlap between two binary images. However, the dice coefficient loss is sometimes unreliable; for example, it behaves poorly when the prediction is made against an empty background. To mitigate this, some studies use the Jaccard loss[21] or BCE[22], alone or jointly with the dice loss. In this study, binary cross entropy was combined with the dice coefficient loss to compensate for it. Binary cross entropy is the average cross entropy over each pixel of the ground-truth and predicted images. The final loss is composed of 50% binary cross entropy loss and 50% dice coefficient loss, which enables more stable training.
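The joint loss can be sketched in NumPy as follows. This illustrates the formulas only, not the actual PyTorch implementation used in the experiments; the epsilon term is a standard smoothing assumption to keep empty masks well-defined.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|); eps keeps empty masks well-defined."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def joint_loss(pred, target, eps=1e-7):
    """0.5 * binary cross entropy + 0.5 * dice loss, as in Section 3.5."""
    p = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    bce = -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))
    return 0.5 * bce + 0.5 * (1.0 - dice_score(pred, target, eps))
```

A perfect prediction drives both terms toward zero, while an all-background prediction against a non-empty mask is penalized by the BCE term even where the dice term alone would be degenerate.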

4. Experimental Results

We preprocessed the CHAOS dataset with different HU ranges, then trained and evaluated U-Net and ResUNet for comparison. Section 4.1 introduces the implementation, including the hyper parameters and optimizer of each network. Section 4.2 presents the comparison results for each network.
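The overall protocol amounts to a sweep over the six HU windows with everything else fixed. In the minimal sketch below, `train_model` and `dice_score_on_test` are hypothetical placeholders for the real training and evaluation routines:

```python
# The six HU windows compared in this paper (Section 3.3).
HU_RANGES = [(-130, 220), (-128, 128), (-200, 200),
             (-100, 400), (-150, 250), (-200, 250)]

def sweep_hu_ranges(train_model, dice_score_on_test, ranges=HU_RANGES):
    """Train one model per HU window and return the best window by dice score.

    train_model(hu_range) and dice_score_on_test(model) stand in for the
    actual U-Net/ResUNet training and test-set evaluation.
    """
    scores = {r: dice_score_on_test(train_model(r)) for r in ranges}
    return max(scores, key=scores.get), scores
```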

4.1 Implementation

U-Net and ResUNet differ only in structure; all other hyper parameters are identical. Training ran for 50 epochs with the Adam[23] optimizer and a learning rate of 0.0001. All experiments were performed on a single GTX 1080Ti using the PyTorch deep learning framework.

4.2 Comparison of HU value

In this paper, we compared the range of HU values in U-Net[5] and ResUNet[7].


(Figure 5) Dice and BCE joint loss per epoch for U-Net. The HU range is indicated by color. The speed and degree of convergence differ depending on the range.

Figure 5 shows the convergence per epoch for each HU range. The convergence can vary slightly depending on the HU range. This shows that results may differ by the HU-limiting preprocessing, which means the HU range can indeed be treated as a hyper parameter, as argued above. Models were evaluated by the average dice coefficient score[20] on the test set. As with the dice coefficient loss in Section 3.5, the dice loss is the negative of the dice score. The dice coefficient score accuracy for each limited HU range with U-Net can be seen in Table 1.

As shown in Table 1, HU [-150, 250] gave the best performance at 99.4415%, and the worst range was [-130, 220]. The difference between the highest and lowest accuracy was about 0.02%. This means results vary with the range and performance can be improved by choosing it. We then tested each trained model on test data preprocessed with every HU range.

(Table 1) Test set accuracy according to HU range at U-Net

OTJBCD_2020_v21n3_103_t0001.png 이미지

As shown in Table 2, accuracy differed for every model-data combination. Interestingly, the model trained on [-150, 250], which had the highest training accuracy, was most accurate on test data preprocessed to [-200, 250]. Other cells likewise show differing results. This means that performance may even improve when the preprocessing differs between training and testing.
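The cross-evaluation that produces the Table 2 matrix can be sketched as a model-by-dataset grid; `score_fn` is a placeholder for the dice-score evaluation, and the dict keys stand in for HU ranges:

```python
def cross_evaluate(models, test_sets, score_fn):
    """Build a Table 2-style matrix: every model trained on one HU range
    is scored on test data preprocessed with every HU range.

    models and test_sets are dicts keyed by HU range; score_fn(model, data)
    stands in for evaluating one model on one preprocessed test set.
    """
    return {m: {t: score_fn(model, data)
                for t, data in test_sets.items()}
            for m, model in models.items()}
```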

(Table 2) Accuracy of data preprocessed in different ranges, for models trained on each HU range with U-Net. The column headings refer to the model trained on each range; the row headings give the HU range of the test data. Unlike the other tables, values are reported as 100 minus accuracy, as in the loss, to reveal subtle differences; lower values are therefore more accurate.



(Figure 6) Dice and BCE joint loss per epoch for ResUNet. ResUNet tends to train more stably than U-Net.

Figure 6 shows the joint loss per epoch when training ResUNet with each HU range, and Table 3 shows the accuracy of each range as in the U-Net experiment. In the case of ResUNet, the differences appear more clearly. As the table shows, ResUNet, like U-Net, gave different accuracy per range; the best accuracy, 99.88821%, was obtained in the range [-130, 220].

This supports our hypothesis in a model other than U-Net as well. Moreover, the different best ranges in U-Net and ResUNet suggest that each network has its own range for best performance. As Table 3 shows, the best ResUNet range is about 0.08% more accurate than the worst.

As in Table 2, Table 4 shows the accuracy of the ResUNet models trained on each HU range when tested on data preprocessed with different ranges. Every preprocessed dataset shows different accuracy on each model. What is unique here is that, unlike U-Net, ResUNet tends to produce better results on test data with the same preprocessing as in training.

(Table 3) Test set accuracy according to HU range at ResUNet


5. Conclusion

In this study, we compared methods of limiting the HU range for segmenting the liver in patients' abdominal CT images. The CHAOS Challenge dataset was used, with U-Net and ResUNet as the comparison models. Because the HU-limiting preprocessing differed across prior studies, we assumed that an optimal HU range exists and can be considered a hyper parameter. In the experiments, dice accuracy differed for each HU range with U-Net, and the best range was [-150, 250], which supports our assumption. ResUNet's performance likewise differed per range, with [-130, 220] the most accurate. That U-Net and ResUNet achieve their best performance in different ranges suggests that the optimal preprocessing is not the same for every network. The comparison proved that the optimal HU range in CT images may differ from model to model and should be fully considered as a hyper parameter.

The differences observed in the experiment may seem small. Note, however, that the compared ranges overlap considerably and number only six, so the comparison is limited. In fact, the space of possible HU ranges is enormous. We could not compare them all, so we compared only six, but the observed differences show that our assumption is correct and give clues that can help find optimal values in the future.

(Table 4) Accuracy of data preprocessed in different ranges, for models trained on each HU range with ResUNet. The horizontal and vertical axes are the same as in Table 2.


It is not easy to optimize preprocessing in the medical imaging field, where the modalities are far more varied than in general images; the optimal HU range in particular is difficult to find. In future studies, AutoML[24] will be applied to automatically find the optimal preprocessing method for CT imaging in the medical imaging field.

References

  1. Yann LeCun, Leon Bottou, Yoshua Bengio, Patrick Haffner, "Gradient-based learning applied to document recognition", Proceedings of the IEEE, Vol.86, No.11, pp.2278-2324, 1998. https://doi.org/10.1109/5.726791
  2. Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. "You Only Look Once: Unified, Real-Time Object Detection", CVPR, 2016. https://doi.org/10.1109/cvpr.2016.91
  3. Zhang, W., Li, R., Deng, H., Wang, L., Lin, W., Ji, S., Shen D.,"Deep convolutional neural networks for multi-modality isointense infant brain image segmentation", NeuroImage, Vol.108, pp. 214-224, 2015. https://doi.org/10.1016/j.neuroimage.2014.12.061
  4. Long, J., Shelhamer, E., & Darrell, T. , "Fully Convolutional Networks for Semantic Segmentation", CVPR, 2015. https://doi.org/10.1109/cvpr.2015.7298965
  5. Olaf Ronneberger, Philipp Fischer, Thomas Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation", MICCAI, pp.234-241, 2015. https://doi.org/10.1007/978-3-319-24574-4_28
  6. Kamaruddin, N., Rajion, Z. A., Yusof, A., & Aziz, M. E., "Relationship between Hounsfield unit in CT scan and gray scale", CBCT, 2016. https://doi.org/10.1063/1.4968860
  7. Xu, W., Liu, H., Wang, X., & Qian, Y., "Liver Segmentation in CT based on ResUNet with 3D Probabilistic and Geometric Post Process", ICSIP, 2019. https://doi.org/10.1109/siprocess.2019.8868690
  8. Lee, J., Kim, N., Lee, H., Seo, J. B., Won, H. J., Shin, Y. M.,. "Efficient liver segmentation using a level-set method with optimal detection of the initial liver boundary from level-set speed images", Computer Methods and Programs in Biomedicine, Vol.88, No.1, pp.26-38, 2007. https://doi.org/10.1016/j.cmpb.2007.07.005
  9. Lu, F., Wu, F., Hu, P., Peng, Z., & Kong, D."Automatic 3D liver location and segmentation via convolutional neural network and graph cut", International Journal of Computer Assisted Radiology and Surgery, Vol.12, No.2, pp.171-182, 2016. https://doi.org/10.1007/s11548-016-1467-3
  10. Seo, K.-S., Kim, H.-B., Park, T., Kim, P.-K., & Park, J.-A., "Automatic Liver Segmentation of Contrast Enhanced CT Images Based on Histogram Processing", Lecture Notes in Computer Science, pp. 1027-1030, 2005. https://doi.org/10.1007/11539087_135
  11. Ozgun Cicek, Ahmed Abdulkadir, "3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation", MICCAI, Vol.9901, pp.424-432, 2016. https://doi.org/10.1007/978-3-319-46723-8_49
  12. Milletari, F., Navab, N., & Ahmadi, S.-A., "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation", 2016 Fourth International Conference on 3D Vision (3DV), IEEE, 2016. https://doi.org/10.1109/3dv.2016.79
  13. Qiangguo Jin, Zhaopeng Meng, Changming Sun, Leyi Wei, Ran Su,"A hybrid deep attention-aware network to extract liver and tumor in CT scans", Arxiv preprint 1811.01328, 2018. https://arxiv.org/pdf/1811.01328.pdf
  14. Xiaomeng Li, Hao Chen, "H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation from CT Volumes", IEEE Transactions on Medical Imaging, Vol.37, No.12, pp.2663-2674, 2018. https://doi.org/10.1109/tmi.2018.2845918
  15. Fang Lu, Fa Wu, "Automatic 3D liver location and segmentation via convolutional neural networks and graph cut", International Journal of Computer Assisted Radiology and Surgery, Vol.12, No.2, pp.171-182, 2016. https://doi.org/10.1007/s11548-016-1467-3
  16. Zhe Liu, Yu-Qing Song, "Liver CT sequence segmentation based with improved U-Net and graph cut", Expert Systems with Applications, Vol.126, pp.54-63, 2019. https://doi.org/10.1016/j.eswa.2019.01.055
  17. Patrick Ferdinand Christ, Florian Ettlinger, "Automatic Liver Tumor Segmentation of CT and MRI Volumes Using Cascaded Fully Convolutional Neural Networks", Arxiv preprint 1702.05970, 2017. https://arxiv.org/abs/1702.05970
  18. Patrick Ferdinand Christ, Mohamed Ezzeldin A. Elshaer, Florian Ettlinger, "Automatic Liver and Lesion segmentation in CT Using Cascaded Fully Convolutinoal Neural Networks and 3D Conditional Random Fields", MICCAI, pp.415-423, 2016. https://doi.org/10.1007/978-3-319-46723-8_48
  19. Miriam Bellver, Kevis-Kokitsi Maninis, et al., "Detection-aided liver lesion segmentation using deep learning", Arxiv preprint 1711.11069, 2017. https://arxiv.org/abs/1711.11069
  20. Soomro, T. A., Hellwich, O., Afifi, A. J., Paul, M., Gao, J., & Zheng, L., "Strided U-Net Model: Retinal Vessels Segmentation using Dice Loss", 2018 Digital Image Computing: Techniques and Applications (DICTA), 2018. https://doi.org/10.1109/dicta.2018.8615770
  21. Yuan, Y., Chao, M., & Lo, Y.-C., "Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance", IEEE Transactions on Medical Imaging, Vol.36, No.9, pp.1876-1886, 2017. https://doi.org/10.1109/tmi.2017.2695227
  22. Antonia Creswell, Kai Arulkumaran, Anil A. Bharath, "On denoising autoencoders trained to minimise binary cross-entropy", Arxiv Preprint 1708.08487, 2017. https://arxiv.org/pdf/1708.08487.pdf
  23. Zhang, Z., "Improved Adam Optimizer for Deep Neural Networks", International Symposium on Quality of Service (IWQoS), 2018. https://doi.org/10.1109/iwqos.2018.8624183
  24. Barret Zoph, Quoc V. Le, "Neural architecture search with reinforcement learning", Arxiv Preprint 1611.01578, 2016. https://arxiv.org/pdf/1611.01578.pdf
  25. Yoon-Jin Lee, Myoung-Hoon Lee, In-June Jo, "GNUnet improvement for anonymity supporting in large multimedia file", Journal of Internet Computing and Services, Vol.7, No.5, pp.71-80, 2006. http://www.jics.or.kr/digital-library/422
  26. Nguyen Tran Lan Anh, Guee-Sang Lee, "Color Object Segmentation using Distance Regularized Level Set", Journal of Internet Computing and Services, Vol.13, No.4, pp.53-62, 2012. http://www.jics.or.kr/digital-library/947 https://doi.org/10.7472/jksii.2012.13.4.53
  27. Jaejoon Seo, Junchul Chun, Jin-Sung Lee, "An Automatic Mobile Cell Counting System for the Analysis of Biological Image", Journal of Internet Computing and Services, Vol.16, No.1, pp.39-46, 2015. http://www.jics.or.kr/digital-library/1120 https://doi.org/10.7472/jksii.2015.16.1.39
