• Title/Summary/Keyword: Deep Learning Dataset


Deep Learning-Based Assessment of Functional Liver Capacity Using Gadoxetic Acid-Enhanced Hepatobiliary Phase MRI

  • Hyo Jung Park;Jee Seok Yoon;Seung Soo Lee;Heung-Il Suk;Bumwoo Park;Yu Sub Sung;Seung Baek Hong;Hwaseong Ryu
    • Korean Journal of Radiology
    • /
    • v.23 no.7
    • /
    • pp.720-731
    • /
    • 2022
  • Objective: We aimed to develop and test a deep learning algorithm (DLA) for fully automated measurement of the volume and signal intensity (SI) of the liver and spleen using gadoxetic acid-enhanced hepatobiliary phase (HBP)-magnetic resonance imaging (MRI) and to evaluate the clinical utility of DLA-assisted assessment of functional liver capacity. Materials and Methods: The DLA was developed using HBP-MRI data from 1014 patients. Using an independent test dataset (110 internal and 90 external MRI data), the segmentation performance of the DLA was measured using the Dice similarity score (DSS), and the agreement between the DLA and the ground truth for the volume and SI measurements was assessed with a Bland-Altman 95% limit of agreement (LOA). In 276 separate patients (male:female, 191:85; mean age ± standard deviation, 40 ± 15 years) who underwent hepatic resection, we evaluated the correlations between various DLA-based MRI indices, including liver volume normalized by body surface area (LVBSA), liver-to-spleen SI ratio (LSSR), MRI parameter-adjusted LSSR (aLSSR), LSSR × LVBSA, and aLSSR × LVBSA, and the indocyanine green retention rate at 15 minutes (ICG-R15), and determined the diagnostic performance of the DLA-based MRI indices to detect ICG-R15 ≥ 20%. Results: In the test dataset, the mean DSS was 0.977 for liver segmentation and 0.946 for spleen segmentation. The Bland-Altman 95% LOAs were 0.08% ± 3.70% for the liver volume, 0.20% ± 7.89% for the spleen volume, -0.02% ± 1.28% for the liver SI, and -0.01% ± 1.70% for the spleen SI. Among DLA-based MRI indices, aLSSR × LVBSA showed the strongest correlation with ICG-R15 (r = -0.54, p < 0.001), with area under receiver operating characteristic curve of 0.932 (95% confidence interval, 0.895-0.959) to diagnose ICG-R15 ≥ 20%. Conclusion: Our DLA can accurately measure the volume and SI of the liver and spleen and may be useful for assessing functional liver capacity using gadoxetic acid-enhanced HBP-MRI.
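The Dice similarity score (DSS) reported above measures volumetric overlap between a predicted and a ground-truth segmentation mask. A minimal sketch of the standard definition, 2|A∩B| / (|A| + |B|), on toy binary masks (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity score between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 4x4 "liver" masks: the prediction recovers 3 of 4 ground-truth voxels.
truth = np.zeros((4, 4), dtype=int)
truth[1:3, 1:3] = 1              # 4 positive voxels
pred = truth.copy()
pred[1, 1] = 0                   # miss one voxel
score = dice_score(pred, truth)  # 2*3 / (3+4) ≈ 0.857
```

A DSS of 0.977, as reported for liver segmentation, therefore corresponds to near-total voxel overlap with the ground truth.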

3D Point Cloud Reconstruction Technique from 2D Image Using Efficient Feature Map Extraction Network (효율적인 feature map 추출 네트워크를 이용한 2D 이미지에서의 3D 포인트 클라우드 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.26 no.3
    • /
    • pp.408-415
    • /
    • 2022
  • In this paper, we propose a technique for reconstructing a 3D point cloud from 2D images using an efficient feature map extraction network. The originality of the proposed method is as follows. First, we use a new feature map extraction network that is about 27% more memory-efficient than existing techniques. The proposed network does not downsample the image in the middle of the deep learning network, so important information required for 3D point cloud reconstruction is not lost. We solved the memory increase caused by the non-reduced image size by reducing the number of channels and by configuring the deep learning network to be efficient and shallow. Second, by preserving the high-resolution features of the 2D image, accuracy can be improved over the conventional technique: the feature map extracted from the non-reduced image contains more detailed information than in existing methods, which further improves the reconstruction accuracy of the 3D point cloud. Third, we use a divergence loss that does not require shooting information. Because conventional methods require not only the 2D image but also the shooting angle for learning, the dataset must contain this additional information, which makes it difficult to construct. In this paper, the accuracy of 3D point cloud reconstruction is instead increased by increasing the diversity of information through randomness, without additional shooting information. Evaluated objectively on the ShapeNet dataset with the same protocol as the comparative papers, the proposed method achieved a CD value of 5.87, an EMD value of 5.81, and a FLOPs value of 2.9G. Lower CD and EMD values mean the reconstructed 3D point cloud is closer to the original, and a lower FLOPs count means the deep learning network requires less memory. The CD, EMD, and FLOPs results of the proposed method therefore show about a 27% improvement in memory and 6.3% in accuracy compared to the methods in other papers, demonstrating its performance objectively.
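The CD metric used in the evaluation above is the Chamfer distance: for each point, the squared distance to its nearest neighbour in the other cloud, averaged in both directions. A minimal numpy sketch (the function name and toy clouds are illustrative; the paper's EMD metric additionally requires an optimal point matching and is not shown):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N,3) and q (M,3):
    mean nearest-neighbour squared distance, summed over both directions."""
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Identical clouds give CD = 0; displacing one point raises it.
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 1.0]])
cd_same = chamfer_distance(a, a)  # → 0.0
cd_diff = chamfer_distance(a, b)
```

Lower values mean the reconstruction lies closer to the reference cloud, which is why the paper reports CD alongside EMD as an accuracy measure.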

Dual CNN Structured Sound Event Detection Algorithm Based on Real Life Acoustic Dataset (실생활 음향 데이터 기반 이중 CNN 구조를 특징으로 하는 음향 이벤트 인식 알고리즘)

  • Suh, Sangwon;Lim, Wootaek;Jeong, Youngho;Lee, Taejin;Kim, Hui Yong
    • Journal of Broadcast Engineering
    • /
    • v.23 no.6
    • /
    • pp.855-865
    • /
    • 2018
  • Sound event detection is a research area that models human auditory cognitive characteristics by recognizing events in an environment with multiple acoustic events and determining the onset and offset time of each event. DCASE, a research community on acoustic scene classification and sound event detection, runs challenges to encourage researchers' participation and to stimulate sound event detection research. However, the dataset provided by the DCASE challenge is relatively small compared to ImageNet, the representative dataset for visual object recognition, and there are few open-source acoustic datasets. In this study, sound events that can occur indoors and outdoors were collected on a larger scale and annotated to construct a dataset. Furthermore, to improve performance on the sound event detection task, we developed a dual CNN structured sound event detection system by adding a supplementary neural network to a convolutional neural network to determine the presence of sound events. Finally, we conducted a comparative experiment against the baseline systems of both the DCASE 2016 and 2017 challenges.
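Determining onset and offset times, as described above, typically means converting a frame-wise activity decision into time-stamped events. A minimal sketch of that post-processing step, assuming a hypothetical frame hop and binary per-frame predictions (not the paper's actual pipeline):

```python
def events_from_frames(activity, hop_sec=0.02):
    """Convert a frame-wise binary activity sequence into (onset, offset)
    event times in seconds; hop_sec is the assumed hop between frames."""
    events = []
    start = None
    for i, active in enumerate(activity):
        if active and start is None:
            start = i                                  # event begins
        elif not active and start is not None:
            events.append((start * hop_sec, i * hop_sec))  # event ends
            start = None
    if start is not None:                              # event runs to the end
        events.append((start * hop_sec, len(activity) * hop_sec))
    return events

frames = [0, 0, 1, 1, 1, 0, 1, 1]
events = events_from_frames(frames, hop_sec=0.5)
# → [(1.0, 2.5), (3.0, 4.0)]
```

In the dual-CNN design above, the supplementary presence network would gate such frame decisions before events are extracted.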

CNN-based Recommendation Model for Classifying HS Code (HS 코드 분류를 위한 CNN 기반의 추천 모델 개발)

  • Lee, Dongju;Kim, Gunwoo;Choi, Keunho
    • Management & Information Systems Review
    • /
    • v.39 no.3
    • /
    • pp.1-16
    • /
    • 2020
  • The current tariff return system requires taxpayers to calculate the tax amount themselves and to pay it on their own responsibility. In other words, the duty and responsibility of the reporting payment system are, in principle, imposed only on the taxpayer, who is required to calculate and pay the tax accurately. If the taxpayer fails to fulfill this duty and responsibility, additional tax is imposed by collecting the tax shortfall. For this reason, item classification, together with tariff assessment, is among the most difficult tasks and can pose a significant risk to entities if items are misclassified; import reports are therefore consigned to customs brokers, who are customs experts, for a substantial fee. The purpose of this study is to classify the HS items to be reported upon import declaration and to indicate the HS codes to be recorded on the import declaration. HS items were classified using the images attached to the item classification determination cases published by the Korea Customs Service. For image classification, CNN, a deep learning algorithm commonly used for image recognition, was applied, using the Vgg16, Vgg19, ResNet50, and Inception-V3 models. To improve classification accuracy, two datasets were created: Dataset1 comprised the five HS codes with the most images, and Dataset2 comprised five types within Chapter 87, the chapter with the most images among the two-digit HS codes. Classification accuracy was highest when trained on Dataset2 with the Inception-V3 model, while ResNet50 had the lowest classification accuracy. This study demonstrated the feasibility of HS item classification based on the first image registered in each item classification determination case, and its second contribution is that HS item classification, which had not been attempted before, was performed with CNN models.
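The study above fine-tunes pretrained CNNs (Vgg16/19, ResNet50, Inception-V3) on HS-code images. As a minimal, framework-free sketch of the underlying transfer-learning idea, one can keep a pretrained feature extractor frozen and train only a small classification head on its outputs; everything here (the random-projection stand-in for a backbone, the toy data, all names) is hypothetical and not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_features(images):
    """Stand-in for a frozen pretrained backbone (e.g. a CNN): a fixed
    nonlinear projection whose weights are never updated."""
    W = np.random.default_rng(42).standard_normal((images.shape[1], 8))
    return np.tanh(images @ W)

# Hypothetical 2-class toy data in place of HS-code images.
X = rng.standard_normal((40, 16))
y = (X[:, 0] > 0).astype(int)

F = frozen_features(X)                 # features from the frozen backbone
w = np.zeros(F.shape[1])               # trainable classification head
b = 0.0
for _ in range(500):                   # plain gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    grad = p - y
    w -= 0.1 * F.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = float(((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y).mean())
```

In practice the frozen backbone would be a network such as Inception-V3 with its final layer replaced, as in the study.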

Automated Lung Segmentation on Chest Computed Tomography Images with Extensive Lung Parenchymal Abnormalities Using a Deep Neural Network

  • Seung-Jin Yoo;Soon Ho Yoon;Jong Hyuk Lee;Ki Hwan Kim;Hyoung In Choi;Sang Joon Park;Jin Mo Goo
    • Korean Journal of Radiology
    • /
    • v.22 no.3
    • /
    • pp.476-488
    • /
    • 2021
  • Objective: We aimed to develop a deep neural network for segmenting lung parenchyma with extensive pathological conditions on non-contrast chest computed tomography (CT) images. Materials and Methods: Thin-section non-contrast chest CT images from 203 patients (115 males, 88 females; age range, 31-89 years) between January 2017 and May 2017 were included in the study, of which 150 cases had extensive lung parenchymal disease involving more than 40% of the parenchymal area. Parenchymal diseases included interstitial lung disease (ILD), emphysema, nontuberculous mycobacterial lung disease, tuberculous destroyed lung, pneumonia, lung cancer, and other diseases. Five experienced radiologists manually drew the margin of the lungs, slice by slice, on CT images. The dataset used to develop the network consisted of 157 cases for training, 20 cases for development, and 26 cases for internal validation. Two-dimensional (2D) U-Net and three-dimensional (3D) U-Net models were used for the task. The network was trained to segment the lung parenchyma as a whole and segment the right and left lung separately. The University Hospitals of Geneva ILD dataset, which contained high-resolution CT images of ILD, was used for external validation. Results: The Dice similarity coefficients for internal validation were 99.6 ± 0.3% (2D U-Net whole lung model), 99.5 ± 0.3% (2D U-Net separate lung model), 99.4 ± 0.5% (3D U-Net whole lung model), and 99.4 ± 0.5% (3D U-Net separate lung model). The Dice similarity coefficients for the external validation dataset were 98.4 ± 1.0% (2D U-Net whole lung model) and 98.4 ± 1.0% (2D U-Net separate lung model). In 31 cases, where the extent of ILD was larger than 75% of the lung parenchymal area, the Dice similarity coefficients were 97.9 ± 1.3% (2D U-Net whole lung model) and 98.0 ± 1.2% (2D U-Net separate lung model). 
Conclusion: The deep neural network achieved excellent performance in automatically delineating the boundaries of lung parenchyma with extensive pathological conditions on non-contrast chest CT images.

Real-time Artificial Neural Network for High-dimensional Medical Image (고차원 의료 영상을 위한 실시간 인공 신경망)

  • Choi, Kwontaeg
    • Journal of the Korean Society of Radiology
    • /
    • v.10 no.8
    • /
    • pp.637-643
    • /
    • 2016
  • Due to the popularity of artificial intelligence, medical image processing using artificial neural networks is increasingly attracting the attention of academic and industrial researchers. Deep learning with a convolutional neural network has proven to be a very effective representation of images. However, the training process requires a high-performance hardware platform; thus, real-time learning of a large number of high-dimensional samples on low-power devices is a challenging problem. In this paper, we attempt to establish this possibility by presenting a real-time neural network method on a Raspberry Pi using an online sequential extreme learning machine. Our experiments on a high-dimensional dataset show that the proposed method achieves almost real-time execution.
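An online sequential extreme learning machine, as used above, keeps a random, fixed hidden layer and updates only the linear output weights with recursive least squares as data chunks arrive, which is what makes it cheap enough for low-power devices. A minimal numpy sketch of the standard OS-ELM recursion (class and variable names are illustrative, and the regression toy task is not the paper's medical data):

```python
import numpy as np

rng = np.random.default_rng(1)

class OSELM:
    """Minimal online sequential extreme learning machine for regression:
    fixed random hidden layer, recursively updated linear readout."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W = rng.standard_normal((n_in, n_hidden))  # fixed input weights
        self.b = rng.standard_normal(n_hidden)          # fixed biases
        self.beta = np.zeros((n_hidden, n_out))         # trainable readout
        self.P = None                                   # inverse covariance

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit_initial(self, X, T):
        """Solve the readout on an initial batch (small ridge term for stability)."""
        H = self._hidden(X)
        self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T

    def partial_fit(self, X, T):
        """Recursive least-squares update for a newly arrived chunk."""
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Learn y = sin(x) online, chunk by chunk.
X = np.linspace(-2, 2, 200)[:, None]
T = np.sin(X)
model = OSELM(1, 30, 1)
model.fit_initial(X[:50], T[:50])
for i in range(50, 200, 50):
    model.partial_fit(X[i:i + 50], T[i:i + 50])
mse = float(np.mean((model.predict(X) - T) ** 2))
```

Because no backpropagation is involved, each update is a few small matrix operations, which is the property the paper exploits on the Raspberry Pi.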

Two-phase flow pattern online monitoring system based on convolutional neural network and transfer learning

  • Hong Xu;Tao Tang
    • Nuclear Engineering and Technology
    • /
    • v.54 no.12
    • /
    • pp.4751-4758
    • /
    • 2022
  • Two-phase flow exists in almost every branch of the energy industry. For the corresponding engineering design, it is essential to monitor flow patterns and their transitions accurately. With the rapid development and success of deep learning based on convolutional neural networks (CNNs), recent studies of flow pattern identification have focused almost entirely on this methodology. The photographing technique also has attractive implementation features, since it is normally considerably less expensive than other techniques. The objective of this work is the development of such a two-phase flow pattern online monitoring system, which has seldom been studied before. The ongoing preliminary engineering design of the system (including hardware and software) is introduced, and the flow pattern identification method based on CNNs and transfer learning is discussed in detail. Several candidate CNNs, such as AlexNet, VggNet16, and ResNets, were introduced and compared with each other on a flow pattern dataset. According to the results, ResNet50 is the most promising CNN for the system owing to its high precision, fast classification, and strong robustness. This work can serve as a reference for online monitoring system design in energy systems.

Deep Image Annotation and Classification by Fusing Multi-Modal Semantic Topics

  • Chen, YongHeng;Zhang, Fuquan;Zuo, WanLi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.1
    • /
    • pp.392-412
    • /
    • 2018
  • Due to the semantic gap across different modalities, automatic retrieval of multimedia information still faces a major challenge. It is desirable to provide an effective joint model that bridges this gap and organizes the relationships between modalities. In this work, we develop a deep image annotation and classification by fusing multi-modal semantic topics (DAC_mmst) model, which can find visual and non-visual topics by jointly modeling the image and loosely related text for deep image annotation while simultaneously learning and predicting the class label. More specifically, DAC_mmst relies on a non-parametric Bayesian model to estimate the number of visual topics that best explains the image. To evaluate the effectiveness of the proposed algorithm, we collect a real-world dataset and conduct various experiments. The experimental results show that DAC_mmst performs favorably in perplexity, image annotation, and classification accuracy compared to several state-of-the-art methods.

An Implementation of Feeding Time Detection System for Smart Fish Farm Using Deep Neural Network (심층신경망을 이용한 스마트 양식장용 사료 공급 시점 감지 시스템 구현)

  • Joo-Hyeon Jeon;Yoon-Ho Lee;Moon G. Joo
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.1
    • /
    • pp.19-24
    • /
    • 2023
  • In traditional fish farming, workers have to observe all of the pools every day to feed at the right time, which causes tremendous stress for workers and wastes time. To solve this problem, we implemented an automatic detection system for feeding time using a deep neural network. The detection system consists of two steps: classifying the presence or absence of feed, and checking the DO (dissolved oxygen) of the pool. For the classification, a pretrained ResNet18 model and transfer learning with a custom dataset are used. DO is obtained in real time from the DO sensor in the pool through HTTP. For better accuracy, the next step, checking DO, proceeds only when the classification result is absence of feed several times in a row; DO is then checked against a reference value set by the workers. These actions are performed automatically in UI programs developed with LabVIEW.
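The two-step decision described above (a run of "feed absent" classifications, then a DO threshold check) can be sketched as a small state machine; the class name, streak length, and DO reference below are illustrative, not the paper's actual values:

```python
class FeedingMonitor:
    """Sketch of the two-step feeding decision: only after the classifier
    reports 'feed absent' several times in a row is the DO reading checked
    against a worker-set reference value."""
    def __init__(self, streak_required=3, do_reference=6.0):
        self.streak_required = streak_required
        self.do_reference = do_reference
        self.streak = 0

    def update(self, feed_absent, do_value):
        """Feed one classification result and DO reading; return True
        when it is time to feed."""
        self.streak = self.streak + 1 if feed_absent else 0
        if self.streak >= self.streak_required:
            return do_value > self.do_reference
        return False

mon = FeedingMonitor(streak_required=3, do_reference=6.0)
readings = [(True, 7.0), (True, 7.1), (False, 7.2),
            (True, 7.0), (True, 6.9), (True, 6.8)]
decisions = [mon.update(absent, do) for absent, do in readings]
# → [False, False, False, False, False, True]
```

Requiring a consecutive streak filters out single misclassifications, which is the accuracy benefit the abstract describes.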

Counterfactual image generation by disentangling data attributes with deep generative models

  • Jieon Lim;Weonyoung Joo
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.6
    • /
    • pp.589-603
    • /
    • 2023
  • Deep generative models aim to infer the underlying true data distribution, and this has led to huge success in generating fake-but-realistic data. From this perspective, data attributes can be a crucial factor in the data generation process, since non-existent counterfactual samples can be generated by altering certain factors. For example, we can generate new portrait images by flipping the gender attribute or altering the hair color attribute. This paper proposes counterfactual disentangled variational autoencoder generative adversarial networks (CDVAE-GAN), specialized for attribute-level counterfactual data generation. The structure of the proposed CDVAE-GAN combines variational autoencoders and generative adversarial networks. Specifically, we adopt a Gaussian variational autoencoder to extract low-dimensional disentangled data features, with auxiliary Bernoulli latent variables to model the data attributes separately, and we utilize a generative adversarial network to generate data with high fidelity. By combining the variational autoencoder, the additional Bernoulli latent variables, and the generative adversarial network, the proposed CDVAE-GAN can control the data attributes, enabling the production of counterfactual data. Our experimental results on the CelebA dataset qualitatively show that the samples generated by CDVAE-GAN are realistic. The quantitative results further support that the proposed model can produce data that deceives other machine learning classifiers when the data attributes are altered.
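The attribute-flipping idea above rests on two mechanisms: the Gaussian reparameterization trick that makes VAE sampling differentiable, and holding the disentangled continuous code fixed while toggling one Bernoulli attribute before decoding. A minimal numpy sketch of just those two pieces (the decoder is omitted, and all names and the toy attribute vector are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def reparameterize(mu, log_var):
    """Gaussian reparameterization trick used by VAEs: z = mu + sigma * eps,
    so the sample stays differentiable w.r.t. the encoder outputs."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def counterfactual_latents(z, attrs, flip_index):
    """Keep the disentangled continuous code z fixed and flip one binary
    attribute; a decoder (not shown) would render the counterfactual image."""
    new_attrs = attrs.copy()
    new_attrs[flip_index] = 1 - new_attrs[flip_index]
    return z, new_attrs

mu, log_var = np.zeros(4), np.zeros(4)   # toy encoder outputs
z = reparameterize(mu, log_var)
attrs = np.array([1, 0, 1])              # hypothetical binary attributes
z2, flipped = counterfactual_latents(z, attrs, flip_index=0)
```

Because only the attribute code changes while z is reused, the decoded sample would differ from the original in exactly the flipped factor, which is what makes the output a counterfactual.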