• Title/Summary/Keyword: Learning Modalities


Revolutionizing Brain Tumor Segmentation in MRI with Dynamic Fusion of Handcrafted Features and Global Pathway-based Deep Learning

  • Faizan Ullah;Muhammad Nadeem;Mohammad Abrar
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.1
    • /
    • pp.105-125
    • /
    • 2024
  • Gliomas are the most common malignant brain tumors and cause the most deaths. Manual brain tumor segmentation is expensive, time-consuming, error-prone, and dependent on the radiologist's expertise and experience; segmentation outcomes for the same patient may differ between radiologists. Thus, more robust and dependable methods are needed. Medical imaging researchers have produced numerous semi-automatic and fully automatic brain tumor segmentation algorithms using ML pipelines with either handcrafted feature-based or data-driven strategies. Current methods use CNNs or handcrafted features such as symmetry analysis, alignment-based features, or textural qualities. CNN approaches learn features directly from data, while handcrafted features model domain knowledge. Cascaded algorithms may outperform purely feature-based or purely data-driven (CNN) methods. A cascaded strategy is presented that intelligently supplies a CNN with prior information from handcrafted feature-based ML algorithms. Each patient receives manual ground truth and four MRI modalities (T1, T1c, T2, and FLAIR). Handcrafted features and deep learning are combined to segment brain tumors in a Global Convolutional Neural Network (GCNN). The proposed GCNN architecture, with two parallel CNNs, CSPathways CNN (CSPCNN) and MRI Pathways CNN (MRIPCNN), segmented BraTS brain tumors with high accuracy, achieving a Dice score of 87%, higher than the state of the art. This research could improve brain tumor segmentation, helping clinicians diagnose and treat patients.
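
The abstract above reports segmentation quality as a Dice score. For readers unfamiliar with the metric, a minimal sketch of how Dice is computed for binary segmentation masks (the exact pipeline in the paper is of course far more involved):

```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    pred, truth: arrays of 0/1 voxels (e.g. tumor vs. background).
    Returns 2*|A∩B| / (|A| + |B|), the metric reported as 87% above.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: two overlapping 2D masks
a = np.array([[1, 1, 0],
              [0, 1, 0]])
b = np.array([[1, 0, 0],
              [0, 1, 1]])
print(round(dice_score(a, b), 3))  # 2*2 / (3+3) → 0.667
```

The `eps` term is a common convention to keep the score defined when both masks are empty.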

Imaging Evaluation of Peritoneal Metastasis: Current and Promising Techniques

  • Chen Fu;Bangxing Zhang;Tiankang Guo;Junliang Li
    • Korean Journal of Radiology
    • /
    • v.25 no.1
    • /
    • pp.86-102
    • /
    • 2024
  • Early diagnosis, accurate assessment, and localization of peritoneal metastasis (PM) are essential for the selection of appropriate treatments and surgical guidance. However, available imaging modalities (computed tomography [CT], conventional magnetic resonance imaging [MRI], and 18F-fluorodeoxyglucose positron emission tomography [PET]/CT) have limitations. The advent of new imaging techniques and novel molecular imaging agents has revealed molecular processes in the tumor microenvironment as an application for the early diagnosis and assessment of PM as well as real-time guided surgical resection, which has changed clinical management. In contrast to clinical imaging, which is purely qualitative and subjective in interpreting macroscopic structures, radiomics and artificial intelligence (AI) capitalize on high-dimensional numerical data from images that may reflect tumor pathophysiology. A predictive model can be used to predict the occurrence, recurrence, and prognosis of PM, thereby avoiding unnecessary exploratory surgeries. This review summarizes the role and status of different imaging techniques, especially new imaging strategies such as spectral photon-counting CT, fibroblast activation protein inhibitor (FAPI) PET/CT, near-infrared fluorescence imaging, and PET/MRI, for early diagnosis, assessment of surgical indications, and recurrence monitoring in patients with PM. The clinical applications, limitations, and solutions for fluorescence imaging, radiomics, and AI are also discussed.

Multi-parametric MRIs based assessment of Hepatocellular Carcinoma Differentiation with Multi-scale ResNet

  • Jia, Xibin;Xiao, Yujie;Yang, Dawei;Yang, Zhenghan;Lu, Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.10
    • /
    • pp.5179-5196
    • /
    • 2019
  • To explore an effective non-invasive medical imaging diagnostic approach for hepatocellular carcinoma (HCC), we propose a method that combines multi-parametric data fusion, transfer learning, and multi-scale deep feature extraction. First, to make full use of the complementary information in the different modalities, viz. multi-parametric MRI images, and enhance their contribution to lesion diagnosis, we propose a data-level fusion strategy. Second, taking the fused data as input, a multi-scale residual neural network with SPP (Spatial Pyramid Pooling) is used to learn discriminative feature representations. Third, to mitigate the lack of training samples, we pre-train the proposed multi-scale residual network on a natural image dataset and fine-tune it on the chosen multi-parametric MRI images. Comparative experiments on a dataset of clinical cases show that our approach, employing these multiple strategies, achieves the highest accuracy of 0.847±0.023 on the HCC differentiation classification problem. In discriminating HCC lesions from non-tumor areas, we achieve good performance, with accuracy, sensitivity, specificity, and AUC (area under the ROC curve) of 0.981±0.002, 0.981±0.002, 0.991±0.007, and 0.999±0.0008, respectively.
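
The paper's exact data-level fusion strategy is not spelled out in the abstract; a common minimal form, sketched here under that assumption, is to normalize each co-registered MRI parameter map and stack the maps as input channels:

```python
import numpy as np

def fuse_modalities(volumes):
    """Data-level fusion sketch: stack co-registered MRI parameter maps.

    volumes: list of arrays with identical spatial shape (H, W), one per
    MRI parameter (e.g. T1, T2, DWI). Each is min-max normalized so no
    single modality dominates, then stacked into an (H, W, C) input
    suitable for a multi-channel CNN.
    """
    fused = []
    for v in volumes:
        v = np.asarray(v, dtype=np.float64)
        rng = v.max() - v.min()
        fused.append((v - v.min()) / rng if rng > 0 else np.zeros_like(v))
    return np.stack(fused, axis=-1)

# Two synthetic parameter maps with very different intensity ranges
t1 = np.random.rand(64, 64) * 400.0
t2 = np.random.rand(64, 64) * 1200.0
x = fuse_modalities([t1, t2])
print(x.shape)  # (64, 64, 2)
```

Per-modality normalization matters here: without it, the modality with the larger raw intensity range would dominate the fused input.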

A Review on Detection of COVID-19 Cases from Medical Images Using Machine Learning-Based Approach

  • Noof Al-dieef;Shabana Habib
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.3
    • /
    • pp.59-70
    • /
    • 2024
  • Background: The COVID-19 pandemic (caused by a novel coronavirus) emerged at the end of 2019 and spread rapidly to almost every corner of the world, infecting around 25,334,339 people by September 1, 2020 [1]. It has been spreading ever since; the peak specific to each country has been rising and falling and does not seem to be over yet. Currently, conventional RT-PCR testing is required to detect COVID-19, but alternative methods are certainly another choice for public health departments, not least for data archiving purposes. Researchers are trying to use medical images such as X-ray and Computed Tomography (CT) to diagnose the virus easily with the aid of Artificial Intelligence (AI)-based software. Method: This review paper investigates newly emerging machine-learning methods used to detect COVID-19 from X-ray images instead of tests performed by medical experts. Computer vision enables the development of an automated model with the clinical ability to detect the disease early. We have explored the researchers' focus on the modalities, the image datasets used by the machine learning methods, and the output metrics used to evaluate research in this field. Finally, the paper concludes by discussing the key problems posed by identifying COVID-19 using machine learning and future work. Result: This review's findings can be useful for public and private sectors in utilizing X-ray images and deploying resources before the pandemic reaches its peaks, giving the healthcare system cushion time to bear the impact of the unfavorable circumstances the pandemic is sure to cause.

Basic Implementation of Multi Input CNN for Face Recognition (얼굴인식을 위한 다중입력 CNN의 기본 구현)

  • Cheema, Usman;Moon, Seungbin
    • Annual Conference of KIPS
    • /
    • 2019.10a
    • /
    • pp.1002-1003
    • /
    • 2019
  • Face recognition is an extensively researched area of computer vision. Visible, infrared, thermal, and 3D modalities have been used against various challenges of face recognition such as illumination, pose, expression, partial information, and disguise. In this paper, we present a multi-modal approach to face recognition using convolutional neural networks. We use visible and thermal face images as two separate inputs to a multi-input deep learning network for face recognition. The experiments are performed on the IRIS visible and thermal face database, and high face verification rates are achieved.
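
The abstract describes feeding visible and thermal images into separate branches of one network. A minimal late-fusion sketch of that idea, with random linear projections standing in for the paper's CNN branches (the real architecture is not given in the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-modality feature extractors: random linear
# projections used purely to illustrate the data flow; a real system
# would use two trained CNN branches here.
W_vis = rng.standard_normal((128, 64))
W_thm = rng.standard_normal((128, 64))

def embed(x, W):
    return np.maximum(x @ W, 0.0)  # linear layer + ReLU

def fuse(visible, thermal):
    """Late fusion: run each modality through its own branch, then
    concatenate the embeddings for a shared recognition head."""
    return np.concatenate([embed(visible, W_vis), embed(thermal, W_thm)],
                          axis=-1)

vis = rng.standard_normal((4, 128))  # batch of 4 flattened visible faces
thm = rng.standard_normal((4, 128))  # matching thermal captures
print(fuse(vis, thm).shape)  # (4, 128)
```

Keeping the branches separate until concatenation lets each modality learn its own low-level filters, which is the usual motivation for multi-input designs like this one.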

REVIEW OF DIFFUSION MODELS: THEORY AND APPLICATIONS

  • HYUNGJIN CHUNG;HYELIN NAM;JONG CHUL YE
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.28 no.1
    • /
    • pp.1-21
    • /
    • 2024
  • This review comprehensively explores the evolution, theoretical underpinnings, variations, and applications of diffusion models. Originating as a generative framework, diffusion models have rapidly ascended to the forefront of machine learning research, owing to their exceptional capability, stability, and versatility. We dissect the core principles driving diffusion processes, elucidating their mathematical foundations and the mechanisms by which they iteratively refine noise into structured data. We highlight pivotal advancements and the integration of auxiliary techniques that have significantly enhanced their efficiency and stability. Variants such as bridges, which broaden the applicability of diffusion models to wider domains, are introduced. We put special emphasis on the role of diffusion models as a crucial foundation model, with modalities ranging from images to 3D assets and video. This role as a general foundation model leads to their versatility in many downstream tasks such as solving inverse problems and image editing. Through this review, we aim to provide a thorough and accessible compendium for both newcomers and seasoned researchers in the field.
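
The "iteratively refine noise into structured data" mechanism has a simple closed-form counterpart in the forward direction. A sketch of the standard DDPM-style forward process, using the usual linear noise schedule (details of the reviewed variants differ):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Closed-form forward diffusion sample from q(x_t | x_0):

        x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,

    where alpha_bar_t is the cumulative product of (1 - beta_s).
    As t grows, x_t approaches pure Gaussian noise; the generative
    model is trained to reverse this corruption step by step.
    """
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)  # standard DDPM linear schedule
x0 = rng.standard_normal((8, 8))       # toy "clean" sample

x_early, _ = forward_diffuse(x0, 10, betas, rng)   # barely corrupted
x_late, _ = forward_diffuse(x0, 999, betas, rng)   # essentially noise
```

Note that no iteration is needed to reach step t: the Gaussian corruptions compose in closed form, which is what makes training on random timesteps cheap.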

Analysis and Study for Appropriate Deep Neural Network Structures and Self-Supervised Learning-based Brain Signal Data Representation Methods (딥 뉴럴 네트워크의 적절한 구조 및 자가-지도 학습 방법에 따른 뇌신호 데이터 표현 기술 분석 및 고찰)

  • Won-Jun Ko
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.137-142
    • /
    • 2024
  • Recently, deep learning has become the de facto standard for medical data representation. However, deep learning inherently requires a large amount of training data, which poses a challenge for its direct application in the medical field, where acquiring large-scale data is not straightforward. Brain signal modalities suffer from this problem in particular owing to their high variability. Research has focused on designing deep neural network structures capable of effectively extracting the spectro-spatio-temporal characteristics of brain signals, or on employing self-supervised learning methods to pre-learn the neurophysiological features of brain signals. This paper analyzes methodologies used to handle small-scale data in emerging fields such as brain-computer interfaces and brain signal-based state prediction, presenting future directions for these technologies. First, this paper examines deep neural network structures for representing brain signals; it then analyzes self-supervised learning methodologies aimed at efficiently learning the characteristics of brain signals. Finally, the paper discusses key insights and future directions for deep learning-based brain signal analysis.

Deep Learning-Enabled Detection of Pneumoperitoneum in Supine and Erect Abdominal Radiography: Modeling Using Transfer Learning and Semi-Supervised Learning

  • Sangjoon Park;Jong Chul Ye;Eun Sun Lee;Gyeongme Cho;Jin Woo Yoon;Joo Hyeok Choi;Ijin Joo;Yoon Jin Lee
    • Korean Journal of Radiology
    • /
    • v.24 no.6
    • /
    • pp.541-552
    • /
    • 2023
  • Objective: Detection of pneumoperitoneum using abdominal radiography, particularly in the supine position, is often challenging. This study aimed to develop and externally validate a deep learning model for the detection of pneumoperitoneum using supine and erect abdominal radiography. Materials and Methods: A model that can utilize "pneumoperitoneum" and "non-pneumoperitoneum" classes was developed through knowledge distillation. To train the proposed model with limited training data and weak labels, it was trained using a recently proposed semi-supervised learning method called distillation for self-supervised and self-train learning (DISTL), which leverages the Vision Transformer. The proposed model was first pre-trained with chest radiographs to utilize common knowledge between modalities, then fine-tuned and self-trained on labeled and unlabeled abdominal radiographs in both supine and erect positions. In total, 191,212 chest radiographs (CheXpert data) were used for pre-training, and 5,518 labeled and 16,671 unlabeled abdominal radiographs were used for fine-tuning and self-supervised learning, respectively. The proposed model was internally validated on 389 abdominal radiographs and externally validated on 475 and 798 abdominal radiographs from two institutions. We evaluated the performance in diagnosing pneumoperitoneum using the area under the receiver operating characteristic curve (AUC) and compared it with that of radiologists. Results: In the internal validation, the proposed model had an AUC, sensitivity, and specificity of 0.881, 85.4%, and 73.3% for the supine position and 0.968, 91.1%, and 95.0% for the erect position, respectively. In the external validation at the two institutions, the AUCs were 0.835 and 0.852 for the supine position and 0.909 and 0.944 for the erect position. In the reader study, the readers' performances improved with the assistance of the proposed model.
Conclusion: The proposed model trained with the DISTL method can accurately detect pneumoperitoneum on abdominal radiography in both the supine and erect positions.
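
DISTL itself involves a Vision Transformer and a self-training loop, but the knowledge-distillation idea it builds on can be sketched in a few lines: the student is trained to match the teacher's temperature-softened output distribution rather than hard labels. The class names below are illustrative, not from the paper's code:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=np.float64) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label distillation loss: cross-entropy between student and
    teacher distributions, both softened by temperature T. The T*T
    factor keeps gradient magnitudes comparable across temperatures."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean() * T * T)

# Logits over ["pneumoperitoneum", "non-pneumoperitoneum"]
teacher = np.array([[4.0, 1.0]])  # teacher is confident in class 0
good = np.array([[3.5, 0.5]])     # student that agrees -> low loss
bad = np.array([[0.5, 3.5]])      # student that disagrees -> high loss
print(distillation_loss(good, teacher) < distillation_loss(bad, teacher))
```

Soft targets like these carry more information per example than hard labels, which is why distillation-based schemes help when labeled data is scarce, as in this study.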

Effects of Web-based Simulation and High-fidelity Simulation of Acute Heart Disease Patient Care (급성 심장질환자 간호에 대한 웹기반 시뮬레이션과 고충실도 시뮬레이션 교육 효과)

  • Chu, Min Sun;Hwang, Yoon Young
    • The Journal of Korean Academic Society of Nursing Education
    • /
    • v.23 no.1
    • /
    • pp.95-107
    • /
    • 2017
  • Purpose: The aim of this study was to assess the efficacy of web-based simulation and high-fidelity simulation in acute heart disease patient care. Methods: The project used a comparative study design with two simulation-based training modalities. A total of 144 nursing students participated in this study: 76 students in a web-based simulation and 68 students in a high-fidelity simulation. Participants rated their self-efficacy, problem-solving ability, interest in learning, level of stress, satisfaction with the simulation experience, and level of difficulty of the simulation. Results: The scores for self-efficacy, problem-solving ability, and interest in learning (including interest in clinical training) were higher in the high-fidelity simulation group than in the web-based simulation group. However, there were no significant differences in interest in learning with respect to nursing knowledge and lab training, level of stress, satisfaction with the simulation experience, or level of difficulty of the simulation. Conclusion: A high-fidelity simulation of acute heart disease patient care might develop more abilities in nursing students than a web-based simulation would. Also, since the web-based simulation improved interest in nursing knowledge, it could be a viable alternative to high-fidelity simulation. Further study is needed to verify the effects of varied levels of simulation-based care with more rigorous outcomes.

Digital Transformation of Education Brought by COVID-19 Pandemic

  • Kim, Hye-jin
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.6
    • /
    • pp.183-193
    • /
    • 2021
  • In this paper, the author identified and analyzed the problems caused by moving traditional classroom and laboratory teaching online. Among the major problems: first, there were various technical problems, including that not all environments and facilities were connected to the Internet. Second, the effectiveness of virtual classes that were suddenly switched online could be questioned. Finally, facing a new environment, teachers were stressed by having to adapt rapidly to the new teaching methodology. The author proposed digital transformation as a way to address these problems, analyzing educational changes, learning modalities, various technical tools, and the tasks needed to enable digital transformation. First, the author investigated, analyzed, and presented the factors necessary to efficiently operate a classroom environment that moves online. Next, the author analyzed the factors and problems involved in making students' classes reliable and efficient, and proposed solutions. Finally, the author pointed out that during online lectures, the responsibility for learning is excessively transferred from teachers to students, and proposed a solution to this problem. The author then proposed future studies.