• Title/Summary/Keyword: Learning Modalities


DCNN Optimization Using Multi-Resolution Image Fusion

  • Alshehri, Abdullah A.; Lutz, Adam; Ezekiel, Soundararajan; Pearlstein, Larry; Conlen, John
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.11 / pp.4290-4309 / 2020
  • In recent years, advances in machine learning have led to its widespread adoption for tasks such as object detection, image classification, and anomaly detection. However, a network's performance is ultimately limited by the quality of the data it receives: a well-trained network will still perform poorly if the data supplied to it contains artifacts, out-of-focus regions, or other visual distortions. Normally, images of the same scene captured from differing points of focus, angles, or modalities must be analysed separately by the network, even though they may contain overlapping information (e.g., images of the same scene from different angles) or irrelevant information (e.g., infrared images, which capture thermal information well but not topographical detail). This can add significantly to the computational time and resources required to run the network without providing any additional benefit. In this study, we explore image fusion techniques that assemble multiple images of the same scene into a single image, retaining the most salient features of the individual source images while discarding overlapping or irrelevant data that provides no benefit to the network. Applying this fusion step before feeding a dataset to the network significantly reduces the number of images and has the potential to improve classification accuracy by enhancing the images while discarding irrelevant and overlapping regions.
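The fusion step described in this abstract can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's actual multi-resolution method: it fuses two equally sized grayscale images pixel by pixel, keeping the value from whichever source is more "salient", with saliency crudely approximated as absolute deviation from that image's mean intensity.

```python
def fuse(img_a, img_b):
    """Fuse two equally sized grayscale images (lists of rows),
    keeping at each pixel the value from the more 'salient' source.
    Saliency here is a toy proxy: |pixel - image mean|."""
    mean_a = sum(map(sum, img_a)) / (len(img_a) * len(img_a[0]))
    mean_b = sum(map(sum, img_b)) / (len(img_b) * len(img_b[0]))
    fused = []
    for row_a, row_b in zip(img_a, img_b):
        fused.append([
            a if abs(a - mean_a) >= abs(b - mean_b) else b
            for a, b in zip(row_a, row_b)
        ])
    return fused

sharp = [[10, 200], [190, 20]]   # high-contrast (in-focus) source
blur  = [[90, 110], [105, 95]]   # low-contrast (out-of-focus) source
print(fuse(sharp, blur))         # keeps the high-contrast pixels
```

A real multi-resolution scheme would decompose each source into a pyramid (e.g., wavelet or Laplacian) and select coefficients per band; this flat, single-scale rule only conveys the select-the-salient-source idea.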

Enhancing Acute Kidney Injury Prediction through Integration of Drug Features in Intensive Care Units

  • Gabriel D. M. Manalu; Mulomba Mukendi Christian; Songhee You; Hyebong Choi
    • International journal of advanced smart convergence / v.12 no.4 / pp.434-442 / 2023
  • The relationship between acute kidney injury (AKI) prediction and nephrotoxic drugs (drugs that adversely affect kidney function) has yet to be explored in the critical care setting. One contributing factor to this gap is the limited investigation of drug modalities in the intensive care unit (ICU) context, owing to the challenges of processing prescription data into corresponding drug representations and a lack of comprehensive understanding of those representations. This study addresses the gap by proposing a novel approach that leverages patient prescription data as a modality to improve existing models for AKI prediction. We base our research on Electronic Health Record (EHR) data, extracting the relevant patient prescription information and converting it into our chosen drug representation, the extended-connectivity fingerprint (ECFP). Furthermore, we adopt a multimodal approach, developing machine learning models and 1D Convolutional Neural Networks (CNNs) applied to clinical drug representations, a procedure not used by any previous study predicting AKI. The findings show a notable improvement in AKI prediction from integrating drug embeddings with other patient cohort features. Using drug features represented as ECFP molecular fingerprints alongside common cohort features such as demographics and lab test values, we achieved a considerable improvement in model performance over the baseline model that does not include the drug representations, indicating that our approach enhances existing baseline techniques and highlighting the relevance of drug data for predicting AKI in the ICU setting.
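The core idea of the drug modality above is turning a prescription into a fixed-length bit vector a model can consume. The sketch below is a toy stand-in for an ECFP, assuming nothing about the paper's pipeline: real work would compute Morgan/ECFP fingerprints with a cheminformatics library such as RDKit, whereas this snippet merely hashes overlapping character n-grams of a SMILES string into bits to show the shape of the feature.

```python
import hashlib

def toy_fingerprint(smiles, n_bits=64, max_gram=2):
    """Toy ECFP-like featurizer: hash character n-grams of a SMILES
    string into a fixed-length bit vector. Illustrative only; a real
    ECFP hashes circular atom neighbourhoods, not text n-grams."""
    bits = [0] * n_bits
    for r in range(1, max_gram + 1):
        for i in range(len(smiles) - r + 1):
            gram = smiles[i:i + r].encode()
            h = int(hashlib.md5(gram).hexdigest(), 16)
            bits[h % n_bits] = 1  # set the bucket this substructure hashes to
    return bits

fp = toy_fingerprint("CCO")  # ethanol's SMILES, used as an example input
print(len(fp), sum(fp))      # vector length and number of set bits
```

The resulting vectors can be concatenated with demographic and lab-test features, which is the kind of multimodal feature matrix the abstract describes feeding to ML models and 1D CNNs.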

3DentAI: U-Nets for 3D Oral Structure Reconstruction from Panoramic X-rays

  • Anusree P. Sunilkumar; Seong Yong Moon; Wonsang You
    • The Transactions of the Korea Information Processing Society / v.13 no.7 / pp.326-334 / 2024
  • Extra-oral imaging techniques such as panoramic X-rays (PXs) and cone beam computed tomography (CBCT) are the most preferred imaging modalities in dental clinics, owing to their convenience for the patient and their ability to visualize complete dental information. PXs are preferred for routine clinical treatment and CBCT for complex surgeries and implant treatment. However, PXs lack three-dimensional spatial information, whereas CBCT exposes the patient to a high radiation dose. When a PX is already available, it is beneficial to reconstruct the 3D oral structure from it to avoid further expense and radiation. In this paper, we propose 3DentAI, a U-Net-based deep learning framework for 3D reconstruction of the oral structure from a PX image. Our framework consists of three modules: a reconstruction module based on an attention U-Net that estimates depth from the PX image; a realignment module that aligns the predicted flattened volume to the shape of the jaw using a predefined focal trough and ray data; and a refinement module based on a 3D U-Net that interpolates the missing information to obtain a smooth representation of the oral cavity. Synthetic PXs obtained from CBCT by ray tracing and rendering were used to train the networks, removing the need for paired PX and CBCT datasets. Our method, trained and tested on a diverse dataset of 600 patients, achieved superior performance to GAN-based models at lower computational complexity.
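Two of the three modules above are U-Nets, whose defining feature is the skip connection between encoder and decoder. The following is a deliberately minimal 1D sketch of that idea alone, with none of the attention mechanisms, convolutions, or 3D volumes of the actual framework: it pools a signal down, upsamples it back, and merges in the skipped full-resolution input so fine detail survives the bottleneck.

```python
def downsample(x):
    """Encoder step: average-pool by 2 (loses fine detail)."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def upsample(x):
    """Decoder step: nearest-neighbour upsample by 2."""
    return [v for v in x for _ in range(2)]

def tiny_unet(signal):
    """One encoder/decoder level with a U-Net-style skip connection:
    the coarse reconstruction is averaged with the full-resolution
    input that was 'skipped' across the bottleneck."""
    encoded = downsample(signal)   # bottleneck: coarse summary
    decoded = upsample(encoded)    # coarse reconstruction
    return [(d + s) / 2 for d, s in zip(decoded, signal)]

print(tiny_unet([1.0, 3.0, 2.0, 8.0]))  # output: [1.5, 2.5, 3.5, 6.5]
```

In a real U-Net the merge is a channel-wise concatenation followed by learned convolutions rather than a plain average, but the data flow (encode, decode, reinject the skipped features) is the same.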

2023 Survey on User Experience of Artificial Intelligence Software in Radiology by the Korean Society of Radiology

  • Eui Jin Hwang; Ji Eun Park; Kyoung Doo Song; Dong Hyun Yang; Kyung Won Kim; June-Goo Lee; Jung Hyun Yoon; Kyunghwa Han; Dong Hyun Kim; Hwiyoung Kim; Chang Min Park; Radiology Imaging Network of Korea for Clinical Research (RINK-CR)
    • Korean Journal of Radiology / v.25 no.7 / pp.613-622 / 2024
  • Objective: In Korea, radiology has been positioned toward early adoption of artificial intelligence-based software as medical devices (AI-SaMDs); however, little is known about the current usage, implementation, and future needs of AI-SaMDs. We surveyed current trends and expectations for AI-SaMDs among members of the Korean Society of Radiology (KSR). Materials and Methods: An anonymous, voluntary online survey was open to all KSR members between April 17 and May 15, 2023. The survey focused on experience with AI-SaMDs, patterns of usage, levels of satisfaction, and expectations regarding their use, including the roles of industry, government, and the KSR in the clinical use of AI-SaMDs. Results: Among the 370 respondents (response rate: 7.7% [370/4792]; 340 board-certified radiologists; 210 from academic institutions), 60.3% (223/370) had experience using AI-SaMDs. The two most common use cases were lesion detection (82.1%, 183/223) and lesion diagnosis/classification (55.2%, 123/223), with the target imaging modalities being plain radiography (62.3%, 139/223), CT (42.6%, 95/223), mammography (29.1%, 65/223), and MRI (28.7%, 64/223). Most users were satisfied with AI-SaMDs (67.6% [115/170, for improvement of patient management] to 85.1% [189/222, for performance]). Regarding the expansion of clinical applications, most respondents preferred AI-SaMDs that assist in detection/diagnosis (77.0%, 285/370) and perform automated measurement/quantification (63.5%, 235/370). Most respondents indicated that future development of AI-SaMDs should focus on improving practice efficiency (81.9%, 303/370) and quality (71.4%, 264/370). Overall, 91.9% of the respondents (340/370) agreed that there is a need for education or guidelines driven by the KSR regarding the use of AI-SaMDs.
Conclusion: The penetration rate of AI-SaMDs in clinical practice and the corresponding satisfaction levels were high among members of the KSR. Most AI-SaMDs have been used for lesion detection, diagnosis, and classification. Most respondents requested KSR-driven education or guidelines on the use of AI-SaMDs.

The Surgical Outcome for Gastric Submucosal Tumors: Laparoscopy vs. Open Surgery

  • Lim, Chai-Sun; Lee, Sang-Lim; Park, Jong-Min; Jin, Sung-Ho; Jung, In-Ho; Cho, Young-Kwan; Han, Sang-Uk
    • Journal of Gastric Cancer / v.8 no.4 / pp.225-231 / 2008
  • Purpose: Laparoscopic gastric resection (LGR) is increasingly used in place of open gastric resection (OGR) as the standard surgical treatment for gastric submucosal tumors, yet few reports compare the postoperative outcomes of the two techniques. This study compared these two treatment modalities for gastric submucosal tumors by evaluating postoperative outcomes; we also analyze the learning curve for LGR. Materials and Methods: Between April 2003 and August 2008, 103 patients with a gastric submucosal tumor underwent either LGR (n=78) or OGR (n=25). A retrospective review was performed on a prospectively obtained database of these 103 patients, covering operative time, intraoperative blood loss, time to first soft diet, postoperative hospital stay, tumor size, and tumor location. Results: The clinicopathologic and tumor characteristics of the patients were similar for both groups. There was no open conversion in the LGR group. Mean operation time and blood loss did not differ between the LGR and OGR groups. Time to first soft diet (3.27 vs. 6.16 days, P<0.001) and length of postoperative hospital stay (7.37 vs. 8.88 days, P=0.002) were shorter in the LGR group. Tumors were larger in the OGR group than in the LGR group (6.44 vs. 3.65 cm, P<0.001). When performing laparoscopic resection of gastric SMT, the surgeon's operation time and blood loss decreased with experience: dividing the cases into three periods, the third period showed the shortest operation time, the least blood loss, and the fewest complications. Conclusion: LGR for gastric submucosal tumors was superior to OGR in terms of postoperative outcomes.
An operator needs some experience to perform a complete laparoscopic gastric resection. Laparoscopic resection could be considered the first-line treatment for gastric submucosal tumors.
