• Title/Summary/Keyword: Deep Learning

Search Results: 5,763 (Processing Time: 0.03 seconds)

Reporting Quality of Research Studies on AI Applications in Medical Images According to the CLAIM Guidelines in a Radiology Journal With a Strong Prominence in Asia

  • Dong Yeong Kim;Hyun Woo Oh;Chong Hyun Suh
    • Korean Journal of Radiology / v.24 no.12 / pp.1179-1189 / 2023
  • Objective: We aimed to evaluate the reporting quality of research articles that applied deep learning to medical imaging. Using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines and a journal with prominence in Asia as a sample, we intended to provide insight into reporting quality in the Asian region and establish a journal-specific audit. Materials and Methods: A total of 38 articles published in the Korean Journal of Radiology between June 2018 and January 2023 were analyzed. The analysis included calculating the percentage of studies that adhered to each CLAIM item and identifying items that were met by ≤ 50% of the studies. The article review was initially conducted independently by two reviewers, and the consensus results were used for the final analysis. We also compared adherence rates to CLAIM before and after December 2020. Results: Of the 42 items in the CLAIM guidelines, 12 items (29%) were satisfied by ≤ 50% of the included articles. None of the studies reported handling missing data (item #13). Only one study each reported the use of de-identification methods (#12), the intended sample size (#19), robustness or sensitivity analysis (#30), and a full study protocol (#41). Of the studies, 35% reported the selection of data subsets (#10), 40% reported registration information (#40), and 50% measured interrater and intrarater variability (#18). No significant changes were observed in the rates of adherence to these 12 items before and after December 2020. Conclusion: The reporting quality of artificial intelligence studies according to the CLAIM guidelines showed room for improvement in our study sample. We recommend that authors and reviewers develop a solid understanding of the relevant reporting guidelines and ensure that the essential elements are adequately reported when writing and reviewing manuscripts for publication.
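The audit arithmetic described in this abstract — per-item adherence percentages over the 38 articles, with items met by ≤ 50% of studies flagged — can be sketched as below. The adherence counts are invented for illustration, not the paper's data.

```python
# Sketch of the CLAIM adherence audit: for each checklist item, compute the
# percentage of the 38 reviewed articles satisfying it and flag items met by
# <= 50% of studies. All counts here are hypothetical.
n_articles = 38

adherence_counts = {  # item -> number of articles satisfying it (invented)
    "#10 selection of data subsets": 13,
    "#13 handling of missing data": 0,
    "#18 inter/intrarater variability": 19,
    "#24 performance metrics": 36,
}

flagged = {}
for item, count in adherence_counts.items():
    pct = 100 * count / n_articles
    if pct <= 50:
        flagged[item] = round(pct, 1)

print(flagged)  # items met by half the studies or fewer
```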

Evaluation and Prediction of Post-Hepatectomy Liver Failure Using Imaging Techniques: Value of Gadoxetic Acid-Enhanced Magnetic Resonance Imaging

  • Keitaro Sofue;Ryuji Shimada;Eisuke Ueshima;Shohei Komatsu;Takeru Yamaguchi;Shinji Yabe;Yoshiko Ueno;Masatoshi Hori;Takamichi Murakami
    • Korean Journal of Radiology / v.25 no.1 / pp.24-32 / 2024
  • Despite improvements in operative techniques and perioperative care, post-hepatectomy liver failure (PHLF) remains the most serious cause of morbidity and mortality after surgery, and several risk factors have been identified to predict PHLF. Although volumetric assessment using imaging contributes to surgical simulation for predicting PHLF by estimating the function of the future liver remnant, it assumes that liver function is homogeneous throughout the liver. The combination of volumetric and functional analyses may therefore be more useful for an accurate evaluation of liver function and prediction of PHLF than volumetric analysis alone. Gadoxetic acid is a hepatocyte-specific magnetic resonance (MR) contrast agent that is taken up by hepatocytes via the OATP1 transporter after intravenous administration. Gadoxetic acid-enhanced MR imaging (MRI) offers information on both global and regional function, enabling a more precise evaluation even in cases with heterogeneous liver function. Various indices, including signal intensity-based methods and MR relaxometry, have been proposed for estimating liver function and predicting PHLF using gadoxetic acid-enhanced MRI. Recent developments in MR techniques, including high-resolution hepatobiliary phase images using deep learning image reconstruction and whole-liver T1 map acquisition, have enabled a more detailed and accurate estimation of liver function in gadoxetic acid-enhanced MRI.
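One family of the signal intensity-based indices mentioned above can be sketched as a relative enhancement ratio between unenhanced and hepatobiliary phase images. The ROI values below are invented, and real indices in the literature vary (e.g., normalization by spleen or muscle signal), so this is illustrative only.

```python
# Illustrative signal intensity-based index: relative enhancement of a liver
# ROI between the unenhanced and hepatobiliary phase images. ROI values are
# made up for the example.
def relative_enhancement(si_pre, si_hbp):
    """Relative liver enhancement: (SI_hbp - SI_pre) / SI_pre."""
    return (si_hbp - si_pre) / si_pre

print(relative_enhancement(100.0, 180.0))  # 0.8
```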

A high-density gamma white spots-Gaussian mixture noise removal method for neutron images denoising based on Swin Transformer UNet and Monte Carlo calculation

  • Di Zhang;Guomin Sun;Zihui Yang;Jie Yu
    • Nuclear Engineering and Technology / v.56 no.2 / pp.715-727 / 2024
  • During fast neutron imaging, besides the dark current and readout noise of the CCD camera, the main noise comes from high-energy gamma rays generated by neutron nuclear reactions in and around the experimental setup. These high-energy gamma rays produce high-density gamma white spots (GWS) in the fast neutron image. In addition, owing to the microscopic quantum characteristics of the neutron beam itself and environmental scattering effects, fast neutron images typically exhibit mixed Gaussian noise. Existing denoising methods for neutron images struggle with such a mixture of GWS and Gaussian noise. Here we propose a deep learning approach based on the Swin Transformer UNet (SUNet) model to remove high-density GWS-Gaussian mixture noise from fast neutron images. The denoising model is trained with a customized loss function that combines perceptual loss and mean squared error loss to avoid the grid-like artifacts caused by using perceptual loss alone. To address the high cost of acquiring real fast neutron images, this study introduces a Monte Carlo method that simulates noise data with GWS characteristics by computing the interaction between gamma rays and sensors, based on the principle of GWS generation. Experiments on simulated neutron noise images and real fast neutron images demonstrate that the proposed method not only improves the quality and signal-to-noise ratio of fast neutron images but also preserves the details of the original images during denoising.
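The combined training objective described here — pixel-space MSE plus a perceptual term — can be sketched as below. In the paper the perceptual features come from a network; here a fixed random projection stands in for that feature extractor, so the code is only an illustration of how the two terms are mixed.

```python
import numpy as np

# Sketch of a combined loss: mean squared error in pixel space plus a
# "perceptual" term computed in a feature space. A fixed random projection W
# stands in for the real feature extractor; lam is an assumed mixing weight.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))  # stand-in feature extractor

def combined_loss(denoised, clean, lam=0.1):
    mse = np.mean((denoised - clean) ** 2)          # pixel-space term
    perceptual = np.mean((W @ denoised - W @ clean) ** 2)  # feature-space term
    return mse + lam * perceptual

clean = rng.standard_normal(256)   # toy "clean image" as a flat vector
noisy = clean + 0.1 * rng.standard_normal(256)
print(combined_loss(noisy, clean))
```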

Comparison of Solar Power Generation Forecasting Performance in Daejeon and Busan Based on Preprocessing Methods and Artificial Intelligence Techniques: Using Meteorological Observation and Forecast Data (전처리 방법과 인공지능 모델 차이에 따른 대전과 부산의 태양광 발전량 예측성능 비교: 기상관측자료와 예보자료를 이용하여)

  • Chae-Yeon Shim;Gyeong-Min Baek;Hyun-Su Park;Jong-Yeon Park
    • Atmosphere / v.34 no.2 / pp.177-185 / 2024
  • With increasing global interest in renewable energy due to the ongoing climate crisis, there is a growing need for efficient technologies to manage such resources. This study focuses on the prediction of daily solar power generation using weather observation and forecast data. Meteorological data from the Korea Meteorological Administration and solar power generation data from the Korea Power Exchange were used for the period from January 2017 to May 2023, covering both an inland (Daejeon) and a coastal (Busan) region. Temperature, wind speed, relative humidity, and precipitation were selected as relevant meteorological variables for solar power prediction. All data were preprocessed by removing their systematic components so that only the residuals were used, and the residuals of the solar data were further weighted to ensure homoscedasticity. Four models, MLR (Multiple Linear Regression), RF (Random Forest), DNN (Deep Neural Network), and RNN (Recurrent Neural Network), were employed for solar power prediction, and their performances were evaluated using observed meteorological data (as a reference), 1-day-ahead forecast data (fore1), and 2-day-ahead forecast data (fore2). The DNN-based model exhibited superior performance in both regions, with the RNN performing the least effectively; however, MLR and RF demonstrated performance comparable to the DNN. The disparities among the four models are less pronounced than anticipated, underscoring the pivotal role of fitting models on residuals. This suggests that the preprocessing approach used here, specifically leveraging residuals, can play a crucial role in future solar power generation forecasting.
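The residual-based preprocessing this abstract emphasizes can be sketched as below: estimate the systematic (seasonal) component as a day-of-year climatology and keep only the residual. The synthetic series and the 365-day cycle are assumptions for illustration, not the study's data.

```python
import numpy as np

# Sketch of residual preprocessing: subtract the day-of-year climatology
# (the systematic component) and keep only the residual for model fitting.
rng = np.random.default_rng(1)
days = np.arange(3 * 365)
seasonal = 10 * np.sin(2 * np.pi * days / 365)      # systematic component
series = seasonal + rng.standard_normal(days.size)  # toy daily generation

doy = days % 365
climatology = np.array([series[doy == d].mean() for d in range(365)])
residual = series - climatology[doy]

print(series.std(), residual.std())  # residual variability is much smaller
```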

Segmentation of Mammography Breast Images using Automatic Segmen Adversarial Network with Unet Neural Networks

  • Suriya Priyadharsini.M;J.G.R Sathiaseelan
    • International Journal of Computer Science & Network Security / v.23 no.12 / pp.151-160 / 2023
  • Breast cancer is the most dangerous and deadly form of cancer, and it is the second most common cancer among Indian women in rural areas. Early detection of its signs and symptoms is the most important factor in treating breast cancer effectively, as it increases the odds of receiving earlier, more specialized care and can therefore significantly improve survival by slowing or entirely eliminating the cancer. Mammography is a high-resolution radiography technique that plays an important role in preventing and diagnosing cancer at an early stage. Automatic segmentation of the breast region in mammography images can narrow the area searched for cancer while saving time and effort compared with manual segmentation. Previous studies used autoencoder-like convolutional and deconvolutional neural networks (CN-DCNN) to automatically segment the breast region in mammography images. In this paper, we present Automatic SegmenAN, a unique end-to-end adversarial neural network for medical image segmentation. Because image segmentation requires dense, pixel-level labelling, the single scalar real/fake output of a standard GAN discriminator may be inefficient at providing stable and appropriate gradient feedback to the networks. Instead of using a fully convolutional neural network as the segmentor, we propose a new adversarial critic network with a multi-scale L1 loss function that forces the critic and segmentor to learn both global and local attributes capturing long- and short-range spatial relations among pixels. We demonstrate that Automatic SegmenAN is more reliable for segmentation tasks than the state-of-the-art U-net segmentation technique.
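The multi-scale L1 idea described here can be sketched as below: the mean absolute difference between two maps is accumulated over several downsampled scales, so both fine-scale (local) and coarse-scale (global) disagreements contribute. The 2x average-pooling downsampler and the number of scales are assumed choices, not the paper's exact criterion.

```python
import numpy as np

# Sketch of a multi-scale L1 criterion: L1 distance averaged over several
# downsampled versions of the two maps (here, 2x average pooling per scale).
def downsample2x(x):
    h, w = x.shape
    return x[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multiscale_l1(a, b, scales=3):
    loss = 0.0
    for _ in range(scales):
        loss += np.mean(np.abs(a - b))   # L1 at the current scale
        a, b = downsample2x(a), downsample2x(b)
    return loss / scales

pred = np.zeros((32, 32))
target = np.ones((32, 32))
print(multiscale_l1(pred, target))  # 1.0: every scale differs by 1 everywhere
```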

Research on a system for determining the timing of shipment based on artificial intelligence-based crop maturity checks and consideration of fluctuations in agricultural product market prices (인공지능 기반 농작물 성숙도 체크와 농산물 시장가격 변동을 고려한 출하시기 결정시스템 연구)

  • LI YU;NamHo Kim
    • Smart Media Journal / v.13 no.1 / pp.9-17 / 2024
  • This study aims to develop an integrated agricultural distribution network management system to improve the quality, profit, and decision-making efficiency of agricultural products. We adopt two key techniques: crop maturity detection based on the YOLOX object detection algorithm and market price prediction based on the Prophet model. By training the detection model, crops at various maturity stages could be identified accurately, thereby optimizing the shipment timing. At the same time, by collecting historical market price data and predicting prices with the Prophet model, we provided reliable price trend information to shipping decision makers. The results show that the model incorporating the holiday factor significantly outperformed the model that did not, indicating that holidays have a strong effect on prices. The system provides powerful tools and decision support to farmers and agricultural distribution managers, helping them make informed decisions across seasons and holidays. In addition, it can optimize the agricultural distribution network and improve the quality and profit of agricultural products.
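The shipment-timing decision combines the two signals the study describes: a detected maturity stage (from the YOLOX detector) and a forecast price trend (from the Prophet model). A minimal sketch of such a rule is below; the stage names, threshold, and decision logic are illustrative assumptions, not the paper's exact system.

```python
# Hedged sketch of a shipment-timing rule: ship when the crop is mature and
# the price forecast does not favor waiting. Stages and rule are assumptions.
MATURE_STAGES = {"mature", "overripe"}

def ship_now(maturity_stage, price_today, price_forecast_tomorrow):
    """Ship if the crop is ready and waiting is not expected to raise the price."""
    if maturity_stage not in MATURE_STAGES:
        return False
    return price_forecast_tomorrow <= price_today

print(ship_now("mature", 1200, 1150))  # True: ripe, and price expected to fall
print(ship_now("unripe", 1200, 1150))  # False: not yet mature
```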

Autoencoder Based Fire Detection Model Using Multi-Sensor Data (다중 센서 데이터를 활용한 오토인코더 기반 화재감지 모델)

  • Taeseong Kim;Hyo-Rin Choi;Young-Seon Jeong
    • Smart Media Journal / v.13 no.4 / pp.23-32 / 2024
  • Large-scale fires and their consequent damages are becoming increasingly common, but confidence in fire detection systems is waning. Widely used chemical fire detectors frequently generate false alarms, while video-based deep learning fire detection is hampered by its time-consuming and expensive nature. To tackle these issues, this study proposes a fire detection model based on an autoencoder, with the objective of minimizing false alarms while achieving swift and precise fire detection. Because it uses an autoencoder, the proposed model can learn exclusively from normal data without any fire-related data, enhancing its adaptability to diverse environments. By combining data from five distinct sensors, it enables rapid and accurate fire detection. In experiments with various hyperparameter combinations, only one of 14 scenarios encountered false alarm issues. These results underscore the model's potential to curtail fire-related losses and bolster the reliability of fire detection systems.
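The autoencoder idea above — fit on normal multi-sensor readings only, then flag a reading when its reconstruction error exceeds a threshold learned from normal data — can be sketched as below. A PCA-style linear autoencoder stands in for the neural network, and the five "sensors", the injected anomaly, and the mean + 3·std threshold are illustrative assumptions.

```python
import numpy as np

# Sketch: train on normal 5-sensor data only; flag readings whose
# reconstruction error exceeds a threshold derived from normal errors.
rng = np.random.default_rng(2)
normal = rng.standard_normal((500, 5))   # normal multi-sensor readings

# "Encoder/decoder": project onto the top-2 principal components and back.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
V = Vt[:2].T

def recon_error(x):
    z = (x - mean) @ V        # encode
    x_hat = z @ V.T + mean    # decode
    return np.sum((x - x_hat) ** 2, axis=-1)

errs = recon_error(normal)
threshold = errs.mean() + 3 * errs.std()   # assumed threshold rule

fire_like = mean + 20.0 * np.ones(5)       # strongly abnormal reading
print(recon_error(fire_like) > threshold)
```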

Anomaly Detection in Livestock Environmental Time Series Data Using LSTM Autoencoders: A Comparison of Performance Based on Threshold Settings (LSTM 오토인코더를 활용한 축산 환경 시계열 데이터의 이상치 탐지: 경계값 설정에 따른 성능 비교)

  • Se Yeon Chung;Sang Cheol Kim
    • Smart Media Journal / v.13 no.4 / pp.48-56 / 2024
  • In the livestock industry, detecting environmental outliers and predicting data are crucial tasks. Outliers in livestock environment data, typically gathered as time series, can signal rapid changes in the environment and potential unexpected epidemics. Prompt detection of and response to these outliers are essential to minimize stress in livestock and to reduce economic losses for farmers through early detection of epidemic conditions. This study compares two methods of setting the thresholds that define outliers in livestock environment data. The first detects outliers using the Mean Squared Error (MSE); the second uses a Dynamic Threshold, which analyzes variability against the average of previous data to identify outliers. The MSE-based method achieved an accuracy of 94.98%, while the Dynamic Threshold method, which uses the standard deviation, showed superior performance with 99.66% accuracy.
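The two threshold rules compared above can be sketched as below: a static rule that flags points whose squared error against an expected value exceeds a fixed threshold, and a dynamic rule that flags points deviating from the rolling mean of recent data by more than k standard deviations. The toy temperature series, window, and k are illustrative assumptions, not the study's settings.

```python
import numpy as np

# Sketch of static (fixed squared-error) vs dynamic (rolling mean +/- k*std)
# outlier thresholds on a toy livestock-environment series.
rng = np.random.default_rng(3)
temps = 20 + 0.5 * rng.standard_normal(200)  # e.g., barn temperature
temps[150] = 35.0                            # injected outlier

def static_flags(series, expected, threshold):
    return np.flatnonzero((series - expected) ** 2 > threshold)

def dynamic_flags(series, window=20, k=3.0):
    flags = []
    for i in range(window, len(series)):
        past = series[i - window : i]
        if abs(series[i] - past.mean()) > k * past.std():
            flags.append(i)
    return flags

print(static_flags(temps, expected=20.0, threshold=25.0))
print(dynamic_flags(temps))  # both rules should flag index 150
```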

Comparison of Data Reconstruction Methods for Missing Value Imputation (결측값 대체를 위한 데이터 재현 기법 비교)

  • Cheongho Kim;Kee-Hoon Kang
    • The Journal of the Convergence on Culture Technology / v.10 no.1 / pp.603-608 / 2024
  • Nonresponse and missing values are caused by sample dropout and the avoidance of answers in surveys. They raise the risk of information loss and biased inference, so missing values must be replaced with appropriate values. In this paper, we compare several imputation methods that use the mean, linear regression, random forest, K-nearest neighbors, and deep learning-based autoencoder and denoising autoencoder models. These methods of imputing missing values are explained, and each method is compared using continuous simulation data and real data. The comparison confirms that, in most cases, the random forest and denoising autoencoder imputation methods perform better than the others.
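Two of the simpler methods in this comparison can be sketched as below: mean imputation versus k-nearest-neighbor imputation on a correlated synthetic dataset. The data, missingness pattern, and k are illustrative; the paper also evaluates regression, random forest, and (denoising) autoencoder imputation.

```python
import numpy as np

# Sketch: compare mean imputation vs kNN imputation of missing y values,
# scored by RMSE against the held-out true values. All data is synthetic.
rng = np.random.default_rng(4)
x = rng.standard_normal(300)
y = 2 * x + 0.1 * rng.standard_normal(300)  # y strongly correlated with x

missing = np.zeros(300, dtype=bool)
missing[rng.choice(300, 60, replace=False)] = True
true_y = y[missing]

# Mean imputation: replace every missing y with the observed mean.
mean_imp = np.full(missing.sum(), y[~missing].mean())

# kNN imputation (k=5): average y of the 5 nearest observed points in x.
def knn_impute(xq, k=5):
    idx = np.argsort(np.abs(x[~missing] - xq))[:k]
    return y[~missing][idx].mean()

knn_imp = np.array([knn_impute(xq) for xq in x[missing]])

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(rmse(mean_imp, true_y), rmse(knn_imp, true_y))  # kNN should be far lower
```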

Three-Dimensional Convolutional Vision Transformer for Sign Language Translation (수어 번역을 위한 3차원 컨볼루션 비전 트랜스포머)

  • Horyeor Seong;Hyeonjoong Cho
    • The Transactions of the Korea Information Processing Society / v.13 no.3 / pp.140-147 / 2024
  • In the Republic of Korea, people with hearing impairments are the second-largest group within the registered disability community, after those with physical disabilities. Despite this demographic significance, research on sign language translation technology is limited for several reasons, including the small market size and the lack of adequately annotated datasets. Nevertheless, a few researchers continue to improve the performance of sign language translation technologies by employing recent advances in deep learning such as the transformer architecture, as transformer-based models have demonstrated noteworthy performance in tasks such as action recognition and video classification. This study focuses on enhancing the recognition performance of sign language translation by combining transformers with a 3D-CNN. Through experimental evaluations on the PHOENIX-Weather-2014T dataset [1], we show that the proposed model achieves performance comparable to existing models in terms of Floating Point Operations Per Second (FLOPs).
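The tokenization step implied by combining 3D convolution with a transformer can be sketched as below: a sign-language clip (frames × height × width) is cut into non-overlapping 3D patches, and each patch is flattened into one token vector for the transformer. The clip and patch sizes are illustrative, and the paper's actual architecture uses learned 3D-CNN features rather than raw patches.

```python
import numpy as np

# Sketch of video-to-token shape arithmetic: cut a clip into non-overlapping
# 3D patches and flatten each patch into one transformer token.
T, H, W = 16, 64, 64      # frames, height, width (single channel, assumed)
pt, ph, pw = 2, 16, 16    # 3D patch size (assumed)
clip = np.zeros((T, H, W))

patches = clip.reshape(T // pt, pt, H // ph, ph, W // pw, pw)
tokens = patches.transpose(0, 2, 4, 1, 3, 5).reshape(-1, pt * ph * pw)

print(tokens.shape)  # (128, 512): 8*4*4 tokens, each a flattened 2x16x16 patch
```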