• Title/Summary/Keyword: Deep Learning AI


Characterization of Deep Learning-Based and Hybrid Iterative Reconstruction for Image Quality Optimization at Computer Tomography Angiography (전산화단층촬영조영술에서 화질 최적화를 위한 딥러닝 기반 및 하이브리드 반복 재구성의 특성분석)

  • Pil-Hyun, Jeon;Chang-Lae, Lee
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.1
    • /
    • pp.1-9
    • /
    • 2023
  • For optimal image quality of computed tomography angiography (CTA), different iodine concentrations and scan parameters were applied to quantitatively evaluate the image quality characteristics of filtered back projection (FBP), hybrid iterative reconstruction (hybrid-IR), and deep learning reconstruction (DLR). A 320-row-detector CT scanner scanned a phantom with various iodine concentrations (1.2, 2.9, 4.9, 6.9, 10.4, 14.3, 18.4, and 25.9 mg/mL) located at the edge of a cylindrical water phantom with a diameter of 19 cm. Data obtained with each reconstruction technique were analyzed in terms of noise, coefficient of variation (COV), and root mean square error (RMSE). As the iodine concentration increased, the CT number increased, but the noise showed no notable change. COV decreased with increasing iodine concentration for FBP, adaptive iterative dose reduction (AIDR) 3D, and advanced intelligent clear-IQ engine (AiCE) across the tube voltages and tube currents tested. In addition, at low iodine concentrations there was a slight difference in COV between the reconstruction techniques, but the difference diminished as the iodine concentration increased. For AiCE, RMSE decreased as the iodine concentration increased but rose again beyond a specific concentration (4.9 mg/mL). Therefore, for optimal CTA image acquisition, users should consider the characteristics of scan parameters such as tube current and tube voltage, as well as iodine concentration, for each reconstruction technique.
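The three metrics used in this abstract (noise, COV, RMSE) are standard and easy to state concretely. A minimal sketch of how they might be computed over a region of interest (ROI), with entirely hypothetical CT-number values (not the paper's data):

```python
import numpy as np

def roi_metrics(roi, reference):
    """Image-quality metrics used to compare reconstructions.

    roi       : pixel values (CT numbers, HU) inside an iodine ROI
    reference : the corresponding baseline / ground-truth values
    """
    mean = roi.mean()
    noise = roi.std(ddof=0)          # noise: standard deviation of the ROI
    cov = noise / mean               # coefficient of variation
    rmse = np.sqrt(np.mean((roi - reference) ** 2))  # root mean square error
    return noise, cov, rmse

# Hypothetical ROI: CT numbers around 300 HU with ~10 HU of noise
rng = np.random.default_rng(0)
roi = 300 + 10 * rng.standard_normal(1000)
reference = np.full_like(roi, 300.0)
noise, cov, rmse = roi_metrics(roi, reference)
```

For an unbiased reconstruction the RMSE is dominated by the noise term, which is why the two metrics track each other at low contrast.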

Development of Deep Learning Based Ensemble Land Cover Segmentation Algorithm Using Drone Aerial Images (드론 항공영상을 이용한 딥러닝 기반 앙상블 토지 피복 분할 알고리즘 개발)

  • Hae-Gwang Park;Seung-Ki Baek;Seung Hyun Jeong
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.1
    • /
    • pp.71-80
    • /
    • 2024
  • In this study, we propose an ensemble learning technique to enhance the semantic segmentation performance of images captured by Unmanned Aerial Vehicles (UAVs). With the increasing use of UAVs in fields such as urban planning, techniques applying deep learning segmentation methods to land cover segmentation have been actively developed. The study suggests a method that combines prominent segmentation models, namely U-Net, DeepLabV3, and Fully Convolutional Network (FCN), to improve segmentation prediction performance. The proposed approach integrates the training loss, validation accuracy, and class scores of the three segmentation models to enhance overall prediction performance. The method was applied and evaluated on a land cover segmentation problem involving seven classes: buildings, roads, parking lots, fields, trees, empty spaces, and areas with unspecified labels, using images captured by UAVs. The performance of the ensemble model was evaluated by mean Intersection over Union (mIoU), and a comparison of the proposed ensemble model with the three existing segmentation methods showed improved mIoU performance. Consequently, the study confirms that the proposed technique can enhance the performance of semantic segmentation models.
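The abstract does not specify how the three models' outputs are fused, so as an illustration only, a weighted soft-voting ensemble over per-model class-score maps, plus the mIoU metric it is evaluated with, might look like this (function names and the weighting scheme are assumptions):

```python
import numpy as np

def ensemble_predict(score_maps, weights):
    """Soft-voting ensemble: weighted average of per-model class scores.

    score_maps : list of arrays, each (H, W, C) of class scores
    weights    : one weight per model (e.g. derived from validation accuracy)
    """
    stacked = np.stack(score_maps)                       # (M, H, W, C)
    w = np.asarray(weights, dtype=float)[:, None, None, None]
    fused = (w * stacked).sum(axis=0) / w.sum()          # weighted mean scores
    return fused.argmax(axis=-1)                         # (H, W) label map

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union over the classes present in the data."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```

In this scheme a model with higher validation accuracy simply contributes more to the fused score map before the per-pixel argmax.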

Application of Reinforcement Learning in Detecting Fraudulent Insurance Claims

  • Choi, Jung-Moon;Kim, Ji-Hyeok;Kim, Sung-Jun
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.9
    • /
    • pp.125-131
    • /
    • 2021
  • Detecting fraudulent insurance claims is difficult because the data are small and unbalanced. Some research has been carried out to better cope with various types of fraudulent claims. Technology for detecting fraudulent insurance claims is now increasingly utilized in the insurance and technology fields, thanks to the use of artificial intelligence (AI) methods in addition to traditional statistical detection and rule-based methods. This study obtained meaningful results for a fraudulent insurance claim detection model based on machine learning (ML) and deep learning (DL) technologies, using fraudulent insurance claim data from previous research. In our search for a method to enhance the detection of fraudulent insurance claims, we investigated the reinforcement learning (RL) method and examined how it could be applied to this problem. Because there are few previous cases of applying RL here, we first had to define the essential RL elements based on previous research on anomaly detection. We then applied the deep Q-network (DQN) and double deep Q-network (DDQN) to train the fraudulent insurance claim detection model. By doing so, we confirmed that our model demonstrated better performance than previous machine learning models.
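The abstract's key step is defining the RL elements (state, action, reward) for a classification task. The paper uses DQN/DDQN, whose details are not given here; as a rough stand-in, a tabular Q-learning sketch with an assumed element mapping (the state buckets, reward values, and data are all hypothetical) could look like:

```python
import numpy as np

# Assumed RL-element mapping, for illustration only:
#   state  : a discretized claim-feature bucket
#   action : 0 = classify as legitimate, 1 = classify as fraudulent
#   reward : +1 for a correct classification, -1 for an incorrect one
rng = np.random.default_rng(1)
n_states, n_actions = 8, 2
Q = np.zeros((n_states, n_actions))
alpha, epsilon = 0.1, 0.1

# Hypothetical labeled claims: (state_bucket, true_label), where buckets
# 6 and 7 happen to correspond to fraudulent claims
claims = [(int(s), int(s >= 6)) for s in rng.integers(0, n_states, 5000)]

for state, label in claims:
    if rng.random() < epsilon:                 # epsilon-greedy exploration
        action = int(rng.integers(0, n_actions))
    else:
        action = int(Q[state].argmax())
    reward = 1.0 if action == label else -1.0
    # One-step Q update: each claim is treated as a terminal episode
    Q[state, action] += alpha * (reward - Q[state, action])

policy = Q.argmax(axis=1)                      # learned classification rule
```

A DQN replaces the table with a neural network over continuous claim features, and DDQN decouples action selection from value estimation to reduce overestimation, but the element definitions above stay the same.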

Can AI-generated EUV images be used for determining DEMs of solar corona?

  • Park, Eunsu;Lee, Jin-Yi;Moon, Yong-Jae;Lee, Kyoung-Sun;Lee, Harim;Cho, Il-Hyun;Lim, Daye
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.46 no.1
    • /
    • pp.60.2-60.2
    • /
    • 2021
  • In this study, we determine the differential emission measure (DEM) of the solar corona using three SDO/AIA EUV channel images and three AI-generated ones. To produce the AI-generated images, we apply a deep learning model based on multi-layer perceptrons, assuming that all pixels in solar EUV images are independent of one another. For the input data, we use three SDO/AIA EUV channels (171, 193, and 211). For the target data, we use the other three SDO/AIA EUV channels (94, 131, and 335). We train the model using 358 pairs of SDO/AIA EUV images taken at 00:00 UT each day in 2011. We use SDO/AIA pixels within 1.2 solar radii to cover not only the solar disk but also the region above the limb. We apply our model to several brightening patches and loops in SDO/AIA images for the determination of DEMs. Our main results are as follows. First, our model successfully generates three solar EUV channel images from the other three channel images. Second, the noise in the AI-generated EUV channel images is greatly reduced compared to the original target images. Third, the DEMs estimated using three SDO/AIA images and three AI-generated ones are similar to those estimated using three SDO/AIA images and three stacked (50-frame) ones. These results imply that our deep learning model is able to learn the temperature response functions of SDO/AIA channel images, showing that AI-generated data can plausibly be used for multi-wavelength studies in various scientific fields. (SDO: Solar Dynamics Observatory; AIA: Atmospheric Imaging Assembly; EUV: Extreme Ultraviolet; DEM: Differential Emission Measure)
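The structural assumption here is that each pixel is mapped independently from its three input-channel intensities to three target-channel intensities. The paper uses a multi-layer perceptron; as a deliberately simplified sketch of the same per-pixel idea, a linear channel mapping fitted by least squares (with synthetic intensities, not SDO/AIA data) looks like:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training pixels: rows are pixels, columns are EUV channels.
# X holds the three input channels (171, 193, 211), Y the targets (94, 131, 335).
X = rng.uniform(0, 1000, size=(10000, 3))
W_true = np.array([[0.20, 0.10, 0.05],      # synthetic "response" mixing matrix
                   [0.10, 0.30, 0.10],
                   [0.05, 0.10, 0.25]])
Y = X @ W_true                               # noiseless synthetic targets

# Fit the per-pixel channel mapping by least squares (stand-in for MLP training)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "AI-generated" channels for new pixels are then a per-pixel forward pass
X_new = rng.uniform(0, 1000, size=(5, 3))
Y_pred = X_new @ W
```

The real mapping between AIA channels is nonlinear in temperature, which is why the paper needs an MLP rather than a single matrix; the sketch only conveys the pixel-independence assumption.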


AI-BASED Monitoring Of New Plant Growth Management System Design

  • Seung-Ho Lee;Seung-Jung Shin
    • International journal of advanced smart convergence
    • /
    • v.12 no.3
    • /
    • pp.104-108
    • /
    • 2023
  • This paper presents research on an innovative system using Python-based artificial intelligence technology in the field of plant growth monitoring. Monitoring and analyzing the health status and growth environment of plants in real time contributes to improving the efficiency and quality of crop production. This paper proposes a method of processing and analyzing plant image data using computer vision and deep learning technologies. The system was implemented in Python with the major deep learning frameworks TensorFlow and PyTorch. A camera system that monitors plants in real time acquires image data and provides it as input to a deep neural network model. This model was used to determine the growth state of plants, the presence of pests, and nutritional status. The proposed system informs users of plant state changes in real time by providing monitoring results visually or as notifications. In addition, the collected data are used to build analysis and prediction models that forecast future growth conditions or anomalies. This paper covers the design and implementation of a Python-based plant growth monitoring system and its data processing and analysis methods, and is expected to contribute to improving plant production efficiency and reducing resource consumption.

Analysis of deep learning-based deep clustering method (딥러닝 기반의 딥 클러스터링 방법에 대한 분석)

  • Hyun Kwon;Jun Lee
    • Convergence Security Journal
    • /
    • v.23 no.4
    • /
    • pp.61-70
    • /
    • 2023
  • Clustering is an unsupervised learning method that groups data based on features such as distance metrics, using data without known labels or ground truth values. This method has the advantage of being applicable to various types of data, including images, text, and audio, without the need for labeling. Traditional clustering techniques apply dimensionality reduction methods or extract specific features before clustering. However, with the advancement of deep learning models, research has emerged on deep clustering techniques that use models such as autoencoders and generative adversarial networks to represent input data as latent vectors. In this study, we propose a deep learning-based deep clustering technique. In this approach, we use an autoencoder to transform the input data into latent vectors, then construct a vector space that reflects the cluster structure and perform k-means clustering in it. We conducted experiments on the MNIST and Fashion-MNIST datasets using the PyTorch machine learning library as the experimental environment. The model used is a convolutional neural network-based autoencoder. The experimental results show an accuracy of 89.42% for MNIST and 56.64% for Fashion-MNIST when k is set to 10.
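The second stage of this pipeline, k-means over latent vectors, can be sketched compactly. The autoencoder itself is omitted; the latent vectors are assumed to already exist (here replaced by synthetic 2-D blobs), and the deterministic seeding is a simplification of the usual k-means++ initialization:

```python
import numpy as np

def kmeans(latent, k, iters=100):
    """Plain k-means on latent vectors. In the paper the latent vectors
    come from the encoder of a trained convolutional autoencoder; here
    they are assumed to be given."""
    # Simple deterministic seeding with evenly spaced samples
    # (k-means++ would be the usual choice in practice).
    centers = latent[np.linspace(0, len(latent) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each latent vector to its nearest center
        d = np.linalg.norm(latent[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned vectors
        new = np.array([latent[labels == c].mean(axis=0)
                        if (labels == c).any() else centers[c]
                        for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Hypothetical latent vectors: three well-separated 2-D blobs of 50 points each
rng = np.random.default_rng(3)
z = np.vstack([rng.normal(loc, 0.1, size=(50, 2)) for loc in (0.0, 5.0, 10.0)])
labels, centers = kmeans(z, k=3)
```

The quality of the result depends almost entirely on how well the autoencoder's latent space separates the classes, which is what the paper's reported accuracies measure.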

Physical-Layer Technology Trend and Prospect for AI-based Mobile Communication (AI 기반 이동통신 물리계층 기술 동향과 전망)

  • Chang, K.;Ko, Y.J.;Kim, I.G.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.5
    • /
    • pp.14-29
    • /
    • 2020
  • The 6G mobile communication system will become a backbone infrastructure around 2030 for the future digital world by providing distinctive services such as five-sense holograms, ultra-high reliability/low latency, ultra-high-precision positioning, ultra-massive connectivity, and gigabit-per-second data rates for aerial and maritime terminals. The recent remarkable advances in machine learning (ML) technology have demonstrated its efficiency in wireless networking fields such as resource management and cell-configuration optimization. Further innovation in ML is expected to play an important role in solving new problems arising from 6G network management and service delivery. In contrast, applying ML at the physical layer (PHY) tackles basic problems in radio links, such as overcoming signal distortion and interference. This paper reviews the methodologies of ML-based PHY, relevant industrial trends, and candidate technologies, including future research directions and standardization impacts.

Real-Time Applications of Video Compression in the Field of Medical Environments

  • K. Siva Kumar;P. Bindhu Madhavi;K. Janaki
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.73-76
    • /
    • 2023
  • We introduce DCNN and DRAE approaches for the compression of medical videos. There is an increasing need for medical video compression to decrease file size and storage requirements. With a lossy compression technique a higher compression ratio can be attained, but information is lost and diagnostic mistakes may follow; this creates the requirement to store medical video in lossless format. The aim of utilizing a lossless compression tool is to maximize compression, because traditional lossless compression techniques yield poor compression ratios. The proposed DCNN and DRAE encoding successfully exploits the temporal and spatial redundancy present in video sequences. This paper describes the lossless encoding mode and shows how a compression ratio greater than 2:1 can be achieved.

The Role of Artificial Intelligence in Gastric Cancer: Surgical and Therapeutic Perspectives: A Comprehensive Review

  • JunHo Lee;Hanna Lee;Jun-won Chung
    • Journal of Gastric Cancer
    • /
    • v.23 no.3
    • /
    • pp.375-387
    • /
    • 2023
  • Stomach cancer has a high annual mortality rate worldwide, necessitating early detection and accurate treatment. Even experienced specialists can make erroneous judgments based on several factors. Artificial intelligence (AI) technologies are being developed rapidly to assist in this field. Here, we aimed to determine how AI technology is used in gastric cancer diagnosis and analyze how it helps patients and surgeons. Early detection and correct treatment of early gastric cancer (EGC) can greatly increase survival rates. To achieve this, it is important to accurately determine the diagnosis, the depth of the lesion, and the presence or absence of lymph node metastasis, and to suggest an appropriate treatment method. A deep learning algorithm trained on gastric lesion endoscopy images, morphological characteristics, and patient clinical information detects gastric lesions with high accuracy, sensitivity, and specificity, and predicts morphological characteristics. Through this, AI supports the judgment of specialists in selecting the correct treatment method between endoscopic procedures and radical resection, and helps predict the resection margins of lesions. Additionally, AI technology has increased the diagnostic rate of both relatively inexperienced and skilled endoscopic diagnosticians. However, there were limitations in the data used for learning, such as quantitatively insufficient data, retrospective study designs, single-center designs, and a limited variety of lesions. Nevertheless, this assisted endoscopic diagnosis technology incorporating deep learning is sufficiently practical and future-oriented, and can play an important role in suggesting accurate treatment plans to surgeons for the resection of lesions in the treatment of EGC.

Interaction art using Video Synthesis Technology

  • Kim, Sung-Soo;Eom, Hyun-Young;Lim, Chan
    • International Journal of Advanced Culture Technology
    • /
    • v.7 no.2
    • /
    • pp.195-200
    • /
    • 2019
  • Media art, a combination of media technology and art, is making great progress in combination with AI, IoT, and VR. This paper aims to meet people's needs by creating a video that simulates the dance moves of a figure the user admires, using media art that features interactive interaction between users and works. The project proposes a universal image synthesis system that minimizes equipment constraints by utilizing a deep learning-based skeleton estimation system and a deep neural network structure, rather than a Kinect-based skeleton image. The experimental results showed that, through inference and synthesis of motions the users did not actually perform, the deep learning system successfully generated images as if the users had actually danced them.