• Title/Summary/Keyword: Deep Learning

Search Result 5,763, Processing Time 0.035 seconds

Comparison of Pre-processed Brain Tumor MR Images Using Deep Learning Detection Algorithms

  • Kwon, Hee Jae;Lee, Gi Pyo;Kim, Young Jae;Kim, Kwang Gi
    • Journal of Multimedia Information System / v.8 no.2 / pp.79-84 / 2021
  • Detecting brain tumors of different sizes is a challenging task. This study aimed to identify brain tumors using detection algorithms. Most studies in this area use segmentation; however, we utilized detection owing to its advantages. Data comprised 11,200 MR images obtained from 64 patients. The deep learning model used was RetinaNet with a ResNet152 backbone. The model was trained on three types of pre-processed images: normal, general histogram equalization, and contrast-limited adaptive histogram equalization (CLAHE). The three types were compared to determine which pre-processing technique performs best with the deep learning algorithm. During pre-processing, we converted the MR images from DICOM to JPG format and adjusted the window level and width. Among the pre-processed image sets, CLAHE showed the best performance, with a sensitivity of 81.79%. The RetinaNet model for detecting brain tumors demonstrated satisfactory performance in finding lesions. In the future, we plan to develop a new model that improves detection performance using well-processed data. This study lays the groundwork for future detection technologies that can help doctors find lesions more easily in clinical tasks.
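The two pre-processing steps the abstract names, window level/width adjustment and histogram equalization, can be sketched in NumPy. This is a minimal illustration: global equalization is shown, whereas CLAHE additionally tiles the image and clips each tile's histogram (e.g. via OpenCV's `cv2.createCLAHE`); the parameter values below are illustrative, not from the paper.

```python
import numpy as np

def window_mr_image(img, level, width):
    """Apply a DICOM-style window level/width, scaling to 8-bit [0, 255]."""
    lo, hi = level - width / 2, level + width / 2
    out = np.clip(img, lo, hi)
    return ((out - lo) / (hi - lo) * 255).astype(np.uint8)

def histogram_equalize(img):
    """Global histogram equalization on an 8-bit image.
    (CLAHE additionally tiles the image and clips the histogram.)"""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255
    return cdf[img].astype(np.uint8)

# Example: a synthetic 16-bit-range image windowed to soft-tissue contrast.
mr = np.linspace(0, 4000, 64 * 64).reshape(64, 64)
windowed = window_mr_image(mr, level=2000, width=2000)
equalized = histogram_equalize(windowed)
```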

Deep learning platform architecture for monitoring image-based real-time construction site equipment and worker (이미지 기반 실시간 건설 현장 장비 및 작업자 모니터링을 위한 딥러닝 플랫폼 아키텍처 도출)

  • Kang, Tae-Wook;Kim, Byung-Kon;Jung, Yoo-Seok
    • Journal of KIBIM / v.11 no.2 / pp.24-32 / 2021
  • Recently, beginning with smart construction research, interest in technologies that automate construction site management using artificial intelligence has been increasing. Automating site management requires recognizing objects such as construction equipment and workers and automatically analyzing the relationships between them. For example, if the relationship between workers and construction equipment on a site is known, various site-management use cases such as work-productivity analysis, equipment operation-status monitoring, and safety management can be implemented. This study derives the real-time object detection platform architecture required for construction site management with deep learning technology, whose use has recently been increasing. To this end, deep learning models that support real-time object detection are investigated and analyzed. Based on this analysis, a deep learning model development process for real-time construction site object detection is defined. Following the defined process, a prototype that learns and detects construction site objects is developed, and platform development considerations and the architecture are then derived from the results.
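One downstream analysis the abstract mentions, relating detected workers to detected equipment for safety management, can be sketched as a proximity check over detector output. The `"worker"`/`"equipment"` labels, the box format, and the threshold are assumptions for illustration; the detections would come from a real-time detector such as YOLO.

```python
def box_center(box):
    """Center of an (x0, y0, x1, y1) bounding box."""
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def proximity_alerts(detections, threshold):
    """Flag worker/equipment pairs closer than a safety threshold.
    `detections` is a list of (label, box) pairs from an object detector."""
    workers = [b for lbl, b in detections if lbl == "worker"]
    equipment = [b for lbl, b in detections if lbl == "equipment"]
    alerts = []
    for w in workers:
        for e in equipment:
            wx, wy = box_center(w)
            ex, ey = box_center(e)
            if ((wx - ex) ** 2 + (wy - ey) ** 2) ** 0.5 < threshold:
                alerts.append((w, e))
    return alerts

# One frame's detections: a worker near an excavator, another machine far away.
frame = [("worker", (0, 0, 10, 10)),
         ("equipment", (5, 5, 15, 15)),
         ("equipment", (100, 100, 110, 110))]
alerts = proximity_alerts(frame, threshold=20)
```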

Performance Enhancement Technique of Visible Communication Systems based on Deep-Learning (딥러닝 기반 가시광 통신 시스템의 성능 향상 기법)

  • Seo, Sung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.4 / pp.51-55 / 2021
  • In this paper, we propose a deep learning-based interference cancellation scheme for visible light communication (VLC) systems in smart buildings. The proposed scheme estimates the channel noise information by applying a deep learning model, and the estimated channel noise is then stored in a database. In the modulator, the channel noise that degrades VLC performance is effectively removed through the interference cancellation technique. Performance is evaluated in terms of bit error rate (BER). The simulation results confirm that the proposed scheme achieves better BER performance. Consequently, the proposed interference cancellation with deep learning improves the signal quality of VLC systems by effectively removing the channel noise. The results of this paper can be applied to VLC for smart buildings and to general communication systems.
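The cancellation principle, subtracting an estimated channel noise from the received signal before the bit decision, can be sketched as follows. In the paper a trained network produces the noise estimate; here the estimate is simply given, and the on-off-keying decision rule is an assumption for illustration.

```python
import numpy as np

def interference_cancel(received, noise_estimate):
    """Subtract the (model-estimated) channel noise from the received signal.
    A trained deep learning model would supply noise_estimate."""
    return received - noise_estimate

def bit_error_rate(tx_bits, rx_signal):
    """BER for on-off keying: decide bit 1 when the sample exceeds 0.5."""
    rx_bits = (rx_signal > 0.5).astype(int)
    return float(np.mean(tx_bits != rx_bits))

# Strong alternating interference flips every decision until it is cancelled.
tx_bits = np.array([0, 1, 0, 1, 1, 0])
noise = np.array([0.7, -0.6, 0.7, -0.6, -0.6, 0.7])
received = tx_bits + noise
ber_before = bit_error_rate(tx_bits, received)
ber_after = bit_error_rate(tx_bits, interference_cancel(received, noise))
```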

Implementation of Deep Learning-based Label Inspection System Applicable to Edge Computing Environments (엣지 컴퓨팅 환경에서 적용 가능한 딥러닝 기반 라벨 검사 시스템 구현)

  • Bae, Ju-Won;Han, Byung-Gil
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.2 / pp.77-83 / 2022
  • In this paper, a two-stage object detection approach is proposed to implement a deep learning-based label inspection system in edge computing environments. Since the label printed on a product during the production process contains important product information, it is essential to verify that the label information is correct. The proposed system uses a lightweight deep learning model that can run on low-performance edge computing devices, and the two-stage object detection approach is applied to compensate for its relatively low accuracy. The proposed two-stage approach consists of two object detection networks: a Label Area Detection Network and a Character Detection Network. The Label Area Detection Network finds the label area in the product image, and the Character Detection Network detects the words within that area. Using this approach, characters can be detected precisely even with lightweight deep learning models. The SF-YOLO model applied in the proposed system is a YOLO-based lightweight object detection network designed for edge computing devices. This model showed up to 2 times faster processing and considerably higher accuracy compared to other YOLO-based lightweight models such as YOLOv3-tiny and YOLOv4-tiny. Also, because its computational cost is low, it can easily be applied in edge computing environments.
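The two-stage flow, find the label area first, then detect characters only inside that crop, can be sketched with toy stand-ins for the two networks. The thresholding "detectors" below are placeholders (SF-YOLO's internals are not reproduced here); only the pipeline structure matches the abstract.

```python
import numpy as np

def detect_label_area(image):
    """Stand-in for the Label Area Detection Network:
    bound the bright region of a binary product image."""
    ys, xs = np.where(image > 0)
    return xs.min(), ys.min(), xs.max() + 1, ys.max() + 1

def detect_characters(label_crop):
    """Stand-in for the Character Detection Network:
    return (start, end) column spans of bright runs in the crop."""
    cols = label_crop.max(axis=0) > 0
    boxes, start = [], None
    for x, on in enumerate(cols):
        if on and start is None:
            start = x
        elif not on and start is not None:
            boxes.append((start, x))
            start = None
    if start is not None:
        boxes.append((start, len(cols)))
    return boxes

def two_stage_inspect(image):
    """Stage 1 crops the label area; stage 2 runs only on the crop."""
    x0, y0, x1, y1 = detect_label_area(image)
    crop = image[y0:y1, x0:x1]
    return [(x0 + a, x0 + b) for a, b in detect_characters(crop)]
```

Restricting stage 2 to the cropped label area is what lets a lightweight model stay precise: the characters occupy a much larger fraction of its input.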

Deep Learning Based Electricity Demand Prediction and Power Grid Operation according to Urbanization Rate and Industrial Differences (도시화율 및 산업 구성 차이에 따른 딥러닝 기반 전력 수요 변동 예측 및 전력망 운영)

  • KIM, KAYOUNG;LEE, SANGHUN
    • Journal of Hydrogen and New Energy / v.33 no.5 / pp.591-597 / 2022
  • Recently, technologies for efficient power grid operation have become important due to climate change. For this reason, predicting power demand using deep learning is being considered, and it is necessary to understand the influence of regional characteristics, industrial structure, and climate. This study analyzed the power demand of New Jersey, US, which has a high urbanization rate and a large service industry, and West Virginia, US, which has a low urbanization rate and large coal, energy, and chemical industries. Using a recurrent neural network algorithm, the power demand from January 2020 to August 2022 was learned, and daily and weekly power demand was predicted. In addition, power grid operation based on the power demand forecast was discussed. Unlike previous studies that have focused on the deep learning algorithm itself, this study analyzes regional power demand characteristics, the application of the deep learning algorithm, and the power grid operation strategy.
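The core mechanics of RNN-based demand forecasting, sliding lookback windows over the demand series and an unrolled recurrent cell, can be sketched in NumPy. This is an untrained, minimal Elman cell for illustration only; the paper's network, lookback length, and training setup are not specified here and the names below are assumptions.

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a demand series into (input window, next value) pairs."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

class TinyRNN:
    """Minimal Elman RNN cell (untrained), illustrative only."""
    def __init__(self, hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0, 0.1, (1, hidden))   # input-to-hidden
        self.Wh = rng.normal(0, 0.1, (hidden, hidden))  # hidden-to-hidden
        self.Wo = rng.normal(0, 0.1, (hidden, 1))   # hidden-to-output

    def predict(self, window):
        h = np.zeros(self.Wh.shape[0])
        for x in window:                 # unroll over the lookback window
            h = np.tanh(x * self.Wx[0] + h @ self.Wh)
        return float(h @ self.Wo[:, 0])

# Example: weekly-seasonal toy demand, 7-step lookback for daily prediction.
demand = np.sin(np.linspace(0, 10, 50))
X, y = make_windows(demand, lookback=7)
model = TinyRNN(hidden=8)
next_step = model.predict(X[0])
```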

Vibration-based structural health monitoring using CAE-aided unsupervised deep learning

  • Minte, Zhang;Tong, Guo;Ruizhao, Zhu;Yueran, Zong;Zhihong, Pan
    • Smart Structures and Systems / v.30 no.6 / pp.557-569 / 2022
  • Vibration-based structural health monitoring (SHM) is crucial for the dynamic maintenance of civil building structures to protect property and public safety. Analyzing these vibrations with modern artificial intelligence and deep learning (DL) methods is a new trend. This paper proposes an unsupervised deep learning method based on a convolutional autoencoder (CAE), which can overcome the limitations of conventional supervised deep learning. With a convolutional core applied to the DL network, the method can extract features self-adaptively and efficiently. The effectiveness of the method in detecting damage is first tested on a benchmark model. Thereafter, the method is used to detect damage and instant disaster events in a rubber bearing-isolated gymnasium structure. The results indicate that the method enables the CAE network to learn intact vibrations and thereby distinguish between different damage states of the benchmark model, and that the outcome matches the high-dimensional data distribution characteristics visualized by the t-SNE method. In addition, the CAE-based network trained with daily vibrations of the isolating layer in the gymnasium can precisely recover newly collected vibrations and detect the occurrence of ground motion. The proposed method is effective at identifying nonlinear variations in dynamic responses and has the potential to be used for structural condition assessment and safety warning.
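The anomaly-detection principle here, train an autoencoder on intact vibrations only, then score new windows by reconstruction error, can be sketched with a linear autoencoder (PCA) standing in for the trained CAE. The stand-in and the synthetic data are assumptions; only the fit-on-intact / score-by-error logic follows the abstract.

```python
import numpy as np

def fit_linear_ae(X, k):
    """Fit a linear autoencoder (PCA) on intact vibration windows.
    A trained CAE plays this role in the paper; PCA is a stand-in."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def recon_error(X, mu, V):
    """Reconstruction error = anomaly score for each vibration window."""
    Z = (X - mu) @ V.T          # encode
    Xr = Z @ V + mu             # decode
    return np.linalg.norm(X - Xr, axis=1)

# Intact vibrations lie near a low-dimensional subspace; damage does not.
rng = np.random.default_rng(0)
basis = rng.normal(size=(3, 32))
intact = rng.normal(size=(200, 3)) @ basis
mu, V = fit_linear_ae(intact, k=3)
damaged = intact[:20] + rng.normal(size=(20, 32))
scores = recon_error(damaged, mu, V)   # high scores flag damage
```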

Design and Implementation of a Face Authentication System (딥러닝 기반의 얼굴인증 시스템 설계 및 구현)

  • Lee, Seungik
    • Journal of Software Assessment and Valuation / v.16 no.2 / pp.63-68 / 2020
  • This paper proposes a face authentication system based on a deep learning framework. The proposed system consists of face region detection and feature extraction using a deep learning algorithm, and performs face authentication using a joint Bayesian matrix learning algorithm. The performance of the proposed system is evaluated on various face databases, with two images per person. The face authentication algorithm measures similarity by applying a 2048-dimensional feature extracted through a deep neural network combined with the joint Bayesian algorithm, and computes the equal error rate of failed authentications. The results show that the proposed system using deep learning and joint Bayesian algorithms achieved an equal error rate of 1.2%, a good performance compared to previous approaches.
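The evaluation metric used here, the equal error rate (EER), is the operating point where the false accept rate on impostor pairs equals the false reject rate on genuine pairs. A minimal threshold-sweep sketch, assuming higher similarity scores mean a better match:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep thresholds over the scores and return the rate where
    false accepts (impostors passing) = false rejects (genuine failing)."""
    best_gap, eer = 1.0, 0.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # impostors accepted at threshold t
        frr = np.mean(genuine < t)     # genuine pairs rejected at threshold t
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Well-separated genuine vs. impostor similarity scores give EER = 0.
genuine = np.array([0.90, 0.80, 0.95])
impostor = np.array([0.10, 0.20, 0.15])
print(equal_error_rate(genuine, impostor))
```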

A Deep Learning Method for Brain Tumor Classification Based on Image Gradient

  • Long, Hoang;Lee, Suk-Hwan;Kwon, Seong-Geun;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1233-1241 / 2022
  • Brain tumors are among the deadliest cancers, with a life expectancy of only a few years for those with the most advanced forms. Diagnosing a brain tumor is critical to developing a treatment plan that helps patients with the disease live longer. A misdiagnosis of a brain tumor leads to incorrect medical treatment, decreasing a patient's chance of survival. Radiologists classify brain tumors via biopsy, which takes a long time, so doctors need an automatic classification system to identify brain tumors. Image classification is one application of deep learning methods in computer vision, and one of deep learning's most powerful algorithms is the convolutional neural network (CNN). This paper introduces a novel deep learning structure that uses the image gradient to classify brain tumors. Meningioma, glioma, and pituitary tumors are the three most common forms of brain tumor represented in the Figshare dataset, which contains 3,064 T1-weighted brain images from 233 patients. According to the numerical results, our method is more accurate than other approaches.
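The image-gradient input the abstract refers to is typically a gradient-magnitude map of the MR slice. A minimal NumPy sketch using Sobel kernels (the paper's exact gradient operator is not specified here; Sobel is an illustrative choice):

```python
import numpy as np

def sobel_gradient(img):
    """Gradient-magnitude map via 3x3 Sobel kernels, the kind of
    edge feature that can be fed to a CNN alongside the raw image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, float)
    gy = np.zeros(img.shape, float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

# A vertical intensity step produces a strong response along the edge
# and zero response in flat regions.
step = np.zeros((5, 5))
step[:, 3:] = 1
grad = sobel_gradient(step)
```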

Image-based rainfall prediction from a novel deep learning method

  • Byun, Jongyun;Kim, Jinwon;Jun, Changhyun
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.183-183 / 2021
  • Deep learning methods and their applications have become an essential part of prediction and modeling in water-related research areas, including hydrological processes and climate change. Applying deep learning increases the availability of data sources in hydrology, which is useful for analyzing precipitation, runoff, groundwater level, evapotranspiration, and so on. However, microclimate analysis and prediction with deep learning methods remain limited because of the deficiency of gauge-based data and shortcomings of existing technologies. In this study, a real-time rainfall prediction model was developed from a sky image data set with convolutional neural networks (CNNs). The daily image data were collected at Chung-Ang University and Korea University. For high accuracy, the proposed model considers data classification, image processing, and ratio adjustment of no-rain data. Rainfall predictions were compared with minutely rainfall data at rain gauge stations close to the image sensors. The results indicate that the proposed model could complement the current rainfall observation system and has large potential to fill observation gaps. Information from small-scale areas can advance accurate weather forecasting and hydrological modeling at the micro scale.
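One concrete step the abstract mentions, ratio adjustment of no-rain data, is a class-balance fix: sky-image datasets are dominated by dry frames, so no-rain samples are downsampled before training. A minimal sketch (the dictionary schema and the ratio are assumptions, not the paper's values):

```python
import random

def balance_no_rain(samples, max_ratio, seed=0):
    """Downsample 'no-rain' images so they number at most max_ratio
    times the rain images (the ratio adjustment the study mentions)."""
    rain = [s for s in samples if s["rain"]]
    no_rain = [s for s in samples if not s["rain"]]
    keep = min(len(no_rain), int(max_ratio * len(rain)))
    random.Random(seed).shuffle(no_rain)   # drop a random subset
    return rain + no_rain[:keep]

# 10 rain vs. 90 no-rain frames, capped at a 1:2 rain/no-rain ratio.
dataset = ([{"rain": True} for _ in range(10)] +
           [{"rain": False} for _ in range(90)])
balanced = balance_no_rain(dataset, max_ratio=2)
```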

Pragmatic Assessment of Optimizers in Deep Learning

  • Ajeet K. Jain;PVRD Prasad Rao;K. Venkatesh Sharma
    • International Journal of Computer Science & Network Security / v.23 no.10 / pp.115-128 / 2023
  • Deep learning has been incorporating various optimization techniques motivated by recent advances in pragmatic optimization algorithms, and their usage plays a central role in machine learning. In the recent past, new variants of various optimizers have been put into practice, and their suitability and applicability have been reported across various domains. This resurgence of novelty ranges from stochastic gradient descent to convex, non-convex, and derivative-free approaches. Given this range of optimizers, choosing a best-fit or appropriate optimizer is an important consideration in deep learning, as these workhorse engines determine the final performance of the model. Moreover, an increasing number of deep layers brings higher complexity with hyper-parameter tuning, and consequently the need to search for a befitting optimizer. We empirically examine the most popular and widely used optimizers on various data sets and networks, such as MNIST and GANs, among others. The pragmatic comparison focuses on their similarities, differences, and their suitability for a given application. Additionally, recent optimizer variants are highlighted along with their subtleties. The article emphasizes their critical role and pinpoints options to consider when choosing among them.
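The kind of comparison the article describes can be reproduced in miniature: two of the optimizers it covers, plain SGD and Adam, minimizing the same toy quadratic. This is a self-contained NumPy sketch under standard textbook update rules, not the article's experimental setup; the learning rates are illustrative.

```python
import numpy as np

def sgd(grad_fn, w, lr=0.1, steps=100):
    """Plain stochastic gradient descent update: w <- w - lr * g."""
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w

def adam(grad_fn, w, lr=0.1, steps=100, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: bias-corrected first/second moment estimates scale the step."""
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        mhat = m / (1 - b1 ** t)
        vhat = v / (1 - b2 ** t)
        w = w - lr * mhat / (np.sqrt(vhat) + eps)
    return w

# Both minimize f(w) = ||w - target||^2, whose gradient is 2 (w - target).
target = np.array([3.0, -2.0])
grad = lambda w: 2 * (w - target)
w_sgd = sgd(grad, np.zeros(2))
w_adam = adam(grad, np.zeros(2))
```

On this well-conditioned problem both converge; the practical differences the article surveys (per-parameter scaling, robustness to learning-rate choice) show up on ill-conditioned or noisy objectives.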