• Title/Summary/Keyword: Deep Features


Revolutionizing Brain Tumor Segmentation in MRI with Dynamic Fusion of Handcrafted Features and Global Pathway-based Deep Learning

  • Faizan Ullah;Muhammad Nadeem;Mohammad Abrar
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.1
    • /
    • pp.105-125
    • /
    • 2024
  • Gliomas are the most common malignant brain tumors and cause the most deaths. Manual brain tumor segmentation is expensive, time-consuming, error-prone, and dependent on the radiologist's expertise and experience; segmentation outcomes produced by different radiologists for the same patient may differ. Thus, more robust and dependable methods are needed. Medical imaging researchers have produced numerous semi-automatic and fully automatic brain tumor segmentation algorithms using ML pipelines, based on either handcrafted-feature or data-driven strategies. Current methods rely on CNNs or on handcrafted features such as symmetry analysis, alignment-based features, or textural qualities. CNN approaches learn features directly from the data, while handcrafted features encode domain knowledge. Cascaded algorithms may outperform purely feature-based or purely data-driven (e.g., CNN) methods. A cascaded strategy is presented that supplies a CNN with prior information from handcrafted-feature-based ML algorithms. Each patient has manual ground truth and four MRI modalities (T1, T1c, T2, and FLAIR). Handcrafted features and deep learning are combined to segment brain tumors in a Global Convolutional Neural Network (GCNN). The proposed GCNN architecture, with two parallel CNNs, the CSPathways CNN (CSPCNN) and the MRI Pathways CNN (MRIPCNN), segmented BraTS brain tumors with high accuracy. The proposed model achieved a Dice score of 87%, higher than the state of the art. This research could improve brain tumor segmentation, helping clinicians diagnose and treat patients.
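A minimal sketch of the dual-pathway idea described above, assuming PyTorch. The layer sizes, the number of handcrafted-feature channels, and the class names (`Pathway`, `DualPathwaySegmenter`) are hypothetical stand-ins for the paper's CSPCNN/MRIPCNN, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Pathway(nn.Module):
    """Small convolutional pathway (hypothetical stand-in for CSPCNN / MRIPCNN)."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class DualPathwaySegmenter(nn.Module):
    """Fuses a handcrafted-feature pathway with an MRI pathway, then predicts per-pixel labels."""
    def __init__(self, n_handcrafted=8, n_modalities=4, n_classes=4):
        super().__init__()
        self.handcrafted_path = Pathway(n_handcrafted)  # prior-knowledge maps from handcrafted ML
        self.mri_path = Pathway(n_modalities)           # T1, T1c, T2, FLAIR
        self.head = nn.Conv2d(128, n_classes, 1)        # 64 + 64 fused channels -> class logits

    def forward(self, handcrafted_maps, mri):
        fused = torch.cat([self.handcrafted_path(handcrafted_maps),
                           self.mri_path(mri)], dim=1)
        return self.head(fused)                         # per-pixel class logits

# Example: a batch of 2 patches, 64x64 pixels
model = DualPathwaySegmenter()
logits = model(torch.randn(2, 8, 64, 64), torch.randn(2, 4, 64, 64))
print(logits.shape)  # torch.Size([2, 4, 64, 64])
```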

Deep Learning Model Validation Method Based on Image Data Feature Coverage (영상 데이터 특징 커버리지 기반 딥러닝 모델 검증 기법)

  • Lim, Chang-Nam;Park, Ye-Seul;Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.9
    • /
    • pp.375-384
    • /
    • 2021
  • Deep learning techniques have proven to perform well in image processing and are applied in various fields. The most widely used methods for validating a deep learning model are holdout validation, k-fold cross-validation, and the bootstrap. These legacy methods consider the balance between classes when dividing the data set, but not the ratio of the various features that exist within the same class. If these features are not considered, validation results may be biased toward some features. We therefore propose a deep learning model validation method based on data feature coverage for image classification that improves on the legacy methods. The proposed data feature coverage numerically measures how well the training and evaluation data sets reflect the features of the entire data set. With this method, the data set can be divided while guaranteeing coverage of all features in the entire data set, and the model's evaluation results can be analyzed per feature cluster. By reporting feature cluster information alongside the evaluation results, the method reveals which data features affect the trained model.
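A minimal sketch of a coverage-aware split in the spirit of the abstract, assuming scikit-learn. The use of k-means to form feature clusters, the coverage measure, and the function names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

def coverage(subset_clusters, n_clusters):
    """Fraction of feature clusters represented in a subset (a simple coverage measure)."""
    return len(np.unique(subset_clusters)) / n_clusters

def coverage_aware_split(features, n_clusters=10, test_size=0.2, seed=0):
    """Cluster per-image feature vectors, then split stratified by cluster so every
    feature cluster appears in both the training and evaluation sets."""
    clusters = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(features)
    idx = np.arange(len(features))
    train_idx, test_idx = train_test_split(
        idx, test_size=test_size, stratify=clusters, random_state=seed)
    print("train coverage:", coverage(clusters[train_idx], n_clusters))
    print("eval  coverage:", coverage(clusters[test_idx], n_clusters))
    return train_idx, test_idx, clusters

# Example with random 128-d feature vectors for 500 images
rng = np.random.default_rng(0)
train_idx, test_idx, clusters = coverage_aware_split(rng.normal(size=(500, 128)))
```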

Deep Learning based Photo Horizon Correction (딥러닝을 이용한 영상 수평 보정)

  • Hong, Eunbin;Jeon, Junho;Cho, Sunghyun;Lee, Seungyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.3
    • /
    • pp.95-103
    • /
    • 2017
  • Horizon correction is a crucial stage for image composition enhancement. In this paper, we propose a deep learning based method for estimating the slanted angle of a photograph and correcting it. To estimate and correct the horizon direction, existing methods use hand-crafted low-level features such as lines, planes, and gradient distributions. However, these methods may not work well on images that contain no lines or planes. To tackle this limitation and robustly estimate the slanted angle, we propose a convolutional neural network (CNN) based method that learns more generic features from a huge dataset. In addition, we utilize multiple adaptive spatial pooling layers to extract multi-scale image features for better performance. In the experimental results, we show that our CNN-based approach robustly and accurately estimates the slanted angle of an image regardless of its content, even if the image contains no lines or planes at all.
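A minimal sketch of angle regression with multi-scale adaptive pooling, assuming PyTorch. The backbone depth, pooling grid sizes, and the class name `HorizonAngleNet` are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class HorizonAngleNet(nn.Module):
    """CNN that regresses a slanted angle; adaptive pooling at several grid sizes
    gathers multi-scale features before the regression head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # adaptive pooling at 1x1, 2x2 and 4x4 grids, concatenated
        self.pools = nn.ModuleList([nn.AdaptiveAvgPool2d(s) for s in (1, 2, 4)])
        pooled_dim = 128 * (1 * 1 + 2 * 2 + 4 * 4)
        self.head = nn.Sequential(nn.Linear(pooled_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, x):
        f = self.backbone(x)
        multi = torch.cat([p(f).flatten(1) for p in self.pools], dim=1)
        return self.head(multi)  # predicted slanted angle (e.g., in degrees)

angle = HorizonAngleNet()(torch.randn(1, 3, 224, 224))
print(angle.shape)  # torch.Size([1, 1])
```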

Convolutional Neural Network with Expert Knowledge for Hyperspectral Remote Sensing Imagery Classification

  • Wu, Chunming;Wang, Meng;Gao, Lang;Song, Weijing;Tian, Tian;Choo, Kim-Kwang Raymond
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.8
    • /
    • pp.3917-3941
    • /
    • 2019
  • The recent interest in artificial intelligence and machine learning has partly contributed to interest in using such approaches for hyperspectral remote sensing (HRS) imagery classification, as evidenced by the increasing number of deep frameworks with deep convolutional neural network (CNN) structures proposed in the literature. In these approaches, obtaining high-quality deep features with a CNN is not always easy or efficient because of the complex data distribution and the limited sample size. In this paper, conventional handcrafted multi-features based on expert knowledge are introduced as the input of a specially designed CNN to improve the pixel description and classification performance of HRS imagery. Introducing these handcrafted features reduces the complexity of the original HRS data and lowers the sample requirements by eliminating redundant information and improving the starting point of deep feature training. It also provides some concise and effective features that are not readily available from direct training with a CNN. Evaluations using three public HRS datasets demonstrate the utility of our proposed method in HRS classification.
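A minimal sketch of feeding handcrafted descriptors alongside raw spectra into a CNN, assuming PyTorch. The band count, descriptor count, and the class name `HybridHSIClassifier` are hypothetical; the paper's network design may differ substantially.

```python
import torch
import torch.nn as nn

class HybridHSIClassifier(nn.Module):
    """1-D CNN over a per-pixel vector that stacks raw spectral bands with
    handcrafted descriptors (e.g., texture or morphology statistics)."""
    def __init__(self, n_bands=103, n_handcrafted=16, n_classes=9):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(8),
        )
        self.fc = nn.Linear(32 * 8, n_classes)

    def forward(self, spectra, handcrafted):
        # concatenate spectral bands and expert-knowledge descriptors into one channel
        x = torch.cat([spectra, handcrafted], dim=1).unsqueeze(1)  # (B, 1, bands + descriptors)
        return self.fc(self.conv(x).flatten(1))

model = HybridHSIClassifier()
logits = model(torch.randn(4, 103), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 9])
```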

Understanding the Association Between Cryptocurrency Price Predictive Performance and Input Features (암호화폐 종가 예측 성능과 입력 변수 간의 연관성 분석)

  • Park, Jaehyun;Seo, Yeong-Seok
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.1
    • /
    • pp.19-28
    • /
    • 2022
  • Recently, cryptocurrency has attracted much attention, and studies on cryptocurrency price prediction have been actively conducted. In particular, efforts to improve prediction performance by applying deep learning models are continuing. The LSTM (Long Short-Term Memory) model, which shows high performance on time series data among deep learning models, has been applied in various ways, but it performs poorly on highly volatile cryptocurrency price data. Although new input features have been sought to address this problem, there is little study of which input features degrade predictive performance. Thus, in this paper, we collect recent data for six cryptocurrencies, including Bitcoin and Ethereum, and analyze the effect of input features on cryptocurrency price predictive performance through statistics and deep learning. The experiments show that predictive performance is best when open price, high price, low price, volume, and price are combined, excluding the rate of closing price fluctuation.
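A minimal sketch of an LSTM regressor over the feature combination named in the abstract, assuming PyTorch. The window length, hidden size, and the class name `PriceLSTM` are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

FEATURES = ["open", "high", "low", "volume", "price"]  # combination reported to work best

class PriceLSTM(nn.Module):
    """LSTM that maps a window of daily feature vectors to the next closing price."""
    def __init__(self, n_features=len(FEATURES), hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])      # prediction from the last time step

model = PriceLSTM()
pred = model(torch.randn(8, 30, len(FEATURES)))  # 30-day windows for 8 samples
print(pred.shape)  # torch.Size([8, 1])
```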

Wavelet-like convolutional neural network structure for time-series data classification

  • Park, Seungtae;Jeong, Haedong;Min, Hyungcheol;Lee, Hojin;Lee, Seungchul
    • Smart Structures and Systems
    • /
    • v.22 no.2
    • /
    • pp.175-183
    • /
    • 2018
  • Time-series data often contain some of the most valuable information in many fields, including manufacturing. Because time-series data are relatively cheap to acquire, they (e.g., vibration signals) have become a crucial part of big data even on manufacturing shop floors. Recently, deep-learning models have shown state-of-the-art performance for analyzing big data because of their sophisticated structures and considerable computational power. Traditional models for machinery-monitoring systems have relied heavily on features selected by human experts, and their representational power fails as the data distribution becomes complicated. Deep-learning models, on the other hand, automatically select highly abstracted features during the optimization process, and their representational power is better than that of traditional neural network models. However, the applicability of deep-learning models to the field of prognostics and health management (PHM) has not been well investigated yet. This study integrates the "residual fitting" mechanism inherently embedded in the wavelet transform into a convolutional neural network deep-learning structure. As a result, the architecture combines a signal smoother and classification procedures into a single model. Validation results on rotor vibration data demonstrate that our model outperforms all other off-the-shelf feature-based models.
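A minimal sketch of the "residual fitting" idea, assuming PyTorch: each block learns a smoother and treats what it misses as a detail (residual) signal, loosely analogous to one level of a wavelet decomposition. The block design, level count, and class names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class WaveletLikeBlock(nn.Module):
    """Splits a 1-D signal into a smoothed approximation and a residual (detail)
    component, mimicking one level of a wavelet decomposition with a learnable filter."""
    def __init__(self, channels):
        super().__init__()
        self.smooth = nn.Conv1d(channels, channels, kernel_size=9, padding=4)

    def forward(self, x):
        approx = self.smooth(x)
        detail = x - approx            # residual fitting: what the smoother missed
        return approx, detail

class WaveletLikeCNN(nn.Module):
    """Stacks wavelet-like blocks (downsampling the approximation at each level)
    and classifies from pooled detail statistics."""
    def __init__(self, n_classes=4, levels=3):
        super().__init__()
        self.blocks = nn.ModuleList([WaveletLikeBlock(1) for _ in range(levels)])
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(levels + 1, n_classes)

    def forward(self, x):              # x: (batch, 1, length)
        feats = []
        for block in self.blocks:
            x, detail = block(x)
            feats.append(self.pool(detail))
            x = nn.functional.avg_pool1d(x, 2)   # coarser scale for the next level
        feats.append(self.pool(x))
        return self.fc(torch.cat(feats, dim=1).flatten(1))

logits = WaveletLikeCNN()(torch.randn(2, 1, 1024))
print(logits.shape)  # torch.Size([2, 4])
```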

Deep Learning based Emotion Classification using Multi Modal Bio-signals (다중 모달 생체신호를 이용한 딥러닝 기반 감정 분류)

  • Lee, JeeEun;Yoo, Sun Kook
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.2
    • /
    • pp.146-154
    • /
    • 2020
  • Negative emotion causes stress and a lack of concentration, so classifying negative emotion is important for recognizing risk factors. To classify emotion status, methods such as questionnaires and interviews are commonly used, but their results can vary with personal interpretation. To address this problem, we acquire multi-modal bio-signals such as electrocardiogram (ECG), skin temperature (ST), and galvanic skin response (GSR), and extract features from them. A neural network (NN), a deep neural network (DNN), and a deep belief network (DBN) are designed on the multi-modal bio-signals to analyze emotion status. As a result, the DBN based on features extracted from ECG, ST, and GSR shows the highest accuracy (93.8%), which is 5.7% higher than the NN, 1.4% higher than the DNN, and 12.2% higher than using only a single bio-signal (GSR). Both multi-modal bio-signal acquisition and the deep learning classifier play important roles in classifying emotion.
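A minimal sketch of a classifier over concatenated ECG/ST/GSR feature vectors, assuming PyTorch. A plain feed-forward network is used here as a stand-in for the paper's DBN; the feature dimensions and class name `EmotionClassifier` are assumptions.

```python
import torch
import torch.nn as nn

class EmotionClassifier(nn.Module):
    """Feed-forward classifier over features concatenated from ECG, skin temperature
    (ST), and galvanic skin response (GSR); a DNN standing in for the DBN."""
    def __init__(self, n_ecg=10, n_st=4, n_gsr=6, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_ecg + n_st + n_gsr, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, ecg, st, gsr):
        # fuse the three modalities by simple concatenation of their feature vectors
        return self.net(torch.cat([ecg, st, gsr], dim=1))

model = EmotionClassifier()
logits = model(torch.randn(16, 10), torch.randn(16, 4), torch.randn(16, 6))
print(logits.shape)  # torch.Size([16, 2])
```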

Deep CNN based Pilot Allocation Scheme in Massive MIMO systems

  • Kim, Kwihoon;Lee, Joohyung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.10
    • /
    • pp.4214-4230
    • /
    • 2020
  • This paper introduces a pilot allocation scheme for massive MIMO systems based on deep convolutional neural network (CNN) learning. This work extends a prior work on a basic deep learning framework for the pilot assignment problem, which is difficult to apply in high-user-density scenarios owing to the factorial increase in both input features and output layers. To solve this problem, exploiting the strengths of CNNs in learning image data, we design input features that represent users' locations in all cells as image data in a two-dimensional fixed-size matrix. Furthermore, using a sorting mechanism to apply a proper rule, we construct output layers with linear space complexity in the number of users. We also develop a theoretical framework for the network capacity model of massive MIMO systems and apply it to the training process. Finally, we implement the proposed deep CNN-based pilot assignment scheme using a commercial vanilla CNN, which takes shift-invariant characteristics into account. Through extensive simulation, we demonstrate that the proposed scheme reaches about 98% of the theoretical upper-bound performance with an elapsed time of 0.842 ms and low complexity under high-user-density conditions.
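A minimal sketch of a CNN that reads an image-like map of user locations and scores pilot sequences per user, assuming PyTorch. The grid size, cell count, layer sizes, and the class name `PilotAllocCNN` are assumptions; the sorting rule and capacity model from the paper are not reproduced here.

```python
import torch
import torch.nn as nn

class PilotAllocCNN(nn.Module):
    """CNN that reads a fixed-size 2-D map of user locations (one channel per cell)
    and outputs, for each user, scores over the available pilot sequences."""
    def __init__(self, n_cells=7, n_users=10, n_pilots=10):
        super().__init__()
        self.n_users, self.n_pilots = n_users, n_pilots
        self.conv = nn.Sequential(
            nn.Conv2d(n_cells, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # output size grows linearly with the number of users
        self.fc = nn.Linear(64 * 4 * 4, n_users * n_pilots)

    def forward(self, location_map):               # (batch, n_cells, grid, grid)
        logits = self.fc(self.conv(location_map).flatten(1))
        return logits.view(-1, self.n_users, self.n_pilots)

scores = PilotAllocCNN()(torch.randn(2, 7, 32, 32))
print(scores.shape)  # torch.Size([2, 10, 10]) -- per-user pilot scores
```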

A Study on the Process Planning for Secondary Operations on Features of Deep Drawing Cup and the Development of the Expert System-Based CAPP (Deep Drawing의 후가공 특징형상 공정설계 및 전문가시스템 개발에 관한 연구)

  • 오준환;이재원;조성진;남배중;양재우
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.15 no.11
    • /
    • pp.46-57
    • /
    • 1998
  • Even though there are some studies on the deep drawing process and its process planning, little work has been done on process planning for secondary operations of deep drawing such as piercing. In this paper, we first systematize the methodology of process planning for secondary operations on an axisymmetric cup. Second, we describe the development of an expert system for their CAPP. For these studies, we extracted process planning knowledge from experts and analyzed and systematized it. The expert system uses a rule-based reasoning paradigm. The shape information of the manufacturing features of the secondary operations is manually input to the system through SUI, and the process planning results are automatically passed to AutoCAD. We believe that the systematized process knowledge and the developed expert system for its CAPP could greatly aid the associated field.
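A minimal sketch of rule-based process planning for secondary-operation features, written in Python for illustration. The feature attributes, rule conditions, and resulting operations are entirely hypothetical and do not reflect the paper's knowledge base.

```python
# Illustrative forward-chaining rules: map recognized features on a drawn cup to
# candidate secondary operations. Feature names and rules are assumptions only.
RULES = [
    (lambda f: f["type"] == "hole" and f["location"] == "wall",   ["cam piercing"]),
    (lambda f: f["type"] == "hole" and f["location"] == "bottom", ["piercing"]),
    (lambda f: f["type"] == "flange" and not f["trimmed"],        ["trimming", "flanging"]),
]

def plan_secondary_operations(features):
    """Apply the first matching rule to each recognized feature and collect operations."""
    plan = []
    for feature in features:
        for condition, operations in RULES:
            if condition(feature):
                plan.extend(operations)
                break
    return plan

cup_features = [
    {"type": "hole", "location": "bottom"},
    {"type": "flange", "trimmed": False},
]
print(plan_secondary_operations(cup_features))  # ['piercing', 'trimming', 'flanging']
```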


Improved Classification of Cancerous Histopathology Images using Color Channel Separation and Deep Learning

  • Gupta, Rachit Kumar;Manhas, Jatinder
    • Journal of Multimedia Information System
    • /
    • v.8 no.3
    • /
    • pp.175-182
    • /
    • 2021
  • Oral cancer is the second most diagnosed cancer among the Indian population and ranks sixth worldwide. It is one of the deadliest cancers, with a high mortality rate and very low 5-year survival rates even after treatment. It is therefore necessary to detect oral malignancies as early as possible so that patients can receive timely treatment and their survival chances improve. In recent years, many researchers have proposed deep learning based frameworks that can detect malignancies in medical images. In this paper we propose a deep learning-based framework that detects oral cancer from histopathology images very efficiently. Our model splits the color channels and extracts deep features from each individual channel, rather than from a single combined channel, with the help of EfficientNet B3. The features from the different channels are fused by a feature fusion module designed as a layer and placed before the dense layers of EfficientNet. The experiments were performed on our own dataset collected from hospitals; we also evaluated the model on the BreakHis and ICML datasets. The results produced by our model compare favorably with previously reported results.
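A minimal sketch of channel separation with per-channel EfficientNet-B3 feature extraction and fusion, assuming PyTorch and torchvision. Concatenation is used here as the fusion step for simplicity; the paper's fusion module, backbone sharing, and head sizes may differ.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b3

class ChannelSplitClassifier(nn.Module):
    """Runs each color channel through its own EfficientNet-B3 backbone, fuses the
    three feature vectors, and classifies with dense layers."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.backbones = nn.ModuleList(
            [efficientnet_b3(weights=None).features for _ in range(3)])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Sequential(
            nn.Linear(1536 * 3, 512), nn.ReLU(), nn.Linear(512, n_classes))

    def forward(self, x):                              # x: (batch, 3, H, W) RGB image
        feats = []
        for i, backbone in enumerate(self.backbones):
            channel = x[:, i:i + 1].repeat(1, 3, 1, 1)  # single channel tiled to 3 for the backbone
            feats.append(self.pool(backbone(channel)).flatten(1))
        return self.classifier(torch.cat(feats, dim=1))  # simple concatenation as the fusion step

logits = ChannelSplitClassifier()(torch.randn(1, 3, 300, 300))
print(logits.shape)  # torch.Size([1, 2])
```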