• Title/Summary/Keyword: Deep Learning Dataset


Classification of Leukemia Disease in Peripheral Blood Cell Images Using Convolutional Neural Network

  • Tran, Thanh;Park, Jin-Hyuk;Kwon, Oh-Heum;Moon, Kwang-Seok;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.21 no.10 / pp.1150-1161 / 2018
  • Classification is widely used in medical imaging to categorize patients and non-patients. However, conventional classification requires a complex pipeline of rigid steps, including pre-processing, segmentation, feature extraction, detection, and classification. In this paper, we propose a novel convolutional neural network (CNN), called LeukemiaNet, to classify two types of leukemia, acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML), as well as non-cancerous cases. To extend the limited dataset, a PCA color augmentation process is applied before images are input into LeukemiaNet. This augmentation improves the accuracy of the proposed CNN architecture from 96.9% to 97.2% for distinguishing ALL, AML, and normal cell images.
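The PCA color augmentation mentioned above follows the well-known "fancy PCA" recipe: shift every pixel along the principal components of the RGB covariance by random amounts. A minimal NumPy sketch; the exact parameters LeukemiaNet uses are not given here, so the 0.1 standard deviation is an assumption:

```python
import numpy as np

def pca_color_augment(image, alpha_std=0.1, rng=None):
    """Fancy-PCA color augmentation sketch.

    image: float array of shape (H, W, 3) with values in [0, 1].
    Adds a random multiple of the principal components of the
    image's RGB pixel covariance to every pixel.
    """
    rng = np.random.default_rng(rng)
    pixels = image.reshape(-1, 3)
    cov = np.cov(pixels - pixels.mean(axis=0), rowvar=False)  # 3x3 RGB covariance
    eigvals, eigvecs = np.linalg.eigh(cov)                    # ascending eigenvalues
    alphas = rng.normal(0.0, alpha_std, size=3)               # one draw per component
    shift = eigvecs @ (alphas * eigvals)                      # RGB offset
    return np.clip(image + shift, 0.0, 1.0)

# Example: augment a random 8x8 RGB image.
img = np.random.default_rng(0).random((8, 8, 3))
aug = pca_color_augment(img, rng=1)
```

Because the shift follows the data's own color covariance, object identity is preserved while lighting/color varies, which is why it helps on small datasets.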

Sparse Feature Convolutional Neural Network with Cluster Max Extraction for Fast Object Classification

  • Kim, Sung Hee;Pae, Dong Sung;Kang, Tae-Koo;Kim, Dong W.;Lim, Myo Taeg
    • Journal of Electrical Engineering and Technology / v.13 no.6 / pp.2468-2478 / 2018
  • We propose the Sparse Feature Convolutional Neural Network (SFCNN) to reduce the volume of convolutional neural networks (CNNs). Despite the superior classification performance of CNNs, their enormous network volume requires high computational cost and long processing time, making real-time applications such as online training difficult. We propose an advanced network that reduces the volume of conventional CNNs by producing a region-based sparse feature map. To produce the sparse feature map, two complementary region-based value extraction methods, cluster max extraction and local value extraction, are proposed; cluster max is selected as the main function based on experimental results. To evaluate SFCNN, we conduct experiments against two conventional CNNs. The network trains 59 times faster and tests 81 times faster than the VGG network, with a 1.2% loss of accuracy in multi-class classification on the Caltech101 dataset. In vehicle classification on the GTI Vehicle Image Database, the network trains 88 times faster and tests 94 times faster than the conventional CNNs, with a 0.1% loss of accuracy.
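The abstract does not define the clusters precisely, but the core idea of cluster max extraction, keeping only the maximum activation per region and zeroing the rest, can be sketched with fixed square tiles as the assumed clusters:

```python
import numpy as np

def cluster_max_sparse(feature_map, tile=2):
    """Keep only the maximum value inside each tile x tile region,
    zeroing everything else, to form a sparse feature map."""
    h, w = feature_map.shape
    sparse = np.zeros_like(feature_map)
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            block = feature_map[i:i + tile, j:j + tile]
            r, c = np.unravel_index(np.argmax(block), block.shape)
            sparse[i + r, j + c] = block[r, c]   # keep only the region maximum
    return sparse

fmap = np.arange(16, dtype=float).reshape(4, 4)
sparse = cluster_max_sparse(fmap, tile=2)   # 4 nonzeros remain out of 16
```

A sparse map like this shrinks the effective feature volume, which is the lever the paper uses to cut training and inference time.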

Analyzing the Effect of Activation Functions on Recurrent Neural Networks for Intrusion Detection

  • Le, Thi-Thu-Huong;Kim, Jihyun;Kim, Howon
    • Journal of Multimedia Information System / v.3 no.3 / pp.91-96 / 2016
  • Network security is an important area of information technology, playing a key role in helping managers monitor and control network operation. Many techniques, such as firewall configuration, help prevent anomalous or malicious activity. An Intrusion Detection System (IDS) is one effective method that helps reduce this cost; the more attacks occur, the more necessary intrusion detection becomes. An IDS can be a software system, a hardware system, or a combination of both, and its major role is detecting malicious activity. Recently, many researchers have proposed techniques and algorithms for building tools in this field. In this paper, we improve the performance of IDS by exploring and analyzing the impact of activation functions applied to a recurrent neural network model. We use the KDD Cup dataset for our experiments, and our experimental results verify that the resulting IDS tool is significant in this field.
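The comparison the paper describes, swapping the nonlinearity of a recurrent network while everything else stays fixed, can be sketched with a plain Elman-style RNN cell. The sizes and weights below are arbitrary toy values, not the paper's configuration:

```python
import numpy as np

def rnn_forward(x_seq, W_x, W_h, activation):
    """Run a single-layer Elman RNN over a sequence and return the
    final hidden state; `activation` is the function under study."""
    h = np.zeros(W_h.shape[0])
    for x in x_seq:
        h = activation(W_x @ x + W_h @ h)
    return h

# Candidate activation functions to compare.
acts = {
    "tanh": np.tanh,
    "relu": lambda z: np.maximum(z, 0.0),
    "sigmoid": lambda z: 1.0 / (1.0 + np.exp(-z)),
}

rng = np.random.default_rng(0)
W_x = rng.normal(0, 0.3, (4, 3))     # input-to-hidden weights
W_h = rng.normal(0, 0.3, (4, 4))     # hidden-to-hidden weights
x_seq = rng.normal(0, 1.0, (5, 3))   # 5 time steps, 3 features each

finals = {name: rnn_forward(x_seq, W_x, W_h, f) for name, f in acts.items()}
```

In a real study each variant would be trained on the KDD Cup features and compared on detection metrics; here only the forward pass differs per activation.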

The Method for Generating Recommended Candidates through Prediction of Multi-Criteria Ratings Using CNN-BiLSTM

  • Kim, Jinah;Park, Junhee;Shin, Minchan;Lee, Jihoon;Moon, Nammee
    • Journal of Information Processing Systems / v.17 no.4 / pp.707-720 / 2021
  • To improve the accuracy of recommendation systems, multi-criteria recommendation has been widely researched. However, extracting the preferred features of users and items from the data is highly complicated; to this end, subjective indicators that reflect a user's priorities for personalized recommendation must be derived. In this study, we propose a method for generating recommendation candidates by predicting multi-criteria ratings from reviews and using them to derive user priorities. Using a deep learning model based on a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM), multi-criteria ratings were predicted from reviews. These ratings were then aggregated through a linear regression model to predict the overall rating. This model not only predicts the overall rating but also uses the trained weights of its layers as the user's priorities. Based on this, a new score matrix for recommendation is derived by calculating the similarity between the user and the item according to the criteria, and suitable items are proposed to the user. The experiment was conducted on a dataset collected from TripAdvisor. For performance evaluation, the proposed method was compared with a general recommendation system based on singular value decomposition. The results of the experiments demonstrate the high performance of the proposed method.
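The CNN-BiLSTM rating predictor itself is out of scope here, but the aggregation step the abstract describes, fitting a linear model over per-criterion ratings and reading its weights as user priorities, can be sketched on synthetic data (the three criteria and their weights below are invented for illustration):

```python
import numpy as np

# Toy per-criterion ratings (e.g., cleanliness, service, location) for six
# reviews; the overall rating is a weighted sum with hidden priorities.
rng = np.random.default_rng(0)
criteria = rng.normal(3.5, 1.0, size=(6, 3))   # predicted criterion ratings
true_priorities = np.array([0.5, 0.3, 0.2])    # assumed ground truth
overall = criteria @ true_priorities           # observed overall ratings

# Fit a linear model; its normalized weights act as the user's priorities.
w, *_ = np.linalg.lstsq(criteria, overall, rcond=None)
priorities = w / w.sum()
```

With noise-free synthetic data the least-squares fit recovers the hidden weights exactly; in the paper's setting the weights come from the trained layers of the deep model instead.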

Epileptic Seizure Detection for Multi-channel EEG with Recurrent Convolutional Neural Networks (순환 합성곱 신경망를 이용한 다채널 뇌파 분석의 간질 발작 탐지)

  • Yoo, Ji-Hyun
    • Journal of IKEEE / v.22 no.4 / pp.1175-1179 / 2018
  • In this paper, we propose a recurrent CNN (convolutional neural network) for detecting seizures in patients using EEG signals. In the proposed method, the data were mapped to images so as to preserve the spectral characteristics of the EEG signal and the electrode positions. After this spectral preprocessing, the images were input into the CNN, and spatial and temporal features were extracted without a wavelet transform. Results on the Children's Hospital Boston-MIT (CHB-MIT) dataset showed a sensitivity of 90% and a false positive rate (FPR) of 0.85 per hour.
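The mapping step, turning multi-channel EEG into an image that keeps both spectral content and electrode order, can be sketched as one spectrogram row per channel. The bin count and normalization below are assumptions, not the paper's exact preprocessing:

```python
import numpy as np

def eeg_to_image(signals, n_freq=32):
    """Map multi-channel EEG to a 2-D 'image': one row per electrode
    (preserving channel order), columns are FFT magnitude bins."""
    rows = []
    for ch in signals:                               # electrode order -> row order
        spec = np.abs(np.fft.rfft(ch))[:n_freq]      # magnitude spectrum
        rows.append(spec / (spec.max() + 1e-12))     # per-channel normalization
    return np.stack(rows)

rng = np.random.default_rng(0)
eeg = rng.normal(size=(23, 256))   # 23 channels (as in CHB-MIT), 256 samples
img = eeg_to_image(eeg)
```

A CNN applied to such an image can then learn spatial patterns across electrodes (rows) and spectral patterns across frequencies (columns) jointly.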

The Method of Abandoned Object Recognition based on Neural Networks (신경망 기반의 유기된 물체 인식 방법)

  • Ryu, Dong-Gyun;Lee, Jae-Heung
    • Journal of IKEEE / v.22 no.4 / pp.1131-1139 / 2018
  • This paper proposes a method of recognizing abandoned objects using convolutional neural networks. The method first detects a candidate area for an abandoned object in the image and, if such an area is found, applies a convolutional neural network to it to recognize which object is present. Experiments were conducted through an application system that detects illegal trash dumping, and the results showed that abandoned-object areas were detected efficiently. Each detected area is input to the convolutional neural network and classified as trash or not. To do this, I trained the convolutional neural network with my own trash dataset and an open database. As a result of training, I achieved high accuracy on a test set not included in the training set.

Comparative Analysis of Deep Learning Researches for Compressed Video Quality Improvement (압축 영상 화질 개선을 위한 딥 러닝 연구에 대한 분석)

  • Lee, Young-Woon;Kim, Byung-Gyu
    • Journal of Broadcast Engineering / v.24 no.3 / pp.420-429 / 2019
  • Recently, research using convolutional neural network (CNN)-based approaches has been actively conducted to improve the reduced quality of video compressed with block-based coding standards such as H.265/HEVC. This paper aims to summarize and analyze the network models in these quality-enhancement studies. First, the detailed components of CNNs for quality enhancement are reviewed, and prior studies in the image domain are summarized. Next, related studies are summarized in three aspects, network structure, dataset, and training method, and representative model implementations and experimental results are presented for performance comparison.

SEL-RefineMask: A Seal Segmentation and Recognition Neural Network with SEL-FPN

  • Dun, Ze-dong;Chen, Jian-yu;Qu, Mei-xia;Jiang, Bin
    • Journal of Information Processing Systems / v.18 no.3 / pp.411-427 / 2022
  • Mining historical and cultural information from seals in ancient books is of great significance. However, ancient Chinese seal samples are scarce, carving methods are diverse, and traditional greyscale-based digital image processing methods have difficulty achieving good segmentation and recognition performance. Recently, some deep learning algorithms have been proposed to address this problem; however, current neural networks are difficult to train owing to the lack of datasets. To solve these problems, we propose SEL-RefineMask, which combines a selector of feature pyramid network (SEL-FPN) with RefineMask to segment and recognize seals. The SEL-FPN intelligently selects the specific FPN layer representing the appropriate scale and reduces the number of anchor frames. We evaluated several instance segmentation networks as baselines; the top-1 segmentation result of 64.93% is 5.73% higher than the human result, and the top-1 result of the SEL-RefineMask network reaches 67.96%, surpassing the baselines. After segmentation, a vision transformer recognizes the segmented output, reaching an accuracy of 91%. Furthermore, a dataset of seals in ancient Chinese books (SACB) for segmentation and a small seal font (SSF) dataset for recognition were established, which are publicly available on the website.

Face inpainting via Learnable Structure Knowledge of Fusion Network

  • Yang, You;Liu, Sixun;Xing, Bin;Li, Kesen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.877-893 / 2022
  • With the development of deep learning, face inpainting has been significantly enhanced in the past few years. Although image inpainting frameworks integrated with generative adversarial networks or attention mechanisms have enhanced the semantic understanding among facial components, the reconstruction of corrupted regions still has issues worth exploring, such as blurred edge structure, excessive smoothness, unreasonable semantics, and visual artifacts. To address these issues, we propose a Learnable Structure Knowledge of Fusion Network (LSK-FNet), which learns prior knowledge through an edge generation network for image inpainting. The architecture involves two steps: first, structure information obtained by the edge generation network is used as prior knowledge for the face inpainting network; second, both the generated prior knowledge and the incomplete image are fed into the face inpainting network to obtain the fused information. To improve the accuracy of inpainting, both gated convolution and region normalization are applied in our model. We evaluate LSK-FNet qualitatively and quantitatively on the CelebA-HQ dataset. The experimental results demonstrate that LSK-FNet improves the edge structure and details of facial images. Our model surpasses the compared models on the L1, PSNR, and SSIM metrics; when the masked region is less than 20%, the L1 loss is reduced by more than 4.3%.
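Gated convolution, one of the two components the abstract names, multiplies a feature branch by a learned sigmoid gate so the network can suppress activations inside masked regions. A minimal single-channel NumPy sketch (kernel sizes and the tanh feature activation are assumptions):

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def gated_conv(x, k_feat, k_gate):
    """Gated convolution: a feature path modulated element-wise by a
    sigmoid gate computed from the same input."""
    feat = np.tanh(conv2d(x, k_feat))
    gate = 1.0 / (1.0 + np.exp(-conv2d(x, k_gate)))
    return feat * gate

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))
y = gated_conv(x, rng.normal(size=(3, 3)), rng.normal(size=(3, 3)))
```

In a trained inpainting network both kernels are learned per channel, so the gate learns to distinguish valid pixels from hole pixels.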

Image Captioning with Synergy-Gated Attention and Recurrent Fusion LSTM

  • Yang, You;Chen, Lizhi;Pan, Longyue;Hu, Juntao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.10 / pp.3390-3405 / 2022
  • Long Short-Term Memory (LSTM) combined with an attention mechanism is extensively used to generate semantic sentences in image captioning models. However, the features of salient regions and spatial information are not sufficiently utilized in most related works, and the LSTM also suffers from underutilized information within a single time step. In this paper, two approaches are proposed to solve these problems. First, the Synergy-Gated Attention (SGA) method is proposed, which processes the spatial features and the salient-region features of given images simultaneously; SGA establishes a gated mechanism through the global features to guide the interaction of information between these two feature sets. Second, the Recurrent Fusion LSTM (RF-LSTM) mechanism is proposed, which predicts the next hidden vectors within one time step and improves linguistic coherence by fusing future information. Experimental results on the MSCOCO benchmark dataset show that, compared with state-of-the-art methods, the proposed method improves the performance of the image captioning model and achieves competitive performance on multiple evaluation indicators.
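The abstract's gating idea, letting a gate computed from the global feature decide how much the spatial context versus the salient-region context contributes, can be sketched roughly as follows. This is a hedged simplification: the attention scoring, the scalar gate, and all dimensions below are assumptions, not the paper's SGA formulation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def synergy_gated_fusion(spatial_feats, region_feats, global_feat, w_gate):
    """Fuse spatial-grid and salient-region contexts, weighted by a
    sigmoid gate derived from the global image feature."""
    # Attention over each feature set, scored against the global feature.
    ctx_sp = softmax(spatial_feats @ global_feat) @ spatial_feats
    ctx_rg = softmax(region_feats @ global_feat) @ region_feats
    gate = 1.0 / (1.0 + np.exp(-(w_gate @ global_feat)))  # scalar gate
    return gate * ctx_sp + (1.0 - gate) * ctx_rg

rng = np.random.default_rng(0)
spatial = rng.normal(size=(49, 16))   # e.g., 7x7 grid features
regions = rng.normal(size=(10, 16))   # salient-region features
g = rng.normal(size=16)               # global image feature
fused = synergy_gated_fusion(spatial, regions, g, rng.normal(size=16))
```

The fused vector would then condition the caption decoder at each step; the paper's actual gate is learned jointly with the rest of the model.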