• Title/Summary/Keyword: Deep Convolutional Neural Networks

Wi-Fi RSSI Heat Maps Based Indoor Localization System Using Deep Convolutional Neural Networks

  • Poulose, Alwin;Han, Dong Seog
    • Proceedings of the Korean Society of Broadcast Engineers Conference / pp.717-720 / 2020
  • An indoor localization system that uses Wi-Fi RSSI signals gives accurate user position results. The conventional Wi-Fi RSSI-based localization system uses raw RSSI signals from access points (APs) to estimate the user position. However, the RSSI values at a particular location are usually unstable because of signal propagation effects in indoor environments. To reduce RSSI signal fluctuations, shadow fading, multipath effects, and blockage of Wi-Fi RSSI signals, we propose a Wi-Fi localization system that exploits the advantages of Wi-Fi RSSI heat maps. The proposed system uses a regression model with deep convolutional neural networks (DCNNs) and yields accurate user positions for indoor localization. The experimental results demonstrate the superior performance of the proposed localization system.

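As a rough illustration of the heat-map idea, the sketch below rasterizes per-AP RSSI readings into a 2D grid that a CNN regressor could consume as an image. The grid size, AP positions, and the simple distance-based attenuation are illustrative assumptions, not the paper's exact construction.

```python
def rssi_heat_map(ap_positions, rssi_values, grid=(8, 8)):
    """Rasterize per-AP RSSI readings into a grid; each cell holds the
    strongest RSSI after a simple per-cell distance penalty."""
    rows, cols = grid
    heat = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            best = float("-inf")
            for (ar, ac), rssi in zip(ap_positions, rssi_values):
                d = ((r - ar) ** 2 + (c - ac) ** 2) ** 0.5
                best = max(best, rssi - 2.0 * d)  # toy distance penalty
            heat[r][c] = best
    return heat

# Two hypothetical APs at opposite grid corners with their RSSI readings.
heat = rssi_heat_map([(0, 0), (7, 7)], [-40.0, -55.0])
# The cell at an AP's location carries that AP's raw RSSI.
```

A stack of such grids (one per time step or per AP subset) would give the CNN a spatially structured input rather than a flat RSSI vector.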

Enhanced Network Intrusion Detection using Deep Convolutional Neural Networks

  • Naseer, Sheraz;Saleem, Yasir
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.10 / pp.5159-5178 / 2018
  • Network intrusion detection is a rapidly growing field of information security due to its importance for modern IT infrastructure. Researchers from the machine learning and data mining disciplines have devised many supervised and unsupervised learning techniques to achieve reliable anomaly detection. In this paper, a deep convolutional neural network (DCNN) based intrusion detection system (IDS) is proposed, implemented, and analyzed. The DCNN core of the proposed IDS is fine-tuned using randomized search over the configuration space. The system is trained and tested on the NSL-KDD training and testing datasets using a GPU. The proposed DCNN model is compared with other classifiers using well-known metrics, including the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), accuracy, the precision-recall curve, and mean average precision (mAP). The experimental results of the proposed DCNN-based IDS show promise for real-world application in anomaly detection systems.
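For context on one of the metrics reported above, here is a minimal sketch of computing AUC from raw anomaly scores and binary labels via the rank-sum (Mann-Whitney) formulation; the scores and labels are made-up toy data, not NSL-KDD results.

```python
def auc_score(labels, scores):
    """AUC as the probability that a randomly chosen positive sample
    outranks a randomly chosen negative one (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]           # 1 = attack, 0 = normal (toy data)
scores = [0.9, 0.7, 0.6, 0.2]   # classifier anomaly scores
print(auc_score(labels, scores))  # 1.0: every positive outranks every negative
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used; the point here is only what the number measures.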

A Survey on Deep Convolutional Neural Networks for Image Steganography and Steganalysis

  • Hussain, Israr;Zeng, Jishen;Qin, Xinhong;Tan, Shunquan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.1228-1248 / 2020
  • Steganography and steganalysis have witnessed immense progress over the past few years driven by advances in deep convolutional neural networks (DCNNs). In this paper, we analyze the current state of research in the latest deep learning-based image steganography and steganalysis frameworks. Our objective is to give future researchers an overview of the work being done on deep learning-based image steganography and steganalysis and to highlight the strengths and weaknesses of existing up-to-date techniques. The results of this study open new approaches for upcoming research and may serve as a source of hypotheses for further significant research on deep learning-based image steganography and steganalysis. Finally, the technical challenges of current methods and several promising directions in deep learning-based steganography and steganalysis are discussed to illustrate how these challenges can be turned into prolific future research avenues.
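As background for what the surveyed DCNN steganalyzers are trained to detect, here is the classical least-significant-bit (LSB) embedding scheme in its simplest form; the byte arrays below stand in for pixel data and are purely illustrative.

```python
def embed_lsb(cover, bits):
    """Embed one message bit into the least significant bit of each
    cover byte (one bit per byte)."""
    return bytes((b & 0xFE) | bit for b, bit in zip(cover, bits))

def extract_lsb(stego, n):
    """Read the first n least significant bits back out of the stego bytes."""
    return [b & 1 for b in stego[:n]]

cover = bytes([120, 121, 122, 123])   # toy "pixel" values
stego = embed_lsb(cover, [1, 0, 1, 1])
print(extract_lsb(stego, 4))  # [1, 0, 1, 1]
```

Each byte changes by at most 1, which is imperceptible to the eye but leaves statistical traces that CNN-based steganalyzers learn to pick up.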

Stylized Image Generation based on Music-image Synesthesia Emotional Style Transfer using CNN Network

  • Xing, Baixi;Dou, Jian;Huang, Qing;Si, Huahao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1464-1485 / 2021
  • The emotional style of a multimedia artwork is abstract content information. This study explores an emotional style transfer method and a possible way of matching music with images of a compatible emotional style. Deep convolutional neural networks (DCNNs) can capture style and provide an iterative style-transfer solution for affective image generation. Here, we learn image emotion features via DCNNs and map the affective style onto other images. We set the image emotion feature as the style target in this style-transfer problem and conducted experiments on affective image generation for eight emotion categories: dignified, dreaming, sad, vigorous, soothing, exciting, joyous, and graceful. A user study was conducted to evaluate the synesthetic emotional image style transfer results against ground-truth user perception triggered by music-image pair stimuli. The transferred affective images for music-image emotional synesthesia perception proved effective according to the user study results.
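CNN style transfer methods typically represent the style target through Gram matrices of feature-map activations. The sketch below computes that style statistic on toy 4-element "channels"; it assumes a Gatys-style formulation, which the paper may refine for its emotion features.

```python
def gram_matrix(features):
    """features: list of flattened per-channel activations.
    Returns pairwise channel inner products (the style statistics)."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

# Two toy channels; orthogonal activations give zero off-diagonal terms.
feats = [[1.0, 0.0, 1.0, 0.0],
         [0.0, 2.0, 0.0, 2.0]]
print(gram_matrix(feats))  # [[2.0, 0.0], [0.0, 8.0]]
```

During transfer, the generated image is iteratively updated so that its Gram matrices approach those of the style target.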

Predicting Employment Earning using Deep Convolutional Neural Networks (딥 컨볼루션 신경망을 이용한 고용 소득 예측)

  • Ramadhani, Adyan Marendra;Kim, Na-Rang;Choi, Hyung-Rim
    • Journal of Digital Convergence / v.16 no.6 / pp.151-161 / 2018
  • Income is a vital aspect of economic life. Knowing their income helps people create budgets that allow them to pay for their living expenses. Income data is used by banks, stores, and service companies for marketing and for retaining loyal customers; it is a crucial demographic element used at a wide variety of customer touch points. Therefore, it is essential to be able to make income predictions for existing and potential customers. This paper aims to predict employment earnings based on personal history, using machine learning techniques such as support vector machines (SVMs), Gaussian classifiers, decision trees, and deep convolutional neural networks (DCNNs). The results show that the DCNN method provides the best results, with 88% accuracy, compared to the other machine learning techniques used in this paper. Dimensionality-reduction techniques such as PCA have the potential to provide further improvement.

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems / v.17 no.2 / pp.337-351 / 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed. Then, a double-layer cascade structure is used to detect the face in each video image. Two deep convolutional neural networks are then used to extract the temporal-domain and spatial-domain facial features from the video. The spatial convolutional neural network extracts spatial information features from each frame of the static expression images in the video. The temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple frames of expression images. A multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method are as high as 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method achieves higher recognition accuracy than other recently reported methods.
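The multiplicative fusion step described above amounts to an element-wise product of the two streams' feature vectors before classification. A minimal sketch, with 4-dimensional placeholder features rather than real CNN outputs:

```python
def multiply_fuse(spatial, temporal):
    """Element-wise product of the spatial-stream and temporal-stream
    feature vectors; a zero in either stream suppresses that dimension."""
    return [s * t for s, t in zip(spatial, temporal)]

spatial = [0.5, 1.0, 0.0, 2.0]   # per-frame appearance features (toy)
temporal = [2.0, 0.5, 3.0, 1.0]  # optical-flow motion features (toy)
print(multiply_fuse(spatial, temporal))  # [1.0, 0.5, 0.0, 2.0]
```

The fused vector would then be fed to the SVM classifier; note how the product keeps only dimensions both streams agree are active.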

Facial Local Region Based Deep Convolutional Neural Networks for Automated Face Recognition (자동 얼굴인식을 위한 얼굴 지역 영역 기반 다중 심층 합성곱 신경망 시스템)

  • Kim, Kyeong-Tae;Choi, Jae-Young
    • Journal of the Korea Convergence Society / v.9 no.4 / pp.47-55 / 2018
  • In this paper, we propose a novel face recognition (FR) method that combines weighted deep local features extracted from multiple deep convolutional neural networks (DCNNs) learned with a set of facial local regions. In the proposed method, the so-called weighted deep local features are generated from multiple DCNNs, each trained on a particular facial local region, and the corresponding weight represents the importance of that local region for improving FR performance. Our weighted deep local features are applied to joint Bayesian metric learning in conjunction with a nearest neighbor (NN) classifier for the purpose of FR. Systematic and comparative experiments show that the proposed method is robust to variations in pose, illumination, and expression, and demonstrate that it is effective for improving face recognition performance.
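A rough sketch of the weighting-plus-NN idea: scale each local-region feature vector by an importance weight, concatenate, and classify by nearest neighbor. The region names, weights, and feature values are invented, and the joint Bayesian metric is simplified here to plain Euclidean distance.

```python
def weighted_concat(region_feats, weights):
    """Scale each local-region feature vector by its weight, then concatenate."""
    out = []
    for feats, w in zip(region_feats, weights):
        out.extend(f * w for f in feats)
    return out

def nn_classify(query, gallery):
    """gallery: list of (label, feature) pairs; return the label of the
    nearest gallery feature under squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(gallery, key=lambda item: dist(query, item[1]))[0]

# Hypothetical "eye region" and "mouth region" features; eyes weighted higher.
probe = weighted_concat([[1.0, 0.0], [0.0, 1.0]], [2.0, 0.5])
gallery = [("alice", [2.0, 0.0, 0.0, 0.5]), ("bob", [0.0, 2.0, 0.5, 0.0])]
print(nn_classify(probe, gallery))  # alice
```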

BSR (Buzz, Squeak, Rattle) noise classification based on convolutional neural network with short-time Fourier transform noise-map (Short-time Fourier transform 소음맵을 이용한 컨볼루션 기반 BSR (Buzz, Squeak, Rattle) 소음 분류)

  • Bu, Seok-Jun;Moon, Se-Min;Cho, Sung-Bae
    • The Journal of the Acoustical Society of Korea / v.37 no.4 / pp.256-261 / 2018
  • Three types of noise are generated inside a vehicle: buzz, squeak, and rattle (BSR). In this paper, we propose a classifier that automatically classifies automotive BSR noise using features extracted by deep convolutional neural networks. In preprocessing, the features of the three noise types are represented as noise maps using the short-time Fourier transform (STFT). To cope with the fact that the position of the actual noise within the generated noise map is unknown, the noise map is divided using a sliding-window method. The internal parameters of the deep convolutional neural networks are visualized using the t-SNE (t-distributed Stochastic Neighbor Embedding) algorithm, and the misclassified data are analyzed qualitatively. To analyze the classified data, the similarity between noise types was quantified by the SSIM (structural similarity index) value, which showed that the retractor tremble sound is most similar to the normal travel sound. The proposed classifier recorded the highest classification accuracy (99.15%) compared with other machine learning classifiers.
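The sliding-window division of the STFT noise map can be sketched as follows; the window width and stride are illustrative choices, and the tiny integer "noise map" stands in for real spectrogram magnitudes.

```python
def sliding_windows(noise_map, width, stride):
    """Slice a 2D noise map (rows = frequency bins, cols = time frames)
    into overlapping time windows of the given width."""
    n_cols = len(noise_map[0])
    return [[row[start:start + width] for row in noise_map]
            for start in range(0, n_cols - width + 1, stride)]

noise_map = [[0, 1, 2, 3, 4],   # toy spectrogram: 2 bins x 5 frames
             [5, 6, 7, 8, 9]]
wins = sliding_windows(noise_map, width=3, stride=1)
print(len(wins))   # 3 overlapping windows
print(wins[0])     # [[0, 1, 2], [5, 6, 7]]
```

Each window is then classified independently, so the noise event is covered by at least one window wherever it occurs in time.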

Toward Optimal FPGA Implementation of Deep Convolutional Neural Networks for Handwritten Hangul Character Recognition

  • Park, Hanwool;Yoo, Yechan;Park, Yoonjin;Lee, Changdae;Lee, Hakkyung;Kim, Injung;Yi, Kang
    • Journal of Computing Science and Engineering / v.12 no.1 / pp.24-35 / 2018
  • The deep convolutional neural network (DCNN) is an advanced technology for image recognition. Because of its extreme computing resource requirements, a software-only DCNN implementation cannot meet real-time requirements, so the need for DCNN accelerator hardware is increasing. In this paper, we present a field programmable gate array (FPGA)-based hardware accelerator design for a DCNN targeting a handwritten Hangul character recognition application. We also present design optimization techniques in the SDAccel environment for searching the optimal FPGA design space, including memory-access optimization, computing-unit parallelism, and data conversion. We achieved a recognition time of about 11.19 ms per character with a Xilinx FPGA accelerator. Our design optimization was performed with Xilinx HLS and the SDAccel environment targeting a Xilinx Kintex XCKU115 FPGA. Our design outperforms a CPU in energy efficiency (the number of samples per unit energy) by 5.88 times and a GPGPU by 5 times. We expect these results to offer an alternative to GPGPU solutions for real-time applications, especially in data centers or server farms where energy consumption is a critical problem.
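A worked example of the energy-efficiency metric used above (samples processed per unit energy); the absolute sample counts and joule figures are invented, and only the 5.88x FPGA-vs-CPU ratio comes from the text.

```python
def samples_per_joule(n_samples, energy_joules):
    """Energy efficiency: how many inference samples per joule consumed."""
    return n_samples / energy_joules

fpga = samples_per_joule(1000, 10.0)   # hypothetical FPGA measurement
cpu = samples_per_joule(1000, 58.8)    # hypothetical CPU measurement
print(round(fpga / cpu, 2))  # 5.88
```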

Object Detection on the Road Environment Using Attention Module-based Lightweight Mask R-CNN (주의 모듈 기반 Mask R-CNN 경량화 모델을 이용한 도로 환경 내 객체 검출 방법)

  • Song, Minsoo;Kim, Wonjun;Jang, Rae-Young;Lee, Ryong;Park, Min-Woo;Lee, Sang-Hwan;Choi, Myung-seok
    • Journal of Broadcast Engineering / v.25 no.6 / pp.944-953 / 2020
  • Object detection plays a crucial role in self-driving systems. With the advances in image recognition based on deep convolutional neural networks, research on object detection has been actively explored. In this paper, we propose a lightweight model of Mask R-CNN, which has been one of the most widely used architectures for object detection, to efficiently predict the location and shape of various objects in road environments. Furthermore, feature maps are adaptively re-calibrated by applying an attention module to the neural network layers that play different roles within the Mask R-CNN, improving detection performance. Various experimental results on real driving scenes demonstrate that the proposed method maintains high detection performance with significantly reduced network parameters.
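Attention-based feature-map re-calibration is often done in the squeeze-and-excitation style: each channel is rescaled by a gate derived from its global average. The sketch below uses a plain sigmoid of the channel mean as the gate, a simplification of the learned module the paper would actually use.

```python
import math

def recalibrate(feature_maps):
    """feature_maps: list of channels, each a flat list of activations.
    Returns channels scaled by sigmoid(mean activation of the channel)."""
    out = []
    for ch in feature_maps:
        gate = 1.0 / (1.0 + math.exp(-sum(ch) / len(ch)))  # sigmoid of mean
        out.append([v * gate for v in ch])
    return out

maps = [[4.0, 4.0], [-4.0, -4.0]]  # toy channels
scaled = recalibrate(maps)
# The strongly positive channel keeps most of its magnitude;
# the strongly negative one is damped toward zero.
```

A learned module would replace the fixed sigmoid-of-mean gate with a small two-layer network, but the rescaling mechanics are the same.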