• Title/Summary/Keyword: CNN Feature

Search Results: 311

A Study on the Gender and Age Classification of Speech Data Using CNN (CNN을 이용한 음성 데이터 성별 및 연령 분류 기술 연구)

  • Park, Dae-Seo;Bang, Joon-Il;Kim, Hwa-Jong;Ko, Young-Jun
    • The Journal of Korean Institute of Information Technology / v.16 no.11 / pp.11-21 / 2018
  • This research categorizes voices using deep learning. The study reviews neural network-based sound classification work and proposes an improved neural network for voice classification. Related studies addressed urban sound data classification but showed poor performance with shallow neural networks. Therefore, this paper first preprocesses the voice data and extracts feature values. Next, the voice is categorized by feeding the feature values into both the previous sound classification network and the proposed neural network. Finally, the classification performance of the two networks is compared and evaluated. The proposed network is organized deeper and wider so that it learns better. The related studies' network achieved 84.8 percent accuracy, while the proposed network achieved 91.4 percent, an improvement of about 6 percentage points.
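
The pipeline described above, preprocess the audio, extract feature values, feed them to a CNN, can be sketched roughly as follows. This is a minimal illustration assuming librosa for MFCC extraction and PyTorch for the classifier; the layer sizes are placeholders, not the paper's actual "deeper and wider" architecture or its label set.

```python
# Minimal sketch: MFCC feature extraction followed by a small CNN classifier.
# Layer sizes and the label set are illustrative, not the paper's design.
import librosa
import torch
import torch.nn as nn

def extract_features(wav_path, n_mfcc=40, max_frames=128):
    """Preprocess a voice clip into a fixed-size MFCC map (the feature-value step)."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    mfcc = librosa.util.fix_length(mfcc, size=max_frames, axis=1)  # pad/trim the time axis
    return torch.from_numpy(mfcc).float().unsqueeze(0)             # shape: (1, n_mfcc, max_frames)

class VoiceCNN(nn.Module):
    """Small CNN that classifies the MFCC map into gender/age categories."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 10 * 32, num_classes),   # 40x128 input halved twice -> 10x32
        )

    def forward(self, x):
        return self.net(x)

model = VoiceCNN(num_classes=8)   # e.g., 2 genders x 4 age bands (hypothetical split); feed batches shaped (N, 1, 40, 128)
```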

Object Tracking Method using Deep Learning and Kalman Filter (딥 러닝 및 칼만 필터를 이용한 객체 추적 방법)

  • Kim, Gicheol;Son, Sohee;Kim, Minseop;Jeon, Jinwoo;Lee, Injae;Cha, Jihun;Choi, Haechul
    • Journal of Broadcast Engineering / v.24 no.3 / pp.495-505 / 2019
  • Typical deep learning algorithms include CNNs (Convolutional Neural Networks), which are mainly used for image recognition, and RNNs (Recurrent Neural Networks), which are mainly used for speech recognition and natural language processing. Among them, CNNs learn filters that generate feature maps, automatically learning features from data, and have become mainstream thanks to excellent performance in image recognition. Various algorithms such as R-CNN have since appeared to improve CNN-based object detection, and algorithms such as YOLO (You Only Look Once) and SSD (Single Shot Multi-box Detector) have been proposed more recently. However, since these deep learning-based detection algorithms judge detection success on still images, stable object tracking and detection in video requires a separate tracking capability. Therefore, this paper proposes a method that combines a Kalman filter with a deep learning-based detection network to improve object tracking and detection performance in video. YOLO v2, which is capable of real-time processing, was used as the detection network; the proposed method achieved a 7.7% IoU improvement over the baseline YOLO v2 network and a processing speed of 20 fps on FHD images.
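
A rough sketch of the core idea, smoothing per-frame detector output with a constant-velocity Kalman filter, is given below, assuming the filterpy library. The `detect_boxes` callable is a hypothetical stand-in for a YOLO-style detector; the paper's actual state design and association logic may differ.

```python
# Minimal sketch: smoothing per-frame detector output with a constant-velocity
# Kalman filter. `detect_boxes` is a hypothetical placeholder for a YOLO-style detector.
import numpy as np
from filterpy.kalman import KalmanFilter

def make_box_filter(cx, cy):
    """Kalman filter over the box centre (cx, cy) with a constant-velocity motion model."""
    kf = KalmanFilter(dim_x=4, dim_z=2)           # state: [cx, cy, vx, vy], measurement: [cx, cy]
    kf.x = np.array([cx, cy, 0.0, 0.0])
    kf.F = np.array([[1, 0, 1, 0],                # x' = x + vx
                     [0, 1, 0, 1],                # y' = y + vy
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]], dtype=float)
    kf.H = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0]], dtype=float)
    kf.P *= 10.0                                   # initial state uncertainty
    kf.R *= 1.0                                    # measurement noise
    kf.Q *= 0.01                                   # process noise
    return kf

def track(frames, detect_boxes):
    """Predict each frame, then correct with the detector's centre when one is available."""
    kf, centres = None, []
    for frame in frames:
        detection = detect_boxes(frame)            # placeholder: returns (cx, cy) or None
        if kf is None and detection is not None:
            kf = make_box_filter(*detection)
        elif kf is not None:
            kf.predict()
            if detection is not None:
                kf.update(np.array(detection))
        if kf is not None:
            centres.append(kf.x[:2].copy())
    return centres
```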

Apply Locally Weight Parameter Elimination for CNN Model Compression (지역적 가중치 파라미터 제거를 적용한 CNN 모델 압축)

  • Lim, Su-chang;Kim, Do-yeon
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.9 / pp.1165-1171 / 2018
  • CNNs require a large amount of computation and memory while extracting object features. A CNN is also trained on a network the user has configured, and because the network structure is fixed, it cannot be modified during training and is difficult to deploy on mobile devices with low computing power. To address these problems, we apply a pruning method to the pre-trained weight file to reduce computation and memory requirements. The method consists of three steps. First, all weights of the pre-trained network are retrieved layer by layer. Second, the absolute values of each layer's weights are averaged; this average is used as a threshold, and weights below it are removed. Finally, the pruned network is re-trained. In experiments with LeNet-5 and AlexNet, we achieved compression rates of 31x on LeNet-5 and 12x on AlexNet.
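
The three-step pruning procedure is concrete enough to sketch. Below is a minimal illustration assuming a PyTorch model; it is not the authors' code, and the retraining step is left to the caller.

```python
# Minimal sketch of per-layer mean-threshold pruning, assuming a PyTorch model.
import torch
import torch.nn as nn

def prune_by_layer_mean(model: nn.Module):
    """Zero out every weight whose magnitude is below its layer's mean |weight|."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, (nn.Conv2d, nn.Linear)):
                w = module.weight.data
                threshold = w.abs().mean()            # step 2: per-layer threshold = mean of |weights|
                mask = (w.abs() >= threshold).float()
                module.weight.data = w * mask         # step 2: remove weights below the threshold
                sparsity = 1.0 - mask.mean().item()
                print(f"{module.__class__.__name__}: pruned {sparsity:.1%} of weights")
    return model  # step 3: fine-tune (re-train) the pruned model afterwards
```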

Mortality Prediction of Older Adults Using Random Forest and Deep Learning (랜덤 포레스트와 딥러닝을 이용한 노인환자의 사망률 예측)

  • Park, Junhyeok;Lee, Songwook
    • KIPS Transactions on Software and Data Engineering / v.9 no.10 / pp.309-316 / 2020
  • We predict the mortality of elderly patients over 65 years old visiting the emergency department using a Feed Forward Neural Network (FFNN) and a Convolutional Neural Network (CNN), respectively. The medical data consist of 99 features, including basic information such as sex, age, temperature, and heart rate, as well as past history and various blood and culture tests. Among these, we used a random forest to select features by measuring their importance for mortality prediction; using the top 80 features by importance performed best. The performance of the FFNN and CNN is compared by training each neural network on the selected features. To train the CNN on images, we convert the medical data to fixed-size images. We obtain better results with the CNN than with the FFNN. With the CNN for mortality prediction, the F1 score and AUC on the test data are 56.9 and 92.1, respectively.
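
The feature-selection step can be illustrated with a short sketch, assuming scikit-learn; the variable names and hyperparameters are placeholders, not the paper's setup.

```python
# Minimal sketch of the feature-selection step: rank the 99 clinical features with a
# random forest and keep the top 80 by importance. Names and settings are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_top_features(X, y, top_k=80):
    """Fit a random forest on the full feature set and return indices of the top_k features."""
    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    forest.fit(X, y)                                    # y: mortality label
    ranked = np.argsort(forest.feature_importances_)[::-1]
    return ranked[:top_k]

# X has shape (n_patients, 99); the selected columns would then be reshaped into
# fixed-size images before being fed to the CNN, as the abstract describes.
```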

Classification of Trucks using Convolutional Neural Network (합성곱 신경망을 사용한 화물차의 차종분류)

  • Lee, Dong-Gyu
    • Journal of Convergence for Information Technology / v.8 no.6 / pp.375-380 / 2018
  • This paper proposes a classification method using a Convolutional Neural Network (CNN) that can determine the truck type from the input image without a separate feature extraction step. To automatically classify vehicle images according to cargo-box type, top-view images of the vehicle are used as input, and a CNN structure suited to those images is designed. Training images with their correct outputs are generated, and the network weights are obtained through the learning process. The actual image is then input to the CNN and its output is computed; classification performance is evaluated by comparing the CNN output with the actual vehicle types. Experimental results show that vehicle images can be classified by cargo-box type with more than 90 percent accuracy, and the method can be used for pre-classification when inspecting loading defects.
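
As a rough illustration of classifying top-view images end to end without a hand-crafted feature-extraction step, here is a minimal PyTorch sketch; the layer sizes, input resolution, and number of cargo-box categories are assumptions, not the paper's design.

```python
# Minimal sketch: a small CNN mapping a top-view vehicle image directly to a cargo-box
# type, with no manual feature-extraction step. Sizes and class count are illustrative.
import torch
import torch.nn as nn

class TruckTypeCNN(nn.Module):
    def __init__(self, num_types: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 32 * 32, num_types)  # assumes 128x128 input images

    def forward(self, x):
        x = self.features(x)                   # learned feature maps replace manual extraction
        return self.classifier(torch.flatten(x, 1))

model = TruckTypeCNN(num_types=3)              # hypothetical number of cargo-box categories
logits = model(torch.randn(1, 3, 128, 128))
predicted_type = logits.argmax(dim=1)          # compared with the labelled type to measure accuracy
```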

Dynamic RNN-CNN malware classifier correspond with Random Dimension Input Data (임의 차원 데이터 대응 Dynamic RNN-CNN 멀웨어 분류기)

  • Lim, Geun-Young;Cho, Young-Bok
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.5 / pp.533-539 / 2019
  • This study proposes a malware classification model that can handle arbitrary-length input data, using the Microsoft Malware Classification Challenge dataset. The approach is based on converting the existing malware data into images: the proposed model generates many images when the malware data is large and few images when it is small. The generated images are learned as time-series data by a dynamic RNN. The RNN output is classified into malware families by applying an attention technique that uses only the highest-weighted output, and by learning the RNN output again with a residual CNN. Experiments on the proposed model showed a micro-averaged F1 score of 92% on the validation data set. The results show that a model capable of learning and classifying arbitrary-length data can be built without special feature extraction or dimensionality reduction.
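
A simplified sketch of the variable-length idea follows, assuming PyTorch: a GRU consumes however many image-chunk embeddings a sample produces, and an attention layer summarizes the outputs before classification. It uses a soft attention-weighted sum rather than the paper's "highest-weighted output only" rule, and the residual CNN stage is omitted.

```python
# Minimal sketch: a GRU over a variable number of image-chunk embeddings with a soft
# attention summary. Shapes are illustrative; the residual CNN stage is not shown.
import torch
import torch.nn as nn

class DynamicAttentionClassifier(nn.Module):
    def __init__(self, chunk_dim=256, hidden=128, num_families=9):
        super().__init__()
        self.rnn = nn.GRU(chunk_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)               # scores each time step
        self.out = nn.Linear(hidden, num_families)     # 9 classes in the Microsoft dataset

    def forward(self, chunks):                          # chunks: (batch, seq_len, chunk_dim), seq_len varies
        states, _ = self.rnn(chunks)
        weights = torch.softmax(self.attn(states), dim=1)
        context = (weights * states).sum(dim=1)         # attention-weighted summary of all steps
        return self.out(context)

model = DynamicAttentionClassifier()
short_sample = torch.randn(1, 4, 256)                   # small malware -> few image chunks
long_sample = torch.randn(1, 37, 256)                   # large malware -> many image chunks
print(model(short_sample).shape, model(long_sample).shape)   # both: (1, 9)
```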

Stacked Sparse Autoencoder-DeepCNN Model Trained on CICIDS2017 Dataset for Network Intrusion Detection (네트워크 침입 탐지를 위해 CICIDS2017 데이터셋으로 학습한 Stacked Sparse Autoencoder-DeepCNN 모델)

  • Lee, Jong-Hwa;Kim, Jong-Wouk;Choi, Mi-Jung
    • KNOM Review / v.24 no.2 / pp.24-34 / 2021
  • Service providers using edge computing provide a high level of service. As a result, edge devices store important information in local storage and have become targets of the latest cyberattacks, which are more difficult to detect. Although experts use security systems such as intrusion detection systems, existing intrusion detection systems have low detection accuracy. Therefore, in this paper we propose a machine learning model for more accurate intrusion detection on edge computing devices. The proposed model is a hybrid that combines a stacked sparse autoencoder (SSAE) and a convolutional neural network (CNN) to extract important feature vectors from the input data using sparsity constraints. To find the optimal model, we compared and analyzed performance while adjusting the sparsity coefficient of the SSAE. The model achieved its highest accuracy, 96.9%, with the sparsity constraints applied, showing that performance is best when the model trains only on important features.
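
The sparsity constraint at the heart of the SSAE can be sketched as follows, assuming PyTorch: a single sparse autoencoder layer whose reconstruction loss is augmented with a KL-divergence penalty controlled by a coefficient, the knob the paper tunes. The stacked layers and the downstream CNN classifier are omitted, and the input dimension is illustrative rather than the CICIDS2017 feature count.

```python
# Minimal sketch of the sparsity idea: one autoencoder layer with a KL-divergence
# sparsity penalty on its hidden activations. Dimensions and coefficients are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, in_dim=78, hidden=32):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden)
        self.decoder = nn.Linear(hidden, in_dim)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))              # hidden activations to be kept sparse
        return self.decoder(h), h

def loss_fn(x, x_hat, h, rho=0.05, beta=1e-3):
    """Reconstruction loss plus a KL-divergence sparsity penalty; beta is the sparsity coefficient."""
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)        # average activation per hidden unit
    kl = rho * torch.log(rho / rho_hat) + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))
    return F.mse_loss(x_hat, x) + beta * kl.sum()
```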

Learning efficiency checking system by measuring human motion detection (사람의 움직임 감지를 측정한 학습 능률 확인 시스템)

  • Kim, Sukhyun;Lee, Jinsung;Yu, Eunsang;Park, Seon-u;Kim, Eung-Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.290-293 / 2021
  • In this paper, we implement a learning efficiency checking system that inspires learning motivation and helps improve concentration by detecting the user's study situation. To this end, data on learning attitude and concentration are measured by extracting the movement of the user's face or body from a real-time camera. A Jetson board was used to implement the real-time embedded system, and a convolutional neural network (CNN) was implemented for image recognition. After the CNN detects the feature part of the object, motion detection is performed. The captured image is shown in a GUI written in PyQt5, and data are collected by sending push messages when each of the obstructing actions occurs. In addition, each function can be executed from the main GUI screen, including a statistical graph computed over the collected data, a to-do list, and white noise. Through the learning efficiency checking system, various functions, including data collection and analysis, were provided to users.


Analysis of Feature Extraction Algorithms Based on Deep Learning (Deep Learning을 기반으로 한 Feature Extraction 알고리즘의 분석)

  • Kim, Gyung Tae;Lee, Yong Hwan;Kim, Yeong Seop
    • Journal of the Semiconductor & Display Technology / v.19 no.2 / pp.60-67 / 2020
  • Recently, artificial intelligence technologies including machine learning have been applied to various fields, and demand for them is increasing. In particular, with the development of AR, VR, and MR technologies related to image processing, the use of deep learning-based computer vision has grown. The deep learning-based object recognition and detection algorithms required for image processing have diversified and advanced, and problems that were difficult to solve with existing methodologies are now solved more simply and easily. This paper introduces various deep learning-based object recognition and feature extraction algorithms used to detect and recognize objects in an image and analyzes the technologies attracting attention.

Skin Lesion Segmentation with Codec Structure Based Upper and Lower Layer Feature Fusion Mechanism

  • Yang, Cheng;Lu, GuanMing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.1 / pp.60-79 / 2022
  • U-Net architecture-based segmentation models have attained remarkable performance in numerous medical image segmentation tasks such as skin lesion segmentation. Nevertheless, as the network deepens, resolution gradually decreases and the loss of spatial information increases. Fusing adjacent layers is not enough to make up for the lost spatial information, which causes errors at the segmentation boundary and lowers segmentation accuracy. To tackle this issue, we propose a new deep learning-based segmentation model. In the decoding stage, the feature channels of each decoding unit are concatenated with all the feature channels of the corresponding upper coding unit, integrating spatial and semantic information to preserve the segmentation effect; the model's robustness and generalization are further promoted by combining an atrous spatial pyramid pooling (ASPP) module and a channel attention module (CAM). Extensive experiments on the common ISIC2016 and ISIC2017 datasets show that our model performs well and outperforms the compared segmentation models for skin lesion segmentation.
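
Of the components mentioned, the channel attention module (CAM) is the easiest to sketch. Below is a minimal squeeze-and-excitation-style version in PyTorch; it is an assumption about the general form of such a module, not the authors' exact design, and the codec structure, ASPP, and multi-layer skip concatenation are omitted.

```python
# Minimal sketch of a squeeze-and-excitation-style channel attention module.
# The full codec, ASPP, and skip-concatenation scheme are not shown.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                # squeeze spatial dims to 1x1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                                   # x: (batch, C, H, W)
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                                  # reweight channels, keep spatial map

# In the decoder, a unit's features would be concatenated with the upper coding unit's
# features along the channel axis before attention, e.g. torch.cat([dec, enc], dim=1).
cam = ChannelAttention(channels=64)
print(cam(torch.randn(2, 64, 32, 32)).shape)                # (2, 64, 32, 32)
```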