• Title/Summary/Keyword: Multiple output CNN

Implementation of handwritten digit recognition CNN structure using GPGPU and Combined Layer (GPGPU와 Combined Layer를 이용한 필기체 숫자인식 CNN구조 구현)

  • Lee, Sangil;Nam, Kihun;Jung, Jun Mo
    • The Journal of the Convergence on Culture Technology / v.3 no.4 / pp.165-169 / 2017
  • CNN (Convolutional Neural Network) is one of the machine learning algorithms that show superior performance in image recognition and classification. CNN is conceptually simple, but it involves a large amount of computation and therefore takes a long time. In this paper, we parallelize the convolution layer, the pooling layer, and the fully connected layer, which consume most of the processing time in a CNN, using the SIMT (Single Instruction Multiple Thread) structure of GPGPU (General-Purpose computing on Graphics Processing Units). We also improve performance by reducing the number of memory accesses: the output of the convolution layer is passed directly to the pooling layer instead of being stored first. We verify the approach on the MNIST dataset and confirm that the proposed CNN structure performs 12.38% better than the existing structure.
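
A minimal NumPy sketch of the fused convolution-pooling idea described above: the max-pooling result is accumulated directly from convolution outputs, so the full convolution feature map is never stored. This is a CPU-side illustration only, not the paper's GPGPU/SIMT implementation; the image size, kernel size, and 2x2 max pooling are assumptions.

    import numpy as np

    def conv2d_valid(x, k):
        """Reference 'valid' 2-D convolution (cross-correlation)."""
        kh, kw = k.shape
        out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
        return out

    def fused_conv_maxpool(x, k, pool=2):
        """Max pooling computed directly from convolution results,
        without materialising the intermediate feature map."""
        kh, kw = k.shape
        ch, cw = x.shape[0] - kh + 1, x.shape[1] - kw + 1   # conv output size
        out = np.full((ch // pool, cw // pool), -np.inf)
        for i in range(out.shape[0] * pool):
            for j in range(out.shape[1] * pool):
                v = np.sum(x[i:i + kh, j:j + kw] * k)       # one conv output value
                out[i // pool, j // pool] = max(out[i // pool, j // pool], v)
        return out

    x = np.random.rand(28, 28)     # e.g. one MNIST image
    k = np.random.rand(5, 5)       # one convolution kernel
    ref = conv2d_valid(x, k)       # 24 x 24 feature map
    pooled = ref.reshape(12, 2, 12, 2).max(axis=(1, 3))
    assert np.allclose(pooled, fused_conv_maxpool(x, k))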

Deep learning-based target distance and velocity estimation technique for OFDM radars (OFDM 레이다를 위한 딥러닝 기반 표적의 거리 및 속도 추정 기법)

  • Choi, Jae-Woong;Jeong, Eui-Rim
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.1 / pp.104-113 / 2022
  • In this paper, we propose a deep learning-based target distance and velocity estimation technique for OFDM radar systems. In the proposed technique, the 2D periodogram is obtained via a 2D fast Fourier transform (FFT) of the reflected signal after removing the modulation effect. The periodogram is the input to both the conventional and the proposed estimators. A peak in the 2D periodogram represents a target, and the constant false alarm rate (CFAR) algorithm is the most popular conventional technique for estimating the target's distance and velocity. In contrast, the proposed method is designed using a multiple-output convolutional neural network (CNN). Unlike the conventional CFAR, the proposed estimator is easier to use because it does not require additional information such as the noise power. According to the simulation results, the proposed CNN improves the mean square error (MSE) by more than 5 times compared with the conventional CFAR, and the proposed estimator becomes more accurate as the number of transmitted OFDM symbols increases.
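
A sketch of what a multiple-output CNN estimator of this kind can look like, written in PyTorch: a shared convolutional trunk over the 2D periodogram and two regression heads, one for distance and one for velocity. The layer sizes, periodogram dimensions, and placeholder targets are assumptions, not the architecture from the paper.

    import torch
    import torch.nn as nn

    class RangeVelocityCNN(nn.Module):
        """Multiple-output CNN: shared convolutional trunk, two regression heads."""
        def __init__(self):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            self.head_distance = nn.Linear(32 * 4 * 4, 1)
            self.head_velocity = nn.Linear(32 * 4 * 4, 1)

        def forward(self, periodogram):            # (batch, 1, N_delay, N_Doppler)
            z = self.trunk(periodogram).flatten(1)
            return self.head_distance(z), self.head_velocity(z)

    # 2-D periodogram of the reflected signal after removing the modulation effect
    rx = torch.randn(8, 64, 128, dtype=torch.complex64)   # OFDM symbols x subcarriers
    periodogram = torch.fft.fft2(rx).abs().pow(2).unsqueeze(1)
    dist, vel = RangeVelocityCNN()(periodogram)

    # training would minimise the MSE of both outputs against the true values
    true_dist, true_vel = torch.zeros_like(dist), torch.zeros_like(vel)  # placeholders
    loss = nn.functional.mse_loss(dist, true_dist) + nn.functional.mse_loss(vel, true_vel)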

Speed-limit Sign Recognition Using Convolutional Neural Network Based on Random Forest (랜덤 포레스트 분류기 기반의 컨벌루션 뉴럴 네트워크를 이용한 속도제한 표지판 인식)

  • Lee, EunJu;Nam, Jae-Yeal;Ko, ByoungChul
    • Journal of Broadcast Engineering / v.20 no.6 / pp.938-949 / 2015
  • In this paper, we propose a speed-limit sign recognition system that is robust to sign changes caused by exterior damage or by color contrast due to the direction of light. For speed-limit sign recognition, we apply a CNN, which shows outstanding performance in the pattern recognition field. However, the original CNN uses multiple hidden layers to extract features and classifies the result with a fully connected MLP (multi-layer perceptron), so the major drawback of the conventional CNN is the long time required for training and testing. In this paper, we apply a randomly connected classifier instead of a fully connected one by combining a random forest with the output of two CNN layers. We show that the CNN with a random forest achieves better recognition results than the CNN with an SVM (Support Vector Machine) or an MLP classifier on eight speed-limit signs from GTSRB (German Traffic Sign Recognition Benchmark).
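
A sketch of the "CNN features + random forest" arrangement using PyTorch and scikit-learn: the convolutional layers act as a feature extractor and a RandomForestClassifier replaces the fully connected MLP on top. The network shape, image size, and random data stand in for the trained CNN and the GTSRB images and are assumptions.

    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.ensemble import RandomForestClassifier

    # small two-stage convolutional feature extractor (stand-in for the trained CNN)
    extractor = nn.Sequential(
        nn.Conv2d(3, 8, 5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
    )

    def cnn_features(images):
        """images: (N, 3, 32, 32) float tensor -> feature matrix for the forest."""
        with torch.no_grad():
            return extractor(images).numpy()

    # illustrative data in place of the eight GTSRB speed-limit classes
    X_train = torch.rand(200, 3, 32, 32)
    y_train = np.random.randint(0, 8, size=200)

    # the random forest replaces the fully connected (MLP) classifier of the CNN
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(cnn_features(X_train), y_train)
    predictions = forest.predict(cnn_features(torch.rand(10, 3, 32, 32)))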

Data anomaly detection for structural health monitoring using a combination network of GANomaly and CNN

  • Liu, Gaoyang;Niu, Yanbo;Zhao, Weijian;Duan, Yuanfeng;Shu, Jiangpeng
    • Smart Structures and Systems / v.29 no.1 / pp.53-62 / 2022
  • The deployment of advanced structural health monitoring (SHM) systems in large-scale civil structures collects large amounts of data. These data may contain multiple types of anomalies (e.g., missing, minor, outlier, etc.) caused by harsh environments, sensor faults, transfer omission and other factors. Such anomalies seriously affect the evaluation of structural performance, so the effective analysis and mining of SHM data is an extremely important task. Inspired by the deep learning paradigm, this study develops a novel generative adversarial network (GAN) and convolutional neural network (CNN)-based data anomaly detection approach for SHM. The framework of the proposed approach includes three modules: (a) a three-channel input is established based on the fast Fourier transform (FFT) and the Gramian angular field (GAF) method; (b) a GANomaly is introduced and trained to extract features from normal samples alone, addressing the class-imbalance problem; (c) based on the output of GANomaly, a CNN is employed to distinguish the types of anomalies. In addition, a dataset-oriented method (i.e., multistage sampling) is adopted to obtain the optimal sampling ratios between all different samples. The proposed approach is tested with acceleration data from an SHM system of a long-span bridge. The results show that the proposed approach has higher accuracy in detecting the multi-pattern anomalies of SHM data.
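
A sketch of how a three-channel image input can be assembled from one acceleration segment with the FFT and the Gramian angular field (GAF), using NumPy. The exact channel composition used in the paper is not specified here, so the three channels below (GAF of the raw signal, GAF of the magnitude spectrum, and a tiled log spectrum) and the segment length are assumptions.

    import numpy as np

    def gramian_angular_field(x, size=64):
        """Gramian angular (summation) field of a 1-D signal, rescaled to [-1, 1]."""
        idx = np.linspace(0, len(x) - 1, size).astype(int)   # resample to fixed length
        s = x[idx]
        s = 2 * (s - s.min()) / (s.max() - s.min() + 1e-12) - 1
        phi = np.arccos(np.clip(s, -1, 1))
        return np.cos(phi[:, None] + phi[None, :])           # size x size matrix

    def three_channel_input(accel, size=64):
        """Stack three image channels built from one acceleration segment."""
        spectrum = np.abs(np.fft.rfft(accel))[:size]
        ch1 = gramian_angular_field(accel, size)              # time-domain GAF
        ch2 = gramian_angular_field(spectrum, size)           # frequency-domain GAF
        ch3 = np.tile(np.log1p(spectrum) / np.log1p(spectrum).max(), (size, 1))
        return np.stack([ch1, ch2, ch3])                      # shape (3, size, size)

    segment = np.random.randn(6000)       # e.g. one minute of 100 Hz acceleration data
    image = three_channel_input(segment)  # input to GANomaly, then to the CNN classifier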

A Noise-Tolerant Hierarchical Image Classification System based on Autoencoder Models (오토인코더 기반의 잡음에 강인한 계층적 이미지 분류 시스템)

  • Lee, Jong-kwan
    • Journal of Internet Computing and Services / v.22 no.1 / pp.23-30 / 2021
  • This paper proposes a noise-tolerant image classification system using multiple autoencoders. The development of deep learning technology has dramatically improved the performance of image classifiers. However, if the images are contaminated by noise, the performance degrades rapidly. Noise is inevitably added to an image in the process of acquiring and transmitting it, so in order to use a classifier in a real environment, we have to deal with the noise. An autoencoder is an artificial neural network model that is trained so that its output is similar to its input. If the input data is similar to the training data, the error between the input and the output of the autoencoder will be small; if not, the error will be large. The proposed system uses this relationship between the input and the output of the autoencoder and classifies images in two phases. In the first phase, the classes with the highest likelihood are selected; these are subjected to the procedure again in the second phase. For the performance analysis of the proposed system, classification accuracy was tested on a Gaussian noise-contaminated MNIST dataset. The experiment confirmed that the proposed system achieves higher accuracy in the noisy environment than a CNN-based classification technique.
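
A sketch of the reconstruction-error idea behind the two-phase scheme, using PyTorch: one small autoencoder per class, classification by smallest reconstruction error, with the first phase keeping only the most likely candidate classes. The layer sizes and the way the second phase re-scores the candidates are assumptions; training of the autoencoders is omitted.

    import torch
    import torch.nn as nn

    def make_autoencoder(dim=784, hidden=64):
        """One small dense autoencoder; a separate one is trained per class."""
        return nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim), nn.Sigmoid(),
        )

    autoencoders = [make_autoencoder() for _ in range(10)]   # one per MNIST digit

    def reconstruction_errors(x, models):
        """Mean squared reconstruction error of sample x under each model."""
        with torch.no_grad():
            return torch.stack([((m(x) - x) ** 2).mean() for m in models])

    def classify(x, top_k=3):
        errors = reconstruction_errors(x, autoencoders)
        candidates = torch.topk(-errors, top_k).indices       # phase 1: likely classes
        # phase 2: re-examine only the candidates (here simply the smallest error)
        second = reconstruction_errors(x, [autoencoders[int(i)] for i in candidates])
        return int(candidates[second.argmin()])

    noisy_digit = torch.rand(784)          # a noisy, flattened 28x28 image
    predicted_class = classify(noisy_digit)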

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures such as convolutional neural networks, deep belief networks and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers. Pooling layers simplify the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, i.e., vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through the layers, which makes learning in the early layers extremely slow. The problem gets worse in RNNs, since gradients are propagated backward not only through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
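
As a concrete illustration of the backpropagation description above, a small NumPy example trains a one-hidden-layer network with gradient descent; the toy data, network size, and learning rate are arbitrary choices made for this sketch.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))                          # toy inputs
    y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]     # XOR-like targets

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
    lr = 0.5

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(2000):
        # forward pass
        h = np.tanh(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # gradient of the cross-entropy error w.r.t. the output logits is (p - y)
        d_logit = (p - y) / len(X)
        # backward pass: propagate the error through each layer
        dW2, db2 = h.T @ d_logit, d_logit.sum(0)
        d_h = (d_logit @ W2.T) * (1 - h ** 2)              # tanh derivative
        dW1, db1 = X.T @ d_h, d_h.sum(0)
        # gradient descent update of all weights
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    accuracy = float(((p > 0.5) == y).mean())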