• Title, Summary, Keyword: Convolutional Neural Networks


Performance Comparison of Guitar Chords Classification Systems Based on Artificial Neural Network (인공신경망 기반의 기타 코드 분류 시스템 성능 비교)

  • Park, Sun Bae;Yoo, Do-Sik
    • Journal of Korea Multimedia Society / v.21 no.3 / pp.391-399 / 2018
  • In this paper, we construct and compare various guitar chord classification systems using perceptron neural networks and convolutional neural networks, with no pre-processing other than the Fourier transform, to identify the optimal chord classification system. Conventional guitar chord classification schemes use computationally demanding pre-processing techniques for better feature extraction, such as stochastic analysis employing a hidden Markov model or acoustic data filtering, and are hence burdensome for real-time chord classification. For this reason, we construct various perceptron neural networks and convolutional neural networks that use only the Fourier transform for data pre-processing and compare them on a dataset obtained by playing an electric guitar. According to our comparison, convolutional neural networks provide the best performance considering both chord classification accuracy and processing time. In particular, convolutional neural networks exhibit robust performance even when only a small fraction of the low-frequency components of the data is used.
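
The pipeline described in this abstract, raw audio frames, a Fourier transform, and a CNN classifier with no other pre-processing, can be sketched roughly as follows in PyTorch. The frame length, the number of FFT bins kept, the number of chord classes, and the layer sizes are all assumptions for illustration, not the configurations compared in the paper.

    # Minimal sketch (not the authors' code): FFT-magnitude input to a small 1-D CNN chord classifier.
    import torch
    import torch.nn as nn

    num_chords = 10      # assumed number of chord classes
    frame_len = 4096     # assumed samples per analysis frame
    n_bins = 512         # keep only the lowest FFT bins, mirroring the low-frequency observation

    class ChordCNN(nn.Module):
        def __init__(self, n_classes=num_chords):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            )
            self.classifier = nn.Linear(32 * (n_bins // 16), n_classes)

        def forward(self, audio):                            # audio: (batch, frame_len)
            spec = torch.fft.rfft(audio).abs()[:, :n_bins]   # magnitude of the lowest bins only
            x = self.features(spec.unsqueeze(1))             # (batch, 1, n_bins)
            return self.classifier(x.flatten(1))

    logits = ChordCNN()(torch.randn(8, frame_len))           # dummy batch of 8 audio frames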

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among those architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have produced successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields. This means that all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers; what they do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep learning networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
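
The three basic ideas named in this abstract (local receptive fields, shared weights, and pooling) can be made concrete with a few lines of PyTorch. The layer sizes below are arbitrary and serve only to illustrate the parameter sharing; they do not come from the paper.

    # Illustrative sketch of local receptive fields, shared weights, and pooling.
    import torch
    import torch.nn as nn

    conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=5)   # 5x5 local receptive field
    pool = nn.MaxPool2d(kernel_size=2)                               # pooling halves each dimension

    x = torch.randn(1, 1, 28, 28)        # one grayscale 28x28 image
    feature_maps = conv(x)               # (1, 8, 24, 24): the same 5x5 weights slide over the image
    pooled = pool(feature_maps)          # (1, 8, 12, 12): simplified summary of each feature map

    # Shared weights: each of the 8 feature maps uses one 5x5 kernel plus a bias,
    # so the layer has 8 * (5*5*1 + 1) = 208 parameters regardless of image size.
    print(sum(p.numel() for p in conv.parameters()))   # 208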

The Method of Abandoned Object Recognition based on Neural Networks (신경망 기반의 유기된 물체 인식 방법)

  • Ryu, Dong-Gyun;Lee, Jae-Heung
    • Journal of IKEEE / v.22 no.4 / pp.1131-1139 / 2018
  • This paper proposes a method of recognizing abandoned objects using convolutional neural networks. The method first detects a candidate area for an abandoned object in the image and, if such an area is detected, applies a convolutional neural network to that area to recognize which object it contains. Experiments were conducted through an application system that detects illegal trash dumping. The experimental results showed that the areas of abandoned objects were detected efficiently. The detected areas are then fed into the convolutional neural network and classified as trash or not. To do this, we trained the convolutional neural network with our own trash dataset and an open database. As a result of training, we achieved high accuracy on a test set not included in the training set.
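
The two-stage idea in this abstract, detect a candidate region and then classify the crop as trash or not, can be sketched as below. The detection step and the box coordinates are placeholders, not the authors' detector, and the network sizes are assumptions.

    # Sketch: a detected candidate region is cropped and passed to a binary CNN (trash / not trash).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TrashClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.fc = nn.Linear(32 * 16 * 16, 2)      # two classes: trash / not trash

        def forward(self, crop):                      # crop: (batch, 3, 64, 64)
            return self.fc(self.conv(crop).flatten(1))

    frame = torch.rand(3, 480, 640)                   # one video frame (C, H, W)
    x1, y1, x2, y2 = 100, 200, 220, 320               # hypothetical detected region
    crop = frame[:, y1:y2, x1:x2].unsqueeze(0)        # crop the candidate area
    crop = F.interpolate(crop, size=(64, 64))         # resize to the network input size
    is_trash = TrashClassifier()(crop).argmax(dim=1)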

Efficient Implementation of Convolutional Neural Network Using CUDA (CUDA를 이용한 Convolutional Neural Network의 효율적인 구현)

  • Ki, Cheol-Min;Cho, Tai-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.6 / pp.1143-1148 / 2017
  • Artificial intelligence and deep learning are currently drawing wide attention, and these technologies are being applied to various fields. One effective approach among the many algorithms in artificial intelligence is the convolutional neural network, a form of multi-layer neural network with added convolution layers. When convolutional neural networks are used on small amounts of data, or when the layer structure is not complicated, speed is not a major concern. But training takes a long time when the learning data are large and the layer structure is complicated; in these cases, GPU-based parallel processing is frequently needed. In this paper, we implement convolutional neural networks using CUDA and show that their training is faster and more efficient than training with some other frameworks or programs.
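
The paper implements the network directly in CUDA; the sketch below is only a stand-in that times the same convolution on CPU versus GPU in PyTorch, to illustrate why GPU parallelism matters when the data and the layer structure grow. The layer, batch size, and iteration count are arbitrary, and the measured numbers will vary with hardware.

    # Timing one convolution on CPU vs GPU (illustrative only, not the paper's CUDA implementation).
    import time
    import torch
    import torch.nn as nn

    conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
    batch = torch.randn(32, 3, 224, 224)

    def time_forward(module, x, device):
        module, x = module.to(device), x.to(device)
        if device == "cuda":
            torch.cuda.synchronize()                  # make sure pending GPU work is finished
        start = time.perf_counter()
        for _ in range(10):
            module(x)
        if device == "cuda":
            torch.cuda.synchronize()
        return time.perf_counter() - start

    print("CPU:", time_forward(conv, batch, "cpu"))
    if torch.cuda.is_available():
        print("GPU:", time_forward(conv, batch, "cuda"))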

Bagging deep convolutional autoencoders trained with a mixture of real data and GAN-generated data

  • Hu, Cong;Wu, Xiao-Jun;Shu, Zhen-Qiu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5427-5445 / 2019
  • While deep neural networks have achieved remarkable performance in representation learning, a huge amount of labeled training data is usually required by supervised deep models such as convolutional neural networks. In this paper, we propose a new representation learning method, namely generative adversarial network (GAN) based bagging deep convolutional autoencoders (GAN-BDCAE), which can map data to diverse hierarchical representations in an unsupervised fashion. Boosting the size of the training data, training deep models, and aggregating diverse learning machines are three principal avenues toward increasing the representation learning capability of neural networks, and we focus on combining these three techniques. To this end, we adopt GANs for realistic unlabeled sample generation and bagging deep convolutional autoencoders (BDCAE) for robust feature learning. The proposed method improves the discriminative ability of the learned feature embedding for solving subsequent pattern recognition problems. We evaluate our approach on three standard benchmarks and demonstrate the superiority of the proposed method compared to traditional unsupervised learning methods.
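
A rough sketch of the idea in this abstract follows, assuming a GAN generator has already been trained; random tensors stand in for the real and generated images, the autoencoder architecture is invented for illustration, and the training loop is reduced to a single reconstruction step.

    # Sketch of GAN-augmented bagging of convolutional autoencoders (not the authors' configuration).
    import torch
    import torch.nn as nn

    class ConvAE(nn.Module):
        def __init__(self, code_dim=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                     nn.Flatten(), nn.Linear(16 * 14 * 14, code_dim))
            self.dec = nn.Sequential(nn.Linear(code_dim, 28 * 28), nn.Unflatten(1, (1, 28, 28)))

        def forward(self, x):
            code = self.enc(x)
            return self.dec(code), code

    real = torch.rand(256, 1, 28, 28)                 # real unlabeled images (28x28 assumed)
    fake = torch.rand(256, 1, 28, 28)                 # stand-in for GAN-generated samples
    pool = torch.cat([real, fake])                    # augmented unlabeled training pool

    bag = [ConvAE() for _ in range(3)]                # small bag of autoencoders
    for ae in bag:
        idx = torch.randint(len(pool), (len(pool),))  # bootstrap resample for this member
        recon, _ = ae(pool[idx])
        loss = nn.functional.mse_loss(recon, pool[idx])   # one reconstruction loss (optimizer omitted)

    features = torch.cat([ae(real)[1] for ae in bag], dim=1)   # aggregated representation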

Performance Improvement of Object Recognition System in Broadcast Media Using Hierarchical CNN (계층적 CNN을 이용한 방송 매체 내의 객체 인식 시스템 성능향상 방안)

  • Kwon, Myung-Kyu;Yang, Hyo-Sik
    • Journal of Digital Convergence / v.15 no.3 / pp.201-209 / 2017
  • This paper presents a smartphone object recognition system using a hierarchical convolutional neural network. In the overall configuration, the smartphone and a server are connected, the collected data are matched, the object is recognized by the convolutional neural network on the server, and the object information is sent back to the smartphone. The hierarchical convolutional neural network is also compared with a fractional convolutional neural network: the hierarchical network achieves 88% accuracy and the fractional network 73%, a 15 percentage point improvement. Based on this, the system shows potential for expanding the T-commerce market connected with smartphones and broadcast media.
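
The hierarchical classification pattern mentioned in this abstract, a coarse decision followed by a finer one, can be sketched as below. The category names, class counts, and model sizes are invented; the paper's actual hierarchy and networks are not reproduced.

    # Hedged sketch of a two-level hierarchical classifier.
    import torch
    import torch.nn as nn

    def small_cnn(n_out):
        return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                             nn.Flatten(), nn.Linear(16, n_out))

    coarse = small_cnn(3)                                        # e.g. 3 assumed super-categories
    fine = {0: small_cnn(5), 1: small_cnn(4), 2: small_cnn(6)}   # one fine classifier per branch

    def predict(image):                                          # image: (3, H, W)
        x = image.unsqueeze(0)
        branch = int(coarse(x).argmax(dim=1))                    # first-level decision
        return branch, int(fine[branch](x).argmax(dim=1))        # second-level decision

    print(predict(torch.rand(3, 224, 224)))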

The application of convolutional neural networks for automatic detection of underwater object in side scan sonar images (사이드 스캔 소나 영상에서 수중물체 자동 탐지를 위한 컨볼루션 신경망 기법 적용)

  • Kim, Jungmoon;Choi, Jee Woong;Kwon, Hyuckjong;Oh, Raegeun;Son, Su-Uk
    • The Journal of the Acoustical Society of Korea / v.37 no.2 / pp.118-128 / 2018
  • In this paper, we study how to detect underwater objects by training a convolutional neural network on images generated by a side scan sonar. Compared with manual analysis of side scan images, the convolutional neural network algorithm can enhance the efficiency of the analysis. The side scan sonar image data used in the experiment are the public data of the NSWC (Naval Surface Warfare Center) and consist of four kinds of synthetic underwater objects. The convolutional neural network algorithm is based on Faster R-CNN (Faster Region-based Convolutional Neural Networks), which learns from regions of interest, and the details of the network were adapted to fit our data. The results were compared using precision-recall curves, and we investigated the applicability of convolutional neural networks to underwater object detection by examining how changes in the regions of interest assigned to the sonar image data affect detection performance.
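
A generic Faster R-CNN fine-tuning recipe using torchvision is sketched below; it is not the authors' self-adapted configuration. The class count assumes the four synthetic object types mentioned in the abstract plus a background class, and the image and box are dummies.

    # Standard torchvision Faster R-CNN fine-tuning sketch.
    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    num_classes = 5                                   # 4 object types + background (assumed)
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # One training step on a dummy sonar image with one labeled box.
    model.train()
    images = [torch.rand(3, 512, 512)]
    targets = [{"boxes": torch.tensor([[100., 120., 180., 200.]]),
                "labels": torch.tensor([1])}]
    losses = model(images, targets)                   # dict of RPN and ROI-head losses
    total = sum(losses.values())
    total.backward()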

Training Artificial Neural Networks and Convolutional Neural Networks using WFSO Algorithm (WFSO 알고리즘을 이용한 인공 신경망과 합성곱 신경망의 학습)

  • Jang, Hyun-Woo;Jung, Sung Hoon
    • Journal of Digital Contents Society / v.18 no.5 / pp.969-976 / 2017
  • This paper proposes a method for training an artificial neural network and a convolutional neural network using the WFSO algorithm, which was developed as an optimization algorithm. Since the optimization algorithm searches with a number of candidate solutions, it is generally slow, but it rarely falls into a local optimum and is easy to parallelize. In addition, artificial neural networks with non-differentiable activation functions can be trained, and the structure and weights can be optimized at the same time. In this paper, we describe how to apply the WFSO algorithm to artificial neural network learning and compare its performance with the error back-propagation algorithm on multilayer artificial neural networks and convolutional neural networks.
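
The WFSO update rules are not given in the abstract and are not reproduced here. As a stand-in, the sketch below shows the general pattern of population-based, derivative-free weight search that the abstract describes: keep many candidate weight vectors, evaluate each on the loss, and move candidates toward better ones. This works even when the activation function is not differentiable, which is the property the abstract highlights.

    # Generic population-based training of a tiny network (not WFSO itself).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                       # toy inputs
    y = (X[:, 0] + X[:, 1] > 0).astype(float)           # toy binary targets

    def loss(w):                                         # one-layer net with a step activation
        pred = (X @ w[:4] + w[4] > 0).astype(float)      # non-differentiable activation
        return np.mean((pred - y) ** 2)

    pop = rng.normal(size=(30, 5))                       # 30 candidate weight vectors
    for _ in range(100):
        scores = np.array([loss(w) for w in pop])
        best = pop[scores.argmin()]
        # pull every candidate toward the current best and add exploration noise
        pop = pop + 0.5 * (best - pop) + 0.1 * rng.normal(size=pop.shape)

    print("best loss:", min(loss(w) for w in pop))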

Detection of Coffee Bean Defects using Convolutional Neural Networks (Convolutional Neural Network를 이용한 불량원두 검출 시스템)

  • Kim, Ho-Joong;Cho, Tai-Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / pp.316-319 / 2014
  • People's interest in coffee is increasing with the expansion of the coffee market. With this trend, tastes have become more refined and coffee bean quality is considered very important. Currently, bean defects are mainly detected by experienced specialists. In this paper, a system for detecting bean defects using machine learning is presented. The system concentrates on detecting two main defect types: defects in the bean's shape and insect damage. Convolutional neural networks are used for the machine learning. The system comprises two neural networks: the first detects defects in the bean's shape, and the second detects insect damage. The development of this system could be a starting point for automated coffee bean defect detection. Further research is needed to detect other bean defect types.
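
The two-network arrangement in this abstract, one binary CNN for shape defects and one for insect damage, can be sketched as follows; a bean is rejected if either network flags it. The image size, network layers, and decision thresholds are assumptions, not the paper's design.

    # Sketch of two independent binary defect classifiers combined by a logical OR.
    import torch
    import torch.nn as nn

    def defect_net():
        return nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                             nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                             nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid())

    shape_net, insect_net = defect_net(), defect_net()

    bean = torch.rand(1, 3, 64, 64)                       # one cropped bean image (size assumed)
    p_shape, p_insect = shape_net(bean).item(), insect_net(bean).item()
    is_defective = p_shape > 0.5 or p_insect > 0.5        # reject if either defect is detected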
