• Title/Summary/Keyword: Unsupervised Neural Network


Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among those architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because they have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train. This, in turn, helps us train deep, multi-layer networks, which are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling.
By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields. This means that all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Several years ago, training deep networks took weeks; thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, including vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in the early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from.
It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
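The backpropagation-plus-gradient-descent procedure the abstract describes can be sketched on a tiny two-layer network. This is a minimal illustration under our own assumptions (squared-error loss, sigmoid activations, toy data and learning rate), not any paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))                       # 8 samples, 3 inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy targets

W1 = rng.standard_normal((3, 4)) * 0.5   # input -> hidden weights
W2 = rng.standard_normal((4, 1)) * 0.5   # hidden -> output weights
lr = 0.5

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: propagate the error gradient through the layers
    d_out = (out - y) * out * (1 - out)   # gradient at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden pre-activation
    # Gradient descent: update all weights to reduce the error function
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

mse = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))
```

The two `d_*` lines are the "backward propagation of errors": each layer's gradient is computed from the gradient of the layer after it.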

Hangul Recognition Using a Hierarchical Neural Network (계층구조 신경망을 이용한 한글 인식)

  • 최동혁;류성원;강현철;박규태
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.28B no.11
    • /
    • pp.852-858
    • /
    • 1991
  • An adaptive hierarchical classifier (AHCL) for Korean character recognition using a neural net is designed. This classifier has two neural nets: the USACL (Unsupervised Adaptive Classifier) and the SACL (Supervised Adaptive Classifier). The USACL has an input layer and an output layer, which are fully connected; the nodes in the output layer are generated by an unsupervised, nearest neighbor learning rule during learning. The SACL has an input layer, a hidden layer and an output layer. The input layer and the hidden layer are fully connected, and the hidden layer and the output layer are partially connected. The nodes in the SACL are generated by a supervised, nearest neighbor learning rule during learning. The USACL has a pre-attentive effect, performing a partial search instead of a full search during SACL classification to enhance processing speed. The input to the USACL and the SACL is a directional edge feature with a directional receptive field. To test the performance of the AHCL, various multi-font printed Hangul characters are used in learning and testing, and its processing speed and classification rate are compared with those of a conventional LVQ (Learning Vector Quantizer), which has the nearest neighbor learning rule.
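The unsupervised nearest-neighbor node-generation rule described for the USACL can be sketched as follows. The threshold and update rate are our illustrative assumptions; the paper does not publish these details here:

```python
import numpy as np

def train_usacl(inputs, threshold=1.0, rate=0.5):
    """Generate output nodes by an unsupervised nearest-neighbor rule:
    move the nearest node toward the input if it is close enough,
    otherwise create a new node (threshold/rate values are illustrative)."""
    nodes = []
    for x in inputs:
        if nodes:
            d = [np.linalg.norm(x - n) for n in nodes]
            k = int(np.argmin(d))
            if d[k] <= threshold:
                nodes[k] = nodes[k] + rate * (x - nodes[k])  # adapt the winner
                continue
        nodes.append(x.astype(float))  # no node close enough: generate one
    return nodes

data = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
nodes = train_usacl(data)
```

Two well-separated input groups yield two generated nodes, which is the behavior the abstract relies on for its pre-attentive partial search.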


A Co-Evolutionary Approach for Learning and Structure Search of Neural Networks (공진화에 의한 신경회로망의 구조탐색 및 학습)

  • 이동욱;전효병;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1997.10a
    • /
    • pp.111-114
    • /
    • 1997
  • Evolutionary algorithms are usually considered efficient for optimal system design; however, the performance of the resulting system is determined by the fitness function and the system environment. In this paper, to overcome the limitations imposed by these factors, we propose a co-evolutionary method in which two populations constantly interact and coevolve. We apply coevolution to the evolution of neural networks: one population is composed of neural network structures and the other of training patterns. The neural network structures evolve toward an optimal structure and, at the same time, the training patterns coevolve toward feature patterns. This method prevents the performance limitations caused by random design of the neural network structure and inadequate selection of training patterns. Here, the neural networks are trained by evolution strategies, which can be applied to unsupervised learning. For the coding of neural networks, we propose a method to maintain nonredundancy and character preservation, which are essential factors of genetic coding. We show the validity and effectiveness of the proposed scheme by applying it to the visual servoing of RV-M2 robot manipulators.
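An evolution strategy of the kind the abstract mentions (gradient-free training by mutation and selection) can be sketched minimally as a (1+1)-ES with step-size adaptation. The sphere objective stands in for a network's fitness; all values are our illustrative choices:

```python
import numpy as np

def one_plus_one_es(f, x, sigma=1.0, iters=800, seed=0):
    """(1+1) evolution strategy: one parent, one Gaussian-mutated
    offspring per generation, with a rough 1/5-success step-size rule."""
    rng = np.random.default_rng(seed)
    fx = f(x)
    for _ in range(iters):
        child = x + sigma * rng.standard_normal(x.shape)  # Gaussian mutation
        fc = f(child)
        if fc < fx:               # offspring replaces parent only if better
            x, fx = child, fc
            sigma *= 1.1          # success: widen the search
        else:
            sigma *= 0.98         # failure: narrow it
    return x, fx

sphere = lambda v: float(np.sum(v ** 2))          # stand-in fitness function
best, best_f = one_plus_one_es(sphere, np.full(3, 5.0))
```

Because selection needs only fitness comparisons, the same loop applies when the "fitness" comes from an unsupervised criterion, which is why the abstract can use evolution strategies where backpropagation does not apply.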


A NOVEL UNSUPERVISED DECONVOLUTION NETWORK:EFFICIENT FOR A SPARSE SOURCE

  • Choi, Seung-Jin
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1998.10c
    • /
    • pp.336-338
    • /
    • 1998
  • This paper presents a novel neural network structure for the blind deconvolution task, where the input (source) to a system is not available and the source may have any type of distribution, including a sparse distribution. We employ multiple sensors so that spatial information plays an important role. The resulting learning algorithm is linear, so it works for both sub- and super-Gaussian sources. Moreover, we can successfully deconvolve the mixture of a sparse source, while most existing algorithms [5] have difficulties with this task. Computer simulations confirm the validity and high performance of the proposed algorithm.


Implementation of Unsupervised Nonlinear Classifier with Binary Harmony Search Algorithm (Binary Harmony Search 알고리즘을 이용한 Unsupervised Nonlinear Classifier 구현)

  • Lee, Tae-Ju;Park, Seung-Min;Ko, Kwang-Eun;Sung, Won-Ki;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.4
    • /
    • pp.354-359
    • /
    • 2013
  • In this paper, we suggest a method for implementing unsupervised nonlinear classification using the Binary Harmony Search (BHS) algorithm, which is known as an optimization algorithm. Various algorithms have been suggested for classifying feature vectors in machine learning for pattern recognition or EEG signal analysis. Supervised learning-based support vector machines or unsupervised fuzzy c-means (FCM) have been used for classification in this field. However, conventional methods are hard to apply to nonlinear dataset classification or require prior information for supervised learning. We solve these problems with the proposed classification method, a heuristic approach that takes the pair of vectors with the minimal Euclidean distance, assumes they belong to the same class, and assigns the remaining vectors to another class. For comparison, we use FCM and a self-organizing map (SOM) based on an artificial neural network (ANN). The KEEL machine learning dataset is used for simulation. We conclude that the proposed method is superior to the other algorithms.
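One reading of the minimal-Euclidean-distance heuristic in the abstract can be sketched as below. How the paper grows the first class beyond the closest pair is not spelled out, so the nearest-centroid assignment here is our illustrative assumption:

```python
import numpy as np

def two_class_heuristic(X):
    """Seed one class with the closest pair of vectors (minimal Euclidean
    distance); assign every vector to the nearer of the two class centroids.
    The assignment step is an assumption, not the paper's exact rule."""
    n = len(X)
    best, best_d = (0, 1), np.inf
    for i in range(n):                      # find the closest pair
        for j in range(i + 1, n):
            d = np.linalg.norm(X[i] - X[j])
            if d < best_d:
                best_d, best = d, (i, j)
    seed = X[list(best)].mean(axis=0)       # centroid of the closest pair
    rest = [k for k in range(n) if k not in best]
    other = X[rest].mean(axis=0)            # centroid of everything else
    return np.array([0 if np.linalg.norm(x - seed) <= np.linalg.norm(x - other)
                     else 1 for x in X])

X = np.array([[0.0, 0.0], [0.2, 0.1], [4.0, 4.0], [4.2, 3.9], [0.1, 0.2]])
labels = two_class_heuristic(X)
```

In the BHS setting the abstract describes, an optimizer would search over such assignments rather than derive them directly; this sketch only shows the distance heuristic itself.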

Multiple Texture Objects Extraction with Self-organizing Optimal Gabor-filter (자기조직형 최적 가버필터에 의한 다중 텍스쳐 오브젝트 추출)

  • Lee, Woo-Beom;Kim, Wook-Hyun
    • The KIPS Transactions:PartB
    • /
    • v.10B no.3
    • /
    • pp.311-320
    • /
    • 2003
  • An optimal filter yielding optimal texture feature separation is the most effective technique for extracting texture objects from multi-texture images. However, most optimal filter design approaches are restricted to supervised problems; no fully unsupervised method is based on the recognition of texture objects in an image. We propose a novel approach that uses unsupervised learning schemes for efficient texture image analysis, with the band-pass feature of the Gabor filter used for the optimal filter design. In our approach, the self-organizing neural network for multiple texture image identification is based on block-based clustering. The optimal frequency of the Gabor filter is tuned to the optimal frequency of the distinct texture in the frequency domain by analyzing the spatial frequency. To evaluate the designed filters, we built various texture images; texture object extraction is then performed with the designed Gabor filters. Our experimental results show that the system performs very successfully.
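The band-pass Gabor filter the abstract tunes can be sketched as a Gaussian envelope modulating a sinusoidal carrier. The size, frequency and orientation values below are illustrative, not the tuned values from the paper:

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """2-D Gabor kernel: isotropic Gaussian envelope times a cosine
    carrier at spatial frequency `freq` and orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate to orientation
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * freq * xr)       # band-pass carrier
    return envelope * carrier

k = gabor_kernel(size=15, freq=0.25, theta=0.0, sigma=3.0)
```

Convolving an image with a bank of such kernels at different frequencies and orientations yields the texture features; the paper's contribution is selecting the optimal frequency per texture without supervision.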

A StyleGAN Image Detection Model Based on Convolutional Neural Network (합성곱신경망 기반의 StyleGAN 이미지 탐지모델)

  • Kim, Jiyeon;Hong, Seung-Ah;Kim, Hamin
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.12
    • /
    • pp.1447-1456
    • /
    • 2019
  • As artificial intelligence technology is actively used in image processing, it has become possible to generate high-quality fake images based on deep learning. Fake images generated using GANs (Generative Adversarial Networks), one class of unsupervised learning algorithms, have reached levels that are hard to discriminate with the naked eye. Detecting these fake images is necessary, as they can be abused for crimes such as illegal content production, identity fraud and defamation. In this paper, we develop a deep-learning model based on a CNN (Convolutional Neural Network) for the detection of StyleGAN fake images. StyleGAN is a GAN algorithm with excellent performance in generating face images. We experiment with 48 experimental scenarios developed by combining parameters of the proposed model. We train and test each scenario with 300,000 real and fake face images in order to identify model parameters that improve performance in the detection of fake faces.

Blind Image Separation with Neural Learning Based on Information Theory and Higher-order Statistics (신경회로망 ICA를 이용한 혼합영상신호의 분리)

  • Cho, Hyun-Cheol;Lee, Kwon-Soon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.57 no.8
    • /
    • pp.1454-1463
    • /
    • 2008
  • Blind source separation by independent component analysis (ICA) has been applied in signal processing, telecommunications, and image processing to recover unknown original source signals from mutually independent observation signals. Neural networks are trained to estimate the original signals by an unsupervised learning algorithm. Because the outputs of the neural networks that yield the original source signals are mutually independent, their mutual information is zero. This is equivalent to minimizing the Kullback-Leibler divergence between the probability density function of the network output and the corresponding factorial distribution. In this paper, we present a learning algorithm using information theory and higher-order statistics to solve the blind source separation problem. For computer simulation, two deterministic signals and Gaussian noise are used as original source signals. We also test the proposed algorithm by applying it to several discrete images.
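The family of unsupervised ICA rules the abstract builds on can be sketched with the natural-gradient Infomax update W ← W + lr·(I − g(y)yᵀ)W, y = Wx, which drives the outputs toward mutual independence. The mixing matrix, sources, score function g = tanh and learning rate below are our illustrative choices, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Two super-Gaussian (Laplacian) sources, as favoured by g = tanh
s = np.vstack([rng.laplace(size=n), rng.laplace(size=n)])
A = np.array([[1.0, 0.9], [0.8, 1.0]])   # unknown mixing matrix (illustrative)
x = A @ s                                # observed mixture signals

W = np.eye(2)                            # unmixing matrix to be learned
lr = 0.01
for _ in range(500):
    y = W @ x
    g = np.tanh(y)                       # score function for super-Gaussian sources
    # Natural-gradient step: at a solution, E[g(y) y^T] = I and the
    # outputs are independent (mutual information zero)
    W += lr * (np.eye(2) - (g @ y.T) / n) @ W

y = W @ x                                # recovered sources, up to scale/permutation
```

Each recovered output ends up strongly correlated with exactly one source; scale and ordering remain ambiguous, which is inherent to blind separation.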

HIERARCHICAL CLUSTER ANALYSIS by arboART NEURAL NETWORKS and its APPLICATION to KANSEI EVALUATION DATA ANALYSIS

  • Ishihara, Shigekazu;Ishihara, Keiko;Nagamachi, Mitsuo
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2002.05a
    • /
    • pp.195-200
    • /
    • 2002
  • The ART (Adaptive Resonance Theory [1]) neural network and its variations perform non-hierarchical clustering by unsupervised learning. We propose a scheme, "arboART", for hierarchical clustering using several ART1.5-SSS networks. It classifies multidimensional vectors as a cluster tree and finds features of clusters. The basic idea of arboART is to use the prototype formed in one ART network as input to another ART network that has looser distance criteria (Ishihara, et al., [2,3]). By sending the prototype vectors made by one ART network on to the next, many small categories are combined into larger and more generalized categories. We can draw a dendrogram using the classification records of samples and categories. We have confirmed its ability using standard test data commonly used in the pattern recognition community. The clustering result is better than that of traditional computing methods: it separates outliers better, yields smaller cluster errors (diameters), and causes no chaining. This methodology is applied to the analysis of Kansei evaluation experiment data.
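The arboART idea of feeding one level's prototypes into a looser clustering pass can be sketched as below. A simple thresholded prototype clusterer stands in for the ART1.5-SSS networks, and the thresholds are our illustrative values:

```python
import numpy as np

def cluster(vectors, threshold):
    """Prototype clustering: assign each vector to the nearest prototype
    within `threshold` (updating it as a running mean), else create a
    new prototype. A stand-in for one ART1.5-SSS level."""
    protos, counts = [], []
    for v in vectors:
        if protos:
            d = [np.linalg.norm(v - p) for p in protos]
            k = int(np.argmin(d))
            if d[k] <= threshold:
                counts[k] += 1
                protos[k] = protos[k] + (v - protos[k]) / counts[k]
                continue
        protos.append(np.asarray(v, float))
        counts.append(1)
    return protos

data = np.array([[0.0, 0.0], [0.3, 0.0], [1.5, 0.0], [1.8, 0.0],
                 [6.0, 0.0], [6.3, 0.0]])
level1 = cluster(data, threshold=0.5)    # tight criterion: small categories
level2 = cluster(level1, threshold=2.0)  # looser criterion: merged categories
```

Recording which level-1 prototype feeds which level-2 category gives exactly the parent-child links needed to draw the dendrogram the abstract mentions.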


The Modified ART1 Network using Multiresolution Mergence : Mixed Character Recognition (다중 해상도 병합을 이용한 수정된 적응 공명 이론 신경망: 혼합 문자 인식 적용)

  • Choi, Gyung-Hyun;Kim, Min-Je
    • The KIPS Transactions:PartB
    • /
    • v.14B no.3 s.113
    • /
    • pp.215-222
    • /
    • 2007
  • As information technology grows, character recognition applications play an important role in the ubiquitous environment. In this paper, we propose a Modified ART1 network using multiresolution mergence for the character recognition problem. The approach is based on an unsupervised neural network and multiresolution. To reduce noise and increase the character classification rate, we propose a multiresolution mergence strategy using both high- and low-resolution information. Also, to maximize the effect of multiresolution mergence, we use a modified ART1 method with a different similarity measure. Our experimental results show that the character classification rate increases considerably, and that the proposed algorithm, in conjunction with the new similarity measure, outperforms the conventional ART1 algorithm in this application.
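The conventional ART1 matching step that the paper modifies can be sketched as below: a binary input resonates with a category only if the match ratio |I ∧ w| / |I| passes the vigilance parameter, else a new category is created. The vigilance value and patterns are illustrative, and the paper's modified similarity measure is not reproduced here:

```python
import numpy as np

def art1(inputs, vigilance=0.7):
    """Conventional ART1 with fast learning on binary patterns."""
    weights = []                               # binary category prototypes
    for I in inputs:
        placed = False
        # Search categories in order of overlap (largest choice first)
        order = sorted(range(len(weights)),
                       key=lambda j: -int(np.sum(weights[j] & I)))
        for j in order:
            match = np.sum(weights[j] & I) / np.sum(I)
            if match >= vigilance:             # vigilance test passed: resonance
                weights[j] = weights[j] & I    # fast learning: intersect
                placed = True
                break
        if not placed:
            weights.append(I.copy())           # no resonance: new category
    return weights

patterns = np.array([[1, 1, 1, 0, 0],
                     [1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1]], dtype=int)
cats = art1(patterns)
```

The first two patterns resonate and merge into one category while the third opens a new one; the paper's change is to replace the match-ratio similarity with a measure better suited to merged multiresolution character images.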