• Title/Summary/Keyword: perceptron learning

Physiological Fuzzy Single Layer Learning Algorithm for Image Recognition (영상 인식을 위한 생리학적 퍼지 단층 학습 알고리즘)

  • 김영주
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.5
    • /
    • pp.406-412
    • /
    • 2001
  • In this paper, a new fuzzy single-layer learning algorithm is proposed that improves on the learning time and convergence properties of conventional fuzzy single-layer perceptron algorithms. First, we investigate the structure of physiological neurons in the nervous system and propose new neuron structures based on fuzzy logic. Using the proposed fuzzy neuron structures, we then derive the model and learning algorithm of the Physiological Fuzzy Single-Layer Perceptron (P-FSLP). To evaluate the P-FSLP algorithm, we applied it, together with conventional fuzzy single-layer perceptron algorithms, to three experiments: the exclusive-OR (XOR) problem, the 3-bit parity problem, and the recognition of car license plates, an image recognition application. The experimental results showed that the proposed P-FSLP algorithm reduces the possibility of falling into local minima more than conventional fuzzy single-layer perceptrons do, and improves learning time and convergence. Furthermore, we found that the P-FSLP algorithm is highly capable in image recognition applications.
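
The P-FSLP algorithm itself is the paper's contribution and is not reproduced here. As a baseline for the benchmarks it cites, the sketch below trains a conventional (crisp) single-layer perceptron on XOR; its accuracy stays below 1.0 because XOR is not linearly separable, which is exactly what motivates the fuzzy extensions.

```python
import numpy as np

# Classic perceptron rule: w <- w + lr * (target - output) * x.
def train_perceptron(X, y, lr=0.1, epochs=100):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            out = 1 if xi @ w + b > 0 else 0
            w += lr * (ti - out) * xi
            b += lr * (ti - out)
    return w, b

X_xor = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])
w, b = train_perceptron(X_xor, y_xor)
pred = ((X_xor @ w + b) > 0).astype(int)
print("XOR accuracy:", (pred == y_xor).mean())  # <= 0.75: not linearly separable
```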

Optical Implementation of Single-Layer Perceptron Using Holographic Lenslet Arrays (홀로그램 렌즈 배열을 이용한 단층 인식자의 광학적 구현)

  • 신상길
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 1990.02a
    • /
    • pp.126-130
    • /
    • 1990
  • A single-layer perceptron with 4x4 input neurons and one output neuron is optically implemented. Holographic lenslet arrays are used for the programmable optical interconnection topology. The hologram is bleached in order to increase the diffraction efficiency. It is shown that the performance of the perceptron depends on the learning rate, the inertia rate, and the correlation of the input patterns.
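
The optical details (lenslet arrays, bleached holograms) are beyond a code sketch, but the learning dynamics the abstract refers to can be simulated electronically. The sketch below trains a 16-input (4x4), one-output sigmoid perceptron with the two hyperparameters the paper varies: the learning rate and the inertia (momentum) rate. The binary patterns are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(8, 16)).astype(float)  # eight random 4x4 binary patterns
y = rng.integers(0, 2, size=8).astype(float)

lr, inertia = 0.2, 0.5                 # the two rates the paper studies
w, b = np.zeros(16), 0.0
dw_prev, db_prev = np.zeros(16), 0.0
for _ in range(500):
    for xi, ti in zip(X, y):
        out = 1.0 / (1.0 + np.exp(-(xi @ w + b)))   # sigmoid output neuron
        delta = (ti - out) * out * (1.0 - out)      # delta-rule error signal
        dw = lr * delta * xi + inertia * dw_prev    # momentum ("inertia") term
        db = lr * delta + inertia * db_prev
        w, b = w + dw, b + db
        dw_prev, db_prev = dw, db

out_all = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("mean absolute training error:", np.abs(y - out_all).mean())
```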

An Enhanced Fuzzy Single Layer Perceptron With Linear Activation Function (선형 활성화 함수를 이용한 개선된 퍼지 단층 퍼셉트론)

  • Park, Choong-Shik;Cho, Jae-Hyun;Kim, Kwang-Baek
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.7
    • /
    • pp.1387-1393
    • /
    • 2007
  • Even though linearly separable patterns can be classified by the conventional single-layer perceptron, non-linear problems such as XOR cannot. A fuzzy single-layer perceptron can solve the conventional XOR problem by applying fuzzy membership functions. However, the fuzzy single-layer perceptron has a couple of disadvantages: its decision boundary sometimes oscillates, and its convergence can be extremely slow depending on the ranges of the initial values and learning rates. In this paper, we therefore propose an enhanced fuzzy single-layer perceptron algorithm that prevents the decision boundary from oscillating by introducing a bias term, and that reduces learning time by applying a modified delta rule, which incorporates the learning rate and the momentum concept, together with a new linear activation function. The simulation results on the XOR and pattern classification problems show that the proposed method achieves shorter learning time and better convergence than the conventional fuzzy single-layer perceptron.
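
The abstract names three ingredients: a bias term, a modified delta rule with learning rate and momentum, and a linear activation function. The sketch below shows only the structure of that update; the fuzzy membership machinery, which is what actually lets the full algorithm handle XOR, is omitted, so this plain linear unit settles near 0.5 on every XOR pattern. Treating inputs as membership degrees in [0, 1] is an assumption for illustration.

```python
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs as membership degrees
y = np.array([0., 1., 1., 0.])

lr, momentum = 0.3, 0.7
w = np.random.default_rng(1).normal(scale=0.1, size=2)
b = 0.0                                  # the bias term that steadies the boundary
vel_w, vel_b = np.zeros(2), 0.0
for _ in range(500):
    out = X @ w + b                      # linear activation: f(net) = net
    grad_w = -(y - out) @ X / len(X)     # delta rule for squared error
    grad_b = -(y - out).mean()
    vel_w = momentum * vel_w - lr * grad_w   # modified delta rule: momentum smooths steps
    vel_b = momentum * vel_b - lr * grad_b
    w, b = w + vel_w, b + vel_b

print(X @ w + b)   # ~[0.5 0.5 0.5 0.5]: XOR needs the omitted fuzzy machinery
```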

NETLA Based Optimal Synthesis Method of Binary Neural Network for Pattern Recognition

  • Lee, Joon-Tark
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.2
    • /
    • pp.216-221
    • /
    • 2004
  • This paper describes an optimal synthesis method of binary neural networks for pattern recognition. Our objective is to minimize the number of connections and the number of hidden-layer neurons by using a Newly Expanded and Truncated Learning Algorithm (NETLA) for multilayered neural networks. The synthesis method in NETLA uses the Expanded Sum of Products (ESP) of Boolean expressions and is based on the multilayer perceptron. It is able to optimize a given binary neural network in the binary space without any iterative learning, unlike the conventional Error Back-Propagation (EBP) algorithm. Furthermore, NETLA can reduce the number of required hidden-layer neurons and connections, so this learning algorithm can speed up training for pattern recognition problems. The superiority of NETLA over other learning algorithms is demonstrated by a practical application to the approximation problem of a circular region.
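
NETLA's expansion and truncation steps are only named in the abstract, so they are not reproduced here. The sketch below shows the classical construction the method builds on: a Boolean function given as a sum of products is realized, without any iterative learning, by one threshold unit per product term (AND) and an output unit that ORs them.

```python
import numpy as np

def and_unit(bits, term):
    # term maps input index -> required value; the unit fires iff every literal matches.
    w = np.array([1 if term.get(i) == 1 else -1 if term.get(i) == 0 else 0
                  for i in range(len(bits))])
    theta = sum(1 for v in term.values() if v == 1)  # threshold = number of positive literals
    return int(bits @ w >= theta)

def sop_net(bits, terms):
    hidden = [and_unit(bits, t) for t in terms]      # one hidden unit per product term
    return int(sum(hidden) >= 1)                     # output unit: OR

# XOR as a sum of products: x0'x1 + x0 x1'
terms = [{0: 0, 1: 1}, {0: 1, 1: 0}]
for bits in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(bits, "->", sop_net(np.array(bits), terms))
```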

A Method on the Improvement of Speaker Enrolling Speed for a Multilayer Perceptron Based Speaker Verification System through Reducing Learning Data (다층신경망 기반 화자증명 시스템에서 학습 데이터 감축을 통한 화자등록속도 향상방법)

  • 이백영;황병원;이태승
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.6
    • /
    • pp.585-591
    • /
    • 2002
  • While the multilayer perceptron (MLP) provides several advantages over existing pattern recognition methods, it requires a relatively long time for learning. This prolongs speaker enrollment time in a speaker verification system that uses an MLP as its classifier. This paper proposes a method that shortens the enrollment time by adopting the cohort speakers method used in existing parametric systems, thereby reducing the number of background speakers required to train the MLP, and confirms the effect of the method with an experiment that applies it to a continuant- and MLP-based speaker verification system.
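
A minimal sketch of the cohort idea the paper adopts, with random vectors standing in for real speech features: instead of training the MLP against every background speaker, only the K background speakers closest to the enrolling speaker are kept, so discriminative learning focuses on the most confusable impostors and the training set shrinks accordingly.

```python
import numpy as np

rng = np.random.default_rng(0)
background = rng.normal(size=(200, 24))   # 200 background speakers' mean feature vectors
enrollee = rng.normal(size=24)            # the enrolling speaker's mean feature vector

K = 20
dists = np.linalg.norm(background - enrollee, axis=1)
cohort = background[np.argsort(dists)[:K]]   # the K nearest = the cohort

# The MLP is then trained to separate the enrollee from these K speakers only,
# cutting the learning data (and hence enrollment time) by a factor of 200 / K.
print("cohort shape:", cohort.shape)
```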

Searching a global optimum by stochastic perturbation in error back-propagation algorithm (오류 역전파 학습에서 확률적 가중치 교란에 의한 전역적 최적해의 탐색)

  • 김삼근;민창우;김명원
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.3
    • /
    • pp.79-89
    • /
    • 1998
  • The Error Back-Propagation (EBP) algorithm is widely applied to train the multilayer perceptron, a neural network model frequently used to solve complex problems such as pattern recognition, adaptive control, and global optimization. However, EBP is basically a gradient descent method, so it may get stuck in a local minimum and fail to find the globally optimal solution. Moreover, the multilayer perceptron lacks a systematic way to determine the network structure appropriate for a given problem; the number of hidden nodes is usually determined by trial and error. In this paper, we propose a new algorithm to train a multilayer perceptron efficiently. Our algorithm uses stochastic perturbation in the weight space to escape from local minima in multilayer perceptron learning: it probabilistically re-initializes the weights associated with hidden nodes whenever EBP learning gets stuck in a local minimum. The addition of new hidden nodes can also be viewed as a special case of stochastic perturbation. Using stochastic perturbation, we can address the local minima problem and the network structure design in a unified way. The results of our experiments with several benchmark test problems, including the parity problem, the two-spirals problem, and the credit-screening data, show that our algorithm is very efficient.
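
A sketch of the escape mechanism as the abstract describes it: when EBP training stalls, each hidden node's incoming weights are probabilistically re-initialized rather than restarting the whole network. The surrounding back-propagation loop and the patience logic are assumptions about how the check would typically be wired in, not the paper's exact procedure.

```python
import numpy as np

def stochastic_perturbation(W_in, p=0.3, scale=0.5, rng=None):
    """Re-initialize each hidden node's incoming weights with probability p."""
    rng = rng or np.random.default_rng()
    for j in range(W_in.shape[1]):                    # one column per hidden node
        if rng.random() < p:
            W_in[:, j] = rng.normal(scale=scale, size=W_in.shape[0])
    return W_in

# Typical use inside an EBP training loop (sketch):
#     if epochs_since_improvement > patience:        # learning is stuck
#         W_in = stochastic_perturbation(W_in)       # perturb hidden nodes
#         epochs_since_improvement = 0
```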

New Approach to Optimize the Size of Convolution Mask in Convolutional Neural Networks

  • Kwak, Young-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.1
    • /
    • pp.1-8
    • /
    • 2016
  • A convolutional neural network (CNN) consists of a few pairs of convolution and subsampling layers, and thus has more hidden layers than a multilayer perceptron. With the increased number of layers, the size of the convolution mask ultimately determines the total number of weights in the CNN, because the mask is shared across the input images. The mask size is also an important learning factor that makes or breaks CNN training. This paper therefore proposes a method to choose the convolution mask size and the number of layers for training a CNN successfully. Through face recognition experiments with a large number of training examples, we found that the best convolution mask sizes are 5x5 and 7x7, regardless of the number of layers. In addition, a CNN with two pairs of convolution and subsampling layers was found to perform best, just as a multilayer perceptron with two hidden layers does.
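
A back-of-the-envelope check of the abstract's point that the shared mask size, not the image size, dominates the weight count. The channel counts below are illustrative assumptions, not the paper's architecture; subsampling layers contribute no weights in this tally.

```python
def conv_weights(mask, c_in, c_out):
    # kernel weights shared across the whole image, plus one bias per output map
    return mask * mask * c_in * c_out + c_out

for mask in (3, 5, 7, 9):
    pair1 = conv_weights(mask, 1, 6)     # first convolution (grayscale input, 6 maps)
    pair2 = conv_weights(mask, 6, 16)    # second convolution (6 maps in, 16 out)
    print(f"{mask}x{mask} masks: {pair1 + pair2:5d} weights in the two conv layers")
```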

An Improvement of the MLP Based Speaker Verification System through Improving the Learning Speed and Reducing the Learning Data (학습속도 개선과 학습데이터 축소를 통한 MLP 기반 화자증명 시스템의 등록속도 향상방법)

  • Lee, Baek-Yeong;Lee, Tae-Seung;Hwang, Byeong-Won
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.88-98
    • /
    • 2002
  • The multilayer perceptron (MLP) has several advantages over other pattern recognition methods and is expected to be used for learning and recognizing speakers in speaker verification systems. However, because of the low learning speed of the error back-propagation (EBP) algorithm used for MLP learning, training the MLP requires considerable time. Because a speaker verification system must provide verification services immediately after a speaker's enrollment, this problem must be solved. This paper therefore attempts to shorten the time required to enroll speakers in an MLP-based speaker verification system by improving the EBP learning speed and by reducing the number of background speakers, adopting the cohort speakers method from existing speaker verification systems.

Wine Quality Classification with Multilayer Perceptron

  • Agrawal, Garima;Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.10 no.2
    • /
    • pp.25-30
    • /
    • 2018
  • This paper is about wine quality classification with a multilayer perceptron implemented as a deep neural network. The complexity of wine is an issue when predicting its quality, and deep neural networks are suited to complex datasets. Wine producers always aim for the highest possible quality and seek the best results with minimum cost and effort; deep learning is a possible solution for them, since it can help them understand patterns and make predictions. Although past research has shown how artificial neural networks and data mining can be applied with various techniques, in this paper we do not focus on comparing techniques; rather, we evaluate how a deep learning model predicts quality using two different activation functions. This will help wine producers decide how to apply deep learning in their business. Prediction performance can change tremendously with the model and techniques used. Many factors impact the quality of wine, so it is a good idea to use the best features for prediction; however, it can also be worthwhile to test the dataset without separating out features, that is, to use all features so that the system can consider every one. In our experiment, owing to the limited dataset and the limited features provided, it was not possible for the system to choose the most effective features.
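
A sketch of the comparison the abstract describes: the same MLP trained twice with different activation functions on the UCI red wine quality data, keeping all features as the authors suggest. The abstract does not name the two activation functions, so 'relu' and 'logistic' below are assumptions, as are the layer sizes.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "wine-quality/winequality-red.csv")
df = pd.read_csv(url, sep=";")
X, y = df.drop(columns="quality"), df["quality"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for activation in ("relu", "logistic"):          # assumed pair of activations
    model = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(64, 64),
                                        activation=activation,
                                        max_iter=1000, random_state=0))
    model.fit(X_tr, y_tr)
    print(activation, "test accuracy:", round(model.score(X_te, y_te), 3))
```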

Multilayer Perceptron Model to Estimate Solar Radiation with a Solar Module

  • Kim, Joonyong;Rhee, Joongyong;Yang, Seunghwan;Lee, Chungu;Cho, Seongin;Kim, Youngjoo
    • Journal of Biosystems Engineering
    • /
    • v.43 no.4
    • /
    • pp.352-361
    • /
    • 2018
  • Purpose: The objective of this study was to develop a multilayer perceptron (MLP) model to estimate solar radiation using a solar module. Methods: Data for the short-circuit current of a solar module and other environmental parameters were collected for a year. For MLP learning, 14,400 combinations of input variables, learning rates, activation functions, numbers of layers, and numbers of neurons were trained. The best MLP model employed the batch back-propagation algorithm with all input variables and two hidden layers. Results: The root-mean-square error (RMSE) of each learning cycle and its average over three repetitions were calculated. The average RMSE of the best artificial neural network model was 48.13 W·m⁻². This result was better than that obtained for the regression model, for which the RMSE was 66.67 W·m⁻². Conclusions: It is possible to utilize a solar module as a power source and a sensor to measure solar radiation for an agricultural sensor node.
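
A sketch of the selected model shape: two hidden layers trained full-batch to regress solar radiation from the module's short-circuit current plus environmental inputs. The synthetic data, input ranges, and layer sizes below are stand-ins for the paper's year of measurements, and lbfgs (a full-batch optimizer) stands in for its batch back-propagation settings.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0.0, 8.0, n),      # short-circuit current (A), assumed range
    rng.uniform(-10.0, 35.0, n),   # air temperature (deg C), assumed input
    rng.uniform(0.0, 10.0, n),     # wind speed (m/s), assumed input
])
radiation = 120.0 * X[:, 0] + rng.normal(scale=30.0, size=n)  # toy ground truth

X_tr, X_te = X[:1500], X[1500:]
y_tr, y_te = radiation[:1500], radiation[1500:]

mlp = MLPRegressor(hidden_layer_sizes=(32, 32),   # two hidden layers, as selected
                   solver="lbfgs", max_iter=5000, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, mlp.predict(X_te)) ** 0.5
print(f"test RMSE: {rmse:.2f} W/m^2")
```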