• Title/Summary/Keyword: Hidden Neurons

Optimal Heating Load Identification using a DRNN (DRNN을 이용한 최적 난방부하 식별)

  • Chung, Kee-Chull; Yang, Hai-Won
    • The Transactions of the Korean Institute of Electrical Engineers A / v.48 no.10 / pp.1231-1238 / 1999
  • This paper presents an approach to optimal heating load identification using a Diagonal Recurrent Neural Network (DRNN). The DRNN captures the dynamic nature of a system, and because it is not fully connected, training is much faster than for a fully connected recurrent neural network. The DRNN architecture is a modified fully connected recurrent neural network with one hidden layer, where the hidden layer is comprised of self-recurrent neurons, each feeding its output back only into itself. Dynamic backpropagation (DBP) with the delta-bar-delta learning method is used to train the optimal heating load identifier; delta-bar-delta is an empirical method that gradually adapts the learning rate during training to improve accuracy in a short time. Simulation results based on experimental data show that the proposed model is superior to the other methods in most cases, with regard to both learning speed and identification accuracy.
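
A minimal sketch of the diagonal recurrence described above, in NumPy: the hidden-to-hidden weight matrix reduces to one self-recurrent weight per neuron. The layer sizes, tanh activation, and all variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def drnn_forward(x_seq, W_in, w_self, W_out, b_h, b_o):
    """Forward pass of a diagonal recurrent network: each hidden
    neuron feeds its output back only into itself, so the recurrent
    weights form a vector (a diagonal matrix), not a full matrix."""
    h = np.zeros(w_self.shape[0])
    outputs = []
    for x in x_seq:
        h = np.tanh(W_in @ x + w_self * h + b_h)  # diagonal recurrence
        outputs.append(W_out @ h + b_o)
    return np.array(outputs)

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 8, 1                   # illustrative sizes
W_in = rng.normal(0, 0.5, (n_hidden, n_in))
w_self = rng.uniform(-0.9, 0.9, n_hidden)         # one weight per neuron
W_out = rng.normal(0, 0.5, (n_out, n_hidden))
y = drnn_forward(rng.normal(size=(20, n_in)), W_in, w_self, W_out,
                 np.zeros(n_hidden), np.zeros(n_out))
```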

RBF Neural Network Structure for Prediction of Non-linear, Non-stationary Time Series (비선형, 비정상 시계열 예측을 위한 RBF(Radial Basis Function) 신경회로망 구조)

  • Kim, Sang-Hwan; Lee, Chong-Ho
    • Proceedings of the KIEE Conference / 1998.07g / pp.2299-2301 / 1998
  • In this paper, a modified RBF (Radial Basis Function) neural network structure is suggested for predicting time series with non-linear, non-stationary characteristics. A conventional RBF neural network predicts a time series from its past outputs by sensing the trajectory of the series and reacting when the input lies close to a hidden neuron's RBF center. This response, however, is highly sensitive to the level and trend of the time series. To overcome such dependencies, the hidden neurons are modified to react to increments of the input variable, and their responses are multiplied by increments (or decrements) of the output for prediction. When the suggested structure is applied to predicting the Lorenz and Rössler equations, improved performance is obtained.
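
A rough sketch of the increment idea under stated assumptions: a standard Gaussian RBF layer is applied to first differences of the series, and the predicted increment is added back to the current level. Centers, widths, the least-squares readout, and the toy series are all illustrative; the paper's exact modification may differ.

```python
import numpy as np

def rbf_features(X, centers, width):
    # Gaussian responses of hidden neurons to input increments
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# toy trending series; work on increments dy[t] = y[t+1] - y[t]
t = np.linspace(0, 20, 500)
y = np.sin(t) + 0.3 * t
dy = np.diff(y)
X = np.stack([dy[:-1]], axis=1)        # input: previous increment
target = dy[1:]                        # target: next increment

centers = np.linspace(X.min(), X.max(), 10)[:, None]
Phi = rbf_features(X, centers, width=0.2)
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)

# forecast the next value by adding the predicted increment to the level
pred_inc = rbf_features(X[-1:], centers, width=0.2) @ w
y_next = y[-1] + pred_inc[0]
```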

Application of artificial neural networks (ANNs) and linear regressions (LR) to predict the deflection of concrete deep beams

  • Mohammadhassani, Mohammad; Nezamabadi-pour, Hossein; Jumaat, Mohd Zamin; Jameel, Mohammed; Arumugam, Arul M.S.
    • Computers and Concrete / v.11 no.3 / pp.237-252 / 2013
  • This paper presents the application of an artificial neural network (ANN) to predict deep beam deflection using experimental data from eight high-strength self-compacting concrete (HSSCC) deep beams. The optimized network architecture had ten input parameters, two hidden layers, and one output. A feed-forward back-propagation neural network with ten and four neurons in the first and second hidden layers, trained with the TRAINLM function, predicted load-deflection diagrams far more accurately and precisely than classical linear regression (LR). The ANN's MSE values are 40 times smaller than the LR's, and the test data R value from the ANN is 0.9931, indicating a high confidence level.
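
The reported topology (ten inputs, hidden layers of ten and four neurons, one output) can be sketched as below. TRAINLM is MATLAB's Levenberg-Marquardt trainer; scikit-learn offers no LM solver, so L-BFGS stands in for it here, and the synthetic data is purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                # 10 input parameters (synthetic)
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)  # stand-in target

# 10-10-4-1 topology from the paper; L-BFGS substitutes for TRAINLM
net = MLPRegressor(hidden_layer_sizes=(10, 4), activation='tanh',
                   solver='lbfgs', max_iter=2000, random_state=0)
net.fit(X, y)
print("R on training data:", np.corrcoef(net.predict(X), y)[0, 1])
```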

The Automatic Topology Construction of the Neural Network using the Fuzzy Rule (퍼지규칙을 이용한 신경회로망의 자동 구성)

  • 이현관; 이정훈; 엄기환
    • Journal of the Korea Institute of Information and Communication Engineering / v.5 no.4 / pp.766-776 / 2001
  • In constructing a multilayer neural network, the network topology is often chosen arbitrarily for different applications, and the optimal topology is determined through a long process of trial and error. In this paper, we propose an automatic topology construction method that uses fuzzy rules to optimize the number of hidden-layer neurons and prunes the weights connecting the hidden layer to the output layer during training. A pattern recognition simulation and an inverted pendulum mapping experiment show the effectiveness of the proposed method.
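
The abstract does not spell out the rule base, so the following only suggests the flavor of the approach: triangular fuzzy memberships over the training error and its rate of change vote on adding a hidden neuron, and near-zero hidden-to-output weights are pruned. Every rule and threshold below is invented for illustration.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c], peaking at b."""
    return max(0.0, min((x - a) / (b - a + 1e-12),
                        (c - x) / (c - b + 1e-12)))

def topology_step(error, d_error, w_out, prune_tol=1e-3):
    """Decide how to change the hidden layer after a training epoch.

    Returns (grow, keep_mask): grow is True when a neuron should be
    added; keep_mask marks hidden units whose hidden-to-output weights
    survive pruning. Rules and thresholds are illustrative only."""
    # IF error is HIGH AND improvement is SMALL THEN add a neuron
    high_err = tri(error, 0.05, 0.5, 1.0)
    stalled = tri(abs(d_error), -0.01, 0.0, 0.01)
    grow = min(high_err, stalled) > 0.5
    # prune hidden-to-output weights that have decayed to noise
    keep_mask = np.abs(w_out) > prune_tol
    return grow, keep_mask

grow, keep = topology_step(error=0.4, d_error=0.001,
                           w_out=np.array([0.8, -0.0004, 0.3]))
```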

A Component-wise Load Forecasting by Adaptable Artificial Neural Network (적응력을 갖는 신경회로망에 의한 성분별 부하 예측)

  • Lim, Jae-Yoon; Kim, Jin-Soo; Kim, Jung-Hoon
    • Proceedings of the KIEE Conference / 1994.11a / pp.21-23 / 1994
  • The forecast accuracy of the BP algorithm largely depends on the number of neurons in the hidden layer. To construct an optimal structure, we first prescribe error bounds for the learning procedure and then provide a method for incrementing the number of hidden neurons using the derivative of the errors with respect to the output neuron weights. As a case study, we apply the proposed method to forecast component-wise residential load and compare the results with those of time series forecasting.
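
A compact sketch of the constructive idea, assuming a simple stand-in criterion: retrain, check the prescribed error bound, and add a hidden neuron while the bound is not met. The paper's actual growth signal (the derivative of the errors with respect to the output weights) is replaced here by the training MSE for brevity.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def grow_until_bound(X, y, err_bound=0.01, max_hidden=30):
    """Increase hidden neurons until training MSE meets the bound."""
    n_hidden = 2
    while n_hidden <= max_hidden:
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,),
                           solver='lbfgs', max_iter=2000, random_state=0)
        net.fit(X, y)
        mse = np.mean((net.predict(X) - y) ** 2)
        if mse <= err_bound:
            return net, n_hidden
        n_hidden += 1          # capacity exhausted: add a neuron
    return net, n_hidden - 1

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2]   # toy load-like target
net, k = grow_until_bound(X, y)
```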

Discernment of Android User Interaction Data Distribution Using Deep Learning

  • Ho, Jun-Won
    • International Journal of Internet, Broadcasting and Communication / v.14 no.3 / pp.143-148 / 2022
  • In this paper, we employ a deep neural network (DNN) to discern Android user interaction data distributions from artificial data distributions. We utilize a real Android user interaction trace dataset collected from [1] to evaluate our DNN design. In particular, we use a sequential model with 4 dense hidden layers and 1 dense output layer in TensorFlow and Keras. We deploy a sigmoid activation function for the dense output layer with 1 neuron and a ReLU activation function for each dense hidden layer with 32 neurons. Our evaluation shows that our DNN design achieves a high test accuracy of at least 0.9955 and a low test loss of at most 0.0116 across all artificial data distributions.
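
The stated design maps directly onto Keras. The layer sizes and activations below follow the description; the input dimension, optimizer, and loss are assumptions, since the abstract does not give them.

```python
import tensorflow as tf

n_features = 16  # assumed input dimension; not stated in the abstract

model = tf.keras.Sequential(
    [tf.keras.Input(shape=(n_features,))]
    + [tf.keras.layers.Dense(32, activation="relu") for _ in range(4)]
    + [tf.keras.layers.Dense(1, activation="sigmoid")]
)
# binary decision: real user-interaction data vs. artificial data
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```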

Illumination correction via improved grey wolf optimizer for regularized random vector functional link network

  • Xiaochun Zhang; Zhiyu Zhou
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.816-839 / 2023
  • In a random vector functional link (RVFL) network, shortcomings such as stagnation at local optima and degraded convergence reduce the accuracy of illumination correction when the weights and biases of the hidden neurons are simply assigned at random. In this study, we propose an improved regularized random vector functional link (RRVFL) network algorithm with an optimized grey wolf optimizer (GWO). We first apply the moth-flame optimization (MFO) algorithm to provide a set of good initial populations that improve the convergence rate of GWO. The MFO-GWO algorithm then simultaneously optimizes the input features, input weights, hidden nodes, and biases of the RRVFL, thereby avoiding stagnation at local optima. Finally, the MFO-GWO-RRVFL algorithm is applied to improve the illumination correction of various test images. The experimental results reveal that the MFO-GWO-RRVFL algorithm is stable and compatible and exhibits a fast convergence rate.
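
For context, once the hidden weights and biases are fixed, a regularized RVFL readout has a closed form, which is the part shown below; the MFO-GWO step that tunes the input features, weights, and biases is abstracted away, and the ridge parameter and sizes are illustrative.

```python
import numpy as np

def rrvfl_fit(X, Y, n_hidden=50, lam=1e-2, rng=None):
    """Regularized RVFL: random hidden layer plus direct input links,
    with a ridge-regression readout. In the paper the hidden weights
    and biases are tuned by MFO-GWO; here they are drawn at random."""
    rng = rng or np.random.default_rng(0)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, n_hidden)
    H = np.tanh(X @ W + b)
    D = np.hstack([X, H])                     # direct links + hidden
    beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def rrvfl_predict(X, W, b, beta):
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
Y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(300, 1))
W, b, beta = rrvfl_fit(X, Y)
Y_hat = rrvfl_predict(X, W, b, beta)
```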

Neural Network Architecture Optimization and Application

  • Liu, Zhijun; Sugisaka, Masanori
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1999.10a / pp.214-217 / 1999
  • In this paper, a genetic algorithm (GA) is implemented to search for the optimal structures (i.e., the kind of neural network and the numbers of inputs and hidden neurons) of neural networks used to approximate a given nonlinear function. Two kinds of neural networks are involved in this paper: the multilayer feedforward network [1] and the time delay neural network (TDNN) [2]. The synapse weights of each neural network in each generation are obtained by the associated training algorithms. Simulation results for nonlinear function approximation are presented, and some future improvements are outlined.
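
A toy sketch of the GA search under assumptions: the chromosome encodes only the number of lagged inputs and hidden neurons, fitness is the training error of a quickly fitted feedforward network, and selection and mutation are deliberately minimal.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.linspace(0, 8 * np.pi, 600)
series = np.sin(t) + 0.5 * np.sin(2.2 * t)    # nonlinear target function

def fitness(n_inputs, n_hidden):
    """Negative training MSE of a one-step-ahead predictor."""
    n_inputs, n_hidden = int(n_inputs), int(n_hidden)
    X = np.stack([series[i:i - n_inputs] for i in range(n_inputs)], 1)
    y = series[n_inputs:]
    net = MLPRegressor((n_hidden,), solver='lbfgs', max_iter=500,
                       random_state=0).fit(X, y)
    return -np.mean((net.predict(X) - y) ** 2)

pop = [(rng.integers(1, 8), rng.integers(2, 16)) for _ in range(8)]
for gen in range(5):
    scored = sorted(pop, key=lambda g: fitness(*g), reverse=True)
    parents = scored[:4]                      # truncation selection
    pop = parents + [(max(1, p[0] + rng.integers(-1, 2)),
                      max(2, p[1] + rng.integers(-2, 3)))
                     for p in parents]        # mutated offspring
best = max(pop, key=lambda g: fitness(*g))
```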

A neural network with local weight learning and its application to inverse kinematic robot solution (부분 학습구조의 신경회로와 로보트 역 기구학 해의 응용)

  • 이인숙; 오세영
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1990.10a / pp.36-40 / 1990
  • Conventional back propagation learning is generally slow and rather inaccurate, which makes it difficult to use in control applications. A new multilayer perceptron architecture and its learning algorithm are proposed, consisting of a Kohonen front layer followed by a back propagation network. The Kohonen layer selects a subset of the hidden layer neurons for local tuning. This architecture has been tested on the inverse kinematic solution of a robot manipulator, demonstrating fast and accurate learning.
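
A rough sketch of the local-tuning idea: a Kohonen-style front layer picks the prototype nearest the input, and only the hidden units assigned to that winner are activated and updated by backpropagation. The assignment of hidden units to prototypes and the update rules are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_protos, units_per_proto, n_out = 2, 4, 5, 2
protos = rng.normal(size=(n_protos, n_in))           # Kohonen codebook
W1 = rng.normal(0, 0.5, (n_protos, units_per_proto, n_in))
W2 = rng.normal(0, 0.5, (n_protos, n_out, units_per_proto))

def forward(x):
    """The Kohonen winner gates which local hidden subnet is used."""
    win = np.argmin(((protos - x) ** 2).sum(1))      # nearest prototype
    h = np.tanh(W1[win] @ x)                         # local hidden units
    return W2[win] @ h, win, h

def train_step(x, target, lr=0.05):
    y, win, h = forward(x)
    err = y - target
    # back-propagate through the winning subnet only (local tuning)
    W2[win] -= lr * np.outer(err, h)
    dh = (W2[win].T @ err) * (1 - h ** 2)
    W1[win] -= lr * np.outer(dh, x)
    protos[win] += lr * (x - protos[win])            # Kohonen update
    return float((err ** 2).mean())

loss = train_step(rng.normal(size=2), np.array([0.5, -0.2]))
```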

Analyzing Performance and Dynamics of Echo State Networks Given Various Structures of Hidden Neuron Connections (Echo State Network 모델의 은닉 뉴런 간 연결구조에 따른 성능과 동역학적 특성 분석)

  • Yoon, Sangwoong; Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.21 no.4 / pp.338-342 / 2015
  • A Recurrent Neural Network (RNN), a machine learning model that can handle time-series data, can possess more varied structures than a feed-forward neural network, since an RNN allows hidden-to-hidden connections. This research focuses on the network structure among hidden neurons and discusses the information processing capability of RNNs. The time-series learning potential and dynamics of RNNs are investigated for several well-established network structure models. The hidden neuron network structure is found to have a significant impact on model performance, and the performance variations are generally correlated with the criticality of the network dynamics. The preferential attachment network model in particular showed interesting behavior. These findings provide clues for improving the performance of RNNs.
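
An echo state network makes the hidden-to-hidden structure easy to vary, since the reservoir is fixed and only the readout is trained. A minimal sketch follows, assuming a sparse random reservoir; a ring, small-world, or preferential-attachment adjacency matrix can be swapped in for A to reproduce the kind of comparison described above.

```python
import numpy as np

rng = np.random.default_rng(5)
n_res, n_in = 100, 1

# hidden-to-hidden structure: swap this adjacency matrix to compare
# random, ring, small-world, or preferential-attachment reservoirs
A = (rng.random((n_res, n_res)) < 0.05) * rng.normal(size=(n_res, n_res))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))  # set spectral radius
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

u = np.sin(np.linspace(0, 20 * np.pi, 2000))[:, None]  # toy input
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for step in range(len(u)):
    x = np.tanh(A @ x + W_in @ u[step])          # reservoir update
    states[step] = x

# ridge-regression readout, one-step-ahead prediction target
target = u[1:, 0]
S = states[:-1]
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ target)
pred = S @ w_out
```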