Classification algorithm using characteristics of EBP and OVSSA

  • Jong-Chan Lee (Department of Internet, Chungwoon University)
  • Received : 2017.12.13
  • Accepted : 2018.02.20
  • Published : 2018.02.28

Abstract

This paper starts from the simple premise that efficient learning in a multi-layered network is the process of finding an optimal set of weight vectors. To overcome the shortcomings of conventional learning methods, the proposed model combines features of EBP and OVSSA. In other words, the proposed method builds a single model that exploits the strengths of each algorithm: the probabilistic search of OVSSA is used to escape the local minima into which EBP tends to fall. In the proposed algorithm, the error-reduction measure of EBP serves as the energy function, and this energy is minimized by OVSSA. A simple experiment confirms that these two algorithms, despite their different properties, can be combined.
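
The hybrid idea described above can be illustrated with a minimal sketch. The abstract does not specify the internals of OVSSA, the network architecture, or the cooling schedule, so the Python code below substitutes a generic simulated-annealing acceptance rule as a stand-in for OVSSA and uses a small one-hidden-layer sigmoid network on the XOR problem purely for illustration; the EBP error (mean squared error) plays the role of the energy function, as the abstract describes. All names, sizes, and hyperparameters here are assumptions, not the paper's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(params, X):
    """Forward pass of a one-hidden-layer sigmoid network (illustrative architecture)."""
    W1, b1, W2, b2 = params
    H = sigmoid(X @ W1 + b1)
    return sigmoid(H @ W2 + b2)

def energy(params, X, targets):
    """EBP-style error (mean squared error) reused as the energy function."""
    return np.mean((forward(params, X) - targets) ** 2)

def hybrid_train(X, targets, n_hidden=4, steps=20000, T0=1.0, cooling=0.9995, sigma=0.1):
    """Minimize the EBP error with an annealing-style stochastic weight search
    (a stand-in for OVSSA): perturb the weights, always accept improvements, and
    accept worse states with probability exp(-dE / T) so the search can escape
    local minima while the temperature T is gradually lowered."""
    n_in, n_out = X.shape[1], targets.shape[1]
    params = [rng.normal(0, 0.5, (n_in, n_hidden)), np.zeros(n_hidden),
              rng.normal(0, 0.5, (n_hidden, n_out)), np.zeros(n_out)]
    E, T = energy(params, X, targets), T0
    for _ in range(steps):
        # Propose a random perturbation of the current weight vectors.
        candidate = [p + rng.normal(0, sigma, p.shape) for p in params]
        dE = energy(candidate, X, targets) - E
        if dE < 0 or rng.random() < np.exp(-dE / T):
            params, E = candidate, E + dE
        T *= cooling  # geometric cooling schedule
    return params, E

# Example: the XOR problem, a classic case where gradient-only training can stall.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([[0], [1], [1], [0]], dtype=float)
params, final_E = hybrid_train(X, targets)
print("final energy (MSE):", round(final_E, 4))
print("network outputs  :", forward(params, X).ravel().round(2))
```

In this sketch the stochastic acceptance step is what lets the search climb out of local minima; an implementation following the paper would replace it with the actual OVSSA procedure and could interleave it with ordinary EBP gradient updates.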

Keywords

EBP; OVSSA; Classification; Energy function; Optimization problem
