• Title/Summary/Keyword: Feed-Forward Neural Networks


Fight Detection in Hockey Videos using Deep Network

  • Mukherjee, Subham; Saini, Rajkumar; Kumar, Pradeep; Roy, Partha Pratim; Dogra, Debi Prosad; Kim, Byung-Gyu
    • Journal of Multimedia Information System, v.4 no.4, pp.225-232, 2017
  • Understanding actions in videos is an important task. It helps in finding anomalies present in videos, such as fights. Detection of fights becomes even more crucial when it comes to sports. This paper focuses on finding fight scenes in hockey videos using blur and Radon-transform information together with convolutional neural networks (CNNs). First, the local motion within the video frames has been extracted using blur information. Next, the fast Fourier and Radon transforms have been applied to the local motion. The video frames containing fight scenes have been identified using transfer learning with the pre-trained deep learning model VGG-Net. Finally, the methodology has been compared against feed-forward neural networks. Accuracies of 56.00% and 75.00% have been achieved using the feed-forward neural network and VGG16-Net, respectively.
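
For orientation, here is a minimal Python sketch of the kind of pipeline the abstract describes: a blur-based local-motion map, FFT and Radon transforms of that map, and a transfer-learning head on a frozen VGG16 base. All function names, parameter values, and the binary fight/non-fight head are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.transform import radon
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def local_motion(prev_frame, frame, sigma=2.0):
    """Crude blur-based local-motion map: difference of Gaussian-blurred frames."""
    return np.abs(gaussian_filter(frame.astype(float), sigma)
                  - gaussian_filter(prev_frame.astype(float), sigma))

def fft_radon_features(motion, angles=np.linspace(0.0, 180.0, 64, endpoint=False)):
    """Log-magnitude FFT of the motion map followed by a Radon transform."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(motion)))
    return radon(np.log1p(spectrum), theta=angles, circle=False)

# Transfer learning: freeze the VGG16 convolutional base and train a small
# binary (fight / non-fight) classification head on the video frames.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False
classifier = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```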

Effect of Nonlinear Transformations on Entropy of Hidden Nodes

  • Oh, Sang-Hoon
    • International Journal of Contents, v.10 no.1, pp.18-22, 2014
  • Hidden nodes play a key role in the information processing of feed-forward neural networks, in which inputs are processed through a series of weighted sums and nonlinear activation functions. In order to understand the role of hidden nodes, we must analyze the effect of the nonlinear activation functions on the weighted sums to hidden nodes. In this paper, we focus on the effect of nonlinear functions from the viewpoint of information theory. Under the assumption that the nonlinear activation function can be approximated piece-wise linearly, we prove that the entropy of the weighted sums to hidden nodes decreases after passing through the piece-wise linear functions. Therefore, we argue that the nonlinear activation function decreases the uncertainty among hidden nodes. Furthermore, the more the hidden nodes are saturated, the more the entropy of the hidden nodes decreases. Based on this result, we can say that, after successful training of feed-forward neural networks, hidden nodes tend not to lie in the linear regions but in the saturated regions of the activation function, with the effect of uncertainty reduction.
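
The abstract's central claim can be illustrated with a small numerical experiment: estimate the entropy of simulated weighted sums before and after a saturating sigmoid, using a fixed histogram bin width so the two quantities are comparable. This is only a rough empirical illustration under assumed distributions, not the paper's proof.

```python
import numpy as np

rng = np.random.default_rng(0)
weighted_sums = rng.normal(0.0, 3.0, size=100_000)   # simulated pre-activation values
activated = 1.0 / (1.0 + np.exp(-weighted_sums))     # sigmoid output, largely saturated

def hist_entropy_bits(x, bin_width=0.05):
    """Plug-in entropy estimate (bits) with a fixed bin width, so that the
    pre- and post-activation values are compared on an equal footing."""
    edges = np.arange(x.min(), x.max() + bin_width, bin_width)
    counts, _ = np.histogram(x, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

print("entropy of weighted sums   :", hist_entropy_bits(weighted_sums))
print("entropy after the sigmoid  :", hist_entropy_bits(activated))   # noticeably smaller
```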

Global Function Approximations Using Wavelet Neural Networks (웨이블렛 신경망을 이용한 전역근사 메타모델의 성능비교)

  • Shin, Kwang-Ho; Lee, Jong-Soo
    • Transactions of the Korean Society of Mechanical Engineers A, v.33 no.8, pp.753-759, 2009
  • Feed-forward neural networks have been widely used as function approximation tools in the context of global approximate optimization. In the present study, a wavelet neural network (WNN), which is based on wavelet transform theory, is suggested as an alternative to a traditional back-propagation neural network (BPN). The basic theory of the wavelet neural network is briefly described, and its approximation performance is tested using a nonlinear multimodal function and a composite rotor blade analysis problem. The Laplacian of Gaussian, Mexican hat, and Morlet functions are considered during the construction of the WNN architectures. In addition, approximation results from the WNN are compared with those from the BPN.
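
A reduced sketch of a wavelet network of this kind is shown below: Mexican hat (Ricker) wavelets on a fixed grid of centers approximate a 1-D multimodal function, with only the output weights solved by least squares. The target function, grid, and scale are illustrative assumptions and do not reproduce the paper's test problems.

```python
import numpy as np

def mexican_hat(t):
    """Mexican hat (Ricker) wavelet, up to a normalization constant."""
    return (1.0 - t ** 2) * np.exp(-0.5 * t ** 2)

def wavelet_design_matrix(x, centers, scale):
    """One column per hidden wavelet node evaluated at the sample points."""
    return mexican_hat((x[:, None] - centers[None, :]) / scale)

x = np.linspace(-3.0, 3.0, 400)
y = np.sin(3.0 * x) * np.exp(-0.2 * x ** 2)          # nonlinear multimodal target

centers = np.linspace(-3.0, 3.0, 15)                  # hidden wavelet nodes
Phi = wavelet_design_matrix(x, centers, scale=0.5)
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # output-layer weights

approx = Phi @ weights
print("RMS approximation error:", np.sqrt(np.mean((approx - y) ** 2)))
```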

Comparison of Objective Functions for Feed-forward Neural Network Classifiers Using Receiver Operating Characteristics Graph

  • Oh, Sang-Hoon; Wakuya, Hiroshi
    • International Journal of Contents, v.10 no.1, pp.23-28, 2014
  • When developing a classifier using various objective functions, it is important to compare the performances of the resulting classifiers. Although there are statistical analyses of objective functions for classifiers, simulation results can provide direct comparison results, and in this case the comparison criterion is critical. A Receiver Operating Characteristics (ROC) graph is a simulation technique for comparing classifiers and selecting the better one based on performance. In this paper, we adopt the ROC graph to compare classifiers trained with the mean-squared error, cross-entropy error, classification figure of merit, and the n-th order extension of the cross-entropy error functions. After training feed-forward neural networks on the CEDAR database, the ROC graphs are plotted to help identify which objective function is better.
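
The comparison protocol can be sketched as follows: train the same small feed-forward network under two different objective functions and overlay the resulting ROC curves. MSE and cross-entropy here are stand-ins for the four objectives studied, and synthetic data replaces the CEDAR digits used in the paper.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, models

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for loss in ["mse", "binary_crossentropy"]:
    # Same architecture each time; only the objective function changes.
    net = models.Sequential([
        layers.Dense(32, activation="tanh", input_shape=(20,)),
        layers.Dense(1, activation="sigmoid"),
    ])
    net.compile(optimizer="adam", loss=loss)
    net.fit(X_tr, y_tr, epochs=20, verbose=0)
    fpr, tpr, _ = roc_curve(y_te, net.predict(X_te, verbose=0).ravel())
    plt.plot(fpr, tpr, label=f"{loss} (AUC = {auc(fpr, tpr):.3f})")

plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend()
plt.show()
```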

Contour Plots of Objective Functions for Feed-Forward Neural Networks

  • Oh, Sang-Hoon
    • International Journal of Contents, v.8 no.4, pp.30-35, 2012
  • Error surfaces provide us with very important information for the training of feed-forward neural networks (FNNs). In this paper, we draw the contour plots of various error or objective functions used for training FNNs. First, when applying FNNs to classification, the weakness of the mean-squared error is explained from the viewpoint of its error contour plot. Then the classification figure of merit, mean log-square error, cross-entropy error, and n-th order extension of the cross-entropy error objective functions are considered for the contour plots. The recently proposed target node method is also explained from the viewpoint of its contour plot. Based on the contour plots, we can explain the characteristics of the various error or objective functions as the training of FNNs proceeds.
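
As a simplified stand-in for the plots discussed above, the snippet below draws contour plots of the mean-squared error and the cross-entropy error over the two outputs (y1, y2) of a classifier for a single pattern with target (1, 0); the setup is an illustrative reduction, not the paper's exact formulation.

```python
import numpy as np
import matplotlib.pyplot as plt

# Grid of possible network outputs (y1, y2) for one two-class pattern with target (1, 0).
y1, y2 = np.meshgrid(np.linspace(0.01, 0.99, 200), np.linspace(0.01, 0.99, 200))

mse = 0.5 * ((y1 - 1.0) ** 2 + (y2 - 0.0) ** 2)   # mean-squared error
ce = -(np.log(y1) + np.log(1.0 - y2))             # cross-entropy error for targets (1, 0)

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, err, name in [(axes[0], mse, "MSE"), (axes[1], ce, "cross-entropy")]:
    ax.contour(y1, y2, err, levels=15)
    ax.set_xlabel("y1 (target 1)")
    ax.set_ylabel("y2 (target 0)")
    ax.set_title(name)
plt.tight_layout()
plt.show()
```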

GA-based Feed-forward Self-organizing Neural Network Architecture and Its Applications for Multi-variable Nonlinear Process Systems

  • Oh, Sung-Kwun; Park, Ho-Sung; Jeong, Chang-Won; Joo, Su-Chong
    • KSII Transactions on Internet and Information Systems (TIIS), v.3 no.3, pp.309-330, 2009
  • In this paper, we introduce the architecture of Genetic Algorithm (GA)-based feed-forward Polynomial Neural Networks (PNNs) and discuss a comprehensive design methodology. A conventional PNN consists of polynomial neurons, or nodes, located in several layers through a network growth process. In order to generate structurally optimized PNNs, a GA-based design procedure for each layer of the PNN leads to the selection of preferred nodes (PNs) with optimal parameters available within the PNN. To evaluate the performance of the GA-based PNN, experiments are carried out by applying Medical Imaging System (MIS) data to the modeling of a multi-variable software process. A comparative analysis shows that the proposed GA-based PNN achieves higher accuracy and superior predictive capability compared with previously presented intelligent models.
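
The basic building block of such a PNN can be sketched as a single polynomial neuron: a quadratic polynomial of two selected inputs whose coefficients are estimated by least squares. The GA-driven selection of preferred nodes is only indicated in the comments; everything here is an illustrative simplification of the GMDH-style scheme, not the authors' design procedure.

```python
import numpy as np

def _design(x1, x2):
    # z = c0 + c1*x1 + c2*x2 + c3*x1*x2 + c4*x1**2 + c5*x2**2
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def fit_polynomial_neuron(x1, x2, y):
    """Least-squares coefficients of one quadratic polynomial neuron."""
    coeffs, *_ = np.linalg.lstsq(_design(x1, x2), y, rcond=None)
    return coeffs

def eval_polynomial_neuron(coeffs, x1, x2):
    return _design(x1, x2) @ coeffs

# Layer growth (sketch): for every candidate pair of inputs, fit a neuron,
# score it on validation data, and let the GA keep only the preferred nodes;
# the outputs of the kept neurons become the inputs of the next layer.
```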

Design of hetero-hybridized feed-forward neural networks with information granules using evolutionary algorithm

  • Roh, Seok-Beom; Oh, Sung-Kwun; Ahn, Tae-Chon
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2005.11a, pp.483-487, 2005
  • We introduce a new architecture of hetero-hybridized feed-forward neural networks composed of fuzzy set-based polynomial neural networks (FSPNN) and polynomial neural networks (PNN) that are based on a genetically optimized multi-layer perceptron, and develop their comprehensive design methodology involving mechanisms of genetic optimization and information granulation. The construction of the information-granulation-based HFSPNN (IG-HFSPNN) exploits fundamental technologies of Computational Intelligence (CI), namely fuzzy sets, neural networks, genetic algorithms (GAs), and information granulation. The architecture of the resulting genetically optimized information-granulation-based HFSPNN (namely IG-gHFSPNN) results from a synergistic usage of the hybrid system generated by combining new fuzzy set-based polynomial neurons (FPNs)-based fuzzy neural networks with polynomial neurons (PNs)-based polynomial neural networks. The design of the conventional genetically optimized HFPNN exploits the extended Group Method of Data Handling (GMDH), with some essential parameters of the network being tuned by using genetic algorithms throughout the overall development process. However, the newly proposed IG-HFSPNN adopts a method called information granulation to deal with the information granules present in the real system, and a new type of fuzzy polynomial neuron called a fuzzy set-based polynomial neuron. The performance of the IG-gHFSPNN is quantified through experimentation.
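
A very reduced sketch of the information-granulation step is given below: cluster the input data (here with k-means) and reuse the granule centers and spreads as the centers and widths of Gaussian fuzzy-set membership functions for the first layer. The clustering choice and all names are assumptions for illustration, not the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans

def information_granules(X, n_granules=3, random_state=0):
    """Return (centers, spreads) describing fuzzy granules of the data X."""
    km = KMeans(n_clusters=n_granules, n_init=10, random_state=random_state).fit(X)
    centers = km.cluster_centers_
    # Spread of each granule = per-feature std of the points assigned to it.
    spreads = np.array([X[km.labels_ == k].std(axis=0) + 1e-6
                        for k in range(n_granules)])
    return centers, spreads

def granule_memberships(X, centers, spreads):
    """Gaussian membership of each sample in each granule."""
    d = (X[:, None, :] - centers[None, :, :]) / spreads[None, :, :]
    return np.exp(-0.5 * (d ** 2).sum(axis=2))
```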


A Study on the Neuro-Fuzzy Control and Its Application

  • So, Myung-Ok; Yoo, Heui-Han; Jin, Sun-Ho
    • Journal of Advanced Marine Engineering and Technology, v.28 no.2, pp.228-236, 2004
  • In this paper, we present a neuro-fuzzy controller which unifies both fuzzy logic and multi-layered feed-forward neural networks. Fuzzy logic provides a means for converting linguistic control knowledge into control actions. On the other hand, feed-forward neural networks provide salient features such as learning and parallelism. In the proposed neuro-fuzzy controller, the parameters of the membership functions in the antecedent part of the fuzzy inference rules are identified by using the error back-propagation algorithm as a learning rule, while the coefficients of the linear combination of input variables in the consequent part are determined by using the least-squares estimation method. Finally, the effectiveness of the proposed controller is verified through computer simulation of an inverted pole system.
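
A minimal sketch of the TSK-style inference such a neuro-fuzzy controller relies on is given below: Gaussian antecedent memberships, firing-strength-weighted linear consequents, and a least-squares estimate of the consequent coefficients for fixed antecedents. The back-propagation update of the antecedent parameters is omitted, and all names and shapes are illustrative assumptions.

```python
import numpy as np

def gaussian_mf(x, centers, sigmas):
    """Membership of input x in each rule's antecedent fuzzy sets."""
    return np.exp(-0.5 * ((x - centers) / sigmas) ** 2)

def tsk_output(x, centers, sigmas, consequents):
    """x: (n_inputs,); centers/sigmas: (n_rules, n_inputs); consequents: (n_rules, n_inputs+1)."""
    w = np.prod(gaussian_mf(x, centers, sigmas), axis=1)   # rule firing strengths
    w = w / w.sum()
    rule_out = consequents[:, 0] + consequents[:, 1:] @ x  # linear consequents
    return w @ rule_out

def fit_consequents(X, y, centers, sigmas):
    """Least-squares estimate of consequent coefficients for fixed antecedents."""
    rows = []
    for x in X:
        w = np.prod(gaussian_mf(x, centers, sigmas), axis=1)
        w = w / w.sum()
        rows.append(np.outer(w, np.r_[1.0, x]).ravel())
    theta, *_ = np.linalg.lstsq(np.array(rows), y, rcond=None)
    return theta.reshape(centers.shape[0], centers.shape[1] + 1)
```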

Audio Event Detection Using Deep Neural Networks (깊은 신경망을 이용한 오디오 이벤트 검출)

  • Lim, Minkyu; Lee, Donghyun; Park, Hosung; Kim, Ji-Hwan
    • Journal of Digital Contents Society, v.18 no.1, pp.183-190, 2017
  • This paper proposes an audio event detection method using Deep Neural Networks (DNN). The proposed method applies a Feed-Forward Neural Network (FFNN) to generate output probabilities for twenty audio events for each frame. Mel-scale filter bank (FBANK) features are extracted from each frame, and five consecutive frames are combined into one vector, which is the input feature of the FFNN. The output layer of the FFNN produces audio event probabilities for each input feature vector. An audio event is detected when its probability exceeds the threshold for more than five consecutive frames, and a detected event is considered to continue as long as it is detected again within one second. The proposed method achieves 71.8% accuracy for the 20 classes of the UrbanSound8K and BBC Sound FX datasets.
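
The frame-stacking front end and per-frame FFNN described above might look roughly like the following; the FBANK dimension, layer sizes, and detection threshold are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

N_FBANK, CONTEXT, N_EVENTS = 40, 5, 20   # assumed 40 FBANK coefficients per frame

def stack_frames(fbank, context=CONTEXT):
    """fbank: (n_frames, N_FBANK) -> (n_frames - context + 1, context * N_FBANK)."""
    return np.stack([fbank[i:i + context].ravel()
                     for i in range(len(fbank) - context + 1)])

# FFNN producing per-frame probabilities over the twenty audio event classes.
model = models.Sequential([
    layers.Dense(512, activation="relu", input_shape=(CONTEXT * N_FBANK,)),
    layers.Dense(512, activation="relu"),
    layers.Dense(N_EVENTS, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

def detect_events(probs, threshold=0.5, min_run=5):
    """Flag an event when its probability exceeds the threshold for more than min_run frames."""
    hits = probs > threshold                  # (n_frames, N_EVENTS) boolean matrix
    detected = []
    for event in range(probs.shape[1]):
        run = 0
        for frame_hit in hits[:, event]:
            run = run + 1 if frame_hit else 0
            if run > min_run:
                detected.append(event)
                break
    return detected
```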

Magnetic Flux Leakage (MFL) based Defect Characterization of Steam Generator Tubes using Artificial Neural Networks

  • Daniel, Jackson; Abudhahir, A.; Paulin, J. Janet
    • Journal of Magnetics, v.22 no.1, pp.34-42, 2017
  • Material defects in the Steam Generator Tubes (SGT) of a sodium-cooled fast breeder reactor (PFBR) can lead to leakage of water into sodium. The resulting water-sodium reaction can lead to major accidents. Therefore, the examination of steam generator tubes for the early detection of defects is an important requirement for safety and economic considerations. In this work, the Magnetic Flux Leakage (MFL) based Non-Destructive Testing (NDT) technique is used to perform the defect detection process. Rectangular notch defects on the outer surface of the steam generator tubes are modeled using COMSOL Multiphysics 4.3a software. The obtained MFL images are de-noised to improve the integrity of flaw-related information. Grey Level Co-occurrence Matrix (GLCM) features are extracted from the MFL images and taken as input parameters to train the neural network. A comparative study on characterization has been carried out using feed-forward back-propagation (FFBP) and cascade-forward back-propagation (CFBP) algorithms. The results of both algorithms are evaluated with Mean Square Error (MSE) as a prediction performance measure. The average percentage errors for length, depth, and width are also computed. The results show that the feed-forward back-propagation network model performs better in characterizing the defects.
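
A rough sketch of the characterization step is shown below: GLCM texture features extracted from an MFL image feed a feed-forward back-propagation regressor that predicts defect length, depth, and width. The feature set, network size, and the scikit-image/scikit-learn usage are illustrative assumptions rather than the authors' setup.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # 'greycomatrix' in older scikit-image
from sklearn.neural_network import MLPRegressor

def glcm_features(image_u8):
    """image_u8: 2-D uint8 MFL image; returns a small GLCM texture feature vector."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Feed-forward back-propagation regressor mapping GLCM features to
# (length, depth, width); evaluated with mean squared error as in the abstract.
ffbp = MLPRegressor(hidden_layer_sizes=(20,), solver="adam", max_iter=2000)
# Hypothetical training call, given train_images and (length, depth, width) targets:
# ffbp.fit(np.vstack([glcm_features(img) for img in train_images]), train_targets)
```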