• Title/Summary/Keyword: training parameters

Search Results: 1,021

A Note on E-Learning Dynamic Assessment with Fuzzy Estimations

  • Orozova Daniela;Kim Tae-Kyun;Kim Yung-Hwan;Park Dal-Won;Seo Jong-Jin;Atanassov Krassimir;Kang Dong-Jin;Rim Seog-Hoon;Jang Lee-Chae;Ryoo Cheon-Seoung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.3 / pp.179-182 / 2005
  • A model of an assessment module has been created using intuitionistic fuzzy estimations, which reflect the knowledge of the trained subjects. The final mark is determined on the basis of a set of evaluation units. The model offers the opportunity not only to trace changes in the parameters of the trained subject, but also to trace the status of already comprehended knowledge and to evaluate and revise the training themes and evaluation criteria.
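
The abstract gives no formulas, so purely as illustration: intuitionistic fuzzy estimations assign each evaluation unit a membership degree mu and a non-membership degree nu with mu + nu <= 1, and a final estimation can be formed by aggregating these pairs. The weighted-average rule and the numbers below are assumptions, not the authors' model.

```python
# Minimal sketch: aggregating intuitionistic fuzzy estimations from several
# evaluation units into one final estimation. The weighted-average rule and
# the example numbers are illustrative assumptions, not the paper's model.

def aggregate_ifs(estimations, weights=None):
    """Each estimation is a pair (mu, nu) with mu + nu <= 1:
    mu = degree to which the unit is judged mastered,
    nu = degree to which it is judged not mastered;
    1 - mu - nu is the hesitation margin."""
    n = len(estimations)
    weights = weights or [1.0 / n] * n
    mu = sum(w * m for w, (m, _) in zip(weights, estimations))
    nu = sum(w * v for w, (_, v) in zip(weights, estimations))
    return mu, nu

# Three evaluation units for one trainee (hypothetical values).
units = [(0.8, 0.1), (0.6, 0.3), (0.9, 0.05)]
mu, nu = aggregate_ifs(units)
print(f"final estimation: mu={mu:.2f}, nu={nu:.2f}, hesitation={1 - mu - nu:.2f}")
```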

Logical Combinations of Neural Networks

  • Pradittasnee, Lapas;Thammano, Arit;Noppanakeepong, Suthichai
    • Proceedings of the IEEK Conference / 2000.07b / pp.1053-1056 / 2000
  • In general, neural-network-based modeling involves trying multiple networks with different architectures and/or training parameters in order to achieve the best accuracy. Only the single best-trained network is chosen, while the rest are discarded. However, the single best network does not necessarily give the best solution in every situation. Many researchers have therefore proposed methods to improve the accuracy of neural-network-based modeling. In this paper, the idea of logical combinations of neural networks is proposed and discussed in detail. The logical combination is constructed by combining the corresponding outputs of the neural networks with a logical “And” node. Experimental results based on simulated data show that the modeling accuracy is significantly improved compared to using only the single best-trained neural network.

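The combination scheme the abstract describes, thresholding each trained network's output and accepting a sample only when every network agrees, can be sketched as follows. The synthetic dataset, the three architectures, and the use of scikit-learn are illustrative assumptions; the paper's experiments use its own simulated data.

```python
# Sketch: combine several independently trained networks with a logical "And"
# node, so a sample is classified as positive only when every network agrees.
# The dataset, the three architectures and the scikit-learn models are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Multiple networks with different architectures / training parameters.
nets = [
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=800, random_state=1),
    MLPClassifier(hidden_layer_sizes=(32, 8), max_iter=800, random_state=2),
    MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=800, random_state=3),
]
for net in nets:
    net.fit(X_tr, y_tr)

# Logical "And" of the individual binary decisions.
votes = np.stack([net.predict(X_te) for net in nets]).astype(bool)
and_output = np.logical_and.reduce(votes)

print("individual accuracies:", [round(net.score(X_te, y_te), 3) for net in nets])
print("AND-combined accuracy:", round(float(np.mean(and_output == y_te.astype(bool))), 3))
```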

Frontal view face recognition using the hidden Markov model and neural networks (은닉 마르코프 모델과 신경회로망을 이용한 정면 얼굴인식)

  • 윤강식;함영국;박래홍
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.9 / pp.97-106 / 1996
  • In this paper, we propose a face recognition algorithm using the hidden Markov model and neural networks (HMM-NN). In the preprocessing stage, we find the edges of a face using the locally adaptive threshold (LAT) scheme and extract features based on generic knowledge of a face, then construct a database with the extracted features. In the training stage, we generate HMM parameters for each person by using the forward-backward algorithm. In the recognition stage, we apply the probability values calculated by the HMM to subsequent neural networks (NN) as input data. Computer simulation shows that the proposed HMM-NN algorithm gives a higher recognition rate than conventional face recognition algorithms.

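A minimal sketch of the HMM-NN pipeline: one Gaussian HMM per person fitted with the forward-backward (Baum-Welch) procedure, the per-model log-likelihoods of a sequence stacked into a vector, and that vector classified by a small neural network. The random stand-in features, the hmmlearn and scikit-learn models, and all model sizes are assumptions; the paper extracts face features with LAT-based edge detection.

```python
# Sketch of the HMM-NN idea: one HMM per person is fitted with Baum-Welch
# (forward-backward), the log-likelihoods of a sequence under every model are
# stacked into a vector, and that vector is classified by a neural network.
# Face-feature extraction is omitted; random features and the hmmlearn /
# scikit-learn models are stand-in assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_people, seq_len, n_feat = 3, 30, 8

# Hypothetical training sequences: five (seq_len x n_feat) sequences per person.
train_seqs = {p: [rng.normal(loc=p, size=(seq_len, n_feat)) for _ in range(5)]
              for p in range(n_people)}

# Training stage: one Gaussian HMM per person.
hmms = {}
for p, seqs in train_seqs.items():
    X, lengths = np.concatenate(seqs), [len(s) for s in seqs]
    hmms[p] = GaussianHMM(n_components=4, covariance_type="diag",
                          n_iter=50, random_state=0).fit(X, lengths)

def likelihood_vector(seq):
    """Log-likelihood of one observation sequence under every person's HMM."""
    return np.array([hmms[p].score(seq) for p in range(n_people)])

# Recognition stage: likelihood vectors become the NN's input data.
feats = np.array([likelihood_vector(s) for p in train_seqs for s in train_seqs[p]])
labels = np.array([p for p in train_seqs for _ in train_seqs[p]])
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(feats, labels)

test_seq = rng.normal(loc=1, size=(seq_len, n_feat))
print("predicted person:", nn.predict(likelihood_vector(test_seq).reshape(1, -1))[0])
```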

Simulator Output Knowledge Analysis Using Neural Network Approach: A Broadband Network Design Example

  • Kim, Gil-Jo;Park, Sung-Joo
    • Proceedings of the Korea Society for Simulation Conference / 1994.10a / pp.12-12 / 1994
  • Simulation output knowledge analysis is a problem-solving and/or knowledge acquisition process that investigates the behavior of the system under study through simulation. This paper describes an approach to simulation output knowledge analysis using a fuzzy neural network model. A fuzzy neural network model is designed with fuzzy sets and membership functions for the variables of the simulation model. The relationship between the input parameters and the output performances of the simulation model is captured as system behavior knowledge in the fuzzy neural network model by training on examples from simulation experiments. The backpropagation learning algorithm is used to encode this knowledge, which is then utilized for problem solving through simulation, such as system performance prediction and goal-directed analysis. For explicit knowledge acquisition, production rules are extracted from the implicit neural network knowledge. These rules may assist in explaining the simulation results and provide a knowledge base for an expert system. The approach thus enables both symbolic and numeric reasoning for problem solving through simulation. We applied this approach to the design of a broadband communication network.

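As a rough sketch of the metamodeling step described above: simulation input parameters are passed through fuzzy membership functions and a backpropagation-trained network learns the parameter-to-performance mapping; the rule-extraction step is omitted. The triangular membership functions, the toy "simulator", and the network size are assumptions, not the paper's design.

```python
# Sketch: fuzzify simulation input parameters with triangular membership
# functions and train a backpropagation network as a metamodel of the
# simulator's performance output. Membership functions, network size and the
# toy "simulator" below are illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPRegressor

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def fuzzify(x):
    """Map parameters scaled to [0, 1] onto {low, medium, high} memberships."""
    return np.stack([tri(x, -0.5, 0.0, 0.5),
                     tri(x, 0.0, 0.5, 1.0),
                     tri(x, 0.5, 1.0, 1.5)], axis=-1)

rng = np.random.default_rng(0)
params = rng.uniform(size=(500, 2))                    # e.g. arrival rate, buffer size
perf = np.sin(3 * params[:, 0]) + 0.5 * params[:, 1]   # stand-in simulator output

X = fuzzify(params).reshape(len(params), -1)           # 2 parameters x 3 fuzzy sets
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
net.fit(X, perf)                                       # gradient (backprop) training

query = fuzzify(np.array([[0.3, 0.8]])).reshape(1, -1)
print("predicted performance:", round(float(net.predict(query)[0]), 3))
```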

Sparse Kernel Regression using IRWLS Procedure

  • Park, Hye-Jung
    • Journal of the Korean Data and Information Science Society / v.18 no.3 / pp.735-744 / 2007
  • The support vector machine (SVM) is capable of providing a more complete description of the linear and nonlinear relationships among random variables. In this paper we propose a sparse kernel regression (SKR) to overcome a weak point of the SVM, namely the steep growth of the number of support vectors as the number of training data increases. The iteratively reweighted least squares (IRWLS) procedure is used to solve the optimization problem of SKR with a Laplacian prior. Furthermore, the generalized cross-validation (GCV) function is introduced to select the hyperparameters that affect the performance of SKR. Experimental results are presented to illustrate the performance of the proposed procedure.

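The core computation, kernel regression with a Laplacian (L1) prior solved by iteratively reweighted least squares, can be sketched in a few lines: the L1 penalty is approximated at each pass by a quadratic term reweighted with 1/(|beta_i| + eps). The Gaussian kernel, the fixed lambda and kernel width, and the eps smoothing constant are assumptions; the paper selects such hyperparameters with the GCV function.

```python
# Sketch: sparse kernel regression with a Laplacian (L1) prior solved by
# iteratively reweighted least squares. The L1 penalty |b_i| is approximated
# at each pass by b_i^2 / (|b_i| + eps), i.e. a reweighted ridge problem.
# Kernel width, lambda and eps are fixed here; the paper picks them via GCV.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, size=80))
y = np.sinc(x) + 0.1 * rng.normal(size=x.size)

def gaussian_kernel(a, b, width=0.5):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * width ** 2))

K = gaussian_kernel(x, x)
lam, eps = 0.05, 1e-6

# Start from the ridge solution, then reweight toward the Laplacian prior.
beta = np.linalg.solve(K.T @ K + lam * np.eye(x.size), K.T @ y)
for _ in range(50):
    W = np.diag(1.0 / (np.abs(beta) + eps))
    beta_new = np.linalg.solve(K.T @ K + lam * W, K.T @ y)
    if np.max(np.abs(beta_new - beta)) < 1e-6:
        beta = beta_new
        break
    beta = beta_new

support = np.abs(beta) > 1e-3          # coefficients that remain effectively non-zero
print(f"{int(support.sum())} of {x.size} kernel coefficients survive")
print("fit at x = 0:", (gaussian_kernel(np.array([0.0]), x) @ beta)[0])
```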

Application of artificial neural networks (ANNs) and linear regressions (LR) to predict the deflection of concrete deep beams

  • Mohammadhassani, Mohammad;Nezamabadi-pour, Hossein;Jumaat, Mohd Zamin;Jameel, Mohammed;Arumugam, Arul M.S.
    • Computers and Concrete / v.11 no.3 / pp.237-252 / 2013
  • This paper presents the application of an artificial neural network (ANN) to predict deep beam deflection using experimental data from eight high-strength self-compacting concrete (HSSCC) deep beams. The optimized network architecture had ten input parameters, two hidden layers, and one output. The feed-forward backpropagation network, with ten and four neurons in the first and second hidden layers and trained with the TRAINLM training function, predicted load-deflection diagrams far more accurately than classical linear regression (LR). The ANN's MSE values are 40 times smaller than the LR's, and the R value on the test data from the ANN is 0.9931, indicating a high confidence level.
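
A compact sketch of the reported architecture, ten inputs, hidden layers of ten and four neurons, and one output, follows. The paper trains with the Levenberg-Marquardt rule (MATLAB's TRAINLM); scikit-learn has no LM solver, so LBFGS stands in, and synthetic data replace the eight-beam experimental set. Both substitutions are assumptions.

```python
# Sketch of the reported architecture: 10 inputs -> hidden layers of 10 and 4
# neurons -> 1 output (deflection). The paper trains with Levenberg-Marquardt
# (MATLAB's TRAINLM); scikit-learn has no LM solver, so LBFGS stands in, and
# synthetic data replace the eight-beam experimental set.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 10))                              # 10 input parameters
y = X @ rng.uniform(size=10) + 0.05 * rng.normal(size=200)   # stand-in deflection

scaler = StandardScaler().fit(X)
ann = MLPRegressor(hidden_layer_sizes=(10, 4), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
ann.fit(scaler.transform(X), y)

pred = ann.predict(scaler.transform(X))
mse = float(np.mean((pred - y) ** 2))
r = float(np.corrcoef(pred, y)[0, 1])
print(f"training MSE = {mse:.4f}, R = {r:.4f}")
```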

Real Time Eye and Gaze Tracking

  • Min Jin-Kyoung;Cho Hyeon-Seob
    • Proceedings of the KAIS Fall Conference / 2004.11a / pp.234-239 / 2004
  • This paper describes preliminary results we have obtained in developing a computer vision system based on active IR illumination for real-time gaze tracking for interactive graphic display. Unlike most existing gaze tracking techniques, which often require assuming a static head to work well and require a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to individuals not used in the training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments that involve gaze-contingent interactive graphic display.

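The mapping step the abstract highlights, a Generalized Regression Neural Network from pupil parameters to screen coordinates, is essentially Nadaraya-Watson kernel regression and fits in a few lines of NumPy. The made-up pupil-parameter vectors, the feature layout, and the smoothing width sigma are assumptions standing in for a real calibration set.

```python
# Minimal GRNN sketch: the prediction is a Gaussian-kernel-weighted average of
# the training targets, here mapping a pupil-parameter vector to 2-D screen
# coordinates. The calibration data, feature layout and smoothing width sigma
# are illustrative assumptions.
import numpy as np

class GRNN:
    def __init__(self, sigma=0.3):
        self.sigma = sigma

    def fit(self, X, Y):
        self.X, self.Y = np.asarray(X, float), np.asarray(Y, float)
        return self

    def predict(self, Xq):
        Xq = np.atleast_2d(Xq)
        d2 = ((Xq[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * self.sigma ** 2))     # pattern-layer activations
        return (w @ self.Y) / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
pupil = rng.uniform(size=(100, 6))      # e.g. pupil centre, size, glint vector
screen = pupil[:, :2] * [1920, 1080]    # stand-in calibration targets (pixels)

gaze = GRNN(sigma=0.2).fit(pupil, screen)
print("estimated gaze point:", gaze.predict(pupil[0]).round(1))
```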

Adaptive Signal Separation with Maximum Likelihood

  • Zhao, Yongjian;Jiang, Bin
    • Journal of Information Processing Systems / v.16 no.1 / pp.145-154 / 2020
  • Maximum likelihood (ML) is asymptotically the best estimator as the number of training samples approaches infinity. This paper derives an adaptive algorithm for the blind signal processing problem based on a gradient optimization criterion. A parametric density model is introduced through a parameterized generalized distribution family in the ML framework. After specifying a limited number of parameters, the density of a specific original signal can be approximated automatically by the constructed density function. Consequently, signal separation can be conducted without any prior information about the probability density of the desired original signal. Simulations on classical biomedical signals confirm the performance of the derived technique.
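
A standard instance of the gradient-based ML separation the abstract outlines is the natural-gradient update W <- W + lr * (I - g(y) y^T) W. In the sketch below a fixed tanh score function replaces the paper's parameterized generalized density family, and the sources and mixing matrix are synthetic; all of these are assumptions.

```python
# Sketch of gradient-based maximum-likelihood blind source separation
# (natural-gradient rule): W <- W + lr * (I - g(y) y^T) W. The fixed tanh
# score function stands in for the paper's parameterized generalized density
# family, and the sources and mixing matrix below are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
sources = rng.laplace(scale=1 / np.sqrt(2), size=(2, n))   # super-Gaussian sources
A = np.array([[0.9, 0.4],
              [0.3, 0.8]])                                  # "unknown" mixing matrix
X = A @ sources                                             # observed mixtures

W = np.eye(2)                                               # separating matrix
lr = 0.05
for _ in range(1000):
    Y = W @ X
    g = np.tanh(Y)                                          # score for super-Gaussian densities
    W += lr * (np.eye(2) - (g @ Y.T) / n) @ W               # natural-gradient step

Y = W @ X
corr = np.abs(np.corrcoef(np.vstack([Y, sources]))[:2, 2:]) # recovered vs. true, up to order/sign
print("abs correlation with true sources:\n", np.round(corr, 2))
```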

The Estimation of Link Travel Speed Using Hybrid Neuro-Fuzzy Networks (Hybrid Neuro-Fuzzy Network를 이용한 실시간 주행속도 추정)

  • Hwang, In-Shik;Lee, Hong-Chul
    • Journal of Korean Institute of Industrial Engineers / v.26 no.4 / pp.306-314 / 2000
  • In this paper we present a new approach to estimating link travel speed based on a hybrid neuro-fuzzy network. It combines the fuzzy ART algorithm for structure learning and the backpropagation algorithm for parameter adaptation. First, the fuzzy ART algorithm partitions the input/output space using the training data set in order to construct an initial neuro-fuzzy inference network. After the initial network topology is completed, a backpropagation learning scheme is applied to optimize the parameters of the fuzzy membership functions. The initial neuro-fuzzy network can then be applied to any other link where probe-car data are available; this is realized by the network adaptation and add/modify modules, where a CBR (Case-Based Reasoning) approach is used for network adaptation. Various experiments show that the proposed methodology estimates link travel speed more accurately than the existing method.

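The structure-learning half of the hybrid network, fuzzy ART partitioning of the normalized input/output space into categories that seed the initial fuzzy inference network, can be sketched as below; the subsequent backpropagation tuning of the membership-function parameters is omitted. The vigilance, choice, and learning parameters and the toy traffic-like data are assumptions.

```python
# Sketch of the structure-learning step: fuzzy ART clusters the (normalized)
# input/output samples; each committed category could then seed one fuzzy rule
# whose membership-function parameters are tuned afterwards by backpropagation
# (that tuning step is omitted). Vigilance, choice and learning parameters are
# illustrative assumptions.
import numpy as np

def fuzzy_art(samples, rho=0.75, alpha=0.001, beta=1.0):
    """Return category weights and the category index of each sample.
    Samples must be scaled to [0, 1]; complement coding is applied inside."""
    I = np.hstack([samples, 1.0 - samples])        # complement coding
    weights, assign = [], []
    for x in I:
        # Choice function for every committed category.
        scores = [np.minimum(x, w).sum() / (alpha + w.sum()) for w in weights]
        for j in np.argsort(scores)[::-1]:         # best-matching category first
            match = np.minimum(x, weights[j]).sum() / x.sum()
            if match >= rho:                       # vigilance test passed
                weights[j] = beta * np.minimum(x, weights[j]) + (1 - beta) * weights[j]
                assign.append(j)
                break
        else:                                      # no category matched: commit a new one
            weights.append(x.copy())
            assign.append(len(weights) - 1)
    return weights, np.array(assign)

rng = np.random.default_rng(0)
# Hypothetical normalized (input, output) samples, e.g. [flow, occupancy, speed].
data = np.vstack([rng.normal([0.2, 0.3, 0.8], 0.05, (50, 3)),
                  rng.normal([0.7, 0.8, 0.3], 0.05, (50, 3))]).clip(0, 1)
w, labels = fuzzy_art(data, rho=0.8)
print(f"fuzzy ART committed {len(w)} categories for {len(data)} samples")
```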

Improved Error Backpropagation by Elastic Learning Rate and Online Update (가변학습율과 온라인모드를 이용한 개선된 EBP 알고리즘)

  • Lee, Tae-Seung;Park, Ho-Jin
    • Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.568-570 / 2004
  • The error-backpropagation (EBP) algorithm for training multilayer perceptrons (MLPs) is known for its robustness and economical efficiency. However, the algorithm has difficulty selecting an optimal constant learning rate, which results in non-optimal learning speed and inflexible operation on working data. To remedy this, this paper introduces into the original EBP algorithm an elastic learning rate that guarantees convergence of learning, together with its local realization by online update of the MLP parameters. The results of experiments on a speaker verification system with a Korean speech database are presented and discussed to demonstrate the improvement of the proposed method over the original EBP algorithm in terms of learning speed and flexibility on working data.

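One common way to realize an "elastic" learning rate with online updates is the bold-driver heuristic: grow the rate while the epoch error keeps falling and shrink it when the error rises. The tiny per-pattern backpropagation loop below illustrates that idea on XOR; the network size, the grow/shrink factors, and the cap on the rate are assumptions rather than the paper's exact scheme.

```python
# Sketch of error backpropagation with online (per-pattern) updates and an
# elastic learning rate: the rate grows while the epoch error decreases and
# shrinks when it increases (bold-driver heuristic). Network size, the
# grow/shrink factors, the cap and the XOR task are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)                 # XOR targets

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)            # 2-4-1 MLP
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr, prev_err = 0.5, np.inf
for epoch in range(3000):
    err = 0.0
    for x, t in zip(X, T):                                # online (per-pattern) updates
        h = sigmoid(x @ W1 + b1)
        y = sigmoid(h @ W2 + b2)
        e = y - t
        err += float(e @ e)
        d2 = e * y * (1 - y)                              # output-layer delta
        d1 = (d2 @ W2.T) * h * (1 - h)                    # hidden-layer delta
        W2 -= lr * np.outer(h, d2); b2 -= lr * d2
        W1 -= lr * np.outer(x, d1); b1 -= lr * d1
    # Elastic learning rate: expand while improving (capped), contract otherwise.
    lr = min(lr * 1.05, 1.0) if err < prev_err else lr * 0.5
    prev_err = err

print("learning rate after training:", round(lr, 4))
print("outputs:", sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel().round(2))
```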