• Title/Summary/Keyword: Parameter learning


The Comparative Study of NHPP Extreme Value Distribution Software Reliability Model from the Perspective of Learning Effects (NHPP 극값 분포 소프트웨어 신뢰모형에 대한 학습효과 기법 비교 연구)

  • Kim, Hee Cheul
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.7 no.2
    • /
    • pp.1-8
    • /
    • 2011
  • This study examines software reliability from the perspective of learning effects, in which test managers and test tools become more effective as testing proceeds, using NHPP software reliability models. Finite-failure non-homogeneous Poisson process models are presented with the extreme value distribution, which describes the minimum (or maximum) of a number of samples from various distributions, as the lifetime distribution. The models compare the case in which errors are detected automatically with the case in which the testing manager, drawing on prior experience, sets the learning factor for locating errors precisely. The results confirm that models in which the learning factor exceeds the autonomous error-detection factor are generally more efficient. A numerical example uses time-between-failures data: parameters are estimated by maximum likelihood, the data are screened by trend analysis, and model selection is based on the mean squared error.
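
The paper's exact mean value function and data are not reproduced above, so the Python sketch below is only a rough illustration of the estimation step: a finite-failure NHPP whose mean value function m(t) = a*F(t) uses a Gumbel-type extreme value CDF is fitted to hypothetical failure times by maximum likelihood, and a mean squared error of the kind used for model selection is computed. The failure times, the Gumbel parameterization, and the starting values are assumptions made for illustration.

# Hedged sketch (not the paper's exact model): MLE for a finite-failure NHPP with an
# extreme value (Gumbel-type) lifetime CDF F(t) = exp(-exp(-(t - mu)/beta)).
import numpy as np
from scipy.optimize import minimize

t = np.array([3., 7., 12., 20., 31., 45., 62., 80., 101., 125.])  # hypothetical cumulative failure times
T = 130.0                                                         # end of the observation window

def neg_log_lik(params):
    a, mu, beta = params
    if a <= 0 or beta <= 0:
        return np.inf
    z = (t - mu) / beta
    f_t = np.exp(-z - np.exp(-z)) / beta                 # Gumbel pdf at the failure times
    F_T = np.exp(-np.exp(-(T - mu) / beta))              # Gumbel CDF at the censoring time
    # NHPP log-likelihood: sum_i log(a * f(t_i)) - a * F(T)
    return -(np.sum(np.log(a * f_t)) - a * F_T)

res = minimize(neg_log_lik, x0=[15.0, 50.0, 30.0], method="Nelder-Mead")
a_hat, mu_hat, beta_hat = res.x
print("MLE estimates (a, mu, beta):", res.x)

# Mean squared error between observed and fitted cumulative failure counts
fitted = a_hat * np.exp(-np.exp(-(t - mu_hat) / beta_hat))
mse = np.mean((np.arange(1, len(t) + 1) - fitted) ** 2)
print("MSE:", mse)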

Intelligent Control of Robot Manipulators by Learning (학습을 이용한 로봇 머니퓰레이터용 지능제어)

  • Lee DongHun;Kuc TaeYong;Chung ChaeWook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.4
    • /
    • pp.330-336
    • /
    • 2005
  • An intelligent control method is proposed for the control of rigid robot manipulators that achieves exponential tracking of a repetitive robot trajectory under uncertain operating conditions such as parameter uncertainty and unknown deterministic disturbances. In the learning controller, exponentially stable learning algorithms are combined with stabilizing computed-error feedforward and feedback inputs. It is shown that all error signals in the learning system are bounded and that the repetitive robot motion converges to the desired one exponentially fast with a guaranteed convergence rate. A control system built on an engineering workstation is used to verify the effectiveness of the proposed control scheme.
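
The controller above is specific to the paper; as a generic, hedged illustration of learning control for a repetitive trajectory, the sketch below combines fixed PD feedback with a feedforward input that is refined from the previous trial's tracking error on a unit-mass system. The gains, the learning gain, the trajectory, and the disturbance are illustrative assumptions, not values from the paper.

# Hedged sketch of trial-to-trial learning control on a repetitive trajectory.
import numpy as np

dt, N = 0.01, 200
t = np.arange(N) * dt
qd = 0.5 * (1.0 - np.cos(2 * np.pi * t))     # repetitive desired trajectory (starts at rest)
qd_dot = np.pi * np.sin(2 * np.pi * t)       # its velocity
disturbance = 0.5 * np.cos(2 * np.pi * t)    # unknown deterministic disturbance

kp, kd, gamma = 100.0, 20.0, 30.0            # PD feedback gains and learning gain (illustrative)
u_ff = np.zeros(N)                           # feedforward input learned across trials

for trial in range(30):
    q, qdot = 0.0, 0.0
    e = np.zeros(N)
    for k in range(N):
        e[k] = qd[k] - q
        u = u_ff[k] + kp * e[k] + kd * (qd_dot[k] - qdot)  # learned feedforward + PD feedback
        qddot = u + disturbance[k]                         # unit-mass dynamics with disturbance
        qdot += qddot * dt
        q += qdot * dt
    u_ff += gamma * e                                      # refine the feedforward from this trial's error
    if trial % 10 == 0 or trial == 29:
        print(f"trial {trial:2d}: max |tracking error| = {np.max(np.abs(e)):.4f}")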

Recent Research & Development Trends in Automated Machine Learning (자동 기계학습(AutoML) 기술 동향)

  • Moon, Y.H.;Shin, I.H.;Lee, Y.J.;Min, O.G.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.4
    • /
    • pp.32-42
    • /
    • 2019
  • The performance of machine learning algorithms depends significantly on how the hyperparameter configuration is identified and how the neural network architecture is designed. However, this requires expert knowledge of the relevant task domains and prohibitive computation time. To optimize these two processes with minimal effort, many studies have investigated automated machine learning in recent years. This paper reviews the conventional random, grid, and Bayesian methods for hyperparameter optimization (HPO) and addresses recent approaches that speed up the identification of the best set of hyperparameters. We further investigate existing neural architecture search (NAS) techniques based on evolutionary algorithms, reinforcement learning, and gradient-based methods, and analyze their theoretical characteristics and performance results. Moreover, future research directions and challenges in HPO and NAS are described.
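
As a small, hedged illustration of the conventional HPO baselines the survey reviews, the sketch below runs grid search and random search over two SVM hyperparameters with scikit-learn. The model, search spaces, and budgets are arbitrary choices; Bayesian HPO and NAS are not shown.

# Hedged sketch: grid search vs. random search for hyperparameter optimization.
import numpy as np
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [1e-3, 1e-2, 1e-1]}, cv=3)
grid.fit(X, y)
print("grid search   :", grid.best_params_, round(grid.best_score_, 3))

rand = RandomizedSearchCV(SVC(),
                          {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-4, 1e0)},
                          n_iter=9, cv=3, random_state=0)
rand.fit(X, y)
print("random search :", rand.best_params_, round(rand.best_score_, 3))

Random search evaluates the same number of configurations as the 3x3 grid here but samples them from continuous distributions, which is why it often finds good settings with fewer trials when only a few hyperparameters actually matter.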

Effects of Hyper-parameters and Dataset on CNN Training

  • Nguyen, Huu Nhan;Lee, Chanho
    • Journal of IKEEE
    • /
    • v.22 no.1
    • /
    • pp.14-20
    • /
    • 2018
  • The purpose of training a convolutional neural network (CNN) is to obtain weight factors that give high classification accuracy. The initial values of the hyper-parameters affect the training results, so it is important to train a CNN with a suitable set of hyper-parameters: the learning rate, the batch size, the initialization of the weight factors, and the optimizer. We investigate the effect of a single hyper-parameter while the others are fixed, in order to obtain a hyper-parameter set that gives higher classification accuracy and shorter training time, using a proposed VGG-like CNN since VGG is widely used. The CNN is trained on four datasets: CIFAR10, CIFAR100, GTSRB, and DSDL-DB. The effects of normalization and data transformation on the datasets are also investigated, and a training scheme using merged datasets is proposed.
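
A minimal sketch of the one-hyper-parameter-at-a-time procedure, using a small logistic classifier on synthetic data in place of the paper's VGG-like CNN and its datasets: only the learning rate varies, while the batch size, weight initialization, and optimizer (plain mini-batch SGD) stay fixed. All values below are illustrative assumptions.

# Hedged sketch: vary one hyper-parameter (learning rate) with everything else fixed.
import numpy as np

data_rng = np.random.default_rng(0)
X = data_rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)    # synthetic binary labels

def train(learning_rate, batch_size=64, epochs=20):
    rng = np.random.default_rng(42)                # same seed: identical init and batch order for every run
    w = rng.normal(scale=0.01, size=20)
    b = 0.0
    for _ in range(epochs):
        order = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            p = 1.0 / (1.0 + np.exp(-(X[idx] @ w + b)))        # sigmoid predictions
            w -= learning_rate * X[idx].T @ (p - y[idx]) / len(idx)
            b -= learning_rate * np.mean(p - y[idx])
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    acc = np.mean((p > 0.5) == y)
    return loss, acc

for lr in [0.001, 0.01, 0.1, 1.0]:                 # only the learning rate changes between runs
    loss, acc = train(lr)
    print(f"learning rate {lr:>6}: loss {loss:.3f}, accuracy {acc:.3f}")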

Butter-Worth analog filter parameter estimation using the genetic algorithm (유전자 알고리듬을 이용한 Butter-Worth 아날로그 필터의 파라미터 추정)

  • Son, Jun-Hyeok;Seo, So-Hyeok
    • Proceedings of the KIEE Conference
    • /
    • 2005.07d
    • /
    • pp.2513-2515
    • /
    • 2005
  • Genetic algorithm techniques have recently been widely used in adaptive and control schemes for production systems. However, they generally require a long learning time when applied to control systems, and the physical meaning of the solution a genetic algorithm constructs is not always obvious. Here the genetic algorithm is used as a learning method for identifying the process dynamics of a Butter-Worth analog filter, and it is applied to the parameter identification problem for linear and nonlinear filters. The goal of this paper is to estimate the parameters of a Butter-Worth analog filter using the genetic algorithm.
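
As a hedged illustration of the approach, the sketch below uses a simple genetic algorithm (selection, averaging crossover, Gaussian mutation) to recover the cutoff frequency of a 4th-order analog Butterworth filter from its magnitude response. The filter order, population settings, and fitness function are assumptions rather than the paper's configuration.

# Hedged sketch: genetic algorithm estimating a Butterworth filter parameter.
import numpy as np
from scipy.signal import butter, freqs

w = np.logspace(-1, 2, 200)                        # frequency grid (rad/s)
true_wc = 10.0                                     # "unknown" cutoff to be recovered
b_t, a_t = butter(4, true_wc, btype='low', analog=True)
_, h_target = freqs(b_t, a_t, worN=w)
target = np.abs(h_target)                          # target magnitude response

def fitness(wc):
    b, a = butter(4, wc, btype='low', analog=True)
    _, h = freqs(b, a, worN=w)
    return -np.mean((np.abs(h) - target) ** 2)     # higher is better

rng = np.random.default_rng(0)
pop = rng.uniform(0.5, 50.0, size=30)              # initial population of cutoff candidates
for gen in range(40):
    scores = np.array([fitness(wc) for wc in pop])
    parents = pop[np.argsort(scores)[-10:]]        # selection: keep the 10 fittest
    p1 = rng.choice(parents, size=20)
    p2 = rng.choice(parents, size=20)
    children = 0.5 * (p1 + p2) + rng.normal(0, 0.5, size=20)   # crossover + Gaussian mutation
    pop = np.concatenate([parents, np.clip(children, 0.1, 100.0)])

best = pop[np.argmax([fitness(wc) for wc in pop])]
print(f"estimated cutoff: {best:.3f} rad/s (true value {true_wc})")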


Hyper-parameter Optimization for Monte Carlo Tree Search using Self-play

  • Lee, Jin-Seon;Oh, Il-Seok
    • Smart Media Journal
    • /
    • v.9 no.4
    • /
    • pp.36-43
    • /
    • 2020
  • The Monte Carlo tree search (MCTS) is a popular method for implementing an intelligent game program. It has several hyper-parameters that require optimization to achieve the best performance, and because of the stochastic nature of MCTS this optimization is difficult. This paper uses the self-play capability of an MCTS-based game program to optimize the hyper-parameters. It seeks a winner path over the hyper-parameter space while performing self-play, and the top-q longest winners on the winner path compete for the final winner. Experiments using the 15-15-5 game (known in Korean as Omok) showed promising results.
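
A hedged sketch of the outer self-play loop only: candidate values of the UCT exploration constant challenge the current incumbent, and accepted challengers form a winner path. The play_game function is a placeholder standing in for a full MCTS-versus-MCTS game, and the perturbation scale, match length, and acceptance threshold are illustrative choices rather than the paper's method.

# Hedged sketch: hyper-parameter search driven by self-play outcomes.
import random

random.seed(0)

def play_game(c_a, c_b, optimum=1.4):
    # Placeholder for an MCTS self-play game; biases wins toward the value
    # nearer an assumed optimum. Returns True if player A wins.
    pa = 1.0 / (1.0 + abs(c_a - optimum))
    pb = 1.0 / (1.0 + abs(c_b - optimum))
    return random.random() < pa / (pa + pb)

best_c = 0.5                                                  # incumbent exploration constant
winner_path = [best_c]
for _ in range(50):
    challenger = max(0.1, best_c + random.gauss(0.0, 0.3))    # perturbed candidate value
    wins = sum(play_game(challenger, best_c) for _ in range(30))
    if wins > 18:                                             # challenger must clearly beat the incumbent
        best_c = challenger
        winner_path.append(best_c)

print("winner path:", [round(c, 2) for c in winner_path])
print("final exploration constant:", round(best_c, 2))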

Lie Detection Technique using Video from the Ratio of Change in the Appearance

  • Hossain, S.M. Emdad;Fageeri, Sallam Osman;Soosaimanickam, Arockiasamy;Kausar, Mohammad Abu;Said, Aiman Moyaid
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.7
    • /
    • pp.165-170
    • /
    • 2022
  • Lying is a nuisance to everyone, and even liars know it, yet they keep lying; people are often unsure how to detect a lie when it happens. This research aims to establish a platform that identifies liars through video analysis, specifically by calculating the ratio of change in their facial appearance while they lie. The platform is developed using a machine learning algorithm together with a dynamic classifier. For the experimental analysis, the dataset is processed along two classes (people lying and people telling the truth), and the facial-appearance parameters of both are stored for later identification. Standard parameters are then built for truthful speakers and for liars, with the goal that these standard parameters can diagnose a liar without pre-captured data.
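
The abstract gives no implementation details, so the sketch below only illustrates the general idea of a ratio-of-change feature computed from consecutive frames and passed to a classifier; the synthetic frames and labels, the feature definition, and the logistic classifier are all stand-ins.

# Hedged sketch: frame-to-frame appearance-change ratio as a classification feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def change_ratio(frames):
    # Mean frame-to-frame absolute change, relative to overall brightness.
    diffs = np.abs(np.diff(frames, axis=0))
    return diffs.mean() / (frames.mean() + 1e-8)

def make_clip(motion_scale):
    base = rng.uniform(100, 150, size=(32, 32))                   # static "face" image for the clip
    return base + rng.normal(0, motion_scale, size=(30, 32, 32))  # per-frame appearance change

# Synthetic clips: 20 low-change ("truthful") and 20 high-change ("lying") examples
clips = [make_clip(s) for s in ([2] * 20 + [8] * 20)]
X = np.array([[change_ratio(c)] for c in clips])
y = np.array([0] * 20 + [1] * 20)

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))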

Supervised Competitive Learning Neural Network with Flexible Output Layer

  • Cho, Seong-won
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.7
    • /
    • pp.675-679
    • /
    • 2001
  • In this paper, we present a new competitive learning algorithm called Dynamic Competitive Learning (DCL). DCL is a supervised learning method that dynamically generates output neurons and automatically initializes their weight vectors from training patterns. It introduces a new parameter called LOG (Limit of Grade) to decide whether an output neuron is created. If the class of at least one of the LOG nearest output neurons is the same as the class of the present training pattern, DCL adjusts the weight vector associated with that output neuron to learn the pattern. If the classes of all the nearest output neurons differ from the class of the training pattern, a new output neuron is created and the training pattern is used to initialize its weight vector. The proposed method differs significantly from previous competitive learning algorithms in that the neuron selected for learning is not limited to the winner and the output neurons are generated dynamically during learning. In addition, the algorithm has a small number of parameters, which are easy to determine and to apply to real-world problems. Experimental results for pattern recognition on remote sensing data and handwritten numeral data indicate the superiority of DCL over conventional competitive learning methods.
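
A minimal sketch of the rule described above, with the exact weight update and the choice among same-class neighbours filled in as assumptions; the toy data and parameter values are illustrative, not the paper's experiments.

# Hedged sketch of a DCL-style rule: update a nearby same-class neuron or create a new one.
import numpy as np

def train_dcl(patterns, labels, log_k=3, lr=0.1, epochs=5):
    weights, classes = [], []
    for _ in range(epochs):
        for x, c in zip(patterns, labels):
            if not weights:
                weights.append(x.copy()); classes.append(c); continue
            d = np.linalg.norm(np.array(weights) - x, axis=1)
            nearest = np.argsort(d)[:log_k]              # the LOG nearest output neurons
            same = [i for i in nearest if classes[i] == c]
            if same:
                j = min(same, key=lambda i: d[i])        # assumed: move the closest same-class neuron
                weights[j] += lr * (x - weights[j])
            else:                                        # no same-class neuron among the LOG nearest
                weights.append(x.copy()); classes.append(c)
    return np.array(weights), np.array(classes)

def predict(weights, classes, x):
    return classes[np.argmin(np.linalg.norm(weights - x, axis=1))]

# Toy usage with two Gaussian classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W, C = train_dcl(X, y)
acc = np.mean([predict(W, C, x) == t for x, t in zip(X, y)])
print(f"{len(W)} output neurons, training accuracy {acc:.2f}")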


Deep Learning in MR Image Processing

  • Lee, Doohee;Lee, Jingu;Ko, Jingyu;Yoon, Jaeyeon;Ryu, Kanghyun;Nam, Yoonho
    • Investigative Magnetic Resonance Imaging
    • /
    • v.23 no.2
    • /
    • pp.81-99
    • /
    • 2019
  • Recently, deep learning methods have shown great potential in various tasks that involve handling large amounts of digital data. In the field of MR imaging research, deep learning methods are also rapidly being applied in a wide range of areas to complement or replace traditional model-based methods. Deep learning methods have shown remarkable improvements in several MR image processing areas such as image reconstruction, image quality improvement, parameter mapping, image contrast conversion, and image segmentation. With the current rapid development of deep learning technologies, the importance of the role of deep learning in MR imaging research appears to be growing. In this article, we introduce the basic concepts of deep learning and review recent studies on various MR image processing applications.

Comparing the Performance of 17 Machine Learning Models in Predicting Human Population Growth of Countries

  • Otoom, Mohammad Mahmood
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.1
    • /
    • pp.220-225
    • /
    • 2021
  • The human population growth rate is an important parameter for real-world planning. Common approaches rely on fixed parameters such as population size, mortality rate, and fertility rate, which are collected historically to determine a region's growth rate, and the literature does not provide a solution for areas with no historical records. In such areas machine learning can solve the problem, but the multitude of machine learning algorithms makes it difficult to determine the best approach, and missing features are a further common real-world problem. It is therefore essential to compare machine learning techniques and select those that perform best and most robustly in the presence of missing features. This study compares the performance of 17 machine learning techniques (base learners and ensemble learners) in predicting the human population growth rate of countries. Among the 17 techniques, random forest outperformed all the others both in predictive performance and in robustness to missing features. The study thus demonstrates and compares machine learning techniques for predicting the population growth rate in settings where historical data and feature information are not available, and identifies the best-performing algorithm for this task.
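
As a scaled-down, hedged illustration of this kind of comparison, the sketch below scores three scikit-learn regressors on synthetic data with and without randomly missing feature values (handled here by mean imputation). The data, the 20% missing rate, and the three-model list are stand-ins for the paper's 17-model setup and its population dataset.

# Hedged sketch: comparing regressors on full data and on data with missing features.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=400, n_features=10, noise=10.0, random_state=0)

rng = np.random.default_rng(0)
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.2] = np.nan                  # knock out 20% of feature values
X_imputed = SimpleImputer(strategy="mean").fit_transform(X_missing)

models = {
    "random forest": RandomForestRegressor(random_state=0),
    "linear regression": LinearRegression(),
    "k-nearest neighbors": KNeighborsRegressor(),
}
for name, model in models.items():
    full = cross_val_score(model, X, y, cv=5).mean()           # default scoring: R^2
    miss = cross_val_score(model, X_imputed, y, cv=5).mean()
    print(f"{name:20s}  R2 full: {full:.3f}   R2 with missing features: {miss:.3f}")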