Title/Summary/Keyword: Learning Parameter


Functions of Chaos Neuron Models with a Feedback Slaving Principle

  • Inoue, Masayoshi
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1009-1012 / 1993
  • Associative memory, optimization-problem solving, Boltzmann-machine-style learning, and back-propagation learning in our chaos neuron models are reviewed, and some new results are presented. In each model, the microscopic rule (a parameter of the chaotic system within a neuron) is subject to the macroscopic state of the network. This feedback, together with the chaotic dynamics, is the essential mechanism of our models, and their roles are briefly discussed. A minimal illustration of the feedback follows.
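To make the feedback slaving principle concrete, the sketch below iterates logistic-map neurons whose bifurcation parameter is slaved to the network's mean activity. The map, the coupling constant `beta`, and the update rule are illustrative assumptions, not the paper's specific model.

```python
import numpy as np

def chaos_network_step(x, a0=3.9, beta=0.5):
    """One update of N logistic-map neurons whose microscopic parameter
    is slaved to the macroscopic state (illustrative, not the paper's model)."""
    m = x.mean()                 # macroscopic state: mean network activity
    a = a0 - beta * m            # microscopic rule depends on the macro state
    return a * x * (1.0 - x)     # chaotic logistic-map dynamics per neuron

x = np.random.rand(100)          # initial neuron states in (0, 1)
for _ in range(1000):
    x = chaos_network_step(x)
print("mean activity:", x.mean())
```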


An On-line Construction of Generalized RBF Networks for System Modeling (시스템 모델링을 위한 일반화된 RBF 신경회로망의 온라인 구성)

  • Kwon, Oh-Shin;Kim, Hyong-Suk;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea CI / v.37 no.1 / pp.32-42 / 2000
  • This paper presents an on-line learning algorithm for the sequential construction of generalized radial basis function networks (GRBFNs) to model nonlinear systems from empirical data. The GRBFN, an extended form of the standard radial basis function (RBF) network with constant weights, is an architecture capable of representing nonlinear systems by smoothly integrating local linear models. The proposed learning algorithm uses a two-stage scheme that performs both structure learning and parameter learning. The structure-learning stage constructs the GRBFN model using two construction criteria, a training-error criterion and a Mahalanobis-distance criterion, to assign new hidden units and local linear models for the given training data. In the parameter-learning stage the network parameters are updated using the gradient-descent rule. To evaluate the modeling performance of the proposed algorithm, simulation results on two well-known benchmarks are discussed. A skeletal version of the two-stage loop is sketched below.
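The two-stage loop can be pictured as follows. This is a skeletal sketch under assumptions: Euclidean distance stands in for the paper's Mahalanobis criterion, the thresholds `e_min`/`d_min` and the isotropic Gaussian width are placeholders, and only the local linear models are updated in the gradient step.

```python
import numpy as np

class GRBFN:
    """Skeletal online GRBFN builder (illustrative, not the paper's exact rules)."""
    def __init__(self, dim, e_min=0.1, d_min=1.0, lr=0.05, width=1.0):
        self.dim, self.e_min, self.d_min, self.lr, self.width = dim, e_min, d_min, lr, width
        self.centers, self.W = [], []    # W[i]: local linear model (dim + 1 coeffs)

    def _phi(self, x):
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                         for c in self.centers])

    def predict(self, x):
        if not self.centers:
            return 0.0
        phi = self._phi(x)
        local = np.array([w @ np.append(x, 1.0) for w in self.W])
        return float(phi @ local / (phi.sum() + 1e-12))     # blend of local linear models

    def update(self, x, y):
        e = y - self.predict(x)
        d = min((np.linalg.norm(x - c) for c in self.centers), default=np.inf)
        if abs(e) > self.e_min and d > self.d_min:          # structure learning: add a unit
            self.centers.append(x.copy())
            self.W.append(np.append(np.zeros(self.dim), y)) # new local model, bias set to y
        elif self.centers:                                  # parameter learning: gradient step
            phi = self._phi(x); s = phi.sum() + 1e-12
            xe = np.append(x, 1.0)
            for i in range(len(self.W)):
                self.W[i] += self.lr * e * (phi[i] / s) * xe
```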


Online Learning of Bayesian Network Parameters for Incomplete Data of Real World (현실 세계의 불완전한 데이타를 위한 베이지안 네트워크 파라메터의 온라인 학습)

  • Lim, Sung-Soo;Cho, Sung-Bae
    • Journal of KIISE: Computer Systems and Theory / v.33 no.12 / pp.885-893 / 2006
  • The Bayesian network (BN) has emerged in recent years as a powerful technique for handling uncertainty in complex domains. Parameter learning of a BN, finding the most appropriate parameters from a given data set, has been investigated as a way to reduce the time and effort of designing BNs. Off-line learning requires much time and effort to gather enough data, and because the real world is uncertain, complete data are hard to obtain. In this paper, we propose an online method for learning Bayesian network parameters from incomplete data. It provides greater flexibility by learning from incomplete data and greater adaptability to the environment through online learning. Comparison with the Voting EM algorithm proposed by Cohen et al. confirms that the proposed method matches Voting EM on complete data sets and outperforms it on incomplete data sets. A Voting-EM-style update is sketched below.
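For reference, a Voting-EM-style online update for a single conditional-probability-table row looks roughly like this (the learning rate `eta` and the renormalization are illustrative). With complete data the target is a one-hot indicator of the observed state; with incomplete data it is the inferred posterior, which is the case sketched here.

```python
import numpy as np

def voting_em_update(theta, posterior, eta=0.05):
    """One online step for a single CPT row (illustrative sketch).

    theta: current distribution over the variable's states (sums to 1).
    posterior: P(state | evidence) from BN inference; a one-hot
    indicator when the variable is fully observed.
    """
    theta = (1.0 - eta) * theta + eta * posterior
    return theta / theta.sum()       # guard against numerical drift

theta = np.array([0.5, 0.3, 0.2])
# incomplete case: the variable is hidden, inference yields soft evidence
theta = voting_em_update(theta, np.array([0.7, 0.2, 0.1]))
print(theta)
```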

Deep Learning in Genomic and Medical Image Data Analysis: Challenges and Approaches

  • Yu, Ning;Yu, Zeng;Gu, Feng;Li, Tianrui;Tian, Xinmin;Pan, Yi
    • Journal of Information Processing Systems / v.13 no.2 / pp.204-214 / 2017
  • Artificial intelligence, especially deep learning technology, is penetrating the majority of research areas, including bioinformatics. However, deep learning has some limitations, such as the complexity of parameter tuning, architecture design, and so forth. In this study, we analyze these issues and challenges with regard to its applications in bioinformatics, particularly genomic analysis and medical image analytics, and give the corresponding approaches and solutions. Although these solutions are mostly rules of thumb, they can effectively handle the issues connected to training learning machines. We also explore the trends of deep learning technology by examining several directions, such as automation, scalability, individuality, mobility, integration, and intelligence warehousing.

On-line Bayesian Learning based on Wireless Sensor Network (무선 센서 네트워크에 기반한 온라인 베이지안 학습)

  • Lee, Ho-Suk
    • Proceedings of the Korean Information Science Society Conference / 2007.06d / pp.105-108 / 2007
  • Bayesian learning networks are employed in diverse applications. This paper discusses a Bayesian learning network algorithm structure that can be applied in the wireless sensor network environment for various online applications. First, it discusses Bayesian parameter learning, Bayesian DAG structure learning, the characteristics of wireless sensor networks, and data gathering in a wireless sensor network. Second, it discusses the important considerations for an online Bayesian learning network and the conceptual structure of the learning network algorithm. A minimal sketch of streaming Bayesian parameter learning follows.
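As a concrete anchor for the Bayesian parameter learning discussed above, here is a minimal sketch, assuming conjugate Dirichlet updating of conditional probability tables from streamed sensor readings; it is not the paper's algorithm.

```python
from collections import defaultdict

class StreamingCPT:
    """Online Bayesian estimate of P(X | parents) via Dirichlet pseudo-counts
    (illustrative sketch; readings are absorbed one at a time, no data stored)."""
    def __init__(self, n_states, alpha=1.0):
        self.n_states, self.alpha = n_states, alpha
        self.counts = defaultdict(lambda: [0.0] * n_states)

    def observe(self, parent_values, x):
        self.counts[tuple(parent_values)][x] += 1.0    # one sensor reading

    def prob(self, parent_values, x):
        row = self.counts[tuple(parent_values)]
        return (row[x] + self.alpha) / (sum(row) + self.alpha * self.n_states)

cpt = StreamingCPT(n_states=2)
for parents in [(0,), (0,), (1,)]:
    cpt.observe(parents, x=1)
print(cpt.prob((0,), 1))    # posterior-mean estimate after streaming updates
```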


Design of a ParamHub for Machine Learning in a Distributed Cloud Environment

  • Su-Yeon Kim;Seok-Jae Moon
    • International Journal of Internet, Broadcasting and Communication / v.16 no.2 / pp.161-168 / 2024
  • As the size of big-data models grows, distributed training is becoming an essential element of large-scale machine learning. In this paper, we propose ParamHub for distributed data training. During training, this agent uses the provided data to adjust the model's parameters and settings, such as the model structure, learning algorithm, hyperparameters, and bias, aiming to minimize the error between the model's predictions and the actual values. Furthermore, it operates autonomously, collecting and updating data in a distributed environment, thereby reducing the load-balancing burden that arises in a centralized system. Through communication between agents, resource management and learning processes can be coordinated, enabling efficient management of distributed data and resources. This approach enhances the scalability and stability of distributed machine learning systems while remaining flexible enough to be applied in various learning environments. A parameter-server-style sketch of this pattern follows.
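The abstract does not specify ParamHub's interface, so the following is a generic parameter-server sketch under assumed names: workers `push` local gradients, and `pull` returns parameters after the hub averages the buffered gradients and takes one step.

```python
import numpy as np

class ParamServer:
    """Generic parameter-server sketch (method names and the averaging
    aggregation rule are assumptions; ParamHub's actual protocol is not
    given in the abstract)."""
    def __init__(self, dim, lr=0.1):
        self.theta = np.zeros(dim)
        self.lr = lr
        self._buffer = []

    def push(self, grad):
        self._buffer.append(grad)        # a worker reports a local gradient

    def pull(self):
        if self._buffer:                 # aggregate by averaging, then step
            self.theta -= self.lr * np.mean(self._buffer, axis=0)
            self._buffer.clear()
        return self.theta.copy()

server = ParamServer(dim=3)
for worker_grad in (np.array([1.0, 0.0, -1.0]), np.array([0.0, 2.0, 0.0])):
    server.push(worker_grad)
print(server.pull())                     # parameters after one synchronized step
```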

Neural Network Parameter Estimation of IPMSM Drive using AFLC (AFLC를 이용한 IPMSM 드라이브의 NN 파라미터 추정)

  • Ko, Jae-Sub;Choi, Jung-Sik;Chung, Dong-Hwa
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.2 / pp.293-300 / 2011
  • A number of techniques have been developed for estimating speed or position in motor drives. The accuracy of these techniques is affected by variation of motor parameters such as the stator resistance, stator inductance, or torque constant. This paper proposes a neural-network-based estimator for torque and stator resistance, together with an adaptive fuzzy learning controller (AFLC) for speed control in IPMSM drives. The AFLC changes its fuzzy rule base through a rule-base modifier for robust control of the IPMSM. The neural weights are initialized randomly, and a model reference algorithm adjusts them to give optimal estimates. The neural network estimator tracks the varying parameters quite accurately at different speeds with consistent performance, and it has been applied to minimizing slot and flux-linkage torque ripple of the IPMSM. The validity of the proposed parameter estimator and AFLC is confirmed by comparison with a conventional algorithm. A model-reference weight update is sketched below.
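The model-reference adjustment can be illustrated with a linear-in-weights estimator whose weights are nudged toward a reference model's output; the single-layer form, learning rate, and stand-in signals below are assumptions for illustration only.

```python
import numpy as np

def mras_weight_step(w, x, y_ref, lr=0.01):
    """One model-reference adaptive step for a linear-in-weights estimator.
    w: weights, x: measured drive signals (features), y_ref: reference-model
    output the estimate should track. Illustrative, not the paper's network."""
    y_hat = w @ x                        # estimator output (e.g., stator resistance)
    e = y_ref - y_hat                    # error against the reference model
    return w + lr * e * x                # gradient step toward the reference

w = np.random.randn(3) * 0.1             # weights initialized randomly
for _ in range(500):
    x = np.random.randn(3)               # stand-in for sampled drive signals
    y_ref = np.array([0.5, -0.2, 0.1]) @ x
    w = mras_weight_step(w, x, y_ref)
print(w)                                  # converges toward [0.5, -0.2, 0.1]
```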

Improving Accuracy over Parameter through Channel Pruning based on Neural Architecture Search in Object Detection (물체 탐지에서 Neural Architecture Search 기반 Channel Pruning 을 통한 Parameter 수 대비 정확도 개선)

  • Jaehyeon Roh;Seunghyun Yu;Seungwook Son;Yongwha Chung
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.512-513 / 2023
  • In CNN-based deep learning, models use many parameters to raise object-detection accuracy. Using many parameters raises the minimum hardware requirements and lowers processing speed, so various pruning techniques are used to reduce the parameter count with minimal loss of accuracy. In this study we used the Artificial Bee Colony (ABC) algorithm, a Neural Architecture Search (NAS)-based channel pruning method, and, unlike previous NAS-based channel pruning papers that experimented only on classification tasks, we applied NAS-based channel pruning to object detection as well, confirming that accuracy per parameter count improves over conventional uniform pruning. An ABC-style search loop is sketched below.
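The shape of an ABC search over per-layer channel-keep ratios is sketched below. The toy fitness function, bounds, bee counts, and the omission of the onlooker phase are simplifying assumptions; a real run would prune, fine-tune, and evaluate the detector inside `fitness`.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LAYERS, N_BEES, LIMIT = 4, 10, 5

def fitness(keep_ratios):
    """Stand-in for accuracy-per-parameter of the pruned detector.
    A real run would prune, fine-tune, and evaluate the model here."""
    return -np.sum((keep_ratios - 0.6) ** 2)   # toy objective, optimum at 0.6

# food sources: candidate per-layer keep ratios in [0.1, 1.0]
food = rng.uniform(0.1, 1.0, size=(N_BEES, N_LAYERS))
trials = np.zeros(N_BEES)

for _ in range(100):
    for i in range(N_BEES):                    # employed-bee phase
        j = rng.integers(N_LAYERS)
        k = rng.choice([b for b in range(N_BEES) if b != i])
        cand = food[i].copy()
        cand[j] = np.clip(cand[j] + rng.uniform(-1, 1) * (cand[j] - food[k, j]), 0.1, 1.0)
        if fitness(cand) > fitness(food[i]):
            food[i], trials[i] = cand, 0
        else:
            trials[i] += 1
    for i in range(N_BEES):                    # scout phase: abandon stale sources
        if trials[i] > LIMIT:
            food[i] = rng.uniform(0.1, 1.0, N_LAYERS)
            trials[i] = 0

best = food[np.argmax([fitness(f) for f in food])]
print("best per-layer keep ratios:", best.round(2))
```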

Smoothing parameter selection in semi-supervised learning (준지도 학습의 모수 선택에 관한 연구)

  • Seok, Kyungha
    • Journal of the Korean Data and Information Science Society / v.27 no.4 / pp.993-1000 / 2016
  • Semi-supervised learning makes it easy to use unlabeled data in supervised tasks such as classification. Applying semi-supervised learning to regression analysis, we propose two methods for better regression-function estimation. The proposed methods allow different marginal densities of the independent variables and different smoothing parameters for the unlabeled and labeled data. We show that an overfitted pilot estimator should be used to achieve the fastest convergence rate, and that unlabeled data may help improve the convergence rate when the smoothing parameters are well estimated. We also derive the conditions on the smoothing parameters that achieve the optimal convergence rate. A two-bandwidth estimator is sketched below.
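As a concrete picture of the two-smoothing-parameter setup, here is a Nadaraya-Watson-style sketch: a deliberately small pilot bandwidth pseudo-labels the unlabeled points, and a separate bandwidth smooths the combined sample. The estimator form and bandwidth values are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def nw_estimate(x0, X, y, h):
    """Nadaraya-Watson regression with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x0 - X) / h) ** 2)
    return (w @ y) / w.sum()

rng = np.random.default_rng(1)
Xl = rng.uniform(0, 1, 30)
yl = np.sin(2 * np.pi * Xl) + rng.normal(0, 0.2, 30)   # small labeled sample
Xu = rng.uniform(0, 1, 300)                            # larger unlabeled sample

h_pilot = 0.02                                         # deliberately small: overfitted pilot
yu = np.array([nw_estimate(x, Xl, yl, h_pilot) for x in Xu])   # pseudo-labels

X = np.concatenate([Xl, Xu]); y = np.concatenate([yl, yu])
h_final = 0.1                                          # separate smoothing parameter
print(nw_estimate(0.25, X, y, h_final))                # near sin(pi/2) = 1
```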

Noise Canceler Based on Deep Learning Using Discrete Wavelet Transform (이산 Wavelet 변환을 이용한 딥러닝 기반 잡음제거기)

  • Haeng-Woo Lee
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.18 no.6 / pp.1103-1108 / 2023
  • In this paper, we propose a new algorithm for attenuating background noise in acoustic signals. It improves noise-attenuation performance by applying an FNN (fully-connected neural network) deep learning model, instead of the conventional adaptive filter, after a wavelet transform. After wavelet-transforming the input signal over each short time frame, noise is removed from a single noisy audio input using a 1024-1024-512-neuron FNN. The transform maps the time-domain voice signal into the time-frequency domain, where the noise characteristics are well expressed, and the network learns, by supervised training against the transform coefficients of the clean voice signal, to predict the voice effectively in noisy environments. To verify the performance of the proposed noise-reduction system, a simulation program was written using the TensorFlow and Keras libraries. In the experiments, the proposed deep learning algorithm improved the mean square error (MSE) by 30% compared with the conventional adaptive filter and by 20% compared with using the STFT (short-time Fourier transform). A minimal pipeline sketch follows.
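A minimal version of the transform-then-denoise pipeline is sketched below. The hidden-layer sizes follow the abstract; the wavelet choice (`db4`, level 3), frame length, and toy training pair are assumptions.

```python
import numpy as np
import pywt
import tensorflow as tf

FRAME = 1024                          # samples per short-time frame (assumed)
WAVELET, LEVEL = "db4", 3             # wavelet choice is an assumption

def to_coeffs(frame):
    """Level-3 DWT of one frame, flattened to a fixed-length vector."""
    return np.concatenate(pywt.wavedec(frame, WAVELET, level=LEVEL))

def to_frame(vec, template_frame):
    """Unflatten a coefficient vector and inverse-transform it."""
    sizes = [len(c) for c in pywt.wavedec(template_frame, WAVELET, level=LEVEL)]
    parts = np.split(vec, np.cumsum(sizes)[:-1])
    return pywt.waverec(list(parts), WAVELET)[:len(template_frame)]

dim = to_coeffs(np.zeros(FRAME)).size
model = tf.keras.Sequential([         # 1024-1024-512 hidden neurons, per the abstract
    tf.keras.layers.Input(shape=(dim,)),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(dim),       # regress the clean-signal DWT coefficients
])
model.compile(optimizer="adam", loss="mse")

# toy supervised pair: DWT of a noisy frame -> DWT of the clean frame
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 / 16000 * np.arange(FRAME))
noisy = clean + rng.normal(0.0, 0.3, FRAME)
model.fit(to_coeffs(noisy)[None, :], to_coeffs(clean)[None, :], epochs=5, verbose=0)

denoised = to_frame(model.predict(to_coeffs(noisy)[None, :], verbose=0)[0], noisy)
```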