• Title/Summary/Keyword: K2-learning algorithm

Search results: 542

Neural networks optimization for multi-dimensional digital signal processing in IoT devices (IoT 디바이스에서 다차원 디지털 신호 처리를 위한 신경망 최적화)

  • Choi, KwonTaeg
    • Journal of Digital Contents Society
    • /
    • v.18 no.6
    • /
    • pp.1165-1173
    • /
    • 2017
  • Deep learning, one of the most widely used machine learning approaches, has proven its applicability in various applications and is widely used in digital signal processing. However, it is difficult to apply deep learning to IoT devices with limited CPU performance and memory capacity, because the large number of training samples requires a lot of memory and computation time. In particular, if an Arduino with a very small memory capacity of 2 KB to 8 KB is used, there are many limitations in implementing the algorithm. In this paper, we propose a method to optimize the ELM algorithm, which has proven to be accurate and efficient in various fields, on an Arduino board. Experiments show that multi-class learning is possible for up to 15-dimensional data on an Arduino UNO with 2 KB of memory and up to 42-dimensional data on an Arduino MEGA with 8 KB of memory. To evaluate the experiment, we demonstrate the effectiveness of the proposed algorithm using data sets generated with Gaussian mixture modeling and public UCI data sets.
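
The abstract refers to the extreme learning machine (ELM): hidden-layer weights are drawn at random and only the output weights are solved in closed form, which is why it fits on memory-limited boards. Below is a minimal generic NumPy sketch of that training scheme, not the paper's Arduino implementation; the data dimensions and class count are illustrative placeholders.

```python
import numpy as np

def elm_train(X, Y, n_hidden=20, seed=0):
    """Minimal ELM: random hidden weights, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random input weights
    b = rng.standard_normal(n_hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                     # output weights via pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy usage: 3-class one-hot targets on random 15-dimensional data
X = np.random.default_rng(1).standard_normal((60, 15))
Y = np.eye(3)[np.random.default_rng(2).integers(0, 3, 60)]
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```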

Modeling the Properties of the PECVD Silicon Dioxide Films Using Polynomial Neural Networks

  • Han, Seung-Soo;Song, Kyung-Bin
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1998.10a
    • /
    • pp.195-200
    • /
    • 1998
  • Since the neural network was introduced, significant progress has been made in data handling and learning algorithms. Currently, the most popular learning algorithm in neural network training is the feed-forward error back-propagation (FFEBP) algorithm. Aside from the success of the FFEBP algorithm, polynomial neural network (PNN) learning has been proposed as a new learning method. PNN learning is a self-organizing process designed to determine an appropriate set of Ivakhnenko polynomials that allows the activation of many neurons to achieve a desired state of activation that mimics a given set of sampled patterns. These neurons are interconnected in such a way that the knowledge is stored in the Ivakhnenko coefficients. In this paper, a PNN model has been developed using plasma-enhanced chemical vapor deposition (PECVD) experimental data. To characterize the PECVD process using PNN, SiO$_2$ films deposited under varying conditions were analyzed using a fractional factorial experimental design with three center points. Parameters varied in these experiments included substrate temperature, pressure, RF power, silane flow rate and nitrous oxide flow rate. Approximately five microns of SiO$_2$ were deposited on (100) silicon wafers in a Plasma-Therm 700 series PECVD system at 13.56 MHz.
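
As background for the abstract above, each PNN (GMDH-style) node fits a second-order Ivakhnenko polynomial of two inputs by least squares, and the self-organizing procedure then selects and stacks the best nodes. The sketch below covers only a single node fit on synthetic data; the layer-selection step and the actual PECVD variables are not modeled.

```python
import numpy as np

def fit_pnn_node(x1, x2, y):
    """Fit one PNN node with an Ivakhnenko polynomial:
    y ~ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # Ivakhnenko coefficients
    return coef

def eval_pnn_node(x1, x2, coef):
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
    return A @ coef

# illustrative usage on synthetic data (not the PECVD measurements)
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(size=100), rng.uniform(size=100)
y = 1.0 + 2.0 * x1 * x2 + 0.1 * rng.standard_normal(100)
coef = fit_pnn_node(x1, x2, y)
```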

e-Learning Course Reviews Analysis based on Big Data Analytics (빅데이터 분석을 이용한 이러닝 수강 후기 분석)

  • Kim, Jang-Young;Park, Eun-Hye
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.2
    • /
    • pp.423-428
    • /
    • 2017
  • These days, a large and varied amount of educational information is rapidly increasing and spreading due to Internet and smart device usage. Recently, as e-Learning usage increases, many instructors and students (learners) need to maximize learners' educational outcomes and education-system efficiency through big data analytics of online recorded education history data. In this paper, the author applied the Word2Vec algorithm (a neural network algorithm) to find similarity among education-related words and classified them with a clustering algorithm, in order to objectively recognize and analyze the recorded data. When the Word2Vec algorithm is applied to education words, related-meaning words can be found, classified, and given similar vector values through repeated learning. In addition, the experimental results show that words of the same part of speech (noun, verb, adjective and adverb) share the shortest distance from the centroid under the clustering algorithm.
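
A rough sketch of the pipeline this abstract describes, using gensim's Word2Vec and scikit-learn's KMeans on a toy tokenized corpus; the study's Korean review data, preprocessing and parameter choices are not reproduced here, so treat every value below as a placeholder.

```python
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# toy tokenized reviews standing in for the real course-review corpus
reviews = [
    ["lecture", "clear", "helpful"],
    ["quiz", "difficult", "long"],
    ["lecture", "slides", "clear"],
    ["assignment", "difficult", "helpful"],
]

# learn word vectors so that related-meaning words end up close together
model = Word2Vec(reviews, vector_size=50, window=3, min_count=1, epochs=200, seed=0)

words = list(model.wv.index_to_key)
vectors = [model.wv[w] for w in words]

# cluster the word vectors to group related words
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
for word, label in zip(words, kmeans.labels_):
    print(word, label)
```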

On the set up to the Number of Hidden Node of Adaptive Back Propagation Neural Network (적응 역전파 신경회로망의 은닉 층 노드 수 설정에 관한 연구)

  • Hong, Bong-Wha
    • The Journal of Information Technology
    • /
    • v.5 no.2
    • /
    • pp.55-67
    • /
    • 2002
  • This paper presents an adaptive back-propagation algorithm that adaptively updates the learning parameter according to the generated error and varies the number of hidden-layer nodes. By changing the number of hidden-layer nodes, the algorithm is expected to escape local minima and create the best environment for convergence. In simulation, the algorithm was tested on two learning patterns: exclusive-OR learning and $7{\times}5$ dot alphabetic font learning. In both examples, the probability of becoming trapped in a local minimum was reduced. Furthermore, in alphabetic font learning, the neural network improved learning efficiency by about 41.56%~58.28% compared with the conventional back-propagation and HNAD (Hidden Node Adding and Deleting) algorithms.
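
The core idea above, growing the hidden layer when training stalls rather than fixing its size in advance, can be illustrated with a toy NumPy back-propagation loop. This is a crude stand-in under an assumed growth rule, not the paper's adaptive or HNAD algorithm.

```python
import numpy as np

def train_adaptive_mlp(X, Y, n_hidden=2, max_hidden=10, epochs=8000, lr=0.5, seed=0):
    """Toy back-propagation that adds a hidden node whenever the error
    stops improving (illustrative growth rule only)."""
    rng = np.random.default_rng(seed)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    Xb = np.hstack([X, np.ones((len(X), 1))])             # input plus bias column
    W1 = rng.standard_normal((Xb.shape[1], n_hidden))
    W2 = rng.standard_normal((n_hidden + 1, Y.shape[1]))  # last row is the output bias
    best = np.inf
    for epoch in range(epochs):
        H = sig(Xb @ W1)
        Hb = np.hstack([H, np.ones((len(H), 1))])
        O = sig(Hb @ W2)
        err = O - Y
        mse = float(np.mean(err ** 2))
        # grow the hidden layer when improvement stalls
        if epoch % 1000 == 999 and mse > 0.9 * best and W1.shape[1] < max_hidden:
            W1 = np.hstack([W1, 0.1 * rng.standard_normal((Xb.shape[1], 1))])
            W2 = np.insert(W2, W2.shape[0] - 1, 0.1 * rng.standard_normal(Y.shape[1]), axis=0)
            continue                                      # redo the step with the grown layer
        best = min(best, mse)
        dO = err * O * (1 - O)                            # output-layer delta
        dH = (dO @ W2[:-1].T) * H * (1 - H)               # hidden-layer delta
        W2 -= lr * Hb.T @ dO
        W1 -= lr * Xb.T @ dH
    return W1, W2

# the exclusive-OR pattern mentioned in the abstract
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_adaptive_mlp(X, Y)
```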

Design of a Fuzzy Controller Using Genetic Algorithms Employing Random Signal-Based Learning (랜덤 신호 기반 학습의 유전 알고리즘을 이용한 퍼지 제어기의 설계)

  • Han, Chang-Uk;Park, Jeong-Il
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.2
    • /
    • pp.131-137
    • /
    • 2001
  • Traditional genetic algorithms, though robust, are generally not the most successful optimization algorithms for any particular domain. Hybridizing a genetic algorithm with other algorithms can produce better performance than either the genetic algorithm or the other algorithms alone. This paper describes the application of random signal-based learning to a genetic algorithm in order to obtain well-tuned fuzzy rules. The key to this approach is to adjust both the width and the center of the membership functions so that the tuned rule-based fuzzy controller can generate the desired performance. The effectiveness of the proposed algorithm is verified by computer simulation.
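
The hybrid scheme in this abstract pairs a genetic algorithm with random signal-based learning, i.e. a local search that keeps a random perturbation only when it improves fitness. The sketch below uses a hypothetical quadratic objective in place of the fuzzy controller's tracking error and tunes a generic parameter vector rather than actual membership-function centers and widths.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    # hypothetical stand-in for the controller's performance measure (higher is better)
    target = np.array([1.0, -0.5, 0.3, 0.8])
    return -np.sum((params - target) ** 2)

def random_signal_learning(x, step=0.1, iters=20):
    """Local refinement: accept a random perturbation only if it improves fitness."""
    for _ in range(iters):
        cand = x + step * rng.standard_normal(x.size)
        if fitness(cand) > fitness(x):
            x = cand
    return x

def hybrid_ga(dim=4, pop_size=20, generations=50):
    pop = rng.uniform(-2, 2, (pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # keep the fitter half
        kids = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, dim)                             # one-point crossover
            kids.append(np.concatenate([a[:cut], b[cut:]]) + 0.05 * rng.standard_normal(dim))
        pop = np.vstack([parents, kids])
        pop[0] = random_signal_learning(pop[0])                    # refine the current best
    return pop[np.argmax([fitness(p) for p in pop])]

print(hybrid_ga())
```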

An Improvement of AdaBoost using Boundary Classifier

  • Lee, Wonju;Cheon, Minkyu;Hyun, Chang-Ho;Park, Mignon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.2
    • /
    • pp.166-171
    • /
    • 2013
  • The method proposed in this paper improves the performance of the Boosting algorithm in machine learning. The proposed Boundary AdaBoost algorithm makes up for the weak points of the normal binary classifier using a threshold boundary concept. The new boundary is located near the threshold of the binary classifier, and the proposed algorithm improves classification in areas where the normal binary classifier is weak. Thus, the final classifier with the optimal boundary can decrease error rates using more reasonable features. Finally, this paper derives the new algorithm's optimal solution and demonstrates how classifier accuracy can be improved with the proposed Boundary AdaBoost in a pedestrian detection simulation experiment using 10-fold cross-validation.
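
For reference, the sketch below is plain AdaBoost with threshold (stump) weak learners, the baseline this abstract starts from; the proposed boundary refinement near each stump's threshold is not implemented here.

```python
import numpy as np

def stump_predict(X, feat, thresh, polarity):
    """Threshold weak learner: label +1 on one side of the threshold, -1 on the other."""
    return np.where(polarity * X[:, feat] < polarity * thresh, 1, -1)

def adaboost(X, y, n_rounds=10):
    n = len(y)
    w = np.full(n, 1 / n)                     # sample weights
    learners = []
    for _ in range(n_rounds):
        best = None
        for feat in range(X.shape[1]):        # exhaustive stump search
            for thresh in np.unique(X[:, feat]):
                for polarity in (1, -1):
                    pred = stump_predict(X, feat, thresh, polarity)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, polarity)
        err, feat, thresh, polarity = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        pred = stump_predict(X, feat, thresh, polarity)
        w *= np.exp(-alpha * y * pred)        # reweight misclassified samples
        w /= w.sum()
        learners.append((alpha, feat, thresh, polarity))
    return learners

def predict(X, learners):
    total = sum(a * stump_predict(X, f, t, p) for a, f, t, p in learners)
    return np.sign(total)

# toy usage with synthetic labels
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([1, 1, -1, -1])
print(predict(X, adaboost(X, y, n_rounds=5)))
```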

A Sparse Target Matrix Generation Based Unsupervised Feature Learning Algorithm for Image Classification

  • Zhao, Dan;Guo, Baolong;Yan, Yunyi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.6
    • /
    • pp.2806-2825
    • /
    • 2018
  • Unsupervised learning has shown good performance on image, video and audio classification tasks, and much progress has been made so far. It studies how systems can learn to represent particular input patterns in a way that reflects the statistical structure of the overall collection of input patterns. Many promising deep learning systems are commonly trained in a greedy layer-wise unsupervised manner. The performance of these deep learning architectures benefits from the ability of unsupervised learning to disentangle abstractions and pick out useful features. However, existing unsupervised learning algorithms are often difficult to train, partly because of the requirement for extensive hyperparameters. Tuning these hyperparameters is a laborious task that requires expert knowledge, rules of thumb or extensive search. In this paper, we propose a simple and effective unsupervised feature learning algorithm for image classification, which exploits an explicit way of optimizing population and lifetime sparsity. First, a sparse target matrix is built by competitive rules. Then, the sparse features are optimized by minimizing the Euclidean ($L_2$) error between the sparse target and the competitive-layer outputs. Finally, a classifier is trained using the obtained sparse features. Experimental results show that the proposed method achieves good performance for image classification and provides discriminative features that generalize well.
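
A compact sketch of the optimization loop this abstract outlines: a sparse target matrix is formed by a competitive top-k rule, and the weights are updated to minimize the $L_2$ error between the competitive-layer outputs and that target. Only per-sample (population) sparsity is modeled; the paper's lifetime-sparsity rule and the classifier stage are omitted, and all sizes are illustrative.

```python
import numpy as np

def sparse_feature_learning(X, n_features=16, epochs=50, lr=0.01, k=1, seed=0):
    """Learn features by matching competitive-layer outputs to a sparse target matrix."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((X.shape[1], n_features))
    for _ in range(epochs):
        H = X @ W                                    # competitive-layer outputs
        T = np.zeros_like(H)
        winners = np.argsort(H, axis=1)[:, -k:]      # top-k competition per sample
        np.put_along_axis(T, winners, 1.0, axis=1)   # sparse target matrix
        grad = X.T @ (H - T) / len(X)                # gradient of 0.5 * ||H - T||^2
        W -= lr * grad
    return W

# toy usage on random data; the learned features would then feed a classifier
features = sparse_feature_learning(np.random.default_rng(1).standard_normal((100, 8)))
```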

Design of pin jointed structures using teaching-learning based optimization

  • Togan, Vedat
    • Structural Engineering and Mechanics
    • /
    • v.47 no.2
    • /
    • pp.209-225
    • /
    • 2013
  • A procedure employing a Teaching-Learning Based Optimization (TLBO) method is developed to design discrete pin-jointed structures. The TLBO process consists of two parts: the first part represents learning from the teacher, and the second part represents learning through interaction among the learners. The results are compared with those obtained using various other evolutionary optimization methods in terms of the best solution, average solution, and computational effort. Consequently, the TLBO algorithm works effectively and demonstrates remarkable performance for the optimization of engineering design applications.
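
As a reference for the two TLBO phases mentioned above, the sketch below implements the standard continuous teacher phase and learner phase on a toy objective; the paper's discrete truss-design formulation and its constraints are not included.

```python
import numpy as np

def tlbo(objective, dim, bounds, pop_size=20, iters=100, seed=0):
    """Minimal continuous TLBO: teacher phase + learner phase (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    cost = np.array([objective(x) for x in pop])
    for _ in range(iters):
        teacher = pop[np.argmin(cost)]
        mean = pop.mean(axis=0)
        for i in range(pop_size):
            # teacher phase: move toward the teacher, away from the class mean
            tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
            new = np.clip(pop[i] + rng.random(dim) * (teacher - tf * mean), lo, hi)
            if objective(new) < cost[i]:
                pop[i], cost[i] = new, objective(new)
            # learner phase: learn from a randomly chosen classmate
            j = rng.integers(pop_size)
            if j == i:
                continue
            step = (pop[i] - pop[j]) if cost[i] < cost[j] else (pop[j] - pop[i])
            new = np.clip(pop[i] + rng.random(dim) * step, lo, hi)
            if objective(new) < cost[i]:
                pop[i], cost[i] = new, objective(new)
    return pop[np.argmin(cost)]

best = tlbo(lambda x: np.sum(x ** 2), dim=5, bounds=(-10.0, 10.0))
```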

Feed-forward Learning Algorithm by Generalized Clustering Network (Generalized Clustering Network를 이용한 전방향 학습 알고리즘)

  • Min, Jun-Yeong;Jo, Hyeong-Gi
    • The Transactions of the Korea Information Processing Society
    • /
    • v.2 no.5
    • /
    • pp.619-625
    • /
    • 1995
  • This paper constructs a feed-forward complex learning algorithm that replaces back-propagation learning. The algorithm first organizes the pattern vectors into clusters using the Generalized Learning Vector Quantization (GLVQ) clustering algorithm (Nikhil R. Pal et al., 1993); second, it regroups the pattern vectors belonging to different clusters; and last, it recognizes the regrouped pattern vectors with a single-layer perceptron. Because this is a feed-forward learning algorithm, the training time is less than that of the back-propagation algorithm and the recognition rate is increased. We use 250 ASCII code bit patterns normalized to 16$\times$8. Experimental results show that when the 250 patterns are divided into 10 clusters, the average number of iterations per cluster is 94.7 and the recognition rate is 100%.
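
A loose sketch of the pipeline in this abstract, with a simple winner-take-all competitive rule standing in for the GLVQ clustering step (the Pal et al. update rule is not reproduced) and a single-layer perceptron trained per cluster; the bit patterns and labels below are synthetic placeholders.

```python
import numpy as np

def cluster_patterns(X, k=10, epochs=20, lr=0.1, seed=0):
    """Winner-take-all competitive clustering (simplified stand-in for GLVQ)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X:
            w = np.argmin(np.linalg.norm(centers - x, axis=1))
            centers[w] += lr * (x - centers[w])      # move the winner toward the pattern
    return centers

def train_perceptron(X, labels, n_classes, epochs=100, lr=0.1):
    """Single-layer multiclass perceptron for the patterns inside one cluster."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # add bias column
    W = np.zeros((n_classes, Xb.shape[1]))
    for _ in range(epochs):
        for x, y in zip(Xb, labels):
            pred = int(np.argmax(W @ x))
            if pred != y:
                W[y] += lr * x
                W[pred] -= lr * x
    return W

# toy usage: random 128-bit "patterns" standing in for the 16x8 ASCII bit patterns
rng = np.random.default_rng(1)
X = rng.integers(0, 2, (250, 128)).astype(float)
labels = np.arange(250)                              # each pattern is its own class
centers = cluster_patterns(X)
assign = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
perceptrons = {c: train_perceptron(X[assign == c], labels[assign == c], 250)
               for c in range(len(centers)) if np.any(assign == c)}
```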

Efficient gravitational search algorithm for optimum design of retaining walls

  • Khajehzadeh, Mohammad;Taha, Mohd Raihan;Eslami, Mahdiyeh
    • Structural Engineering and Mechanics
    • /
    • v.45 no.1
    • /
    • pp.111-127
    • /
    • 2013
  • In this paper, a new version of the gravitational search algorithm based on opposition-based learning (OBGSA) is introduced and applied to the optimum design of reinforced concrete retaining walls. The new algorithm employs the opposition-based learning concept to generate the initial population and to update the agents' positions during the optimization process. The algorithm is applied to minimize three objective functions, namely the weight, cost and $CO_2$ emissions of the retaining structure, subject to geotechnical and structural requirements. The optimization problem involves five geometric variables and three variables for the reinforcement setup. A performance comparison of the new OBGSA and the classical GSA algorithm on a suite of five well-known benchmark functions illustrates the faster convergence speed and better search ability of OBGSA for numerical optimization. In addition, the reliability and efficiency of the proposed algorithm for the optimization of retaining structures are investigated using two design examples of retaining walls. The numerical experiments demonstrate that the new algorithm has high viability, accuracy and stability and significantly outperforms the original algorithm and some other methods in the literature.
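
The key addition this abstract describes is opposition-based learning: for each random agent x in [lo, hi], the opposite point lo + hi - x is also evaluated and the fitter candidates are retained. A minimal sketch of that initialization step is given below; the full gravitational search dynamics and the retaining-wall objectives are not modeled.

```python
import numpy as np

def opposition_init(objective, pop_size, dim, lo, hi, seed=0):
    """Opposition-based initialization: keep the fitter half of the
    random population and its opposite points (minimization)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    opp = lo + hi - pop                              # opposite population
    both = np.vstack([pop, opp])
    cost = np.array([objective(x) for x in both])
    return both[np.argsort(cost)[:pop_size]]         # best pop_size of 2*pop_size candidates

# toy usage on a benchmark-style objective
init = opposition_init(lambda x: np.sum(x ** 2), pop_size=20, dim=5, lo=-10.0, hi=10.0)
```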