• Title/Summary/Keyword: competitive learning

369 search results, processing time 0.025 seconds

Supervised Competitive Learning Neural Network with Flexible Output Layer

  • Cho, Seong-won
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.7
    • /
    • pp.675-679
    • /
    • 2001
  • In this paper, we present a new competitive learning algorithm called Dynamic Competitive Learning (DCL). DCL is a supervised learning method that dynamically generates output neurons and automatically initializes the weight vectors from training patterns. It introduces a new parameter called LOG (Limit of Grade) to decide whether an output neuron is created or not. If the class of at least one among the LOG number of nearest output neurons is the same as the class of the present training pattern, then DCL adjusts the weight vector associated with that output neuron to learn the pattern. If the classes of all the nearest output neurons are different from the class of the training pattern, a new output neuron is created and the given training pattern is used to initialize the weight vector of the created neuron. The proposed method differs significantly from previous competitive learning algorithms in that the neuron selected for learning is not limited to the winner and the output neurons are dynamically generated during the learning process. In addition, the proposed algorithm has a small number of parameters, which are easy to determine and apply to real-world problems. Experimental results for pattern recognition of remote sensing data and handwritten numeral data indicate the superiority of DCL in comparison to conventional competitive learning methods.

  • PDF
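The DCL training rule described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the class name, parameter names (`log`, `lr`), and the choice to update the nearest same-class neuron are assumptions based on the abstract.

```python
import numpy as np

class DCL:
    """Sketch of Dynamic Competitive Learning (hypothetical API).

    Output neurons are created on demand: if none of the `log`
    nearest output neurons shares the pattern's class, a new neuron
    is initialized from the pattern itself.
    """

    def __init__(self, log=3, lr=0.1):
        self.log = log        # LOG: how many nearest neurons to inspect
        self.lr = lr          # learning rate for weight updates
        self.weights = []     # one weight vector per output neuron
        self.labels = []      # class label of each output neuron

    def train_step(self, x, y):
        if self.weights:
            d = np.linalg.norm(np.asarray(self.weights) - x, axis=1)
            order = np.argsort(d)[: self.log]
            # nearest neuron among the LOG candidates with a matching class
            match = next((i for i in order if self.labels[i] == y), None)
            if match is not None:
                # move the matching neuron's weight toward the pattern
                self.weights[match] = self.weights[match] + self.lr * (x - self.weights[match])
                return
        # no class match among the LOG nearest: create a new output neuron
        self.weights.append(np.array(x, dtype=float))
        self.labels.append(y)

    def predict(self, x):
        d = np.linalg.norm(np.asarray(self.weights) - x, axis=1)
        return self.labels[int(np.argmin(d))]
```

Note how supervision enters only through the class labels attached to neurons; the distance computation itself is the usual competitive-learning nearest-neighbor search.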

Parallel neural networks with dynamic competitive learning (동적 경쟁학습을 수행하는 병렬 신경망)

  • 김종완
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.3
    • /
    • pp.169-175
    • /
    • 1996
  • In this paper, a new parallel neural network system that performs dynamic competitive learning is proposed. Conventional learning methods utilize the full dimension of the original input patterns. However, a particular attribute or dimension of the input patterns does not necessarily contribute to classification. The proposed system consists of parallel neural networks with reduced input dimensions in order to take advantage of the information in each dimension of the input patterns. Consensus schemes were developed to decide the final output of the parallel networks. Each network performs a competitive learning that dynamically generates output neurons as learning proceeds. Each output neuron has its own class threshold in the proposed dynamic competitive learning. Because the class threshold is adjusted dynamically during the learning phase, the proposed neural network adapts properly to the input pattern distribution. Experimental results with remote sensing and speech data indicate the improved performance of the proposed method compared to conventional learning methods.

  • PDF
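The parallel-networks-plus-consensus idea can be illustrated with a toy ensemble. Everything here is an assumption for illustration: the abstract does not specify the sub-network model or the consensus rule, so this sketch uses a nearest-prototype stand-in per dimension slice and a simple majority vote.

```python
import numpy as np

class NearestPrototype:
    """Minimal stand-in for one competitive sub-network (one prototype per class)."""
    def __init__(self):
        self.protos, self.labels = [], []

    def fit(self, X, y):
        for c in np.unique(y):
            self.protos.append(X[y == c].mean(axis=0))
            self.labels.append(c)

    def predict(self, x):
        d = np.linalg.norm(np.asarray(self.protos) - x, axis=1)
        return self.labels[int(np.argmin(d))]

class ParallelConsensus:
    """Each sub-network trains on a reduced slice of the input dimensions;
    a majority-vote consensus scheme decides the final class."""
    def __init__(self, n_parts):
        self.n_parts = n_parts
        self.nets = [NearestPrototype() for _ in range(n_parts)]

    def _slices(self, X):
        # split the feature dimensions into n_parts contiguous groups
        return np.array_split(np.asarray(X), self.n_parts, axis=-1)

    def fit(self, X, y):
        for net, Xs in zip(self.nets, self._slices(X)):
            net.fit(Xs, y)

    def predict(self, x):
        votes = [net.predict(xs) for net, xs in zip(self.nets, self._slices(x))]
        vals, counts = np.unique(votes, return_counts=True)
        return vals[int(np.argmax(counts))]
```

The design point the abstract makes survives even in this toy version: each sub-network only ever sees a subset of the dimensions, so an uninformative dimension can at worst mislead one voter rather than the whole classifier.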

The Effect of Open Innovation and Organizational Learning on Technological Competitive Advantage in Venture Business (개방형 혁신과 조직학습 특성이 벤처기업의 기술경쟁우위에 미치는 영향)

  • Seo, Ribin;Yoon, Heon Deok
    • Knowledge Management Research
    • /
    • v.13 no.2
    • /
    • pp.73-93
    • /
    • 2012
  • Although a wide range of theoretical research has emphasized the importance of knowledge management in cooperative R&D networks, empirical research that synthetically examines how organizational learning and open innovation influence the performance of technological innovation remains insufficient to meet academic and practical demands. This study investigates the effect of open innovation and organizational learning in venture business on technological competitive advantage and establishes the mediating role of organizational learning. For this purpose, questionnaires developed from a review of previous research were collected from 274 Korean venture businesses whose managerial focus is on developing technological innovation. The analysis shows that the relational dimensions of open innovation - the network, intensity, and trust shared by a firm with external R&D partners - as well as the internal organizational learning system and competence, positively influence the building of technological competitive advantage, whose sub-variables are technological excellence, market growth potential, and business feasibility. In addition, organizational learning is identified as having a mediating and moderating effect in the relationship between open innovation and technological competitive advantage. These results imply that open innovation complements and expands the range of limited resources and the scope of innovation in technology-intensive small and medium-sized enterprises. Organizational learning activity also reinforces the use of knowledge and resources obtained from external R&D partners. On the basis of these results, detailed issues and discussion are presented in the conclusion.

  • PDF

Pattern recognition using competitive learning neural network with changeable output layer (가변 출력층 구조의 경쟁학습 신경회로망을 이용한 패턴인식)

  • 정성엽;조성원
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.2
    • /
    • pp.159-167
    • /
    • 1996
  • In this paper, a new competitive learning algorithm called dynamic competitive learning (DCL) is presented. DCL is a supervised learning method that dynamically generates output neurons and initializes weight vectors from training patterns. It introduces a new parameter called LOG (Limit of Grade) to decide whether or not an output neuron is created. In other words, if some neurons within the province of LOG classify the input vector correctly, then DCL adjusts the weight vector of the neuron which has the minimum grade. Otherwise, it produces a new output neuron using the given input vector. It differs largely from conventional algorithms in that the neuron selected for learning is not limited to the winner and the output neurons are dynamically generated in the training process. In addition, the proposed algorithm has a small number of parameters, which are easy to determine and apply to real problems. Experimental results for pattern recognition of remote sensing data and handwritten numeral data indicate the superiority of DCL in comparison to conventional competitive learning methods.

  • PDF

A Study on the Effect of Pair Check Cooperative Learning in Operating System Class

  • Shin, Woochang
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.1
    • /
    • pp.104-110
    • /
    • 2020
  • In the 4th Industrial Revolution, the competitiveness of the software industry is important, and as a fundamental way to secure it, educational institutions should provide classes that train high-quality software personnel. Despite this social situation, software-related classes in universities are largely organized around competitive or individual learning structures. Cooperative learning is a learning model that can complement the problems of competitive and individual learning. Cooperative learning is more effective in improving academic achievement than individual or competitive learning. In addition, most learners gain the advantage of a more desirable self-image through the experience of success. In this paper, we apply the pair check model, which is a type of cooperative learning, in operating system classes. We also design the class procedure and instruction plan to apply the pair check model, and analyze the test results to evaluate the performance of the cooperative learning model.

Analysis of the Fokker-Planck equation for the dynamics of Langevin competitive learning neural network (Fokker-plank 방정식의 해석을 통한 Langevine 경쟁학습의 동역학 분석)

  • 석진욱;조성원
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.7
    • /
    • pp.82-91
    • /
    • 1997
  • In this paper, we analyze the dynamics of the Langevin competitive learning neural network based on its Fokker-Planck equation. From the viewpoint of stochastic differential equations (SDEs), the Langevin competitive learning equation is a Langevin stochastic differential equation and has a diffusion equation on the probability space (Ω, F, P). We derive the Fokker-Planck equation from the proposed algorithm and prove, by introducing an infinitesimal operator for Markov semigroups, that the weight vector in the particular simplex can converge to the globally optimal point under the condition of some convex or pseudo-convex performance measure function. Experimental results for pattern recognition of remote sensing data indicate the superiority of the Langevin competitive learning neural network in comparison to the conventional competitive learning neural network.

  • PDF
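The pairing the abstract refers to, between a Langevin-type learning equation and its Fokker-Planck equation, can be written generically as follows. The symbols here (energy $E$, noise scale $\sigma$, weight density $p$) are standard textbook notation, not the paper's own:

```latex
% Langevin weight dynamics: gradient descent on an energy E plus Brownian noise
\mathrm{d}w_t = -\nabla E(w_t)\,\mathrm{d}t + \sigma\,\mathrm{d}B_t

% Associated Fokker--Planck equation for the weight density p(w,t)
\frac{\partial p}{\partial t}
  = \nabla \cdot \bigl(\nabla E(w)\,p\bigr)
  + \frac{\sigma^2}{2}\,\Delta p
```

Convergence arguments of the kind sketched in the abstract study the stationary solutions of the second equation, which concentrate near the minima of $E$ as $\sigma \to 0$.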

A study of global minimization analysis of Langevin competitive learning neural network based on contraction condition and its application to recognition of handwritten numerals (축합조건의 분석을 통한 Langevine 경쟁 학습 신경회로망의 대역 최소화 근사 해석과 필기체 숫자 인식에 관한 연구)

  • 석진욱;조성원;최경삼
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1996.10b
    • /
    • pp.466-469
    • /
    • 1996
  • In this paper, we present the global minimization condition by an informal analysis of the Langevin competitive learning neural network. From the viewpoint of stochastic processes, it is important that competitive learning guarantees an optimal solution for pattern recognition. By analysis of the Fokker-Planck equation for the proposed neural network, we show that if an energy function has a special pseudo-convexity, Langevin competitive learning can find the global minima. Experimental results for pattern recognition of handwritten numeral data indicate the superiority of the proposed algorithm.

  • PDF

An Informal Analysis of Diffusion and Global Optimization Properties in Langevin Competitive Learning Neural Network (Langevine 경쟁학습 신경회로망의 확산성과 대역 최적화 성질의 근사 해석)

  • Seok, Jin-Wuk;Cho, Seong-Won;Choi, Gyung-Sam
    • Proceedings of the KIEE Conference
    • /
    • 1996.07b
    • /
    • pp.1344-1346
    • /
    • 1996
  • In this paper, we discuss an informal analysis of the diffusion and global optimization properties of the Langevin competitive learning neural network. From the viewpoint of stochastic processes, it is important that competitive learning guarantees an optimal solution for pattern recognition. We show that the binary reinforcement function in Langevin competitive learning is a Brownian motion, a Gaussian process, and construct the Fokker-Planck equation for the proposed neural network. Finally, we show that the informal analysis of the proposed algorithm indicates the possibility of a globally optimal solution with the proper initial condition.

  • PDF

Competitive Learning Neural Network with Dynamic Output Neuron Generation (동적으로 출력 뉴런을 생성하는 경쟁 학습 신경회로망)

  • 김종완;안제성;김종상;이흥호;조성원
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.9
    • /
    • pp.133-141
    • /
    • 1994
  • Conventional competitive learning algorithms compute the Euclidean distance to determine the winner neuron out of all predetermined output neurons. In such cases, there is a drawback that the performance of the learning algorithm depends on the initial reference (weight) vectors. In this paper, we propose a new competitive learning algorithm that dynamically generates output neurons. The proposed method generates output neurons by dynamically changing the class thresholds for all output neurons. We compute the similarity between the input vector and the reference vector of each output neuron generated. If the two are similar, the reference vector is adjusted to make it still more like the input vector. Otherwise, the input vector is designated as the reference vector of a new output neuron. Since the reference vectors of output neurons are dynamically assigned according to the input pattern distribution, the proposed method avoids the phenomenon in which learning settles prematurely due to redundant output neurons. Experiments using speech data have shown the proposed method to be superior to existing methods.

  • PDF
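The generate-or-adjust rule in the abstract above (unsupervised, driven by a similarity threshold rather than class labels) can be sketched as below. The class name, the fixed `threshold` parameter, and the update rule are illustrative assumptions; the paper adjusts the class thresholds dynamically, which this toy version omits.

```python
import numpy as np

class DynamicCompetitive:
    """Sketch of competitive learning with dynamic output neuron
    generation (hypothetical API, based on the abstract's description)."""

    def __init__(self, threshold=0.5, lr=0.1):
        self.threshold = threshold  # max distance at which input counts as "similar"
        self.lr = lr
        self.refs = []              # reference (weight) vectors, created on demand

    def train_step(self, x):
        if self.refs:
            d = np.linalg.norm(np.asarray(self.refs) - x, axis=1)
            i = int(np.argmin(d))
            if d[i] < self.threshold:
                # similar enough: pull the reference vector toward the input
                self.refs[i] = self.refs[i] + self.lr * (x - self.refs[i])
                return i
        # not similar to any reference: the input becomes a new neuron
        self.refs.append(np.array(x, dtype=float))
        return len(self.refs) - 1
```

Because reference vectors are seeded directly from inputs, the initialization sensitivity the abstract criticizes in conventional competitive learning never arises: there are no predetermined neurons to initialize badly.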

Fast Competitive Learning with Classified Learning Rates (분류된 학습률을 가진 고속 경쟁 학습)

  • Kim, Chang-Wook;Cho, Seong-Won;Lee, Choong-Woong
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.11
    • /
    • pp.142-150
    • /
    • 1994
  • This paper deals with fast competitive learning using classified learning rates. The basic idea of the proposed method is to assign a classified learning rate to each weight vector. The weight vector associated with an output node is updated using its own learning rate. Each learning rate is changed only when its corresponding output node wins the competition, and the learning rates of the losing nodes are not changed. The experimental results obtained with image vector quantization show that the proposed method learns more rapidly and yields better quality than conventional competitive learning.

  • PDF
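The per-node learning-rate idea can be sketched as follows. The specific rate schedule used here (each node's rate is the reciprocal of its win count, which makes each weight the running mean of the inputs it has won) is an assumption for illustration, not the paper's rule:

```python
import numpy as np

class ClassifiedRateCL:
    """Sketch of competitive learning with a per-node ("classified")
    learning rate; only the winner's weight and rate are updated."""

    def __init__(self, n_nodes, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_nodes, dim))   # weight vector per output node
        self.wins = np.zeros(n_nodes, dtype=int)   # per-node win counter

    def train_step(self, x):
        d = np.linalg.norm(self.w - x, axis=1)
        i = int(np.argmin(d))                      # winner node
        self.wins[i] += 1
        lr = 1.0 / self.wins[i]                    # this node's own learning rate
        self.w[i] += lr * (x - self.w[i])          # losing nodes are left unchanged
        return i
```

The point of classified rates is visible in the update: a frequently winning node cools down on its own schedule, while a rarely winning node keeps a large rate and can still move quickly when it finally wins.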