• Title/Summary/Keyword: Weight Learning

Search Results: 658

Development of a Multi-criteria Pedestrian Pathfinding Algorithm by Perceptron Learning

  • Yu, Kyeonah; Lee, Chojung; Cho, Inyoung
    • Journal of the Korea Society of Computer and Information / v.22 no.12 / pp.49-54 / 2017
  • Pathfinding for pedestrians in most navigation programs is based on a shortest-path search algorithm, so the guidance results differ little from one program to another, which makes path quality the more important factor. To measure path quality, multiple criteria must be included in the search cost, which is called multi-criteria pathfinding. In this paper we propose a user-adaptive pathfinding algorithm in which the cost function for multi-criteria pathfinding is defined as a weighted sum of the criteria, and the weights are learned automatically by Perceptron learning. Weight learning is implemented in two ways: short-term weight learning, which reflects weight changes in real time as the user moves, and long-term weight learning, which updates the weights with the average value over the entire path after the movement is completed. For long-term weight learning we use a weight update with momentum, so that learning speed is improved and the learned weights are stabilized. The proposed method is implemented as an app and applied to various movement situations. The results show that customized pathfinding based on user preference can be obtained.
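A minimal sketch of the weighted-sum cost with a Perceptron-style, momentum-smoothed weight update described in the abstract; the function and parameter names are illustrative, not the paper's:

```python
import numpy as np

def update_weights(w, preferred, rejected, lr=0.1, momentum=0.9, velocity=None):
    """Perceptron-style update of criterion weights (illustrative sketch).

    The path cost is the weighted sum cost(path) = w . criteria(path).
    When the user's chosen alternative ('preferred') should score lower
    than the suggested one ('rejected'), the weights are nudged in that
    direction; the momentum term corresponds to the long-term weight
    learning described in the abstract."""
    w = np.asarray(w, dtype=float)
    if velocity is None:
        velocity = np.zeros_like(w)
    # error direction: preferred path should end up with the lower cost
    grad = np.asarray(preferred, float) - np.asarray(rejected, float)
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity
```

With equal initial weights and a user who prefers the first criterion, the first criterion's weight decreases (lowering that path's cost) while the other increases.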

A study on the improvement of fuzzy ARTMAP for pattern recognition problems (Fuzzy ARTMAP 신경회로망의 패턴 인식율 개선에 관한 연구)

  • 이재설; 전종로; 이충웅
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.9 / pp.117-123 / 1996
  • In this paper, we present a new learning method for the fuzzy ARTMAP that is effective for noisy input patterns. The conventional fuzzy ARTMAP employs only the fuzzy AND operation between the input vector and the weight vector when learning both the top-down and the bottom-up weight vectors. This fuzzy AND operation causes excessive updates of the weight vector in a noisy input environment; as a result, the number of spurious categories increases and the recognition ratio is reduced. To solve these problems, we propose a new method of updating the weight vectors: the top-down weight vectors of the fuzzy ART system are updated using a weighted average of the input vector and the weight vector itself, and the bottom-up weight vectors are updated using the fuzzy AND operation between the updated top-down weight vector and the bottom-up weight vector itself. The weighted average prevents excessive updates of the weight vectors, and the fuzzy AND operation keeps the learning fast and stable. Simulation results show that the proposed method reduces the generation of spurious categories and increases the recognition ratio in a noisy input environment.
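The two update rules in the abstract can be sketched as follows; the mixing coefficient `beta` is an illustrative assumption, since the abstract does not give the exact averaging weight:

```python
import numpy as np

def update_topdown(w_td, x, beta=0.5):
    """Proposed top-down update: a weighted average of the input and the
    current top-down weight, instead of the conventional fuzzy AND
    min(x, w). This bounds how far a single noisy input can pull the
    weight vector."""
    return beta * np.asarray(x, float) + (1.0 - beta) * np.asarray(w_td, float)

def update_bottomup(w_bu, w_td_new):
    """Proposed bottom-up update: fuzzy AND (elementwise minimum) between
    the updated top-down vector and the bottom-up vector, as in the
    abstract."""
    return np.minimum(np.asarray(w_bu, float), np.asarray(w_td_new, float))
```

Unlike `min(x, w)`, which can only shrink components, the weighted average moves the top-down vector partway toward the input, so a noisy spike changes it gradually rather than clipping it at once.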


The design method for a vector codebook using a variable weight and employing an improved splitting method (개선된 미세분할 방법과 가변적인 가중치를 사용한 벡터 부호책 설계 방법)

  • Cho, Che-Hwang
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.4 / pp.462-469 / 2002
  • While conventional K-means algorithms use a fixed weight to design a vector codebook throughout all learning iterations, the proposed method employs a variable weight. A weight value of two or more, beyond the convergent region, is applied in the initial learning iterations to obtain new codevectors; the higher the initial weight value, the fewer iterations the variable weight should be applied for in order to design a better codebook. To enhance the splitting method used to generate an initial codebook, we propose a new method that reduces the error between a representative vector and its member training vectors: the representative vector with the maximum squared error is rejected, while the vector with the minimum error is split, yielding better initial codevectors.
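A hedged sketch of the variable-weight codevector update: reading the abstract as an over-relaxed K-means step, a weight of 1 reproduces the ordinary centroid update, while a weight of 2 or more overshoots the cell mean in the early iterations. The exact schedule is not given in the abstract, so this is only an interpretation:

```python
import numpy as np

def update_codevector(c, members, weight):
    """Move codevector c toward the mean of its member training vectors,
    scaled by 'weight'. weight = 1.0 is the standard K-means update;
    the abstract applies weight >= 2 in the initial iterations and
    reduces the number of such iterations for larger weights."""
    c = np.asarray(c, float)
    mean = np.asarray(members, float).mean(axis=0)
    return c + weight * (mean - c)
```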

Structural optimization with teaching-learning-based optimization algorithm

  • Dede, Tayfun; Ayvaz, Yusuf
    • Structural Engineering and Mechanics / v.47 no.4 / pp.495-511 / 2013
  • In this paper, a new, efficient optimization algorithm called Teaching-Learning-Based Optimization (TLBO) is used for the least-weight design of trusses with continuous design variables. The TLBO algorithm is based on the influence of a teacher on the output of learners in a class. Several truss structures are analyzed to show the efficiency of the TLBO algorithm, and the results are compared with those reported in the literature. It is concluded that the TLBO algorithm presented in this study can be effectively used in the weight minimization of truss structures.
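The teacher phase of standard TLBO can be sketched as below; this is the generic formulation, without the truss-specific constraint handling and learner phase of the paper:

```python
import numpy as np

def teacher_phase(population, fitness, rng=None):
    """One TLBO teacher phase for a minimization problem. Each learner
    (candidate design) moves toward the best solution found so far
    (the 'teacher') and away from the class mean:
        X_new = X + r * (X_teacher - TF * mean(X))
    where TF (teaching factor) is randomly 1 or 2 and r is uniform
    in [0, 1) per component."""
    rng = np.random.default_rng(rng)
    population = np.asarray(population, float)
    teacher = population[np.argmin(fitness)]   # best (lightest) design
    mean = population.mean(axis=0)
    tf = rng.integers(1, 3)                    # teaching factor: 1 or 2
    r = rng.random(population.shape)
    return population + r * (teacher - tf * mean)
```

In the truss application each row of `population` would hold the member cross-sectional areas, and `fitness` the (penalized) structural weight.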

Supervised Competitive Learning Neural Network with Flexible Output Layer

  • Cho, Seong-won
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.7 / pp.675-679 / 2001
  • In this paper, we present a new competitive learning algorithm called Dynamic Competitive Learning (DCL). DCL is a supervised learning method that dynamically generates output neurons and automatically initializes their weight vectors from training patterns. It introduces a new parameter called LOG (Limit of Grade) to decide whether an output neuron is created or not. If the class of at least one of the LOG nearest output neurons is the same as the class of the present training pattern, DCL adjusts the weight vector associated with that output neuron to learn the pattern. If the classes of all the nearest output neurons differ from the class of the training pattern, a new output neuron is created and the training pattern is used to initialize its weight vector. The proposed method differs significantly from previous competitive learning algorithms in that the neuron selected for learning is not limited to the winner, and output neurons are generated dynamically during the learning process. In addition, the proposed algorithm has a small number of parameters, which are easy to determine and apply to real-world problems. Experimental results for pattern recognition of remote sensing data and handwritten numeral data indicate the superiority of DCL in comparison to conventional competitive learning methods.
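The create-or-adapt decision described above can be sketched as a single training step; the parameter names (`log_k` for LOG, `lr`) and the choice of adapting the closest same-class neuron are illustrative assumptions:

```python
import numpy as np

def dcl_step(weights, labels, x, y, log_k=3, lr=0.1):
    """One DCL training step (sketch of the rule in the abstract).
    weights: list of output-neuron weight vectors; labels: their classes.
    Returns the (possibly grown) weights and labels."""
    x = np.asarray(x, float)
    if not weights:                      # no output neurons yet: create one
        return weights + [x.copy()], labels + [y]
    dists = [np.linalg.norm(x - w) for w in weights]
    nearest = np.argsort(dists)[:log_k]  # LOG nearest output neurons
    same = [i for i in nearest if labels[i] == y]
    if same:
        i = same[0]                      # adapt a same-class neuron
        weights[i] = weights[i] + lr * (x - weights[i])
        return weights, labels
    # all LOG nearest neurons belong to other classes: create a neuron
    return weights + [x.copy()], labels + [y]
```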


Effective Frequency of External Feedback for Increasing the Percentage of Body Weight Loading on the Affected Leg of Hemiplegic Patients (편마비환자의 환측하지 체중부하율 향상을 위한 효과적인 외적 되먹임 빈도)

  • Noh, Mi-He; Yi, Chung-Hwi; Cho, Sang-Hyun; Kim, Tae-Ue
    • Physical Therapy Korea / v.5 no.3 / pp.1-10 / 1998
  • In motor learning, the relative frequency of external feedback is the number of external feedback presentations divided by the total number of practice trials. In earlier studies on increasing the percentage of body weight loading on the affected leg of hemiplegic patients, external feedback was produced continuously as the patient attempted to perform a movement, in order to enhance the learning effect. However, recent studies in nondisabled populations have suggested that practice with lower relative frequencies is more effective than 100% relative frequency conditions. This study compared 100% and 67% relative frequency conditions to determine their effect on motor learning for increasing the percentage of body weight loading on the affected lower limbs of patients with hemiplegia. Twenty-four hemiplegic patients were randomly assigned to one of two experimental groups, each of which practiced weight-transfer motor learning on a machine. During practice, visual feedback was offered to all subjects: full visual feedback for patients in group one, but only 67% visual feedback for patients in group two. The percentage of loading on the affected leg was recorded four times: before learning (baseline value), immediately after learning, 30 minutes after learning, and 24 hours after learning. The results were as follows: 1. In the 100% visual feedback group, the percentage of loading on the affected leg increased significantly over the baseline value in all three testing modes. 2. In the 67% visual feedback group, the percentage of loading on the affected leg increased significantly in all three measurements. 3. Immediately after learning, the learning effect was not significantly different between the two groups, but it was significantly greater after both the 30-minute delay and the 24-hour period. These results suggest that a 33% reduction in the provision of visual feedback may enhance the learning effect of increasing the percentage of body weight loading on the affected leg in patients with hemiplegia.


Enhanced RBF Network by Using Auto-Tuning Method of Learning Rate, Momentum and ART2

  • Kim, Kwang-baek; Moon, Jung-wook
    • Proceedings of the KAIS Fall Conference / 2003.11a / pp.84-87 / 2003
  • This paper proposes an enhanced RBF network that dynamically adjusts the learning rate and momentum by using a fuzzy system, in order to effectively adjust the connection weights between the middle layer and the output layer of the RBF network. ART2 is applied as the learning structure between the input layer and the middle layer, and the proposed auto-tuning method of the learning rate is used to adjust the connection weights between the middle layer and the output layer. The improvement of the proposed method in learning speed and convergence is verified by comparing it with the conventional delta-bar-delta algorithm and the ART2-based RBF network.


Weighted Fast Adaptation Prior on Meta-Learning

  • Widhianingsih, Tintrim Dwi Ary; Kang, Dae-Ki
    • International Journal of Advanced Smart Convergence / v.8 no.4 / pp.68-74 / 2019
  • As deep learning architectures grow deeper, the need for data becomes very large. In real problems, obtaining huge amounts of data in some disciplines is very costly, so learning from limited data has recently become a very appealing area. Meta-learning offers a new perspective on learning a model under this limitation. Meta-SGD, a state-of-the-art model built in a meta-learning framework, is based on the key idea of learning a hyperparameter, the learning rate of the fast-adaptation stage, in the outer update. However, this learning rate is usually set to be very small; in consequence, the SGD objective gives only a small improvement to the weight parameters. In other words, the prior becomes the key to a good adaptation. Learning with a single gradient step in the inner update, as meta-learning approaches aim to do, may lead to bad performance, especially if the prior is far from the expected one or works against effective adaptation of the model. For this reason, we propose adding a weight term to decrease, or in some conditions increase, the effect of this prior. Experiments on few-shot learning show that emphasizing or weakening the prior can give better performance than using its original value.
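A minimal sketch of the weighted-prior inner update, contrasted with the standard MAML/Meta-SGD step; the scalar weight `w` on the prior is the abstract's idea, while the exact form (scalar vs. per-parameter, how `w` is learned) is an assumption here:

```python
import numpy as np

def weighted_fast_adaptation(theta, grad, alpha, w):
    """Inner (fast-adaptation) update with a weighted prior (sketch).
    Standard MAML/Meta-SGD adapts:    theta' = theta - alpha * grad
    The weighted variant scales the prior's influence:
                                      theta' = w * theta - alpha * grad
    so when the learned alpha is very small, w can strengthen or
    weaken how much the prior dominates the adapted parameters."""
    theta = np.asarray(theta, float)
    grad = np.asarray(grad, float)
    return w * theta - alpha * grad
```

Setting `w = 1.0` recovers the ordinary single-gradient-step adaptation.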

A Study on the Implementation of Modified Hybrid Learning Rule (변형하이브리드 학습규칙의 구현에 관한 연구)

  • 송도선; 김석동; 이행세
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.12 / pp.116-123 / 1994
  • A modified hybrid learning rule (MHLR) is proposed, derived by combining the backpropagation (BP) algorithm, known as an excellent classifier, with a modified Hebbian rule obtained by changing the original Hebbian rule, which is a good feature extractor. The network architecture of MHLR is a multi-layered neural network. The weights of MHLR are calculated as the sum of the BP weight and the modified Hebbian weight between the input layer and the hidden layer, and as the BP weight alone between the hidden layer and the output layer. To evaluate the performance, BP, MHLR, and the previously proposed hybrid learning rule (HLR) are simulated by the Monte Carlo method. As a result, MHLR is the best in recognition rate and HLR is second. In learning speed, HLR and MHLR are much the same, while BP is relatively slow.
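The input-to-hidden update implied by the abstract, an error-driven BP term plus a Hebbian correlation term, can be sketched as follows; the separate learning rates and the exact Hebbian modification are illustrative assumptions:

```python
import numpy as np

def mhlr_update(w, x, h, delta, lr_bp=0.1, lr_hebb=0.01):
    """Input-to-hidden weight update of the modified hybrid rule (sketch).
    w:     weight matrix (hidden_dim x input_dim)
    x:     input activation vector
    h:     hidden activation vector (drives the Hebbian term)
    delta: backpropagated error at the hidden layer (drives the BP term)
    The update is the sum of the two contributions, as the abstract
    describes for the input-hidden weights; hidden-output weights
    would use the BP term only."""
    bp_term = lr_bp * np.outer(delta, x)     # error-driven (BP) part
    hebb_term = lr_hebb * np.outer(h, x)     # correlation (Hebbian) part
    return w + bp_term + hebb_term
```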


A Study on the Accuracy Improvement of One-repetition Maximum based on Deep Neural Network for Physical Exercise

  • Lee, Byung-Hoon; Kim, Myeong-Jin; Kim, Kyung-Seok
    • International Journal of Advanced Smart Convergence / v.8 no.2 / pp.147-154 / 2019
  • In this paper, we conducted a study that uses deep learning to calculate appropriate physical exercise information from basic human factors such as a user's sex, age, height, and weight. To apply deep learning, the structure of a basic deep neural network was used to estimate the body fat needed to calculate the one-repetition maximum. By applying accuracy-improvement methods such as ReLU, weight initialization, and dropout to the existing deep learning structure, we improved accuracy and derived a lean body weight closer to actual results. In addition, results were derived by applying a formula for calculating the one-repetition-maximum load for upper- and lower-body movements, for use in actual physical exercise. If studies such as this continue, they will be able to suggest effective physical exercise options for various users and conditions.
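The abstract mentions a one-repetition-maximum (1RM) formula without naming it; a common choice is the Epley estimate, shown here purely as an illustrative stand-in for whatever formula the paper actually applies:

```python
def one_rep_max(weight_kg, reps):
    """Estimate the one-repetition maximum from a submaximal set using
    the Epley formula, 1RM = w * (1 + reps / 30). This is a widely used
    estimate, not necessarily the formula used in the paper."""
    if reps == 1:
        return float(weight_kg)
    return weight_kg * (1.0 + reps / 30.0)
```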