• Title/Summary/Keyword: learning function


Application and Performance Analysis of Machine Learning for GPS Jamming Detection (GPS 재밍탐지를 위한 기계학습 적용 및 성능 분석)

  • Jeong, Inhwan
    • The Journal of Korean Institute of Information Technology / v.17 no.5 / pp.47-55 / 2019
  • As the damage caused by GPS jamming has increased, research on detecting and preventing GPS jamming is being actively pursued. This paper deals with a GPS jamming detection method using multiple GPS receiving channels and three machine learning techniques. The proposed multiple GPS channels consist of a commercial GPS receiver with no anti-jamming function, a receiver with only an anti-noise-jamming function, and a receiver with both anti-noise and anti-spoofing jamming functions. This system enables the user to identify the characteristics of jamming signals by comparing the coordinates received at each receiver. In this paper, five types of jamming signals with different signal characteristics were fed into the system, and three machine learning methods (AB: Adaptive Boosting, SVM: Support Vector Machine, DT: Decision Tree) were applied to perform jamming detection tests. The results showed that the DT technique has the best performance, with a detection rate of 96.9%, when a single machine learning technique is applied. It was also confirmed that the DT technique is more effective for GPS jamming detection than the binary classifier techniques because it has low ambiguity and requires simple hardware, and that SVM could be used only if additional solutions to the ambiguity problem are applied.
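A minimal sketch of the kind of comparison described above, assuming synthetic stand-in features (e.g. per-channel coordinate deviations) rather than the paper's dataset, and using off-the-shelf scikit-learn implementations of the three classifiers:

```python
# Hypothetical sketch: comparing the three classifiers named in the abstract
# (AdaBoost, SVM, Decision Tree) on synthetic "multi-channel GPS" features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Stand-in features, e.g. coordinate deviations reported by the three receiver
# channels (no anti-jamming / anti-noise / anti-noise + anti-spoofing).
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "AB": AdaBoostClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
    "DT": DecisionTreeClassifier(max_depth=5, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "detection accuracy:", accuracy_score(y_te, model.predict(X_te)))
```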

Neurological Dynamic Development Cycles of Abstractions in Math Learning (수학학습의 추상적 개념발달에 대한 뇌신경학적 역동학습 연구)

  • Kwon, Hyungkyu
    • Journal of The Korean Association of Information Education / v.18 no.4 / pp.559-566 / 2014
  • This study aims to understand the neurological dynamic cognitive processes of math learning based on abstract mappings (level A2), abstract systems (level A3), and single principles (level A4), which are levels in Fischer's cognitive development theory. Math learning requires the flexibility to adapt existing brain function when selecting new neurophysiological activities to learn desired knowledge. This study suggests a general statistical framework for the identification of neurological patterns across different abstract learning changes with optimal support. We expected that functional brain networks derived from simple math learning would change dynamically during supportive learning associated with different abstraction levels. Task-based patterns of brain structure and function on representations of underlying connectivity suggest a possible prediction of the success of supportive learning.

Hybrid Learning for Vision-and-Language Navigation Agents (시각-언어 이동 에이전트를 위한 복합 학습)

  • Oh, Suntaek;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.9 no.9 / pp.281-290 / 2020
  • The Vision-and-Language Navigation (VLN) task is a complex intelligence problem that requires both visual and language comprehension skills. In this paper, we propose a new learning model for vision-and-language navigation agents. The model adopts hybrid learning that combines imitation learning based on demonstration data with reinforcement learning based on action rewards. It can therefore mitigate both the bias toward demonstration data inherent in imitation learning and the relatively low data efficiency of reinforcement learning. In addition, the proposed model uses a novel path-based reward function designed to solve the problems of existing goal-based reward functions. We demonstrate the high performance of the proposed model through various experiments using the Matterport3D simulation environment and the R2R benchmark dataset.
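A minimal sketch of a hybrid objective of this kind, assuming a simple weighted sum of an imitation term and a REINFORCE-style term scaled by a toy path-overlap reward; the function names, mixing weight, and reward definition are illustrative assumptions, not the paper's formulation:

```python
# Hypothetical sketch of a hybrid imitation + reinforcement learning loss.
import numpy as np

def imitation_loss(action_logits, demo_action):
    # cross-entropy of the demonstrated action under the current policy
    probs = np.exp(action_logits - action_logits.max())
    probs /= probs.sum()
    return -np.log(probs[demo_action] + 1e-12)

def path_based_reward(agent_path, reference_path):
    # toy "path fidelity" reward: fraction of reference nodes the agent visited,
    # standing in for a path-based reward (details are not in the abstract)
    return len(set(agent_path) & set(reference_path)) / len(reference_path)

def hybrid_loss(action_logits, demo_action, log_prob_taken,
                agent_path, ref_path, mix=0.5):
    il = imitation_loss(action_logits, demo_action)
    rl = -path_based_reward(agent_path, ref_path) * log_prob_taken  # REINFORCE-style
    return mix * il + (1.0 - mix) * rl

print(hybrid_loss(np.array([1.0, 0.2, -0.3]), demo_action=0, log_prob_taken=-0.7,
                  agent_path=[1, 2, 4, 5], ref_path=[1, 2, 3, 5]))
```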

Development of mobile, online/offline-linked math learning content to promote group creativity (집단창의성 발현을 위한 모바일, 온/오프라인 연계 수학 학습 콘텐츠 개발)

  • Kim, Bumi
    • Journal of the Korean School Mathematics Society / v.25 no.1 / pp.39-60 / 2022
  • In this study, to support the expression of group creativity by high school students, we developed mobile and online/offline-linked mathematics learning content in which students find the maximum and minimum values of a function over a restricted interval. The content was developed in connection with 'environment', a cross-curricular learning topic. We explored the concept of group creativity in school mathematics, its manifestation process, the elements of that process, and the mobile and online/offline implementation functions. We then developed a hybrid app, 'Making the Best Box that Thinks of the Earth', which supports the expression of group creativity through mobile and online/offline-linked cooperative learning. A learning management system (LMS) and a teaching and learning guidance plan were also developed to efficiently operate mobile and online/offline-linked math learning using the app in schools. Our study found that the hybrid app was suitable for promoting collective fluency and collective sophistication based on complementary metacognitive interaction.
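The optimisation underlying a "best box" activity is typically the open-box problem: cut squares of side x from a rectangular sheet, fold it into an open box, and maximise the volume V(x) = x(L - 2x)(W - 2x) on the interval 0 < x < min(L, W)/2. A minimal sketch with assumed sheet dimensions:

```python
# Hypothetical sketch of the function-optimisation task behind the activity.
L, W = 30.0, 20.0   # sheet dimensions (cm), assumed for illustration

def volume(x):
    # volume of the open box obtained by cutting squares of side x and folding
    return x * (L - 2 * x) * (W - 2 * x)

# grid search over the admissible interval 0 < x < min(L, W) / 2
xs = [i * 0.001 for i in range(1, int(min(L, W) / 2 / 0.001))]
best_x = max(xs, key=volume)
print(f"best cut = {best_x:.3f} cm, max volume = {volume(best_x):.1f} cm^3")
```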

Implementation of Tactical Path-finding Integrated with Weight Learning (가중치 학습과 결합된 전술적 경로 찾기의 구현)

  • Yu, Kyeon-Ah
    • Journal of the Korea Society for Simulation / v.19 no.2 / pp.91-98 / 2010
  • Conventional path-finding has focused on finding short collision-free paths. However, as computer games become more sophisticated, it is required to take tactical information such as ambush points or enemy lines of sight into account. One way to let this information influence path-finding is to represent the heuristic function of a search algorithm as a weighted sum of tactics. In this paper we consider the problem of learning the heuristic to optimize path-finding based on given tactical information. What is meant by learning is producing a good weight vector for the heuristic function. Training examples are given by a game level designer and are compared with search results at every search level to update the weights. This paper proposes a learning algorithm integrated with search for tactical path-finding. A perceptron-like method for updating weights is described, and a simulation tool implementing it is presented. A level designer can mark desired paths according to characters' properties in the heuristic learning tool, which then uses them as training examples to learn weights and shows how the path traces change along with weight learning.
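A minimal sketch of a perceptron-like update of this kind, assuming the heuristic is a weighted sum of tactical feature costs accumulated along a path; the feature maps, paths, and learning rate are illustrative assumptions:

```python
# Hypothetical sketch of learning heuristic weights from a designer-marked path.
def path_features(path, feature_maps):
    # sum each tactical feature (e.g. exposure to enemy sight, ambush risk) over the path
    return [sum(fmap[node] for node in path) for fmap in feature_maps]

def perceptron_update(weights, desired_path, found_path, feature_maps, lr=0.1):
    f_desired = path_features(desired_path, feature_maps)
    f_found = path_features(found_path, feature_maps)
    # if the found path is "cheaper" than the designer's path under current weights,
    # move the weights so the desired path becomes preferred
    return [w + lr * (ff - fd) for w, fd, ff in zip(weights, f_desired, f_found)]

# toy example: two tactical features defined on five nodes
feature_maps = [{0: 0, 1: 3, 2: 0, 3: 1, 4: 0},   # enemy line-of-sight exposure
                {0: 0, 1: 0, 2: 2, 3: 0, 4: 0}]   # ambush risk
weights = [1.0, 1.0]
weights = perceptron_update(weights, desired_path=[0, 2, 4], found_path=[0, 1, 4],
                            feature_maps=feature_maps)
print(weights)
```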

Back-propagation Algorithm with a zero compensated Sigmoid-prime function (영점 보상 Sigmoid-prime 함수에 의한 역전파 알고리즘)

  • 이왕국;김정엽;이준재;하영호
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.3 / pp.115-122 / 1994
  • The problems in back-propagation (BP) are generally learning speed and misclassification due to local minima. In this paper, to solve these problems, classical modified BP methods are reviewed and an extension of BP is proposed that compensates the sigmoid-prime function around the extremes where the actual output of a unit is close to zero or one. The proposed method is not only faster than the conventional methods in learning speed but also has the advantage of easy parameter setting, because it shows good classification results over a wide range of learning rates and other settings. It is also simple to implement in hardware.
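A minimal sketch of the compensation idea, assuming a small constant is added to the usual sigmoid-prime term so the error signal does not vanish at saturated outputs; the constant's value and the exact form used in the paper are assumptions:

```python
# Hypothetical sketch of a zero-compensated sigmoid-prime term.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(y):
    return y * (1.0 - y)                  # standard form, ~0 at the extremes

def sigmoid_prime_compensated(y, c=0.1):
    return y * (1.0 - y) + c              # compensated form: never fully vanishes

y = sigmoid(6.0)                          # a nearly saturated unit
print(sigmoid_prime(y), sigmoid_prime_compensated(y))
```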

A Study on the Control of Recognition Performance and the Rehabilitation of Damaged Neurons in Multi-layer Perceptron (다층 퍼셉트론의 인식력 제어와 복원에 관한 연구)

  • 박인정;장호성
    • The Journal of Korean Institute of Communications and Information Sciences / v.16 no.2 / pp.128-136 / 1991
  • A neural network of the multi-layer perceptron type, trained by the error back-propagation learning rule, is generally used for the verification or clustering of similar types of patterns. When learning is completed, the network produces a constant output value for a given pattern. This paper shows that the intensity of a neuron's output can be controlled by a function that intensifies the excitatory or inhibitory interconnection coefficients between neurons in the output layer and those in the hidden layer. The value of the factor in this control function is derived from the known values of the neural network after learning is completed, and the amount by which an output-layer neuron's output increases for an arbitrary value of the factor is also derived. As applications, improved recognition performance for distorted patterns is introduced, the outputs of partially damaged neurons are managed first, and it is shown that the reduced recognition performance can be recovered.
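A minimal sketch of the control idea, assuming the hidden-to-output weights of a trained network are scaled by a factor to intensify excitatory connections; the factor, network size, and scaling rule here are illustrative assumptions:

```python
# Hypothetical sketch of controlling an output neuron's intensity after training.
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.random(8)                    # hidden-layer activations after training
w_out = rng.normal(size=8)                # hidden-to-output interconnection coefficients

def output(weights, hidden_act):
    return 1.0 / (1.0 + np.exp(-(weights @ hidden_act)))   # sigmoid output unit

factor = 1.5                              # control factor applied after learning
w_scaled = np.where(w_out > 0, w_out * factor, w_out)      # intensify excitatory links

print("original output:", output(w_out, hidden))
print("controlled output:", output(w_scaled, hidden))
```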

Protein Disorder Prediction Using Multilayer Perceptrons

  • Oh, Sang-Hoon
    • International Journal of Contents / v.9 no.4 / pp.11-15 / 2013
  • "Protein Folding Problem" is considered to be one of the "Great Challenges of Computer Science" and prediction of disordered protein is an important part of the protein folding problem. Machine learning models can predict the disordered structure of protein based on its characteristic of "learning from examples". Among many machine learning models, we investigate the possibility of multilayer perceptron (MLP) as the predictor of protein disorder. The investigation includes a single hidden layer MLP, multi hidden layer MLP and the hierarchical structure of MLP. Also, the target node cost function which deals with imbalanced data is used as training criteria of MLPs. Based on the investigation results, we insist that MLP should have deep architectures for performance improvement of protein disorder prediction.

A Learning Method of LQR Controller Using Jacobian (자코비안을 이용한 LQR 제어기 학습법)

  • Lim, Yoon-Kyu;Chung, Byeong-Mook
    • Journal of the Korean Society for Precision Engineering / v.22 no.8 s.173 / pp.34-41 / 2005
  • Generally, it is not easy to obtain a suitable controller for multivariable systems. If the modeling equation of the system can be found, it is possible to obtain LQR control as an optimal solution. This paper suggests an LQR learning method to design an LQR controller without the modeling equation. The proposed algorithm uses the same cost function of error and input energy as LQR, and the LQR controller is trained to reduce this function. In the training process, the Jacobian matrix, which indicates the converging direction of the controller, is used. The Jacobian expresses the relationship between output variations and input variations and can be found approximately by simple experiments. In simulations of a hydrofoil catamaran with multiple variables, it is confirmed that the LQR controller can be trained using the approximate Jacobian matrix instead of the modeling equation, and that this controller is not worse than the traditional LQR controller.
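A minimal sketch of the model-free tuning idea, assuming the LQR-style cost is measured from repeated "experiments" (here a hidden simulation), its sensitivity to the gains is approximated by finite differences, and the gains follow a gradient step; the plant, weights, and step sizes are illustrative assumptions:

```python
# Hypothetical sketch: tuning feedback gains from measured costs only.
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])    # plant matrices, unknown to the learner
B = np.array([0.0, 0.1])
Q, r = np.eye(2), 0.01                     # LQR-style weights on error and input energy

def experiment_cost(K, x0=(1.0, 1.0), steps=100):
    """Run one 'experiment' with gains K and return the accumulated cost."""
    x, J = np.array(x0), 0.0
    for _ in range(steps):
        u = -float(K @ x)                  # state feedback with the current gains
        J += x @ Q @ x + r * u * u
        x = A @ x + B * u
    return J

K = np.zeros(2)                            # initial gain vector
print("initial cost:", experiment_cost(K))
for _ in range(100):
    grad = np.zeros_like(K)
    for i in range(K.size):                # finite-difference sensitivity of the cost
        dK = np.zeros_like(K); dK[i] = 1e-3
        grad[i] = (experiment_cost(K + dK) - experiment_cost(K - dK)) / 2e-3
    K = K - 0.01 * grad                    # step the gains toward lower cost
print("learned gains:", K, "final cost:", experiment_cost(K))
```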

Fuzzy Gain Scheduling of Velocity PI Controller with Intelligent Learning Algorithm for Reactor Control

  • Kim, Dong-Yun;Seong, Poong-Hyun
    • Proceedings of the Korean Nuclear Society Conference / 1996.11a / pp.73-78 / 1996
  • In this study, we propose a fuzzy gain scheduler with an intelligent learning algorithm for reactor control. In the proposed algorithm, we use the gradient descent method to learn the rule bases of a fuzzy algorithm. These rule bases are learned toward minimizing an objective function, which is called the performance cost function. The objective of the fuzzy gain scheduler with an intelligent learning algorithm is the generation of adequate gains that minimize the error of the system. The condition of a plant generally changes over time; that is, the initial gains obtained through system analysis are no longer suitable for the changed plant, and new gains are needed that minimize the error stemming from the changed plant condition. In this paper, we apply this strategy to reactor control of a nuclear power plant (NPP), and the results are compared with those of a simple PI controller with fixed gains. The results show that the proposed algorithm is superior to the simple PI controller.
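A minimal sketch of the learning idea, assuming a small fuzzy scheduler that maps the tracking error to a proportional gain via triangular memberships, with the rule consequents adjusted by a gradient step on a quadratic performance cost; the plant, memberships, and rates are illustrative assumptions:

```python
# Hypothetical sketch: gradient descent on the consequents of a fuzzy gain scheduler.
import numpy as np

centers = np.array([-1.0, 0.0, 1.0])      # membership centres over the error range
consequents = np.array([1.0, 1.0, 1.0])   # rule outputs: the scheduled gains (learned)

def memberships(e):
    # triangular memberships of width 1.0, clipped at zero
    return np.clip(1.0 - np.abs(e - centers), 0.0, None)

def scheduled_gain(e):
    mu = memberships(e)
    return float(mu @ consequents / (mu.sum() + 1e-9))   # weighted-average defuzzification

a, b, setpoint, lr = 0.9, 0.1, 1.0, 0.05   # first-order plant x' = a*x + b*u
for episode in range(200):
    x, grad = 0.0, np.zeros_like(consequents)
    for _ in range(50):
        e = setpoint - x
        w = memberships(e) / (memberships(e).sum() + 1e-9)
        u = scheduled_gain(e) * e          # P control with the scheduled gain
        x = a * x + b * u
        # one-step gradient of 0.5*(setpoint - x)^2 w.r.t. each rule consequent
        grad += -(setpoint - x) * b * e * w
    consequents -= lr * grad / 50          # learn the rule base toward lower cost
print("learned scheduled gains:", consequents)
```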
