• Title/Abstract/Keywords: Dynamic Learning

Search results: 1,157 items (processing time: 0.026 s)

SCORM 기반의 동적인 시퀀스를 이용한 적응형 학습 시스템 (An Adaptive Learning System by using SCORM-Based Dynamic Sequencing)

  • 이종근;김준태;김형일
    • 정보처리학회논문지D
    • /
    • Vol. 13D, No. 3
    • /
    • pp.425-436
    • /
    • 2006
  • E-learning in which instruction follows a fixed, formalized procedure and then terminates cannot readily provide education suited to each learner's level. To address this, SCORM uses sequencing, which defines the learning procedure according to learning results, to provide instruction appropriate to a learner's level. Sequencing is typically designed by an instructor or content author, who encodes the learning program as rules. However, such fixed sequencing cannot reflect the characteristics of a learning group or of individual learners, and when the sequencing is badly designed, learners must carry out unnecessary re-study. To solve these problems, this paper proposes an automated learning-assessment system that applies dynamic sequencing. In dynamic sequencing, learners' assessment scores are fed back into the threshold score used by the sequencing rules, changing that threshold dynamically; as the threshold changes, the sequencing adapts to the level of the learning group or of individual learners. Several experiments show that the proposed system provides learning procedures suited to the level of the learning group and of individual learners.
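The threshold-adaptation idea in this abstract can be sketched in a few lines; the exponential moving-average update and the `DynamicSequencer` name are illustrative assumptions, not the authors' implementation.

```python
class DynamicSequencer:
    """Illustrative SCORM-style sequencer whose pass threshold adapts to
    the scores a learning group actually achieves (not the paper's code)."""

    def __init__(self, base_threshold=70.0, alpha=0.2):
        self.threshold = base_threshold  # current pass/fail cut score
        self.alpha = alpha               # how strongly new scores shift it

    def record_score(self, score):
        # Feed the learner's assessment score back into the threshold:
        # a strong cohort raises the bar, a weak one lowers it.
        self.threshold = (1 - self.alpha) * self.threshold + self.alpha * score

    def next_activity(self, score):
        # Sequencing rule: advance on pass, branch to remediation on fail.
        return "advance" if score >= self.threshold else "remediate"

seq = DynamicSequencer()
for s in [55, 60, 58]:            # scores from a weaker cohort
    seq.record_score(s)
print(round(seq.threshold, 1))    # → 64.1, drifted down toward the cohort
```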

A study on Indirect Adaptive Decentralized Learning Control of the Vertical Multiple Dynamic System

  • Lee, Soo-Cheol;Park, Seok-Sun;Lee, Jeh-Won
    • International Journal of Precision Engineering and Manufacturing
    • /
    • Vol. 7, No. 1
    • /
    • pp.62-66
    • /
    • 2006
  • Learning control develops controllers that learn to improve their performance at executing a given task, based on experience performing that specific task. In previous work, the authors presented the iterative precision of linear decentralized learning control based on a p-integrated learning method for vertical dynamic multiple systems. This paper develops an indirect decentralized learning control based on an adaptive control method. The original motivation of the learning control field was learning in robots doing repetitive tasks, such as assembly-line work. The paper starts with decentralized discrete-time systems and progresses to the robot application, modeling the robot as a time-varying linear system in the neighborhood of the nominal trajectory and using the usual robot controllers, which are decentralized, treating each link as if it were independent of any coupling with the other links. These techniques are demonstrated in a numerical simulation of a vertical dynamic robot, and the learning methods are shown to improve the iterative precision of each link.
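As a hedged illustration of the learning-control idea (a plain P-type iterative learning update, not the authors' p-integrated or indirect adaptive formulation), each decoupled link can be improved across repetitions as follows:

```python
import numpy as np

def p_type_ilc(plant, y_des, u0, gain=0.5, iterations=20):
    """P-type iterative learning control: after each repetition of the task,
    correct the input trajectory in proportion to the tracking error.
    Each link is treated independently, as in decentralized control."""
    u = u0.copy()
    for _ in range(iterations):
        y = plant(u)        # execute the repetitive task once
        e = y_des - y       # tracking error over the whole trajectory
        u = u + gain * e    # learning update: u_{k+1} = u_k + L * e_k
    return u

# Toy "link": a static-gain plant y = 2u stands in for one decoupled link.
plant = lambda u: 2.0 * u
y_des = np.array([1.0, 2.0, 3.0])
u = p_type_ilc(plant, y_des, u0=np.zeros(3))
print(np.round(plant(u) - y_des, 6))  # → [0. 0. 0.]
```

With plant gain 2 and learning gain 0.5 the error contraction factor is zero, so the toy example converges in one repetition; real links converge more gradually.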

Web 2.0 기반 e-러닝 콘텐츠 재구성 및 수준 진단 (Reconstruction of e-Learning Contents based on Web 2.0, and the Level Diagnosis)

  • 임양원;임한규
    • 한국콘텐츠학회논문지
    • /
    • Vol. 10, No. 7
    • /
    • pp.429-437
    • /
    • 2010
  • Recently, as web technologies and functions have shifted to a user-centered paradigm, research on e-learning design has turned to constructing dynamic learning content that supports learner participation and continuous learning. This paper describes a study on learner-centered, dynamic adjustment of learning-content difficulty, aimed at providing an efficient learning environment applicable to e-learning 2.0. To provide learner-centered content, we propose DLA (Dynamic Level Adjustment). The proposed system is intended as a guideline for adjusting and applying learning content that adapts well to changes in the environment, and as a basis for deeper follow-up research. Performance evaluation showed that the approach yields a dynamic learning-content model that can recognize learners' diverse learning patterns.

비선형 백스테핑 방식에 의한 차량 동력학의 적응-학습제어 (Adaptive-learning control of vehicle dynamics using nonlinear backstepping technique)

  • 이현배;국태용
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 1997년도 한국자동제어학술회의논문집; 한국전력공사 서울연수원; 17-18 Oct. 1997
    • /
    • pp.636-639
    • /
    • 1997
  • In this paper, a dynamic control scheme is proposed which not only compensates for the lateral and longitudinal dynamics but also deals with the yaw motion dynamics. Using the dynamic control technique with adaptive and learning algorithms together, the proposed controller is not only robust to disturbances and parameter uncertainties but can also learn the inverse dynamics model in steady state. Based on the proposed dynamic control scheme, a dynamic vehicle simulator is constructed to design and test various control techniques for four-wheel-steering vehicles.


동적시스템 제어를 위한 다단동적 뉴로-퍼지 제어기 설계 (Design of Multi-Dynamic Neuro-Fuzzy Controller for Dynamic Systems Control)

  • 조현섭;민진경
    • 한국산학기술학회:학술대회논문집
    • /
    • 한국산학기술학회 2007년도 춘계학술발표논문집
    • /
    • pp.150-153
    • /
    • 2007
  • The intent of this paper is to describe a neural network structure called the multi-dynamic neural network (MDNN) and to examine how it can be used in developing a learning scheme for computing robot inverse kinematic transformations. The architecture and learning algorithm of the proposed dynamic neural network structure, the MDNN, are described. Computer simulations demonstrate the effectiveness of the proposed learning scheme using the MDNN.


두개의 Extended Kalman Filter를 이용한 Recurrent Neural Network 학습 알고리듬 (A Learning Algorithm for a Recurrent Neural Network Based on Dual Extended Kalman Filter)

  • 송명근;김상희;박원우
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2004년도 학술대회 논문집 정보 및 제어부문
    • /
    • pp.349-351
    • /
    • 2004
  • The classical dynamic backpropagation learning algorithm suffers from slow learning speed and from the difficulty of determining the learning parameters. The Extended Kalman Filter (EKF) is used effectively as a state estimation method for nonlinear dynamic systems. This paper presents a learning algorithm using a Dual Extended Kalman Filter (DEKF) for a Fully Recurrent Neural Network (FRNN). The DEKF learning algorithm gives the minimum-variance estimate of the weights and of the hidden outputs. The proposed DEKF learning algorithm is applied to the system identification of a nonlinear SISO system and compared with the dynamic backpropagation learning algorithm.
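The weight-estimation half of a DEKF can be sketched as a standard EKF that treats the network weights as the state to be estimated; the toy one-neuron model and tuning constants below are assumptions, and the paper's second filter for the recurrent hidden outputs is omitted.

```python
import numpy as np

def ekf_weight_update(w, P, x, y, h, jac, R=0.01, Q=1e-5):
    """One EKF step over the weights, modeled as a random walk.
    (A dual EKF would run a second filter for the hidden states in parallel.)"""
    P = P + Q * np.eye(len(w))            # process noise on the weight walk
    H = jac(w, x).reshape(1, -1)          # measurement Jacobian dh/dw
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T / S                       # Kalman gain
    w = w + (K * (y - h(w, x))).ravel()   # correct weights with the innovation
    P = P - K @ H @ P                     # update weight covariance
    return w, P

# Toy one-neuron "network": y = tanh(w0*x + w1).
h = lambda w, x: np.tanh(w[0] * x + w[1])
jac = lambda w, x: (1 - np.tanh(w[0] * x + w[1]) ** 2) * np.array([x, 1.0])

rng = np.random.default_rng(0)
w_true = np.array([1.5, -0.5])
w, P = np.zeros(2), np.eye(2)
for _ in range(500):
    x = rng.uniform(-2, 2)
    y = h(w_true, x) + 0.01 * rng.normal()
    w, P = ekf_weight_update(w, P, x, y, h, jac)
print(np.round(w, 2))  # close to the true weights [1.5, -0.5]
```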


강화학습법을 이용한 유역통합 저수지군 운영 (Basin-Wide Multi-Reservoir Operation Using Reinforcement Learning)

  • 이진희;심명필
    • 한국수자원학회:학술대회논문집
    • /
    • 한국수자원학회 2006년도 학술발표회 논문집
    • /
    • pp.354-359
    • /
    • 2006
  • The analysis of large-scale water resources systems is often complicated by the presence of multiple reservoirs and diversions, the uncertainty of unregulated inflows and demands, and conflicting objectives. Reinforcement learning is presented herein as a new approach to solving the challenging problem of stochastic optimization of multi-reservoir systems. The Q-Learning method, one of the reinforcement learning algorithms, is used for generating integrated monthly operation rules for the Keum River basin in Korea. The Q-Learning model is evaluated by comparison with implicit stochastic dynamic programming and sampling stochastic dynamic programming approaches. Evaluation of the stochastic basin-wide operational models considered several options relating to the choice of hydrologic state and discount factors, as well as various stochastic dynamic programming models. The Q-Learning model outperforms the other models in handling the uncertainty of inflows.
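The Q-Learning rule underlying this line of work is the standard temporal-difference update; the toy two-level storage model below is an invented placeholder for illustration, not the Keum River system:

```python
import random

def q_learning(states, actions, reward, transition, episodes=2000,
               alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning: learn state-action values from delayed rewards via
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q = {(s, a): 0.0 for s in states for a in actions}
    rng = random.Random(0)
    for _ in range(episodes):
        s = rng.choice(states)
        for _ in range(12):  # e.g. one year of monthly release decisions
            a = (rng.choice(actions) if rng.random() < eps
                 else max(actions, key=lambda b: Q[(s, b)]))
            s2 = transition(s, a, rng)
            r = reward(s, a)
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                                  - Q[(s, a)])
            s = s2
    return Q

# Placeholder storage model: releasing from a full reservoir earns reward.
states, actions = ["low", "high"], ["store", "release"]
reward = lambda s, a: 1.0 if (s == "high" and a == "release") else 0.0
def transition(s, a, rng):
    return "high" if a == "store" or rng.random() < 0.5 else "low"

Q = q_learning(states, actions, reward, transition)
best = max(actions, key=lambda a: Q[("high", a)])
print(best)  # the learned policy releases when storage is high
```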


Reinforcement Learning Approach to Agents Dynamic Positioning in Robot Soccer Simulation Games

  • Kwon, Ki-Duk;Kim, In-Cheol
    • 한국시뮬레이션학회:학술대회논문집
    • /
    • 한국시뮬레이션학회 2001년도 The Seoul International Simulation Conference
    • /
    • pp.321-324
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the machine learning paradigm in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not successfully scale up to more complex environments, due to the intractably large space of states. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them in a more flexible way by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to resolving the problem of a large state space effectively, AMMQL can show higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents the details of its application to the dynamic positioning of robot soccer agents.
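The adaptive mediation step described here (weighting each module's Q-values by its contribution to rewards) might look like the following; the normalized weight-update rule is a plausible reading of the abstract, not the paper's exact formula.

```python
def mediate(q_values, weights):
    """Combine per-module Q-values for each action with adaptive weights,
    instead of the fixed equal-weight sum used by plain Modular Q-Learning."""
    actions = q_values[0].keys()
    return {a: sum(w * q[a] for w, q in zip(weights, q_values)) for a in actions}

def update_weights(weights, contributions, lr=0.1):
    """Shift weight toward modules whose recent reward contribution is larger
    (illustrative rule; normalized so the weights stay a convex combination)."""
    raw = [w + lr * c for w, c in zip(weights, contributions)]
    total = sum(raw)
    return [r / total for r in raw]

# Two hypothetical modules (attack / defend positioning) scoring three actions.
q_attack = {"forward": 0.9, "hold": 0.2, "back": 0.1}
q_defend = {"forward": 0.1, "hold": 0.3, "back": 0.8}
weights = [0.5, 0.5]
weights = update_weights(weights, contributions=[1.0, 0.2])  # attack paid off more
combined = mediate([q_attack, q_defend], weights)
print(max(combined, key=combined.get))  # → forward
```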


이족 보행 로봇의 반복 걸음새 제어를 위한 학습제어기의 구현 (Implementation of a Learning Controller for Repetitive Gait Control of Biped Walking Robot)

  • 임동철;오성남;국태용
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2005년도 학술대회 논문집 정보 및 제어부문
    • /
    • pp.594-596
    • /
    • 2005
  • This paper presents a learning controller for repetitive gait control of a biped robot. The learning control scheme consists of a feedforward learning rule and a linear feedback control input for stabilization of the learning system. The feasibility of learning control for biped robotic motion is shown via dynamic simulation and experimental results with a 24-DOF biped robot.


Q-learning for intersection traffic flow Control based on agents

  • 주선;정길도
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2009년도 정보 및 제어 심포지움 논문집
    • /
    • pp.94-96
    • /
    • 2009
  • In this paper, we present a Q-learning method for adaptive traffic signal control on the basis of multi-agent technology. The structure is composed of six phase agents and one intersection agent. A wireless communication network provides the possibility of cooperation among the agents. As a kind of reinforcement learning, Q-learning is adopted as the algorithm of the control mechanism, which can acquire optimal control strategies from delayed rewards; furthermore, we adopt a dynamic learning method instead of a static method, which is more practical. Simulation results indicate that it is more effective than a traditional signal system.
