• Title/Abstract/Keyword: Learning state

Search results: 1,597 (processing time: 0.028 s)

e-Learning 환경에서 영어 말하기와 듣기 학습자의 몰입경험(flow) 척도개발에 관한 탐색적 연구 (A Study on the Flow State Scale of English Speaking and Listening in the e-Learning Environment)

  • 강정화;한금옥;신동로
    • 디지털융복합연구 / Vol.6 No.3 / pp.13-22 / 2008
  • The purpose of this study is to explore the 'flow experience' of those studying English speaking and listening in the e-Learning environment. The exploration of flow experience in this study is based on a literature review of Csikszentmihalyi's flow models and related studies. Following Csikszentmihalyi's original theory, many studies of flow experience have focused on arts, leisure, and sports; only recently has the theory been adapted to the educational field. It is, moreover, the e-Learning environment, rather than the traditional face-to-face learning environment, that lacks an adequate flow-state measurement scale. It is therefore important, as a stepping stone toward improving learners' satisfaction and achievement, to develop a flow-state scale for those who study English speaking and listening with a cyber native speaker in the e-Learning environment.

Robustness of 2nd-order Iterative Learning Control for a Class of Discrete-Time Dynamic Systems

  • 김용태
    • 한국지능시스템학회논문지 / Vol.14 No.3 / pp.363-368 / 2004
  • In this paper, the robustness of a 2nd-order iterative learning control (ILC) method for a class of linear and nonlinear discrete-time dynamic systems is studied. The 2nd-order ILC method uses a PD-type learning algorithm based on both time-domain and iteration-domain performance. It is proved that the 2nd-order ILC method is robust in the presence of state disturbances, measurement noise, and initial state error. In the absence of state disturbances, measurement noise, and initialization error, convergence of the 2nd-order ILC algorithm is guaranteed. A numerical example shows the robustness and convergence properties for different learning parameters.
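The flavor of such a second-order ILC scheme can be sketched on a toy scalar plant. This is a generic illustration under invented assumptions (the plant, trajectory, and all gains are made up for the example), not the paper's algorithm; it blends the inputs and errors of the two most recent iterations rather than reproducing the paper's exact PD-type law:

```python
import math

def simulate(u, a=0.3, b=1.0, x0=0.0):
    # discrete-time plant x(t+1) = a*x(t) + b*u(t), output y = x
    x, y = x0, []
    for ut in u:
        x = a * x + b * ut
        y.append(x)
    return y

N = 50
yd = [math.sin(0.1 * (t + 1)) for t in range(N)]  # desired output trajectory

u = [0.0] * N
u_hist, e_hist = [], []
for j in range(50):  # iteration domain
    y = simulate(u)
    e = [yd[t] - y[t] for t in range(N)]
    u_hist.append(list(u))
    e_hist.append(e)
    if j == 0:
        # bootstrap the 2nd-order law with a plain P-type update
        u = [u[t] + 0.8 * e[t] for t in range(N)]
    else:
        u = [0.9 * u_hist[-1][t] + 0.1 * u_hist[-2][t]     # input memory over two iterations
             + 0.8 * e_hist[-1][t] + 0.05 * e_hist[-2][t]  # error terms from two iterations
             for t in range(N)]

first_err = max(abs(v) for v in e_hist[0])
final_err = max(abs(v) for v in e_hist[-1])
```

In this sketch the iteration-domain recursion contracts (the weights on past inputs sum to one and the learning gains dominate the plant's memory), so the tracking error shrinks across iterations; the paper's contribution is proving that this kind of convergence persists under disturbances, noise, and initial-state error.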

상태 공간 압축을 이용한 강화학습 (Reinforcement Learning Using State Space Compression)

  • 김병천;윤병주
    • 한국정보처리학회논문지 / Vol.6 No.3 / pp.633-640 / 1999
  • Reinforcement learning learns through trial-and-error interaction with a dynamic environment. In such environments, reinforcement learning methods such as Q-learning and TD (Temporal Difference) learning therefore learn faster than conventional stochastic learning methods. However, because many proposed reinforcement learning algorithms deliver a reinforcement value only when the learning agent reaches its goal state, most of them converge to the optimal solution too slowly. In this paper, we present the COMREL (COMpressed REinforcement Learning) algorithm for quickly finding the shortest path in a maze environment: it selects candidate states that can guide the shortest path in a compressed maze environment and learns only those candidate states. Comparing COMREL with the existing Q-learning and Prioritized Sweeping algorithms shows that the learning time is shortened considerably.
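As a baseline for what COMREL is compared against, plain tabular Q-learning on a small maze looks like the following. This is a generic sketch, not the paper's code: the 4x4 grid, goal-only reward, and learning parameters are all invented for illustration, and the grid has no walls, so any 6-move monotone route is a shortest path.

```python
import random

random.seed(0)
N = 4                                   # 4x4 grid, start (0,0), goal (3,3)
GOAL = (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
Q = {((r, c), a): 0.0 for r in range(N) for c in range(N) for a in range(4)}

def step(s, a):
    dr, dc = ACTIONS[a]
    nxt = (min(max(s[0] + dr, 0), N - 1), min(max(s[1] + dc, 0), N - 1))
    return nxt, (1.0 if nxt == GOAL else 0.0)   # reward only at the goal state

alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(2000):                   # episodes
    s = (0, 0)
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.randrange(4) if random.random() < eps else \
            max(range(4), key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in range(4)) - Q[(s, a)])
        s = s2

# greedy rollout after learning: the shortest path takes 6 moves
s, steps = (0, 0), 0
while s != GOAL and steps < 20:
    s, _ = step(s, max(range(4), key=lambda x: Q[(s, x)]))
    steps += 1
```

COMREL's point is that compressing the maze's state space shrinks the table this loop must fill; the baseline sketch above learns over all 16 states.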

Modern Problems And Prospects Of Distance Educational Technologies

  • Mykolaiko, Volodymyr;Honcharuk, Vitalii;Gudmanian, Artur;Kharkova, Yevdokia;Kovalenko, Svitlana;Byedakova, Sofiia
    • International Journal of Computer Science & Network Security / Vol.22 No.9 / pp.300-306 / 2022
  • This article presents a theoretical analysis and synthesis of the prospects for the development of distance learning in Ukraine. It considers the main topical problems of distance education in Ukraine, analyzes the principal factors that hinder the introduction of distance learning, and draws attention to the need to raise the level of computer literacy among Ukrainian educators and to form a modern methodology of distance learning, in particular a single, systematic, national approach to organization, coordination, and control in this area. Research methods: analytical method, structural-functional analysis, phenomenological method, content analysis, philosophical reflection, and sociological methods (questionnaire, interview).

중학생의 일기변화 관련 개념 지식상태와 교수-학습 효과 (Analysis of the Knowledge State of Concepts Associated with Weather Changes of Middle School Students and Teaching-Learning Effects)

  • 윤마병
    • 과학교육연구지 / Vol.35 No.2 / pp.230-239 / 2011
  • In this study, a concept test for the weather-change unit in middle school was developed, the hierarchy of weather-change concepts was analyzed with the knowledge state analysis method, and the instructional effects of teaching according to individual learners' knowledge states and the concept hierarchy were examined. The hierarchical structure of middle school students' knowledge states for weather-change concepts was, in order, 'humidity → air mass → cloud → precipitation → front → weather'. There were cases in which individual learners who obtained the same score on the concept test, and thus were presumed to have similar learning ability, nevertheless showed different knowledge states. In other words, the degree to which their knowledge states were structured differed, which suggests that different teaching-learning prescriptions should be applied depending on the structure of each learner's knowledge state. Analyzing a learner's knowledge state can thus serve both as an individualized learning prescription and as an assessment of prerequisite learning. To examine the teaching-learning effect of taking the concept hierarchy of learners' knowledge states into account, such instruction was compared with instruction following the textbook's order of presentation: concept achievement was significantly higher (p < .05) when instruction reflected the learners' knowledge states. This shows that when teachers teach the weather-change unit, more effective teaching-learning can be achieved by identifying learners' knowledge states and reordering the textbook content on the basis of the curriculum.
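The abstract's point that equal scores can hide different knowledge states is easy to make concrete. The sketch below is illustrative only, not the study's analysis method: it encodes the reported chain (humidity → air mass → cloud → precipitation → front → weather) as a linear prerequisite hierarchy and checks whether a response pattern is downward-closed, i.e., a valid knowledge state; the two student patterns are invented.

```python
# the concept hierarchy reported in the study, lowest prerequisite first
HIERARCHY = ["humidity", "air mass", "cloud", "precipitation", "front", "weather"]

def is_knowledge_state(mastered):
    """A mastered-concept set is a valid knowledge state for a linear
    hierarchy iff it is downward-closed: every mastered concept's
    prerequisites are also mastered (valid states are chain prefixes)."""
    for i, concept in enumerate(HIERARCHY):
        if concept in mastered and not set(HIERARCHY[:i]) <= set(mastered):
            return False
    return True

# two hypothetical students with the same score (3 concepts) but different states
student_a = {"humidity", "air mass", "cloud"}   # a prefix of the chain: valid
student_b = {"humidity", "cloud", "front"}      # same score, gaps below "cloud" and "front"
```

For a strict chain of six concepts, only the seven prefixes are valid knowledge states, which is why two equal-scoring students can call for different teaching-learning prescriptions.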

Torque Ripple Minimization of PMSM Using Parameter Optimization Based Iterative Learning Control

  • Xia, Changliang;Deng, Weitao;Shi, Tingna;Yan, Yan
    • Journal of Electrical Engineering and Technology / Vol.11 No.2 / pp.425-436 / 2016
  • In this paper, a parameter-optimization-based iterative learning control strategy is presented for permanent magnet synchronous motor (PMSM) control. The paper analyzes the mechanism by which iterative learning control suppresses PMSM torque ripple and discusses the impact of the controller parameters on the steady-state and dynamic performance of the system. Based on this analysis, an optimization problem is constructed, and an expression for the optimal controller parameter is obtained so that the parameter can be adjusted online. Experiments are carried out on a 5.2 kW PMSM. The results show that the proposed parameter-optimization-based iterative learning control achieves lower torque ripple in steady-state operation and a short settling time in dynamic response, satisfying the demands on both the steady-state and dynamic performance of the speed-regulation system.

Theoretical And Methodological Principles Of Distance Learning: Priority Direction Of Education

  • Fabian, Myroslava;Tur, Oksana;Yablonska, Olha;Rumiantseva, Alla;Oliinyk, Halyna;Sukhlenko, Iryna
    • International Journal of Computer Science & Network Security / Vol.22 No.6 / pp.251-255 / 2022
  • The article considers the state and trends of distance learning in the world and in Ukraine, identifies the main differences between distance education and other forms of education, and analyzes the state of the global market for educational services provided via the Internet. Important features and characteristics of distance learning are described, together with examples of its organization in higher education and statistics on the development of distance learning in Ukraine. The main obstacles to implementing the distance education system in Ukraine, and the factors that hinder the development of this promising form of education, are outlined.

An iterative learning and adaptive control scheme for a class of uncertain systems

  • Kuc, Tae-Yong;Lee, Jin-S.
    • 제어로봇시스템학회:학술대회논문집 / Proceedings of the 1990 Korean Automatic Control Conference (International Session); KOEX, Seoul; 26-27 Oct. 1990 / pp.963-968 / 1990
  • An iterative learning control scheme for tracking control of a class of uncertain nonlinear systems is presented. By introducing a model reference adaptive controller into the learning control structure, zero tracking error can be achieved for an unknown system even when an upper bound on the uncertainty in the system dynamics is not known a priori. The adaptive controller pulls the state of the system to the state of the reference model via control-gain adaptation at each iteration, while the learning controller attracts the model state to the desired one by synthesizing a suitable control input over the iterations. The controller role transitions gradually from the adaptive controller to the learning controller as learning proceeds. Another feature of this scheme is that robustness to bounded input disturbances is guaranteed by the linear controller in the feedback loop of the learning control structure. In addition, since the proposed controller does not require any knowledge of the dynamic parameters of the system, it remains flexible in uncertain environments, and its computational simplicity makes the learning scheme more practical. Computer simulation results for the dynamic control of a two-axis robot manipulator show good performance of the scheme in relatively high-speed trajectory tracking.

Aspect-based Sentiment Analysis of Product Reviews using Multi-agent Deep Reinforcement Learning

  • M. Sivakumar;Srinivasulu Reddy Uyyala
    • Asia pacific journal of information systems / Vol.32 No.2 / pp.226-248 / 2022
  • Existing models for sentiment analysis of product reviews learn from past data, and new data is labeled on the basis of that training; the new data is never used by the existing system when making a decision. The proposed Aspect-based multi-agent Deep Reinforcement learning Sentiment Analysis (ADRSA) model learns from its very first data without the help of any training dataset and labels a sentence with an aspect category and a sentiment polarity. It keeps learning from new data and updates its knowledge to improve its intelligence, so the decisions of the proposed system change over time as new data arrives. As a result, the accuracy of sentiment analysis using deep reinforcement learning improves over supervised and unsupervised learning methods, and the sentiments of premium customers on a particular site can be surfaced to other customers effectively. A dynamic environment with a strong knowledge base helps the system remember sentences, and using the State Action Reward State Action (SARSA) algorithm together with the Bidirectional Encoder Representations from Transformers (BERT) model improves the accuracy of the proposed system compared to state-of-the-art methods.
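The SARSA algorithm named above is the standard on-policy temporal-difference rule Q(s,a) ← Q(s,a) + α[r + γQ(s',a') − Q(s,a)]. The sketch below is a toy illustration of that rule only, far removed from the paper's BERT-based system: a deterministic 3-step chain with a single "advance" action per state, where the learned values should approach γ², γ, and 1 (with γ = 0.9: 0.81, 0.9, 1).

```python
# tabular SARSA on a chain s0 -> s1 -> s2 -> terminal, with reward 1 on
# entering the terminal state; one action per state makes the on-policy
# choice of the next action a' trivial
alpha, gamma = 0.5, 0.9
Q = [0.0, 0.0, 0.0]          # Q(s, advance) for s0..s2

for _ in range(200):          # episodes
    s = 0
    while s < 3:
        s2 = s + 1
        r = 1.0 if s2 == 3 else 0.0
        q_next = Q[s2] if s2 < 3 else 0.0   # Q(s', a') of the on-policy action
        Q[s] += alpha * (r + gamma * q_next - Q[s])
        s = s2
```

In the paper's setting, the state would instead be built from BERT sentence representations and the actions would be aspect/polarity labels; the update rule itself is unchanged.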

강화학습을 이용한 n-Queen 문제의 수렴속도 향상 (The Improvement of Convergence Rate in n-Queen Problem Using Reinforcement learning)

  • 임수연;손기준;박성배;이상조
    • 한국지능시스템학회논문지 / Vol.15 No.1 / pp.1-5 / 2005
  • The goal of reinforcement learning is to maximize the reward received from the environment, and a reinforcement learning agent learns through trial-and-error interaction with an external environment. Q-Learning, a representative reinforcement learning algorithm, is a form of TD (Temporal Difference) learning that exploits differences in estimates over time; it obtains an optimal policy by repeatedly experiencing evaluation values for every state-action pair in the state space. In this paper, the n-Queen problem is chosen as the application of reinforcement learning, and Q-Learning is used as the solution algorithm. Comparative experiments between existing methods for solving the n-Queen problem and the proposed method show that the reinforcement learning approach reduces the number of state transitions required to reach the goal and thus converges to the optimal solution faster.
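A minimal Q-Learning formulation of the n-Queen problem can be sketched as follows, under assumptions the abstract does not spell out: queens are placed column by column, the state is the tuple of rows chosen so far, an episode fails on the first attacking pair, and a full conflict-free placement is rewarded. Board size, rewards, and learning parameters are invented for the example.

```python
import random

random.seed(1)
N = 4  # board size (4-queens has exactly two solutions)

def conflicts(rows):
    # number of attacking pairs for queens at (column i, row rows[i])
    return sum(1 for i in range(len(rows)) for j in range(i + 1, len(rows))
               if rows[i] == rows[j] or abs(rows[i] - rows[j]) == j - i)

Q = {}                                  # sparse table over (state, action)
alpha, gamma, eps = 0.5, 0.95, 0.3

def qval(s, a):
    return Q.get((s, a), 0.0)

solution = None
for _ in range(5000):                   # episodes
    s = ()
    while len(s) < N:
        # epsilon-greedy choice of a row for the next column
        a = random.randrange(N) if random.random() < eps else \
            max(range(N), key=lambda x: qval(s, x))
        s2 = s + (a,)
        if conflicts(s2) > 0:           # attacking pair: fail and stop
            r, target, done = -1.0, 0.0, True
        elif len(s2) == N:              # full conflict-free placement
            r, target, done = 1.0, 0.0, True
            solution = s2
        else:
            r, done = 0.0, False
            target = max(qval(s2, x) for x in range(N))
        Q[(s, a)] = qval(s, a) + alpha * (r + gamma * target - qval(s, a))
        if done:
            break
        s = s2
```

The negative reward on conflicting placements drives the greedy policy away from dead ends, which is exactly the reduction in state transitions the abstract reports relative to uninformed search.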