• Title/Summary/Keyword: 학습 경로 (learning path)

Optimum Evacuation Route Calculation Using AI Q-Learning (AI기법의 Q-Learning을 이용한 최적 퇴선 경로 산출 연구)

  • Kim, Won-Ouk;Kim, Dae-Hee;Youn, Dae-Gwun
    • Journal of the Korean Society of Marine Environment & Safety / v.24 no.7 / pp.870-874 / 2018
  • In the worst maritime accidents, people must abandon ship, but ship structures are narrow and complex and operations take place on rough seas, so escape is not easy. In particular, passengers on cruise ships are untrained and varied, making evacuation prospects worse. In such cases, the evacuation management of the crew plays a very important role. When a rescuer enters a ship in distress to conduct rescue activities, the zones that offer the most effective entry should be examined. Generally, crew and rescuers take the shortest route, but if an accident occurs along the shortest route, it is necessary to select the second-best alternative. To address this situation, this study calculates evacuation routes using Q-Learning, a reinforcement learning technique from machine learning. Reinforcement learning is one of the most important functions of artificial intelligence and is currently used in many fields. Most evacuation analysis programs developed so far use the shortest-path search method; for this reason, this study explored optimal paths using reinforcement learning instead. In the future, machine learning techniques will be applicable to various marine-related industries, for purposes such as selecting optimal routes for autonomous vessels and avoiding risk.
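
The Q-learning formulation described above can be sketched as a small tabular agent on a toy deck grid. The layout, rewards, and hyperparameters below are illustrative assumptions, not the paper's model; the blocked cells stand in for an accident zone along the shortest route.

```python
import random

# Hypothetical 4x4 deck layout: 0 = open, 1 = blocked; exit at (3, 3).
GRID = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
EXIT, N = (3, 3), 4
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < N and 0 <= c < N) or GRID[r][c]:
        return state, -1.0           # bumped a wall or a blocked zone
    if (r, c) == EXIT:
        return (r, c), 10.0          # reached the exit / muster point
    return (r, c), -0.1              # small step cost favors short routes

def train(episodes=3000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    random.seed(seed)
    Q = {(r, c): [0.0] * 4 for r in range(N) for c in range(N)}
    for _ in range(episodes):
        s = (0, 0)
        while s != EXIT:
            a = (random.randrange(4) if random.random() < eps
                 else max(range(4), key=lambda i: Q[s][i]))
            s2, reward = step(s, ACTIONS[a])
            Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

def greedy_path(Q, start=(0, 0), limit=20):
    path, s = [start], start
    while s != EXIT and len(path) < limit:
        a = max(range(4), key=lambda i: Q[s][i])
        s, _ = step(s, ACTIONS[a])
        path.append(s)
    return path

print(greedy_path(train()))
```

Because the learned Q-table values every state-action pair rather than a single precomputed route, blocking a different cell and retraining yields the second-best alternative route automatically.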

A Path Planning for Robot Manipulator using CMACRRT (CMACRRT를 이용한 로봇 매뉴플레이터 경로계획)

  • O Gyeong-Se;Kim Eun-Tae
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.05a / pp.223-226 / 2006
  • Path planning is one of the important problems in manipulation. RRT (Rapidly-exploring Random Tree) was recently proposed as a path planning algorithm; it can plan obstacle-avoiding paths faster than existing algorithms. Conventional path planning algorithms replan repeatedly for each new situation. To improve on this, we propose CMACRRT, which combines RRT with CMAC, a model that imitates the structure of the human cerebellum. CMAC memorizes the paths generated by RRT together with the situations that produced them, so that those paths can be reused in similar situations. In this way, for situations already learned through CMAC, an existing path can be used without running RRT again.

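The RRT half of the proposed CMACRRT can be sketched as follows; the CMAC memory that caches and reuses paths is omitted, and the 2D workspace, single circular obstacle, and parameters are illustrative assumptions rather than the paper's manipulator setup (collision is checked at tree nodes only, a deliberate simplification).

```python
import math, random

# Minimal 2D RRT sketch: tree grows from START toward random samples,
# with a 10% bias toward GOAL; one circular obstacle blocks the direct line.
OBST = ((5.0, 5.0), 1.5)            # (center, radius)
START, GOAL, STEP = (1.0, 1.0), (9.0, 9.0), 1.0

def collides(p):
    (cx, cy), r = OBST
    return math.hypot(p[0] - cx, p[1] - cy) <= r

def rrt(max_iters=2000, seed=1):
    random.seed(seed)
    parent = {START: None}
    for _ in range(max_iters):
        sample = GOAL if random.random() < 0.1 else \
                 (random.uniform(0, 10), random.uniform(0, 10))
        near = min(parent, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        t = min(1.0, STEP / d)       # steer at most STEP toward the sample
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        if collides(new):
            continue
        parent[new] = near
        if math.dist(new, GOAL) < STEP:   # close enough: walk back to START
            path, node = [], new
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
    return None

print(len(rrt() or []), "waypoints")
```

CMACRRT's contribution would sit on top of this: before calling `rrt()`, look up the current situation in the CMAC memory and return a stored path if a similar situation has been seen.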

Augmented Reality-based Billiards Training System (AR을 이용한 당구 학습 시스템)

  • Kang, Seung-Woo;Choi, Kang-Sun
    • Journal of Practical Engineering Education / v.12 no.2 / pp.309-319 / 2020
  • Billiards is a fun and popular sport, but both route planning and cueing prevent beginners from becoming skillful. A beginner in billiards requires constant concentration and training to reach the right level, but without the right motivating factors it is easy to lose interest. This study aims to spark interest in billiards and accelerate learning by providing billiard path prediction and visualization on a highly immersive augmented reality platform that combines a stereo camera and a VR headset. For implementation, the placement of the billiard balls is recognized through OpenCV image processing, and physics simulation, path search, and visualization are performed in the Unity Engine. As a result, accurate path prediction can be achieved. This made it possible for beginners to shed the psychological burden of planning the path, focus only on accurate cueing, and gradually increase their proficiency by getting used to the paths suggested by the algorithm over a long period. We confirm that the proposed AR billiards system is remarkably effective as a learning-assistance tool.

Real-Time Path Planning for Mobile Robots Using Q-Learning (Q-learning을 이용한 이동 로봇의 실시간 경로 계획)

  • Kim, Ho-Won;Lee, Won-Chang
    • Journal of IKEEE / v.24 no.4 / pp.991-997 / 2020
  • Reinforcement learning has been applied mainly to sequential decision-making problems. In recent years especially, reinforcement learning combined with neural networks has brought successful results in previously unsolved fields. However, reinforcement learning using deep neural networks has the disadvantage of being too complex for immediate use in the field. In this paper, we implemented a path planning algorithm for mobile robots using Q-learning, one of the easiest reinforcement learning algorithms to apply. Since generating Q-tables in advance has obvious limitations, we used real-time Q-learning to update the Q-table on the fly. By adjusting the exploration strategy, we were able to obtain the learning speed required for real-time Q-learning. Finally, we compared the performance of real-time Q-learning and DQN.
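
The core of the approach, a Q-table updated step by step while the robot moves, with an exploration schedule tuned for learning speed, can be sketched as follows. The decaying ε-greedy schedule, the parameters, and the toy corridor demo are illustrative assumptions, not the authors' settings.

```python
import random

def q_update(Q, s, a, reward, s2, alpha=0.1, gamma=0.95):
    """One real-time Q-learning step, applied as transitions occur."""
    Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])

def choose_action(Q, s, episode, eps0=1.0, decay=0.995, eps_min=0.05):
    """Epsilon-greedy with decay: explore heavily early, exploit later."""
    eps = max(eps_min, eps0 * decay ** episode)
    if random.random() < eps:
        return random.randrange(len(Q[s]))
    return max(range(len(Q[s])), key=lambda a: Q[s][a])

def demo(episodes=300, seed=0):
    # Tiny corridor: states 0..4, action 0 = left, 1 = right, goal at 4.
    random.seed(seed)
    Q = {s: [0.0, 0.0] for s in range(5)}
    for ep in range(episodes):
        s = 0
        while s != 4:
            a = choose_action(Q, s, ep)
            s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
            q_update(Q, s, a, 1.0 if s2 == 4 else -0.01, s2)
            s = s2
    return Q
```

Tuning `decay` and `eps_min` is the kind of exploration-strategy adjustment that trades early coverage against the fast convergence real-time operation demands.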

The Case Study for Path Selection Verification of IGP Routing Protocol (IGP 라우팅 프로토콜의 경로선택 검증을 위한 구현 사례)

  • Kim, No-Whan
    • Journal of the Korea Society of Computer and Information / v.19 no.9 / pp.197-204 / 2014
  • RIP, EIGRP, and OSPF are interior gateway protocols for exchanging routing information among routers within an AS (Autonomous System). Various path selection methods based on their metrics have been studied recently, but there are few examples in which the theory learners understand is verified in practice. The best path is determined by calculating a cost value based on the relevant topology for each routing protocol. After implementing a virtual network, we confirmed that the results of tracking and verifying each routing protocol's path selection are consistent with the best path. If the methods suggested in this paper are applied properly, the path selection process of each routing protocol can be understood systematically, and outstanding learning results can be expected.
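
The cost-based selection the abstract describes can be illustrated for OSPF, whose link cost is conventionally the reference bandwidth (100 Mbps by default on Cisco routers) divided by the interface bandwidth, with the best path minimizing the summed cost. The four-router topology below is a hypothetical example, not one from the paper.

```python
import heapq

REF_BW = 100_000  # reference bandwidth in kbps (100 Mbps)

def ospf_cost(link_bw_kbps):
    # Cisco-style default: ref / bandwidth, integer, floor of 1
    return max(1, REF_BW // link_bw_kbps)

# adjacency list: node -> [(neighbor, bandwidth in kbps), ...]
TOPOLOGY = {
    "R1": [("R2", 100_000), ("R3", 10_000)],   # FastEthernet vs 10 Mbps
    "R2": [("R1", 100_000), ("R4", 1_544)],    # T1 link toward R4
    "R3": [("R1", 10_000), ("R4", 100_000)],
    "R4": [("R2", 1_544), ("R3", 100_000)],
}

def best_path(src, dst):
    """Dijkstra over summed OSPF link costs."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, bw in TOPOLOGY[node]:
            if nbr not in seen:
                heapq.heappush(heap, (cost + ospf_cost(bw), nbr, path + [nbr]))
    return None

print(best_path("R1", "R4"))
```

Here the hop-count-shortest route R1-R2-R4 loses to R1-R3-R4 because the slow T1 link carries cost 64; tracing such results in a virtual lab and matching them against the computed best path is exactly the verification exercise the paper proposes.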

A Basic Research on the Development and Performance Evaluation of Evacuation Algorithm Based on Reinforcement Learning (강화학습 기반 피난 알고리즘 개발과 성능평가에 관한 기초연구)

  • Kwang-il Hwang;Byeol Kim
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2023.05a / pp.132-133 / 2023
  • The safe evacuation of people during disasters is of the utmost importance. Various life-safety evacuation simulation tools have been developed and deployed, most relying on algorithms that analyze maps to extract the shortest path and guide agents along predetermined routes. While effective for predicting evacuation routes under stable disaster conditions and short timeframes, this approach falls short in dynamic situations where the disaster scenario constantly changes. Because existing algorithms struggle to respond to such scenarios, a more adaptive evacuation route algorithm is needed, and artificial intelligence technology based on reinforcement learning holds the potential to provide one. As a fundamental step in that development, this study evaluates whether an evacuation algorithm developed by reinforcement learning satisfies the performance conditions for evacuation simulation tools required by IMO MSC.1/Circ.1533.


Determining Whether to Enter a Hazardous Area Using Pedestrian Trajectory Prediction Techniques and Improving the Training of Small Models with Knowledge Distillation (보행자 경로 예측 기법을 이용한 위험구역 진입 여부 결정과 Knowledge Distillation을 이용한 작은 모델 학습 개선)

  • Choi, In-Kyu;Lee, Young Han;Song, Hyok
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.9 / pp.1244-1253 / 2021
  • In this paper, we propose a method for predicting in advance whether pedestrians will enter a hazardous area, using a pedestrian trajectory prediction method and an efficient simplification of the trajectory prediction network. In addition, we propose applying KD (Knowledge Distillation) to a small network for real-time operation in an embedded environment. Using the correlation between the predicted future paths and the hazard zones, we decide whether entry will occur, and we apply efficient KD when training the small network to minimize performance degradation. Experimentally, the model with the proposed simplification improved speed by 37.49% compared to the existing model, at the cost of a slight decrease in accuracy. Training a small network with an initial accuracy of 91.43% using KD improved its accuracy to 94.76%.
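
The KD objective the abstract refers to can be sketched in its classic Hinton-style form: a KL divergence between temperature-softened teacher and student distributions over class logits, scaled by T² as in the original formulation. This is a pure-Python illustration with an assumed temperature; the paper's actual distillation setup may differ.

```python
import math

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T yields softer distributions."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions.
    The T*T factor keeps gradient magnitudes comparable across T."""
    p = softmax(teacher_logits, T)   # soft targets from the large model
    q = softmax(student_logits, T)   # small model's softened prediction
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl
```

In training, this distillation term is typically mixed with the ordinary cross-entropy on hard labels, which is how the small embedded network can recover most of the accuracy the simplification gave up.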

Development of Reinforcement Learning-based Obstacle Avoidance toward Autonomous Mobile Robots for an Industrial Environment (산업용 자율 주행 로봇에서의 격자 지도를 사용한 강화학습 기반 회피 경로 생성기 개발)

  • Yang, Jeong-Yean
    • The Journal of the Korea Contents Association / v.19 no.3 / pp.72-79 / 2019
  • Autonomous locomotion requires two essential functionalities: mapping, which builds and updates maps from uncertain position information and measured sensor inputs, and localization, which finds positional information using the inaccurate map and the sensor information. In addition, obstacle detection, avoidance, and path design are required for autonomous locomotion, combining probabilistic methods based on uncertain locations. Sensory inputs measured by a metric-based scanner have difficulty distinguishing moving obstacles, such as humans, from static objects, such as walls, in a given environment. This paper proposes a low-resolution grid map combined with reinforcement learning, compared against a conventional recognition method, for detecting static and moving objects and generating obstacle-avoiding paths. Finally, the proposed method is verified with experimental results.

Does Science Motivation Lead to Higher Achievement, or Vice Versa?: Their Cross-Lagged Effects and Effects on STEM Career Motivation (과학 학습 동기가 높은 학생이 과학 학업 성취도가 높아지는가, 또는 그 역인가? -양자가 지닌 교차지연 효과 및 이공계 진로 동기에 미치는 효과-)

  • Lee, Gyeong-Geon;Mun, Seonyeong;Han, Moonjung;Hong, Hun-Gi
    • Journal of The Korean Association For Science Education / v.42 no.3 / pp.371-381 / 2022
  • This study causally investigates whether high school students with high science learning motivation come to achieve more, or vice versa, and also how those two factors affect STEM career motivation. The participants were first-year students at a high school in Seoul. We surveyed their science learning motivation three times at equal intervals during the fall semester of 2021, and surveyed STEM career motivation once, in the third period. We collected data from 171 students along with their mid-term and final exam scores, from which we constructed and fitted an autoregressive cross-lagged model. The model shows high measurement stability and good fit indices. All the autoregressive and cross-lagged paths were statistically significant; however, the standardized regression coefficients were larger on the path from motivation to achievement than on the opposite path. Only science learning motivation, not achievement, showed a significant direct effect on STEM career motivation. For indirect effects, the first science learning motivation measure affected the final exam score and STEM career motivation, and the final exam score affected STEM career motivation; however, the final exam score did not have a significant total effect on STEM career motivation. The results show a reciprocal, cyclic causality between science learning motivation and achievement, with the effect of motivation being the larger of the two, and strongly reaffirm the importance of science learning motivation. Instructional implications for strengthening science learning motivation throughout a semester are discussed, and a study of the longitudinal effects of high school students' science learning motivation and achievement on their future STEM vocational lives is suggested.