• Title/Summary/Keyword: Q러닝 (Q-learning)

Search Results: 60, Processing Time: 0.026 seconds

A Case Study of Flipped Learning of Cooking Practice Subject of University Students (대학생 조리실무 교과목의 플립드러닝(Flipped learning) 적용사례 연구)

  • Kim, Hak-Ju;Kim, Chan-Woo
    • The Journal of the Korea Contents Association / v.20 no.9 / pp.129-139 / 2020
  • This study applied the flipped learning method to a cooking practice subject and analyzed the types of subjective perception among college students majoring in culinary arts, in order to improve the educational efficiency of cooking-related classes. To study the students' subjective perceptions, we used Q methodology to identify the common structures in their subjective attitudes and perceptions; the analysis yielded four types. Type 1 (N=5): problem-solving ability effect; Type 2 (N=6): self-directed learning effect; Type 3 (N=3): mutual cooperation practice effect; Type 4 (N=6): theory learning effect. Each type was analyzed for its distinctive features. Flipped learning applied to cooking practice classes is a learner-centered approach that departs from traditional teaching, and it was found to have a very positive effect on learners' interest, opinion sharing, and learning outcomes. However, the study also revealed that issues such as the operation plan for flipped learning and the free-rider problem in group-learning evaluation still require further solutions.

Estimation of regional Low-flow Indices Applicable to Unmetered Areas Using Machine Learning Technique (머신러닝 기법을 이용한 미계측지역에 적용가능한 지역화 Low-flow indices 산정)

  • Jeung, Se Jin;Kang, Dong Ho;Kim, Byung Sik
    • Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.39-39 / 2020
  • Low-flow is an index representing the minimum flow of a stream; typically, the drought flow (Q355) from the flow duration curve is used as its representative value. Low-flow affects many areas, including water supply management and planning, irrigation, and ecosystems. Estimating low-flow requires streamflow records of sufficient length. However, in Korea, where 70% of the territory is mountainous terrain, basins other than national and first-class rivers often lack gauging stations or have missing records, which limits low-flow analysis. In the past, various techniques such as multiple regression analysis and ARIMA models were used to predict drought flow in ungauged areas, but demand for machine learning models has recently been growing. This study therefore adopts the DNN technique, which fits the new paradigm and compensates for the drawbacks of ANN, namely the difficulty of finding optimal parameter values during training and slow learning. Using DNN, we estimate regionalized low-flow indices applicable to ungauged areas. First, factors affecting low-flow were collected, and statistically significant variables were selected through correlation and multicollinearity analyses to build the input data for the machine learning model. Finally, the results were compared with those of conventional multiple regression analysis to examine the effectiveness of the machine learning approach.

  • PDF
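The DNN regression described in the abstract above can be sketched in miniature. The following is an illustrative stand-in, not the paper's model: a one-hidden-layer ReLU network trained by plain gradient descent on synthetic basin descriptors, where the feature names and the target relation are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: three scaled basin descriptors (e.g. area, slope,
# mean rainfall) and a low-flow index with an assumed linear relation.
X = rng.random((200, 3))
y = (0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2]).reshape(-1, 1)

# One hidden ReLU layer as a miniature DNN; real work would use more layers,
# train/test splits, and the statistically screened inputs the abstract describes.
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros((1, 1))
lr = 0.1

for _ in range(500):
    h = np.maximum(0.0, X @ W1 + b1)           # forward pass
    pred = h @ W2 + b2
    err = pred - y
    loss = float((err ** 2).mean())
    g_pred = 2.0 * err / len(X)                # mean-squared-error gradient
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0, keepdims=True)
    g_h = (g_pred @ W2.T) * (h > 0)            # backprop through ReLU
    gW1 = X.T @ g_h; gb1 = g_h.sum(0, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2             # gradient-descent step
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"final MSE: {loss:.4f}")
```

The same screened-variable inputs would simply replace `X` here; the comparison with multiple regression amounts to fitting a linear model on the same data and comparing errors.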

A Study on Automatic Comment Generation Using Deep Learning (딥 러닝을 이용한 자동 댓글 생성에 관한 연구)

  • Choi, Jae-yong;Sung, So-yun;Kim, Kyoung-chul
    • Journal of Korea Game Society / v.18 no.5 / pp.83-92 / 2018
  • Many deep learning studies have shown results in various fields as good as human decisions. Meanwhile, active online communities and SNS have become increasingly important in the game industry, to the point of deciding whether a game succeeds. The purpose of this study is to construct a system that reads texts in online communities and SNS and creates comments on schedule using deep learning. Using recurrent neural networks, we constructed models that generate a comment and a comment-posting schedule, and built a program that automatically chooses a news title and uploads the comment to Twitter at the calculated time. This study can be applied to activating online game communities, Q&A services, and similar settings.

Q-learning based packet scheduling using Softmax (Softmax를 이용한 Q-learning 기반의 패킷 스케줄링)

  • Kim, Dong-Hyun;Lee, Tae-Ho;Lee, Byung-Jun;Kim, Kyung-Tae;Youn, Hee-Yong
    • Proceedings of the Korean Society of Computer Information Conference / 2019.01a / pp.37-38 / 2019
  • This paper proposes a Q-learning based packet scheduling scheme using Softmax to improve scheduling accuracy in resource-constrained IoT environments. The e-greedy method is commonly used to balance exploitation and exploration in conventional Q-learning, but during exploration it may select even the worst action. To address this problem, this study uses Softmax-based action selection to improve the accuracy of meeting Quality of Service (QoS) requirements for data packets in a multi-sensor-node environment. A temperature parameter is used to control how new policies are explored. Simulations show that the proposed Softmax-based Q-learning packet scheduling scheme outperforms conventional e-greedy Q-learning in terms of scheduling accuracy.

  • PDF
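The abstract above contrasts e-greedy exploration, which can draw the worst action with uniform probability, against Softmax (Boltzmann) selection governed by a temperature parameter. A minimal sketch of the difference, using hypothetical Q-values for three scheduling actions:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_action(q_values, temperature=1.0):
    """Boltzmann exploration: sample actions with probability ~ exp(Q/T)."""
    prefs = np.asarray(q_values, dtype=float) / temperature
    prefs -= prefs.max()                        # for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return int(rng.choice(len(probs), p=probs)), probs

def epsilon_greedy_action(q_values, epsilon=0.1):
    """e-greedy: explores uniformly, so even the worst action can be drawn."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

# Hypothetical Q-values; action 2 is clearly bad.
q = [1.0, 0.9, -5.0]
_, probs = softmax_action(q, temperature=0.5)
print(probs)   # the bad action gets near-zero probability, unlike uniform exploration
```

Lowering the temperature makes selection greedier; raising it flattens the distribution toward uniform, which is the exploration knob the abstract refers to.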

Optimum Evacuation Route Calculation Using AI Q-Learning (AI기법의 Q-Learning을 이용한 최적 퇴선 경로 산출 연구)

  • Kim, Won-Ouk;Kim, Dae-Hee;Youn, Dae-Gwun
    • Journal of the Korean Society of Marine Environment & Safety / v.24 no.7 / pp.870-874 / 2018
  • In the worst maritime accidents, people must abandon ship, but ship structures are narrow and complex and operations take place on rough seas, so escape is not easy. Passengers on cruise ships in particular are untrained and diverse, making evacuation harder still; in such cases, evacuation management by the crew plays a very important role. When rescuers enter a ship in distress to conduct rescue activities, the zones offering the most effective entry must also be identified. Crew and rescuers generally take the shortest route, but if an accident occurs along that route, the second-best alternative must be selected. To address this situation, this study calculates evacuation routes using Q-learning, a reinforcement learning technique from machine learning. Reinforcement learning is one of the most important areas of artificial intelligence and is currently used in many fields. Most evacuation analysis programs developed so far use shortest-path search; this study instead explores optimal paths using reinforcement learning. In the future, such machine learning techniques will be applicable to various marine industries, for purposes such as selecting optimal routes for autonomous vessels and avoiding risk.
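The route-finding idea above can be sketched with tabular Q-learning on a toy grid, where blocked cells stand in for a corridor closed off by the accident. The layout, rewards, and hyperparameters are assumptions for illustration, not the paper's ship model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 4x4 deck layout: start, exit, and cells blocked by the accident.
SIZE = 4
START, EXIT = (0, 0), (3, 3)
BLOCKED = {(1, 1), (2, 1)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, a):
    r, c = state
    dr, dc = ACTIONS[a]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < SIZE and 0 <= nc < SIZE) or (nr, nc) in BLOCKED:
        return state, -1.0, False              # bump: stay put, small penalty
    if (nr, nc) == EXIT:
        return (nr, nc), 10.0, True            # reached the muster point
    return (nr, nc), -1.0, False               # unit move cost favors short routes

Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.95, 0.2

for _ in range(2000):                          # training episodes
    s = START
    for _ in range(200):                       # step cap per episode
        a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, reward, done = step(s, a)
        target = reward + (0.0 if done else gamma * np.max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])  # standard Q-learning update
        s = s2
        if done:
            break

# Greedy rollout: the learned route around the blocked cells.
path, s = [START], START
while s != EXIT and len(path) < 20:
    s, _, _ = step(s, int(np.argmax(Q[s])))
    path.append(s)
print(path)
```

Because the blocked set can be changed at run time, the same learner can produce the second-best route the abstract describes when the nominal shortest corridor becomes impassable.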

LoRa Network based Parking Dispatching System : Queuing Theory and Q-learning Approach (LoRa 망 기반의 주차 지명 시스템 : 큐잉 이론과 큐러닝 접근)

  • Cho, Youngho;Seo, Yeong Geon;Jeong, Dae-Yul
    • Journal of Digital Contents Society / v.18 no.7 / pp.1443-1450 / 2017
  • The purpose of this study is to develop an intelligent parking dispatching system based on LoRa network technology. During the local festival, many tourists arrive at the festival site simultaneously after sunset. To handle the traffic jams and parking dispatching, many traffic management staff are deployed on the main road to guide cars to available parking lots. Nevertheless, the traffic problems become more serious at the festival's peak time. Such parking dispatching problems are complex and depend on real-time traffic information. We used queuing theory to predict inbound traffic and to measure parking service performance, and a Q-learning algorithm to find the fastest routes and dispatch vehicles efficiently to the available parking lots.
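On the queuing-theory side, inbound-traffic predictions of this kind typically rest on steady-state queue formulas. A minimal sketch of the M/M/1 case; the arrival and service rates below are hypothetical festival-gate numbers, not the paper's data:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 results; both rates must share the same time unit."""
    rho = arrival_rate / service_rate               # gate utilization, must be < 1
    if rho >= 1:
        raise ValueError("queue is unstable: arrivals exceed service capacity")
    return {
        "rho": rho,
        "L":  rho / (1 - rho),                      # mean number of cars in system
        "W":  1 / (service_rate - arrival_rate),    # mean time in system
        "Lq": rho**2 / (1 - rho),                   # mean queue length
        "Wq": rho / (service_rate - arrival_rate),  # mean wait before service
    }

# Hypothetical gate rates: 4 cars/min arriving, the gate clears 5 cars/min.
m = mm1_metrics(4.0, 5.0)
print(m)   # rho=0.8: on average 4 cars in the system, 1 minute per car
```

Metrics like `W` and `Lq` per lot give the service-performance measures, while the Q-learning layer uses them as costs when choosing which lot to dispatch a vehicle to.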

Reinforcement Learning for Node-disjoint Path Problem in Wireless Ad-hoc Networks (무선 애드혹 네트워크에서 노드분리 경로문제를 위한 강화학습)

  • Jang, Kil-woong
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.8 / pp.1011-1017 / 2019
  • This paper proposes reinforcement learning to solve the node-disjoint path problem, which establishes multiple paths for reliable data transmission in wireless ad-hoc networks. The node-disjoint path problem is that of determining several paths between source and destination such that no intermediate node is shared. We propose an optimization method that considers transmission distance in a large-scale wireless ad-hoc network using Q-learning, a reinforcement learning technique from machine learning. Solving the node-disjoint path problem in a large-scale network requires a large amount of computation, but the proposed reinforcement learning obtains appropriate results efficiently by learning the paths. Its performance is evaluated in terms of transmission distance when establishing two node-disjoint paths. The evaluation shows better transmission-distance performance than conventional simulated annealing.
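Node-disjointness means the routes share no intermediate node. As an illustration of the constraint itself (using plain BFS as a stand-in rather than the paper's Q-learning), one can find a second path after banning the first path's intermediate nodes; the topology below is a made-up toy graph:

```python
from collections import deque

def bfs_path(adj, src, dst, banned=frozenset()):
    """Shortest hop-count path from src to dst, avoiding 'banned' nodes."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:                      # reconstruct path by walking back
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and v not in banned:
                prev[v] = u
                q.append(v)
    return None                           # no path under the ban

# Toy ad-hoc topology (hypothetical node IDs).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 5], 3: [0, 4], 4: [3, 5], 5: [2, 4]}
p1 = bfs_path(adj, 0, 5)
p2 = bfs_path(adj, 0, 5, banned=set(p1[1:-1]))   # ban p1's intermediate nodes
print(p1, p2)   # two node-disjoint routes from 0 to 5
```

This successive-banning heuristic counts hops only; the paper's contribution is learning routes that also optimize total transmission distance, where exhaustive search becomes too expensive at scale.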

Multi Behavior Learning of Lamp Robot based on Q-learning (강화학습 Q-learning 기반 복수 행위 학습 램프 로봇)

  • Kwon, Ki-Hyeon;Lee, Hyung-Bong
    • Journal of Digital Contents Society / v.19 no.1 / pp.35-41 / 2018
  • The Q-learning algorithm, based on reinforcement learning, is useful for learning one goal behavior at a time using combinations of discrete states and actions. To learn multiple behaviors, applying a behavior-based architecture with an appropriate behavior-arbitration method lets a robot perform fast and reliable actions. Q-learning is a popular reinforcement learning method, widely used in robot learning because it is simple, convergent, and, being off-policy, little affected by the training environment. In this paper, the Q-learning algorithm is applied to a lamp robot to learn multiple behaviors (human recognition and desk-object recognition). Since the learning rate of Q-learning may affect the robot's performance at the multi-behavior learning stage, we present an optimal multi-behavior learning model by varying the learning rate.
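The learning-rate sensitivity the abstract mentions comes directly from the tabular Q-learning update, where alpha sets how far each estimate moves toward the bootstrapped target. A minimal illustration on a single state-action value receiving a constant reward (the reward and alphas are arbitrary illustrative numbers):

```python
def q_update(q, reward, next_max, alpha, gamma=0.9):
    """One tabular Q-learning step: move q toward the bootstrapped target."""
    target = reward + gamma * next_max
    return q + alpha * (target - q)

# Track a constant reward of 1.0 from a terminal transition (next_max = 0),
# so the true value is exactly 1.0 and only alpha controls convergence speed.
for alpha in (0.1, 0.5, 0.9):
    q = 0.0
    for _ in range(10):
        q = q_update(q, reward=1.0, next_max=0.0, alpha=alpha)
    print(f"alpha={alpha}: q={q:.3f} after 10 updates")
```

After n updates, q equals 1 - (1 - alpha)^n: a larger alpha converges faster on this stationary target but overreacts to noise once rewards are stochastic, which is why tuning it per behavior matters for the robot.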

Vehicle License Plate Recognition System using SSD-Mobilenet and ResNet for Mobile Device (SSD-Mobilenet과 ResNet을 이용한 모바일 기기용 자동차 번호판 인식시스템)

  • Kim, Woonki;Dehghan, Fatemeh;Cho, Seongwon
    • Smart Media Journal / v.9 no.2 / pp.92-98 / 2020
  • This paper proposes a vehicle license plate recognition system that uses lightweight deep learning models without a high-end server. The proposed system consists of three steps: license plate detection, character area segmentation, and character recognition. SSD-Mobilenet was used for license plate detection, ResNet with localization for character area segmentation, and ResNet for character recognition. In experiments on a Samsung Galaxy S7 and an LG Q9, the system achieved 85.3% accuracy with a running time of around 1.1 seconds.

Performance Analysis of Deep Reinforcement Learning for Crop Yield Prediction (작물 생산량 예측을 위한 심층강화학습 성능 분석)

  • Ohnmar Khin;Sung-Keun Lee
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.1 / pp.99-106 / 2023
  • Recently, many studies have applied deep learning to crop yield prediction. These algorithms have difficulty constructing a mapping between input data sets and crop prediction results, and their performance depends heavily on the attributes that can be acquired. Deep reinforcement learning can overcome these limitations. This paper analyzes the performance of DQN, Double DQN, and Dueling DQN for improving crop yield prediction. The DQN algorithm suffers from the overestimation problem, whereas Double DQN reduces overestimation and yields better results. The proposed models achieve this by reducing error and increasing prediction accuracy.
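The DQN versus Double DQN distinction above lies entirely in how the bootstrap target is formed: DQN lets one network both select and evaluate the next action, while Double DQN splits those roles, which damps the max-operator's overestimation bias. A sketch with hypothetical Q-values for the online and target networks:

```python
import numpy as np

def dqn_target(reward, q_target_next, gamma=0.99):
    """Standard DQN: the target net both selects and evaluates the action."""
    return reward + gamma * float(np.max(q_target_next))

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99):
    """Double DQN: the online net selects, the target net evaluates."""
    a = int(np.argmax(q_online_next))
    return reward + gamma * float(q_target_next[a])

# Hypothetical next-state Q-values for three actions.
q_online = np.array([1.0, 2.0, 1.5])
q_target = np.array([1.2, 1.8, 2.5])   # noisy: overestimates action 2

print(dqn_target(0.5, q_target))                    # max picks the inflated 2.5
print(double_dqn_target(0.5, q_online, q_target))   # evaluates the online pick, 1.8
```

Because the online network's argmax rarely coincides with the target network's noisiest estimate, the Double DQN target is a less biased value, which is the mechanism behind the improved results the abstract reports.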