• Title/Summary/Keyword: Learning based algorithm


A Study on the Design and Development of Computer Based Learning and Test System

  • 허균
    • 수산해양교육연구
    • /
    • Vol.27 No.4
    • /
    • pp.1160-1171
    • /
    • 2015
  • The purpose of this study is to design and develop a computer based learning and test system, which supports not only testing learners' ability but also learning content by giving feedback and hints. The system was implemented in Visual Basic .NET and works in three stages: a sequential problem solving stage, a randomized problem solving stage, and a pass/fail challenge stage. The results of this study are as follows: (a) we propose the design context for the computer based learning and test system; (b) we design and develop an item display function with sequential and random algorithms; (c) we design and develop a pass/fail function by applying the SPRT (Sequential Probability Ratio Testing) algorithm.
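The SPRT pass/fail stage can be illustrated with a minimal sketch over 0/1 item responses. The mastery proportions `p0`, `p1` and the error rates `alpha`, `beta` below are illustrative assumptions, not the paper's settings:

```python
import math

def sprt_pass_fail(responses, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
    """Sequential Probability Ratio Test over a stream of 0/1 item responses.

    H0: proficiency p0 (fail), H1: proficiency p1 (pass).
    Returns 'pass', 'fail', or 'continue' (needs more items).
    """
    upper = math.log((1 - beta) / alpha)   # cross upward -> accept H1 (pass)
    lower = math.log(beta / (1 - alpha))   # cross downward -> accept H0 (fail)
    llr = 0.0                              # running log-likelihood ratio
    for correct in responses:
        if correct:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
    return "continue"
```

Because the decision is sequential, a clearly proficient learner is classified after only a handful of items instead of a fixed-length test.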

Model-based Iterative Learning Control with Quadratic Criterion for Linear Batch Processes

  • 이광순;김원철;이재형
    • 제어로봇시스템학회논문지
    • /
    • Vol.2 No.3
    • /
    • pp.148-157
    • /
    • 1996
  • Availability of input trajectories corresponding to desired output trajectories is often important in designing control systems for batch and other transient processes. In this paper, we propose a predictive control-type model-based iterative learning algorithm which is applicable to finding the nominal input trajectories of a linear time-invariant batch process. Unlike the other existing learning control algorithms, the proposed algorithm can be applied to nonsquare systems and has an ability to adjust noise sensitivity as well as convergence rate. A simple model identification technique with which performance of the proposed learning algorithm can be significantly enhanced is also proposed. Performance of the proposed learning algorithm is demonstrated through numerical simulations.
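The quadratic-criterion learning update can be sketched with the standard norm-optimal ILC gain on a lifted (impulse-response) batch model; the model `G`, weights `Q`, `R`, and trajectory below are invented for illustration, not taken from the paper:

```python
import numpy as np

def qilc_update(G, Q, R, u, e):
    """One quadratic-criterion ILC iteration for the lifted model y = G @ u.

    Minimizes e'Qe + du'R du over the input change du; because the gain is
    built from G.T, the same formula applies to nonsquare G.
    """
    L = np.linalg.solve(G.T @ Q @ G + R, G.T @ Q)  # learning gain
    return u + L @ e

# Lifted model of a short linear batch run (illustrative numbers).
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, 0.5, 1.0]])
y_d = np.array([1.0, 0.8, 0.6])        # desired output trajectory
Q = np.eye(3)
R = 0.01 * np.eye(3)                   # larger R: slower but less noise-sensitive
u = np.zeros(3)
for _ in range(20):                    # one update per batch run
    e = y_d - G @ u
    u = qilc_update(G, Q, R, u, e)
```

The weight `R` realizes the trade-off the abstract mentions: it simultaneously tunes convergence rate and noise sensitivity of the learned input trajectory.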


Avoidance Behavior of Small Mobile Robots based on the Successive Q-Learning

  • Kim, Min-Soo
    • 제어로봇시스템학회:학술대회논문집
    • /
    • ICCAS 2001
    • /
    • pp.164.1-164
    • /
    • 2001
  • Q-learning is a reinforcement learning algorithm that needs no model of the environment, which makes it a suitable approach for learning behaviors of autonomous agents. But when it is applied to multi-agent learning with many I/O states, it is usually too complex and slow. To overcome this problem in multi-agent learning systems, we propose the successive Q-learning algorithm, which divides the state-action pairs an agent can have among several Q-functions, reducing complexity and the amount of calculation. This algorithm is suitable for multi-agent learning in a dynamically changing environment. The proposed successive Q-learning algorithm is applied to a prey-predator problem with one prey and two predators, and its effectiveness is verified by the efficient avoidance ability of the prey agent.
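The abstract does not detail how the state-action pairs are divided, so the sketch below makes one illustrative assumption: the state is a tuple of components (e.g. the relation to each predator), each component keeps its own small Q-table, and the TD target is split evenly across them:

```python
from collections import defaultdict

class SuccessiveQ:
    """Successive Q-learning sketch: one small Q-table per state component,
    so no table ever indexes the (large) joint state."""

    def __init__(self, n_actions, alpha=0.5, gamma=0.9):
        self.tables = defaultdict(lambda: defaultdict(float))  # component -> table
        self.n_actions, self.alpha, self.gamma = n_actions, alpha, gamma

    def q(self, state, a):
        # Total action value = sum of the per-component Q-functions.
        return sum(self.tables[i][(s, a)] for i, s in enumerate(state))

    def update(self, state, a, reward, next_state):
        best_next = max(self.q(next_state, b) for b in range(self.n_actions))
        target = reward + self.gamma * best_next
        for i, s in enumerate(state):
            # Assumption: split the TD target evenly across component tables.
            td = target / len(state) - self.tables[i][(s, a)]
            self.tables[i][(s, a)] += self.alpha * td

# Tiny prey-style demo: state = (relation to predator 1, relation to predator 2).
agent = SuccessiveQ(n_actions=2)
for _ in range(50):
    agent.update(("near", "far"), 0, 1.0, ("far", "far"))
```

Each table grows with one state component rather than the joint state, which is where the reduction in complexity and calculation comes from.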


ON THE STRUCTURE AND LEARNING OF NEURAL-NETWORK-BASED FUZZY LOGIC CONTROL SYSTEMS

  • C.T. Lin;Lee, C.S. George
    • 한국지능시스템학회:학술대회논문집
    • /
    • Fifth International Fuzzy Systems Association World Congress (1993)
    • /
    • pp.993-996
    • /
    • 1993
  • This paper addresses the structure and associated learning algorithms of a feedforward multi-layered connectionist network, with distributed learning abilities, for realizing the basic elements and functions of a traditional fuzzy logic controller. The proposed neural-network-based fuzzy logic control system (NN-FLCS) can be contrasted with the traditional fuzzy logic control system in its network structure and learning ability. An on-line supervised structure/parameter learning algorithm is presented that can dynamically find proper fuzzy logic rules, membership functions, and the size of output fuzzy partitions simultaneously. Next, a Reinforcement Neural-Network-Based Fuzzy Logic Control System (RNN-FLCS) is proposed, consisting of two closely integrated NN-FLCs for solving various reinforcement learning problems in fuzzy logic systems: one NN-FLC functions as a fuzzy predictor and the other as a fuzzy controller. Associated with the proposed RNN-FLCS is a reinforcement structure/parameter learning algorithm which dynamically determines the proper network size, connections, and parameters of the RNN-FLCS through an external reinforcement signal. Furthermore, learning can proceed even in periods without any external reinforcement feedback.
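The basic fuzzy-controller elements the network realizes — membership functions, rule firing, and defuzzification — can be sketched without any neural learning; the membership shapes, rule consequents, and numeric values below are illustrative, not the paper's:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def flc_output(error):
    """Minimal fuzzy controller: two rules with singleton consequents,
    combined by weighted-average defuzzification."""
    mu_neg = tri(error, -2.0, -1.0, 0.0)   # "error is negative"
    mu_pos = tri(error, 0.0, 1.0, 2.0)     # "error is positive"
    if mu_neg + mu_pos == 0:
        return 0.0
    # IF error is negative THEN u = +1 ; IF error is positive THEN u = -1
    return (mu_neg * 1.0 + mu_pos * (-1.0)) / (mu_neg + mu_pos)
```

In the NN-FLCS these pieces become network layers, and the learning algorithm tunes the membership parameters (`a`, `b`, `c`) and selects the rules instead of fixing them by hand.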


An improvement of LEM2 algorithm

  • The, Anh-Pham;Lee, Young-Koo;Lee, Sung-Young
    • 한국정보과학회:학술대회논문집
    • /
    • 2011 Korea Computer Congress Proceedings Vol.38 No.1(A)
    • /
    • pp.302-304
    • /
    • 2011
  • Rule-based machine learning techniques are very important in the real world today, with applications such as medical data mining and business transaction mining. The difference between rule-based and model-based machine learning is that model-based methods output models that are often difficult for experts or humans to understand, whereas rule-based techniques output rule sets in IF-THEN format, for example: IF blood pressure = 90 AND kidney problem = yes THEN take this drug. In this way a medical doctor can easily modify and update usable rules; this is the scenario in a medical decision support system. Currently, rough set theory is one of the best-known theories for producing such rules, and LEM2 is an algorithm based on it that can produce a small set of rules from a database. In this paper, we present an improvement of the LEM2 algorithm which incorporates variable precision techniques.
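A much simplified sketch of LEM2-style greedy rule induction shows the flavor: grow the IF-part one attribute-value pair at a time until only concept examples match. The real algorithm's tie-breaking, full rule-set covering loop, and the paper's variable-precision extension are omitted, and the medical toy table is invented (it assumes a consistent decision table):

```python
def lem2_single_rule(examples, concept_ids):
    """Greedily induce one IF-THEN rule covering part of a concept.

    examples: {id: {attribute: value}}; concept_ids: ids with the target decision.
    Returns the IF-part as a list of (attribute, value) conditions.
    """
    concept = set(concept_ids)
    uncovered = set(concept)          # concept examples not yet matched
    conditions = []
    covered = set(examples)           # ids matching the current conditions
    while not covered <= concept:     # stop once only concept examples match
        # Candidate pairs come from still-uncovered concept examples.
        pairs = {(a, examples[i][a]) for i in uncovered for a in examples[i]}
        pairs -= set(conditions)
        # Pick the pair matching the most uncovered concept examples.
        best = max(pairs, key=lambda p: sum(
            1 for i in uncovered if examples[i][p[0]] == p[1]))
        conditions.append(best)
        covered = {i for i in covered if examples[i][best[0]] == best[1]}
        uncovered &= covered
    return conditions

# Toy decision table in the abstract's medical flavor (invented values).
examples = {1: {"bp": "high", "kidney": "yes"},
            2: {"bp": "high", "kidney": "no"},
            3: {"bp": "low",  "kidney": "yes"}}
rule = lem2_single_rule(examples, {1})   # IF-part for "take this drug"
```

The induced rule here is IF bp = high AND kidney = yes, which matches exactly the concept example — the human-readable output format the abstract emphasizes.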

Design of Disease Prediction Algorithm Applying Machine Learning Time Series Prediction

  • Hye-Kyeong Ko
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol.16 No.3
    • /
    • pp.321-328
    • /
    • 2024
  • This paper designs a disease prediction algorithm that diagnoses migraine in advance through machine-learning-based time series analysis. The study utilizes patient data statistics, such as electroencephalogram activity, to design a prediction algorithm that determines the onset signals of migraine symptoms, so that patients can efficiently predict and manage their disease. Using existing patient data as input, the algorithm quickly identifies the signaling symptoms of disease development. The evaluation examines how accurately the proposed algorithm predicts migraine and how early it can predict onset for prevention purposes, and the experimental results show that it can accurately predict the occurrence of migraine.

Design and Implementation of Learning Contents Using Interactive Genetic Algorithms with Modified Mutation

  • 김정숙
    • 한국컴퓨터정보학회논문지
    • /
    • Vol.10 No.6
    • /
    • pp.85-92
    • /
    • 2005
  • In this paper, web-based learning contents were developed using an interactive genetic algorithm with a modified mutation operator. Interactive genetic algorithms usually employ reciprocal exchange mutation; here, the mutation operator was modified to increase the learner's learning effect. The web-based learning contents using the interactive genetic algorithm provide dynamic learning material and a real-time test system. In particular, learners can select an efficient learning environment and content arrangement order by running the interactive genetic algorithm according to their own characteristics and interests.
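The paper's specific modification of the operator is not detailed in the abstract; the standard reciprocal-exchange mutation it starts from looks like this, on a chromosome encoding the ordering of learning-content items (item names invented):

```python
import random

def reciprocal_exchange_mutation(order, rate, rng=random):
    """Reciprocal-exchange mutation: with probability `rate`,
    swap two randomly chosen positions in the content ordering."""
    child = list(order)
    if rng.random() < rate:
        i, j = rng.sample(range(len(child)), 2)  # two distinct positions
        child[i], child[j] = child[j], child[i]
    return child

contents = ["intro", "theory", "example", "quiz", "review"]
mutated = reciprocal_exchange_mutation(contents, rate=1.0, rng=random.Random(0))
```

Because the operator only swaps positions, every offspring remains a valid permutation of the content items — the property any modified variant must also preserve.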


A Data-centric Analysis to Evaluate Suitable Machine-Learning-based Network-Attack Classification Schemes

  • Huong, Truong Thu;Bac, Ta Phuong;Thang, Bui Doan;Long, Dao Minh;Quang, Le Anh;Dan, Nguyen Minh;Hoang, Nguyen Viet
    • International Journal of Computer Science & Network Security
    • /
    • Vol.21 No.6
    • /
    • pp.169-180
    • /
    • 2021
  • Since machine learning was invented, many different machine-learning-based algorithms, from shallow learning to deep learning models, have provided solutions to classification tasks. This poses the problem of choosing a suitable classification algorithm that can improve the classification/detection efficiency for a given network context, and with it the question of whether an algorithm provides good performance and why it works in some problems and not in others. In this paper, we present a data-centric analysis that provides a way to select a suitable classification algorithm. This data-centric approach is a new viewpoint for exploring relationships between classification performance and the facts and figures of data sets.

Research on High-resolution Seafloor Topography Generation using Feature Extraction Algorithm Based on Deep Learning

  • 김현승;장재덕;현철;이성균
    • 시스템엔지니어링학술지
    • /
    • Vol.20 No.spc1
    • /
    • pp.90-96
    • /
    • 2024
  • In this paper, we propose a technique to model high-resolution seafloor topography at 1 m intervals using actual water depth data, sampled at 1.6 km intervals, near the east coast of Korea. Using a deep-learning-based Harris corner feature point extraction algorithm, the location of the center of a seafloor mountain was calculated and the surrounding topography was modeled. The modeled high-resolution seafloor topography was verified to within a 1.1 m mean error against the actual water depth data, and the average error of the deep-learning-based result was 54.4% lower than when deep learning was not applied. The proposed algorithm is expected to generate high-resolution underwater topography for the entire Korean peninsula and to be used in establishing path plans for autonomous navigation of underwater vehicles.
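The paper couples Harris corners with deep learning; the corner-scoring part alone can be sketched in plain numpy (no deep learning, synthetic heightmap — the terrain values and window size are invented for illustration):

```python
import numpy as np

def smooth3(a):
    """3x3 box average with edge padding (stand-in for Gaussian windowing)."""
    p = np.pad(a, 1, mode="edge")
    n, m = a.shape
    return sum(p[i:i + n, j:j + m] for i in range(3) for j in range(3)) / 9.0

def harris_response(img, k=0.04):
    """Classic Harris corner response: det(M) - k * trace(M)^2,
    where M is the smoothed gradient structure tensor."""
    iy, ix = np.gradient(img.astype(float))
    sxx, syy, sxy = smooth3(ix * ix), smooth3(iy * iy), smooth3(ix * iy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

# Synthetic "terrain": a flat-topped feature whose corners score highest.
depth = np.zeros((20, 20))
depth[5:15, 5:15] = 1.0
R = harris_response(depth)
peak = np.unravel_index(R.argmax(), R.shape)  # strongest feature point
```

Edges of the feature have one dominant gradient direction (det ≈ 0, response ≤ 0), so only corner-like points score positively — which is what makes the response usable for localizing a feature point such as a seamount landmark.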

Bolt-Loosening Detection using Vision-Based Deep Learning Algorithm and Image Processing Method

  • 이소영;현탄칸;박재형;김정태
    • 한국전산구조공학회논문집
    • /
    • Vol.32 No.4
    • /
    • pp.265-272
    • /
    • 2019
  • In this study, a bolt-loosening detection method using a vision-based deep learning algorithm and image processing techniques is proposed. First, the deep learning and image processing based bolt-loosening detection scheme was designed; it consists of a bolt image detection step and a bolt-loosening angle estimation step. An RCNN-based deep learning algorithm was used to detect bolt images, the concept of homography was used to correct perspective distortion in the images, and the Hough transform was used to estimate the bolt-loosening angle. Next, to verify the performance of the proposed method, bolt-loosening detection experiments were performed on a girder bolted-connection model. The performance of the RCNN-based bolt detector and the Hough-transform-based loosening angle estimator was examined under various perspective distortion conditions.
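The angle-estimation step can be sketched with a minimal rho-theta Hough vote over edge points (standalone numpy, not the paper's full RCNN-plus-homography pipeline): collinear points share a single rho at the line's normal angle, so the angle with the tallest rho-histogram peak recovers the marked line's orientation, and the difference between two inspections gives the loosening angle.

```python
import numpy as np

def dominant_line_angle(points, n_theta=180):
    """Return the normal angle (degrees, 0-179) of the dominant line
    through a set of (x, y) edge points via a rho-theta Hough vote."""
    pts = np.asarray(points, dtype=float)
    best_count, best_deg = -1, 0
    for deg in range(n_theta):
        t = np.deg2rad(deg)
        # rho of each point for a line with normal angle t
        rho = np.round(pts[:, 0] * np.cos(t) + pts[:, 1] * np.sin(t)).astype(int)
        # collinear points pile into one rho bin at the true angle
        count = np.bincount(rho - rho.min()).max()
        if count > best_count:
            best_count, best_deg = count, deg
    return best_deg
```

For a horizontal mark the dominant normal angle is 90 degrees; for a vertical mark it is 0 — subtracting the angles estimated before and after service yields the bolt-loosening rotation.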