• Title/Abstract/Keyword: Distributed Learning

Design of a ParamHub for Machine Learning in a Distributed Cloud Environment

  • Su-Yeon Kim;Seok-Jae Moon
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol.16 No.2
    • /
    • pp.161-168
    • /
    • 2024
  • As the size of big data models grows, distributed training is emerging as an essential element of large-scale machine learning tasks. In this paper, we propose ParamHub, an agent for distributed data training. During training, the agent uses the provided data to adjust various aspects of the model, such as its structure, learning algorithm, hyperparameters, and bias, aiming to minimize the error between the model's predictions and the actual values. Furthermore, it operates autonomously, collecting and updating data in a distributed environment, thereby reducing the load-balancing burden that arises in a centralized system. Through communication between agents, resource management and learning processes can be coordinated, enabling efficient management of distributed data and resources. This approach enhances the scalability and stability of distributed machine learning systems while providing the flexibility to be applied in various learning environments.
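
The hub-style gradient aggregation described above can be sketched in a few lines. This is a minimal illustration of the general parameter-aggregation pattern, not the paper's actual ParamHub design; the class name and methods are hypothetical.

```python
class ParamHub:
    """Minimal parameter-aggregation sketch: workers push gradients,
    the hub averages them and applies one shared update."""

    def __init__(self, dim, lr=0.1):
        self.params = [0.0] * dim   # shared model parameters
        self.lr = lr                # learning rate
        self.pending = []           # gradients pushed since last apply()

    def push(self, grad):
        """Called by a worker after computing a gradient on its data shard."""
        self.pending.append(list(grad))

    def apply(self):
        """Average the pending gradients and take one SGD step."""
        if self.pending:
            n = len(self.pending)
            for i in range(len(self.params)):
                self.params[i] -= self.lr * sum(g[i] for g in self.pending) / n
            self.pending.clear()
        return self.params

hub = ParamHub(dim=2)
hub.push([2.0, 0.0])   # gradient from worker 1
hub.push([0.0, 4.0])   # gradient from worker 2
hub.apply()            # params move by -lr * averaged gradient
```

Averaging before applying is what keeps the update independent of how many workers report, which is the property that eases load balancing relative to a centralized trainer.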

Online Evolution for Cooperative Behavior in Group Robot Systems

  • Lee, Dong-Wook;Seo, Sang-Wook;Sim, Kwee-Bo
    • International Journal of Control, Automation, and Systems
    • /
    • Vol.6 No.2
    • /
    • pp.282-287
    • /
    • 2008
  • In distributed mobile robot systems, autonomous robots accomplish complicated tasks through intelligent cooperation with each other. This paper presents behavior learning and online distributed evolution for cooperative behavior of a group of autonomous robots. Learning and evolution capabilities are essential for a group of autonomous robots to adapt to unstructured environments. Behavior learning finds an optimal state-action mapping of a robot for a given operating condition. In behavior learning, a Q-learning algorithm is modified to handle delayed rewards in the distributed robot systems. A group of robots implements cooperative behaviors through communication with other robots. Individual robots improve the state-action mapping through online evolution with the crossover operator based on the Q-values and their update frequencies. A cooperative material search problem demonstrated the effectiveness of the proposed behavior learning and online distributed evolution method for implementing cooperative behavior of a group of autonomous mobile robots.
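
The delayed-reward problem this abstract mentions can be seen in a toy form. The sketch below is plain tabular Q-learning on a one-dimensional corridor where the only reward arrives at the goal, so earlier state-action pairs are credited through the discounted backup term; it is an illustration of the underlying mechanism, not the authors' modified algorithm.

```python
import random

def train_q(episodes=400, n_states=5, gamma=0.9, alpha=0.5, eps=0.3, seed=0):
    """Tabular Q-learning on a corridor 0..n_states: the only nonzero
    reward is at the goal, so value propagates backward through the
    discounted max term (the delayed-reward case)."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states + 1)]   # actions: 0=left, 1=right
    for _ in range(episodes):
        s, steps = 0, 0
        while s < n_states and steps < 200:          # cap episode length
            if rng.random() < eps:
                a = rng.randrange(2)                 # explore
            else:
                a = 1 if q[s][1] >= q[s][0] else 0   # exploit (ties -> right)
            s2 = s + 1 if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states else 0.0       # delayed reward at goal only
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1
    return q
```

After training, the greedy action in every interior state is "right", even though only the final transition ever pays out: the Q-value of the goal-adjacent state approaches 1 and each earlier state approaches a gamma-discounted copy of its successor.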

A Framework for Open, Flexible and Distributed Learning Environment for Higher Education

  • 강명희;유지원
    • 지식경영연구
    • /
    • Vol.9 No.4
    • /
    • pp.17-33
    • /
    • 2008
  • This study proposes University 2.0 as a model case of an open, flexible, and distributed learning environment for higher education, based on theoretical foundations and perspectives. As Web 2.0 technologies have emerged in the field of education, the ways information and knowledge are generated and disseminated have changed drastically. Professors are no longer the only source of knowledge. Students using the internet often become prosumers of knowledge, who search for and access information on the web as well as publish their own knowledge through it. The concept and framework of University 2.0 are introduced for implementing this new interactive learning paradigm with an open, flexible, and distributed learning environment for higher education. University 2.0 incorporates online and offline learning environments with various educational media. Furthermore, it employs various learning strategies and integrates formal and informal learning through learning communities. Both instructors and students in a University 2.0 environment are expected to be active knowledge generators as well as creative designers of their own learning and teaching.

Behavior Learning and Evolution of Collective Autonomous Mobile Robots Using Reinforcement Learning and Distributed Genetic Algorithms

  • Lee, Dong-Wook;Sim, Kwee-Bo
    • 전자공학회논문지S
    • /
    • Vol.34S No.8
    • /
    • pp.56-64
    • /
    • 1997
  • In distributed autonomous robotic systems, each robot must behave by itself according to its states and environment and, if necessary, must cooperate with other robots to carry out a given task. Therefore, each robot must have both learning and evolution abilities to adapt to dynamic environments. In this paper, a new learning and evolution method based on reinforcement learning with delayed rewards and distributed genetic algorithms is proposed for the behavior learning and evolution of collective autonomous mobile robots. Reinforcement learning with delayed rewards remains useful even when there is no immediate reward. Moreover, through distributed genetic algorithms, in which chromosomes acquired under different environments are exchanged via communication, each robot can improve its behavior ability. In particular, to improve the performance of evolution, selective crossover exploiting the characteristics of reinforcement learning is adopted. We verify the effectiveness of the proposed method by applying it to a cooperative search problem.
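
The chromosome-exchange idea in this abstract follows the island-model pattern: each robot evolves a local population and periodically migrates its best chromosome to a neighbor. The sketch below demonstrates that pattern on a toy one-max objective; the selection scheme and fitness function are illustrative assumptions, not the paper's Q-value-based selective crossover.

```python
import random

def fitness(ch):
    return sum(ch)                       # toy objective: maximize 1-bits

def evolve_island(pop, rng, gens=30, p_mut=0.05):
    """Local evolution on one robot: keep the top half, refill the rest
    by one-point crossover plus bit-flip mutation."""
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]
        children = []
        while len(parents) + len(children) < len(pop):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(a))            # one-point crossover
            children.append([bit if rng.random() > p_mut else 1 - bit
                             for bit in a[:cut] + b[cut:]])
        pop[:] = parents + children
    return pop

def distributed_ga(n_robots=4, pop_size=10, length=16, rounds=5, seed=1):
    rng = random.Random(seed)
    islands = [[[rng.randrange(2) for _ in range(length)]
                for _ in range(pop_size)] for _ in range(n_robots)]
    for _ in range(rounds):
        for island in islands:
            evolve_island(island, rng)
        bests = [max(island, key=fitness) for island in islands]
        for i, island in enumerate(islands):          # ring migration of best chromosomes
            island[-1] = list(bests[(i - 1) % n_robots])
    return max(fitness(max(island, key=fitness)) for island in islands)
```

Migration lets a robot that explored a better region of the search space share that progress, which is the communication benefit the abstract describes.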

Distributed In-Memory Caching Method for ML Workload in Kubernetes

  • 윤동현;송석일
    • Journal of Platform Technology
    • /
    • Vol.11 No.4
    • /
    • pp.71-79
    • /
    • 2023
  • This paper analyzes the characteristics of machine learning workloads and, based on this analysis, proposes a distributed in-memory caching technique to improve their performance. The core of a machine learning workload is model training, which is a computation-intensive task. Running machine learning workloads in a Kubernetes-based cloud environment that separates the computing framework from storage allows resources to be allocated effectively, but latency can occur because I/O must be performed over the network. This paper proposes a distributed in-memory caching technique to improve the performance of machine learning workloads executed in such an environment. In particular, the proposed method introduces a new way to preload the data required by a machine learning workload into a distributed in-memory cache, taking into account Kubeflow, a Kubernetes-based machine learning pipeline management tool.
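
The preloading idea can be sketched abstractly: keys are hash-placed on cache nodes, and a warm-up phase pulls the declared training inputs before the first read. This is a toy in-process model of the pattern, not the paper's Kubernetes implementation; all names here are hypothetical.

```python
class DistributedCache:
    """Toy distributed in-memory cache: keys are hash-placed on nodes,
    and preload() warms the cache before training so reads avoid slow
    remote-storage I/O."""

    def __init__(self, node_names):
        self.nodes = {name: {} for name in node_names}
        self._order = sorted(node_names)

    def _node_for(self, key):
        return self._order[hash(key) % len(self._order)]   # simple hash placement

    def preload(self, keys, fetch):
        """Warm phase: pull the declared keys from storage ahead of training."""
        for key in keys:
            self.nodes[self._node_for(key)][key] = fetch(key)

    def get(self, key, fetch):
        shard = self.nodes[self._node_for(key)]
        if key not in shard:          # cache miss -> slow remote read
            shard[key] = fetch(key)
        return shard[key]

store = {"shard-0": b"xxxx", "shard-1": b"yyyy"}   # stands in for remote storage
cache = DistributedCache(["node-a", "node-b"])
cache.preload(store, lambda k: store[k])           # warm before the training loop
```

Because the pipeline (e.g. Kubeflow) knows the next step's inputs in advance, the warm phase can overlap with earlier stages, hiding the network latency the abstract identifies.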

Systematic Research on Privacy-Preserving Distributed Machine Learning

  • 이민섭;신영아;천지영
    • 정보처리학회 논문지
    • /
    • Vol.13 No.2
    • /
    • pp.76-90
    • /
    • 2024
  • Artificial intelligence technology is highly valued for its potential in fields such as smart cities, autonomous driving, and healthcare, but the exposure of data subjects' personal and sensitive information limits the use of such models. Accordingly, the concept of distributed machine learning has emerged, in which data is not gathered on a central server for training; instead, each participant first trains on its own dataset and a global model is then learned. However, distributed machine learning still faces data privacy threats during collaborative training. This study systematically analyzes privacy-preserving research in distributed machine learning using the classification criteria proposed to date, such as the presence or absence of a server, the distribution of the training datasets, and performance differences among participants, to identify the latest research trends. In particular, we focus on the representative distributed machine learning techniques of horizontal federated learning, vertical federated learning, and swarm learning, examine the privacy-preserving techniques they employ, and explore directions for future research.
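
Horizontal federated learning, the first technique the survey covers, can be shown in miniature with federated averaging: each client fits a shared model on its local data and only the resulting weights, never the raw data, reach the server. This is a deliberately simplified one-parameter sketch of the general FedAvg scheme, not any specific surveyed system.

```python
def local_step(w, data, lr=0.1, epochs=20):
    """One client's update: gradient descent for y = w*x on local data only."""
    for _ in range(epochs):
        grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(clients, rounds=10, w0=0.0):
    """Server loop: clients train locally, the server averages the weights."""
    w = w0
    for _ in range(rounds):
        updates = [local_step(w, data) for data in clients]  # raw data never leaves a client
        w = sum(updates) / len(updates)
    return w

# three clients whose local (x, y) samples all follow y = 3x
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)], [(0.5, 1.5)]]
w = fed_avg(clients)
```

Even this toy exposes the survey's core tension: the raw datasets stay local, yet the transmitted weight updates themselves can leak information, which is exactly what the privacy-preserving techniques reviewed in the paper aim to mitigate.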

Global Optimization for Energy Efficient Resource Management by Game Based Distributed Learning in Internet of Things

  • Ju, ChunHua;Shao, Qi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol.9 No.10
    • /
    • pp.3771-3788
    • /
    • 2015
  • This paper studies distributed energy-efficient resource management in the Internet of Things (IoT). Wireless communication networks support the IoT without limitations of distance and location, which significantly propels its development. We study channel and energy management in wireless-network-supported IoT to improve the capabilities of connection, communication, sharing, and collaboration, using game theory and a distributed learning algorithm. First, we formulate an energy-efficient neighbor-collaborative game model and prove that the proposed game is an exact potential game. Second, we design a distributed energy-efficient channel-selection learning algorithm that obtains the global optimum in a distributed manner. We prove that the proposed algorithm asymptotically converges to the global optimum at geometric speed. Finally, we conduct simulations to verify the theoretical analysis and the performance of the proposed algorithm.
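
A standard distributed learning rule for exact potential games of this kind is asynchronous log-linear learning: one player at a time resamples its channel with Boltzmann probabilities over its own utility, and the process concentrates on the potential maximizer. The sketch below uses a generic congestion-based utility as a stand-in; it illustrates the class of algorithm, not the paper's specific design.

```python
import math
import random

def log_linear_channel_selection(n_users=6, n_channels=3, steps=3000,
                                 beta=8.0, seed=0):
    """Asynchronous log-linear learning: one user at a time resamples its
    channel with Boltzmann probabilities over congestion-based utilities
    (fewer co-channel users -> less interference -> less energy)."""
    rng = random.Random(seed)
    choice = [rng.randrange(n_channels) for _ in range(n_users)]
    for _ in range(steps):
        i = rng.randrange(n_users)                  # the revising user
        counts = [0] * n_channels
        for j, c in enumerate(choice):
            if j != i:
                counts[c] += 1                      # co-channel users seen by i
        weights = [math.exp(-beta * counts[c]) for c in range(n_channels)]
        r = rng.random() * sum(weights)
        acc = 0.0
        for c, wgt in enumerate(weights):
            acc += wgt
            if r <= acc:
                choice[i] = c
                break
    return choice
```

Each user needs only its own local congestion observation, yet as beta grows, the stationary distribution concentrates on allocations that maximize the global potential, here a balanced, low-interference channel assignment.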

Reinforcement learning multi-agent using unsupervised learning in a distributed cloud environment

  • Gu, Seo-Yeon;Moon, Seok-Jae;Park, Byung-Joon
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol.14 No.2
    • /
    • pp.192-198
    • /
    • 2022
  • Companies build and use their own data analysis systems in the distributed cloud according to their business characteristics. However, as businesses and data types become more complex and diverse, demand for more efficient analytics has increased. In response, this paper proposes an unsupervised-learning-based data analysis agent to which reinforcement learning is applied for effective data analysis. The proposed agent consists of a reinforcement learning processing manager and an unsupervised learning manager module. These two modules configure agents with k-means clustering on multiple nodes and then perform distributed training on multiple datasets. This enables data analysis in a relatively short time compared with conventional systems that analyze large-scale data in a single batch.
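
Running k-means across nodes works because each iteration only needs per-cluster sums and counts from every data shard, which a coordinator can merge. The sketch below shows that decomposition in one dimension; it illustrates the distributed k-means pattern generally, not the agent architecture proposed in the paper.

```python
def assign(p, centers):
    """Index of the nearest center (1-D squared distance)."""
    return min(range(len(centers)), key=lambda c: (p - centers[c]) ** 2)

def distributed_kmeans(shards, iters=10):
    """k=2 k-means where each shard (node) contributes only its local
    per-cluster sums and counts; a coordinator merges them into new
    centers, so raw points never need to be gathered in one place."""
    pts = [p for shard in shards for p in shard]
    centers = [min(pts), max(pts)]          # crude global init
    for _ in range(iters):
        sums, counts = [0.0, 0.0], [0, 0]
        for shard in shards:                # each loop body could run on its own node
            for p in shard:
                c = assign(p, centers)
                sums[c] += p
                counts[c] += 1
        centers = [sums[c] / counts[c] if counts[c] else centers[c]
                   for c in range(2)]
    return sorted(centers)

# three "nodes", each holding part of the data
shards = [[0.9, 1.1, 1.0], [4.8, 5.2], [1.2, 5.0]]
```

Because the per-shard statistics are tiny compared with the data itself, the communication cost per iteration is constant in the dataset size, which is what makes the multi-node configuration faster than one large batch job.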

A Learning-Flow Model Supporting Distributed Cognition in IT Education

  • 김성기;배지혜
    • 융합보안논문지
    • /
    • Vol.12 No.6
    • /
    • pp.51-59
    • /
    • 2012
  • The learning-flow model proposed in this paper, "BoX", takes its inspiration from the culture in which B-Boys actively enjoy distributed-cognition competition in order to win tournaments. "Battle" denotes competition, and X is the course to which the proposed teaching method is applied. The goal of this paper is to present a learning model in which learners develop creative problem-solving skills in a state of deep learning flow. The key to implementing "BoX" is to design the order of competition and the control principles among learners so as to maximize distributed-cognition activities that reduce individual cognitive load. An analysis of the achievement of comparison student groups across two semesters of an IT curriculum confirmed that the "BoX" model substantially increases both learning flow and achievement compared with conventional instruction.

Design of Block Codes for Distributed Learning in VR/AR Transmission

  • Seo-Hee Hwang;Si-Yeon Pak;Jin-Ho Chung;Daehwan Kim;Yongwan Kim
    • Journal of information and communication convergence engineering
    • /
    • Vol.21 No.4
    • /
    • pp.300-305
    • /
    • 2023
  • Audience reactions to remote virtual performances must be compressed before being transmitted to the server. The server, which aggregates these data for group insights, requires a distribution code for the transfer. Recently, distributed learning algorithms such as federated learning have gained attention as alternatives that satisfy both information-security and efficiency requirements. In distributed learning, no individual user has access to the complete information, and the objective is to achieve a learning effect similar to that achieved with the entire information. It is therefore important to distribute interdependent information among users and subsequently aggregate this information after training. In this paper, we present a new extension technique for minimal codes that generates a new minimal code with a different length and Hamming weight through the product of an arbitrary vector and a given minimal code. Thus, the proposed technique can generate minimal codes with previously unknown parameters. We also present a scenario in which these combined methods can be applied.
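
The notion the abstract builds on, a minimal codeword, is defined by support containment: a nonzero codeword is minimal if no other nonzero codeword's support is strictly contained in its own, and a minimal code is one in which every nonzero codeword is minimal. The sketch below checks that definition over a small binary linear code; the generator matrix is an arbitrary toy example, and this is not the paper's vector-product extension construction.

```python
from itertools import product

def codewords(gen):
    """All codewords of the binary linear code spanned by the rows of gen."""
    n = len(gen[0])
    words = set()
    for coeffs in product((0, 1), repeat=len(gen)):
        word = tuple(sum(c * row[i] for c, row in zip(coeffs, gen)) % 2
                     for i in range(n))
        words.add(word)
    return words

def support(word):
    """Positions of the nonzero coordinates."""
    return {i for i, bit in enumerate(word) if bit}

def is_minimal_codeword(word, words):
    """True if no *other* nonzero codeword's support is strictly
    contained in supp(word)."""
    sup = support(word)
    return all(not (support(other) < sup)        # '<' is proper subset
               for other in words if any(other) and other != word)

G = [(1, 0, 0, 1, 1),
     (0, 1, 0, 1, 0),
     (0, 0, 1, 0, 1)]
words = codewords(G)
```

For this G the codeword 01111 is not minimal, since the codeword 01010's support {1, 3} sits strictly inside its support {1, 2, 3, 4}; checks like this are how one verifies that a constructed code, e.g. one produced by an extension technique, is in fact minimal.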