• Title/Summary/Keyword: Artificial Agent


KubEVC-Agent : Kubernetes Edge Vision Cluster Agent for Optimal DNN Inference and Operation (KubEVC-Agent : 머신러닝 추론 엣지 컴퓨팅 클러스터 관리 자동화 시스템)

  • Moohyun Song;Kyumin Kim;Jihun Moon;Yurim Kim;Chaewon Nam;Jongbin Park;Kyungyong Lee
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.6
    • /
    • pp.293-301
    • /
    • 2023
  • With the advancement of artificial intelligence and its various use cases, accessing it through edge computing environments is gaining traction. However, due to the nature of edge computing environments, efficiently managing and optimizing clusters distributed across different geographical locations is a major challenge. To address this, this paper proposes KubEVC-Agent, a centralization and automation tool based on Kubernetes. KubEVC-Agent centralizes the deployment, operation, and management of edge clusters, and the paper presents a use case of data transformation for optimizing intra-cluster communication. The paper describes the components of KubEVC-Agent, its working principle, and experimental results that verify its effectiveness.
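The centralization idea in the abstract can be illustrated with a minimal sketch: one agent registers several edge clusters and fans a single deployment manifest out to all of them. The cluster names, manifest fields, and registry URL are invented for illustration, not the paper's actual API.

```python
# Hypothetical sketch of a KubEVC-Agent-style central controller:
# it tracks edge clusters and pushes one DNN-inference deployment
# spec to all of them. A real tool would call each cluster's
# Kubernetes API server; here we only record the manifests.

def make_deployment(model: str, replicas: int) -> dict:
    """Build a minimal Kubernetes-style Deployment manifest."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": f"{model}-inference"},
        "spec": {
            "replicas": replicas,
            "template": {"spec": {"containers": [
                {"name": model, "image": f"registry.local/{model}:latest"},
            ]}},
        },
    }

class CentralAgent:
    """Central point that registers edge clusters and deploys to all."""
    def __init__(self):
        self.clusters = {}              # cluster name -> applied manifests

    def register(self, name: str):
        self.clusters[name] = []

    def deploy_everywhere(self, manifest: dict):
        for applied in self.clusters.values():
            applied.append(manifest)    # stand-in for an API-server call

agent = CentralAgent()
for c in ["edge-seoul", "edge-busan"]:
    agent.register(c)
agent.deploy_everywhere(make_deployment("resnet50", replicas=2))
```

The point of the sketch is only the control shape: one place to define the workload, many geographically distributed targets receiving it.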

An Artificial Intelligence Game Agent Using CNN Based Records Learning and Reinforcement Learning (CNN 기반 기보학습 및 강화학습을 이용한 인공지능 게임 에이전트)

  • Jeon, Youngjin;Cho, Youngwan
    • Journal of IKEEE
    • /
    • v.23 no.4
    • /
    • pp.1187-1194
    • /
    • 2019
  • This paper proposes a CNN architecture as the value function network of an artificial intelligence Othello game agent, together with a learning scheme based on reinforcement learning. The value function network is first constructed by using the CNN to learn the records of professional players' real games, and its parameters are then refined through self-play with a reinforcement learning algorithm. The performance of the CNN value function network was compared with an existing ANN by letting two agents, each using one of the networks, play games against each other. The CNN agent achieved winning rates of 69.7% as black and 72.1% as white. After applying reinforcement learning, the agent's winning rates improved to 100% and 78%, respectively, compared with the agent trained without reinforcement learning.
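The two-phase scheme described above can be sketched in miniature, shrinking the CNN over Othello boards to a linear value function over two-element feature vectors: phase 1 fits the value function to (state, outcome) records, and phase 2 refines it with TD(0) updates from greedy self-play. All data and the toy "game" are synthetic stand-ins.

```python
# Toy sketch of record learning followed by self-play refinement.
# The linear value function stands in for the paper's CNN.
import random
random.seed(0)

W = [0.0, 0.0]                       # "network" weights

def value(s):                        # s: a 2-feature state summary
    return W[0] * s[0] + W[1] * s[1]

def sgd_step(s, target, lr=0.1):     # squared-error gradient step
    err = value(s) - target
    W[0] -= lr * err * s[0]
    W[1] -= lr * err * s[1]

# Phase 1: supervised learning from (state, final outcome) records.
records = [((1, 0), 1.0), ((0, 1), -1.0), ((1, 1), 0.0)]
for _ in range(200):
    for s, z in records:
        sgd_step(s, z)

# Phase 2: self-play refinement, bootstrapping the training target
# from the value of the greedily chosen successor state (TD(0)).
def successors(s):
    return [(s[0] + 1, s[1]), (s[0], s[1] + 1)]

s = (0, 0)
for _ in range(10):
    nxt = max(successors(s), key=value)   # greedy self-play move
    sgd_step(s, value(nxt))               # TD target = V(next state)
    s = nxt
```

After phase 1 the weights separate favorable from unfavorable states; phase 2 then shifts them using the agent's own play rather than expert labels, which is the structure the paper applies at full CNN scale.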

A Cooperation Strategy of Multi-agents in Real-Time Dynamic Environments (실시간 동적인 환경에서 다중 에이전트의 협동 기법)

  • Yoo, Han-Ha;Cho, Kyung-Eun;Um, Ky-Hyun
    • Journal of Korea Game Society
    • /
    • v.6 no.3
    • /
    • pp.13-22
    • /
    • 2006
  • Games in which teams of players compete, such as sports games, RTS, and RPG, require advanced artificial intelligence for team management. Existing artificial intelligence gives an agent the autonomy to solve problems by itself, but lacks interaction and cooperation between agents. This paper presents a "Level Unified Approach Method" that combines effective role allocation with autonomy in a multi-agent system. The method allots sub-goals to agents based on role information so as to accomplish a global goal, and each agent makes decisions and takes actions by itself in a dynamic environment. The team's global goal is coordinated with the allocated roles at the tactics level. Agents cooperate interactively by sharing state information with one another through a Databoard, and since each agent has planning capability, it takes actions appropriate to its allocated role in the dynamic environment. Because cooperation and interaction between agents can cause collision problems, these are also handled at the tactics level. Our experimental results show that the "Level Unified Approach Method" performs better than the existing centralized or decentralized approach methods.
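The two mechanisms the abstract names, role-based sub-goal allocation and state sharing through a Databoard, can be sketched as follows. The shared board is just a dict here, and the roles, sub-goals, and the one-dimensional "positions" are invented for illustration.

```python
# Minimal sketch: a shared "Databoard" through which agents publish
# state, plus role-based sub-goal allocation and a trivial collision
# rule (step aside if the desired spot is already claimed).

databoard = {}                       # shared state: agent name -> position

SUB_GOALS = {"striker": "attack", "midfielder": "support", "keeper": "defend"}

class Agent:
    def __init__(self, name, role):
        self.name, self.role = name, role
        self.sub_goal = SUB_GOALS[role]      # role-based allocation

    def act(self, desired):
        # read teammates' published positions from the Databoard
        occupied = set(databoard.values())
        # simple tactics-level collision handling
        final = desired if desired not in occupied else desired + 1
        databoard[self.name] = final         # publish own state
        return final

team = [Agent("a1", "striker"), Agent("a2", "midfielder"), Agent("a3", "keeper")]
positions = [a.act(p) for a, p in zip(team, [0, 0, 1])]
```

Each agent decides locally, but because decisions are published to the board before the next agent acts, conflicting choices resolve without a central controller, which is the spirit of the method.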


Swarm Control of Distributed Autonomous Robot System based on Artificial Immune System using PSO (PSO를 이용한 인공면역계 기반 자율분산로봇시스템의 군 제어)

  • Kim, Jun-Yeup;Ko, Kwang-Eun;Park, Seung-Min;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.5
    • /
    • pp.465-470
    • /
    • 2012
  • This paper proposes a distributed autonomous control method for swarm robot behavior strategies based on an artificial immune system, together with an optimization strategy for the immune system itself. The behavior strategies of the swarm robots depend on the task distribution in the environment, so the dynamics of the environment must be taken into account. The behavior strategies are divided into dispersion and aggregation. In the artificial immune system analogy, an individual of the swarm is regarded as a B-cell, each task distribution in the environment as an antigen, a behavior strategy as an antibody, and a control parameter as a T-cell. The proposed method proceeds as follows: when the environmental condition changes, each agent selects an appropriate behavior strategy; the strategy is then stimulated and suppressed by other agents through communication, and the most stimulated strategy is adopted as the swarm behavior strategy. To select the behavior strategy more accurately, the parameters of the stimulus function between antigen and antibody must be learned; in this paper, a particle swarm optimization algorithm is applied to this learning procedure. The proposed method shows more adaptive and robust results than the existing system in terms of how well the swarm robots learn and adapt to changing tasks.
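The PSO half of the proposal can be sketched on its own: a plain particle swarm tuning a single stimulus-function parameter. The quadratic objective is a stand-in for the real antigen-antibody stimulus model, whose form the abstract does not give; the coefficients are standard textbook PSO values.

```python
# Plain particle swarm optimization over one parameter, as a stand-in
# for learning the stimulus-function parameters of the immune network.
import random
random.seed(1)

def objective(k):
    # hypothetical "selection error" of a stimulus gain k; minimum at k = 3
    return (k - 3.0) ** 2

n, iters = 10, 60
pos = [random.uniform(-10, 10) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]                                   # personal bests
gbest = min(pos, key=objective)                  # global best
init_best = objective(gbest)

for _ in range(iters):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = (0.7 * vel[i]                       # inertia
                  + 1.5 * r1 * (pbest[i] - pos[i])   # cognitive pull
                  + 1.5 * r2 * (gbest - pos[i]))     # social pull
        pos[i] += vel[i]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=objective)
```

Because the global best is always the best personal best seen so far, the objective value of `gbest` is nonincreasing over iterations, which is what makes PSO a safe outer loop for the strategy-selection parameters.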

Dependence of physical properties of artificial lightweight aggregates upon a flux and a bloating agent addition (인공경량골재 특성의 발포제 및 융제 첨가 의존성)

  • Kang, Seung-Gu
    • Journal of the Korean Crystal Growth and Crystal Technology
    • /
    • v.19 no.1
    • /
    • pp.48-53
    • /
    • 2009
  • The effect of bloating and fluxing agents on the microstructure and physical properties was studied in manufacturing low-bulk-density artificial lightweight aggregates from clay and stone sludge. For aggregates with only a bloating agent added, the bulk density and water absorption were 0.5~1.0 and 41~110 % respectively, but the microstructure was not uniform and had a rough appearance. For aggregates with a fluxing agent and a single bloating agent, part of the shell was lost when the specimen exploded from over-bloating during sintering. Adding a mixture of bloating agents (vacuum oil, carbon, and Fe2O3) made the microstructure homogeneous, generating a uniform black core and shell structure. Aggregates with the mixed agents, sintered at 1200 °C, showed a bulk density 67 % lower and water absorption 48 times higher than those of the specimen with no additives. In this study, artificial lightweight aggregates with a bulk density of 0.5~1.0 and water absorption of 50~125 % could thus be fabricated for application in various fields.

Learning soccer robot using genetic programming

  • Wang, Xiaoshu;Sugisaka, Masanori
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1999.10a
    • /
    • pp.292-297
    • /
    • 1999
  • Evolving artificial agents is an extremely difficult but challenging task. Studies to date have mainly centered on the single-agent learning problem; in our case, we use simulated soccer to investigate multi-agent cooperative learning. Considering the fundamental differences in learning mechanism, existing reinforcement learning algorithms can be roughly classified into two types: those based on evaluation functions and those that search the policy space directly. Genetic Programming, which developed from Genetic Algorithms, is one of the best-known approaches of the latter type. In this paper, we first give a detailed algorithm description, as well as the data construction necessary for learning single-agent strategies. We then extend the developed methods to the multi-robot domain. We investigate and contrast two methods, simple team learning and sub-group learning, and conclude the paper with experimental results.
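A genetic-programming loop of the kind the paper uses can be sketched at toy scale: policies are expression trees over an agent's inputs, evolved against a fitness function. Here the "policy" only has to approximate a fixed target function; the soccer task itself is far beyond a sketch, and the operator set and parameters are illustrative.

```python
# Toy genetic programming: evolve expression trees over input x to
# minimize squared error against target(x) = 2x + 1, with elitism.
import random
random.seed(2)

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def rand_tree(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1.0, 2.0])          # terminal node
    op = random.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(t, x):
    if t == "x":
        return x
    if isinstance(t, float):
        return t
    op, left, right = t
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(t):                                        # lower is better
    return sum((evaluate(t, x) - (2 * x + 1)) ** 2 for x in range(-3, 4))

def mutate(t):
    # crude mutation: sometimes replace the whole tree
    return rand_tree() if random.random() < 0.5 else t

pop = [rand_tree() for _ in range(30)]
f0 = min(fitness(t) for t in pop)                      # initial best fitness
for _ in range(40):
    pop.sort(key=fitness)
    elite = pop[:5]                                    # elitism: keep the best
    pop = elite + [mutate(random.choice(elite)) for _ in range(25)]

best = min(pop, key=fitness)
```

This is the "search policy space directly" family the abstract contrasts with evaluation-function methods: no value estimates are learned, only whole candidate policies are kept or discarded.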


Comparison of Reinforcement Learning Activation Functions to Improve the Performance of the Racing Game Learning Agent

  • Lee, Dongcheul
    • Journal of Information Processing Systems
    • /
    • v.16 no.5
    • /
    • pp.1074-1082
    • /
    • 2020
  • Recently, research has been actively conducted on artificial intelligence agents that learn games through reinforcement learning. Several factors determine performance when an agent learns a game, and the choice of activation function is an important one. This paper compares and evaluates which activation function gives the best results when an agent learns a 2D racing game through reinforcement learning. We built the agent using a reinforcement learning algorithm and a neural network, and evaluated the candidate activation functions by swapping them in the network. We measured the reward, the output of the advantage function, and the output of the loss function during training and testing. The performance evaluation identified the best activation function for the agent; the difference between the best and the worst was 35.4%.
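The swap-in comparison the paper describes can be reduced to its core: the same fixed network run once per candidate activation function. The weights and input are toy values; in the paper the comparison is over full reinforcement-learning training runs, not a single forward pass.

```python
# One forward pass through a fixed two-layer net, repeated with each
# candidate activation function swapped into the hidden layer.
import math

ACTIVATIONS = {
    "relu":       lambda z: max(0.0, z),
    "leaky_relu": lambda z: z if z > 0 else 0.01 * z,
    "tanh":       math.tanh,
    "sigmoid":    lambda z: 1.0 / (1.0 + math.exp(-z)),
}

W1 = [[0.5, -0.2], [0.3, 0.8]]       # hidden layer weights (2 in -> 2 hidden)
W2 = [1.0, -1.0]                     # output layer weights (2 hidden -> 1 out)

def forward(x, act):
    hidden = [act(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

# the experiment shape: hold everything fixed, vary only the activation
outputs = {name: forward([1.0, -1.0], f) for name, f in ACTIVATIONS.items()}
```

Keeping every other component fixed while varying a single factor is what makes the 35.4% best-to-worst gap reported above attributable to the activation function alone.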

Distributed Data Platform Collaboration Agent Design Using EMRA

  • Park, Ho-Kyun;Moon, Seok-Jae
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.2
    • /
    • pp.40-46
    • /
    • 2022
  • Recently, as the need to access data by integrating information in distributed cloud environments has grown across enterprises, interoperability for collaboration between existing legacy systems has been emphasized. To interconnect independent legacy systems, both platform heterogeneity and semantic heterogeneity must be overcome. To solve this problem, middleware was built using EMRA (Extended MetaData Registry Access) based on ISO/IEC 11179. However, that middleware cannot guarantee efficient use of information because it has no coordination function for each node's resource status and work status, so the legacy systems still need to be managed and coordinated. In this paper, we design a collaboration agent that coordinates correct data access between the information-requesting agent and the information-providing agent, and that is responsible for information monitoring, task distribution across the legacy systems, and resource management of local nodes, so that the available information is used efficiently.
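The coordination gap described above can be illustrated with a small sketch: a collaboration agent that routes a data request to whichever provider node currently reports the most free capacity. The node names, the load field, and the least-loaded routing rule are all invented for illustration; the paper's actual EMRA-based design is not specified at this level of detail.

```python
# Illustrative broker: route each request to the least-loaded
# information-providing agent, then update its recorded work status.

class ProviderAgent:
    def __init__(self, name, load):
        self.name = name
        self.load = load                 # current work status (0..1)

    def serve(self, query):
        return f"{self.name}:{query}"

class CollaborationAgent:
    """Monitors provider status and distributes requests accordingly."""
    def __init__(self, providers):
        self.providers = providers

    def route(self, query):
        best = min(self.providers, key=lambda p: p.load)   # least loaded
        best.load += 0.1                 # account for the newly assigned task
        return best.serve(query)

nodes = [ProviderAgent("legacy-a", 0.8), ProviderAgent("legacy-b", 0.2)]
broker = CollaborationAgent(nodes)
answer = broker.route("customer_table")
```

Without the status-tracking step, every request could land on the same node, which is exactly the inefficiency the abstract attributes to the middleware lacking a coordination function.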

Experiments of soccer robots system

  • Sugisaka, Masanori;Nakanishi, Kiyokazu;Hara, Masayoshi
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.1105-1108
    • /
    • 2003
  • The micro robot soccer playing system is introduced. Studying, learning, and evolving in artificial agents is a very difficult problem, but we consider it all the more challenging a task. In our laboratory, studies on this soccer system have mainly centered on the single-agent learning problem. The construction of such an experimental system has involved many kinds of challenges, including robot design, vision processing, and motion control. Finally, we give some results showing that the proposed approach is feasible for guiding the design of common agent systems.


AI Voice Agent and Users' Response (AI 음성 에이전트의 음성 특성에 대한 사용자 반응 연구)

  • Beak, Seung Ju;Jung, Yoon Hyuk
    • The Journal of Information Systems
    • /
    • v.31 no.2
    • /
    • pp.137-158
    • /
    • 2022
  • Purpose: As artificial intelligence voice agents (AIVA) have been widely adopted in services, diverse forms of their voices, which are the main interface with users, have been experimented with. The purpose of this study is to examine how users evaluate the vocal characteristics (gender, voice pitch, and voice pace) of AIVA, drawing on prior research on human voice attractiveness. Design/methodology/approach: This study employed an experimental survey in which 516 people participated. Each participant was randomly assigned to one of eight conditions (e.g., male - higher pitch - faster pace) and listened to an AIVA voice sample that presented weather information. The participant then rated three outcome factors (attractiveness, trust, and anthropomorphism). Findings: The results reveal that female AIVA voices were perceived as more attractive and trustworthy than male voices. As for voice pitch, lower-pitch voices were preferred for female voices, while higher-pitch voices were preferred for male voices. Finally, faster AIVA voices were rated as more attractive than slower voices.