• Title/Summary/Keyword: robot action selection (로봇 행동선택)

48 search results (processing time: 0.023 seconds)

The effects of an increase in self-determination experience on the behavior of young children with autism spectrum disorder by telepresence robot (텔레프레젠스 로봇을 이용한 자기결정 경험의 증대가 자폐범주성 장애유아의 행동에 미치는 효과 (자기결정 활동 멀티미디어 콘텐츠의 적용을 통하여))

  • Kim, Su-Jin
    • Journal of rehabilitation welfare engineering & assistive technology
    • /
    • v.12 no.1
    • /
    • pp.38-45
    • /
    • 2018
  • The purpose of this study was to investigate the effects of an increase in self-determination experience, provided through a telepresence robot, on the behavior of young children with autism spectrum disorder. The study used an AB design: two selected children engaged in activities with a telepresence robot during morning free play time. The activities were conducted in 19 sessions, twice a week, 15 to 40 minutes each. To investigate the effect of the activities on the children's behavior, their behaviors during afternoon free play time and work time were observed; the whole process was recorded on camera and then analyzed by frequency recording. The results of the study are as follows. First, the participation of the young children with autism spectrum disorder in free play time increased. Second, their choice-making and preference behaviors increased. This study suggests that increasing the self-determination experience of young children with autism spectrum disorder using telepresence robots increases their participation and their choice-making and preference behaviors.

The Robot Soccer Strategy and Tactic by Fuzzy Logic on Shoot Propriety (슛 적정성에 퍼지 논리를 고려한 로봇축구 전략 및 전술)

  • Lee Jeongjun;Joo Moon G.;Lee Wonchang;Kang Geuntaek
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2005.11a
    • /
    • pp.317-320
    • /
    • 2005
  • This paper presents an algorithm that uses fuzzy logic to select the robots' behaviors appropriately according to various environment variables. Modular Q-learning, a well-known strategy and tactic algorithm, not only increases the number of states exponentially with the number of agents but also requires a separate mediator-module algorithm for the robots to cooperate. The robot soccer strategy and tactic algorithm proposed here, which considers the fuzzy propriety of robot actions, instead obtains the appropriateness of each robot action from the environment variables through fuzzy logic, and by using these values it can also take the interaction of multiple robots into account.

  • PDF
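The fuzzy action-propriety idea in the abstract above can be sketched as follows. The membership functions, thresholds, and the shoot/dribble rule below are hypothetical stand-ins for illustration, not the paper's actual rule base.

```python
# Minimal sketch of fuzzy action selection: the "propriety" of each
# candidate action is inferred from environment variables via simple
# triangular memberships. All shapes and rules are illustrative.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def shoot_propriety(dist_to_goal, open_angle):
    """Fuzzy degree (0..1) that shooting is appropriate right now."""
    near = tri(dist_to_goal, -1.0, 0.0, 2.0)   # "near the goal" membership
    wide = tri(open_angle, 0.0, 90.0, 180.0)   # "wide open angle" membership
    return min(near, wide)                     # rule: near AND wide -> shoot

def select_action(dist_to_goal, open_angle):
    """Pick between two candidate actions by fuzzy propriety."""
    shoot = shoot_propriety(dist_to_goal, open_angle)
    dribble = 1.0 - shoot                      # complementary fallback action
    return "shoot" if shoot >= dribble else "dribble"
```

A real rule base would combine more environment variables (ball and opponent positions) and compare several candidate actions; the min/max pattern here is only the standard Mamdani-style AND/selection step.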

Bayesian Inference of Behavior Network for Perceiving Moving Objects and Generating Behaviors of Agent (에이전트의 움직이는 물체 인지와 행동 생성을 위한 행동 네트워크의 베이지안 추론)

  • 민현정;조성배
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.10a
    • /
    • pp.46-48
    • /
    • 2003
  • This paper proposes a perception and automatic behavior generation method for an agent in unpredictable situations such as real environments. Traditional intelligent agent control methods could not adapt to diverse and complex environments because of the constraint that they rely on prior knowledge about the environment. Recently, behavior-based methods that connect sensors to actions so that behaviors can be generated automatically in previously unknown environments, and hybrid methods that add inference, learning, and planning capabilities, have been studied. This paper proposes a method that combines a Bayesian network with a behavior network in order to perceive obstacles moving under various environmental conditions and to generate avoidance behaviors. The behavior network selects the highest-priority behavior to perform next using the input sensor information and predefined goal information, while the Bayesian network infers the situation in advance from the sensor information and supplies the resulting probability values as weights to the behavior network, thereby adjusting behavior selection. Experiments with a robot simulator confirmed that the proposed combination of a behavior network and a Bayesian network avoids moving obstacles and reaches the goal.

  • PDF
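The behavior-network/Bayesian-network combination described above can be sketched as follows. The collision-probability function stands in for an actual Bayesian network query, and the behavior names and numbers are hypothetical.

```python
# Sketch: a behavior network keeps a predefined priority per behavior,
# and a probability inferred from sensors (here a simple function
# standing in for a Bayesian-network query) reweights those priorities
# before the highest-scoring behavior is selected.

def infer_collision_prob(front_distance):
    """Stand-in for a Bayesian-network query: nearer obstacle -> higher risk."""
    return max(0.0, min(1.0, 1.0 - front_distance / 5.0))

def select_behavior(priorities, front_distance):
    """Weight each behavior's predefined priority by the inferred probability."""
    p = infer_collision_prob(front_distance)
    weights = {"avoid": p, "go_to_goal": 1.0 - p}   # risk favors avoidance
    scores = {b: priorities[b] * weights.get(b, 1.0) for b in priorities}
    return max(scores, key=scores.get)
```

With equal predefined priorities, a close obstacle (high inferred probability) shifts selection to `avoid`, while open space shifts it back to `go_to_goal`, which is the adjustment role the abstract assigns to the Bayesian network.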

Autonomous Mobile Robot System based on a Fuzzy Artificial Immune System (퍼지 인공 면역망 시스템을 이용한 자율이동로봇 시스템)

  • Lee, Dong-Je;Choi, Young-Kiu
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.11
    • /
    • pp.2083-2089
    • /
    • 2007
  • This paper addresses low-level fuzzy-control behaviors and a high-level behavior selector for Autonomous Mobile Robots (AMRs) based on a Fuzzy Artificial Immune Network. The sensing information coming from the ultrasonic sensors is treated as the antigen, which stimulates antibodies. There are many possible combinations between action patterns and external situations, and the question is how to handle these situations to decide on the proper action. We propose a fuzzy artificial immune network to solve this problem, and a computer simulation of an AMR action selector shows the usefulness of the proposed selector.
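The antigen/antibody metaphor in the abstract above can be sketched as follows: the ultrasonic reading pattern plays the antigen, each candidate action is an antibody with a preferred pattern, and the most stimulated antibody is selected. The patterns, actions, and affinity measure are illustrative assumptions, not the paper's fuzzy immune network.

```python
# Sketch of antibody selection: each action "antibody" carries a
# preferred (left, front, right) range pattern; the sensed pattern
# (antigen) stimulates the antibody it matches best.

ANTIBODIES = {                       # preferred (left, front, right) ranges
    "turn_left":  (3.0, 0.5, 0.5),   # open space on the left
    "turn_right": (0.5, 0.5, 3.0),   # open space on the right
    "go_forward": (1.5, 3.0, 1.5),   # open space ahead
}

def affinity(antigen, pattern):
    """Higher when the sensed pattern matches the antibody's pattern."""
    d = sum((a - b) ** 2 for a, b in zip(antigen, pattern)) ** 0.5
    return 1.0 / (1.0 + d)

def select_antibody(antigen):
    """The most stimulated antibody (action) wins."""
    return max(ANTIBODIES, key=lambda name: affinity(antigen, ANTIBODIES[name]))
```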

The study on environmental adaptation and expansion of the intelligent agent (지능형 에이전트의 환경 적응성 및 확장성에 대한 연구)

  • 백혜정;박영택
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.10a
    • /
    • pp.136-138
    • /
    • 2003
  • For an intelligent agent such as a robot or a virtual character to live autonomously, it must be able to recognize a given environment and select the optimal behavior for it. To implement such an intelligent agent, this paper studies a method of learning and selecting optimal behaviors while adapting to the external environment. The proposed approach integrates behavior-based learning using reinforcement learning with cognitive learning using symbolic learning, and has the following characteristics. First, reinforcement learning is used to adapt to the external environment, which gives the intelligent agent flexibility toward a changing environment. Second, rules are extracted from experience using inductive machine learning and association rules, so that the agent learns the environmental factors relevant to its goals and thus selects behaviors more quickly in a given environment and more efficiently in an extended one. The proposed integrated approach can improve learning speed compared with learning algorithms that use only reinforcement learning, and it has the advantage of applying behaviors flexibly to the environment compared with learning algorithms that use only symbolic learning.

  • PDF

Cooperative Strategies and Swarm Behavior in Distributed Autonomous Robotic Systems based on Artificial Immune System (인공면역 시스템 기반 자율분산로봇 시스템의 협조 전략과 군행동)

  • 심귀보
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.9 no.6
    • /
    • pp.627-633
    • /
    • 1999
  • In this paper, we propose a method of cooperative control (T-cell modeling) and selection of a group behavior strategy (B-cell modeling) based on the immune system in a distributed autonomous robotic system (DARS). The immune system is the living body's self-protection and self-maintenance system, and these features can be applied to decision making for optimal swarm behavior in a dynamically changing environment. To apply the immune system to DARS, a robot is regarded as a B-cell, each environmental condition as an antigen, a behavior strategy as an antibody, and a control parameter as a T-cell, respectively. When the environmental condition (antigen) changes, a robot selects an appropriate behavior strategy (antibody), and its behavior strategy is stimulated and suppressed by other robots through communication (immune network). Finally, the most stimulated strategy is adopted as the swarm behavior strategy. This control scheme is based on clonal selection and the immune network hypothesis, and it is used for decision making of the optimal swarm strategy. The adaptation ability of the robots is enhanced by adding the T-cell model as a control parameter in dynamic environments.

  • PDF
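The stimulation/suppression mechanism described in the abstract above can be sketched as follows: each robot holds a concentration for every behavior strategy (antibody); antigen match raises it, and other robots' broadcast strategies stimulate or suppress it through an interaction matrix. All numbers are illustrative and loosely follow Farmer-style immune-network dynamics rather than the paper's exact equations.

```python
# Sketch of immune-network strategy selection for one robot: one Euler
# step of concentration dynamics driven by antigen affinity plus net
# stimulation/suppression from neighboring robots' strategies.

def update_concentrations(conc, antigen_affinity, neighbor_conc, interact, dt=0.1):
    """One update step of immune-network dynamics for one robot."""
    new = {}
    for s in conc:
        # stimulation by the antigen + net stimulation/suppression by neighbors
        net = antigen_affinity[s] + sum(
            interact[s][t] * neighbor_conc[t] for t in neighbor_conc
        )
        new[s] = max(0.0, conc[s] + dt * net * conc[s])
    return new

def adopt_strategy(conc):
    """The most stimulated strategy becomes the swarm behavior."""
    return max(conc, key=conc.get)
```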

Autonomous Mobile Robot System based on a Fuzzy Artificial Immune System (퍼지 인공 면역망 시스템을 이용한 자율이동로봇 시스템)

  • Lee, Dong-Je;Choi, Young-Kiu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2007.10a
    • /
    • pp.257-260
    • /
    • 2007
  • This paper addresses low-level fuzzy-control behaviors and a high-level behavior selector for Autonomous Mobile Robots (AMRs) based on a Fuzzy Artificial Immune Network. The sensing information coming from the ultrasonic sensors is treated as the antigen, which stimulates antibodies. There are many possible combinations between action patterns and external situations, and the question is how to handle these situations to decide on the proper action. We propose a fuzzy artificial immune network to solve this problem, and a computer simulation of an AMR action selector shows the usefulness of the proposed selector.

  • PDF

Behavior Learning and Evolution of Individual Robot for Cooperative Behavior of Swarm Robot System (군집 로봇의 협조 행동을 위한 로봇 개체의 행동학습과 진화)

  • Sim, Kwee-Bo;Lee, Dong-Wook
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.2
    • /
    • pp.131-137
    • /
    • 2006
  • In swarm robot systems, each robot must behave by itself according to its states and environment and, if necessary, cooperate with other robots in order to carry out a given task. Therefore it is essential that each robot have both learning and evolution abilities to adapt to dynamic environments. In this paper, a new learning and evolution method based on reinforcement learning with delayed reward and on distributed genetic algorithms is proposed for the behavior learning and evolution of collective autonomous mobile robots. Reinforcement learning with delayed reward remains useful even when there is no immediate reward, and through a distributed genetic algorithm in which the chromosomes acquired under different environments are exchanged via communication, each robot can improve its behavior ability. In particular, to improve the performance of evolution, selective crossover that uses the characteristics of reinforcement learning is adopted. We verify the effectiveness of the proposed method by applying it to a cooperative search problem.
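The two ingredients named in the abstract above can be sketched as follows: a tabular Q-update that propagates a reward received only at the end of an episode, and a distributed-GA step in which robots exchange chromosomes (here, flat parameter lists) and cross over selectively, biased toward the fitter parent. The states, parameters, and fitness values are hypothetical.

```python
import random

def q_update(Q, trajectory, final_reward, alpha=0.5, gamma=0.9):
    """Backward tabular update over an episode whose only reward is delayed."""
    g = final_reward
    for state, action in reversed(trajectory):
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (g - old)
        g *= gamma                  # discount as we move toward the episode start
    return Q

def exchange_and_crossover(chrom_a, chrom_b, fitness_a, fitness_b, rng):
    """Selective crossover: the fitter parent contributes each gene more often."""
    p_a = fitness_a / (fitness_a + fitness_b)
    return [a if rng.random() < p_a else b for a, b in zip(chrom_a, chrom_b)]
```

In a swarm, each robot would run `q_update` locally and periodically call `exchange_and_crossover` with a chromosome received from a neighbor, which is the communication-based exchange the abstract describes.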

Behavior Learning and Evolution of Swarm Robot System using Support Vector Machine (SVM을 이용한 군집로봇의 행동학습 및 진화)

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.5
    • /
    • pp.712-717
    • /
    • 2008
  • In swarm robot systems, each robot must act by itself according to its states and environment and, if necessary, cooperate with other robots in order to carry out a given task. Therefore it is essential that each robot have both learning and evolution abilities to adapt to dynamic environments. In this paper, a reinforcement learning method with an SVM based on structural risk minimization, combined with distributed genetic algorithms, is proposed for the behavior learning and evolution of collective autonomous mobile robots. Through a distributed genetic algorithm in which the chromosomes acquired under different environments are exchanged via communication, each robot can improve its behavior ability. In particular, to improve the performance of evolution, selective crossover that uses the characteristics of reinforcement learning based on the SVM is adopted.

Behavior Learning and Evolution of Swarm Robot System using Q-learning and Cascade SVM (Q-learning과 Cascade SVM을 이용한 군집로봇의 행동학습 및 진화)

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.2
    • /
    • pp.279-284
    • /
    • 2009
  • In swarm robot systems, each robot must behave by itself according to its states and environment and, if necessary, cooperate with other robots in order to carry out a given task. Therefore it is essential that each robot have both learning and evolution abilities to adapt to dynamic environments. In this paper, a reinforcement learning method using multiple SVMs based on structural risk minimization, combined with distributed genetic algorithms, is proposed for the behavior learning and evolution of collective autonomous mobile robots. Through a distributed genetic algorithm in which the chromosomes acquired under different environments are exchanged via communication, each robot can improve its behavior ability. In particular, to improve the performance of evolution, selective crossover that uses the characteristics of reinforcement learning based on Cascade SVM is adopted.