• Title/Summary/Keyword: human-interactive actions

Search results: 10 items (processing time: 0.022 s)

유전알고리즘을 이용한 사족 보행로봇의 인간친화동작 구현 (The Implementation of Human-Interactive Motions for a Quadruped Robot Using Genetic Algorithm)

  • 공정식;이인구;이보희
    • 제어로봇시스템학회논문지 / Vol. 8 No. 8 / pp.665-672 / 2002
  • This paper deals with the human-interactive actions of a quadruped robot generated by a genetic algorithm. To carry out designated tasks in special environments, the robot is required to have walking capability, with leg patterns designed after the gaits of insects, dogs, and humans. Our quadruped robot (called SERO) is capable not only of basic actions operated with sensors and actuators but also of various advanced actions, including walking trajectories generated by the genetic algorithm. In this paper, the body and controller structures are proposed and a kinematic analysis is performed. All of the suggested motions of SERO were generated in PC simulation and successfully implemented in a real environment.
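The trajectory-generation idea above can be sketched as a plain genetic algorithm; the genome encoding and the fitness function below are invented placeholders, not SERO's actual gait model:

```python
import random

# Hypothetical fitness: reward trajectories that are smooth (small jumps
# between successive joint set-points) and move forward (large sum).
# A real implementation would score a simulated or physical walking trial.
def fitness(genome):
    smoothness = sum((genome[i + 1] - genome[i]) ** 2 for i in range(len(genome) - 1))
    return sum(genome) - smoothness

def evolve(pop_size=20, genome_len=8, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # Gaussian mutation
                child[rng.randrange(genome_len)] += rng.gauss(0, 0.1)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Selection, crossover, and mutation are the standard operators; only the fitness function would need to change to score real walking trials.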

Co-Operative Strategy for an Interactive Robot Soccer System by Reinforcement Learning Method

  • Kim, Hyoung-Rock;Hwang, Jung-Hoon;Kwon, Dong-Soo
    • International Journal of Control, Automation, and Systems / Vol. 1 No. 2 / pp.236-242 / 2003
  • This paper presents a cooperation strategy between a human operator and autonomous robots for an interactive robot soccer game. The interactive robot soccer game has been developed to allow humans to join the game dynamically and to reinforce its entertainment characteristics. To make these games more interesting, a cooperation strategy between the humans and autonomous robots on a team is very important. Strategies can be pre-programmed or learned by the robots themselves with learning or evolutionary algorithms. Since the robot soccer system is hard to model and its environment changes dynamically, it is very difficult to pre-program cooperation strategies between robot agents. Q-learning, one of the most representative reinforcement learning methods, is known to be effective for solving problems dynamically without explicit knowledge of the system, so a Q-learning-based method was adopted in our research. Prior to applying Q-learning, state variables describing the game situation and the robots' action sets were defined. After the learning process, the human operator could play the game more easily. To evaluate the usefulness of the proposed strategy, simulations and games were carried out.
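The Q-learning update the abstract relies on can be shown in a minimal sketch; the one-dimensional "approach the ball" task, its states, actions, and rewards are invented for illustration and are not the paper's state variables:

```python
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.9          # learning rate and discount factor
Q = defaultdict(float)           # Q-table: (state, action) -> value

def update(state, action, reward, next_state, actions=("left", "right")):
    # Standard Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a').
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One episode: the agent at position 0 moves right toward the ball at
# position 2 and is rewarded only on the final transition.
for s, a, r, s2 in [(0, "right", 0, 1), (1, "right", 1, 2)]:
    update(s, a, r, s2)
```

Over repeated episodes the reward propagates backward through the table, which is exactly what lets the strategy emerge without a model of the environment.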

상태 분할 기법을 이용한 집사 로봇의 작업 계획 시스템 (A Task Planning System of a Steward Robot with a State Partitioning Technique)

  • 김용휘;이형욱;김헌희;박광현;변증남
    • 로봇학회논문지 / Vol. 3 No. 1 / pp.23-32 / 2008
  • This paper presents a task planning system for a steward robot, which has been developed as an interactive intermediate agent between an end user and a complex smart-home environment called the ISH (Intelligent Sweet Home) at KAIST (Korea Advanced Institute of Science and Technology). The ISH is a large-scale robotic environment with various assistive robots and home appliances for the independent living of the elderly and people with disabilities. In particular, as an approach to human-friendly human-robot interaction, we aim at 'simplification of task commands' by the user. To this end, a task planning system is proposed that effectively generates a sequence of actions for coordinating subtasks of the target subsystems from a given high-level task command. The task planning is performed under the framework of the STRIPS (Stanford Research Institute Problem Solver) representation and the split planning method. In addition, we applied a state-partitioning technique to the backward split planning method to reduce computational time. By analyzing the obtained graph, the planning system decomposes the original planning problem into several independent sub-problems and then generates a proper sequence of actions. To show the effectiveness of the proposed system, we deal with a planning-problem scenario in the ISH.

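A STRIPS action is a triple of preconditions, an add list, and a delete list, and a plan is any action sequence that reaches the goal. The sketch below uses a simple forward breadth-first search rather than the paper's backward split planning with state partitioning, and the household actions are invented, not the ISH subsystem model:

```python
from collections import deque

# action name -> (preconditions, add list, delete list)
ACTIONS = {
    "open_curtain": ({"robot_at_window"}, {"curtain_open"}, set()),
    "goto_window":  (set(), {"robot_at_window"}, {"robot_at_door"}),
    "turn_on_tv":   (set(), {"tv_on"}, set()),
}

def plan(state, goal):
    """Breadth-first search from `state` to any state containing `goal`."""
    frontier = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while frontier:
        s, seq = frontier.popleft()
        if goal <= s:
            return seq
        for name, (pre, add, dele) in ACTIONS.items():
            if pre <= s:                      # action applicable?
                s2 = frozenset((s - dele) | add)
                if s2 not in seen:
                    seen.add(s2)
                    frontier.append((s2, seq + [name]))
    return None                               # goal unreachable

steps = plan({"robot_at_door"}, {"curtain_open", "tv_on"})
```

The state-partitioning technique in the paper exists precisely because this kind of blind search blows up as the number of appliances grows; splitting the problem into independent sub-problems keeps each search small.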

비전 기반 신체 제스처 인식을 이용한 상호작용 콘텐츠 인터페이스 (Interface of Interactive Contents using Vision-based Body Gesture Recognition)

  • 박재완;송대현;이칠우
    • 스마트미디어저널 / Vol. 1 No. 2 / pp.40-46 / 2012
  • This paper describes interactive content that uses vision-based body-gesture recognition results as its input interface. The content is built around the dokkaebi, a goblin figure common to Asian cultures, so that it can be approached through familiar regional culture. In the content scenario, a duel with the dokkaebi proceeds through recognition of the user's gestures, so the user is naturally drawn into the story; toward the end of the scenario, the user can choose among multiple endings set in different times and places. For body-gesture recognition, static poses are derived from the 3D coordinates of each body part obtained from a KINECT sensor. Vision-based 3D human pose recognition is used in HCI (Human-Computer Interaction) as a means of conveying human gestures. Compared with 2D pose-model-based methods, which can recognize only simple 2D motion poses in constrained environments, a pose model describing 3D joints can use joint-angle information and body-part shape information as prior knowledge, and can therefore recognize complex 3D poses in more general environments. Since a human gesture can be expressed as a continuous sequence of static poses, gestures composed of such poses are recognized with an HMM. The interactive content described in this paper uses the gesture recognition results as its input interface without any additional devices, letting users control the content naturally with body movements alone. By enabling real-time interaction with a dokkaebi, a figure rarely encountered in everyday life, the content aims to improve immersion and enjoyment.

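Recognizing a gesture as a sequence of static poses with an HMM amounts to decoding the most likely hidden state sequence, which Viterbi decoding gives directly. The two gestures, the pose symbols, and all probabilities below are illustrative placeholders, not the paper's trained model:

```python
# Two hidden gesture states emitting discrete pose symbols.
states = ["attack", "defend"]
start = {"attack": 0.5, "defend": 0.5}
trans = {"attack": {"attack": 0.8, "defend": 0.2},
         "defend": {"attack": 0.2, "defend": 0.8}}
emit = {"attack": {"arm_up": 0.7, "arm_down": 0.3},
        "defend": {"arm_up": 0.1, "arm_down": 0.9}}

def viterbi(obs):
    # v[t][s]: probability of the best path ending in state s at time t.
    v = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[-1][p] * trans[p][s])
            ptr[s] = prev
            col[s] = v[-1][prev] * trans[prev][s] * emit[s][o]
        v.append(col)
        back.append(ptr)
    # Trace back from the best final state.
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

path = viterbi(["arm_up", "arm_down", "arm_down"])
```

In the actual system the observation symbols would come from the KINECT pose classifier rather than being hand-written strings.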

The World as Seen from Venice (1205-1533) as a Case Study of Scalable Web-Based Automatic Narratives for Interactive Global Histories

  • NANETTI, Andrea;CHEONG, Siew Ann
    • Asian review of World Histories / Vol. 4 No. 1 / pp.3-34 / 2016
  • This introduction is both a statement of a research problem and an account of the first research results toward its solution. As more historical databases come online and overlap in coverage, we need to discuss the two main issues that have so far prevented 'big' results from emerging. First, historical data are seen by computer scientists as unstructured; that is, historical records cannot be easily decomposed into unambiguous fields, as in population (birth and death) records and taxation data. Second, machine-learning tools developed for structured data cannot be applied as they are to historical research. We propose a complex-network, narrative-driven approach to mining historical databases. In such a time-integrated network obtained by overlaying records from historical databases, the nodes are actors, while the links are actions. In the case study that we present (the world as seen from Venice, 1205-1533), the actors are governments, while the actions are limited to war, trade, and treaty to keep the case study tractable. We then identify key periods, key events, and hence key actors and key locations through a time-resolved examination of the actions. This tool allows historians to deal with historical data issues (e.g., source provenance identification, event validation, trade-conflict-diplomacy relationships). On a higher level, this automatic extraction of key narratives from a historical database allows historians to formulate hypotheses on the courses of history and to test these hypotheses against other actions or additional data sets. Our vision is that this narrative-driven analysis of historical data can lead to the development of multiple-scale agent-based models, which can be simulated on a computer to generate ensembles of counterfactual histories that would deepen our understanding of how our actual history developed the way it did. The generation of such narratives, automatically and in a scalable way, will revolutionize the practice of history as a discipline, because historical knowledge, that is, the treasure of human experiences (the heritage of the world), will become something that machine-learning algorithms can inherit and use in smart cities to highlight and explain present ties and to illustrate potential future scenarios.

Digital Maps and Automatic Narratives for the Interactive Global Histories

  • CHEONG, Siew Ann;NANETTI, Andrea;FHILIPPOV, Mikhail
    • Asian review of World Histories / Vol. 4 No. 1 / pp.83-123 / 2016
  • We describe a vision of historical analysis at the world scale, through the digital assembly of historical sources into a cloud-based database, where machine-learning techniques can be used to summarize the database into a time-integrated actor-to-actor complex network. Using this time-integrated network as a template, we then apply the method of automatic narratives to discover key actors ('who'), key events ('what'), key periods ('when'), key locations ('where'), key motives ('why'), and key actions ('how') that can be presented as hypotheses to world historians. We show two test cases of how this method works. To accelerate the pace of knowledge discovery and verification, we describe how historians would interact with these automatic narratives through an online, map-based knowledge aggregator that learns how scholars filter information and eventually takes over this function, freeing historians for the more important tasks of verification and stitching together coherent storylines. Ultimately, multiple coherent storylines that are not necessarily compatible with each other can be discovered through human-computer interactions with the map-based knowledge aggregator.
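The time-integrated actor-to-actor network both abstracts describe can be reduced to a very small data-structure sketch: overlay dated (actor, actor, action) records, count actions per link, and surface the most active actors. The event tuples below are invented examples, not records from the Venice database:

```python
from collections import Counter, defaultdict

# (year, actor, actor, action) records, as if overlaid from several
# historical databases; these five tuples are illustrative only.
events = [
    (1204, "Venice", "Byzantium", "war"),
    (1261, "Venice", "Genoa", "war"),
    (1270, "Venice", "Genoa", "treaty"),
    (1291, "Venice", "Mamluks", "trade"),
    (1345, "Venice", "Genoa", "war"),
]

links = defaultdict(Counter)   # undirected link (a, b) -> action counts
activity = Counter()           # actor -> number of recorded actions
for year, a, b, action in events:
    links[frozenset((a, b))][action] += 1
    activity[a] += 1
    activity[b] += 1

# A "key actor" in the simplest sense: the most active node.
key_actor = activity.most_common(1)[0][0]
```

A time-resolved version would bucket the counters by period before ranking, which is how key periods and key events fall out of the same structure.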

Interactive Experience Room Using Infrared Sensors and User's Poses

  • Bang, Green;Yang, Jinsuk;Oh, Kyoungsu;Ko, Ilju
    • Journal of Information Processing Systems / Vol. 13 No. 4 / pp.876-892 / 2017
  • Virtual reality is a virtual space constructed by a computer that gives users the opportunity to indirectly experience situations they have not encountered in real life, through the realization of information for virtual environments. Various studies have been conducted on realizing virtual reality, in which the user interface is a major factor in maximizing the sense of immersion and usability. However, most existing methods have disadvantages: they are costly, or they limit the user's physical activity because special devices must be attached to the user's body. This paper proposes a new type of interface that enables users to apply their intentions and actions to the virtual space directly, without special devices, and test content using the new system is introduced. Users can interact with the virtual space by throwing an object into it; to detect this, moving-object detectors are built using infrared sensors. In addition, users can control the virtual space with their own postures. The method can heighten interest and concentration, increasing the sense of reality and immersion and maximizing the user's physical experience.

Primary Study for dialogue based on Ordering Chatbot

  • Kim, Ji-Ho;Park, JongWon;Moon, Ji-Bum;Lee, Yulim;Yoon, Andy Kyung-yong
    • Journal of Multimedia Information System / Vol. 5 No. 3 / pp.209-214 / 2018
  • Today is the era of artificial intelligence. With its development, machines have begun to take on various human characteristics. A chatbot is one instance of this interactive artificial intelligence: a computer program that can conduct natural conversations with people. Chatbots have typically conducted conversations in text, but the chatbot in this study evolves to perform commands based on speech recognition. For a chatbot to emulate human dialogue well, it must analyze each sentence correctly and extract an appropriate response. To accomplish this, the sentence is classified into three types of element: objects, actions, and preferences. This study shows how objects are analyzed and processed, and also demonstrates the possibility of evolving from an elementary model to an advanced intelligent system. Through this study, it is evaluated whether the speech-recognition-based chatbot improves order-processing time efficiency compared with a text-based chatbot. Once this is established, speech-recognition-based chatbots have the potential to automate customer service and reduce human effort.
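The object/action/preference split described above can be sketched as a simple keyword lookup over the recognized sentence; the vocabulary and the `classify` helper are hypothetical, since the paper does not publish its parser:

```python
# Hypothetical ordering vocabulary, one set per sentence-element type.
OBJECTS = {"coffee", "tea", "sandwich"}
ACTIONS = {"order", "cancel", "add"}
PREFERENCES = {"hot", "iced", "large", "small"}

def classify(sentence):
    """Assign each known word of an order sentence to its element type."""
    parsed = {"object": None, "action": None, "preference": None}
    for word in sentence.lower().split():
        if word in OBJECTS:
            parsed["object"] = word
        elif word in ACTIONS:
            parsed["action"] = word
        elif word in PREFERENCES:
            parsed["preference"] = word
    return parsed

result = classify("Order a large iced coffee")
```

A production system would need real parsing and would keep multiple preferences rather than the last one seen, but the three-way decomposition is the same.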

B-COV:Bio-inspired Virtual Interaction for 3D Articulated Robotic Arm for Post-stroke Rehabilitation during Pandemic of COVID-19

  • Allehaibi, Khalid Hamid Salman;Basori, Ahmad Hoirul;Albaqami, Nasser Nammas
    • International Journal of Computer Science & Network Security / Vol. 21 No. 2 / pp.110-119 / 2021
  • The coronavirus (COVID-19) is a contagious virus that has infected almost every part of the world. The pandemic forced many countries into lockdowns and stay-at-home policies to reduce the spread of the virus and the number of victims. Interactions between humans and robots are a popular subject of research worldwide. In medical robotics, the primary challenge is to implement natural interactions between robots and human users. Human communication consists of dynamic processes that involve joint attention and mutual engagement. Coordinated care involves agents sharing behaviours, events, interests, and contexts in the world over time. Because a robotic arm is an expensive and complicated system, robot simulators are widely used instead for rehabilitation purposes in medicine, and natural interaction is necessary for disabled persons to work with a robot simulator. This article proposes a low-cost rehabilitation system built around an arm-gesture tracking system based on a depth camera, which captures and interprets human gestures and uses them as interactive commands for a robot simulator to perform specific tasks on a 3D block. The results show that the proposed system can help patients control the rotation and movement of the 3D arm using their hands. Pilot testing with healthy subjects yielded encouraging results: they could synchronize their actions with the 3D robotic arm to perform several repetitive tasks, expending 19,920 J of energy (kg·m²·s⁻²). This average energy expenditure is moderate; we therefore relate it to rehabilitation performance as an initial stage, which can be improved further with additional repetitive exercise to speed up the recovery process.

스마트 TV 환경에서 정보 검색을 위한 사용자 프로파일 기반 필터링 방법 (A User Profile-based Filtering Method for Information Search in Smart TV Environment)

  • 신위살;오경진;조근식
    • 지능정보연구 / Vol. 18 No. 3 / pp.97-117 / 2012
  • Internet users use social network services and search the web while watching videos, and when they are interested in a product that appears in a video they look it up through a search engine. Research on video annotation has been conducted to support direct interaction between users and videos; when annotated videos are used in a smart TV environment, users can easily check information on a desired product through a link attached to the object. When users want to buy a product, they make the purchase decision based not only on retrieved product information but also on product reviews and the opinions of friends on social networks. Information from social networks is more trusted than other information and thus strongly influences purchase decisions. However, in current social network services, a request for opinions is delivered to all of a user's friends, and so many opinions come back that it is hard to identify the useful information among them. This paper proposes a filtering method that identifies, based on social network users' profiles, the friends who can provide useful information about a product. User profiles are built from Facebook user information and 'Like' information on Facebook pages. Product information in the profiles is represented semantically using the GoodRelations ontology and BestBuy data. When a user wants product information while watching a video, the information is delivered through the annotated URI. The system computes the semantic similarity between the annotated product, grounded in BestBuy data, and the user profiles of the viewer's social network friends, and ranks the friends by similarity value. The resulting ranking is used to identify the friends on the social network who can provide useful information. Facebook data were used with the participants' consent, and validity was verified by comparing the results produced by the system with results evaluated through participant interviews. The comparison experiments show that the proposed system is an effective way to obtain useful information for product purchase decisions.
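The ranking step above can be illustrated with plain cosine similarity over bag-of-interest vectors; the paper computes a richer ontology-based semantic similarity, and the profiles and product features below are invented:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse feature dicts."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Annotated product features and friends' profile features (hypothetical).
product = {"camera": 1, "electronics": 1, "photography": 1}
friends = {
    "alice": {"photography": 1, "camera": 1, "travel": 1},
    "bob": {"cooking": 1, "baking": 1},
}

# Rank friends by similarity to the product; the top entries are the
# friends most likely to give a useful opinion.
ranked = sorted(friends, key=lambda f: cosine(friends[f], product), reverse=True)
```

Swapping the similarity function for one that walks the GoodRelations class hierarchy would recover the paper's semantic variant without changing the ranking loop.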