• Title/Summary/Keyword: human-friendly robot

87 search results

Compliant Ultrasound Proximity Sensor for the Safe Operation of Human Friendly Robots Integrated with Tactile Sensing Capability

  • Cho, Il-Joo;Lee, Hyung-Kew;Chang, Sun-Il;Yoon, Euisik
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.1
    • /
    • pp.310-316
    • /
    • 2017
  • Robot proximity and tactile sensors can be categorized into two groups, grip sensors and safety sensors, which have different performance requirements. A safety sensor should have a long proximity range and a fast response in order to secure enough reaction time before colliding with ambient objects. As for the tactile sensing function, the safety sensor needs to be fast and compliant to mitigate the impact of a collision. In order to meet these requirements, we proposed and demonstrated a compliant integrated safety sensor suitable for human-friendly robots. An ultrasonic proximity sensor and a piezoelectric tactile sensor made of PVDF films were integrated in a compliant PDMS structure. The implemented sensor demonstrated a maximum proximity range of 35 cm. The directional tolerance for a 30 cm detection range was about ±15° from the normal axis. The integrated PVDF tactile sensor was able to detect various impacts of up to 20 N in a controlled experimental setup.
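The proximity measurement described above boils down to ultrasonic time-of-flight ranging. As an illustrative sketch (not the authors' implementation; the function name and speed-of-sound constant are assumptions), the echo delay maps to distance as follows, with the reported 35 cm maximum range used as a cutoff:

```python
SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air at roughly 20 degrees C
MAX_RANGE_M = 0.35           # maximum proximity range reported in the abstract

def echo_to_distance(time_of_flight_s):
    """Return the estimated obstacle distance in metres, or None if the
    echo falls outside the usable proximity range."""
    # The pulse travels to the obstacle and back, so divide by two.
    distance_m = SPEED_OF_SOUND_M_S * time_of_flight_s / 2.0
    return distance_m if distance_m <= MAX_RANGE_M else None
```

A 1 ms round trip, for instance, corresponds to an obstacle about 17 cm away, well inside the sensor's range.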

Speaker Detection and Recognition for a Welfare Robot

  • Sugisaka, Masanori;Fan, Xinjian
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2003.10a
    • /
    • pp.835-838
    • /
    • 2003
  • Computer vision and natural-language dialogue play an important role in friendly human-machine interfaces for service robots. In this paper we describe an integrated face detection and face recognition system for a welfare robot, which has also been combined with the robot's speech interface. Our approach to face detection is to combine a neural network (NN) and a genetic algorithm (GA): the NN serves as a face filter while the GA is used to search the image efficiently. When a face is detected, an embedded hidden Markov model (EHMM) is used to determine its identity. A real-time system has been created by combining the face detection and recognition techniques. When triggered by the speaker's voice commands, it takes an image from the camera, finds the face inside the image, and recognizes it. Experiments in an indoor environment with complex backgrounds showed that a recognition rate of more than 88% can be achieved.
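The NN-plus-GA search strategy in the abstract can be sketched as below. This toy version replaces the neural-network face filter with a stand-in scoring function peaked at a known face location, so all names, parameters, and the GA operators are illustrative assumptions only:

```python
import random

def face_filter_score(x, y, face=(40.0, 25.0)):
    # Stand-in for the NN face filter: higher score nearer the true face.
    return -((x - face[0]) ** 2 + (y - face[1]) ** 2)

def ga_search(width=64, height=48, pop=30, gens=40, seed=0):
    """Evolve candidate window positions toward the best filter response."""
    rng = random.Random(seed)
    population = [(rng.uniform(0, width), rng.uniform(0, height))
                  for _ in range(pop)]
    for _ in range(gens):
        # Keep the fitter half as parents (elitist selection).
        population.sort(key=lambda p: face_filter_score(*p), reverse=True)
        parents = population[:pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            (x1, y1), (x2, y2) = rng.sample(parents, 2)
            # Crossover (midpoint) plus Gaussian mutation.
            children.append((0.5 * (x1 + x2) + rng.gauss(0, 2),
                             0.5 * (y1 + y2) + rng.gauss(0, 2)))
        population = parents + children
    return max(population, key=lambda p: face_filter_score(*p))
```

In the real system the scoring function would be the trained face filter evaluated on image windows, which is exactly why the GA's sparse sampling pays off: the expensive filter runs on few candidates instead of every window.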

Oil Spill Skimmer using Magnetic Oil Adsorbent (자성유류흡착제를 사용한 수면유출기름 처리 스키머)

  • Soh, Dae-Wha;Soh, Hyun-Jun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.10a
    • /
    • pp.555-558
    • /
    • 2008
  • To tackle the essential tasks of marine resource development and environmental conservation head-on, by fusing those tasks with high electronic technologies, a skimmer robot was proposed: an oil-spill disaster prevention and disposal system that sequentially and circularly collects spilled oil using magnetic oil-adsorbent powder and fabrics mounted on an electronic barge robot, as a scheme for sustainable, environment-friendly development. Experiments with a demonstrator of the electronic barge robot verified that the skimmer system based on magnetic oil-adsorbent powder and fabrics is an effective and useful technique for collecting oil-spill samples. The barge-based electronic remote control also proved useful, allowing easy operation in the field to skim oil spills containing toxic crude-oil substances that are very harmful to humans. Therefore, the fusion technology proposed in this study, combining electronic and marine technologies, is a novel contribution to marine environmental conservation, environment-friendly disaster prevention, and their management techniques.

A Task Planning System of Steward Robot for Human-friendly Human-Robot Interaction (인간 친화적 로봇 상호작용을 위한 집사 로봇의 작업 관리 시스템)

  • Kim, Yong-Hwi;Lee, Hyong-Euk;Kim, Heon-Hui;Park, Kwang-Hyun;Bien, Zeung-Nam
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2007.02a
    • /
    • pp.228-234
    • /
    • 2007
  • ISH (Intelligent Sweet Home), under development at the Human-friendly Welfare Robot System Research Center at KAIST, is an intelligent living space that supports the daily lives of the elderly and people with disabilities through various service robots and human-machine interfaces (HMI). In ISH, home appliances connected through a home network, sensor devices that acquire environmental information, and an intelligent bed, wheelchair, and mobility-assistance robot provide a range of services that enable residents to live independently. From the perspective of the elderly and people with disabilities, however, not only the quantity of services but also their quality, that is, how easily and comfortably they can be operated, must be considered. For this reason, ISH introduces the concept of a steward robot as an efficient intermediary between the resident and the complex system. As one engineering approach to user convenience, this paper describes the task planning function of the steward robot. Using the task planning system, the steward robot interprets the user's high-level commands and controls each robot or controllable entity. The proposed system performs task planning based on the STRIPS (Stanford Research Institute Problem Solver) state representation and the Graphplan method. In addition, the concepts of world abstraction and subgoal planning are applied to increase planning speed. Finally, the effectiveness of the proposed system is verified with high-level commands based on scenarios defined in ISH.

Appearance Based Object Identification for Mobile Robot Localization in Intelligent Space with Distributed Vision Sensors

  • Jin, TaeSeok;Morioka, Kazuyuki;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.4 no.2
    • /
    • pp.165-171
    • /
    • 2004
  • Robots will be able to coexist with humans and support humans effectively in the near future. One of the most important aspects in the development of human-friendly robots is cooperation between humans and robots. In this paper, we propose a method for multi-object identification in order to achieve such a human-centered system and robot localization in Intelligent Space. Intelligent Space is a space in which many intelligent devices, such as computers and sensors, are distributed; it achieves human-centered services by accelerating the physical and psychological interaction between humans and intelligent devices. As an intelligent device of Intelligent Space, a color CCD camera module, which includes processing and networking parts, has been chosen. Intelligent Space requires functions for identifying and tracking multiple objects in order to provide appropriate services to users in a multi-camera environment. To achieve seamless tracking and location estimation, many camera modules are distributed, which causes errors in object identification among different camera modules. This paper describes an appearance-based object representation for the distributed vision system in Intelligent Space that achieves consistent labeling of all objects. We then discuss how to learn the object color appearance model and how to achieve multi-object tracking under occlusions.
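A common way to realize the kind of color appearance model the abstract describes is a quantized color histogram compared by histogram intersection, so that different camera modules assign the same label to the same object. The sketch below follows that common recipe; the bin count, metric, and names are illustrative assumptions, not necessarily the paper's exact representation:

```python
from collections import Counter

def color_histogram(pixels, bins=4):
    """Quantize (r, g, b) pixels (values 0-255) into a normalized histogram."""
    step = 256 // bins
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

def identify(observation, models):
    """Label an observed histogram with the best-matching known object."""
    return max(models, key=lambda name: histogram_intersection(observation, models[name]))
```

Because the histogram discards spatial layout, it stays comparable across cameras with different viewpoints, which is what consistent labeling in a distributed vision system requires.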

A Task Planning System of a Steward Robot with a State Partitioning Technique (상태 분할 기법을 이용한 집사 로봇의 작업 계획 시스템)

  • Kim, Yong-Hwi;Lee, Hyong-Euk;Kim, Heon-Hui;Park, Kwang-Hyun;Bien, Z. Zenn
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.1
    • /
    • pp.23-32
    • /
    • 2008
  • This paper presents a task planning system for a steward robot, which has been developed as an interactive intermediate agent between an end-user and a complex smart-home environment called the ISH (Intelligent Sweet Home) at KAIST (Korea Advanced Institute of Science and Technology). The ISH is a large-scale robotic environment with various assistive robots and home appliances for the independent living of the elderly and people with disabilities. In particular, as an approach to achieving human-friendly human-robot interaction, we aim at 'simplification of task commands' by the user. In this sense, a task planning system has been proposed to effectively generate a sequence of actions for coordinating subtasks of the target subsystems from a given high-level task command. Basically, the task planning is performed under the framework of the STRIPS (Stanford Research Institute Problem Solver) representation and the split planning method. In addition, we applied a state-partitioning technique to the backward split planning method to reduce computation time. By analyzing the obtained graph, the planning system decomposes the original planning problem into several independent sub-problems and then generates a proper sequence of actions. To show the effectiveness of the proposed system, we deal with a scenario of a planning problem in the ISH.
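The STRIPS representation the paper builds on (preconditions plus add/delete lists over a set of facts) can be sketched with a toy planner. Note the actual system uses backward split planning with state partitioning; this simplified forward breadth-first version, with a made-up household domain, is illustrative only:

```python
from collections import deque

class Action:
    """A STRIPS operator: preconditions, add list, delete list."""
    def __init__(self, name, pre, add, delete):
        self.name, self.pre = name, frozenset(pre)
        self.add, self.delete = frozenset(add), frozenset(delete)

    def applicable(self, state):
        return self.pre <= state

    def apply(self, state):
        return (state - self.delete) | self.add

def plan(initial, goal, actions):
    """Breadth-first search for an action sequence reaching the goal."""
    goal = frozenset(goal)
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for a in actions:
            if a.applicable(state):
                nxt = a.apply(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [a.name]))
    return None  # no plan exists
```

For example, with hypothetical actions `goto_kitchen` and `grab_cup`, a high-level goal like `{"has_cup"}` decomposes automatically into the two-step action sequence, which is the essence of interpreting a user's high-level command.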

Control and VR Navigation of a Gait Rehabilitation Robot with Upper and Lower Limbs Connections (상하지가 연동된 보행재활 로봇의 제어 및 VR 네비게이션)

  • Novandy, Bondhan;Yoon, Jung-Won
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.3
    • /
    • pp.315-322
    • /
    • 2009
  • This paper describes a control and navigation algorithm for a 6-DOF gait rehabilitation robot that allows a patient to navigate in virtual reality (VR) through upper- and lower-limb interactions. An important concern for gait rehabilitation robots is that the patient should not merely follow the robot's motions passively but should also be able to walk by his/her own intention. Thus, this robot updates the walking velocity automatically by estimating the interaction torques between the human and the upper-limb device and synchronizing the upper-limb device with the lower-limb device. In addition, the upper-limb device acts as a user-friendly input device for navigating in virtual reality: by pushing switches located on the right and left handles of the upper-limb device, a patient can perform turning motions during navigation. Experimental results with a healthy subject showed that rehabilitation training can be combined more effectively with virtual environments through upper- and lower-limb connections. The suggested navigation scheme will allow various effective training modes for gait rehabilitation robots.
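The automatic velocity update can be pictured as a simple admittance-like rule: the estimated interaction torque nudges the commanded gait speed up or down, clamped to safe limits. The gain and limits below are illustrative assumptions, not values from the paper:

```python
def update_walking_velocity(v, interaction_torque,
                            gain=0.05, v_min=0.0, v_max=1.2):
    """Return the new walking velocity (m/s) after one control step.

    A positive interaction torque (user pushing forward) speeds the
    gait up, a negative torque slows it down; the result is clamped
    to the safe operating range [v_min, v_max].
    """
    v_new = v + gain * interaction_torque
    return max(v_min, min(v_max, v_new))
```

Running this rule every control cycle lets the patient's own effort, sensed through the upper-limb device, drive the gait speed instead of a fixed treadmill profile.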

Rapid Development of a Humanoid Robot using Concurrent Implementation of CAD/CAM/CAE and RP (CAD/CAM/CAE/RP의 동시공학적 적용을 통한 휴머노이드 로봇의 쾌속 개발)

  • Park, Keun;Kim, Young-Seog;Kim, Chung-Seok;Park, Sung-Ho
    • Korean Journal of Computational Design and Engineering
    • /
    • v.12 no.1
    • /
    • pp.50-57
    • /
    • 2007
  • In recent years, much robotics research has focused on developing human-friendly robots, that is, humanoid biped robots. Research on humanoid robots spans various areas such as hardware development, control of biped locomotion, artificial intelligence, and human interaction. The present work concerns the hardware development of a mid-size humanoid robot, BONOBO, focusing on rapid development of the outer body parts through integrated application of CAD/CAM/CAE/RP. Most parts are designed three-dimensionally using 3D CAD and effectively connected with CAE analyses using both kinematic simulation and structural analysis. In order to reduce the lead time and investment cost of part development, rapid prototyping (RP) and CAM are selectively utilized for manufacturing body parts. These master parts are then replicated using the vacuum casting process, from which plastic parts can be obtained repeatedly. Through this integrated approach, the first prototype of BONOBO was successfully developed with relatively low time and investment costs.

Control of Mobile Robot Navigation Using Vision Sensor Data Fusion by Nonlinear Transformation (비선형 변환의 비젼센서 데이터융합을 이용한 이동로봇 주행제어)

  • Jin Tae-Seok;Lee Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.4
    • /
    • pp.304-313
    • /
    • 2005
  • The robots that will be needed in the near future are human-friendly robots that are able to coexist with humans and support humans effectively. To realize this, a robot needs to recognize its position and direction for intelligent performance in an unknown environment. Mobile robots may navigate by means of a number of monitoring systems, such as a sonar-sensing system or a visual-sensing system. Note that in conventional fusion schemes the measurement depends only on the current data sets; therefore, more sensors are required to measure a certain physical parameter or to improve the accuracy of the measurement. In this research, however, instead of adding more sensors to the system, the temporal sequence of the data sets is stored and utilized for accurate measurement. As a general approach to sensor fusion, a UT-based sensor fusion (UTSF) scheme using the Unscented Transformation (UT) is proposed for either joint or disjoint data structures and applied to landmark identification for mobile robot navigation. The theoretical basis is illustrated by examples, and the effectiveness is proved through simulations and experiments. The newly proposed UTSF scheme is applied to the navigation of a mobile robot in both unstructured and structured environments, and its performance is verified by computer simulation and experiment.
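The Unscented Transformation at the core of the UTSF scheme propagates a mean and covariance through a nonlinear function via deterministically chosen sigma points rather than linearization. A one-dimensional sketch, using standard scaling parameters (alpha, beta, kappa) that are my assumptions rather than the paper's choices:

```python
import math

def unscented_transform_1d(mean, var, f, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate a 1-D Gaussian (mean, var) through nonlinear f via the UT."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    sigma = [mean, mean + spread, mean - spread]          # sigma points
    wm = [lam / (n + lam)] + [1.0 / (2 * (n + lam))] * 2  # mean weights
    wc = list(wm)
    wc[0] += 1.0 - alpha ** 2 + beta                      # covariance weight fix
    y = [f(x) for x in sigma]
    y_mean = sum(w * yi for w, yi in zip(wm, y))
    y_var = sum(w * (yi - y_mean) ** 2 for w, yi in zip(wc, y))
    return y_mean, y_var
```

For a linear function the UT reproduces the exact transformed mean and variance; its advantage appears on nonlinear measurement models, where it avoids the Jacobians a linearized fusion scheme would need.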

A Ubiquitous Interface System for Mobile Robot Control in Indoor Environment (실내 환경에서의 이동로봇 제어를 위한 유비쿼터스 인터페이스 시스템)

  • Ahn Hyunsik;Song Jae-Sung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.1
    • /
    • pp.66-71
    • /
    • 2006
  • Recently, there has been much interest in ubiquitous environments for robots and URC (Ubiquitous Robotic Companion). In this paper, a practical ubiquitous interface system for controlling mobile robots in indoor environments is proposed. The interface system is designed as a manager-agent model, including a PC manager, a mobile manager, and robot agents, so that it can be accessed from any network. In the system, the PC manager provides a 3D virtual environment and shows real images for a human-friendly interface, and shares the robot's computational load, such as path planning and management of geographical information. It also contains a Hybrid Format Manager (HFM) that transforms image, position, and control data and interchanges them between the robots and the managers. The mobile manager, running under the minimal computing conditions of handsets, provides a mobile interface environment that displays real images and the position of the robot and allows the robot to be controlled by pressing keys. Experimental results showed that the proposed system was able to control robots using wired and wireless LAN and the mobile Internet.
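The manager-agent model can be sketched as a simple command relay in which a manager routes commands from any front end to registered robot agents. Class and method names below are illustrative assumptions, not the paper's API:

```python
class RobotAgent:
    """Stand-in for a robot agent that executes relayed commands."""
    def __init__(self, name):
        self.name = name
        self.log = []  # commands received so far

    def execute(self, command):
        self.log.append(command)
        return f"{self.name}: executed {command}"

class Manager:
    """Plays the role of the PC or mobile manager: routes commands to agents."""
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def dispatch(self, robot_name, command):
        if robot_name not in self.agents:
            raise KeyError(f"unknown robot agent: {robot_name}")
        return self.agents[robot_name].execute(command)
```

In the real system the dispatch step would also pass through the HFM to convert data formats between the network front ends and the robots; that conversion layer is omitted here.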