• Title/Abstract/Keyword: Robot Sensor

Search results: 1,590 (processing time: 0.028 s)

2D Map generation Using Omnidirectional Image sensor and Stereo Vision for MobileRobot MAIRO (자율이동로봇MAIRO의 전방향 이미지센서와 스테레오 비전 시스템을 이용한 2차원 지도 생성)

  • Kim, Kyung-Ho;Lee, Hyung-Kyu;Son, Young-Jun;Song, Jae-Keun
    • Proceedings of the KIEE Conference
    • /
• Proceedings of the 2002 KIEE Joint Autumn Conference, Information and Control Section
    • /
    • pp.495-500
    • /
    • 2002
  • Recently, the service robot industry has stood out as an up-and-coming industry of the next generation. In particular, there has been much research on self-steering movement (SSM). To implement SSM, a robot must effectively recognize its surroundings, detect objects, and build a map of the environment with its sensors. Hence, many robots carry sonar, infrared, and similar sensors. However, these sensors provide only the distance between the robot and an object, and their resolution is poor. In this paper, we introduce a new algorithm that recognizes objects around the robot and builds a two-dimensional map of the surroundings using an omnidirectional vision camera and two stereo vision cameras.


A User Interface for Vision Sensor based Indirect Teaching of a Robotic Manipulator (시각 센서 기반의 다 관절 매니퓰레이터 간접교시를 위한 유저 인터페이스 설계)

  • Kim, Tae-Woo;Lee, Hoo-Man;Kim, Joong-Bae
    • Journal of Institute of Control, Robotics and Systems
    • /
• Vol. 19, No. 10
    • /
    • pp.921-927
    • /
    • 2013
  • This paper presents a user interface for vision-based indirect teaching of a robotic manipulator with Kinect and IMU (Inertial Measurement Unit) sensors. The user interface system is designed to control the manipulator more easily in joint space, Cartesian space, and the tool frame. We use the skeleton data of the user from the Kinect and wrist-mounted IMU sensors to calculate the user's joint angles and wrist movement for robot control. The proposed interface allows the user to teach the manipulator without a pre-programming process, which reduces robot teaching time and ultimately enables increased productivity. Simulation and experimental results are presented to verify the performance of the robot control and interface system.
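As a rough illustration of how skeleton data can yield joint angles, the angle at a joint can be computed from three tracked 3-D points (e.g. shoulder, elbow, wrist). This is a minimal sketch under that assumption; the point names are illustrative and not the paper's actual pipeline:

```python
import math

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by 3-D points a-b-c,
    e.g. shoulder-elbow-wrist from one skeleton frame."""
    u = [a[i] - b[i] for i in range(3)]   # vector b -> a
    v = [c[i] - b[i] for i in range(3)]   # vector b -> c
    dot = sum(u[i] * v[i] for i in range(3))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

# A fully extended arm (collinear points) gives pi, i.e. 180 degrees.
angle = joint_angle((0, 0, 0), (1, 0, 0), (2, 0, 0))
```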

Implementation and Control of Crack Tracking Robot Using Force Control : Crack Detection by Laser and Camera Sensor Using Neural Network (힘제어 기반의 틈새 추종 로봇의 제작 및 제어에 관한 연구 : Part Ⅰ. 신경회로망을 이용한 레이저와 카메라에 의한 틈새 검출 및 로봇 제작)

  • Cho Hyun Taek;Jung Seul
    • Journal of Institute of Control, Robotics and Systems
    • /
• Vol. 11, No. 4
    • /
    • pp.290-296
    • /
    • 2005
  • This paper presents the implementation of a crack-tracking mobile robot built for tracking cracks on pavement. To track a crack, it must first be detected by the laser and camera sensors: the laser sensor projects a beam on the pavement to reveal discontinuities on the surface, and the camera captures the image to find the crack position. The robot is then commanded to follow the crack. To detect the crack position correctly, a neural network is used to minimize the positional errors of the captured crack position obtained by the transformation from 2-dimensional image coordinates to 3-dimensional coordinates.

Design and Implementation of Paddle Type End of Arm Tool for Rescue Robot (인명 구조용 로봇의 패들형 말단 장치 설계 및 구현)

  • Kim, Hyeonjung;Lee, Ikho;An, Jinung
    • The Journal of Korea Robotics Society
    • /
• Vol. 13, No. 4
    • /
    • pp.205-212
    • /
    • 2018
  • This paper deals with a paddle-type end-of-arm tool that allows a rescue robot to take the place of rescue workers in dangerous environments such as fires, earthquakes, and national disaster and defense situations. Mounted on the dual-arm manipulator of the rescue robot, it safely lifts an injured person. It consists of the paddle for lifting the person, sensors for detecting insertion of the person onto the paddle, a sensor for measuring the tilting angle of the paddle, and a mechanical compliance part for preventing incidental injuries. The electronics comprise a DAQ module to acquire the sensor data, a control module to process the sensor data and manage errors, and a communication module to transmit the sensor data. After optimally designing the mechanical and electronic parts, we built the paddle-type end-of-arm tool and evaluated its performance using specially designed jigs. The developed tool will be applied to the rescue robot for performance verification through field testing.

Development of a Monitoring Module for a Steel Bridge-repainting Robot Using a Vision Sensor (비전센서를 이용한 강교량 재도장 로봇의 주행 모니터링 모듈 개발)

  • Seo, Myoung Kook;Lee, Ho Yeon;Jang, Dong Wook;Chang, Byoung Ha
    • Journal of Drive and Control
    • /
• Vol. 19, No. 1
    • /
    • pp.1-7
    • /
    • 2022
  • Recently, a re-painting robot was developed to semi-automatically conduct blasting work in bridge spaces to improve work productivity and worker safety. In this study, a vision sensor-based monitoring module was developed to move the re-painting robot automatically along its path. The monitoring module provides direction information to the robot by analyzing the boundary between the painted surface and the bare metal surface. To measure images stably in unstable environments, various techniques for improving image visibility were applied. The driving performance was then verified in a similar environment.
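One minimal way to turn a painted/metal boundary into direction information is to find the strongest intensity step in each image row and steer on the average offset from a desired column. This sketch assumes a plain grayscale row representation and simple step-edge boundaries; it is not the paper's actual algorithm:

```python
def boundary_column(row):
    """Index of the strongest intensity step in one grayscale image row,
    taken here as the painted/metal boundary (illustrative assumption)."""
    best_i, best_g = 0, 0.0
    for i in range(len(row) - 1):
        g = abs(row[i + 1] - row[i])   # horizontal gradient magnitude
        if g > best_g:
            best_i, best_g = i, g
    return best_i

def steering_offset(rows, target_col):
    """Average boundary column minus the desired column: a signed
    error a robot controller could steer on."""
    cols = [boundary_column(r) for r in rows]
    return sum(cols) / len(cols) - target_col
```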

Target Detection of Mobile Robot by Vision (시각 정보에 의한 이동 로봇의 대상 인식)

  • 변정민;김종수;김성주;전홍태
    • Proceedings of the IEEK Conference
    • /
• Proceedings of the 2002 IEEK Summer Conference (3)
    • /
    • pp.29-32
    • /
    • 2002
  • This paper suggests a target detection algorithm for mobile robot control using color and shape recognition. In many cases, an ultrasonic sensor (USS) is used in a mobile robot system to measure the distance to obstacles, but a USS alone imposes many restrictions. We therefore attached a CCD camera to the mobile robot to overcome them. Given visual information, the robot system is able to accomplish more complex missions successfully. With the acquired vision data, the robot searches for the target by color and recognizes its shape.

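A color-based target search of the kind described can be sketched as a per-pixel color predicate plus a centroid over matching pixels; the predicate thresholds and image layout below are illustrative assumptions, not the paper's method:

```python
def find_color_target(image, is_target):
    """Centroid (row, col) of pixels matching a color predicate,
    or None if no pixel matches. `image` is a list of rows of
    (r, g, b) tuples."""
    rows, cols, n = 0.0, 0.0, 0
    for y, row in enumerate(image):
        for x, px in enumerate(row):
            if is_target(px):
                rows += y
                cols += x
                n += 1
    return (rows / n, cols / n) if n else None

# Hypothetical "red target" predicate on raw RGB values.
red = lambda p: p[0] > 150 and p[1] < 80 and p[2] < 80

img = [[(0, 0, 0)] * 4 for _ in range(4)]
img[2][3] = (200, 10, 10)
centroid = find_color_target(img, red)   # (2.0, 3.0)
```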

RSSI based Intelligent Indoor Location Estimation Robot using Wireless Sensor Network technology (무선 센서네트워크 기술을 활용한 RSSI기반의 지능형 실내위치추정 로봇)

  • Seo, Won-Kyo;Jang, Seong-Gyun;Shin, Kwang-Sik;Chung, Wan-Young
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
• Proceedings of the 2007 Spring Conference of the Korean Institute of Maritime Information and Communication Sciences
    • /
    • pp.375-378
    • /
    • 2007
  • This paper describes an intelligent indoor location estimation robot. It carries an indoor location estimation function based on RSSI and wireless sensor networks. A Spartan III FPGA (Xilinx, U.S.A.) is used as the main control device in the mobile robot, and the current direction data is collected by the indoor location estimation system. The data is transferred through Zigbee/IEEE 802.15.4 wireless communication to the sensor network node attached to the mobile robot. Combining the received data with that of a magnetic compass, the node senses the direction the robot is heading, and the robot moves to its destination. The intelligent indoor location estimation robot can thus move efficiently and actively to a user-appointed position on flat, obstacle-free ground.

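RSSI-based localization of this kind typically converts signal strength to distance with a log-distance path-loss model and then trilaterates from the known positions of the motes. The parameters `p0` (RSSI at 1 m) and `n` (path-loss exponent) below are environment-dependent assumptions, not values from the paper:

```python
import math

def rssi_to_distance(rssi, p0=-40.0, n=2.0):
    """Log-distance path-loss model: distance in metres from an
    RSSI reading (p0 and n must be calibrated per environment)."""
    return 10 ** ((p0 - rssi) / (10 * n))

def trilaterate(p1, p2, p3, d1, d2, d3):
    """2-D position from three anchors and three range estimates,
    by subtracting the first circle equation from the other two
    and solving the resulting 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

In practice RSSI is noisy, so readings are usually averaged and the position filtered, but the sketch shows the geometric core.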

Design and Implementation of Robot-Based Alarm System of Emergency Situation Due to Falling of The Elderly (고령자 낙상에 의한 응급 상황의 4족 로봇 기반 알리미 시스템 설계 및 구현)

  • Park, ChulHo;Lim, DongHa;Kim, Nam Ho;Yu, YunSeop
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
• Vol. 17, No. 4
    • /
    • pp.781-788
    • /
    • 2013
  • In this paper, we introduce a quadruped robot-based alarm system for monitoring emergency situations due to falls in the elderly. The quadruped robot includes an FPGA (Field Programmable Gate Array) board applying a red-color tracking algorithm. To detect a fall, a sensor node is worn on the chest; the accelerations and angular velocities it measures are transferred to the quadruped robot, and an emergency signal is transmitted to the manager if a fall is detected. The manager controls the robot and judges the situation by monitoring the real-time images transmitted from it; if the manager decides it is an emergency, he calls 119. Using only the sensor nodes, the fall detection system achieved a sensitivity of 100% and a specificity of 98.98%. Using the combination of the fall detection system and the portable camera (robot), emergency situations were detected with 100% accuracy.
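A common way to detect a fall from chest-worn accelerometer data is to look for a near-free-fall sample followed shortly by a large impact. The thresholds and window below are illustrative assumptions rather than the paper's actual detector:

```python
import math

def accel_magnitude(ax, ay, az):
    """Magnitude of the 3-axis acceleration vector."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples, g=9.81, free=0.4, impact=2.5, window=10):
    """Flag a fall when a near-free-fall sample (|a| < free*g) is
    followed within `window` samples by an impact (|a| > impact*g).
    `samples` is a sequence of (ax, ay, az) readings."""
    last_free = None
    for i, (ax, ay, az) in enumerate(samples):
        m = accel_magnitude(ax, ay, az)
        if m < free * g:
            last_free = i                # candidate free-fall phase
        elif m > impact * g and last_free is not None \
                and i - last_free <= window:
            return True                  # impact shortly after free fall
    return False
```

A deployed system would also check the angular velocities and posture after impact to reduce false alarms.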

Tip-over Terrain Detection Method based on the Support Inscribed Circle of a Mobile Robot (지지내접원을 이용한 이동 로봇의 전복 지형 검출 기법)

  • Lee, Sungmin;Park, Jungkil;Park, Jaebyung
    • Journal of Institute of Control, Robotics and Systems
    • /
• Vol. 20, No. 10
    • /
    • pp.1057-1062
    • /
    • 2014
  • This paper proposes a tip-over detection method for a mobile robot using a support inscribed circle, defined as the inscribed circle of the support polygon. The support polygon, defined by the contact points between the robot and the terrain, is often used to analyze tip-over. For a robot moving on uneven terrain, if the intersection between the line extended from the robot's COG along gravity and the terrain lies inside the support polygon, tip-over will not occur; if the intersection lies outside, it will. The terrain is detected by an RGB-D sensor and locally modeled as a plane, so the normal vector can be obtained at each point on the terrain. The support polygon and the terrain's normal vector are used to detect tip-over. However, tip-over cannot be predicted in advance, since the support polygon depends on the orientation of the robot. Thus, the support polygon is approximated by its inscribed circle so that tip-over can be detected regardless of the robot's orientation. To verify the effectiveness of the proposed method, experiments were carried out using a 4-wheeled robot, the ERP-42, with an Xtion RGB-D sensor.
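The test described above can be sketched as follows: project the COG along gravity onto the local terrain plane and check whether the projection falls inside the support inscribed circle. This is a minimal sketch under the paper's planar-terrain assumption; the function names are our own:

```python
import math

def gravity_projection(cog, plane_point, normal, g_dir=(0.0, 0.0, -1.0)):
    """Intersection of the line cog + t * g_dir with the terrain
    plane given by a point on it and its normal vector."""
    denom = sum(normal[i] * g_dir[i] for i in range(3))
    t = sum(normal[i] * (plane_point[i] - cog[i]) for i in range(3)) / denom
    return tuple(cog[i] + t * g_dir[i] for i in range(3))

def tip_over(cog, circle_center, radius, normal):
    """Predict tip-over when the gravity projection of the COG falls
    outside the support inscribed circle (center + radius)."""
    p = gravity_projection(cog, circle_center, normal)
    return math.dist(p, circle_center) > radius
```

Because the circle is independent of heading, the same check works for any robot orientation, which is exactly the property the inscribed-circle approximation buys.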

Implementation of Underwater Entertainment Robots Based on Ubiquitous Sensor Networks (유비쿼터스 센서 네트워크에 기반한 엔터테인먼트용 수중 로봇의 구현)

  • Shin, Dae-Jung;Na, Seung-You;Kim, Jin-Young;Song, Min-Gyu
    • The KIPS Transactions:PartA
    • /
• Vol. 16A, No. 4
    • /
    • pp.255-262
    • /
    • 2009
  • We present an autonomous entertainment dolphin robot system based on ubiquitous sensor networks (USN). Generally, USN and GPS cannot be applied to underwater biomimetic robots, but the entertainment dolphin robot presented in this paper operates on the water surface rather than underwater. Navigation of the robot in a given area is based on GPS data and position information acquired from deployed USN motes, with emphasis on user interaction. The body structure, sensors and actuators, governing microcontroller boards, and swimming and interaction features are described for a typical entertainment dolphin robot. Mouth-opening, a tail splash, or a water blow through the spout hole are typical interaction responses when the touch sensors on the body detect a user's demand. The dolphin robots turn towards people who ask to interact with them while swimming autonomously. The functions relevant to human-robot interaction as well as robot movement, such as path control and obstacle detection and avoidance, are managed by microcontrollers on the robot for autonomy. Distance errors are calibrated periodically using the known positions of the deployed USN motes.