• Title/Summary/Keyword: robot systems


Recognizing User Engagement and Intentions based on the Annotations of an Interaction Video (상호작용 영상 주석 기반 사용자 참여도 및 의도 인식)

  • Jang, Minsu;Park, Cheonshu;Lee, Dae-Ha;Kim, Jaehong;Cho, Young-Jo
    • Journal of Institute of Control, Robotics and Systems / v.20 no.6 / pp.612-618 / 2014
  • A pattern classifier-based approach for recognizing the internal states of human participants in interactions is presented along with its experimental results. The approach includes a step for collecting video recordings of human-human or human-robot interactions and subsequently analyzing the videos based on human-coded annotations. The annotations include social signals directly observed in the video recordings and the internal states of human participants indirectly inferred from those observed social signals. A pattern classifier is then trained on the annotation data and tested. In our experiments on human-robot interaction, 7 video recordings were collected and annotated with 20 social signals and 7 internal states. Using a C4.5-based decision tree classifier, the experiments yielded recall rates of 84.83% for interaction engagement, 93% for concentration intention, and 81% for task comprehension level.
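
As an illustration of the kind of classifier described above, the following is a minimal sketch of training a decision tree on annotated social signals; scikit-learn's entropy criterion stands in for C4.5, and the feature layout and data are hypothetical, not taken from the paper.

```python
# Minimal sketch: decision-tree classification of annotated social signals.
# Feature/label layout and data are hypothetical; scikit-learn's entropy
# criterion is used as a stand-in for the paper's C4.5 classifier.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Each row: binary indicators for 20 annotated social signals (gaze, nodding,
# smiling, ...); each label: an annotated internal state such as engaged / not.
X = np.random.randint(0, 2, size=(200, 20))
y = np.random.randint(0, 2, size=200)

clf = DecisionTreeClassifier(criterion="entropy", max_depth=5)
scores = cross_val_score(clf, X, y, cv=5, scoring="recall")
print("mean recall:", scores.mean())
```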

Computational Model of a Mirror Neuron System for Intent Recognition through Imitative Learning of Objective-directed Action (목적성 행동 모방학습을 통한 의도 인식을 위한 거울뉴런 시스템 계산 모델)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.20 no.6 / pp.606-611 / 2014
  • The understanding of another's behavior is a fundamental cognitive ability for primates, including humans. Recent neurophysiological studies suggest that this ability relies on a direct matching from visual observation onto an individual's own motor repertoire. Mirror neurons are known as the core regions for this matching and are treated as the functional basis of intent recognition through imitative learning of an observed, goal-directed action acquired from visual information. In this paper, we review previous work on modeling the function and mechanisms of mirror neurons and propose a computational model of a mirror neuron system that can be used in human-robot interaction environments. The major focus of the computational model is the reproduction of an individual's motor repertoire with different embodiments. The model aims at a continuous process that combines sensory evidence, prior task knowledge, and a goal-directed matching of action observation and execution. We also propose a biologically plausible equation model.
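
The "combine sensory evidence with prior task knowledge" step can be illustrated as a Bayesian update over candidate intents, scored by how well the observed motion matches the robot's own simulated motor repertoire. This is only a sketch under those assumptions; the intent names, trajectories, and similarity model are hypothetical, not the paper's equations.

```python
# Sketch: Bayesian intent recognition by direct matching of an observed motion
# against the robot's own simulated motor repertoire. All values hypothetical.
import numpy as np

intents = ["grasp-cup", "push-cup", "point-at-cup"]
prior = np.array([0.5, 0.3, 0.2])                 # prior task knowledge

def likelihood(observed, simulated):
    # Gaussian similarity between the observed trajectory and the robot's own
    # simulated execution of a candidate intent (direct matching).
    err = np.linalg.norm(observed - simulated, axis=-1).sum()
    return np.exp(-0.5 * err ** 2)

observed = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.5]])     # toy 2-D trajectory
repertoire = {                                                # simulated actions
    "grasp-cup":    np.array([[0.0, 0.0], [0.1, 0.25], [0.2, 0.55]]),
    "push-cup":     np.array([[0.0, 0.0], [0.3, 0.0],  [0.6, 0.0]]),
    "point-at-cup": np.array([[0.0, 0.0], [0.0, 0.3],  [0.0, 0.6]]),
}

post = np.array([prior[i] * likelihood(observed, repertoire[k])
                 for i, k in enumerate(intents)])
post /= post.sum()
print(dict(zip(intents, post.round(3))))
```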

Fuzzy Distance Estimation for a Fish Robot

  • Shin, Daejung;Na, Seung-You;Kim, Jin-Young
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.4 / pp.316-321 / 2005
  • We designed and implemented fish robots for various purposes such as autonomous navigation, maneuverability control, posture balancing, and improvement of quick turns in a tank of 120 x 120 x 180 cm. The fish robots typically measure 30-50 cm in length, 15-25 cm in width, and 10-20 cm in height. The ability to turn quickly and smoothly is essential to avoid collisions with obstacles or the walls of the water pool at close distance. Infrared distance sensors are used to detect obstacles, magneto-resistive sensors provide heading information, and a two-axis accelerometer is mounted to compensate the output of the direction sensors. Because the head swings as the tail fin moves, the output of an infrared distance sensor contains a large amount of noise around the true distance. Using the information from the accelerometer and e-compass, much improved distance data can be obtained by fuzzy logic-based estimation. Successful swimming and smooth turns without collision demonstrate the effectiveness of the distance estimation.
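
A minimal sketch of the fuzzy estimation idea, assuming the head-swing angle is available from the e-compass/accelerometer: the larger the swing, the less the raw IR reading is trusted relative to the running estimate. The membership functions and constants below are hypothetical, not the paper's rule base.

```python
# Sketch: fuzzy-weighted distance filtering keyed on the head-swing angle.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def estimate_distance(raw_ir, prev_est, swing_deg):
    # Fuzzy sets over the absolute head-swing angle in degrees (hypothetical).
    small  = tri(abs(swing_deg), -1, 0, 10)
    medium = tri(abs(swing_deg),  5, 15, 25)
    large  = tri(abs(swing_deg), 20, 40, 60)
    # Rule base: small swing -> trust the sensor; large swing -> trust history.
    w_sensor = small * 1.0 + medium * 0.5 + large * 0.1
    w_prev   = small * 0.0 + medium * 0.5 + large * 0.9
    return (w_sensor * raw_ir + w_prev * prev_est) / (w_sensor + w_prev + 1e-9)

print(estimate_distance(raw_ir=42.0, prev_est=55.0, swing_deg=30.0))
```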

Intelligent Balancing Control of Inverted Pendulum on a ROBOKER Arm Using Visual Information (영상 정보를 이용한 ROBOKER 팔 위의 역진자 시스템의 지능 밸런싱 제어 구현)

  • Kim, Jeong-Seop;Jung, Seul
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.5 / pp.595-601 / 2011
  • This paper presents balancing control of an inverted pendulum on the ROBOKER arm using visual information. The angle of the inverted pendulum placed on the robot arm is detected by a stereo camera, and the detected angle is used as the feedback and tracking error for the controller; the overall closed loop thus forms a visual servoing control task. To improve control performance, a neural network is introduced to compensate for uncertainties. The learning algorithm of the radial basis function (RBF) network is performed by a digital signal controller designed to process floating-point data and embedded on a field programmable gate array (FPGA) chip. Experimental studies are conducted to confirm the performance of the overall system implementation.
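
A minimal sketch of an RBF-network compensator layered on a PD visual-servoing loop, assuming the pendulum angle is supplied by the stereo-vision module; the gains, centers, and adaptation rule are hypothetical rather than the paper's FPGA implementation.

```python
# Sketch: RBF compensation added to a PD controller for pendulum balancing.
import numpy as np

centers = np.linspace(-0.3, 0.3, 9)       # RBF centers over the angle error (rad)
sigma, weights, eta = 0.1, np.zeros(9), 0.05

def rbf(x):
    return np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))

def control(angle_err, angle_err_dot):
    u_pd = 25.0 * angle_err + 3.0 * angle_err_dot     # baseline PD term
    phi = rbf(angle_err)
    u_nn = weights @ phi                              # learned compensation
    weights[:] += eta * angle_err * phi               # error-driven weight update
    return u_pd + u_nn

print(control(angle_err=0.05, angle_err_dot=-0.02))
```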

Hybrid Silhouette Extraction Using Color and Gradient Informations (색상 및 기울기 정보를 이용한 인간 실루엣 추출)

  • Joo, Young-Hoon;So, Jea-Yun
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.7 / pp.913-918 / 2007
  • Human motion analysis is an important research subject in human-robot interaction (HRI). Before analyzing human motion, however, the silhouette of the human body must be extracted from sequential images obtained by a CCD camera. An intelligent robot system requires a more robust silhouette extraction method because it suffers from internal vibration and low resolution. In this paper, we discuss a hybrid silhouette extraction method for detecting and tracking human motion. The proposed method combines and optimizes temporal and spatial gradient information. We also propose compensation methods so that silhouette information is not lost in poor-quality images. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
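
One way the hybrid idea can be sketched is by combining a temporal gradient (frame difference) with a spatial gradient (Sobel edges) using OpenCV; the thresholds and combination rule here are hypothetical, not the paper's optimized scheme.

```python
# Sketch: fusing a temporal gradient with a spatial gradient into a silhouette mask.
import cv2
import numpy as np

def silhouette(prev_gray, curr_gray):
    temporal = cv2.absdiff(curr_gray, prev_gray)                 # motion cue
    gx = cv2.Sobel(curr_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(curr_gray, cv2.CV_32F, 0, 1, ksize=3)
    spatial = cv2.convertScaleAbs(cv2.magnitude(gx, gy))         # edge cue
    mask = np.where((temporal > 20) & (spatial > 40), 255, 0).astype(np.uint8)
    # Morphological closing fills small holes left by weak gradients.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

prev = np.random.randint(0, 255, (240, 320), np.uint8)
curr = np.random.randint(0, 255, (240, 320), np.uint8)
print(silhouette(prev, curr).shape)
```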

Compliance Analysis for Effective handling of Peg-In/Out-Hole Tasks Using Robot Hands (로봇 손을 이용한 팩의 조립 및 분해 작업을 효율적으로 수행하기 위한 컴플라이언스 해석)

  • Kim, Byoung-Ho;Yi, Byung-Ju;Suh, Il-Hong;Oh, Sang-Rok
    • Journal of Institute of Control, Robotics and Systems / v.6 no.9 / pp.777-785 / 2000
  • This paper provides a guideline for determining the compliance characteristics and the proper location of the compliance center in typical peg-in-hole and peg-out-hole tasks using robot hands. We first observe that some of the coupling stiffness elements cannot be planned arbitrarily. The given peg-in/out-hole tasks are classified into two contact styles, and for each case we analyze the conditions on the operational stiffness matrix that achieve the given peg-in/out-hole task effectively. It is concluded that the location of the compliance center on the peg and the coupling stiffness element between the translational and rotational directions play important roles in successful peg-in/out-hole tasks. The analytic results are verified through simulations.
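
The role of the compliance center can be illustrated with a small planar example: offsetting the compliance center along the peg introduces coupling terms between translation and rotation in the operational stiffness matrix, so a displacement at the peg tip produces both a force and a moment. The stiffness values and offset below are hypothetical, not the paper's analysis.

```python
# Sketch: planar operational stiffness with a shifted compliance center.
# wrench = K @ displacement; off-diagonal terms couple translation and rotation.
import numpy as np

kx, ky, ktheta = 200.0, 200.0, 50.0     # translational / rotational stiffness
d = 0.05                                # compliance-center offset along the peg (m)

K = np.array([[kx,     0.0,      0.0],
              [0.0,    ky,      -ky * d],
              [0.0,   -ky * d,   ktheta + ky * d ** 2]])

dx = np.array([0.001, 0.002, 0.01])     # [m, m, rad] displacement at the peg tip
wrench = K @ dx                         # [Fx, Fy, Mz] restoring wrench
print(wrench)
```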


A Stereo Camera Based Method of Plane Detection for Path Finding of Walking Robot (보행로봇의 이동경로 인식을 위한 스테레오카메라 기반의 평면영역 추출방법)

  • Kang, Dong-Joong
    • Journal of Institute of Control, Robotics and Systems / v.14 no.3 / pp.236-241 / 2008
  • This paper presents a method to recognize the plane regions on which walking robots can move. When an autonomous agent with a stereo camera or laser scanning sensor operates in an unknown 3D environment, it has to detect plane regions in order to decide the moving direction and perform the given tasks. In this paper, we propose a very fast method for plane detection using the normal vector of a triangle defined by 3 vertices on a small circular region. To reduce the effect of noise and outliers, the triangle is rotated about the center of the circular region, generating a series of triangles with different normal vectors based on different triples of points on the boundary of the circular region. The normal vectors of these triangles are normalized, and the median direction of the normals is used to test the planarity of the circular region. The method is very fast, and we demonstrate the performance of the algorithm on real range data obtained from a stereo camera system.
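
A minimal sketch of the rotating-triangle planarity test, assuming the range data comes as a dense XYZ image; the sampling pattern and angle threshold are hypothetical choices for illustration.

```python
# Sketch: planarity test from the median normal of triangles rotated on a circle.
import numpy as np

def circle_points(xyz, cx, cy, r, n=12):
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    us = (cx + r * np.cos(angles)).astype(int)
    vs = (cy + r * np.sin(angles)).astype(int)
    return xyz[vs, us]                              # 3-D points on the circle boundary

def is_planar(xyz, cx, cy, r, angle_thresh_deg=10.0):
    pts = circle_points(xyz, cx, cy, r)
    n = len(pts)
    normals = []
    for i in range(n):                              # triangles rotated around the circle
        p0, p1, p2 = pts[i], pts[(i + n // 3) % n], pts[(i + 2 * n // 3) % n]
        v = np.cross(p1 - p0, p2 - p0)
        normals.append(v / (np.linalg.norm(v) + 1e-9))
    normals = np.array(normals)
    median_n = np.median(normals, axis=0)
    median_n /= np.linalg.norm(median_n) + 1e-9
    # Planar if every triangle normal stays close to the median direction.
    cosines = np.clip(normals @ median_n, -1.0, 1.0)
    return np.degrees(np.arccos(cosines)).max() < angle_thresh_deg

u, v = np.meshgrid(np.arange(100.0), np.arange(100.0))
xyz = np.dstack([u, v, np.zeros_like(u)])           # a perfectly flat z = 0 patch
print(is_planar(xyz, cx=50, cy=50, r=10))
```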

Unified Approach to Path Planning Algorithm for SMT Inspection Machines Considering Inspection Delay Time (검사지연시간을 고려한 SMT 검사기의 통합적 경로 계획 알고리즘)

  • Lee, Chul-Hee;Park, Tae-Hyoung
    • Journal of Institute of Control, Robotics and Systems / v.21 no.8 / pp.788-793 / 2015
  • This paper proposes a path planning algorithm to reduce the inspection time of AOI (Automatic Optical Inspection) machines for SMT (Surface Mount Technology) in-line systems. Since the field-of-view of the camera attached to the machine is much smaller than the entire inspection region of the board, the inspection region must be clustered into many groups. The image acquisition time depends on the number of groups, and the camera moving time depends on the sequence in which the groups are visited. The acquired image is processed while the camera moves to the next position, but the processing may be delayed if the group includes many components to be inspected, and this inspection delay influences the overall job time of the machine. In this paper, we newly consider the inspection delay time in the path planning of the inspection machine. A unified approach using a genetic algorithm is applied to generate the groups and the visiting sequence simultaneously. The chromosome, crossover operator, and mutation operator are proposed to develop the genetic algorithm. Experimental results are presented to verify the usefulness of the proposed method.
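
A sketch of the unified encoding and its fitness evaluation: a chromosome assigns each component to a camera-view group and orders the groups, and image processing overlaps with the move to the next view so only the excess shows up as delay. The constants are hypothetical, and a random-search loop stands in for the paper's crossover and mutation operators.

```python
# Sketch: fitness of a (group assignment, visiting order) chromosome for AOI path planning.
import random
import numpy as np

components = np.random.rand(30, 2) * 100                  # component positions (mm)
N_GROUPS, SPEED, T_ACQ, T_PROC = 6, 200.0, 0.15, 0.05      # mm/s, s, s per component

def fitness(chrom):
    assign, order = chrom                                  # group id per component, visit order
    centers = [components[assign == g].mean(axis=0) if np.any(assign == g) else None
               for g in range(N_GROUPS)]
    visited = [g for g in order if centers[g] is not None]
    t = 0.0
    for i, g in enumerate(visited):
        t += T_ACQ                                         # image acquisition at this view
        proc = T_PROC * np.sum(assign == g)                # processing this view's image
        if i + 1 < len(visited):
            move = np.linalg.norm(centers[visited[i + 1]] - centers[g]) / SPEED
            t += max(move, proc)                           # processing hidden by the move
        else:
            t += proc                                      # last view: nothing to overlap
    return t

def random_chrom():
    return (np.random.randint(0, N_GROUPS, len(components)),
            random.sample(range(N_GROUPS), N_GROUPS))

best = min((random_chrom() for _ in range(500)), key=fitness)
print("best inspection time (s):", round(fitness(best), 2))
```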

An Efficient Outdoor Localization Method Using Multi-Sensor Fusion for Car-Like Robots (다중 센서 융합을 사용한 자동차형 로봇의 효율적인 실외 지역 위치 추정 방법)

  • Bae, Sang-Hoon;Kim, Byung-Kook
    • Journal of Institute of Control, Robotics and Systems / v.17 no.10 / pp.995-1005 / 2011
  • An efficient outdoor local localization method using multi-sensor fusion with an MU-EKF (Multi-Update Extended Kalman Filter) is suggested for car-like mobile robots. In outdoor environments, where mobile robots are used for exploration or military services, accurate localization with multiple sensors is indispensable. In this paper, a multi-sensor fusion outdoor localization algorithm is proposed, which fuses sensor data from an LRF (Laser Range Finder), encoders, and GPS. First, encoder data is used in the prediction stage of the MU-EKF. Then the LRF data obtained by scanning the environment is used to extract objects, and the robot position and orientation are estimated by matching these objects with map objects, as the first update stage of the MU-EKF. This estimate is finally fused with GPS as the second update stage of the MU-EKF. The MU-EKF algorithm can fuse three or more sensors efficiently, even with different sensor sampling periods, and ensures high localization accuracy. The validity of the proposed algorithm is shown via experiments.
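
A minimal sketch of the multi-update structure: one encoder-driven prediction followed by two sequential measurement updates (an LRF-derived pose, then a GPS position). The motion model, measurement models, and noise values are hypothetical placeholders.

```python
# Sketch: EKF with one prediction and two sequential measurement updates.
import numpy as np

x = np.zeros(3)                           # state [x, y, theta]
P = np.eye(3) * 0.1

def predict(x, P, v, w, dt, Q):
    theta = x[2]
    x_new = x + np.array([v * np.cos(theta) * dt, v * np.sin(theta) * dt, w * dt])
    F = np.array([[1, 0, -v * np.sin(theta) * dt],
                  [0, 1,  v * np.cos(theta) * dt],
                  [0, 0,  1]])
    return x_new, F @ P @ F.T + Q

def update(x, P, z, H, R):
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# Prediction with encoder odometry.
x, P = predict(x, P, v=0.5, w=0.05, dt=0.1, Q=np.eye(3) * 1e-3)
# First update: pose estimated from the LRF scan matched against map objects.
x, P = update(x, P, z=np.array([0.05, 0.0, 0.006]), H=np.eye(3), R=np.eye(3) * 1e-2)
# Second update: GPS provides position only.
H_gps = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
x, P = update(x, P, z=np.array([0.06, 0.01]), H=H_gps, R=np.eye(2) * 0.5)
print(x)
```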

Efficient Object Tracking System Using the Fusion of a CCD Camera and an Infrared Camera (CCD카메라와 적외선 카메라의 융합을 통한 효과적인 객체 추적 시스템)

  • Kim, Seung-Hun;Jung, Il-Kyun;Park, Chang-Woo;Hwang, Jung-Hoon
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.229-235 / 2011
  • To build a robust object tracking and identification system for an intelligent robot and/or home system, heterogeneous sensor fusion between a visible-ray system and an infrared-ray system is proposed. The proposed system separates the object by combining the ROIs (Regions of Interest) estimated from the two different images of a heterogeneous sensor that consolidates an ordinary CCD camera and an IR (infrared) camera. The human body and face are detected in both images using different algorithms, such as histogram, optical flow, skin-color model, and Haar model. The pose of the human body is also estimated from the body detection result in the IR image using the PCA algorithm along with the AdaBoost algorithm. The results from each detection algorithm are then fused to extract the best detection result. To verify the heterogeneous sensor fusion system, a few experiments were performed in various environments. The experimental results show good tracking and identification performance regardless of environmental changes. The application area of the proposed system is not limited to robot or home systems; it also extends to surveillance and military systems.
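
A minimal sketch of the fusion step, assuming the two cameras are registered into a common image frame: ROIs detected independently in the CCD and IR images are kept only when both modalities agree. The detector outputs and overlap threshold are hypothetical.

```python
# Sketch: cross-modal confirmation of detections by ROI overlap (IoU).
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2, bx2, by2 = a[0] + a[2], a[1] + a[3], b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def fuse(ccd_rois, ir_rois, thresh=0.3):
    fused = []
    for c in ccd_rois:
        for r in ir_rois:
            if iou(c, r) >= thresh:        # confirmed by both modalities
                fused.append(c)
                break
    return fused

ccd = [(100, 80, 60, 120), (300, 50, 40, 90)]   # e.g. Haar / skin-color detections
ir  = [(105, 85, 55, 110)]                      # e.g. thermal body blobs
print(fuse(ccd, ir))
```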