• Title/Summary/Keyword: face robot


A study on the control of two-cooperating robot manipulators for fixtureless assembly (무고정 조립작업을 위한 협조로봇 매니퓰레이터의 제어에 관한 연구)

  • Choi, Hyeung-Sik
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.21 no.8
    • /
    • pp.1209-1217
    • /
    • 1997
  • This paper proposes a model of the dynamics of two cooperating robot manipulators performing assembly jobs such as peg-in-hole insertion while coordinating the payload along a desired path. The mass and moment of inertia of the manipulators and the payload are assumed to be unknown. To control the uncertain system, a robust control algorithm based on computed torque control is proposed. A robust controller usually demands high input torques, so it may face input saturation in actual applications. For this reason, the robust control algorithm incorporates fuzzy logic so that the magnitude of the manipulators' input torque is kept from exceeding the hardware saturation limit while path tracking errors remain bounded. A numerical example using dual three degree-of-freedom manipulators is shown.
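The saturation-aware robust law the abstract describes lends itself to a small sketch. The following Python fragment is a hedged illustration only: the dynamics terms, PD gains, and the simple attenuation rule standing in for the fuzzy logic are placeholders, not the paper's formulation.

```python
# A minimal sketch, assuming known-form (but uncertain-parameter) dynamics:
# computed-torque control with the commanded torque scaled back whenever it
# would exceed the actuator limit, in the spirit of the fuzzy rule above.
import numpy as np

def computed_torque(M, C, G, q, qd, q_des, qd_des, qdd_des, Kp, Kd, tau_max):
    e, ed = q_des - q, qd_des - qd
    # Nominal computed-torque law with PD feedback on the tracking error.
    tau = M @ (qdd_des + Kd @ ed + Kp @ e) + C @ qd + G
    # Saturation guard: smoothly shrink the torque back inside the limit,
    # trading some tracking accuracy for feasible inputs.
    ratio = np.max(np.abs(tau) / tau_max)
    if ratio > 1.0:
        tau *= 1.0 / ratio
    return tau
```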

Audio-Visual Localization and Tracking of Sound Sources Using Kalman Filter (칼만 필터를 이용한 시청각 음원 정위 및 추적)

  • Song, Min-Gyu;Kim, Jin-Young;Na, Seung-You
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.4
    • /
    • pp.519-525
    • /
    • 2007
  • With the high interest in robot technology and its applications, research on artificial auditory systems for robots is very active. In this paper we discuss sound source localization and tracking based on audio-visual information. For the video signal we use face detection based on a skin color model, and binaural-based DOA estimation provides the audio information. We integrate both sources of information using a Kalman filter. The experimental results show that audio-visual person tracking is useful, especially when some of the information is not observed.
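The fusion step can be sketched concretely. Below is a minimal constant-velocity Kalman filter over the speaker's azimuth that stacks the audio DOA and the vision-derived face azimuth as one measurement vector; the noise levels and time step are illustrative assumptions, not the paper's tuning.

```python
# A minimal sketch (not the authors' code) of fusing audio DOA and a
# vision-based face azimuth in one Kalman update.
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])   # constant-velocity azimuth model
Q = np.diag([1e-4, 1e-3])         # process noise (assumed)
H = np.array([[1.0, 0.0],         # audio DOA measures azimuth
              [1.0, 0.0]])        # face detection also measures azimuth
R = np.diag([4.0, 1.0])           # audio assumed noisier than vision

x, P = np.zeros(2), np.eye(2)

def step(z_audio, z_vision):
    """One predict/update cycle; inputs are azimuth measurements in degrees."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    z = np.array([z_audio, z_vision])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with both cues
    P = (np.eye(2) - K @ H) @ P
    return x[0]                                   # fused azimuth estimate
```

If one modality drops out (a missed detection, say), the corresponding row of H and R can simply be left out of that update, which is what makes the filter useful when "some of the information is not observed."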

Interaction Intent Analysis of Multiple Persons using Nonverbal Behavior Features (인간의 비언어적 행동 특징을 이용한 다중 사용자의 상호작용 의도 분석)

  • Yun, Sang-Seok;Kim, Munsang;Choi, Mun-Taek;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.8
    • /
    • pp.738-744
    • /
    • 2013
  • According to cognitive science research, the interaction intent of humans can be estimated through an analysis of their representative behaviors. This paper proposes a novel methodology for reliable intention analysis of humans by applying this approach. To identify intention, eight behavioral features are extracted from four characteristics of human-human interaction, and we outline a set of core components of nonverbal human behavior. These nonverbal behaviors are associated with various recognition modules built on multimodal sensors, each handling its own modality: localizing the speaker's sound source in the audition part; recognizing frontal faces and facial expressions in the vision part; and estimating human trajectories, body pose and lean, and hand gestures in the spatial part. As a post-processing step, temporal confidence reasoning is used to improve recognition performance, and an integrated human model quantitatively classifies the intention from the multi-dimensional cues by applying weight factors, as sketched below. Interactive robots can thus make informed engagement decisions to interact effectively with multiple persons. Experimental results show that the proposed scheme works successfully between human users and a robot in human-robot interaction.
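The weighted multi-cue integration step reduces to a small scoring function. The feature names, values, and uniform weights below are invented placeholders for illustration, not the paper's eight trained features or weight factors.

```python
# A minimal sketch of weighted integration of multimodal behavioral cues
# into a single engagement-intent score.
def interaction_intent_score(cues, weights):
    """cues/weights: dicts keyed by behavioral feature, values in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * cues.get(k, 0.0) for k in weights) / total

cues = {"sound_source_on_robot": 1.0, "frontal_face": 0.8,
        "approach_trajectory": 0.6, "body_lean_forward": 0.4,
        "hand_gesture": 0.0}                      # hypothetical observations
weights = {k: 1.0 for k in cues}                  # uniform weights, for illustration
engage = interaction_intent_score(cues, weights) > 0.5   # engagement decision
```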

3-D Profile Measurement System of Live Human Faces for the '93 Taejon Expo Kumdori Robot Sculptor (93 대전엑스포 꿈돌이 조각가로보트의 인물형상 측정시스템)

  • Kim, Seung-Woo;Park, Hyun-Koo;Kim, Mun-Sang
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.19 no.3
    • /
    • pp.670-679
    • /
    • 1995
  • This paper presents a 3-D profile measurement system for live human faces, developed specially for the 'KUMDORI sculptor robot' of the '93 Taejon EXPO. The basic measurement principle is slit beam projection, a method of measuring 3-D surface profiles using the geometric optics between the slit beam and a CCD camera. Since conventional slit beam scanning takes a long measuring time, it is unfit for measuring the 3-D profiles of living subjects such as humans. Therefore, a projection-type slit beam method with a short measuring time is newly suggested, together with an algorithm that reconstructs the 3-D profile from the deformed images using a finite approximated calibration; both are practically implemented. The projection-type slit beam method was applied to spectators during the '93 Taejon EXPO, and the measurement results show that the technique is suitable for 3-D face profile measurement on a living body.
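The slit-beam principle reduces to plane-ray triangulation. A minimal Python sketch follows, assuming a calibrated pinhole camera and a known light plane; these parameters are illustrative stand-ins, not the paper's finite approximated calibration.

```python
# A minimal sketch of slit-beam triangulation: intersect the camera ray
# through a detected stripe pixel with the laser plane n.X = d (camera frame).
import numpy as np

def triangulate(pixel, K, plane_n, plane_d):
    """pixel: (u, v) stripe location; K: 3x3 camera intrinsics;
    plane_n, plane_d: slit-beam plane normal and offset."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    t = plane_d / (plane_n @ ray)     # ray parameter at the plane intersection
    return t * ray                    # 3-D point on the face surface
```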

Paint Removal of Airplane & Water Jet Application

  • Xue, Sheng-Xiong;Chen, Zheng-Wen;Ren, Qi-Le;Su, Ji-Xin;Han, Cai-Hong;Pang, Lei
    • International Journal of Fluid Machinery and Systems
    • /
    • v.7 no.3
    • /
    • pp.125-129
    • /
    • 2014
  • Paint removal and recoating are very important processes in airplane maintenance. The traditional technology uses a chemical approach, corroding the paint with paint remover. To avoid the defects of the traditional technology (corrosion, pollution, and manual labor), this paper studies a physical process that removes airplane paint with a 250 MPa/250 kW ultra-high-pressure rotary water jet, delivered through a surface cleaner installed on a six-axis robot. The paint layer of an airplane is very thin and dense, and the central difficulty of water-jet paint removal is to remove the paint layer completely without damaging the airplane's surface; the best working condition must therefore be found through tests that balance removal efficiency against a traverse speed that leaves the surface undamaged. The traverse speed in this test is about 2 m/min, the paint removal efficiency is about 30~40 m²/h, and the effective paint removal area is 85-90%. No repeats and no omissions are the basic requirements of the robot program. This physical paint removal technology will be applied in airplane maintenance, and must still pass the safety testing required for application approval.
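As a quick sanity check of the quoted figures (an inference from the abstract, since no swath width is stated there), the traverse speed and areal rate together imply an effective cleaning width:

```python
# 2 m/min traverse with 30-40 m^2/h removal implies a swath of ~0.25-0.33 m.
speed_m_per_h = 2 * 60                  # 2 m/min -> 120 m/h
for rate in (30.0, 40.0):               # areal removal rate, m^2/h
    swath = rate / speed_m_per_h        # effective swath width in metres
    print(f"{rate} m2/h -> swath ~ {swath:.2f} m")
```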

A Study on the Mechanism of Social Robot Attitude Formation through Consumer Gaze Analysis: Focusing on the Robot's Face (소비자 시선 분석을 통한 소셜로봇 태도 형성 메커니즘 연구: 로봇의 얼굴을 중심으로)

  • Ha, Sangjip;Yi, Eun-ju;Yoo, In-jin;Park, Do-Hyung
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.409-414
    • /
    • 2021
  • This study applies eye tracking to the external appearance of robots, one stream of social robot design research. A research model for social robot design was constructed by linking users' gaze-tracking metrics, measured over areas of interest such as the social robot's whole body, face, eyes, and lips, with user attitudes identified through a design-evaluation survey. The goal was to uncover the mechanism by which users form attitudes toward a robot and to derive concrete insights that can inform robot design. Specifically, the gaze-tracking metrics used in this study are Fixation, First Visit, Total Viewed, and Revisits, and the AOIs (Areas of Interest) were defined as the social robot's face, eyes, lips, and body. Through the design-evaluation survey, consumer beliefs such as the robot's emotional expressiveness, humanlikeness, and face prominence were collected, and attitude toward the robot was set as the dependent variable.
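For concreteness, the four gaze metrics named above can be computed from a fixation log roughly as follows. This Python sketch assumes a simple (AOI, start, duration) record layout; it is an illustration, not the study's actual pipeline.

```python
# A minimal sketch of per-AOI gaze metrics: fixation count, first visit time,
# total dwell time, and revisit count, from a time-ordered fixation log.
from collections import defaultdict

def aoi_metrics(fixations):
    """fixations: list of (aoi, start_time_s, duration_s), in time order."""
    m = defaultdict(lambda: {"fixations": 0, "first_visit": None,
                             "total_viewed": 0.0, "revisits": 0})
    last_aoi = None
    for aoi, start, dur in fixations:
        s = m[aoi]
        s["fixations"] += 1
        s["total_viewed"] += dur
        if s["first_visit"] is None:
            s["first_visit"] = start
        elif aoi != last_aoi:
            s["revisits"] += 1        # returned after looking elsewhere
        last_aoi = aoi
    return dict(m)
```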


Implementation of Enhanced Vision for an Autonomous Map-based Robot Navigation

  • Roland, Cubahiro;Choi, Donggyu;Kim, Minyoung;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.41-43
    • /
    • 2021
  • The Robot Operating System (ROS) has been a prominent and successful framework in robotics business and academia. However, the framework has long been focused on, and limited to, robot navigation and the manipulation of objects in the environment. This focus leaves out other important fields such as speech recognition and vision. Our goal is to take advantage of ROS's capacity to integrate additional libraries of programming functions aimed at real-time computer vision with a depth-image camera. In this paper we focus on implementing upgraded vision with the help of a depth camera, which provides high-quality data for a much enhanced and more accurate understanding of the environment. The varied data from the cameras are then incorporated into the ROS communication structure for any potential use. In this particular case, the system uses OpenCV libraries to manipulate the camera data and gives the robot face-detection capabilities while it navigates an indoor environment, as sketched below. The whole system has been implemented and tested on the latest Turtlebot3 and Raspberry Pi 4 hardware.
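The face-detection stage can be sketched with OpenCV's stock Haar cascade. The depth lookup below assumes a depth image already aligned to the color frame, and all ROS topic plumbing is omitted; this is a sketch of the technique, not the authors' node.

```python
# A minimal sketch of depth-assisted face detection with OpenCV.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr, depth=None):
    """Return [(bounding_box, distance_or_None), ...] for one frame pair."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        # Sample the aligned depth image at the face center, if available.
        d = float(depth[y + h // 2, x + w // 2]) if depth is not None else None
        results.append(((x, y, w, h), d))
    return results
```

In a ROS setup, the returned boxes and distances would simply be published as messages for any downstream consumer.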


Engagement Analysis Technology for Tele-presence Services (텔레프레즌스 서비스를 위한 몰입도 분석 기술)

  • Yoon, H.J.;Han, M.K.;Jang, J.H.
    • Electronics and Telecommunications Trends
    • /
    • v.32 no.5
    • /
    • pp.10-19
    • /
    • 2017
  • A telepresence service is an advanced video conferencing service aimed at providing remote users with the feeling of being present together at a particular location for a face-to-face group meeting. The effectiveness of this type of meeting can be further increased by automatically recognizing the audiovisual behaviors of the video conferencing users, accurately inferring their level of engagement from the recognized reactions, and providing proper feedback on their engagement state. In this paper, we review recent developments in such engagement analysis techniques as utilized in various applications, including human-robot interaction, content evaluation, telematics, and online collaboration services. In addition, we introduce a real-time engagement analysis framework employed in our telepresence service platform to increase participation in online group collaboration settings.

Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae;Choi, Dong-Geol;Kweon, In So
    • The Journal of Korea Robotics Society
    • /
    • v.12 no.3
    • /
    • pp.313-321
    • /
    • 2017
  • Among the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems are face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue; however, conventional methods have lacked accuracy, robustness, or processing speed in practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning on small grayscale images. This network jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination change and large pose change, as sketched below. The proposed framework quantitatively and qualitatively outperforms the state of the art, with an average head pose error of less than 4.5° in real time.
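The multi-task structure described here (a shared convolutional trunk feeding a detection head and a pose-regression head) can be sketched in PyTorch. Layer sizes, the 32×32 input resolution, and the loss weighting are illustrative assumptions, not the paper's architecture.

```python
# A minimal sketch of joint face detection and head-pose regression.
import torch
import torch.nn as nn

class FacePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(                 # shared feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 8 * 8, 128), nn.ReLU())
        self.face_head = nn.Linear(128, 2)          # face / non-face logits
        self.pose_head = nn.Linear(128, 3)          # yaw, pitch, roll (degrees)

    def forward(self, x):                           # x: (N, 1, 32, 32) grayscale
        f = self.trunk(x)
        return self.face_head(f), self.pose_head(f)

def multitask_loss(face_logits, pose, face_gt, pose_gt, w=0.1):
    # Joint objective: classification plus weighted pose regression
    # (the weight w is an assumption).
    return (nn.functional.cross_entropy(face_logits, face_gt)
            + w * nn.functional.mse_loss(pose, pose_gt))
```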

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.4
    • /
    • pp.266-272
    • /
    • 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) applications such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in actual applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm, and the inaccurate feature positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework that combines ASM with LK optical flow for emotion recognition; LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms; see the sketch below. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
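The LK tracking step, with a simple re-initialization heuristic of the kind the abstract alludes to for occlusions, might look like this in OpenCV. The ASM fit that supplies the initial feature points is omitted, and the 50% survival threshold is an assumed heuristic, not the authors' criterion.

```python
# A minimal sketch of propagating facial feature points between frames with
# pyramidal Lucas-Kanade optical flow.
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, points):
    """points: (N, 1, 2) float32 array of facial feature locations,
    initially produced by an ASM fit on the first frame."""
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, points, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1            # drop points the flow lost
    if ok.mean() < 0.5:                 # too few survivors: likely occlusion,
        return None                     # signal the caller to re-run the ASM fit
    return next_pts[ok].reshape(-1, 1, 2)
```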