• Title/Summary/Keyword: face robot

Search Results: 190

Real-Time Face Tracking Algorithm Robust to Illumination Variations (조명 변화에 강인한 실시간 얼굴 추적 알고리즘)

  • Lee, Yong-Beom;You, Bum-Jae;Lee, Seong-Whan;Kim, Kwang-Bae
    • Proceedings of the KIEE Conference
    • /
    • 2000.07d
    • /
    • pp.3037-3040
    • /
    • 2000
  • Real-time object tracking has emerged as an important component in several application areas, including machine vision, surveillance, human-computer interaction, and image-based control, and a variety of algorithms have been developed over the years. In many cases, however, they have shown limited results in uncontrolled situations such as illumination changes or cluttered backgrounds. In this paper, we present a novel, computationally efficient algorithm for tracking a human face robustly under illumination changes and against cluttered backgrounds. Previous algorithms usually define the color model as a 2D membership function in a color space, without taking illumination changes into account. Our new algorithm, in contrast, constructs a 3D color model by analysing a large number of images acquired under various illumination conditions. The algorithm is applied to a mobile head-eye robot and tested in various uncontrolled environments. It can track a human face at more than 100 frames per second, excluding image acquisition time. (See the illustrative sketch below.)

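A rough sketch of the 3D color-model idea described in this abstract: build a 3-channel histogram from face patches collected under different lighting and back-project it onto new frames. The YCrCb color space, bin count, and meanShift tracker are assumptions for the sketch, not the authors' actual method.

```python
# Hedged sketch of an illumination-robust, colour-histogram face tracker.
import cv2
import numpy as np

def build_skin_model(face_patches, bins=32):
    """Accumulate one 3-D YCrCb histogram from face patches captured under
    different illumination conditions (the '3D color model' of the abstract)."""
    hist = np.zeros((bins, bins, bins), dtype=np.float32)
    for patch in face_patches:
        ycrcb = cv2.cvtColor(patch, cv2.COLOR_BGR2YCrCb)
        hist += cv2.calcHist([ycrcb], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_face(frame, hist, window):
    """Back-project the colour model onto the frame and shift the search
    window (x, y, w, h) toward the densest skin-coloured region."""
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    prob = cv2.calcBackProject([ycrcb], [0, 1, 2], hist, [0, 256] * 3, 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, window = cv2.meanShift(prob, window, criteria)
    return window
```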

IoT Based Intelligent Position and Posture Control of Home Wellness Robots (홈 웰니스 로봇의 사물인터넷 기반 지능형 자기 위치 및 자세 제어)

  • Lee, Byoungsu;Hyun, Chang-Ho;Kim, Seungwoo
    • Journal of IKEEE
    • /
    • v.18 no.4
    • /
    • pp.636-644
    • /
    • 2014
  • This paper presents a technical implementation of the sensing platform for a home wellness robot. First, the self-localization technique is based on a smart home, objects in the home environment, and IoT (Internet of Things) connectivity between them and the home wellness robot. RF tags are installed in the smart home, and absolute coordinate information is acquired by an object equipped with an RF reader. Bluetooth communication between the object and the home wellness robot then delivers this absolute coordinate information to the robot. After that, the relative coordinates of the home wellness robot are found through a stereo camera mounted on the robot, completing self-localization. Second, this paper proposes a fuzzy control method based on a vision sensor for the robot's approach to an object. Using the stereo camera mounted on the face of the home wellness robot, depth information to the object is extracted, and the angular difference between the object and the robot is computed from the warped angle relative to the center of the image. The obtained information is written to a look-up table and used for attitude control when approaching the object. Experiments with the home wellness robot in the smart home environment confirm the performance of the proposed self-localization and posture control methods. (See the illustrative sketch below.)

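A minimal sketch of the bearing-from-image-center computation and a coarse look-up table of turn commands, in the spirit of the abstract. The field of view, image width, bin edges, and command names are assumptions, not values from the paper.

```python
import math

H_FOV_DEG = 60.0      # assumed horizontal field of view of the stereo camera
IMAGE_WIDTH = 640     # assumed image width in pixels

# angle range (degrees) -> turn command; a stand-in for the paper's look-up table
TURN_LUT = [
    (-90.0, -15.0, "turn_left_fast"),
    (-15.0,  -3.0, "turn_left_slow"),
    ( -3.0,   3.0, "go_straight"),
    (  3.0,  15.0, "turn_right_slow"),
    ( 15.0,  90.0, "turn_right_fast"),
]

def bearing_deg(object_x):
    """Angle of the object relative to the optical axis, from its pixel column."""
    offset = object_x - IMAGE_WIDTH / 2.0
    focal_px = (IMAGE_WIDTH / 2.0) / math.tan(math.radians(H_FOV_DEG / 2.0))
    return math.degrees(math.atan2(offset, focal_px))

def command_for(object_x):
    angle = bearing_deg(object_x)
    for low, high, command in TURN_LUT:
        if low <= angle < high:
            return command
    return "go_straight"

print(command_for(400))   # object right of centre -> 'turn_right_slow'
```
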
Real-Time Facial Recognition Using the Geometric Informations

  • Lee, Seong-Cheol;Kang, E-Sok
    • Proceedings of the ICROS (Institute of Control, Robotics and Systems) Conference
    • /
    • 2001.10a
    • /
    • pp.55.3-55
    • /
    • 2001
  • The implementation of human-like robots has advanced in various areas such as mechanical arms, legs, and applications of the five senses. Vision applications have been developed over several decades, and face recognition in particular has become a prominent issue. In addition, advances in computer systems now make it possible to process complex algorithms in real time. Most human recognition systems adopt identification methods using fingerprints, irises, and the like, which restrict the motion of the person being identified. Recently, researchers in human recognition systems have become interested in facial recognition using machine vision. Thus, the objective of this paper is the implementation of real-time ...


Development of Rotation Invariant Real-Time Multiple Face-Detection Engine (회전변화에 무관한 실시간 다중 얼굴 검출 엔진 개발)

  • Han, Dong-Il;Choi, Jong-Ho;Yoo, Seong-Joon;Oh, Se-Chang;Cho, Jae-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.4
    • /
    • pp.116-128
    • /
    • 2011
  • In this paper, we propose the structure of a high-performance face-detection engine that responds well to facial rotation by means of a rotation transformation, while minimizing the required memory usage compared to previous face-detection engines. The validity of the proposed structure has been verified through an FPGA implementation. For high-performance face detection, the MCT (Modified Census Transform) method, which is robust against lighting changes, was used. The AdaBoost learning algorithm was used to create optimized learning data, and the rotation transformation method was added to maintain effectiveness against face rotation. The proposed hardware structure is composed of a Color Space Converter, Noise Filter, Memory Controller Interface, Image Rotator, Image Scaler, MCT (Modified Census Transform), Candidate Detector / Confidence Mapper, Position Resizer, Data Grouper, and Overlay Processor / Color Overlay Processor. The face-detection engine was tested using a Virtex5 LX330 FPGA board, a QVGA-grade CMOS camera, and an LCD display, and demonstrated excellent performance in diverse real-life environments and on a standard face-detection database. As a result, a high-performance real-time face-detection engine was developed that processes at least 60 frames per second, is robust to lighting changes and face rotation, and can detect 32 faces of diverse sizes simultaneously. (See the illustrative sketch below.)

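For readers unfamiliar with the MCT named in this abstract, the sketch below shows a straightforward software version of the 3x3 Modified Census Transform; the engine's hardware pipeline, AdaBoost cascade, and rotation stage are not reproduced here.

```python
import numpy as np

def mct(gray):
    """Return a 9-bit Modified Census Transform index per pixel
    (image borders are left as zero)."""
    gray = gray.astype(np.float32)
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint16)
    # mean of each 3x3 neighbourhood (interior pixels only)
    mean = np.zeros_like(gray)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            mean[1:-1, 1:-1] += gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
    mean /= 9.0
    # build the 9-bit pattern: bit k is 1 where neighbour k exceeds the local mean
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            out[1:-1, 1:-1] |= (neighbour > mean[1:-1, 1:-1]).astype(np.uint16) << bit
            bit += 1
    return out
```
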
Design and Implementation of the ChamCham and WordChain Play Robot for Reduction of Symptoms of Depressive Disorder Patient (우울증 진단 환자의 증상 완화를 위한 참참참, 끝말잇기 놀이 로봇 설계 및 구현)

  • Eom, Hyun-Young;Seo, Dong-Yoon;Lee, Gyeong-Min;Lee, Seong-Ung;Choi, Ji-Hwan;Lee, Kang-Hee
    • The Journal of the Convergence on Culture Technology
    • /
    • v.6 no.2
    • /
    • pp.561-566
    • /
    • 2020
  • We propose the design and implementation of a ChamCham and word-chain play robot for symptom relief in patients diagnosed with depression. The main symptom of depression is the loss of interest and pleasure in life. The patient checks the emotion analysis of his or her facial expression through the robot and then plays ChamCham or the word-chain game with it. After the play, the facial expression is analyzed again and a report is produced, confirming the function of the implemented robot. A simple game cannot completely cure a patient diagnosed with depression, but it can contribute to symptom relief through gradual use. The play robot is built on Q.bo One, an open-source interactive robot from Thecorpora. Q.bo One's system captures the user's face, takes a picture, passes the image to an Azure server, and compares the emotion analysis before and after the play using the accumulated data. The play is implemented in Raspbian, the OS of Q.bo One, using the Python programming language and interaction with external sensors. The purpose of this paper is to help relieve the symptoms of depressive patients in a relatively short time with a play robot. (See the illustrative sketch below.)

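A hedged sketch of the capture-and-analyze flow described in this abstract: grab a frame before and after the play session and post it for emotion analysis. The endpoint URL, key, and response format are placeholders, not the actual Azure service contract or the authors' code.

```python
import cv2
import requests

ENDPOINT = "https://<your-endpoint>/analyze"   # placeholder, not a real URL
API_KEY = "<subscription-key>"                 # placeholder

def capture_frame(camera_index=0):
    """Grab a single frame from the robot's camera."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera read failed")
    return frame

def analyze_emotion(frame):
    """POST a JPEG-encoded frame and return the server's JSON result."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    resp = requests.post(
        ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": API_KEY,
                 "Content-Type": "application/octet-stream"},
        data=jpeg.tobytes(),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    before = analyze_emotion(capture_frame())
    # ... run the ChamCham or word-chain play here ...
    after = analyze_emotion(capture_frame())
```
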
Development of Evaluation Indicators and Analysis of Usability on Learning with a Robot for the Elderly - the case of Content using the Humanoid Robot 'LiKU' (장노년층을 위한 로봇 활용 교육의 사용성 평가 지표 개발 및 평가 분석 - 휴머노이드 로봇 'LiKU'의 콘텐츠 사례)

  • Sin, Eun-joo;Song, Joo-bong;Lim, Soon-bum
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.11
    • /
    • pp.56-63
    • /
    • 2021
  • To bridge the digital divide for the elderly, various institutions teach older adults how to use smart devices. However, instructor-led face-to-face education, combined with the learning characteristics of the elderly, limits how effective such training can be. Accordingly, educational content that teaches the use of digital devices through a robot was developed for the elderly. In this study, evaluation indicators were developed to assess the usability of digital education using robots. Using a usability evaluation based on the developed indicators, we sought to verify the usability of robot-assisted education and to confirm the possibility of expanding its application area. To successfully apply developing robot technology to various fields, it is essential to verify the usability of robot-based content, and this study of evaluation indicators and evaluation methods is expected to serve as a foundation for that work.

A Study on Non-Contact Care Robot System through Deep Learning

  • Hyun-Sik Ham;Sae Jun Ko
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.12
    • /
    • pp.33-40
    • /
    • 2023
  • As South Korea enters a super-aging society, the demand for elderly welfare services has been steadily rising, yet the current shortage of welfare personnel has emerged as a social issue. To address this challenge, there is active research on elderly care robots designed to mitigate the social isolation of the elderly and to provide emergency contact capabilities in critical situations. Nonetheless, these functions typically require direct user contact, which is a limitation of conventional elderly care robots. In this paper, we propose a care robot system capable of interacting with users without direct physical contact, built on a commercialized elderly care robot and cameras. The care robot is equipped with an edge device that runs facial expression recognition and action recognition models, trained and validated on publicly available data. Experimental results show high accuracy, with facial expression recognition reaching 96.5% and action recognition 90.9%, at inference times of 50 ms and 350 ms, respectively. These findings confirm that the proposed system offers efficient and accurate facial and action recognition, enabling seamless interaction even in non-contact situations. (See the illustrative sketch below.)

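The skeleton below illustrates one way the per-frame expression model and the clip-based action model could be wired together on an edge device. The two predict functions are placeholders standing in for the models reported in the paper, which are not reproduced here.

```python
from collections import deque
import cv2

def predict_expression(frame):
    # placeholder for the facial-expression model (paper reports ~50 ms, 96.5%)
    return "neutral"

def predict_action(clip):
    # placeholder for the action-recognition model (paper reports ~350 ms, 90.9%)
    return "sitting"

def interaction_loop(camera_index=0, clip_len=16):
    cap = cv2.VideoCapture(camera_index)
    clip = deque(maxlen=clip_len)   # short rolling buffer of frames for the action model
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        clip.append(frame)
        expression = predict_expression(frame)
        action = predict_action(list(clip)) if len(clip) == clip_len else None
        # a real system would react here (speech, alerts) without physical contact
        print(expression, action)
    cap.release()
```
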
An Intelligent Moving Wireless Camera Surveillance System with Motion sensor and Remote Control (무선조종과 모션 센서를 이용한 지능형 이동 무선감시카메라 구현)

  • Lee, Young Woong;Kim, Jong-Nam
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2009.05a
    • /
    • pp.661-664
    • /
    • 2009
  • Recently, demand for intelligent surveillance camera systems has grown, but current research tends to focus on improving individual modules rather than implementing an integrated system. In this paper, we implemented a moving wireless surveillance camera system that combines face detection with a motion sensor. In our implementation, we used a camera module from SHARP, a pair of wireless video transmission modules from ECOM, the A4WD1 Combo Kit for RC as the body of the mobile robot, a pair of ZigBee RF wireless transmission modules from ROBOBLOCK, and a motion sensor module (AMN14111) from PANASONIC. We used the OpenCV library for face detection and MFC to implement the software, and verified real-time operation of face detection, PTT control, and motion-sensor detection. The implemented system should therefore be useful for applications involving remote control, human detection, and motion sensing. (See the illustrative sketch below.)

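As a modern stand-in for the OpenCV face-detection component mentioned in this abstract, the sketch below runs a Haar-cascade detector on live camera frames. The original 2009 system used the older OpenCV C API; the wireless link, robot base, and motion sensor are not modelled.

```python
import cv2

# standard frontal-face Haar cascade shipped with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("surveillance", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```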

Autonomous Mobile Robot System Using Adaptive Spatial Coordinates Detection Scheme based on Stereo Camera (스테레오 카메라 기반의 적응적인 공간좌표 검출 기법을 이용한 자율 이동로봇 시스템)

  • Ko Jung-Hwan;Kim Sung-Il;Kim Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.1C
    • /
    • pp.26-35
    • /
    • 2006
  • In this paper, an autonomous mobile robot system for intelligent path planning using a stereo-camera-based spatial coordinate detection scheme is proposed. In the proposed system, the face area of a moving person is detected in the left image of the stereo pair using the YCbCr color model, and its center coordinates are computed with the centroid method; using these data, the stereo camera mounted on the mobile robot is controlled to track the moving target in real time. Moreover, depth information is obtained from the disparity map computed from the left and right images captured by the tracking-controlled stereo camera, together with the perspective transformation between the 3-D scene and the image plane. Finally, based on the analysis of these calculated coordinates, intelligent path planning and estimation for the mobile robot system are derived. From experiments on robot driving with 240 frames of stereo images, the error between the calculated and measured distances from the mobile robot to the objects, and between the objects themselves, is found to be very low: 2.19% and 1.52% on average, respectively. (See the illustrative sketch below.)

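A compact sketch of the skin-color face localisation and depth steps outlined in this abstract: YCrCb thresholding, centroid of the skin mask, and pinhole depth from disparity. The Cr/Cb thresholds and the depth formula are common textbook choices, not values from the paper.

```python
import cv2

def face_centroid(left_bgr):
    """Centre (cx, cy) of the skin-coloured region in the left stereo image."""
    ycrcb = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # assumed skin range
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole-model depth Z = f * B / d for a rectified stereo pair."""
    return focal_px * baseline_m / disparity_px
```
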
Multi-classifier Decision-level Fusion for Face Recognition (다중 분류기의 판정단계 융합에 의한 얼굴인식)

  • Yeom, Seok-Won
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.4
    • /
    • pp.77-84
    • /
    • 2012
  • Face classification has wide applications in intelligent video surveillance, content retrieval, robot vision, and human-machine interfaces. Pose and expression changes and arbitrary illumination are typical problems for face recognition, and when the face is captured at a distance, image quality is often degraded by blurring and noise. This paper investigates the efficacy of multi-classifier decision-level fusion for face classification based on photon-counting linear discriminant analysis with two different cost functions: Euclidean distance and negative normalized correlation. Decision-level fusion comprises three stages: cost normalization, cost validation, and fusion rules. First, the costs are normalized into a uniform range; then candidate costs are selected during validation. Three fusion rules are employed: minimum, average, and majority-voting rules. In the experiments, defocus and motion blur are rendered to simulate long-distance conditions. It is shown that the decision-level fusion scheme provides better results than a single classifier. (See the illustrative sketch below.)
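
The sketch below illustrates the three fusion rules named in the abstract, applied to per-classifier cost vectors (one cost per candidate class). The min-max normalization and the omission of the validation stage are simplifications for illustration.

```python
import numpy as np

def normalize(costs):
    """Scale each classifier's costs into [0, 1] (simple min-max normalization)."""
    costs = np.asarray(costs, dtype=float)
    lo = costs.min(axis=1, keepdims=True)
    hi = costs.max(axis=1, keepdims=True)
    return (costs - lo) / np.where(hi - lo == 0, 1, hi - lo)

def fuse(costs, rule="average"):
    """costs: shape (n_classifiers, n_classes); returns the winning class index."""
    c = normalize(costs)
    if rule == "minimum":
        return int(np.argmin(c.min(axis=0)))
    if rule == "average":
        return int(np.argmin(c.mean(axis=0)))
    if rule == "majority":
        votes = np.argmin(c, axis=1)                # each classifier votes for its cheapest class
        return int(np.bincount(votes).argmax())     # most common vote wins
    raise ValueError("unknown rule")

# example: two classifiers (e.g. Euclidean and negative normalized correlation costs)
costs = [[0.2, 0.9, 0.5],
         [0.4, 0.3, 0.8]]
print(fuse(costs, "majority"))
```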