• Title/Summary/Keyword: Face motion

Search Results: 324

Face and Hand Activity Detection Based on Haar Wavelet and Background Updating Algorithm

  • Shang, Yiting;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.8
    • /
    • pp.992-999
    • /
    • 2011
  • This paper proposes a human body posture recognition program based on Haar-like features and hand activity detection. Its distinguishing feature is the combination of face detection and motion detection. First, the program uses Haar-like feature face detection to obtain the location of the human face. The Haar-like feature has the advantage of speed: with a small amount of calculation it can exclude a large amount of interference, discriminate the human face more accurately, and obtain the face position. The program then uses frame subtraction to obtain the position of human body motion; this method performs well for motion detection. Afterwards, the program recognizes the human body motion by calculating the relationship between the face position and the contour of the detected body motion. Testing shows that the recognition rate of this algorithm is above 92%. The results show that the algorithm obtains its result quickly while guaranteeing its accuracy.
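The frame-subtraction step described in this abstract can be sketched in a few lines. The following is a minimal illustration on toy grayscale frames; the frame data and the threshold of 30 are hypothetical, and a real system would relate the resulting motion centroid to a face box obtained from a Haar cascade rather than to hand-made data.

```python
def frame_subtraction(prev, curr, threshold=30):
    """Return a binary motion mask: 1 where the pixel change exceeds threshold."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

def motion_centroid(mask):
    """Centroid (row, col) of the moving pixels, or None if nothing moved."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))

# Two toy 4x4 grayscale frames: a bright blob moves one pixel to the right.
prev = [[0, 0, 0, 0],
        [0, 200, 0, 0],
        [0, 200, 0, 0],
        [0, 0, 0, 0]]
curr = [[0, 0, 0, 0],
        [0, 0, 200, 0],
        [0, 0, 200, 0],
        [0, 0, 0, 0]]

mask = frame_subtraction(prev, curr)
centroid = motion_centroid(mask)
# The motion centroid can then be compared with the detected face position
# (above/below, left/right of the face box) to classify the body motion.
```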

Realtime Face Tracking using Motion Analysis and Color Information (움직임분석 및 색상정보를 이용한 실시간 얼굴추적)

  • Lee, Kyu-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.5
    • /
    • pp.977-984
    • /
    • 2007
  • A realtime face tracking algorithm using motion analysis of image sequences and color information is proposed. The motion area in realtime moving images is detected by first calculating temporal derivatives; candidate pixels representing the face region are then extracted by fusion filtering with multiple color models; and realtime face tracking is performed by discriminating face components, including the eyes and lips. We improve the stability of the tracking performance by template matching between the face region in the image sequence and a reference template of the face components.
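The fusion filtering with multiple color models mentioned above can be illustrated as below. Both rules and all thresholds are generic skin-color heuristics chosen for illustration, not the models used in the paper.

```python
def is_skin_rgb(r, g, b):
    """Illustrative RGB skin rule (after Peer et al.); thresholds are generic."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def is_skin_normalized_rg(r, g, b):
    """A second, independent model on normalized red/green; bounds illustrative."""
    s = r + g + b
    if s == 0:
        return False
    rn, gn = r / s, g / s
    return 0.35 < rn < 0.55 and 0.25 < gn < 0.38

def skin_mask(pixels):
    """Fusion filtering: keep a pixel as a face candidate only if both models agree."""
    return [[1 if is_skin_rgb(*px) and is_skin_normalized_rg(*px) else 0
             for px in row] for row in pixels]

frame = [[(200, 120, 90), (50, 80, 200)]]  # one skin-like pixel, one blue pixel
mask = skin_mask(frame)
```

Requiring agreement between independent color models trades some recall for far fewer false candidates, which is what makes the subsequent component discrimination (eyes, lips) tractable in real time.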

3D FACE RECONSTRUCTION FROM ROTATIONAL MOTION

  • Sugaya, Yoshiko;Ando, Shingo;Suzuki, Akira;Koike, Hideki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.714-718
    • /
    • 2009
  • 3D reconstruction of a human face from an image sequence remains an important problem in computer vision. We propose a method, based on a factorization algorithm, that reconstructs a 3D face model from short image sequences exhibiting rotational motion. Factorization algorithms can recover structure and motion simultaneously from one image sequence, but they usually require that all feature points be tracked well. Under rotational motion, however, feature tracking often fails because features become occluded or leave the frame. Additionally, the paucity of images may make feature tracking more difficult or decrease reconstruction accuracy. The proposed approach can handle short image sequences exhibiting rotational motion in which feature points are likely to be missing. We implement it as a reconstruction method that employs image sequence division and a feature tracking method based on Active Appearance Models to avoid tracking failure. Experiments conducted on an image sequence of a human face demonstrate the effectiveness of the proposed method.

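The factorization referred to above follows the standard Tomasi-Kanade formulation; the notation below is that general formulation, not taken from the paper itself. The measurement matrix $W$ ($2F \times P$) stacks the centered image coordinates of $P$ tracked feature points over $F$ frames:

```latex
W =
\begin{bmatrix}
u_{1,1} & \cdots & u_{1,P} \\
\vdots  &        & \vdots  \\
v_{F,1} & \cdots & v_{F,P}
\end{bmatrix}
= M\,S, \qquad \operatorname{rank}(W) \le 3,
\qquad
W \approx U_3 \Sigma_3 V_3^{\top}, \quad
\hat{M} = U_3 \Sigma_3^{1/2}, \quad
\hat{S} = \Sigma_3^{1/2} V_3^{\top}
```

Here $\hat{M}$ ($2F \times 3$) collects the per-frame camera rows (motion) and $\hat{S}$ ($3 \times P$) the 3D point positions (shape), recovered jointly from the rank-3 truncated SVD. Missing entries in $W$ from failed tracks are exactly why the paper divides the sequence and re-tracks features with Active Appearance Models.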

Human Face Tracking and Modeling using Active Appearance Model with Motion Estimation

  • Tran, Hong Tai;Na, In Seop;Kim, Young Chul;Kim, Soo Hyung
    • Smart Media Journal
    • /
    • v.6 no.3
    • /
    • pp.49-56
    • /
    • 2017
  • Images and videos that include the human face contain a lot of information, so accurately extracting the human face is a very important issue in computer vision. In real life, however, human faces have various shapes and textures. To adapt to these variations, a model-based approach is one of the best options, since unknown data can be represented by the model built from known data. The model-based approach has a weakness, however, when the motion between two frames is large, whether from a sudden change of pose or fast movement. In this paper, we propose an enhanced human face-tracking model. The approach includes human face detection and motion estimation using Cascaded Convolutional Neural Networks, and continuous face tracking and model-correction steps using the Active Appearance Model. The proposed system detects the human face in the first input frame and initializes the model. On later frames, Cascaded CNN face detection is used to estimate the target's motion, such as location or pose, before applying the previous model to fit the new target.
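The detect-then-fit control flow described here can be sketched as follows. Both steps are hypothetical stand-ins (a dict lookup in place of Cascaded CNN detection, a blend in place of AAM fitting); only the loop structure, detecting a coarse pose per frame and then correcting the model toward it, reflects the abstract.

```python
def detect_coarse(frame):
    """Stand-in for Cascaded CNN detection: returns a coarse (x, y) face location.
    Here a 'frame' is just a dict carrying the position, for illustration."""
    return frame["face_pos"]

def fit_model(model_pos, coarse_pos, alpha=0.7):
    """Stand-in for AAM fitting: refine the model toward the coarse estimate.
    alpha is an illustrative blending weight, not a parameter from the paper."""
    return tuple(m + alpha * (c - m) for m, c in zip(model_pos, coarse_pos))

def track(frames):
    # Initialize the model from the detection in the first input frame.
    model_pos = detect_coarse(frames[0])
    trajectory = [model_pos]
    for frame in frames[1:]:
        coarse = detect_coarse(frame)             # per-frame motion/pose estimate
        model_pos = fit_model(model_pos, coarse)  # correct the model toward it
        trajectory.append(model_pos)
    return trajectory

frames = [{"face_pos": (10.0, 10.0)},
          {"face_pos": (20.0, 10.0)},
          {"face_pos": (30.0, 10.0)}]
path = track(frames)
```

Reinitializing from a detector on every frame is what lets the model survive the large inter-frame motion that defeats a purely incremental AAM fit.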

Detection of Face Direction by Using Inter-Frame Difference

  • Jang, Bongseog;Bae, Sang-Hyun
    • Journal of Integrative Natural Science
    • /
    • v.9 no.2
    • /
    • pp.155-160
    • /
    • 2016
  • Applying image processing techniques to education, we develop a system that photographs the learner's face, detects expression and movement from video, and estimates the learner's degree of concentration. For a single learner, the system estimates concentration from the direction of the learner's gaze and the condition of the eyes. With multiple learners, the concentration level of every learner in the classroom must be measured, but assigning one camera per learner is inefficient. In this paper, the position of the face region is estimated from video of learners in class using inter-frame differences together with the motion direction, and a system that detects the face direction through face-part detection by template matching is proposed. From the inter-frame difference computed on the first image of the video, frontal face detection is performed with the Viola-Jones method. The direction of the motion arising in the face region is then estimated from the displacement, and the face region is tracked. The face parts are detected during tracking. Finally, the direction of the face is estimated from the results of face tracking and face-part detection.
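Estimating the direction of motion from the displacement of the tracked face region can be reduced to classifying the shift of the region's centroid between frames; the dead-zone threshold below is an illustrative assumption, not a value from the paper.

```python
def motion_direction(c_prev, c_curr, eps=0.5):
    """Classify the dominant displacement of a tracked face-region centroid.
    Centroids are (row, col) pairs; eps is an illustrative dead-zone below
    which jitter is reported as no motion."""
    dr = c_curr[0] - c_prev[0]
    dc = c_curr[1] - c_prev[1]
    if max(abs(dr), abs(dc)) < eps:
        return "still"
    if abs(dc) >= abs(dr):
        return "right" if dc > 0 else "left"
    return "down" if dr > 0 else "up"

d = motion_direction((10.0, 10.0), (10.0, 14.0))  # centroid moved 4 px right
```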

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.9B no.5
    • /
    • pp.563-570
    • /
    • 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. Facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with the real face.
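Transforming recovered 3D motion into global motion parameters typically means extracting rotation angles from the recovered rotation matrix. The sketch below uses a Z-Y-X Euler convention as an assumed, illustrative step toward FAP-style global head rotation; it is not the paper's exact conversion.

```python
import math

def rotation_to_euler_zyx(R):
    """Convert a 3x3 rotation matrix (row-major nested lists) to
    (yaw, pitch, roll) in radians under the Z-Y-X convention. Angles of this
    kind are what get mapped onto MPEG-4 FAP global motion parameters;
    the convention chosen here is an illustrative assumption."""
    pitch = -math.asin(R[2][0])
    roll = math.atan2(R[2][1], R[2][2])
    yaw = math.atan2(R[1][0], R[0][0])
    return yaw, pitch, roll

# A pure yaw (head turn) of 30 degrees about the vertical axis:
a = math.radians(30.0)
R = [[math.cos(a), -math.sin(a), 0.0],
     [math.sin(a),  math.cos(a), 0.0],
     [0.0,          0.0,         1.0]]
yaw, pitch, roll = rotation_to_euler_zyx(R)
```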

3D Face Tracking using Particle Filter based on MLESAC Motion Estimation (MLESAC 움직임 추정 기반의 파티클 필터를 이용한 3D 얼굴 추적)

  • Sung, Ha-Cheon;Byun, Hye-Ran
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.8
    • /
    • pp.883-887
    • /
    • 2010
  • 3D face tracking is one of the essential techniques in computer vision applications such as surveillance, HCI (Human-Computer Interface), and entertainment. However, 3D face tracking demands a high computational cost, which is a serious obstacle to applying it to mobile devices with low computing capacity. In this paper, to reduce the computational cost of 3D tracking and extend 3D face tracking to mobile devices, an efficient particle filtering method using MLESAC (Maximum Likelihood Estimation SAmple Consensus) motion estimation is proposed. Finally, its speed and performance are evaluated experimentally.
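A particle filter whose prediction step is driven by an external motion estimate, as in this abstract, can be sketched on a 1D state. The Gaussian process and sensor models and all parameters are illustrative assumptions, and MLESAC itself is replaced here by an exact motion estimate; only the predict-weight-resample structure is the point.

```python
import math
import random

def particle_filter_step(particles, motion_est, observation,
                         noise=1.0, obs_sigma=2.0):
    """One predict-weight-resample cycle of a particle filter on a 1D state.
    motion_est plays the role of the externally estimated motion driving the
    prediction; noise and obs_sigma are illustrative."""
    # Predict: shift each particle by the estimated motion plus process noise.
    particles = [p + motion_est + random.gauss(0, noise) for p in particles]
    # Weight: Gaussian likelihood of the observation for each particle.
    weights = [math.exp(-((p - observation) ** 2) / (2 * obs_sigma ** 2))
               for p in particles]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [0.0] * 300
for t in range(1, 6):  # the true state moves +2 per frame; observed exactly
    particles = particle_filter_step(particles, motion_est=2.0,
                                     observation=2.0 * t)
estimate = sum(particles) / len(particles)  # posterior mean, close to 10
```

Feeding a good motion estimate into the prediction is what lets the filter keep few particles, which is the source of the computational savings the paper targets.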

Sleep Mode Detection for Smart TV using Face and Motion Detection

  • Lee, Suwon;Seo, Yong-Ho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.7
    • /
    • pp.3322-3337
    • /
    • 2018
  • Sleep mode detection is a significant power management and green computing feature. However, it is difficult for televisions and smart TVs to detect deactivation events, because these devices can be used without the assistance of an input device. In this paper, we propose a robust method for smart TVs to detect deactivation events based on a visual combination of face detection and motion detection. The results of the experiments indicate that the proposed method uses motion detection to significantly reduce incorrect face detections and to recognize human absence. The results also show that the proposed method is robust and effective in reducing the power consumption of smart TVs.
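Combining face and motion detection into a sleep-mode decision can be sketched as a simple idle counter over per-frame detector flags; the idle limit and the flag sequences below are hypothetical, chosen only to show the logic.

```python
def sleep_mode_decision(face_flags, motion_flags, idle_limit=3):
    """Decide per frame whether the TV should enter sleep mode.
    A frame counts as active if either a face or motion is detected;
    sleep triggers after idle_limit consecutive inactive frames
    (idle_limit is an illustrative parameter)."""
    idle = 0
    decisions = []
    for face, motion in zip(face_flags, motion_flags):
        idle = 0 if (face or motion) else idle + 1
        decisions.append(idle >= idle_limit)
    return decisions

# Face lost from frame 2; motion stops from frame 4; sleep from frame 6.
face   = [1, 1, 0, 0, 0, 0, 0, 0]
motion = [1, 0, 1, 1, 0, 0, 0, 0]
decisions = sleep_mode_decision(face, motion)
```

The OR of the two detectors is what makes the scheme robust: a missed face detection alone (viewer turned away) does not put the TV to sleep as long as motion is still observed.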

Emergency Signal Detection based on Arm Gesture by Motion Vector Tracking in Face Area

  • Fayyaz, Rabia;Park, Dae Jun;Rhee, Eun Joo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.1
    • /
    • pp.22-28
    • /
    • 2019
  • This paper presents a method for detecting an emergency signal expressed by arm gestures, based on motion segmentation and face area detection in a surveillance system. The important indicators of an emergency are arm gestures and voice. We define the emergency signal as the 'Help Me' arm gesture performed within a rectangle around the face. The 'Help Me' gesture is detected by tracking changes in the direction of the horizontal motion vectors of the left and right arms. The experimental results show that the proposed method successfully detects the 'Help Me' emergency signal for a single person and distinguishes it from similar arm gestures such as waving 'Bye' and stretching. The proposed method can be used effectively in situations where people cannot speak, or where there is a language or speech disability.
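Detecting a gesture from changes in the direction of horizontal motion vectors reduces to counting sign reversals in the horizontal component; the magnitude threshold and the number of reversals required below are illustrative assumptions, not the paper's values.

```python
def count_direction_changes(dx_sequence, min_mag=1.0):
    """Count sign changes in a sequence of horizontal motion-vector components.
    min_mag filters out small jitter (illustrative threshold)."""
    signs = [1 if dx > 0 else -1 for dx in dx_sequence if abs(dx) >= min_mag]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def is_help_signal(dx_sequence, min_changes=4):
    """Repeated left/right reversals near the face suggest a 'Help Me' wave;
    min_changes is an illustrative decision threshold."""
    return count_direction_changes(dx_sequence) >= min_changes

wave    = [3, -4, 5, -3, 4, -5]  # rapid left/right reversals inside the face box
stretch = [2, 3, 2, 3, 2]        # steady movement in one direction
```

A one-directional stretch or a slow single wave produces few reversals, which is how the method separates 'Help Me' from 'Bye' and stretching gestures.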

A Movement Tracking Model for Non-Face-to-Face Excercise Contents (비대면 운동 콘텐츠를 위한 움직임 추적 모델)

  • Chung, Daniel;Cho, Mingu;Ko, Ilju
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.6
    • /
    • pp.181-190
    • /
    • 2021
  • Sports activities involving multiple people are difficult to conduct while a widespread epidemic such as COVID-19 is spreading, which causes a lack of physical activity in modern people. This problem can be overcome with online exercise contents, but detailed postures cannot be checked as they can during face-to-face exercise. In this study, we present a model that detects posture and tracks movement using an IT system for better management of non-face-to-face exercise contents. The proposed motion tracking model defines a body model with reference to motion analysis methods widely used in physical education, and defines posture and movement accordingly. Using the proposed model, it is possible to recognize and analyze the movements used in exercise, count specific movements within an exercise program, and detect whether the program is being performed. To verify the validity of the proposed model, we implemented motion tracking and exercise-program tracking programs using the Azure Kinect DK, a markerless motion capture device. If the proposed motion tracking model and the performance of the motion capture system are improved, more detailed motion analysis will become possible and more types of motion can be handled.
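Motion analysis on markerless capture data such as Azure Kinect body joints usually starts from joint angles. The sketch below computes the angle at a joint from three 3D points and counts exercise repetitions from flex/extend transitions; the angle thresholds are illustrative assumptions, not values from the paper.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist for the elbow angle."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def count_reps(angles, low=90.0, high=160.0):
    """Count repetitions as transitions from a flexed (< low) to an extended
    (> high) joint angle; thresholds are illustrative."""
    reps, flexed = 0, False
    for ang in angles:
        if ang < low:
            flexed = True
        elif ang > high and flexed:
            reps += 1
            flexed = False
    return reps

# Fully extended elbow: shoulder above, wrist below, elbow at the origin.
straight = joint_angle((0.0, 1.0, 0.0), (0.0, 0.0, 0.0), (0.0, -1.0, 0.0))
reps = count_reps([170.0, 80.0, 170.0, 85.0, 175.0])  # two full curls
```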