• Title/Summary/Keyword: Face Sequences

Search results: 78

PERSONAL SPACE-BASED MODELING OF RELATIONSHIPS BETWEEN PEOPLE FOR NEW HUMAN-COMPUTER INTERACTION

  • Amaoka, Toshitaka;Laga, Hamid;Saito, Suguru;Nakajima, Masayuki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.746-750 / 2009
  • In this paper we focus on the Personal Space (PS) as a nonverbal communication concept for building a new Human-Computer Interaction. Analyzing people's positions with respect to their PS gives an idea of the nature of their relationship. We propose to analyze and model the PS using Computer Vision (CV) and to visualize it using Computer Graphics. For this purpose, we define the PS based on four parameters: the distance between people, their face orientations, age, and gender. We automatically estimate the first two parameters from image sequences using CV technology, while the other two parameters are set manually. Finally, we calculate the pairwise relationships of multiple persons and visualize them as 3D contours in real time. Our method can sense and visualize invisible and unconscious PS distributions and convey the spatial relationship of users through an intuitive visual representation. The results of this paper can be applied to Human-Computer Interaction in public spaces.

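The PS in the abstract above depends on inter-person distance and face orientation. As a rough illustration (hypothetical parameters and fall-off, not the authors' model), a personal-space intensity that extends further in front of a person than behind can be sketched as:

```python
import numpy as np

def personal_space(distance, face_angle, base_radius=1.2):
    """Illustrative personal-space intensity at `distance` metres from a
    person, for a point at `face_angle` radians relative to the direction
    the person faces.  The space reaches further to the front (angle 0)
    than to the back, a common proxemics assumption."""
    # Effective radius shrinks as the point moves behind the person.
    radius = base_radius * (1.0 + 0.5 * np.cos(face_angle))
    # Gaussian fall-off with distance, equal to 1 at the person itself.
    return np.exp(-(distance / radius) ** 2)

# Intensity directly in front vs. directly behind at the same distance.
front = personal_space(1.0, 0.0)
back = personal_space(1.0, np.pi)
```

Evaluating such a function on a 2D grid around each tracked person and summing over persons would yield the kind of contour field the paper renders in 3D.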

Design and Implementation of a Real-time Region Pointing System using Arm-Pointing Gesture Interface in a 3D Environment

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.290-293 / 2009
  • In this paper, we propose a method to estimate the pointed-at region in the real world from camera images. In general, an arm-pointing gesture encodes a direction extending from the user's fingertip to the target point. In the proposed work, we assume that the pointing ray can be approximated by a straight line passing through the user's face and fingertip. Therefore, the proposed method extracts two end points for the estimation of the pointing direction: one from the user's face and another from the user's fingertip region. Then, the pointing direction and its target region are estimated based on the 2D-3D projective mapping between the camera images and the real-world scene. To demonstrate an application of the proposed method, we constructed an ICGS (interactive cinema guiding system) that employs two CCD cameras and a monitor. The accuracy and robustness of the proposed method are verified by experimental results on several real video sequences.

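The pointing ray described above, a straight line through the user's face and fingertip, can be intersected with a known plane to obtain the target point. A minimal sketch, assuming both points are already available in 3D world coordinates (the paper instead derives the target via a 2D-3D projective mapping between camera images and the scene):

```python
import numpy as np

def pointing_target(face, fingertip, plane_z=0.0):
    """Extend the ray from the face through the fingertip until it
    intersects the horizontal plane z = plane_z (e.g. a floor map)."""
    face = np.asarray(face, dtype=float)
    fingertip = np.asarray(fingertip, dtype=float)
    direction = fingertip - face              # pointing direction
    t = (plane_z - face[2]) / direction[2]    # ray parameter at the plane
    return face + t * direction

# Eye point at 1.7 m height, fingertip slightly forward and below.
target = pointing_target(face=[0.0, 0.0, 1.7], fingertip=[0.2, 0.1, 1.4])
```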

Hand Gesture Recognition using Optical Flow Field Segmentation and Boundary Complexity Comparison based on Hidden Markov Models

  • Park, Sang-Yun;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.14 no.4 / pp.504-516 / 2011
  • In this paper, we present a method to detect the human hand and recognize hand gestures. To detect the hand region, we use human skin color and a hand shape feature (boundary complexity) to locate the hand in the input image, and an optical flow algorithm to track the hand's movement. Hand gesture recognition is composed of two parts: posture recognition and motion recognition. To describe the hand posture feature, we employ the Fourier descriptor method because it is rotation-invariant, and we employ PCA to extract features across gesture frame sequences. An HMM is finally used to classify these features and make the final decision on a hand gesture. The experiments show that the proposed method achieves a 99% recognition rate in environments with a simple background and no face region present, dropping to 89.5% in environments with a complex background and a face region present. These results indicate that the proposed algorithm is suitable for practical applications.
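The rotation invariance of the Fourier descriptor noted above follows from taking magnitudes of the contour's Fourier coefficients: rotating a contour multiplies every coefficient by the same unit complex number, which the magnitude discards. A minimal sketch (illustrative, not the paper's exact descriptor):

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=8):
    """Shape signature from a closed contour given as complex points
    x + iy: keep FFT magnitudes (rotation-invariant) and divide by the
    first non-DC magnitude so uniform scaling cancels as well."""
    coeffs = np.fft.fft(contour)
    mags = np.abs(coeffs[1:n_coeffs + 1])
    return mags / mags[0]

# A square contour and the same square rotated by 30 degrees.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
square = np.sign(np.cos(theta)) + 1j * np.sign(np.sin(theta))
rotated = square * np.exp(1j * np.pi / 6)
```

Both contours produce the same descriptor, which is what lets the posture classifier ignore hand orientation.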

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems / v.17 no.3 / pp.556-570 / 2021
  • Existing video expression recognition methods mainly focus on the spatial feature extraction of video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolutional neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolutional neural networks are used to extract spatiotemporal expression features: a spatial convolutional neural network extracts the spatial information features of each static expression image, while a temporal convolutional neural network extracts dynamic information features from the optical flow of multiple expression images. Then, the spatiotemporal features learned by the two networks are fused by multiplication. Finally, the fused features are input into a support vector machine for facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, outperforming the compared methods.
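The multiplicative fusion step can be read as an element-wise product of the two networks' equally sized feature vectors, followed by normalisation; the sketch below is one simple interpretation, not necessarily the paper's exact formulation:

```python
import numpy as np

def multiplicative_fusion(spatial_feat, temporal_feat):
    """Fuse two equally sized feature vectors by element-wise product,
    then L2-normalise the result before feeding it to a classifier."""
    fused = spatial_feat * temporal_feat
    norm = np.linalg.norm(fused)
    return fused / norm if norm > 0 else fused

spatial = np.array([0.2, 0.8, 0.4])
temporal = np.array([0.5, 0.1, 0.9])
fused = multiplicative_fusion(spatial, temporal)
```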

Discovery of novel haplotypes from wild populations of Kappaphycus (Gigartinales, Rhodophyta) in the Philippines

  • Roleda, Michael Y.;Aguinaldo, Zae-Zae A.;Crisostomo, Bea A.;Hinaloc, Lourie Ann R.;Projimo, Vicenta Z.;Dumilag, Richard V.;Lluisma, Arturo O.
    • ALGAE / v.36 no.1 / pp.1-12 / 2021
  • As the global demand for the carrageenophyte Kappaphycus steadily increases, its overall productivity, carrageenan quality, and disease resistance are gradually declining. In the face of this dilemma, wild Kappaphycus populations are viewed as sources of new cultivars that could potentially enhance production; therefore, assessment of their diversity is crucial. This study highlights the morphological and genetic diversity of wild Kappaphycus species obtained from two sites in the Philippines. Nucleotide alignments of the available 5' region of the mitochondrial cytochrome c oxidase subunit I (COI-5P) and cox2-3 spacer sequences of Kappaphycus confirmed the presence of K. alvarezii in Guiuan, Eastern Samar and K. striatus in Bolinao, Pangasinan. Based on the concatenated sequences of the COI-5P and the cox2-3 spacer, nine novel haplotypes were observed along with other published haplotypes; however, there was no relationship between haplotype and morphology. These newly recognized haplotypes indicate a reservoir of unutilized wild genotypes in the Philippines that could be exploited in developing new cultivars with superior traits. The DNA barcodes generated from this study effectively expand the existing databank of Kappaphycus sequences and can provide insights into the genetic diversity of Kappaphycus species in the country.

A Study on an Open/Closed Eye Detection Algorithm for Drowsy Driver Detection

  • Kim, TaeHyeong;Lim, Woong;Sim, Donggyu
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.7 / pp.67-77 / 2016
  • In this paper, we propose an algorithm for open/closed eye detection based on a modified Hausdorff distance. The proposed algorithm consists of two parts: face detection and open/closed eye detection. To detect faces in an image, the MCT (Modified Census Transform) is employed, based on local structure characteristics that use relative pixel values in a fixed-size area. Then, the eye coordinates are found and open/closed eyes are detected using the MHD (Modified Hausdorff Distance) in the detected face region. First, the face detection process creates MCT images from various face images and extracts reference features offline by PCA (Principal Component Analysis). After extracting the reference features, it detects a face region by comparing features newly extracted from the input face image against the reference features using the Euclidean distance. Afterward, the process finds the eye coordinates and detects open/closed eyes using template matching based on the MHD in each eye region. In the performance evaluation, the proposed algorithm achieved 94.04% accuracy on average for open/closed eye detection on grayscale test video sequences at 30 fps and 320×180 resolution.
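The MCT used above replaces each pixel with a bit pattern recording which of its 3×3 neighbours exceed the local mean, which makes the code invariant to gain and offset changes in illumination. A minimal dense sketch (illustrative only, not the paper's implementation):

```python
import numpy as np

def mct(image):
    """Modified Census Transform: each interior pixel becomes a 9-bit
    code, one bit per 3x3 neighbour, set when that neighbour is brighter
    than the local 3x3 mean."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = image[y - 1:y + 2, x - 1:x + 2].astype(float)
            bits = (patch > patch.mean()).ravel()
            out[y - 1, x - 1] = sum(int(b) << i for i, b in enumerate(bits))
    return out

img = np.arange(25, dtype=np.uint8).reshape(5, 5)
codes = mct(img)
# Same codes after a brightness/contrast change: gain 2, offset 10.
same_codes = mct(img.astype(float) * 2 + 10)
```

Because the comparison is against the local mean, any positive-gain affine change of brightness leaves every bit, and hence every code, unchanged.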

Face and Iris Detection Algorithm based on SURF and circular Hough Transform (Driver Pupil Detection Technique based on SURF and Hough Transform)

  • Artem, Lenskiy;Lee, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.5 / pp.175-182 / 2010
  • This paper presents a novel algorithm for face and iris detection, with application to driver iris monitoring. The proposed algorithm consists of the following major steps: skin-color segmentation, facial feature segmentation, and iris positioning. For skin segmentation, we apply a multi-layer perceptron to approximate the statistical probability of certain skin colors and filter out those with low probabilities. The next step segments the face region into the following categories: eye, mouth, eyebrow, and remaining facial regions. For this purpose we propose a novel segmentation technique based on the estimation of facial-class probability density functions (PDFs). Each facial-class PDF is estimated from salient features extracted from the corresponding facial image region. Pixels are then classified according to the highest probability among the four estimated PDFs. The final step applies the circular Hough transform to the detected eye regions to extract the position and radius of the iris. We tested our system on two datasets. The first was obtained from the Web and contains faces under different illuminations. The second, which we collected ourselves, contains images from video sequences recorded by a CCD camera while a driver was driving a car. The experimental results show high detection rates.
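In the circular Hough transform of the final step, every edge pixel votes for all circle centres consistent with a given radius, and the accumulator peak gives the iris centre. A minimal sketch for a single known radius (a real detector would also search over a radius range):

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Vote each edge point onto all candidate centres at `radius`;
    return the (row, col) of the accumulator peak."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for x, y in edge_points:
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < shape[1]) & (cy >= 0) & (cy < shape[0])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), shape)

# Synthetic circle of radius 10 centred at (x=30, y=20).
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = [(30 + 10 * np.cos(a), 20 + 10 * np.sin(a)) for a in angles]
cy, cx = hough_circle_center(pts, radius=10, shape=(50, 60))
```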

Facial Expression Recognition using 1D Transform Features and Hidden Markov Model

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Daijin
    • Journal of Electrical Engineering and Technology / v.12 no.4 / pp.1657-1662 / 2017
  • Facial expression recognition systems using video devices have emerged as an important component of natural human-machine interfaces, contributing to practical applications such as security systems, behavioral science, and clinical practice. In this work, we present a new method to analyze, represent, and recognize human facial expressions from a sequence of facial images. Under the proposed framework, the overall procedure includes accurate face detection, to remove background and noise effects from the raw image sequences, and alignment of each image using vertex mask generation; 1D transform features are then extracted and reduced by principal component analysis. Finally, these features are trained and tested using a Hidden Markov Model (HMM). The experimental evaluation on two public facial expression video datasets, Cohn-Kanade and AT&T, achieved recognition rates of 96.75% and 96.92%, respectively. These results show the superiority of the proposed approach over state-of-the-art methods.
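The PCA step can be sketched as projecting the feature vectors onto the top principal components obtained from an SVD of the centred data; this is a generic illustration, not the paper's exact pipeline:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors (rows of X) onto the top-k principal
    components before, e.g., HMM training."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal axes, ordered by singular value.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))  # 40 feature vectors of dimension 10
Z = pca_reduce(X, 3)
```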

A PERSONAL AUTHENTICATION FROM VIDEO USING HANDHELD CAMERA BY PARAMETRIC EIGENSPACE METHOD

  • Morizumi, Yusuke;Matsuo, Kenji;Kubota, Akira;Hatori, Yoshinori
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.628-631 / 2009
  • In this paper, we propose a new authentication method using video taken while moving a handheld camera in front of the face. The proposed method extracts individuality from the obtained image sequences using the parametric eigenspace scheme. Changes in facial appearance during authentication trials trace continuous tracks in a low-dimensional eigenspace. The similarity between these continuous tracks is calculated by DP matching to verify identities. Experimental results confirmed that different motions and persons change the shapes of the continuous tracks, so the proposed method can identify the person.

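The DP matching of continuous eigenspace tracks can be sketched as dynamic time warping between two point sequences, where a small accumulated distance indicates a matching motion; the paper's exact cost and path constraints may differ:

```python
import numpy as np

def dtw_distance(track_a, track_b):
    """Dynamic-programming (DTW) distance between two tracks given as
    (length, dim) arrays of eigenspace coordinates."""
    n, m = len(track_a), len(track_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(track_a[i - 1] - track_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

track = np.linspace(0.0, 1.0, 20)[:, None]   # a 1-D track of 20 points
same = dtw_distance(track, track)            # identical tracks
other = dtw_distance(track, track + 0.5)     # offset track
```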

The Acquisition of Spanish Clitic Pronouns as a Third Language: A Corpus-based Study

  • Lu, Hui-Chuan;Cheng, An Chung;Chu, Yu-Hsin
    • Asia Pacific Journal of Corpus Research / v.1 no.2 / pp.15-26 / 2020
  • This corpus-based study investigated third language acquisition by Taiwanese college students learning Spanish clitic pronouns at the beginning and intermediate levels. It examined the acquisition sequences of Spanish clitic pronouns by Chinese-speaking learners whose second language was English and third language was Spanish. The results indicated that indirect object pronouns (OP) preceded direct OP (case), first person preceded third person OP (person), masculine preceded feminine OP (gender), and animate preceded inanimate OP (animacy). The findings show patterns similar to those of previous studies on English-speaking learners of Spanish. Further comparisons of the target forms in Chinese, English, and Spanish suggested that L1 Chinese strongly influences L3 Spanish, which accounts for the challenges that Taiwanese learners face as they learn Spanish clitic pronouns in the beginning stage.