• Title/Summary/Keyword: Head and Face (머리와 얼굴)


A Head Gesture Recognition Method based on Eigenfaces using SOM and PRL (SOM과 PRL을 이용한 고유얼굴 기반의 머리동작 인식방법)

  • Lee, U-Jin;Gu, Ja-Yeong
    • The Transactions of the Korea Information Processing Society, v.7 no.3, pp.971-976, 2000
  • In this paper, a new method for head gesture recognition is proposed. At the first stage, face image data are transformed into low-dimensional vectors by principal component analysis (PCA), which exploits the high correlation between face pose images. Then a self-organizing map (SOM) is trained with the transformed face vectors, in such a way that nodes at similar locations respond to similar poses. The sequence of poses comprising each model gesture is passed through PCA and the SOM, and the result is stored in a database. At the recognition stage, any sequence of frames is passed through PCA and the SOM, and the result is compared with the model gestures stored in the database. To improve the robustness of classification, probabilistic relaxation labeling (PRL) is used, which exploits the contextual information embedded in adjacent poses.
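
The PCA (eigenfaces) stage described above can be sketched in a few lines of NumPy. This is only an illustrative reduction of flattened frames to k pose coefficients, not the paper's implementation, and the random array stands in for real face images:

```python
import numpy as np

def pca_project(face_vectors, k):
    """Project flattened face images onto the top-k principal components
    (the 'eigenfaces'), reducing each frame to a k-dimensional pose code."""
    mean = face_vectors.mean(axis=0)
    centered = face_vectors - mean
    # SVD of the centered data gives the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                      # (k, d) eigenfaces
    return centered @ components.T, components, mean

# Toy data: 20 "frames" of a 64-dimensional flattened face image.
rng = np.random.default_rng(0)
frames = rng.normal(size=(20, 64))
codes, components, mean = pca_project(frames, k=5)
print(codes.shape)   # (20, 5)
```

Each gesture then becomes a sequence of such low-dimensional codes, which is what the SOM is trained on.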


Markerless Image-to-Patient Registration Using Stereo Vision : Comparison of Registration Accuracy by Feature Selection Method and Location of Stereo Vision System (스테레오 비전을 이용한 마커리스 정합 : 특징점 추출 방법과 스테레오 비전의 위치에 따른 정합 정확도 평가)

  • Joo, Subin;Mun, Joung-Hwan;Shin, Ki-Young
    • Journal of the Institute of Electronics and Information Engineers, v.53 no.1, pp.118-125, 2016
  • This study evaluates the performance of an image-to-patient registration algorithm that uses stereo vision and CT images for surgical navigation of the facial region. The registration process consists of feature extraction, 3D coordinate calculation, and registration of the 3D coordinates to the 3D CT image. Of the five combinations generated from three facial feature extraction methods and three registration methods on the stereo vision images, this study identifies the one with the highest registration accuracy. In addition, registration accuracy was compared while varying the facial rotation angle. The experiments showed that when the facial rotation angle is within 20 degrees, registration using the Active Appearance Model and Pseudo Inverse Matching has the highest accuracy, and when the angle exceeds 20 degrees, registration using Speeded Up Robust Features and Iterative Closest Point has the highest accuracy. These results indicate that the Active Appearance Model and Pseudo Inverse Matching should be used to reduce registration error when the facial rotation angle is within 20 degrees, and Speeded Up Robust Features and Iterative Closest Point should be used when it exceeds 20 degrees.
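
The rigid alignment at the heart of such 3D-coordinate-to-CT registration (and of each Iterative Closest Point iteration once correspondences are fixed) can be sketched with the Kabsch algorithm. The point sets and the ground-truth transform below are synthetic stand-ins, not patient data:

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch alignment: the rotation R and translation t that map the
    matched 3-D points src onto dst in the least-squares sense."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(1)
pts = rng.normal(size=(30, 3))               # synthetic facial feature points
angle = 0.3                                   # known rotation about z
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_align(pts, moved)
err = np.abs(pts @ R.T + t - moved).max()     # recovered transform error
print(err < 1e-6)
```

In a full ICP loop this step alternates with re-finding closest-point correspondences until convergence.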

Interaction with Agents in the Virtual Space Combined by Recognition of Face Direction and Hand Gestures (얼굴 방향과 손 동작 인식을 통합한 가상 공간에 존재하는 Agent들과의 상호 작용)

  • Jo, Gang-Hyeon;Kim, Seong-Eun;Lee, In-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI, v.39 no.3, pp.62-78, 2002
  • In this paper, we describe a system that can interact with agents in a virtual space incorporated into the system. The system consists of an analysis subsystem that analyzes human gestures and an interaction subsystem that uses the analyzed information to interact with agents in the virtual space. The analysis subsystem extracts the head and hand regions from image sequences of an operator's continuous behavior captured with CCD cameras. In the interaction subsystem, we construct a virtual space containing an avatar that embodies the operator, an autonomous object (a puppy), and non-autonomous objects such as a table, a door, a window, and a ball. A recognized gesture is transmitted to the avatar in the virtual space, which then transitions to its next state according to a state transition diagram, represented as a graph in which each state is a node connected to others by links. In the virtual space, agents such as the avatar can open and close the window and the door, grab or move an object such as the ball, give commands to the puppy, and respond to the puppy's behavior.
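
A state transition diagram of the kind the abstract describes can be sketched as a lookup table mapping (current state, recognized gesture) to the next state. The state and gesture names below are hypothetical, not taken from the paper:

```python
# Hypothetical avatar states and gesture labels, sketched as a transition table.
transitions = {
    ("idle", "point_door"):          "opening_door",
    ("opening_door", "none"):        "idle",
    ("idle", "point_ball"):          "grabbing_ball",
    ("grabbing_ball", "move_hand"):  "moving_ball",
}

def step(state, gesture):
    """Advance the avatar's state machine; unknown pairs keep the state."""
    return transitions.get((state, gesture), state)

state = "idle"
for g in ["point_door", "none", "point_ball", "move_hand"]:
    state = step(state, g)
print(state)  # moving_ball
```

Representing the diagram as data rather than code makes it easy to extend with new agents or gestures.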

Efficient Object Selection Algorithm by Detection of Human Activity (행동 탐지 기반의 효율적인 객체 선택 알고리듬)

  • Park, Wang-Bae;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP, v.47 no.3, pp.61-69, 2010
  • This paper presents an efficient object selection algorithm based on detecting and analyzing human activity. Generally, when people point at something, they turn their face toward the target, so the face and the pointing hand tend to lie along a straight line. First, in order to detect moving objects in the input frames, we extract the objects of interest in real time using background subtraction. Whether the user is moving is then determined by principal component analysis over a designated time period. When the user is motionless, we estimate the indicated target from the vector running from the head to the hand. Through experiments using multiple views, we confirm that the proposed algorithm estimates the user's movement and indication efficiently.
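
The head-to-hand pointing idea can be sketched as casting a ray from the head through the hand and choosing the object with the smallest angular deviation from that ray. The coordinates and object names below are hypothetical:

```python
import numpy as np

def select_object(head, hand, objects):
    """Pick the object whose direction from the head best matches the
    head-to-hand pointing vector (largest cosine / smallest angle)."""
    ray = hand - head
    ray = ray / np.linalg.norm(ray)
    best, best_cos = None, -1.0
    for name, pos in objects.items():
        d = pos - head
        c = float(d @ ray / np.linalg.norm(d))
        if c > best_cos:
            best, best_cos = name, c
    return best

head = np.array([0.0, 1.7, 0.0])          # hypothetical head position (m)
hand = np.array([0.3, 1.4, 0.5])          # hand position while pointing
objects = {"lamp": np.array([2.0, 1.0, 0.0]),
           "cup":  np.array([0.9, 0.8, 1.5])}
print(select_object(head, hand, objects))  # cup
```

Here "cup" lies exactly along the head-to-hand ray, so it is selected over "lamp".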

Face Tracking Combining Active Contour Model and Color-Based Particle Filter (능동적 윤곽 모델과 색상 기반 파티클 필터를 결합한 얼굴 추적)

  • Kim, Jin-Yul;Jeong, Jae-Ki
    • The Journal of Korean Institute of Communications and Information Sciences, v.40 no.10, pp.2090-2101, 2015
  • We propose a robust tracking method that effectively combines the merits of the ACM (active contour model) and the color-based PF (particle filter). In the proposed method, the PF and the ACM track the color distribution and the contour of the target, respectively, and a decision stage merges the estimates from the two trackers to determine the position and scale of the target and to update the target model. By controlling the internal energy of the ACM based on the position and scale estimated by the PF tracker, we prevent the snake points from falsely converging to background clutter. We applied the proposed method to tracking a person's head in video and conducted computer experiments to analyze the errors of the estimated position and scale.
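
The particle filter half of the method can be sketched as a predict/update/resample cycle. This toy version tracks a 1-D position from a noisy measurement, with the Gaussian likelihood standing in for the color-histogram similarity used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def pf_step(particles, weights, measurement, motion_std=2.0, meas_std=5.0):
    """One predict/update/resample cycle of a particle filter: diffuse the
    particles, reweight by measurement likelihood, then resample."""
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    lik = np.exp(-0.5 * ((particles - measurement) / meas_std) ** 2)
    weights = weights * lik
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(0.0, 100.0, size=500)     # uninformed prior
weights = np.full(500, 1.0 / 500)
for z in [60, 61, 63, 64]:                        # target drifting right
    particles, weights = pf_step(particles, weights, z)
estimate = float(particles.mean())
print(estimate)
```

In the paper's setup this position/scale estimate would then constrain the ACM's internal energy so the snake stays near the PF's hypothesis.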

Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae;Choi, Dong-Geol;Kweon, In So
    • The Journal of Korea Robotics Society, v.12 no.3, pp.313-321, 2017
  • One of the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems is face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue. However, conventional methods have lacked accuracy, robustness, or processing speed in practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning using a small grayscale image. This network jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination change and large pose change. The proposed framework quantitatively and qualitatively outperforms the state-of-the-art method, with an average head pose error of less than 4.5° in real time.
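
The multi-task structure, one shared trunk feeding a face-detection head and a pose-regression head, can be sketched with a toy forward pass. The layer sizes and random weights are purely illustrative, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(x, params):
    """Toy multi-task forward pass: one shared hidden layer feeds both a
    face/non-face logit and a 3-angle head-pose regression head."""
    h = np.maximum(x @ params["W_shared"], 0.0)    # shared trunk (ReLU)
    face_logit = h @ params["w_det"]               # detection head
    pose = h @ params["W_pose"]                    # yaw, pitch, roll head
    return face_logit, pose

d, hdim = 64, 32                                   # hypothetical sizes
params = {"W_shared": rng.normal(size=(d, hdim)) * 0.1,
          "w_det":    rng.normal(size=hdim) * 0.1,
          "W_pose":   rng.normal(size=(hdim, 3)) * 0.1}
x = rng.normal(size=d)                             # a flattened grayscale patch
logit, pose = forward(x, params)
print(pose.shape)  # (3,)
```

Sharing the trunk is what lets detection and pose estimation run jointly at real-time cost.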

A Study of Measurement on the Head and Face for Korean Adults (한국 성인의 머리 및 얼굴부위 측정치에 관한 연구)

  • Yoon, Hoon-Yong;Jung, Suk-Gil
    • IE interfaces, v.15 no.2, pp.199-208, 2002
  • This study measured various dimensions of the head and face of Korean adults. Three hundred eighteen males and two hundred sixty females, aged 18 to 60, participated, and thirty-six dimensions were selected for measurement. Subjects were divided into three age groups for each sex: 18 to 29, 30 to 39, and 40 to 60. The data were analyzed with the SAS program to examine differences between the age groups and sexes, and the results were compared with Japanese and U.S. Army data. The results showed that 'ear length', 'bigonial breadth', and 'bitragion submandibular arc' increased with age (p < 0.01), while most other dimensions differed little between age groups. Males were significantly larger than females in every dimension. The comparison between Koreans and Japanese showed significant differences in many dimensions; for this reason, caution should be exercised in applying Japanese data to Koreans. Americans were significantly larger than Koreans in most dimensions, and Koreans were found to have a rounder face and a wider nose ridge than Americans. The results of this study can be used to design products related to the head and face.
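
A group comparison of the kind reported above (e.g. 'ear length' across age groups at p < 0.01) boils down to a two-sample test. This sketch computes Welch's t statistic on fabricated samples; the means, spreads, and group sizes are invented for illustration, not the study's data:

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(6)
# Hypothetical 'ear length' samples (mm) for a younger and an older group.
young = rng.normal(loc=62.0, scale=4.0, size=120)
old   = rng.normal(loc=66.0, scale=4.0, size=120)
t = welch_t(old, young)
print(t > 3.0)  # a large positive t: the older group is measurably larger
```

In practice one would convert t to a p-value (e.g. with scipy.stats) before claiming significance.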

Movement Detection Algorithm Using Virtual Skeleton Model (가상 모델을 이용한 움직임 추출 알고리즘)

  • Joo, Young-Hoon;Kim, Se-Jin
    • Journal of the Korean Institute of Intelligent Systems, v.18 no.6, pp.731-736, 2008
  • In this paper, we propose a movement detection algorithm using a virtual skeleton model. First, we eliminate error values using a conventional method based on the RGB color model and remove unnecessary values using the HSI color model. Second, we construct the virtual skeleton model from the skeleton information of 10 people. After matching this virtual model to the original image, we extract the actual head silhouette using the proposed circle searching method. Third, we extract the object using the mean-shift algorithm together with this head information. Finally, we validate the applicability of the proposed method through various experiments in complex environments.
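
The mean-shift step in the third stage can be sketched as mode seeking with a flat kernel: the window center is repeatedly moved to the mean of the points inside it. The 2-D point cloud below is a synthetic stand-in for pixels belonging to the tracked region:

```python
import numpy as np

def mean_shift(points, start, bandwidth=1.0, iters=50):
    """Mean-shift mode seeking: move the window center to the mean of the
    points within `bandwidth` until it stops moving."""
    center = np.asarray(start, dtype=float)
    for _ in range(iters):
        mask = np.linalg.norm(points - center, axis=1) < bandwidth
        new = points[mask].mean(axis=0)
        if np.allclose(new, center):
            break
        center = new
    return center

rng = np.random.default_rng(4)
# Synthetic "object pixels" clustered around (5, 5).
cluster = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(200, 2))
mode = mean_shift(cluster, start=[4.0, 4.0], bandwidth=2.0)
print(np.round(mode, 1))
```

Seeding the window from the head silhouette, as the paper does, keeps this local search anchored to the right person.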

Sonographic and Strain Elastographic Findings of a Clear Cell Hidradenoma that Looked Like an Epidermoid Tumor: A Case Report (표피 종양처럼 보이는 투명 세포 열선 종의 초음파 및 변형 탄성 소견: 증례 보고)

  • Jin Hee Kim;Hee Jin Park;Ji Na Kim
    • Journal of the Korean Society of Radiology, v.83 no.1, pp.194-198, 2022
  • Clear cell hidradenoma (CCH) is a rare tumor of the sweat glands of eccrine or apocrine differentiation. It can occur anywhere in the body, but common sites of involvement are the head, face, trunk, and extremities. Although several reports have described sonographic findings of CCH, only one study on the axilla mentioned its strain elastographic findings. Here, we present a case of CCH in the right calf with its sonographic and strain elastographic findings in a tumor that looked like an epidermoid tumor.

Discrimination between spontaneous and posed smile: Humans versus computers (자발적 웃음과 인위적 웃음 간의 구분: 사람 대 컴퓨터)

  • Eom, Jin-Sup;Oh, Hyeong-Seock;Park, Mi-Sook;Sohn, Jin-Hun
    • Science of Emotion and Sensibility, v.16 no.1, pp.95-106, 2013
  • This study compares the accuracy of humans and computer algorithms in discriminating spontaneous smiles from posed smiles. Subjects performed two tasks: judgment of single pictures and judgment by pair comparison. In the single-picture task, pictures of smiling facial expressions were presented one by one, and subjects judged whether each smile was spontaneous or posed. In the pair-comparison task, two kinds of smiles from the same person were presented simultaneously, and subjects selected the spontaneous one. For the discrimination algorithm, 8 kinds of facial features were used. Stepwise linear discriminant analysis (SLDA) was performed on approximately 50% of the pictures to calculate a discriminant function, and the remaining pictures were classified with it. In both the single-picture task and the pair comparison, the accuracy of SLDA was higher than that of the humans; none of the 20 subjects exceeded the accuracy of SLDA. The facial feature that contributed most to SLDA was the angle of the inner eye corner, which reflects the degree of eye openness and corresponds to AU 6 in Ekman's FACS system. The humans' lower accuracy in classifying the two kinds of smiles appears to stem from insufficient use of the information from the eyes.
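
The train-on-half, classify-the-rest procedure can be sketched with Fisher's two-class linear discriminant (the non-stepwise core of SLDA). The two "feature" clouds below are fabricated stand-ins for the facial measurements, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(5)

def fisher_lda(X0, X1):
    """Fisher's linear discriminant for two classes: the projection
    direction w and a midpoint threshold for labeling new samples."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix.
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    thresh = w @ (m0 + m1) / 2.0
    return w, thresh

# Hypothetical 2-feature stand-ins for AU-style facial measurements.
posed = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(60, 2))
spont = rng.normal(loc=[2.0, 1.5], scale=0.5, size=(60, 2))
w, thresh = fisher_lda(posed, spont)
preds = (np.vstack([posed, spont]) @ w > thresh).astype(int)
truth = np.array([0] * 60 + [1] * 60)
acc = float((preds == truth).mean())
print(acc > 0.9)  # well-separated toy classes classify almost perfectly
```

The stepwise part of SLDA would additionally add features one at a time, keeping only those that improve discrimination.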
