• Title/Abstract/Keywords: Head and face

Search results: 458 items

SOM과 PRL을 이용한 고유얼굴 기반의 머리동작 인식방법 (A Head Gesture Recognition Method based on Eigenfaces using SOM and PRL)

  • 이우진;구자영
    • 한국정보처리학회논문지
    • /
    • 제7권3호
    • /
    • pp.971-976
    • /
    • 2000
  • In this paper, a new method for head gesture recognition is proposed. At the first stage, face image data are transformed into low-dimensional vectors by principal component analysis (PCA), which exploits the high correlation between face pose images. Then a self-organizing map (SOM) is trained on the transformed face vectors in such a way that nodes at nearby locations respond to similar poses. The sequence of poses that comprises each model gesture is passed through PCA and the SOM, and the result is stored in a database. At the recognition stage, any sequence of frames is passed through PCA and the SOM, and the result is compared with the model gestures stored in the database. To improve the robustness of classification, probabilistic relaxation labeling (PRL) is used, which exploits the contextual information embedded in adjacent poses.

  • PDF
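
As a rough illustration of the pipeline described in the abstract above, the sketch below reduces face images with PCA and maps each pose onto a small self-organizing map. It is not the authors' code; the array sizes, grid size, and learning schedule are assumptions.

```python
# Illustrative sketch (not the authors' code): reduce face images with PCA,
# then map each pose to a node of a small self-organizing map (SOM).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.random((200, 64 * 64))          # stand-in for vectorized face images

pca = PCA(n_components=20)
codes = pca.fit_transform(faces)            # low-dimensional "eigenface" vectors

# Minimal SOM: a 6x6 grid of weight vectors trained by winner-take-all updates.
grid_h, grid_w, dim = 6, 6, codes.shape[1]
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

def best_matching_unit(x):
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

for t, x in enumerate(codes[rng.permutation(len(codes))]):
    lr = 0.5 * np.exp(-t / len(codes))                 # decaying learning rate
    sigma = 2.0 * np.exp(-t / len(codes))              # decaying neighborhood width
    bmu = np.array(best_matching_unit(x))
    h = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)       # pull BMU and neighbors toward x

# A gesture becomes a sequence of winning SOM nodes, ready for matching/PRL.
node_sequence = [best_matching_unit(f) for f in codes[:10]]
print(node_sequence)
```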

A case report of congenitally abnormal rabbit-headed stillbirth Najdi lamb

  • Elsokary, Mohamed M.M.;Shehata, Seham F.;Saadedin, Islam M.
    • 한국동물생명공학회지
    • /
    • 제35권3호
    • /
    • pp.265-267
    • /
    • 2020
  • Congenital head anomalies most often describe defects of the eyes, mouth, nose, skull, and/or brain. Faulty embryogenesis is most likely associated with abnormal genetic or epigenetic causes during pregnancy and eventually leads to congenital anomalies. Rabbit-headed lamb (RH) is a disorder in sheep breeding characterized by deformities of the head and face. In the current case, a dead, day-old crossbred white Najdi lamb with a deformed face and head is reported. External and physical examination revealed a deformed skull and facial region with a single (unilateral) eye, a fused mouth, a pig-like nose, and a patent skull with the brain protruding from the left eye orbit. Additionally, the lamb was very skinny with unusually long extremities. This is the first report of this syndrome describing such deformities in a stillborn Najdi-breed lamb.

딥러닝 기반의 운전자의 안전/위험 상태 인지 시스템 개발 (Development of Driver's Safety/Danger Status Cognitive Assistance System Based on Deep Learning)

  • 미아오 쉬;이현순;강보영
    • 로봇학회논문지
    • /
    • 제13권1호
    • /
    • pp.38-44
    • /
    • 2018
  • In this paper, we propose an Intelligent Driver Assistance System (I-DAS) for driver safety. The proposed system recognizes safe and dangerous states by analyzing blind spots that the driver cannot see when the head turns at a large angle from the front. Most studies use image pre-processing such as face detection to collect information about the driver's head movement. This not only increases the computational complexity of the system but also decreases recognition accuracy, because the image-processing stage does not use the entire image of the driver's upper body in the driver's seat when the head turns far from the front. The proposed system replaces the face-detection stage with a convolutional neural network and uses the entire image of the driver's upper body. Therefore, high accuracy can be maintained without image pre-processing even when the driver turns the head at a large angle from the frontal gaze position. Experimental results show that the proposed system can accurately recognize dangerous conditions in the blind zone during driving and achieves 95% recognition accuracy for five drivers.
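
The sketch below illustrates the general idea of classifying the driver's whole upper-body image with a convolutional network instead of running face detection first. It is only a minimal stand-in, not the paper's architecture; the input resolution, channel counts, and class labels are assumptions.

```python
# Minimal sketch: a small CNN that classifies a full upper-body image of the
# driver into safe/danger classes without a separate face-detection step.
import torch
import torch.nn as nn

class DriverStateCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),       # e.g. 0 = safe, 1 = danger
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = DriverStateCNN()
dummy_batch = torch.randn(4, 3, 128, 128)      # stand-in for upper-body frames
logits = model(dummy_batch)                    # shape: (4, 2)
print(logits.shape)
```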

얼굴 방향에 기반을 둔 컴퓨터 화면 응시점 추적 (A Gaze Tracking based on the Head Pose in Computer Monitor)

  • 오승환;이희영
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2002년도 하계종합학술대회 논문집(3)
    • /
    • pp.227-230
    • /
    • 2002
  • In this paper, we concentrate on the overall direction of the gaze based on head pose for human-computer interaction. To determine the gaze direction of a user in an image, it is important to extract the facial features accurately. To do this, we binarize the input image and search for the two eyes and the mouth using the similarity of each block (aspect ratio, size, and average gray value) and the geometric information of the face in the binarized image. To determine the head orientation, we create an imaginary plane on the lines formed by the features of the real face and the pinhole of the camera; we call it the virtual facial plane. The position of the virtual facial plane is estimated from the facial features projected onto the image plane, and the gaze direction is found from the surface normal vector of the virtual facial plane. This study, which uses a common PC camera, will contribute to the practical use of gaze-tracking technology.

  • PDF
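
A minimal sketch of the underlying geometry follows: three facial feature points define the virtual facial plane, its surface normal gives the gaze direction, and intersecting that ray with the monitor plane yields the gaze point. The coordinates and the monitor plane z = 0 are assumptions, not values from the paper.

```python
# Rough sketch of the geometric idea: two eyes and the mouth define a plane,
# and the plane's surface normal gives the head/gaze direction.
import numpy as np

left_eye  = np.array([-3.0,  1.0, 50.0])   # assumed feature positions (cm)
right_eye = np.array([ 3.0,  1.0, 50.0])
mouth     = np.array([ 0.0, -4.0, 51.0])

v1 = right_eye - left_eye
v2 = mouth - left_eye
normal = np.cross(v1, v2)
gaze_dir = normal / np.linalg.norm(normal)     # unit surface normal of the plane

# Intersect the gaze ray (from the face centroid) with the monitor plane z = 0.
origin = (left_eye + right_eye + mouth) / 3.0
t = -origin[2] / gaze_dir[2]
gaze_point = origin + t * gaze_dir
print("gaze point on screen plane:", gaze_point[:2])
```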

Human Head Mouse System Based on Facial Gesture Recognition

  • Wei, Li;Lee, Eung-Joo
    • 한국멀티미디어학회논문지
    • /
    • 제10권12호
    • /
    • pp.1591-1600
    • /
    • 2007
  • Camera position information derived from a 2D face image is very important for synchronizing a virtual 3D face model with the real face at the correct viewpoint, and it is also important for other uses such as human-computer interfaces (face mouse) and automatic camera control. We present an algorithm that detects the human face region and the mouth based on the characteristic color features of the face and mouth in the $YC_bC_r$ color space. The algorithm constructs a mouth feature image from the $C_b$ and $C_r$ values and uses a pattern-based method to detect the mouth position. We then use the geometrical relationship between the mouth position and the face side boundary to determine the camera position. Experimental results demonstrate the validity of the proposed algorithm, and the correct determination rate is high enough to apply it in practice.

  • PDF
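
The abstract does not give the exact mouth-map formula, so the sketch below uses a commonly cited Cr-based mouth map in the $YC_bC_r$ space as a stand-in, showing how a mouth candidate could be located from the $C_b$ and $C_r$ channels. The file name and threshold are assumptions.

```python
# Illustrative sketch only: a Cr-based mouth map (not necessarily the paper's
# formula) highlights mouth pixels, which have high Cr and a high Cr/Cb ratio.
import cv2
import numpy as np

img = cv2.imread("face.jpg")                       # hypothetical input image
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
_, cr, cb = cv2.split(ycrcb)

cr = cr.astype(np.float64) / 255.0
cb = cb.astype(np.float64) / 255.0

eta = 0.95 * np.mean(cr ** 2) / np.mean(cr / (cb + 1e-6))
mouth_map = (cr ** 2) * (cr ** 2 - eta * (cr / (cb + 1e-6))) ** 2

# Threshold the map and take the largest blob as the mouth candidate.
mask = (mouth_map > 0.5 * mouth_map.max()).astype(np.uint8)
num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
if num > 1:
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    x, y, w, h = stats[largest, :4]
    print("mouth candidate box:", (x, y, w, h))
```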

서베일런스에서 Adaptive Boosting을 이용한 실시간 헤드 트래킹 (Real-Time Head Tracking using Adaptive Boosting in Surveillance)

  • 강성관;이정현
    • 디지털융복합연구
    • /
    • 제11권2호
    • /
    • pp.243-248
    • /
    • 2013
  • In this paper, we propose an effective Adaptive Boosting based method for tracking a person's head against a complex background. A single feature extraction method is not sufficient to model the human head, so this study combines several feature extraction methods to achieve accurate head detection. Features of the head image are extracted using sub-regions and the Haar wavelet transform. Sub-regions capture the local characteristics of the head, while the Haar wavelet transform captures the frequency characteristics of the face, so extracting features with both enables effective modeling. To track the human head in real-time input video, the proposed method trains three types of Haar wavelet features with the AdaBoost algorithm and then uses the results. The original AdaBoost algorithm has the drawbacks of a very long training time and the need to retrain whenever the training data change. To overcome these drawbacks, the proposed method introduces an efficient cascade-based AdaBoost training scheme. This scheme reduces the training time for head images and copes efficiently with changes in the training data: it separates the training process into levels and repeatedly passes the training samples with high importance to the next stage. The proposed method produced a classifier with excellent performance using little training time and few training data. Moreover, when applied to real-time video containing various head data, it accurately detected and tracked a variety of heads.
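
As a loose illustration of the cascade idea in the abstract (not the authors' implementation), the sketch below chains a few AdaBoost stages so that each later stage trains only on the samples the earlier stages accepted. The feature vectors are random stand-ins for the sub-region and Haar-wavelet features.

```python
# Sketch of a cascade of AdaBoost classifiers over precomputed head features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 64))                 # stand-in head/non-head feature vectors
y = rng.integers(0, 2, size=1000)          # 1 = head, 0 = background

stages = []
X_stage, y_stage = X, y
for level in range(3):                     # a few cascade levels
    if len(np.unique(y_stage)) < 2:        # stop if only one class remains
        break
    clf = AdaBoostClassifier(n_estimators=50)   # default weak learner: decision stumps
    clf.fit(X_stage, y_stage)
    stages.append(clf)
    # Keep only samples accepted by this stage for the next, harder stage.
    keep = clf.predict(X_stage) == 1
    keep |= y_stage == 1                   # never drop positive examples
    X_stage, y_stage = X_stage[keep], y_stage[keep]

def cascade_predict(x):
    """A window is a head only if every stage accepts it."""
    return all(stage.predict(x.reshape(1, -1))[0] == 1 for stage in stages)

print(cascade_predict(X[0]))
```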

한국인(韓國人) 남(男).여(女) 50-60대(代)의 사상체질별(四象體質別) 안면형태(顔面形態)에 관(關)한 표준화(標準化) 연구(硏究) (Morphological standardization research of head and face on the 50's and 60's in Korean according to Sasang Constitution)

  • 이수경;이의주;고병희;송일병;윤종현
    • 사상체질의학회지
    • /
    • 제12권2호
    • /
    • pp.123-131
    • /
    • 2000
  • 1. Background and Purpose: Human faces change as people grow older, so these changes must be considered when the Sasang constitution is diagnosed through analysis of facial appearance. As part of standardization research on facial morphology, this study examined Korean men and women in their 50s and 60s according to Sasang constitution. 2. Objectives: The subjects were selected from patients who had already been diagnosed with a Sasang constitution at the Department of Sasang Constitutional Medicine of Kyunghee Oriental Medical Center. They comprised 74 men and 73 women in their 50s and 60s; the all-age comparison group comprised 182 men and 180 women diagnosed at the same department during the same period. 3. Method: Front and lateral photographs of the subjects were taken with a digital camera, and 200 measurements were obtained with a facial measurement program. The measurements of the 50s-60s group were compared among the three constitutional groups, and the 50s-60s group was also compared with the all-age group for each constitution. 4. Results: In the men, 17 measurements differed among the constitutional groups across all ages, and 6 differed in the 50s-60s group. In the women, 52 measurements differed across all ages, and 33 differed in the 50s-60s group. 5. Conclusion: (1) Among men in their 50s and 60s, Taeumin showed a wide bigonial breadth, Soyangin a long brow, and Soumin large eyes. (2) Among women in their 50s and 60s, Taeumin showed the greatest facial length, facial width, and metopion head length; Soyangin showed a long metopion head length and a long nose; in Soumin, the ratio of the brow to the face did not differ from the other constitutions and the metopion head length was short. (3) More measurements differed across all ages than in the 50s and 60s, indicating that the constitutional differences decrease with age, especially in the 50s and 60s.

  • PDF
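
The abstract counts how many of the roughly 200 facial measurements differ among the three constitutional groups but does not name the statistical test; the sketch below shows one plausible way to produce such counts, using a per-measurement one-way ANOVA on made-up data purely as an assumption.

```python
# Hypothetical per-measurement comparison across the three constitutional groups.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
groups = {"Taeumin":  rng.normal(0, 1, (30, 200)),   # subjects x measurements
          "Soyangin": rng.normal(0, 1, (25, 200)),
          "Soumin":   rng.normal(0, 1, (28, 200))}

significant = 0
for m in range(200):
    _, p = f_oneway(*(g[:, m] for g in groups.values()))
    if p < 0.05:
        significant += 1
print("measurements differing among constitutions:", significant)
```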

비거리 향상을 위한 드라이버 헤드의 공기역학적 형상연구 (A Study on the Aerodynamic Shape of a Driver Head for Improving Driving Distance)

  • 김태우;이영준
    • EDISON SW 활용 경진대회 논문집
    • /
    • 제5회(2016년)
    • /
    • pp.598-603
    • /
    • 2016
  • Modern sports have grown alongside advances in science and technology, and golf has likewise steadily increased driving distance through progress in materials and aerodynamics. Until now, however, research on driving distance has concentrated on changes in the materials of golf balls and clubs and on the aerodynamics of dimpled golf balls; only recently has research on the aerodynamic shape of the club head become active. In this study, we set up a simplified two-dimensional club head shape for the driver, the club that produces the longest driving distance, and applied several drag-reduction approaches to find the head shape with minimum drag. The results show that a club head with a loft of $10.2^{\circ}$ achieves the lowest drag when the chord length is 3.2 times the face height and the trailing edge lies below the face center by 10% of the total face height. This shape yields a swing-speed difference of about 2 mph relative to the conventional shape, which corresponds to a driving-distance difference of about 5 yards.

  • PDF
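
Only the ratios reported above (chord = 3.2 × face height, trailing edge 10% of the face height below the face center) come from the abstract; the short calculation below turns them into absolute dimensions for an assumed face height, just to make the geometry concrete.

```python
# Simple arithmetic on the reported optimum geometry; the face height is a
# made-up example value, only the ratios come from the abstract.
face_height_mm = 56.0                            # assumed driver face height
chord_mm = 3.2 * face_height_mm                  # chord = 3.2 x face height
trailing_edge_drop_mm = 0.10 * face_height_mm    # 10% of face height below center

print(f"chord length       : {chord_mm:.1f} mm")
print(f"trailing-edge drop : {trailing_edge_drop_mm:.1f} mm")
# Reported effect: ~2 mph higher swing speed, ~5 yards more carry.
```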

호흡보호구 선정을 위한 3차원 머리 인체측정학적 데이터의 분석 (An Analysis of Three-Dimensional Head Anthropometric Data to Select Respirators for Korean Users)

  • 박정근;김세동;조현민
    • 한국산업보건학회지
    • /
    • 제31권4호
    • /
    • pp.521-530
    • /
    • 2021
  • Objectives: The aim was to examine and explore the elements of the Size Korea 6th 3D head anthropometric database and to provide basic information for the selection of respirators in Korea. Methods: This was a pilot study for the first year of a two-year project initiated at KOSHA in 2021. 3D head dimension data were obtained from the Size Korea Center, which manages the Size Korea 6th 3D national anthropometry survey databases. The 3D head dimension data, including 45 dimensions, were used in line with ISO standards (e.g., ISO/TS 16976-2) for examination, comparison, statistical analysis, etc. Results: A total of 3,088 subjects were finally included in this study. The main features were as follows: male subjects accounted for 52.5%; the largest age group was 15-29 years at 36.7%; the unhealthy weight group based on BMI was 31.7%; and the survey area was the capital region. For the 6th 3D head dimension data with 45 items, the means and standard deviations for 'Face length' were 115.9±7.5 mm for males and 107.3±6.9 mm for females, while those for 'Face width' were not available since there was no such item in the data. Numerous findings were discussed accordingly. Conclusions: This study showed that the 6th 3D head anthropometric data likely require the following improvements: standardization of Korean and English terms; addition of head dimension items missing from the Size Korea survey; and better reliability and generalizability of the subject sample. The results can be used for further studies or for improving respirator selection in Korea.
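
The sketch below shows the kind of by-sex summary reported in the abstract (mean ± standard deviation of face length); the CSV file name and column names are assumptions, not the actual Size Korea schema.

```python
# Hypothetical summary of a head dimension by sex from an exported data file.
import pandas as pd

df = pd.read_csv("size_korea_6th_head.csv")        # hypothetical export of the 3D head data

summary = (df.groupby("sex")["face_length_mm"]
             .agg(["count", "mean", "std"])
             .round(1))
print(summary)
# Expected pattern from the abstract: roughly 115.9 ± 7.5 mm for males
# and 107.3 ± 6.9 mm for females.
```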

RNN을 이용한 Expressive Talking Head from Speech의 합성 (Synthesis of Expressive Talking Heads from Speech with Recurrent Neural Network)

  • 사쿠라이 류헤이;심바 타이키;야마조에 히로타케;이주호
    • 로봇학회논문지
    • /
    • 제13권1호
    • /
    • pp.16-25
    • /
    • 2018
  • A talking head (TH) is a speaking-face animation generated from text and voice input. In this paper, we propose a method for generating a TH with facial expression and intonation from speech input alone. The problem of generating a TH from speech can be regarded as a regression from the acoustic feature sequence to the facial code sequence, a low-dimensional vector representation that can efficiently encode and decode a face image. This regression was modeled with a bidirectional RNN and trained using the SAVEE database of frontal utterance face animations as training data. The proposed method can generate a TH with facial expression and intonation using acoustic features such as MFCC, the dynamic elements of MFCC, energy, and F0. According to the experiments, the configuration with BLSTM layers as the first and second layers of the bidirectional RNN predicted the face code best. For evaluation, a questionnaire survey was conducted with 62 persons who watched TH animations generated by the proposed method and by a previous method. As a result, 77% of the respondents answered that the TH generated by the proposed method matched the speech well.
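
As a minimal sketch of the regression described above, the code below maps a sequence of acoustic features to a sequence of face codes with a two-layer bidirectional LSTM. The feature and code dimensions, hidden size, and training step are assumptions, not the paper's configuration.

```python
# Minimal sketch: a bidirectional LSTM regresses acoustic features
# (e.g. MFCC, delta-MFCC, energy, F0) to a sequence of face codes.
import torch
import torch.nn as nn

class Speech2FaceCode(nn.Module):
    def __init__(self, acoustic_dim: int = 28, face_code_dim: int = 32, hidden: int = 128):
        super().__init__()
        self.blstm = nn.LSTM(acoustic_dim, hidden, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, face_code_dim)   # 2x for the two directions

    def forward(self, acoustic_seq: torch.Tensor) -> torch.Tensor:
        out, _ = self.blstm(acoustic_seq)       # (batch, frames, 2*hidden)
        return self.proj(out)                   # (batch, frames, face_code_dim)

model = Speech2FaceCode()
dummy_speech = torch.randn(2, 100, 28)          # 2 utterances, 100 frames each
face_codes = model(dummy_speech)
loss = nn.functional.mse_loss(face_codes, torch.randn_like(face_codes))
loss.backward()                                  # standard regression training step
print(face_codes.shape)
```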