• Title/Summary/Keyword: Facial capture

Search Result 64

A Study on the Correction of Face Motion Recognition Data Using Kinect Method (키넥트 방식을 활용한 얼굴모션인식 데이터 제어에 관한 연구)

  • Lee, Junsang;Park, Junhong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2019.05a / pp.513-515 / 2019
  • Techniques that recognize depth values using the Kinect infrared projector continue to evolve, and techniques for tracking human movement have progressed from marker-based to markerless methods. Facial motion capture with Kinect, however, suffers from limited precision, and controlling facial gestures and movements in real time still requires considerable research. This paper therefore proposes a technique for creating natural 3D image content by studying how to apply and control correction techniques on face recognition data extracted with the Kinect infrared method.
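
As a rough illustration of the kind of correction this abstract targets, the sketch below applies a simple exponential moving average to a stream of noisy facial landmark positions. The landmark array shape and the smoothing factor are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def smooth_landmarks(frames, alpha=0.3):
    """Exponentially smooth a sequence of facial landmark frames.

    frames: array of shape (num_frames, num_landmarks, 3) holding noisy
            (x, y, z) positions, e.g. as tracked by a Kinect sensor.
    alpha:  smoothing factor; smaller values suppress jitter more
            but add latency.
    """
    frames = np.asarray(frames, dtype=float)
    smoothed = np.empty_like(frames)
    smoothed[0] = frames[0]
    for t in range(1, len(frames)):
        smoothed[t] = alpha * frames[t] + (1.0 - alpha) * smoothed[t - 1]
    return smoothed

# Example: 100 frames of 121 landmarks with synthetic jitter.
noisy = np.random.normal(0.0, 0.002, size=(100, 121, 3))
clean = smooth_landmarks(noisy, alpha=0.25)
```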

Enhancing the performance of the facial keypoint detection model by improving the quality of low-resolution facial images (저화질 안면 이미지의 화질 개선를 통한 안면 특징점 검출 모델의 성능 향상)

  • KyoungOok Lee;Yejin Lee;Jonghyuk Park
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.171-187 / 2023
  • When a person's face is recognized through a recording device such as a low-pixel surveillance camera, it is difficult to capture the face because of the low image quality. In situations where a face cannot be recognized, problems such as failing to identify a criminal suspect or a missing person may occur. Existing studies on face recognition used refined datasets, so performance could not be measured in diverse environments. To address poor face recognition performance on low-quality images, this paper proposes a method that first improves the quality of low-quality facial images captured in various environments and then uses the resulting high-quality images to improve facial keypoint detection performance. To confirm the practical applicability of the proposed architecture, an experiment was conducted on a dataset in which people appear relatively small within the frame. A facial image dataset that includes mask-wearing situations was also chosen to explore extension to real-world problems. Measuring keypoint detection performance after image quality improvement confirmed that the face detection rate increased by an average of 3.47 times for images without a mask and 9.92 times for images with a mask. The RMSE of the facial keypoints decreased by an average of 8.49 times with a mask and 2.02 times without a mask. The applicability of the proposed method was therefore verified by increasing the recognition rate for facial images captured at low quality through image quality improvement.
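
The pipeline described above (upscale a low-quality face image, then detect keypoints and score them with RMSE) can be sketched as follows. The EDSR model file, the `detect_keypoints` helper, and the ground-truth array are placeholders for whichever super-resolution and keypoint-detection models are actually used, not the models from the paper.

```python
import cv2
import numpy as np

# Super-resolution step (requires opencv-contrib and a pretrained EDSR model file).
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")        # placeholder path to a pretrained model
sr.setModel("edsr", 4)            # 4x upscaling

low_res = cv2.imread("face_low_res.png")   # placeholder input image
high_res = sr.upsample(low_res)            # improved-quality image fed to the detector

def keypoint_rmse(pred, gt):
    """RMSE over facial keypoints, each given as (x, y) coordinates."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return float(np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1))))

# detect_keypoints() stands in for any facial keypoint detector:
# rmse = keypoint_rmse(detect_keypoints(high_res), ground_truth_keypoints)
```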

Driver's Status Recognition Using Multiple Wearable Sensors (다중 웨어러블 센서를 활용한 운전자 상태 인식)

  • Shin, Euiseob;Kim, Myong-Guk;Lee, Changook;Kang, Hang-Bong
    • KIPS Transactions on Computer and Communication Systems / v.6 no.6 / pp.271-280 / 2017
  • In this paper, we propose a new safety system composed of wearable devices, a driver's seat belt, and an integrating controller. The wearable device and the seat belt capture the driver's biological information, while the integrating controller analyzes the captured signals to warn the driver or to directly control the car according to the driver's status. Previous driver-safety studies that captured physiological signals and facial information from the driver's seat, the steering wheel, or a facial camera had difficulty gathering accurate, continuous signals because the sensors required the driver to maintain an upright posture. By using wearable sensors, the proposed system obtains continuous and highly accurate signals compared with previous research. The wearable apparatus measures heart rate, skin conductivity, and skin temperature, and applies filters to eliminate the noise generated by the automobile; its acceleration and gyro sensors further reduce measurement errors. Based on the collected bio-signals, criteria for identifying the driver's condition are presented. An accredited certification body verified that the devices have medical-grade accuracy, and both laboratory tests and real-vehicle tests demonstrate that the proposed system measures the driver's condition well.
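
A minimal sketch of the signal-processing idea in this abstract: filter the raw wearable signals with a short averaging window and apply simple threshold rules to flag a possibly drowsy or stressed driver. The window size and thresholds are illustrative assumptions, not the criteria established in the paper.

```python
import numpy as np

def filtered_latest(signal, window=5):
    """Mean of the most recent samples; a crude low-pass filter to
    suppress vehicle-induced noise before applying the rules."""
    s = np.asarray(signal, dtype=float)
    return float(s[-window:].mean())

def classify_driver_state(heart_rate, skin_conductance):
    """Toy rule-based status check on filtered wearable signals."""
    hr = filtered_latest(heart_rate)        # beats per minute
    sc = filtered_latest(skin_conductance)  # microsiemens
    if hr < 55 and sc < 2.0:                # low arousal: possible drowsiness
        return "drowsy"
    if hr > 100 and sc > 10.0:              # high arousal: possible stress
        return "stressed"
    return "normal"

# Falling heart rate and skin conductance -> flagged as "drowsy".
print(classify_driver_state([60, 58, 54, 52, 50, 48],
                            [3.0, 2.5, 2.0, 1.8, 1.7, 1.6]))
```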

Microdroplet Impact Dynamics at Very High Velocity on Face Masks for COVID-19 Protection (코로나-19 보호용 페이스 마스크에서의 액적 고속 충돌 거동)

  • Choi, Jaewon;Lee, Dongho;Eo, Jisu;Lee, Dong-Geun;Kang, Jeon-Woong;Ji, Inseo;Kim, Taeyung;Hong, Jiwoo
    • Korean Chemical Engineering Research / v.60 no.2 / pp.282-288 / 2022
  • Facial masks have become indispensable in daily life for preventing infection and transmission through respiratory droplets during the coronavirus pandemic. To understand how effectively two different types of masks (a KF-94 mask and a dental mask) block respiratory droplets, i) we first analyze the wettability characteristics (e.g., contact angle and contact angle hysteresis) of the filters that constitute each mask, and ii) we then observe the dynamic behavior of microdroplets impacting the filter surfaces at high velocity. The filters are found to exhibit different wetting properties (i.e., hydrophobicity or hydrophilicity) depending on their constituent materials and pore sizes. In addition, the pneumatic conditions for stably and uniformly dispensing microdroplets of a given volume, and the impact behaviors associated with changes in impact velocity and filter type, are systematically explored. Three distinct outcomes after droplet impact (no penetration, capture, and penetration) are observed depending on the filter type and the droplet impact velocity. The present experimental results not only provide useful information for designing face masks that prevent the transmission of infectious respiratory diseases, but are also helpful for academic research on droplet impact on various porous surfaces.
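
The abstract does not state the governing dimensionless groups, but droplet impact regimes such as capture versus penetration are commonly characterized by the Weber number We = ρ v² D / σ. The sketch below evaluates it for an illustrative respiratory-scale droplet; the numbers are assumptions, not the paper's measured conditions.

```python
# Weber number for a water microdroplet impacting a mask filter.
rho = 998.0        # water density, kg/m^3
sigma = 0.072      # water surface tension, N/m
diameter = 50e-6   # droplet diameter, m (50 micrometers, illustrative)
velocity = 10.0    # impact velocity, m/s (illustrative high-speed impact)

weber = rho * velocity**2 * diameter / sigma
print(f"We = {weber:.1f}")  # larger We favors penetration over capture
```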

Posture features and emotion predictive models for affective postures recognition (감정 자세 인식을 위한 자세특징과 감정예측 모델)

  • Kim, Jin-Ok
    • Journal of Internet Computing and Services / v.12 no.6 / pp.83-94 / 2011
  • A main research issue in affective computing is giving a machine the ability to recognize a person's emotion and to react to it properly. Efforts in that direction have mainly focused on facial and vocal cues; postures have recently been considered as well. This paper aims to discriminate emotions from posture by identifying and measuring the saliency of posture features that play a role in affective expression. To do so, affective postures are first collected from human subjects using a motion capture system, and the emotional content of each posture is described with spatial features. Standard statistical techniques verify that there is a statistically significant correlation between the emotion intended by the acting subjects and the emotion perceived by the observers. Discriminant analysis is used to build affective posture predictive models and to measure the saliency of the proposed posture features in discriminating between six basic emotional states. The proposed features and models are evaluated using the correlation between the actor and observer posture sets. Quantitative experimental results show that the proposed feature set discriminates well between emotions and that the resulting predictive models perform well.
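
A minimal sketch of the analysis strategy described above: fit a linear discriminant model to posture feature vectors labeled with one of six basic emotions, and read feature saliency from the model coefficients. The feature dimensionality and the synthetic data are placeholders; the paper's actual spatial posture features and motion capture data are not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features = 180, 24          # e.g. joint angles/distances per posture
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 6, size=n_samples)   # six basic emotional states

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())

# The absolute values of the LDA coefficients give one rough measure of
# how salient each posture feature is for discriminating the emotions.
lda.fit(X, y)
saliency = np.abs(lda.coef_).mean(axis=0)
```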

Development of a Serious Game using EEG Monitor and Kinect (뇌파측정기와 키넥트를 이용한 기능성 게임 개발)

  • Jung, Sang-Hyub;Han, Seung-Wan;Kim, Hyo-Chan;Kim, Ki-Nam;Song, Min-Sun;Lee, Kang-Hee
    • Journal of Korea Game Society / v.15 no.4 / pp.189-198 / 2015
  • This paper describes a serious game controlled by EEG and motion capture. The game was developed for two competing players, and its method is as follows. One player uses a control interface based on EEG signals, on the premise that the player's brain activity depicts the player's emotion and intensity throughout play. The other player uses a control interface based on Kinect motion capture, which tracks the player's vertical and lateral movements as well as a running state. The game displays the first player's EEG as a real-time graphic along the map on the game screen, so the player can pace himself based on this visualization of his brain activity; this leads to higher concentration throughout the game and a better score. In addition, the second player can improve his physical abilities, since the game actions are based on the player's real movements.

On Parameterizing of Human Expression Using ICA (독립 요소 분석을 이용한 얼굴 표정의 매개변수화)

  • Song, Ji-Hey;Shin, Hyun-Joon
    • Journal of the Korea Computer Graphics Society / v.15 no.1 / pp.7-15 / 2009
  • In this paper, a novel framework that synthesizes and clones facial expressions in parameter space is presented. To overcome the difficulty of manipulating face geometry models with high degrees of freedom, many parameterization methods have been introduced. Here, a data-driven parameterization method is proposed that represents a variety of expressions with a small set of fundamental independent movements based on the ICA technique. The face deformation induced by the parameters is also learned from data to capture the nonlinearity of facial movements. With this parameterization, one can control the expression of an animated character's face through the parameters. By separating the parameterization and the deformation learning process, we believe this framework can be adopted for a variety of applications, including expression synthesis and cloning. The experimental results demonstrate the efficient production of realistic expressions using the proposed method.
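
As a rough illustration of an ICA-based parameterization, the sketch below extracts a small set of independent components from per-frame face vertex displacements and reconstructs a frame from its parameter vector. The data shapes and the component count are assumptions, not the paper's setup, and the nonlinear deformation learning step is omitted.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy motion data: 300 frames of a face mesh with 500 vertices (x, y, z),
# flattened to one displacement vector per frame.
rng = np.random.default_rng(1)
frames = rng.normal(size=(300, 500 * 3))

ica = FastICA(n_components=8, random_state=1)   # 8 independent "movement" parameters
params = ica.fit_transform(frames)              # per-frame parameter values

# Reconstruct one frame's deformation from its parameter vector;
# editing entries of params[i] corresponds to controlling the expression.
reconstructed = ica.inverse_transform(params[:1])
```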

Facial Animation Generation by Korean Text Input (한글 문자 입력에 따른 얼굴 에니메이션)

  • Kim, Tae-Eun;Park, You-Shin
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.4 no.2 / pp.116-122 / 2009
  • In this paper, we propose a new method that generates mouth-shape trajectories for the characters entered by the user. The method is based on basic syllables and is well suited to mouth-shape generation. We examine the principles by which Korean syllables are formed, group syllables whose mouth shapes are similar, and select representative basic syllables. We then consider the articulation of each phoneme, create a new mouth-shape trajectory, and apply it to the face of a 3D avatar.
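
The Korean-specific part of the method, decomposing an input syllable to decide its mouth shape, can be sketched with standard Hangul Unicode arithmetic. The vowel-to-mouth-shape grouping below is a simplified illustration, not the paper's actual basic-syllable table.

```python
SYLLABLE_BASE = 0xAC00   # first precomposed Hangul syllable
PER_CHOSEONG = 588       # 21 vowels * 28 finals per initial consonant
PER_JUNGSEONG = 28       # 28 finals (including "none") per vowel

VOWELS = ["ㅏ","ㅐ","ㅑ","ㅒ","ㅓ","ㅔ","ㅕ","ㅖ","ㅗ","ㅘ","ㅙ",
          "ㅚ","ㅛ","ㅜ","ㅝ","ㅞ","ㅟ","ㅠ","ㅡ","ㅢ","ㅣ"]

# Simplified grouping of vowels into rough mouth shapes (illustrative only).
MOUTH_SHAPE = {"ㅏ": "open", "ㅓ": "open", "ㅗ": "round", "ㅜ": "round",
               "ㅡ": "spread", "ㅣ": "spread"}

def syllable_to_mouth_shape(syllable):
    """Decompose a Hangul syllable and map its vowel to a mouth shape."""
    code = ord(syllable) - SYLLABLE_BASE
    if not 0 <= code < 11172:
        return None                          # not a precomposed Hangul syllable
    vowel = VOWELS[(code % PER_CHOSEONG) // PER_JUNGSEONG]
    return MOUTH_SHAPE.get(vowel, "neutral")

print([syllable_to_mouth_shape(s) for s in "안녕하세요"])
```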

Gaze Detection System using Real-time Active Vision Camera (실시간 능동 비전 카메라를 이용한 시선 위치 추적 시스템)

  • 박강령
    • Journal of KIISE: Software and Applications / v.30 no.12 / pp.1228-1238 / 2003
  • This paper presents a new and practical computer-vision method for detecting the monitor position at which the user is looking. In general, the user moves both the face and the eyes to gaze at a certain monitor position. Previous research used only one wide-view camera that captures the whole face; in that case the image resolution is too low and the fine movements of the user's eyes cannot be detected exactly. We therefore implement the gaze detection system with a dual-camera setup (a wide-view and a narrow-view camera). To locate the user's eye position accurately, the narrow-view camera provides auto focusing and auto panning/tilting based on the 3D facial feature positions detected from the wide-view camera. In addition, dual IR-LED illuminators are used to detect facial features, especially eye features. Experimental results show that the system runs in real time and that the gaze position accuracy, measured between the computed and the real positions, is about 3.44 cm RMS error.
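
A small sketch of the auto panning/tilting idea: given a 3D eye position estimated from the wide-view camera (expressed in the narrow-view camera's coordinate frame), the pan and tilt angles needed to center the eye follow from basic trigonometry. The coordinate convention and the example point are assumptions for illustration, not values from the paper.

```python
import math

def pan_tilt_to_target(x, y, z):
    """Pan/tilt angles (degrees) that aim the narrow-view camera at a
    3D point (x, y, z): x right, y up, z forward from the camera."""
    pan = math.degrees(math.atan2(x, z))                  # rotate left/right
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))  # rotate up/down
    return pan, tilt

# Example: an eye located 12 cm right, 5 cm up, 60 cm in front of the camera.
print(pan_tilt_to_target(0.12, 0.05, 0.60))
```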

A Study on Korean Speech Animation Generation Employing Deep Learning (딥러닝을 활용한 한국어 스피치 애니메이션 생성에 관한 고찰)

  • Suk Chan Kang;Dong Ju Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.10 / pp.461-470 / 2023
  • While speech animation generation using deep learning has been actively researched for English, there has been no prior work for Korean. This paper therefore employs supervised deep learning to generate Korean speech animation for the first time. In doing so, we observe an important effect of deep learning: speech animation research largely reduces to speech recognition research, the dominant underlying technique. We also study how best to exploit this effect for Korean speech animation generation; by clarifying the top-priority research target, it can help revitalize the recently inactive Korean speech animation research efficiently and effectively. The paper proceeds as follows: (i) a blendshape animation technique is chosen; (ii) a deep learning model is implemented as a master-servant pipeline of an automatic speech recognition (ASR) module and a facial action coding (FAC) module; (iii) a Korean speech facial motion capture dataset is built; (iv) two comparison deep learning models are prepared (one adopting an English ASR module, the other a Korean ASR module, with the same basic structure for their FAC modules); and (v) the FAC module of each model is trained dependently on its ASR module. A user study demonstrates that the model with the Korean ASR module and a dependently trained FAC module (scoring 4.2/5.0) generates decisively more natural Korean speech animation than the model with the English ASR module (scoring 2.7/5.0). This result confirms the aforementioned effect: the quality of Korean speech animation comes down to the accuracy of Korean ASR.
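
A minimal sketch of the master-servant idea: per-frame features coming out of an ASR front end drive a small network that predicts blendshape weights. The feature size, the number of blendshapes, and the network layers are placeholder assumptions; only the overall ASR-to-FAC structure follows the abstract.

```python
import torch
import torch.nn as nn

class FACModule(nn.Module):
    """Maps per-frame ASR features to facial blendshape weights."""
    def __init__(self, asr_dim=256, num_blendshapes=52):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(asr_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_blendshapes),
            nn.Sigmoid(),            # blendshape weights in [0, 1]
        )

    def forward(self, asr_features):  # (batch, frames, asr_dim)
        return self.net(asr_features)

# The FAC module is trained dependently on whatever ASR module produced
# the features, so swapping an English ASR for a Korean ASR changes the
# inputs but not this structure.
fac = FACModule()
dummy_asr_output = torch.randn(1, 100, 256)    # 100 frames of ASR features
blendshape_weights = fac(dummy_asr_output)     # shape (1, 100, 52)
```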