• Title/Summary/Keyword: face to face


Comparing Learning Outcome of e-Learning with Face-to-Face Lecture of a Food Processing Technology Course in Korean Agricultural High School

  • PARK, Sung Youl;LEE, Hyeon-ah
    • Educational Technology International / v.8 no.2 / pp.53-71 / 2007
  • This study examined the effectiveness of e-learning by comparing the learning outcomes of a conventional face-to-face lecture with two selected e-learning methods. Two e-learning contents (animation based and video based) were developed using the rapid prototyping model and loaded onto the learning management system (LMS) at http://www.enaged.co.kr. Fifty-four Korean agricultural high school students were randomly assigned to three groups (face-to-face lecture, animation based e-learning, and video based e-learning). Students in the e-learning groups logged on to the LMS in the school computer lab and completed their respective e-learning content. All students took a pretest and a posttest before and after learning under the direction of the subject teacher. A one-way analysis of covariance was administered to verify whether there was any difference in learning outcomes between the face-to-face lecture and e-learning after controlling for the covariate, the pretest score. According to the results, no differences were found between animation based and video based e-learning, nor between face-to-face learning and e-learning. The findings suggest that well designed e-learning can be worthwhile even in agricultural education, which stresses hands-on experience and lab activities, provided it is used appropriately in combination with conventional learning. Further research is suggested on learners' preferences for e-learning content types and their relationship with learning outcomes.
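
The comparison above rests on a one-way ANCOVA with the pretest score as covariate. As a rough illustration of that kind of analysis only (synthetic data and invented variable names, not the study's dataset), a minimal Python sketch with statsmodels:

```python
# Minimal one-way ANCOVA sketch on synthetic scores (illustrative only):
# posttest modeled by group membership after adjusting for the pretest covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_per_group = 18  # the study had 54 students split into three groups
groups = np.repeat(["face_to_face", "animation", "video"], n_per_group)
pretest = rng.normal(60, 10, groups.size)
posttest = pretest + rng.normal(15, 8, groups.size)  # no real group effect here

df = pd.DataFrame({"group": groups, "pretest": pretest, "posttest": posttest})

# The F-test on C(group) answers the study's question: is there a group
# difference in posttest scores once the pretest score is controlled for?
model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(anova_lm(model, typ=2))
```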

Real-Time Arbitrary Face Swapping System For Video Influencers Utilizing Arbitrary Generated Face Image Selection

  • Jihyeon Lee;Seunghoo Lee;Hongju Nam;Suk-Ho Lee
    • International Journal of Internet, Broadcasting and Communication / v.15 no.2 / pp.31-38 / 2023
  • This paper introduces a real-time face swapping system that enables video influencers to swap their faces with arbitrary generated face images of their choice. The system is implemented as a Django-based server that communicates with the generative model, specifically a pretrained Stable Diffusion model, through REST requests. The generated image is displayed on the front page so that the influencer can decide whether to use the generated face by clicking the accept button. If they choose to use it, both their face and the generated face are sent to the landmark extraction module, and the extracted landmarks are used to swap the faces. To minimize the fluctuation of landmarks over time, which can cause instability or jitter in the output, a temporal filtering step is added. Furthermore, to increase processing speed, the system works on a reduced set of the extracted landmarks.
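
The abstract mentions a temporal filtering step to suppress landmark jitter but does not say which filter is used; one common, simple choice is an exponential moving average over the landmark coordinates. The sketch below assumes that filter, and the class name and alpha value are illustrative:

```python
import numpy as np

class LandmarkSmoother:
    """Exponential moving average over per-frame landmark coordinates.

    A generic jitter-reduction sketch; the paper does not specify its filter,
    so the scheme and the alpha value here are assumptions.
    """

    def __init__(self, alpha: float = 0.4):
        self.alpha = alpha   # lower alpha = smoother but laggier output
        self.state = None    # last smoothed landmarks, shape (N, 2)

    def update(self, landmarks: np.ndarray) -> np.ndarray:
        landmarks = np.asarray(landmarks, dtype=np.float64)
        if self.state is None:
            self.state = landmarks
        else:
            self.state = self.alpha * landmarks + (1.0 - self.alpha) * self.state
        return self.state

# Usage: feed the reduced landmark set frame by frame.
smoother = LandmarkSmoother(alpha=0.4)
# smoothed = smoother.update(landmarks_for_current_frame)
```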

Face Detection Using Multi-level Features for Privacy Protection in Large-scale Surveillance Video (대규모 비디오 감시 환경에서 프라이버시 보호를 위한 다중 레벨 특징 기반 얼굴검출 방법에 관한 연구)

  • Lee, Seung Ho;Moon, Jung Ik;Kim, Hyung-Il;Ro, Yong Man
    • Journal of Korea Multimedia Society / v.18 no.11 / pp.1268-1280 / 2015
  • In video surveillance systems, the exposure of a person's face is a serious threat to personal privacy. To protect personal privacy in large amounts of video, an automatic face detection method is required to locate and mask people's faces. However, in real-world surveillance videos, the effectiveness of existing face detection methods can deteriorate due to large variations in facial appearance (e.g., facial pose, illumination) or degraded faces (e.g., occluded or low-resolution faces). This paper proposes a new face detection method based on multi-level facial features. In each video frame, different kinds of spatial features are independently extracted and analyzed; these can complement each other under the aforementioned challenging conditions. Temporal-domain analysis is also exploited to consolidate the detections. Experimental results show that, compared to competing methods, the proposed method achieves very high recall rates while maintaining acceptable precision rates.
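
The abstract does not spell out how the independently extracted features or the temporal information are fused. As one plausible (assumed) reading, the sketch below simply unions the boxes proposed by complementary detectors and collapses near-duplicates by IoU, which favors recall as the privacy-masking goal requires; the same overlap test could be reused across neighboring frames for the temporal consolidation.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter + 1e-9)

def merge_complementary_detections(detections_per_feature, iou_thresh=0.5):
    """Union the boxes proposed by several complementary detectors,
    dropping near-duplicates; an assumed fusion rule, not the paper's."""
    merged = []
    for boxes in detections_per_feature:
        for box in boxes:
            if all(iou(box, kept) < iou_thresh for kept in merged):
                merged.append(box)
    return merged
```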

Comparison of Temporal Dark Image Sticking Produced by Face-to-Face and Coplanar Sustain Electrode Structures

  • Kim, Jae-Hyun;Park, Choon-Sang;Kim, Bo-Sung;Park, Ki-Hyung;Tae, Heung-Sik
    • Journal of Information Display / v.8 no.3 / pp.29-33 / 2007
  • The temporal dark image sticking phenomena are examined and compared for two different electrode structures, the face-to-face and coplanar sustain electrode structures. To compare the phenomena in both structures, the differences in the infrared emission profile, luminance, and perceived luminance between image-sticking cells and non-image-sticking cells were measured. It is observed that temporal dark image sticking is mitigated in the face-to-face structure. This mitigation is attributed to the slight increase in panel temperature induced by the ITO-less electrode structure.

DETECTION OF FACIAL FEATURES IN COLOR IMAGES WITH VARIOUS BACKGROUNDS AND FACE POSES

  • Park, Jae-Young;Kim, Nak-Bin
    • Journal of Korea Multimedia Society / v.6 no.4 / pp.594-600 / 2003
  • In this paper, we propose a method for detecting facial features in color images with various backgrounds and face poses. First, the proposed method extracts face candidate regions from images with complex, skin-tone-colored backgrounds using the color and edge information of the face. Then, using the elliptical shape of the face, we correct the rotation, scale, and tilt of the face region caused by various head poses. Finally, we verify the face using facial features and detect those features. Experimental results show that the detection accuracy is high and that the proposed method can be used effectively in a pose-invariant face recognition system.
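
As an illustration of the general pipeline described here (skin-tone segmentation followed by an elliptical shape check), a minimal OpenCV sketch follows; the YCrCb skin range, area cut-off, and aspect-ratio bound are common textbook values, not the authors' parameters:

```python
import cv2
import numpy as np

def face_candidates_by_skin_and_ellipse(bgr_image):
    """Extract face-candidate regions via skin color, then use an ellipse fit
    as a rough shape check. All thresholds are illustrative assumptions."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Commonly used Cr/Cb skin range; the paper's own color model may differ.
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_OPEN,
                                 np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for contour in contours:
        if len(contour) < 5 or cv2.contourArea(contour) < 1000:
            continue  # fitEllipse needs at least 5 points; skip tiny blobs
        (cx, cy), (w, h), angle = cv2.fitEllipse(contour)
        aspect = max(w, h) / (min(w, h) + 1e-9)
        if aspect < 2.0:  # roughly face-like elliptical proportions
            candidates.append(((cx, cy), (w, h), angle))
    return candidates
```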


Scale Invariant Single Face Tracking Using Particle Filtering With Skin Color

  • Adhitama, Perdana;Kim, Soo Hyung;Na, In Seop
    • International Journal of Contents / v.9 no.3 / pp.9-14 / 2013
  • In this paper, we examine single face tracking algorithms with a scaling function on a mobile device. Face detection and tracking with scaling, whether on a PC or a mobile device, remains an unsolved problem; the standard particle-filter method for single face tracking has trouble following an object that moves closer to or farther from the camera. We therefore develop an algorithm that can run on a mobile device and perform scaling. The key idea of our proposed method is to extract the average skin color during face detection and then compare the skin color distribution between the detected face and the tracked face. This method works well when the face stays in front of the camera, but it does not work if the camera moves closer than the initial point of detection. Despite this weakness, the algorithm improves tracking accuracy.
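
A compact sketch of the general idea follows: a particle filter whose state includes scale and whose weights come from comparing the mean color of each candidate window with the mean skin color of the detected face. The random-walk dynamics, window size, and noise levels are assumptions, not the paper's settings:

```python
import numpy as np

def track_step(particles, frame, ref_color, noise=(8.0, 8.0, 0.05)):
    """One particle-filter step for (x, y, scale) face tracking.

    particles : (N, 3) array of [x, y, scale]
    frame     : H x W x 3 image as a float array
    ref_color : mean color of the detected face (the reference skin color)
    The motion model and color-distance weighting below are illustrative.
    """
    h, w = frame.shape[:2]
    # Predict: random-walk dynamics on position and scale.
    particles = particles + np.random.normal(0.0, noise, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, w - 1)
    particles[:, 1] = np.clip(particles[:, 1], 0, h - 1)
    particles[:, 2] = np.clip(particles[:, 2], 0.3, 3.0)

    # Weight: compare each candidate window's mean color with the reference.
    weights = np.empty(len(particles))
    for i, (x, y, s) in enumerate(particles):
        half = int(20 * s)  # 20 px base half-size, scaled per particle
        x0, x1 = max(0, int(x) - half), min(w, int(x) + half)
        y0, y1 = max(0, int(y) - half), min(h, int(y) + half)
        patch = frame[y0:y1, x0:x1]
        if patch.size == 0:
            weights[i] = 1e-12
            continue
        dist = np.linalg.norm(patch.reshape(-1, 3).mean(axis=0) - ref_color)
        weights[i] = np.exp(-dist / 20.0)
    weights /= weights.sum()

    # Resample (simple multinomial resampling) and report the state estimate.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], particles[idx].mean(axis=0)
```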

Affine Local Descriptors for Viewpoint Invariant Face Recognition

  • Gao, Yongbin;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.781-784 / 2014
  • Face recognition under controlled settings, such as limited viewpoint and illumination change, can achieve good performance nowadays, but real-world face recognition is still challenging. In this paper, we use Affine SIFT to detect affine-invariant local descriptors for face recognition under large viewpoint change. Affine SIFT is an extension of the SIFT algorithm. SIFT is scale and rotation invariant, which is powerful for small viewpoint changes in face recognition, but it fails under large viewpoint changes. In our scheme, Affine SIFT is applied to both the gallery face and the probe face, generating a series of different viewpoints through affine transformations; Affine SIFT therefore tolerates viewpoint differences between the gallery face and the probe face. Experimental results show that our framework achieves better recognition accuracy than the SIFT algorithm on the FERET database.
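
A rough sketch of the ASIFT idea described above: simulate several affine viewpoints of each face, run SIFT on every warp, and pool the descriptors before matching. The (tilt, angle) grid below is much coarser than in the original ASIFT, and the helper names are illustrative:

```python
import cv2
import numpy as np

def affine_sift_descriptors(gray, tilts=(1.0, 1.5, 2.0), angles=range(0, 180, 45)):
    """Collect SIFT descriptors from a few simulated affine viewpoints.

    The grid is far coarser than in full ASIFT, and keypoint coordinates are
    left in the warped frames (full ASIFT maps them back); illustration only.
    """
    sift = cv2.SIFT_create()
    descriptors = []
    h, w = gray.shape
    for tilt in tilts:
        for angle in angles:
            rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
            warped = cv2.warpAffine(gray, rot, (w, h))
            # Compress one axis to simulate an out-of-plane tilt.
            warped = cv2.resize(warped, (w, max(1, int(h / tilt))))
            _, desc = sift.detectAndCompute(warped, None)
            if desc is not None:
                descriptors.append(desc)
    return np.vstack(descriptors)

def match_score(gallery_gray, probe_gray, ratio=0.75):
    """Number of ratio-test matches between two faces' pooled descriptors."""
    desc_g = affine_sift_descriptors(gallery_gray)
    desc_p = affine_sift_descriptors(probe_gray)
    matches = cv2.BFMatcher().knnMatch(desc_p, desc_g, k=2)
    return sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)
```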

Precision Test of 3D Face Automatic Recognition Apparatus(3D-FARA) by Rotation (3차원 안면 자동 인식기(3D-FARA)의 안면 위치변화에 따른 정확도 검사)

  • Seok, Jae-Hwa;Cho, Kyung-Rae;Cho, Yong-Beum;Yoo, Jung-Hee;Kwak, Chang-Kyu;Lee, Soo-Kyung;Kho, Byung-Hee;Kim, Jong-Won;Kim, Kyu-Kon;Lee, Eui-Ju
    • Journal of Sasang Constitutional Medicine / v.18 no.3 / pp.57-63 / 2006
  • 1. Objectives: The face is an important criterion for the classification of Sasang constitutions, and we are developing a 3D Face Automatic Recognition Apparatus (3D-FARA) to analyze facial characteristics. The apparatus produces a 3D image of a person's face and measures the facial figure, so its accuracy of position recognition needs to be examined. 2. Methods: Using the apparatus, we photographed a face statue with landmarks eight times, rotating the statue by 10 degrees between captures; the final capture was of the statue's lateral face. We then analyzed the average error of the distances between seven landmarks, thereby indirectly examining the accuracy of position recognition under changes in the statue's orientation. 3. Results and Conclusions: Across the orientation changes, the average error of the distances between the seven landmarks was 0.1848 mm. We conclude that the position-recognition accuracy of the 3D Face Automatic Recognition Apparatus is considerably good despite changes in the statue's orientation.
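
The accuracy figure quoted above is an average error of inter-landmark distances across repeated, rotated captures. A small sketch of that kind of computation on hypothetical landmark coordinates (the paper's exact error definition may differ):

```python
import numpy as np
from itertools import combinations

def pairwise_distances(landmarks):
    """All pairwise Euclidean distances between landmarks, shape (N, 3) or (N, 2)."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

def mean_distance_error(captures):
    """Average absolute deviation of inter-landmark distances across captures.

    captures : list of (7, 3) arrays, one per rotated acquisition.
    The reference is taken as the mean distance over all captures; the paper's
    exact error definition is not given, so this is an assumed formulation.
    """
    dists = np.array([pairwise_distances(c) for c in captures])
    reference = dists.mean(axis=0)
    return np.abs(dists - reference).mean()
```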


Face Region Extraction Algorithm Using Projection (투영 기법을 이용한 얼굴 영역 추출 알고리즘)

  • 임주혁;이준우;류권열;송근원
    • Proceedings of the IEEK Conference / 2003.11a / pp.521-524 / 2003
  • In this paper, we propose a face region extraction algorithm using color information and projection. After extracting a face candidate image using adaptive color information, we project it in the vertical direction to estimate the width of the face. The redundant parts of the face are then efficiently removed using the estimated width. The width information is also used in the horizontal projection step to estimate the height of the face, and non-face regions with a similar skin color, such as the neck and parts of the background, are effectively eliminated. Experimental results on various images show that the proposed algorithm is more accurate than the conventional algorithm.
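
A minimal sketch of the projection step described above: project a binary skin mask vertically to estimate the face width, then restrict a horizontal projection to that width to estimate the height. The cut-off ratio is an assumed value, not the authors' parameter:

```python
import numpy as np

def face_region_by_projection(skin_mask, ratio=0.3):
    """Estimate a face bounding box from a binary skin mask via projections.

    skin_mask : 2-D array of 0/1 values from a skin-color segmentation step.
    ratio     : fraction of the peak projection used as a cut-off (assumed).
    """
    vertical = skin_mask.sum(axis=0)    # column sums -> face width
    cols = np.where(vertical > ratio * vertical.max())[0]
    x0, x1 = cols.min(), cols.max()

    # Horizontal projection restricted to the estimated width -> face height;
    # rows dominated by neck or background fall below the cut-off and are dropped.
    horizontal = skin_mask[:, x0:x1 + 1].sum(axis=1)
    rows = np.where(horizontal > ratio * horizontal.max())[0]
    y0, y1 = rows.min(), rows.max()
    return x0, y0, x1, y1
```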


Face Detection Tracking in Sequential Images using Backpropagation (역전파 신경망을 이용한 동영상에서의 얼굴 검출 및 트래킹)

  • 지승환;김용주;김정환;박민용
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1997.11a / pp.124-127 / 1997
  • In this paper, we propose a new face detection and tracking algorithm for sequential images with complex backgrounds. To apply the face detection algorithm efficiently, we convert the conventional RGB coordinates into CIE coordinates, making the input images insensitive to luminance. Human face shapes and colors are learned by a neural network trained with backpropagation. To handle variable face sizes, we vary the mosaic size of the input images and obtain the face location at various sizes through the neural network. For sequential images, we also suggest a face motion tracking algorithm based on image subtraction and thresholding; for accurate tracking, the face location from the previous image is used. Finally, we verify the real-time applicability of the proposed algorithm with a simple simulation.
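
The tracking step described above relies on image subtraction and thresholding between consecutive frames. A small OpenCV sketch of that step (the threshold value is an assumption):

```python
import cv2

def motion_mask(prev_gray, curr_gray, thresh=25):
    """Binary motion mask from frame differencing, used to narrow down
    where the face can have moved between consecutive frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

# The previous frame's face location can then restrict the search to the
# moving region closest to that location before re-running the detector.
```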
