• Title/Summary/Keyword: Face Video

Search Results: 416

Comparison of satisfaction, interest, and experience awareness of 360° virtual reality video and first-person video in non-face-to-face practical lectures in medical emergency departments (응급구조학과 비대면 실습 강의에서 360° 가상현실 영상과 1인칭 시점 영상의 만족도, 흥미도, 경험인식 비교)

  • Lee, Hyo-Ju;Shin, Sang-Yol;Jung, Eun-Kyung
    • The Korean Journal of Emergency Medical Services, v.24 no.3, pp.55-63, 2020
  • Purpose: This study aimed to establish effective training strategies and methods by comparing the effects of 360° virtual reality video and first-person video in non-face-to-face practical lectures. Methods: This crossover study, conducted May 18-31, 2020, included 27 participants who viewed both a 360° virtual reality video and a first-person video. SPSS version 25.0 was used for statistical analysis. Results: The 360° virtual reality video scored higher than the first-person video on experience recognition (p=.039), vividness (p=.045), presence (p=.000), and fantasy factor (p=.000), but no significant difference was found for satisfaction (p=.348) or interest (p=.441). Conclusion: Both 360° virtual reality video and first-person video can serve as training alternatives for achieving the standard educational objectives in non-face-to-face practical lectures.
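
As a rough, hypothetical illustration of the paired comparison such a crossover design calls for (the study itself used SPSS 25.0; the participant scores below are made up):

```python
# Hypothetical sketch of a paired (crossover) comparison; not the authors' SPSS analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 27  # number of participants, as reported in the abstract

# Made-up Likert-style scores for the same participants under both conditions
vr_presence = rng.normal(4.2, 0.5, n)   # 360° virtual reality video
fp_presence = rng.normal(3.4, 0.5, n)   # first-person video

t, p = stats.ttest_rel(vr_presence, fp_presence)  # paired t-test
print(f"paired t = {t:.3f}, p = {p:.3f}")
```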

A Comparative Study of Local Features in Face-based Video Retrieval

  • Zhou, Juan;Huang, Lan
    • Journal of Computing Science and Engineering, v.11 no.1, pp.24-31, 2017
  • Face-based video retrieval has become an active and important branch of intelligent video analysis. Face profiling and matching is a fundamental step and is crucial to the effectiveness of video retrieval. Although many algorithms have been developed for processing static face images, their effectiveness in face-based video retrieval is still unknown, because videos differ in resolution, faces vary in scale, and lighting conditions and viewing angles change. In this paper, we combined content-based and semantic-based image analysis techniques and systematically evaluated four mainstream local features for representing face images in the video retrieval task: Harris operators, SIFT descriptors, SURF descriptors, and eigenfaces. Results of ten independent runs of 10-fold cross-validation on datasets consisting of TED (Technology Entertainment Design) talk videos showed the effectiveness of our approach: the SIFT descriptors achieved an average F-score of 0.725 in video retrieval and were thus the most effective, while the SURF descriptors, computed in 0.3 seconds per image on average, were the most efficient in most cases.
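
As a minimal sketch of extracting two of the compared local features with OpenCV (SURF is patent-encumbered and available only in opencv-contrib builds, so it is omitted; "face.jpg" is a placeholder, not the paper's TED-talk data):

```python
# Sketch: Harris corners and SIFT descriptors on a face image with OpenCV.
import cv2
import numpy as np

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image path

# Harris corner response map, thresholded to keep strong corners
harris = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(harris > 0.01 * harris.max())

# SIFT keypoints and 128-dimensional descriptors
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

print(f"{len(corners)} Harris corners, {len(keypoints)} SIFT keypoints")
```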

Development of Combined Architecture of Multiple Deep Convolutional Neural Networks for Improving Video Face Identification (비디오 얼굴 식별 성능개선을 위한 다중 심층합성곱신경망 결합 구조 개발)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society, v.22 no.6, pp.655-664, 2019
  • In this paper, we propose a novel way of combining multiple deep convolutional neural network (DCNN) architectures for accurate video face identification by adopting a serial combination of 3D and 2D DCNNs. The proposed method first divides an input video sequence (to be recognized) into a number of sub-video sequences. These sub-video sequences are fed to the 3D DCNN to obtain class-confidence scores that account for both the temporal and the spatial characteristics of the face in the input video. The class-confidence scores obtained from the sub-video sequences are combined into the proposed class-confidence matrix, which is then used as input for training the 2D DCNN that is serially linked to the 3D DCNN. Finally, the fine-tuned, serially combined DCNN framework is applied to recognize the identity present in a given test video sequence. To verify the effectiveness of the proposed method, extensive comparative experiments were conducted on the COX face databases with their standard face identification protocols. Experimental results showed that our method achieves an identification rate better than or comparable to other state-of-the-art video face recognition methods.
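
A minimal PyTorch sketch of the serial 3D-to-2D idea as described: a 3D CNN scores each sub-sequence, the per-sub-sequence class-confidence vectors are stacked into a matrix, and a small 2D CNN consumes that matrix. All layer sizes and the toy input are assumptions, not the paper's architecture.

```python
# Sketch of a serial 3D->2D DCNN combination for video face identification.
import torch
import torch.nn as nn

NUM_CLASSES, NUM_SUBSEQS, FRAMES = 10, 4, 8

class Small3DCNN(nn.Module):
    """Scores one sub-video sequence -> class-confidence vector."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))
        self.fc = nn.Linear(16, NUM_CLASSES)

    def forward(self, x):              # x: (B, 3, FRAMES, H, W)
        return self.fc(self.features(x).flatten(1))

class Confidence2DCNN(nn.Module):
    """Consumes the (sub-sequences x classes) class-confidence matrix."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, NUM_CLASSES))

    def forward(self, m):              # m: (B, 1, NUM_SUBSEQS, NUM_CLASSES)
        return self.net(m)

cnn3d, cnn2d = Small3DCNN(), Confidence2DCNN()
video = torch.randn(1, 3, NUM_SUBSEQS * FRAMES, 64, 64)     # toy input video
subseqs = video.chunk(NUM_SUBSEQS, dim=2)                   # divide into sub-sequences
conf_matrix = torch.stack([cnn3d(s) for s in subseqs], 1)   # (1, NUM_SUBSEQS, NUM_CLASSES)
logits = cnn2d(conf_matrix.unsqueeze(1))                    # final identity scores
print(logits.shape)                                         # torch.Size([1, 10])
```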

Automatic Video Management System Using Face Recognition and MPEG-7 Visual Descriptors

  • Lee, Jae-Ho
    • ETRI Journal, v.27 no.6, pp.806-809, 2005
  • The main goal of this research is automatic video analysis using a face recognition technique. In this paper, an automatic video management system is introduced with a variety of functions enabled, such as index, edit, summarize, and retrieve multimedia data. The automatic management tool utilizes MPEG-7 visual descriptors to generate a video index for creating a summary. The resulting index generates a preview of a movie, and allows non-linear access with thumbnails. In addition, the index supports the searching of shots similar to a desired one within saved video sequences. Moreover, a face recognition technique is utilized to personalbased video summarization and indexing in stored video data.
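
MPEG-7 descriptor extractors are not part of common Python libraries, so the sketch below uses plain HSV color histograms as a stand-in for MPEG-7 color descriptors: it samples one descriptor per 30 frames to build a thumbnail index and retrieves the most similar saved shot. The file name is a placeholder.

```python
# Sketch: index sampled video frames by color histogram and retrieve similar shots.
import cv2

def frame_descriptor(frame):
    """HSV histogram as a stand-in for an MPEG-7 color descriptor."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

cap = cv2.VideoCapture("movie.mp4")   # placeholder path
index, pos = [], 0                    # (frame_number, descriptor) pairs
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if pos % 30 == 0:                 # one thumbnail descriptor per 30 frames
        index.append((pos, frame_descriptor(frame)))
    pos += 1
cap.release()

def most_similar(query_desc):
    """Frame number of the indexed shot that best matches the query descriptor."""
    return max(index, key=lambda e: cv2.compareHist(
        e[1], query_desc, cv2.HISTCMP_CORREL))[0]
```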

Face Detection based on Video Sequence (비디오 영상 기반의 얼굴 검색)

  • Ahn, Hyo-Chang;Rhee, Sang-Burm
    • Journal of the Semiconductor & Display Technology, v.7 no.3, pp.45-49, 2008
  • Face detection and tracking technology for video sequences has developed with the commercialization of teleconferencing, telecommunication, face-recognition front ends for surveillance systems, and video-phone applications. Complex backgrounds, color distortion caused by luminance effects, and varying lighting conditions have hindered face recognition systems. In this paper, we study face recognition on video sequences. We extract the facial area using the luminance and chrominance components of the $YC_bC_r$ color space. After extracting the facial area, we apply a face recognition system based on our improved algorithm, which combines PCA and LDA. The proposed algorithm achieves a 92% recognition rate, which is more accurate than previous methods based on PCA alone or on a combination of PCA and LDA.
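
A rough sketch of the two stages described: skin-color segmentation in YCbCr with OpenCV (the Cb/Cr thresholds are common textbook values, not the paper's), followed by a PCA-then-LDA pipeline in scikit-learn on hypothetical face crops.

```python
# Sketch: YCbCr skin-region extraction, then PCA+LDA classification.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def skin_mask(bgr):
    """Stage 1: binary mask of skin-colored pixels in YCrCb space."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Commonly cited skin ranges for Cr and Cb (assumption, not the paper's values)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

# Stage 2 on hypothetical data: flattened grayscale face crops with identity labels
rng = np.random.default_rng(0)
X = rng.random((100, 32 * 32))
y = rng.integers(0, 5, 100)

# PCA reduces dimensionality first so that LDA's scatter matrices are well-behaved
model = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```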

A Study on Real-time Face Detection in Video (동영상에서 실시간 얼굴검출에 관한 연구)

  • Kim, Hyeong-Gyun;Bae, Yong-Guen
    • Journal of the Korea Society of Computer and Information, v.15 no.2, pp.47-53, 2010
  • This paper proposes a face detection technique for video that uses residual images and color information. The proposed technique achieves fast processing and a high face detection rate on video. In addition, it reduces the detection error rate through a calibration step for tilted face images. The first step extracts the target image from the transmitted video frames. Next, the extracted image is processed by a window rotation algorithm to detect tilted faces. Feature extraction for face detection uses the AdaBoost algorithm.
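
A minimal OpenCV sketch combining two of the ingredients named above: a residual image (frame difference) to skip static frames, and an AdaBoost-trained Haar cascade for the actual face detection. The motion threshold is an assumption, and the window-rotation step for tilted faces is omitted.

```python
# Sketch: frame differencing ("residual image") gates an AdaBoost/Haar face detector.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)            # webcam; a video file path also works
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is not None:
        residual = cv2.absdiff(gray, prev)            # residual image
        if residual.mean() > 2.0:                     # motion threshold (assumption)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in faces:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev = gray
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```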

A study on the improvement of non-face-to-face environment video lectures using IPA (IPA를 활용한 비대면 환경 화상강의 개선 방안 연구)

  • Kwon, Youngae;Park, Hyejin
    • Journal of Korea Society of Digital Industry and Information Management, v.17 no.3, pp.121-132, 2021
  • The purpose of this study is to explore ways to improve the quality of real-time video lectures in a non-face-to-face environment using IPA (Importance-Performance Analysis). Recently, because of the impact of COVID-19, universities have moved entirely to remote classes, so research on learner perception is needed. Accordingly, factor analysis, mean analysis, correspondence analysis, and IPA were performed on the responses of 632 students at K University in Chungbuk, collected from March 21 to June 30, 2021. First, overall satisfaction was low compared with importance, and the gap was largest for perception of the system. Second, the IPA matrix of learner perceptions of real-time video lectures showed that system errors and screen cutoffs had the largest gaps. Third, items such as the difficulty of lecture content and feedback on assignments and tests were classified into the other quadrants. Accordingly, satisfaction with real-time video lectures in non-face-to-face environments is low, suggesting that school-level support for quality improvement and a clear role for instructors are needed to improve learner satisfaction and academic achievement.
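
IPA places each survey item on an importance-performance grid split at the grand means; items of high importance but low performance fall into the "concentrate here" quadrant. A small sketch with made-up item means:

```python
# Sketch: classify survey items into IPA quadrants (all values are hypothetical).
import numpy as np

items = {   # item: (mean importance, mean performance)
    "system stability":   (4.6, 3.2),
    "screen quality":     (4.4, 3.5),
    "content difficulty": (3.8, 3.9),
    "test feedback":      (4.1, 3.6),
}
imp_mean = np.mean([v[0] for v in items.values()])
perf_mean = np.mean([v[1] for v in items.values()])

for name, (imp, perf) in items.items():
    if imp >= imp_mean and perf < perf_mean:
        quadrant = "Concentrate here"          # important but under-performing
    elif imp >= imp_mean:
        quadrant = "Keep up the good work"
    elif perf < perf_mean:
        quadrant = "Low priority"
    else:
        quadrant = "Possible overkill"
    print(f"{name}: {quadrant}")
```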

Automatic Cast-list Analysis System in Broadcasting Videos (방송 비디오 등장인물 자동 분석 시스템)

  • Kim, Ki-Nam;Kim, Hyoung-Joon;Kim, Whoi-Yul
    • Journal of Broadcast Engineering, v.9 no.2, pp.164-173, 2004
  • In this paper, we propose a system that analyzes the appearance intervals of cast members by detecting and recognizing their faces in broadcast videos. The cast is one of the most important characteristics of broadcast videos such as dramas and sports. We propose the ACAV (Automatic Cast-list Analysis in Videos) system, which analyzes the cast list automatically. The ACAV system consists of FAGIS (FAce reGIStration), which registers detected faces in the face DB, and FACOG (FAce reCOGnition), which analyzes the cast list in a video sequence using the face DB. We evaluate the performance of the ACAV system by comparing it with FaceIt, one of the best-known commercial systems for cast-list analysis. ACAV shows face detection and recognition rates of 84.3% and 75.7%, which are about 30% and 27.5% higher than those of FaceIt, respectively. The ACAV system can be applied to large-scale broadcast video management systems for broadcasters and to video management on PVRs (personal video recorders) and mobile phones for the public.
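
A rough sketch of the FAGIS/FACOG idea: register embeddings of known faces, match each detected face against the registry by cosine similarity, and accumulate per-person appearance intervals. The `embed` function is a hypothetical placeholder for any face-embedding model; it is not part of the paper.

```python
# Sketch: register cast faces, then log appearance intervals per cast member.
import numpy as np

def embed(face_img) -> np.ndarray:
    raise NotImplementedError("plug in a real face-embedding model here")

face_db: dict[str, np.ndarray] = {}          # name -> registered embedding

def register(name, face_img):                # FAGIS step: build the face DB
    face_db[name] = embed(face_img)

def identify(face_img, threshold=0.6):       # FACOG step: match against the DB
    q = embed(face_img)
    best, score = None, -1.0
    for name, ref in face_db.items():
        s = float(q @ ref / (np.linalg.norm(q) * np.linalg.norm(ref)))
        if s > score:
            best, score = name, s
    return best if score >= threshold else None

def appearance_intervals(detections, fps=30):
    """detections: list of (frame_no, name); returns name -> [[start_s, end_s], ...]."""
    intervals: dict[str, list[list[float]]] = {}
    for frame_no, name in sorted(detections):
        t = frame_no / fps
        runs = intervals.setdefault(name, [])
        if runs and t - runs[-1][1] <= 1.0:  # merge gaps shorter than one second
            runs[-1][1] = t
        else:
            runs.append([t, t])
    return intervals
```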

A Comparative Study on the Class Satisfaction between Remote Video Class and Face-to-face Class (대학의 원격화상수업과 대면수업의 만족도 비교 연구)

  • Lee, HanSaem;Seo, Eun Hee
    • The Journal of the Korea Contents Association, v.21 no.7, pp.440-447, 2021
  • The purpose of this study is to verify the effectiveness of the non-face-to-face lectures conducted at universities in Korea under the influence of COVID-19. This study therefore analyzed student satisfaction according to the type of class operation, comparing face-to-face classes with remote video classes. To this end, differences in class satisfaction by class type and class size were compared for a total of 8,707 courses operated by one university between 2019 and 2020. The study found that satisfaction with remote video classes was significantly higher. In addition, a combination of remote video and face-to-face instruction was rated more satisfactory than either mode alone. On the other hand, satisfaction with small classes was higher than with medium or large classes for both face-to-face and remote video formats, meaning that even remote video classes achieve high satisfaction when class sizes are small. Based on these findings, the study proposes a paradigm for new college classes.

Style Synthesis of Speech Videos Through Generative Adversarial Neural Networks (적대적 생성 신경망을 통한 얼굴 비디오 스타일 합성 연구)

  • Choi, Hee Jo;Park, Goo Man
    • KIPS Transactions on Software and Data Engineering, v.11 no.11, pp.465-472, 2022
  • In this paper, a style synthesis network is trained to generate style-synthesized video by combining StyleGAN training for style synthesis with a video synthesis network. To address the unstable transfer of gaze and expression, 3D face reconstruction technology is applied to control important features such as head pose, gaze, and expression using 3D face information. In addition, by training the Head2head network's discriminators for dynamics, mouth shape, image, and gaze, it is possible to create a stable style-synthesized video that maintains plausibility and consistency. Using the FaceForensic dataset and the MetFace dataset, we confirmed improved performance in converting one video into another while maintaining consistent movement of the target face, generating natural results through video synthesis that uses 3D face information from the source video.
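
As a conceptual PyTorch sketch of one element described above, combining several specialized discriminators into a single adversarial objective: the discriminator networks, the aspect list, and the toy tensor are all illustrative stand-ins, not Head2head's actual networks (which, for instance, operate on mouth and eye crops rather than the full frame).

```python
# Sketch: sum adversarial losses from several specialized discriminators.
import torch
import torch.nn as nn

def make_disc():
    """Tiny stand-in discriminator; real ones would be much deeper."""
    return nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

# One discriminator per aspect named in the abstract (illustrative only)
discs = {k: make_disc() for k in ("image", "mouth", "gaze", "dynamics")}
bce = nn.BCEWithLogitsLoss()

fake = torch.randn(2, 3, 64, 64)     # generator output (toy tensor)
# Generator loss: every discriminator should judge the fake as real
g_loss = sum(bce(d(fake), torch.ones(2, 1)) for d in discs.values())
print(float(g_loss))
```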