• Title/Summary/Keyword: facial video

The Development of Robot and Augmented Reality Based Contents and Instructional Model Supporting Children's Dramatic Play (로봇과 증강현실 기반의 유아 극놀이 콘텐츠 및 교수.학습 모형 개발)

  • Jo, Miheon; Han, Jeonghye; Hyun, Eunja
    • Journal of The Korean Association of Information Education / v.17 no.4 / pp.421-432 / 2013
  • The purpose of this study is to develop contents and an instructional model that support children's dramatic play by integrating robot and augmented reality technology. To support the dramatic play, the robot shows various facial expressions and actions, serves as a narrator and sound manager, supports simultaneous interaction by using its camera to recognize markers and children's motions, and records children's activities as photos and videos that can be used in further activities. The robot also uses a projector to let children interact directly with the video object. Augmented reality, in turn, offers a variety of character changes and props, allows various background and foreground effects, enables natural interaction between the contents and children through a tangible (real-type) interface, and provides opportunities for interaction between actors and audiences. Along with these, augmented reality provides an experience-based learning environment that induces sensory immersion by allowing children to manipulate or choose the learning situation and experience the results. Finally, the instructional model supporting dramatic play consists of four stages (teachers' preparation; introducing and understanding a story; action plan and play; evaluation and wrapping up), with detailed activities suggested at each stage.
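
The entry above only sketches the system at a high level; as a loose illustration of the camera-plus-marker interaction it describes, here is a minimal marker-detection loop using OpenCV's ArUco module. The dictionary choice, camera index, and trigger logic are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical marker-detection loop in the spirit of the described
# camera-and-marker interaction (OpenCV >= 4.7). The dictionary, camera
# index, and marker semantics are illustrative assumptions.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)                    # the robot's camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        # e.g., a given marker id could swap a character or background prop
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("dramatic-play", frame)
    if cv2.waitKey(1) == 27:                 # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```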

Adaptive Skin Color Segmentation in a Single Image using Image Feedback (영상 피드백을 이용한 단일 영상에서의 적응적 피부색 검출)

  • Do, Jun-Hyeong; Kim, Keun-Ho; Kim, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.3 / pp.112-118 / 2009
  • Skin color segmentation techniques have been widely used for face and hand detection and tracking in many applications, such as diagnosis systems using facial information, human-robot interaction, and image retrieval systems. For video, it is common to update the target's skin color model every frame so that tracking remains robust against illumination change. For a single image, however, most studies employ a fixed skin color model, which may result in a low detection rate or high false-positive error. In this paper, we propose a novel method for effective skin color segmentation in a single image, which iteratively modifies the conditions for skin color segmentation using feedback from the skin color region already segmented in the given image.
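
As an illustration of the feedback idea in this entry, here is a minimal sketch assuming a YCrCb skin model: segment with broad initial bounds, then iteratively tighten the bounds around the statistics of the segmented region. The color space, initial thresholds, update rule, and iteration count are assumptions; the paper's actual conditions are not given in the abstract.

```python
# Minimal sketch of image-feedback skin segmentation (assumed YCrCb model).
import cv2
import numpy as np

def adaptive_skin_mask(bgr, iterations=3):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    lo = np.array([0, 133, 77], np.uint8)     # broad initial skin bounds
    hi = np.array([255, 173, 127], np.uint8)
    mask = cv2.inRange(ycrcb, lo, hi)
    for _ in range(iterations):
        pixels = ycrcb[mask > 0]
        if len(pixels) == 0:
            break
        mean, std = pixels.mean(axis=0), pixels.std(axis=0)
        # feedback: tighten bounds around the statistics of the current result
        lo = np.clip(mean - 2.5 * std, 0, 255).astype(np.uint8)
        hi = np.clip(mean + 2.5 * std, 0, 255).astype(np.uint8)
        mask = cv2.inRange(ycrcb, lo, hi)
    return mask
```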

Effect of Systematic Educational Program for the Application of National Institutes of Health Stroke Scale (NIHSS) as a Neurologic Assessment Tool in Stroke Patients (뇌졸중의 신경학적 사정 도구인 NIHSS 적용을 위한 체계적인 간호사 교육 프로그램의 효과)

  • Han, Jung Hee; Lee, Gee Eun; An, Young Hee; Yoo, Sung Hee
    • Journal of Korean Clinical Nursing Research / v.19 no.1 / pp.57-68 / 2013
  • Purpose: In assessing patients' neurological status following a stroke, it is very important to have a valid tool for early detection of neurological deterioration. The NIHSS is considered the best tool to reflect neurological status in patients with ischemic stroke. An education program on the use of the NIHSS was planned for nurses caring for these patients, and the effects of the program were evaluated. Methods: The NIHSS education program (NEP), which includes online and video lectures and practical education, was provided to the nurses from April to July 2010. To examine the effect of the NEP, nursing records of patients with ischemic stroke who were admitted to a stroke center were analyzed. Two groups, a historical control group (n=100) and the study group (n=115), were included. Results: Nursing records of neurologic symptoms for each patient increased (41.0% versus 100.0%, p<.001); in particular, records of visual disturbance, facial palsy, limb paralysis and ataxia, language disturbance, dysarthria, and neglect symptoms significantly increased (all p<.001). Nurse notification to the doctor of patients with neurological changes increased (21.0% versus 39.1%, p=.004), and nurses' detection rate of neurological deterioration also increased (37.5% versus 84.6%, p=.009). Conclusion: The NEP improved the quality of nursing records for neurological assessment and the detection rate of neurological deterioration.

Implementation of Driver Fatigue Monitoring System (운전자 졸음 인식 시스템 구현)

  • Choi, Jin-Mo; Song, Hyok; Park, Sang-Hyun; Lee, Chul-Dong
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.8C / pp.711-720 / 2012
  • In this paper, we introduce the implementation of a driver fatigue monitoring system and its results. A commercially available web camera is used as the input video device. The Haar transform is used for face detection, and illumination normalization is adopted to cope with arbitrary illumination conditions. After illumination normalization, the facial image is easily extracted using Haar face features. The eye candidate area is then reduced by anthropometric measurement, and eye detection is performed by a mixture model of PCA and a circle mask. This method achieves robust eye detection under arbitrarily changing illumination. The drowsiness state is determined by a simple calculation on the intensity level of the illumination-normalized eye images. When the driver's drowsiness is detected, our system raises an alarm and vibrates the seatbelt through the controller area network (CAN). Our algorithm is implemented with low computational complexity and a high recognition rate, achieving a 97% correct detection rate in in-car experiments.
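
The stages named in this entry (Haar face detection, anthropometric reduction of the eye region, a brightness-based drowsiness check) can be illustrated roughly as follows. The paper's PCA-plus-circle-mask eye model and CAN output are not reproduced here, and the band proportions and darkness thresholds are assumptions.

```python
# Rough sketch of the detection stages; thresholds are assumed, not the paper's.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def eyes_look_closed(gray_frame):
    """gray_frame: single-channel 8-bit frame from the webcam."""
    faces = face_cascade.detectMultiScale(gray_frame, 1.1, 5)
    for (x, y, w, h) in faces:
        # anthropometric prior: eyes lie roughly in the upper-middle face band
        eye_band = gray_frame[y + h // 5 : y + h // 2, x : x + w]
        eye_band = cv2.equalizeHist(eye_band)  # crude illumination normalization
        dark_ratio = (eye_band < 60).mean()    # pupils/lashes are dark when open
        if dark_ratio < 0.20:                  # assumed threshold
            return True                        # few dark pixels -> likely closed
    return False
```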

A Study on the Mode of Address and Meaning Creation of Underlight in Broadcasting Lighting (방송조명에서 언더라이트의 표현 양식과 의미 창출에 관한 연구)

  • Kim, Young-Jin; Park, Gooman
    • Journal of Broadcast Engineering / v.21 no.5 / pp.749-759 / 2016
  • As broadcast image content has moved to HDTV and high-resolution monitors have become commonplace, the facial rendering of subjects has become a very significant task in broadcasting lighting. Figure modeling of subjects on HDTV requires smoother and cleaner video images, owing to the greater precision with which light renders the image. Lighting methods that illuminate characters in the digital generation have come to require a new approach, and character modeling methods based on the expressive features of underlight are receiving attention for the aesthetic rendering of figures in HD images. Accordingly, the influence of underlight source intensity, distance, and size on character modeling was experimentally measured and comparatively analyzed. The results show that underlight achieves its optimum effect only when its intensity is 17%∼25.5% of the total brightness, its distance is beyond 40 cm, and its size is at least 20 cm. These data should prove highly useful for obtaining smoother and cleaner images of characters in future high-quality video.
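
For reference, the reported ranges can be encoded as a small check; the function name and structure are ours, and only the numbers come from the entry.

```python
# Encodes the ranges reported above: intensity 17-25.5% of total brightness,
# distance beyond 40 cm, source size of at least 20 cm.
def underlight_within_guideline(intensity_pct: float,
                                distance_cm: float,
                                size_cm: float) -> bool:
    return (17.0 <= intensity_pct <= 25.5
            and distance_cm > 40
            and size_cm >= 20)

print(underlight_within_guideline(20.0, 50, 25))  # True
print(underlight_within_guideline(30.0, 50, 25))  # False: too bright
```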

The effects of the usability of products on user's emotions - with emphasis on suggestion of methods for measuring user's emotions expressed while using a product -

  • Jeong, Sang-Hoon
    • Archives of design research / v.20 no.2 s.70 / pp.5-16 / 2007
  • The main objective of our research is to analyze users' emotional changes while using a product, in order to reveal the influence of usability on human emotions. In this study we extracted, through three methods, emotional words that can arise during user interaction with a product and that reveal emotional changes. In total, we extracted 88 emotional words for measuring users' emotions expressed while using products, and categorized the 88 words into 6 groups using factor analysis. The 6 categories extracted as a result of this study were found to be users' representative emotions expressed while using products. It is expected that the emotional words and representative emotions extracted in this study will serve as the subjective evaluation data required to measure users' emotional changes while using a product. We also propose effective methods for measuring users' emotions in an environment that is natural and accessible for the field of design, using the emotion mouse and the Eyegaze. An examinee performs several tasks with the emotion mouse on a mobile phone simulator shown on a computer monitor connected to the Eyegaze. During the test, the emotion mouse senses the user's EDA and PPG and transmits the data to the computer, the Eyegaze observes changes in pupil size, and a video camera records the user's facial expression. After each test, the user performs a subjective evaluation of his or her emotional changes using the emotional words extracted above. We aim to evaluate the satisfaction level of the product's usability and compare it with the actual experimental results. Through continuous studies based on this research, we hope to supply a basic framework for the development of interfaces that consider the user's emotions.
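
The word-grouping step described above (88 words reduced to 6 representative emotion groups by factor analysis) can be sketched as follows; the participant count and random ratings are placeholders, and the scikit-learn choice is our assumption, not the authors' tooling.

```python
# Sketch of factor analysis reducing many word ratings to a few factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

np.random.seed(0)
ratings = np.random.rand(30, 88)          # participants x emotional words
fa = FactorAnalysis(n_components=6, random_state=0)
scores = fa.fit_transform(ratings)        # per-participant factor scores
loadings = fa.components_                 # 6 x 88: which words load on which factor
for i, factor in enumerate(loadings):
    top_words = np.argsort(-np.abs(factor))[:5]
    print(f"factor {i}: top word indices {top_words}")
```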

A Study on The Expression of Digital Eye Contents for Emotional Communication (감성 커뮤니케이션을 위한 디지털 눈 콘텐츠 표현 연구)

  • Lim, Yoon-Ah; Lee, Eun-Ah; Kwon, Jieun
    • Journal of Digital Convergence / v.15 no.12 / pp.563-571 / 2017
  • The purpose of this paper is to establish emotional expression factors for digital eye contents that can be applied to digital environments. Emotions applicable to a smart doll are derived, and we suggest guidelines for the expressive factors of each emotion. For this paper, first, we researched the concepts and characteristics of emotional expression shown in eyes, based on publications, animation, and actual video. Second, we identified six emotions (Happy, Angry, Sad, Relaxed, Sexy, Pure) and extracted their emotional expression factors. Third, we analyzed the extracted factors to establish guidelines for the emotional expression of digital eyes. As a result, this study found that the factors that distinguish and represent each emotion fall into four categories: eye shape, gaze, iris size, and effect. These can be used to enhance emotional communication in digital contents such as animations, robots, and smart toys.
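
The four factor categories and six emotions named in this entry can be organized as a simple lookup table; the concrete values per emotion below are illustrative assumptions, not the paper's guidelines.

```python
# Illustrative table: six emotions x four expression factors from the entry.
from dataclasses import dataclass

@dataclass
class EyeExpression:
    eye_shape: str    # contour of the lids
    gaze: str         # direction of the gaze
    iris_size: float  # relative to a neutral iris (1.0)
    effect: str       # overlay such as sparkle or tears

EXPRESSIONS = {
    "Happy":   EyeExpression("curved",    "forward", 1.1, "sparkle"),
    "Angry":   EyeExpression("narrowed",  "forward", 0.8, "none"),
    "Sad":     EyeExpression("drooping",  "down",    0.9, "tears"),
    "Relaxed": EyeExpression("half-open", "forward", 1.0, "none"),
    "Sexy":    EyeExpression("half-open", "side",    1.0, "highlight"),
    "Pure":    EyeExpression("round",     "up",      1.2, "sparkle"),
}
```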

A Parallel Processing System for Visual Media Applications (시각매체를 위한 병렬처리 시스템)

  • Lee, Hyung; Park, Jong-Won
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.1A / pp.80-88 / 2002
  • Visual media (image, graphics, and video) processing poses challenges from several perspectives, specifically from the point of view of real-time implementation and scalability. There have been several approaches to obtaining the speedups needed to meet the computing demands of multimedia processing, ranging from media processors to special-purpose implementations, and a variety of parallel processing strategies are adopted in these implementations to achieve the required speedups. We have investigated a parallel processing system for improving the processing speed of visual-media-related applications. The proposed parallel processing system is similar to a pipelined multi-access memory system (MAMS). The multi-access memory system is made up of m memory modules and a memory controller that performs parallel memory accesses in a variety of patterns, namely 1×pq, pq×1, and p×q subarrays, which improves both the cost and the complexity of control. Facial recognition, Phong shading, and automatic segmentation of moving objects in image sequences are among the applications that have been run on the parallel processing system, with satisfactory processing speed. This paper describes the parallel processing system and its application to these three time-consuming tasks.
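
The abstract does not give the memory controller's address function, but the module-assignment idea behind such multi-access memory systems can be sketched with a classic skewing scheme: map coordinates to one of m = pq + 1 modules so that 1×pq, pq×1, and p×q accesses each hit distinct modules. The particular mapping below is our assumption, not the paper's design.

```python
# Skewing scheme: module(x, y) = (x + p*y) mod (p*q + 1) keeps row, column,
# and block accesses conflict-free (each element lands in a distinct module).
p, q = 2, 4
m = p * q + 1                 # 9 modules; m is coprime with p

def module(x, y):
    return (x + p * y) % m

row   = {module(x, 0) for x in range(p * q)}                 # 1 x pq access
col   = {module(0, y) for y in range(p * q)}                 # pq x 1 access
block = {module(x, y) for x in range(p) for y in range(q)}   # p x q access
print(len(row), len(col), len(block))   # 8 8 8 -> all conflict-free
```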

Understanding of Fetal Surgery and Application to the Cleft Lip and Palate Patient (태수술에 대한 이해와 구순구개열 환자에서의 적용)

  • Kim, Soung-Min; Park, Jung-Min; Myoung, Hoon; Choi, Jin-Young; Lee, Jong-Ho; Choung, Pill-Hoon; Kim, Myung-Jin
    • Korean Journal of Cleft Lip And Palate / v.11 no.2 / pp.49-58 / 2008
  • The development of fetal surgery has led to promising options for many congenital malformations, such as congenital diaphragmatic hernia (CDH), obstructive uropathy, twin-to-twin transfusion syndrome (TTTS), and sacrococcygeal teratoma. However, preterm labor (PTL) and premature rupture of membranes continue to be ubiquitous risks for both mother and fetus. To reduce maternal morbidity and the risk of prematurity, minimal-access techniques have been developed and are increasingly employed. Life-threatening diseases, as well as severely disabling but not life-threatening conditions, are potentially amenable to treatment. Recently, improvements in video-endoscopic technology have boosted the development of operative techniques for feto-endoscopic surgery, which has been demonstrated to be less invasive than the open approach. Fetal surgery for repair of cleft lip and palate, a congenital anomaly that is not life threatening, is inappropriate until the benefits are shown to outweigh the risks of both the procedure itself and preterm delivery; further animal studies will be needed before intrauterine surgery in humans should be considered. For a better understanding of recent techniques and the complications associated with fetal intervention in patients with congenital facial defects, we review recent articles on the current knowledge and new perspectives of experimental fetal surgery for cleft lip and palate defects.

Multi-view learning review: understanding methods and their application (멀티 뷰 기법 리뷰: 이해와 응용)

  • Bae, Kang Il; Lee, Yung Seop; Lim, Changwon
    • The Korean Journal of Applied Statistics / v.32 no.1 / pp.41-68 / 2019
  • Multi-view learning considers data from various viewpoints and attempts to integrate the various information the views provide. Multi-view learning has been actively studied recently and has shown performance superior to models learned from only a single view. With the introduction of deep learning techniques, multi-view learning has shown good results in various fields such as image, text, voice, and video. In this study, we introduce how multi-view learning methods solve problems in human behavior recognition, medical applications, information retrieval, and facial expression recognition. In addition, we review the data integration principles of multi-view learning methods by classifying traditional methods into data integration, classifier integration, and representation integration. Finally, we examine how CNNs, RNNs, RBMs, autoencoders, and GANs, which are commonly used deep learning methods, are applied to multi-view learning algorithms; we categorize CNN- and RNN-based methods as supervised learning, and RBM-, autoencoder-, and GAN-based methods as unsupervised learning.
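
The three integration levels named in this entry can be contrasted on toy data; the classifiers, feature dimensions, and random data below are illustrative placeholders, not the methods reviewed in the paper.

```python
# Toy contrast of data, classifier, and representation integration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
view1 = rng.normal(size=(100, 20))           # e.g., image features
view2 = rng.normal(size=(100, 30))           # e.g., text features
y = rng.integers(0, 2, size=100)

# 1) data integration: concatenate raw features before learning
clf_data = LogisticRegression(max_iter=1000).fit(np.hstack([view1, view2]), y)

# 2) classifier integration: one model per view, then average predictions
c1 = LogisticRegression(max_iter=1000).fit(view1, y)
c2 = LogisticRegression(max_iter=1000).fit(view2, y)
late_probs = (c1.predict_proba(view1) + c2.predict_proba(view2)) / 2

# 3) representation integration: embed each view, then fuse the embeddings
z = np.hstack([PCA(5).fit_transform(view1), PCA(5).fit_transform(view2)])
clf_repr = LogisticRegression(max_iter=1000).fit(z, y)
```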