• Title/Summary/Keyword: video clips

195 search results

Students' Perceptions on Chemistry I Class Using YouTube Video Clips (유튜브 동영상을 활용한 화학 I 수업에 대한 학생들의 인식)

  • Jyun, Hwa-Young;Hong, Hun-Gi
    • Journal of the Korean Chemical Society / v.54 no.4 / pp.465-470 / 2010
  • Using interesting video clips that match the lesson subject is a good way to raise preference for science class among students who favor visual representation. Many video-sharing websites make such clips easy to obtain over the internet, and YouTube is among the most popular and largest of them. In this study, every student in a 'Chemistry I' class for 11th graders was asked to find a video clip related to the lesson subject and present it in class. After the first semester, the students' responses to the YouTube-based class were examined by survey. Students preferred, and were more interested in, the class using YouTube than the textbook-centered class, and they favored YouTube clips showing unusual experiments related to the subject content. In addition, the experiments in the clips were an interesting factor, and watching the real phenomena was a helpful factor, in learning chemistry. However, translating the English used in the video clips seemed to be difficult for the students.

2D Adjacency Matrix Generation using DCT for UWV contents

  • Li, Xiaorui;Lee, Euisang;Kang, Dongjin;Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.11a / pp.39-42 / 2016
  • Since display devices such as TVs and signage are getting larger, media types are shifting toward wider views such as UHD, panoramic, and jigsaw-like media. Panoramic and jigsaw-like media in particular are produced by stitching video clips captured by different cameras or devices. Stitching those clips requires a 2D adjacency matrix, which describes the spatial relationships among them. The Discrete Cosine Transform (DCT), widely used as a compression transform, converts each frame of a video source from the spatial domain (2D) into the frequency domain. Based on these compressed-domain features, the 2D adjacency matrix of the images can be found, so that a spatial map of the images can be built efficiently using the DCT. This paper proposes a new method of generating a 2D adjacency matrix using the DCT to produce panoramic and jigsaw-like media from various individual video clips.

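The edge-matching idea behind a DCT-based adjacency matrix can be sketched roughly in Python. Everything below (tile layout, boundary-strip width, truncation to low-frequency coefficients, the toy panorama) is an assumption for illustration, not the authors' implementation: score each ordered tile pair by how well the low-frequency 2D DCT coefficients of tile i's right edge match those of tile j's left edge.

```python
import numpy as np

def dct_basis(N):
    """Orthonormal DCT-II basis matrix of size N x N."""
    k = np.arange(N)[:, None]
    x = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * x + 1) * k / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C

def dct2(block):
    """2D DCT-II of a rectangular image block."""
    n, m = block.shape
    return dct_basis(n) @ block @ dct_basis(m).T

def adjacency_matrix(tiles, strip=4, k=8):
    """A[i, j] scores how well tile i's right edge continues into tile j's
    left edge, comparing low-frequency DCT rows of the boundary strips."""
    n = len(tiles)
    A = np.full((n, n), -np.inf)          # a tile is never its own neighbour
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            right = dct2(tiles[i][:, -strip:])[:k]
            left = dct2(tiles[j][:, :strip])[:k]
            A[i, j] = -np.linalg.norm(right - left)   # higher = more adjacent
    return A

# Toy check: cut a smooth synthetic "panorama" into three vertical tiles.
xs = np.arange(96)
ys = np.arange(32)
pano = 0.1 * xs[None, :] + np.sin(ys)[:, None]
tiles = [pano[:, 0:32], pano[:, 32:64], pano[:, 64:96]]
A = adjacency_matrix(tiles)
print(int(np.argmax(A[0])), int(np.argmax(A[1])))  # → 1 2
```

Because the orthonormal DCT preserves Euclidean distances, comparing low-frequency coefficients of boundary strips approximates comparing the strips themselves while discarding high-frequency detail.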

Comparison of experience recognition in 360° virtual reality videos and common videos (360° 가상현실 동영상과 일반 동영상 교육 콘텐츠의 경험인식 비교 분석)

  • Jung, Eun-Kyung;Jung, Ji-Yeon
    • The Korean Journal of Emergency Medical Services / v.23 no.3 / pp.145-154 / 2019
  • Purpose: This study simulated cardiac arrest situations in 360° virtual reality video clips and in ordinary video clips, and compared the correlations between the educational medium and experience recognition. Methods: A randomized experiment was carried out with a control group (n=32) and an experimental group (n=32) on March 20, 2019. Results: The group trained with the 360° virtual reality video clips had a higher experience-recognition score (p=.047) than the group trained with the ordinary video clips. Moreover, the subfactors of experience recognition, including sense of presence and vividness (p=.05), immersion (p<.05), experience (p<.01), fantasy factor (p<.05), and content satisfaction (p<.05), were positively correlated. Conclusion: Enhancing vividness and the sense of presence when developing virtual reality videos recorded with a 360° camera is expected to enable experience recognition without any direct interaction.

Perceived Substitutability between Video Clips of Naver TV Cast and Regular TV program (네이버TV캐스트 클립영상의 TV 방송프로그램에 대한 인지된 대체가능성)

  • Ham, Min-jeong;Lee, Sang Woo
    • The Journal of the Korea Contents Association / v.19 no.6 / pp.92-104 / 2019
  • This study aims to identify perceived substitutability between video clips distributed on Naver TV Cast and regular TV programs. An online survey was conducted for a week in October 2017, collecting responses on viewing time, viewing motives, the extent of viewing by video clip category, and perceived substitutability between video clips and regular TV programs. The dependent variable was Naver TV Cast users' perceived substitutability between video clips and regular TV programs, and the independent variables were viewing time, viewing motives, and degree of viewing by video clip category. Hierarchical regression with these variables showed that viewing time; the use motives of personal relations, booming issues, and selective use; and the extent of web-based video content viewing positively affected perceived substitutability.
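Hierarchical regression of the kind used above enters predictor blocks stepwise and tracks how R² grows as each block is added. A minimal pure-NumPy sketch on simulated data (all variable names, block sizes, and coefficients are invented, not the study's data):

```python
import numpy as np

def r_squared(X, y):
    """Fit OLS with an intercept and return the coefficient of determination."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(42)
n = 200
viewing_time = rng.normal(size=n)        # block 1: viewing time
motives = rng.normal(size=(n, 3))        # block 2: e.g. relations, issues, selective use
categories = rng.normal(size=(n, 2))     # block 3: viewing degree by clip category
# Simulated outcome: perceived substitutability driven by all three blocks.
y = (0.4 * viewing_time + motives @ np.array([0.3, 0.2, 0.25])
     + categories @ np.array([0.3, 0.1]) + rng.normal(size=n))

X = np.empty((n, 0))
r2s = []
for step, block in enumerate([viewing_time[:, None], motives, categories], 1):
    X = np.column_stack([X, block])      # enter the next predictor block
    r2s.append(r_squared(X, y))
    print(f"step {step}: R^2 = {r2s[-1]:.3f}")
```

In-sample R² can never decrease when predictors are added, so the interesting quantity at each step is the size of the increment each block contributes.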

Virtual Reality to Help Relieve Travel Anxiety

  • Ahn, Jong-Chang;Cho, Sung-Phil;Jeong, Soon-Ki
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.6 / pp.1433-1448 / 2013
  • This study presents empirical evidence of the benefit of viewing narrative video clips on embedded virtual reality (VR) websites of hotels to relieve travel anxiety. As the effectiveness of using VR functions to relieve travel anxiety has been shown, we proposed that a VR website enhanced with narrative video clips could relieve travelers' anxiety about accommodations by showing the important aspects of a hotel. Thus, we created a website with a narrative video showing the escape route from a hotel room and another narrative video showing the surrounding neighborhood. We then conducted experiments by having human subjects explore the enhanced VR website and fill out a questionnaire. The results confirmed our hypothesis that there is a statistically significant relationship between relief from travel anxiety and the use of narrative videos on embedded VR websites of hotels.

Generation of Video Clips Utilizing Shot Boundary Detection (샷 경계 검출을 이용한 영상 클립 생성)

  • Kim, Hyeok-Man;Cho, Seong-Kil
    • Journal of KIISE:Computing Practices and Letters / v.7 no.6 / pp.582-592 / 2001
  • Video indexing plays an important role in applications such as digital video libraries or web VOD services that archive large volumes of digital video. Video indexing is usually based on video segmentation. In this paper, we propose a software tool called V2Web Studio, which can generate video clips using a shot boundary detection algorithm. With the V2Web Studio, clip generation consists of four steps: 1) automatically detecting shot boundaries by parsing the video, 2) eliminating errors by manually verifying the detection results, 3) building a logical hierarchy model from the verified shots, and 4) generating multiple video clips corresponding to each logically modeled segment. These steps are performed by the shot detector, shot verifier, video modeler, and clip generator of the V2Web Studio, respectively.

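A histogram-difference detector is the classic approach to the shot-boundary step described above. The sketch below is illustrative only; the bin count, threshold, and toy frames are assumptions, not V2Web Studio's actual algorithm:

```python
import numpy as np

def detect_shot_boundaries(frames, bins=16, threshold=0.5):
    """Flag frame indices where the gray-level histogram changes abruptly.

    frames: iterable of 2-D uint8 arrays. Returns indices i such that a cut
    is declared between frame i-1 and frame i.
    """
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / h.sum())                       # normalise per frame
    cuts = []
    for i in range(1, len(hists)):
        d = 0.5 * np.abs(hists[i] - hists[i - 1]).sum() # total variation distance
        if d > threshold:
            cuts.append(i)
    return cuts

def clips_from_cuts(n_frames, cuts):
    """Turn cut positions into (start, end) clip ranges, end exclusive."""
    edges = [0] + cuts + [n_frames]
    return list(zip(edges[:-1], edges[1:]))

# Toy video: 10 dark frames, then 10 bright frames -> one cut at frame 10.
dark = [np.full((8, 8), 20, dtype=np.uint8) for _ in range(10)]
bright = [np.full((8, 8), 200, dtype=np.uint8) for _ in range(10)]
cuts = detect_shot_boundaries(dark + bright)
print(cuts, clips_from_cuts(20, cuts))  # → [10] [(0, 10), (10, 20)]
```

The manual-verification step in the tool corresponds to a human inspecting and editing the `cuts` list before the clip ranges are generated.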

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems / v.16 no.1 / pp.6-29 / 2020
  • Biometric identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Using the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform multimodal recognition. Moreover, the proposed technique has proven robust when some of the above modalities are missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on the constrained facial video dataset (WVU) and the unconstrained facial video dataset (HONDA/UCSD) resulted in Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when modalities are missing.
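The final score-level fusion stage, which is what lets a system like the one above tolerate missing modalities, can be illustrated with a small sketch. The modality names and score vectors are invented for illustration, and simple weighted averaging stands in for whatever fusion rule the paper actually uses:

```python
import numpy as np

def fuse_scores(modality_scores, weights=None):
    """Average per-class match scores over the modalities that are present.

    modality_scores: dict mapping modality name -> score vector over classes,
    with None for a modality missing at test time (e.g. an occluded ear).
    Returns the fused score vector and the index of the best-matching class.
    """
    available = {m: s for m, s in modality_scores.items() if s is not None}
    if not available:
        raise ValueError("no modality available for fusion")
    if weights is None:
        weights = {m: 1.0 for m in available}       # equal weights by default
    total = sum(weights[m] for m in available)
    fused = sum(weights[m] * np.asarray(s) for m, s in available.items()) / total
    return fused, int(np.argmax(fused))

# Toy example: three enrolled identities, one modality missing at test time.
scores = {
    "frontal_face": [0.2, 0.7, 0.1],
    "left_ear":     [0.3, 0.5, 0.2],
    "right_ear":    None,              # not detected in this video clip
}
fused, identity = fuse_scores(scores)
print(identity)  # → 1
```

Fusing only over available modalities means a missing detector output degrades the evidence gracefully instead of invalidating the whole decision.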

Sensibility Evaluation of Internet Shoppers with the Sportswear Rustling Sounds (스포츠의류 마찰음 정보 제공에 따른 인터넷 구매자의 감성평가)

  • Baek, Gyeong-Rang;Jo, Gil-Su
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2009.05a / pp.177-180 / 2009
  • This study investigates consumers' perception of different fabrics when they are provided with a video clip that includes the rustling sound of the fabric. We used sportswear products currently on the market and evaluated the emotional response of internet shoppers by measuring physiological and psychological responses. Three kinds of vapor-permeable, water-repellent fabric were selected, and for each we generated a video clip containing the fabric's rustling sound and images of exercise activities performed in sportswear made of that fabric. An experimental website containing the video clips was compared with the original website, which served as a control. Thirty subjects with experience buying clothing online took part in the physiological and psychological evaluation of the video clips. Electroencephalography (EEG) was used to measure the physiological response, while the psychological response consisted of evaluating accurate perception of the fabric, satisfaction, and consumer interest. When the video clips with the fabrics' rustling sounds were offered on the website, subjects answered that they could obtain more accurate and rapid information for purchase decisions than when shopping without such information. However, the rustling sounds somewhat annoyed customers, as shown by the psychological and physiological responses. Our study is a critical step in evaluating consumers' emotional responses to sportswear fabrics, which may increase sales, reduce return rates, and aid the development of new sportswear fabrics and the further evolution of the industry.


Video Classification Based on Viewer Acceptability of Olfactory Information and Suggestion for Reality Improvement (시청자의 후각정보 수용 특성에 따른 영상분류와 실감증대를 위한 제안)

  • Lee, Guk-Hee;Choi, Ji Hoon;Ahn, Chung Hyun;Li, Hyung-Chul O.;Kim, ShinWoo
    • Science of Emotion and Sensibility / v.16 no.2 / pp.207-220 / 2013
  • Much progress has been made in providing visual, auditory, and tactile information to improve video reality. By contrast, there is little research on olfaction for video reality, because smell is difficult to define and to manipulate. As a first step toward video reality improvement using olfactory information, this research investigated users' acceptability of smell while watching videos of various kinds and then classified the video clips by their acceptability under different criteria. We first formulated three questions: whether the scene in the video appears to have an odor (odor presence), whether a matching odor would likely improve the sense of reality (effect on sense of reality), and whether the viewer would like a matching odor to be present (preference for the matching odor). After collecting 51 video clips of various genres expected to receive either high or low ratings on these questions, we had participants watch the videos and rate them on the three questions using a 7-point scale. For video classification, we paired the questions to construct 2D spaces and drew scatterplots of the video clips, with the scales of the two questions as the X and Y axes. Clusters of video clips located in different quadrants of these 2D spaces provide important insights for supplying olfactory information to improve video reality.

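The quadrant-based classification described above can be sketched in plain Python. The clip names and ratings below are invented for illustration; the midpoint of the 7-point scale is assumed to split the quadrants:

```python
from itertools import combinations

# Invented ratings (1-7) of each clip on the three questions from the study.
ratings = {
    "cooking_show":        {"odor_presence": 6, "reality_effect": 6, "preference": 5},
    "news_studio":         {"odor_presence": 2, "reality_effect": 2, "preference": 2},
    "sewage_documentary":  {"odor_presence": 7, "reality_effect": 5, "preference": 1},
}

def quadrant(x, y, midpoint=4):
    """Quadrant of a clip in the 2D space spanned by two questions."""
    if x >= midpoint and y >= midpoint:
        return "I"
    if x < midpoint and y >= midpoint:
        return "II"
    if x < midpoint and y < midpoint:
        return "III"
    return "IV"

# One 2D space per pair of questions, as in the scatterplot analysis.
for qx, qy in combinations(["odor_presence", "reality_effect", "preference"], 2):
    groups = {}
    for clip, r in ratings.items():
        groups.setdefault(quadrant(r[qx], r[qy]), []).append(clip)
    print(f"{qx} x {qy}: {groups}")
```

A clip like the invented "sewage_documentary" (strong odor presence, low preference) lands in quadrant IV of the odor-presence × preference space, exactly the kind of cluster that argues against adding a matching odor despite high perceived smell.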