• Title/Summary/Keyword: video reality


Comparison of satisfaction, interest, and experience awareness of 360° virtual reality video and first-person video in non-face-to-face practical lectures in medical emergency departments (응급구조학과 비대면 실습 강의에서 360° 가상현실 영상과 1인칭 시점 영상의 만족도, 흥미도, 경험인식 비교)

  • Lee, Hyo-Ju;Shin, Sang-Yol;Jung, Eun-Kyung
    • The Korean Journal of Emergency Medical Services
    • /
    • v.24 no.3
    • /
    • pp.55-63
    • /
    • 2020
  • Purpose: This study aimed to establish effective training strategies and methods by comparing the effects of 360° virtual reality video and first-person video in non-face-to-face practical lectures. Methods: This crossover study, conducted May 18-31, 2020, included 27 participants, each of whom viewed both the 360° virtual reality video and the first-person video. SPSS version 25.0 was used for statistical analysis. Results: The 360° virtual reality video received higher scores for experience recognition (p=.039), vividness (p=.045), presence (p=.000), and fantasy factor (p=.000) than the first-person video, but no significant differences were found for satisfaction (p=.348) or interest (p=.441). Conclusion: Both 360° virtual reality video and first-person video can be used as training alternatives to achieve the standard educational objectives in non-face-to-face practical lectures.
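
Because the study uses a crossover design, each participant is measured under both media, so the reported p-values come from within-subject comparisons. Below is a minimal sketch, assuming made-up rating arrays and a paired t-test (with a non-parametric alternative); it is illustrative only and not the authors' SPSS analysis.

```python
# Hypothetical paired comparison for a crossover design (not the authors' SPSS procedure).
# Ratings are illustrative placeholders: one value per participant, same scale for both media.
import numpy as np
from scipy import stats

vr_360 = np.array([4.2, 3.8, 4.5, 4.0, 3.9, 4.4, 4.1])        # 360° VR video condition (assumed data)
first_person = np.array([3.6, 3.9, 4.0, 3.5, 3.7, 4.1, 3.8])  # first-person video condition (assumed data)

# Paired t-test: every participant experienced both conditions.
t_stat, p_value = stats.ttest_rel(vr_360, first_person)
print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")

# Non-parametric alternative that is often preferred for ordinal rating scales.
w_stat, p_wilcoxon = stats.wilcoxon(vr_360, first_person)
print(f"Wilcoxon W = {w_stat:.3f}, p = {p_wilcoxon:.3f}")
```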

Video Reality Improvement Using Measurement of Emotion for Olfactory Information (후각정보의 감성측정을 이용한 영상실감향상)

  • Lee, Guk-Hee;Kim, ShinWoo
    • Science of Emotion and Sensibility
    • /
    • v.18 no.3
    • /
    • pp.3-16
    • /
    • 2015
  • Will orange scent enhance video reality if it is presented with a video that vividly illustrates orange juice? Or will romantic scent improve video reality if it is presented along with a date scene? Whereas the former concerns reality improvement when concrete objects or places are present in a video, the latter concerns cases where they are absent. This paper reviews previous research that tested diverse videos and scents to answer these two questions, and discusses implications, limitations, and future research directions. In particular, the paper focuses on measurement methods and results regarding the acceptability of olfactory information, perception of scent similarity, olfactory vividness and video reality, matching between scent and color (or color temperature), and the description of various scents using emotional adjectives. We expect this paper to help researchers and engineers who are interested in using scents to enhance video reality.

Video Classification Based on Viewer Acceptability of Olfactory Information and Suggestion for Reality Improvement (시청자의 후각정보 수용 특성에 따른 영상분류와 실감증대를 위한 제안)

  • Lee, Guk-Hee;Choi, Ji Hoon;Ahn, Chung Hyun;Li, Hyung-Chul O.;Kim, ShinWoo
    • Science of Emotion and Sensibility
    • /
    • v.16 no.2
    • /
    • pp.207-220
    • /
    • 2013
  • For video reality improvement, there has been much advancement in methods of providing visual, auditory, and tactile information. On the other hand, there is little research on olfaction for video reality because smell is difficult to define and hard to manipulate. As a first step toward video reality improvement using olfactory information, this research investigated users' acceptability of smell when watching videos of various kinds and then classified the video clips based on their acceptability under different criteria. To do so, we first selected three questions: whether the scene in the video appears to have an odor (odor presence), whether a matching odor is likely to improve the sense of reality (effect on sense of reality), and whether the viewer would like a matching odor to be present (preference for the matching odor). After collecting 51 video clips of various genres expected to receive either high or low ratings on these questions, we had participants watch the videos and rate them on the three questions on a 7-point scale. For video classification, we paired the questions two at a time to construct a 2D space and drew scatterplots of the video clips, with the scales of the two questions serving as the X and Y axes. Clusters of video clips located in different quadrants of this 2D space would provide important insights for providing olfactory information for video reality improvement.
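
The quadrant analysis described above is essentially a pairwise scatterplot of per-clip ratings. A minimal sketch follows, assuming simulated 7-point ratings and two of the three questions; the data, variable names, and midpoint split are illustrative assumptions, not the authors' dataset or procedure.

```python
# Illustrative pairing of two rating questions as X/Y axes, with the scale midpoint
# marking the four quadrants in which video clips may cluster.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
n_clips = 51
# Assumed 7-point ratings for two of the three questions.
odor_presence = rng.uniform(1, 7, n_clips)    # "does the scene appear to have an odor?"
reality_effect = rng.uniform(1, 7, n_clips)   # "would a matching odor improve reality?"

fig, ax = plt.subplots()
ax.scatter(odor_presence, reality_effect)
ax.axvline(4, color="gray", linestyle="--")   # midpoint of the 7-point scale
ax.axhline(4, color="gray", linestyle="--")
ax.set_xlabel("Odor presence (1-7)")
ax.set_ylabel("Effect on sense of reality (1-7)")
ax.set_title("Clips by quadrant (illustrative data)")
plt.show()

# Clips in the upper-right quadrant (high odor presence, high expected effect) would be
# the most natural candidates for presenting a matching scent.
upper_right = (odor_presence > 4) & (reality_effect > 4)
print(f"{upper_right.sum()} of {n_clips} clips fall in the upper-right quadrant")
```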

Comparison of experience recognition in 360° virtual reality videos and common videos (360° 가상현실 동영상과 일반 동영상 교육 콘텐츠의 경험인식 비교 분석)

  • Jung, Eun-Kyung;Jung, Ji-Yeon
    • The Korean Journal of Emergency Medical Services
    • /
    • v.23 no.3
    • /
    • pp.145-154
    • /
    • 2019
  • Purpose: This study simulates cardiac arrest situations in 360° virtual reality video clips and general video clips, and compares the correlations between educational media and experience recognition. Methods: Experimental research was carried out with a randomized control group (n=32) and an experimental group (n=32) on March 20, 2019. Results: The group trained with the 360° virtual reality video clips had a higher score for experience recognition (p=.047) than the group trained with the general video clips. Moreover, the subfactors of experience recognition, including the sense of presence and vividness (p=.05), immersion (p<.05), experience (p<.01), fantasy factor (p<.05), and content satisfaction (p<.05), were positively correlated. Conclusion: Enhancing vividness and the sense of presence when developing virtual reality videos recorded with a 360° camera is thought to enable experience recognition without any direct interaction.
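
The abstract combines a between-group comparison with correlations among subfactors. A minimal sketch of both steps follows, assuming placeholder data and column names; it is not the study's analysis.

```python
# Hypothetical sketch of (1) comparing experience recognition between two independent
# groups and (2) correlating two subfactors. Data and column names are assumed.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "group": ["360_vr"] * 32 + ["general"] * 32,
    "experience_recognition": list(range(32, 64)) + list(range(16, 48)),  # placeholder scores
    "presence_vividness": list(range(1, 33)) + list(range(1, 33)),        # placeholder scores
})

# (1) Independent-samples t-test between the two training groups.
vr = df.loc[df["group"] == "360_vr", "experience_recognition"]
gen = df.loc[df["group"] == "general", "experience_recognition"]
t_stat, p_value = stats.ttest_ind(vr, gen)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# (2) Pearson correlation between two subfactors of experience recognition.
r, p_corr = stats.pearsonr(df["experience_recognition"], df["presence_vividness"])
print(f"r = {r:.3f}, p = {p_corr:.3f}")
```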

Design of Metaverse for Two-Way Video Conferencing Platform Based on Virtual Reality

  • Yoon, Dongeon;Oh, Amsuk
    • Journal of information and communication convergence engineering
    • /
    • v.20 no.3
    • /
    • pp.189-194
    • /
    • 2022
  • As non-face-to-face activities have become commonplace, online video conferencing platforms have become popular collaboration tools. However, existing video conferencing platforms have a structure in which one side unilaterally exchanges information, potentially increasing the fatigue of meeting participants. In this study, we designed a video conferencing platform utilizing virtual reality (VR), a metaverse technology, to enable various interactions. A virtual conferencing space and a realistic VR video conferencing content authoring tool support system were designed using Meta's Oculus Quest 2 hardware, the Unity engine, and 3D Max software. With the Photon software development kit, voice recognition was designed to perform automatic text translation through the Watson application programming interface, allowing online video conferencing participants to communicate smoothly even when using different languages. It is expected that the proposed video conferencing platform will enable conference participants to interact and improve their work efficiency.
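
The speech-recognition-plus-translation step mentioned above can be illustrated outside Unity with IBM's Watson Python SDK. The sketch below is a hedged, stand-alone approximation: the credentials, service URLs, model IDs, and audio file are placeholders, and it does not reproduce the authors' Unity/Photon implementation.

```python
# Hedged sketch of a speech-to-text + translation step with the IBM Watson Python SDK
# (pip install ibm-watson). Credentials, URLs, model IDs, and the audio file are placeholders;
# the paper's actual pipeline runs inside Unity with the Photon SDK and is not reproduced here.
from ibm_watson import SpeechToTextV1, LanguageTranslatorV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

stt = SpeechToTextV1(authenticator=IAMAuthenticator("YOUR_STT_APIKEY"))
stt.set_service_url("YOUR_STT_URL")

translator = LanguageTranslatorV3(
    version="2018-05-01", authenticator=IAMAuthenticator("YOUR_LT_APIKEY")
)
translator.set_service_url("YOUR_LT_URL")

# 1) Transcribe a short utterance captured from the conference audio (placeholder file).
with open("utterance.wav", "rb") as audio:
    stt_result = stt.recognize(
        audio=audio,
        content_type="audio/wav",
        model="ko-KR_BroadbandModel",  # example model id for Korean speech
    ).get_result()
transcript = stt_result["results"][0]["alternatives"][0]["transcript"]

# 2) Translate the transcript so participants can read it in another language.
translation = translator.translate(text=[transcript], model_id="ko-en").get_result()
print(translation["translations"][0]["translation"])
```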

Augmented Reality Annotation for Real-Time Collaboration System

  • Cao, Dongxing;Kim, Sangwook
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.3
    • /
    • pp.483-489
    • /
    • 2020
  • Advancements in mobile phone hardware and network connectivity have made communication more and more convenient. Compared to pictures or text, people prefer to share videos to convey information. To make intentions clearer, annotating comments directly on the video is an important issue. Recently there have been many attempts to make annotations on video, but these previous works have notable limitations: they do not support user-defined handwritten annotations or annotating local video. In this sense, we propose an augmented reality based real-time video annotation system that allows users to make annotations directly and freely on the video. The contribution of this work is the development of a real-time video annotation system based on recent augmented reality platforms that not only enables drawing geometric shapes on video in real time but also drastically reduces production costs. For practical use, we propose a real-time collaboration system based on the proposed annotation method. Experimental results show that the proposed annotation method meets the requirements of real-time performance, accuracy, and robustness for the collaboration system.
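
To give an intuition for freehand annotation over video, here is a minimal 2D sketch with OpenCV: mouse strokes are drawn onto a persistent overlay composited onto each incoming frame. It is only an illustration under assumed inputs (a webcam feed); the paper's system additionally anchors annotations with an augmented reality platform, which is not reproduced here.

```python
# Minimal illustration of freehand (handwritten-style) annotation drawn over live video
# with OpenCV. Flat 2D only; no AR anchoring as in the proposed system.
import cv2
import numpy as np

drawing = False
last_point = None
overlay = None  # persistent layer holding the strokes

def on_mouse(event, x, y, flags, param):
    global drawing, last_point
    if overlay is None:
        return
    if event == cv2.EVENT_LBUTTONDOWN:
        drawing, last_point = True, (x, y)
    elif event == cv2.EVENT_MOUSEMOVE and drawing:
        cv2.line(overlay, last_point, (x, y), (0, 0, 255), 3)  # red stroke
        last_point = (x, y)
    elif event == cv2.EVENT_LBUTTONUP:
        drawing = False

cap = cv2.VideoCapture(0)  # webcam as a stand-in video source
cv2.namedWindow("annotate")
cv2.setMouseCallback("annotate", on_mouse)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if overlay is None:
        overlay = np.zeros_like(frame)
    # Composite the stroke layer onto the current frame.
    mask = overlay.any(axis=2)
    frame[mask] = overlay[mask]
    cv2.imshow("annotate", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```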

Enhanced Augmented Reality with Realistic Shadows of Graphic Generated Objects (비디오 영상에 가상물체의 그림자 삽입을 통한 향상된 AR 구현)

  • Kim, Tae-Won;Hong, Ki-Sang
    • Proceedings of the IEEK Conference
    • /
    • 2000.09a
    • /
    • pp.619-622
    • /
    • 2000
  • In this paper, we propose a method for inserting graphic objects with realistic shadows into video sequences for enhanced augmented reality. Our purpose is to extend the work of [1], which is applicable to the case of a static camera, to video sequences. In the case of video, however, there are a few challenging problems, including camera calibration over the video sequence and false shadows that occur when the video camera moves. We solve these problems using the convenient calibration technique of [2] together with information from the video sequence. We present experimental results on real video sequences.
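
As a generic illustration of casting a virtual object's shadow onto a known ground plane (a standard planar-projection construction, not necessarily the method of [1] or of this paper), the sketch below builds the projection matrix for a point light and applies it to object vertices in homogeneous coordinates; the plane, light position, and vertices are assumed.

```python
# Generic planar projected-shadow sketch. Each vertex of a virtual object is projected
# along the light direction onto the ground plane, giving the polygon to darken in the
# composited video frame. Plane, light, and vertices are illustrative assumptions.
import numpy as np

def planar_shadow_matrix(plane, light):
    """plane = (a, b, c, d) for ax+by+cz+d=0; light = homogeneous 4-vector
    (w=1 for a point light, w=0 for a directional light)."""
    plane = np.asarray(plane, dtype=float)
    light = np.asarray(light, dtype=float)
    return (plane @ light) * np.eye(4) - np.outer(light, plane)

ground = (0.0, 1.0, 0.0, 0.0)            # the y = 0 plane
light = np.array([2.0, 5.0, 1.0, 1.0])   # point light above the scene

# A unit cube hovering one unit above the ground (homogeneous coordinates).
cube = np.array([[x, y + 1.0, z, 1.0] for x in (0, 1) for y in (0, 1) for z in (0, 1)])

M = planar_shadow_matrix(ground, light)
shadow = (M @ cube.T).T
shadow = shadow[:, :3] / shadow[:, 3:4]   # perspective divide back to 3D points
print(np.round(shadow, 3))                # all y-coordinates are ~0: the points lie on the plane
```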

Weaving the realities with video in multi-media theatre centering on Schaubuhne's Hamlet and Lenea de Sombra's Amarillo (멀티미디어 공연에서 비디오를 활용한 리얼리티 구축하기 - 샤우뷔네의 <햄릿>과 리니아 드 솜브라의 <아마릴로>를 중심으로 -)

  • Choi, Young-Joo
    • Journal of Korean Theatre Studies Association
    • /
    • no.53
    • /
    • pp.167-202
    • /
    • 2014
  • When video composes the mise-en-scène of a performance, it reflects contemporary image culture, in which individuals join as creators through the cell phones and computers that remediate earlier video technology. It is also closely related to a contemporary theatre culture in which the video art of the 1960s and 1970s was woven into performance theatre. Against this cultural background, theatre practitioners came to regard media-friendly mise-en-scène as an alternative, facing a cultural landscape in which the linear representational narrative no longer corresponded to the present. Nonetheless, it cannot be ignored that video in performance theatre remediates its historical functions: to criticize social reality and to enrich aesthetic or emotional reality. I focus on how video in performance theatre can feature the object with the image by realizing real-time relay, emphasizing the situation within the frame, and strengthening reality by alluding to the object as a gesture. I therefore explore its two historical functions. First, in its critical function, video records the spot, communicates information, and arouses the audience's recognition of the object. Second, in its aesthetic function, video in performance theatre can redistribute ways of perceiving through editing methods such as close-up, slow motion, multiple perspectives, montage and collage, and transformation of the image. Bearing these historical functions in mind, I analyze two productions, Schaubuhne's Hamlet and Lenea de Sombra's Amarillo, which were introduced to Korean audiences during the 2010 Seoul Theatre Olympics. Ostermeier took real social reality as a text and made the play its context, using video as a vehicle to penetrate social reality through the hero's perspective. It is also noteworthy that Ostermeier understood Hamlet's dilemma as the propensity of today's young generation, who delay action while immersed in image culture, and that his use of video in the piece revitalized its aesthetic function through a hypermedial perceptual method. Amarillo combined documentary theatre methods with installation, physical theatre, and on-the-spot video relay, and activated the aesthetic function through intermediality, the interacting co-relationship between the media. In this performance, video recorded and pursued the absent presence of real people who died or were lost in the desert, while at the same time fantasizing the emotional aspect of those people at the moment of their death, which would otherwise remain opaque or invisible. In conclusion, video in contemporary performance theatre visualizes the rupture between media and performs their intermediality. It attempts to disturb transparent immediacy in order to direct the spectator's perception toward the theatrical situation, to open its emotional and spiritual aspects, and to remind us of realities, as in Schaubuhne's Hamlet and Lenea de Sombra's Amarillo.

Status and development direction of Virtual Reality Video technology (가상현실 영상 기술의 현황과 발전방향 연구)

  • Liu, Miaoyihai;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.19 no.12
    • /
    • pp.405-411
    • /
    • 2021
  • Virtual reality technology is a new practical technology developed in the 20th century. In recent years, the related industry has developed rapidly owing to the continuous development and improvement of virtual reality (VR) technology, and the realistic video content produced with VR technology provides users with a better visual experience. In addition, VR has excellent characteristics in terms of interaction and imagination, so a bright prospect can be expected in the field of video content production. This paper introduces the types of VR video displays, the underlying technology, and how users currently view VR video. In addition, the difference in resolution between past and current VR equipment is compared and analyzed, and the reasons why resolution affects the VR image are explored. Finally, we present several directions for the future development of VR video that should make it more convenient for users.
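
To make concrete why resolution matters so much for VR video, here is a hedged back-of-the-envelope calculation: an equirectangular 360° frame is stretched over the full sphere, so only a fraction of its pixels falls inside the headset's field of view. The video width and field of view below are illustrative assumptions.

```python
# Back-of-the-envelope pixels-per-degree calculation for 360° video (illustrative numbers).
def pixels_per_degree(horizontal_pixels, horizontal_degrees):
    return horizontal_pixels / horizontal_degrees

video_width = 3840   # "4K" equirectangular 360° video (assumed)
headset_fov = 100    # typical horizontal field of view in degrees (assumed)

ppd = pixels_per_degree(video_width, 360)
visible_pixels = ppd * headset_fov

print(f"{ppd:.1f} pixels per degree across the panorama")
print(f"about {visible_pixels:.0f} horizontal pixels inside a {headset_fov} degree view")
# ~10.7 pixels/degree, far below the ~60 pixels/degree often cited for normal visual acuity,
# which is why much higher source resolutions are needed for sharp-looking VR video.
```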

User Perception of Olfactory Information for Video Reality and Video Classification (영상실감을 위한 후각정보에 대한 사용자 지각과 영상분류)

  • Lee, Guk-Hee;Li, Hyung-Chul O.;Ahn, Chung Hyun;Choi, Ji Hoon;Kim, Shin Woo
    • Journal of the HCI Society of Korea
    • /
    • v.8 no.2
    • /
    • pp.9-19
    • /
    • 2013
  • There has been much advancement in reality enhancement using audio-visual information. On the other hand, there is little research on the provision of olfactory information because smell is difficult to implement and control. In order to obtain the basic data needed for providing smell for video reality, this research investigated user perception of smell in diverse videos and then classified the videos based on the collected perception data. To do so, we chose five main questions: 'whether smell is present in the video' (smell presence), 'whether one desires to experience the smell with the video' (preference for smell presence with the video), 'whether one likes the smell itself' (preference for the smell itself), 'the desired smell intensity if it is presented with the video' (smell intensity), and 'the degree of smell concreteness' (smell concreteness). After sampling video clips of various genres likely to receive either high or low ratings on these questions, we had participants watch each video and then provide ratings on a 7-point scale for the five questions. Using the rating data for each video clip, we constructed scatter plots by pairing the five questions and representing the rating scales of each paired question as the X and Y axes of a 2-dimensional space. The video clusters and distributional shapes in the scatter plots would provide important insight into the characteristics of each video cluster and into how to present olfactory information for video reality.
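
The classification step implied by the scatter plots can also be expressed directly as a quadrant labelling of clips from their mean ratings on a pair of questions. The sketch below assumes simulated ratings, invented clip names, and a midpoint split; it is illustrative only, not the study's data or exact procedure.

```python
# Hypothetical quadrant classification of clips from mean ratings on two of the five
# questions (e.g. "smell presence" vs. "preference for smell presence with the video").
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
clips = [f"clip_{i:02d}" for i in range(1, 21)]
ratings = pd.DataFrame({
    "clip": clips,
    "smell_presence": rng.uniform(1, 7, len(clips)).round(2),
    "presence_preference": rng.uniform(1, 7, len(clips)).round(2),
})

# Split at the scale midpoint (4 on a 7-point scale) to label the four quadrants.
ratings["quadrant"] = np.select(
    [
        (ratings.smell_presence >= 4) & (ratings.presence_preference >= 4),
        (ratings.smell_presence >= 4) & (ratings.presence_preference < 4),
        (ratings.smell_presence < 4) & (ratings.presence_preference >= 4),
    ],
    ["smell felt & wanted", "smell felt but unwanted", "no smell felt but wanted"],
    default="no smell felt & not wanted",
)
print(ratings["quadrant"].value_counts())
```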
