• Title/Summary/Keyword: Video clips

195 search results

An Analysis of Visual Fatigue Caused From Distortions in 3D Video Production (3D 영상의 제작 왜곡이 시청 피로도에 미치는 영향 분석)

  • Jang, Hyung-Jun; Kim, Yong-Goo
    • Journal of Broadcast Engineering / v.17 no.1 / pp.1-16 / 2012
  • To improve the workflow of 3D video production, this paper analyzes the visual fatigue caused by distortions introduced at the 3D production stage through a set of subjective visual assessment tests. To establish objective indicators for the subjective tests, the various distortions arising in production are categorized into 7 representative visual-fatigue-producing factors. For each category, 4 test video clips are produced by combining different extents of camera movement and object movement in the scene. Each test video is then distorted to reflect each of the 7 factors at 7 levels of severity, yielding 196 5-second video clips for testing. Based on these materials and Recommendation ITU-R BT.1438, subjective visual assessment tests were conducted with 101 participants. The results provide the relative importance and the tolerance limit of each visual-fatigue-producing factor, corresponding to the various distortions encountered in 3D video production.
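
The factorial design described in the abstract (4 base clips × 7 factors × 7 distortion levels) can be sketched as a test matrix. This is a minimal sketch only: the factor and clip names below are illustrative placeholders, not the paper's actual categories.

```python
from itertools import product

# Hypothetical factor names standing in for the paper's 7 visual-fatigue factors.
FACTORS = ["vertical_disparity", "excessive_disparity", "keystone", "color_mismatch",
           "sharpness_mismatch", "crosstalk", "depth_discontinuity"]
LEVELS = range(1, 8)  # 7 distortion levels per factor

# Hypothetical base clips combining camera movement and object movement.
BASE_CLIPS = ["static_cam_static_obj", "static_cam_moving_obj",
              "moving_cam_static_obj", "moving_cam_moving_obj"]

# One 5-second test clip per (base clip, factor, level): 4 * 7 * 7 = 196 clips,
# matching the count reported in the abstract.
test_matrix = list(product(BASE_CLIPS, FACTORS, LEVELS))
print(len(test_matrix))  # 196
```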

Differential effects of the valenced content and the interaction with pacing on information processing while watching video clips (영상물 시청에 발현된 감성 유인가의 차별적 영향과 편집속도와의 상호작용)

  • Lee, Seung-Jo
    • Science of Emotion and Sensibility / v.12 no.1 / pp.33-44 / 2009
  • This study investigates the differential impacts of positive and negative content, and their interaction with pacing as a structural feature, on information processing while watching televised video clips with a moderately intense emotional tone. College participants watched six positive and six negative video clips, each lasting approximately 60 seconds. Heart rate indexed attention and skin conductance measured arousal; after all stimuli were shown, participants completed a free-recall questionnaire. The results demonstrate, first, a positivity superiority in attention: participants' heart rates were slower during positive content than during negative content. Second, a negativity superiority appeared in free-recall memory: participants remembered negative content better than positive content. The results also show an interaction of emotional valence and pacing, as the effects of pacing were smaller for negative content than for positive content. Future studies should further examine the differential and independent functions of positive and negative content in information processing and their potential interaction with formal features.

Feature-Based Light and Shadow Estimation for Video Compositing and Editing (동영상 합성 및 편집을 위한 특징점 기반 조명 및 그림자 추정)

  • Hwang, Gyu-Hyun; Park, Sang-Hun
    • Journal of the Korea Computer Graphics Society / v.18 no.1 / pp.1-9 / 2012
  • Video-based modeling and rendering, developed to produce photo-realistic video content, has been an important research topic in computer graphics and computer vision. To seamlessly combine original input video clips with 3D graphic models, geometric information about the light sources and cameras used to capture the real-world scene is essential. In this paper, we present a simple technique to estimate the position and orientation of an optimal light source from the topology of objects and the silhouettes of the shadows that appear in the original video clips. The technique can generate well-matched shadows as well as render the inserted models under the estimated light sources. Shadows are an important visual cue that empirically indicates the relative location of objects in 3D space, so the proposed real-time shadow generation and rendering algorithms enhance realism in the final composited videos.
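
As a rough illustration of the geometric idea, the direction of a distant light can be recovered from a correspondence between an object's top vertex and the tip of the shadow it casts. This is a minimal sketch under simplifying assumptions (a single directional light, known 3D positions); the paper's actual estimator works from object topology and shadow silhouettes in the video.

```python
import math

def estimate_light_direction(object_top, shadow_tip):
    """For a directional light, the light ray through the object's top vertex
    lands at the shadow tip, so the light direction is the unit vector from
    the shadow tip toward the object top."""
    d = tuple(a - b for a, b in zip(object_top, shadow_tip))
    norm = math.sqrt(sum(c * c for c in d))
    return tuple(c / norm for c in d)

# Hypothetical example: an object top 2 units high whose shadow tip lies
# 2 units away on the ground plane implies a 45-degree light elevation.
direction = estimate_light_direction((0.0, 2.0, 0.0), (2.0, 0.0, 0.0))
```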

Factors Affecting the Popularity of Video Clip: The Case of Naver TV (영상클립의 인기요인에 대한 실증 연구: 네이버 TV를 중심으로)

  • Yang, Gimun; Chung, Sun Hyung; Lee, Sang Woo
    • The Journal of the Korea Contents Association / v.18 no.6 / pp.706-718 / 2018
  • This study analyzed Naver TV users' video clip watching patterns and the factors affecting the popularity of Naver TV video clips. We selected 572 individual video clips that appeared in the top 50 of the Naver TV rankings from September 10 to September 24, 2017. We classified each clip's characteristics into several factors, including the number of likes, the number of subscribers, genre, clip type, and star appearances, and constructed an index of each clip's popularity. The results showed that the number of likes and the number of subscribers per clip were positively related to popularity, as were genre, clip type, and star power. The effect of the extras genre on popularity was the lowest, followed by the entertainment, music, and drama genres, but the differences among the entertainment, music, and drama genres were not statistically significant. Web-only and non-broadcast videos were more popular, and popularity was higher when stars appeared in the clip.

A new approach for content-based video retrieval

  • Kim, Nac-Woo; Lee, Byung-Tak; Koh, Jai-Sang; Song, Ho-Young
    • International Journal of Contents / v.4 no.2 / pp.24-28 / 2008
  • In this paper, we propose a new approach to content-based video retrieval using non-parametric motion classification in a shot-based video indexing structure. The proposed system supports real-time video retrieval through spatio-temporal feature comparison: after extracting a representative frame and non-parametric motion information from shot-based video clips segmented by a scene-change detection method, it measures the similarity between visual features and between motion features, respectively. The non-parametric motion features are extracted by creating normalized motion vectors from an MPEG-compressed stream, discretizing each normalized vector into one of several angle bins, and computing the mean, variance, and direction of the motion vectors in each bin. The visual feature of the representative frame is an edge-based spatial descriptor. Experimental results show that our approach outperforms conventional methods in video indexing and retrieval.
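
The angle-bin step can be sketched as follows. This is a minimal illustration of the general technique, not the paper's exact descriptor: the bin count and the per-bin summary (count, mean magnitude, magnitude variance) are assumptions.

```python
import math

def motion_descriptor(vectors, n_bins=8):
    """Discretize normalized motion vectors (dx, dy) into angle bins and
    summarize each bin as (count, mean magnitude, magnitude variance)."""
    bins = [[] for _ in range(n_bins)]
    for dx, dy in vectors:
        angle = math.atan2(dy, dx) % (2 * math.pi)       # map into [0, 2*pi)
        idx = min(int(angle / (2 * math.pi) * n_bins), n_bins - 1)
        bins[idx].append(math.hypot(dx, dy))
    feature = []
    for mags in bins:
        if mags:
            mean = sum(mags) / len(mags)
            var = sum((m - mean) ** 2 for m in mags) / len(mags)
        else:
            mean = var = 0.0
        feature.append((len(mags), mean, var))
    return feature

# Three unit vectors pointing right, up, and left fall into bins 0, 2, and 4.
desc = motion_descriptor([(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)])
```

Comparing two clips' descriptors bin-by-bin (e.g. by Euclidean distance over the summaries) then gives the motion-similarity half of the spatio-temporal comparison.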

Server Side Solutions For Web-Based Video

  • Biernacki, Arkadiusz
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.4 / pp.1768-1789 / 2016
  • In contemporary video streaming systems based on the HTTP protocol, video players on the client side are responsible for adjusting video quality to network conditions and user expectations. However, when multiple video clips are streamed simultaneously, the intricate application logic implemented in the video players overlays the TCP mechanism that is responsible for balanced access to a shared network link. As a result, some video players may not obtain a fair share of network throughput and may suffer an unstable video bit-rate. We therefore propose to simplify the algorithms implemented in the video players, constraining their functionality to sending feedback to the server about the state of the player buffer. The main logic of the system is shifted to the server, which becomes responsible for bit-rate selection and prioritisation of the video streams transmitted to multiple clients. Experiments in a laboratory environment show that when the server cooperates with the clients, the video players experience fewer quality switches and network throughput is allocated more fairly among them, although at the cost of worse utilisation of network bandwidth.
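
The server-side idea can be sketched as a simple allocation loop: clients report only their buffer occupancy, and the server picks each stream's bit-rate so that the total fits the shared link, upgrading clients with the healthiest buffers first. The bit-rate ladder and the greedy policy below are illustrative assumptions, not the paper's algorithm.

```python
LADDER = [400, 800, 1600, 3200]  # hypothetical available bit-rates, kbit/s

def select_bitrates(buffer_levels, capacity_kbps):
    """buffer_levels: {client_id: seconds of buffered video}.
    Start every client at the lowest rung, then repeatedly upgrade the
    clients with the fullest buffers one ladder step at a time, as long as
    the shared link can still carry the total."""
    rates = {c: LADDER[0] for c in buffer_levels}
    upgraded = True
    while upgraded:
        upgraded = False
        for c in sorted(buffer_levels, key=buffer_levels.get, reverse=True):
            step = LADDER.index(rates[c])
            if step + 1 < len(LADDER):
                candidate = LADDER[step + 1]
                if sum(rates.values()) - rates[c] + candidate <= capacity_kbps:
                    rates[c] = candidate
                    upgraded = True
    return rates

# Client "a" has a healthy 20 s buffer, "b" is nearly starved at 2 s;
# on a 2400 kbit/s link, "a" is upgraded further than "b".
rates = select_bitrates({"a": 20.0, "b": 2.0}, capacity_kbps=2400)
```

Keeping a starved client at a lower rung lets its buffer refill faster, which is one way a server-side allocator can reduce quality switches across clients.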

Video Segmentation and Video Segment Structure for Virtual Navigation

  • Choi, Ji-Hoon; Kim, Seong-Baek; Lee, Seung-Yong; Lee, Jong-Hun
    • Proceedings of the KSRS Conference / 2003.11a / pp.783-785 / 2003
  • In recent years, the use of video in GIS has become an important subject, and many related studies have resulted in VideoGIS. Virtual navigation is an important function applicable to various VideoGIS applications, and video-based virtual navigation must solve the following problems. 1) Because a video's route may not exactly coincide with the route the user wants to navigate, parts of several video clips may be required for a single navigation, and the user should be able to move from one video to another at the proper position. We suggest a video segmentation method based on the geographic data combined with the video. 2) From a starting point to a destination, the frequency of video changes must be minimized: frequent changes can mislead the user about the navigation route and waste computing resources. We suggest methods that structure the video segments and calculate a weight value for each node and link.
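
The weighting idea can be sketched as a shortest-path search in which each link carries its distance cost plus a penalty whenever the route has to switch to a different video clip, so the search prefers routes that stay within one clip. The graph, clip assignments, and penalty value below are illustrative, not from the paper.

```python
import heapq

def best_route(links, clip_of, start, goal, switch_penalty=5.0):
    """Dijkstra over a road graph. links: {node: [(neighbor, distance), ...]};
    clip_of: node -> id of the video clip covering that node. Crossing into
    a different clip adds switch_penalty to the link cost."""
    pq = [(0.0, start, [start])]
    best = {}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue
        best[node] = cost
        for nxt, dist in links.get(node, []):
            extra = switch_penalty if clip_of[nxt] != clip_of[node] else 0.0
            heapq.heappush(pq, (cost + dist + extra, nxt, path + [nxt]))
    return float("inf"), []

# Two equal-length routes from A to D; the one that stays inside clip1 wins.
links = {"A": [("B", 1.0), ("C", 1.0)], "B": [("D", 1.0)], "C": [("D", 1.0)]}
clip_of = {"A": "clip1", "B": "clip1", "C": "clip2", "D": "clip1"}
cost, path = best_route(links, clip_of, "A", "D")
```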

User Perception of Olfactory Information for Video Reality and Video Classification (영상실감을 위한 후각정보에 대한 사용자 지각과 영상분류)

  • Lee, Guk-Hee; Li, Hyung-Chul O.; Ahn, Chung Hyun; Choi, Ji Hoon; Kim, Shin Woo
    • Journal of the HCI Society of Korea / v.8 no.2 / pp.9-19 / 2013
  • There has been much progress in enhancing video reality using audio-visual information, but little research on providing olfactory information, because smell is difficult to implement and control. To obtain the basic data needed for providing smell for video reality, we investigated user perception of smell in diverse videos and then classified the videos based on the collected perception data. We chose five main questions: whether smell is present in the video (smell presence), whether one desires to experience the smell with the video (preference for smell presence with the video), whether one likes the smell itself (preference for the smell itself), the desired smell intensity if presented with the video (smell intensity), and the degree of smell concreteness (smell concreteness). After sampling video clips of various genres likely to receive either high or low ratings on these questions, we had participants watch each video and rate it on a 7-point scale for each question. Using the rating data, we constructed scatter plots by pairing the five questions and representing each pair's rating scales as the X-Y axes of a 2-dimensional space. The video clusters and distributional shapes in the scatter plots provide insight into the characteristics of each video cluster and into how to present olfactory information for video reality.

Design and Implementation of the Video Query Processing Engine for Content-Based Query Processing (내용기반 질의 처리를 위한 동영상 질의 처리기의 설계 및 구현)

  • Jo, Eun-Hui; Kim, Yong-Geol; Lee, Hun-Sun; Jeong, Yeong-Eun; Jin, Seong-Il
    • The Transactions of the Korea Information Processing Society / v.6 no.3 / pp.603-614 / 1999
  • As multimedia application services on high-speed information networks have developed rapidly, the need for a video information management system that lets users retrieve video data efficiently is growing. In this paper, we propose a video data model that integrates free annotations, image features, and spatio-temporal features for the purpose of improving content-based retrieval of video data. The proposed model can act as a generic video data model for multimedia applications and supports free annotations, image features, spatio-temporal features, and the structural information of video data within the same framework. We also propose a video query language for efficiently specifying queries that access video clips, which can formalize various kinds of queries based on video content. Finally, we design and implement a query processing engine for efficient video data retrieval based on the proposed data model and query language.

A Frame-Based Video Signature Method for Very Quick Video Identification and Location

  • Na, Sang-Il; Oh, Weon-Geun; Jeong, Dong-Seok
    • ETRI Journal / v.35 no.2 / pp.281-291 / 2013
  • A video signature is a set of feature vectors that compactly represents and uniquely characterizes one video clip relative to another for fast matching. To find a short duplicated region, the video signature must be robust against common video modifications and must have high discriminability, and the matching method must be fast and successful at finding locations. In this paper, we present a frame-based video signature that uses spatial information, together with a two-stage matching method. The proposed signature is pair-wise independent and robust against common video modifications, and the two-stage matching method is fast and works very well at finding locations. In addition, the proposed matching structure and strategy can distinguish the case in which a part of the query video matches a part of the target video. The method is verified on video modified under the VCE7 experimental conditions found in MPEG-7, achieving a robustness of 88.7% under an independence condition of 5 parts per million while matching over 1,000 clips per second.
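
The two-stage structure can be sketched with toy 8-bit per-frame signatures: stage one compares only a sparse subset of frame signatures by Hamming distance to collect candidate offsets, and stage two verifies the survivors frame by frame to locate the match. The hash size, step, and threshold are illustrative; the paper's signature is built from spatial-information features.

```python
def hamming(a, b):
    """Hamming distance between two small integer frame signatures."""
    return bin(a ^ b).count("1")

def locate(query_sig, target_sig, coarse_step=4, threshold=2):
    """Return the offset in target_sig where query_sig matches, or -1."""
    n, m = len(query_sig), len(target_sig)
    # Stage 1: cheap screening using only every coarse_step-th frame signature.
    candidates = [off for off in range(m - n + 1)
                  if all(hamming(query_sig[i], target_sig[off + i]) <= threshold
                         for i in range(0, n, coarse_step))]
    # Stage 2: full frame-by-frame verification of surviving offsets.
    for off in candidates:
        if all(hamming(query_sig[i], target_sig[off + i]) <= threshold
               for i in range(n)):
            return off
    return -1

# A 6-frame query duplicated at offset 10 of a 26-frame target.
target = [0b10101010] * 10 + [0b01010101] * 6 + [0b10101010] * 10
query = [0b01010101] * 6
offset = locate(query, target)
```

Screening with a sparse subset first is what makes this kind of matcher fast enough to scan many clips per second while still pinpointing the duplicated region exactly.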