• Title/Summary/Keyword: Video clips

Search results: 195

Anatomizing Popular YouTube Channels of English-speaking Countries

  • Han, Sukhee
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.4
    • /
    • pp.42-47
    • /
    • 2020
  • YouTube, the online video streaming platform, has become popular and influential around the globe. Thanks to advances in technology, people without filming expertise can now easily produce videos with unique content. Many people aspire to become popular YouTube creators because they can earn money by placing commercials or product placements (PPL) in their video clips. However, it remains unclear which genres of YouTube videos are popular. YouTube creators run channels where they upload videos of a particular genre. This study investigates the video genres of the top 250 YouTube channels in English-speaking countries (the United States, Canada, the United Kingdom, and Australia) using Social Blade, a research website; the ranking is based on the number of times a channel's videos have been watched ("Video Views"). We thoroughly analyze the popular genres of these channels and the YouTube ecosystem, which is meaningful for today's new media era.
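The genre breakdown described above amounts to tallying channels by genre. A minimal sketch with hypothetical channel data (not the study's actual Social Blade dataset):

```python
from collections import Counter

# Hypothetical (channel, genre) records standing in for a ranked
# listing of top channels by total video views.
channels = [
    ("ChannelA", "Music"), ("ChannelB", "Entertainment"),
    ("ChannelC", "Music"), ("ChannelD", "Gaming"),
    ("ChannelE", "Entertainment"), ("ChannelF", "Music"),
]

# Count how many top channels fall into each genre.
genre_counts = Counter(genre for _, genre in channels)
for genre, count in genre_counts.most_common():
    print(genre, count)
```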

Emergency Detection Method using Motion History Image for a Video-based Intelligent Security System

  • Lee, Jun;Lee, Se-Jong;Park, Jeong-Sik;Seo, Yong-Ho
    • International journal of advanced smart convergence
    • /
    • v.1 no.2
    • /
    • pp.39-42
    • /
    • 2012
  • This paper proposes a method that detects emergency situations in a video stream using the Motion History Image (MHI) and template matching for a video-based intelligent security system. The proposed method creates an MHI for each human object through image processing techniques such as background removal based on a Gaussian Mixture Model (GMM), labeling, and accumulation of the foreground images; the obtained MHI is then compared with existing MHI templates to detect an emergency situation. To evaluate the proposed method, experiments were conducted on a dataset of video clips captured from a security camera, and emergency situations were successfully detected. In addition, the implemented system provides MMS (Multimedia Message Service) notifications so that a security manager can deal with an emergency situation appropriately.
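The MHI update at the core of this approach can be sketched in a few lines: pixels that are moving now receive the current timestamp, and pixels whose last motion is older than a fixed duration are cleared. A minimal NumPy sketch (the foreground mask is assumed to come from a separate GMM background-subtraction step):

```python
import numpy as np

def update_mhi(mhi, fg_mask, timestamp, duration):
    """Update a Motion History Image: moving pixels get the current
    timestamp; pixels older than `duration` are cleared to zero."""
    mhi = mhi.copy()
    mhi[fg_mask] = timestamp
    mhi[(~fg_mask) & (mhi < timestamp - duration)] = 0
    return mhi
```

The resulting image encodes recency of motion as intensity, which is what gets matched against the stored emergency templates.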

Fake News Detection on Social Media using Video Information: Focused on YouTube (영상정보를 활용한 소셜 미디어상에서의 가짜 뉴스 탐지: 유튜브를 중심으로)

  • Chang, Yoon Ho;Choi, Byoung Gu
    • The Journal of Information Systems
    • /
    • v.32 no.2
    • /
    • pp.87-108
    • /
    • 2023
  • Purpose: The main purpose of this study is to improve fake news detection performance by using video information, overcoming the limitations of extant text- and image-oriented studies that do not reflect the latest news consumption trends. Design/methodology/approach: This study collected video clips and related information, including news scripts, speakers' facial expressions, and video metadata, from YouTube to develop a fake news detection model. Based on the collected data, seven combinations of this information (scripts; video metadata; facial expressions; scripts and video metadata; scripts and facial expressions; video metadata and facial expressions; and all three together) were used as input for training and evaluation. The input data were analyzed using six models, including a support vector machine and a deep neural network, and the area under the curve (AUC) was used to evaluate classification performance. Findings: The AUC and accuracy values of the three-feature combination (scripts, video metadata, and facial expressions) were the highest for the logistic regression, naïve Bayes, and deep neural network models. This result implies that fake news detection can be improved by using video information (video metadata and facial expressions). The sample size of this study was relatively small; the generalizability of the results would be enhanced with a larger sample.
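The AUC used for evaluation here equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal rank-based sketch, with illustrative scores rather than outputs of the paper's models:

```python
def auc(scores, labels):
    """Area under the ROC curve, computed as the fraction of
    positive/negative pairs ranked correctly (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating classifier scores 1.0; a random one hovers around 0.5.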

A Study on the Factors Affecting Pre-Roll Advertising Avoidance by Online Video Content Types (온라인 동영상 콘텐츠 유형별 프리롤 광고회피에 영향을 미치는 요인에 관한 연구)

  • Yun, Yeon-Joo;Lee, Yeong-Ju
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.4
    • /
    • pp.677-687
    • /
    • 2018
  • The purpose of this study is to investigate the factors that cause avoidance of pre-roll ads played before online video content, examining usage motives separately for broadcast content clips and web-original content. The results show that, first, viewing of broadcast content clips is higher among users with stronger entertainment/habitual and social interaction motives, while viewing time of web content is higher among users with stronger entertainment/convenience and selective-use motives. Second, perceived intrusiveness has the greatest effect on ad avoidance for broadcast content clips, whereas a positive attitude toward advertising is a significant factor for web content. Content factors such as content preference and engagement did not affect pre-roll ad avoidance.

Deep Learning Object Detection to Clearly Differentiate Between Pedestrians and Motorcycles in Tunnel Environment Using YOLOv3 and Kernelized Correlation Filters

  • Mun, Sungchul;Nguyen, Manh Dung;Kweon, Seokkyu;Bae, Young Hoon
    • Journal of Broadcast Engineering
    • /
    • v.24 no.7
    • /
    • pp.1266-1275
    • /
    • 2019
  • With increasing crime rates and numbers of CCTVs, much attention has been paid to intelligent surveillance systems. Object detection and tracking algorithms have been developed to reduce false alarms and help security agents respond immediately to undesirable events in video clips, such as crimes and accidents. Many studies have proposed algorithms to improve the accuracy of detecting and tracking objects outside tunnels, but such methods may not work well inside a tunnel because of low illuminance, which is highly susceptible to the tail and warning lights of passing vehicles, and detection performance has rarely been tested in tunnel environments. This study investigated the feasibility of object detection and tracking in an actual tunnel environment using YOLOv3 and Kernelized Correlation Filters. We tested 40 actual video clips, differentiating pedestrians from motorcycles, to evaluate the performance of our algorithm. The experimental results showed a significant difference in detection between pedestrians and motorcycles, with no false positives. Our findings are expected to provide a stepping stone toward efficient detection algorithms suited to tunnel environments and to encourage other researchers to gather reliable tracking data for smarter and safer cities.
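A detect-and-track pipeline of this kind typically associates per-frame YOLO detections with existing KCF tracks by bounding-box overlap. A minimal intersection-over-union sketch (illustrative, not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / float(area_a + area_b - inter) if inter else 0.0
```

A detection whose IoU with a track exceeds some threshold (0.5 is a common choice) is assigned to that track; unmatched detections spawn new tracks.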

Robust Online Object Tracking with a Structured Sparse Representation Model

  • Bo, Chunjuan;Wang, Dong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.5
    • /
    • pp.2346-2362
    • /
    • 2016
  • As one of the most important issues in computer vision and image processing, online object tracking plays a key role in numerous research areas and many real applications. In this study, we present a novel tracking method based on a structured sparse representation model, in which the tracked object is assumed to be sparsely represented by a set of object and background templates. The contributions of this work are threefold. First, the structure information of all candidate samples is utilized by a joint sparse representation model in which the representation coefficients of the candidates are encouraged to share the same sparsity patterns; this model can be effectively solved by the simultaneous orthogonal matching pursuit method. Second, we develop a tracking algorithm based on the proposed representation model, a discriminative candidate selection scheme, and a simple model-updating method. Finally, we conduct extensive experiments on several challenging video clips; both qualitative and quantitative evaluations show that our tracker outperforms various state-of-the-art tracking algorithms.
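The shared-support constraint in the joint model is what simultaneous orthogonal matching pursuit (SOMP) enforces: at each greedy step it picks the dictionary atom most correlated with the residuals of all candidates at once. A minimal NumPy sketch under the assumption of a fixed template dictionary D (columns = templates) and candidate matrix Y (columns = candidates):

```python
import numpy as np

def somp(D, Y, k):
    """Simultaneous OMP: greedily select k atoms of D whose summed
    correlation with the residuals of ALL columns of Y is largest,
    so every candidate is represented on the same support."""
    support, residual = [], Y.copy()
    for _ in range(k):
        # Score each atom by its total correlation with all residuals.
        scores = np.abs(D.T @ residual).sum(axis=1)
        scores[support] = -np.inf          # never pick an atom twice
        support.append(int(np.argmax(scores)))
        # Re-fit all candidates on the current shared support.
        coeffs, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        residual = Y - D[:, support] @ coeffs
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[support] = coeffs
    return X
```

The tracker would then score each candidate by how well the object templates (versus background templates) explain it.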

A Comparison of the Activation of Mirror Neurons Induced by Action Observation between Simple and Complex Hand Movements

  • Lee, Mi Young;Kim, Ju Sang
    • The Journal of Korean Physical Therapy
    • /
    • v.31 no.3
    • /
    • pp.157-160
    • /
    • 2019
  • Purpose: We compared the activation patterns of mirror neurons (MN) between two types of hand movement during action observation using functional MRI. Methods: Twelve right-handed healthy subjects (5 male and 7 female; mean age 21.92 ± 2.02 years) participated in the experiment. During fMRI scanning, subjects viewed two different stimuli on a screen: (1) video clips showing repeated grasping and releasing of a ball (simple hand movement, SHM), and (2) video clips showing an actor performing the Purdue Pegboard test (complex hand movement, CHM). A paired t-test in statistical parametric mapping (SPM) was used to compare activation differences between the two types of hand movement. Results: CHM, compared with SHM, produced a higher blood-oxygen-level-dependent (BOLD) signal response in the right superior frontal gyrus, left inferior and superior parietal lobules, and lingual gyrus. No greater BOLD signal response was found for SHM compared with CHM (FWE-corrected, p<0.05). Conclusion: Our findings indicate that the activation patterns for observation of SHM and CHM differ; CHM elicited broader and stronger activations in the brain, including the inferior parietal lobule, a known MN region.

Content Based Video Retrieval by Example Considering Context (문맥을 고려한 예제 기반 동영상 검색 알고리즘)

  • 박주현;낭종호;김경수;하명환;정병희
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.12
    • /
    • pp.756-771
    • /
    • 2003
  • A digital video library system managing a large amount of multimedia information requires efficient and effective retrieval methods. In this paper, we propose and implement a new video search and retrieval algorithm that compares a query video shot with the shots in the archive in terms of foreground object, background image, audio, and context. The foreground object is the region of the video image that changes across the successive frames of a shot; the background image is the remaining region; and the context is the relationship between the low-level features of adjacent shots. Comparing these features reflects the process of filming a moving picture, and it helps the user submit a query focused on the desired features of the target video clips by adjusting their weights in the comparison process. Although the proposed algorithm cannot fully capture the high-level semantics of the submitted query video, it reflects the user's requirements as much as possible by considering the context of video clips and adjusting the feature weights during comparison.
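The user-adjustable weighting described above can be sketched as a normalized weighted sum over per-feature distances. The feature names and values below are illustrative placeholders, not the paper's actual distance measures:

```python
def shot_distance(feature_distances, weights):
    """Combine per-feature distances (foreground, background, audio,
    context) into one score using user-adjustable weights."""
    total = sum(weights.values())
    return sum(weights[f] * feature_distances[f] for f in weights) / total
```

Raising the weight of, say, the foreground feature makes the retrieval focus on shots whose moving objects resemble the query's.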

Stereo Audio Matched with 3D Video (3D영상에 정합되는 스테레오 오디오)

  • Park, Sung-Wook;Chung, Tae-Yun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.2
    • /
    • pp.153-158
    • /
    • 2011
  • This paper presents subjective experimental results on how audio should change when a video clip is watched in 3D rather than 2D. We divided auditory perceptual information into two categories: distance and azimuth, to which a sound source contributes most, and spaciousness, to which the scene or environment contributes most. In the experiment on distance and azimuth (i.e., sound localization), we found that the perceived distance and azimuth of sound sources were magnified when heard with 3D rather than 2D video. This leads us to conclude that 3D sound for localization should be designed with greater distance and azimuth than 2D sound. We also found that 3D sound is preferred not only with 3D video clips but also with 2D video clips. In the experiment on spaciousness, we found that people prefer sound with more reverberation when watching 3D video clips than 2D video clips, which can be understood as 3D video providing more spatial information than 2D video. These subjective results can help audio engineers familiar with 2D audio create 3D audio, and they provide a foundation for future research on 2D-to-3D audio conversion systems. Furthermore, when designing a 3D broadcasting system with limited bandwidth and 2D-TV compatibility, we propose transmitting stereoscopic video, audio with enhanced localization, and metadata for TV sets to generate reverberation for spaciousness.
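The receiver-side reverberation that the proposed metadata would drive could be as simple as a feedback comb filter, a classic building block of algorithmic reverbs. A toy sketch (the delay and gain values are illustrative, not taken from the paper):

```python
def comb_reverb(x, delay, gain):
    """Feedback comb filter: y[n] = x[n] + gain * y[n - delay].
    `delay` is in samples; `gain` < 1 keeps the filter stable."""
    y = list(x)
    for n in range(delay, len(y)):
        y[n] += gain * y[n - delay]
    return y
```

A real design would combine several such filters with allpass stages, with the metadata controlling the overall decay.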

Content-Based Video Copy Detection using Motion Directional Histogram (모션의 방향성 히스토그램을 이용한 내용 기반 비디오 복사 검출)

  • 현기호;이재철
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.5_6
    • /
    • pp.497-502
    • /
    • 2003
  • Content-based video copy detection is a complementary approach to watermarking. As opposed to watermarking, which relies on inserting a distinct pattern into the video stream, video copy detection techniques match content-based signatures to detect copies of a video. Existing content-based copy detection schemes have typically relied on image matching based on key-frame detection. This paper proposes a motion directional histogram, which quantizes and accumulates the direction of motion, for video copy detection. A video clip is represented by its motion directional histogram as a one-dimensional signature. The method is suitable for real-time indexing and for verification tasks involving high-motion clips, such as counting TV commercial (CF) airings.
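The signature described above amounts to quantizing each motion vector's direction into an angular bin and accumulating counts over the clip. A minimal sketch (the bin count of 8 is an illustrative choice, not necessarily the paper's):

```python
import math

def motion_directional_histogram(vectors, bins=8):
    """Quantize each motion vector's direction into one of `bins`
    equal angular sectors and accumulate counts into a 1-D signature."""
    hist = [0] * bins
    for dx, dy in vectors:
        if dx == 0 and dy == 0:
            continue                      # no motion, no direction
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi / bins)) % bins] += 1
    return hist
```

Two clips can then be compared by a simple histogram distance (e.g. L1); a copy yields a near-identical signature even after re-encoding, since the motion pattern survives.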