Title/Summary/Keyword: videos

Search Results: 1,555

Classification of Education Video by Subtitle Analysis (자막 분석을 통한 교육 영상의 카테고리 분류 방안)

  • Lee, Ji-Hoon; Lee, Hyeon Sup; Kim, Jin-Deog
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.88-90 / 2021
  • This paper introduces a method for extracting subtitles from lecture videos with a Korean morpheme analyzer and classifying the videos into categories according to the extracted morpheme information. In some cases, incorrect metadata entered through human error is reflected in item characteristics and degrades the accuracy of a recommendation system. To prevent this, the method builds a keyword table for each category from the morphemes extracted from pre-classified videos, compares the morphemes of a new lecture video against each category's keyword table, and assigns the video to the category with the most similar table. By reducing human intervention, the system classifies videos directly and aims to increase overall accuracy.
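
For readers who want to experiment with the approach, a minimal sketch of the keyword-table idea follows. It assumes KoNLPy's Okt as the Korean morpheme analyzer and a simple count-overlap similarity; the paper does not specify which analyzer or similarity measure it uses.

```python
# A minimal sketch of the keyword-table idea, assuming KoNLPy's Okt as the
# Korean morpheme analyzer (an assumption; the paper does not name one).
from collections import Counter
from konlpy.tag import Okt

okt = Okt()

def build_keyword_table(subtitle_texts):
    """Aggregate noun morphemes from pre-classified videos into one table."""
    table = Counter()
    for text in subtitle_texts:
        table.update(okt.nouns(text))
    return table

def classify(subtitle_text, category_tables):
    """Assign the category whose keyword table overlaps most with the video."""
    words = Counter(okt.nouns(subtitle_text))
    def overlap(table):
        return sum(min(words[w], table[w]) for w in words)
    return max(category_tables, key=lambda c: overlap(category_tables[c]))

# Usage: tables = {"math": build_keyword_table(math_subs), ...}
#        label = classify(new_video_subtitles, tables)
```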

Movement Detection Using Keyframes in Video Surveillance System

  • Kim, Kyutae; Jia, Qiong; Dong, Tianyu; Jang, Euee S.
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.1249-1252 / 2022
  • In this paper, we propose a conceptual framework that identifies video frames containing the movement of people and vehicles in traffic videos. Automatic selection of frames in motion is an important topic in security and surveillance video because the number of videos to be monitored simultaneously is simply too large for limited human resources. The conventional method of identifying areas in motion is to compute differences over consecutive video frames, which is costly because of its high computational complexity. In this paper, we reduce the overall complexity by examining only the keyframes (or I-frames). The basic assumption is that the interval between I-frames (e.g., 0.1 to 3 seconds) is usually shorter than the time an object in motion stays in the video (e.g., a pedestrian walking or an automobile passing). The proposed method estimates the likelihood that motion occurs between I-frames by evaluating the difference of consecutive I-frames against long-term statistics of the previously decoded I-frames of the same video. Experimental results showed that the proposed method achieved more than 80% accuracy on short surveillance videos obtained from different locations, with computational complexity as low as 20% of that of the HM decoder.
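
A minimal sketch of the keyframe-difference idea is shown below. Since extracting true I-frames requires decoder access, frames sampled at a fixed stride stand in for I-frames, and a mean-plus-k-sigma test stands in for the paper's long-term statistics; both are assumptions for illustration.

```python
# A sketch of motion detection from keyframe differences, not the authors'
# implementation. Fixed-stride sampling approximates I-frame positions.
import cv2
import numpy as np

def detect_motion_segments(video_path, stride=30, k=2.0, warmup=10):
    """Flag inter-keyframe intervals whose frame difference deviates
    from the running statistics of previously seen keyframes."""
    cap = cv2.VideoCapture(video_path)
    diffs, flags, prev = [], [], None
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:  # stand-in for an I-frame
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            if prev is not None:
                d = float(np.mean(np.abs(gray - prev)))
                # Compare against long-term statistics of earlier keyframe diffs.
                if len(diffs) >= warmup:
                    mu, sigma = np.mean(diffs), np.std(diffs) + 1e-6
                    flags.append((idx, d > mu + k * sigma))
                diffs.append(d)
            prev = gray
        idx += 1
    cap.release()
    return flags  # [(frame_index, motion_suspected), ...]
```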

Two-Stage Deep Learning Based Algorithm for Cosmetic Object Recognition (화장품 물체 인식을 위한 Two-Stage 딥러닝 기반 알고리즘)

  • Jongmin Kim; Daeho Seo
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.4 / pp.101-106 / 2023
  • With the recent surge in YouTube usage, there has been a proliferation of user-generated videos in which individuals evaluate cosmetics. Consequently, many companies increasingly use evaluation videos for product marketing and market research. However, manual classification of these product review videos incurs significant cost and time. This paper therefore proposes a deep learning-based cosmetics search algorithm to automate the task. The algorithm consists of two networks: one that detects candidate objects in images using shape features such as circles and rectangles, and another that filters and categorizes these candidates. The reason for choosing a Two-Stage architecture over a One-Stage one is that, in videos containing background scenes, it is more robust to first detect cosmetic candidates before classifying them as specific objects. Although Two-Stage structures are generally known to outperform One-Stage structures in terms of model architecture, this study opts for Two-Stage primarily to address the training- and validation-data acquisition issues that arise when using One-Stage. Acquiring data for the shape-based candidate detector and for the candidate classifier is cost-effective, which supports the overall robustness of the algorithm.
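
The two-stage flow can be sketched as follows: a shape-based candidate proposer followed by a classifier over the cropped candidates. The OpenCV contour proposer and the dummy classification rule below are illustrative stand-ins, not the paper's networks.

```python
# A minimal two-stage sketch under stated assumptions; stage 2 is a
# placeholder for a trained CNN, which the paper uses but is not public here.
import cv2
import numpy as np

def propose_candidates(frame, min_area=400):
    """Stage 1: propose candidate regions from shape cues (contours)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        x, y, w, h = cv2.boundingRect(c)
        boxes.append((x, y, w, h))
    return boxes

def classify_crop(crop):
    """Stage 2: placeholder for a CNN labeling a crop as a cosmetic product
    or background (hypothetical dummy rule; train a real model in practice)."""
    return "cosmetic" if crop.mean() > 90 else "background"

def two_stage_recognize(frame):
    results = []
    for (x, y, w, h) in propose_candidates(frame):
        label = classify_crop(frame[y:y + h, x:x + w])
        if label != "background":
            results.append(((x, y, w, h), label))
    return results
```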

Video Highlight Prediction Using GAN and Multiple Time-Interval Information of Audio and Image (오디오와 이미지의 다중 시구간 정보와 GAN을 이용한 영상의 하이라이트 예측 알고리즘)

  • Lee, Hansol; Lee, Gyemin
    • Journal of Broadcast Engineering / v.25 no.2 / pp.143-150 / 2020
  • Huge amounts of content are uploaded every day on various streaming platforms, and game and sports videos account for a large portion. Broadcasting companies sometimes create and provide highlight videos, but these tasks are time-consuming and costly. In this paper, we propose models that automatically predict highlights in games and sports matches. While most previous approaches use visual information exclusively, our models use both audio and visual information and present a way to capture short-term and long-term flows of a video. We also describe models that incorporate a GAN to find better highlight features. The proposed models are evaluated on e-sports and baseball videos.
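
A minimal PyTorch sketch of the multi-time-interval fusion idea follows: image and audio features are encoded over a long window and a short recent window, then combined into a highlight score. Layer sizes are illustrative assumptions, and the paper's GAN component is omitted here.

```python
# A sketch of audio+image fusion over two time scales; dimensions are
# illustrative assumptions, not the paper's configuration. The GAN used in
# the paper for feature learning is omitted for brevity.
import torch
import torch.nn as nn

class HighlightScorer(nn.Module):
    def __init__(self, img_dim=512, aud_dim=128, hidden=256):
        super().__init__()
        # Separate temporal encoders for short and long windows.
        self.short_rnn = nn.GRU(img_dim + aud_dim, hidden, batch_first=True)
        self.long_rnn = nn.GRU(img_dim + aud_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())  # highlight probability

    def forward(self, img_feats, aud_feats, short_len=8):
        x = torch.cat([img_feats, aud_feats], dim=-1)   # (B, T, D)
        _, h_long = self.long_rnn(x)                    # whole window
        _, h_short = self.short_rnn(x[:, -short_len:])  # recent frames only
        z = torch.cat([h_long[-1], h_short[-1]], dim=-1)
        return self.head(z).squeeze(-1)

# Usage: score a batch of 2 clips, 32 time steps each.
model = HighlightScorer()
scores = model(torch.randn(2, 32, 512), torch.randn(2, 32, 128))
```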

A study on the Influence of lighting on DLP videos of HDTV news programs (HDTV 뉴스 조명이 DLP 영상해상도에 미치는 영향에 관한 연구)

  • Kim, Yong-Kyu; Lee, Ki-Tae; Choi, Seong-Jhin
    • Journal of Broadcast Engineering / v.13 no.6 / pp.838-848 / 2008
  • Recently, new multimedia techniques using lighting and projection are often used in the production of broadcasting programs. News programs also use DLP (Digital Light Processing) video walls with good resolution, moving away from conventional set changes. This paper examined the correlation between lighting sources and the resolution of DLP video, performed a simulation, and then proposed ideal lighting for news programs that use DLP. The resolution of DLP video under different lighting conditions was compared using footage captured with an HD camera and a measuring monitor.

Quantization Method in Spatial Domain for Screen Content Video Compression (스크린 콘텐츠 영상 압축을 위한 화소 영역 양자화 방법)

  • Nam, Jung-Hak; You, Jong-Hun; Sim, Dong-Gyu; Oh, Seoung-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.4 / pp.67-76 / 2012
  • As services and production involving screen content videos have recently expanded, the need for new compression techniques is emerging. The next-generation video coding standard is also considering coding tools specialized for screen content, but this work is still at a preliminary stage. In this paper, we investigate the characteristics of screen content videos and propose quantization in the spatial domain to improve coding efficiency. The proposed method quantizes the residual signal directly, without any transform. It also applies adaptive coefficient prediction and an in-loop filter to the quantized residual signals in the spatial domain, based on the characteristics of screen content videos. As a result, the proposed method achieves bit savings of about 4.4%, 5.1%, and 4.9% for the random-access, low-delay, and all-intra configurations, respectively.
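
The core idea, quantizing the residual directly in the spatial domain without a transform, can be sketched in a few lines of NumPy. The uniform step size is an illustrative assumption; the paper's adaptive coefficient prediction and in-loop filter are omitted.

```python
# A minimal sketch of spatial-domain (transform-skip) residual quantization;
# the fixed step size is an assumption, not the paper's exact design.
import numpy as np

def quantize_spatial(residual, qstep):
    """Uniform scalar quantization of spatial-domain residual samples."""
    return np.round(residual / qstep).astype(np.int32)

def dequantize_spatial(levels, qstep):
    return levels.astype(np.float32) * qstep

# Usage: an 8x8 residual block, quantized without any transform.
residual = np.random.randn(8, 8).astype(np.float32) * 10
levels = quantize_spatial(residual, qstep=4.0)
recon = dequantize_spatial(levels, qstep=4.0)
print("max abs error:", np.abs(recon - residual).max())  # bounded by qstep/2
```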

A Data Service of Digital Broadcasting for Program-Guiding using Multi-View Video (멀티 뷰 영상을 활용한 디지털방송의 프로그램가이드 데이터서비스)

  • Ko, Kwangil
    • Journal of Digital Contents Society / v.16 no.1 / pp.71-77 / 2015
  • Currently, numerous broadcasting programs are provided to viewers, which makes it hard to select a program to watch. In particular, surfing channels while watching program videos (the most common way of searching for a program) becomes a time-consuming task involving several tens of channel changes (a channel change takes about 0.7 seconds). In this paper, a data service that guides programs using a multi-view of program videos is proposed. The data service allows viewers to browse all program videos without channel changes. To implement the service, a method has been devised for composing and transmitting multi-view videos together with metadata for handling each video in the multi-view, and a Java API has been implemented to clip, resize, and display parts of the multi-view video.
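
The clip-and-resize operation the service relies on can be sketched as follows. The metadata layout mapping each channel to its tile in the mosaic frame is a hypothetical stand-in for the paper's metadata format, and the sketch is in Python rather than the paper's Java API.

```python
# A sketch of clipping one program's sub-video out of a multi-view mosaic
# frame; MOSAIC_LAYOUT is a hypothetical metadata format for illustration.
import cv2

# Hypothetical metadata: channel id -> (x, y, width, height) in the mosaic.
MOSAIC_LAYOUT = {
    "ch7":  (0,   0,   640, 360),
    "ch9":  (640, 0,   640, 360),
    "ch11": (0,   360, 640, 360),
    "ch13": (640, 360, 640, 360),
}

def clip_and_resize(mosaic_frame, channel, out_size=(1280, 720)):
    """Clip one channel's tile from the mosaic and scale it for display."""
    x, y, w, h = MOSAIC_LAYOUT[channel]
    tile = mosaic_frame[y:y + h, x:x + w]
    return cv2.resize(tile, out_size)
```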

An Analysis of Gender Images of Fashion Style in BTS Music Videos Using Judith Butler's Performativity Theory (버틀러의 수행성 이론으로 본 BTS 뮤직비디오 패션스타일의 젠더 이미지 분석)

  • Jung, Yeonyi; Lee, Youngjae
    • Journal of Fashion Business / v.24 no.1 / pp.88-101 / 2020
  • BTS music videos go beyond promoting music: they convey meaning in various ways and complete the visual message of the music through fashion style. The fashion styles in BTS music videos show changing symbolic representations across the genre of each album and song, and their gender images change in line with BTS's musical messages. The purpose of this study was to derive the gender images of fashion style in BTS music videos and to interpret their meaning based on Judith Butler's theory that performativity creates discourse through an iterative process. As a research method, an analytical study was conducted in parallel with a literature review and an empirical case analysis. The scope of the study was limited to 301 costumes appearing in 21 official music videos, from the debut single album '2 Cool 4 Skool' released in 2013 to the mini album 'Map of the Soul: Persona' released in 2019. The analysis revealed controversial, challenging, boyish, hybrid, and playful fashion styles. The conclusions on the gender image of BTS, interpreted through Judith Butler's theory, are as follows: a traditional image that identifies with the dominant gender discourse, a resistant gender image that intentionally distances itself from mainstream culture, an eclectic image that parodies the gender of the opposing term, and a deconstructive image that transcends the dominant gender discourse.

Development of a Video Mash-up Application using Videos from Network Environment (네트워크 환경의 동영상을 활용하는 동영상 메쉬업 어플리케이션 개발)

  • Koo, Bon-Chul; Kim, Young-Jin; Kim, Eun-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.7 / pp.1743-1749 / 2015
  • As smartphones have spread rapidly, users can easily access video content anywhere, anytime. Demand for the ability to create one's own mash-up video has therefore been increasing, and various video mash-up programs have appeared to meet this need. However, because most existing video mash-up programs can only mash up stored video files, the ability to mash up many videos is limited on smartphones with restricted memory. In this paper, we have developed a video mash-up application that can easily mash up not only stored videos but also videos encountered on the network, to make one's own videos.
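
One way to realize such a mash-up without storing whole files first is to hand both local paths and network URLs directly to a tool like ffmpeg, as the sketch below does. It assumes the ffmpeg CLI is installed; the sources and time spans are placeholders, and this is not the paper's implementation.

```python
# A sketch of mashing up clips from local files and network URLs by passing
# the sources straight to ffmpeg (assumes the ffmpeg CLI is installed).
import subprocess

def mash_up(sources, spans, output="mashup.mp4"):
    """Cut a (start, duration) span from each source and concatenate them.
    sources: list of local paths or http(s) URLs; spans: list of (start, dur)."""
    parts, pads = [], []
    for i, (src, (start, dur)) in enumerate(zip(sources, spans)):
        parts += ["-ss", str(start), "-t", str(dur), "-i", src]
        pads.append(f"[{i}:v][{i}:a]")
    filtergraph = "".join(pads) + f"concat=n={len(sources)}:v=1:a=1[v][a]"
    cmd = ["ffmpeg", *parts, "-filter_complex", filtergraph,
           "-map", "[v]", "-map", "[a]", output]
    subprocess.run(cmd, check=True)

# Usage with hypothetical sources:
# mash_up(["intro.mp4", "https://example.com/clip.mp4"], [(0, 5), (10, 5)])
```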

Robust Method of Video Contrast Enhancement for Sudden Illumination Changes (급격한 조명 변화에 강건한 동영상 대조비 개선 방법)

  • Park, Jin Wook; Moon, Young Shik
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.11 / pp.55-65 / 2015
  • Contrast enhancement methods designed for a single image may cause flickering artifacts when applied to videos because they do not consider temporal continuity. On the other hand, methods that consider the continuity of a video can reduce flickering artifacts but may cause unnecessary fade-in/out artifacts when the intensity of the video changes abruptly. In this paper, we propose a video contrast enhancement method that is robust to sudden illumination changes. The proposed method enhances each frame by Fast Gray-Level Grouping (FGLG) and maintains the continuity of the video with an exponential smoothing filter. The smoothing factor of the filter is computed with a sigmoid function and applied to each frame to reduce unnecessary fade-in/out effects. In the experiments, six measures are used to analyze the performance of the proposed method against traditional methods. The results show that the proposed method achieves the best quantitative performance in MSSIM and flickering score, and visual comparison shows adaptive enhancement under sudden illumination changes.
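
A minimal sketch of the temporal-smoothing idea follows. OpenCV's histogram equalization stands in for FGLG (an assumption; FGLG is not available in OpenCV), and a frame-mean difference serves as the illumination-change signal driving the sigmoid smoothing factor.

```python
# A sketch of temporally smoothed contrast enhancement with a
# sigmoid-adaptive smoothing factor; cv2.equalizeHist stands in for the
# paper's Fast Gray-Level Grouping, and the parameters are illustrative.
import cv2
import numpy as np

def sigmoid_alpha(delta, center=20.0, slope=0.3):
    """Large illumination change -> alpha near 1 (follow the new frame,
    avoiding fade-in/out); small change -> alpha near 0 (keep smoothing,
    suppressing flicker)."""
    return 1.0 / (1.0 + np.exp(-slope * (delta - center)))

def enhance_video(frames):
    """frames: iterable of grayscale uint8 images."""
    smoothed, prev_out, prev_mean = [], None, None
    for frame in frames:
        eq = cv2.equalizeHist(frame).astype(np.float32)
        mean = float(frame.mean())
        if prev_out is None:
            out = eq
        else:
            delta = abs(mean - prev_mean)  # proxy for illumination change
            a = sigmoid_alpha(delta)
            out = a * eq + (1.0 - a) * prev_out  # exponential smoothing
        smoothed.append(np.clip(out, 0, 255).astype(np.uint8))
        prev_out, prev_mean = out, mean
    return smoothed
```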