
Two-Stage Deep Learning Based Algorithm for Cosmetic Object Recognition (화장품 물체 인식을 위한 Two-Stage 딥러닝 기반 알고리즘)

  • Jongmin Kim;Daeho Seo
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.4
    • /
    • pp.101-106
    • /
    • 2023
  • With the recent surge in YouTube usage, there has been a proliferation of user-generated videos in which individuals evaluate cosmetics. Consequently, many companies are increasingly utilizing such evaluation videos for product marketing and market research. However, a notable drawback is that manually classifying these product review videos incurs significant cost and time. Therefore, this paper proposes a deep learning-based cosmetics search algorithm to automate this task. The algorithm consists of two networks: one for detecting candidates in images using shape features such as circles and rectangles, and another for filtering and categorizing these candidates. The reason for choosing a two-stage architecture over a one-stage one is that, in videos containing background scenes, it is more robust to first detect cosmetic candidates before classifying them as specific objects. Although two-stage structures are generally known to outperform one-stage structures in terms of model architecture, this study opts for the two-stage design mainly to address issues in acquiring training and validation data that arise when using a one-stage approach. Acquiring data separately for the shape-based candidate detector and for the candidate classifier is cost-effective, which ensures the overall robustness of the algorithm.
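The two-stage control flow the abstract describes can be sketched as follows; the `Candidate` type, the shape filter, and the product labels are hypothetical stand-ins for the paper's two networks, not its actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Candidate:
    box: Tuple[int, int, int, int]   # x, y, w, h in pixels
    shape: str                       # coarse shape label, e.g. "circle"

def two_stage_search(regions: List[Candidate],
                     is_cosmetic_shape: Callable[[Candidate], bool],
                     classify: Callable[[Candidate], str]) -> List[Tuple[Candidate, str]]:
    # Stage 1: keep only regions whose shape suggests a cosmetic product.
    candidates = [r for r in regions if is_cosmetic_shape(r)]
    # Stage 2: assign a specific product label to each surviving candidate.
    return [(c, classify(c)) for c in candidates]

# Toy stand-ins for the two networks (hypothetical labels):
regions = [Candidate((0, 0, 10, 10), "circle"),
           Candidate((5, 5, 30, 8), "irregular")]
result = two_stage_search(
    regions,
    is_cosmetic_shape=lambda c: c.shape in {"circle", "rectangle"},
    classify=lambda c: "lipstick" if c.shape == "circle" else "unknown",
)
```

The separation means each stage can be trained on cheaper data: shape-only annotations for stage 1, cropped candidates for stage 2.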

Video Highlight Prediction Using GAN and Multiple Time-Interval Information of Audio and Image (오디오와 이미지의 다중 시구간 정보와 GAN을 이용한 영상의 하이라이트 예측 알고리즘)

  • Lee, Hansol;Lee, Gyemin
    • Journal of Broadcast Engineering
    • /
    • v.25 no.2
    • /
    • pp.143-150
    • /
    • 2020
  • Huge amounts of content are uploaded every day on various streaming platforms, and game and sports videos account for a large portion of it. Broadcasting companies sometimes create and provide highlight videos; however, these tasks are time-consuming and costly. In this paper, we propose models that automatically predict highlights in games and sports matches. While most previous approaches use visual information exclusively, our models use both audio and visual information and present a way to capture both the short-term and long-term flow of videos. We also describe models that incorporate a GAN to find better highlight features. The proposed models are evaluated on e-sports and baseball videos.

A study on the Influence of lighting on DLP videos of HDTV news programs (HDTV 뉴스 조명이 DLP 영상해상도에 미치는 영향에 관한 연구)

  • Kim, Yong-Kyu;Lee, Ki-Tae;Choi, Seong-Jhin
    • Journal of Broadcast Engineering
    • /
    • v.13 no.6
    • /
    • pp.838-848
    • /
    • 2008
  • Recently, new multimedia techniques using lighting and projection are often used in the production of broadcasting programs. News programs also use DLP (Digital Light Processing) videos with good resolution, moving away from conventional set changes. This paper examined the correlations between lighting sources and the resolution of DLP videos, performed a simulation, and then proposed ideal lighting for DLP-based news programs. It comparatively examined the resolution of DLP videos under different lighting conditions, using videos captured with an HD camera and a measuring monitor.

Quantization Method in Spatial Domain for Screen Content Video Compression (스크린 콘텐츠 영상 압축을 위한 화소 영역 양자화 방법)

  • Nam, Jung-Hak;You, Jong-Hun;Sim, Dong-Gyu;Oh, Seoung-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.4
    • /
    • pp.67-76
    • /
    • 2012
  • With the recent expansion of services and production involving screen content videos, the necessity of new compression techniques is emerging. The next-generation video coding standard is also considering coding tools specialized for screen content videos, but this work is still at a preliminary stage. In this paper, we investigate the characteristics of screen content videos and, based on them, propose quantization in the spatial domain to improve coding efficiency. The proposed method directly quantizes the residual signal without any transform. It also applies adaptive coefficient prediction and an in-loop filter to the quantized residual signals in the spatial domain, based on the characteristics of screen content videos. As a result, the proposed method achieves bit savings of about 4.4%, 5.1%, and 4.9% in the random-access, low-delay, and all-intra modes, respectively.
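The core idea of quantizing residuals directly in the pixel domain, skipping the transform, can be illustrated with a minimal sketch; the uniform quantizer and step size below are illustrative assumptions, not the paper's exact quantizer design.

```python
import numpy as np

def quantize_spatial(residual: np.ndarray, q_step: float) -> np.ndarray:
    """Quantize pixel-domain prediction residuals directly (no DCT/DST)."""
    return np.round(residual / q_step).astype(np.int32)

def dequantize_spatial(levels: np.ndarray, q_step: float) -> np.ndarray:
    """Reconstruct residuals from the transmitted quantization levels."""
    return levels.astype(np.float64) * q_step

# A tiny 2x2 residual block, as produced after intra/inter prediction:
residual = np.array([[3.0, -7.0],
                     [0.4, 12.0]])
levels = quantize_spatial(residual, q_step=4.0)
recon = dequantize_spatial(levels, q_step=4.0)
```

Skipping the transform suits screen content because sharp text and graphics edges produce residuals that are already sparse in the pixel domain, so a transform spreads rather than compacts their energy.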

A Data Service of Digital Broadcasting for Program-Guiding using Multi-View Video (멀티 뷰 영상을 활용한 디지털방송의 프로그램가이드 데이터서비스)

  • Ko, Kwangil
    • Journal of Digital Contents Society
    • /
    • v.16 no.1
    • /
    • pp.71-77
    • /
    • 2015
  • Currently, numerous broadcasting programs are provided to viewers, which makes it hard for viewers to select a program to watch. In particular, surfing channels while watching the program videos (the most common way of searching for a program) has become a time-consuming task involving several tens of channel changes (each channel change takes about 0.7 seconds). In this paper, a data service for guiding programs using a multi-view of program videos is proposed. The data service allows viewers to browse all program videos without channel changes. To implement the data service, a method for composing and transmitting multi-view videos together with metadata for handling each video of the multi-view was devised, and a Java API was implemented to clip, resize, and display parts of the multi-view videos.
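Clipping one program's video out of a composed multi-view mosaic reduces to computing its tile rectangle; the grid layout and function below are an assumed illustration of that geometry, not the paper's Java API.

```python
from typing import Tuple

def tile_box(mosaic_w: int, mosaic_h: int,
             cols: int, rows: int, index: int) -> Tuple[int, int, int, int]:
    """Return (x, y, w, h) of program `index` inside a cols x rows mosaic,
    filling tiles left-to-right, top-to-bottom."""
    w, h = mosaic_w // cols, mosaic_h // rows
    x = (index % cols) * w
    y = (index // cols) * h
    return x, y, w, h

# Program 5 in a 4x4 mosaic of a 1920x1080 multi-view frame:
box = tile_box(1920, 1080, cols=4, rows=4, index=5)
```

The viewer-side API can then crop and scale this rectangle to full screen, so navigating between programs never triggers a channel change.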

An Analysis of Gender Images of Fashion Style in BTS Music Videos Using Judith Butler's Performativity Theory (버틀러의 수행성 이론으로 본 BTS 뮤직비디오 패션스타일의 젠더 이미지 분석)

  • Jung, Yeonyi;Lee, Youngjae
    • Journal of Fashion Business
    • /
    • v.24 no.1
    • /
    • pp.88-101
    • /
    • 2020
  • The music videos of BTS go beyond the limits of media that merely promotes music: they convey meaning in various ways and complete the visual message of the music through fashion style. BTS's fashion style in the music videos shows changes in symbolic representation across the genre of each album and song, and its gender images change in line with BTS's musical messages. The purpose of this study was to derive the gender images of fashion style in BTS music videos and to interpret their meaning based on Judith Butler's theory that performativity creates discourse through an iterative process. As a research method, an analytical study was conducted combining literature review and empirical case analysis. The scope of the study was limited to 301 costumes that appeared in 21 official music videos, from the debut single album '2Cool 4 Skool' released in 2013 to the mini album 'Map of the Soul: Persona' released in 2019. The analysis revealed five fashion styles: controversial, challenging, boyish, hybrid, and playful. The conclusions of this study of BTS's gender images, interpreted using Judith Butler's theory, are as follows. The gender images of BTS are the traditional image that identifies with the dominant gender discourse, the resistive gender image that intentionally distances itself from mainstream culture, the eclectic image that parodies the gender of the opposing term, and the deconstructive image that transcends the dominant gender discourse.

Development of a Video Mash-up Application using Videos from Network Environment (네트워크 환경의 동영상을 활용하는 동영상 메쉬업 어플리케이션 개발)

  • Koo, Bon-Chul;Kim, Young-Jin;Kim, Eun-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.7
    • /
    • pp.1743-1749
    • /
    • 2015
  • As smartphones have spread rapidly, users can easily access video content anywhere, anytime. Demand for the ability to create one's own mash-up videos has therefore been increasing, and various video mash-up programs have appeared to meet this need. However, because most existing video mash-up programs can only mash up stored video files, the ability to mash up many videos is limited on smartphones, which have restricted memory. In this paper, we develop a video mash-up application that can easily mash up not only stored videos but also videos encountered over the network, to make one's own videos.

Robust Method of Video Contrast Enhancement for Sudden Illumination Changes (급격한 조명 변화에 강건한 동영상 대조비 개선 방법)

  • Park, Jin Wook;Moon, Young Shik
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.11
    • /
    • pp.55-65
    • /
    • 2015
  • Contrast enhancement methods designed for a single image may cause flickering artifacts when applied to videos, because they do not consider temporal continuity. On the other hand, methods that do consider the continuity of videos can reduce flickering artifacts, but they may cause unnecessary fade-in/out artifacts when the intensity of the video changes abruptly. In this paper, we propose a video contrast enhancement method that is robust to sudden illumination changes. The proposed method enhances each frame by Fast Gray-Level Grouping (FGLG) and accounts for the continuity of the video with an exponential smoothing filter. It calculates the smoothing factor of the exponential smoothing filter using a sigmoid function and applies it to each frame to reduce unnecessary fade-in/out effects. In the experiments, six measurements are used to compare the proposed method with traditional methods. The experiments show that the proposed method achieves the best quantitative performance in MSSIM and flickering score, and the visual quality comparison shows adaptive enhancement under sudden illumination changes.
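The interaction between the exponential smoothing filter and the sigmoid-controlled smoothing factor can be sketched as below; the exact argument fed to the sigmoid and the gain `k` are assumptions for illustration, since the abstract does not specify them.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def smooth_mapping(prev_mapped: float, cur_mapped: float,
                   illumination_change: float, k: float = 1.0) -> float:
    """Blend the current frame's enhanced intensity with the previous frame's.

    A small illumination change keeps alpha near 0.5, suppressing flicker;
    a large change pushes alpha toward 1, so the filter follows the new
    frame immediately instead of producing a fade-in/out.
    """
    alpha = sigmoid(k * illumination_change)
    return alpha * cur_mapped + (1.0 - alpha) * prev_mapped
```

With a fixed smoothing factor the filter would always lag, which is exactly the fade-in/out artifact the paper targets; making the factor a function of the measured illumination change removes that lag only when it matters.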

Design of Real-time MR Contents using Substitute Videos of Vehicles and Background based on Black Box Video (블랙박스 영상 기반 차량 및 배경 대체 영상을 이용한 실시간 MR 콘텐츠의 설계)

  • Kim, Sung-Ho
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.6
    • /
    • pp.213-218
    • /
    • 2021
  • In this paper, we detect and track vehicles by type in daytime highway driving videos taken with vehicle black boxes (dashboard cameras). In addition, we design a real-time MR content production method in which substitute videos for each type of detected vehicle are placed at the same locations over a new background video, creating new content. To detect and track vehicles by type, we use the YOLO algorithm, and we apply an RGB color-based mask technique to the substitute videos for each detected vehicle type. The substitute vehicle videos used for the MR content are resized to match the areas of the detected vehicles. Experiments and simulations confirm that real-time MR content design is possible, and we believe it will be useful in the field of VR content.
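The RGB-mask compositing step can be illustrated with a minimal sketch: a substitute-vehicle patch (assumed already resized to the detected box, and chroma-keyed against an assumed key color) is pasted into the background frame. The function name and green key color are illustrative, not from the paper.

```python
import numpy as np

def substitute_vehicle(background: np.ndarray, patch: np.ndarray,
                       box, key_color=(0, 255, 0)) -> np.ndarray:
    """Paste a resized substitute-vehicle patch into the detected box,
    skipping pixels that match the chroma key color."""
    x, y, w, h = box
    out = background.copy()
    region = out[y:y + h, x:x + w]            # view into the output frame
    mask = ~np.all(patch == np.array(key_color, dtype=patch.dtype), axis=-1)
    region[mask] = patch[mask]                # copy only non-key pixels
    return out
```

Run per frame, with `box` taken from the tracker, this substitutes each detected vehicle while leaving the keyed background of the patch transparent.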

YouTube videos provide low-quality educational content about rotator cuff disease

  • Kunze, Kyle N.;Alter, Kevin H.;Cohn, Matthew R.;Vadhera, Amar S.;Verma, Nikhil N.;Yanke, Adam B.;Chahla, Jorge
    • Clinics in Shoulder and Elbow
    • /
    • v.25 no.3
    • /
    • pp.217-223
    • /
    • 2022
  • Background: YouTube has become a popular source of healthcare information in orthopedic surgery. Although quality-based studies of YouTube content have been performed for information concerning many orthopedic pathologies, the quality and accuracy of information on the rotator cuff have yet to be evaluated. The purpose of the current study was to evaluate the reliability and educational content of YouTube videos concerning the rotator cuff. Methods: YouTube was queried for the term "rotator cuff." The first 50 videos from this search were evaluated. Video reliability was assessed using the Journal of the American Medical Association (JAMA) benchmark criteria (range, 0-5). Educational content was assessed using the global quality score (GQS; range, 0-4) and the rotator cuff-specific score (RCSS; range, 0-22). Results: The mean number of views was 317,500.7±538,585.3. The mean JAMA, GQS, and RCSS scores were 2.7±2.0, 3.7±1.0, and 5.6±3.6, respectively. Non-surgical intervention content was independently associated with a lower GQS (β=-2.19, p=0.019). Disease-specific video content (β=4.01, p=0.045) was the only independent predictor of RCSS. Conclusions: The overall quality and educational content of YouTube videos concerning the rotator cuff were low. Physicians should caution patients against using such videos as resources for decision-making and should counsel them appropriately.