• Title/Summary/Keyword: videos

Search Results: 1,555

YouTube as an information source for instrument separation in root canal treatment

  • Yagiz Ozbay;Neslihan Yilmaz Cirakoglu
    • Restorative Dentistry and Endodontics
    • /
    • v.46 no.1
    • /
    • pp.8.1-8.7
    • /
    • 2021
• Objectives: The reliability and educational quality of videos on YouTube for patients seeking information regarding instrument separation in root canal treatment were evaluated. Materials and Methods: YouTube was searched for videos on instrument separation in root canal treatment. Video content was scored for reliability in 3 categories (etiology, procedure, and prognosis), and for video flow, quality, and educational usefulness using the Global Quality Score (GQS). Descriptive statistics were obtained, and the data were analyzed using analysis of variance and the Kruskal-Wallis test. Results: The highest mean completeness scores were obtained for videos published by dentists or specialists (1.48 ± 1.06). There was no statistically significant difference among sources of upload in terms of content completeness. The highest mean GQS was found for videos published by dentists or specialists (1.82 ± 0.96), although there was no statistically significant correlation between GQS and the source of upload. Conclusions: Videos on YouTube have incomplete and low-quality content for patients who are concerned about instrument separation during endodontic treatment or who experience this complication.

Semiotic Analysis of Advertising Video Related to the Sustainability of Fast Fashion Brands (패스트 패션 브랜드의 지속가능성 관련 광고 영상에 대한 기호학적 분석)

  • Na Yeon Kil;Jaehoon Chun
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.47 no.6
    • /
    • pp.1057-1079
    • /
    • 2023
• This paper examines the use of semiotics for analyzing fashion advertisements in the fast fashion industry. While previous studies have explored the use of semiotics in various industries, the application of this theory in the fashion sector, especially to fast fashion's commercial videos related to sustainability, remains underexplored. The paper adopts Roland Barthes' Semiotics Theory to analyze the advertising videos related to the sustainability of major fast fashion brands such as H&M, MANGO, and ZARA. The research approach involved reviewing all commercial videos related to sustainability on these brands' official YouTube accounts and conducting comprehensive analyses of the advertisements using the binary opposition analysis framework. The paper's findings indicate that these commercial videos serve as a platform to mold a brand's sustainability image and promote the notion that fast fashion brands are leading the charge toward sustainability, preparing for an unpredictable future, guiding people toward hope, and offering ultimate freedom. This research highlights the necessity of a critical examination of advertising videos related to sustainability in the fast fashion industry to guarantee accountability and transparency.

A Study on Makeup Characteristics According to Emotional Images Appearing in Domestic YouTube Videos

• An, Na-Hyun
    • International Journal of Advanced Culture Technology
    • /
    • v.12 no.1
    • /
    • pp.1-10
    • /
    • 2024
• While technologies of the Fourth Industrial Revolution, such as artificial intelligence (AI), which create new value through the convergence of intelligent information technology, are becoming hot topics, the beauty industry is rapidly developing, combining information and communication technology to produce smartphone-based beauty content. As this area expands, YouTube is forming a network through various means of information. In particular, beauty-related YouTube videos are a field of great interest and popularity among the public. By classifying the makeup shown in domestic YouTube videos by emotional image and identifying the characteristics of that makeup, this study identifies the needs served by watching YouTube makeup videos and aims to build trust in the delivery of information about makeup. The emotional images were divided into four types: 'modern', 'natural', 'gorgeous', and 'cute'. Among domestic makeup YouTubers, Pony, Isabe, Shinnim, and Lamuque were selected. By organizing more diverse makeup-related content systematically and creatively, we expect a positive influence on K-makeup not only domestically but also overseas. We aim to provide basic data for follow-up research on makeup YouTuber videos in the field of cosmetology and to contribute to marketing plans for the development of the beauty content industry and the establishment of promotional strategies.

UNDERSTANDING BASEBALL GAME PROCESS FROM VIDEO BASED ON SIMILAR MOTION RETRIEVAL

  • Aoki, Kyota
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.541-546
    • /
    • 2009
• There are many videos about sports, and there is a large need for content-based video retrieval. In sports videos, the motions and camera work carry much information about shots and plays. This paper proposes understanding the process of a baseball game using similar-motion retrieval on videos. Similar motion parts can be retrieved, based on the motions shown in the videos, using space-time images that describe those motions. With a finite-state model of plays, the precise points of pitches can be decided from the pattern of estimated typical motions alone. This paper describes the method and the experimental results.
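The finite-state idea in this abstract can be sketched in a few lines. Everything below is an assumption for illustration: the motion labels, the two-step pitch pattern, and the label names are hypothetical, not the paper's actual states or typical motions.

```python
# Hypothetical finite-state scan over a sequence of estimated motion
# labels: a pitch is reported at the frame where the pattern completes.

PITCH_PATTERN = ["windup", "throw"]  # assumed typical-motion sequence

def detect_pitches(motion_labels):
    """Return frame indices where the pitch pattern completes."""
    pitch_frames = []
    state = 0  # current position within PITCH_PATTERN
    for frame, label in enumerate(motion_labels):
        if label == PITCH_PATTERN[state]:
            state += 1
            if state == len(PITCH_PATTERN):
                pitch_frames.append(frame)  # pattern completed: pitch point
                state = 0
        elif label == PITCH_PATTERN[0]:
            state = 1  # restart matching on a fresh wind-up
        else:
            state = 0
    return pitch_frames

labels = ["idle", "windup", "throw", "swing", "idle", "windup", "throw"]
print(detect_pitches(labels))  # frame indices of completed pitches
```

A real system would feed this scan with labels estimated from the space-time images rather than hand-written strings.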


Non-Iterative Threshold based Recovery Algorithm (NITRA) for Compressively Sensed Images and Videos

  • Poovathy, J. Florence Gnana;Radha, S.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.10
    • /
    • pp.4160-4176
    • /
    • 2015
• Data compression, such as image and video compression, has come a long way since the introduction of Compressive Sensing (CS), which compresses sparse signals such as images and videos to very few samples, i.e., M < N measurements. At the receiver end, a robust and efficient recovery algorithm estimates the original image or video. Many prominent algorithms solve a least squares problem (LSP) iteratively in order to reconstruct the signal, hence consuming more processing time. In this paper, a non-iterative threshold-based recovery algorithm (NITRA) is proposed for the recovery of images and videos without solving an LSP, claiming reduced complexity and better reconstruction quality. The elapsed time for images and videos using NITRA is in the ㎲ range, which is 100 times less than that of other existing algorithms. The peak signal-to-noise ratio (PSNR) is above 30 dB, and the structural similarity (SSIM) and structural content (SC) are about 99%.
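The general shape of one-shot, threshold-based CS recovery can be sketched as below. This is not the NITRA formulation itself (the paper defines its own thresholds and post-processing); it only illustrates the non-iterative idea, with an assumed signal length, measurement count, and sparsity level.

```python
import numpy as np

# One-shot thresholded back-projection on a synthetic K-sparse signal.
rng = np.random.default_rng(0)
N, M, K = 64, 32, 3          # signal length, measurements (M < N), sparsity

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = 1.0     # K-sparse test signal

Phi = rng.normal(size=(M, N)) / np.sqrt(M)   # random measurement matrix
y = Phi @ x                                  # M compressed measurements

proxy = Phi.T @ y                            # single back-projection
threshold = np.sort(np.abs(proxy))[-K]       # keep the K largest entries
x_hat = np.where(np.abs(proxy) >= threshold, proxy, 0.0)
```

The key contrast with iterative LSP solvers is that the estimate is produced in one pass: one back-projection and one thresholding step, with no refinement loop.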

Trends in Online Action Detection in Streaming Videos (온라인 행동 탐지 기술 동향)

  • Moon, J.Y.;Kim, H.I.;Lee, Y.J.
    • Electronics and Telecommunications Trends
    • /
    • v.36 no.2
    • /
    • pp.75-82
    • /
    • 2021
• Online action detection (OAD) in streaming videos is an attractive research area that has recently attracted growing interest. Although most studies of action understanding have considered action recognition in well-trimmed videos and offline temporal action detection in untrimmed videos, online action detection methods are required to monitor action occurrences in streaming videos. OAD predicts action probabilities for the current frame or frame sequence using a fixed-size video segment that includes past and current frames. In this article, we discuss deep learning-based OAD models. In addition, we investigate OAD evaluation methodologies, including benchmark datasets and performance measures, and compare the performances of the presented OAD models.
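The fixed-size segment scheme described in the abstract can be sketched as follows. The segment length, feature dimensions, and the scorer are assumptions; a real OAD system would replace the stand-in `model` with a trained network.

```python
import numpy as np

W = 8              # segment length: past + current frames (assumed)
NUM_ACTIONS = 3    # number of action classes (assumed)

def model(segment):
    """Stand-in scorer: maps a (frames, features) segment to a
    probability distribution over action classes via a softmax."""
    scores = segment.mean(axis=0)[:NUM_ACTIONS]
    e = np.exp(scores - scores.max())
    return e / e.sum()

def stream_oad(frames):
    """Emit per-frame action probabilities as frames arrive online."""
    buffer = []
    for frame in frames:            # frames arrive one at a time
        buffer.append(frame)
        buffer = buffer[-W:]        # keep only the most recent W frames
        yield model(np.stack(buffer))

frames = np.random.default_rng(1).normal(size=(20, 16))  # toy features
probs = list(stream_oad(frames))   # one distribution per streamed frame
```

The point of the buffer is that, unlike offline detection, only past and current frames are ever available when a prediction is made.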

Creating Deep Learning-based Acrobatic Videos Using Imitation Videos

  • Choi, Jong In;Nam, Sang Hun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.2
    • /
    • pp.713-728
    • /
    • 2021
• This paper proposes an augmented reality technique for generating acrobatic scenes from hitting-motion videos. After a user shoots a video of a motion that mimics hitting an object with the hands or feet, their pose is analyzed using deep learning-based motion tracking to follow the hand or foot movement while hitting the object. The hitting position and time are then extracted to generate the object's moving trajectory using physics optimization, synchronized with the video. The proposed method can create videos of hitting objects with the feet (e.g., soccer ball lifting) or the fists (e.g., tap ball) and is suitable for augmented reality applications that include virtual objects.
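The trajectory-synchronization step can be illustrated with plain ballistics: given two detected hit times, choose a launch velocity so the virtual object leaves the hit point at the first hit and returns exactly at the second. The paper uses physics optimization; this sketch, with assumed gravity and frame rate, only shows the synchronization idea.

```python
G = 9.8          # gravity, m/s^2 (assumed)
FPS = 30.0       # video frame rate (assumed)

def trajectory_between_hits(y0, t0_frame, t1_frame):
    """Vertical positions per frame so the object leaves height y0 at
    frame t0_frame and falls back to y0 exactly at frame t1_frame."""
    t0, t1 = t0_frame / FPS, t1_frame / FPS
    flight = t1 - t0
    v0 = G * flight / 2.0        # launch speed that lands at t1 exactly
    ys = []
    for f in range(t0_frame, t1_frame + 1):
        t = f / FPS - t0
        ys.append(y0 + v0 * t - 0.5 * G * t * t)  # projectile height
    return ys

# One-second flight between hits detected at frames 30 and 60
ys = trajectory_between_hits(0.0, 30, 60)
```

Because the endpoints are pinned to the detected hit frames, the rendered object appears to be struck exactly when the hand or foot makes contact in the video.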

Multi-Person Tracking Using SURF and Background Subtraction for Surveillance

  • Yu, Juhee;Lee, Kyoung-Mi
    • Journal of Information Processing Systems
    • /
    • v.15 no.2
    • /
    • pp.344-358
    • /
    • 2019
• Surveillance cameras have been installed in many places because security and safety are becoming important in modern society. Through installed surveillance cameras, we can deal with trouble and prevent accidents. However, watching surveillance videos and judging accident situations is very labor-intensive, so the need for research that analyzes surveillance videos is growing. This study proposes an algorithm to track multiple persons using SURF and background subtraction. While SURF, as a person-tracking algorithm, is robust to scaling, rotation, and different viewpoints, it makes tracking errors under sudden changes in videos. To resolve such tracking errors, we combined SURF with a background subtraction algorithm and showed that the proposed approach increases tracking accuracy. In addition, the background subtraction algorithm can detect persons in videos, and SURF can initialize tracking targets with these detected persons, so the proposed algorithm can automatically detect the entry and exit of persons.
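The background-subtraction half of this pipeline can be sketched with a running-average model: pixels that deviate from the learned background are flagged as foreground, and those regions can then seed a feature tracker. The update rate and threshold below are assumed values, and the SURF matching stage (which requires opencv-contrib) is omitted.

```python
import numpy as np

ALPHA = 0.05      # background update rate (assumed)
THRESH = 25.0     # foreground difference threshold (assumed)

def foreground_masks(frames):
    """Running-average background subtraction over a frame sequence."""
    background = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float) - background)
        mask = diff > THRESH                       # foreground pixels
        background = (1 - ALPHA) * background + ALPHA * frame
        masks.append(mask)
    return masks

rng = np.random.default_rng(2)
frames = rng.integers(0, 10, size=(5, 32, 32))     # mostly static scene
frames[3, 10:20, 10:20] = 200                      # a "person" enters
masks = foreground_masks(frames)                   # person region flagged
```

In the paper's combination, such masks both correct the feature tracker after sudden scene changes and trigger the automatic initialization of new tracking targets when someone enters the view.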

Video augmentation technique for human action recognition using genetic algorithm

  • Nida, Nudrat;Yousaf, Muhammad Haroon;Irtaza, Aun;Velastin, Sergio A.
    • ETRI Journal
    • /
    • v.44 no.2
    • /
    • pp.327-338
    • /
    • 2022
• Classification models for human action recognition require robust features and large training sets for good generalization; data augmentation methods are therefore employed on imbalanced training sets to achieve higher accuracy. However, samples generated using data augmentation only reflect existing samples within the training set; their feature representations are less diverse and hence contribute to less precise classification. This paper presents new data augmentation and action representation approaches to grow training sets. The proposed approach is based on two fundamental concepts: virtual video generation for augmentation and representation of the action videos through robust features. Virtual videos are generated from the motion history templates of action videos and convolved using a convolutional neural network to generate deep features. Furthermore, guided by an objective function of the genetic algorithm, the spatiotemporal features of different samples are combined to generate representations of the virtual videos, which are then classified with an extreme learning machine classifier on the MuHAVi-Uncut, IXMAS, and IAVID-1 datasets.
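The genetic-algorithm step can be illustrated with a toy example in which chromosomes choose, per dimension, which of two sample feature vectors contributes to a combined "virtual" representation. The encoding, fitness function, and operators below are illustrative assumptions, not the paper's objective.

```python
import random

random.seed(0)
feat_a = [0.9, 0.1, 0.8, 0.2]          # features of sample A (toy values)
feat_b = [0.1, 0.9, 0.2, 0.8]          # features of sample B (toy values)
target = [0.9, 0.9, 0.8, 0.8]          # hypothetical ideal representation

def combine(mask):
    """Per-dimension selection between the two feature vectors."""
    return [feat_a[i] if bit else feat_b[i] for i, bit in enumerate(mask)]

def fitness(mask):
    """Negative squared distance to the target (higher is better)."""
    return -sum((c - t) ** 2 for c, t in zip(combine(mask), target))

population = [[random.randint(0, 1) for _ in range(4)] for _ in range(8)]
for _ in range(30):                     # evolve for a few generations
    population.sort(key=fitness, reverse=True)
    parents = population[:4]            # elitist selection
    children = []
    for _ in range(4):
        p, q = random.sample(parents, 2)
        cut = random.randrange(1, 4)
        child = p[:cut] + q[cut:]       # one-point crossover
        if random.random() < 0.1:
            i = random.randrange(4)
            child[i] = 1 - child[i]     # bit-flip mutation
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
```

The paper's version operates on high-dimensional deep spatiotemporal features and feeds the evolved combinations to an extreme learning machine, but the select-crossover-mutate loop has the same structure.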

A New Denoising Method for Time-lapse Video using Background Modeling

  • Park, Sanghyun
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.2
    • /
    • pp.125-138
    • /
    • 2020
• Due to the development of camera technology, the cost of producing time-lapse video has been reduced, and time-lapse videos are being applied in many fields. A time-lapse video is created from images captured at long intervals over a long period. In this paper, we propose a method to improve the quality of time-lapse videos that monitor changes in plants. Considering the characteristics of time-lapse video, we propose a method of separating the desired objects from unnecessary ones and removing the unnecessary elements. The characteristic of time-lapse videos that we exploit is that unnecessary elements appear only intermittently in the captured images. In the proposed method, noise is removed by applying a codebook background modeling algorithm that uses this characteristic. Experimental results show that the proposed method finds and removes unnecessary elements in time-lapse videos simply and accurately.
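The codebook idea can be sketched per pixel: each pixel accumulates a small list of brightness codewords from training frames, and a later observation counts as background only if it matches some codeword. The full codebook model also tracks color distortion and access frequency; the tolerance below is an assumed value.

```python
EPS = 10  # codeword matching tolerance (assumed)

def train_codebook(samples):
    """Build a codeword list from one pixel's training values."""
    codewords = []
    for v in samples:
        for cw in codewords:
            if abs(v - cw["mean"]) <= EPS:
                cw["n"] += 1
                cw["mean"] += (v - cw["mean"]) / cw["n"]  # running mean
                break
        else:
            codewords.append({"mean": float(v), "n": 1})  # new codeword
    return codewords

def is_background(v, codewords):
    """A pixel value is background if any codeword matches it."""
    return any(abs(v - cw["mean"]) <= EPS for cw in codewords)

# A pixel that alternates between two legitimate background appearances
cb = train_codebook([50, 52, 51, 120, 122])
print(is_background(53, cb), is_background(200, cb))
```

Because intermittent intruders (insects, shadows, passing objects) never build up a matching codeword, their pixels fall outside the model and can be replaced with background values, which is the behavior the paper exploits for time-lapse denoising.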