• Title/Abstract/Keywords: Videos


CNN-based Visual/Auditory Feature Fusion Method with Frame Selection for Classifying Video Events

  • Choe, Giseok;Lee, Seungbin;Nang, Jongho
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.13 No.3 / pp.1689-1701 / 2019
  • In recent years, personal videos have been widely shared online owing to the popularity of portable devices such as smartphones and action cameras. A recent report predicted that 80% of Internet traffic will be video content by 2021. Several studies have addressed the detection of main video events in order to manage videos at large scale, and they show fairly good performance in certain genres. However, the methods of previous studies have difficulty detecting events in personal videos, because the characteristics and genres of personal videos vary widely. In our research, we found that adding a dataset with the right perspective improved performance, and that performance also depends on how keyframes are extracted from the video. We therefore selected frame segments that can represent a video, considering the characteristics of personal videos. From each frame segment, object, location, food, and audio features were extracted, and representative vectors were generated through a CNN-based recurrent model and a fusion module. The proposed method achieved an mAP of 78.4% in experiments on the LSVC dataset.
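As an illustration of the multi-modal fusion idea in the abstract above, the sketch below combines hypothetical per-segment feature vectors by concatenation and pooling. The feature dimensions, the mean-pooling step, and the toy classifier are all illustrative assumptions; the paper itself uses a CNN-based recurrent model and a fusion module.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-segment features (dims are illustrative, not from the paper):
# object/location/food features from CNNs plus an audio embedding, one row per
# selected frame segment.
n_segments = 8
feat_dims = {"object": 128, "location": 64, "food": 64, "audio": 32}

segments = {
    name: rng.standard_normal((n_segments, d)) for name, d in feat_dims.items()
}

# Late fusion: concatenate the modality features within each segment...
fused = np.concatenate([segments[k] for k in feat_dims], axis=1)  # (8, 288)

# ...then pool over segments into one representative video vector
# (mean pooling stands in for the paper's recurrent model).
video_vec = fused.mean(axis=0)  # (288,)

# Toy linear classifier over event classes.
n_classes = 5
W = rng.standard_normal((fused.shape[1], n_classes))
scores = video_vec @ W
print(video_vec.shape, int(scores.argmax()))
```

The same pattern extends to any number of modalities, as long as each contributes a fixed-length vector per segment.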

숏폼 패션영상의 특성과 제작에 관한 연구 (A Study on the Characteristics and Production of Short-form Fashion Video)

  • 김세진
    • 한국의류학회지 / Vol.45 No.1 / pp.200-216 / 2021
  • This article considers short-form fashion videos as distinct from fashion films, defines the concept, details the expressive characteristics of short-form fashion video, and describes how to produce it. For the methodology, a literature review was conducted to derive the concept and expression techniques, a case study was performed to define the expressive characteristics, and five short-form fashion videos were produced based on the results. The findings are as follows. First, short-form fashion video was defined as a fashion medium for fashion communication within 60 seconds and classified into three digital image formats. Second, analysis of the expression of short-form fashion video revealed simplicity and reconstitution, characterization and remediation, borderlessness and expansion, and the synesthetic trigger of the fashion image. Third, five short-form fashion videos were produced on the theme of the digital garden. They show that short-form fashion video expresses content intensively, as a medium in which sensational expression is more prominent than story composition owing to the short running time that reflects the taste of the digital mainstream.

틱톡에 나타난 한푸 스트리트 스냅의 특성 (Characteristics of Hanfu Street Snaps on TikTok)

  • 장로월;임은혁
    • 한국의류산업학회지 / Vol.24 No.5 / pp.519-529 / 2022
  • This research analyzed the characteristics of Hanfu street snaps on the Chinese version of TikTok to determine the development and meaning of Hanfu. Based on grounded theory, this study selected 102 representative cases by sorting Hanfu street snaps on TikTok by popularity. Subsequently, through open coding, the cases were organized and summarized into five main categories. The findings are as follows: 1) National cultural pride has led a greater number of Hanfu fans and groups to upload short videos promoting the Hanfu movement on TikTok, expanding the influence of the activities and popularizing cultural knowledge. 2) Users attempted cross-cultural communication by participating in cultural festivals in Western countries wearing Hanfu. 3) The 'See now, buy now' function of TikTok enables numerous Hanfu merchants to upload short videos about Hanfu products to promote them and boost sales. 4) As 'gamification' permeates everyday life, computer game enthusiasts among the users wear Hanfu as a form of role-playing. 5) As a unique 'meme' phenomenon on TikTok, wearing Hanfu to make entertaining videos has also become a form of entertainment. Thus, although the characteristics of Hanfu street snaps on TikTok originated in the transmission of Hanfu culture, the culture has been transformed through social media into symbolic consumption and play culture.

MPEG 몰입형 비디오 기반 6DoF 영상 스트리밍 성능 분석 (Performance Analysis of 6DoF Video Streaming Based on MPEG Immersive Video)

  • 정종범;이순빈;김인애;류은석
    • 방송공학회논문지 / Vol.27 No.5 / pp.773-793 / 2022
  • The moving picture experts group (MPEG) immersive video (MIV) compression standard was established to support six degrees of freedom (6DoF) in virtual reality through the transmission of multiple high-quality immersive videos. Considering the trade-off between bitrate and computational complexity, MIV provides two compression approaches: 1) removing inter-view redundancy, or 2) selecting and transmitting representative views. This paper presents a performance analysis of both approaches based on high-efficiency video coding (HEVC) and versatile video coding (VVC), focusing on virtual views synthesized at the input-view positions and on user-viewport videos.

How to utilize various Bluetooth devices when receiving Korean Public Alert System

  • Choi, Seung-Hwan;Kim, Kyung-Seok
    • International Journal of Internet, Broadcasting and Communication / Vol.14 No.4 / pp.104-120 / 2022
  • In severe disaster situations, emergency alerts provide information people need to survive. However, the only device that can receive the Cell Broadcasting Service (CBS) of the Korean Public Alert System (KPAS), the national Public Warning System (PWS), is the smartphone, so reception of disaster information is very limited. Therefore, this paper proposes a solution to this problem. First, we identify problems through an analysis of KPAS; second, we propose smartwatches and car navigation systems as additional devices for receiving emergency alerts. These devices can receive disaster information from a smartphone over Bluetooth and present content such as disaster-related images and videos. An application was developed based on the proposed design. In tests, the smartphone provided more disaster information, such as images and videos, than current emergency alerts, and the notification methods were diversified. The smartwatch and navigation devices successfully received disaster information through the smartphone and likewise presented various disaster information such as images and videos. The developed application extends the functionality of the existing emergency alert system and can benefit many people in emergency situations.

Comparison of Postural Control Ability according to the Various Video Contents during Action Observations

  • Goo, Bon Wook;Lee, Mi Young
    • The Journal of Korean Physical Therapy / Vol.33 No.1 / pp.16-20 / 2021
  • Purpose: This study examined the effects of the type of video content used for action observation on postural control ability. Methods: The participants were 48 healthy adults. Participants crossed their hands on their shoulders and stood with one foot placed in a straight line in front of the other on the target, facing the monitor so they could watch the video. Three video contents (natural scenery, stable balance posture, and unstable balance posture) of 30 seconds each were presented in random order, with a 15-second rest between videos. During action observation of each video content, postural control ability was measured using a TekScan MatScan® system. Results: There were statistically significant differences by video type in the area of movement and in the distance traveled by the center of pressure (COP), both anteroposteriorly and mediolaterally (p<0.05). The stable and unstable balance posture videos showed significant differences in COP distance, anteroposterior distance, and mediolateral distance (p<0.05). Conclusion: These results suggest that the choice of video content is important in action-observation training and that action-observation training can help improve postural control.
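The COP measures reported in the abstract above (total sway path, anteroposterior/mediolateral distance, movement area) can be computed from a pressure-plate trajectory roughly as follows. The trajectory here is synthetic, and the 95% confidence ellipse is one commonly assumed definition of sway area; the actual metrics come from the pressure-measurement system named in the abstract.

```python
import numpy as np

# Synthetic centre-of-pressure (COP) trajectory over a 30 s trial
# (x = mediolateral, y = anteroposterior), sampled at 10 Hz.
t = np.linspace(0, 30, 300)
cop = np.column_stack([0.5 * np.sin(1.3 * t), 0.8 * np.cos(0.7 * t)])

steps = np.diff(cop, axis=0)
total_dist = np.hypot(steps[:, 0], steps[:, 1]).sum()  # total sway path length
ml_dist = np.abs(steps[:, 0]).sum()                    # mediolateral distance
ap_dist = np.abs(steps[:, 1]).sum()                    # anteroposterior distance

# 95% confidence-ellipse area, a common sway-area measure:
# pi * chi2(0.95, df=2) * sqrt(det(cov)).
cov = np.cov(cop.T)
eigvals = np.linalg.eigvalsh(cov)
area_95 = np.pi * 5.991 * np.sqrt(eigvals.prod())

print(round(total_dist, 2), round(ml_dist, 2), round(ap_dist, 2), round(area_95, 3))
```

Per-axis distances always bound the total path length from below (each step's hypotenuse is at least as long as either leg), which is a useful sanity check on recorded data.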

MPEG 몰입형 비디오를 위한 Geometry Packing 구현 (Implementing Geometry Packing for MPEG Immersive Video)

  • 정종범;이순빈;류은석
    • 방송공학회논문지 / Vol.27 No.6 / pp.861-871 / 2022
  • The moving picture experts group (MPEG) developed the MPEG immersive video (MIV) standard for efficiently coding multiple immersive videos representing both natural and computer-graphics content. The MIV standard compresses multiple immersive videos into multiple output videos called atlases; however, when the encoded atlases are decoded on low-end devices, synchronization problems between decoders can occur. This paper proposes and implements a geometry packing method that adaptively adjusts the number of decoders to support both low-end and high-end devices. The proposed method was confirmed to work without issues in the latest MIV reference software.
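A minimal toy sketch of the packing idea: geometry is downscaled and placed in the same frame as the texture so that a single decoder instance can carry both, and the client simply slices the regions back out. Frame sizes and the layout are illustrative assumptions, not the actual MIV atlas layout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative texture atlas (3-channel) and half-resolution geometry (depth,
# single-channel) frames; real MIV atlases are produced by the encoder.
H, W = 64, 96
texture = rng.integers(0, 256, (H, W, 3), dtype=np.uint8)
geometry = rng.integers(0, 256, (H // 2, W // 2), dtype=np.uint8)

# Pack: texture on top, downscaled geometry in the bottom-left region,
# so one video decoder handles what previously needed two.
packed = np.zeros((H + H // 2, W, 3), dtype=np.uint8)
packed[:H] = texture
packed[H:, : W // 2] = geometry[..., None]  # replicate depth across channels

# Unpack on the client: slice the regions back out of the decoded frame.
tex_out = packed[:H]
geo_out = packed[H:, : W // 2, 0]
print(packed.shape, np.array_equal(geo_out, geometry))
```

Because packing and unpacking are pure array slicing, the round trip is lossless before video compression; any loss comes from the codec itself.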

틱톡(Tik Tok) 이용자의 연애유형이 연애 동영상의 이용 동기, 이용 만족도에 미치는 영향 (The Effect of Tik Tok Users' Love Types on Love Videos' Motivation and User Satisfaction)

  • 조맹;양천;이상훈
    • 한국멀티미디어학회논문지 / Vol.25 No.5 / pp.703-720 / 2022
  • Based on the love styles theory used in psychology, this paper classifies users into six types (passionate, game-playing, friendship, practical, possessive, and altruistic love) and, following uses and gratifications theory, investigates satisfaction with five motivations for using TikTok love videos (entertainment, social relationship, love-skills learning, self-verification, and problem-solving). First, 414 users were surveyed on TikTok to collect data. Analysis of the results showed that, among the six love types, the game-playing and possessive types have a positive (+) effect on entertainment motivation and love-skills-learning motivation, and the game-playing type also has a positive (+) effect on social-relationship motivation and self-verification motivation. In addition, the altruistic and possessive types strengthen self-verification motivation, and the altruistic, possessive, and practical types improve problem-solving motivation. Finally, hierarchical multiple regression analysis confirmed that the game-playing love type and the entertainment, love-skills-learning, and self-verification motivations improve user satisfaction. These results enrich research on user classification and offer insights for improving the quality and communication efficiency of TikTok videos and enhancing user experience.

Development and Distribution of Deep Fake e-Learning Contents Videos Using Open-Source Tools

  • HO, Won;WOO, Ho-Sung;LEE, Dae-Hyun;KIM, Yong
    • 유통과학연구 / Vol.20 No.11 / pp.121-129 / 2022
  • Purpose: Artificial intelligence is widely used, particularly deep learning, the popular family of neural-network methods. Improvements in computing speed and capacity have accelerated deep learning applications. In education, deep learning opens various possibilities for creating and managing educational content and services that can replace human cognitive activity. Among deep learning techniques, deep fake technology can combine and synchronize human faces with voices. This paper shows how to develop e-learning content videos using these technologies and open-source tools. Research design, data, and methodology: This paper proposes a four-step development process, presented step by step in the Google Colab environment with source code. The technology can produce various video styles; its advantage is that the characters in a video can be extended to historical figures, celebrities, or even movie heroes, producing immersive videos. Results: Prototypes for each case were designed, developed, presented, and shared on YouTube. Conclusions: The method and process of creating e-learning video content from image, video, and audio files using open-source deep fake technology was successfully implemented.

뇌성마비 환자의 자세 불균형 탐지를 위한 스마트폰 동영상 기반 보행 분석 시스템 (Smartphone-based Gait Analysis System for the Detection of Postural Imbalance in Patients with Cerebral Palsy)

  • 황윤호;이상현;민유선;이종택
    • 대한임베디드공학회논문지 / Vol.18 No.2 / pp.41-50 / 2023
  • Gait analysis is an important tool in the clinical management of cerebral palsy, allowing assessment of condition severity, identification of potential gait abnormalities, planning and evaluation of interventions, and provision of a baseline for future comparisons. However, traditional methods of gait analysis are costly and time-consuming, creating a need for a more convenient and continuous method. This paper proposes a method for analyzing the posture of cerebral palsy patients using only smartphone videos and deep learning models, including ResNet-based image tilt correction, AlphaPose for human pose estimation, and SmoothNet for temporal smoothing. The indicators employed in medical practice, such as the imbalance angles of the shoulder and pelvis and the joint angles of the spine-thighs, knees, and ankles, were precisely examined. The proposed system surpassed pose estimation alone, reducing the mean absolute error for imbalance angles in frontal videos from 4.196° to 2.971° and for joint angles in sagittal videos from 5.889° to 5.442°.
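The shoulder and pelvis imbalance angles described above can be computed from 2D keypoints in a straightforward way: the angle of the line joining the left and right landmarks relative to the horizontal. The coordinates below are hypothetical stand-ins for pose-estimator output, and the sign convention depends on the image's y-axis direction.

```python
import numpy as np

def imbalance_angle(left_pt, right_pt):
    """Angle (degrees) of the left-to-right landmark line vs. the horizontal.

    0 deg means the two landmarks are level. In the paper such angles are
    derived from AlphaPose keypoints after ResNet-based tilt correction;
    here the inputs are plain (x, y) tuples.
    """
    dx, dy = np.subtract(right_pt, left_pt)
    return float(np.degrees(np.arctan2(dy, dx)))

# Hypothetical (x, y) pixel coordinates for shoulder and hip landmarks.
left_shoulder, right_shoulder = (100.0, 200.0), (260.0, 212.0)
left_hip, right_hip = (120.0, 420.0), (240.0, 418.0)

print(round(imbalance_angle(left_shoulder, right_shoulder), 2))  # shoulder tilt
print(round(imbalance_angle(left_hip, right_hip), 2))            # pelvic tilt
```

With noisy per-frame keypoints, these angles jitter; applying a temporal smoother (SmoothNet in the paper) before the angle computation is what reduces the reported error.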