• Title/Summary/Keyword: Video Contents Generation


Artificial Intelligence-Based Video Content Generation (인공지능 기반 영상 콘텐츠 생성 기술 동향)

  • Son, J.W.;Han, M.H.;Kim, S.J.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.3
    • /
    • pp.34-42
    • /
    • 2019
  • This study introduces artificial intelligence (AI) techniques for video generation. For an effective illustration, techniques for video generation are classified as either semi-automatic or automatic. First, we discuss some recent achievements in semi-automatic video generation, and explain which types of AI techniques can be applied to produce films and improve film quality. Additionally, we provide an example of video content that has been generated by using AI techniques. Then, two automatic video-generation techniques are introduced with technical details. As there is currently no feasible automatic video-generation technique that can generate commercial videos, in this study, we explain their technical details, and suggest the future direction for researchers. Finally, we discuss several considerations for more practical automatic video-generation techniques.

3D Video Processing for 3DTV

  • Sohn, Kwang-Hoon
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2007.08b
    • /
    • pp.1231-1234
    • /
    • 2007
  • This paper presents the overview of 3D video processing technologies for 3DTV such as 3D content generation, 3D video codec and video processing techniques for 3D displays. Some experimental results for 3D contents generation are shown in 3D mixed reality and 2D/3D conversion.

A Feasibility Study on RUNWAY GEN-2 for Generating Realistic Style Images

  • Yifan Cui;Xinyi Shan;Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.99-105
    • /
    • 2024
  • Runway released an updated version, Gen-2, in March 2023, which introduced features absent from Gen-1: it can convert text, images, or text-and-image combinations into video based on text instructions. The update was officially opened to the public in June 2023, allowing a wider audience to apply it creatively. With these features, users can easily transform text and images into impressive video creations. However, as with all new technologies, the instability of AI also affects the results that Runway generates. This article verifies, through hands-on practice, the feasibility of using Runway to generate a desired video, identifies problems in Runway's generation process, and proposes methods to improve the accuracy of its output. It finds that although the instability of AI requires attention, careful adjustment and testing still let users make full use of this feature and create striking video work. This update marks the beginning of a more innovative and diverse future for the digital creative field.

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo;Kim, Hye-Sun;Ban, Yun-Ji;Chien, Sung-Il
    • International Journal of Contents
    • /
    • v.10 no.3
    • /
    • pp.9-16
    • /
    • 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering issues between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering issue, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model. The initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversions effectively and reduces human eye fatigue.
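The flicker-suppression pipeline summarized above can be sketched in code. The sketch below is an illustrative reconstruction, not the paper's implementation: a normalized-histogram difference stands in for the cut detector, a bottom-is-near vertical gradient stands in for the hypothesized depth-gradient model, and the difference image gates which pixels are re-estimated. All function names and thresholds are assumed values.

```python
import numpy as np

def detect_cut(prev_frame, frame, bins=32, threshold=0.5):
    # Color-histogram difference between consecutive frames signals a shot cut.
    h1 = np.histogram(prev_frame, bins=bins, range=(0, 255))[0].astype(float)
    h2 = np.histogram(frame, bins=bins, range=(0, 255))[0].astype(float)
    h1 /= h1.sum()
    h2 /= h2.sum()
    return 0.5 * np.abs(h1 - h2).sum() > threshold

def gradient_depth(height, width):
    # Hypothesized depth-gradient model: lower image rows are assumed nearer.
    column = np.linspace(255.0, 0.0, height, dtype=np.float32)
    return np.tile(column[:, None], (1, width))

def update_depth(prev_depth, prev_frame, frame, motion_threshold=15):
    # Re-estimate depth only where the difference image shows motion,
    # keeping static regions unchanged to suppress depth flicker.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = diff > motion_threshold
    depth = prev_depth.copy()
    depth[moving] = gradient_depth(*frame.shape)[moving]
    return depth
```

A driver loop over a grayscale sequence would call `detect_cut` per frame, reset the depth map to `gradient_depth` at each cut, and otherwise call `update_depth`; the paper's additional color/motion refinement step is omitted here.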

Seeking for Underlying Meaning of the 'house' and Characteristics in Music Video - Analyzing Seotaiji and Boys and BTS Music Video in Perspective of Generation - (뮤직비디오에 나타난 '집'의 의미와 성격 - 서태지와 아이들, 방탄소년단 작품에 대한 세대론적 접근 -)

  • Kil, Hye Bin;Ahn, Soong Beum
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.5
    • /
    • pp.24-34
    • /
    • 2019
  • This study closely compares a song performed by two groups, 'Seo Taiji and Boys' (X Generation) and 'BTS' (C Generation), based on the discourse about the 'X Generation' of the 1990s and the 'C Generation' of the 2010s. It focuses on the nature of the 'home' that carries great significance in each music video and seeks its sociocultural meaning. The analysis shows that the original performance by Seo Taiji and Boys demonstrated a vertical structure of enlightenment and discipline and narrated its story with a plot of 'maturity'; the meaning of 'home' in the original version shifts from a target of resistance to a subject of internalization. BTS's remake music video demonstrated a horizontal structure of empathy and solidarity and narrated its story with a plot of 'pursuit/discovery'; 'home' here can be read as the life of a person who maintains his or her self-identity.

User-created multi-view video generation with portable camera in mobile environment (모바일 환경의 이동형 카메라를 이용한 사용자 저작 다시점 동영상의 제안)

  • Sung, Bo Kyung;Park, Jun Hyoung;Yeo, Ji Hye;Ko, Il Ju
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.1
    • /
    • pp.157-170
    • /
    • 2012
  • Recently, production and consumption of user-created video have grown rapidly. Among such videos, clips that record the same subject in a limited space from multiple viewpoints are emerging, driven mainly by the popularization of portable cameras and the mobile web environment. Multi-view has traditionally been studied in visual-representation research as a matter of viewpoint, but its definition has lately been expanded and applied to various kinds of content authoring. Turning user-created videos into multi-view content can suggest a new user experience for video consumption. In this paper, we show that user-created videos can be made into multi-view video content, despite differences in their attributes, by classifying and analyzing existing multi-view content to clarify the definition and attributes of multi-view. To solve the time-axis alignment problem that arises in multi-view processing, we propose an audio-matching method consisting of feature extraction and comparison: MFCC, the most widely used audio feature, is extracted, and the clips are compared n-by-n. We propose multi-view video content in which users can consume the aligned user-created videos by selecting a viewpoint.
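The comparison step of the audio-matching method described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the MFCC extraction step is assumed to have been done already (e.g. with an audio library), so each clip is represented as a sequence of feature vectors, and a simple mean Euclidean distance over the overlap stands in for the comparison cost. Function names are assumptions.

```python
import numpy as np

def best_offset(feat_a, feat_b, min_overlap=10):
    # Slide feat_b along feat_a and pick the frame offset with the smallest
    # mean Euclidean distance between aligned feature vectors.
    best, best_cost = 0, np.inf
    for offset in range(-(len(feat_b) - min_overlap), len(feat_a) - min_overlap):
        a0, b0 = max(0, offset), max(0, -offset)
        n = min(len(feat_a) - a0, len(feat_b) - b0)
        if n < min_overlap:
            continue
        cost = np.linalg.norm(feat_a[a0:a0 + n] - feat_b[b0:b0 + n], axis=1).mean()
        if cost < best_cost:
            best, best_cost = offset, cost
    return best

def align_clips(features):
    # n-by-n spirit, reduced here to aligning every clip against clip 0:
    # the returned offsets place all clips on a common time axis.
    return [best_offset(features[0], f) for f in features]
```

With MFCC sequences extracted at the same hop length, the returned offsets (in frames) can be converted to seconds to arrange the clips on a shared timeline.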

The One Time Biometric Key Generation and Authentication Model for Protection of Paid Video Contents (상용 비디오 콘텐츠 보호를 위한 일회용 바이오메트릭 키 생성 및 인증 모델)

  • Yun, Sunghyun
    • Journal of the Korea Convergence Society
    • /
    • v.5 no.4
    • /
    • pp.101-106
    • /
    • 2014
  • Most people prefer video content to other content types, since video is easier to understand with both the eyes and the ears. With the widespread use of smartphones, demand for content services is increasing rapidly. To promote the content business, it is important to secure subscriber authentication and the communication channels through which the content is delivered. Generally, a symmetric-key encryption scheme is used to protect the content in the channel, and the session key should be updated periodically for security reasons. In addition, to prevent illegal users from viewing paid content, proxy authentication should not be allowed. In this paper, we propose a biometric-based user authentication and one-time key generation model. The proposed model consists of biometric template registration, session key generation, and channel encryption steps. We analyze the differences and benefits of our model compared with existing CAS models designed for CATV content protection, and also describe applications of our model in the electronic commerce area.
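The one-time key idea in the abstract above can be illustrated with a toy sketch. This is not the paper's protocol: it simply shows how hashing an enrolled biometric template together with a per-session nonce (here via HMAC-SHA-256, an assumed choice) yields a different, single-use session key each time, so a captured key cannot be replayed or proxied.

```python
import hashlib
import hmac
import secrets

def register_template(biometric_sample: bytes) -> bytes:
    # Enrollment: the server keeps a hash of the template, never the raw sample.
    return hashlib.sha256(biometric_sample).digest()

def one_time_session_key(template_hash: bytes, nonce: bytes) -> bytes:
    # A fresh random nonce per session makes every derived key single-use.
    return hmac.new(template_hash, nonce, hashlib.sha256).digest()

# Enrollment, then two sessions: same user, different nonces, different keys.
template = register_template(b"example-fingerprint-features")
k1 = one_time_session_key(template, secrets.token_bytes(16))
k2 = one_time_session_key(template, secrets.token_bytes(16))
```

The derived key would then drive the symmetric channel encryption mentioned in the abstract; real biometric templates are noisy, so a production scheme also needs an error-tolerant template-matching step that this sketch omits.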

Construction of a Video Dataset for Face Tracking Benchmarking Using a Ground Truth Generation Tool

  • Do, Luu Ngoc;Yang, Hyung Jeong;Kim, Soo Hyung;Lee, Guee Sang;Na, In Seop;Kim, Sun Hee
    • International Journal of Contents
    • /
    • v.10 no.1
    • /
    • pp.1-11
    • /
    • 2014
  • In the current generation of smart mobile devices, object tracking is one of the most important research topics for computer vision. Because human face tracking can be widely used for many applications, collecting a dataset of face videos is necessary for evaluating the performance of a tracker and for comparing different approaches. Unfortunately, the well-known benchmark datasets of face videos are not sufficiently diverse. As a result, it is difficult to compare the accuracy between different tracking algorithms in various conditions, namely illumination, background complexity, and subject movement. In this paper, we propose a new dataset that includes 91 face video clips that were recorded in different conditions. We also provide a semi-automatic ground-truth generation tool that can easily be used to evaluate the performance of face tracking systems. This tool helps to maintain the consistency of the definitions for the ground-truth in each frame. The resulting video data set is used to evaluate well-known approaches and test their efficiency.
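Evaluating a tracker against per-frame ground truth, as the dataset above enables, typically reduces to comparing bounding boxes. The sketch below shows the standard intersection-over-union score for one frame; it is a generic illustration, not the tool's own evaluation code, and the `(x, y, w, h)` box convention is an assumption.

```python
def iou(box_a, box_b):
    # Boxes as (x, y, w, h); intersection-over-union is the usual
    # per-frame score for a tracker's box against the ground truth.
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0
```

Averaging this score over all frames of a clip, or thresholding it to count successful frames, gives the per-condition accuracy comparisons the dataset is designed for.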

Development and Evaluation of Video English Dictionary for Silver Generation (실버세대를 위한 동영상 영어사전의 개발 및 평가)

  • Kim, Jeiyoung;Park, Ji Su;Shon, Jin Gon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.11
    • /
    • pp.345-350
    • /
    • 2020
  • Based on an analysis of the physical and learning characteristics and the requirements of the silver generation, a video English dictionary was developed and evaluated as English learning content. The dictionary uses OCR as its input method and video as its output method, and 17 members of the silver generation evaluated it for academic achievement, learning satisfaction, and ease of use. Both the text English dictionary and the video English dictionary showed high learning satisfaction, but the video English dictionary scored higher than the text English dictionary in academic achievement and ease of use.