• Title/Summary/Keyword: media format (미디어 포맷)


Development of Valuation Framework for Estimating the Market Value of Media Contents (미디어 콘텐츠의 시장가치 산정을 위한 가치평가 프레임워크 개발)

  • Sung, Tae-Eung;Park, Hyun-Woo
    • Journal of Service Research and Studies
    • /
    • v.6 no.3
    • /
    • pp.29-40
    • /
    • 2016
  • Since the late 20th century, much effort has gone into improving the market value of media contents commercialized in digital format by fusing digital video, audio, numeric, and character data with IT technology. By what criteria and methodologies, then, could the market value of the drama "Sons of the Sun" or the animated film "Frozen", often referred to in the media, be estimated? Given that there has been little or no research to date on valuation frameworks for media contents or on the development of valuation systems for them, we propose practical valuation models for purposes such as contents trading and review of investment adequacy by formalizing and presenting a contents valuation framework for four media types: movies, online games, broadcasting commercials, and animations. We develop computational methods for cash flows that include production cost by media content type, provide reference databases for the key valuation variables (economic life cycle, discount rate, contents contribution, and royalty rate), and finally propose a valuation framework for media contents based on both the income approach and the relief-from-royalty method, which has long been applied to the valuation of intangible assets.
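The relief-from-royalty idea the abstract names can be sketched as follows: the content's value is the present value of the royalty payments its owner avoids by owning the asset. All numeric inputs below (revenues, rates, contribution factor, life cycle) are illustrative assumptions, not figures from the paper.

```python
# Relief-from-royalty sketch: value = sum of discounted royalty savings
# over the content's economic life. All inputs are illustrative assumptions.

def relief_from_royalty(revenues, royalty_rate, contribution, discount_rate):
    """Present value of royalty savings over the economic life cycle."""
    value = 0.0
    for t, revenue in enumerate(revenues, start=1):
        royalty_saving = revenue * royalty_rate * contribution
        value += royalty_saving / (1.0 + discount_rate) ** t
    return value

# Hypothetical 4-year economic life cycle with declining annual revenue.
revenues = [1000.0, 800.0, 500.0, 300.0]
value = relief_from_royalty(revenues, royalty_rate=0.05,
                            contribution=0.8, discount_rate=0.10)
print(round(value, 2))
```

The key variables the paper's reference databases cover (economic life cycle, discount rate, contribution, royalty rate) map directly onto the four parameters of this function.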

Core Experiments for Standardization of Internet of Media Things (미디어사물인터넷의 국제표준화를 위한 핵심 실험)

  • Jeong, Min Hyuk;Lee, Gyeong Sik;Kim, Sang Kyun
    • Journal of Broadcast Engineering
    • /
    • v.22 no.5
    • /
    • pp.579-588
    • /
    • 2017
  • Recently, as network environments have developed and the Internet of Things market has expanded, it has become necessary to standardize the data formats and APIs through which things exchange information. Accordingly, MPEG (Moving Picture Experts Group), an international standardization organization, established the MPEG-IoMT (ISO/IEC 23093) project to standardize the Internet of Media Things. MPEG-IoMT defines Core Experiments (CEs) and discusses data exchange as a whole, including data exchange procedures, markup languages, and communication methods. This paper discusses Core Experiments 1, 2, 4, and 5 of MPEG-IoMT: it explains the capability information of sensors, sensor data, the capability information of actuators, and the exchange procedure for control commands, and discusses the exchange of supplementary media data. We also compare markup languages and communication methods through experiments.

Comparison of the transformation methods for Flash Videos to Web Videos (플래시 비디오에서 웹비디오로의 변환기법 비교)

  • Lee, Hyun-Lee;Kim, Kyoung-Soo;Ceong, Hee-Taek
    • Journal of Digital Contents Society
    • /
    • v.11 no.4
    • /
    • pp.579-588
    • /
    • 2010
  • The generalization of the web, the growth of one-person media such as blogs and mini homepages, and the integration of digital video devices have made multimedia video services on the web commonplace. However, flash videos, the previously dominant bitmap-based multimedia videos, exhibit problems such as the waterfall phenomenon, lag, and loss of audio/video synchronization. This study therefore suggests conversion techniques for providing efficient web video service by solving the problems of bitmap-based flash video through file-format conversion software and movie editing programs. We conduct experiments on five videos across 13 codecs and comparatively analyze the converted results. Considering the characteristics of each video, the recommended method is to use the MainConcept H.264 video codec with SWF2Video pro. The results of this research can be used to produce web videos more effectively.

Change of the Age of TV Cooking Programs (TV 요리 프로그램의 시대적 변화에 대한 연구)

  • Chung, Tae-Sub
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.6
    • /
    • pp.379-386
    • /
    • 2019
  • This is a study of the changes over time in cooking programs broadcast on TV. The media has changed greatly in form and continues to change; we examine these shifts through programs on cuisine and food, which many people now prefer. A preceding study examined changes in media formats, social change, and programs up to the 2000s. Building on that analysis, we analyzed how cooking programs changed with the times, dividing the changes into culinary and historical ones and examining them by period. Through this study, we could observe how cooking programs changed in an era in which changes in diet are intertwined with changes in culture: the programs shifted from one-way delivery of information to formats viewers could experience, and from an era of experts to an era of culture. Building on the introduction of, and challenges to, local culture through food, subsequent research aims to examine the changes and social phenomena caused by the division of programs.

Processing Techniques of Layer Channel Image for 3D Image Effects (3D 영상 효과를 위한 레이어 채널 이미지의 처리 기법)

  • Choi, Hak-Hyun;Kim, Jung-Hee;Lee, Myung-Hak
    • The Journal of the Korea Contents Association
    • /
    • v.8 no.1
    • /
    • pp.272-281
    • /
    • 2008
  • A layer channel, which can express effects on a 3D image, is inserted into the image so that it can be used effectively in application rendering. Current effect rendering manages images and effects separately or in mixed form, so it requires individual sources for storage and image processing. By processing the image and its layer channel together, however, we can reduce costs and improve results. The image format is changed to embed a layer channel in the image, and a hide function conceals the layer channel with simple techniques such as alpha blending, so that images in the changed format can still be viewed in general image viewers; at the same time, the image and layer channels can be accessed simultaneously while the image is loading. Combining the layer channel and the image in this way improves reusability and allows the format to be used in any program. With this configuration, we can improve processing speed by loading the image and layer channels simultaneously, and reduce source storage space by embedding the layer channel in the 3D image. It also allows managing the 3D image and its layer channels together, enabling effective expression, and we expect it to be used effectively in multimedia images in practical applications.
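The idea of carrying effect data alongside the color image so both load in one pass can be sketched minimally as below. Using a fourth, alpha-like plane as the carrier is an assumption for illustration; the paper's exact format change is not specified in the abstract.

```python
# Sketch of the layer-channel idea: per-pixel layer (effect) data rides along
# as a fourth channel of each pixel, so the color image and its layer channel
# are loaded together in a single pass. The carrier choice is an assumption.

def embed_layer(rgb_pixels, layer):
    """Attach a per-pixel layer value as a fourth channel."""
    assert len(rgb_pixels) == len(layer), "one layer value per pixel"
    return [(r, g, b, l) for (r, g, b), l in zip(rgb_pixels, layer)]

def split_layer(rgba_pixels):
    """Recover the color image and the layer channel in one pass."""
    rgb = [(r, g, b) for r, g, b, _ in rgba_pixels]
    layer = [l for _, _, _, l in rgba_pixels]
    return rgb, layer

rgb = [(10, 20, 30), (40, 50, 60)]
combined = embed_layer(rgb, [255, 128])
print(split_layer(combined))
```

A viewer that ignores the fourth channel still displays the color image, which mirrors the abstract's claim that images in the changed format remain viewable in general image viewers.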

Preliminary Study on All-in-JPEG with Multi-Content Storage Format extending JPEG (JPEG를 확장한 멀티 콘텐츠 저장 포맷 All-in-JPEG에 관한 예비 연구)

  • Yu-Jin Kim;Kyung-Mi Kim;Song-Yeon Yoo;Chae-Won Park;Kitae Hwang;In-Hwan Jung;Jae-Moon Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.5
    • /
    • pp.183-189
    • /
    • 2023
  • This paper proposes All-in-JPEG, a new format that extends JPEG to include not only multiple photos but also other media such as audio and text. All-in-JPEG appends images, audio, and text to an existing JPEG file and stores meta information in JPEG's APP3 segment. With All-in-JPEG, smartphone users can save the many pictures of a burst shot in one file, which is also very convenient to share with others. Users can also create live photos, for example by saving a short audio clip at the moment a photo is taken or by animating part of the photo. The format further supports applications such as a photo-diary app that stores images, voice, and diary text in a single All-in-JPEG file. In this paper, we developed an app that creates and edits All-in-JPEG, a photo-diary app, and a magic-photo function, and verified the feasibility of All-in-JPEG through them.
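The APP3-based embedding the abstract describes can be sketched roughly as follows. JPEG application segments use marker `0xFFE3` for APP3 with a 2-byte big-endian length that counts itself; the payload layout here is a placeholder assumption, since the actual All-in-JPEG metadata schema is defined by the paper, not the abstract.

```python
# Sketch of embedding extra media in a JPEG via an APP3 segment (marker
# 0xFFE3), in the spirit of All-in-JPEG. The payload format is a placeholder.
import struct

SOI = b"\xff\xd8"    # JPEG start-of-image marker
APP3 = b"\xff\xe3"   # application segment 3 marker

def insert_app3(jpeg_bytes: bytes, payload: bytes) -> bytes:
    """Insert an APP3 segment immediately after the SOI marker."""
    if not jpeg_bytes.startswith(SOI):
        raise ValueError("not a JPEG stream")
    if len(payload) + 2 > 0xFFFF:
        raise ValueError("payload too large for a single segment")
    # The segment length field counts itself (2 bytes) plus the payload.
    segment = APP3 + struct.pack(">H", len(payload) + 2) + payload
    return SOI + segment + jpeg_bytes[2:]

def read_app3(jpeg_bytes: bytes) -> bytes:
    """Return the payload of an APP3 segment placed right after SOI."""
    if jpeg_bytes[2:4] != APP3:
        raise ValueError("no APP3 segment at expected position")
    (length,) = struct.unpack(">H", jpeg_bytes[4:6])
    return jpeg_bytes[6:6 + length - 2]

minimal = SOI + b"\xff\xd9"          # SOI + EOI: a degenerate JPEG stream
packed = insert_app3(minimal, b"audio-clip-bytes")
print(read_app3(packed))
```

Because decoders are required to skip unknown APPn segments, a standard viewer still renders the base photo, which is what makes the format backward compatible.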

Research on the Design of TPO(Time, Place, Occasion)-Shift System for Mobile Multimedia Devices (휴대용 멀티미디어 디바이스를 위한 TPO(Time, Place, Occasion)-Shift 시스템 설계에 대한 연구)

  • Kim, Dae-Jin;Choi, Hong-Sub
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.2
    • /
    • pp.9-16
    • /
    • 2009
  • As broadband networks and multimedia technology have developed, the commercial market for digital contents, including IPTV, has spread widely. Against this background, Time-Shift systems were developed to meet multimedia requirements. Such a system is independent of time, but not of place and occasion. To solve this problem, this paper proposes a TPO (Time, Place, Occasion)-Shift system for mobile multimedia devices. The profile that applies to mobile multimedia devices differs greatly from that of a set-top box, and general mobile devices do not have memory large enough for bulk multimedia data, so it is important to continuously store and manage multimedia data within the limited capacity of the mobile device's profile. We therefore compose buffers into baskets of a defined time unit and manage these baskets for effective buffer management. In addition, since each basket's file name encodes the basket's time information, that time information can serve as a DTS (Decoding Time Stamp). When multimedia content is converted for portable multimedia devices, this DTS information lets us compose newly formatted contents. Using the basket-based buffer system, we can compose contents in real time on mobile multimedia devices and save memory. To verify real-time operation and performance, we implemented the proposed TPO-Shift system on an MS340 mobile device, with the set-top box realized as a DirectShow player under Windows Vista. As a result, we confirmed the usefulness and real-time operation of the proposed system.
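The basket naming scheme, in which each basket's file name carries its start time so the name doubles as a DTS, might look like the sketch below. The name format and the 10-second basket duration are assumptions for illustration; the paper does not publish its exact convention in the abstract.

```python
# Sketch of the basket idea: multimedia data is buffered in fixed-duration
# "baskets" whose file names encode their start time, so the file name itself
# can be reused as a Decoding Time Stamp (DTS). Name format and the 10-second
# unit are illustrative assumptions.

BASKET_SECONDS = 10  # assumed fixed time unit per basket

def basket_name(stream_id: str, start_time: int) -> str:
    """Encode the basket's start time (in seconds) into its file name."""
    return f"{stream_id}_{start_time:08d}.bsk"

def dts_from_name(name: str) -> int:
    """Recover the decoding time stamp from a basket file name."""
    stem = name.rsplit(".", 1)[0]
    return int(stem.rsplit("_", 1)[1])

def baskets_for(stream_id: str, duration: int):
    """Yield the basket names covering a stream of the given duration."""
    for start in range(0, duration, BASKET_SECONDS):
        yield basket_name(stream_id, start)

names = list(baskets_for("drama01", 35))
print(names)
print([dts_from_name(n) for n in names])
```

Because the DTS is recoverable from the name alone, a device with little memory can discard old baskets and still re-sequence the remaining ones correctly when recomposing content.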

Content Insertion Technology using Mobile MMT with CMAF (CMAF 기반 Mobile MMT를 활용한 콘텐츠 삽입 기술)

  • Kim, Junsik;Park, Sunghwan;Kim, Doohwan;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.25 no.4
    • /
    • pp.560-568
    • /
    • 2020
  • In recent years, as network technology has developed, the use of streaming services has grown. However, the complexity of streaming services is also increasing due to diverse terminal environments: even the same content must be re-encoded according to the type of service. To address the complexity and latency of streaming services, the Moving Picture Experts Group (MPEG) standardized the Common Media Application Format (CMAF). In addition, as content transmission over communication networks became possible, the Republic of Korea's Ultra High Definition (UHD) broadcasting standard was enacted as a hybrid standard using both a broadcasting network and a communication network. The hybrid service enables services such as transmitting additional content information or providing user-customized contents through the communication network. The Republic of Korea's UHD transmission standard uses MPEG Media Transport (MMT), and Mobile MMT extends MMT with mobile-network-specific functions. This paper proposes a method of inserting CMAF contents suited to various streaming services using the signaling messages of MMT and Mobile MMT. It also proposes a content insertion system model for heterogeneous network environments using broadcasting and communication networks, and verifies the validity of the proposed technology by checking the results of content insertion.

Simulation of YUV-Aware Instructions for High-Performance, Low-Power Embedded Video Processors (고성능, 저전력 임베디드 비디오 프로세서를 위한 YUV 인식 명령어의 시뮬레이션)

  • Kim, Cheol-Hong;Kim, Jong-Myon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.252-259
    • /
    • 2007
  • With the rapid development of multimedia applications and wireless communication networks, consumer demand for video-over-wireless capability on mobile computing systems is growing rapidly. This paper introduces YUV-aware instructions that enhance performance and efficiency in processing color images and video. Traditional multimedia extensions (e.g., MMX, SSE, VIS, and AltiVec) depend solely on generic subword parallelism, whereas the proposed YUV-aware instructions support parallel operations on two packed 16-bit YUV values (6-bit Y, 5-bit U, 5-bit V) in a 32-bit datapath architecture, providing greater concurrency and efficiency for color image and video processing. Moreover, the reduced data format size lowers system cost. Experimental results on a representative dynamically scheduled embedded superscalar processor show that YUV-aware instructions achieve an average speedup of 3.9x over the baseline superscalar performance, whereas MMX (a representative Intel multimedia extension) achieves a speedup of only 2.1x over the same baseline. In addition, YUV-aware instructions outperform MMX instructions in energy reduction (75.8% with YUV-aware instructions versus only 54.8% with MMX over the baseline).
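The 16-bit packed format these instructions operate on can be illustrated in a few lines: each 16-bit value holds a 6-bit Y and 5-bit U and V fields, and two such values share one 32-bit word. The field order (Y in the high bits, then U, then V) is an assumption for illustration; the abstract does not specify the bit layout.

```python
# Sketch of the packed 16-bit YUV format: 6-bit Y, 5-bit U, 5-bit V per value,
# two values per 32-bit datapath word. Field order is an assumption.

def pack_yuv16(y: int, u: int, v: int) -> int:
    """Pack 6-bit Y, 5-bit U, 5-bit V into one 16-bit value."""
    assert 0 <= y < 64 and 0 <= u < 32 and 0 <= v < 32
    return (y << 10) | (u << 5) | v

def unpack_yuv16(p: int):
    """Split a packed 16-bit value back into (Y, U, V)."""
    return (p >> 10) & 0x3F, (p >> 5) & 0x1F, p & 0x1F

def pack_pair(p0: int, p1: int) -> int:
    """Place two packed 16-bit YUV values into one 32-bit word."""
    return (p1 << 16) | p0

word = pack_pair(pack_yuv16(33, 12, 7), pack_yuv16(50, 1, 31))
print(hex(word))
```

A YUV-aware instruction would apply one operation to both 16-bit halves of such a word at once, which is where the extra concurrency over byte-oriented subword parallelism comes from.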

A Real Time 6 DoF Spatial Audio Rendering System based on MPEG-I AEP (MPEG-I AEP 기반 실시간 6 자유도 공간음향 렌더링 시스템)

  • Kyeongok Kang;Jae-hyoun Yoo;Daeyoung Jang;Yong Ju Lee;Taejin Lee
    • Journal of Broadcast Engineering
    • /
    • v.28 no.2
    • /
    • pp.213-229
    • /
    • 2023
  • In this paper, we introduce a spatial sound rendering system that provides 6DoF spatial sound in real time in response to the movement of a listener in a virtual environment. The system was implemented using MPEG-I AEP as the development environment for the response to the MPEG-I Immersive Audio Call for Proposals (CfP), and it consists of an encoder and a renderer that includes a decoder. The encoder encodes, offline, metadata such as the spatial audio parameters of the virtual scene described in the EIF and the directivity information of sound sources provided in SOFA files, and delivers them in the bitstream. The renderer receives the transmitted bitstream and performs 6DoF spatial sound rendering in real time according to the listener's position. The main spatial sound processing technologies applied in the rendering system include sound source effects and obstacle effects; other system processing includes the Doppler effect, sound field effects, and so on. Results of an in-house subjective evaluation of the developed system are presented.