• Title/Summary/Keyword: Video distribution


The Characteristics and Future Trends of Short-Form Animation (숏폼 애니메이션의 특성과 발전방향에 관한 연구)

  • Lee, Sun-Ju;Han, Je-Sung
    • Cartoon and Animation Studies
    • /
    • s.38
    • /
    • pp.29-51
    • /
    • 2015
  • With the progress of high-speed internet networks, mobile devices, and social networking, the media ecosystem has shifted away from a one-way flow of content from producer to consumer. A so-called 'prosumer' culture has taken root, in which consumers themselves produce media content. Along with these trends, video-sharing platforms such as YouTube allocate advertising profit to content producers, offering a win-win platform for content prosumers. This has allowed channels to attract tens of millions of subscribers and earn annual incomes of over 10 billion won, marking a revolutionary change in the content industry. This paper analyzes video distribution channels and short-form media content that are showing continuous growth, in order to identify new markets where animated content can advance in an era of online video media platforms, and to suggest a direction for small teams of animation creators to survive and thrive in this environment.

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.7
    • /
    • pp.159-170
    • /
    • 2009
  • Extracting expression data from a face image captured in a video is very important for online 3D face animation. Recently, there has been much research on vision-based approaches that capture an actor's expression in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and traces a face and its expression data from real-time video input. Our system consists of three steps: face detection, facial feature extraction, and face tracing. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression. We extract 10 feature points from the eye and lip areas, considering the FAPs defined in MPEG-4. We then trace the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can trace expression data at about 8 fps.
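
The skin-pixel step described above can be sketched with a simple YCbCr threshold. This is a minimal sketch: the conversion uses the standard JPEG coefficients, the Cb/Cr skin ranges are common illustrative values rather than the paper's, and the Haar verification and feature-tracing stages are omitted.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image (H x W x 3) to YCbCr using JPEG coefficients."""
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of pixels whose Cb/Cr fall in an illustrative skin range."""
    ycbcr = rgb_to_ycbcr(img)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Candidate skin regions produced this way would then be passed to a face verifier before feature extraction.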

Detection of an Impact Flash Candidate on the Moon with an Educational Telescope System

  • Kim, Eunsol;Kim, Yong Ha;Hong, Ik-Seon;Yu, Jaehyung;Lee, Eungseok;Kim, Kyoungja
    • Journal of Astronomy and Space Sciences
    • /
    • v.32 no.2
    • /
    • pp.121-125
    • /
    • 2015
  • At the suggestion of the NASA Meteoroid Environment Office (NASA/MEO), which promotes lunar impact monitoring worldwide during NASA's Lunar Atmosphere and Dust Environment Explorer (LADEE) mission period (launched Sept. 2013), we set up a video observation system for lunar impact flashes using a 16-inch educational telescope at Chungnam National University. From Oct. 2013 through Apr. 2014, we recorded 80 hours of video observation of the unilluminated part of the crescent moon in the evening hours. We found a plausible candidate impact flash on Feb. 3, 2014 at selenographic longitude 2.1° and latitude 25.4°. The flash lasted for 0.2 s and the light curve was asymmetric with a slow decrease after a peak brightness of 8.7 ± 0.3 mag. Based on a star-like distribution of pixel brightness and the asymmetric light curve, we conclude that the observed flash was due to a meteoroid impact on the lunar surface. Since unequivocal detection of an impact flash requires simultaneous observation from at least two sites, we strongly recommend that other institutes and universities in Korea set up similar inexpensive monitoring systems involving educational or amateur telescopes, and that they collaborate in the near future.
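
The asymmetric light-curve criterion (fast rise, slow decay after the peak) can be illustrated with a toy measure. The half-maximum counting below is a hypothetical simplification for illustration only, not the authors' photometric analysis.

```python
import numpy as np

def flash_asymmetry(curve):
    """Ratio of decay samples to rise samples above half-maximum.
    An impact flash is expected to rise quickly and decay slowly (ratio > 1)."""
    curve = np.asarray(curve, dtype=float)
    peak = int(np.argmax(curve))
    half = curve[peak] / 2.0
    # count samples above half-maximum before and after the peak
    rise = np.count_nonzero(curve[:peak + 1] >= half)
    decay = np.count_nonzero(curve[peak:] >= half)
    return decay / max(rise, 1)
```

A symmetric pulse gives a ratio near 1, while a fast-rise/slow-decay flash gives a ratio well above 1.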

Adaptive Video Watermarking based on 3D-DCT Using Image Characteristics (영상 특성을 이용한 3D-DCT 기반의 적응적인 비디오 워터마킹)

  • Park, Hyun;Lee, Sung-Hyun;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.3 s.309
    • /
    • pp.68-75
    • /
    • 2006
  • In this paper, we propose an adaptive video watermarking method using the human visual system (HVS) and the characteristics of three-dimensional discrete cosine transform (3D-DCT) cubes. We classify 3D-DCT cubes into three patterns according to the distribution of coefficients in the cube: cubes with motion and textures, cubes with high textures and little motion, and cubes with little textures and little motion. Images are likewise classified into three types according to the ratio of these patterns: images with motion and textures, images with high textures and little motion, and images with little textures and little motion. The proposed method adaptively inserts the watermark into the mid-range coefficients of the 3D-DCT cube, using an appropriately learned sensitivity table and proportional constants that depend on the pattern of the 3D-DCT cube and the type of image. Experimental results show that the proposed method achieves better invisibility and robustness than the previous method.
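
The core embedding idea can be sketched with a separable orthonormal 3D-DCT built from numpy. This is a minimal sketch: the sensitivity table, learned proportional constants, and cube classification are omitted, and the coefficient position and strength below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def dct3(cube):
    """Separable 3D DCT-II of a cube (e.g. an 8 x 8 x 8 block of frames)."""
    out = cube.astype(float)
    for axis in range(3):
        c = dct_matrix(out.shape[axis])
        out = np.tensordot(c, out, axes=([1], [axis]))
        out = np.moveaxis(out, 0, axis)
    return out

def idct3(coeffs):
    """Inverse 3D DCT (transpose of the orthonormal forward transform)."""
    out = coeffs.astype(float)
    for axis in range(3):
        c = dct_matrix(out.shape[axis]).T
        out = np.tensordot(c, out, axes=([1], [axis]))
        out = np.moveaxis(out, 0, axis)
    return out

def embed_bit(cube, bit, pos=(2, 2, 2), strength=4.0):
    """Embed one watermark bit by nudging a mid-frequency coefficient."""
    coeffs = dct3(cube)
    coeffs[pos] += strength if bit else -strength
    return idct3(coeffs)
```

In the paper the strength would be modulated per cube pattern and image type; here it is a fixed constant for clarity.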

Detection and Blocking of a Face Area Using a Tracking Facility in Color Images (컬러 영상에서 추적 기능을 활용한 얼굴 영역 검출 및 차단)

  • Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.10
    • /
    • pp.454-460
    • /
    • 2020
  • In recent years, rapid increases in video distribution and viewing over the Internet have increased the risk of personal information exposure. In this paper, a method is proposed to robustly identify areas in images where a person's privacy is compromised and to block the object area by blurring it while rapidly tracking it with a prediction algorithm. With this method, the target object area is accurately identified using artificial neural network-based learning. The detected object area is then tracked using a location prediction algorithm and continuously blocked by blurring. Experimental results show that the proposed method effectively blocks private areas in images by blurring them, while tracking the target objects about 2.5% more accurately than an existing method. The proposed blocking method is expected to be useful in many applications, such as protection of personal information, video security, and object tracking.
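
The track-and-blur loop can be sketched as linear motion extrapolation plus a mean filter over the predicted box. Both pieces are illustrative stand-ins: the paper's neural-network detector and its specific prediction algorithm are not reproduced here, and frames are assumed to be grayscale float arrays.

```python
import numpy as np

def predict_next(prev_box, curr_box):
    """Predict the next (x, y, w, h) box by linear extrapolation of motion."""
    px, py, w, h = prev_box
    cx, cy, _, _ = curr_box
    return (2 * cx - px, 2 * cy - py, w, h)

def blur_region(frame, box, k=5):
    """Return a copy of the frame with the (x, y, w, h) region mean-blurred."""
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w].astype(float)
    pad = k // 2
    padded = np.pad(roi, pad, mode='edge')
    out = np.zeros_like(roi)
    for dy in range(k):          # sum the k*k shifted copies of the region
        for dx in range(k):
            out += padded[dy:dy + roi.shape[0], dx:dx + roi.shape[1]]
    frame = frame.copy()
    frame[y:y + h, x:x + w] = out / (k * k)
    return frame
```

Per frame, the predicted box is blurred immediately, and the prediction is corrected whenever the detector re-fires.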

Adaptive Selection of Weighted Quantization Matrix for H.264 Intra Video Coding (H.264 인트라 부호화를 위한 적응적 가중치 양자화 행렬 선택방법)

  • Cho, Jae-Hyun;Cho, Suk-Hee;Jeong, Se-Yoon;Song, Byung-Cheol
    • Journal of Broadcast Engineering
    • /
    • v.15 no.5
    • /
    • pp.672-680
    • /
    • 2010
  • This paper presents an adaptive quantization matrix selection scheme for H.264 video encoding. The conventional H.264 coding standard applies the same quantization matrix to the entire video sequence without considering local characteristics in each frame. In this paper, we propose block-adaptive selection of the quantization matrix according to the edge directivity of each block. First, the edge directivity of each block is determined using the intra prediction modes of its spatially adjacent blocks. If the block is determined to be directional, a new weighted quantization matrix is applied to it; otherwise, the conventional quantization matrix is used for the non-directional block. Since the proposed weighted quantization is designed from the statistical distribution of transform coefficients for each intra prediction mode, we can achieve high coding efficiency. Experimental results show that the proposed scheme improves coding efficiency by about 2% in terms of BD bit-rate.
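
The selection logic can be sketched as follows. Everything here is an illustrative assumption: the matrices are invented (not H.264's scaling lists or the paper's learned matrices), and the neighbor-mode rule is a simplified stand-in for the paper's directivity decision.

```python
import numpy as np

# A flat base matrix and a hypothetical weighted matrix that quantizes
# higher-frequency coefficients more coarsely.
BASE_Q = np.full((4, 4), 16)
WEIGHTED_Q = np.array([[16, 16, 18, 20],
                       [16, 18, 20, 24],
                       [18, 20, 24, 28],
                       [20, 24, 28, 32]])

def is_directional(neighbor_modes, threshold=2):
    """Call a block directional if enough neighbors share the same
    directional intra mode (0 = vertical, 1 = horizontal here)."""
    modes = [m for m in neighbor_modes if m in (0, 1)]
    if not modes:
        return False
    return max(modes.count(0), modes.count(1)) >= threshold

def quantize(block, neighbor_modes):
    """Quantize a 4x4 coefficient block with the adaptively chosen matrix."""
    q = WEIGHTED_Q if is_directional(neighbor_modes) else BASE_Q
    return np.round(block / q).astype(int)
```

Because the choice depends only on neighbors' intra modes, the decoder can repeat it without side information.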

Encryption Scheme for MPEG-4 Media Transmission Exploiting Frame Dropping

  • Shin, Dong-Kyoo;Shin, Dong-Il;Shin, Jae-Wan;Kim, Soo-Han;Kim, Seung-Dong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.5
    • /
    • pp.925-938
    • /
    • 2010
  • Depending on network conditions, a communication network could be overloaded when media are transmitted. Research has been carried out to lessen network overloading, such as by filtering, load distribution, frame dropping, and other methods. Among these methods, one of the most effective is frame dropping, which reduces specified video frames for bandwidth diminution. In frame dropping, B-frames are dropped and then I- and P-frames are dropped, based on the dependency among the frames. This paper proposes a scheme for protecting copyrights by encryption, when frame dropping is applied to reduce the bandwidth of media based on the MPEG-4 file format. We designed two kinds of frame dropping: the first stores and then sends the dropped files and the other drops frames in real time when transmitting. We designed three kinds of encryption methods using the DES algorithm to encrypt MPEG-4 data: macro block encryption in I-VOP, macro block and motion vector encryption in P-VOP, and macro block and motion vector encryption in I-, P-VOP. Based on these three methods, we implemented a digital rights management solution for MPEG-4 data streaming. We compared the results of dropping, encryption, decryption, and the quality of the video sequences to select an optimal method, and found that there was no noticeable difference between the video sequences recovered after frame dropping and the ones recovered without frame dropping. The best performance in the encryption and decryption of frames was obtained when we applied the macro block and motion vector encryption in I-, P-VOP.
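
The dependency-aware dropping order described above (B-frames first, then P, then I) can be sketched as a priority sort. This is a minimal sketch of the dropping step only; the DES-based macroblock and motion-vector encryption is omitted, and frames are represented as strings whose first character is the frame type.

```python
# B-frames depend on other frames but nothing depends on them, so they are
# the cheapest to drop; P-frames come next and I-frames are kept longest.
DROP_ORDER = {'B': 0, 'P': 1, 'I': 2}

def drop_frames(frames, keep):
    """Keep `keep` frames, discarding the most droppable types first
    while preserving the display order of the survivors."""
    if keep >= len(frames):
        return list(frames)
    # indices sorted most-droppable first (B before P before I)
    order = sorted(range(len(frames)),
                   key=lambda i: (DROP_ORDER[frames[i][0]], i))
    kept = sorted(order[len(frames) - keep:])
    return [frames[i] for i in kept]
```

For a GOP `I B B P B P` reduced to three frames, the B-frames go first and the I- and P-frames survive.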

Detection of Abnormal Behavior by Scene Analysis in Surveillance Video (감시 영상에서의 장면 분석을 통한 이상행위 검출)

  • Bae, Gun-Tae;Uh, Young-Jung;Kwak, Soo-Yeong;Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.12C
    • /
    • pp.744-752
    • /
    • 2011
  • Various methods for detecting abnormal behavior in intelligent surveillance systems have been proposed recently. However, most of this research assumes that individual objects can be tracked, and is therefore not robust enough for real scenes, which often contain occlusions. This paper presents a novel method to detect abnormal behavior by analyzing the major motion of the scene in complex environments where object tracking cannot work. First, we generate Visual Words and Visual Documents from motion information extracted from the input video and process them with the LDA (Latent Dirichlet Allocation) algorithm, a document analysis technique, to obtain the major motion information of the scene (location, magnitude, direction, distribution). Using the acquired information, we compare the similarity between motion appearing in the input video and the analyzed major motion, and detect motions that do not match the major motions as abnormal behavior.
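
The visual-word construction can be sketched by quantizing motion vectors into direction bins. This is a deliberately simplified stand-in: it replaces the LDA topic model with a smoothed word histogram, and the bin count, threshold, and smoothing are illustrative assumptions.

```python
import numpy as np

def motion_words(flows, n_dir=4, mag_thresh=1.0):
    """Quantize motion vectors (dx, dy) into direction words 0..n_dir-1;
    motion weaker than mag_thresh is ignored."""
    words = []
    for dx, dy in flows:
        if np.hypot(dx, dy) < mag_thresh:
            continue
        angle = np.arctan2(dy, dx) % (2 * np.pi)
        words.append(int(angle / (2 * np.pi / n_dir)) % n_dir)
    return words

def word_distribution(words, n_dir=4, alpha=1.0):
    """Smoothed histogram of word frequencies (the 'major motion' model)."""
    counts = np.full(n_dir, alpha)
    for w in words:
        counts[w] += 1
    return counts / counts.sum()

def is_abnormal(words, major, threshold=0.05):
    """Flag a clip whose average word probability under the major-motion
    model falls below a threshold."""
    if not words:
        return False
    probs = [major[w] for w in words]
    return float(np.mean(probs)) < threshold
```

After training on normal traffic flowing one way, motion in the opposite direction scores a low probability and is flagged.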

A Stroke-Based Text Extraction Algorithm for Digital Videos (디지털 비디오를 위한 획기반 자막 추출 알고리즘)

  • Jeong, Jong-Myeon;Cha, Ji-Hun;Kim, Kyu-Heon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.3
    • /
    • pp.297-303
    • /
    • 2007
  • In this paper, a stroke-based text extraction algorithm for digital video is proposed. The proposed algorithm consists of four stages: text detection, text localization, text segmentation, and geometric verification. The text detection stage ascertains whether a given frame in a video sequence contains text. This is accomplished by morphological operations on the pixels with a higher probability of belonging to stroke-based text, which are called seed points. In the text localization stage, morphological operations on the edges including the seed points are applied, followed by horizontal and vertical projections. The text segmentation stage classifies the projected areas into text and background regions according to their intensity distribution. Finally, in the geometric verification stage, the segmented areas are verified using prior knowledge of the characteristics of video text.
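
The projection step in the localization stage can be sketched as follows: sum a binary edge map along rows and keep bands whose density is high enough. The density threshold is an illustrative assumption; the morphological seed-point filtering that precedes this step is omitted.

```python
import numpy as np

def text_rows(binary, min_ratio=0.3):
    """Return (start, end) row bands of a binary edge map whose edge-pixel
    density exceeds min_ratio of the width (horizontal projection)."""
    proj = binary.sum(axis=1) / binary.shape[1]
    active = proj >= min_ratio
    bands, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                      # band opens
        elif not a and start is not None:
            bands.append((start, i))       # band closes
            start = None
    if start is not None:
        bands.append((start, len(active)))
    return bands
```

The same routine applied to the transposed map gives the vertical projection, and intersecting the two yields candidate text boxes.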

Non-rigid 3D Shape Recovery from Stereo 2D Video Sequence (스테레오 2D 비디오 영상을 이용한 비정형 3D 형상 복원)

  • Koh, Sung-shik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.2
    • /
    • pp.281-288
    • /
    • 2016
  • Natural moving objects are mostly non-rigid shapes with randomly time-varying deformation, and their types are also very diverse. Methods for non-rigid shape reconstruction have been widely applied in the movie and game industries in recent years. However, a realistic approach requires many beacon markers to be attached to the moving object. To resolve this drawback, non-rigid shape reconstruction from input video without beacon sets has been investigated in multimedia application fields. In this regard, our paper proposes a novel CPSRF (Chained Partial Stereo Rigid Factorization) algorithm that can reconstruct a non-rigid 3D shape. Our method focuses on the real-time, per-frame reconstruction of non-rigid 3D shape and motion from stereo 2D video sequences, and we do not constrain the deformation of the time-varying non-rigid shape to a Gaussian distribution. The experimental results show that the 3D reconstruction performance of the proposed CPSRF method is superior to that of the previous method, which does not consider random deformation of shape.
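
The rigid factorization that such methods chain together can be sketched with a rank-3 SVD in the style of Tomasi-Kanade. This shows only the core rigid step under an orthographic camera assumption; the paper's chaining of partial stereo factorizations and its handling of deformation are not reproduced here.

```python
import numpy as np

def rigid_factorization(W):
    """Rank-3 factorization of a 2F x P measurement matrix of tracked
    feature coordinates into motion (2F x 3), shape (3 x P), and the
    per-row translation removed before factorization."""
    t = W.mean(axis=1, keepdims=True)      # center each coordinate row
    Wc = W - t
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    M = U[:, :3] * s[:3]                   # motion (camera rows)
    S = Vt[:3, :]                          # shape (3D points)
    return M, S, t
```

For noise-free orthographic measurements the centered matrix has rank at most 3, so `M @ S + t` reproduces `W` exactly; a full pipeline would further resolve the affine ambiguity with metric constraints.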