• Title/Abstract/Keyword: Video representation

Search results: 195

A Petri Net Representation of an IPTV System (IPTV 시스템의 페트리넷 표현)

  • Yim, Jae-Geol; Lee, Gye-Young; Woo, Jin-Seok
    • Proceedings of the Korean Society of Computer Information Conference / 2012.01a / pp.91-94 / 2012
  • IPTV, which is drawing attention as a representative convergence system, delivers multimedia content to users over the Internet rather than over radio-wave broadcasting. IPTV applications include a wide range of interactive services such as VOD (Video On Demand), education, culture, and entertainment. In this study, we represent such an IPTV system as a Petri net and, by analyzing the net with CPNTools, verify that the IPTV system operates without blocking. (A toy Petri-net sketch follows below.)

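As a rough illustration of the Petri-net modeling idea in the entry above (not the paper's actual CPNTools model), here is a minimal place/transition net in Python; the places and transitions for an IPTV-like VOD flow are hypothetical:

```python
# Minimal place/transition Petri net sketch (hypothetical IPTV-like flow,
# not the net from the paper). A transition is enabled when every input
# place holds at least one token; firing moves tokens to the output places.

marking = {"idle": 1, "request": 0, "streaming": 0}

transitions = {
    "select_vod": {"in": ["idle"], "out": ["request"]},
    "start_stream": {"in": ["request"], "out": ["streaming"]},
    "stop_stream": {"in": ["streaming"], "out": ["idle"]},
}

def enabled(name):
    return all(marking[p] > 0 for p in transitions[name]["in"])

def fire(name):
    for p in transitions[name]["in"]:
        marking[p] -= 1
    for p in transitions[name]["out"]:
        marking[p] += 1

# Walk one service cycle and confirm no blocking: at every step the
# next transition must still be enabled.
for step in ["select_vod", "start_stream", "stop_stream"]:
    assert enabled(step), f"blocked before {step}"
    fire(step)

print(marking)  # back to {'idle': 1, 'request': 0, 'streaming': 0}
```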

Design and Implementation of Variable-Rate QPSK Demodulator from Data Flow Representation

  • Lee, Seung-Jun
    • Journal of Electrical Engineering and Information Science / v.3 no.2 / pp.139-144 / 1998
  • This paper describes the design of a variable-rate QPSK demodulator for a digital satellite TV system. This true variable-rate demodulator employs a unique architecture to realize an all-digital synchronization and detection algorithm. The data-flow-based design approach enabled a seamless transition from high-level design optimization to physical layout. The demodulator has been integrated with a Viterbi decoder, de-interleaver, and Reed-Solomon decoder to make a single-chip Digital Video Broadcast (DVB) receiver. The receiver IC has been fabricated in a 0.5-µm CMOS TLM process and proved fully functional in a real-world setup. (A toy sketch of QPSK hard-decision detection follows below.)

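As a loose illustration of the detection step only (the paper's variable-rate synchronization architecture is far more involved), a hard-decision QPSK detector can be sketched as:

```python
import numpy as np

# Hard-decision QPSK detection sketch: each received complex sample maps
# to two bits given by the signs of its I and Q components. This shows
# only the detector, not the paper's synchronization architecture.

def qpsk_detect(samples: np.ndarray) -> np.ndarray:
    bits_i = (samples.real < 0).astype(int)  # first bit from the I sign
    bits_q = (samples.imag < 0).astype(int)  # second bit from the Q sign
    return np.column_stack([bits_i, bits_q]).ravel()

# Example: four noiseless Gray-mapped QPSK symbols.
symbols = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
print(qpsk_detect(symbols))  # [0 0 1 0 1 1 0 1]
```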

A Study on Video Recognition and Representation Service Based on Smart Glass (스마트 글래스 기반 영상 인식 및 표현 서비스에 관한 연구)

  • Kim, Kwihoon; You, Woongsik; Oh, Jintae
    • Annual Conference of KIPS / 2014.11a / pp.1191-1194 / 2014
  • This paper concerns services that use smart glasses or other smart devices to recognize video and present it in various ways. We introduce a variety of such services that smart glasses can offer to general users.

Misclassified Samples based Hierarchical Cascaded Classifier for Video Face Recognition

  • Fan, Zheyi; Weng, Shuqin; Zeng, Yajun; Jiang, Jiao; Pang, Fengqian; Liu, Zhiwen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.2 / pp.785-804 / 2017
  • Due to various factors such as pose, facial expression, and illumination, video-based face recognition often suffers from poor recognition accuracy and generalization ability, since the within-class scatter can even exceed the between-class scatter. We address this problem by proposing a hierarchical cascaded classifier for video face recognition: a multi-layer algorithm that accounts for misclassified samples and their similar samples. Specifically, it decomposes into a single-classifier construction stage and a multi-layer classifier design stage. In the first stage, a classifier is created by clustering, and the number of classes is computed by analyzing a distance tree. In the second stage, a next layer is created for the misclassified samples and similar ones and cascaded into the hierarchical classifier. Experiments on a database we collected show that the proposed classifier outperforms compared recognition algorithms, such as neural networks and sparse representation, in recognition accuracy. (A toy cascade sketch follows below.)
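
The following is a minimal sketch of the cascade idea, assuming nearest-centroid classifiers as the per-layer model (the paper's clustering and distance-tree analysis are not reproduced here); each layer trains on what the previous layers got wrong:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y):
    """Toy 'single classifier': one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    labels = list(centroids)
    D = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[D.argmin(axis=0)]

def fit_cascade(X, y, n_layers=3):
    """Each layer is trained on the samples the previous layers misclassified."""
    layers = []
    for _ in range(n_layers):
        if len(X) == 0:
            break
        model = fit_centroids(X, y)
        layers.append(model)
        wrong = predict(model, X) != y
        X, y = X[wrong], y[wrong]  # misclassified samples feed the next layer
    return layers

# Synthetic demo: two overlapping Gaussian classes.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(1.5, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
print(f"{len(fit_cascade(X, y))} layers trained")
```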

Reduced Reference Quality Metric for Synthesized Virtual Views in 3DTV

  • Le, Thanh Ha; Long, Vuong Tung; Duong, Dinh Trieu; Jung, Seung-Won
    • ETRI Journal / v.38 no.6 / pp.1114-1123 / 2016
  • Multi-view video plus depth (MVD) has been widely used owing to its effectiveness in three-dimensional data representation. Using MVD, color videos with only a limited number of real viewpoints are compressed and transmitted along with captured or estimated depth videos. Because the synthesized views are generated from decoded real views, their original reference views do not exist at either the transmitter or the receiver. Therefore, it is challenging to define an efficient metric to evaluate the quality of synthesized images. We propose a novel metric: the reduced-reference quality metric. First, the effects of depth distortion on the quality of synthesized images are analyzed. We then exploit the high correlation between the local depth distortions and local color characteristics of the decoded depth and color images, respectively, to obtain an efficient depth quality metric for each real view. Finally, the objective quality metric of the synthesized views is obtained by combining all the depth quality metrics from the decoded real views. Experimental results show that the proposed quality metric correlates very well with full-reference image and video quality metrics. (A loose sketch of the per-view scoring and pooling idea follows below.)
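
A loose numpy illustration of the structure described above: score each real view from its decoded depth and color planes, then pool across views. The edge-agreement proxy and the plain-average pooling are assumptions, not the paper's formulas:

```python
import numpy as np

def view_depth_quality(depth_dec, color_dec):
    """Per-view quality proxy. The entry exploits the high correlation
    between local depth distortion and local color characteristics, so
    here we score how well decoded depth edges agree with decoded color
    edges. The exact formula is an assumption, not the paper's."""
    dgy, dgx = np.gradient(depth_dec.astype(float))
    cgy, cgx = np.gradient(color_dec.astype(float))
    mismatch = np.abs(np.hypot(dgx, dgy) - np.hypot(cgx, cgy))
    return 1.0 / (1.0 + mismatch.mean())  # higher means better agreement

def synthesized_quality(views):
    """Pool per-view scores; a plain average stands in for the paper's
    combination rule."""
    return float(np.mean([view_depth_quality(d, c) for d, c in views]))

rng = np.random.default_rng(0)
depth = rng.integers(0, 256, (32, 32))
color = depth + rng.integers(-5, 6, (32, 32))  # correlated color plane
print(synthesized_quality([(depth, color)]))
```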

Video Representation via Fusion of Static and Motion Features Applied to Human Activity Recognition

  • Arif, Sheeraz; Wang, Jing; Fei, Zesong; Hussain, Fida
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.7 / pp.3599-3619 / 2019
  • In a human activity recognition system, both static and motion information play a crucial role in achieving efficient and competitive results. Most existing methods extract video features insufficiently and cannot investigate the level of contribution of the static and motion components. Our work highlights this problem and proposes the Static-Motion fused features descriptor (SMFD), which intelligently leverages both static and motion features in the form of a descriptor. First, static features are learned by a two-stream 3D convolutional neural network. Second, trajectories are extracted by tracking key points, and only trajectories located in the central region of the original video frame are selected, to reduce irrelevant background trajectories as well as computational complexity. Then, shape and motion descriptors are obtained along with key points by using SIFT flow. Next, a Cholesky transformation is introduced to fuse the static and motion feature vectors so as to guarantee the equal contribution of all descriptors (a generic Cholesky-fusion sketch follows below). Finally, a Long Short-Term Memory (LSTM) network is utilized to discover long-term temporal dependencies and produce the final prediction. To confirm the effectiveness of the proposed approach, extensive experiments were conducted on three well-known datasets: UCF101, HMDB51, and YouTube. The findings show that the resulting recognition system is on par with state-of-the-art methods.
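
A minimal sketch of Cholesky-based feature fusion, assuming two L2-normalized descriptors of the same length and a tunable correlation parameter r (the paper's exact fusion rule may differ):

```python
import numpy as np

def cholesky_fuse(static_f, motion_f, r=0.5):
    """Fuse two L2-normalized descriptors with weights taken from the
    Cholesky factor of a 2x2 blending matrix [[1, r], [r, 1]]. A generic
    Cholesky-fusion sketch, not the paper's exact rule."""
    static_f = static_f / np.linalg.norm(static_f)
    motion_f = motion_f / np.linalg.norm(motion_f)
    L = np.linalg.cholesky(np.array([[1.0, r], [r, 1.0]]))
    # The second row of L mixes both inputs: r*static + sqrt(1 - r^2)*motion.
    fused = L[1, 0] * static_f + L[1, 1] * motion_f
    return fused / np.linalg.norm(fused)

static_f = np.random.default_rng(1).normal(size=256)
motion_f = np.random.default_rng(2).normal(size=256)
print(cholesky_fuse(static_f, motion_f).shape)  # (256,)
```

Setting r controls how much each modality contributes; r = 0.5 gives a balanced mix in this parameterization.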

Character Recognition and Search for Media Editing (미디어 편집을 위한 인물 식별 및 검색 기법)

  • Park, Yong-Suk; Kim, Hyun-Sik
    • Journal of Broadcast Engineering / v.27 no.4 / pp.519-526 / 2022
  • Identifying and searching for characters appearing in scenes during multimedia video editing is an arduous and time-consuming process. Applying artificial intelligence to labor-intensive media editing tasks can greatly reduce media production time, improving the efficiency of the creative process. In this paper, a method is proposed that combines existing artificial-intelligence-based techniques to automate character recognition and search tasks for video editing. Object detection, face detection, and pose estimation are used for character localization, and face recognition and color space analysis are used to extract unique representation information. (A toy color-signature matching sketch follows below.)
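
As one possible reading of the "color space analysis" step, a character crop can be summarized by a color histogram and matched by histogram intersection; the features and matching rule here are illustrative assumptions, not the paper's method:

```python
import numpy as np

def color_signature(crop, bins=16):
    """Per-channel color histogram of a character crop, concatenated and
    normalized; a simple stand-in for a color-based character signature."""
    hists = [np.histogram(crop[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    sig = np.concatenate(hists).astype(float)
    return sig / sig.sum()

def match_score(sig_a, sig_b):
    """Histogram intersection: 1.0 for identical color distributions."""
    return float(np.minimum(sig_a, sig_b).sum())

rng = np.random.default_rng(0)
crop1 = rng.integers(0, 256, (64, 32, 3))
crop2 = rng.integers(0, 256, (64, 32, 3))
print(match_score(color_signature(crop1), color_signature(crop2)))
```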

Latent Shifting and Compensation for Learned Video Compression (신경망 기반 비디오 압축을 위한 레이턴트 정보의 방향 이동 및 보상)

  • Kim, Yeongwoong; Kim, Donghyun; Jeong, Se Yoon; Choi, Jin Soo; Kim, Hui Yong
    • Journal of Broadcast Engineering / v.27 no.1 / pp.31-43 / 2022
  • Traditional video compression has developed around hybrid coding methods built on motion prediction, residual coding, and quantization. With the rapid development of artificial neural networks in recent years, research on neural image and video compression is also progressing rapidly, showing competitiveness against traditional video codecs. This paper presents a new method for improving the performance of such neural video compression models. Starting from the rate-distortion optimization framework with an auto-encoder and entropy model adopted by existing learned video compression, the encoder shifts those components of the latent representation that are difficult for the entropy model to estimate before transmitting it to the decoder, and the decoder then compensates for the distortion of the lost information. Applied to the existing neural video compression framework MFVC (Motion Free Video Compression), the method achieves a BDBR (Bjøntegaard Delta Rate) of -27% against H.264, nearly twice the bit savings of MFVC's -14%. The proposed method has the advantage of being widely applicable to neural image and video compression, not only to MFVC but to any model using latent representations and an entropy model. (A toy shifting-and-compensation sketch follows below.)
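
A toy numeric sketch of the shift-then-compensate idea as read from the abstract (the paper's scheme is learned; here the shift is rule-based and the compensation is passed explicitly):

```python
import numpy as np

# Latent components far from the entropy model's prediction are expensive
# to code, so the encoder shifts them toward the predicted mean before
# quantization; the decoder then adds back a compensation to offset the
# distortion of the shifted information.

def encode(latent, predicted_mean, threshold=2.0):
    residual = latent - predicted_mean
    hard = np.abs(residual) > threshold           # costly-to-code components
    shift = np.where(hard, residual - np.sign(residual) * threshold, 0.0)
    shifted = latent - shift                      # now within +/- threshold
    return np.round(shifted - predicted_mean), shift

def decode(symbols, predicted_mean, compensation):
    # 'compensation' stands in for the decoder-side recovery of the
    # shifted information (learned in the paper, given explicitly here).
    return symbols + predicted_mean + compensation

latent = np.array([0.3, 4.2, -3.7, 1.1])
mean = np.zeros(4)
symbols, shift = encode(latent, mean)
print(decode(symbols, mean, shift))  # approximately the original latent
```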

Video Event Detection according to Generating of Semantic Unit based on Moving Object (객체 움직임의 의미적 단위 생성을 통한 비디오 이벤트 검출)

  • Shin, Ju-Hyun; Baek, Sun-Kyoung; Kim, Pan-Koo
    • Journal of Korea Multimedia Society / v.11 no.2 / pp.143-152 / 2008
  • Many investigators are studying methodologies for expressing events to enable semantic retrieval of video data. However, most work still relies on annotation-based retrieval, where annotations are defined for each item, or on content-based retrieval using low-level features. We therefore propose a method that creates motion units and extracts events through those units for more semantic retrieval than existing methods. First, we classify motions by event unit. Second, we define a semantic unit for each classified object motion. To use these for event extraction, we create rules that match the low-level features, from which we can retrieve semantic events at the level of video shots. In an evaluation experiment extracting semantic events from video, we obtain a precision of approximately 80%. (A toy rule sketch follows below.)

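As a toy illustration of rules that map low-level object motion to semantic units (the labels and thresholds here are invented for illustration, not the paper's):

```python
# Map a tracked object's low-level motion to a semantic unit.

def semantic_unit(track):
    """track: list of (x, y) object centers over consecutive frames."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    dx, dy = x1 - x0, y1 - y0
    speed = (dx ** 2 + dy ** 2) ** 0.5 / max(len(track) - 1, 1)
    if speed < 0.5:
        return "stop"
    return "move_right" if dx > 0 else "move_left"

print(semantic_unit([(0, 0), (2, 0), (5, 1)]))  # move_right
```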

ViStoryNet: Neural Networks with Successive Event Order Embedding and BiLSTMs for Video Story Regeneration (ViStoryNet: 비디오 스토리 재현을 위한 연속 이벤트 임베딩 및 BiLSTM 기반 신경망)

  • Heo, Min-Oh; Kim, Kyung-Min; Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.24 no.3 / pp.138-144 / 2018
  • A video is a vivid medium similar to human visual-linguistic experience, since it can convey a sequence of situations, actions, or dialogues that can be told as a story. In this study, we propose frameworks for learning and regenerating stories from videos, with successive event order supervision for contextual coherence. The supervision induces each episode to take the form of a trajectory in the latent space, which constitutes a composite representation of ordering and semantics. We use kids' videos as training data; their advantages include an omnibus style, short storylines that are simple and explicit, a chronological narrative order, and a relatively limited number of characters and spatial environments. We build an encoder-decoder structure with successive event order embedding (SEOE) and train bidirectional LSTMs as sequence models for multi-step sequence prediction. Using approximately 200 episodes of the kids' video 'Pororo the Little Penguin', we give empirical results for story regeneration tasks and SEOE. In addition, each episode shows a trajectory-like shape in the latent space of the model, which gives geometric information to the sequence models. (A minimal BiLSTM sketch follows below.)
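
A minimal PyTorch sketch of a BiLSTM sequence model over event embeddings, in the spirit of the entry above but not the paper's architecture; the dimensions and next-step loss are assumptions:

```python
import torch
import torch.nn as nn

class EventSeqModel(nn.Module):
    """BiLSTM over event embeddings: given embeddings of consecutive story
    events, predict an embedding per step (here trained toward the next
    event, a stand-in for the paper's multi-step prediction)."""
    def __init__(self, dim=128, hidden=256):
        super().__init__()
        self.bilstm = nn.LSTM(dim, hidden, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(2 * hidden, dim)  # back to embedding space

    def forward(self, events):                  # (batch, seq_len, dim)
        h, _ = self.bilstm(events)
        return self.head(h)                     # predicted embedding per step

model = EventSeqModel()
episode = torch.randn(1, 10, 128)               # 10 hypothetical event embeddings
pred = model(episode)
loss = nn.functional.mse_loss(pred[:, :-1], episode[:, 1:])  # next-step target
print(pred.shape, float(loss))
```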