• Title/Summary/Keyword: performance video


Probabilistic Background Subtraction in a Video-based Recognition System

  • Lee, Hee-Sung;Hong, Sung-Jun;Kim, Eun-Tai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.4 / pp.782-804 / 2011
  • In video-based recognition systems, stationary cameras are used to monitor an area of interest. These systems focus on segmenting the foreground in the video stream and recognizing the events occurring in that area. The usual approach to discriminating the foreground from the video sequence is background subtraction. This paper presents a novel background subtraction method based on a probabilistic approach. We represent the posterior probability of the foreground based on the current image and all past images and derive an update method for it. Furthermore, we present an efficient fusion method for color and edge information in order to overcome the difficulties of existing background subtraction methods that use only color information. The suggested method is applied to synthetic data and real video streams, and its robust performance is demonstrated through experimentation.
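
As a rough illustration of the approach described above, the sketch below keeps a per-pixel foreground posterior that is updated recursively from each new frame and fuses a color cue with an edge cue. The likelihood models, learning rate, and fusion weight are illustrative assumptions, not the authors' formulation.

```python
# Minimal sketch of probabilistic background subtraction with color/edge fusion.
# All model choices (likelihood shapes, ALPHA, W_EDGE, thresholds) are assumptions.
import numpy as np
import cv2

ALPHA = 0.05    # background learning rate (assumed)
W_EDGE = 0.4    # weight of the edge cue in the fused likelihood (assumed)

def init_state(first_frame):
    """Running background model and per-pixel foreground posterior."""
    bg = first_frame.astype(np.float32)
    post = np.zeros(first_frame.shape[:2], np.float32)
    return bg, post

def update(frame, bg, post):
    frame_f = frame.astype(np.float32)

    # Color cue: distance of each pixel to the running background model.
    color_diff = np.linalg.norm(frame_f - bg, axis=2)
    p_color = 1.0 - np.exp(-color_diff / 30.0)          # crude likelihood, assumed scale

    # Edge cue: difference of edge maps, less sensitive to illumination changes.
    edges_now = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150) / 255.0
    edges_bg = cv2.Canny(cv2.cvtColor(bg.astype(np.uint8), cv2.COLOR_BGR2GRAY), 50, 150) / 255.0
    p_edge = np.abs(edges_now - edges_bg)

    # Fuse the two cues and fold the result into the running posterior.
    likelihood = (1.0 - W_EDGE) * p_color + W_EDGE * p_edge
    post = 0.5 * post + 0.5 * likelihood                # simple recursive update (assumed)

    # Update the background model only where the pixel currently looks like background.
    is_bg = (post < 0.5)[..., None]
    bg = np.where(is_bg, (1.0 - ALPHA) * bg + ALPHA * frame_f, bg)

    foreground_mask = (post >= 0.5).astype(np.uint8) * 255
    return bg, post, foreground_mask
```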

A Fast Image Matching Method for Oblique Video Captured with UAV Platform

  • Byun, Young Gi;Kim, Dae Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.2 / pp.165-172 / 2020
  • There is growing interest in vision-based video image matching owing to the continuing development of unmanned systems. The purpose of this paper is the development of a fast and effective matching technique for UAV oblique video images. We first extracted initial matching points using the NCC (Normalized Cross-Correlation) algorithm and improved its computational efficiency using an integral image. Furthermore, we developed a triangulation-based outlier removal algorithm to extract more robust matching points among the initial matching points. In order to evaluate the performance of the proposed method, it was quantitatively compared with existing image matching approaches. Experimental results demonstrate that the proposed method can process 2.57 frames per second for video image matching and is up to 4 times faster than existing methods. The proposed method therefore has good potential for the various video-based applications that require image matching as a pre-processing step.
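
The integral-image speed-up mentioned above can be illustrated roughly as follows: the per-window sums and sums of squares needed for the NCC denominator are obtained in constant time per window from one integral image. This is a generic sketch of the technique, not the paper's implementation; the numerator is computed with a plain filtering call, and tile sizes, thresholds, and the triangulation-based outlier removal are omitted.

```python
# Sketch: NCC map over an image, using integral images for the denominator.
import numpy as np
import cv2

def window_sums(img, h, w):
    """Sum of every h x w window, O(1) per window via the integral image."""
    ii = cv2.integral(img.astype(np.float64))            # shape (H+1, W+1)
    return ii[h:, w:] - ii[:-h, w:] - ii[h:, :-w] + ii[:-h, :-w]

def ncc_map(image_gray, template_gray):
    h, w = template_gray.shape
    n = h * w
    t = template_gray.astype(np.float64)
    t_zero_mean = t - t.mean()
    t_norm = np.sqrt((t_zero_mean ** 2).sum())

    img = image_gray.astype(np.float64)
    s1 = window_sums(img, h, w)                          # per-window sum
    s2 = window_sums(img ** 2, h, w)                     # per-window sum of squares
    win_norm = np.sqrt(np.maximum(s2 - (s1 ** 2) / n, 1e-12))

    # Numerator: correlation of the image with the zero-mean template,
    # anchored at each window's top-left corner.
    num = cv2.filter2D(img, -1, t_zero_mean, anchor=(0, 0),
                       borderType=cv2.BORDER_CONSTANT)[:s1.shape[0], :s1.shape[1]]
    return num / (win_norm * t_norm + 1e-12)             # NCC in [-1, 1] per position
```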

Video Sequence Matching Using Normalized Dominant Singular Values

  • Jeong, Kwang-Min;Lee, Joon-Jae
    • Journal of Korea Multimedia Society / v.12 no.6 / pp.785-793 / 2009
  • This paper proposes a signature using dominant singular values for video sequence matching. By considering the input image as a matrix A, a partition procedure is first performed to separate the matrix into non-overlapping sub-images of a fixed size. SVD (Singular Value Decomposition) then decomposes each sub-image into a singular value-singular vector factorization. The k dominant singular values obtained for each sub-image, which are sufficient to discriminate between different images and are robust to image size variation, are chosen and normalized as the signature for each block in an image frame, and these signatures are used for matching between the reference video clip and the query clip. Experimental results show that the proposed video signature performs better than the ordinal signature in terms of the ROC curve.
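
A minimal sketch of the block-wise signature idea follows, assuming an illustrative block size and number of dominant singular values rather than the paper's settings; the matching step is reduced to a simple L1 distance between signatures.

```python
# Sketch: per-block normalized dominant singular values as a frame signature.
import numpy as np

def frame_signature(gray_frame, block=32, k=4):
    """Concatenate the normalized top-k singular values of each block."""
    h, w = gray_frame.shape
    sig = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = gray_frame[y:y + block, x:x + block].astype(np.float64)
            s = np.linalg.svd(blk, compute_uv=False)     # singular values, descending
            top = s[:k]
            sig.append(top / (np.linalg.norm(top) + 1e-12))
    return np.concatenate(sig)

def match_score(sig_ref, sig_query):
    # Smaller L1 distance between signatures means a closer match.
    return np.abs(sig_ref - sig_query).sum()
```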

Parallel Implementation Strategy for Content Based Video Copy Detection Using a Multi-core Processor

  • Liao, Kaiyang;Zhao, Fan;Zhang, Mingzhu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.10 / pp.3520-3537 / 2014
  • Video copy detection methods have emerged in recent years for a variety of applications. However, the lack of efficiency in the usual retrieval systems restricts their use. In this paper, we propose a parallel implementation strategy for content-based video copy detection (CBCD) using a multi-core processor. This strategy supports video copy detection effectively, and the processing time tends to decrease linearly as the number of processors increases. Experiments have shown that our approach is successful in speeding up computation as well as in maintaining detection performance.
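
A rough sketch of frame-level parallelism for this kind of copy-detection pipeline is given below, using a process pool so that fingerprinting scales with the number of cores. The fingerprint and matching steps are placeholders for illustration, not the descriptors used in the paper.

```python
# Sketch: parallel fingerprinting of query frames, then nearest-neighbor matching.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def fingerprint(frame):
    # Placeholder descriptor: a coarse 8x8 intensity thumbnail of the frame.
    h, w = frame.shape[:2]
    ys = np.linspace(0, h - 1, 8).astype(int)
    xs = np.linspace(0, w - 1, 8).astype(int)
    return frame[np.ix_(ys, xs)].astype(np.float32).ravel()

def detect_copies(query_frames, reference_index, workers=8, threshold=10.0):
    """reference_index: 2-D array, one precomputed fingerprint per reference frame."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        query_fps = list(pool.map(fingerprint, query_frames, chunksize=16))
    hits = []
    for i, fp in enumerate(query_fps):
        dists = np.linalg.norm(reference_index - fp, axis=1)
        j = int(dists.argmin())
        if dists[j] < threshold:
            hits.append((i, j))                          # (query frame, reference frame)
    return hits
```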

A study on the design and implementation of the transport protocol for Audio/Video data transmission (음성 및 화상 데이타 전송을 위한 트랜스포트 프로토콜의 설계 및 구현에 관한 연구)

  • Kim, June;Lee, Kwang-Hui;An, Sun-Shin
    • Proceedings of the KIEE Conference / 1987.07b / pp.1053-1057 / 1987
  • In this paper, we study a communication protocol that can provide Audio/Video data transmission in real time. Audio/Video data have their own characteristics, so a new transport protocol with real-time constraints has been designed and implemented that performs dynamic error control and flow control depending on the characteristics of the transmitted Audio/Video data. Since incoming data can be predicted from previously received data, prediction functions are introduced into our transport protocol, which may improve the speed of data transmission and give a real-time response. We have tested the transport protocol and measured its performance by simulation, assuming that it would be used in a LAN environment. Our prime purpose is to provide a reliable, real-time Audio/Video data transmission service.

Hybrid Wyner-Ziv Video Coding with No Feedback Channel

  • Lee, Hoyoung;Tillo, Tammam;Jeon, Byeungwoo
    • IEIE Transactions on Smart Processing and Computing / v.5 no.6 / pp.418-429 / 2016
  • In this paper, we propose a hybrid Wyner-Ziv video coding structure that combines conventional motion predictive video coding and Wyner-Ziv video coding to eliminate the feedback channel, which is a major practical problem in applications using the Wyner-Ziv video coding approach. The proposed method divides a hybrid frame into two regions. One is coded by a motion predictive video coder, and the other by the Wyner-Ziv coding method. The proposed encoder estimates side information with low computational complexity, using the coding information of the motion predictive coded region, and estimates the number of syndrome bits required to decode the region. The decoder generates side information using the same method as the encoder, which also reduces the computational complexity in the decoder. Experimental results show that the proposed method can eliminate the feedback channel without incurring a significant rate-distortion performance loss.

Offline Camera Movement Tracking from Video Sequences

  • Dewi, Primastuti;Choi, Yeon-Seok;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.05a / pp.69-72 / 2011
  • In this paper, we propose a method to track the movement of the camera from video sequences. This method is useful for video analysis and can be applied as a pre-processing step in applications such as video stabilization and marker-less augmented reality. First, we extract the features in each frame using corner point detection. The features in the current frame are then compared with the features in the adjacent frames to calculate the optical flow, which represents the relative movement of the camera. The optical flow is then analyzed to obtain the camera movement parameters. The final step is camera movement estimation and correction to increase the accuracy. The performance of the method is verified by generating a 3D map of camera movement and embedding a 3D object into the video. The examples demonstrated in this paper show that this method has high accuracy and rarely produces any jitter.
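
The pipeline described above (corner features, optical flow between adjacent frames, camera-motion estimation) can be sketched roughly as follows. Parameter values and the use of a RANSAC-fitted similarity transform are illustrative choices, not necessarily those of the paper.

```python
# Sketch: per-frame-pair camera motion from tracked corner features.
import numpy as np
import cv2

def camera_motion(prev_gray, curr_gray):
    # 1. Corner features in the previous frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    if pts_prev is None:
        return None

    # 2. Sparse optical flow (Lucas-Kanade) into the current frame.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good = status.ravel() == 1
    p0, p1 = pts_prev[good], pts_curr[good]
    if len(p0) < 4:
        return None

    # 3. Robustly fit a similarity transform describing the apparent camera movement.
    m, _ = cv2.estimateAffinePartial2D(p0, p1, method=cv2.RANSAC)
    if m is None:
        return None
    dx, dy = m[0, 2], m[1, 2]
    angle = np.arctan2(m[1, 0], m[0, 0])
    return dx, dy, angle                                 # translation and rotation per frame pair
```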

An Efficient MPEG-4 Video Codec using Low-power Architectural Engines

  • Koo, Bontae;Park, Juhyun;Park, Seongmo;Kim, Seongmin;Eum, Nakwoong
    • Proceedings of the IEEK Conference / 2002.07b / pp.1308-1311 / 2002
  • We present a low-power MPEG-4 video codec chip capable of delivering high-quality video data in wireless multimedia applications. The discussion focuses on the architectural design techniques for implementing a high-performance video compression/decompression chip with a low-power architecture. The proposed MPEG-4 video codec can process 30 frames/s of QCIF or 7.5 frames/s of CIF at 27 MHz for 128~144 kbps. By introducing an efficiently optimized Frame Memory Interface architecture, low-power motion estimation, an embedded ARM microprocessor, and an AMBA interface, the proposed MPEG-4 video codec achieves low power consumption for wireless multimedia applications such as IMT-2000.

Flickering Effect Reduction Based on the Modified Transformation Function for Video Contrast Enhancement

  • Yang, Hyeonseok;Park, Jinwook;Moon, Youngshik
    • IEIE Transactions on Smart Processing and Computing / v.3 no.6 / pp.358-365 / 2014
  • This paper proposes a method that reduces the flickering effect caused by A-GLG (Adaptive Gray-Level Grouping) during video contrast enhancement. Of the GLG series, A-GLG shows the best contrast enhancement performance. The GLG series is based on histogram grouping. Histogram grouping can be calculated differently between consecutive frames with similar histograms, causing a subtle change in the transformation function; this is the reason for the flickering effect when video contrast is enhanced by A-GLG. To reduce this flickering, the proposed method calculates a modified transformation function as a weighted combination of the previous and current transformation functions. The proposed method was compared with A-GLG in terms of flickering-effect reduction and video contrast enhancement, and the experimental results show that it not only reduces the flickering effect but also enhances video contrast.
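
The core idea, blending the previous and current transformation functions so that small histogram changes no longer cause visible flicker, can be sketched as a weighted combination of two lookup tables. The weight value below is an assumption, and the A-GLG transformation itself is not reproduced.

```python
# Sketch: temporally smoothed contrast-enhancement lookup table.
import numpy as np

def smooth_transform(t_curr, t_prev, w=0.7):
    """t_curr, t_prev: 256-entry lookup tables mapping input to output gray level."""
    if t_prev is None:
        return t_curr                                    # first frame: nothing to blend
    return w * t_prev + (1.0 - w) * t_curr               # weighted combination (w assumed)

def enhance_frame(gray_frame, t_modified):
    # Apply the blended transformation function as a per-pixel lookup table.
    lut = np.clip(np.rint(t_modified), 0, 255).astype(np.uint8)
    return lut[gray_frame]
```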

Enhanced RGB Video Coding Based on Correlation in the Adjacent Block (인접블록의 상관관계에 기반한 RGB video coding 개선 알고리즘)

  • Kim, Yang-Soo;Jeong, Jin-Woo;Choe, Yoon-Sik
    • The Transactions of The Korean Institute of Electrical Engineers / v.58 no.12 / pp.2538-2541 / 2009
  • The H.264/AVC High 4:4:4 Intra and Predictive profiles support RGB 4:4:4 sequences for high-fidelity video. RGB color planes rather than YCbCr color planes are preferred by high-fidelity video applications such as digital cinema, medical imaging, and UHDTV. Several RGB coding tools have therefore been developed to improve the coding efficiency of RGB video. In this paper, we propose a new method to extract more accurate correlation parameters for inter-plane prediction. We use a search method to determine the matched macroblock (MB) that has an inter-color relation similar to that of the current MB. Using this block, we can infer more accurate correlation parameters to predict the chroma MB from the luma MB. Our proposed inter-plane prediction mode shows an average bit saving of 15.6% and a PSNR increase of 0.99 dB compared with H.264 High 4:4:4 Intra profile RGB coding. Furthermore, extensive performance evaluation revealed that our proposed algorithm has better coding efficiency than existing algorithms.
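
A minimal sketch of linear inter-plane prediction is given below, assuming the correlation parameters (slope and offset) are fitted by least squares over reference samples; the search for a matched macroblock with a similar inter-color relation, which is the paper's contribution, is not reproduced here.

```python
# Sketch: predict one color plane's block from another via a fitted linear model.
import numpy as np

def fit_interplane_params(ref_base, ref_target):
    """Least-squares fit: target ~= alpha * base + beta over reference samples."""
    x = ref_base.astype(np.float64).ravel()
    y = ref_target.astype(np.float64).ravel()
    alpha, beta = np.polyfit(x, y, 1)
    return alpha, beta

def predict_block(base_block, alpha, beta):
    pred = alpha * base_block.astype(np.float64) + beta
    return np.clip(np.rint(pred), 0, 255).astype(np.uint8)

# The residual actually coded is: original target block - predicted block.
```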