• Title/Summary/Keyword: performance video


Frame Rearrangement Method by Time Information Remarked on Recovered Image (복원된 영상에 표기된 시간 정보에 의한 프레임 재정렬 기법)

  • Kim, Yong Jin;Lee, Jung Hwan;Byun, Jun Seok;Park, Nam In
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.12
    • /
    • pp.1641-1652
    • /
    • 2021
  • To analyze a crime scene, digital evidence such as CCTV and black-box footage plays a very important role. Such evidence is often damaged due to device defects or intentional deletion. In this case, the deleted video can be restored by well-known techniques such as frame-based recovery. In particular, when the storage medium is nearly full, video data is generally saved in fragmented form. If a fragmented video is recovered in units of images, the sequence of the recovered images may not be continuous. In this paper, we propose a new video restoration method that reorders the recovered images. First, the images are recovered through a frame-based recovery technique. Then, the time information marked on the images is extracted and recognized via optical character recognition (OCR). Finally, the recovered images are rearranged based on the time information obtained by OCR. For performance evaluation, we measure the recovery rate of the proposed video restoration method. The results show that the recovery rate for fragmented videos ranged from about 47% to 98%.
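A minimal sketch of the reordering idea described in this abstract: crop the on-screen clock from each recovered frame, read it with OCR, and sort the frames by the recovered time. The crop box, timestamp format, file layout, and the use of pytesseract are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: sort recovered frames by an OCR-read timestamp overlay.
import glob
import re
from datetime import datetime

import cv2
import pytesseract

TIME_BOX = (0, 0, 400, 40)          # assumed (x, y, w, h) of the on-screen clock
TIME_FORMAT = "%Y-%m-%d %H:%M:%S"   # assumed overlay format

def read_timestamp(path):
    """OCR the timestamp region of one recovered frame; return None on failure."""
    img = cv2.imread(path)
    x, y, w, h = TIME_BOX
    roi = cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(roi)
    match = re.search(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", text)
    return datetime.strptime(match.group(), TIME_FORMAT) if match else None

def rearrange(frame_dir):
    """Return recovered frame paths sorted by the OCR-extracted time."""
    stamped = []
    for path in glob.glob(f"{frame_dir}/*.jpg"):
        ts = read_timestamp(path)
        if ts is not None:
            stamped.append((ts, path))
    return [path for _, path in sorted(stamped)]

if __name__ == "__main__":
    for path in rearrange("recovered_frames"):   # hypothetical directory name
        print(path)
```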

Implementation and Performance Evaluation of a Video-Equipped Real-Time Fire Detection Method at Different Resolutions using a GPU (GPU를 이용한 다양한 해상도의 비디오기반 실시간 화재감지 방법 구현 및 성능평가)

  • Shon, Dong-Koo;Kim, Cheol-Hong;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.1
    • /
    • pp.1-10
    • /
    • 2015
  • In this paper, we propose an efficient parallel implementation of a widely used four-stage fire detection algorithm on a graphics processing unit (GPU) to improve its performance, and we analyze the performance of the parallel implementation. In addition, we use videos at seven different resolutions (QVGA, VGA, SVGA, XGA, SXGA+, UXGA, QXGA) as inputs to the four-stage fire detection algorithm, and we compare the performance of the GPU-based approach with that of a CPU implementation at each resolution. Experimental results using five fire videos at the seven resolutions indicate that the proposed GPU implementation outperforms the CPU implementation in terms of execution time, taking 25.11 ms per frame for the UXGA resolution video and thus satisfying the real-time requirement (30 frames per second, 30 fps) of the fire detection algorithm.
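The abstract judges real-time capability by comparing per-frame execution time against the 33.3 ms budget that 30 fps allows. Below is a small timing harness illustrating that kind of measurement; it is not the authors' GPU code, and the placeholder detector and video filename are assumptions.

```python
# Measure average per-frame latency of a detection callable against the 30 fps budget.
import time
import cv2

REALTIME_BUDGET_MS = 1000.0 / 30.0   # 30 fps -> 33.3 ms per frame

def measure_fps(video_path, detect_fn):
    cap = cv2.VideoCapture(video_path)
    total_ms, frames = 0.0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        start = time.perf_counter()
        detect_fn(frame)                       # stand-in for the 4-stage detector
        total_ms += (time.perf_counter() - start) * 1000.0
        frames += 1
    cap.release()
    avg = total_ms / max(frames, 1)
    print(f"avg {avg:.2f} ms/frame, real-time: {avg <= REALTIME_BUDGET_MS}")

# Example with a trivial colour-threshold stage as a placeholder detector.
measure_fps("fire_uxga.avi", lambda f: cv2.inRange(f, (0, 0, 200), (120, 160, 255)))
```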

Construction of a Video Dataset for Face Tracking Benchmarking Using a Ground Truth Generation Tool

  • Do, Luu Ngoc;Yang, Hyung Jeong;Kim, Soo Hyung;Lee, Guee Sang;Na, In Seop;Kim, Sun Hee
    • International Journal of Contents
    • /
    • v.10 no.1
    • /
    • pp.1-11
    • /
    • 2014
  • In the current generation of smart mobile devices, object tracking is one of the most important research topics in computer vision. Because human face tracking can be widely used in many applications, collecting a dataset of face videos is necessary for evaluating the performance of a tracker and for comparing different approaches. Unfortunately, the well-known benchmark datasets of face videos are not sufficiently diverse. As a result, it is difficult to compare the accuracy of different tracking algorithms under various conditions, such as illumination, background complexity, and subject movement. In this paper, we propose a new dataset that includes 91 face video clips recorded under different conditions. We also provide a semi-automatic ground-truth generation tool that can easily be used to evaluate the performance of face tracking systems. This tool helps to maintain consistent ground-truth definitions in each frame. The resulting video dataset is used to evaluate well-known approaches and test their efficiency.
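A tracker is typically scored against such per-frame ground truth by bounding-box overlap. The sketch below shows one common scoring routine; the (x, y, w, h) box format and the 0.5 success threshold are assumptions, not the tool's own definitions.

```python
# Hypothetical evaluation sketch: intersection-over-union success rate per sequence.
def iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def success_rate(tracked_boxes, ground_truth, threshold=0.5):
    """Fraction of frames where the tracked box overlaps ground truth by >= threshold."""
    hits = sum(iou(t, g) >= threshold for t, g in zip(tracked_boxes, ground_truth))
    return hits / len(ground_truth)

print(success_rate([(10, 10, 50, 50)], [(12, 8, 48, 52)]))
```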

Rate-User-Perceived-Quality Aware Replication Strategy for Video Streaming over Wireless Mesh Networks

  • Du, Xu;Vo, Nguyen-Son;Cheng, Wenqing;Duong, Trung Q.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.11
    • /
    • pp.2103-2120
    • /
    • 2011
  • In this research, we consider the replication strategy for video streaming applications in wireless mesh networks (WMNs). In particular, we propose a closed form of the optimal replication densities for a set of frames of a video stream by exploiting not only the skewed access probability of each frame but also the skewed loss probability and skewed encoding rate-distortion information. The simulation results demonstrate that our method improves replication performance in terms of user-perceived quality (UPQ), which includes: 1) minimum average maximum reconstructed distortion for a high peak signal-to-noise ratio (PSNR), 2) small reconstructed distortion fluctuation among frames for smooth playback, and 3) a reasonable average maximum transmission distance for continuous playback. Furthermore, the proposed strategy consumes less storage capacity than other existing optimal replication strategies. More importantly, the effect of the encoding rate is carefully investigated to show that a high encoding rate does not always yield high replication performance for video streaming.
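As a loose illustration only (the weighting rule below is an assumption, not the paper's closed-form solution), a replication-density allocation of this kind could weight each frame by its access probability, loss probability, and a distortion term, then normalize under a storage budget:

```python
# Illustrative per-frame replication densities under a storage budget.
import numpy as np

def replication_densities(access_p, loss_p, distortion_gain, frame_size, budget):
    """Return per-frame replica densities that exhaust the given storage budget."""
    weight = access_p * loss_p * distortion_gain          # assumed importance weight
    raw = np.sqrt(weight / frame_size)                    # square-root style allocation
    scale = budget / np.sum(raw * frame_size)             # fit the storage budget
    return raw * scale

access_p = np.array([0.4, 0.3, 0.2, 0.1])
loss_p = np.array([0.05, 0.1, 0.1, 0.2])
gain = np.array([3.0, 2.0, 1.5, 1.0])    # distortion reduction if the frame is received
size = np.array([20.0, 8.0, 8.0, 4.0])   # frame sizes (arbitrary units)
print(replication_densities(access_p, loss_p, gain, size, budget=100.0))
```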

A Novel Video Stitching Method for Multi-Camera Surveillance Systems

  • Yin, Xiaoqing;Li, Weili;Wang, Bin;Liu, Yu;Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.10
    • /
    • pp.3538-3556
    • /
    • 2014
  • This paper proposes a novel video stitching method that improves the real-time performance and visual quality of a multi-camera video surveillance system. A two-stage seam-searching algorithm based on enhanced dynamic programming is proposed; it obtains satisfactory results and achieves better real-time performance than traditional seam-searching methods. The experiments show that the computing time is reduced by 66.4% with the proposed algorithm compared with enhanced dynamic programming, while the seam-searching accuracy is maintained. A real-time local update scheme reduces the deformation caused by moving objects passing through the seam, and a seam-based local color transfer model is constructed and applied to achieve a smooth transition in the overlapped area, outperforming traditional pixel-blending methods. The effectiveness of the proposed method is demonstrated in the experiments.
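For orientation, this is the generic dynamic-programming seam search that such stitching methods build on: accumulate a per-pixel difference cost over the overlap and backtrack the minimum-cost vertical path. It is a plain single-stage illustration, not the paper's two-stage accelerated algorithm.

```python
# Minimal DP seam search over the overlap of two aligned images.
import numpy as np

def find_seam(overlap_a, overlap_b):
    """Return, for each row, the column index of the minimum-difference seam."""
    cost = np.abs(overlap_a.astype(np.float64) - overlap_b.astype(np.float64)).sum(axis=2)
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):                      # accumulate costs row by row
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            acc[y, x] += acc[y - 1, lo:hi].min()
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):             # backtrack the cheapest path
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam
```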

On-line Background Extraction in Video Image Using Vector Median (벡터 미디언을 이용한 비디오 영상의 온라인 배경 추출)

  • Kim, Joon-Cheol;Park, Eun-Jong;Lee, Joon-Whoan
    • The KIPS Transactions:PartB
    • /
    • v.13B no.5 s.108
    • /
    • pp.515-524
    • /
    • 2006
  • Background extraction is an important technique for finding moving objects in video surveillance systems. This paper proposes a new on-line background extraction method for color video based on vector order statistics. In the proposed method, using the fact that the background occurs more frequently than objects, the vector median of the color pixels at each position over consecutive frames is treated as the background at that position. The objects in the current frame then consist of the set of pixels whose distance from the background pixel is larger than a threshold. In the paper, the proposed method is compared with on-line multiple background extraction based on a Gaussian mixture model (GMM) in order to evaluate its performance. The results show that its performance is similar or superior to that of the GMM-based method.
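A direct sketch of the definition used above: at each pixel position, the background colour is the vector median of the colours seen over a window of frames (the sample whose summed distance to the others is smallest), and a pixel far from that background is marked as object. The window size, threshold, and brute-force pairwise-distance computation are assumptions suited only to small windows.

```python
# Vector-median background model and distance-threshold foreground test (sketch).
import numpy as np

def vector_median_background(frames):
    """frames: array of shape (N, H, W, 3). Returns the (H, W, 3) background."""
    stack = frames.astype(np.float64)                       # (N, H, W, 3)
    # pairwise distances between the N colour samples at every pixel position
    # (memory-heavy; fine for a short frame window in an illustration)
    diff = stack[:, None] - stack[None, :]                  # (N, N, H, W, 3)
    dist_sum = np.linalg.norm(diff, axis=-1).sum(axis=1)    # (N, H, W)
    idx = dist_sum.argmin(axis=0)                           # winning frame per pixel
    h, w = idx.shape
    return stack[idx, np.arange(h)[:, None], np.arange(w)[None, :]]

def foreground_mask(frame, background, threshold=30.0):
    """Pixels whose colour distance from the background exceeds the threshold."""
    dist = np.linalg.norm(frame.astype(np.float64) - background, axis=-1)
    return dist > threshold
```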

Efficient Side Information Generation Techniques and Performance Evaluation for Distributed Video Coding System (분산 동영상 부호화 시스템을 위한 부가정보 생성 기법의 성능 평가)

  • Moon, Hak-Soo;Lee, Chang-Woo;Lee, Seong-Won
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.3C
    • /
    • pp.140-148
    • /
    • 2011
  • The side information in a distributed video coding system is generated using motion-compensated interpolation methods. Since the accuracy of the generated side information affects the amount of parity bits required to reconstruct the Wyner-Ziv frame, it is important to produce accurate side information. In this paper, we analyze the performance of various side information generation methods and propose an effective side information generation technique. We also compare the side information generation methods from the hardware point of view and analyze the performance of the distributed video coding system with each of them.
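To make the motion-compensated interpolation idea concrete, here is a minimal block-matching sketch that estimates motion between two key frames and places averaged blocks halfway along the motion. Block size, search range, grayscale input, and the simple full search are assumptions; practical DVC systems refine this considerably.

```python
# Minimal motion-compensated interpolation for side information (sketch).
import numpy as np

def mci_side_information(prev_key, next_key, block=16, search=8):
    """Grayscale key frames of shape (H, W); returns an interpolated middle frame."""
    h, w = prev_key.shape
    prev_f = prev_key.astype(np.float64)
    next_f = next_key.astype(np.float64)
    side = np.zeros((h, w), dtype=np.float64)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev_f[by:by + block, bx:bx + block]
            best_sad, best_dy, best_dx = np.inf, 0, 0
            # full-search block matching in the next key frame
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(next_f[y:y + block, x:x + block] - ref).sum()
                        if sad < best_sad:
                            best_sad, best_dy, best_dx = sad, dy, dx
            # average the matched blocks and place the result halfway along the motion
            matched = next_f[by + best_dy:by + best_dy + block,
                             bx + best_dx:bx + best_dx + block]
            hy = min(max(by + best_dy // 2, 0), h - block)
            hx = min(max(bx + best_dx // 2, 0), w - block)
            side[hy:hy + block, hx:hx + block] = 0.5 * (ref + matched)
    return side
```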

Real-time Style Transfer for Video (실시간 비디오 스타일 전이 기법에 관한 연구)

  • Seo, Sang Hyun
    • Smart Media Journal
    • /
    • v.5 no.4
    • /
    • pp.63-68
    • /
    • 2016
  • Texture transfer is a method for transferring the texture of an input image onto a target image, and it is also used to transfer the artistic style of the input image. This study presents a real-time texture transfer for generating artistically styled video. To improve performance, this paper proposes a parallel framework on the GPU using the T-shaped kernel employed in general texture transfer. To accelerate the motion computation required for maintaining temporal coherence, a multi-scale motion field is also computed in parallel. Through this approach, artistic texture transfer for video is achieved with real-time performance.
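The multi-scale motion-field idea can be illustrated with a dense flow computed on downscaled frames and upsampled back, trading accuracy for the speed needed to keep temporal coherence in real time. The scale factor and Farneback parameters below are assumptions; the paper's GPU pipeline is not reproduced here.

```python
# Coarse-to-fine motion field: compute flow at low resolution, upsample, rescale vectors.
import cv2

def coarse_motion_field(prev_gray, curr_gray, scale=0.25):
    small_prev = cv2.resize(prev_gray, None, fx=scale, fy=scale)
    small_curr = cv2.resize(curr_gray, None, fx=scale, fy=scale)
    flow = cv2.calcOpticalFlowFarneback(small_prev, small_curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    flow = cv2.resize(flow, (w, h)) / scale   # upsample and rescale the vectors
    return flow
```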

MOPSO-based Data Scheduling Scheme for P2P Streaming Systems

  • Liu, Pingshan;Fan, Yaqing;Xiong, Xiaoyi;Wen, Yimin;Lu, Dianjie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.10
    • /
    • pp.5013-5034
    • /
    • 2019
  • In Peer-to-Peer (P2P) streaming systems, peers randomly form a network overlay to share video resources according to a data scheduling scheme. The data scheduling scheme can have a great impact on system performance and should ideally achieve two optimization objectives at the same time: improving the perceived video quality and maximizing the network throughput. Maximizing network throughput means improving the utilization of peers' upload bandwidth; however, maximizing network throughput tends to reduce the perceived video quality, and vice versa. Therefore, to achieve both objectives simultaneously, we propose a new data scheduling scheme based on multi-objective particle swarm optimization, called the MOPSO-DS scheme. To design the MOPSO-DS scheme, we first formulate data scheduling as a multi-objective optimization problem. Then, a multi-objective particle swarm optimization algorithm is proposed in which the neighbors of a peer are encoded as the position vector of a particle. Through extensive simulations, we demonstrate that the MOPSO-DS scheme improves system performance effectively.
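A generic skeleton of the optimization machinery this builds on is shown below: a multi-objective PSO with a simple Pareto archive over two maximized objectives. The toy objectives, the continuous position encoding, and all parameters are assumptions; the actual MOPSO-DS encoding of peer neighbours is not reproduced here.

```python
# Generic multi-objective PSO with a Pareto archive (illustrative skeleton).
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def mopso(objectives, dim, n_particles=20, iters=100, bounds=(0.0, 1.0)):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objectives(p) for p in pos]
    archive = []                                  # non-dominated solutions found so far
    for _ in range(iters):
        for i in range(n_particles):
            val = objectives(pos[i])
            if dominates(val, pbest_val[i]):
                pbest[i], pbest_val[i] = pos[i][:], val
            if not any(dominates(v, val) for _, v in archive):
                archive = [(p, v) for p, v in archive if not dominates(val, v)]
                archive.append((pos[i][:], val))
        for i in range(n_particles):
            leader = random.choice(archive)[0]    # pick a guide from the archive
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (leader[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
    return archive

# Toy stand-in objectives: "video quality" and "throughput", both to be maximized.
front = mopso(lambda x: (sum(x) / len(x), 1.0 - max(x)), dim=5)
print(len(front), "non-dominated schedules found")
```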

Scene-based MPEG video traffic modeling considering the correlations between frames (프레임간 상관관계를 고려한 장면기반 MPEG 비디오 트래픽 모델링)

  • 유상조;김성대;최재각
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.23 no.9A
    • /
    • pp.2289-2304
    • /
    • 1998
  • For the performance analysis and traffic control of ATM networks carrying video sequences, an appropriate video traffic model is needed. In this paper, we propose a new traffic model for MPEG compressed video, which is widely used for many types of video applications at the moment. The proposed modeling scheme uses scene-based traffic characteristics and considers the correlation between frames of consecutive GOPs. Using a simple scene detection algorithm, scene changes are modeled by state transitions, and the number of GOPs in a scene state is modeled by a geometric distribution. Frames in a scene state are modeled by the mean I, P, and B frame sizes. For more accurate traffic modeling, the quantization errors (residual bits) left by the mean-based state transition model are compensated by autoregressive processes. We show that our model captures the traffic characteristics of the original videos very well through performance analysis in terms of autocorrelation, the histogram of frame bits generated by the model, and the cell loss rate in an ATM multiplexer with limited buffers. Our model can perform translations between levels (i.e., GOP, frame, and cell levels) and estimate the stochastic characteristics of the original videos very accurately at each level.
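A simplified generator following the structure described above: a Markov chain over scene states, a geometric number of GOPs per scene, mean I/P/B frame sizes per state, and an AR(1) residual on top. Every numeric value below is an assumed placeholder, not a parameter fitted in the paper.

```python
# Scene-based MPEG frame-size trace generator (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)

TRANSITION = np.array([[0.8, 0.2],              # scene-state transition matrix
                       [0.3, 0.7]])
GOP_P = [0.4, 0.25]                             # geometric parameter per state
MEAN_SIZE = [{"I": 60000, "P": 25000, "B": 12000},   # mean frame bits per state
             {"I": 90000, "P": 40000, "B": 20000}]
GOP_PATTERN = "IBBPBBPBBPBB"
AR_COEF, AR_STD = 0.6, 2000.0                   # AR(1) residual model

def generate(num_scenes=50):
    sizes, state, resid = [], 0, 0.0
    for _ in range(num_scenes):
        for _ in range(rng.geometric(GOP_P[state])):          # GOPs in this scene
            for ftype in GOP_PATTERN:
                resid = AR_COEF * resid + rng.normal(0.0, AR_STD)
                sizes.append(max(MEAN_SIZE[state][ftype] + resid, 0.0))
        state = rng.choice(2, p=TRANSITION[state])             # next scene state
    return np.array(sizes)

trace = generate()
print(f"{trace.size} frames, mean {trace.mean():.0f} bits/frame")
```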
