• Title/Summary/Keyword: video-surveillance


A Multi-Channel Trick Mode Play Algorithm and Hardware Implementation of H.264/AVC for Surveillance Applications (H.264/AVC 감시 어플리케이션용 멀티 채널 트릭 모드 재생 알고리즘 및 하드웨어 구현)

  • Jo, Hyeonsu;Hong, Youpyo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.12
    • /
    • pp.1834-1843
    • /
    • 2016
  • DVRs are the most common devices for recording and displaying surveillance video. Video compression plays a key role in DVRs by saving storage, and the H.264/AVC video compression standard has recently become the dominant choice for DVRs. DVRs require various display modes, such as fast-forward, backward play, and pause; these are called trick modes. Implementing precise trick mode play requires either a very high decoding capability or a very intelligent scheme to handle the high computational complexity. The complexity increases in many surveillance applications where more than one camera is used to monitor multiple spots or to monitor the same area from various angles. This paper presents an implementation of trick mode play and a frame buffer management scheme for a multi-channel hardware-based H.264/AVC codec. The experimental results show that exact trick mode play is possible using a standard H.264/AVC video codec with a keyframe encoding feature, at the expense of an increase in bitstream size.
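A keyframe-only decode schedule is the core idea behind such exact trick-mode play. The sketch below (the frame-record layout and function names are illustrative, not from the paper) picks every N-th keyframe for fast-forward and reverses the list for backward play:

```python
# Sketch of keyframe-based trick-mode play; hypothetical data layout.
def trick_play_schedule(frames, speed):
    """Return display indices for |speed|x playback.

    frames: list of (index, frame_type) tuples, frame_type in {"I", "P", "B"}.
    Trick play decodes only keyframes ("I") and steps over them; a
    negative speed reverses the order (backward play).
    """
    keyframes = [i for i, t in frames if t == "I"]
    step = abs(int(speed))
    schedule = keyframes[::step] if step > 1 else [i for i, _ in frames]
    return schedule[::-1] if speed < 0 else schedule

# A 16-frame sequence with a keyframe every 4 frames.
gop = [(i, "I" if i % 4 == 0 else "P") for i in range(16)]
print(trick_play_schedule(gop, 2))   # every 2nd keyframe: [0, 8]
print(trick_play_schedule(gop, -2))  # same, reversed: [8, 0]
```

Encoding more keyframes makes the schedule denser (smoother trick play) but grows the bitstream, which matches the trade-off the paper reports.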

A Novel Video Stitching Method for Multi-Camera Surveillance Systems

  • Yin, Xiaoqing;Li, Weili;Wang, Bin;Liu, Yu;Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.10
    • /
    • pp.3538-3556
    • /
    • 2014
  • This paper proposes a novel video stitching method that improves the real-time performance and visual quality of a multi-camera video surveillance system. A two-stage seam-searching algorithm based on enhanced dynamic programming is proposed; it obtains satisfactory results and achieves better real-time performance than traditional seam-searching methods. The experiments show that the proposed algorithm reduces computing time by 66.4% compared with enhanced dynamic programming alone, while maintaining seam-searching accuracy. A real-time local update scheme reduces the deformation effect caused by moving objects passing through the seam, and a seam-based local color transfer model is constructed and applied to achieve a smooth transition in the overlapped area, outperforming traditional pixel blending methods. The effectiveness of the proposed method is demonstrated in the experiments.
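The seam search underlying such stitching can be sketched with classic dynamic programming over a per-pixel difference (energy) map of the overlap region. The helper below is a minimal single-pass baseline, not the paper's two-stage algorithm:

```python
def find_seam(energy):
    """Dynamic-programming search for a minimum-cost vertical seam.

    energy: 2-D list (rows x cols) of per-pixel difference costs in the
    overlap region. Returns one column index per row: the low-cost
    stitching line between the two camera images.
    """
    rows, cols = len(energy), len(energy[0])
    cost = [energy[0][:]]
    for r in range(1, rows):
        prev = cost[-1]
        row = []
        for c in range(cols):
            # A seam may move at most one column per row.
            best = min(prev[max(c - 1, 0):min(c + 2, cols)])
            row.append(energy[r][c] + best)
        cost.append(row)
    # Backtrack from the cheapest bottom cell.
    seam = [cost[-1].index(min(cost[-1]))]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo = max(c - 1, 0)
        window = cost[r][lo:min(c + 2, cols)]
        seam.append(lo + window.index(min(window)))
    return seam[::-1]

print(find_seam([[5, 1, 5], [5, 1, 5], [5, 1, 5]]))  # [1, 1, 1]
```

The full cost table is quadratic in the overlap size; the paper's two-stage scheme cuts this work down while keeping the same seam accuracy.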

On-line Background Extraction in Video Image Using Vector Median (벡터 미디언을 이용한 비디오 영상의 온라인 배경 추출)

  • Kim, Joon-Cheol;Park, Eun-Jong;Lee, Joon-Whoan
    • The KIPS Transactions:PartB
    • /
    • v.13B no.5 s.108
    • /
    • pp.515-524
    • /
    • 2006
  • Background extraction is an important technique for finding moving objects in a video surveillance system. This paper proposes a new on-line background extraction method for color video using vector order statistics. In the proposed method, exploiting the fact that the background occurs more frequently than objects, the vector median of the color pixels in consecutive frames is treated as the background at that position. The objects in the current frame then consist of the set of pixels whose distance from the background pixel exceeds a threshold. To evaluate its performance, the proposed method is compared with on-line multiple background extraction based on the Gaussian mixture model (GMM). The results show that its performance is similar or superior to that of the GMM-based method.
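The vector-median idea can be sketched directly: at each pixel position, the background estimate is the color sample minimizing the summed distance to all other samples, and foreground pixels are those far from it. A minimal illustration (the sample values and threshold are made up):

```python
def vector_median(pixels):
    """Vector median of RGB samples at one pixel position.

    The vector median is the sample whose summed Euclidean distance to
    all other samples is smallest; since the background is visible in
    most frames, it is a background estimate robust to passing objects.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(pixels, key=lambda p: sum(dist(p, q) for q in pixels))

def is_foreground(pixel, background, threshold):
    """A pixel far from the background estimate belongs to an object."""
    return sum((x - y) ** 2 for x, y in zip(pixel, background)) ** 0.5 > threshold

# Four frames at one position; one red object passes by in frame 3.
samples = [(10, 10, 10), (12, 11, 10), (200, 40, 40), (11, 10, 12)]
bg = vector_median(samples)
print(bg)  # one of the three similar background samples
```

Unlike a per-channel median, the vector median always returns an actual observed color, which avoids inventing colors that never occurred at that pixel.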

A Shadow Region Suppression Method using Intensity Projection and Converting Energy to Improve the Performance of Probabilistic Background Subtraction (확률기반 배경제거 기법의 향상을 위한 밝기 사영 및 변환에너지 기반 그림자 영역 제거 방법)

  • Hwang, Soon-Min;Kang, Dong-Joong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.1
    • /
    • pp.69-76
    • /
    • 2010
  • The segmentation of moving objects in a video sequence is a core technique of intelligent image processing systems such as video surveillance, traffic monitoring, and human tracking. A typical method to segment a moving region from the background is background subtraction. The steps of background subtraction involve calculating a reference image, subtracting the new frame from the reference image, and then thresholding the subtracted result. A well-known background model is the Gaussian mixture model (GMM). Although the method is known to be efficient and accurate, GMM suffers from the problem of including false pixels, specifically shadow pixels, in the ROI (region of interest). These false pixels cause post-processing tasks such as tracking and object recognition to fail. This paper presents a method for removing false pixels included in the ROI. First, we subdivide the ROI using the shape characteristics of the detected objects. Then, a method is proposed to classify pixels, using histogram characteristics and the difference in energy obtained by converting the color value of a pixel into a grayscale value, in order to estimate whether the pixels belong to the moving object area or the shadow area. The method is applied to real video sequences and its performance is verified.
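The background subtraction steps described above, plus a much simpler shadow test than the paper's projection/energy method, can be sketched per pixel as follows (the ratio-based shadow check is a common baseline; all thresholds are illustrative):

```python
def update_background(bg, frame, alpha=0.05):
    """Step 1: maintain the reference image as an exponential running average."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def classify(bg, frame, threshold=25, lo=0.4, hi=0.9):
    """Steps 2-3 plus shadow suppression: label each pixel.

    A shadow darkens the surface it falls on, so the frame-to-background
    intensity ratio stays in a band below 1; a genuine object breaks it.
    """
    labels = []
    for b, f in zip(bg, frame):
        if abs(f - b) <= threshold:
            labels.append("background")
        elif b > 0 and lo < f / b < hi:
            labels.append("shadow")
        else:
            labels.append("object")
    return labels

# Flattened grayscale values: unchanged, shadowed, bright object, dark object.
bg = [100, 100, 100, 100]
frame = [102, 60, 230, 10]
print(classify(bg, frame))  # ['background', 'shadow', 'object', 'object']
```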

Violent crowd flow detection from surveillance cameras using deep transfer learning-gated recurrent unit

  • Elly Matul Imah;Riskyana Dewi Intan Puspitasari
    • ETRI Journal
    • /
    • v.46 no.4
    • /
    • pp.671-682
    • /
    • 2024
  • Violence can be committed anywhere, even in crowded places. It is hence necessary to monitor human activities for public safety. Surveillance cameras can monitor surrounding activities but require human assistance to continuously monitor every incident. Automatic violence detection is needed for early warning and fast response. However, such automation is still challenging because of low video resolution and blind spots. This paper uses ResNet50v2 and the gated recurrent unit (GRU) algorithm to detect violence in the Movies, Hockey, and Crowd video datasets. Spatial features were extracted from each frame sequence of the video using a pretrained model from ResNet50V2, which was then classified using the optimal trained model on the GRU architecture. The experimental results were then compared with wavelet feature extraction methods and classification models, such as the convolutional neural network and long short-term memory. The results show that the proposed combination of ResNet50V2 and GRU is robust and delivers the best performance in terms of accuracy, recall, precision, and F1-score. The use of ResNet50V2 for feature extraction can improve model performance.
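The GRU that classifies the per-frame ResNet50V2 features can be illustrated by writing out a single update step from the standard gate equations (toy dimensions and random weights here; the real feature vectors would be 2048-dimensional and the weights learned, and the ResNet50V2 extractor and classification head are omitted):

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One gated-recurrent-unit step.

    z: update gate, r: reset gate, h_tilde: candidate hidden state.
    The new state blends the old state and the candidate via z.
    """
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)
    r = sigmoid(Wr @ x + Ur @ h)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 4, 3  # toy sizes standing in for 2048-d frame features
W = [rng.standard_normal((d_h, d_in)) * 0.1 for _ in range(3)]
U = [rng.standard_normal((d_h, d_h)) * 0.1 for _ in range(3)]
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):  # 5 "frame features" in sequence
    h = gru_step(x, h, W[0], U[0], W[1], U[1], W[2], U[2])
print(h.shape)  # final hidden state summarizes the clip
```

The final hidden state would then feed a small dense layer that outputs the violence/non-violence score.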

Implementation of Real-time Video Surveillance System based on Multi-Screen in Mobile-phone Environment (스마트폰 환경에서의 멀티스크린 기반의 실시간 비디오 감시 시스템 개발)

  • Kim, Dae-Jin
    • Journal of Digital Contents Society
    • /
    • v.18 no.6
    • /
    • pp.1009-1015
    • /
    • 2017
  • Recently, video surveillance has become increasingly common as many cameras are installed for crime prevention, counter-terrorism, traffic, and security, and systems that control these cameras are becoming widespread. Video input from the installed cameras is monitored on a multiscreen at the central control center, and simultaneous real-time multiscreen monitoring is essential to respond quickly to situations or dangers. However, multiscreen monitoring in a mobile environment such as a smartphone is limited by hardware specifications and network bandwidth. To resolve these problems, this paper proposes a system that can monitor a multiscreen in real time in a mobile-phone environment. By reconstructing the desired multiscreen through transcoding, the video streams of multiple cameras can be monitored continuously, with the added advantage of mobility.
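The server-side composition of several camera streams into one multiscreen can be sketched as tiling downscaled frames into a mosaic; simple striding stands in for real transcoding here, and the frame sizes are illustrative:

```python
import numpy as np

def make_multiscreen(frames, grid):
    """Compose camera frames into one multiscreen mosaic.

    frames: list of HxWx3 arrays (all the same size); grid: (rows, cols).
    Each frame is downscaled by striding -- a stand-in for the server-side
    transcoding done before streaming a single feed to the phone.
    """
    rows, cols = grid
    h, w, _ = frames[0].shape
    th, tw = h // rows, w // cols
    canvas = np.zeros((h, w, 3), dtype=frames[0].dtype)
    for i, f in enumerate(frames[: rows * cols]):
        r, c = divmod(i, cols)
        tile = f[::rows, ::cols][:th, :tw]  # crude downscale to tile size
        canvas[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tile
    return canvas

# Four solid-color "camera frames" arranged in a 2x2 multiscreen.
frames = [np.full((120, 160, 3), v, np.uint8) for v in (50, 100, 150, 200)]
mosaic = make_multiscreen(frames, (2, 2))
print(mosaic.shape)  # (120, 160, 3) -- one stream, four views
```

Sending the single composed stream keeps bandwidth at one feed's worth regardless of how many cameras the operator is watching.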

Application of Police Video Equipment for Fighting Crime and Legal Trends (범죄 대응을 위한 경찰 영상장비의 활용과 법 동향)

  • Lee, Hoon;Lee, Won-Sang
    • Informatization Policy
    • /
    • v.25 no.2
    • /
    • pp.3-19
    • /
    • 2018
  • With the introduction of video cameras into law enforcement, many police organizations have adopted the technology in their routine crime prevention activities. Up-to-date systems of ambient surveillance energized by CCTV, police wearable cameras, drones, and thermal imaging devices enable the police to thoroughly monitor public spaces as well as to rigorously arrest on-scene criminals. These efforts to improve the level of surveillance are often met with public resistance, raising concerns over citizens' rights to privacy. Recent studies on the use of police video equipment have constantly raised issues related to the lack of applicable legal provisions, the risk of personal information and privacy infringement, and security vulnerabilities. In this regard, the present study reviews the public surveillance methods currently used by law enforcement agencies worldwide within the context of public safety and individual rights to privacy. Furthermore, it discusses the legal boundaries of police use of video equipment to address public concerns over privacy issues.

Aerial Video Summarization Approach based on Sensor Operation Mode for Real-time Context Recognition (실시간 상황 인식을 위한 센서 운용 모드 기반 항공 영상 요약 기법)

  • Lee, Jun-Pyo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.6
    • /
    • pp.87-97
    • /
    • 2015
  • Aerial video summarization is not only the key to effectively browsing video within a limited time, but also a cue for efficiently aggregating the situational awareness acquired by an unmanned aerial vehicle. Unlike previous works, we utilize the sensor operation mode of the unmanned aerial vehicle (global, local, and focused surveillance modes) to accurately summarize aerial video in consideration of flight and surveillance/reconnaissance environments. In focused mode, we propose a moving-react tracking method that utilizes partitioned motion vectors and a spatiotemporal saliency map to detect and track the moving object of interest continuously. In our simulation results, key frames are correctly detected for aerial video summarization according to the sensor operation mode of the aerial vehicle, and finally we verify the efficiency of the video summarization using the proposed method.
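A mode-dependent summarization policy can be sketched by varying the key-frame threshold with the sensor operation mode. The mode names come from the paper, but the thresholds and the frame-difference measure below are assumptions, not the paper's method:

```python
# Illustrative thresholds: focused surveillance keeps the most detail.
MODE_THRESHOLD = {"global": 30.0, "local": 15.0, "focused": 5.0}

def summarize(frames, mode):
    """Keep a frame as a key frame when it differs enough from the last
    kept one; the focused mode uses the lowest threshold, so more of the
    tracked object's motion survives in the summary."""
    def diff(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    thr = MODE_THRESHOLD[mode]
    keys = [0]
    for i in range(1, len(frames)):
        if diff(frames[i], frames[keys[-1]]) > thr:
            keys.append(i)
    return keys

# Flattened grayscale frames: small change, big change, near-duplicate.
frames = [[0] * 4, [10] * 4, [40] * 4, [41] * 4]
print(summarize(frames, "focused"))  # [0, 1, 2]
print(summarize(frames, "global"))   # [0, 2]
```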

Concept Design for the Intelligent Surveillance System for Urban Transit (도시철도 지능형 종합감시시스템 개념설계)

  • An, Tae-Ki;Shin, Jeong-Ryol;Lee, Woo-Dong;Han, Seok-Yoon
    • Proceedings of the KSR Conference
    • /
    • 2008.06a
    • /
    • pp.653-658
    • /
    • 2008
  • Service areas in urban transit need an intelligent integrated surveillance system, because they are public places where many people gather at one time. In the past, analogue closed-circuit televisions and analogue video recorders were used to build surveillance systems. Now, large parts of these image-centric analogue systems have been digitalized and replaced by more complex systems consisting of sensors as well as images. The surveillance system was formerly used as an inspection device to examine the scene after an event had occurred, but with the high level of today's computer and communication technologies, the digitalized data allow intelligent systems to prevent accidents through various analysis techniques. The data could also be used to decide surveillance policies and to provide information for safety and management policies as well. In this paper, we define the intelligent surveillance system, suggest its major functions, describe the fundamental functions that every part should have, and outline the way to develop the system.


An Efficient Implementation of Key Frame Extraction and Sharing in Android for Wireless Video Sensor Network

  • Kim, Kang-Wook
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.9
    • /
    • pp.3357-3376
    • /
    • 2015
  • Wireless sensor networks are an important research topic that has attracted a lot of attention in recent years. However, most of the interest has focused on wireless sensor networks that gather scalar data such as temperature, humidity, and vibration. Scalar data are insufficient for diverse applications such as video surveillance, target recognition, and traffic monitoring. If camera sensors are instead used in a wireless sensor network to collect information-rich video data, they can provide important visual information, and video sensor networks have accordingly gained interest in the past few years for a wide range of applications. However, how to efficiently store the massive data that reflect the environmental state at different times in a video sensor network, and how to quickly search them for information of interest, are challenging issues in current research, especially when the sensor network environment is complicated. Therefore, in this paper, we propose a fast algorithm for extracting key frames from video and describe the design and implementation of key frame extraction and sharing in Android for a wireless video sensor network.
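A fast key-frame extractor of the kind described can be sketched with normalized intensity histograms and an L1 distance to the last kept frame; the bin count and threshold are illustrative, not the paper's values:

```python
def histogram(frame, bins=8, max_val=256):
    """Grayscale intensity histogram, normalised to sum to 1."""
    h = [0] * bins
    for p in frame:
        h[p * bins // max_val] += 1
    n = len(frame)
    return [c / n for c in h]

def key_frames(frames, threshold=0.5):
    """Keep frames whose histogram moves far (L1 distance) from the
    last key frame; near-duplicate frames are dropped, so only the
    key frames need to be stored or shared over the network."""
    keys = [0]
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        h = histogram(frames[i])
        if sum(abs(a - b) for a, b in zip(h, prev)) > threshold:
            keys.append(i)
            prev = h
    return keys

# Two near-identical dark frames, then a bright scene change.
print(key_frames([[10] * 8, [12] * 8, [200] * 8]))  # [0, 2]
```

Histogram comparison is insensitive to small camera jitter, which makes it a cheap first pass before any heavier per-frame analysis.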