• Title/Summary/Keyword: Real-time video analysis


Implementation of Video Surveillance System with Motion Detection based on Network Camera Facilities (움직임 감지를 이용한 네트워크 카메라 기반 영상보안 시스템 구현)

  • Lee, Kyu-Woong
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.1 / pp.169-177 / 2014
  • Since DVR and NVR storage were adopted in real-time visual surveillance systems, it has become essential to support image and video analysis technology such as motion detection. The network camera in particular has become a popular video input device, and traditional CCTV based on analog video is being replaced by network cameras. In this paper, we present the design and implementation of a video surveillance system in which the video storage server provides real-time motion detection. A mobile application has also been implemented to provide retrieval of the image analysis results. We develop the video analysis server with the open-source library OpenCV and implement a daemon process for video input handling and real-time image analysis in our video surveillance system.
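
The abstract names OpenCV and a daemon process but gives no implementation detail, so the following is only a minimal Python sketch of background-subtraction motion detection on a network-camera stream; the RTSP URL, the `min_area` threshold, and the `yield`-based hand-off to a storage/alert layer are illustrative assumptions, not the paper's code.

```python
import cv2

def detect_motion(stream_url="rtsp://camera.example/stream", min_area=500):
    """Yield (frame, moving_contours) whenever large foreground regions appear."""
    cap = cv2.VideoCapture(stream_url)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)            # foreground mask
        mask = cv2.medianBlur(mask, 5)            # suppress sensor noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        moving = [c for c in contours if cv2.contourArea(c) > min_area]
        if moving:
            yield frame, moving                   # hand off to storage / alert layer
    cap.release()
```

A daemon process of the kind described in the paper could consume this generator and write flagged frames to the video storage server.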

Analysis of Satisfaction of Elementary School Students and Teachers for Software Practice Education in Real-Time Video Classes (실시간 화상 수업 환경에서 소프트웨어 실습 교육에 대한 초등학생 및 교사의 만족도 분석)

  • Kang, Doobong;Park, Hansuk
    • Journal of The Korean Association of Information Education / v.25 no.5 / pp.825-834 / 2021
  • This study analyzed learner satisfaction and conducted in-depth interviews with teachers after running a software practice curriculum as a real-time video class for fifth and sixth graders in elementary school. Learner presence, overall class experience, and interaction showed a fairly high positive correlation with satisfaction with real-time video classes. Some differences were observed according to the real-time video class participation environment, but they were not statistically significant. Teachers found it difficult to respond to problems arising in each student's individual environment, to manage interaction between students, and to give individual feedback to struggling students. To address these problems, they suggested preliminary guidance on and verification of the class connection environment, support for errors and for struggling students' individual participation environments, and feedback on individual tasks through an LMS.

A study on the improvement of non-face-to-face environment video lectures using IPA (IPA를 활용한 비대면 환경 화상강의 개선 방안 연구)

  • Kwon, Youngae;Park, Hyejin
    • Journal of Korea Society of Digital Industry and Information Management / v.17 no.3 / pp.121-132 / 2021
  • The purpose of this study is to explore ways to improve the quality of real-time video lectures in a non-face-to-face environment using IPA (Importance-Performance Analysis). With universities moving all classes online due to COVID-19, research on learner perception is needed. Factor analysis, mean analysis, correspondence analysis, and IPA were performed on data from 632 students of K University in Chungbuk who responded between March 21 and June 30, 2021. First, overall satisfaction was low compared to importance, with the largest gap in the perception of the system. Second, the IPA matrix of learner perceptions of real-time video lectures showed that system errors and screen cutoff had the largest gaps. Third, items such as the difficulty of lecture content and feedback on tasks and tests were classified. The low satisfaction with real-time video lectures in non-face-to-face environments suggests that school-level support for quality improvement and a clear instructor role are needed to raise learner satisfaction and academic achievement.
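
For readers unfamiliar with the method, the sketch below shows the basic quadrant classification behind an IPA matrix, assuming each survey item has mean importance and performance scores; the item names and numbers are invented for illustration and are not the study's data.

```python
import pandas as pd

def ipa_quadrants(df):
    """Classify each item into an IPA quadrant, using the grand means as crosshairs."""
    imp_mean, perf_mean = df["importance"].mean(), df["performance"].mean()
    def quadrant(row):
        hi_imp = row["importance"] >= imp_mean
        hi_perf = row["performance"] >= perf_mean
        if hi_imp and hi_perf:
            return "Keep up the good work"
        if hi_imp and not hi_perf:
            return "Concentrate here"        # high importance, low performance
        if not hi_imp and hi_perf:
            return "Possible overkill"
        return "Low priority"
    return df.assign(quadrant=df.apply(quadrant, axis=1))

items = pd.DataFrame({
    "item": ["system stability", "screen quality", "task feedback"],  # example items
    "importance": [4.6, 4.2, 4.0],
    "performance": [3.1, 3.8, 3.5],
})
print(ipa_quadrants(items))
```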

Real-time video Surveillance System Design Proposal Using Abnormal Behavior Recognition Technology

  • Lee, Jiyoo;Shin, Seung-Jung
    • International journal of advanced smart convergence / v.9 no.4 / pp.120-123 / 2020
  • A surveillance system that prevents crime and accidents in advance has become a necessity in everyday life, not an option. Not only public institutions but also individuals install surveillance cameras to protect their property and privacy. However, since installed cameras cannot be watched 24 hours a day, existing technology focuses on tracing video after an incident occurs rather than on prevention. In this paper, we propose a system model that monitors real-time video for abnormal behaviors that may lead to crime; when a specific behavior occurs, the surveillance system automatically detects it and responds immediately through an alarm. In the proposed model, the analysis server analyzes real-time images from surveillance cameras with an I3D model to detect abnormal behavior and delivers notifications to the web server and then to clients. If the system is implemented with the proposed model, an immediate response can be expected when a crime occurs.
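
As a rough illustration of the alert path described above, the sketch below has an analysis server score a clip for abnormal behavior and push a notification to a web server; `classify_clip` is a placeholder where an I3D model would run, and the endpoint URL, threshold, and payload fields are assumptions rather than the authors' design.

```python
import requests

ALERT_ENDPOINT = "http://webserver.example/api/alerts"     # assumed web-server endpoint

def classify_clip(clip_path):
    """Placeholder for I3D inference; returns (behavior label, confidence)."""
    return "fight", 0.91

def monitor(clip_path, camera_id, threshold=0.8):
    label, score = classify_clip(clip_path)
    if score >= threshold:                                  # abnormal behavior detected
        requests.post(ALERT_ENDPOINT, json={
            "camera": camera_id, "behavior": label, "confidence": score,
        }, timeout=5)                                       # web server relays to clients

monitor("cam03_clip.mp4", camera_id="cam03")
```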

A Study on Real-Time Position Analysis and Wireless Transmission Technology for Effective Acquisition of Video Recording Information in UAV Video Surveillance (유효영상 획득을 위한 무인기 영상감시의 실시간 위치분석과 무선전송 기술에 관한 연구)

  • Kim, Hwan-Chul;Lee, Chang-Seok;Choi, Jeong-Hun
    • Journal of Korea Multimedia Society / v.18 no.9 / pp.1047-1057 / 2015
  • In this paper, we propose an effective wireless transmission technology that can transmit high-quality video recording information and surveillance data over the poor wireless channels caused by high-speed flight, using various wireless networking architectures such as One-on-One, Many-on-One, One-on-Many, and Over the Horizon. A Real-Time Position Analysis (RAPA) method is also suggested to provide more meaningful video information about the shooting area. The proposed wireless transmission technology and RAPA allow remote control of the UAV's flight route to obtain valuable topographic information. Because video information and GPS data of the shooting area are obtained simultaneously, the results of this study can be applied to various application areas, including UAVs that require high-speed wireless transmission.
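
The abstract gives no protocol details, so the only part that can be sketched safely is the pairing of video frames with simultaneous GPS fixes mentioned above; `read_gps_fix` is a hypothetical stand-in for the UAV's telemetry interface, and the coordinates are dummy values.

```python
import time

def read_gps_fix():
    """Hypothetical telemetry read; a real UAV would expose this via its flight controller."""
    return {"lat": 37.5665, "lon": 126.9780, "alt_m": 120.0}

def tag_frames(frame_source):
    """Attach a timestamp and the current GPS fix to every captured frame."""
    for frame in frame_source:
        yield {"timestamp": time.time(), "gps": read_gps_fix(), "frame": frame}
```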

The Study on the Development of the Realtime HD(High Definition) Level Video Streaming Transmitter Supporting the Multi-platform (다중 플랫폼 지원 실시간 HD급 영상 전송기 개발에 관한 연구)

  • Lee, JaeHee;Seo, ChangJin
    • The Transactions of the Korean Institute of Electrical Engineers P / v.65 no.4 / pp.326-334 / 2016
  • In this paper, we develop and implement a real-time HD-level video streaming transmitter that, unlike existing live streaming transmitters, operates across multiple platforms in any network and client environment. We design the transmitter around TI's TMS320DM386 video processor, port Linux kernel 2.6.29, and implement RTSP (Real Time Streaming Protocol)/RTP (Real-time Transport Protocol), HLS (HTTP Live Streaming), and RTMP (Real Time Messaging Protocol) so that the streaming protocols of multi-platform receiving devices (smartphones, tablet PCs, notebooks, etc.) are supported. To verify the performance of the developed transmitter, we build a test environment using a notebook, an iPad, and an Android phone, and analyze the received video on each client display. The measured performance of the developed real-time HD video streaming transmitter is higher than that of existing products.
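
The transmitter itself is firmware on a TI video processor, but the multi-protocol idea can be sketched on a PC with ffmpeg standing in for the device: one process pushes the camera feed over RTMP and another packages it as HLS, while RTSP serving (also listed above) would additionally need a media server and is omitted here. The source URL, ingest URL, and output path are assumptions.

```python
import subprocess

SOURCE = "rtsp://camera.example/stream"        # assumed network-camera feed
RTMP_URL = "rtmp://media.example/live/cam1"    # hypothetical RTMP ingest point

def push_rtmp():
    """Transcode the feed to H.264 and push it over RTMP."""
    return subprocess.Popen([
        "ffmpeg", "-i", SOURCE,
        "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
        "-f", "flv", RTMP_URL,
    ])

def package_hls():
    """Package the same feed as an HLS playlist for iOS/Android/notebook clients."""
    return subprocess.Popen([
        "ffmpeg", "-i", SOURCE,
        "-c:v", "libx264", "-preset", "veryfast",
        "-f", "hls", "-hls_time", "4", "-hls_list_size", "5",
        "/var/www/hls/cam1.m3u8",
    ])

if __name__ == "__main__":
    procs = [push_rtmp(), package_hls()]       # each opens its own session to the camera
    for p in procs:
        p.wait()
```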

Sub-Frame Analysis-based Object Detection for Real-Time Video Surveillance

  • Jang, Bum-Suk;Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication / v.11 no.4 / pp.76-85 / 2019
  • We introduce a vision-based object detection method for real-time video surveillance in low-end edge computing environments. Recently, object detection accuracy has improved thanks to deep learning approaches such as the Region-based Convolutional Neural Network (R-CNN), which uses a two-stage inference pipeline. One-stage detection algorithms such as the Single-Shot Detector (SSD) and You Only Look Once (YOLO) trade some accuracy for speed and can be used in real-time systems. However, high-performance hardware such as general-purpose computing on graphics processing units (GPGPU) is still required to achieve excellent detection performance and speed. To address this hardware requirement, which is burdensome in low-end edge computing environments, we propose a sub-frame analysis method for object detection. Specifically, we divide a whole image frame into smaller sub-frames and run inference on them with a Convolutional Neural Network (CNN) based detection network, which is much faster than a conventional network designed for full-frame images. The proposed method significantly reduces the computational requirement without losing throughput or object detection accuracy.
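
As a concrete reading of the sub-frame idea, the sketch below splits a frame into a tile grid and runs a detector on each tile, mapping the boxes back to full-frame coordinates; `detect` is a placeholder for whatever CNN detector is used, and the 2x2 grid is an arbitrary choice rather than the paper's configuration.

```python
import numpy as np

def iter_tiles(frame, rows=2, cols=2):
    """Yield (x_offset, y_offset, tile) for a rows x cols grid over the frame."""
    h, w = frame.shape[:2]
    th, tw = h // rows, w // cols            # remainder pixels at the edges are dropped
    for r in range(rows):
        for c in range(cols):
            y, x = r * th, c * tw
            yield x, y, frame[y:y + th, x:x + tw]

def detect_on_tiles(frame, detect, rows=2, cols=2):
    """Run detect(tile) -> [(x, y, w, h, label)] per tile; return boxes in frame coordinates."""
    boxes = []
    for x_off, y_off, tile in iter_tiles(frame, rows, cols):
        for (x, y, w, h, label) in detect(tile):
            boxes.append((x + x_off, y + y_off, w, h, label))
    return boxes
```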

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous City (U-City) is a smart, intelligent city that satisfies people's desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Things (IoT) and includes a large number of networked video cameras. Together with sensors, these networked cameras provide the main input data for many U-City services and continuously generate a huge amount of video information, truly big data for the U-City. The U-City usually needs to manipulate this big data in real time, which is far from easy. It is also often necessary to analyze the accumulated video data to detect an event or find a person, which demands considerable computational power and time. Research is under way to reduce the processing time for big video data, and cloud computing is a good way to address this. Among the available cloud computing methodologies, MapReduce is attractive: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, leading to exponential growth in the data produced by networked cameras; video from high-quality cameras is real big data. Video surveillance systems became far more practical once cloud computing methodologies were applied, and they are now spreading widely in U-Cities. However, video data are unstructured, so good research results on analyzing them with MapReduce are hard to find. This paper presents an analysis system for video surveillance: a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for video images, the storage client, and the streaming-IN component. The video monitor consists of a video translator and a protocol manager, and the storage contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The streaming-IN component receives video data from the networked cameras, delivers it to the storage client, and manages network bottlenecks to smooth the data stream. The storage client receives the video data from the streaming-IN component, stores it, and helps other components access the storage. The video monitor streams the video data smoothly and manages the protocols: the video translator sub-component lets users manage the resolution, codec, and frame rate of the video, and the protocol sub-component handles the Real Time Streaming Protocol (RTSP) and Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud storage; Hadoop stores the data in HDFS and provides a platform that processes it with the simple MapReduce programming model. We propose our own methodology for analyzing the video images with MapReduce: the workflow of the video analysis is presented and explained in detail. The performance evaluation showed that the proposed system works well, and the results are presented with analysis. On our cluster, we used compressed 1920×1080 (FHD) video, the H.264 codec, and HDFS as the video storage, and measured the processing time according to the number of frames per mapper. Tracing the optimal split size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
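
The paper's own MapReduce workflow is only described in prose, so the sketch below is a generic Hadoop Streaming job in the same spirit: the mapper receives one frame reference per input line (how frames are keyed in HDFS is an assumption), applies a placeholder per-frame analysis, and the reducer totals detected events per camera. Locally it can be tested with `cat frames.tsv | python mapreduce_frames.py mapper | sort | python mapreduce_frames.py reducer`.

```python
# mapreduce_frames.py -- run as "mapper" or "reducer" under Hadoop Streaming
import sys
from itertools import groupby

def analyze_frame(frame_ref):
    """Placeholder per-frame analysis; a real job would fetch and decode the frame from HDFS."""
    return 1 if "event" in frame_ref else 0

def mapper(lines):
    # Input: "camera_id<TAB>frame_ref" per line; output: "camera_id<TAB>flag".
    for line in lines:
        camera_id, frame_ref = line.rstrip("\n").split("\t", 1)
        print(f"{camera_id}\t{analyze_frame(frame_ref)}")

def reducer(lines):
    # Hadoop sorts mapper output by key, so equal camera_ids arrive contiguously.
    keyed = (line.rstrip("\n").split("\t", 1) for line in lines)
    for camera_id, group in groupby(keyed, key=lambda kv: kv[0]):
        print(f"{camera_id}\t{sum(int(v) for _, v in group)}")

if __name__ == "__main__":
    (mapper if sys.argv[1] == "mapper" else reducer)(sys.stdin)
```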

Real-time Camera and Video Streaming Through Optimized Settings of Ethernet AVB in Vehicle Network System

  • An, Byoungman;Kim, Youngseop
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.8 / pp.3025-3047 / 2021
  • This paper presents the latest Ethernet standardization for in-vehicle networks and future trends in automotive Ethernet technology. The proposed system provides design and optimization algorithms for automotive networking based on AVB (Audio Video Bridging) technology. We present a design of an in-vehicle network system as well as an AVB optimization for automotive use. The proposed Reduced Latency of Machine to Machine (RLMM) scheme plays an outstanding role in reducing latency among devices; applied to real-world experimental cases, it reduces latency by around 41.2%. A setup optimized for the automotive network environment is expected to significantly reduce development and design time. The image transmission latency results are trustworthy because average values were collected over a long period. Analyzing the latency between multimedia devices within a limited time will be of considerable benefit to the industry. Furthermore, the proposed reliable camera and video streaming through optimized AVB device settings would strongly support real-time comprehension and analysis of images with AI (Artificial Intelligence) algorithms in autonomous driving.
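
RLMM itself is not specified in the abstract, so the sketch below only illustrates the kind of long-run latency averaging the authors describe, using timestamped UDP probes between two devices; it is not AVB, and the peer address, port, and the assumption that the peer echoes each probe back are all hypothetical.

```python
import socket
import struct
import time

def measure_rtt(host="192.168.1.20", port=9000, samples=1000):
    """Average the round-trip time of many timestamped probes to a peer that echoes them."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for _ in range(samples):
        sent = time.monotonic()
        sock.sendto(struct.pack("!d", sent), (host, port))
        try:
            sock.recvfrom(64)                  # peer is assumed to echo the probe back
        except socket.timeout:
            continue                           # drop lost probes from the average
        rtts.append(time.monotonic() - sent)
    return sum(rtts) / len(rtts) if rtts else None
```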

Advanced Real-Time Rate Control for Low Bit Rate Video Communication

  • Kim, Yoon
    • Journal of the Korea Computer Industry Society / v.7 no.5 / pp.513-520 / 2006
  • In this paper, we propose a novel real-time frame-layer rate control algorithm using a sliding-window method for low bit rate video coding. The proposed rate control method performs bit allocation at the frame level to minimize the average distortion over an entire sequence as well as variations in distortion between frames. A new frame-layer rate-distortion model is derived, and a non-iterative optimization method is used for low computational complexity. To reduce quality fluctuation, we use a sliding-window scheme that does not require a pre-analysis pass; therefore the proposed algorithm introduces no additional encoding delay and is suitable for real-time, low-complexity video encoders. Experimental results indicate that the proposed method provides better visual and PSNR performance than the existing TMN8 rate control method.
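
The paper's rate-distortion model is not reproduced in the abstract, so the following is only a generic sketch of sliding-window, frame-layer bit allocation: the target for the next frame is the nominal per-frame budget corrected by the drift observed over a window of recent frames, which damps quality fluctuation without any pre-analysis pass. The bitrate, frame rate, and window length are arbitrary example values.

```python
from collections import deque

class SlidingWindowRateControl:
    def __init__(self, bitrate_bps=64_000, fps=15, window=10):
        self.budget_per_frame = bitrate_bps / fps      # nominal bits per frame
        self.recent = deque(maxlen=window)             # bits actually spent recently

    def target_bits(self):
        """Bit target for the next frame, corrected by recent over- or under-spending."""
        if not self.recent:
            return self.budget_per_frame
        drift = sum(self.recent) / len(self.recent) - self.budget_per_frame
        # Spend less when recent frames overshot the budget, more when they undershot.
        return max(0.0, self.budget_per_frame - drift)

    def update(self, actual_bits):
        self.recent.append(actual_bits)
```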
