• Title/Summary/Keyword: Video recognition


Integrated Context Awareness by Sharing Information between Cameras (카메라간 정보공유를 통한 종합적인 상황인식)

  • An, Tae-Ki;Shin, Jeong-Ryol;Han, Seok-Youn;Lee, Gil-Jae
    • Proceedings of the KSR Conference / 2008.11b / pp.1360-1365 / 2008
  • Most recognition algorithms for intelligent surveillance systems are based on analysis of video collected from a single camera, and video analysis is also used to compute the internal parameters of the recognition process. Because such an algorithm processes only video from its own fixed area, it is insufficient: it cannot use information from related areas. An intelligent integrated surveillance system, however, should correlate events in other areas as well as in the fixed area. In this paper, toward constructing such a system, we describe a method that does not focus on the video of each camera in isolation but recognizes the overall situation, more accurately, by sharing information between cameras. The method can be used to recognize events in fixed areas such as urban transit stations.
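
The abstract stays at the system level, so as a loose illustration only, the following Python sketch shows one way per-camera analyzers might publish events to a shared store and correlate them across cameras by time; the `Event`/`SharedEventStore` names and the 5-second window are hypothetical, not from the paper.

```python
# Hypothetical sketch (not the paper's method): per-camera analyzers publish
# events to a shared store, and a fusion step correlates them by time.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    camera_id: str     # which camera observed the event
    timestamp: float   # seconds since epoch
    label: str         # e.g. "person_entered", "object_left"

class SharedEventStore:
    """Events published by all cameras, grouped for cross-camera correlation."""
    def __init__(self):
        self.events = []

    def publish(self, event: Event):
        self.events.append(event)

    def correlate(self, window_s: float = 5.0):
        """Group events into coarse time buckets of width window_s and keep
        groups seen by more than one camera; such groups approximate one
        'integrated' situation spanning several areas."""
        groups = defaultdict(list)
        for ev in sorted(self.events, key=lambda e: e.timestamp):
            groups[int(ev.timestamp // window_s)].append(ev)
        return [g for g in groups.values()
                if len({e.camera_id for e in g}) > 1]

store = SharedEventStore()
store.publish(Event("cam_platform", 100.0, "person_entered"))
store.publish(Event("cam_stairs", 102.5, "person_entered"))
print(store.correlate())  # events seen by multiple cameras near the same time
```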


A Study of Video-Based Abnormal Behavior Recognition Model Using Deep Learning

  • Lee, Jiyoo;Shin, Seung-Jung
    • International journal of advanced smart convergence / v.9 no.4 / pp.115-119 / 2020
  • Recently, CCTV installations have been increasing rapidly in both the public and private sectors to prevent various crimes. As the number of CCTVs grows, video-based abnormal behavior detection in control systems has become one of the key technologies for safety, because it is difficult for surveillance personnel who control multiple CCTVs to manually monitor all abnormal behaviors in the video. To solve this problem, research on recognizing abnormal behavior using deep learning is being actively conducted. In this paper, we propose a model for detecting abnormal behavior based on deep learning models that are currently in wide use. Using the abnormal behavior video data provided by AI Hub, we performed a comparative experiment on detecting two anomalous behaviors, violence and fainting, in videos using 2D CNN-LSTM, 3D CNN, and I3D models. We hope that the experimental results of this abnormal behavior learning model will be helpful in developing intelligent CCTV.
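
As a rough companion to the abstract, a minimal 2D CNN-LSTM video classifier in PyTorch might look like the sketch below; the layer sizes, clip length, and three-class head are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of a 2D CNN-LSTM video classifier; sizes are placeholders.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, num_classes: int = 3, hidden: int = 256):
        super().__init__()
        # Small per-frame 2D CNN producing one feature vector per frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (N, 64)
        )
        # The LSTM aggregates per-frame features over time.
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])  # classify from the last time step

logits = CNNLSTM()(torch.randn(2, 16, 3, 112, 112))  # two 16-frame clips
print(logits.shape)  # torch.Size([2, 3])
```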

Real-time video Surveillance System Design Proposal Using Abnormal Behavior Recognition Technology

  • Lee, Jiyoo;Shin, Seung-Jung
    • International journal of advanced smart convergence / v.9 no.4 / pp.120-123 / 2020
  • A surveillance system that prevents crimes and accidents in advance has become a necessity, not an option, in everyday life. Not only public institutions but also individuals are installing surveillance cameras to protect their property and privacy. However, since an installed surveillance camera cannot be watched 24 hours a day, the focus has been on technology that tracks footage after an incident occurs rather than on prevention. In this paper, we propose a system model that monitors, through real-time video, abnormal behaviors that may lead to crimes; when a specific behavior occurs, the surveillance system detects it automatically and responds immediately through an alarm. In the proposed model, real-time video from surveillance cameras is analyzed by an analysis server using an I3D model, and notifications of abnormal behavior are delivered to a web server and then to clients. If the system is implemented with the proposed model, an immediate response can be expected when a crime occurs.
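
A minimal sketch of the described pipeline (camera, analysis server, alarm) could look like the following; the `classify` stub stands in for the I3D analysis server, and the endpoint URL, window size, and threshold are placeholders.

```python
# Rough sketch of the camera -> analysis -> notification loop; all names,
# thresholds, and the URL are placeholders, not the paper's configuration.
import collections
import cv2          # pip install opencv-python
import requests     # pip install requests

WINDOW = 64                              # frames per analyzed clip (assumption)
ALERT_URL = "http://example.com/alert"   # hypothetical web-server endpoint

def classify(clip):
    """Stub for the I3D analysis server: return (label, score)."""
    return "normal", 0.0

buffer = collections.deque(maxlen=WINDOW)
cap = cv2.VideoCapture(0)                # camera index/URL is a placeholder
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    buffer.append(frame)
    if len(buffer) == WINDOW:
        label, score = classify(list(buffer))
        if label != "normal" and score > 0.8:   # illustrative threshold
            # Push an alarm to the web server, which relays it to clients.
            requests.post(ALERT_URL, json={"label": label, "score": score})
cap.release()
```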

Improvement of Accuracy for Human Action Recognition by Histogram of Changing Points and Average Speed Descriptors

  • Vu, Thi Ly;Do, Trung Dung;Jin, Cheng-Bin;Li, Shengzhe;Nguyen, Van Huan;Kim, Hakil;Lee, Chongho
    • Journal of Computing Science and Engineering / v.9 no.1 / pp.29-38 / 2015
  • Human action recognition has recently become an important research topic in the computer vision area due to its many real-world applications, such as video surveillance, video retrieval, video analysis, and human-computer interaction. The goal of this paper is to evaluate descriptors that have recently been used in action recognition, namely the Histogram of Oriented Gradients (HOG) and the Histogram of Optical Flow (HOF). The paper also proposes new descriptors: the Histogram of Changing Points (HCP), which represents the change of points within each part of the human body caused by an action, and the so-called Average Speed (AS), which measures the average speed of an action. The descriptors are combined into a strong descriptor that represents human actions by modeling appearance, local motion, and changes in each body part, as well as motion speed. The effectiveness of the new descriptors is evaluated in experiments on the KTH and Hollywood datasets.
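
The paper's exact per-body-part formulation is not given in the abstract, but the core of the Average Speed (AS) idea, mean optical-flow magnitude over a clip, can be sketched as follows; the whole-frame version here is an assumption.

```python
# Illustrative whole-frame approximation of an Average Speed descriptor:
# mean optical-flow magnitude across consecutive frame pairs.
import cv2
import numpy as np

def average_speed(frames):
    """frames: list of grayscale uint8 images; returns mean flow magnitude."""
    speeds = []
    for prev, curr in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        speeds.append(mag.mean())
    return float(np.mean(speeds))

clip = [np.random.randint(0, 255, (120, 160), np.uint8) for _ in range(8)]
print(average_speed(clip))
```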

Effective Hand Gesture Recognition by Key Frame Selection and 3D Neural Network

  • Hoang, Nguyen Ngoc;Lee, Guee-Sang;Kim, Soo-Hyung;Yang, Hyung-Jeong
    • Smart Media Journal / v.9 no.1 / pp.23-29 / 2020
  • This paper presents an approach for dynamic hand gesture recognition using an algorithm based on a 3D Convolutional Neural Network (3D_CNN), later extended to 3D Residual Networks (3D_ResNet), together with neural-network-based key frame selection. Typically, a 3D deep neural network classifies gestures from input image frames randomly sampled from video data. In this work, to improve classification performance, we instead feed the classification network key frames that represent the overall video. The key frames are extracted by SegNet rather than by conventional clustering algorithms for video summarization (VSUMM), which require heavy computation; by using a deep neural network, key frame selection can be performed in a real-time system. Experiments are conducted with 3D convolutional models such as 3D_CNN, Inflated 3D_CNN (I3D), and 3D_ResNet for gesture classification. Our algorithm achieved up to 97.8% classification accuracy on the Cambridge gesture dataset. The experimental results show that the proposed approach is efficient and outperforms existing methods.
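
The key-frame step could be sketched as below; note that the paper selects key frames with SegNet, while this stand-in simply keeps the k frames with the largest frame-to-frame change before handing the clip to a 3D CNN.

```python
# Simplified key-frame selection (frame-difference scoring as a stand-in for
# the paper's SegNet-based selector): keep the k most-changing frames.
import numpy as np

def select_key_frames(frames: np.ndarray, k: int = 16) -> np.ndarray:
    """frames: (T, H, W, C) uint8. Keep the k frames that change the most."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2, 3))
    scores = np.concatenate([[0.0], diffs])     # first frame scores 0
    keep = np.sort(np.argsort(scores)[-k:])     # top-k, in temporal order
    return frames[keep]

video = np.random.randint(0, 255, (120, 112, 112, 3), np.uint8)
clip = select_key_frames(video)   # (16, 112, 112, 3) -> input to the 3D CNN
print(clip.shape)
```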

Video Representation via Fusion of Static and Motion Features Applied to Human Activity Recognition

  • Arif, Sheeraz;Wang, Jing;Fei, Zesong;Hussain, Fida
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.7 / pp.3599-3619 / 2019
  • In a human activity recognition system, both static and motion information play a crucial role in achieving efficient and competitive results. Most existing methods are insufficient for extracting video features and cannot assess how much each component (static and motion) contributes. Our work highlights this problem and proposes a Static-Motion Fused Features Descriptor (SMFD), which intelligently leverages both static and motion features in a single descriptor. First, static features are learned by a two-stream 3D convolutional neural network. Second, trajectories are extracted by tracking key points, and only trajectories located in the central region of the original video frame are selected, in order to reduce irrelevant background trajectories as well as computational complexity. Then, shape and motion descriptors are obtained along with the key points using SIFT flow. Next, a Cholesky transformation is introduced to fuse the static and motion feature vectors while guaranteeing an equal contribution from all descriptors. Finally, a Long Short-Term Memory (LSTM) network is utilized to discover long-term temporal dependencies and produce the final prediction. To confirm the effectiveness of the proposed approach, extensive experiments have been conducted on three well-known datasets, i.e., UCF101, HMDB51, and YouTube. The findings show that the resulting recognition system is on par with state-of-the-art methods.
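
The abstract does not detail the Cholesky fusion, but a common form of it factors a 2x2 correlation matrix and uses its rows to mix the two feature streams; the sketch below assumes that form, with an arbitrary correlation p = 0.5 rather than the paper's setting.

```python
# Assumed form of Cholesky-based fusion: factor [[1, p], [p, 1]] and mix the
# static and motion feature vectors with its rows.
import numpy as np

def cholesky_fuse(static_feat: np.ndarray, motion_feat: np.ndarray,
                  p: float = 0.5) -> np.ndarray:
    """Both inputs: 1-D arrays of equal length; returns the fused descriptor."""
    L = np.linalg.cholesky(np.array([[1.0, p], [p, 1.0]]))
    stacked = np.vstack([static_feat, motion_feat])   # (2, D)
    fused = L @ stacked           # row 0 = static, row 1 mixes both streams
    return fused.reshape(-1)      # (2*D,) fused descriptor

f = cholesky_fuse(np.random.randn(128), np.random.randn(128))
print(f.shape)  # (256,)
```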

Detection and Recognition of Traffic Lights for Unmanned Autonomous Driving (무인 자율주행을 위한 신호등의 검출과 인식)

  • Kim, Jang-Won
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.6 / pp.751-756 / 2018
  • This research extracts traffic lights from input video, recognizes their colors, and proposes a traffic light color recognition algorithm, applicable to unmanned autonomous vehicles or ITS, that also distinguishes signs. To extract traffic lights, the proposed algorithm extracts outlines with the Canny Edge Algorithm (CEA) and applies the Hough Circle Transform (HCT) to recognize the colors of the traffic light and improve accuracy. The proposed method was applied to video streams acquired on the road, and an excellent traffic light recognition rate was confirmed. In particular, an ROI containing the traffic light was isolated in the input video, which reduced computing time. In regions that merely resemble traffic lights, either no circle is extracted or the V value in HSV space is low, so they fail as candidate areas; this improves the recognition accuracy.
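
The CEA + HCT + HSV sequence described above maps naturally onto OpenCV; in the sketch below the thresholds and HSV ranges are illustrative guesses, and OpenCV's HOUGH_GRADIENT mode runs its own Canny edge step internally (param1 is the Canny high threshold), matching the CEA-then-HCT order in the abstract.

```python
# Illustrative Canny + Hough Circle + HSV color check; values are guesses.
import cv2
import numpy as np

def detect_traffic_light_color(bgr: np.ndarray):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # HOUGH_GRADIENT applies Canny internally, so this covers CEA -> HCT.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=20,
                               param1=200, param2=30, minRadius=5, maxRadius=40)
    if circles is None:
        return None                               # no circular lamp candidate
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    x, y, _r = np.round(circles[0, 0]).astype(int)
    h, s, v = hsv[y, x]
    if v < 100:                                   # dim candidate: reject it
        return None
    if h < 10 or h > 160:
        return "red"
    if 40 <= h <= 90:
        return "green"
    return "yellow"
```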

Methods for Video Caption Extraction and Extracted Caption Image Enhancement (영화 비디오 자막 추출 및 추출된 자막 이미지 향상 방법)

  • Kim, So-Myung;Kwak, Sang-Shin;Choi, Yeong-Woo;Chung, Kyu-Sik
    • Journal of KIISE:Software and Applications / v.29 no.4 / pp.235-247 / 2002
  • For efficient indexing and retrieval of digital video data, research on video caption extraction and recognition is required. This paper proposes methods for extracting artificial captions from video data and enhancing their image quality for accurate Hangul and English character recognition. In the proposed methods, we first find the beginning and ending frames of the same caption content and combine the multiple frames in each group with a logical operation to remove background noise. During this process, an evaluation is performed to detect integrated results that contain different caption images. After the multiple video frames are integrated, four image enhancement techniques are applied: resolution enhancement, contrast enhancement, stroke-based binarization, and morphological smoothing operations. By applying these operations to the video frames, we can improve the image quality even for phonemes with complex strokes. Finding the beginning and ending frames of the same caption content can also be used effectively for digital video indexing and browsing. We tested the proposed methods on video caption images containing both Hangul and English characters from films and obtained improved character recognition results.
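
A minimal sketch of the frame-integration and enhancement steps might look like the following, with a fixed brightness threshold and Otsu binarization standing in for the paper's stroke-based binarization; all values are illustrative.

```python
# Sketch: combine frames of the same caption with a logical AND (stable
# caption pixels survive, moving background does not), then enhance.
import cv2
import numpy as np

def integrate_caption_frames(frames):
    """frames: list of grayscale uint8 frames showing the same caption."""
    masks = [cv2.threshold(f, 180, 255, cv2.THRESH_BINARY)[1] for f in frames]
    merged = masks[0]
    for m in masks[1:]:
        merged = cv2.bitwise_and(merged, m)   # background noise rarely persists
    return merged

def enhance(caption: np.ndarray) -> np.ndarray:
    big = cv2.resize(caption, None, fx=2, fy=2,
                     interpolation=cv2.INTER_CUBIC)   # resolution enhancement
    _, binary = cv2.threshold(big, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarize
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # smoothing
```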

Video character recognition improvement by support vector machines and regularized discriminant analysis (서포트벡터머신과 정칙화판별함수를 이용한 비디오 문자인식의 분류 성능 개선)

  • Lim, Su-Yeol;Baek, Jang-Sun;Kim, Min-Soo
    • Journal of the Korean Data and Information Science Society / v.21 no.4 / pp.689-697 / 2010
  • In this study, we propose a new procedure for improving the character recognition of text areas extracted from video images. Recognizing strings extracted from video, which mix Hangul, English, numbers, special characters, etc., is more difficult than general character recognition because of varied fonts and sizes, graphical letter forms, tilted images, disconnected strokes, cluttered backgrounds, touching characters, low-resolution characters, and so on. We improved the recognition rate by covering only commonly used letters and leaving out rarely used ones, instead of recognizing all letters, and then applying SVM and RDA character recognition methods. Our numerical results indicate that combining SVM and RDA performs better than the other methods.
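
In scikit-learn terms, the SVM + RDA combination could be sketched as below; RDA is approximated here by QuadraticDiscriminantAnalysis with a regularization term, soft voting is an assumed combination rule (the paper does not specify one in the abstract), and the features and labels are placeholders.

```python
# Assumed sketch of combining SVM and regularized discriminant analysis.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC

X = np.random.randn(200, 64)          # placeholder character features
y = np.random.randint(0, 10, 200)     # placeholder class labels

clf = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf", probability=True)),
        ("rda", QuadraticDiscriminantAnalysis(reg_param=0.5)),
    ],
    voting="soft",                     # average the class probabilities
)
clf.fit(X, y)
print(clf.predict(X[:5]))
```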

Preprocessing Technique for Improving Action Recognition Performance in ERP Video with Multiple Objects (다중 객체가 존재하는 ERP 영상에서 행동 인식 모델 성능 향상을 위한 전처리 기법)

  • Park, Eun-Soo;Kim, Seunghwan;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.25 no.3 / pp.374-385 / 2020
  • In this paper, we propose a preprocessing technique to solve the problems of action recognition in Equirectangular Projection (ERP) video. The proposed technique treats the person object as the subject of the action, i.e., the Object of Interest (OOI), and the area surrounding the OOI as the ROI. It consists of three modules: I) recognize person objects in the image with an object recognition model; II) create a saliency map from the input image; III) select the subject of the action using the recognized person objects and the saliency map. The bounding box of the selected action subject is then input to the action recognition model to improve action recognition performance. Compared with feeding the original ERP image to the action recognition model, the proposed preprocessing improves performance, reaching up to 99.6% when only the OOI is detected. The technique can also be effective for related tasks such as video summarization.
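
Module III could be sketched as follows: given person boxes from module I and a saliency map from module II, pick the most salient box as the OOI and crop it for the action model; the box format, selection rule, and all shapes are assumptions, not the paper's specification.

```python
# Assumed sketch of OOI selection: score person boxes by mean saliency,
# keep the best one, and crop it as the ROI for the action recognizer.
import numpy as np

def select_ooi(erp_frame: np.ndarray, boxes, saliency: np.ndarray):
    """boxes: list of (x1, y1, x2, y2); saliency: (H, W) float map in [0, 1]."""
    def mean_saliency(box):
        x1, y1, x2, y2 = box
        return float(saliency[y1:y2, x1:x2].mean())
    best = max(boxes, key=mean_saliency)      # most salient person = OOI
    x1, y1, x2, y2 = best
    return erp_frame[y1:y2, x1:x2]            # ROI crop for the action model

frame = np.zeros((512, 1024, 3), np.uint8)    # ERP frame (2:1 aspect ratio)
sal = np.random.rand(512, 1024)
crop = select_ooi(frame, [(100, 200, 180, 400), (600, 150, 700, 420)], sal)
print(crop.shape)
```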