• Title/Summary/Keyword: Video detection

Search Result 1,333, Processing Time 0.025 seconds

Face Detection using Color Information and AdaBoost Algorithm (색상정보와 AdaBoost 알고리즘을 이용한 얼굴검출)

  • Na, Jong-Won;Kang, Dae-Wook;Bae, Jong-Sung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.5
    • /
    • pp.843-848
    • /
    • 2008
  • Most face detection techniques rely on motion information from the face. The traditional approach uses the difference-picture method to detect movement; however, most such methods either lack a mathematical treatment of real-time operation or involve algorithms too complicated to implement in real time. In this paper, the input RGB video is first converted to YCbCr for real-time face detection. Next, the difference image between two adjacent frames is computed and Glassfire labeling is performed. Labeled regions whose values exceed a threshold are recognized as motion areas and extracted. Face detection is then carried out on the extracted motion regions, and the AdaBoost algorithm is used to extract the facial features required for detection.
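
The differencing-and-labeling pipeline described above can be sketched as follows. This is a minimal illustration assuming grayscale frames, using a simple flood-fill connected-component pass as a stand-in for Glassfire labeling; the thresholds are illustrative, not values from the paper:

```python
import numpy as np

def motion_regions(prev, curr, diff_thresh=30, area_thresh=20):
    """Frame differencing plus flood-fill labeling (a stand-in for
    Glassfire labeling): return bounding boxes of moving regions
    whose pixel count exceeds area_thresh."""
    moving = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
    labels = np.zeros(moving.shape, dtype=int)
    boxes, next_label = [], 1
    for r0, c0 in zip(*np.nonzero(moving)):
        if labels[r0, c0]:
            continue                      # already part of a labeled region
        stack, pixels = [(r0, c0)], []
        labels[r0, c0] = next_label
        while stack:                      # iterative flood fill (4-connectivity)
            r, c = stack.pop()
            pixels.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < moving.shape[0] and 0 <= nc < moving.shape[1]
                        and moving[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = next_label
                    stack.append((nr, nc))
        if len(pixels) >= area_thresh:    # keep only large motion areas
            rs, cs = zip(*pixels)
            boxes.append((min(rs), min(cs), max(rs), max(cs)))
        next_label += 1
    return boxes
```

A face detector (such as the AdaBoost cascade the abstract mentions) would then be run only inside the returned bounding boxes.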

A Study on Frame MSE Comparison for Scene Change Detection and Retrieval (장면 전환점 검출을 위한 프레임의 평균오차 비교에 관한 연구)

  • 김단환;김형균;오무송
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.05a
    • /
    • pp.638-642
    • /
    • 2002
  • With the wide use of high-capacity video data, users need to grasp the whole content of a video at a glance. A frame list summarizing the video data lets viewers resume playback from any desired point, and an indexing process is required for effective video retrieval. This paper proposes an effective method for detecting scene change points in video based on content-based indexing. The proposed method samples the color values of pixels along the diagonal direction of each frame image so that the whole structure of the video can be grasped; with the sampled data, scene change points can be identified at a glance. The sampled pixel colors of frame i are stored in row i of an i×j matrix A, where j reflects the height of the frame. The mean squared error (MSE) between adjacent frames is then computed, and a frame whose mean error exceeds a predefined threshold is detected as a scene change point.
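
The diagonal sampling and MSE comparison can be sketched as below; the sampling density and the threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def diagonal_signature(frame):
    """Sample pixel values along the main diagonal of a frame."""
    h, w = frame.shape[:2]
    n = min(h, w)
    rows = np.linspace(0, h - 1, n).astype(int)
    cols = np.linspace(0, w - 1, n).astype(int)
    return frame[rows, cols].astype(np.float64)

def scene_changes(frames, threshold):
    """Flag frame i as a scene change point when the MSE between its
    diagonal signature and that of frame i-1 exceeds the threshold."""
    sigs = [diagonal_signature(f) for f in frames]
    changes = []
    for i in range(1, len(sigs)):
        mse = np.mean((sigs[i] - sigs[i - 1]) ** 2)
        if mse > threshold:
            changes.append(i)
    return changes
```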


Unusual Behavior Detection of Korean Cows using Motion Vector and SVDD in Video Surveillance System (움직임 벡터와 SVDD를 이용한 영상 감시 시스템에서 한우의 특이 행동 탐지)

  • Oh, Seunggeun;Park, Daihee;Chang, Honghee;Chung, Yongwha
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.11
    • /
    • pp.795-800
    • /
    • 2013
  • Early detection of oestrus in Korean cows is one of the important issues in maximizing economic benefit. Although various methods have been proposed, the performance of oestrus detection systems still needs improvement. In this paper, we propose a video surveillance system that can detect unusual behavior of multiple cows, including mounting activity. Unusual behavior detection aims to identify dangerous or abnormal situations of cows promptly and correctly in real-time video from a surveillance camera. The prototype system takes input video from a fixed-location camera, uses motion vectors to represent the motion information of the cows, and finally adopts SVDD (one of the most well-known one-class SVM formulations) as the detector, reinterpreting unusual behavior detection as a one-class decision problem from a practical point of view. Experimental results with videos obtained from a farm located in Jinju illustrate the efficiency of the proposed method.
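
SVDD fits a minimum-volume hypersphere around normal training data and flags points that fall outside it. As a rough, self-contained stand-in (not the kernelized SVDD the paper uses), one can fit a hypersphere from the centroid and a distance quantile of normal motion features; the feature layout and quantile are assumptions for illustration:

```python
import numpy as np

class SimpleSVDD:
    """Crude SVDD stand-in: fit a hypersphere (center + radius) around
    normal motion features; points outside the sphere are 'unusual'."""

    def fit(self, X, quantile=0.95):
        self.center = X.mean(axis=0)
        dists = np.linalg.norm(X - self.center, axis=1)
        self.radius = np.quantile(dists, quantile)
        return self

    def predict(self, X):
        dists = np.linalg.norm(X - self.center, axis=1)
        return np.where(dists <= self.radius, 1, -1)  # -1 = unusual behavior
```

A real deployment would extract motion-vector features from the video stream and train a kernel SVDD (or a one-class SVM) on normal-behavior clips only.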

Detection of Video Scene Boundaries based on the Local and Global Context Information (지역 컨텍스트 및 전역 컨텍스트 정보를 이용한 비디오 장면 경계 검출)

  • 강행봉
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.6
    • /
    • pp.778-786
    • /
    • 2002
  • Scene boundary detection is important for understanding the semantic structure of video data. However, it is more difficult than shot change detection because it requires a good understanding of the semantics in the video. In this paper, we propose a new approach to scene segmentation using contextual information in video data. The contextual information is divided into two categories: local and global. Local contextual information refers to the foreground regions, the background, and shot activity. Global contextual information refers to a video shot's environment or its relationship with other shots; the coherence, interaction, and tempo of video shots are computed as global contextual information. Using this contextual information, we detect scene boundaries in three consecutive steps: linking, verification, and adjusting. We evaluated the proposed approach on TV dramas and movies, and the detection accuracy for correct scene boundaries exceeds 80%.
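
The linking step can be illustrated with a simple coherence test: a shot opens a new scene unless it is visually coherent with one of the few preceding shots. This sketch uses histogram intersection as the coherence measure, with an illustrative window size and threshold rather than the paper's exact definitions:

```python
import numpy as np

def link_scenes(shot_hists, window=3, coherence_thresh=0.8):
    """Link shots into scenes: shot i starts a new scene unless its
    histogram intersection with one of the previous `window` shots
    reaches coherence_thresh. Returns indices of scene-opening shots."""
    def intersection(h1, h2):
        return float(np.minimum(h1, h2).sum())  # assumes normalized histograms

    boundaries = [0]
    for i in range(1, len(shot_hists)):
        coherent = any(
            intersection(shot_hists[i], shot_hists[j]) >= coherence_thresh
            for j in range(max(0, i - window), i)
        )
        if not coherent:
            boundaries.append(i)
    return boundaries
```

The verification and adjusting steps would then refine these candidate boundaries using the global cues (interaction, tempo) the abstract describes.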

Sub-Frame Analysis-based Object Detection for Real-Time Video Surveillance

  • Jang, Bum-Suk;Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.11 no.4
    • /
    • pp.76-85
    • /
    • 2019
  • We introduce a vision-based object detection method for real-time video surveillance systems in low-end edge computing environments. Recently, the accuracy of object detection has been improved by deep learning approaches such as the Region Convolutional Neural Network (R-CNN), which has a two-stage inference pipeline. On the other hand, one-stage detection algorithms such as Single-Shot Detection (SSD) and You Only Look Once (YOLO) have been developed at the expense of some accuracy and can be used for real-time systems. However, high-performance hardware such as General-Purpose computing on Graphics Processing Units (GPGPU) is still required to achieve excellent object detection performance and speed, a hardware requirement that is burdensome in low-end edge computing environments. To address this, we propose a sub-frame analysis method for object detection. Specifically, we divide a whole image frame into smaller ones and then run inference on a Convolutional Neural Network (CNN)-based detection network, which is much faster than a conventional network designed for full-frame images. With the proposed method, we reduced the computational requirement significantly without losing throughput or object detection accuracy.
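
The sub-frame split itself is straightforward; a minimal sketch (tiling only, with the detector network left out) might look like:

```python
import numpy as np

def split_subframes(frame, rows, cols):
    """Split a frame into rows x cols sub-frames for independent inference.
    Returns a list of ((row, col), tile) pairs so per-tile detections can
    later be mapped back to full-frame coordinates."""
    h, w = frame.shape[:2]
    tiles = []
    for r in range(rows):
        for c in range(cols):
            tile = frame[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            tiles.append(((r, c), tile))
    return tiles
```

Each tile would then be fed to a small CNN detector, with the (row, col) index used to translate per-tile boxes back into full-frame coordinates.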

Video Matching Algorithm of Content-Based Video Copy Detection for Copyright Protection (저작권보호를 위한 내용기반 비디오 복사검출의 비디오 정합 알고리즘)

  • Hyun, Ki-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.3
    • /
    • pp.315-322
    • /
    • 2008
  • To search for the location of a copied video in a video database, signatures should be robust to video re-editing, channel noise, and variation of the frame rate over time. Several kinds of signatures have been proposed. One of them, the ordinal signature, has difficulty describing the spatial characteristics of a frame because the average gray value is computed over a fixed N×N window. In this paper, I study a sequence-matching algorithm for video copy detection for copyright protection, employing the R-tree index method for retrieval and proposing a robust ordinal signature for the original video clips and the same signature for the pirated video. The robust ordinal signature has a two-dimensional vector structure that is strong against noise and frame-rate variation, and it is expressed in MBR form in the R-tree search space. Moreover, I focus on building a video copy detection service in which content publishers register their valuable digital content; the detection algorithm compares web content against the registered content and notifies content owners of illegal copies. Experimental results show that the proposed method improves the video matching rate and that the signature's characteristics are suitable for large video databases.
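
An ordinal signature ranks the average intensities of a frame's blocks, which makes it largely invariant to global brightness and contrast changes. A minimal sketch (the block grid size is illustrative, and this is the generic ordinal measure rather than the paper's robust two-dimensional variant):

```python
import numpy as np

def ordinal_signature(frame, n=3):
    """Rank-order the average intensities of an n x n grid of blocks.
    Returns the rank (0 .. n*n-1) of each block in raster order."""
    h, w = frame.shape
    means = np.array([
        frame[r * h // n:(r + 1) * h // n,
              c * w // n:(c + 1) * w // n].mean()
        for r in range(n) for c in range(n)
    ])
    return np.argsort(np.argsort(means))  # double argsort yields ranks
```

Matching then compares rank sequences between the query clip and database clips, e.g. via an R-tree over MBRs of the signature vectors as the abstract describes.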


Signature Extraction Method from H.264 Compressed Video (H.264/AVC로 압축된 비디오로부터 시그너쳐 추출방법)

  • Kim, Sung-Min;Kwon, Yong-Kwang;Won, Chee-Sun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.3
    • /
    • pp.10-17
    • /
    • 2009
  • This paper proposes a compressed-domain signature extraction method that can be used for CBCD (Content-Based Copy Detection). Since existing signature extraction methods for CBCD are executed in the spatial domain, they require additional computation to decode the compressed video before signature extraction. To avoid this overhead, we generate a thumbnail image directly from the compressed video without full decoding; the video signature can then be extracted from the thumbnail image. Experimental results of extracting brightness-ordering information as the signature for CBCD show that the proposed method is 2.8 times faster than the spatial-domain method while maintaining 80.98% accuracy.

Video Abstracting Construction of Efficient Video Database (대용량 비디오 데이터베이스 구축을 위한 비디오 개요 추출)

  • Shin Seong-Yoon;Pyo Seong-Bae;Rhee Yang-Won
    • KSCI Review
    • /
    • v.14 no.1
    • /
    • pp.255-264
    • /
    • 2006
  • Viewers cannot fully understand the contents of an entire video because most videos are long, high-capacity data. This paper proposes efficient scene change detection and video abstracting using a new shot-clustering method to solve this problem. Scene changes are detected by a method that combines the color histogram with the χ² histogram. Clustering is performed with a similarity measure based on local-histogram differences and a new shot-merge algorithm. Furthermore, experimental results are presented using real TV broadcast programs.
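
The χ² histogram comparison used for scene change detection can be sketched as follows; the threshold is an illustrative assumption:

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized color histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def shot_boundaries(histograms, threshold):
    """Mark frame i as a shot change when the chi-square distance
    to frame i-1 exceeds the threshold."""
    return [i for i in range(1, len(histograms))
            if chi2_distance(histograms[i], histograms[i - 1]) > threshold]
```

The detected shots would then be merged into clusters by a similarity measure over local histograms, as the abstract describes.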


Analysis of the Robustness and Discrimination for Video Fingerprints in Video Copy Detection (복제 비디오 검출에서 비디오 지문의 강인함과 분별력 분석)

  • Kim, Semin;Ro, Yong Man
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.11
    • /
    • pp.1281-1287
    • /
    • 2013
  • To prevent illegal video copies, many video fingerprints have been developed. Video fingerprints should be robust to various video transformations and have high discriminative power. In general, video fingerprints are generated from three feature spaces: luminance, gradient, and DCT coefficients. However, there are few studies of robustness and discrimination according to feature space. Thus, we analyzed the properties of each feature space through a video copy detection task, measuring the robustness and discrimination of the resulting video fingerprints. We generated three video fingerprints from these feature spaces using the same algorithm. In our test, the fingerprint based on DCT coefficients outperformed the others because its discrimination was higher.
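
For illustration, a simple luminance-space fingerprint can be built by comparing each block's mean brightness to the frame mean, with fingerprints compared by Hamming distance. This is a generic sketch, not the paper's exact algorithm; gradient- and DCT-based variants would replace the block means with gradient energies or low-frequency DCT coefficients:

```python
import numpy as np

def luminance_fingerprint(frame, n=4):
    """Binary fingerprint: 1 where a block's mean luminance exceeds
    the mean over all blocks, 0 otherwise (raster order)."""
    h, w = frame.shape
    means = np.array([
        frame[r * h // n:(r + 1) * h // n,
              c * w // n:(c + 1) * w // n].mean()
        for r in range(n) for c in range(n)
    ])
    return (means > means.mean()).astype(int)

def hamming(f1, f2):
    """Number of differing bits between two fingerprints."""
    return int(np.sum(f1 != f2))
```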

Face Detection and Recognition for Video Retrieval (비디오 검색을 위한 얼굴 검출 및 인식)

  • Islam, Mohammad Khairul;Lee, Hyung-Jin;Paul, Anjan Kumar;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • /
    • v.12 no.6
    • /
    • pp.691-698
    • /
    • 2008
  • We present a novel method for face detection and recognition applicable to video retrieval. Person-matching efficiency largely depends on how robustly faces are detected in the video frames. Face regions are detected in video frames using Viola-Jones features boosted with the AdaBoost algorithm. After face detection, illumination compensation is applied, and PCA (Principal Component Analysis) extracts features that are classified by an SVM (Support Vector Machine) for person identification. Experimental results show that the matching efficiency of the ensembled architecture is quite satisfactory.
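
The PCA feature-extraction stage can be sketched as below; this eigenface-style projection is a generic illustration (the number of components and image size are assumptions), not the paper's exact pipeline:

```python
import numpy as np

def pca_features(faces, k=2):
    """Eigenface-style PCA: project flattened face images onto the
    top-k principal components of the training set. Returns the
    projected features, the mean face, and the components."""
    X = faces.reshape(len(faces), -1).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data; rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, mean, Vt[:k]
```

The resulting low-dimensional features would then be fed to an SVM classifier for person identification, as the abstract describes.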
