• Title/Summary/Keyword: Video analysis system


The Analysis of Digital Watermarking for MPEG-21 Digital Item Adaptation (디지털 영상 워터마킹에 대한 MPEG-21 DIA의 영향 분석)

  • Bae, Tae Meon;Kang, Seok Jun;Ro, Yong Man;Ine, So Ran
    • Proceedings of the Korea Information Processing Society Conference / 2004.05a / pp.139-142 / 2004
  • In this paper, we experimentally analyze the effect of MPEG-21 Digital Item Adaptation (DIA) on watermark signals. MPEG-21 DIA provides functions that adapt multimedia content to a variety of consumption environments. However, these content adaptation functions can damage the watermark signal embedded in the content for copyright protection, so in order to use watermarking in a DIA environment, the effect of DIA on watermarking techniques must be analyzed. In this paper, using widely known, representative watermarking techniques, we test the robustness of the watermark against each of the adaptation functions defined in MPEG-21 DIA and, based on the results, analyze the requirements for applying watermarking techniques in a DIA environment.

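The robustness experiment described above can be illustrated with a toy spread-spectrum watermark: embed a key-dependent noise pattern, apply a crude quality-reducing adaptation (coarse quantization stands in for a DIA transform here, as an assumption), and check whether a correlation detector still responds. This is a generic sketch, not the watermarking method actually tested in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def embed_watermark(image, key, strength=3.0):
    """Additively embed a key-dependent pseudo-random pattern (spread spectrum)."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    return image + strength * pattern

def detect_watermark(image, key):
    """Correlate the image with the key's pattern; a large response suggests the mark survived."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    return float(np.mean((image - image.mean()) * pattern))

cover = rng.uniform(0, 255, (64, 64))        # stand-in for a video frame
marked = embed_watermark(cover, key=7)

# Crude stand-in for a quality-reducing DIA adaptation: coarse quantization.
adapted = np.round(marked / 16) * 16

score_clean = detect_watermark(cover, key=7)
score_marked = detect_watermark(marked, key=7)
score_adapted = detect_watermark(adapted, key=7)
```

A robustness study in the spirit of the paper would repeat the detection step for each adaptation operation (scaling, cropping, format conversion) and report which ones push the detector response below its decision threshold.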

CARA: Character Appearance Retrieval and Analysis for TV Programs

  • Jung Byunghee;Park Sungchoon;Kim Kyeongsoo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2004.11a / pp.237-240 / 2004
  • This paper describes a character retrieval system for TV programs and a set of novel algorithms for detecting and recognizing faces for the system. Our character retrieval system consists of two main components: the Face Register and the Face Recognizer. The Face Register detects faces in video frames and then guides users to register the detected faces of interest into the database. The Face Recognizer displays the appearance interval of each character on the timeline interface and lists the scenes together with the names of the characters appearing in each scene. Both components also provide a function to correct erroneous results, which helps provide an accurate character retrieval service. In the proposed face detection and recognition algorithms, we reduce the computation time without sacrificing recognition accuracy by using the DCT/LDA method for face feature extraction. We also develop the character retrieval system as a plug-in: by plugging our system into a cataloguing system, metadata about the characters in a video can be generated automatically. With this system, we can easily realize sophisticated on-demand video services, such as searching for the scenes featuring a specific TV star.

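The DCT-based feature extraction mentioned in the abstract can be sketched as follows; keeping an 8x8 block of low-frequency coefficients is an illustrative choice, and the subsequent LDA projection used in the paper is omitted.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are frequencies, columns are samples)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def dct_feature(face, keep=8):
    """2-D DCT of a square face patch; the low-frequency keep x keep block
    serves as a compact feature vector (fed into LDA in the paper's method)."""
    n = face.shape[0]
    D = dct_matrix(n)
    coeffs = D @ face @ D.T
    return coeffs[:keep, :keep].ravel()
```

Because the DCT concentrates facial appearance into a few low-frequency coefficients, the dimensionality that the LDA stage must handle drops sharply, which is where the reported computation-time savings would come from.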

Automatic Extraction of Focused Video Object from Low Depth-of-Field Image Sequences (낮은 피사계 심도의 동영상에서 포커스 된 비디오 객체의 자동 검출)

  • Park, Jung-Woo;Kim, Chang-Ick
    • Journal of KIISE: Software and Applications / v.33 no.10 / pp.851-861 / 2006
  • This paper proposes a novel unsupervised video object segmentation algorithm for image sequences with low depth-of-field (DOF), a popular photographic technique that conveys the photographer's intent by keeping only an object-of-interest (OOI) in sharp focus. The proposed algorithm consists largely of two modules. The first module automatically extracts OOIs from the first frame by separating sharply focused OOIs from other, out-of-focus foreground or background objects. The second module tracks the OOIs for the rest of the video sequence, with the aim of running the system in real time, or at least semi-real time. The experimental results indicate that the proposed algorithm provides an effective tool that can serve as a basis for applications such as video analysis for virtual reality, immersive video systems, photo-realistic video scene generation, and video indexing systems.
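The first module's idea, separating sharply focused pixels from defocused ones, can be sketched with a high-frequency measure such as a discrete Laplacian; the adaptive threshold below is an assumption, not the paper's criterion.

```python
import numpy as np

def focus_map(gray):
    """Magnitude of a discrete Laplacian; defocused (low-DOF blurred) regions
    carry little high-frequency energy, focused regions carry a lot."""
    lap = (-4.0 * gray
           + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
           + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1))
    return np.abs(lap)

def focused_mask(gray, thresh=None):
    """Binary mask of pixels likely belonging to the object-of-interest."""
    fm = focus_map(gray)
    if thresh is None:
        thresh = fm.mean() + fm.std()   # simple adaptive threshold (illustrative)
    return fm > thresh
```

A full system would clean this mask with morphological filtering and then track the resulting region across subsequent frames, as the second module does.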

A Study of CCTV Video Tracking Technique to The Object Monitoring in The Automation Manufacturing Facilities (자동화 생산 시설물의 객체모니터링을 위한 CCTV 영상추적 기술에 관한 연구)

  • Seo, Won-Gi;Lee, Ju-Young;Park, Goo-Man;Shin, Jae-Kwon;Lee, Seung-Youn
    • Journal of Satellite, Information and Communications / v.7 no.1 / pp.134-138 / 2012
  • In this paper, we implement a real-time status monitoring system for surveilling objects in automated manufacturing facilities and propose a CCTV video tracking system that uses a video tracking filter to improve efficiency. Instead of the general video monitoring method, we implement the monitoring software on the basis of the video tracking filter, making reliable and efficient PC-based monitoring possible. In addition, the real-time status confirmation function improves accessibility and convenience for administrators. We also confirm the performance improvement through an analysis of the proposed monitoring system using the video tracking filter.
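The abstract does not specify which tracking filter is used, so as a hypothetical stand-in, the sketch below uses a simple alpha-beta filter, one of the lightest predict-correct filters suitable for PC-based real-time monitoring.

```python
class AlphaBetaTracker:
    """Alpha-beta filter for one coordinate of a tracked object:
    predict with constant velocity, then correct toward the measurement."""
    def __init__(self, x0, alpha=0.85, beta=0.3, dt=1.0):
        self.x, self.v = x0, 0.0
        self.alpha, self.beta, self.dt = alpha, beta, dt

    def update(self, z):
        # Predict where the object should be now.
        x_pred = self.x + self.v * self.dt
        # Correct position and velocity using the measurement residual.
        r = z - x_pred
        self.x = x_pred + self.alpha * r
        self.v = self.v + self.beta * r / self.dt
        return self.x
```

In a CCTV setting, one such filter per coordinate smooths the detected object position and bridges frames where detection briefly fails.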

An intelligent video security system for the tracking of multiple moving objects (복수의 동체 추적을 위한 지능형 영상보안 시스템)

  • Kim, Byung-Chul
    • Journal of Digital Convergence / v.11 no.10 / pp.359-366 / 2013
  • Due to the development and market expansion of image analysis and recognition technology, video security equipment such as CCTV cameras and digital storage devices is required for real-time monitoring systems and intelligent video security systems, which in turn calls for more advanced technologies. A rotatable PTZ camera in a CCTV system has a zoom function, so it can acquire a precise picture; however, it can cause blind spots and cannot monitor two or more moving objects at the same time. This study concerns CCTV systems and video surveillance methods for the intelligent tracking of multiple moving objects. An intelligent video surveillance system is proposed that can accurately shoot broad areas and track multiple objects at the same time, much more effectively than using one fixed camera for an entire area or two or more PTZ cameras.

A Design of Similar Video Recommendation System using Extracted Words in Big Data Cluster (빅데이터 클러스터에서의 추출된 형태소를 이용한 유사 동영상 추천 시스템 설계)

  • Lee, Hyun-Sup;Kim, Jindeog
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.2 / pp.172-178 / 2020
  • To recommend contents, companies generally use collaborative filtering, which takes into account both user preferences and video (item) similarities. Such services primarily aim at user convenience by leveraging personal data such as search keywords and viewing time, and videos are ranked around the keywords assigned to them. However, there is a limit to analyzing video similarity with a small set of keywords, and the problem becomes serious when the assigned keywords do not properly reflect the item. In this paper, I propose a system that identifies the characteristics of a video as it is, without human intervention, and analyzes and recommends similarities between videos. The proposed system analyzes similarity by taking into account all distinct words (morphemes) extracted from the training videos; because the data and the required operations are large in scale, the methods are implemented on a big data cluster.
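The core of such a recommender, comparing two videos by the full set of words extracted from them, reduces to a bag-of-words similarity; the sketch below uses cosine similarity over word counts (a cluster-scale implementation would distribute exactly this computation).

```python
from collections import Counter
import math

def cosine_sim(words_a, words_b):
    """Cosine similarity between two bags of extracted words (morphemes)."""
    a, b = Counter(words_a), Counter(words_b)
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(target_words, candidates, k=2):
    """Return the k candidate video names most similar to the target video."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: cosine_sim(target_words, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

Unlike a fixed keyword list, the similarity here reflects every extracted word, so a mislabeled video is still matched by the rest of its vocabulary.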

Video Scene Detection using Shot Clustering based on Visual Features (시각적 특징을 기반한 샷 클러스터링을 통한 비디오 씬 탐지 기법)

  • Shin, Dong-Wook;Kim, Tae-Hwan;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.47-60 / 2012
  • Video data comes in an unstructured and complex form. As the importance of efficient management and retrieval of video data increases, studies on video parsing based on the visual features of video content have been conducted in order to reconstruct video data into a meaningful structure. Early studies on video parsing focused on splitting video data into shots, but detecting shot boundaries defined by physical boundaries does not consider the semantic associations in video data. Recently, studies that use clustering methods to structure semantically associated video shots into video scenes, defined by semantic boundaries, have been actively pursued. Previous studies on video scene detection try to detect scenes with clustering algorithms based on similarity measures between shots that depend mainly on color features. However, correctly identifying a shot or scene and detecting gradual transitions such as dissolves, fades and wipes is difficult, because the color features of video data are noisy and change abruptly when an unexpected object intervenes. In this paper, to solve these problems, we propose the Scene Detector using Color histogram, corner Edge and Object color histogram (SDCEO), which clusters similar shots belonging to the same event based on visual features including the color histogram, the corner edge and the object color histogram. The SDCEO is notable in that it uses the edge feature together with the color feature, and as a result it effectively detects gradual as well as abrupt transitions. The SDCEO consists of the Shot Bound Identifier and the Video Scene Detector. The Shot Bound Identifier comprises the Color Histogram Analysis step and the Corner Edge Analysis step.
In the Color Histogram Analysis step, SDCEO uses the color histogram feature to organize shot boundaries. The color histogram, which records the percentage of each quantized color among all pixels in a frame, is chosen for its good performance, as also reported in other work on content-based image and video analysis. To organize shot boundaries, SDCEO joins associated sequential frames into shot boundaries by measuring the similarity of the color histograms between frames. In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries using the corner edge feature: it detects associated shot boundaries by comparing the corner edge feature between the last frame of the previous shot boundary and the first frame of the next one. In the Key-frame Extraction step, SDCEO compares each frame with all frames, measures similarity using the Euclidean distance between histograms, and selects as the key-frame the frame most similar to all frames contained in the same shot boundary. The Video Scene Detector clusters associated shots belonging to the same event using hierarchical agglomerative clustering based on visual features including the color histogram and the object color histogram. SDCEO then forms the final video scenes by repeated clustering until the similarity distance between shot boundaries falls below a threshold h. In this paper, we construct a prototype of SDCEO and carry out experiments with manually constructed baseline data; the experimental results, a shot boundary detection precision of 93.3% and a video scene detection precision of 83.3%, are satisfactory.
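The Color Histogram Analysis step can be sketched as a minimal histogram-distance boundary detector; the bin count and threshold below are illustrative, and the corner-edge refinement and object-color clustering stages of SDCEO are omitted.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Normalized histogram of quantized intensities, a stand-in for the
    quantized color histogram used by SDCEO."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def detect_shot_boundaries(frames, thresh=0.5):
    """Declare a shot boundary wherever the Euclidean distance between
    consecutive frame histograms exceeds the threshold."""
    hists = [color_histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if np.linalg.norm(hists[i] - hists[i - 1]) > thresh]
```

A fixed threshold like this is exactly what struggles with dissolves and fades, which is why SDCEO adds the corner-edge check on top of the color histogram.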

A Study on Traffic Analysis and Hierarchical Program Allocation for Distributed VOD Systems (분산 VOD 시스템의 트래픽 분석과 계층적 프로그램 저장에 관한 연구)

  • Lee, Tae-Hoon;Kim, Yong-Deak
    • The Transactions of the Korea Information Processing Society / v.4 no.8 / pp.2080-2091 / 1997
  • It is generally recognized that Video On Demand (VOD) service will become a promising interactive service in emerging broadband integrated services digital networks. A centralized VOD system, in which all programs are stored in a single VOD server linked to each user via exchanges, is applicable when a small number of users enjoys the VOD service. In the case of large service penetration, however, it is very important to solve the problems of bandwidth and load concentration in the central video server (CVS) and the program transmission network. In this paper, the architecture of the video distribution service network is studied, traffic characteristics and models for the VOD system are established, and a program allocation method for video servers is proposed. For this purpose, we present an analysis of the program storage amount in each local video server (LVS), the transmission traffic volume between LVSs, and the link traffic volume between the CVS and the LVSs, as related factors such as demand, the number of LVSs and the vision probability change. A method for determining the storage capacity of the LVSs is also presented on the basis of the tradeoffs among program storage cost, link traffic cost, and transmission cost.

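The storage-versus-traffic tradeoff discussed above can be sketched as a greedy allocation: store at each LVS the programs that save the most CVS link traffic per unit of storage. The cost model and field names here are illustrative assumptions, not the paper's formulation.

```python
def allocate_to_lvs(programs, capacity):
    """Greedy sketch of hierarchical program allocation: rank programs by
    expected transit cost saved per unit of storage, then fill the local
    video server (LVS) until its capacity is exhausted."""
    ranked = sorted(programs,
                    key=lambda p: p["demand"] * p["transit_cost"] / p["size"],
                    reverse=True)
    stored, used = [], 0
    for p in ranked:
        if used + p["size"] <= capacity:
            stored.append(p["name"])
            used += p["size"]
    return stored
```

Programs left unstored remain at the CVS and incur link traffic on every request, which is the quantity the paper's traffic analysis models in detail.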

Removing Shadows for the Surveillance System Using a Video Camera (비디오 카메라를 이용한 감시 장치에서 그림자의 제거)

  • Kim, Jung-Dae;Do, Yong-Tae
    • Proceedings of the KIEE Conference / 2005.05a / pp.176-178 / 2005
  • In the images of a video camera employed for surveillance, detecting targets by extracting the foreground image is of great importance. The foreground regions detected, however, include not only moving targets but also their shadows. This paper presents a novel technique to detect shadow pixels in the foreground image of a video camera. The image characteristics of the video cameras employed, a web-cam and a CCD camera, are first analysed in the HSV color space, and a pixel-level shadow detection technique is proposed based on the analysis. Compared with existing techniques, where unified criteria are applied to all pixels, the proposed technique identifies shadow pixels using the fact that the effect of shadowing on each pixel depends on its brightness in the background image. Such an approach can accommodate local features in an image and maintain consistent performance even in a changing environment. In experiments targeting pedestrians, the proposed technique showed better results than an existing technique.

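The pixel-level HSV test described above can be sketched as follows; the thresholds are illustrative, and the paper's key refinement, making the brightness criterion depend on each pixel's background brightness rather than using one unified range, is indicated only by a comment.

```python
def is_shadow(fg_hsv, bg_hsv, v_lo=0.4, v_hi=0.95, s_tol=0.15, h_tol=30.0):
    """Classify a foreground pixel as shadow: roughly the same hue and
    saturation as the background, but darker by a bounded ratio.
    fg_hsv/bg_hsv are (hue in degrees, saturation 0..1, value 0..1)."""
    fh, fs, fv = fg_hsv
    bh, bs, bv = bg_hsv
    if bv == 0:
        return False
    # The paper's approach would adapt v_lo/v_hi per pixel as a function of
    # the background brightness bv instead of fixing them globally.
    ratio = fv / bv
    h_diff = min(abs(fh - bh), 360.0 - abs(fh - bh))  # circular hue distance
    return v_lo <= ratio <= v_hi and abs(fs - bs) <= s_tol and h_diff <= h_tol
```

Shadow pixels pass the test (same chromaticity, darker), while genuinely different objects fail on hue or saturation and are kept as targets.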

Scalable Big Data Pipeline for Video Stream Analytics Over Commodity Hardware

  • Ayub, Umer;Ahsan, Syed M.;Qureshi, Shavez M.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.4 / pp.1146-1165 / 2022
  • A huge amount of data in the form of videos and images is being produced owing to advancements in sensor technology. The use of low-performance commodity hardware, coupled with resource-heavy image processing and analysis approaches for inferring and extracting actionable insights from this data, poses a bottleneck for timely decision making. The current approach of GPU-assisted and cloud-based video analysis architectures gives significant performance gains, but its usage is constrained by financial considerations and extremely complex architecture-level details. In this paper we propose a data pipeline system that uses open-source tools such as Apache Spark, Kafka and OpenCV, running over commodity hardware, for video stream processing and image processing in a distributed environment. Experimental results show that our proposed approach eliminates the need for GPU-based hardware and cloud computing infrastructure to achieve efficient video stream processing for face detection, with increased throughput, scalability and better performance.
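The fan-out structure of such a pipeline (ingest frames, distribute them to workers, collect results) can be mimicked on one machine with a thread pool. This sketch only illustrates the dataflow: a trivial brightness computation stands in for OpenCV face detection, and Kafka and Spark are left out entirely.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_frame(frame):
    """Placeholder for per-frame work such as OpenCV face detection;
    here it just computes the mean brightness of a flat pixel list."""
    return sum(frame) / len(frame)

def run_pipeline(frames, workers=4):
    """Distribute frames across a worker pool and collect results in order,
    mirroring the produce -> distribute -> process stages of a
    Kafka/Spark/OpenCV pipeline on commodity hardware."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze_frame, frames))
```

In the paper's architecture the same roles are played by Kafka topics (ingest), Spark executors (distribute) and OpenCV tasks (process), spread across commodity machines instead of threads.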