• Title/Summary/Keyword: Shot Boundary Detection (샷 경계 검출)

Search Results: 41

Video Segmentation Using Luminance and Edge Histogram (명도와 에지히스토그램을 이용한 비디오분할)

  • 유헌우;장동식;박진형;이법섭;송광섭
    • Proceedings of the Korean Operations and Management Science Society Conference / 2000.04a / pp.207-210 / 2000
  • The growth of video data has created a need for methods for efficient retrieval, storage, and browsing. This paper proposes a video segmentation technique, the first step in building such a system. Video segmentation is also called shot boundary detection or scene change detection. Here, the similarity between frames is measured using a luminance histogram and the number of edge pixels, and a scene change is declared when the similarity falls below a threshold. Gradual scene changes are detected by comparing the similarity between the current frame and the previous shot-boundary frame. A correlation technique is used to set threshold values that can be applied uniformly to a variety of video data. Experiments show 90% precision and 98% recall for abrupt scene changes, and 59% precision and 75% recall for gradual scene changes.

  • PDF
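
A minimal sketch of the kind of luminance-histogram-plus-edge cut detector described above, assuming OpenCV and NumPy; the equal weighting of the two cues and the threshold value are illustrative assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def frame_similarity(prev_gray, curr_gray, bins=64):
    """Luminance-histogram intersection combined with an edge-count ratio."""
    h1 = cv2.calcHist([prev_gray], [0], None, [bins], [0, 256]).ravel()
    h2 = cv2.calcHist([curr_gray], [0], None, [bins], [0, 256]).ravel()
    hist_sim = np.minimum(h1, h2).sum() / max(h1.sum(), 1.0)   # histogram intersection in [0, 1]
    e1 = cv2.Canny(prev_gray, 100, 200).sum() / 255.0          # number of edge pixels
    e2 = cv2.Canny(curr_gray, 100, 200).sum() / 255.0
    edge_sim = 1.0 if max(e1, e2) == 0 else min(e1, e2) / max(e1, e2)
    return 0.5 * hist_sim + 0.5 * edge_sim                     # illustrative equal weighting

def detect_cuts(path, threshold=0.6):                          # threshold is an assumed value
    cap = cv2.VideoCapture(path)
    cuts, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None and frame_similarity(prev, gray) < threshold:
            cuts.append(idx)                                    # similarity dropped below threshold
        prev, idx = gray, idx + 1
    cap.release()
    return cuts
```

Calling detect_cuts('clip.mp4') returns the indices of frames at which an abrupt cut is declared.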

Implementation of Intelligent Image Surveillance System based Context (컨텍스트 기반의 지능형 영상 감시 시스템 구현에 관한 연구)

  • Moon, Sung-Ryong;Shin, Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.3 / pp.11-22 / 2010
  • This paper studies the implementation of an intelligent image surveillance system based on context information, and addresses the temporal-spatial constraints that make real-time processing difficult. We propose a scene analysis algorithm that can run in real time in various environments on low-resolution video (320*240) at 30 frames per second. The proposed algorithm discards the background and meaningless frames among the continuous frames, and uses a wavelet transform and an edge histogram to detect shot boundaries. Next, a representative key frame is selected within each shot by a key-frame selection parameter, and an edge histogram and mathematical morphology are used to extract only the motion region. Using the vertical-to-horizontal ratio of the detected object's motion region and the angles of its feature points, we define four basic contexts: standing, lying, sitting, and walking. Finally, scene analysis is carried out with a simple context model composed of a general context and an emergency context, estimating the connection status of each context, and a system is configured to check whether real-time processing is possible. The proposed system achieves a recognition rate of 92.5% on low-resolution video with an average processing speed of 0.74 seconds per frame, confirming that real-time processing is feasible.
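
As a rough illustration of the motion-region and posture-context step above (not the authors' feature-point method), the sketch below uses frame differencing, mathematical morphology, and the bounding box's vertical-to-horizontal ratio; OpenCV 4 is assumed and the ratio cut-offs are made-up values.

```python
import cv2

def classify_posture(w, h):
    """Coarse posture label from the bounding box's height-to-width ratio.
    The four labels follow the paper's contexts; the cut-off values are assumptions."""
    ratio = h / float(w)
    if ratio > 1.8:
        return "standing"
    if ratio < 0.6:
        return "lying"
    if ratio < 1.2:
        return "sitting"
    return "walking"

def motion_regions(prev_gray, curr_gray, min_area=500):
    """Frame differencing + morphological closing to keep only moving regions (OpenCV 4 API)."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    return [(x, y, w, h, classify_posture(w, h)) for (x, y, w, h) in boxes]
```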

Video Segmentation Using Audio and Image Information (오디오와 영상 정보를 이용한 비디오 세그먼테이션)

  • 정해준;정성환
    • Proceedings of the Korean Information Science Society Conference / 2000.10b / pp.470-472 / 2000
  • This paper studies video segmentation that uses audio information together with image information. When detecting scene breaks in video, which carries a large amount of information, camera pans and the presence of several different shots within a scene make effective detection difficult with image information alone. To address this problem, the audio information in the video is used as well. Experiments on a video database of about 4,000 image frames and about 30,000 audio frames, drawn from TV programs in three different genres (news, commercials, and sports), confirmed better performance than when image information alone was used. Color histograms and DC coefficients were used as image features, and SR (silence ratio), VSTD (volume standard deviation), and NPR (non-pitch ratio) were used as audio features.

  • PDF
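
A small sketch of two of the audio features named above, SR (silence ratio) and VSTD (volume standard deviation), assuming NumPy; the frame length and silence threshold are illustrative values, and NPR is omitted because it needs a pitch tracker.

```python
import numpy as np

def audio_features(samples, frame_len=1024, silence_thresh=0.01):
    """SR and VSTD over one audio clip.
    `samples` is a 1-D array of normalized samples in [-1, 1]; frame_len and
    silence_thresh are illustrative, not values taken from the paper."""
    samples = np.asarray(samples, dtype=np.float64)
    n = len(samples) // frame_len
    frames = samples[: n * frame_len].reshape(n, frame_len)
    volume = np.sqrt((frames ** 2).mean(axis=1))         # per-frame RMS volume
    sr = float((volume < silence_thresh).mean())          # fraction of near-silent frames
    vstd = float(volume.std() / (volume.max() + 1e-9))    # volume deviation, normalized by peak
    return {"SR": sr, "VSTD": vstd}
```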

Fast Scene Change Detection Algorithm in Compressed Video by a phased-approach Method (압축 비디오에서 단계적 접근방법에 의한 빠른 장면전환검출 알고리듬)

  • 이재승;천이진;윤정오
    • Journal of Korea Society of Industrial Information Systems / v.6 no.3 / pp.115-122 / 2001
  • Scene change detection is an important step for video indexing and retrieval. This paper proposes a phased (step-by-step) algorithm for fast and accurate detection of abrupt scene changes in the MPEG compressed domain with minimal decoding requirements and computational effort. The proposed method compares two successive I-frames to locate the GOP in which a scene change occurs, and then uses the macroblock coding-type information contained in B-frames to detect the exact frame where the scene change occurred. The algorithm has the advantages of speed, simplicity, and accuracy, and it requires less storage. The experimental results demonstrate that the proposed algorithm has better detection performance, in terms of precision and recall, than the existing method that uses all DC images.

  • PDF
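
The two-phase control flow described above can be sketched as follows; the `gops`, `iframe_dc`, `bframes`, and `mb_types` objects are hypothetical stand-ins for a compressed-domain MPEG parser, and both thresholds are illustrative assumptions.

```python
import numpy as np

def dc_difference(dc_a, dc_b):
    """Mean absolute difference between two I-frame DC images."""
    return float(np.abs(np.asarray(dc_a, float) - np.asarray(dc_b, float)).mean())

def locate_cuts(gops, gop_thresh=15.0, fwd_ratio_thresh=0.7):
    """Phase 1: flag a GOP whose I-frame DC image differs strongly from the previous GOP's.
    Phase 2: inside that GOP, a B-frame coded almost entirely from the past reference
    (forward prediction only) suggests the cut lies just after it. The `gops` data model
    and both thresholds are assumptions, not values from the paper."""
    cuts = []
    for k in range(1, len(gops)):
        if dc_difference(gops[k - 1].iframe_dc, gops[k].iframe_dc) < gop_thresh:
            continue                                  # phase 1: no scene change in this GOP
        for frame in gops[k].bframes:                 # phase 2: scan B-frames in display order
            mb = frame.mb_types                       # e.g. {"forward": ..., "backward": ..., "intra": ...}
            total = sum(mb.values()) or 1
            if mb.get("forward", 0) / total > fwd_ratio_thresh:
                cuts.append(frame.index)
                break
    return cuts
```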

An Anchor-frame Detection Algorithm in MPEG News Data using DC component extraction and Color Clustering (MPEG으로 압축된 뉴스 데이터에서의 DC성분 추출과 컬러 클러스터링을 이용한 앵커 프레임 검색 기법)

  • 정정훈;이근섭;오화종;최병욱
    • Proceedings of the IEEK Conference / 2000.09a / pp.729-732 / 2000
  • Effective retrieval from large volumes of video data requires an indexing process. Effective indexing in turn requires detecting cuts (scene change points), the physical boundaries within video data organized into scenes, the semantic units, and extracting a key frame from each shot. In this paper, DC component extraction, a binary search technique, and color clustering are used to retrieve anchor frames, the key frames of news video, effectively. To verify the proposed method, it was applied to 47 minutes and 10 seconds of MPEG-2 compressed news video, yielding 91.3% precision and 84.0% recall and demonstrating the effectiveness of the method.

  • PDF
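
A sketch of the color-clustering idea only (the DC extraction and binary search steps are not reproduced), assuming OpenCV, NumPy, and scikit-learn: anchor shots in a newscast recur, so the most frequent color cluster among the keyframes is taken as the anchor candidate set. The cluster count is an assumed value.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def color_hist(img, bins=8):
    """Coarse hue/saturation histogram of a keyframe (BGR input)."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256]).ravel()
    return h / max(h.sum(), 1.0)

def anchor_candidates(keyframes, n_clusters=5):
    """Cluster keyframe color histograms; the most frequently recurring cluster is a
    plausible anchor-shot cluster, since the studio set repeats throughout a newscast."""
    feats = np.array([color_hist(f) for f in keyframes])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
    anchor_label = np.bincount(labels).argmax()
    return [i for i, lab in enumerate(labels) if lab == anchor_label]
```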

The Implementing a Color, Edge, Optical Flow based on Mixed Algorithm for Shot Boundary Improvement (샷 경계검출 개선을 위한 칼라, 엣지, 옵티컬플로우 기반의 혼합형 알고리즘 구현)

  • Park, Seo Rin;Lim, Yang Mi
    • Journal of Korea Multimedia Society / v.21 no.8 / pp.829-836 / 2018
  • This study attempts to detect shot boundaries in films (or dramas) based on the length of a sequence. Because films and dramas use scene change effects heavily, the issues raised by these effects are more diverse than in surveillance cameras, sports videos, medical care, and security. Visual techniques used in films are focused on human aesthetics, so errors in shot boundary detection are difficult to resolve with the methods employed for surveillance cameras. To characterize the errors arising from scene change effects between images and to resolve them, a mixed algorithm based on a color histogram, an edge histogram, and optical flow was implemented. The shot boundary data from this study will later be used to analyze the composition of meaningful shots within sequences.
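
One plausible way to mix the three cues named above into a single boundary score, assuming OpenCV and NumPy; the weights, the flow normalization, and the Farneback parameters are illustrative, and the paper's actual decision logic may differ.

```python
import cv2
import numpy as np

def mixed_dissimilarity(prev_bgr, curr_bgr, weights=(0.4, 0.3, 0.3)):
    """Weighted mix of color-histogram, edge, and optical-flow cues (weights assumed)."""
    # Color cue: 1 - histogram correlation
    h1 = cv2.calcHist([prev_bgr], [0, 1, 2], None, [8, 8, 8], [0, 256, 0, 256, 0, 256])
    h2 = cv2.calcHist([curr_bgr], [0, 1, 2], None, [8, 8, 8], [0, 256, 0, 256, 0, 256])
    d_color = 1.0 - cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

    g1 = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

    # Edge cue: fraction of edge pixels that do not overlap between the two frames
    e1 = cv2.Canny(g1, 100, 200) > 0
    e2 = cv2.Canny(g2, 100, 200) > 0
    union = np.logical_or(e1, e2).sum()
    overlap = np.logical_and(e1, e2).sum()
    d_edge = (1.0 - overlap / union) if union else 0.0

    # Flow cue: mean dense optical-flow magnitude, crudely normalized to [0, 1]
    flow = cv2.calcOpticalFlowFarneback(g1, g2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    d_flow = min(float(np.linalg.norm(flow, axis=2).mean()) / 10.0, 1.0)

    return weights[0] * d_color + weights[1] * d_edge + weights[2] * d_flow
```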

Video Shot Boundary Detection Using Correlation of Luminance and Edge Information (명도와 에지정보의 상관계수를 이용한 비디오샷 경계검출)

  • Yu, Heon-U;Jeong, Dong-Sik;Na, Yun-Gyun
    • Journal of Institute of Control, Robotics and Systems / v.7 no.4 / pp.304-308 / 2001
  • The increase in video data creates a demand for efficient retrieval, storage, and browsing technologies. In this paper, a video segmentation method (also called scene change detection or shot boundary detection) for building such systems is proposed. For abrupt cut detection, inter-frame similarities are computed using luminance and edge histograms, and a cut is declared when the similarities fall below the predetermined threshold values. Gradual scene change detection is based on the similarities between the current frame and the previous shot-boundary frame. A correlation method is used to obtain universal threshold values that can be applied to various video data. Experimental results show that the proposed method achieves 90% precision and 98% recall for abrupt cuts, and 59% precision and 79% recall for gradual changes.

  • PDF
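
A sketch of the correlation-based similarity idea, assuming OpenCV and NumPy; the gradual-change rule and its threshold band are simplified illustrations rather than the published procedure.

```python
import cv2
import numpy as np

def histogram_correlation(frame_a_gray, frame_b_gray, bins=64):
    """Pearson correlation between the luminance histograms of two frames;
    values near 1 mean similar content, a sharp drop suggests a shot boundary."""
    h1 = cv2.calcHist([frame_a_gray], [0], None, [bins], [0, 256]).ravel()
    h2 = cv2.calcHist([frame_b_gray], [0], None, [bins], [0, 256]).ravel()
    return float(np.corrcoef(h1, h2)[0, 1])

def looks_gradual(sims_to_last_boundary, high=0.85, low=0.55):
    """Toy rule for a gradual transition: similarity to the previous shot-boundary frame
    drifts steadily from above `high` to below `low` (threshold band is illustrative)."""
    s = sims_to_last_boundary
    return len(s) > 3 and s[0] > high and s[-1] < low and all(a >= b for a, b in zip(s, s[1:]))
```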

Fast Scene Change Detection Algorithm in MPEG Compressed Video by Minimal Decoding (MPEG으로 압축된 비디오에서 최소 복호화에 의한 빠른 장면전환검출 알고리듬)

  • Kim, Gang-Uk;Lee, Jae-Seung;Kim, Jong-Hun;Hwang, Chan-Sik
    • The KIPS Transactions:PartB / v.9B no.3 / pp.343-350 / 2002
  • Scene change detection, which involves finding a cut between two consecutive shots, is an important step for video indexing and retrieval. This paper proposes an algorithm for fast and accurate detection of abrupt scene changes in the MPEG compressed domain with minimal decoding requirements and computational effort. The proposed method compares the DC images of two successive I-frames to find the GOP (group of pictures) that contains a scene change, and then uses the macroblock coding-type information contained in B-frames to detect the exact frame where the scene change occurred. The experimental results demonstrate that the proposed algorithm has better detection performance, in terms of precision and recall, than the existing method that uses all DC images. The algorithm has the advantages of speed, simplicity, and accuracy, and it requires less storage.
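
A sketch of what a DC image is, assuming NumPy and a decoded luma plane for clarity; in the compressed domain the DC coefficients are read from the I-frame bitstream without reconstructing pixels, which is where the "minimal decoding" saving comes from.

```python
import numpy as np

def dc_image(luma_plane):
    """DC image of a frame: one value per 8x8 block. For the 8x8 DCT used in MPEG the
    DC coefficient is 8x the block mean, so the block mean is used here as a stand-in."""
    h, w = luma_plane.shape
    h8, w8 = h - h % 8, w - w % 8
    blocks = luma_plane[:h8, :w8].astype(np.float64).reshape(h8 // 8, 8, w8 // 8, 8)
    return blocks.mean(axis=(1, 3))

def dc_distance(dc_a, dc_b):
    """Mean absolute difference between consecutive I-frame DC images; a large value
    flags the GOP that should then be examined via its B-frame macroblock types."""
    return float(np.abs(dc_a - dc_b).mean())
```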

Automatic Summary Method of Linguistic Educational Video Using Multiple Visual Features (다중 비주얼 특징을 이용한 어학 교육 비디오의 자동 요약 방법)

  • Han Hee-Jun;Kim Cheon-Seog;Choo Jin-Ho;Ro Yong-Man
    • Journal of Korea Multimedia Society / v.7 no.10 / pp.1452-1463 / 2004
  • The need for automatic video summarization is increasing as bi-directional broadcasting content grows and user requests and preferences in the bi-directional broadcast environment diversify. Automatic video summarization is also needed by service providers for efficient management and use of large amounts of content. In this paper, we propose a method that automatically generates a content-based summary of linguistic (language-education) videos. First, shot boundaries and keyframes are extracted from the linguistic educational video, and then multiple low-level visual features are computed. Next, the semantic parts of the video (explanation part, dialog part, text-based part) are identified using the extracted visual features. Finally, an XML document describing the summary information is produced based on the Hierarchical Summary architecture of MPEG-7 MDS (Multimedia Description Scheme). Experimental results show that the proposed algorithm provides reasonable performance for the automatic summarization of linguistic educational videos. We verified that the proposed method is useful for a video summary system that provides various services as well as for the management of educational content.

  • PDF
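
A minimal sketch of writing detected segments into an MPEG-7-style hierarchical summary document using only the Python standard library; the element names follow MPEG-7 MDS loosely and the segment data below is invented for illustration.

```python
import xml.etree.ElementTree as ET

def build_summary(segments):
    """`segments` is a list of (label, start_frame, end_frame) tuples, e.g. the
    explanation / dialog / text-based parts found from the visual features."""
    root = ET.Element("Mpeg7")
    summary = ET.SubElement(root, "HierarchicalSummary")
    for label, start, end in segments:
        seg = ET.SubElement(summary, "HighlightSegment", name=label)
        time = ET.SubElement(seg, "MediaTime")
        ET.SubElement(time, "MediaTimePoint").text = str(start)
        ET.SubElement(time, "MediaDuration").text = str(end - start)
    return ET.tostring(root, encoding="unicode")

# Invented example segments, frame numbers only for illustration
print(build_summary([("Explanation", 0, 900), ("Dialog", 900, 2100), ("Text-based", 2100, 2700)]))
```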

Emotion-based Video Scene Retrieval using Interactive Genetic Algorithm (대화형 유전자 알고리즘을 이용한 감성기반 비디오 장면 검색)

  • Yoo Hun-Woo;Cho Sung-Bae
    • Journal of KIISE:Computing Practices and Letters / v.10 no.6 / pp.514-528 / 2004
  • An emotion-based video scene retrieval algorithm is proposed in this paper. First, abrupt and gradual shot boundaries are detected in a video clip representing a specific story. Then, five video features ('average color histogram', 'average brightness', 'average edge histogram', 'average shot duration', and 'gradual change rate') are extracted from each video, and a mapping between these features and the emotional space the user has in mind is obtained by an interactive genetic algorithm. Once the user has selected, from an initial population of videos, those that convey the target emotion, the feature vectors of the selected videos are treated as chromosomes and genetic crossover is applied to them. Next, the chromosomes produced by crossover are compared with the feature vectors of the database videos using a similarity function, and the most similar videos are retrieved as the next generation. By iterating this procedure, a population of videos matching the emotion the user has in mind is retrieved. To show the validity of the proposed method, six emotion categories ('action', 'excitement', 'suspense', 'quietness', 'relaxation', and 'happiness') are used in the experiments. On more than 300 commercial videos, the retrieval results show 70% effectiveness on average.
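
One generation of the interactive-genetic-algorithm loop described above can be sketched as follows, assuming NumPy; the single-point crossover, the Euclidean distance measure, and the population handling are simplified assumptions rather than the paper's exact design.

```python
import numpy as np

def iga_generation(selected, database, n_results=10, rng=None):
    """One round of retrieval: `selected` holds the feature vectors of videos the user
    marked as matching the target emotion, `database` holds all candidate videos'
    feature vectors (both assumed normalized, with the five features listed above)."""
    rng = rng or np.random.default_rng(0)
    selected = np.asarray(selected, dtype=float)   # shape (n_selected, d)
    database = np.asarray(database, dtype=float)   # shape (n_videos, d)
    offspring = []
    for _ in range(len(selected)):
        a, b = selected[rng.integers(len(selected), size=2)]
        cut = rng.integers(1, selected.shape[1])   # single-point crossover position
        offspring.append(np.concatenate([a[:cut], b[cut:]]))
    offspring = np.stack(offspring)
    # Distance of every database video to its closest offspring chromosome
    dist = np.linalg.norm(database[None, :, :] - offspring[:, None, :], axis=2).min(axis=0)
    return np.argsort(dist)[:n_results]            # indices of the next generation's videos
```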