• Title/Summary/Keyword: Frame Detection


Cut Detection Using Color Histogram and Energy Vector in Wavelet Transform Domain (웨이블릿 변환영역에서 칼라 히스토그램과 에너지 벡터를 이용한 컷 검출)

  • 김수정;정성환
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2000.10b
    • /
    • pp.467-469
    • /
    • 2000
  • This paper proposes a cut detection method that uses the color histogram and energy vectors in the wavelet transform domain. Most existing cut detection methods detect cuts using features from either the spatial domain or the transform domain alone. In this paper, however, the color histogram of the LL band, which preserves spatial-domain characteristics even in the wavelet transform domain, is considered together with the energy values of the LH and HL bands as transform-domain characteristics. Since recent image compression standards adopt wavelet-based compression, the proposed method can be applied to wavelet-compressed video without decompressing it. To evaluate the proposed method, we experimented on about 10,000 frames from a variety of TV programs in five genres, including commercials, news, sports, and movies; the method achieved a cut detection performance of about 90% recall and about 94% precision.

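The idea above can be sketched in a few lines of NumPy. Here a one-level Haar decomposition stands in for the paper's wavelet transform, and the histogram and energy thresholds are illustrative placeholders, not the authors' tuned values:

```python
import numpy as np

def haar_decompose(channel):
    """One-level 2D Haar decomposition of a single channel (H and W even)."""
    a = channel[0::2, 0::2].astype(np.float64)
    b = channel[0::2, 1::2].astype(np.float64)
    c = channel[1::2, 0::2].astype(np.float64)
    d = channel[1::2, 1::2].astype(np.float64)
    ll = (a + b + c + d) / 4.0   # low-low band: spatial approximation
    lh = (a + b - c - d) / 4.0   # low-high band: horizontal detail
    hl = (a - b + c - d) / 4.0   # high-low band: vertical detail
    return ll, lh, hl

def frame_features(frame, bins=16):
    """Color histogram of the LL band plus LH/HL detail energy per channel."""
    hists, energies = [], []
    for ch in range(frame.shape[2]):
        ll, lh, hl = haar_decompose(frame[:, :, ch])
        h, _ = np.histogram(ll, bins=bins, range=(0, 256))
        hists.append(h / h.sum())                        # normalized histogram
        energies.append(np.sum(lh ** 2) + np.sum(hl ** 2))
    return np.concatenate(hists), np.array(energies)

def is_cut(prev, curr, hist_thresh=0.5, energy_thresh=2.0):
    """Flag a cut when either the LL histogram or the detail energy jumps."""
    h1, e1 = frame_features(prev)
    h2, e2 = frame_features(curr)
    hist_diff = np.abs(h1 - h2).sum()                    # L1 histogram distance
    energy_ratio = np.abs(e2 - e1).sum() / (e1.sum() + 1e-9)
    return hist_diff > hist_thresh or energy_ratio > energy_thresh
```

In a wavelet-compressed stream the LL/LH/HL bands would be read directly from the bitstream instead of being recomputed, which is the decompression saving the abstract refers to.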

A Study on Motion detection for the Surveillance System based on Mobile (모바일 기반의 감시 시스템 구현을 위한 동작 검출 기법에 관한 연구)

  • 김형균;고석만;오무송
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.443-446
    • /
    • 2004
  • This paper builds a real-time mobile surveillance system by applying a motion detection technique to a small video camera, so that the detected surveillance video can be monitored in real time in a mobile environment. As the motion detection technique, we propose a method that compares block-level feature values, improving on the conventional technique of comparing pixel values of the difference image. Since the proposed method does not use a frame memory for image processing and compares only the block-level feature values of the reference image and the current image, its processing speed is markedly improved. The mobile client for transmitting the extracted surveillance video is implemented with the WIPI SDK, the domestic standard mobile platform specification in Korea.

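A minimal sketch of the block-level comparison described above, assuming grayscale frames; the block size and threshold are arbitrary illustrative choices:

```python
import numpy as np

def block_means(gray, block=16):
    """One mean-intensity feature per block, so only a tiny feature
    array (not a full frame buffer) needs to be kept per frame."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of block
    g = gray[:h, :w].astype(np.float64)
    return g.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def motion_blocks(reference, current, block=16, thresh=10.0):
    """Boolean map of blocks whose mean intensity changed beyond thresh."""
    return np.abs(block_means(reference, block) - block_means(current, block)) > thresh
```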

Fast Viola-Jones Object Detector using Fast Rejection and High Efficient Feature Selection (빠른 리젝션과 고효율 특징선택을 이용한 빠른 Viola-Jones 물체 검출기)

  • Park, Byeong-Ju;Lee, Jae-Heung;Lee, Gwang-Ho
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2013.11a
    • /
    • pp.1343-1346
    • /
    • 2013
  • This study proposes a training algorithm that improves the existing Viola-Jones object detection framework so that each feature carries higher efficiency and non-target sub-windows are rejected faster. Because the object detector produced by the training rejects sub-windows quickly up to a certain threshold, the number of computations per sub-window is reduced. Since it shares the same framework as the original Viola-Jones detector, recognition performance is unaffected. Measuring the number of feature evaluations per sub-window on the MIT-CMU test set, we confirmed that it dropped to 57% of the original count, improving detection speed by about 71%.
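The early-rejection behaviour of such an attentional cascade can be illustrated generically. This is not the authors' trained detector; the stage functions and thresholds below are toy stand-ins that only demonstrate why faster rejection lowers the per-window feature count:

```python
import numpy as np
from typing import Callable, List, Tuple

def cascade_detect(window: np.ndarray,
                   stages: List[Tuple[Callable[[np.ndarray], float], float]]
                   ) -> Tuple[bool, int]:
    """Run a sub-window through a cascade of (score_fn, threshold) stages.
    Returns (accepted, stages_evaluated): a window rejected by an early
    stage never pays for the later, more expensive stages."""
    evaluated = 0
    for score_fn, threshold in stages:
        evaluated += 1
        if score_fn(window) < threshold:
            return False, evaluated      # rejected early; remaining stages skipped
    return True, evaluated
```

Averaged over the mostly-negative sub-windows of a scan, pushing rejections toward the earliest stages directly reduces the mean evaluations per sub-window, which is the quantity the abstract reports dropping to 57%.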

The moving object detection for moving picture with gaussian noise (프레임간 가우시안 잡음이 있는 동영상에서의 움직임 객체 검출)

  • Kim, dong-woo;Song, young-jun;Kim, ae-kyeong;Ahn, jae-hyeong
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2009.05a
    • /
    • pp.839-842
    • /
    • 2009
  • In general, the differential image between frames is used for moving object detection, but it is difficult to detect objects accurately with the differential image alone. In this paper, we propose a method that overcomes the noise generated by the camera, the grabber card, or weather conditions, and extracts large moving objects such as people or vehicles. The proposed method applies morphological filtering and binarization to the noisy image to reduce errors. We expect it to be applicable to a real-time moving object detection system in foggy conditions, going beyond the limits of object detection methods based only on the differential image.

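A rough sketch of the pipeline the abstract describes: frame differencing, binarization, then a morphological opening to suppress isolated noise pixels. The 3x3 structuring element and the difference threshold are assumptions, not the paper's parameters:

```python
import numpy as np

def binary_erode(mask):
    """3x3 erosion: a pixel survives only if its whole 3x3 neighbourhood is set."""
    padded = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy:1 + dy + mask.shape[0],
                          1 + dx:1 + dx + mask.shape[1]]
    return out

def binary_dilate(mask):
    """3x3 dilation: a pixel is set if any 3x3 neighbour is set."""
    padded = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + mask.shape[0],
                          1 + dx:1 + dx + mask.shape[1]]
    return out

def moving_object_mask(prev, curr, thresh=30):
    """Binarize the frame difference, then open (erode then dilate)
    so single-pixel noise vanishes while large objects survive."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > thresh
    return binary_dilate(binary_erode(mask))     # morphological opening
```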

Robust Real-Time Lane Detection in Luminance Variation Using Morphological Processing (형태학적 처리를 이용한 밝기 변화에 강인한 실시간 차선 검출)

  • Kim, Kwan-Young;Kim, Mi-Rim;Kim, In-Kyu;Hwang, Seung-Jun;Beak, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • /
    • v.16 no.6
    • /
    • pp.1101-1108
    • /
    • 2012
  • In this paper, we propose an algorithm for real-time lane detection that is robust to luminance variation, using morphological image processing and edge-based region segmentation. To apply the most appropriate threshold value, an adaptive threshold is computed for every frame, and a perspective transform is applied to correct image distortion. We then designate an ROI so that only the lanes are detected and establish criteria to limit its extent. We compared accuracy and speed with and without the morphological processing. Experimental results show that, with the morphological method, the proposed algorithm achieves a detection rate of 98.8% at 36.72 ms per frame.
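The per-frame adaptive thresholding step can be sketched as follows; the block size and offset are illustrative choices, and this omits the perspective transform and ROI stages the abstract also describes:

```python
import numpy as np

def adaptive_binarize(gray, block=16, offset=10.0):
    """Threshold each block against its own mean, so a global brightness
    shift between frames does not change which pixels pass."""
    h, w = gray.shape
    h2, w2 = h - h % block, w - w % block        # crop to a multiple of block
    g = gray[:h2, :w2].astype(np.float64)
    blocks = g.reshape(h2 // block, block, w2 // block, block)
    local_mean = blocks.mean(axis=(1, 3), keepdims=True)
    return (blocks > local_mean + offset).reshape(h2, w2)
```

Because the threshold is recomputed from each block's own statistics, the same bright lane pixels are selected even when overall illumination changes, which is the robustness property the paper targets.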

Video Signature using Spatio-Temporal Information for Video Copy Detection (동영상 복사본 검출을 위한 시공간 정보를 이용한 동영상 서명 - 동심원 구획 기반 서술자를 이용한 동영상 복사본 검출 기술)

  • Cho, Ik-Hwan;Oh, Weon-Geun;Jeong, Dong-Seok
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.607-611
    • /
    • 2008
  • This paper proposes a new video signature using spatio-temporal information for copy detection. The proposed video copy detection method is based on concentric-circle partitioning of each key frame. First, key frames are extracted periodically from the whole video using temporal bilinear interpolation, and each frame is partitioned into concentric circles. For the partitioned sub-regions, four feature distributions are obtained from the relations between sub-regions: average intensity and its difference, symmetric difference, and circular difference distributions. Finally, these feature distributions are converted into a binary signature using a simple hash function and merged together. For the proposed video signature, similarity is computed with a simple Hamming distance, so matching is very fast. Experimental results show that the proposed method achieves a high detection success ratio of 97.4% on average across various modifications. We therefore expect the proposed method to be widely applicable to video copy detection.

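The signature-matching side is straightforward to sketch: quantize a feature vector to bits, then compare signatures by Hamming distance. The median-based binarization here is a placeholder for the paper's hash function, and the distance threshold is arbitrary:

```python
import numpy as np

def binary_signature(features):
    """Quantize a real-valued feature vector into bits:
    1 where the feature exceeds the vector's median (placeholder hash)."""
    return (features > np.median(features)).astype(np.uint8)

def hamming_distance(sig_a, sig_b):
    """Number of differing bits between two signatures."""
    return int(np.count_nonzero(sig_a != sig_b))

def is_copy(sig_query, sig_ref, max_distance=8):
    """Two frames match when their signatures differ in only a few bits."""
    return hamming_distance(sig_query, sig_ref) <= max_distance
```

Hamming distance over packed bits is a handful of XOR/popcount operations per comparison, which is what makes this kind of signature matching fast at database scale.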

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.7
    • /
    • pp.159-170
    • /
    • 2009
  • It is very important to extract expression data and capture face images from video for online 3D face animation. Recently, there has been much research on vision-based approaches that capture an actor's expression in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and tracks a face and expression data from real-time video input. Our system proceeds in three steps: face detection, facial feature extraction, and feature tracking. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression, extracting 10 feature points from the eye and lip areas in accordance with the FAPs defined in MPEG-4. Then we track the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can track expression data at about 8 fps.
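The first step, YCbCr skin detection, might look like the following sketch. The BT.601 conversion is standard; the Cb/Cr bounds are common literature values, not necessarily the ones the authors used:

```python
import numpy as np

def skin_mask(rgb):
    """Boolean skin mask from an RGB image (H x W x 3, 0-255).
    Converts to Cb/Cr (ITU-R BT.601) and keeps pixels inside a
    commonly used skin chrominance box."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Assumed skin box: 77 <= Cb <= 127, 133 <= Cr <= 173
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

Working in Cb/Cr rather than RGB separates chrominance from luminance, which is why the mask tolerates moderate brightness changes.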

Robust Feature Selection and Shot Change Detection Method Using the Neural Networks (강인한 특징 변수 선별과 신경망을 이용한 장면 전환점 검출 기법)

  • Hong, Seung-Bum;Hong, Gyo-Young
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.7
    • /
    • pp.877-885
    • /
    • 2004
  • In this paper, we propose an improved shot change detection method that uses robust features selected from multiple features together with a neural network. Previous shot change detection methods usually used a single feature and a fixed threshold between consecutive frames. However, contents such as color, shape, background, and texture all change simultaneously at shot change points in a video sequence. Therefore, we detect shot changes more effectively by using robust features that complement one another, rather than a single feature. We use CART (classification and regression tree), a typical data mining method, to select the robust features, and a backpropagation neural network to determine the threshold for each selected feature. To evaluate the robust feature selection, we compare the proposed method with PCA (principal component analysis), a typical feature selection method. Experimental results show that our method performs better than the PCA-based method.

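Extracting several complementary inter-frame features, as the abstract describes, could look like this sketch. The three features here are generic examples only; the paper's CART-selected feature set and neural-network thresholding are not reproduced:

```python
import numpy as np

def shot_change_features(prev, curr, bins=16):
    """Several complementary inter-frame features for a learned classifier:
    histogram difference, mean pixel difference, and a crude texture change."""
    h1, _ = np.histogram(prev, bins=bins, range=(0, 256))
    h2, _ = np.histogram(curr, bins=bins, range=(0, 256))
    hist_diff = np.abs(h1 - h2).sum() / prev.size          # color content change
    mean_abs_diff = np.abs(curr.astype(np.float64)
                           - prev.astype(np.float64)).mean()  # pixel-level change
    edge_like = (np.abs(np.diff(curr.astype(np.float64), axis=1)).mean()
                 - np.abs(np.diff(prev.astype(np.float64), axis=1)).mean())
    return np.array([hist_diff, mean_abs_diff, edge_like])
```

A classifier trained on vectors like these replaces the single fixed threshold that the abstract identifies as the weakness of earlier methods.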

Content-based Shot Boundary Detection from MPEG Data using Region Flow and Color Information (영역 흐름 및 칼라 정보를 이용한 MPEG 데이타의 내용 기반 셧 경계 검출)

  • Kang, Hang-Bong
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.4
    • /
    • pp.402-411
    • /
    • 2000
  • Detecting shot boundaries in video data is an important step in video indexing and retrieval. Some approaches have been proposed that detect shot changes by computing color histogram differences or the variances of DCT coefficients. However, these approaches do not consider the content or meaningful features of the image data that are useful in high-level video processing. In particular, it is desirable to detect these features from compressed video data because this requires less processing overhead. In this paper, we propose a new method to detect shot boundaries from MPEG data using region flow and color information. First, we reconstruct DC images and compute region flow information and color histogram differences from HSV-quantized images. Then, we find the points at which the region flow is discontinuous or the color histogram difference is high. Finally, we decide which of those points are shot boundaries according to the proposed algorithm.

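The color-histogram half of the method can be sketched on DC images assumed to be already converted to HSV (one value per 8x8 block, which is why no full decompression is needed); the bin counts are illustrative:

```python
import numpy as np

def quantized_hist(hsv_dc, bins=(8, 4, 4)):
    """Normalized quantized HSV histogram of a DC image.
    Expects H in degrees [0, 360) and S, V in [0, 1]."""
    h = np.clip((hsv_dc[..., 0] / 360.0 * bins[0]).astype(int), 0, bins[0] - 1)
    s = np.clip((hsv_dc[..., 1] * bins[1]).astype(int), 0, bins[1] - 1)
    v = np.clip((hsv_dc[..., 2] * bins[2]).astype(int), 0, bins[2] - 1)
    idx = (h * bins[1] + s) * bins[2] + v          # joint bin index
    hist = np.bincount(idx.ravel(), minlength=bins[0] * bins[1] * bins[2])
    return hist / hist.sum()

def hist_difference(prev_dc, curr_dc):
    """L1 distance between quantized histograms of consecutive DC images."""
    return np.abs(quantized_hist(prev_dc) - quantized_hist(curr_dc)).sum()
```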

Development of Recognition Application of Facial Expression for Laughter Therapy on Smartphone (스마트폰에서 웃음 치료를 위한 표정인식 애플리케이션 개발)

  • Kang, Sun-Kyung;Li, Yu-Jie;Song, Won-Chang;Kim, Young-Un;Jung, Sung-Tae
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.4
    • /
    • pp.494-503
    • /
    • 2011
  • In this paper, we propose a facial expression recognition application for laughter therapy on a smartphone. It detects the face region in the image from the smartphone's front camera using the AdaBoost face detection algorithm, and then detects the lip region within the detected face. From the next frame onward, it does not re-detect the face but instead tracks the lip region detected in the previous frame using the three-step block matching algorithm. Because the size of the detected lip image varies with the distance between the camera and the user, the lip image is scaled to a fixed size. The effect of illumination variation is then minimized by applying bilateral-symmetry and histogram-matching illumination normalization. Finally, lip eigenvectors are computed using PCA (principal component analysis) and the laughter expression is recognized with a multilayer perceptron neural network. Experimental results show that the proposed method processes 16.7 frames per second and that the proposed illumination normalization reduces illumination variation better than existing methods, yielding better recognition performance.
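The three-step block matching search used for lip tracking can be sketched as follows; the block size, search start, and SAD cost function are standard choices for this algorithm, not necessarily the authors' exact settings:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def three_step_search(ref_block, frame, start, step=4):
    """Three-step block matching: probe 9 candidates around the current
    centre, move to the best one, halve the step (4 -> 2 -> 1).
    Returns the (row, col) of the best-matching block position."""
    bh, bw = ref_block.shape
    cy, cx = start
    while step >= 1:
        best, best_cost = (cy, cx), None
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= frame.shape[0] - bh and 0 <= x <= frame.shape[1] - bw:
                    cost = sad(ref_block, frame[y:y + bh, x:x + bw])
                    if best_cost is None or cost < best_cost:
                        best_cost, best = cost, (y, x)
        cy, cx = best
        step //= 2
    return cy, cx
```

With steps 4, 2, and 1 the search evaluates at most 25 candidate positions instead of the 81 an exhaustive ±4 search would need, which is why tracking the lip this way is much cheaper than re-running face detection every frame.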