• Title/Summary/Keyword: Video detection


Video Summarization Using Importance-based Fuzzy One-Class Support Vector Machine (중요도 기반 퍼지 원 클래스 서포트 벡터 머신을 이용한 비디오 요약 기술)

  • Kim, Ki-Joo;Choi, Young-Sik
    • Journal of Internet Computing and Services
    • /
    • v.12 no.5
    • /
    • pp.87-100
    • /
    • 2011
  • In this paper, we address video summarization as the task of generating video segments that are both visually salient and semantically important. To find salient data points, one can use the OC-SVM (One-Class Support Vector Machine), which is well known for novelty detection problems. It is, however, hard to incorporate into the OC-SVM process the importance measure of data points, which is crucial for video summarization. To integrate the importance of each point into the OC-SVM process, we propose a fuzzy version of the OC-SVM. The importance-based fuzzy OC-SVM weights data points according to the importance measure of the video segments and then estimates the support of the distribution of the weighted feature vectors. The estimated support vectors form the descriptive segments that best delineate the underlying video content in terms of the importance and salience of video segments. We demonstrate the performance of our algorithm on several synthesized data sets and different types of videos. Experimental results show that our approach outperforms the well-known traditional method.
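The core idea — scoring segments jointly by salience and importance — can be sketched with a toy numpy analogue. This is not the paper's fuzzy OC-SVM: the centroid-distance novelty score, the function name, and the keep ratio below are all illustrative assumptions.

```python
import numpy as np

def weighted_novelty(points, importance, keep_ratio=0.2):
    """Rank points by importance-weighted distance from the weighted mean.

    A toy stand-in for the paper's fuzzy OC-SVM: high-importance points
    that are also far from the bulk of the data score highest, mimicking
    'salient and important' segment selection.
    """
    w = importance / importance.sum()           # normalize weights
    center = (w[:, None] * points).sum(axis=0)  # importance-weighted centroid
    dist = np.linalg.norm(points - center, axis=1)
    score = importance * dist                   # weight salience by importance
    k = max(1, int(len(points) * keep_ratio))
    return np.argsort(score)[::-1][:k]          # indices of top-k segments

rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 4))                # 50 segment feature vectors
imp = rng.uniform(0.1, 1.0, size=50)            # per-segment importance
summary_idx = weighted_novelty(feats, imp, keep_ratio=0.1)
```

In the actual method, the importance weights instead enter the SVM objective as fuzzy membership coefficients on the slack variables.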

A Robust Algorithm for Moving Object Segmentation and VOP Extraction in Video Sequences (비디오 시퀸스에서 움직임 객체 분할과 VOP 추출을 위한 강력한 알고리즘)

  • Kim, Jun-Ki;Lee, Ho-Suk
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.4
    • /
    • pp.430-441
    • /
    • 2002
  • Video object segmentation is an important component of object-based video coding schemes such as MPEG-4. In this paper, a robust algorithm for the segmentation of moving objects in video sequences and the extraction of VOPs (Video Object Planes) is presented. The main points of this paper are the detection of accurate object boundaries, by associating moving object edges with spatial object edges, and the generation of VOPs. The algorithm begins with the difference between two successive frames. After extracting the difference image, the accurate moving object edge is produced using the Canny algorithm and morphological operations. To enhance extraction performance, we apply morphological erosion to detect only accurate object edges, and moving object edges between two images are generated by adjusting the size of the edges. This paper presents a robust algorithm implementation for fast moving object detection by extracting accurate object boundaries in video sequences.
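The first two steps — frame differencing followed by morphological erosion — can be sketched in plain numpy (a minimal illustration only; the paper additionally intersects the result with Canny edges, and the threshold here is an assumed value):

```python
import numpy as np

def binary_erosion3(mask):
    """3x3 binary erosion without external libraries: a pixel survives
    only if its full 3x3 neighborhood is set (regions shrink by one pixel)."""
    p = np.pad(mask, 1, mode="constant")
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + mask.shape[0],
                     1 + dx : 1 + dx + mask.shape[1]].astype(bool)
    return out

def moving_mask(prev, curr, thresh=25):
    """Threshold the absolute frame difference, then erode so that only
    confident moving-object pixels remain."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return binary_erosion3(diff > thresh)

# synthetic example: a bright 6x6 block appears between two frames
f1 = np.zeros((20, 20), dtype=np.uint8)
f2 = np.zeros((20, 20), dtype=np.uint8)
f2[5:11, 5:11] = 200
mask = moving_mask(f1, f2)   # erosion keeps the 4x4 interior of the block
```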

Transmission Error Detection and Copyright Protection for MPEG-2 Video Based on Channel Coded Watermark (채널 부호화된 워터마크 신호에 기반한 MPEG-2 비디오의 전송 오류 검출과 저작권 보호)

  • Bae, Chang-Seok;Yuk, Ying-Chung
    • The KIPS Transactions:PartB
    • /
    • v.12B no.7 s.103
    • /
    • pp.745-754
    • /
    • 2005
  • This paper proposes an information hiding algorithm using a channel coding technique, which can be used to detect transmission errors and to protect the copyright of MPEG-2 video. The watermark signal is generated by applying the copyright information of the video data to a convolutional encoder, and the signal is embedded into the macro blocks of every frame while encoding the MPEG-2 video stream. In the decoder, the embedded signal is extracted from the macro blocks of every frame and used to localize transmission errors in the video stream. The extracted signal can also be used to claim ownership of the video data by decoding it into the copyright information; at this stage, errors in the extracted watermark signal can be corrected by the channel decoder. Three video sequences of 300 frames each were applied to the proposed MPEG-2 codec. Experimental results show that the proposed method can detect transmission errors in the video stream while decoding, and that it reconstructs copyright information more correctly than the conventional method.
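The watermark-generation step is a standard convolutional encoding of the copyright bits. A minimal rate-1/2 encoder (constraint length 3, generators 7 and 5 octal — common textbook parameters, not necessarily the ones used in the paper) looks like this:

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder: each input bit yields two coded
    bits, the parities of the shift register masked by each generator.
    The redundancy is what lets the decoder correct bit errors caused
    by transmission, recovering the embedded copyright information."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)  # shift in the new bit
        out.append(bin(state & g1).count("1") % 2)   # parity w.r.t. generator 1
        out.append(bin(state & g2).count("1") % 2)   # parity w.r.t. generator 2
    return out

codeword = conv_encode([1, 0, 1, 1])   # 4 copyright bits -> 8 watermark bits
```

A Viterbi decoder over the same trellis would then recover the original bits even when some embedded watermark bits are corrupted.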

Comparison of Text Beginning Frame Detection Methods in News Video Sequences (뉴스 비디오 시퀀스에서 텍스트 시작 프레임 검출 방법의 비교)

  • Lee, Sanghee;Ahn, Jungil;Jo, Kanghyun
    • Journal of Broadcast Engineering
    • /
    • v.21 no.3
    • /
    • pp.307-318
    • /
    • 2016
  • Overlay text in video frames provides information supplementary to the audio and visual content. In news video in particular, this text gives a compact and direct description of the video content, and is therefore the most reliable cue for building a news video indexing system. To build an indexing system for television news programs, it is important to detect and recognize this text. This paper proposes identifying the overlay text beginning frame, which helps detect and recognize overlay text in news video. Since not every frame of a video sequence contains overlay text, extracting overlay text from every frame is unnecessary and time-consuming; the accuracy of overlay text detection can therefore be improved by focusing only on frames that contain it. Comparative experiments on text beginning frame identification methods are carried out on news video, and an appropriate processing method is suggested.
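One simple member of the family of beginning-frame detectors compared here monitors intensity change in the caption band between consecutive frames. The sketch below is an illustrative stand-in only; the band location and jump threshold are assumed values, not taken from the paper:

```python
import numpy as np

def text_begin_frame(frames, band=slice(80, 100), jump=20.0):
    """Return the index of the first frame whose caption-band mean
    intensity jumps sharply versus the previous frame, i.e. the frame
    where overlay text appears."""
    prev = frames[0][band].mean()
    for i, f in enumerate(frames[1:], start=1):
        cur = f[band].mean()
        if cur - prev > jump:   # a bright caption has appeared
            return i
        prev = cur
    return None                 # no text beginning found

# synthetic clip: overlay text appears at frame 3
clip = [np.zeros((120, 160)) for _ in range(6)]
for f in clip[3:]:
    f[80:100, 40:120] = 255.0   # bright caption strokes
start = text_begin_frame(clip)
```

Restricting text extraction to frames at or after such a beginning frame is what saves the per-frame detection cost the abstract describes.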

Robot vision system for face tracking using color information from video images (로봇의 시각시스템을 위한 동영상에서 칼라정보를 이용한 얼굴 추적)

  • Jung, Haing-Sup;Lee, Joo-Shin
    • Journal of Advanced Navigation Technology
    • /
    • v.14 no.4
    • /
    • pp.553-561
    • /
    • 2010
  • This paper proposes a face tracking method which can be effectively applied to a robot's vision system. The proposed algorithm tracks facial areas after detecting areas of motion in the video. Motion is detected by computing the difference image between two consecutive frames and then removing noise with a median filter and morphological erosion and dilation operations. To extract the skin color from the moving area, the color information of sample images is used: the skin color region and the background are separated by generating membership functions from the MIN-MAX values of the samples, treated as fuzzy data, and evaluating similarity. Within the face candidate region, the eyes are detected from the C channel of the CMY color space, and the mouth from the Q channel of the YIQ color space. The face region is then tracked using knowledge-based features of the detected eyes and mouth. The experiments covered 1,500 frames of video from 10 subjects, 150 frames per subject. The results show a detection rate of 95.7% (motion areas detected in 1,435 frames) and a successful face tracking rate of 97.6% (1,401 faces tracked).
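The fuzzy skin/background separation step can be sketched as follows. This is a hedged illustration of the idea, not the paper's implementation: the membership shape, margin width, sample ranges, and cutoff are all assumed values.

```python
def skin_membership(value, lo, hi):
    """Membership built from the MIN-MAX range of skin samples:
    1 inside [lo, hi], falling linearly to 0 over a margin outside it
    (margin width is an illustrative choice)."""
    margin = (hi - lo) * 0.5
    if lo <= value <= hi:
        return 1.0
    if value < lo:
        return max(0.0, 1.0 - (lo - value) / margin)
    return max(0.0, 1.0 - (value - hi) / margin)

def is_skin(pixel, ranges, cut=0.6):
    """Classify by the minimum membership across channels (fuzzy AND):
    a pixel is skin only if every channel is similar enough to the samples."""
    m = min(skin_membership(v, lo, hi) for v, (lo, hi) in zip(pixel, ranges))
    return m >= cut

ranges = [(95, 160), (40, 90), (20, 60)]   # hypothetical per-channel skin MIN-MAX
flag = is_skin((120, 70, 40), ranges)      # inside all ranges -> skin
```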

Development of an Integrated Traffic Object Detection Framework for Traffic Data Collection (교통 데이터 수집을 위한 객체 인식 통합 프레임워크 개발)

  • Yang, Inchul;Jeon, Woo Hoon;Lee, Joyoung;Park, Jihyun
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.18 no.6
    • /
    • pp.191-201
    • /
    • 2019
  • A fast and accurate integrated traffic object detection framework is proposed and developed, harnessing a computer-vision-based deep-learning approach for automatic object detection, a multi-object tracking technology, and video pre-processing tools. The proposed method is capable of detecting traffic objects such as cars, buses, trucks, and vans in video recordings taken under various external conditions (video stability, weather, and camera angle) and of counting the objects by tracking them in real time. In experimental scenarios designed around the conditions that typically affect video quality, the proposed method achieves 98%~100% accuracy except in the cases of rain and snow.
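Counting by tracking, as described above, reduces to associating per-frame detections into tracks so each vehicle is counted once. A toy greedy IoU association (a stand-in for the framework's multi-object tracker; all names and the match threshold are illustrative) shows the idea:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def count_vehicles(frames_boxes, match=0.3):
    """Count unique objects across frames by greedy IoU association:
    a detection overlapping a live track continues that track,
    otherwise it opens a new track and increments the count."""
    tracks, total = [], 0
    for boxes in frames_boxes:
        new_tracks = []
        for box in boxes:
            hit = next((t for t in tracks if iou(t, box) >= match), None)
            if hit is None:
                total += 1          # unseen object -> new count
            else:
                tracks.remove(hit)  # track continues with the updated box
            new_tracks.append(box)
        tracks = new_tracks
    return total

# one car drifting right, plus a second car appearing in frame 2
dets = [[(0, 0, 10, 10)], [(2, 0, 12, 10), (50, 50, 60, 60)], [(52, 50, 62, 60)]]
n = count_vehicles(dets)   # two distinct vehicles
```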

Text Region Detection Method in Mobile Phone Video (휴대전화 동영상에서의 문자 영역 검출 방법)

  • Lee, Hoon-Jae;Sull, Sang-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.5
    • /
    • pp.192-198
    • /
    • 2010
  • With the popularization of mobile phones with built-in cameras, much effort has been made to provide useful information to users by detecting and recognizing text in the video captured by the phone's camera, which requires detecting the text regions in such video. In this paper, we propose a method to detect text regions in mobile phone video. We employ a morphological operation as preprocessing and obtain a binarized image using modified k-means clustering. Candidate text regions are then obtained by applying connected component analysis and an analysis of general text characteristics. In addition, we increase the precision of text detection by examining the frequency of the candidate regions. Experimental results show that the proposed method detects text regions in mobile phone video with high precision and recall.
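The binarization step can be illustrated with a plain 2-means clustering on pixel intensity (the paper uses a *modified* k-means; this sketch is the unmodified baseline, with min/max initialization as an assumed choice):

```python
import numpy as np

def kmeans_binarize(gray, iters=10):
    """Binarize a grayscale image with 2-means clustering on intensity:
    initialize centers at the min and max, alternate assignment and
    center updates, and return the mask of the brighter cluster."""
    v = gray.astype(float).ravel()
    c = np.array([v.min(), v.max()])
    for _ in range(iters):
        labels = (np.abs(v - c[0]) > np.abs(v - c[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = v[labels == k].mean()
    return (labels == int(c.argmax())).reshape(gray.shape)

img = np.full((10, 10), 30.0)
img[2:5, 2:8] = 220.0          # bright text-like strokes on a dark background
mask = kmeans_binarize(img)    # True on the stroke pixels
```

Connected component analysis would then run on this mask to propose candidate text regions.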

CU Depth Decision Based on FAST Corner Detection for HEVC Intra Prediction (HEVC 화면 내 예측을 위한 FAST 에지 검출 기반의 CU 분할 방법)

  • Jeon, Seungsu;Kim, Namuk;Jeon, Byeungwoo
    • Journal of Broadcast Engineering
    • /
    • v.21 no.4
    • /
    • pp.484-492
    • /
    • 2016
  • High Efficiency Video Coding (HEVC) is the newest video coding standard, achieving coding efficiency higher than that of previous standards such as H.264/AVC. In intra prediction, the prediction units (PUs) are derived from a largest coding unit (LCU), which is partitioned in a quad-tree structure into smaller coding units (CUs) sized from 64x64 down to 8x8. As CUs are recursively divided down to the minimum depth, the optimum CU splitting is selected in an RDO (Rate Distortion Optimization) process, which demands high computational complexity. In this paper, to reduce the complexity of HEVC, we propose a fast CU depth decision (FCDD) for intra prediction using FAST (Features from Accelerated Segment Test) corner detection. The proposed method reduces the computational time of intra prediction by 53.73%, while the coding performance degradation of 0.7% BD-BR is small compared to conventional HEVC.
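FAST corner detection, which the method uses to judge whether a CU is textured enough to warrant deeper splitting, is itself simple to sketch. Below is a simplified segment test (illustrative thresholds; real FAST adds a fast-reject pretest and non-maximum suppression):

```python
def fast_corner(img, y, x, t=20, n_contig=9):
    """Simplified FAST segment test: examine the 16-pixel Bresenham
    circle of radius 3 around (y, x); a corner needs >= n_contig
    contiguous ring pixels all brighter (or all darker) than the
    center by more than t."""
    circle = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2),
              (3, 1), (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3),
              (-2, -2), (-3, -1)]
    c = img[y][x]
    ring = [img[y + dy][x + dx] for dy, dx in circle]
    for sign in (1, -1):                 # brighter arc, then darker arc
        flags = [sign * (p - c) > t for p in ring]
        run = best = 0
        for f in flags + flags:          # doubled list handles wrap-around
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n_contig:
            return True
    return False

# a bright quadrant meeting a dark region forms a corner at (4, 4)
img = [[100 if y <= 4 and x <= 4 else 0 for x in range(9)] for y in range(9)]
corner = fast_corner(img, 4, 4)
flat = fast_corner([[0] * 9 for _ in range(9)], 4, 4)
```

Flat CUs with few such corners can skip deeper RDO evaluation, which is the source of the time saving.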

Optical Flow Measurement Based on Boolean Edge Detection and Hough Transform

  • Chang, Min-Hyuk;Kim, Il-Jung;Park, Jong an
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.1
    • /
    • pp.119-126
    • /
    • 2003
  • The problem of tracking moving objects in a video stream is discussed in this paper. We discuss the popular technique of optical flow for moving object detection, which finds the velocity vectors at each pixel of the entire video scene; however, optical-flow-based methods require complex computations and are sensitive to noise. In this paper, we propose a new method based on the Combinatorial Hough Transform (CHT) and voting accumulation, improving the accuracy and reducing the computation time. Further, we apply a Boolean-based edge detector, which provides accurate and very thin edges; edge detection and segmentation are used to extract the moving objects in the image sequences and to reduce the computation time of the CHT. The difference of two such thin edge maps gives better localization of moving objects. The simulation results show that the proposed method improves the accuracy of the optical flow vectors and extracts moving object information more accurately: edge detection and segmentation find the locations and areas of the real moving objects, so extracting motion information is easy and accurate, and the direction of moving objects is also measured accurately.
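The voting-accumulation core of a Hough transform can be sketched briefly (a minimal standard Hough line transform for illustration; the paper's combinatorial variant and its coupling to optical flow are more involved, and the function name here is an assumption):

```python
import numpy as np

def hough_peak(edge_points, shape, n_theta=180):
    """Vote every edge point into a (rho, theta) accumulator and return
    the strongest cell, i.e. the dominant line. rho is quantized to
    whole pixels and theta to whole degrees."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in edge_points:
        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rho + diag, np.arange(n_theta)] += 1   # one vote per theta bin
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, t   # (rho in pixels, theta in degrees)

pts = [(5, x) for x in range(20)]        # thin horizontal edge along y = 5
rho, theta = hough_peak(pts, (20, 20))   # peak near rho = 5, theta ~ 90 deg
```

Accumulating votes like this is what makes the estimate robust to noise: spurious edge pixels scatter their votes, while true structure concentrates them in one cell.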

Deep Learning Object Detection to Clearly Differentiate Between Pedestrians and Motorcycles in Tunnel Environment Using YOLOv3 and Kernelized Correlation Filters

  • Mun, Sungchul;Nguyen, Manh Dung;Kweon, Seokkyu;Bae, Young Hoon
    • Journal of Broadcast Engineering
    • /
    • v.24 no.7
    • /
    • pp.1266-1275
    • /
    • 2019
  • With increasing crime rates and numbers of CCTVs, much attention has been paid to intelligent surveillance systems. Object detection and tracking algorithms have been developed to reduce false alarms and to help security agents respond immediately to undesirable events in video clips, such as crimes and accidents. Many studies have proposed a variety of algorithms to improve the accuracy of detecting and tracking objects outside tunnels, but such methods may not work well inside a tunnel because of low illuminance, which is significantly affected by the tail and warning lights of driving vehicles, and detection performance has rarely been tested against the tunnel environment. This study investigated the feasibility of object detection and tracking in an actual tunnel environment by utilizing YOLOv3 and a Kernelized Correlation Filter. We tested 40 actual video clips to evaluate how well the algorithm differentiates pedestrians from motorcycles. The experimental results showed that pedestrians and motorcycles were clearly differentiated without false positives. Our findings are expected to provide a stepping stone toward efficient detection algorithms suitable for the tunnel environment and to encourage other researchers to glean reliable tracking data for smarter and safer cities.