• Title/Summary/Keyword: Blur Detection (블러 탐지)

Search Results: 4

Preliminary study on car detection and tracking method using surveillance camera in tunnel environment for accident detection (터널 내 유고상황 자동 판정을 위한 선행 연구: CCTV를 이용한 차량의 탐지와 추적 기법 고찰)

  • Oh, Young-Sup; Shin, Hyu-Soung
    • Journal of Korean Tunnelling and Underground Space Association / v.19 no.5 / pp.813-827 / 2017
  • Surveillance cameras installed in tunnels capture video frames affected by dynamic and variable factors. In addition, precisely positioning and maintaining the cameras in a tunnel is costly, and the quality of the captured frames degrades over time. In this paper, we introduce a new method to detect and track vehicles in a tunnel using the surveillance cameras installed there. Detecting vehicles directly from the surveillance video is difficult because of motion blur and the blurring caused by dirt on the lens. To overcome these difficulties, two methods, a Differential Frame/Non-Maxima Suppression (DFNMS) approach and a Haar cascade detector, are proposed for tracking cars and investigated for their feasibility. The study shows that high precision and recall can be achieved with both methods, which can then provide practical data and key information to an automatic accident detection system in tunnels.
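
The differential-frame idea from this abstract can be illustrated with a short OpenCV sketch. This is not the authors' implementation: the input file name, thresholds, and minimum contour area are assumptions, and the non-maxima suppression stage and the Haar cascade detector are omitted.

```python
import cv2
import numpy as np

def detect_moving_vehicles(prev_gray, curr_gray, min_area=400):
    # Differential frame: absolute difference of consecutive grayscale frames
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # Morphological clean-up; tunnel CCTV frames are noisy and often blurred
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

cap = cv2.VideoCapture("tunnel_cctv.mp4")  # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detect_moving_vehicles(prev_gray, gray):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev_gray = gray
cap.release()
```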

A Scale Invariant Object Detection Algorithm Using Wavelet Transform in Sea Environment (해양 환경에서 웨이블렛 변환을 이용한 크기 변화에 무관한 물표 탐지 알고리즘)

  • Bazarvaani, Badamtseren; Park, Ki Tae; Jeong, Jongmyeon
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.3 / pp.249-255 / 2013
  • In this paper, we propose an algorithm to detect objects in a scale-invariant manner from IR images obtained in a sea environment. After noise reduction using morphological operations, we create horizontal edge (HL), vertical edge (LH), and diagonal edge (HH) images through the 2-D discrete Haar wavelet transform (DHWT). Considering the sea environment, Gaussian blurring is applied to the horizontal and vertical edge images at each wavelet level, and a saliency map is generated by multiplying the blurred horizontal and vertical edges and combining them into one image. Object candidate regions are then extracted by binarizing the saliency map, and small areas within them are removed to produce the final result. Experimental results show the feasibility of the proposed algorithm.
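
A minimal sketch of the described pipeline, assuming PyWavelets (`pywt`) and OpenCV and not the authors' code; the kernel sizes, number of wavelet levels, and the way per-level maps are accumulated are illustrative assumptions.

```python
import cv2
import numpy as np
import pywt

def object_candidates(gray_uint8, levels=3):
    # Morphological opening as a simple noise-reduction step
    denoised = cv2.morphologyEx(gray_uint8, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    h, w = denoised.shape
    approx = denoised.astype(np.float32)
    saliency = np.zeros((h, w), np.float32)
    for _ in range(levels):
        # 2-D discrete Haar wavelet transform: LL and (horizontal, vertical, diagonal) detail bands
        approx, (cH, cV, cD) = pywt.dwt2(approx, "haar")
        # Gaussian blurring of the horizontal and vertical edge bands
        bh = cv2.GaussianBlur(np.abs(cH), (5, 5), 0)
        bv = cv2.GaussianBlur(np.abs(cV), (5, 5), 0)
        # Multiply the blurred bands, upsample, and accumulate across levels (assumed combination)
        level_map = cv2.resize((bh * bv).astype(np.float32), (w, h))
        saliency += level_map
    saliency = cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Binarize the saliency map to obtain object candidate regions
    _, candidates = cv2.threshold(saliency, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return candidates
```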

Reproducing Summarized Video Contents based on Camera Framing and Focus

  • Hyung Lee; E-Jung Choi
    • Journal of the Korea Society of Computer and Information / v.28 no.10 / pp.85-92 / 2023
  • In this paper, we propose a method for automatically generating story-based abbreviated summaries of long-form dramas and movies. The basic premise, from the shooting stage onward, is that frames are composed with an illusion of depth, following the golden section, and with focus on the object of interest so as to direct the viewer's attention in terms of content delivery. To extract frames that fit this premise, we used elemental techniques from previous work on scene and shot detection as well as on identifying focus-related blur. After converting videos shared on YouTube into individual frames, we divided each frame into the entire frame and three partial regions for feature extraction, applied the Laplacian operator and the FFT to each region, and chose the FFT-based measure for its relative consistency and robustness. By comparing the value computed for the entire frame with the values for the three regions, target frames were selected on the condition that a relatively sharp region could be identified. Based on the selected results, the final frames were extracted by combining these with the results of an offline change-point detection method to ensure continuity of frames within a shot, and an edit decision list was constructed to produce an abbreviated summary comprising 62.77% of the footage with an F1-score of 75.9%.
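
The two per-region sharpness measures named in the abstract (Laplacian and FFT) can be sketched as below. This is not the paper's exact procedure; the three-way vertical split, the low-frequency window, and the margin are assumed for illustration.

```python
import cv2
import numpy as np

def laplacian_sharpness(gray):
    # Variance of the Laplacian response: low values indicate blur
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def fft_sharpness(gray):
    # Share of spectral energy outside a central (low-frequency) window
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    mag = np.abs(f)
    h, w = mag.shape
    cy, cx, ry, rx = h // 2, w // 2, h // 8, w // 8
    low = mag[cy - ry:cy + ry, cx - rx:cx + rx].sum()
    return float((mag.sum() - low) / (mag.sum() + 1e-12))

def has_sharp_region(gray, margin=1.10):
    # Compare the whole frame against three vertical sub-regions (assumed split)
    h, w = gray.shape
    regions = [gray[:, :w // 3], gray[:, w // 3:2 * w // 3], gray[:, 2 * w // 3:]]
    whole = fft_sharpness(gray)
    # A frame qualifies if some region is clearly sharper than the frame as a whole
    return any(fft_sharpness(r) > margin * whole for r in regions)
```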

CNN Based Face Tracking and Re-identification for Privacy Protection in Video Contents (비디오 컨텐츠의 프라이버시 보호를 위한 CNN 기반 얼굴 추적 및 재식별 기술)

  • Park, TaeMi; Phu, Ninh Phung; Kim, HyungWon
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.1 / pp.63-68 / 2021
  • Recently, interest in watching and creating video content such as YouTube videos has increased sharply. However, creating such content without a privacy protection technique can publicly expose other people in the background, violating their privacy rights. This paper seeks to remedy this problem and proposes a technique that identifies faces and protects portrait rights by blurring them. The key contribution of this paper lies in a deep-learning technique with low detection error and low computational cost, which allows portrait rights to be protected in real-time video. To reduce errors, an efficient tracking algorithm is combined with the face detection and face recognition algorithms. The paper compares the performance of the proposed system with and without the tracking algorithm. We believe this system can be used wherever video is used.
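
The face-blurring step described here can be illustrated with a minimal OpenCV sketch. The paper itself uses CNN-based detection, tracking, and re-identification to keep the blur consistent across frames; the bundled Haar cascade below is only a stand-in for the detector.

```python
import cv2

# OpenCV ships this cascade file with the opencv-python package
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        # Heavy Gaussian blur hides identity while keeping the rest of the scene intact
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```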