• Title/Abstract/Keywords: video denoising

Search results: 15 items (processing time: 0.022 s)

잡음 제거 기술 기반의 비디오 인페인팅 성능 연구 (A Study on the Video Inpainting Performance using Denoising Technique)

  • 서정윤;백한결;박상효
    • 대한임베디드공학회논문지
    • /
    • Vol. 17, No. 6
    • /
    • pp.329-335
    • /
    • 2022
  • In this paper, we study the effect of noise on video inpainting, a technique that fills in missing regions of a video. Since a video may contain noise, the quality of the result may be affected when a video inpainting technique is applied. Therefore, in this paper, we compare inpainting performance on videos with and without a denoising technique, using the DAVIS dataset. To that end, we conducted two experiments: 1) denoising the noisy video and then applying the inpainting technique, and 2) applying the inpainting technique to the noisy video and then denoising the result. Through these experiments, we observe the effect of the denoising technique on the quality of video inpainting and conclude that inpainting after denoising improves the quality of the video both subjectively and objectively.
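The objective side of such a comparison typically reduces to a full-reference metric such as PSNR between a clean reference and each pipeline's output. A minimal sketch of that metric, assuming grayscale frames stored as nested lists (the 2×2 example frames are hypothetical, not data from the paper):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized grayscale frames."""
    flat_ref = [p for row in ref for p in row]
    flat_test = [p for row in test for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_test)) / len(flat_ref)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(peak ** 2 / mse)

# Hypothetical frames: the closer the reconstruction, the higher the PSNR.
clean = [[100, 100], [100, 100]]
close = [[101, 99], [100, 100]]
far = [[120, 80], [90, 110]]
assert psnr(clean, close) > psnr(clean, far)
```

Running the two pipeline orderings and comparing their PSNR against the clean video would reproduce the paper's objective evaluation.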

A Kalman Filter based Video Denoising Method Using Intensity and Structure Tensor

  • Liu, Yu;Zuo, Chenlin;Tan, Xin;Xiao, Huaxin;Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 8, No. 8
    • /
    • pp.2866-2880
    • /
    • 2014
  • We propose a video denoising method based on the Kalman filter to reduce noise in video sequences. First, exploiting the strong spatiotemporal correlations between neighboring frames, motion estimation is performed on video frames consisting of previously denoised frames and the current noisy frame, based on intensity and the structure tensor. The current noisy frame is processed in the temporal domain by using the motion estimation result as a parameter of the Kalman filter, while it is also processed in the spatial domain using the Wiener filter. Finally, by weighting the denoised frames from the Kalman and Wiener filtering, a satisfactory result can be obtained. Experimental results show that the performance of our proposed method is competitive with state-of-the-art video denoising algorithms in terms of both peak signal-to-noise ratio and structural similarity evaluations.
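The temporal branch of such a method can be sketched, for a single pixel and omitting the motion-estimation and Wiener stages, as a scalar Kalman filter. The variance values and the test sequence below are illustrative assumptions, not parameters from the paper:

```python
def kalman_temporal(frames, process_var=1.0, meas_var=25.0):
    """Temporally filter one pixel's intensity sequence with a scalar Kalman filter.

    frames: noisy intensity values observed at a fixed pixel over time.
    Returns the filtered sequence.
    """
    x = float(frames[0])   # state estimate, initialised from the first observation
    p = meas_var           # estimate variance
    out = [x]
    for z in frames[1:]:
        p = p + process_var        # predict: variance grows by the process noise
        k = p / (p + meas_var)     # Kalman gain
        x = x + k * (z - x)        # update the estimate toward the measurement
        p = (1.0 - k) * p          # shrink the variance after the update
        out.append(x)
    return out
```

On a static pixel corrupted by noise, e.g. `kalman_temporal([100, 110, 90, 105, 95])`, the filtered values stay noticeably closer to 100 than the raw observations, which is the intended temporal smoothing effect.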

High-frame-rate Video Denoising for Ultra-low Illumination

  • Tan, Xin;Liu, Yu;Zhang, Zheng;Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 8, No. 11
    • /
    • pp.4170-4188
    • /
    • 2014
  • In this study, we present a denoising algorithm for high-frame-rate videos in ultra-low illumination environments, on the basis of a Kalman filtering model and a new motion segmentation scheme. The Kalman filter removes temporal noise from signals by propagating error covariance statistics. Regarded as the process noise of imaging, motion is important in Kalman filtering. We propose a new motion estimation scheme that is suitable for severe noise. This scheme exploits the small-motion-vector characteristic of high-frame-rate videos. Small changing patches are intentionally neglected because distinguishing details from large-scale noise is difficult and unimportant. Finally, a spatial bilateral filter is used to improve denoising capability in motion areas. Experiments are performed on videos with both synthetic and real noise. Results show that the proposed algorithm outperforms other state-of-the-art methods in both peak signal-to-noise ratio objective evaluation and visual quality.

A New Denoising Method for Time-lapse Video using Background Modeling

  • Park, Sanghyun
    • 한국정보기술학회 영문논문지
    • /
    • Vol. 10, No. 2
    • /
    • pp.125-138
    • /
    • 2020
  • Due to the development of camera technology, the cost of producing time-lapse video has been reduced, and time-lapse videos are being applied in many fields. A time-lapse video is created from images captured at long intervals over a long period of time. In this paper, we propose a method to improve the quality of time-lapse videos that monitor changes in plants. Considering the characteristics of time-lapse video, we propose a method of separating desired objects from unnecessary ones and removing the unnecessary elements. The characteristic of time-lapse videos that we exploit is that unnecessary elements appear only intermittently in the captured images. In the proposed method, noise is removed by applying a codebook background modeling algorithm that uses this characteristic. Experimental results show that the proposed method is simple and accurate in finding and removing unnecessary elements in time-lapse videos.
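The codebook idea can be sketched per pixel: intensities observed during training are clustered into codewords, and a value that matches only a rarely seen codeword (an intermittently appearing object) is flagged as non-background. The tolerance and frequency threshold below are illustrative assumptions; the actual codebook model also tracks color and brightness bounds:

```python
def build_codebook(samples, tol=10):
    """Per-pixel codebook: cluster observed intensities into codewords.

    Each codeword keeps [running mean, count]; a value within `tol` of an
    existing codeword joins it, otherwise a new codeword is created.
    """
    book = []
    for v in samples:
        for cw in book:
            if abs(v - cw[0]) <= tol:
                cw[0] += (v - cw[0]) / (cw[1] + 1)  # update running mean
                cw[1] += 1
                break
        else:
            book.append([float(v), 1])
    return book

def is_background(v, book, tol=10, min_frac=0.5):
    """A value is background if it matches a codeword that accounts for at
    least `min_frac` of the training samples; intermittent objects never do."""
    total = sum(count for _, count in book)
    return any(abs(v - mean) <= tol and count / total >= min_frac
               for mean, count in book)
```

For a plant pixel that reads about 100 in most frames but 200 when briefly occluded, `is_background(101, book)` holds while `is_background(200, book)` does not, so the occluder's pixels can be replaced from the background model.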

Signal Synchronization Using a Flicker Reduction and Denoising Algorithm for Video-Signal Optical Interconnect

  • Sangirov, Jamshid;Ukaegbu, Ikechi Augustine;Lee, Tae-Woo;Cho, Mu-Hee;Park, Hyo-Hoon
    • ETRI Journal
    • /
    • Vol. 34, No. 1
    • /
    • pp.122-125
    • /
    • 2012
  • A video signal transmitted through a high-density optical link has been demonstrated to show the reliability of optical links for high-data-rate transmission. To reduce the number of optical point-to-point links, an electrical link has been utilized for control and clock signaling. Latency and flicker with background noise occurred while transferring data across the optical link, due to the electrical-to-optical and optical-to-electrical conversions. The proposed synchronization technology, combined with a flicker reduction and denoising algorithm, has given good results and can be applied in high-definition serial data interface (HD-SDI), ultra-HD-SDI, and HD multimedia interface transmission system applications.

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems
    • /
    • Vol. 16, No. 1
    • /
    • pp.6-29
    • /
    • 2020
  • Biometrics identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform the multimodal recognition. Moreover, the proposed technique has proven robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on the constrained facial video dataset (WVU) and the unconstrained facial video dataset (HONDA/UCSD) resulted in Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even in the situation of missing modalities.

Post-processing of 3D Video Extension of H.264/AVC for a Quality Enhancement of Synthesized View Sequences

  • Bang, Gun;Hur, Namho;Lee, Seong-Whan
    • ETRI Journal
    • /
    • Vol. 36, No. 2
    • /
    • pp.242-252
    • /
    • 2014
  • Since July 2012, the 3D video extension of H.264/AVC has been under development to support the multi-view video plus depth format. In 3D video applications such as multi-view and free-viewpoint applications, synthesized views are generated using coded texture video and coded depth video. Such synthesized views can be distorted by quantization noise and by inaccuracy of the 3D warping positions, so it is important to improve their quality where possible. To achieve this, the relationship among the depth video, texture video, and synthesized view is investigated herein. Based on this investigation, we propose an edge noise suppression filtering process to preserve the edges of the depth video, and a method based on a total variation approach to maximum a posteriori probability estimation for reducing the quantization noise of the coded texture video. The experimental results show that the proposed methods improve the peak signal-to-noise ratio and visual quality of a synthesized view compared with a synthesized view without the post-processing methods.

비지역적 평균 기반 시공간 잡음 제거 알고리즘 (Spatio-temporal Denoising Algorithm base on Nonlocal Means)

  • 박상욱;강문기
    • 대한전자공학회논문지SP
    • /
    • Vol. 48, No. 2
    • /
    • pp.24-31
    • /
    • 2011
  • We propose a spatio-temporal denoising algorithm based on nonlocal means for video denoising. Existing nonlocal-means-based algorithms show excellent denoising performance, but their heavy computation and the need for multiple frame memories make hardware implementation difficult. The proposed algorithm therefore introduces an infinite impulse response (IIR) based temporal denoising filter, enabling natural denoising in regions with little motion, while in regions with large motion an improved nonlocal-means-based denoising filter, designed for computational efficiency, is applied to remove noise while minimizing motion blur. Experiments on test videos with various noise levels confirm, both numerically and visually, that the proposed algorithm performs on par with existing algorithms, and better on some captured videos.
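The motion-adaptive recursive (IIR) temporal filter at the heart of such a scheme can be sketched per pixel as below. The blending weight and motion threshold are illustrative assumptions, and the improved nonlocal-means branch used for moving areas is simplified here to a pass-through:

```python
def iir_temporal_denoise(frames, alpha=0.8, motion_thresh=20):
    """Motion-adaptive IIR (recursive) temporal filter for one pixel.

    Low motion (small difference from the running average): blend heavily
    with the average, which suppresses noise without frame memories.
    Large motion: trust the current observation to avoid motion blur.
    """
    avg = float(frames[0])
    out = [avg]
    for z in frames[1:]:
        if abs(z - avg) < motion_thresh:       # static area: recursive average
            avg = alpha * avg + (1 - alpha) * z
        else:                                   # moving area: reset to observation
            avg = float(z)
        out.append(avg)
    return out
```

Only the running average per pixel must be stored, which is why an IIR formulation needs a single frame buffer where frame-averaging or nonlocal-means approaches need several.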

Denoising 3D Skeleton Frames using Intersection Over Union

  • Chuluunsaikhan, Tserenpurev;Kim, Jeong-Hun;Choi, Jong-Hyeok;Nasridinov, Aziz
    • 한국정보처리학회:학술대회논문집
    • /
    • 한국정보처리학회 2021년도 추계학술발표대회
    • /
    • pp.474-475
    • /
    • 2021
  • The accuracy of real-time video analysis systems based on 3D skeleton data highly depends on the quality of the data. This study proposes a methodology to distinguish noise in 3D skeleton frames using the Intersection Over Union (IOU) method. IOU is a metric that measures how similar two rectangles (i.e., bounding boxes) are. Simply put, the method decides whether a frame is noise by comparing the frame with a set of valid frames. Our proposed method distinguished noise in 3D skeleton frames with an accuracy of 99%. According to this result, our proposed method can be used to track noise in 3D skeleton frames.
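The IOU metric itself is straightforward to compute for two axis-aligned boxes; the `(x1, y1, x2, y2)` coordinate convention and the sample boxes below are assumptions for illustration:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # zero if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

assert iou((0, 0, 10, 10), (0, 0, 10, 10)) == 1.0   # identical boxes
assert iou((0, 0, 10, 10), (20, 20, 30, 30)) == 0.0  # disjoint boxes
```

A skeleton frame whose bounding box scores a low IOU against the boxes of the valid frames would then be flagged as noise.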

프레임 차와 톤 매핑을 이용한 저조도 영상 향상 (Low-light Image Enhancement Based on Frame Difference and Tone Mapping)

  • 정윤주;이영학;심재창;정순기
    • 한국멀티미디어학회논문지
    • /
    • Vol. 21, No. 9
    • /
    • pp.1044-1051
    • /
    • 2018
  • In this paper, we propose a new method to improve low-light images. To raise the quality of a night image containing a moving object to that of a daytime image, the following tasks were performed. First, we reduce the noise of the input night image and improve it with a tone mapping method. Second, we segment the input night image into a foreground with motion and a background without motion. Motion is detected using both the difference between the current frame and the previous frame and the difference between the current frame and the night background image. The background region of the night image takes pixels from the corresponding positions in the daytime image. The foreground regions of the night image take pixels from the corresponding positions of the image improved by the tone mapping method. Experimental results show that the proposed method improves visual quality more clearly than existing methods.
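The double frame-difference motion detection described above can be sketched as follows, assuming grayscale frames stored as nested lists; the threshold value is an illustrative assumption:

```python
def motion_mask(prev, curr, background, thresh=30):
    """Per-pixel motion mask from two frame differences.

    A pixel is marked as foreground (1) only when it differs both from the
    previous frame and from the night background image; otherwise it is
    background (0) and can be replaced from the daytime image.
    """
    return [
        [
            1 if (abs(curr[y][x] - prev[y][x]) > thresh
                  and abs(curr[y][x] - background[y][x]) > thresh)
            else 0
            for x in range(len(curr[y]))
        ]
        for y in range(len(curr))
    ]
```

Requiring both differences to exceed the threshold suppresses false detections from noise that happens to differ from only one of the two references.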