• Title/Summary/Keyword: Motion blur


Design and Fabrication of Scanning Backlight Unit using Flat Fluorescent Lamp (면광원을 사용한 Scanning Backlight Unit의 설계 및 제작)

  • Chae, Hyung-Jun;Jung, Yong-Min;Hwang, Sun-Nam;Hur, Jeong-Wook;Lee, Jun-Young;Lim, Sung-Kyoo
    • The Transactions of the Korean Institute of Power Electronics / v.13 no.5 / pp.376-382 / 2008
  • In this paper, a scanning backlight unit that can reduce motion blur was designed and fabricated using a flat fluorescent lamp. The FFL (flat fluorescent lamp) is attracting attention as a new light source for next-generation BLUs (backlight units) because of its simple assembly and reduced number of driving components. Lamp brightness was controlled by adjusting the lamp on-time. The experiments confirmed that the lamp brightness can be dimmed linearly and over a wide range.
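
Dimming in this design is purely a matter of gating the lamp on-time within each scan period. A minimal sketch of that duty-cycle relationship is shown below; the 60 Hz frame period and the linear duty-to-luminance mapping are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of on-time (duty-cycle) dimming for a scanned backlight.
# The 60 Hz frame period and the linear duty-to-luminance mapping are
# illustrative assumptions, not figures from the paper.

FRAME_PERIOD_MS = 1000.0 / 60.0  # one scan period per displayed frame

def lamp_on_time(duty_ratio: float) -> float:
    """Return the lamp on-time in milliseconds for a requested duty ratio."""
    duty_ratio = min(max(duty_ratio, 0.0), 1.0)
    return duty_ratio * FRAME_PERIOD_MS

if __name__ == "__main__":
    # If dimming is linear, halving the on-time should roughly halve the luminance.
    for duty in (0.25, 0.5, 0.75, 1.0):
        print(f"duty {duty:.2f} -> on-time {lamp_on_time(duty):.2f} ms")
```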

Fight Detection in Hockey Videos using Deep Network

  • Mukherjee, Subham;Saini, Rajkumar;Kumar, Pradeep;Roy, Partha Pratim;Dogra, Debi Prosad;Kim, Byung-Gyu
    • Journal of Multimedia Information System / v.4 no.4 / pp.225-232 / 2017
  • Understanding actions in videos is an important task. It helps in finding anomalies present in videos, such as fights. Detecting fights becomes even more crucial in sports. This paper focuses on finding fight scenes in hockey videos using blur and Radon transforms together with convolutional neural networks (CNNs). First, the local motion within the video frames is extracted using blur information. Next, the fast Fourier and Radon transforms are applied to the local motion. Video frames containing fight scenes are then identified using transfer learning with the pre-trained deep learning model VGG-Net. Finally, the methodology is compared against a feed-forward neural network. Accuracies of 56.00% and 75.00% were achieved using the feed-forward neural network and VGG16-Net, respectively.
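
As a rough illustration of the hand-crafted front end described in the abstract (blur-based local motion, then Fourier and Radon transforms), a sketch using NumPy, SciPy, and scikit-image could look like the following; the Gaussian blur sigma and the angle grid are assumptions, and the VGG16 transfer-learning stage is only indicated in a comment.

```python
# Sketch of the front end: blur-based local motion followed by FFT and Radon
# transforms. The blur sigma and the angle grid are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.transform import radon

def local_motion(prev_frame: np.ndarray, cur_frame: np.ndarray) -> np.ndarray:
    """Approximate local motion as the blurred inter-frame difference."""
    diff = np.abs(cur_frame.astype(np.float32) - prev_frame.astype(np.float32))
    return gaussian_filter(diff, sigma=2.0)

def motion_signature(prev_frame: np.ndarray, cur_frame: np.ndarray) -> np.ndarray:
    """FFT magnitude of the motion map, then a Radon transform over 0-180 deg."""
    motion = local_motion(prev_frame, cur_frame)
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(motion))))
    angles = np.linspace(0.0, 180.0, 60, endpoint=False)
    return radon(spectrum, theta=angles, circle=False)

# The resulting signature images would then be classified as fight / non-fight
# with a pre-trained VGG16 fine-tuned by transfer learning.
```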

A hybrid coding method for motion-blur reduction in LCD overdrive

  • Park, Sang-Yoon;Wang, Jun;Min, Kyeong-Yuk;Chong, Jong-Wha
    • Proceedings of the IEEK Conference / 2008.06a / pp.1143-1144 / 2008
  • In this paper, a hybrid image coding (HIC) method is proposed to reduce the errors of the overdriving technique used to decrease motion blur. The hybrid image coding method compresses the luminance data Y with a newly proposed Adaptive Quantization Coding (AQC) and compresses the chrominance data CbCr with Block Truncation Coding (BTC). Simulation results verified the efficiency of the algorithm, confirming the superiority of HIC through comparisons of PSNR and SD against existing methods. The proposed algorithm was implemented in Verilog HDL and synthesized with the Synopsys Design Compiler using a 0.13 μm Samsung library, confirming the efficiency of the architecture.
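
The chrominance path uses classic Block Truncation Coding. A minimal NumPy sketch of 4x4 BTC, which keeps each block's mean and standard deviation while storing one bit per pixel, is given below; the block size and the flat-block handling are illustrative assumptions, and the AQC luminance path is not reproduced.

```python
# Minimal sketch of 4x4 Block Truncation Coding (BTC) for one chrominance block.
# Block size and flat-block handling are illustrative, not taken from the paper.
import numpy as np

def btc_encode(block: np.ndarray):
    """Encode a block as a bit plane plus two reconstruction levels (a, b)."""
    mean, std = block.mean(), block.std()
    bitplane = block >= mean
    q = int(bitplane.sum())          # number of pixels at or above the mean
    n = block.size
    if q in (0, n):                  # flat block: one level is enough
        return bitplane, mean, mean
    a = mean - std * np.sqrt(q / (n - q))      # low reconstruction level
    b = mean + std * np.sqrt((n - q) / q)      # high reconstruction level
    return bitplane, a, b

def btc_decode(bitplane: np.ndarray, a: float, b: float) -> np.ndarray:
    return np.where(bitplane, b, a)

if __name__ == "__main__":
    block = np.random.randint(0, 256, (4, 4)).astype(np.float64)
    plane, a, b = btc_encode(block)
    recon = btc_decode(plane, a, b)
    print("block mean preserved:", np.isclose(block.mean(), recon.mean()))
```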


Temporal matching prior network for vehicle license plate detection and recognition in videos

  • Yoo, Seok Bong;Han, Mikyong
    • ETRI Journal / v.42 no.3 / pp.411-419 / 2020
  • In real-world intelligent transportation systems, accuracy in vehicle license plate detection and recognition is considered quite critical. Many algorithms have been proposed for still images, but their accuracy on actual videos is not satisfactory. This stems from several problematic conditions in videos, such as vehicle motion blur, variety in viewpoints, outliers, and the lack of publicly available video datasets. In this study, we focus on these challenges and propose a license plate detection and recognition scheme for videos based on a temporal matching prior network. Specifically, to improve the robustness of detection and recognition accuracy in the presence of motion blur and outliers, forward and bidirectional matching priors between consecutive frames are properly combined with layer structures specifically designed for plate detection. We also built our own video dataset for the deep training of the proposed network. During network training, we perform data augmentation based on image rotation to increase robustness regarding the various viewpoints in videos.
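
The rotation-based augmentation mentioned at the end of the abstract is straightforward to sketch with OpenCV; the angle range below is an assumption chosen only for illustration.

```python
# Sketch of rotation-based augmentation for plate-detection training data.
# The +/-10 degree range is an illustrative assumption; annotated plate
# corners would need to be transformed with the same matrix (omitted here).
import random
import numpy as np
import cv2

def rotate_sample(image: np.ndarray, max_angle: float = 10.0) -> np.ndarray:
    """Rotate an image by a random angle to simulate varied viewpoints."""
    angle = random.uniform(-max_angle, max_angle)
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    return cv2.warpAffine(image, M, (w, h), borderMode=cv2.BORDER_REPLICATE)
```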

Fast key-frame extraction for 3D reconstruction from a handheld video

  • Choi, Jongho;Kwon, Soonchul;Son, Kwangchul;Yoo, Jisang
    • International Journal of Advanced Smart Convergence / v.5 no.4 / pp.1-9 / 2016
  • In order to reconstruct a 3D model from video sequences, it is essential to select key frames from which a geometric model can be easily estimated. This paper proposes a method to easily extract informative frames from a handheld video. The method combines selection criteria based on determining an appropriate baseline between frames, frame jumping for fast searching in the video, geometric robust information criterion (GRIC) scores for the frame-to-frame homography and fundamental matrix, and blurry-frame removal. Experiments with videos taken in indoor spaces show that the proposed method creates a more robust 3D point cloud than existing methods, even in the presence of motion blur and degenerate motions.
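
Where the abstract scores the frame-to-frame homography against the fundamental matrix with GRIC, a minimal sketch following Torr's GRIC is shown below; the noise sigma, RANSAC thresholds, and lambda constants are common defaults used as assumptions, not the paper's exact settings.

```python
# Sketch of GRIC-based comparison of a homography (H) against a fundamental
# matrix (F) for one frame pair, following Torr's GRIC. The noise sigma,
# RANSAC thresholds, and lambda constants are assumed defaults.
import numpy as np
import cv2

def gric(residuals_sq, sigma, n, d, k, r=4.0, lam3=2.0):
    """Sum of robustified residuals plus model-dimension and parameter penalties."""
    lam1, lam2 = np.log(r), np.log(r * n)
    rho = np.minimum(residuals_sq / sigma ** 2, lam3 * (r - d))
    return rho.sum() + lam1 * d * n + lam2 * k

def compare_models(pts1, pts2, sigma=1.0):
    """Return (GRIC_H, GRIC_F); the lower score indicates the better model.
    Assumes both fits succeed on the given float32 (n, 2) point arrays."""
    n = len(pts1)
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0)

    ones = np.ones((n, 1), dtype=np.float64)
    x1 = np.hstack([pts1, ones])          # homogeneous points, shape (n, 3)
    x2 = np.hstack([pts2, ones])

    # Homography residuals: squared transfer error of pts1 mapped into image 2.
    proj = (H @ x1.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    e_h = np.sum((proj - pts2) ** 2, axis=1)

    # Fundamental-matrix residuals: squared Sampson distance.
    Fx1, Ftx2 = F @ x1.T, F.T @ x2.T
    num = np.sum(x2.T * Fx1, axis=0) ** 2
    den = Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2
    e_f = num / den

    # d = manifold dimension (2 for H, 3 for F), k = parameter count (8, 7).
    return gric(e_h, sigma, n, d=2, k=8), gric(e_f, sigma, n, d=3, k=7)
```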

Automatic Display Quality Measurement by Image Processing

  • Chen, Bo-Sheng;Heish, Chen-Chiung
    • Korean Information Display Society Conference Proceedings / 2009.10a / pp.1228-1231 / 2009
  • This paper presents an automatic system for display quality measurement by image processing. The goal is to replace human eyes with computer vision for display quality evaluation and to give consumers an objective quality review when purchasing a monitor or TV. Color, contrast, brightness, sharpness, and motion blur are the five main factors affecting display quality; they are measured by supplying test patterns and analyzing the corresponding images captured from a webcam. The scores are calculated by image processing techniques. A linear regression model is then adopted to find the relation between the human score and the measured display performance.
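
The last step, relating the measured factors to human ratings, is an ordinary least-squares fit. A sketch with scikit-learn is shown below; the numeric values are hypothetical and only illustrate the data layout, and a real experiment would use many more displays than factors.

```python
# Sketch of fitting a linear model from the five measured factors to a human
# quality score. All numbers are hypothetical and only show the data layout.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [color, contrast, brightness, sharpness, motion_blur] scores
# produced by the image-processing measurements for one display.
X = np.array([
    [0.81, 0.72, 0.90, 0.66, 0.58],
    [0.64, 0.55, 0.70, 0.52, 0.61],
    [0.92, 0.88, 0.95, 0.79, 0.74],
])
y = np.array([7.5, 5.8, 8.9])          # corresponding human ratings

model = LinearRegression().fit(X, y)
print("weights:", model.coef_, "intercept:", model.intercept_)
print("predicted score:", model.predict([[0.70, 0.60, 0.80, 0.55, 0.60]]))
```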


Robust Motion Compensated Frame Interpolation Using Weight-Overlapped Block Motion Compensation with Variable Block Sizes to Reduce LCD Motion Blurs

  • Lee, Jichan;Choi, Jin Hyuk;Lee, Daeho
    • Journal of the Optical Society of Korea / v.19 no.5 / pp.537-543 / 2015
  • Liquid crystal displays (LCDs) have slow responses, so motion blur is often perceived in fast-moving scenes. To reduce this motion blur, we propose a novel method of robust motion compensated frame interpolation (MCFI) based on bidirectional motion estimation (BME) and weight-overlapped block motion compensation (WOBMC) with variable block sizes. Most MCFI methods use a static block size, so block artefacts and motion blur are observed. The proposed method instead adjusts the motion block sizes and search ranges by comparing matching scores, so precise motion vectors can be estimated in accordance with the motion. In the MCFI, the overlapping ranges for WOBMC are also determined by the adjusted block sizes, so accurate MCFI can be performed. In the experiments, the proposed method strongly reduced motion blur arising from large motions and yielded interpolated images with high visual quality and peak signal-to-noise ratio (PSNR).
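
A heavily reduced sketch of the compensation stage is given below: predictions are accumulated under an overlapped raised-cosine window per block, and the motion search is stubbed as a plain SAD search. The block size, 50% overlap, window shape, and border handling are all assumptions rather than the paper's actual design.

```python
# Reduced sketch of weight-overlapped block motion compensation (WOBMC) for one
# interpolated frame. Block size, 50% overlap, Hann window, the SAD motion
# search, and the ignored image borders are simplifying assumptions.
import numpy as np

def hann2d(size: int) -> np.ndarray:
    w = np.hanning(size)
    return np.outer(w, w) + 1e-6               # avoid zero weights at block edges

def block_motion(prev, curr, y, x, size, search=8):
    """Stub motion search: the (dy, dx) minimizing block SAD against prev."""
    block = curr[y:y + size, x:x + size]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= prev.shape[0] - size and 0 <= xx <= prev.shape[1] - size:
                sad = np.abs(prev[yy:yy + size, xx:xx + size] - block).sum()
                if sad < best:
                    best, best_mv = sad, (dy, dx)
    return best_mv

def interpolate_frame(prev, curr, size=16, step=8):
    """Build the mid frame by accumulating windowed, half-motion-shifted blocks."""
    prev = prev.astype(np.float32)
    curr = curr.astype(np.float32)
    acc, wacc = np.zeros_like(prev), np.zeros_like(prev)
    win = hann2d(size)
    h, w = prev.shape
    for y in range(0, h - size + 1, step):      # 50%-overlapped block grid
        for x in range(0, w - size + 1, step):
            dy, dx = block_motion(prev, curr, y, x, size)
            py = min(max(y + dy // 2, 0), h - size)   # halve the motion vector
            px = min(max(x + dx // 2, 0), w - size)
            pred = 0.5 * (prev[py:py + size, px:px + size] +
                          curr[y:y + size, x:x + size])
            acc[y:y + size, x:x + size] += win * pred
            wacc[y:y + size, x:x + size] += win
    return acc / np.maximum(wacc, 1e-6)
```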

Novel Frame Interpolation Method for High Image Quality LCDs

  • Itoh, Goh;Mishima, Nao
    • Journal of Information Display / v.5 no.3 / pp.1-7 / 2004
  • We developed a novel frame interpolation method to interpolate a frame between two successive original frames. Using this method, we can apply a double-rate driving method instead of an impulse driving method in which a black frame is inserted between two successive original frames. The double-rate driving method ameliorates the LCD motion blur caused by the characteristics of human vision without reducing the luminosity of the whole screen. The image quality of the double-rate driving method was also found to be better than that of an impulse driving method, using our motion picture simulator and an actual panel. Our initial frame interpolation model consists of motion estimation with a maximum matching pixel count estimation function, an area segmentation technique, and motion compensation with a variable segmentation threshold. Although salt-and-pepper noise remained in portions of objects, mainly due to inaccuracy of the motion estimation, we verified the validity of our method and the possibility of improving hold-type motion blurring.
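
The "maximum matching pixel count" matching function mentioned above can be sketched as a block-matching criterion that counts nearly identical pixels rather than summing their differences; the difference threshold and search range below are assumed values.

```python
# Sketch of a maximum matching pixel count (MPC) criterion: instead of a sum of
# absolute differences, count pixels whose difference falls under a threshold
# and pick the candidate with the highest count. The threshold and search
# range are illustrative assumptions.
import numpy as np

def mpc(block_a: np.ndarray, block_b: np.ndarray, thresh: int = 8) -> int:
    """Number of co-located pixels whose absolute difference is below thresh."""
    diff = np.abs(block_a.astype(np.int16) - block_b.astype(np.int16))
    return int((diff < thresh).sum())

def best_vector(prev, curr, y, x, size=16, search=8):
    """Return the motion vector with the maximum matching pixel count."""
    block = curr[y:y + size, x:x + size]
    best_count, best_mv = -1, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= prev.shape[0] - size and 0 <= xx <= prev.shape[1] - size:
                count = mpc(prev[yy:yy + size, xx:xx + size], block)
                if count > best_count:
                    best_count, best_mv = count, (dy, dx)
    return best_mv
```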

Extraction of the motion parameters from blurred images (흔들림이 있는 영상의 움직임 방향과 정도의 추정)

  • 최지웅;강문기;박규태
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1997.11a / pp.129-133 / 1997
  • An image acquired with a camera can be degraded during acquisition by camera shake and added noise; images degraded in this way exhibit motion blur, which significantly reduces image clarity. Motion blur produces periodic zeros in the frequency domain along the direction of motion, with a period inversely proportional to the length of the motion. Because these zeros can be lost among the zeros of the original image and the noise, their influence needs to be minimized by an averaging method in the power domain. In this paper, we propose an algorithm that, with the influence of the original image and the noise minimized, computes the period and direction of the blur through the 2D cepstrum.
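
To make the described approach concrete, a minimal NumPy sketch of the 2D cepstrum and a blur-length read-out is given below; it assumes a purely horizontal blur and omits the power-domain averaging step, so the real algorithm's direction estimation and noise handling are not reproduced.

```python
# Minimal sketch of cepstrum-based motion-blur estimation. A purely horizontal
# blur and the absence of the power-domain averaging step are simplifying
# assumptions; the real algorithm also recovers the blur direction.
import numpy as np

def cepstrum2d(image: np.ndarray) -> np.ndarray:
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(np.log(np.abs(spectrum) + 1e-8)))

def estimate_horizontal_blur_length(image: np.ndarray) -> int:
    """The strong negative cepstral peak lies at a lag equal to the blur length."""
    c = cepstrum2d(image)
    row = c[0, 1:image.shape[1] // 2]     # horizontal lags, skipping lag 0
    return int(np.argmin(row)) + 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.random((128, 128))
    length = 9                                  # horizontal motion-blur length
    kernel = np.ones(length) / length
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, sharp)
    # The printed estimate should be close to the true blur length of 9.
    print("estimated blur length:", estimate_horizontal_blur_length(blurred))
```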


A Genetic Programming Approach to Blind Deconvolution of Noisy Blurred Images (잡음이 있고 흐릿한 영상의 블라인드 디컨벌루션을 위한 유전 프로그래밍 기법)

  • Mahmood, Muhammad Tariq;Chu, Yeon Ho;Choi, Young Kyu
    • KIPS Transactions on Software and Data Engineering / v.3 no.1 / pp.43-48 / 2014
  • Image deconvolution is usually applied as a preprocessing step in surveillance systems to reduce the effects of motion or out-of-focus blur. In this paper, we propose a blind image deconvolution filtering approach based on genetic programming (GP). A numerical expression for image restoration is developed through the GP process that optimally combines and exploits dependencies among features of the blurred image. To develop such a function, a set of feature vectors is first formed by considering a small neighborhood around each pixel. In the second stage, the estimator is trained and developed through the GP process, which automatically selects and combines the useful feature information under a fitness criterion. The developed function is then applied to estimate the pixel intensities of the degraded image. The performance of the developed function is evaluated on various degraded image sequences. Our comparative analysis highlights the effectiveness of the proposed filter.
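
The first stage described in the abstract, forming a feature vector from a small neighborhood around each pixel, can be sketched as follows; the 3x3 window and the added mean/std features are assumptions, and the GP-evolved combining function itself is not reproduced here.

```python
# Sketch of the first stage only: building per-pixel feature vectors from a
# small neighborhood of the degraded image. The 3x3 window and the extra
# mean/std features are illustrative; the GP-evolved estimator that combines
# these features is not reproduced here.
import numpy as np

def neighborhood_features(image: np.ndarray, radius: int = 1) -> np.ndarray:
    """Return an (H*W, (2r+1)^2 + 2) matrix: raw neighbors plus mean and std."""
    img = image.astype(np.float32)
    padded = np.pad(img, radius, mode="reflect")
    h, w = img.shape
    size = 2 * radius + 1
    feats = []
    for dy in range(size):
        for dx in range(size):
            feats.append(padded[dy:dy + h, dx:dx + w].ravel())
    feats = np.stack(feats, axis=1)                    # (H*W, size*size)
    extra = np.stack([feats.mean(axis=1), feats.std(axis=1)], axis=1)
    return np.hstack([feats, extra])

# A GP process would then evolve an expression tree over these feature columns,
# scored by a fitness criterion against the sharp reference image.
```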