• Title/Summary/Keyword: Reconstruction error


A Study on Optimum Subband Filter Bank Design Using Vector Quantizer (벡터 양자화기를 사용한 최적의 부대역 필터 뱅크 구현에 관한 연구)

  • Jee, Innho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.17 no.1
    • /
    • pp.107-113
    • /
    • 2017
  • This paper provides a new approach to modeling a vector quantizer (VQ), followed by the analysis and design of subband codecs with embedded VQs. We compute the mean squared reconstruction error (MSE), which depends on N, the number of entries in each codebook, on k, the length of each codeword, and on the filter bank (FB) coefficients of the subband codec. We show that the optimum M-band filter bank structure in the presence of a pdf-optimized vector quantizer can be designed by a suitable choice of equivalent scalar quantizer parameters. Specific design examples have been developed for two different classes of filter banks, the paraunitary and the biorthogonal FB, in the two-channel case. These theoretical results are confirmed by Monte Carlo simulation.
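The mean squared reconstruction error of a vector quantizer, as a function of the codebook size N and vector dimension k, can be sketched in a few lines. This is a minimal illustration of the MSE computation only, not the paper's embedded subband-codec model; the function name and test data are mine.

```python
import numpy as np

def vq_mse(x, codebook):
    """Quantize each k-dim vector in x to its nearest codeword and
    return the mean squared reconstruction error (MSE)."""
    # pairwise squared distances: (num_vectors, N) for N codebook entries
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    nearest = codebook[d.argmin(axis=1)]        # nearest-neighbor encoding
    return ((x - nearest) ** 2).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 2))        # vectors of dimension k = 2
codebook = rng.normal(size=(8, 2))    # N = 8 codebook entries
mse = vq_mse(x, codebook)
```

As the paper analyzes, the MSE shrinks as N grows (finer codebooks) for a fixed source pdf.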

Image Coding by Block Based Fractal Approximation (블록단위의 프래탈 근사화를 이용한 영상코딩)

  • 정현민;김영규;윤택현;강현철;이병래;박규태
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.2
    • /
    • pp.45-55
    • /
    • 1994
  • In this paper, a block-based image approximation technique using the Self-Affine System (SAS) from fractal theory is suggested. Each block of an image is divided into 4 tiles, and 4 affine mapping coefficients are found for each tile. To find the affine mapping coefficients that minimize the error between the affine-transformed image block and the reconstructed image block, the matrix equation is solved by setting each partial derivative to zero. To ensure the convergence of the coding block, a 4-way uniformly partitioned affine transformation is applied. A variable block size technique is employed in order to exploit the natural image reconstruction property of fractal image coding. Large blocks are used for encoding smooth backgrounds to yield high compression efficiency, while texture and edge blocks are divided into smaller blocks to preserve block detail. Affine mapping coefficients are found for each block of size 16×16, 8×8, or 4×4. Each block is classified as shade, texture, or edge. The average gray level is transmitted for shade blocks, and coefficients are found for texture and edge blocks. The coefficients are quantized, and only 16 bytes per block are transmitted. Using the proposed algorithm, the computational load increases linearly in proportion to image size. A PSNR of 31.58 dB is obtained for the 512×512, 8-bits-per-pixel Lena image.

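The core least-squares step the abstract describes, fitting affine coefficients that minimize the error between a transformed domain block and a target block by zeroing the partial derivatives, can be sketched as follows. This is an illustrative contrast/brightness fit only, with names of my choosing, not the authors' full SAS codec.

```python
import numpy as np

def affine_coeffs(domain, range_blk):
    """Least-squares contrast s and brightness o minimizing
    || s * domain + o - range_blk ||^2 (normal equations set to zero)."""
    d = domain.ravel().astype(float)
    r = range_blk.ravel().astype(float)
    A = np.column_stack([d, np.ones_like(d)])   # design matrix [d | 1]
    (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
    return s, o

rng = np.random.default_rng(1)
D = rng.normal(size=(8, 8))     # domain block
R = 0.5 * D + 3.0               # target block, exactly affine in D
s, o = affine_coeffs(D, R)
```

When the target is exactly affine in the domain block, the fit recovers the coefficients; for real image blocks the residual is the per-block reconstruction error.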

ECG Compression Structure Design Using of Multiple Wavelet Basis Functions (다중웨이브렛 기저함수를 이용한 심전도 압축구조설계)

  • Kim Tae-hyung;Kwon Chang-Young;Yoon Dong-Han
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.3
    • /
    • pp.467-472
    • /
    • 2005
  • ECG signals are recorded for diagnostic purposes in many clinical situations. To permit good clinical interpretation, the data are needed at high resolution and sampling rates. In this paper, we therefore design a compression structure using multiple wavelet basis functions (MWBF) and compare it to a single wavelet basis function (SWBF) and the discrete cosine transform (DCT). For objectivity, the simulation was performed using arrhythmia data from the MIT-BIH database, with a sampling frequency of 360 Hz and a resolution of 11 bits. Performance was evaluated by the reconstruction error. Consequently, the compression structure using MWBF shows high performance.
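The basic transform-threshold-reconstruct pipeline behind wavelet ECG compression can be sketched with a single Haar basis. This is a minimal single-basis (SWBF-style) illustration, not the paper's multi-basis design; the signal and the fraction of retained coefficients are my assumptions.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: approximation and detail coefficients."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_idwt(a, d):
    """Inverse one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def compress(x, keep=0.1):
    """Zero out all but the largest `keep` fraction of coefficients."""
    a, d = haar_dwt(x)
    coeffs = np.concatenate([a, d])
    thr = np.quantile(np.abs(coeffs), 1 - keep)
    coeffs[np.abs(coeffs) < thr] = 0.0
    half = len(a)
    return haar_idwt(coeffs[:half], coeffs[half:])

t = np.linspace(0, 1, 360)                      # one second at 360 Hz
ecg = np.sin(2*np.pi*5*t) + 0.1*np.sin(2*np.pi*40*t)   # toy ECG-like signal
rec = compress(ecg, keep=0.2)
prd = np.linalg.norm(ecg - rec) / np.linalg.norm(ecg)  # reconstruction error
```

With all coefficients kept the Haar transform is perfectly invertible, so `prd` measures only the loss introduced by thresholding.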

Image deblurring via adaptive proximal conjugate gradient method

  • Pan, Han;Jing, Zhongliang;Li, Minzhe;Dong, Peng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.11
    • /
    • pp.4604-4622
    • /
    • 2015
  • It is not easy to reconstruct the geometrical characteristics of distorted images captured by imaging devices. One of the most popular optimization methods is the fast iterative shrinkage/thresholding algorithm. In this paper, to deal with its approximation error and the turbulence of the descent process, an adaptive proximal conjugate gradient (APCG) framework is proposed. It contains three stages. In the first stage, a series of adaptive penalty matrices is generated iteration by iteration. Second, to trade off the reconstruction accuracy against the computational complexity of the resulting sub-problem, a practical solution is presented, characterized by solving the variable ellipsoidal-norm-based sub-problem by exploiting the structure of the problem. Third, a correction step is introduced to improve the estimation accuracy. Numerical experiments with the proposed algorithm, in comparison to favorable state-of-the-art methods, demonstrate its advantages and potential.
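The baseline the paper builds on, iterative shrinkage/thresholding, alternates a gradient step with a soft-thresholding proximal step. Below is a plain ISTA sketch for min ½‖Ax−b‖² + λ‖x‖₁, for orientation only; it is not the proposed APCG method, and the step size and names are my assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||x||_1 (the shrinkage/thresholding step)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, step, iters=200):
    """Plain ISTA: gradient step on the data term, then soft-thresholding.
    `step` must be below 1/L, with L the largest eigenvalue of A^T A."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

APCG replaces the fixed Euclidean proximal metric here with adaptive, ellipsoidal-norm penalty matrices and adds a correction step.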

A Precise Projectile Trajectory Registration Algorithm Based on Weighted PDOP (PDOP 가중치 기반 정밀 탄궤적 정합 알고리즘)

  • Shin, Seok-Hyun;Kim, Jong-Ju
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.44 no.6
    • /
    • pp.502-511
    • /
    • 2016
  • Recently, many kinds of smart projectiles are being developed. A smart projectile, as shown in previous studies, uses navigation data acquired from a GNSS receiver to check its location in geocentric (WGS84) coordinates and to estimate its point of impact (P.O.I.). However, because of various error-inducing factors, the positioning results involve some error. We introduce an advanced algorithm for the reconstruction of a navigation trajectory using weighted PDOP, based on a simulated trajectory acquired from PRODAS. It is fast, robust to noise, and produces reliable output. It can be widely used to estimate the actual trajectory of a projectile.
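The idea of weighting navigation fixes by PDOP, so that epochs with good satellite geometry dominate the trajectory fit, can be sketched as a weighted least-squares fit. This is a generic illustration with an effective weight of 1/PDOP², not the paper's registration algorithm; the polynomial model and names are my assumptions.

```python
import numpy as np

def fit_trajectory(t, z, pdop, deg=2):
    """Weighted LSQ polynomial fit of one trajectory component.
    np.polyfit multiplies residuals by w, so passing w = 1/PDOP
    gives an effective weight of 1/PDOP^2 per point."""
    w = 1.0 / np.asarray(pdop, float)
    return np.polyfit(t, z, deg, w=w)

t = np.arange(10.0)
z = 1.0 + 2.0 * t - 0.5 * t**2    # exact ballistic-like quadratic
pdop = np.ones(10)                # uniform geometry quality
coeffs = fit_trajectory(t, z, pdop)
```

With noisy fixes, raising the PDOP of an epoch down-weights it, pulling the reconstructed trajectory toward the well-observed points.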

3D Shape Descriptor for Segmenting Point Cloud Data

  • Park, So Young;Yoo, Eun Jin;Lee, Dong-Cheon;Lee, Yong Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.6_2
    • /
    • pp.643-651
    • /
    • 2012
  • Object recognition belongs to high-level processing, one of the difficult and challenging tasks in computer vision. Digital photogrammetry based on the computer vision paradigm began to emerge in the mid-1980s. However, the ultimate goal of digital photogrammetry - intelligent and autonomous processing for surface reconstruction - has not been achieved yet. Object recognition requires a robust shape description of objects. However, most shape descriptors are designed for 2D image data. Therefore, such descriptors have to be extended to deal with 3D data such as LiDAR (Light Detection and Ranging) data obtained from an ALS (Airborne Laser Scanner) system. This paper introduces an extension of the chain code to 3D object space, with a hierarchical approach for segmenting point cloud data. The experiment demonstrates the effectiveness and robustness of the proposed method for shape description and point cloud segmentation. The geometric characteristics of various roof types are well described, which will eventually be the basis for object modeling. The segmentation accuracy of the simulated data was evaluated by measuring the coordinates of the corners on the segmented patch boundaries. The overall RMSE (Root Mean Square Error) is equivalent to the average distance between points, i.e., the GSD (Ground Sampling Distance).
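One simple way to extend a 2D chain code to 3D, quantizing each step between consecutive points to the nearest of the 26 cube-neighbor directions, can be sketched as follows. This is my own minimal interpretation for illustration, not necessarily the descriptor the paper defines.

```python
import numpy as np
from itertools import product

# the 26 unit-cube neighbour directions (zero vector excluded), normalized
DIRS = np.array([d for d in product((-1, 0, 1), repeat=3) if any(d)], float)
DIRS /= np.linalg.norm(DIRS, axis=1, keepdims=True)

def chain_code_3d(points):
    """Code each step between consecutive 3D points as the index of the
    closest of the 26 discrete directions (maximum cosine similarity)."""
    steps = np.diff(np.asarray(points, float), axis=0)
    steps = steps / np.linalg.norm(steps, axis=1, keepdims=True)
    return (steps @ DIRS.T).argmax(axis=1).tolist()

codes = chain_code_3d([[0, 0, 0], [1, 0, 0], [2, 0, 0]])
```

A straight run of points yields a constant code, while roof ridges and eaves show up as characteristic code changes.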

Real-time Fluorescence Lifetime Imaging Microscopy Implementation by Analog Mean-Delay Method through Parallel Data Processing

  • Kim, Jayul;Ryu, Jiheun;Gweon, Daegab
    • Applied Microscopy
    • /
    • v.46 no.1
    • /
    • pp.6-13
    • /
    • 2016
  • Fluorescence lifetime imaging microscopy (FLIM) has been considered an effective technique for investigating the chemical properties of specimens, especially biological samples. Despite this advantage, researchers have had difficulty applying FLIM to their systems because acquiring a FLIM image consumes too much time. Although the analog mean-delay (AMD) method was introduced to enhance the imaging speed of commonly used FLIM based on time-correlated single photon counting (TCSPC), real-time image reconstruction with the AMD method has not been implemented due to its data-processing obstacles. In this paper, we introduce real-time image restoration for AMD-FLIM through fast parallel data processing using Threading Building Blocks (TBB; Intel) and an octa-core processor (i7-5960x; Intel). A frame rate of 3.8 frames per second was achieved at 1,024×1,024 resolution, with over 4 million lifetime determinations per second and a measurement error within 10%. This image acquisition speed is 184 times faster than that of single-channel TCSPC and 9.2 times faster than that of 8-channel TCSPC (a state-of-the-art photon counting rate of 80 million counts per second), with the same 10% lifetime accuracy and the same pixel resolution.
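The per-pixel computation behind the AMD method is compact: the lifetime is estimated as the centroid (mean delay) of the measured fluorescence pulse minus the centroid of the instrument response. A minimal single-pixel sketch, with an idealized delta IRF and a synthetic decay of my choosing:

```python
import numpy as np

def amd_lifetime(t, signal, irf):
    """Analog mean-delay estimate: lifetime = centroid of the measured
    pulse minus centroid of the instrument response function (IRF)."""
    mean_delay = lambda y: np.sum(t * y) / np.sum(y)
    return mean_delay(signal) - mean_delay(irf)

t = np.linspace(0.0, 50.0, 5001)       # time axis in ns
irf = np.zeros_like(t); irf[0] = 1.0   # idealized delta IRF at t = 0
decay = np.exp(-t / 2.0)               # mono-exponential decay, tau = 2 ns
tau = amd_lifetime(t, decay, irf)
```

Because this is just two weighted sums per pixel, it parallelizes trivially across pixels, which is what the TBB implementation exploits.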

An Improvement on FFT-Based Digital Implementation Algorithm for MC-CDMA Systems (MC-CDMA 시스템을 위한 FFT 기반의 디지털 구현 알고리즘 개선)

  • 김만제;나성주;신요안
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.7A
    • /
    • pp.1005-1015
    • /
    • 1999
  • This paper is concerned with an improvement on the IFFT (inverse fast Fourier transform) and FFT based baseband digital implementation algorithm for BPSK (binary phase shift keying)-modulated MC-CDMA (multicarrier code division multiple access) systems that is functionally equivalent to the conventional implementation while reducing computational complexity and bandwidth requirements. We also derive an equalizer structure for the proposed implementation. The proposed algorithm is based on a variant of the FFT algorithm that utilizes one N/2-point FFT/IFFT for simultaneous transformation and reconstruction of two N/2-point real signals. Computer simulations under additive white Gaussian noise channels and frequency-selective fading channels, using equal gain combining and maximal ratio combining diversity, demonstrate the performance of the proposed algorithm.

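The classic trick underlying this kind of savings, transforming two real signals with a single complex FFT and separating them via conjugate symmetry, can be sketched as follows. This illustrates the general FFT identity only, not the paper's specific MC-CDMA structure; names are mine.

```python
import numpy as np

def fft_two_real(x, y):
    """Compute the FFTs of two real length-N signals with one complex
    N-point FFT: pack z = x + j*y, then split using X[k] =
    (Z[k] + conj(Z[N-k]))/2 and Y[k] = (Z[k] - conj(Z[N-k]))/(2j)."""
    z = np.fft.fft(np.asarray(x, float) + 1j * np.asarray(y, float))
    zr = np.conj(np.roll(z[::-1], 1))   # conj(Z[(N-k) mod N]) at index k
    X = (z + zr) / 2
    Y = (z - zr) / (2j)
    return X, Y

rng = np.random.default_rng(3)
x, y = rng.normal(size=16), rng.normal(size=16)
X, Y = fft_two_real(x, y)
```

One complex FFT replaces two, roughly halving the transform cost per pair of real signals.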

Investigation of Performance Degradation of Shack Hartmann Wavefront Sensing Due to Pupil Irradiance Profile

  • Lee Jun-Ho;Lee Yaung-Cheol;Kang Eung-Cheol
    • Journal of the Optical Society of Korea
    • /
    • v.10 no.1
    • /
    • pp.16-22
    • /
    • 2006
  • Wavefront sensing with a Shack-Hartmann sensor is widely used for estimating wavefront errors or distortions. The sensor combines the local slopes, estimated from the centroids of each lenslet image, to give the overall wavefront reconstruction. It was previously shown that the pupil-plane irradiance profile affects the centroid estimation. Furthermore, a previous study reported that the wavefront reconstructed from a planar wavefront with a Gaussian pupil irradiance profile contains large focus and spherical aberration terms when there is a focus error. However, it has not yet been reported how seriously the pupil irradiance profiles that can occur in practical applications affect the sensing errors. This paper considers two cases where the irradiance profile is not uniform: 1) when the light source is Gaussian, and 2) when there is partial interference due to a double reflection by a beam-splitting element. The images formed by a Shack-Hartmann sensor were simulated through the fast Fourier transform and were then assumed to be detected by a noiseless CCD camera. The simulations found that the sensing errors due to the Gaussian irradiance profile and the partial interference were smaller than RMS λ/50 at λ = 0.6328 μm, which can be ignored in most practical cases where the reference and test beams have the same irradiance profiles.
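The centroid estimate at the heart of Shack-Hartmann slope sensing is an intensity-weighted mean of the spot image, which is exactly why a non-uniform irradiance profile biases it. A minimal single-lenslet sketch with a synthetic Gaussian spot of my choosing:

```python
import numpy as np

def centroid(spot):
    """Intensity-weighted centroid of one lenslet spot image; the local
    wavefront slope is proportional to the centroid displacement."""
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return (xs * spot).sum() / total, (ys * spot).sum() / total

ys, xs = np.indices((17, 17))
spot = np.exp(-((xs - 8.0)**2 + (ys - 8.0)**2) / 4.0)  # symmetric spot
cx, cy = centroid(spot)
```

Tilting or asymmetrically weighting the spot (e.g., by a Gaussian pupil profile or partial interference) shifts this centroid and hence the estimated slope.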

Object Tracking with Sparse Representation based on HOG and LBP Features

  • Boragule, Abhijeet;Yeo, JungYeon;Lee, GueeSang
    • International Journal of Contents
    • /
    • v.11 no.3
    • /
    • pp.47-53
    • /
    • 2015
  • Visual object tracking is a fundamental problem in the field of computer vision, as it needs a proper model to account for drastic appearance changes caused by shape, texture, and illumination variations. In this paper, we propose a feature-based visual object tracking method with a sparse representation. Generally, most appearance-based models use the gray-scale pixel values of the input image, but this might be insufficient to describe the target object under a variety of conditions. To obtain proper information about the target object, the following combination of features is exploited as the representation: first, the features of the target templates are extracted using HOG (histogram of oriented gradients) and LBPs (local binary patterns); second, feature-based sparsity is attained by solving the minimization problem, whereby the target object is represented by the candidate with the minimum reconstruction error. The strengths of both features are exploited to enhance the overall performance of the tracker; furthermore, the proposed method is integrated with the particle-filter framework and achieves promising results on challenging tracking videos.
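The minimum-reconstruction-error selection step can be sketched as follows: each candidate feature vector is reconstructed from the template dictionary, and the candidate with the smallest residual wins. This sketch uses plain least squares in place of the paper's sparsity-constrained minimization, and the names are my assumptions.

```python
import numpy as np

def best_candidate(D, candidates):
    """Return the index of the candidate feature vector with the smallest
    reconstruction error against the template dictionary D (columns are
    template features); least squares stands in for the L1-constrained fit."""
    errs = []
    for y in candidates:
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        errs.append(np.linalg.norm(D @ coef - y))
    return int(np.argmin(errs))

rng = np.random.default_rng(2)
D = rng.normal(size=(10, 3))                 # 3 template feature vectors
in_span = D @ np.array([1.0, -0.5, 2.0])     # candidate matching the templates
outlier = rng.normal(size=10)                # candidate unlike the templates
winner = best_candidate(D, [in_span, outlier])
```

In the full tracker, the candidates are the particle-filter hypotheses and the feature vectors are the concatenated HOG and LBP descriptors.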