• Title/Summary/Keyword: Regularization


Super-Resolution Image Reconstruction Using Multi-View Cameras (다시점 카메라를 이용한 초고해상도 영상 복원)

  • Ahn, Jae-Kyun;Lee, Jun-Tae;Kim, Chang-Su
    • Journal of Broadcast Engineering, v.18 no.3, pp.463-473, 2013
  • In this paper, we propose a super-resolution (SR) image reconstruction algorithm using multi-view images. We acquire 25 images from multi-view cameras, which consist of a $5{\times}5$ array of cameras, and then reconstruct an SR image of the center view using a low-resolution (LR) input image and the other 24 LR reference images. First, we estimate disparity maps from the input image to each of the 24 reference images. Then, we interpolate an SR image by employing the LR image and the matching points in the reference images. Finally, we refine the SR image using an iterative regularization scheme. Experimental results demonstrate that the proposed algorithm provides higher-quality SR images than conventional algorithms.
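
The final refinement step, iterative regularization, can be sketched as gradient descent on a data-fidelity term plus a smoothness penalty (Tikhonov style). The 1D toy problem, the operators `D` and `L`, and all parameter values below are illustrative stand-ins, not the paper's actual operators:

```python
import numpy as np

def refine_sr(x0, lr_obs, D, L, lam=0.05, step=0.2, iters=500):
    """Iteratively refine an SR estimate x by descending on
    ||D x - y||^2 + lam * ||L x||^2 (data fidelity + smoothness penalty)."""
    x = x0.copy()
    for _ in range(iters):
        grad = D.T @ (D @ x - lr_obs) + lam * (L.T @ (L @ x))
        x -= step * grad
    return x

# Toy 1D setting: 2x downsampling by averaging adjacent sample pairs.
n = 8
D = np.zeros((n // 2, n))
for i in range(n // 2):
    D[i, 2 * i] = D[i, 2 * i + 1] = 0.5
# Second-difference (Laplacian) operator as the smoothness regularizer.
L = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

truth = np.linspace(0.0, 1.0, n)          # unknown high-resolution signal
y = D @ truth                             # observed low-resolution signal
x = refine_sr(np.zeros(n), y, D, L)       # regularized SR estimate
```

In the multi-view setting, the fidelity term would additionally involve the matched points from the 24 reference images; the regularizer plays the same stabilizing role.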

Penalized-Likelihood Image Reconstruction for Transmission Tomography Using Spline Regularizers (스플라인 정칙자를 사용한 투과 단층촬영을 위한 벌점우도 영상재구성)

  • Jung, J.E.;Lee, S.-J.
    • Journal of Biomedical Engineering Research, v.36 no.5, pp.211-220, 2015
  • Recently, model-based iterative reconstruction (MBIR) has played an important role in transmission tomography by significantly improving the quality of reconstructed images for low-dose scans. MBIR is based on the penalized-likelihood (PL) approach, where the penalty term (also known as the regularizer) stabilizes the unstable likelihood term, thereby suppressing the noise. In this work we further improve MBIR by using a more expressive regularizer which can restore the underlying image more accurately. Here we used a spline regularizer derived from a linear combination of the two-dimensional splines with first- and second-order spatial derivatives and applied it to a non-quadratic convex penalty function. To derive a PL algorithm with the spline regularizer, we used a separable paraboloidal surrogates algorithm for convex optimization. The experimental results demonstrate that our regularization method improves reconstruction accuracy in terms of both regional percentage error and contrast recovery coefficient by restoring smooth edges as well as sharp edges more accurately.
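
A penalty of the kind described, a non-quadratic convex function applied to first- and second-order spatial differences, can be sketched as follows. The Fair potential here is a generic stand-in convex penalty, and the weights and `delta` are illustrative, not the paper's spline construction:

```python
import numpy as np

def fair_potential(t, delta=1.0):
    """A non-quadratic convex penalty (Fair potential): quadratic near 0,
    asymptotically linear for large |t|, so it suppresses noise while
    penalizing sharp edges less than a quadratic penalty would."""
    a = np.abs(t) / delta
    return delta ** 2 * (a - np.log1p(a))

def derivative_penalty(image, w1=1.0, w2=1.0, delta=1.0):
    """Regularizer combining first- and second-order spatial differences of
    a 2D image under the convex potential above (illustrative weights)."""
    d1x = np.diff(image, 1, axis=1); d1y = np.diff(image, 1, axis=0)
    d2x = np.diff(image, 2, axis=1); d2y = np.diff(image, 2, axis=0)
    return (w1 * (fair_potential(d1x, delta).sum() + fair_potential(d1y, delta).sum())
          + w2 * (fair_potential(d2x, delta).sum() + fair_potential(d2y, delta).sum()))
```

A flat image incurs zero penalty, and the penalty grows with roughness, which is what stabilizes the unstable likelihood term in the PL objective.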

On the Measurement of the Depth and Distance from the Defocused Images Using the Regularization Method (비초점화 영상에서 정칙화법을 이용한 깊이 및 거리 계측)

  • 차국찬;김종수
    • Journal of the Korean Institute of Telematics and Electronics B, v.32B no.6, pp.886-898, 1995
  • One way to measure distance in computer vision is to use focus and defocus. There are two such methods. The first calculates the distance from the images focused at a point (MMDFP: the method measuring the distance to the focal plane). The second measures the distance from the difference in camera parameters, in other words the apertures or focal planes, of two images taken with different parameters (MMDCI: the method to measure the distance by comparing two images). The problem with existing MMDFP methods is deciding the threshold value for detecting the most optimally focused object in the defocused image; this can be solved by comparing only the error energy in a 3x3 window between the two images. In MMDCI, the difficulty is the influence of the deflection effect. Therefore, to minimize its influence, we utilize two differently focused images instead of images with different apertures in this paper. First, the amount of defocusing between the two images is measured through the introduction of regularization, and then the distance from the camera to the objects is calculated by a new distance-measurement equation. Simulation results show that the distance can be measured from two differently defocused images, and that our approach is more robust in noisy images than the method using different apertures.
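
Once the amount of defocus is estimated, distance can be recovered from a thin-lens defocus model. The sketch below uses one common form of that relation, sigma = k * (A/2) * v * (1/f - 1/v - 1/u); the symbols and the calibration constant `k` are generic depth-from-defocus conventions, not necessarily the paper's new measurement equation:

```python
def blur_from_depth(u, f, v, aperture, k=1.0):
    """Forward thin-lens defocus model: blur amount sigma for an object at
    distance u, with focal length f, lens-to-sensor distance v, and
    aperture diameter (k is a sensor calibration constant)."""
    return k * (aperture / 2.0) * v * (1.0 / f - 1.0 / v - 1.0 / u)

def depth_from_blur(sigma, f, v, aperture, k=1.0):
    """Invert the model above for the object distance u."""
    return 1.0 / (1.0 / f - 1.0 / v - 2.0 * sigma / (k * aperture * v))
```

The regularization in the paper serves to stabilize the estimate of sigma itself from the two differently focused images; the inversion for distance is then direct arithmetic as above.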


An Adaptation Method in Noise Mismatch Conditions for DNN-based Speech Enhancement

  • Xu, Si-Ying;Niu, Tong;Qu, Dan;Long, Xing-Yan
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.10, pp.4930-4951, 2018
  • Deep learning based speech enhancement has shown considerable success. However, it still suffers performance degradation under mismatch conditions. In this paper, an adaptation method is proposed to improve performance under noise mismatch conditions. Firstly, we devise noise-aware training by supplying identity vectors (i-vectors) as parallel input features to adapt the deep neural network (DNN) acoustic models to the target noise. Secondly, given a small amount of adaptation data, the noise-dependent DNN is obtained from a noise-independent DNN by using $L_2$ regularization, forcing the estimated masks to stay close to the unadapted condition. Finally, experiments were carried out under different noise and SNR conditions; the proposed method achieved significant STOI benefits of 0.1%-9.6% and provided consistent improvements in PESQ and segSNR over the baseline systems.
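
The second step, $L_2$-regularized adaptation toward a noise-independent model, can be sketched on a linear stand-in for the DNN: fit the small adaptation set while penalizing the distance of the weights from the unadapted weights. All shapes, data, and hyperparameters below are illustrative:

```python
import numpy as np

def adapt_l2(W_si, X, Y, lam=1.0, lr=0.02, iters=1000):
    """Minimize ||X W - Y||^2 + lam * ||W - W_si||^2 by gradient descent,
    so the adapted weights W stay close to the noise-independent W_si."""
    W = W_si.copy()
    for _ in range(iters):
        grad = X.T @ (X @ W - Y) + lam * (W - W_si)
        W -= lr * grad
    return W

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))      # small amount of adaptation data
Y = rng.standard_normal((20, 2))
W_si = rng.standard_normal((3, 2))    # noise-independent (unadapted) weights
W = adapt_l2(W_si, X, Y, lam=1.0)
# closed-form minimizer of the same objective, for reference
W_star = np.linalg.solve(X.T @ X + 1.0 * np.eye(3), X.T @ Y + 1.0 * W_si)
```

Larger `lam` keeps the adapted model closer to the unadapted one, which is exactly the trade-off the paper exploits when adaptation data is scarce.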

Sub-pixel Motion Compensated Deinterlacing Algorithm (부화소 단위의 움직임 정보를 고려한 순차 주사화)

  • 박민규;최종성;강문기
    • Journal of the Institute of Electronics Engineers of Korea SP, v.40 no.5, pp.322-331, 2003
  • Advances in high-definition television (HDTV) and personal computers call for mutual conversion between interlaced and progressive signals. In particular, deinterlacing, the conversion from interlaced to progressive scanning, has recently been required and investigated. In this paper, we propose a new deinterlacing algorithm that considers sub-pixel motion information. To reduce the error of motion estimation, we analyze the effect of inaccurate sub-pixel motion information and model it as zero-mean Gaussian noise added to each low-resolution image (field). The error caused by inaccurate motion information is reduced by determining the regularization parameter according to the motion-estimation error in each channel. The validity of the proposed algorithm is demonstrated both theoretically and experimentally.
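
The idea of choosing the regularization parameter per channel from the modeled motion-error variance can be sketched with a simple proportional rule: fields whose motion estimates are less reliable receive a larger smoothing weight. The scaling below is illustrative, not the paper's derivation:

```python
import numpy as np

def channel_reg_params(motion_error_vars, base=1.0):
    """Assign each channel (field) a regularization parameter proportional
    to its modeled zero-mean Gaussian motion-error variance: channels with
    less reliable motion get regularized (smoothed) more."""
    v = np.asarray(motion_error_vars, dtype=float)
    return base * v / v.mean()
```
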

An Improved Method of Two-Stage Linear Discriminant Analysis

  • Chen, Yarui;Tao, Xin;Xiong, Congcong;Yang, Jucheng
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.3, pp.1243-1263, 2018
  • Two-stage linear discriminant analysis (TSLDA) is a feature extraction technique for solving the small sample size problem in the field of image recognition. TSLDA retains all subspace information of the between-class and within-class scatter. However, the feature information in the four subspaces may not be entirely beneficial for classification, and the regularization procedure for eliminating singular matrices in TSLDA has high time complexity. To address these drawbacks, this paper proposes an improved two-stage linear discriminant analysis (Improved TSLDA). The Improved TSLDA uses a selection and compression method to extract superior feature information from the four subspaces to constitute an optimal projection space, defining a single Fisher criterion to measure the importance of each feature vector. Meanwhile, the Improved TSLDA also applies an approximation matrix method to eliminate the singular matrices and reduce time complexity. This paper presents comparative experiments on five face databases and one handwritten digit database to validate the effectiveness of the Improved TSLDA.
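
A single-vector Fisher criterion of the kind used to rank feature directions can be sketched as a Rayleigh-quotient score: between-class scatter over within-class scatter along one direction. This is the generic formulation; the paper's exact criterion may differ:

```python
import numpy as np

def fisher_score(v, Sb, Sw, eps=1e-12):
    """Score a single candidate feature direction v by the ratio of
    between-class scatter (v' Sb v) to within-class scatter (v' Sw v);
    higher scores mean better class separation along v."""
    v = v / np.linalg.norm(v)
    return float(v @ Sb @ v) / (float(v @ Sw @ v) + eps)
```

Ranking the eigenvectors of the four subspaces by such a score, then keeping the top ones, is one way to "select and compress" the subspace information into an optimal projection space.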

Two Dimensional Slow Feature Discriminant Analysis via L2,1 Norm Minimization for Feature Extraction

  • Gu, Xingjian;Shu, Xiangbo;Ren, Shougang;Xu, Huanliang
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.7, pp.3194-3216, 2018
  • Slow Feature Discriminant Analysis (SFDA) is a supervised feature extraction method inspired by biological mechanisms. In this paper, a novel method called Two-Dimensional Slow Feature Discriminant Analysis via $L_{2,1}$ norm minimization ($2DSFDA-L_{2,1}$) is proposed. $2DSFDA-L_{2,1}$ integrates $L_{2,1}$ norm regularization and a 2D statistically uncorrelated constraint to extract discriminant features. First, $L_{2,1}$ norm regularization promotes row-sparsity of the projection matrix, so that feature selection and subspace learning are performed simultaneously. Second, uncorrelated features of minimum redundancy are effective for classification; we define a 2D statistically uncorrelated model in which the rows (or columns) are independent. Third, we provide a feasible solution by transforming the proposed $L_{2,1}$ nonlinear model into a linear regression type. Additionally, $2DSFDA-L_{2,1}$ is extended to a bilateral projection version called $BSFDA-L_{2,1}$, whose advantage is that an image can be represented with far fewer coefficients. Experimental results on three face databases demonstrate that the proposed $2DSFDA-L_{2,1}$/$BSFDA-L_{2,1}$ obtains competitive performance.
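
The row-sparsity effect of $L_{2,1}$ regularization can be seen directly from the norm's definition and its proximal operator, sketched below in generic form (independent of the paper's specific objective):

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm: the sum of the L2 norms of the rows of W."""
    return np.sqrt((W ** 2).sum(axis=1)).sum()

def prox_l21(W, tau):
    """Proximal operator of tau * ||.||_{2,1}: shrink each row toward zero
    and set whole rows to zero when their L2 norm falls below tau -- this
    row-wise thresholding is what makes L2,1 regularization select (or
    discard) entire features at once."""
    norms = np.sqrt((W ** 2).sum(axis=1, keepdims=True))
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return W * scale
```

Rows of the projection matrix that contribute little are driven exactly to zero, which is how feature selection and subspace learning happen simultaneously.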

The Joint Effect of factors on Generalization Performance of Neural Network Learning Procedure (신경망 학습의 일반화 성능향상을 위한 인자들의 결합효과)

  • Yoon YeoChang
    • The KIPS Transactions:PartB, v.12B no.3 s.99, pp.343-348, 2005
  • The goal of this paper is to study the joint effect of factors in the neural network learning procedure. Many factors may affect the generalization ability and learning speed of neural networks, such as the initial values of the weights, the learning rate, and the regularization coefficient. We apply a constructive training algorithm for the neural network, in which patterns are trained incrementally by considering them one by one. First, we investigate the effect of these factors on generalization performance and learning speed. Based on these effects, we propose a joint method that simultaneously considers the three factors and dynamically tunes the learning rate and regularization coefficient. We then present experimental comparisons among these methods on several simulated nonlinear data sets. Finally, we draw conclusions and outline plans for future work.
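
Dynamically tuning the learning rate and regularization coefficient together can be illustrated on a toy single-layer fit, where both factors are decayed over training. The decay schedules and data here are illustrative, not the paper's constructive scheme:

```python
import numpy as np

def train_joint(X, y, lr0=0.1, reg0=0.01, epochs=300):
    """Toy ridge-style fit in which the learning rate and the regularization
    coefficient are both decayed over epochs -- a sketch of jointly
    adjusting these two factors during training."""
    w = np.zeros(X.shape[1])
    for t in range(epochs):
        lr = lr0 / (1.0 + 0.01 * t)     # dynamically decayed learning rate
        reg = reg0 / (1.0 + 0.01 * t)   # dynamically decayed regularization
        grad = X.T @ (X @ w - y) / len(X) + reg * w
        w -= lr * grad
    return w

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])      # underlying slope is 2
w = train_joint(X, y)
```

Decaying the regularization coefficient lets early training stay stable while late training fits the data more closely, one plausible way the two factors interact.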

A Fast Scheme for Inverting Single-Hole Electromagnetic Data

  • Kim Hee Joon;Lee Jung-Mo;Lee Ki Ha
    • Proceedings of the KSEEG Conference, 2002.04a, pp.167-169, 2002
  • The extended Born, or localized nonlinear, approximation of the integral equation (IE) solution has been applied to inverting single-hole electromagnetic (EM) data using a cylindrically symmetric model. The extended Born approximation is less accurate than a full solution but much superior to the simple Born approximation. When applied to the cylindrically symmetric model with a vertical magnetic dipole source, however, the accuracy of the extended Born approximation is greatly improved because the electric field is scalar and continuous everywhere. One of the most important steps in the inversion is the selection of a proper regularization parameter for stability. Occam's inversion (Constable et al., 1987) is an excellent method for obtaining a stable inverse solution. It is extremely slow when combined with a differential-equation method, because many forward simulations are needed, but it is well suited to the extended Born solution because the Green's functions, the most time-consuming part of IE methods, are reusable throughout the inversion. In addition, the IE formulation readily provides a sensitivity matrix, which can be revised at each iteration at little expense. The inversion algorithm developed in this study is quite stable and fast even when the optimum regularization parameter is sought at each iteration step. In this paper we show inversion results using synthetic data obtained from a finite-element method, as well as field data.
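
The Occam-style choice of regularization parameter can be sketched for a linear problem: among candidate values, take the largest lambda (i.e., the smoothest model) whose data misfit still meets the target. This is the idea of Constable et al. (1987) in miniature; the candidate grid and target below are illustrative:

```python
import numpy as np

def occam_select(G, d, lambdas, target_misfit):
    """For each candidate lambda (largest first), solve the Tikhonov system
    (G'G + lambda*I) m = G'd and return the first (smoothest) model whose
    data misfit ||G m - d|| meets the target."""
    for lam in sorted(lambdas, reverse=True):
        m = np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ d)
        if np.linalg.norm(G @ m - d) <= target_misfit:
            return lam, m
    # fall back to the smallest lambda if none meets the target
    lam = min(lambdas)
    return lam, np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ d)
```

Searching this lambda at every iteration is what normally makes Occam's inversion slow; with re-usable Green's functions, each candidate solve is cheap, which is the speedup the paper exploits.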


A Spline-Regularized Sinogram Smoothing Method for Filtered Backprojection Tomographic Reconstruction

  • Lee, S.J.;Kim, H.S.
    • Journal of Biomedical Engineering Research, v.22 no.4, pp.311-319, 2001
  • Statistical reconstruction methods in the context of a Bayesian framework have played an important role in emission tomography, since they allow a priori information to be incorporated into the reconstruction algorithm. Given the ill-posed nature of tomographic inversion and the poor quality of projection data, the Bayesian approach uses regularizers to stabilize solutions by incorporating suitable prior models. In this work we show that, while the quantitative performance of the standard filtered backprojection (FBP) algorithm is not as good as that of Bayesian methods, applying spline-regularized smoothing in the sinogram space lets the FBP algorithm improve its performance by inheriting the advantages of the spline priors used in Bayesian methods. We first show how to implement the spline-regularized smoothing filter by deriving the mathematical relationship between the regularization and lowpass filtering. We then compare the quantitative performance of our new FBP algorithms using bias/variance quantitation and the total squared error (TSE) measured over noise trials. Our numerical results show that, among the three spline orders considered in our experiments, the second-order spline filter applied to FBP yields the best results in terms of TSE.
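
The regularization-to-lowpass relationship can be sketched in one dimension: a quadratic penalty on the m-th derivative induces the smoothing-spline frequency response H(w) = 1 / (1 + lam * w^(2m)), which can be applied to a sinogram row via the FFT. This is the generic textbook form of the relationship, not necessarily the paper's exact filter:

```python
import numpy as np

def spline_smooth(signal, lam=1.0, order=2):
    """Smooth one sinogram row with the lowpass response
    H(w) = 1 / (1 + lam * w^(2*order)) that a quadratic penalty on the
    order-th derivative induces; H(0) = 1, so the DC level is preserved."""
    w = 2.0 * np.pi * np.fft.rfftfreq(len(signal))
    H = 1.0 / (1.0 + lam * w ** (2 * order))
    return np.fft.irfft(np.fft.rfft(signal) * H, n=len(signal))
```

Larger `lam` (stronger regularization) lowers the cutoff; `order=2` corresponds to the second-order spline case the paper found best in terms of TSE.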
