• Title/Summary/Keyword: Convolution Kernel

PDE-PRESERVING PROPERTIES

  • Petersson, Henrik
    • Journal of the Korean Mathematical Society / v.42 no.3 / pp.573-597 / 2005
  • A continuous linear operator $T$ on the space of entire functions in $d$ variables is PDE-preserving for a given set $\mathbb{P}\;\subseteq\;\mathbb{C}[\xi_{1},\ldots,\xi_{d}]$ of polynomials if it maps every kernel set ker P(D), $P\;{\in}\;\mathbb{P}$, invariantly. It is clear that the set $\mathbb{O}({\mathbb{P}})$ of PDE-preserving operators for $\mathbb{P}$ forms an algebra under composition. We study and link properties and structures on the operator side $\mathbb{O}({\mathbb{P}})$ versus the corresponding family $\mathbb{P}$ of polynomials. For our purposes, we introduce notions such as the PDE-preserving hull and basic sets for a given set $\mathbb{P}$, which are, roughly, the largest and a minimal collection of polynomials, respectively, that generate all the PDE-preserving operators for $\mathbb{P}$. We also describe PDE-preserving operators via a kernel theorem, and we apply Hilbert's Nullstellensatz.
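
Restating the defining condition from this abstract in display form: $T$ is PDE-preserving for $\mathbb{P}$ exactly when
$$T\big(\ker P(D)\big)\;\subseteq\;\ker P(D)\quad\text{for every }P\in\mathbb{P},$$
that is, $P(D)f = 0$ implies $P(D)(Tf) = 0$ for every entire function $f$ and every $P\in\mathbb{P}$.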

Compressed Representation of CNN for Image Compression in MPEG-NNR

  • Moon, HyeonCheol;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.84-85 / 2019
  • MPEG-NNR (Compression of Neural Network for Multimedia Content Description and Analysis) aims to define a compressed and interoperable representation of trained neural networks. In this paper, we present a low-rank approximation to compress a CNN used for image compression, which is one of the MPEG-NNR use cases. In the presented method, the low-rank approximation decomposes each 2D kernel matrix of weights into two 1D kernel matrices in every convolution layer to reduce the amount of weight data. The evaluation results show that the model size of the original CNN is reduced by half and the inference runtime is reduced by up to about 30%, with negligible loss in PSNR.
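
A minimal sketch of the low-rank idea described above, not the paper's exact MPEG-NNR pipeline: a 2D convolution kernel is approximated by the outer product of two 1D kernels obtained from a rank-1 SVD truncation (the function name separate_kernel is illustrative only).

    import numpy as np

    def separate_kernel(W):
        # Approximate a 2D kernel W (k x k) by a vertical and a horizontal 1D kernel.
        U, s, Vt = np.linalg.svd(W)
        col = U[:, 0] * np.sqrt(s[0])    # vertical (k,) kernel
        row = Vt[0, :] * np.sqrt(s[0])   # horizontal (k,) kernel
        return col, row

    W = np.random.randn(3, 3)
    col, row = separate_kernel(W)
    W_approx = np.outer(col, row)        # convolving with col then row ~ convolving with W_approx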

ON THE INITIAL VALUES OF SOLUTIONS OF A GENERAL FUNCTIONAL EQUATION

  • Chung, Jae-Young;Kim, Do-Han
    • Bulletin of the Korean Mathematical Society / v.48 no.2 / pp.387-396 / 2011
  • We consider a general functional equation with a time variable, which arises when we investigate regularity problems for some general functional equations. As a result, we prove the regularity of the initial values of the solutions. As an application, we also prove the regularity of solutions of some classical functional equations and of their distributional versions.

WAVEFRONT SOLUTIONS IN THE DIFFUSIVE NICHOLSON'S BLOWFLIES EQUATION WITH NONLOCAL DELAY

  • Zhang, Cun-Hua
    • Journal of Applied Mathematics & Informatics / v.28 no.1_2 / pp.49-58 / 2010
  • In the present article we consider the diffusive Nicholson's blowflies equation with a nonlocal delay incorporated as an integral convolution over all past time and the whole infinite spatial domain $\mathbb{R}$. When the kernel function takes a particular form, we construct a pair of lower and upper solutions of the corresponding travelling wave equation and obtain the existence of travelling fronts from the existence result for travelling wave front solutions of reaction-diffusion systems with nonlocal delays developed by Wang, Li and Ruan (J. Differential Equations, 222 (2006), 185-232).
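
For orientation, the nonlocal delay term referred to above is a space-time convolution; schematically (the specific kernel and reaction terms are those of the paper and are not spelled out here),
$$(g\ast u)(x,t)\;=\;\int_{-\infty}^{t}\!\int_{\mathbb{R}} g(x-y,\,t-s)\,u(y,s)\,dy\,ds,$$
so the reaction term depends on a weighted average of $u$ over all past time and all of $\mathbb{R}$ rather than on $u(x,t)$ alone.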

Lp-BOUNDEDNESS FOR THE COMMUTATORS OF ROUGH OSCILLATORY SINGULAR INTEGRALS WITH NON-CONVOLUTION PHASES

  • Wu, Huoxiong
    • Journal of the Korean Mathematical Society / v.46 no.3 / pp.577-588 / 2009
  • In this paper, the author studies the $k$-th commutators of oscillatory singular integral operators with a BMO function and with phases more general than polynomials. For $1 < p < \infty$, the $L^p$-boundedness of such operators is obtained provided their kernels belong to the space $L(\log^{+}L)^{k+1}(S^{n-1})$. The corresponding results for the maximal operators are also established.
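
Schematically (this is the standard shape of such operators, not the paper's precise hypotheses), the $k$-th commutator of a rough oscillatory singular integral with a BMO function $b$ and a non-convolution phase $\Phi$ acts as
$$T_{b,k}f(x)\;=\;\mathrm{p.v.}\int_{\mathbb{R}^n} e^{i\Phi(x,y)}\,\frac{\Omega\big((x-y)/|x-y|\big)}{|x-y|^{n}}\,\big(b(x)-b(y)\big)^{k}\,f(y)\,dy,$$
with $\Omega\in L(\log^{+}L)^{k+1}(S^{n-1})$ playing the role of the rough kernel on the unit sphere.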

A new absorbing boundary condition for the FDTD simulation of waveguides

  • 박면주;남상욱
    • The Journal of Korean Institute of Communications and Information Sciences / v.21 no.12 / pp.3227-3234 / 1996
  • This paper proposes a new absorbing boundary condition (ABC) for the FDTD simulation of waveguide problems. It is based on the exact analytic expression for time-domain EM wave propagation in the waveguide. The ABC derived from this expression has a convolution form whose kernel (the discrete Green's function) has a simple closed-form formula. It is also applicable to a wide variety of waveguide types with conducting boundaries and complex cross-sectional shapes.
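
A minimal sketch of what a convolution-type ABC update looks like in practice, assuming a scalar field component and a precomputed discrete Green's function kernel g (the names and the update form below are illustrative, not the paper's exact formulation):

    import numpy as np

    def convolution_abc(interior_history, g):
        # interior_history[k] is the interior-node field value k time steps ago;
        # the boundary node is set to the discrete convolution with the kernel g.
        K = min(len(interior_history), len(g))
        return float(np.dot(g[:K], interior_history[:K]))

    # At each FDTD time step: prepend the newest interior sample to interior_history,
    # then assign convolution_abc(interior_history, g) to the boundary node
    # before the next field update.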

An Image Interpolation by Adaptive Parametric Cubic Convolution

  • Yoo, Jea-Wook;Park, Dae-Hyun;Kim, Yoon
    • Journal of the Korea Society of Computer and Information / v.13 no.6 / pp.163-171 / 2008
  • In this paper, we present an adaptive parametric cubic convolution technique for enlarging a low-resolution image to a high-resolution image. The proposed method consists of two steps. In the first interpolation step, we obtain adaptive parameters by introducing a new cost function that reflects frequency properties. The second interpolation step then performs cubic convolution using the parameters obtained in the first step. The enhanced interpolation kernel with adaptive parameters produces better output images than the conventional kernel with a fixed parameter. Experimental results show that the proposed method not only provides PSNR improvements of about 0.5~4 dB but also exhibits better edge preservation and greater similarity to the original image than conventional methods in the enlarged images.
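
The parametric kernel underlying cubic convolution interpolation is the classical Keys kernel with a free parameter $a$ (conventionally fixed at $a=-0.5$); the paper above chooses the parameter adaptively. A minimal sketch:

    import numpy as np

    def cubic_kernel(s, a=-0.5):
        # Parametric cubic convolution kernel (Keys, 1981).
        s = abs(s)
        if s < 1:
            return (a + 2) * s**3 - (a + 3) * s**2 + 1
        if s < 2:
            return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
        return 0.0

    def interpolate_1d(samples, x, a=-0.5):
        # Interpolate a uniformly sampled 1D signal at fractional position x
        # using the four nearest samples (clamped at the borders).
        i = int(np.floor(x))
        idx = [min(max(i + k, 0), len(samples) - 1) for k in range(-1, 3)]
        return sum(samples[j] * cubic_kernel(x - (i + k), a)
                   for k, j in zip(range(-1, 3), idx))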

MTF Assessment and Image Restoration Technique for Post-Launch Calibration of DubaiSat-1

  • Hwang, Hyun-Deok;Park, Won-Kyu;Kwak, Sung-Hee
    • Korean Journal of Remote Sensing / v.27 no.5 / pp.573-586 / 2011
  • The MTF (modulation transfer function) is one of the parameters used to evaluate the performance of imaging systems. It can also be used to restore information that is lost due to the harsh space environment (radioactivity, extreme cold/heat conditions, electromagnetic fields, etc.), atmospheric effects, and degradation of system performance. This paper evaluates the MTF of images taken by the DubaiSat-1 satellite, which was launched in 2009 by EIAST (Emirates Institute for Advanced Science and Technology) and Satrec Initiative. The MTF is generally assessed using various methods, such as a point-source method and a knife-edge method; this paper uses the slanted-edge method, the ISO 12233 standard for MTF measurement of electronic still-picture cameras, adapted here to estimate the MTF of line-scanning telescopes. After assessing the MTF, we perform MTF compensation by generating an MTF convolution kernel based on the PSF (point spread function), together with image denoising, to enhance image quality.
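
A highly simplified sketch of the edge-based MTF estimation chain (assumptions: a nearly vertical dark-to-bright edge and no sub-pixel projection/binning, which a full ISO 12233 slanted-edge implementation adds):

    import numpy as np

    def edge_mtf(edge_image):
        # ESF: average image rows across the edge.
        esf = edge_image.mean(axis=0)
        # LSF: derivative of the ESF, windowed to limit truncation effects.
        lsf = np.gradient(esf) * np.hanning(len(esf))
        # MTF: magnitude of the Fourier transform, normalized at zero frequency.
        mtf = np.abs(np.fft.rfft(lsf))
        return mtf / mtf[0]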

Research on a handwritten character recognition algorithm based on an extended nonlinear kernel residual network

  • Rao, Zheheng;Zeng, Chunyan;Wu, Minghu;Wang, Zhifeng;Zhao, Nan;Liu, Min;Wan, Xiangkui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.1 / pp.413-435 / 2018
  • Although the accuracy of handwritten character recognition based on deep networks has been shown to be superior to that of traditional methods, the use of an overly deep network significantly increases time consumption during parameter training. For this reason, this paper takes both training time and recognition accuracy into consideration and proposes a novel handwritten character recognition algorithm with a newly designed network structure based on an extended nonlinear kernel residual network. This network is not an extremely deep network, and its main design is as follows: (1) design of an unsupervised apriori algorithm for intra-class clustering, making the subsequent network training more pertinent; (2) an intermediate convolution model with a pre-processed width level of 2; (3) a composite residual structure that provides multi-level quick links; and (4) addition of a Dropout layer after the parameter optimization. The algorithm shows superior results on MNIST and SVHN, two benchmark character recognition datasets, and achieves better recognition accuracy and higher recognition efficiency than other deep structures with the same number of layers.
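
As an illustration of items (3) and (4) above only, and not the paper's exact "extended nonlinear kernel" block or multi-level quick-link layout, a generic residual block with an identity shortcut followed by dropout can be sketched as:

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels, p_drop=0.5):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
            )
            self.drop = nn.Dropout(p_drop)

        def forward(self, x):
            out = torch.relu(self.body(x) + x)  # identity shortcut ("quick link")
            return self.drop(out)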

History of the Photon Beam Dose Calculation Algorithm in Radiation Treatment Planning System

  • Kim, Dong Wook;Park, Kwangwoo;Kim, Hojin;Kim, Jinsung
    • Progress in Medical Physics / v.31 no.3 / pp.54-62 / 2020
  • Dose calculation algorithms play an important role in radiation therapy and are even the basis for optimizing treatment plans, an important feature in the development of complex treatment technologies such as intensity-modulated radiation therapy. We reviewed the past and current status of dose calculation algorithms used in treatment planning systems for radiation therapy. Dose calculation algorithms can be broadly classified into three main groups based on the mechanisms used: (1) factor-based, (2) model-based, and (3) principle-based. Factor-based algorithms are a type of empirical dose calculation that interpolates or extrapolates the dose from a set of basic measurements. Model-based algorithms, represented by the pencil beam convolution, analytical anisotropic, and collapsed cone convolution algorithms, use a simplified physical process by means of a convolution equation that convolves the primary photon energy fluence with a kernel. Model-based algorithms, which account for side scattering when beams pass through heterogeneous media, provide more precise dose calculation results than correction-based algorithms. Principle-based algorithms, represented by Monte Carlo dose calculation, simulate all real physical processes involving beam particles during transport; therefore, dose calculations are accurate but time consuming. Over approximately 70 years of development in dose calculation algorithms and computing technology, the accuracy of dose calculation has come close to meeting clinical needs. Next-generation dose calculation algorithms are expected to include biologically equivalent or biologically effective doses, and doctors expect to be able to use them to improve the quality of treatment in the near future.
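
A schematic illustration of the model-based convolution step mentioned above, in which dose is obtained by convolving the primary photon energy fluence with a dose-deposition kernel (a toy 2D example with an invented Gaussian stand-in kernel; clinical algorithms use 3D kernels, heterogeneity corrections, and kernel tilting):

    import numpy as np
    from scipy.signal import fftconvolve

    fluence = np.zeros((101, 101))
    fluence[40:61, 40:61] = 1.0                       # simple open-field primary fluence

    y, x = np.mgrid[-10:11, -10:11]
    kernel = np.exp(-(x**2 + y**2) / (2 * 3.0**2))    # stand-in scatter kernel
    kernel /= kernel.sum()

    dose = fftconvolve(fluence, kernel, mode="same")  # dose = fluence convolved with kernel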