• Title/Summary/Keyword: sharp domain

Multiscale Regularization Method for Image Restoration (다중척도 정칙화 방법을 이용한 영상복원)

  • 이남용
    • Journal of the Institute of Convergence Signal Processing, v.5 no.3, pp.173-180, 2004
  • In this paper we present a new image restoration method based on multiscale regularization in the redundant wavelet transform domain. The proposed method uses the redundant wavelet transform to decompose the single-scale image restoration problem into multiscale ones and applies scale-dependent regularization to the decomposed restoration problems. The proposed method recovers sharp edges by applying rather less regularization to the wavelet-related restorations, while suppressing the resulting noise magnification with a wavelet shrinkage algorithm. The improved performance of the proposed method over traditional Wiener filtering is shown through numerical experiments. (An illustrative sketch follows this entry.)

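The entry above describes a pipeline of redundant wavelet decomposition, scale-dependent regularization, and wavelet shrinkage. Below is a minimal Python sketch of that general idea, not the author's exact algorithm: it uses PyWavelets' stationary (redundant) wavelet transform, a Tikhonov-style Fourier-domain deconvolution with a per-scale regularization weight, and soft shrinkage of the detail bands. The blur model, wavelet, scale weights, and threshold are all illustrative assumptions.

```python
# Hedged sketch of scale-dependent regularized restoration with wavelet
# shrinkage; not the paper's exact method. Parameters are illustrative.
import numpy as np
import pywt

def tikhonov_deconv(blurred, psf, lam):
    """Fourier-domain Tikhonov-regularized deconvolution.
    `psf` is assumed centered and the same shape as `blurred`."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    B = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * B / (np.abs(H) ** 2 + lam)))

def multiscale_restore(blurred, psf, wavelet="db2", levels=2,
                       lam_first=1e-1, lam_rest=1e-3, thresh=0.02):
    # Redundant (stationary) wavelet decomposition; image dimensions
    # must be divisible by 2**levels for swt2.
    coeffs = pywt.swt2(blurred, wavelet, level=levels)
    restored = []
    for i, (cA, (cH, cV, cD)) in enumerate(coeffs):
        # Scale-dependent regularization: heavier weight on the first
        # (assumed coarsest) element, lighter elsewhere to keep edges sharp.
        lam = lam_first if i == 0 else lam_rest
        cA = tikhonov_deconv(cA, psf, lam)
        # Soft shrinkage of the detail bands suppresses the amplified noise.
        cH, cV, cD = (pywt.threshold(tikhonov_deconv(c, psf, lam),
                                     thresh, mode="soft")
                      for c in (cH, cV, cD))
        restored.append((cA, (cH, cV, cD)))
    return pywt.iswt2(restored, wavelet)
```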

Optical VSB Filtering of 12.5-GHz Spaced 64 × 12.4 Gb/s WDM Channels Using a Pair of Fabry-Perot Filters

  • Batsuren, Budsuren;Kim, Hyung Hwan;Eom, Chan Yong;Choi, Jin Joo;Lee, Jae Seung
    • Journal of the Optical Society of Korea, v.17 no.1, pp.63-67, 2013
  • We perform optical vestigial sideband (VSB) filtering using a pair of Fabry-Perot (FP) filters. The transmittance curve of each FP filter is made to have sharp skirts using an offset between the input and output coupling fibers. Moreover, the accurate periodicity of the FP filters in the optical frequency domain enables simultaneous VSB filtering of a large number of optical channels. With this VSB filtering technique, we transmit 12.5-GHz spaced 64 × 12.4-Gb/s wavelength-division-multiplexing channels over a single-mode fiber up to 150 km without any dispersion compensation. When the channel spacing is reduced to 10 GHz, we achieve a spectral efficiency of 1 bit/s/Hz in conventional optical intensity-modulation systems up to 125 km. (An illustrative transmittance sketch follows this entry.)
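
As a reading aid only (not taken from the paper), the transmittance of an ideal lossless Fabry-Perot etalon follows the Airy function T(ν) = 1 / (1 + F sin²(πν/FSR)), where FSR is the free spectral range and F the coefficient of finesse. The sketch below evaluates two such periodic responses offset by half a channel spacing, to suggest how a matched filter pair can select one sideband per channel; the FSR, mirror reflectivity, and offset are illustrative assumptions.

```python
# Hedged illustration (not the paper's experiment): a pair of ideal
# Fabry-Perot transmittance curves on a 12.5-GHz periodic grid.
import numpy as np

def fp_transmittance(freq_ghz, fsr_ghz, reflectivity):
    """Airy transmittance of an ideal lossless Fabry-Perot etalon."""
    F = 4.0 * reflectivity / (1.0 - reflectivity) ** 2  # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(np.pi * freq_ghz / fsr_ghz) ** 2)

# Illustrative numbers only: 12.5-GHz free spectral range, with the two
# filters offset by half the grid so each passes mainly one sideband.
freq = np.linspace(-25.0, 25.0, 2001)     # optical frequency offset, GHz
t_a = fp_transmittance(freq, fsr_ghz=12.5, reflectivity=0.9)
t_b = fp_transmittance(freq - 6.25, fsr_ghz=12.5, reflectivity=0.9)
print(f"on/off contrast of each filter: {t_a.max() / t_a.min():.0f}:1")
```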

Molecular Dynamics Simulation on the Thermal Boundary Resistance of a Thin-film and Experimental Validation (분자동역학을 이용한 박막의 열경계저항 예측 및 실험적 검증)

  • Suk, Myung Eun;Kim, Yun Young
    • Journal of the Computational Structural Engineering Institute of Korea, v.32 no.2, pp.103-108, 2019
  • A non-equilibrium molecular dynamics simulation of the thermal boundary resistance (TBR) of an aluminum (Al)/silicon (Si) interface was performed in the present study. A constant heat flux across the Si/Al interface was simulated by adding kinetic energy in the hot Si region and removing the same amount of energy from the cold Al region. The TBR estimated from the sharp temperature drop at the interface was independent of the heat flux and equal to 5.13 ± 0.17 K·m²/GW at 300 K. The simulation result was confirmed experimentally by the time-domain thermoreflectance technique. A 90-nm-thick Al film was deposited on a Si(100) wafer using an e-beam evaporator, and the TBR at the film/substrate interface was measured using the time-domain thermoreflectance technique based on a femtosecond laser system. A numerical solution of the transient heat conduction equation was obtained using the finite difference method to estimate the TBR value. The experimental results were compared with the prediction, and the nanoscale thermal transport phenomena were discussed. (An illustrative extraction sketch follows this entry.)
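
In a non-equilibrium simulation of this kind, the TBR is typically extracted as R = ΔT / q″, the temperature jump at the interface divided by the imposed heat flux, with ΔT read off from linear fits of the temperature profile on either side of the interface. The sketch below illustrates only that extraction step on synthetic data; it is not the authors' post-processing, and the numbers are assumptions.

```python
# Hedged sketch: extract a thermal boundary resistance (TBR) from a
# temperature profile as R = dT / q''. The profile below is synthetic.
import numpy as np

def tbr_from_profile(z, temp, z_interface, heat_flux_gw_m2):
    """Fit each side of the interface linearly, extrapolate both fits to
    the interface plane, and divide the temperature jump by the flux."""
    left, right = z < z_interface, z > z_interface
    t_left = np.polyval(np.polyfit(z[left], temp[left], 1), z_interface)
    t_right = np.polyval(np.polyfit(z[right], temp[right], 1), z_interface)
    return (t_left - t_right) / heat_flux_gw_m2   # K·m²/GW

# Synthetic profile: a 3 K jump at z = 0 under q'' = 0.6 GW/m².
z = np.linspace(-10.0, 10.0, 81)                              # nm
temp = np.where(z < 0.0, 305.0 - 0.2 * z, 302.0 - 0.2 * z)    # K
print(f"TBR ~ {tbr_from_profile(z, temp, 0.0, 0.6):.2f} K·m²/GW")
```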

MTF Evaluation and Clinical Application according to the Characteristic Kernels in Computed Tomography (Kernel 특성에 따른 MTF 평가 및 임상적 적용에 관한 연구)

  • Yoo, Beong-Gyu;Lee, Jong-Seok;Kweon, Dae-Cheol
    • Progress in Medical Physics, v.18 no.2, pp.55-64, 2007
  • Our objective was to evaluate the clinical feasibility of spatial-domain filtering as an alternative to additional image reconstruction using different kernels in CT. Kernels were grouped as H30 (head medium smooth), B30 (body medium smooth), S80 (special), and U95 (ultra sharp). Four sets of images, derived from thin collimated source images, were generated using the phantom kernels. The MTF (at 50%, 10%, and 2%) measured 3.25, 5.68, and 7.45 lp/cm with H30; 3.84, 6.25, and 7.72 lp/cm with B30; 4.69, 9.49, and 12.34 lp/cm with S80; and 14.19, 20.31, and 24.67 lp/cm with U95. The spatial resolution of the U95 kernel (0.6 mm) was 33.3% better than that of the H30 and B30 kernels (0.8 mm). The initially scanned kernel images were rated for subjective image quality using a five-point scale. Images reconstructed with the sharp convolution kernel (U95) showed an increase in noise, whereas the CT attenuation coefficients were comparable. CT images can increase diagnostic accuracy when appropriate kernels are used for the head (H30), abdomen (B30), and temporal bone and lung (U95); image characteristics may be controlled by adjusting the various CT reconstruction algorithms, taking into account the kernel suited to the examination being performed. (An illustrative MTF-readout sketch follows this entry.)

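The percentages quoted above are the spatial frequencies at which the modulation transfer function (MTF) falls to 50%, 10%, and 2% of its zero-frequency value. The sketch below shows one generic way to read those frequencies off a sampled, monotonically decreasing MTF curve by interpolation; the Gaussian-shaped curve is a synthetic stand-in, not data from the paper.

```python
# Hedged sketch: read the 50% / 10% / 2% points off a sampled MTF curve.
# The Gaussian MTF below is synthetic, not measured data from the paper.
import numpy as np

def mtf_crossings(freq_lp_cm, mtf, fractions=(0.50, 0.10, 0.02)):
    """Return the spatial frequency (lp/cm) at which the normalized MTF
    drops to each requested fraction, by linear interpolation."""
    mtf = mtf / mtf[0]                       # normalize to MTF(0) = 1
    out = {}
    for frac in fractions:
        # np.interp needs increasing x, so interpolate on the reversed curve.
        out[frac] = np.interp(frac, mtf[::-1], freq_lp_cm[::-1])
    return out

freq = np.linspace(0.0, 20.0, 401)           # lp/cm
mtf = np.exp(-(freq / 6.0) ** 2)             # synthetic, monotonically decreasing
for frac, f in mtf_crossings(freq, mtf).items():
    print(f"MTF {frac:>4.0%} at {f:.2f} lp/cm")
```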

LARGE TIME ASYMPTOTICS OF LEVY PROCESSES AND RANDOM WALKS

  • Jain, Naresh C.
    • Journal of the Korean Mathematical Society, v.35 no.3, pp.583-611, 1998
  • We consider a general class of real-valued Levy processes $\{X(t), t \geq 0\}$ and obtain suitable large deviation results for the empiricals $L(t, A)$ defined by $t^{-1}\int_0^t 1_A(X(s))\,ds$ for $t > 0$ and a Borel subset $A$ of the real line. These results are used to obtain the asymptotic behavior of $P\{Z(t) < a\}$, where $Z(t) = \sup_{u \leq t} |X(u)|$, as $t \to \infty$, in terms of the rate function in the large deviation principle. A subclass of these processes is the Feller class: there exist nonrandom functions $b(t)$ and $a(t) > 0$ such that $\{(X(t) - b(t))/a(t) : t > 0\}$ is stochastically compact, i.e., each sequence has a weakly convergent subsequence with a nondegenerate limit. The stable processes are in this class, but it is much larger. We consider processes in this class for which $b(t)$ may be taken to be zero. For any $t > 0$, we consider the renormalized process $\{X(u\psi(t))/a(\psi(t)), u \geq 0\}$, where $\psi(t) = t(\log\log t)^{-1}$, and obtain large deviation probability estimates for $L_t(A) := (\log\log t)^{-1}\int_0^{\log\log t} 1_A(X(u\psi(t))/a(\psi(t)))\,du$. It turns out that the upper and lower bounds are sharp and depend on the entire compact set of limit laws of $\{X(t)/a(t)\}$. The results extend to random walks in the Feller class as well. Earlier results of this nature were obtained by Donsker and Varadhan for symmetric stable processes and by Jain for random walks in the domain of attraction of a stable law. (The key functionals are restated in display form below.)

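As a reading aid only, restating the abstract's notation (nothing new is added), the empirical functionals and the renormalization can be written in display form:

```latex
% Restatement of the abstract's notation in display form.
\[
  L(t,A) = \frac{1}{t}\int_0^t \mathbf{1}_A\bigl(X(s)\bigr)\,ds ,
  \qquad
  Z(t) = \sup_{u \le t}\lvert X(u)\rvert ,
\]
\[
  \psi(t) = \frac{t}{\log\log t},
  \qquad
  L_t(A) = \frac{1}{\log\log t}\int_0^{\log\log t}
  \mathbf{1}_A\!\left(\frac{X\bigl(u\,\psi(t)\bigr)}{a\bigl(\psi(t)\bigr)}\right) du .
\]
```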

A New Closed-form Transfer Function for the Design of Wideband Lowpass MAXFLAT FIR Filters with Zero Phase (제로 위상을 갖는 광대역 저역통과 MAXFLAT FIR 필터 설계를 위한 새로운 폐쇄형 전달 함수)

  • Jeon, Joon-Hyeon
    • The Journal of Korean Institute of Communications and Information Sciences, v.32 no.7C, pp.658-666, 2007
  • In general, earlier linear-phase MAXFLAT (maximally flat) lowpass FIR filters have the main disadvantage of a gain response confined to the half frequency band $(0 \leq \omega \leq \pi/2)$, a limitation imposed by the closed-form transfer functions used in the design techniques for realizing them. Moreover, most of them have problems such as ripple error in the stopband, gentle cutoff attenuation, phase and group delay, and an inexact cutoff frequency response. This is due to the approximation algorithms, such as the Chebyshev norm and Remez exchange, which are used to approximate MAXFLAT and linear-phase characteristics in the frequency domain. In this paper, a new mathematically closed-form transfer function is introduced for the design of MAXFLAT lowpass FIR filters that have a zero-phase and wideband gain response. In addition, we verify that the closed-form transfer function is easily realized using our newly derived generalized formulas, which are based on MAXFLAT conditions including an arbitrary cutoff point. This method is therefore useful for "simple and quick designs." In conclusion, we propose a technique for the design of new zero-phase wideband MAXFLAT lowpass FIR filters that can achieve sharp cutoff attenuation exceeding 250 dB almost everywhere. (A generic maximally flat response sketch follows this entry.)
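
The paper's own closed-form transfer function is not reproduced in the abstract, so the sketch below instead evaluates the classical maximally flat (Herrmann-type) zero-phase lowpass response, H(ω) = (1 − y)^K Σ_{n=0}^{L−1} C(K−1+n, n) yⁿ with y = sin²(ω/2), purely to illustrate what a zero-phase MAXFLAT magnitude response looks like: monotonic, ripple-free, and maximally flat at ω = 0 and ω = π. The orders K and L are illustrative assumptions, not the paper's design.

```python
# Hedged sketch (not the paper's transfer function): the classical
# Herrmann-type maximally flat zero-phase lowpass FIR response,
#   H(w) = (1 - y)^K * sum_{n=0}^{L-1} C(K-1+n, n) * y^n,  y = sin^2(w/2).
import numpy as np
from math import comb

def maxflat_response(w, K=8, L=8):
    y = np.sin(w / 2.0) ** 2
    poly = sum(comb(K - 1 + n, n) * y ** n for n in range(L))
    return (1.0 - y) ** K * poly

w = np.linspace(0.0, np.pi, 1001)
H = maxflat_response(w)            # real-valued response => zero phase
# Attenuation near w = pi is extremely deep for moderate K, with no ripple.
stop = H[w > 0.9 * np.pi]
print(f"H(0) = {H[0]:.6f}, max |H| in last 10% of band = {stop.max():.2e}")
```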

Statistical Back Trajectory Analysis for Estimation of CO2 Emission Source Regions (공기괴 역궤적 모델의 통계 분석을 통한 이산화탄소 배출 지역 추정)

  • Li, Shanlan;Park, Sunyoung;Park, Mi-Kyung;Jo, Chun Ok;Kim, Jae-Yeon;Kim, Ji-Yoon;Kim, Kyung-Ryul
    • Atmosphere, v.24 no.2, pp.245-251, 2014
  • Statistical trajectory analysis has been widely used to identify potential source regions of chemically and radiatively important species in the atmosphere. The most widely used method is the statistical source-receptor model developed by Stohl (1996), whose underlying principle is that elevated concentrations at an observation site are proportionally related both to the average concentration over a given grid cell that the observed air mass has passed over and to the residence time spent over that grid cell. Thus, the method computes a residence-time-weighted mean concentration for each grid cell by superimposing the back-trajectory domain on the grid matrix. The concentration assigned to a grid cell can then be used as a proxy for the potential source strength of the corresponding species. This technical note describes the statistical trajectory approach and introduces its application to estimating potential source regions of the $CO_2$ enhancements observed at the Korean Global Atmosphere Watch Observatory in Anmyeon-do. Back trajectories are calculated using the HYSPLIT 4 model based on wind fields provided by NCEP GDAS. The identified $CO_2$ potential source regions responsible for the pollution events observed at Anmyeon-do in 2010 were mainly the Beijing area and northern China, where the megacities Harbin, Shenyang, and Changchun are located. This is consistent with bottom-up emission information. In spite of the inherent uncertainties of this method in estimating sharp spatial gradients in the vicinity of emission hot spots, this study suggests that statistical trajectory analysis can be a useful tool for identifying anthropogenic potential source regions of major GHGs. (An illustrative weighting sketch follows this entry.)
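
The residence-time weighting described above can be written as C̄(i,j) = Σ_l c_l τ_l(i,j) / Σ_l τ_l(i,j), where c_l is the value observed at the receptor for trajectory l and τ_l(i,j) is that trajectory's residence time in grid cell (i, j). The sketch below is a generic implementation of this weighting over trajectory endpoints; it is not the configuration used with HYSPLIT or the Anmyeon-do data, and the grid, time step, and inputs are assumptions.

```python
# Hedged sketch: Stohl-style residence-time-weighted mean concentration
# per grid cell from back-trajectory endpoints. Inputs are synthetic.
import numpy as np

def weighted_source_field(trajectories, concentrations,
                          lon_edges, lat_edges, dt_hours=1.0):
    """trajectories: list of (lon, lat) endpoint arrays, one per trajectory.
    concentrations: observed value at the receptor for each trajectory.
    Returns the residence-time-weighted mean concentration per grid cell."""
    nlat, nlon = len(lat_edges) - 1, len(lon_edges) - 1
    weighted = np.zeros((nlat, nlon))
    residence = np.zeros((nlat, nlon))
    for (lon, lat), conc in zip(trajectories, concentrations):
        # Each endpoint contributes dt_hours of residence time to its cell.
        tau, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges])
        tau *= dt_hours
        weighted += conc * tau
        residence += tau
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(residence > 0.0, weighted / residence, np.nan)

# Tiny synthetic example: two trajectories with different receptor values.
lon_edges = np.arange(110.0, 131.0, 1.0)   # 1-degree grid over East Asia
lat_edges = np.arange(30.0, 51.0, 1.0)
traj_a = (np.linspace(116.0, 126.0, 72), np.linspace(40.0, 36.5, 72))
traj_b = (np.linspace(120.0, 126.0, 72), np.linspace(45.0, 36.5, 72))
field = weighted_source_field([traj_a, traj_b], [410.2, 395.7],
                              lon_edges, lat_edges)
print(np.nanmax(field), np.nanmin(field))
```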

Denoise of Synthetic and Earth Tidal Effect using Wavelet Transform (웨이브렛 변환을 응용한 합성자료 및 기조력 자료의 잡음 제거)

  • Im, Hyeong Rae;Jin, Hong Seong;Gwon, Byeong Du
    • Journal of the Korean Geophysical Society, v.2 no.2, pp.143-152, 1999
  • We have studied a denoising technique based on the wavelet transform for improving the quality of geophysical data during the preprocessing stage. To assess the effectiveness of this technique, we generated synthetic data contaminated by random noise and compared the denoising results with those obtained by conventional low-pass filtering. Low-pass filtering of a sinusoidal signal having a sharp discontinuity between the first and last sample values shows apparent errors related to the Gibbs phenomenon. For the case of a bump signal, low-pass filtering produces maximum errors at the peak values by removing some high-frequency components of the signal itself. The wavelet transform technique, however, denoises these signals with far fewer adverse effects, owing to the locality of wavelets and the easy discrimination of noise and signal in the wavelet domain. The field data of the gravity tide are denoised using a soft threshold, which shrinks all the wavelet coefficients toward the origin, and the G-factor is determined by comparing the denoised data with the theoretical data. (An illustrative thresholding sketch follows this entry.)

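A minimal sketch of soft-threshold wavelet denoising of a 1-D signal, under assumed choices (a Daubechies-4 wavelet and the universal threshold estimated from the finest-scale coefficients); it stands in for the general technique, not the exact processing applied to the gravity-tide records.

```python
# Hedged sketch: soft-threshold wavelet denoising of a noisy 1-D signal.
# Wavelet, level, and threshold rule are illustrative choices only.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise level from the finest detail coefficients (MAD),
    # then apply the universal threshold sigma * sqrt(2 ln N).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    # Soft thresholding shrinks every detail coefficient toward zero.
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 2048)
clean = np.sin(2.0 * np.pi * 2.0 * t)          # smooth tidal-like signal
noisy = clean + 0.2 * rng.standard_normal(t.size)
print(f"residual RMS after denoising: "
      f"{np.std(wavelet_denoise(noisy)[:t.size] - clean):.3f}")
```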

Adaptive Postprocessing Technique for Enhancement of DCT-coded Images (DCT 기반 압축 영상의 화질 개선을 위한 적응적 후처리 기법)

  • Kim, Jong-Ho;Park, Sang-Hyun;Kang, Eui-Sung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2011.10a, pp.930-933, 2011
  • This paper addresses an adaptive postprocessing method applied in the spatial domain to block-based discrete cosine transform (BDCT) coded images. The proposed algorithm is designed as a serial concatenation of a 1D simple smoothing filter and a 2D directional filter. The 1D smoothing filter is applied according to the block type, which is determined by an adaptive threshold that depends on local statistical properties and updates block types by a simple rule; this classification affects the performance of the deblocking process. In addition, the 2D directional filter is introduced to suppress ringing effects at sharp edges and block discontinuities while preserving true edges and textural information. Comprehensive experiments indicate that the proposed algorithm outperforms many deblocking methods in the literature in terms of PSNR and of subjective visual quality evaluated by GBIM. (An illustrative block-classification sketch follows this entry.)

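A generic illustration of the block-classification idea that such deblocking schemes rely on (not the authors' filters or threshold rule): each 8×8 block is labeled smooth or detailed by comparing its local activity with a threshold, and a simple smoothing is applied only across boundaries between smooth blocks, so true edges and textures are left untouched. The threshold and the averaging step are assumptions.

```python
# Hedged sketch: classify 8x8 blocks by local activity and smooth only
# the boundaries between smooth blocks. Threshold/taps are illustrative.
import numpy as np

BLOCK = 8

def classify_blocks(img, thresh=4.0):
    """True where the block's standard deviation is below `thresh` (smooth)."""
    h, w = img.shape
    blocks = img[:h - h % BLOCK, :w - w % BLOCK].reshape(
        h // BLOCK, BLOCK, w // BLOCK, BLOCK)
    return blocks.std(axis=(1, 3)) < thresh

def deblock_vertical(img, thresh=4.0):
    """Average the two pixels straddling each vertical block boundary
    when the blocks on both sides are classified as smooth."""
    out = img.astype(float).copy()
    smooth = classify_blocks(out, thresh)
    for bi in range(smooth.shape[0]):
        for bj in range(smooth.shape[1] - 1):
            if smooth[bi, bj] and smooth[bi, bj + 1]:
                r0, c = bi * BLOCK, (bj + 1) * BLOCK
                edge = 0.5 * (out[r0:r0 + BLOCK, c - 1] + out[r0:r0 + BLOCK, c])
                out[r0:r0 + BLOCK, c - 1] = out[r0:r0 + BLOCK, c] = edge
    return out

# Synthetic blocky image: two flat regions separated by a 6-level step.
img = np.full((32, 32), 100.0)
img[:, 16:] += 6.0
# The boundary step is halved by the averaging (prints 3.0).
print(np.abs(np.diff(deblock_vertical(img), axis=1)).max())
```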

Dislocation in Semi-infinite Half Plane Subject to Adhesive Complete Contact with Square Wedge: Part I - Derivation of Corrective Functions (직각 쐐기와 응착접촉 하는 반무한 평판 내 전위: 제1부 - 보정 함수 유도)

  • Kim, Hyung-Kyu
    • Tribology and Lubricants, v.38 no.3, pp.73-83, 2022
  • This paper is concerned with the analysis of a surface edge crack emanating from a sharp contact edge. For the geometrical model, a square wedge is in contact with a half plane of an identical material, and a surface-perpendicular crack initiated at the contact edge exists in the half plane. To analyze this crack problem, it is necessary to evaluate the stress field on the crack line induced by the contact tractions and by the pseudo-dislocations that simulate the crack, using the Bueckner principle. In this Part I, the stress field in the half plane due to the contact is re-summarized using an asymptotic analysis method published previously by the author. Further focus is given to the stress field in the half plane due to a pseudo-edge dislocation, characterized by its Burgers vector, which will later provide the stress solution due to a crack (i.e., a continuous distribution of edge dislocations). The essential result of the present work is the set of corrective functions that modify the stress field of an infinite domain so that it applies to the present domain, which has free surfaces and therefore is no longer infinite. Numerical methods and coordinate normalization developed for an edge crack problem are used, based on the Gauss-Jacobi integration formula, and the convergence of the corrective functions is investigated here. Features of the corrective functions and their application to a crack problem will be given in Part II. (An illustrative quadrature sketch follows this entry.)
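
Crack problems of this kind are commonly discretized with the Gauss-Jacobi integration formula, which integrates a smooth function against the weight (1 − s)^α (1 + s)^β that captures the singular behavior at the crack tips or the contact edge. The sketch below only demonstrates that quadrature rule with SciPy on an assumed, illustrative weight; it is not the paper's corrective-function computation.

```python
# Hedged sketch: Gauss-Jacobi quadrature for integrals with end-point
# weight (1-s)^alpha * (1+s)^beta, as used in edge-crack formulations.
# Exponents and integrand below are illustrative assumptions only.
import numpy as np
from scipy.special import roots_jacobi, gamma

alpha, beta = -0.5, 0.5        # square-root singular / bounded end behavior
nodes, weights = roots_jacobi(20, alpha, beta)

def gj_integral(f):
    """Approximate the integral of (1-s)^alpha (1+s)^beta f(s) over [-1, 1]."""
    return np.sum(weights * f(nodes))

# Check against a known value: for f = 1 the integral equals
# 2^(alpha+beta+1) * Gamma(alpha+1) * Gamma(beta+1) / Gamma(alpha+beta+2).
exact = (2.0 ** (alpha + beta + 1.0) * gamma(alpha + 1.0) * gamma(beta + 1.0)
         / gamma(alpha + beta + 2.0))
print(f"quadrature: {gj_integral(lambda s: np.ones_like(s)):.6f}, "
      f"exact: {exact:.6f}")
```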