• Title/Summary/Keyword: Noise robustness


Enhancing Robustness of Information Hiding Through Low-Density Parity-Check Codes

  • Yi, Yu;Lee, Moon-Ho;Kim, Ji-Hyun;Hwang, Gi-Yean
    • Journal of Broadcast Engineering
    • /
    • v.8 no.4
    • /
    • pp.437-451
    • /
    • 2003
  • With the rapid growth of Internet technologies and the wide availability of multimedia computing facilities, the enforcement of multimedia copyright protection has become an important issue. Digital watermarking is viewed as an effective way to deter content users from illegal distribution, and in recent years it has been studied intensively to achieve this goal. However, when the watermarked media is transmitted over channels modeled as additive white Gaussian noise (AWGN) channels, the watermark information is often corrupted by channel noise, producing a large number of errors. Consequently, many error-correcting codes have been applied in digital watermarking systems to protect the embedded message from noise, such as BCH codes, Reed-Solomon (RS) codes, and Turbo codes. Recently, low-density parity-check (LDPC) codes were demonstrated to be good error-correcting codes, achieving near-Shannon-limit performance and outperforming Turbo codes with low decoding complexity. In this paper, in order to mitigate the channel conditions and improve the quality of the watermark, we propose the application of LDPC codes to implement a fairly robust digital image watermarking system. The implemented watermarking system operates in the transform domain, where a subset of the discrete wavelet transform (DWT) coefficients is modified by the watermark, without using the original image during watermark extraction. The quality of the watermark is evaluated by taking into account the trade-off between the chip rate and the rate of the LDPC codes. Many simulation results are presented in this paper; they indicate that the quality of the watermark is improved greatly and that the proposed system based on LDPC codes is very robust to attacks.
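
A minimal sketch of the blind spread-spectrum DWT embedding this abstract describes, without the LDPC coding layer (in the paper, the bits would first pass through an LDPC encoder). All function names and parameters here are illustrative; a one-level Haar transform stands in for the paper's unspecified wavelet, and a single detail subband is marked.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_dwt2(img):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) subbands."""
    lo = (img[0::2, :] + img[1::2, :]) / 2.0
    hi = (img[0::2, :] - img[1::2, :]) / 2.0
    LL = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    LH = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    HL = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    HH = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def embed(subband, bits, pn, alpha):
    """Spread each bit over chip_rate coefficients of a detail subband."""
    out = subband.flatten().copy()
    chip = pn.shape[1]
    for i, b in enumerate(bits):
        out[i * chip:(i + 1) * chip] += alpha * (1.0 if b else -1.0) * pn[i]
    return out.reshape(subband.shape)

def extract(subband, pn):
    """Blind extraction: correlate against the PN chips; no original image."""
    flat = subband.flatten()
    chip = pn.shape[1]
    return [float(np.dot(flat[i * chip:(i + 1) * chip], pn[i])) > 0
            for i in range(pn.shape[0])]

# Smooth test "image": its Haar detail coefficients are near zero.
img = np.outer(np.linspace(0.0, 255.0, 128), np.ones(128))
_, _, _, HH = haar_dwt2(img)

bits = [bool(b) for b in rng.integers(0, 2, 16)]
chip_rate = 64
pn = rng.choice([-1.0, 1.0], size=(len(bits), chip_rate))

HH_marked = embed(HH, bits, pn, alpha=2.0)
HH_noisy = HH_marked + rng.normal(0.0, 1.0, HH_marked.shape)  # AWGN attack
recovered = extract(HH_noisy, pn)
```

The chip-rate/code-rate trade-off the abstract mentions is visible here: a larger `chip_rate` makes each bit's correlation more reliable against AWGN but leaves room for fewer payload bits.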

A New Calculation Method of Equalizer algorithms based on the Probability Correlation (확률분포 상관도에 기반한 Equalizer 알고리듬의 새로운 연산 방식)

  • Kim, Namyong
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.5
    • /
    • pp.3132-3138
    • /
    • 2014
  • In many communication systems, intersymbol interference, DC offset, and impulsive noise are hard-to-solve problems. To cancel such interference, the concept of lagged cross-correlation of probability has been used for blind equalization. However, this algorithm carries a large computational burden. In this paper, a recursive method for the algorithm based on the lagged probability correlation is proposed. The summation operation in the calculation of the gradient of the cost is transformed into a recursive gradient calculation. The recursive method is shown to reduce the high computational complexity of the algorithm from O(NM) to O(M), for M symbols and N block data, offering advantages in implementation while keeping the robustness against those interferences. Simulation results show that the proposed method yields the same learning performance with reduced computational complexity.
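
The O(NM)-to-O(M) reduction rests on a sliding-window recursion: instead of re-summing all N block samples for every new symbol, add the newest term and subtract the oldest. A minimal sketch of that trick with a plain sum standing in for the paper's gradient terms (the actual cost uses lagged probability correlations, not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)
stream = rng.normal(0.0, 1.0, 500)
N = 50  # block (window) length

# Direct method: re-sum all N samples for every new symbol -> O(N) each.
direct = [np.sum(stream[n - N:n]) for n in range(N, len(stream) + 1)]

# Recursive method: add the newest sample, subtract the oldest -> O(1) each.
recursive = []
s = np.sum(stream[:N])  # one initial block sum
recursive.append(s)
for n in range(N, len(stream)):
    s += stream[n] - stream[n - N]
    recursive.append(s)
```

Both methods produce identical outputs; only the per-symbol cost differs, which is where the claimed implementation advantage comes from.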

Overall damage identification of flag-shaped hysteresis systems under seismic excitation

  • Zhou, Cong;Chase, J. Geoffrey;Rodgers, Geoffrey W.;Xu, Chao;Tomlinson, Hamish
    • Smart Structures and Systems
    • /
    • v.16 no.1
    • /
    • pp.163-181
    • /
    • 2015
  • This research investigates the structural health monitoring of nonlinear structures after a major seismic event. It considers the identification of flag-shaped or pinched hysteresis behavior in the structural response as a more general case of a normal hysteresis curve without pinching. The method is based on the overall least squares method and the log likelihood ratio test. In particular, the structural response is divided into different loading and unloading sub-half cycles. The overall least squares analysis is first implemented to obtain the minimum residual mean square estimates of structural parameters for each sub-half cycle, with the number of segments assumed. The log likelihood ratio test is used to assess the likelihood of these nonlinear segments being true representations in the presence of noise and model error. The resulting regression coefficients for the identified segmented regression models are finally used to obtain stiffness, yielding deformation, and energy dissipation parameters. The performance of the method is illustrated using a single-degree-of-freedom system and a suite of 20 earthquake records. RMS noise of 5%, 10%, 15% and 20% is added to the response data to assess the robustness of the identification routine. The proposed method is computationally efficient and accurate, identifying the damage parameters to within 10% of the known values on average, even with 20% added noise. The method requires no user input and could thus be automated and performed in real time for each sub-half cycle, with results available effectively immediately after an event, as well as during an event if required.
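
The core step of such segmented regression can be sketched as fitting each segment of a loading branch by ordinary least squares to recover stiffness parameters from noisy force-displacement data. This is a simplified illustration, not the paper's method: the breakpoint is assumed known here, whereas the paper selects segments via the log likelihood ratio test, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bilinear loading branch: stiffness k1 up to yield deformation dy, k2 after.
k1, k2, dy = 100.0, 10.0, 1.0
x = np.linspace(0.0, 2.0, 400)
f = np.where(x < dy, k1 * x, k1 * dy + k2 * (x - dy))
f_meas = f + rng.normal(0.0, 1.0, x.shape)  # added measurement noise

def ls_fit(xs, ys):
    """Ordinary least squares line fit; returns (slope, intercept)."""
    A = np.column_stack([xs, np.ones_like(xs)])
    (slope, intercept), *_ = np.linalg.lstsq(A, ys, rcond=None)
    return slope, intercept

# Fit each segment separately; the slopes estimate the two stiffnesses.
k1_hat, _ = ls_fit(x[x < dy], f_meas[x < dy])
k2_hat, _ = ls_fit(x[x >= dy], f_meas[x >= dy])
```

Even with noise on every sample, the segment-wise least squares slopes land close to the true stiffnesses, which is the property the paper's accuracy claims rely on.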

A New Adaptive Kernel Estimation Method for Correntropy Equalizers (코렌트로피 이퀄라이져를 위한 새로운 커널 사이즈 적응 추정 방법)

  • Kim, Namyong
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.3
    • /
    • pp.627-632
    • /
    • 2021
  • ITL (information-theoretic learning) has been applied successfully to adaptive signal processing and machine learning applications, but there are difficulties in deciding the kernel size, which has a great impact on system performance. The correntropy algorithm, one of the ITL methods, has the superior properties of impulsive-noise robustness and channel-distortion compensation. On the other hand, it is also sensitive to the kernel size, which can lead to system instability. In this paper, considering that the kernel size appears cubed in the denominator of the slope of the cost function, a new adaptive kernel estimation method using the rate of change of the error power with respect to the kernel size variation is proposed for the correntropy algorithm. The performance of the proposed kernel-adjusted correntropy algorithm was examined in a distortion-compensation experiment with impulsive noise and a multipath-distorted channel. The proposed method shows a convergence speed twice as fast as the conventional algorithm with a fixed kernel size. In addition, the proposed algorithm converged appropriately for initial kernel sizes ranging from 2.0 to 6.0. Hence, the proposed method has a wide acceptable margin of initial kernel sizes.
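
A sketch of the fixed-kernel correntropy (MCC-style) update the paper builds on, showing where the kernel size enters; the paper's adaptive rule for the kernel size is not reproduced here. The single-tap setup, signal model, and all names are illustrative assumptions, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000
s = rng.choice([-1.0, 1.0], N)        # transmitted BPSK symbols
bg = rng.normal(0.0, 0.1, N)          # background Gaussian noise
hits = rng.random(N) < 0.02           # 2% impulsive-noise occurrences
imp = hits * rng.normal(0.0, 10.0, N)
x = s + bg + imp                      # received signal

mu, sigma = 0.05, 2.0                 # step size, kernel size
w = 0.0                               # single equalizer tap
for n in range(N):
    e = s[n] - w * x[n]
    # Gaussian kernel of correntropy: large (impulsive) errors receive an
    # exponentially small weight, so outliers barely move the tap.
    w += mu * np.exp(-e * e / (2.0 * sigma ** 2)) * e * x[n]
```

The `exp(-e²/2σ²)` factor is what gives correntropy its impulsive-noise robustness, and it is also why performance hinges on `sigma`: too small and all updates vanish, too large and the algorithm degenerates toward plain LMS.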

Unscented Kalman Filter with Multiple Sigma Points for Robust System Identification of Sudden Structural Damage (다중 분산점 칼만필터를 이용한 급격한 구조손상 탐지 기법 개발)

  • Se-Hyeok Lee;Sang-ri Yi;Jin Ho Lee
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.36 no.4
    • /
    • pp.233-242
    • /
    • 2023
  • The unscented Kalman filter (UKF), which is widely used to estimate the states of nonlinear dynamic systems, can be improved to realize robust system identification by using multiple sigma-point sets. When using Kalman filter methods for system identification, artificial noises must be appropriately selected to achieve optimal estimation performance. Additionally, an appropriate scaling factor for the sigma points must be selected to capture the nonlinearity of the state-space model. This study used the Bouc-Wen hysteresis model to examine the nonlinear behavior of a single-degree-of-freedom oscillator. On the basis of the effects of the selected artificial noises and scaling factor, a new UKF method using multiple sigma-point sets was devised for improved robustness of the estimation over various signal-to-noise ratios. The results demonstrate that the proposed method can accurately track nonlinear system states even when the measurement noise levels are high, while being robust to the selection of artificial noise levels.
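
The scaling factor the abstract refers to enters through the standard scaled sigma-point construction; the paper runs multiple such sets, while this sketch builds a single set and verifies that the unscented transform through an identity map recovers the mean and covariance exactly. Function and parameter names are illustrative.

```python
import numpy as np

def merwe_sigma_points(mean, cov, alpha=0.3, beta=2.0, kappa=0.0):
    """Scaled sigma points and weights for the unscented transform."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n   # alpha is the scaling factor
    L = np.linalg.cholesky((n + lam) * cov)  # matrix square root
    pts = np.vstack([mean] +
                    [mean + L[:, i] for i in range(n)] +
                    [mean - L[:, i] for i in range(n)])
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1.0 - alpha ** 2 + beta)
    return pts, Wm, Wc

mean = np.array([1.0, 2.0])
cov = np.array([[2.0, 0.5],
                [0.5, 1.0]])
pts, Wm, Wc = merwe_sigma_points(mean, cov)

# Unscented transform through the identity map recovers mean and covariance.
m_rec = Wm @ pts
diff = pts - m_rec
P_rec = (Wc[:, None] * diff).T @ diff
```

Changing `alpha` spreads the 2n+1 points closer to or farther from the mean; how well a single spread captures a strongly nonlinear map like Bouc-Wen hysteresis is exactly what motivates using multiple sets.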

The Robustness Wavelet Watermarking with Adaptive Weight MASK (적응 가중치 마스크 처리 기반 강인한 웨이브릿 워터마킹)

  • 정성록;김태효
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.4 no.2
    • /
    • pp.46-52
    • /
    • 2003
  • In this paper, a wavelet watermarking algorithm based on adaptive weight MASK processing is proposed as a watermark embedding method for the copyright protection of digital contents. Because the watermark acts as noise added to the original image, the watermark size should be limited to prevent quality loss when embedding the watermark into an image. Therefore, the best balance among robustness, capacity, and visual quality should be preserved. In order to solve this problem, we apply an adaptive weight MASK to the algorithm and optimize its efficiency. As a result, the robustness of the watermarked images against external attacks is improved. Specifically, the correlation coefficient remains above 0.8 under modifications of both brightness and contrast, and the correlation coefficient of the embedded watermark remains above 0.65 after wavelet compression.


Image Watermarking for Identification Forgery Prevention (신분증 위변조 방지를 위한 이미지 워터마킹)

  • Nah, Ji-Hah;Kim, Jong-Weon;Kim, Jae-Seok
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.12
    • /
    • pp.552-559
    • /
    • 2011
  • In this paper, a new image watermarking algorithm is proposed that can hide specific information about an ID card's owner in the photo image to prevent forgery of the ID photo. The proposed algorithm uses image segmentation and correlation peak position modulation of spread spectrum. The watermark embedded in the photo ensures not only robustness against printing and scanning but also sufficient information capacity to hide a unique number, such as a social security number, in a small-sized photo. Another advantage of the proposed method is that it extracts accurate information with error tolerance within some rotation range by using $2^h{\times}2^w$ unit sample spaces instead of $1{\times}1$ pixels for the insertion and extraction of information. 40 bits of information can be embedded into and extracted from a $256{\times}256$-sized ID photo with a BER of 0% when tested with a 300 dpi scanner and photo printer on 22 photos. In conclusion, the proposed algorithm is robust to noise and rotational errors occurring during printing and scanning.

Development of Adaptive Digital Image Watermarking Techniques (적응형 영상 워터마킹 알고리즘 개발)

  • Min, Jun-Yeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.4
    • /
    • pp.1112-1119
    • /
    • 1999
  • Digital watermarking embeds an imperceptible mark into image, video, audio, and text data to prevent illegal copying of multimedia data, arbitrary modification, and illegal sales of copies without the agreement of the copyright owner. In this research, the DCT (Discrete Cosine Transform) of the original image is computed, and these DCT coefficients are expanded by a Fourier series expansion algorithm. In order to embed an imperceptible and robust watermark, the Fourier coefficients (lower-frequency coefficients) are calculated using sine and cosine functions, which form a complete orthogonal basis, and the watermark is embedded into these coefficients. In the experiments, we show robustness with respect to image distortions such as JPEG compression, blurring, and added uniform noise. The correlation coefficients range from 0.5467 to 0.9507.
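
A simplified sketch of additive DCT-domain embedding with correlation-based (non-blind) detection, the general scheme this abstract falls under; the paper's Fourier-series expansion step is omitted, and all names and parameters are illustrative. The orthonormal DCT-II is built explicitly as a matrix so the example has no dependencies beyond NumPy.

```python
import numpy as np

rng = np.random.default_rng(2)

def dct_matrix(N):
    """Orthonormal DCT-II matrix: C @ x applies the 1D DCT to a vector."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)
    return C

N = 64
C = dct_matrix(N)
img = rng.uniform(0.0, 255.0, (N, N))
D = C @ img @ C.T                      # 2D DCT of the image

# Embed a +/-1 watermark additively into low-frequency coefficients
# (skipping the DC term), then invert the transform.
idx = [(i, j) for i in range(8) for j in range(8) if (i, j) != (0, 0)]
wm = rng.choice([-1.0, 1.0], len(idx))
alpha = 5.0
D_marked = D.copy()
for (i, j), w in zip(idx, wm):
    D_marked[i, j] += alpha * w
img_marked = C.T @ D_marked @ C        # inverse 2D DCT (orthonormal)

# Non-blind detection after a mild noise "attack": correlate the
# coefficient difference with the known watermark.
attacked = img_marked + rng.normal(0.0, 1.0, img.shape)
D_att = C @ attacked @ C.T
diff = np.array([D_att[i, j] - D[i, j] for (i, j) in idx])
corr = float(np.corrcoef(diff, wm)[0, 1])
```

The detection statistic is the same kind of correlation coefficient the abstract reports: it stays high when the distortion is mild and degrades as the attack strength grows relative to the embedding strength `alpha`.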


Evaluation of shape similarity for 3D models (3차원 모델을 위한 형상 유사성 평가)

  • Kim, Jeong-Sik;Choi, Soo-Mi
    • The KIPS Transactions:PartA
    • /
    • v.10A no.4
    • /
    • pp.357-368
    • /
    • 2003
  • Evaluation of shape similarity for 3D models is essential in many areas, including medicine, mechanical engineering, and molecular biology. Moreover, as 3D models are commonly used on the Web, much research has been done on the classification and retrieval of 3D models. In this paper, we describe methods for 3D shape representation and the major concepts of similarity evaluation, and we analyze the key features of recent research on shape comparison after classifying it into four categories: multi-resolution, topology, 2D image, and statistics based methods. In addition, we evaluate the performance of the reviewed methods by selected criteria such as uniqueness, robustness, invariance, multi-resolution, efficiency, and comparison scope. Multi-resolution based methods decrease the computation time for comparison but increase the preprocessing time. Methods using geometric and topological information can compare a wider variety of models and are robust to partial shape comparison. 2D image based methods incur overheads in time and space complexity. Statistics based methods allow shape comparison without pose normalization and show robustness against affine transformations and noise.

A Performance Evaluation of the CCA Adaptive Equalization Algorithm by Step Size (스텝 크기에 의한 CCA 적응 등화 알고리즘의 성능 평가)

  • Lim, Seung-Gag
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.3
    • /
    • pp.67-72
    • /
    • 2019
  • This paper evaluates the performance of the CCA (Compact Constellation Algorithm) adaptive equalization algorithm while varying the step size, for the minimization of the distortion effects of the communication channel. The CCA combines the conventional DDA and RCA algorithms; it uses the constant modulus of the transmitted signal and weights the output of the decision device by a compact slice weighting value, in order to improve the initial convergence characteristics and the equalization noise caused by misadjustment in the steady state. In this process, the compact slice weight values were fixed, and the performance of the CCA adaptive equalization algorithm was evaluated for three values of the adaptation step size. The computer simulation results show that a smaller step size gives a slower convergence speed but excellent performance in the steady state. In SER performance in particular, the small step size gives more robustness than large values.
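
The step-size trade-off evaluated here is a general property of stochastic-gradient equalizers: a large step converges fast but leaves more misadjustment noise in the steady state. A minimal illustration with a plain single-tap LMS identification standing in for the CCA (the CCA's slice-weighted update is not reproduced; names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20000
x = rng.choice([-1.0, 1.0], N)       # transmitted symbols
h = 0.8                              # channel gain to identify
d = h * x + rng.normal(0.0, 0.1, N)  # noisy desired signal

def lms_steady_state_mse(mu):
    """Run single-tap LMS and return the steady-state mean squared error."""
    w = 0.0
    err = np.empty(N)
    for n in range(N):
        e = d[n] - w * x[n]
        err[n] = e
        w += mu * e * x[n]
    return float(np.mean(err[-5000:] ** 2))

mse_small = lms_steady_state_mse(0.005)  # slow convergence, low misadjustment
mse_large = lms_steady_state_mse(0.5)    # fast convergence, noisier steady state
```

The small step size ends with a steady-state MSE closer to the noise floor, mirroring the paper's finding that the smallest of the three tested step sizes gives the best steady-state and SER performance.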