• Title/Summary/Keyword: 국부적 가중치 (local weighting)


Non-Local Means Denoising Method using Weighting Function based on Mixed norm (혼합 norm 기반의 가중치 함수를 이용한 평균 노이즈 제거 기법)

  • Kim, Dong-Young;Oh, Jong-Geun;Hong, Min-Cheol
    • Journal of IKEEE
    • /
    • v.20 no.2
    • /
    • pp.136-142
    • /
    • 2016
  • This paper presents a non-local means (NLM) denoising algorithm based on a new weighting function using a mixed norm. The fidelity of the difference between an anchor patch and a reference patch in NLM denoising depends on the noise level and local activity. This paper introduces a new weighting function based on a mixed norm whose order is determined by the noise level and the local activity of an anchor patch, so that the performance of NLM denoising can be enhanced. Experimental results demonstrate the objective and subjective capability of the proposed algorithm. In addition, it was verified that the proposed algorithm can be used to improve the performance of other $l_2$-norm-based non-local means denoising algorithms.
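The abstract above can be sketched as a patch-based NLM loop whose norm order varies per anchor patch. The rule below, which switches between an $l_1$ and an $l_2$ patch distance based on patch variance versus the noise level, is an illustrative assumption, not the paper's exact formula.

```python
import numpy as np

def nlm_mixed_norm(img, patch=3, search=7, h=10.0, sigma=5.0):
    """NLM denoising sketch with a mixed-norm weight: the norm order p for
    each anchor patch is chosen from its local activity (patch variance)
    relative to the noise level sigma (an assumed selection rule)."""
    pad = patch // 2
    off = search // 2
    H, W = img.shape
    padded = np.pad(img.astype(float), pad + off, mode="reflect")
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + off, j + pad + off
            anchor = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            # choose norm order from local activity vs. noise level (assumed rule)
            p = 1.0 if anchor.var() < sigma ** 2 else 2.0
            num, den = 0.0, 0.0
            for di in range(-off, off + 1):
                for dj in range(-off, off + 1):
                    ref = padded[ci + di - pad:ci + di + pad + 1,
                                 cj + dj - pad:cj + dj + pad + 1]
                    d = np.sum(np.abs(anchor - ref) ** p) / anchor.size
                    w = np.exp(-d / h ** 2)     # similarity weight from patch distance
                    num += w * padded[ci + di, cj + dj]
                    den += w
            out[i, j] = num / den
    return out
```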

The Edge Enhanced Error Diffusion Applying Edge Information Weights (에지 정보 가중치를 적용한 에지 강조 오차 확산 방법)

  • 곽내정;양운모;유창연;한재혁
    • The Journal of the Korea Contents Association
    • /
    • v.3 no.3
    • /
    • pp.11-18
    • /
    • 2003
  • Error diffusion is a procedure for generating high-quality bilevel images from continuous-tone images, but it blurs edge information. To solve this problem, we propose an improved method applying edge-enhancement weights based on local characteristics of the original image, taking edge information as the local characteristic. First, we produce an edge map by applying the 3$\times$3 Sobel operator to the original image and normalize it to the range 0 to 1. Edge-information weights are then computed using a sinusoidal function of the normalized edge values, and the edge-enhancement weights are obtained by multiplying the input pixels by the edge-information weights. The proposed method is compared with conventional methods by measuring edge correlation and the quality of images recovered from the halftoned results. It provides better quality than the conventional methods thanks to the enhanced edges, represents detailed edges efficiently, and improves edge representation overall.
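The pipeline described above can be sketched as follows; the sinusoidal weight mapping and the Floyd-Steinberg diffusion kernel are illustrative assumptions, since the abstract does not specify either exactly.

```python
import numpy as np

def edge_enhanced_error_diffusion(img, strength=0.5):
    """Sketch of edge-enhanced error diffusion: a 3x3 Sobel edge map,
    normalized to [0, 1] and passed through a sinusoidal weight, boosts
    each input pixel before Floyd-Steinberg halftoning (assumed kernel)."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # 3x3 Sobel gradients on the interior pixels
    gx[1:-1, 1:-1] = (img[:-2, 2:] + 2 * img[1:-1, 2:] + img[2:, 2:]
                      - img[:-2, :-2] - 2 * img[1:-1, :-2] - img[2:, :-2])
    gy[1:-1, 1:-1] = (img[2:, :-2] + 2 * img[2:, 1:-1] + img[2:, 2:]
                      - img[:-2, :-2] - 2 * img[:-2, 1:-1] - img[:-2, 2:])
    edge = np.hypot(gx, gy)
    edge /= edge.max() if edge.max() > 0 else 1.0    # normalize to [0, 1]
    w = np.sin(edge * np.pi / 2)                     # sinusoidal edge weight (assumed form)
    work = img * (1.0 + strength * w)                # edge-enhanced input
    out = np.zeros_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            old = work[y, x]
            new = 255.0 if old >= 128 else 0.0       # binarize
            out[y, x] = new
            err = old - new                          # diffuse quantization error
            if x + 1 < W:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < H:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < W:
                    work[y + 1, x + 1] += err * 1 / 16
    return out
```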


K-means Algorithm in outside weight region of convergence for initial iteration learning (초기 반복학습 시 수렴영역을 벗어난 가중치에 의한 K-means 알고리즘)

  • Park SoHee;Cho CheHwang
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.143-146
    • /
    • 2001
  • This paper proposes a K-means algorithm that generates an initial codebook by random initialization and then, during the initial training iterations, applies a weight of 2 or more lying outside the region of convergence. Exploiting the facts that the conventional K-means algorithm is only locally optimal and that the weight has a strong influence during the initial iterations, the proposed method designs the codebook by setting the weight to a large value outside the convergence region during the initial iterations and then fixing it to a value inside the convergence region for subsequent iterations. In addition, since the initial codebook is obtained by a random method without an additional procedure such as the splitting method, the proposed algorithm has a simple structure, and the resulting codebook was confirmed to perform well.
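The idea above can be sketched with an over-relaxed centroid update, c ← c + w·(mean − c), where w in (0, 2) is the usual convergence region. The schedule below (a weight above 2 for the first two iterations, then w = 1) is an illustrative assumption.

```python
import numpy as np

def weighted_kmeans(data, k, init_weight=2.5, iters=20, seed=0):
    """Sketch of K-means codebook design with an over-relaxed update.
    Early iterations use a weight outside the convergence region (0, 2)
    to escape poor local optima; later iterations use w = 1. The random
    initial codebook needs no splitting step."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), k, replace=False)].astype(float)
    labels = np.zeros(len(data), dtype=int)
    for it in range(iters):
        w = init_weight if it < 2 else 1.0   # large weight only in early iterations
        # assign each vector to its nearest codeword
        d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            members = data[labels == c]
            if len(members):
                # over-relaxed centroid update: c <- c + w * (mean - c)
                codebook[c] += w * (members.mean(axis=0) - codebook[c])
    return codebook, labels
```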


A Structural Approach to On-line Signature Verification (구조적 접근방식의 온라인 자동 서명 검증 기법)

  • Kim, Seong-Hoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.4 s.36
    • /
    • pp.385-396
    • /
    • 2005
  • In this paper, a new structural approach to on-line signature verification is presented. A primitive pattern is defined as a segment delimited by a local minimum of pen speed, and a structural description of a signature is composed of subpatterns, such as rotation, cusp, and bell shapes, obtained by composing the primitives according to their directional changes. As the matching method for finding identical parts between two signatures, a modified DP (dynamic programming) matching algorithm is presented. In addition, the variation and complexity of local parts are computed from training samples, from which a reference model and decision boundary are derived. Error rate, execution time, and memory usage are compared among the functional approach, the parametric approach, and the proposed structural approach. The average error rate can be reduced from 14.2% to 4.05% when the local parts of a signature are weighted and complexity is used as a factor of the decision threshold. Although the error rate is similar to that of functional approaches, the time consumption and memory usage of the proposed structural approach are shown to be very efficient.
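The DP matching step above can be sketched as a plain edit-distance alignment between two signatures represented as sequences of subpattern labels; the paper's modifications and local-part weighting are not reproduced here.

```python
def dp_match(seq_a, seq_b, sub_cost=1.0, gap_cost=1.0):
    """Sketch of DP matching between two subpattern-label sequences
    (e.g. 'rotation', 'cusp', 'bell'). Returns the minimal alignment
    cost; a plain edit distance, not the paper's modified DP."""
    n, m = len(seq_a), len(seq_b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * gap_cost               # delete all of seq_a's prefix
    for j in range(1, m + 1):
        d[0][j] = j * gap_cost               # insert all of seq_b's prefix
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if seq_a[i - 1] == seq_b[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j - 1] + cost,   # match / substitute
                          d[i - 1][j] + gap_cost,   # delete from seq_a
                          d[i][j - 1] + gap_cost)   # insert from seq_b
    return d[n][m]
```

A low cost indicates that the two signatures share the same sequence of local shapes, which is the basis for accepting them as identical parts.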


A Technique for On-line Automatic Signature Verification based on a Structural Representation (필기의 구조적 표현에 의한 온라인 자동 서명 검증 기법)

  • Kim, Seong-Hoon;Jang, Mun-Ik;Kim, Jai-Hie
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.11
    • /
    • pp.2884-2896
    • /
    • 1998
  • For on-line signature verification, the local shape of a signature is important information. Current approaches, in which signatures are represented as a function of time or a feature vector without regard to local shape, have not exploited the various features of local shapes, for example, the local variation of a signer, the local complexity of a signature, or the local difficulty for a forger. In this paper, we propose a new technique for on-line signature verification based on a structural signature representation, so as to analyze local shape and select important local parts in the matching process. That is, based on a structural representation of the signature, a technique of local-part weighting and a personalized decision threshold are newly introduced, and experimental results under different conditions are compared.


A New Hybrid Weight Pooling Method for Objective Image Quality Assessment with Luminance Adaptation Effect and Visual Saliency Effect (광적응 효과와 시각 집중 효과를 이용한 새로운 객관적 영상 화질 측정용 하이브리드 가중치 풀링 기법)

  • Shahab Uddin, A.F.M.;Kim, Donghyun;Choi, Jeung Won;Chung, TaeChoong;Bae, Sung-Ho
    • Journal of Broadcast Engineering
    • /
    • v.24 no.5
    • /
    • pp.827-835
    • /
    • 2019
  • In the pooling stage of a full-reference image quality assessment (FR-IQA) technique, the global perceived quality of a distorted image is usually measured from the quality of its local image patches. However, not all image patches contribute equally when estimating the overall visual quality, since the degree of degradation of each patch depends on various considerations, e.g., the type of patch, the type of distortion, the patch's distortion sensitivity, and its saliency score. As a result, weighted pooling strategies come into play, and different weighting mechanisms are used by existing FR-IQA methods. This paper performs a thorough analysis and proposes a novel weighting function that considers luminance adaptation as well as the visual saliency effect to provide more appropriate local weights, which can be adopted in existing FR-IQA frameworks to improve their prediction accuracy. Extensive experimental results show the effectiveness of the proposed weighting function.
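The weighted pooling stage described above can be sketched as a weighted average of per-patch quality scores. The specific hybrid combination below (a geometric blend of normalized saliency and luminance terms) is an illustrative assumption, not the paper's weighting function.

```python
import numpy as np

def weighted_quality_pooling(local_q, luminance, saliency, alpha=0.5):
    """Sketch of hybrid weighted pooling for FR-IQA: each patch's local
    quality score is pooled with a weight combining a luminance-adaptation
    term and a visual-saliency term. The blend w = s^alpha * l^(1-alpha)
    is an assumed form."""
    l = luminance / (luminance.max() + 1e-12)   # normalized luminance-adaptation weight
    s = saliency / (saliency.max() + 1e-12)     # normalized saliency weight
    w = (s ** alpha) * (l ** (1 - alpha))       # hybrid weight per patch
    return float((w * local_q).sum() / (w.sum() + 1e-12))
```

With uniform luminance and saliency the weights are equal and the pooled score reduces to the plain mean of the local scores, which is the unweighted baseline the paper improves on.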

Design and Performance Analysis of Adaptive First-Order Decimator Using Local Intelligibility (국부 가해성을 이용한 적응형 선형 축소기의 설계 및 성능 분석)

  • Kwak, No-Yoon
    • Journal of Digital Contents Society
    • /
    • v.9 no.1
    • /
    • pp.17-26
    • /
    • 2008
  • This paper proposes AFOD (Adaptive First-Order Decimator), which sets the value of each decimated element to the average of a neighbor intelligible component and the FOD (First-Order Decimator) output for the target pixel, and analyzes its performance in terms of subjective image quality and hardware complexity. In the proposed AFOD, the target pixel located at the center of a sliding window is selected first, and the gradient amplitudes of its right and lower neighbor pixels are calculated using a first-order derivative operator. Each gradient amplitude is then divided by the sum of the two amplitudes to generate a local intelligible weight. Next, the neighbor intelligible component is obtained by adding the value of the right neighbor pixel times its local intelligible weight to the value of the lower neighbor pixel times its weight. Since the proposed method adaptively reflects the intelligible information of the neighbor pixels in the decimated element according to the local intelligible weights, it can effectively suppress the blurring that is the main drawback of FOD, while keeping the merit of FOD: good results on average at low computational cost.
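The steps above can be sketched as a 2:1 decimation loop. Taking the FOD output as the 2x2 block mean is an assumption, since the abstract does not define the FOD explicitly.

```python
import numpy as np

def afod_decimate(img):
    """Sketch of AFOD 2:1 decimation. For each 2x2 block: the FOD value is
    the block average (assumed); first-order gradient amplitudes of the
    right and lower neighbors of the target pixel give two local
    intelligible weights; the decimated element is the mean of the
    weighted neighbor component and the FOD value."""
    H, W = img.shape
    img = img.astype(float)
    out = np.zeros((H // 2, W // 2))
    for i in range(0, H - 1, 2):
        for j in range(0, W - 1, 2):
            target, right, lower = img[i, j], img[i, j + 1], img[i + 1, j]
            fod = img[i:i + 2, j:j + 2].mean()           # FOD output (assumed: block mean)
            g_r = abs(right - target)                    # first-order gradient, right neighbor
            g_l = abs(lower - target)                    # first-order gradient, lower neighbor
            total = g_r + g_l
            if total == 0:                               # flat block: fall back to plain FOD
                out[i // 2, j // 2] = fod
                continue
            w_r, w_l = g_r / total, g_l / total          # local intelligible weights
            neighbor = w_r * right + w_l * lower         # neighbor intelligible component
            out[i // 2, j // 2] = (neighbor + fod) / 2   # decimated element
    return out
```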


Performance Analysis of Adaptive FOD Algorithm Using Neighbor Intelligible Components (인접 가해 성분을 이용한 적응적 선형 축소 알고리즘의 성능 분석)

  • Kwak, No-Yoon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2003.11a
    • /
    • pp.591-594
    • /
    • 2003
  • The purpose of this paper is to analyze the performance of a digital image decimation algorithm that adds adaptivity to FOD by computing each decimated element as the average of the FOD component of the center pixel and a neighbor intelligible component. In the proposed method, local intelligible weights computed from the gradient magnitudes of the right and lower neighbors of the center pixel are multiplied by the corresponding neighbor pixel values and summed to give the neighbor intelligible component; the decimated element is then the average of this component and the FOD component, and repeating this process over the whole image yields the decimated image. The proposed method keeps the merit of FOD, which provides good average results at low computational cost, while adaptively reflecting the effective intelligible components of the neighbor pixels in each decimated element according to the local intelligible weights; it thereby effectively suppresses the blurring that is the main drawback of FOD and offers improved information preservation. This paper analyzes and evaluates the proposed method against the conventional methods in terms of subjective quality and hardware complexity.


Saliency Detection Using Entropy Weight and Weber's Law (엔트로피 가중치와 웨버 법칙을 이용한 세일리언시 검출)

  • Lee, Ho Sang;Moon, Sang Whan;Eom, Il Kyu
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.1
    • /
    • pp.88-95
    • /
    • 2017
  • In this paper, we present a saliency detection method using entropy weights and Weber contrast in the wavelet transform domain. Our method builds on commonly exploited conventional algorithms composed of a local bottom-up approach and a global top-down approach. First, we perform a multi-level wavelet transform of the CIE Lab color image and obtain global saliency by adding the local Weber contrasts to the corresponding low-frequency wavelet coefficients. Next, local saliency is obtained by applying a Gaussian filter weighted by the entropy of the wavelet high-frequency subbands. The final saliency map is detected by non-linearly combining the local and global saliencies. To evaluate the proposed saliency detection method, we perform computer simulations on two image databases. Simulation results show that the proposed method outperforms the conventional algorithms.
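The two ingredients named above, Weber contrast and entropy weighting, can be illustrated in a simplified spatial-domain form. The paper computes them in the wavelet domain and fuses local and global maps non-linearly; the per-block combination below is only an illustrative simplification.

```python
import numpy as np

def weber_entropy_saliency(img, block=8):
    """Simplified sketch: per block, a Weber-contrast map
    |I - I_bg| / I_bg against the block-mean background, scaled by the
    block's intensity entropy used as a weight. Spatial-domain stand-in
    for the paper's wavelet-domain computation."""
    img = img.astype(float)
    H, W = img.shape
    sal = np.zeros_like(img)
    for i in range(0, H, block):
        for j in range(0, W, block):
            patch = img[i:i + block, j:j + block]
            bg = patch.mean() + 1e-12
            weber = np.abs(patch - bg) / bg                   # Weber contrast within block
            hist, _ = np.histogram(patch, bins=16, range=(0, 256))
            p = hist / max(hist.sum(), 1)
            entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # block entropy weight
            sal[i:i + block, j:j + block] = entropy * weber
    m = sal.max()
    return sal / m if m > 0 else sal                          # normalize to [0, 1]
```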

A Study on Multiple Filter for Mixed Noise Removal (복합잡음 제거를 위한 다중 필터에 관한 연구)

  • Kwon, Se-Ik;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.11
    • /
    • pp.2029-2036
    • /
    • 2017
  • Currently, the demand for multimedia services is increasing with the rapid development of the digital age. Image data is corrupted by various noises; typical ones are AWGN, salt-and-pepper noise, and the mixed noise in which these two are combined. Therefore, in this paper, noise is processed by classifying it as AWGN or salt-and-pepper noise through a noise judgment step. For AWGN, the outputs of a spatially weighted filter and a pixel-change weighted filter are combined, with the composite weights applied differently according to the standard deviation of the local mask. For salt-and-pepper noise, cubic spline interpolation and a local histogram weighted filter are combined. This study thus proposes a multiple image restoration filter algorithm that applies different composite weights according to the salt-and-pepper noise density of the local mask.
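The two-branch structure described above can be sketched as follows. The judgment rule (treating exact 0/255 values as salt-and-pepper) and both branch filters are illustrative assumptions standing in for the paper's spatial/pixel-change weighted filters and spline interpolation.

```python
import numpy as np

def mixed_noise_filter(img, mask=3):
    """Sketch of a two-branch mixed-noise filter: pixels judged to be
    salt-and-pepper (extreme values 0 or 255) are restored from the
    median of non-extreme neighbors; other pixels are treated as
    AWGN-corrupted and blended toward the local-mask mean with a weight
    that follows the local standard deviation (assumed rule)."""
    img = img.astype(float)
    pad = mask // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            win = padded[i:i + mask, j:j + mask]   # local mask around (i, j)
            center = img[i, j]
            if center in (0.0, 255.0):             # judged salt-and-pepper
                valid = win[(win > 0.0) & (win < 255.0)]
                out[i, j] = np.median(valid) if valid.size else center
            else:                                  # judged AWGN
                sd = win.std()
                alpha = sd / (sd + 10.0)           # composite weight from local std (assumed)
                out[i, j] = (1 - alpha) * center + alpha * win.mean()
    return out
```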