• Title/Summary/Keyword: Distortion Outliers

A Robust Vector Quantization Method against Distortion Outlier and Source Mismatch (이상 신호왜곡과 소스 불일치에 강인한 벡터 양자화 방법)

  • Noh, Myung-Hoon;Kim, Moo-Young
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.3 / pp.74-80 / 2012
  • In resolution-constrained quantization, the size of a Voronoi cell varies with the probability density function of the input data, which causes a large number of distortion outliers. We propose a vector quantization method that reduces distortion outliers by combining the generalized Lloyd algorithm (GLA) and the cell-size constrained vector quantization (CCVQ) scheme. The training data are divided into inside and outside regions according to the size of the Voronoi cell, and CCVQ and GLA are then applied to the respective regions. As CCVQ is applied to the densely populated region of the source instead of GLA, the number of centroids for the outside region can be increased so that distortion outliers are decreased. In real-world environments, source mismatch between training and test data is inevitable. For the source-mismatch case, the proposed algorithm improves performance in terms of both average distortion and distortion outliers.
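
As background for the GLA half of this hybrid, the generalized Lloyd algorithm alternates nearest-centroid assignment and centroid re-estimation until the codebook settles. A minimal NumPy sketch (the function name and parameters are illustrative, not the authors' code):

```python
import numpy as np

def gla(train, n_centroids, iters=50, seed=0):
    """Generalized Lloyd algorithm: alternate nearest-centroid
    assignment and centroid re-estimation (k-means style)."""
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), n_centroids, replace=False)].copy()
    for _ in range(iters):
        # Assign each training vector to its nearest centroid (its Voronoi cell).
        dists = np.linalg.norm(train[:, None, :] - codebook[None, :, :], axis=2)
        cells = dists.argmin(axis=1)
        # Move each centroid to the mean of its cell.
        for c in range(n_centroids):
            members = train[cells == c]
            if len(members) > 0:
                codebook[c] = members.mean(axis=0)
    return codebook

# Example: train a 16-entry codebook on 2-D Gaussian data.
data = np.random.default_rng(1).normal(size=(1000, 2))
cb = gla(data, 16)
```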

Soft-Decision Based Quantization of the Multimedia Signal Considering the Outliers in Rate-Allocation and Distortion (이상 비트율 할당과 신호왜곡 문제점을 고려한 멀티미디어 신호의 연판정 양자화 방법)

  • Lim, Jong-Wook;Noh, Myung-Hoon;Kim, Moo-Young
    • The Journal of the Acoustical Society of Korea / v.29 no.4 / pp.286-293 / 2010
  • There are two major conventional quantization algorithms: resolution-constrained quantization (RCQ) and entropy-constrained quantization (ECQ). Although RCQ works well at a fixed transmission rate, it produces distortion outliers since the cell sizes differ. Compared with RCQ, ECQ constrains the cell size but produces rate outliers. We propose cell-size constrained vector quantization (CCVQ), which improves on the generalized Lloyd algorithm (GLA). The CCVQ algorithm makes a soft decision between RCQ and ECQ by using a flexible penalty measure according to the cell size. Although the proposed method slightly increases the overall mean distortion, it reduces distortion outliers.
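
The abstract does not give the penalty measure itself; purely as an illustration of a soft decision between RCQ and ECQ, an encoder cost might add a cell-size-dependent term to the squared error, with a weight lam sliding between the two regimes (the function name and the penalty form below are assumptions, not the paper's definition):

```python
import numpy as np

def soft_decision_encode(x, codebook, cell_sizes, lam=0.5):
    """Hypothetical soft-decision encoder cost: squared error plus a
    penalty that grows with Voronoi cell size, discouraging the
    oversized cells that produce distortion outliers. lam = 0 reduces
    to a plain nearest-neighbour (RCQ-style) decision; larger lam
    weighs the cell-size constraint more heavily, toward ECQ-like
    behaviour. The actual CCVQ penalty in the paper may differ."""
    err = np.sum((codebook - x) ** 2, axis=1)
    penalty = lam * np.log(np.asarray(cell_sizes, dtype=float))
    return int(np.argmin(err + penalty))
```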

Local stereo matching using combined matching cost and adaptive cost aggregation

  • Zhu, Shiping;Li, Zheng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.1 / pp.224-241 / 2015
  • Multiview plus depth (MVD) videos are widely used in free-viewpoint TV systems. The best-known technique for determining depth information is based on stereo vision. In this paper, we propose a novel local stereo matching algorithm that is radiometrically invariant. The key idea is to use a combined matching cost of intensity- and gradient-based similarity measures. In addition, we realize an adaptive cost aggregation scheme by constructing an adaptive support window for each pixel, which addresses the boundary and low-texture problems. In the disparity refinement process, we propose a four-step post-processing technique to handle outliers and occlusions. Moreover, we conduct stereo reconstruction tests to verify the performance of the algorithm more intuitively. Experimental results show that the proposed method is effective and robust against local radiometric distortion. It has an average error of 5.93% on the Middlebury benchmark and is comparable to state-of-the-art local methods.
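
To make the combined cost concrete, one common form blends truncated absolute intensity and gradient differences; the sketch below assumes that form, with alpha and the truncation thresholds chosen arbitrarily rather than taken from the paper:

```python
def combined_cost(I_l, I_r, Gx_l, Gx_r, y, x, d, alpha=0.1, t_i=30.0, t_g=10.0):
    """Illustrative per-pixel matching cost: a weighted blend of
    truncated absolute intensity and horizontal-gradient differences
    between a left pixel and its disparity-d candidate on the right.
    Truncation bounds the influence of radiometric distortion; alpha
    balances the two terms. Weights and thresholds are assumptions,
    not the paper's values."""
    ci = min(abs(float(I_l[y, x]) - float(I_r[y, x - d])), t_i)
    cg = min(abs(float(Gx_l[y, x]) - float(Gx_r[y, x - d])), t_g)
    return alpha * ci + (1.0 - alpha) * cg
```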

Improved Lexicon-driven based Chord Symbol Recognition in Musical Images

  • Dinh, Cong Minh;Do, Luu Ngoc;Yang, Hyung-Jeong;Kim, Soo-Hyung;Lee, Guee-Sang
    • International Journal of Contents / v.12 no.4 / pp.53-61 / 2016
  • Although extensively developed, optical music recognition systems have mostly focused on musical symbols (notes, rests, etc.) while disregarding chord symbols. The process becomes difficult when the images are distorted or slurred, although this can be resolved using optical character recognition systems. Moreover, the appearance of outliers (lyrics, dynamics, etc.) increases the complexity of chord recognition. Therefore, we propose a new approach addressing these issues. After binarization, distortion correction, and stave and lyric removal in a musical image, a rule-based method is applied to detect the potential regions of chord symbols. Next, a lexicon-driven approach is used to optimally and simultaneously separate and recognize characters. The score returned from the recognition process is used to detect the outliers. The effectiveness of our system is demonstrated by the high accuracy achieved on two datasets with a variety of resolutions.
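
The lexicon-driven step can be pictured as scoring each candidate region's characters against a chord lexicon and rejecting low scorers as outliers. The sketch below shows only that scoring-and-rejection idea; the lexicon and threshold are invented, and the paper's joint separation and recognition is more involved:

```python
from difflib import SequenceMatcher

# A tiny illustrative chord lexicon; a real system would enumerate
# roots, qualities, and extensions.
CHORD_LEXICON = ["C", "Cm", "C7", "Cmaj7", "D", "Dm7", "F", "G", "G7", "Am"]

def recognize_chord(raw_text, threshold=0.6):
    """Score the raw character string against every lexicon entry and
    keep the best match; regions whose best score falls below the
    threshold are rejected as outliers (lyrics, dynamics, ...)."""
    best = max(CHORD_LEXICON,
               key=lambda c: SequenceMatcher(None, raw_text, c).ratio())
    score = SequenceMatcher(None, raw_text, best).ratio()
    return (best, score) if score >= threshold else (None, score)

print(recognize_chord("Cmaj7"))  # ('Cmaj7', 1.0)
print(recognize_chord("hello"))  # (None, low score) -> treated as an outlier
```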

A Stereo Matching Algorithm with Projective Distortion of Variable Windows (가변 윈도우의 투영왜곡을 고려한 스테레오 정합 알고리듬)

  • Kim, Gyeong-Beom;Jeong, Seong-Jong
    • Transactions of the Korean Society of Mechanical Engineers A / v.25 no.3 / pp.461-469 / 2001
  • Existing area-based stereo algorithms rely heavily on rectangular windows for computing correspondence. While algorithms with rectangular windows are efficient, they generate relatively large matching errors due to variations in disparity profiles near depth discontinuities, and they do not take into account local deformations of the windows due to projective distortion. In this paper, to deal with these problems, a new correlation function with four directional line masks, based on a robust estimator, is proposed for the selection of potential matching points. These points are selected so as to respect depth discontinuities and reduce the effect of outliers. The proposed matching method finds an arbitrarily shaped variable window around a pixel in the 3D array constructed from the selected matching points. In addition, the method takes into account the local deformation of the variable window under a constant disparity and performs sub-pixel disparity estimation. Experiments with various synthetic images show that the proposed technique significantly reduces matching errors both in the vicinity of depth discontinuities and in continuously smooth areas, and that it is not drastically affected by outliers and noise.
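
One plausible reading of the four directional line masks with a robust estimator is sketched below: each mask accumulates a bounded loss along its direction and the best-fitting direction is kept, limiting the influence of outliers near discontinuities. The estimator form and mask length here are assumptions, not the paper's definitions:

```python
import numpy as np

def robust_line_cost(I_l, I_r, y, x, d, half=3, sigma=10.0):
    """Illustrative robust correlation over four directional line
    masks (horizontal, vertical, and the two diagonals). Each pixel
    difference passes through a bounded (Welsch-style) loss so
    outliers contribute at most 1; the minimum over directions lets
    the mask that avoids a depth discontinuity dominate."""
    directions = [(1, 0), (0, 1), (1, 1), (1, -1)]
    costs = []
    for dx, dy in directions:
        c = 0.0
        for t in range(-half, half + 1):
            u, v = x + t * dx, y + t * dy
            diff = float(I_l[v, u]) - float(I_r[v, u - d])
            c += 1.0 - np.exp(-diff * diff / (2.0 * sigma * sigma))  # bounded loss
        costs.append(c)
    return min(costs)
```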

Locating and Searching Hidden Messages in Stego-Images (스테고 이미지에서 은닉메시지 감지기법)

  • Ji, Seon-Su
    • Journal of Korea Society of Industrial Information Systems / v.14 no.3 / pp.37-43 / 2009
  • Steganography conceals the fact that a hidden message is being sent over the internet. Steganalysis detects the abrupt changes that embedding introduces into the statistics of stego-data. After message embedding, I analyzed the statistical significance of the occurrence of differences among the four neighboring pixels. When the message embedded in an image is small, the EC value and a chi-square test are used to determine whether the distribution in the image matches a distribution that shows distortion from stego-data.
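
For background, the classic chi-square attack on LSB embedding tests whether the two values of each pair (2k, 2k+1) occur with near-equal frequency, which full embedding induces. The sketch below shows that standard test only; it does not reproduce the paper's EC measure over four-neighbour differences:

```python
import numpy as np
from scipy.stats import chi2

def chi_square_lsb_test(pixels):
    """Chi-square test on LSB pairs of values: compare the observed
    counts of the even member of each pair against the pair mean,
    which is the expected count under full LSB embedding."""
    hist = np.bincount(np.asarray(pixels).ravel(), minlength=256)
    even = hist[0::2].astype(float)                # counts of value 2k
    expected = (hist[0::2] + hist[1::2]) / 2.0     # pair means
    mask = expected > 0
    stat = np.sum((even[mask] - expected[mask]) ** 2 / expected[mask])
    p = 1.0 - chi2.cdf(stat, df=int(mask.sum()) - 1)
    return p  # p close to 1 suggests an embedded message
```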

Efficient Link Aggregation in Delay-Bandwidth Sensitive Networks (지연과 대역폭이 민감한 망에서의 효율적인 링크 집단화 방법)

  • Kwon, So-Ra;Jeon, Chang-Ho
    • Journal of Internet Computing and Services / v.12 no.5 / pp.11-19 / 2011
  • In this paper, a Service Boundary Line approximation method is proposed to improve the accuracy of aggregated link-state information for source routing in transport networks that perform hierarchical QoS routing. The proposed method is especially useful for aggregating links that have both delay and bandwidth as their QoS parameters. The method selects the main path weights in the network and transports them to external networks together with the aggregation information, reducing the information distortion caused by the loss of some path weights during the aggregation process. In this paper, a main path weight is called an outlier. The Service Boundary Line has 2k+5 parameters, where k is the number of outliers, so its storage requirement changes with the number of outliers. Simulation results show that our approximation method requires 1.5-2 times more storage space than other known techniques, depending on the outlier selection method, but the proposed method achieves higher information accuracy relative to the storage space used.
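
The outlier-plus-summary idea can be pictured roughly as follows. This is a hypothetical sketch, not the paper's Service Boundary Line construction; it is shaped only so that the stored values echo the 2k+5 parameter count:

```python
import numpy as np

def summarize_with_outliers(delays, bandwidths, k=2):
    """Fit a line to the (delay, bandwidth) pairs of a domain's
    border-to-border paths, keep the k points with the largest
    residuals verbatim as outliers, and summarize the rest by the
    line and its delay extent: 2k outlier coordinates plus 5 summary
    values."""
    d = np.asarray(delays, dtype=float)
    b = np.asarray(bandwidths, dtype=float)
    slope, intercept = np.polyfit(d, b, 1)
    residuals = np.abs(b - (slope * d + intercept))
    out_idx = np.argsort(residuals)[-k:]
    keep = np.ones(len(d), dtype=bool)
    keep[out_idx] = False
    outliers = list(zip(d[out_idx], b[out_idx]))  # 2k values kept verbatim
    summary = (slope, intercept, d[keep].min(), d[keep].max(), int(keep.sum()))
    return summary, outliers
```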

Empirical Modeling for Cache Miss Rates in Multiprocessors (다중 프로세서에서의 캐시접근 실패율을 위한 경험적 모델링)

  • Lee, Kang-Woo;Yang, Gi-Joo;Park, Choon-Shik
    • Journal of KIISE:Computer Systems and Theory / v.33 no.1_2 / pp.15-34 / 2006
  • This paper introduces an empirical modeling technique. The technique uses a set of sample results collected from a few small-scale simulations, and empirical models are developed by applying a couple of statistical estimation techniques to these samples. We built two types of models for cache miss rates in Symmetric Multiprocessor systems: one for changes in input data set size while the specification of the target system is fixed, and the other for changes in the number of processors while the input data set size is fixed. To develop accurate models, we built an individual model for every kind of cache miss for each shared data structure in a program; the final model is then obtained by integrating them. Moreover, the combined use of least mean squares and robust estimation enhances the quality of the models by minimizing the distortion due to outliers. The empirical modeling technique produces extremely accurate models without analysis of the sample data. In addition, since only small-scale simulations are necessary, the empirical method can be adopted in any research area once a set of samples has been collected. In 17 of 24 trials, the empirical models show extremely low prediction errors, below 1%. In the remaining cases the accuracy is excellent as well, and the models sustain high quality even when the behavioral characteristics of the programs are irregular and the number of samples is barely sufficient.
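
The least-squares-plus-robust-estimation combination can be illustrated by an ordinary least-squares fit refined with iteratively reweighted Huber estimation, so outlying samples are downweighted instead of distorting the model. The paper's exact estimators are not specified in the abstract, so this is one standard instance of the idea:

```python
import numpy as np

def robust_fit(x, y, delta=1.0, iters=20):
    """Least-squares line fit refined by iteratively reweighted
    Huber estimation: samples with large standardized residuals
    receive weights below 1 and lose influence on the fit."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(x)), x])        # intercept + slope design
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # plain least-squares start
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12    # robust (MAD) scale estimate
        a = np.abs(r / s)
        w = np.where(a <= delta, 1.0, delta / a)     # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta  # (intercept, slope)
```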

Analysis of the Optimal Window Size of Hampel Filter for Calibration of Real-time Water Level in Agricultural Reservoirs (농업용저수지의 실시간 수위 보정을 위한 Hampel Filter의 최적 Window Size 분석)

  • Joo, Dong-Hyuk;Na, Ra;Kim, Ha-Young;Choi, Gyu-Hoon;Kwon, Jae-Hwan;Yoo, Seung-Hwan
    • Journal of The Korean Society of Agricultural Engineers / v.64 no.3 / pp.9-24 / 2022
  • Currently, a vast amount of hydrologic data is accumulated in real time through automatic water-level instruments in agricultural reservoirs; at the same time, false and missing data points are also increasing. The applicability and reliability of quality control for hydrological data must be secured for efficient agricultural water management through calculation of water supply and for disaster management. Considering the irregularities in hydrological data caused by irrigation water usage and rainfall patterns, the Korea Rural Community Corporation currently applies the Hampel filter as its water-level data quality management method. This method uses window size as a key parameter: if the window size is too large, the data may be distorted, and if it is too small, many outliers are not removed, which reduces the reliability of the corrected data. Thus, selection of the optimal window size for each individual reservoir is required. To ensure reliability, we compared and analyzed the RMSE (Root Mean Square Error) and NSE (Nash-Sutcliffe model efficiency coefficient) of the corrected data against the daily water level of the RIMS (Rural Infrastructure Management System) data and the automatic outlier detection standards used by the Ministry of Environment. To select the optimal window size, we used the classification performance indices of the error matrix together with the rainfall data of the irrigation period, finding the optimal value at 3 h. The efficient automatic reservoir calibration technique can reduce the manpower and time required for manual calibration and is expected to improve the reliability of water-level data and the value of water resources.
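
A minimal sketch of a Hampel filter follows, where half_window plays the role of the window-size parameter tuned in the paper (function and parameter names are illustrative, not the authors' implementation):

```python
import numpy as np

def hampel(x, half_window=3, n_sigmas=3.0):
    """Hampel filter: flag a sample as an outlier when it lies more
    than n_sigmas robust standard deviations (MAD-based) from the
    median of its window, and replace it with that median."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    k = 1.4826  # scales the MAD to a consistent sigma estimate
    for i in range(len(x)):
        lo = max(0, i - half_window)
        hi = min(len(x), i + half_window + 1)
        med = np.median(x[lo:hi])
        mad = k * np.median(np.abs(x[lo:hi] - med))
        if mad > 0 and abs(x[i] - med) > n_sigmas * mad:
            y[i] = med
    return y

# e.g. hourly water levels with one spurious spike
levels = [2.10, 2.11, 2.12, 9.99, 2.13, 2.14, 2.15]
print(hampel(levels, half_window=3))  # the 9.99 spike is replaced by the local median
```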