• Title/Summary/Keyword: 벡터 근사 (vector approximation)


On Approximating the Euclidean Distance for Dimensionality Reduction (차원 축소를 위한 유클리드 거리의 근사 방안)

  • Jeon Seungdo;Kim Sang-Wook;Kim KiDong;Choi Byung-Uk
    • Proceedings of the Korean Information Science Society Conference / 2005.11b / pp.67-69 / 2005
  • Computing the Euclidean distance between vectors in high-dimensional space quickly is very important for multimedia information retrieval. This paper proposes a method for effectively approximating the Euclidean distance between two high-dimensional vectors. A method that uses the norms of the two vectors has previously been proposed for this approximation. However, the existing method ignores the angle component between the two vectors, so its approximation error can become very large. The proposed method instead uses an angle component between the two vectors, estimated with the help of a separate vector called the reference vector, in approximating the Euclidean distance. As a result, the approximation error is greatly reduced compared with the existing method that ignores the angle component. In addition, we prove theoretically that the value approximated by the proposed method is always smaller than the Euclidean distance, which means that no false dismissals occur when multimedia information retrieval is performed with the proposed method. Performance evaluation with various experiments demonstrates the superiority of the proposed method. (An illustrative sketch follows this entry.)

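The norm-and-angle approximation described in the entry above can be sketched with the law of cosines. In the Python sketch below, the angle between two vectors is underestimated from their angles to a shared reference vector; this is one plausible reading of the abstract, not the paper's exact estimator, and the random reference vector stands in for whatever reference the authors use.

```python
import numpy as np

def angle(u, v):
    """Angle in radians between vectors u and v."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

def approx_dist(x, y, r):
    """Lower-bound approximation of ||x - y|| from the two norms and a
    reference vector r.  Angular distance satisfies the triangle
    inequality, so the true angle between x and y is at least
    |angle(x, r) - angle(y, r)|; plugging that difference into the
    law of cosines can only shrink the computed distance."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    theta = abs(angle(x, r) - angle(y, r))   # underestimates angle(x, y)
    return np.sqrt(max(nx**2 + ny**2 - 2 * nx * ny * np.cos(theta), 0.0))

rng = np.random.default_rng(0)
x, y, r = rng.normal(size=(3, 64))           # two data vectors, one reference
print(approx_dist(x, y, r), np.linalg.norm(x - y))   # approximation <= exact
```

Because the approximation never exceeds the true distance, using it as a filter cannot reject a true answer, which matches the no-false-dismissal claim in the abstract.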

Vector Approximation Bitmap Indexing Method for High Dimensional Multimedia Database (고차원 멀티미디어 데이터 검색을 위한 벡터 근사 비트맵 색인 방법)

  • Park Joo-Hyoun;Son Dea-On;Nang Jong-Ho;Joo Bok-Gyu
    • The KIPS Transactions: Part D / v.13D no.4 s.107 / pp.455-462 / 2006
  • Recently, filtering approaches based on vector approximation, such as the VA-file[1] and the LPC-file[2], have been proposed to support similarity search in high-dimensional data spaces. These approaches filter out many irrelevant vectors by computing approximate distances from a query vector using compact approximations of the vectors in the database. The total elapsed time for similarity search is reduced because reading the compact approximations instead of the original vectors eliminates much of the disk I/O time. However, the search time of the VA-file and the LPC-file is not much lower than that of a brute-force search, because computing the approximate distances itself requires many operations. This paper proposes a new bitmap index structure that minimizes this computation time. To speed it up, each attribute value of an object is stored as a bit pattern that encodes the spatial position of the feature vector in the data space, and the distance between objects is computed with XOR bit operations, which are much faster than real-valued vector arithmetic. In the experiments, although the proposed method spends more time reading data than the existing vector approximation approaches, it shortens the total search time to about one fourth of sequential search, and to at most half that of the existing methods, by greatly reducing the computation time. Consequently, when the storage is fast enough, the search performance of existing vector approximation methods can be improved further by reducing the computation time of their filtering step. (A sketch of XOR-based filtering follows this entry.)
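A minimal sketch of XOR-based candidate filtering in the spirit of the abstract above. The one-bit-per-dimension signature (1 if the coordinate exceeds that dimension's median) and the Hamming-distance cutoff are illustrative assumptions; unlike the VA-file's bounding filter, this simple signature can miss true neighbors, so it only demonstrates why XOR comparisons are cheap.

```python
import numpy as np

def signatures(data, thresholds):
    """One bit per dimension: 1 iff the coordinate exceeds the threshold.
    Bits are packed into a single integer per vector so that one XOR
    compares an entire vector."""
    bits = (data > thresholds).astype(np.uint64)
    weights = np.left_shift(np.uint64(1), np.arange(data.shape[1], dtype=np.uint64))
    return bits @ weights

def hamming(a, b):
    return bin(int(a) ^ int(b)).count("1")   # XOR, then popcount

rng = np.random.default_rng(1)
data = rng.normal(size=(10_000, 32))
thresholds = np.median(data, axis=0)
sigs = signatures(data, thresholds)

query = rng.normal(size=32)
qsig = signatures(query[None, :], thresholds)[0]

# Cheap XOR filter first; exact distances only for the survivors.
candidates = [i for i, s in enumerate(sigs) if hamming(s, qsig) <= 8]
nearest = min(candidates, key=lambda i: np.linalg.norm(data[i] - query))
```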

Performance Enhancement of a DVA-tree by the Independent Vector Approximation (독립적인 벡터 근사에 의한 분산 벡터 근사 트리의 성능 강화)

  • Choi, Hyun-Hwa;Lee, Kyu-Chul
    • The KIPS Transactions: Part D / v.19D no.2 / pp.151-160 / 2012
  • Most distributed high-dimensional indexing structures provide reasonable search performance when the dataset is uniformly distributed. When the dataset is clustered or skewed, however, search performance gradually degrades compared with the uniform case. We propose a method for improving the k-nearest neighbor search performance of the distributed vector approximation tree (DVA-tree) on strongly clustered or skewed datasets. The basic idea is to compute the volumes of the leaf nodes in the top-tree of a DVA-tree and to assign different numbers of bits to them so as to preserve the discriminating power of the vector approximation; in other words, more bits are assigned to high-density clusters. We conducted experiments comparing the search performance of the distributed hybrid spill-tree and the DVA-tree on synthetic and real datasets. The results show that our scheme consistently yields significant performance improvements over the DVA-tree for strongly clustered or skewed datasets. (A bit-allocation sketch follows this entry.)
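The density-driven bit assignment can be sketched as follows. The log-density weighting is a hypothetical stand-in for the paper's volume-based rule; it only illustrates the principle that dense regions earn finer quantization.

```python
import numpy as np

def allocate_bits(counts, volumes, total_bits):
    """Split a bit budget across leaf regions: regions that pack many
    points into a small volume (high density) receive more bits, so
    their vector approximations stay discriminative.  The log-density
    weight is an illustrative choice, not the paper's exact rule."""
    density = np.asarray(counts, dtype=float) / np.asarray(volumes, dtype=float)
    weights = np.log1p(density)
    share = weights / weights.sum()
    bits = np.floor(share * total_bits).astype(int)
    bits[np.argmax(share)] += total_bits - bits.sum()   # leftovers to the densest
    return bits

# Three leaf regions: a tight cluster, a medium one, and sparse outliers.
print(allocate_bits(counts=[9000, 900, 100], volumes=[1.0, 10.0, 100.0], total_bits=48))
```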

Saddlepoint Approximation to the Smooth Functions of Means Model (평균 벡터의 평활함수모형에 대한 안부점근사 -스튜던트화 분산을 중심으로-)

  • 나종화;김주성
    • The Korean Journal of Applied Statistics / v.14 no.2 / pp.333-344 / 2001
  • Many statistics used in statistical inference can be expressed as smooth functions of a mean vector. In this study, we present a saddlepoint approximation to the distribution functions of such statistics. It shows that the saddlepoint approximation to the distribution function of a general statistic given in Na (1998) is particularly useful for the smooth-functions-of-means model. The approximation is far more accurate than the normal approximation and, in particular, retains its accuracy in the tails of the distribution, so it can be used effectively in many problems that require precise inference. The studentized variance is considered as the smooth function of means in the simulation study. (The classical tail formula is shown after this entry.)

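For context, the classical Lugannani-Rice form of the saddlepoint tail approximation for a sample mean is reproduced below; the paper applies this style of approximation to smooth functions of a mean vector, such as the studentized variance, rather than to a plain mean.

```latex
% Lugannani-Rice saddlepoint tail approximation for the mean of n i.i.d.
% observations with cumulant generating function K; the saddlepoint
% \hat{s} solves K'(\hat{s}) = x.
\[
  P(\bar{X} \ge x) \approx 1 - \Phi(w) + \phi(w)\left(\frac{1}{u} - \frac{1}{w}\right),
  \qquad
  w = \operatorname{sgn}(\hat{s})\sqrt{2n\{\hat{s}\,x - K(\hat{s})\}},
  \qquad
  u = \hat{s}\sqrt{n\,K''(\hat{s})}.
\]
```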

An Effective Method for Approximating the Euclidean Distance in High-Dimensional Space (고차원 공간에서 유클리드 거리의 효과적인 근사 방안)

  • Jeong, Seung-Do;Kim, Sang-Wook;Kim, Ki-Dong;Choi, Byung-Uk
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.5 / pp.69-78 / 2005
  • It is crucial for multimedia information retrieval to compute the Euclidean distance between two vectors in high-dimensional space efficiently. In this paper, we propose an effective method for approximating the Euclidean distance between two high-dimensional vectors. For this approximation, a previous method that simply employs the norms of the two vectors has been proposed. That method, however, ignores the angle between the two vectors and thus suffers from large approximation errors. Our method introduces an additional vector called a reference vector for estimating the angle between the two vectors, and approximates the Euclidean distance accurately by using the estimated angle. This reduces the approximation error significantly compared with the previous method. Also, we formally prove that the value approximated by our method is always smaller than the actual Euclidean distance, which implies that our method does not incur any false dismissal in multimedia information retrieval. Finally, we verify the superiority of the proposed method via performance evaluation with extensive experiments. (A pruning sketch follows this entry.)
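Because the approximation never exceeds the true distance, it can prune candidates in nearest-neighbor search without false dismissals. The loop below is an illustrative use of any such lower bound, not the paper's actual search procedure; the demo uses the norm-only bound from the reverse triangle inequality, which is the previous method the paper improves upon.

```python
import numpy as np

def nn_search(data, query, lower_bound):
    """Exact nearest neighbor with lower-bound pruning: if even the
    lower bound of ||x - query|| exceeds the best exact distance seen
    so far, the true distance must exceed it too, so x can be skipped."""
    best_i, best_d = -1, np.inf
    for i, x in enumerate(data):
        if lower_bound(x, query) >= best_d:
            continue                          # safe skip, no false dismissal
        d = np.linalg.norm(x - query)         # exact check for survivors only
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

rng = np.random.default_rng(6)
data = rng.normal(size=(5000, 64))
query = rng.normal(size=64)

# Reverse triangle inequality: | ||x|| - ||q|| | <= ||x - q||.
norm_lb = lambda x, q: abs(np.linalg.norm(x) - np.linalg.norm(q))
print(nn_search(data, query, norm_lb))
```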

Nu-SVR Learning with Predetermined Basis Functions Included (정해진 기저함수가 포함되는 Nu-SVR 학습방법)

  • Kim, Young-Il;Cho, Won-Hee;Park, Joo-Young
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.3 / pp.316-321 / 2003
  • Recently, support vector learning has attracted great interest in the areas of pattern classification, function approximation, and abnormality detection. It is well known that, among the various support vector learning methods, the so-called ν-versions are particularly useful when we need to control the total number of support vectors. In this paper, we consider the problem of function approximation utilizing both predetermined basis functions and the ν-version of support vector regression, ν-SVR. After reviewing ε-SVR, ν-SVR, and a semi-parametric approach, this paper presents an extension of the conventional ν-SVR method that can utilize predetermined basis functions. Moreover, the applicability of the presented method is illustrated via an example. (A simplified two-stage sketch follows this entry.)
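As a rough illustration of combining predetermined basis functions with ν-SVR, the two-stage sketch below first fits the basis terms by least squares and then applies scikit-learn's NuSVR to the residual. This is a deliberate simplification: the paper extends the ν-SVR optimization itself rather than stacking two separate fits, and the basis choice here (constant and linear terms) is arbitrary.

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, size=200)
y = 0.5 * x + np.sin(2 * x) + rng.normal(scale=0.1, size=200)

# Stage 1: least-squares fit of the predetermined basis (here: 1 and x).
basis = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
residual = y - basis @ coef

# Stage 2: nu-SVR on what the basis could not explain; nu bounds the
# fraction of support vectors, which is the appeal of the nu-formulation.
svr = NuSVR(nu=0.3, C=10.0, kernel="rbf", gamma=1.0).fit(x[:, None], residual)

def predict(x_new):
    b = np.column_stack([np.ones_like(x_new), x_new])
    return b @ coef + svr.predict(x_new[:, None])
```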

Index Structure for Efficient Similarity Search of Multi-Dimensional Data (다차원 데이터의 효과적인 유사도 검색을 위한 색인구조)

  • 복경수;허정필;유재수
    • Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.97-99 / 2004
  • This paper proposes an index structure for efficient similarity search over multi-dimensional data. The proposed index structure is based on vector approximation in order to overcome the curse of dimensionality. It splits the space, relative to the parent node, with a region-partitioning scheme similar to the KDB-tree, and each partitioned region is represented as a vector-approximated region by dynamically allocating bits according to the distribution of the data. Many region entries can therefore be stored in a single node, which reduces the depth of the tree. In addition, because bits are allocated relative to the multi-dimensional feature vector space, the structure is effective for clustered data. We demonstrate the superiority of the proposed index structure through various experiments. (A quantization sketch follows this entry.)

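A rough sketch of vector approximation with dynamic bit allocation, in the spirit of the abstract above: dimensions with greater spread receive more quantization bits. Allocating per dimension by standard deviation is an assumption for illustration; the paper allocates bits per split region under a KDB-tree-like partitioning.

```python
import numpy as np

def dynamic_bits(data, total_bits):
    """More bits to dimensions with larger spread (illustrative rule)."""
    spread = data.std(axis=0)
    share = spread / spread.sum()
    return np.maximum(1, np.round(share * total_bits).astype(int))

def approximate(data, bits):
    """Quantize each dimension onto a 2**bits[d] grid of equi-width
    cells; only the small cell codes are stored, as in VA-style
    indexing, instead of the full floating-point vectors."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    cells = (2 ** bits) - 1
    return np.round((data - lo) / (hi - lo) * cells).astype(int)

rng = np.random.default_rng(3)
data = rng.normal(scale=[5.0, 1.0, 0.2, 0.2], size=(1000, 4))
bits = dynamic_bits(data, total_bits=16)   # the wide dimensions get most bits
codes = approximate(data, bits)
```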

Kernel Adatron Algorithm of Support Vector Machine for Function Approximation (함수근사를 위한 서포트 벡터 기계의 커널 애더트론 알고리즘)

  • Seok, Kyung-Ha;Hwang, Chang-Ha
    • The Transactions of the Korea Information Processing Society / v.7 no.6 / pp.1867-1873 / 2000
  • Function approximation from a set of input-output pairs has numerous applications in scientific and engineering areas. The support vector machine (SVM) is a new and very promising technique for classification, regression, and function approximation, developed by Vapnik and his group at AT&T Bell Laboratories. However, it has not yet established itself as a common machine learning tool. This is partly because it is not easy to implement: the standard implementation requires an optimization package for quadratic programming (QP). In this paper we present a simple iterative Kernel Adatron (KA) algorithm for function approximation and compare it with the standard SVM algorithm using QP. (A minimal KA sketch follows this entry.)

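A minimal Kernel Adatron sketch, shown in its classification form, where the simple per-multiplier gradient-ascent update is the whole point: no QP solver is needed. The regression variant studied in the paper follows the same pattern with ε-insensitive targets; the learning rate, box constraint, and epoch count below are illustrative choices.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_adatron(X, y, eta=0.1, C=10.0, epochs=200):
    """Gradient ascent on the SVM dual, one multiplier at a time,
    clipped to [0, C]: the Adatron update applied in kernel space."""
    K = rbf(X, X)
    a = np.zeros(len(y))
    for _ in range(epochs):
        for i in range(len(y)):
            margin = y[i] * np.sum(a * y * K[i])     # y_i * f(x_i)
            a[i] = np.clip(a[i] + eta * (1.0 - margin), 0.0, C)
    return a

def predict(a, X, y, X_new):
    return np.sign(rbf(X_new, X) @ (a * y))

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)       # a simple nonlinear target
a = kernel_adatron(X, y)
print((predict(a, X, y, X) == y).mean())             # training accuracy
```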

An Effective Method for Dimensionality Reduction in High-Dimensional Space (고차원 공간에서 효과적인 차원 축소 기법)

  • Jeong Seung-Do;Kim Sang-Wook;Choi Byung-Uk
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.4 s.310 / pp.88-102 / 2006
  • In multimedia information retrieval, multimedia data are represented as vectors in high-dimensional space. To search these vectors efficiently, a variety of indexing methods have been proposed. However, the performance of these indexing methods degrades dramatically with increasing dimensionality, which is known as the curse of dimensionality. To resolve it, dimensionality reduction methods have been proposed: they map feature vectors in high-dimensional space into vectors in a low-dimensional space before indexing the data. This paper proposes a method for dimensionality reduction based on a function that approximates the Euclidean distance using the norm and angle components of a vector. First, we identify the causes of error in angle estimation when approximating the Euclidean distance, and discuss basic directions for reducing those errors. Then, we propose a novel dimensionality reduction method that composes a set of subvectors from a feature vector and maintains only the norm and the estimated angle for every subvector. The selection of a good reference vector is important for accurate estimation of the angle component; we present criteria for a good reference vector and propose a method that chooses one by using the Levenberg-Marquardt algorithm. Also, we define a novel distance function and formally prove that it lower-bounds the Euclidean distance, which implies that our approach reduces the dimensionality effectively without incurring any false dismissals. Finally, we verify the superiority of the proposed method via performance evaluation with extensive experiments. (A reduction sketch follows this entry.)
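A sketch of the norm-and-angle reduction described above: each feature vector is split into subvectors, and only each subvector's norm and its angle to a reference vector are kept, so a high-dimensional vector shrinks to two numbers per subvector. The per-subvector lower bound reuses the law-of-cosines construction sketched after the first entry; the random reference vectors here stand in for the Levenberg-Marquardt-optimized ones in the paper.

```python
import numpy as np

def reduce_vec(v, refs, parts):
    """Map v to a list of (norm, angle-to-reference) pairs, one per
    subvector: a 2*parts-dimensional stand-in for the full vector."""
    out = []
    for sub, r in zip(np.split(v, parts), refs):
        n = np.linalg.norm(sub)
        c = sub @ r / (n * np.linalg.norm(r))
        out.append((n, np.arccos(np.clip(c, -1.0, 1.0))))
    return out

def lower_bound(a, b):
    """Distance in the reduced space.  Each subvector term uses an
    underestimated angle, so the total never exceeds the true
    Euclidean distance between the original vectors."""
    s = 0.0
    for (na, ta), (nb, tb) in zip(a, b):
        theta = abs(ta - tb)                  # underestimates the true angle
        s += max(na**2 + nb**2 - 2 * na * nb * np.cos(theta), 0.0)
    return np.sqrt(s)

rng = np.random.default_rng(5)
x, y = rng.normal(size=(2, 64))
refs = rng.normal(size=(4, 16))               # one reference per 16-dim subvector
print(lower_bound(reduce_vec(x, refs, 4), reduce_vec(y, refs, 4)),
      np.linalg.norm(x - y))                  # lower bound <= exact
```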