• Title/Abstract/Keyword: Scale invariant

Search results: 363 items (processing time: 0.028 s)

Scale Invariant Auto-context for Object Segmentation and Labeling

  • Ji, Hongwei; He, Jiangping; Yang, Xin
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 8, No. 8, pp. 2881-2894, 2014
  • In complicated environments, context information plays an important role in image segmentation and labeling. The recently proposed auto-context algorithm is one of the effective context-based methods. However, the standard auto-context approach samples context locations using a fixed radius sequence, which is sensitive to large scale changes of objects. In this paper, we present a scale invariant auto-context (SIAC) algorithm, an improved version of auto-context. To achieve scale invariance, we iteratively approximate the optimal scale for the image and adopt the corresponding optimal radius sequence for context-location sampling, both in training and testing. In each iteration of the proposed SIAC algorithm, the current classification map is used to estimate the image scale, and the corresponding radius sequence is then used to choose context locations. The algorithm iteratively updates the classification maps, as well as the image scales, until convergence. We demonstrate the SIAC algorithm on several image segmentation and labeling tasks, where it improves on the standard auto-context algorithm when large scale changes of objects exist.
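The iterative scheme the abstract describes can be outlined roughly as follows. This is a toy sketch under my own naming; `classify` and `estimate_scale` stand in for the paper's context-based classifier and scale estimator, which are not specified here:

```python
# Toy outline of the SIAC loop: reclassify with a rescaled radius sequence
# until the scale estimate settles. `classify` and `estimate_scale` are
# placeholders for the paper's classifier and scale estimator.
def siac_segment(image, classify, estimate_scale, base_radii,
                 max_iters=10, tol=1e-3):
    """Iteratively refine the classification map and the image scale."""
    scale = 1.0
    prob_map = classify(image, [r * scale for r in base_radii], None)
    for _ in range(max_iters):
        new_scale = estimate_scale(prob_map)          # scale from current map
        radii = [r * new_scale for r in base_radii]   # rescaled radius sequence
        prob_map = classify(image, radii, prob_map)   # context-aware update
        if abs(new_scale - scale) < tol:
            break
        scale = new_scale
    return prob_map, scale
```

The loop terminates either at the iteration cap or once consecutive scale estimates agree within a tolerance, matching the "iterate until convergence" description above.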

ESTIMATION OF SCALE PARAMETER FROM RAYLEIGH DISTRIBUTION UNDER ENTROPY LOSS

  • Chung, Youn-Shik
    • Journal of applied mathematics & informatics, Vol. 2, No. 1, pp. 33-40, 1995
  • An entropy loss is derived for the scale parameter of the Rayleigh distribution. Under this entropy loss we obtain the best invariant estimators and the Bayes estimators of the scale parameter. We also compare the MLE with the proposed estimators.
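For context, the maximum likelihood estimator the paper compares against has a simple closed form. The sketch below is the standard textbook result, not taken from the paper:

```python
import math

# MLE of the Rayleigh scale parameter sigma (standard textbook form).
# For f(x; sigma) = (x / sigma^2) * exp(-x^2 / (2 * sigma^2)),
# the MLE is sigma_hat = sqrt(sum(x_i^2) / (2 * n)).
def rayleigh_scale_mle(sample):
    n = len(sample)
    return math.sqrt(sum(x * x for x in sample) / (2 * n))
```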

Two-Dimensional Shape Description of Objects Using the Contour Fluctuation Ratio (윤곽선 변동율을 이용한 물체의 2차원 형태 기술)

  • 김민기
    • 한국멀티미디어학회논문지, Vol. 5, No. 2, pp. 158-166, 2002
  • In this paper, we define the contour fluctuation ratio as the ratio of the length of the straight line joining the two endpoints of a contour segment to the length of the curve, and propose a method for describing contour shape from it. Since the contour fluctuation ratio is computed from contour segments, contour segments that are invariant to rotation and scaling must be extracted. To this end, the contour is partitioned at relative lengths proportional to the total contour length, and overlapping contour segments with every contour point taken as a partition point are used. Because the contour fluctuation ratio captures local or global features depending on the unit length of the contour segment, the shape of an object is described by a feature vector representing the distribution of contour fluctuation ratios, and the similarity of contour shapes is computed by comparing feature vectors for each unit length. The proposed method was implemented and tested on 165 images obtained by rotating and scaling 15 fish images. The results confirmed not only invariance to rotation and scaling but also clustering ability superior to methods based on the normalized chain-code histogram (NCCH) and ring projection (TRP).

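The contour fluctuation ratio defined above (chord length over arc length of a contour segment) can be sketched as follows; the function and variable names are my own:

```python
import math

# Contour fluctuation ratio: length of the straight line joining a contour
# segment's endpoints divided by the arc length of the segment itself.
def fluctuation_ratio(segment):
    """segment: list of (x, y) points sampled along the contour."""
    arc = sum(math.dist(segment[i], segment[i + 1])
              for i in range(len(segment) - 1))
    chord = math.dist(segment[0], segment[-1])
    return chord / arc if arc > 0 else 1.0
```

A straight segment gives a ratio of 1, and the ratio falls as the segment bends; since it depends only on relative lengths, it is unchanged by rotating or uniformly scaling the contour, which is the invariance the abstract relies on.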

Automatic Registration of High Resolution Satellite Images Using Local Properties of Tie Points (지역적 매칭쌍 특성에 기반한 고해상도영상의 자동기하보정)

  • 한유경; 번영기; 최재완; 한동엽; 김용일
    • 한국측량학회지, Vol. 28, No. 3, pp. 353-359, 2010
  • This paper aims to improve automatic registration of high-resolution satellite images by refining a matching method based on the Scale Invariant Feature Transform (SIFT) descriptor so that more tie points are extracted. To this end, tie points are extracted by additionally exploiting the positional relationship between the interest points of the reference image and those of the sensed image. After estimating affine transformation coefficients with the SIFT descriptor, the interest-point coordinates of the sensed image are transformed into the reference-image coordinate system. Final tie points are then extracted using the spatial distance between the transformed interest points of the sensed image and the interest points of the reference image. A piecewise linear function is constructed from the extracted tie points to perform automatic registration between the high-resolution images. Compared with the standard SIFT method, the proposed technique extracted a larger number of tie points distributed evenly across the entire image.
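The spatial-distance filtering step described above can be sketched in simplified form. Here the affine transform is assumed to be already estimated from the initial SIFT matches; all names are illustrative, not the authors' code:

```python
import math

# Warp sensed-image interest points into the reference frame with an
# estimated affine transform, and keep only candidate pairs whose warped
# position lies within a spatial-distance threshold of the reference point.
def filter_tie_points(candidates, affine, max_dist):
    """candidates: list of ((x_ref, y_ref), (x_sen, y_sen)) point pairs.
    affine: (a, b, tx, c, d, ty) mapping sensed coords to reference coords."""
    a, b, tx, c, d, ty = affine
    kept = []
    for (xr, yr), (xs, ys) in candidates:
        xp = a * xs + b * ys + tx   # sensed point warped into
        yp = c * xs + d * ys + ty   # the reference frame
        if math.dist((xr, yr), (xp, yp)) <= max_dist:
            kept.append(((xr, yr), (xs, ys)))
    return kept
```

Pairs consistent with the global affine geometry survive the threshold, which is how the method keeps more, and more evenly distributed, tie points than descriptor matching alone.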

MEGH: A New Affine Invariant Descriptor

  • Dong, Xiaojie; Liu, Erqi; Yang, Jie; Wu, Qiang
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 7, No. 7, pp. 1690-1704, 2013
  • An affine invariant descriptor is proposed which represents affine covariant regions well. Estimating the main orientation is still problematic in many existing methods, such as SIFT (scale invariant feature transform) and SURF (speeded up robust features). Instead of aligning an estimated main orientation, this paper uses the ellipse orientation directly. According to the ellipse orientation, affine covariant regions are first divided into 4 sub-regions with equal angles. Since the regions are divided relative to the ellipse orientation, the sub-regions are rotation invariant regardless of any rotation of the ellipse. Meanwhile, the affine covariant regions are normalized into a circular region. Finally, the gradients of pixels in the circular region are calculated and a partition-based descriptor is created from these gradients. Compared with existing descriptors including MROGH, SIFT, GLOH, PCA-SIFT and spin images, the proposed descriptor demonstrates superior performance in extensive experiments.
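The partitioning step can be illustrated with a small sketch (my own naming, not the paper's implementation): a point in the normalized circular region is assigned to one of 4 equal-angle sub-regions measured from the ellipse orientation, so the partition rotates with the region and no separate main-orientation estimate is needed:

```python
import math

# Assign a point (x, y), relative to the region center, to one of n_bins
# equal-angle sub-regions measured from the ellipse orientation.
def subregion_index(x, y, ellipse_orientation, n_bins=4):
    angle = (math.atan2(y, x) - ellipse_orientation) % (2 * math.pi)
    return int(angle // (2 * math.pi / n_bins))
```

Rotating both the point and the ellipse orientation by the same amount leaves the index unchanged, which is the rotation invariance the abstract claims for the sub-regions.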

Comparison of Invariant Pattern Recognition Algorithms (불변 패턴인식 알고리즘의 비교연구)

  • 강대성
    • 전자공학회논문지B, Vol. 33B, No. 8, pp. 30-41, 1996
  • This paper presents a comparative study of four pattern recognition algorithms which are invariant to translation, rotation, and scale changes of the input object: object shape features (OSF), the geometrical Fourier-Mellin transform (GFMT), moment invariants (MI), and the centered polar exponential transform (CPET). Pattern description is one of the most important aspects of pattern recognition, since it allows an object's shape to be described independently of translation, rotation, or size. We first discuss problems that arise in the conventional invariant pattern recognition algorithms, and then analyze their performance using the same criterion. Computer simulations with several distorted images show that the CPET algorithm yields better performance than the other ones.

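Of the four algorithms, CPET builds on the log-polar idea that rotation and uniform scaling about a center become translations in the transformed coordinates. A toy coordinate-mapping sketch (my own naming, not the paper's code):

```python
import math

# Map image coordinates about a center (cx, cy) to (log r, theta):
# rotation becomes a shift in theta, uniform scaling a shift in log r.
def log_polar_coords(x, y, cx, cy):
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    return (math.log(r) if r > 0 else float("-inf"), math.atan2(dy, dx))
```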

Best Invariant Estimators In the Scale Parameter Problem

  • Choi, Kuey-Chung
    • 호남수학학술지, Vol. 13, No. 1, pp. 53-63, 1991
  • In this paper we first present the elements of the theory of families of distributions and corresponding estimators having structural properties which are preserved under certain groups of transformations, called the "invariance principle". The invariance principle is an intuitively appealing decision principle which is frequently used, even in classical statistics. It is interesting not only in its own right, but also because of its strong relationship with several other proposed approaches to statistics, including the fiducial inference of Fisher [3, 4], the structural inference of Fraser [5], and the use of noninformative priors of Jeffreys [6]. Unfortunately, space precludes a discussion of fiducial inference and structural inference. Many of the key ideas in these approaches will, however, be brought out in the discussion of invariance and its relationship to the use of noninformative priors. The principle is then applied to the problem of finding the best scale invariant estimator in the scale parameter problem. Finally, several examples are given.


Size, Scale and Rotation Invariant Proposed Feature Vectors for Trademark Recognition

  • Faisal zafa, Muhammad; Mohamad, Dzulkifli
    • 대한전자공학회 학술대회논문집, 2002 ITC-CSCC -3, pp. 1420-1423, 2002
  • The classification and recognition of two-dimensional trademark patterns independently of their position, orientation, size and scale is discussed through two proposed feature vectors. The paper presents experiments on the two feature vectors, which show size invariance and scale invariance respectively; both are equally invariant to rotation as well. The feature extraction is based on local as well as global statistics of the image. These feature vectors have appealing mathematical simplicity and are versatile. The results so far show the best performance of the developed system is based on these unique sets of features. This is achieved by segmenting the image using a connected-component (nearest neighbours) algorithm. The second part of this work considers the possibility of using back-propagation neural networks (BPN) for the learning and matching tasks, by simply feeding in the feature vectors. The effectiveness of the proposed feature vectors is tested with various trademarks not used in the learning phase.

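The segmentation step mentioned above relies on connected-component labeling. A minimal 4-connected sketch by breadth-first search (the paper's exact nearest-neighbours variant is not specified; this toy version and its names are my own):

```python
from collections import deque

# Label 4-connected components of a binary image (list of 0/1 rows) by BFS.
# Returns the component count and a per-pixel label map (0 = background).
def label_components(grid):
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] and not labels[sy][sx]:
                count += 1                     # new component found
                queue = deque([(sy, sx)])
                labels[sy][sx] = count
                while queue:                   # flood-fill its pixels
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return count, labels
```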

Comparative Analysis of the Performance of SIFT and SURF (SIFT 와 SURF 알고리즘의 성능적 비교 분석)

  • 이용환; 박제호; 김영섭
    • 반도체디스플레이기술학회지, Vol. 12, No. 3, pp. 59-64, 2013
  • Accurate and robust image registration is an important task in many applications such as image retrieval and computer vision. Image registration requires several essential steps: feature detection, extraction, matching, and reconstruction of the image. Among these functions, feature extraction not only plays a key role but also has a large effect on overall performance. Two representative algorithms for extracting image features are the scale invariant feature transform (SIFT) and speeded up robust features (SURF). In this paper, we present and evaluate the two methods, focusing on a comparative analysis of their performance. Experiments on accurate and robust feature detection cover various conditions such as scale changes, rotation and affine transformation. The experiments revealed that the SURF algorithm produced significantly better results than the SIFT method in both feature-point extraction and matching time.