• Title/Summary/Keyword: scale-invariant features (스케일 불변 특징)

15 search results

Place Modeling and Recognition using Distribution of Scale Invariant Features (스케일 불변 특징들의 분포를 이용한 장소의 모델링 및 인식)

  • Hu, Yi;Shin, Bum-Joo;Lee, Chang-Woo
    • Journal of the Korea Society of Computer and Information / v.13 no.4 / pp.51-58 / 2008
  • In this paper, we propose place modeling based on the distribution of scale-invariant features, and a place recognition method that recognizes places by comparing the place models in a database with the features extracted from input data. The proposed method rests on the assumption that every place can be represented by a unique feature distribution that is distinguishable from the others. It uses global information of each place, representing one place by one distribution model. The main contribution is therefore that the time cost grows linearly, rather than exponentially, with the number of places. For the performance evaluation of the proposed method, different numbers of frames and different numbers of features are used, respectively. Empirical results show that our approach achieves better space and time cost compared to other approaches. We expect the proposed method to be applicable to many ubiquitous systems such as robot navigation, vision systems for blind people, wearable computing, and so on.

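The distribution-comparison idea above can be sketched minimally: each place keeps one normalized histogram over quantized feature values, and recognition is a nearest-distribution lookup, which is linear in the number of places. The place names, histogram values, and the L1 distance used here are illustrative assumptions, not the paper's exact distribution model.

```python
# Minimal sketch of distribution-based place recognition: one
# normalized histogram per place, query assigned to the closest one.
def l1_distance(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def recognize(query_hist, place_models):
    # place_models: {place_name: normalized feature histogram}
    # One comparison per place, so cost is linear in the number of places.
    return min(place_models, key=lambda name: l1_distance(query_hist, place_models[name]))

place_models = {
    "corridor": [0.7, 0.2, 0.1],
    "office":   [0.1, 0.3, 0.6],
}
print(recognize([0.6, 0.3, 0.1], place_models))  # closest model: "corridor"
```

Adding a new place only adds one histogram to the dictionary, which matches the linear-growth claim in the abstract.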

Extended SURF Algorithm with Color Invariant Feature (컬러 불변 특징을 갖는 확장된 SURF 알고리즘)

  • Yoon, Hyun-Sup;Han, Young-Joon;Hahn, Hern-Soo
    • Proceedings of the Korean Society of Computer Information Conference / 2009.01a / pp.193-196 / 2009
  • Finding correspondences across multiple images under environmental changes such as scale, illumination, and viewpoint is not easy. SURF is one of the algorithms that finds feature points invariant to such changes; it shows performance comparable to SIFT, which is generally known for its excellent performance, while greatly improving speed. However, because SURF uses only grayscale information, it cannot exploit the many useful features available in color space. In this paper, we propose an extended SURF algorithm that incorporates robust color feature information. The superiority of the proposed method is demonstrated through comparative experiments between SIFT, SURF, and the proposed color-extended SURF on images taken under various illumination conditions and viewpoint changes.

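One simple way to picture such a color extension is to append an intensity-invariant chromaticity to a grayscale descriptor. Normalized rg chromaticity is a classic color invariant, unchanged when all channels are scaled by the same illumination factor; the paper's actual color features may differ, and the descriptor values below are illustrative.

```python
# Sketch: extending a grayscale descriptor with an intensity-invariant
# color feature (normalized rg chromaticity).
def chromaticity(r, g, b):
    s = r + g + b
    return (r / s, g / s)

def extend_descriptor(gray_desc, r, g, b):
    # Concatenate the grayscale part with the color-invariant part
    return list(gray_desc) + list(chromaticity(r, g, b))

d1 = extend_descriptor([0.1, 0.9], 200, 100, 100)
d2 = extend_descriptor([0.1, 0.9], 100, 50, 50)  # same color, half as bright
# d1 == d2: the color part is invariant to uniform intensity change
```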

Planar-Object Position Estimation by using Scale & Affine Invariant Features (불변하는 스케일-아핀 특징 점을 이용한 평면객체의 위치 추정)

  • Lee, Seok-Jun;Jung, Soon-Ki
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.795-800 / 2008
  • Recognizing objects in camera images has long been an active research topic in computer vision. To recognize an object appearing in an image and to locate the current view within the full image containing that object, the objects expected to appear must be trained in advance. In this paper, feature points are detected for the objects that will appear in the image, and the pixel-gradient direction vectors at each point, together with those of its neighbors, are normalized using a difference-of-Gaussian (DoG) function. These serve later as information for estimating the positions of feature points detected in subsequent input images, by comparing parameters such as the distances between feature points and their neighbors and the ratios of their scales. In this paper, satellite images of a large facility complex are used to detect initial feature points for the buildings inside the complex and store them in a database. After training, a specific building in a printed satellite image is photographed with a camera, and the feature points of the captured image are analyzed and compared against those in the database. For matched feature points, the DoG-normalized vectors are used to estimate the position of the corresponding building, and a pre-built 3D model of the building is registered to the image and visualized using augmented reality techniques.


Linear Regression-based 1D Invariant Image for Shadow Detection and Removal in Single Natural Image (단일 자연 영상에서 그림자 검출 및 제거를 위한 선형 회귀 기반의 1D 불변 영상)

  • Park, Ki-Hong
    • Journal of Digital Contents Society / v.19 no.9 / pp.1787-1793 / 2018
  • Shadows are a common phenomenon in natural scenes, but they have a negative influence on image analysis tasks such as object recognition, feature detection, and scene analysis. The process of detecting and removing shadows in digital images should therefore be considered a pre-processing step for image analysis. In this paper, existing methods for acquiring 1D invariant images, one of the feature elements for detecting and removing shadows in a single natural image, are reviewed, and a method for obtaining 1D invariant images based on linear regression is proposed. The proposed method computes the log of the band-ratios between the channels of an RGB color image and obtains the projection line for the grayscale image by linear regression. The final 1D invariant image is obtained by projecting the log band-ratio image onto the estimated line. Experimental results show that the proposed method has lower computational complexity than the existing projection method using entropy minimization, and that shadow detection and removal based on the 1D invariant image are performed effectively.
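The band-ratio and regression steps described above can be sketched as follows. The pixel values, the choice of green as the reference channel, and the ordinary least-squares fit are illustrative assumptions, not the paper's exact formulation.

```python
import math

# Hypothetical RGB pixel values (illustrative only); the first three
# are intensity-scaled versions of one surface color, shadow-like.
pixels = [(200, 150, 100), (120, 90, 60), (60, 45, 30), (180, 120, 80)]

# Log band-ratios per pixel, with green as the reference channel:
# u = log(R/G), v = log(B/G)
us = [math.log(r / g) for r, g, b in pixels]
vs = [math.log(b / g) for r, g, b in pixels]

# Ordinary least-squares line v = a*u + c through the log-ratio points
n = len(us)
mu, mv = sum(us) / n, sum(vs) / n
a = sum((u - mu) * (v - mv) for u, v in zip(us, vs)) / sum((u - mu) ** 2 for u in us)
c = mv - a * mu

# Project each log-ratio point onto the fitted line; the coordinate
# along the line is the pixel's 1D invariant value
norm = math.hypot(1.0, a)
invariant = [(u + a * v) / norm for u, v in zip(us, vs)]
```

Because band-ratios cancel a uniform intensity scaling, the three shadow-like pixels above receive identical 1D invariant values, which is the property that makes the image useful for shadow detection.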

View invariant image matching using SURF (SURF(speed up robust feature)를 이용한 시점변화에 강인한 영상 매칭)

  • Son, Jong-In;Kang, Minsung;Sohn, Kwanghoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.07a / pp.222-225 / 2011
  • Image matching is one of the important fundamental techniques in computer vision. However, finding correspondences robust to scale, rotation, illumination, and viewpoint changes is not an easy task. To address this, algorithms such as the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) have been proposed, but they remain weak against viewpoint changes. In this paper, we propose an algorithm robust to viewpoint change. Using a homography estimated from highly similar feature correspondences between the original image and the query image, the query image is rectified to resemble the original before matching, yielding matching that is robust to viewpoint change. The superiority of the proposed algorithm is demonstrated by comparing its performance and running time against existing SIFT and SURF on several images with changed viewpoints.

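The rectification step relies on applying an estimated 3×3 homography to image points. A minimal sketch of that application follows; the matrix H here is an illustrative similarity-type transform, not one estimated from real matches.

```python
# Apply a 3x3 homography H to a list of 2D points using homogeneous
# coordinates: (x, y, 1) -> H @ (x, y, 1), then divide by w.
def apply_homography(H, pts):
    out = []
    for x, y in pts:
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append(((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
                    (H[1][0] * x + H[1][1] * y + H[1][2]) / w))
    return out

# Illustrative H: scale by 2, then translate by (1, 3)
H = [[2.0, 0.0, 1.0],
     [0.0, 2.0, 3.0],
     [0.0, 0.0, 1.0]]
warped = apply_homography(H, [(1.0, 1.0)])  # [(3.0, 5.0)]
```

In the described pipeline the query image would be warped with such an H before a second round of feature matching.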

Image Similarity Retrieval using a Scale and Rotation Invariant Region Feature (크기 및 회전 불변 영역 특징을 이용한 이미지 유사성 검색)

  • Yu, Seung-Hoon;Kim, Hyun-Soo;Lee, Seok-Lyong;Lim, Myung-Kwan;Kim, Deok-Hwan
    • Journal of KIISE:Databases / v.36 no.6 / pp.446-454 / 2009
  • Among various region detectors and shape feature extraction methods, MSER (Maximally Stable Extremal Regions), SIFT, and its variants are popular in computer vision applications. However, since SIFT is sensitive to illumination change and MSER is sensitive to scale change, they are not easy to apply to image similarity retrieval. In this paper, we present a Scale and Rotation Invariant Region Feature (SRIRF) descriptor using a scale pyramid, MSER, and affine normalization. The proposed SRIRF method is robust to scale, rotation, and illumination changes of an image since it uses affine normalization and the scale pyramid. We have tested the SRIRF method on various images. Experimental results demonstrate that its retrieval performance is about 20%, 38%, 11%, and 24% better than that of traditional SIFT, PCA-SIFT, CE-SIFT, and SURF, respectively.

Mobile Robot Localization and Mapping using Scale-Invariant Features (스케일 불변 특징을 이용한 이동 로봇의 위치 추정 및 매핑)

  • Lee, Jong-Shill;Shen, Dong-Fan;Kwon, Oh-Sang;Lee, Eung-Hyuk;Hong, Seung-Hong
    • Journal of IKEEE / v.9 no.1 s.16 / pp.7-18 / 2005
  • A key capability of an autonomous mobile robot is to localize itself accurately while simultaneously building a map of the environment. In this paper, we propose a vision-based mobile robot localization and mapping algorithm using scale-invariant features. A camera with a fisheye lens facing the ceiling is attached to the robot to acquire high-level features with scale invariance; these features are used in the map building and localization processes. As pre-processing, input images from the fisheye lens are calibrated to remove radial distortion, and labeling and convex hull techniques are used to segment the ceiling region from the wall region. In the initial map building process, features are calculated for the segmented regions and stored in a map database. Features are then continuously calculated from sequential input images and matched against the existing map until map building is finished; features that do not match are added to the map. Localization is performed simultaneously with feature matching: when features are matched with the existing map, the robot is localized and the map database is updated at the same time. The proposed method can build a map of a 50 m² area in 2 minutes. The positioning accuracy is ±13 cm, and the average error of the robot's heading angle is ±3 degrees.

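The convex-hull step used to delimit the segmented ceiling region can be sketched with Andrew's monotone chain algorithm, a standard O(n log n) hull construction; the pixel coordinates below are illustrative, not data from the paper.

```python
# Andrew's monotone chain: convex hull of a set of 2D points
# (e.g. labeled ceiling pixels), returned in counter-clockwise order.
def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each half (it repeats the other half's first)
    return lower[:-1] + upper[:-1]

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
# The interior point (1, 1) is excluded; only the 4 corners remain.
```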

A Study on Fisheye Lens based Features on the Ceiling for Self-Localization (실내 환경에서 자기위치 인식을 위한 어안렌즈 기반의 천장의 특징점 모델 연구)

  • Choi, Chul-Hee;Choi, Byung-Jae
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.442-448 / 2011
  • There are many research results on self-localization techniques for mobile robots. In this paper we present a self-localization technique based on features of the ceiling, viewed through a fisheye lens. Features obtained by SIFT (Scale Invariant Feature Transform) are matched between the previous image and the current image, and an optimal fitting function is then derived. A fisheye lens naturally introduces distortion into its images, which must be corrected algorithmically. We propose methods for calibrating the distorted images and for designing a geometric fitness model. The proposed method is applied to laboratory and aisle environments, and we show its feasibility in these indoor settings.

A Study on Scale-Invariant Features Extraction and Distance Measurement for Localization of Mobile Robot (이동로봇의 위치 추정을 위한 스케일 불변 특징점 추출 및 거리 측정에 관한 연구)

  • Jung, Dae-Seop;Jang, Mun-Suk;Ryu, Je-Goon;Lee, Eung-Hyuk;Shim, Jae-Hong
    • Proceedings of the KIEE Conference / 2005.10b / pp.625-627 / 2005
  • Existing camera-based distance measurement methods use either a stereo camera or a monocular camera. Stereo camera methods are expensive and sensitive to environmental variables, while monocular camera methods suffer from high computational complexity and large errors. In this study, we propose an algorithm that measures distance with a monocular camera while reducing cost and error. Features are extracted using the Scale Invariant Feature Transform (SIFT) for distance measurement, and distance is measured through feature matching and geometric analysis. The proposed method is verified by measuring the distance to a wall through feature point extraction, matching, and geometric analysis.

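In the simplest pinhole-camera case, the geometric analysis reduces to similar triangles: the distance to an object of known real size follows from its apparent size in pixels. This is a generic monocular range formula with illustrative values, not necessarily the paper's exact derivation.

```python
# Pinhole-camera range from similar triangles:
# distance = focal_length_px * real_size_m / observed_size_px
def distance_from_size(focal_px, real_width_m, observed_width_px):
    return focal_px * real_width_m / observed_width_px

# Illustrative values: 800 px focal length, 0.5 m wide target
# observed as 100 px in the image.
d = distance_from_size(800.0, 0.5, 100.0)  # 4.0 metres
```

In a feature-based pipeline, the observed size would come from the pixel distance between matched SIFT keypoints on the target.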

GAN-based Image-to-image Translation using Multi-scale Images (다중 스케일 영상을 이용한 GAN 기반 영상 간 변환 기법)

  • Chung, Soyoung;Chung, Min Gyo
    • The Journal of the Convergence on Culture Technology / v.6 no.4 / pp.767-776 / 2020
  • GcGAN is a deep learning model that translates styles between images under a geometric-consistency constraint. However, GcGAN has the disadvantage that it does not properly maintain the detailed content of an image, since it preserves content only through limited geometric transformations such as rotation or flip. In this study, we therefore propose a new image-to-image translation method, MSGcGAN (Multi-Scale GcGAN), which mitigates this disadvantage. MSGcGAN, an extended model of GcGAN, performs style translation between images in a direction that reduces the semantic distortion of images and maintains detailed content, by learning multi-scale images simultaneously and extracting scale-invariant features. Experimental results show that MSGcGAN outperforms GcGAN in both quantitative and qualitative aspects, translating style more naturally while maintaining the overall content of the image.
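Preparing the multi-scale inputs such a model learns from can be sketched as an image pyramid built by repeated 2×2 box-filter downsampling; the tiny grayscale array stands in for a real image, and the function names are illustrative.

```python
# 2x2 box-filter downsampling: average each non-overlapping 2x2 block.
def downsample(img):
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w - 1, 2)]
            for y in range(0, h - 1, 2)]

# Build a pyramid: the original image plus successively halved copies.
def pyramid(img, levels):
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out

levels = pyramid([[0, 2], [4, 6]], 2)  # 2x2 image -> second level [[3.0]]
```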