• Title/Summary/Keyword: SIFT feature

Search Results: 231

Illumination invariant image matching using histogram equalization (히스토그램 평활화를 이용한 조명변화에 강인한 영상 매칭)

  • Oh, Changbeom;Kang, Minsung;Sohn, Kwanghoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2011.11a
    • /
    • pp.161-164
    • /
    • 2011
  • Image matching is a fundamental technique in computer vision and is widely used in areas such as tracking and object recognition. However, it is difficult to find matching points that are robust to changes in scale, viewpoint, and illumination. Algorithms such as SIFT (Scale Invariant Feature Transform) and SURF (Speeded Up Robust Features) have been proposed to address this, but they still perform unreliably and inaccurately under illumination change. In this paper, to overcome the illumination problem, the images are first corrected with histogram equalization and then matched using SURF. Histogram equalization is used to resolve the problem that few feature points are extracted when SURF descriptors are generated from images captured under poor lighting, and we show that the number of feature points increases substantially after correction. The superiority of the proposed algorithm is confirmed by comparing the matching performance of the original SURF and the improved SURF on image pairs taken under different illumination.
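
As a rough illustration of the preprocessing idea above (not the authors' code), the sketch below equalizes a dark image's histogram before feature detection and compares keypoint counts; the file name is hypothetical, and SIFT stands in for SURF, which in OpenCV requires the contrib build.

```python
# Minimal sketch: histogram equalization before feature detection. SIFT stands
# in for SURF (opencv-contrib build required for SURF); the file name is assumed.
import cv2

def count_keypoints(gray, detector):
    """Return the number of keypoints detected in a grayscale image."""
    return len(detector.detect(gray, None))

gray = cv2.imread("dark_scene.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical low-light image
equalized = cv2.equalizeHist(gray)                          # spread the intensity histogram

sift = cv2.SIFT_create()
print("keypoints before equalization:", count_keypoints(gray, sift))
print("keypoints after equalization: ", count_keypoints(equalized, sift))
```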


Sensor Fusion-Based Semantic Map Building (센서융합을 통한 시맨틱 지도의 작성)

  • Park, Joong-Tae;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.3
    • /
    • pp.277-282
    • /
    • 2011
  • This paper describes sensor fusion-based semantic map building, which can improve the capabilities of a mobile robot in various domains including localization, path planning, and mapping. To build a semantic map, various environmental information, such as doors and cliff areas, should be extracted autonomously. Therefore, we propose a method to detect doors, cliff areas, and robust visual features using a laser scanner and a vision sensor. GHT (Generalized Hough Transform)-based recognition of door handles and the geometrical features of a door are used to detect doors. To detect cliff areas and robust visual features, a tilting laser scanner and SIFT features are used, respectively. The proposed method was verified by various experiments, which showed that the robot could build a semantic map autonomously in various indoor environments.
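
As a loose illustration of how such heterogeneous detections might be collected (purely an assumption, not the authors' implementation), the sketch below stores doors, cliff cells, and SIFT landmarks in a simple semantic map structure.

```python
# Illustrative only (not the authors' implementation): one way detections from
# the laser scanner and camera could be aggregated into a semantic map.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SemanticMap:
    doors: List[Tuple[float, float]] = field(default_factory=list)       # (x, y) in the map frame
    cliffs: List[Tuple[float, float]] = field(default_factory=list)      # cliff cells from the tilting laser
    landmarks: List[Tuple[float, float, list]] = field(default_factory=list)  # (x, y, SIFT descriptor)

    def add_door(self, x, y):
        self.doors.append((x, y))

    def add_cliff(self, x, y):
        self.cliffs.append((x, y))

    def add_landmark(self, x, y, descriptor):
        self.landmarks.append((x, y, descriptor))

# Hypothetical usage: fused detections become queryable map entries.
semantic_map = SemanticMap()
semantic_map.add_door(3.2, 1.5)                      # door found by GHT + geometry
semantic_map.add_cliff(5.0, -0.8)                    # cliff cell from the tilting laser
semantic_map.add_landmark(2.1, 0.4, [0.0] * 128)     # 128-D SIFT descriptor
```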

Design and Implementation of a Mobile Search Method based on Images (이미지 기반 모바일 검색 방법의 설계 및 구현)

  • Song, Jeo;Jeon, Jin-Hwan;Song, Un-Kyung;Lee, Sang-Moon
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2016.01a
    • /
    • pp.33-35
    • /
    • 2016
  • This paper proposes a method that lets users issue a search query with an image captured on a mobile device or an image already stored on the device. To reuse existing mobile search engines as they are, the system is implemented by matching the query image against tagging keywords based on image annotation. In this process, the SVM (Support Vector Machine) and SIFT (Scale Invariant Feature Transform) algorithms are used for image analysis and classification, and MapReduce over big data is applied for keyword matching against the image annotation tags.
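
A generic bag-of-visual-words pipeline in the spirit of the description above might look like the sketch below; it is not the paper's implementation, the vocabulary size, file names, and class labels are assumptions, and the MapReduce keyword-matching stage is omitted.

```python
# Generic SIFT bag-of-visual-words + SVM sketch (assumed parameters); not the
# paper's implementation, and the MapReduce keyword-matching stage is omitted.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def sift_descriptors(path):
    """Extract SIFT descriptors (128-D each) from an image file."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bow_histogram(desc, vocabulary, k):
    """Quantize descriptors against the visual vocabulary into a normalized histogram."""
    hist = np.zeros(k, np.float32)
    if len(desc):
        for word in vocabulary.predict(desc):
            hist[word] += 1
        hist /= hist.sum()
    return hist

# Hypothetical training images and labels standing in for annotated data.
train_paths = ["cat_01.jpg", "car_01.jpg", "cat_02.jpg", "car_02.jpg"]
train_labels = ["cat", "car", "cat", "car"]
k = 64                                              # assumed vocabulary size

vocabulary = KMeans(n_clusters=k, n_init=10).fit(
    np.vstack([sift_descriptors(p) for p in train_paths]))
X = np.array([bow_histogram(sift_descriptors(p), vocabulary, k) for p in train_paths])
classifier = SVC(kernel="linear").fit(X, train_labels)

# The predicted label of the query image becomes the text keyword that is sent
# to the existing mobile search engine.
query = bow_histogram(sift_descriptors("query_photo.jpg"), vocabulary, k)
print(classifier.predict([query])[0])
```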


Accurate Camera Self-Calibration based on Image Quality Assessment

  • Fayyaz, Rabia;Rhee, Eun Joo
    • Journal of Information Technology Applications and Management
    • /
    • v.25 no.2
    • /
    • pp.41-52
    • /
    • 2018
  • This paper presents a method for accurate camera self-calibration based on SIFT feature detection and image quality assessment. We performed image quality assessment to select high-quality images for the camera self-calibration process. We defined high-quality images as those that contain little or no blur and have maximum contrast among images captured within a short period. The image quality assessment includes blur detection and contrast assessment. Blur detection is based on the statistical analysis of the energy and standard deviation of the high-frequency components of the images using the Discrete Cosine Transform. Contrast assessment is based on contrast measurement and selection of the highest-contrast images among those captured in a short period. Experimental results show little or no distortion in the perspective view of the images. Thus, the suggested method achieves a camera self-calibration accuracy of approximately 93%.
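
A simple version of the DCT-based blur statistics described above could look like the following sketch; the cutoff ratio, thresholds, and file name are assumptions rather than the paper's exact parameters.

```python
# Sketch of a DCT-based blur score; the cutoff ratio and thresholds are assumed,
# not the paper's exact statistics.
import cv2
import numpy as np

def dct_highfreq_stats(gray, cutoff_ratio=0.25):
    """Energy and standard deviation of the high-frequency DCT coefficients."""
    h, w = gray.shape
    even = np.float32(gray[: h - h % 2, : w - w % 2]) / 255.0   # cv2.dct needs even sizes
    coeffs = cv2.dct(even)
    ch, cw = coeffs.shape
    # Everything outside the top-left low-frequency corner counts as high frequency.
    mask = np.ones_like(coeffs, bool)
    mask[: int(ch * cutoff_ratio), : int(cw * cutoff_ratio)] = False
    high = coeffs[mask]
    return float(np.sum(high ** 2)), float(np.std(high))

gray = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical frame
energy, deviation = dct_highfreq_stats(gray)
is_sharp = energy > 1.0 and deviation > 1e-3                    # assumed thresholds
print(energy, deviation, is_sharp)
```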

Panoramic Image Stitching using SURF

  • You, Meng;Lim, Jong-Seok;Kim, Wook-Hyun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.1
    • /
    • pp.26-32
    • /
    • 2011
  • This paper proposes a new method for panoramic image stitching using SURF (Speeded Up Robust Features). Panoramic image stitching can be regarded as a correspondence matching problem. In computer vision, it is difficult to find corresponding points in variable environments where scale, rotation, viewpoint, and illumination change. However, the SURF algorithm has been widely used to solve the correspondence matching problem because it is faster than SIFT (Scale Invariant Feature Transform). In this work, we also describe an efficient approach to decreasing computation time through homography estimation using RANSAC (random sample consensus). RANSAC is a robust estimation procedure that uses a minimal set of randomly sampled correspondences to estimate image transformation parameters. Experimental results show that our method is robust to rotation, zoom, Gaussian noise, and illumination change of the input images, and that computation time is greatly reduced.
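
A minimal two-image stitching sketch in the same spirit is shown below; ORB stands in for SURF (which needs the opencv-contrib build), and the file names, ratio threshold, and output canvas size are assumptions.

```python
# Minimal two-image stitching sketch; ORB stands in for SURF (opencv-contrib),
# and the file names, ratio threshold, and output canvas size are assumptions.
import cv2
import numpy as np

left = cv2.imread("left.jpg")
right = cv2.imread("right.jpg")
gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

orb = cv2.ORB_create(2000)
kp_l, des_l = orb.detectAndCompute(gray_l, None)
kp_r, des_r = orb.detectAndCompute(gray_r, None)

# Brute-force Hamming matching with a ratio test to drop ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des_r, des_l, k=2)
        if m.distance < 0.75 * n.distance]

src = np.float32([kp_r[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_l[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC rejects outlier correspondences while estimating the homography that
# maps the right image into the left image's coordinate frame.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

h, w = left.shape[:2]
panorama = cv2.warpPerspective(right, H, (w * 2, h))
panorama[:h, :w] = left
cv2.imwrite("panorama.jpg", panorama)
```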

Metadata Processing Technique for Similar Image Search of Mobile Platform

  • Seo, Jung-Hee
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.1
    • /
    • pp.36-41
    • /
    • 2021
  • Text-based image retrieval is not only cumbersome, since it requires the user to enter keywords manually, but is also limited in its semantic handling of keywords. Content-based image retrieval, by contrast, lets a computer process the visual content itself and addresses the problems of text retrieval more fundamentally. Vision tasks such as the extraction and mapping of image characteristics require processing a large amount of data, which makes efficient power consumption difficult in a mobile environment. Hence, an effective image retrieval method for mobile platforms is proposed herein. To give keywords a visual meaning before they are inserted into images, retrieval efficiency is improved by extracting exchangeable image file format metadata keywords from images found through content-based similar-image retrieval and then automatically adding those keywords to images captured on mobile devices. Additionally, users can manually add or modify keywords in the image metadata.
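
Reading Exif metadata as keyword candidates, as described above, might look like the following sketch; the selected tags and file name are assumptions, and writing keywords back into the metadata is omitted.

```python
# Sketch of reading Exif metadata as automatic keyword candidates; the chosen
# tags and file name are assumptions, and writing keywords back is omitted.
from PIL import Image, ExifTags

def exif_keywords(path, wanted=("DateTime", "Make", "Model", "GPSInfo")):
    """Collect a few Exif fields that can serve as automatic search keywords."""
    exif = Image.open(path).getexif()
    keywords = {}
    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, str(tag_id))
        if name in wanted:
            keywords[name] = value
    return keywords

print(exif_keywords("photo_from_phone.jpg"))
```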

Feature-based Non-rigid Registration between Pre- and Post-Contrast Lung CT Images (조영 전후의 폐 CT 영상 정합을 위한 특징 기반의 비강체 정합 기법)

  • Lee, Hyun-Joon;Hong, Young-Taek;Shim, Hack-Joon;Kwon, Dong-Jin;Yun, Il-Dong;Lee, Sang-Uk;Kim, Nam-Kug;Seo, Joon-Beom
    • Journal of Biomedical Engineering Research
    • /
    • v.32 no.3
    • /
    • pp.237-244
    • /
    • 2011
  • In this paper, a feature-based registration technique is proposed for pre-contrast and post-contrast lung CT images. It utilizes three-dimensional (3-D) features with their descriptors and estimates feature correspondences by nearest-neighbor matching in the feature space. We design a transformation model between the input image pairs using a free-form deformation (FFD) based on B-splines. Registration is achieved by minimizing an energy function incorporating the smoothness of the FFD and the correspondence information through a non-linear conjugate gradient method. To deal with outliers in feature matching, our energy model integrates a robust estimator that discards outliers effectively by iteratively reducing a radius of confidence during the minimization process. Performance was evaluated in terms of accuracy and efficiency using seven pairs of clinical lung CT images. For a quantitative assessment, a radiologist specializing in thoracic imaging manually placed landmarks on each CT image pair. In a comparative evaluation against a conventional feature-based registration method, our algorithm showed improved performance in both accuracy and efficiency.
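
The correspondence step described above, nearest-neighbor matching in feature space, can be sketched with a k-d tree as below; the descriptor dimension, random data, and distance-ratio check are assumptions, and the B-spline FFD optimization itself is omitted.

```python
# Sketch of nearest-neighbor feature correspondence in descriptor space; the
# descriptor dimension, random data, and ratio test are assumptions, and the
# B-spline FFD optimization itself is omitted.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
desc_fixed = rng.random((500, 64))     # descriptors from the pre-contrast image
desc_moving = rng.random((480, 64))    # descriptors from the post-contrast image

tree = cKDTree(desc_fixed)
dists, idx = tree.query(desc_moving, k=2)          # two nearest neighbors each

# Keep matches whose best neighbor is clearly closer than the second best,
# a common heuristic for rejecting ambiguous correspondences.
ratio = dists[:, 0] / np.maximum(dists[:, 1], 1e-12)
keep = ratio < 0.8
correspondences = [(i, int(idx[i, 0])) for i in np.flatnonzero(keep)]
print(len(correspondences), "tentative correspondences")
```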

A Hybrid Proposed Framework for Object Detection and Classification

  • Aamir, Muhammad;Pu, Yi-Fei;Rahman, Ziaur;Abro, Waheed Ahmed;Naeem, Hamad;Ullah, Farhan;Badr, Aymen Mudheher
    • Journal of Information Processing Systems
    • /
    • v.14 no.5
    • /
    • pp.1176-1194
    • /
    • 2018
  • Object classification using image content is a major challenge in computer vision. Superpixel information can be used to detect and classify objects in an image based on their locations. In this paper, we propose a methodology to detect and classify object locations in an image using an enhanced bag of words (BOW). It calculates the initial positions of the segments of an image using superpixels and then ranks them according to a region score. This information is then used to extract local and global features using a hybrid approach of the Scale Invariant Feature Transform (SIFT) and GIST, respectively. To enhance classification accuracy, a feature fusion technique is applied to combine the local and global feature vectors through a weight parameter. A support vector machine, a supervised classifier, is used for classification to evaluate the proposed methodology. The Pascal Visual Object Classes Challenge 2007 (VOC2007) dataset is used in the experiments to test the results. The proposed approach produced high-quality proposals for independent object locations, with a mean average best overlap (MABO) of 0.833 at 1,500 locations, resulting in a better detection rate. The results are compared with previous approaches, showing better classification results for the non-rigid classes.
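
The weighted fusion of local and global feature vectors mentioned above might be sketched as follows; the weight value and vector sizes are assumptions, not the paper's settings.

```python
# Sketch of weighted fusion of a local (SIFT bag-of-words) vector and a global
# (GIST-like) vector; the weight and dimensionalities are assumptions.
import numpy as np

def fuse_features(local_vec, global_vec, weight=0.6):
    """Weighted concatenation of L2-normalized local and global descriptors."""
    local_vec = local_vec / (np.linalg.norm(local_vec) + 1e-12)
    global_vec = global_vec / (np.linalg.norm(global_vec) + 1e-12)
    return np.concatenate([weight * local_vec, (1.0 - weight) * global_vec])

local_hist = np.random.rand(200)    # e.g. a 200-word BOW histogram (assumed size)
gist_vector = np.random.rand(512)   # e.g. a 512-D global descriptor (assumed size)
fused = fuse_features(local_hist, gist_vector)
print(fused.shape)                  # (712,): the vector fed to the SVM classifier
```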

On-Road Car Detection System Using VD-GMM 2.0 (차량검출 GMM 2.0을 적용한 도로 위의 차량 검출 시스템 구축)

  • Lee, Okmin;Won, Insu;Lee, Sangmin;Kwon, Jangwoo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.11
    • /
    • pp.2291-2297
    • /
    • 2015
  • This paper presents a vehicle detection system that takes video of moving vehicles as its input. The input video is constrained: it must be captured from a fixed camera looking obliquely down on the road. Road detection is required so that only the road area of the input image is used. We first present experimental results and the limitations of the motion history image extraction method, the SIFT (Scale-Invariant Feature Transform) algorithm, and histogram analysis for vehicle detection. To overcome these limitations, we propose an adapted Gaussian Mixture Model (GMM), the Vehicle Detection GMM (VDGMM). In addition, we optimize VDGMM to detect more vehicles and name the result VDGMM 2.0. In the experiments, precision, recall, and F1 score are 9%, 53%, and 15% for GMM without road detection, and 85%, 77%, and 80% for VDGMM 2.0 with road detection.
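
For illustration only, the sketch below restricts a GMM background subtractor to a road region of interest; OpenCV's MOG2 stands in for the paper's VDGMM, and the ROI polygon, area threshold, and video path are assumptions.

```python
# Sketch of GMM-based foreground (vehicle) detection restricted to a road ROI;
# OpenCV's MOG2 stands in for the paper's VDGMM, and the ROI polygon, area
# threshold, and video path are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("road_camera.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

ok, frame = cap.read()
h, w = frame.shape[:2]
road_polygon = np.array([[0, h], [w, h], [int(0.7 * w), int(0.4 * h)],
                         [int(0.3 * w), int(0.4 * h)]], dtype=np.int32)
road_mask = np.zeros((h, w), np.uint8)
cv2.fillPoly(road_mask, [road_polygon], 255)        # keep only the road area

while ok:
    fg = subtractor.apply(frame)                    # GMM foreground mask
    fg = cv2.bitwise_and(fg, road_mask)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vehicles = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
    ok, frame = cap.read()

cap.release()
```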

Multi-view Image Generation from Stereoscopic Image Features and the Occlusion Region Extraction (가려짐 영역 검출 및 스테레오 영상 내의 특징들을 이용한 다시점 영상 생성)

  • Lee, Wang-Ro;Ko, Min-Soo;Um, Gi-Mun;Cheong, Won-Sik;Hur, Nam-Ho;Yoo, Ji-Sang
    • Journal of Broadcast Engineering
    • /
    • v.17 no.5
    • /
    • pp.838-850
    • /
    • 2012
  • In this paper, we propose a novel algorithm that generates multi-view images by using various image features obtained from a given stereoscopic image pair. In the proposed algorithm, we first create an intensity-gradient saliency map from the given stereo images. We then calculate a block-based optical flow that represents the relative movement (disparity) of each fixed-size block between the left and right images, and we also obtain the disparities of feature points extracted by SIFT (scale-invariant feature transform). We then create a disparity saliency map by combining these extracted disparity features. The disparity saliency map is refined through occlusion detection and the removal of false disparities. Thirdly, we extract straight line segments in order to minimize the distortion of straight lines during image warping. Finally, we generate multi-view images with a grid mesh-based image warping algorithm, in which the extracted image features are used as constraints. The experimental results show that the proposed algorithm performs better than the conventional DIBR algorithm in terms of visual quality.
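
Sparse disparities from SIFT matches between a rectified stereo pair, one ingredient of the pipeline above, might be computed as in the sketch below; the file names, ratio threshold, and row tolerance are assumptions.

```python
# Sketch of sparse disparities from SIFT matches on a rectified stereo pair;
# the file names, ratio threshold, and row tolerance are assumptions.
import cv2

left = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_view.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_l, des_r, k=2)
        if m.distance < 0.75 * n.distance]

# For a rectified pair, disparity is the horizontal shift of each match.
disparities = []
for m in good:
    xl, yl = kp_l[m.queryIdx].pt
    xr, yr = kp_r[m.trainIdx].pt
    if abs(yl - yr) < 2.0:                       # rows should roughly agree
        disparities.append((xl, yl, xl - xr))    # (x, y, disparity)

print(len(disparities), "sparse disparity samples")
```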