• Title/Summary/Keyword: Scale-invariant Feature

Design and Implementation of a Stereo Camera-based Twin Camera Module System (스테레오 카메라 기반 트윈 카메라 모듈 시스템 설계 및 구현)

  • Kim, Tae-Yeun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.12 no.6, pp.537-546, 2019
  • This paper implements a twin camera module system that is portable and well suited to producing 3D content. The proposed system converts the image pair captured by a 2D stereo camera and displays it as a 3D image. To evaluate the performance of the twin camera module, the correction of rotation and tilt, which arise from the visual disparity between the stereoscopic images captured by the left and right lenses, was assessed on a test platform. The efficiency of the system was further verified by measuring the depth error of the 3D stereoscopic image with the Scale Invariant Feature Transform (SIFT) algorithm. Because the system converts captured footage into both a 3D stereoscopic image and a preparation image before display, its output can be matched to devices that use different 3D image production methods, and by distributing the two image forms over separate channels, 3D image content can be produced easily and conveniently and applied to a wide range of products.
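
A minimal sketch of the SIFT-based depth-error check described above, assuming OpenCV's SIFT implementation and an already rectified left/right pair; the function name and parameters are illustrative, not from the paper. For a well-corrected module, matched features should share scanlines, so the vertical offsets estimate residual rotation/tilt error and the horizontal offsets give the disparities used for depth.

```python
import cv2
import numpy as np

def sift_disparity_check(left_gray, right_gray, ratio=0.75):
    """Match SIFT features across a rectified stereo pair and return
    per-match horizontal (disparity) and vertical (tilt error) offsets."""
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(left_gray, None)
    kp_r, des_r = sift.detectAndCompute(right_gray, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_l, des_r, k=2)
    # Lowe's ratio test to discard ambiguous matches.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    dx = np.array([kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0] for m in good])
    dy = np.array([kp_l[m.queryIdx].pt[1] - kp_r[m.trainIdx].pt[1] for m in good])
    return dx, dy  # np.abs(dy).mean() approximates the residual tilt error
```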

Analysis of Shadow Effect on High Resolution Satellite Image Matching in Urban Area (도심지역의 고해상도 위성영상 정합에 대한 그림자 영향 분석)

  • Yeom, Jun Ho;Han, You Kyung;Kim, Yong Il
    • Journal of Korean Society for Geospatial Information Science, v.21 no.2, pp.93-98, 2013
  • Multi-temporal high resolution satellite images are essential data for efficient city analysis and monitoring. Yet even when acquired over the same location, whether by identical or different sensors, these multi-temporal images show geometric inconsistencies, so matching points must be extracted to register the images to one another. In urban areas, however, it is difficult to extract matching points accurately because buildings, trees, bridges, and other artificial objects cast shadows over wide areas, and these shadows differ in intensity and direction across the multi-temporal images. In this study, we analyze the shadow effect on the matching of high resolution satellite images of urban areas using the Scale-Invariant Feature Transform (SIFT), the representative matching point extraction method, together with an automatic shadow extraction method. Shadow segments are extracted using spatial and spectral attributes derived from image segmentation, and shadow adjacency information is incorporated through a building edge buffer. SIFT matching points that fall within shadow segments are eliminated from the matching point pairs before image matching is performed. Finally, we evaluate the quality of the matching points and the image matching results, both visually and quantitatively, to analyze the shadow effect on high resolution satellite image matching.
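
As a rough illustration of the elimination step, the sketch below (Python with OpenCV-style keypoints and matches; names are hypothetical) drops SIFT match pairs whose keypoints fall inside a binary shadow mask in either image before registration proceeds:

```python
def filter_shadow_matches(kp_ref, kp_tgt, matches, shadow_ref, shadow_tgt):
    """Keep only matches whose keypoints lie outside the shadow masks
    (nonzero mask pixels mark shadow segments) in both images."""
    kept = []
    for m in matches:
        x1, y1 = (int(v) for v in kp_ref[m.queryIdx].pt)
        x2, y2 = (int(v) for v in kp_tgt[m.trainIdx].pt)
        if shadow_ref[y1, x1] == 0 and shadow_tgt[y2, x2] == 0:
            kept.append(m)
    return kept
```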

A Study on the Improvement of Geometric Quality of KOMPSAT-3/3A Imagery Using Planetscope Imagery (Planetscope 영상을 이용한 KOMPSAT-3/3A 영상의 기하품질 향상 방안 연구)

  • Jung, Minyoung;Kang, Wonbin;Song, Ahram;Kim, Yongil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.38 no.4, pp.327-343, 2020
  • This study proposes a method to improve the geometric quality of KOMPSAT (Korea Multi-Purpose Satellite)-3/3A Level 1R imagery, particularly for efficient disaster damage analysis. The proposed method applies a novel grid-based SIFT (Scale Invariant Feature Transform) method to Planetscope ortho-imagery, which overcomes the inherent difficulty of acquiring suitable optical satellite imagery over disaster areas, and to the KOMPSAT-3/3A imagery, in order to extract the GCPs (Ground Control Points) required for RPC (Rational Polynomial Coefficient) bias compensation. To validate its effectiveness, the method was applied to a KOMPSAT-3 multispectral image of Gangneung covering the April 2019 wildfire, and to a KOMPSAT-3A image of Daejeon, which was additionally selected for its diverse land cover types. The proposed method improved the geometric quality of the KOMPSAT-3/3A images by reducing the positioning errors (RMSE: Root Mean Square Error) from 6.62 pixels to 1.25 pixels for KOMPSAT-3 and from 7.03 pixels to 1.66 pixels for KOMPSAT-3A. A visual comparison of the post-disaster KOMPSAT-3 ortho-image of Gangneung with the pre-disaster Planetscope ortho-image showed geometric quality adequate for wildfire damage analysis. This paper demonstrates that Planetscope ortho-images can serve as an alternative source of GCPs for geometric calibration, and the proposed method can be applied to various KOMPSAT-3/3A studies for which Planetscope ortho-images are available.
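
The grid-based SIFT variant is not specified here beyond its name; one plausible reading, sketched below with OpenCV (the grid size and per-cell count are assumed parameters), keeps only the strongest keypoints in each grid cell so candidate GCPs are spread evenly across the scene rather than clustered on a few high-contrast areas:

```python
import cv2

def grid_sift_keypoints(gray, rows=8, cols=8, per_cell=5):
    """Detect SIFT keypoints, then keep only the strongest responses in
    each grid cell so candidate GCPs cover the scene evenly."""
    sift = cv2.SIFT_create()
    keypoints = sift.detect(gray, None)
    h, w = gray.shape[:2]
    cells = {}
    for kp in keypoints:
        r = min(int(kp.pt[1] * rows / h), rows - 1)
        c = min(int(kp.pt[0] * cols / w), cols - 1)
        cells.setdefault((r, c), []).append(kp)
    selected = []
    for cell in cells.values():
        cell.sort(key=lambda k: k.response, reverse=True)
        selected.extend(cell[:per_cell])
    return selected
```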

Automatic Co-registration of Cloud-covered High-resolution Multi-temporal Imagery (구름이 포함된 고해상도 다시기 위성영상의 자동 상호등록)

  • Han, You Kyung;Kim, Yong Il;Lee, Won Hee
    • Journal of Korean Society for Geospatial Information Science, v.21 no.4, pp.101-107, 2013
  • Commercial high-resolution images generally carry their own coordinates, but the locations differ locally according to the pose of the sensor at acquisition time and the relief displacement of the terrain. A process of image co-registration therefore has to be applied before multi-temporal images can be used together. Co-registration is hampered, however, when images include cloud-covered regions, because matching points are difficult to extract and many false matches occur. This paper proposes an automatic co-registration method for cloud-covered high-resolution images. The scale-invariant feature transform (SIFT), one of the representative feature-based matching methods, is used, and only those features of the target (cloud-covered) image that lie within a circular buffer around each feature of the reference image are considered as candidates in the matching process. Study sites composed of multi-temporal KOMPSAT-2 images including cloud-covered regions were used to test the proposed algorithm. The results showed that the proposed method achieved a higher correct-match rate than the original SIFT method and acceptable registration accuracy at all sites.
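
A sketch of the circular-buffer constraint, assuming features have already been reduced to NumPy arrays of positions (in a shared approximate coordinate frame) and descriptors; the names and radius value are illustrative, not from the paper:

```python
import numpy as np

def buffered_match(ref_pts, ref_des, tgt_pts, tgt_des, radius=50.0):
    """For each reference feature, consider only target features inside a
    circular buffer of `radius` pixels, then take the nearest descriptor."""
    matches = []
    for i in range(len(ref_pts)):
        d2 = np.sum((tgt_pts - ref_pts[i]) ** 2, axis=1)
        cand = np.where(d2 <= radius ** 2)[0]
        if cand.size == 0:
            continue  # no target feature inside this feature's buffer
        dist = np.linalg.norm(tgt_des[cand] - ref_des[i], axis=1)
        matches.append((i, int(cand[np.argmin(dist)])))
    return matches
```

Restricting candidates this way both speeds up matching and suppresses the false matches that clouds would otherwise attract.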

Traffic Object Tracking Based on an Adaptive Fusion Framework for Discriminative Attributes (차별적인 영상특징들에 적응 가능한 융합구조에 의한 도로상의 물체추적)

  • Kim, Sam-Yong;Oh, Se-Young
    • Journal of the Institute of Electronics Engineers of Korea SC, v.43 no.5 s.311, pp.1-9, 2006
  • Because most vision-based object tracking approaches operate satisfactorily only in heavily constrained environments with simplifying assumptions or specific visual attributes, they cannot track targets in highly variable, unstructured, and dynamic environments such as a traffic scene. An adaptive fusion framework that exploits the richness of visual information such as color, appearance, and shape is essential, especially in cluttered and dynamically changing scenes with partial occlusion [1]. This paper develops a particle filter based adaptive fusion framework and improves its robustness and adaptability by adding a new distinctive visual attribute, an image feature descriptor based on SIFT (Scale Invariant Feature Transform) [2], together with an automatic learning scheme that updates the SIFT feature library as viewpoint, illumination, and background change. The proposed algorithm is applied to tracking various traffic objects such as vehicles, pedestrians, and bikes in a driver assistance system, an important component of the Intelligent Transportation System.
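
The fusion rule itself is not given in the abstract; the sketch below uses a democratic-integration-style reliability-weighted sum as a stand-in, in which each cue's weight adapts toward its recent tracking score so that currently discriminative attributes (such as the SIFT descriptor) gain influence:

```python
import numpy as np

def fuse_particle_weights(cue_likelihoods, reliabilities):
    """Reliability-weighted sum of per-cue particle likelihoods.
    cue_likelihoods: (n_cues, n_particles); reliabilities: (n_cues,)."""
    a = np.asarray(reliabilities, dtype=float)
    a = a / a.sum()
    w = a @ np.asarray(cue_likelihoods, dtype=float)
    return w / w.sum()

def adapt_reliabilities(reliabilities, cue_scores, rate=0.1):
    """Move each cue's reliability toward its normalized recent score,
    so cues that currently discriminate the target gain weight."""
    a = np.asarray(reliabilities, dtype=float)
    s = np.asarray(cue_scores, dtype=float)
    a = (1.0 - rate) * a + rate * s / (s.sum() + 1e-12)
    return a / a.sum()
```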

A Hybrid Proposed Framework for Object Detection and Classification

  • Aamir, Muhammad;Pu, Yi-Fei;Rahman, Ziaur;Abro, Waheed Ahmed;Naeem, Hamad;Ullah, Farhan;Badr, Aymen Mudheher
    • Journal of Information Processing Systems, v.14 no.5, pp.1176-1194, 2018
  • Object classification from image content is a major challenge in computer vision. Superpixel information can be used to detect and classify objects in an image based on their locations. In this paper, we propose a methodology for detecting and classifying object locations using an enhanced bag of words (BOW). It computes initial positions for each segment of an image using superpixels and ranks them by region score. This information is then used to extract local and global features through a hybrid approach combining the Scale Invariant Feature Transform (SIFT) and GIST, respectively. To improve classification accuracy, a feature fusion technique combines the local and global feature vectors through a weight parameter. A support vector machine, a supervised classifier, is used for classification to evaluate the proposed methodology. Experiments on the Pascal Visual Object Classes Challenge 2007 (VOC2007) dataset show that the proposed approach yields high-quality object locations, with a mean average best overlap (MABO) of 0.833 at 1,500 locations, resulting in a better detection rate. Compared with previous approaches, it gives better classification results for the non-rigid classes.
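
A minimal sketch of the weighted local/global fusion step, assuming the SIFT side has already been quantized into a bag-of-words histogram; `alpha` stands in for the weight parameter mentioned above, and the fused vector would then be fed to an SVM (e.g. sklearn.svm.SVC):

```python
import numpy as np

def fuse_features(bow_hist, gist_vec, alpha=0.5):
    """Weighted concatenation of an L2-normalized SIFT bag-of-words
    histogram (local) and a GIST descriptor (global)."""
    b = bow_hist / (np.linalg.norm(bow_hist) + 1e-12)
    g = gist_vec / (np.linalg.norm(gist_vec) + 1e-12)
    return np.concatenate([alpha * b, (1.0 - alpha) * g])
```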

3D Object Recognition Using Appearance Model Space of Feature Point (특징점 Appearance Model Space를 이용한 3차원 물체 인식)

  • Joo, Seong Moon;Lee, Chil Woo
    • KIPS Transactions on Software and Data Engineering, v.3 no.2, pp.93-100, 2014
  • Recognizing 3D objects from 2D images alone is difficult because each image differs according to the viewing direction of the camera. Because the SIFT algorithm defines local features of the projected images, recognition is particularly limited for input images with strong perspective transformation. In this paper, we propose an object recognition method that improves on the SIFT algorithm by using several sequential images captured while rotating the 3D object around a rotation axis. We use the geometric relationship between adjacent images and merge several images into a generated feature space during recognition. To isolate the effect of the proposed algorithm, the camera position and illumination conditions were kept constant. This method can recognize appearances of 3D objects that the standard SIFT algorithm alone cannot.
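
One way to picture the merged feature space, as a hedged sketch only: SIFT descriptors from each sequential view are stored together with the view angle they came from, so matches can later be interpreted geometrically across adjacent views (the angle step is an assumed capture parameter, not from the paper):

```python
import cv2

def build_view_library(view_images, angle_step_deg=10.0):
    """Collect SIFT descriptors from sequential views of a rotating object,
    tagging each descriptor set with the view angle it was captured at."""
    sift = cv2.SIFT_create()
    library = []
    for i, img in enumerate(view_images):
        keypoints, descriptors = sift.detectAndCompute(img, None)
        if descriptors is not None:
            library.append((i * angle_step_deg, keypoints, descriptors))
    return library
```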

Face Recognition Robust to Brightness, Contrast, Scale, Rotation and Translation (밝기, 명암도, 크기, 회전, 위치 변화에 강인한 얼굴 인식)

  • Lee, Hyung-Ji;Chung, Jae-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP, v.40 no.6, pp.149-156, 2003
  • This paper proposes a face recognition method based on modified Otsu binarization, Hu moments, and linear discriminant analysis (LDA). The proposed method is robust to changes in brightness, contrast, scale, rotation, and translation. The modified Otsu binarization produces binary images that are invariant to brightness and contrast changes. From the edge and multi-level binary images obtained by this thresholding, we compute a 17-dimensional Hu moment vector and then extract a feature vector with the LDA algorithm. In particular, the face recognition system is robust to scale, rotation, and translation changes because it uses Hu moments. Experiments on the Olivetti Research Laboratory (ORL) and AR databases showed that our method outperformed the well-known principal component analysis (PCA) and the combined PCA-LDA method under brightness, contrast, scale, rotation, and translation changes.
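
The paper's 17-dimensional descriptor comes from edge and multi-level binary images; the sketch below shows only the standard pipeline it builds on, plain Otsu binarization followed by the seven Hu moments, using OpenCV:

```python
import cv2
import numpy as np

def hu_features(gray):
    """Otsu binarization (stable under brightness/contrast shifts) followed
    by the seven Hu moments, which are scale-, rotation-, and
    translation-invariant; log-scaling tames their dynamic range."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```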

Feature-based Non-rigid Registration between Pre- and Post-Contrast Lung CT Images (조영 전후의 폐 CT 영상 정합을 위한 특징 기반의 비강체 정합 기법)

  • Lee, Hyun-Joon;Hong, Young-Taek;Shim, Hack-Joon;Kwon, Dong-Jin;Yun, Il-Dong;Lee, Sang-Uk;Kim, Nam-Kug;Seo, Joon-Beom
    • Journal of Biomedical Engineering Research, v.32 no.3, pp.237-244, 2011
  • In this paper, a feature-based registration technique is proposed for pre-contrast and post-contrast lung CT images. It utilizes three-dimensional (3-D) features with their descriptors and estimates feature correspondences by nearest-neighbor matching in the feature space. We design a transformation model between the input image pairs using a free-form deformation (FFD) based on B-splines. Registration is achieved by minimizing an energy function that incorporates the smoothness of the FFD and the correspondence information, using a non-linear conjugate gradient method. To deal with outliers in feature matching, the energy model integrates a robust estimator that discards outliers effectively by iteratively reducing a radius of confidence during minimization. Performance was evaluated in terms of accuracy and efficiency on seven pairs of lung CT images from clinical practice. For quantitative assessment, a radiologist specializing in the thorax manually placed landmarks on each CT image pair. In comparison with a conventional feature-based registration method, our algorithm showed improved accuracy and efficiency.
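
The shrinking confidence radius can be pictured with the toy sketch below, which alternates re-fitting and rejection; a pure translation stands in for the paper's FFD optimization, so this only illustrates the outlier-rejection loop, not the energy model:

```python
import numpy as np

def shrinking_radius_inliers(src_pts, dst_pts, start_radius=30.0,
                             shrink=0.8, min_radius=2.0):
    """Iteratively re-fit a simple transform and discard correspondences
    whose residual exceeds a confidence radius that shrinks each round."""
    keep = np.ones(len(src_pts), dtype=bool)
    radius = start_radius
    while radius >= min_radius and keep.sum() >= 3:
        t = (dst_pts[keep] - src_pts[keep]).mean(axis=0)  # re-fit on inliers
        residuals = np.linalg.norm(src_pts + t - dst_pts, axis=1)
        keep = residuals <= radius
        radius *= shrink
    return keep
```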

Invariant Image Matching using Linear Features (선형특징을 사용한 불변 영상정합 기법)

  • Park, Se-Je;Park, Young-Tae
    • Journal of the Korean Institute of Telematics and Electronics S, v.35S no.12, pp.55-62, 1998
  • Matching two images is an essential step in many computer vision applications. A new approach to scale- and rotation-invariant scene matching using linear features is presented. Scene or model images are described by a set of linear features approximating edge information, obtained by conventional edge detection, thinning, and piecewise linear approximation. Candidate parameters are hypothesized by mapping the angular difference and a new distance measure into the Hough space and detecting maximally consistent points. These hypotheses are verified by a fast linear feature matching algorithm composed of a single-step relaxation and a Hough technique. The proposed method is shown to be much faster than the conventional approach, in which the relaxation process is repeated until convergence, while providing matching performance that is robust to random alteration of the linear features and requires no a priori information on the geometric transformation parameters.
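
A rough sketch of the hypothesis step under stated assumptions: every model/scene segment pair votes for a (rotation, scale) cell in a Hough accumulator, with rotation taken from angular differences and scale from length ratios; the bin counts and ranges are illustrative, and the paper's actual distance measure is not reproduced here:

```python
import numpy as np

def hough_rotation_scale(model_segs, scene_segs, ang_bins=36,
                         scale_bins=20, scale_range=(0.5, 2.0)):
    """Vote (rotation, scale) hypotheses from all model/scene segment pairs.
    Segments are (direction_angle_rad, length) tuples; returns the peak cell."""
    lo, hi = scale_range
    acc = np.zeros((ang_bins, scale_bins))
    for m_ang, m_len in model_segs:
        for s_ang, s_len in scene_segs:
            d_ang = (s_ang - m_ang) % np.pi  # line direction is ambiguous by pi
            scale = s_len / m_len
            if not (lo <= scale < hi):
                continue
            ai = int(d_ang / np.pi * ang_bins)
            si = int((scale - lo) / (hi - lo) * scale_bins)
            acc[ai, si] += 1
    ai, si = np.unravel_index(np.argmax(acc), acc.shape)
    rotation = (ai + 0.5) * np.pi / ang_bins
    scale = lo + (si + 0.5) * (hi - lo) / scale_bins
    return rotation, scale
```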
