• Title/Summary/Keyword: invariant pixels

42 search results

Rotation Invariant Histogram of Oriented Gradients

  • Cheon, Min-Kyu;Lee, Won-Ju;Hyun, Chang-Ho;Park, Mignon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.11 no.4 / pp.293-298 / 2011
  • In this paper, we propose a new image descriptor, the rotation-invariant histogram of oriented gradients (RIHOG). RIHOG overcomes a disadvantage of the histogram of oriented gradients (HOG), which is very sensitive to image rotation. HOG uses only the magnitude value of a pixel without considering its neighboring pixels. RIHOG instead accumulates relative magnitude values over the relative orientations computed with neighboring pixels, which reduces the sensitivity to image rotation. The performance of RIHOG is verified via a classification index and the classification of the Brodatz texture data.
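As a rough illustration of the relative-orientation idea in this abstract, the sketch below bins gradient magnitudes by the orientation of each pixel relative to its eight neighbours, so a global rotation of the image cancels in the difference. The 3x3 neighbourhood, the wrap-around shifts and the accumulation rule are assumptions for illustration, not the authors' exact RIHOG formulation.

```python
import numpy as np

def relative_orientation_histogram(image, n_bins=8):
    """Histogram of gradient magnitude over *relative* orientation bins."""
    img = np.asarray(image, dtype=np.float64)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)                      # absolute orientation per pixel

    hist = np.zeros(n_bins)
    # Compare each pixel with its 8 neighbours via (wrap-around) array shifts.
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        nb_ori = np.roll(np.roll(ori, dy, axis=0), dx, axis=1)
        rel = np.mod(ori - nb_ori, 2 * np.pi)     # a global rotation cancels here
        bins = (rel / (2 * np.pi) * n_bins).astype(int) % n_bins
        np.add.at(hist, bins.ravel(), mag.ravel())  # accumulate gradient magnitude
    return hist / (hist.sum() + 1e-12)

# Usage: hist = relative_orientation_histogram(gray_image_2d)
```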

Estimation of Real Boundary with Subpixel Accuracy in Digital Imagery (디지털 영상에서 부화소 정밀도의 실제 경계 추정)

  • Kim, Tae-Hyeon;Moon, Young-Shik;Han, Chang-Soo
    • Journal of the Korean Society for Precision Engineering / v.16 no.8 / pp.16-22 / 1999
  • In this paper, an efficient algorithm for estimating real edge locations to subpixel values is described. Digital images are acquired by projection onto the image plane followed by sampling. However, most real edge locations are lost in this process, which lowers measurement accuracy. For accurate measurement, we propose an algorithm which estimates the real boundary between two adjacent pixels in digital imagery with subpixel accuracy. We first define a 1-D edge operator based on the moment invariant. To extend it to 2-D data, the edge orientation of each pixel is estimated by LSE (least squares error) line/circle fitting of a set of pixels around the edge boundary. Then, using the pixels along the line perpendicular to the estimated edge orientation, the real boundary is calculated with subpixel accuracy. Experimental results on real images show that the proposed method is robust to local noise while maintaining low measurement error.
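For readers unfamiliar with moment-based subpixel edge location, the snippet below shows a classical moment-preserving 1-D edge locator applied to a gray-value profile sampled across an edge. It is a generic sketch of that family of operators and may differ from the authors' exact moment-invariant operator.

```python
import numpy as np

def subpixel_edge_1d(profile):
    """Locate a step edge in a 1-D gray-value profile with subpixel accuracy."""
    x = np.asarray(profile, dtype=np.float64)
    n = x.size
    m1, m2, m3 = x.mean(), (x ** 2).mean(), (x ** 3).mean()
    sigma = np.sqrt(max(m2 - m1 ** 2, 1e-12))
    skew = (m3 - 3.0 * m1 * m2 + 2.0 * m1 ** 3) / sigma ** 3
    p1 = 0.5 * (1.0 + skew * np.sqrt(1.0 / (4.0 + skew ** 2)))  # fraction on the low side
    return p1 * n - 0.5        # edge position in pixel-centre coordinates (0..n-1)

# Example: subpixel_edge_1d([10, 10, 10, 110, 110, 110]) -> 2.5,
# i.e. the step lies midway between samples 2 and 3.
```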

MEGH: A New Affine Invariant Descriptor

  • Dong, Xiaojie;Liu, Erqi;Yang, Jie;Wu, Qiang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.7 / pp.1690-1704 / 2013
  • An affine invariant descriptor is proposed, which is able to represent affine covariant regions well. Estimating the main orientation is still problematic in many existing methods, such as SIFT (scale invariant feature transform) and SURF (speeded up robust features). Instead of aligning an estimated main orientation, in this paper the ellipse orientation is used directly. According to the ellipse orientation, affine covariant regions are first divided into four sub-regions with equal angles. Since the affine covariant regions are divided according to the ellipse orientation, the divided sub-regions are rotation invariant regardless of any rotation of the ellipse. Meanwhile, the affine covariant regions are normalized into a circular region. In the end, the gradients of pixels in the circular region are calculated and the partition-based descriptor is created from the gradients. Compared with existing descriptors including MROGH, SIFT, GLOH, PCA-SIFT and spin images, the proposed descriptor demonstrates superior performance in extensive experiments.
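The sketch below illustrates only the partition step described in this abstract: a patch already normalized into a circular region is split into four angular sectors measured from the ellipse orientation, and a gradient histogram (also taken relative to that orientation) is accumulated per sector. Affine normalization and ellipse fitting are omitted, and the binning details are assumptions rather than the published MEGH definition.

```python
import numpy as np

def partition_descriptor(patch, ellipse_angle, n_bins=8):
    """patch: square 2-D array covering the normalized circular region."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    gy, gx = np.gradient(patch.astype(np.float64))
    mag, ori = np.hypot(gx, gy), np.arctan2(gy, gx)

    yy, xx = np.mgrid[0:h, 0:w]
    inside = np.hypot(yy - cy, xx - cx) <= min(cy, cx)   # keep the circular region
    # Sector index (0..3) of each pixel, measured from the ellipse orientation.
    sector = (((np.arctan2(yy - cy, xx - cx) - ellipse_angle) % (2 * np.pi))
              // (np.pi / 2)).astype(int)
    # Gradient orientation bins, also taken relative to the ellipse orientation.
    bins = (((ori - ellipse_angle) % (2 * np.pi)) / (2 * np.pi) * n_bins).astype(int) % n_bins

    desc = np.zeros((4, n_bins))
    for s in range(4):
        sel = inside & (sector == s)
        np.add.at(desc[s], bins[sel], mag[sel])
    return (desc / (np.linalg.norm(desc) + 1e-12)).ravel()
```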

Iris recognition robust to noises

  • Kim, Jaemin;Won, Jungwoo;Cho, Seongwon
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.42-45 / 2003
  • This paper describes a new iris recognition method using shift-invariant subbands. First, an iris image is preprocessed to compensate for variations in the iris image. Then, the preprocessed iris image is decomposed into multiple subbands using a shift-invariant wavelet transform. The best subband among them, which has rich information about various iris patterns and is robust to noise, is selected for iris recognition. The quantized pixels of the best subband yield the feature representation. Experimentally, we show that the proposed method produces superb performance in iris recognition.
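A minimal sketch of the subband-selection step, using the stationary (shift-invariant) wavelet transform from PyWavelets: the preprocessed iris image is decomposed, one detail subband is chosen by a simple variance criterion, and its pixels are quantized to one bit. The variance criterion and the median-threshold quantization are stand-ins for the paper's unspecified selection rule, not the authors' method.

```python
import numpy as np
import pywt  # PyWavelets

def iris_feature_code(iris_img, wavelet="db2", level=3):
    img = np.asarray(iris_img, dtype=np.float64)
    # swt2 needs both dimensions divisible by 2**level, so crop the remainder.
    h = img.shape[0] - img.shape[0] % (2 ** level)
    w = img.shape[1] - img.shape[1] % (2 ** level)
    coeffs = pywt.swt2(img[:h, :w], wavelet, level=level)   # shift-invariant transform

    # Collect the detail subbands and pick the one with the largest variance
    # (a proxy for "rich information and robust to noise").
    subbands = [band for _, details in coeffs for band in details]
    best = max(subbands, key=lambda b: b.var())

    # 1-bit quantization of the selected subband gives the feature code.
    return (best > np.median(best)).astype(np.uint8)
```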

A PSRI Feature Extraction and Automatic Target Recognition Using a Cooperative Network and an MLP. (Cooperative network와 MLP를 이용한 PSRI 특징추출 및 자동표적인식)

  • 전준형;김진호;최흥문
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.6 / pp.198-207 / 1996
  • A PSRI (position, scale, and rotation invariant) feature extraction and automatic target recognition system using a cooperative network and an MLP is proposed. We can extract position-invariant features by obtaining the target center using the projection and the moments in the preprocessing stage. The scale- and rotation-invariant features are extracted from the contour projection of the number of edge pixels on each of the concentric circles, which is input to the cooperative network. By extracting the representative PSRI features from these features and their derivatives using a max-net and a min-net, we can reduce the number of input neurons of the MLP and make the resulting automatic target recognition system less sensitive to input variations. Experiments are conducted on various complex images which are shifted, rotated, or scaled, and the results show that the proposed system is very efficient for PSRI feature extraction and automatic target recognition.
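The sketch below shows just the feature-extraction front end described in this abstract: the target centre is taken from image moments of the edge map (position invariance) and the number of edge pixels on each concentric ring around that centre is counted, with radii normalized by the object extent (rotation and scale invariance). The cooperative max-net/min-net and the MLP classifier are not shown, and the Canny thresholds and ring count are assumed values.

```python
import cv2
import numpy as np

def concentric_ring_features(gray, n_rings=16):
    """gray: 8-bit grayscale image containing the target."""
    edges = cv2.Canny(gray, 50, 150)
    m = cv2.moments(edges, binaryImage=True)
    cy, cx = m["m01"] / m["m00"], m["m10"] / m["m00"]     # centroid of edge pixels

    ys, xs = np.nonzero(edges)
    r = np.hypot(ys - cy, xs - cx)
    ring = np.minimum((r / (r.max() + 1e-12) * n_rings).astype(int), n_rings - 1)
    counts = np.bincount(ring, minlength=n_rings).astype(np.float64)
    return counts / counts.sum()     # edge-pixel count per ring, scale-normalized
```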

Robust 2-D Object Recognition Using Bispectrum and LVQ Neural Classifier

  • Han, Soowhan;Woon, Woo-Young
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.10a / pp.255-262 / 1998
  • This paper presents a translation-, rotation- and scale-invariant methodology for the recognition of closed planar shape images using the bispectrum of a contour sequence and a learning vector quantization (LVQ) neural classifier. The contour sequences obtained from the closed planar images represent the Euclidean distance between the centroid and all boundary pixels of the shape, and are related to the overall shape of the images. Higher-order spectra based on third-order cumulants are applied to this contour sequence to extract fifteen bispectral feature vectors for each planar image. These feature vectors, which are invariant to shape translation, rotation and scale transformation, can be used to represent two-dimensional planar images and are fed into a neural network classifier. The LVQ architecture is chosen as the neural classifier because the network is easy and fast to train and its structure is relatively simple. Experimental recognition results with eight different shapes of aircraft images are presented to illustrate the high performance of the proposed method even when the target images are significantly corrupted by noise.
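As a concrete starting point for this pipeline, the sketch below extracts the contour sequence the abstract refers to: the Euclidean distance from the shape centroid to each boundary pixel, resampled to a fixed length and normalized for scale. Computing the fifteen bispectral (third-order cumulant) features and the LVQ classifier are beyond this sketch, and the resample length is an assumed parameter.

```python
import cv2
import numpy as np

def centroid_distance_sequence(binary_shape, n_samples=128):
    """binary_shape: 2-D array with the closed planar shape as nonzero pixels."""
    contours, _ = cv2.findContours(binary_shape.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea).reshape(-1, 2)   # (x, y) points
    centroid = boundary.mean(axis=0)
    dist = np.hypot(*(boundary - centroid).T)        # centroid-to-boundary distances

    # Resample to a fixed length and divide by the mean distance so that
    # boundary length and scale do not affect the sequence.
    idx = np.linspace(0, len(dist) - 1, n_samples)
    seq = np.interp(idx, np.arange(len(dist)), dist)
    return seq / seq.mean()
```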

Texture Classification Using Local Neighbor Differences (지역 근처 차이를 이용한 텍스쳐 분류에 관한 연구)

  • Saipullah, Khairul Muzzammil;Peng, Shao-Hu;Park, Min-Wook;Kim, Deok-Hwan
    • Proceedings of the Korea Information Processing Society Conference / 2010.04a / pp.377-380 / 2010
  • This paper proposes a texture descriptor for texture classification called Local Neighbor Differences (LND). LND is a highly discriminating texture descriptor and is also robust to illumination changes. The proposed descriptor utilizes the sign of the differences between surrounding pixels in a local neighborhood. The differences of those pixels are thresholded to form an 8-bit binary codeword. The decimal values of these 8-bit codewords are computed and are called LND values. A histogram of the resulting LND values is created and used as the feature describing the texture information of an image. Texture classification experiments were performed using the OUTEX_TC_00001 test suite. The results show that LND outperforms the local binary pattern (LBP) method, with an average classification accuracy of 92.3% versus 90.7% for LBP.
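A small sketch of the codeword construction described in this abstract: the signs of differences between surrounding pixels in a 3x3 neighbourhood are packed into an 8-bit code, and the histogram of codes over the image is the texture feature. Which neighbour pairs are differenced is not stated in the abstract, so differencing consecutive neighbours around the ring is an assumption here.

```python
import numpy as np

def lnd_histogram(gray):
    """Histogram of 8-bit Local-Neighbor-Difference-style codes over an image."""
    img = np.asarray(gray, dtype=np.int32)
    # The 8 neighbours of every pixel, in clockwise order, via wrap-around shifts.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    neigh = [np.roll(np.roll(img, dy, axis=0), dx, axis=1) for dy, dx in offsets]

    code = np.zeros(img.shape, dtype=np.int64)
    for bit in range(8):
        a, b = neigh[bit], neigh[(bit + 1) % 8]             # consecutive neighbours
        code += ((a - b) >= 0).astype(np.int64) << bit      # thresholded sign -> one bit

    hist = np.bincount(code.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()
```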

Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features (스케일불변 특징의 삼차원 재구성을 통한 이동 로봇의 상대위치추정)

  • Kil, Se-Kee;Lee, Jong-Shill;Ryu, Je-Goon;Lee, Eung-Hyuk;Hong, Seung-Hong;Shen, Dong-Fan
    • The Transactions of the Korean Institute of Electrical Engineers D / v.55 no.4 / pp.173-180 / 2006
  • A key component of autonomous navigation for an intelligent home robot is localization and map building with features recognized from the environment. To achieve this, accurate measurement of the relative location between the robot and the features is essential. In this paper, we propose a relative localization algorithm based on 3D reconstruction of scale-invariant features from two images captured by two parallel cameras. We capture two images from parallel cameras attached to the front of the robot and detect scale-invariant features in each image using SIFT (scale invariant feature transform). Then, we match the feature points of the two images and obtain the relative location via 3D reconstruction of the matched points. A stereo camera requires high precision in the two cameras' extrinsic parameters and in pixel matching between the two camera images. Because we use two separate cameras together with scale-invariant feature points, it is easy to set up the extrinsic parameters. Furthermore, the 3D reconstruction does not require any other sensor, and the results can simultaneously be used for obstacle avoidance, map building and localization. We set the distance between the two cameras to 20 cm and captured 3 frames per second. The experimental results show a maximum error of ±6 cm at ranges of less than 2 m and a maximum error of ±15 cm at ranges between 2 m and 4 m.
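A condensed OpenCV sketch of the pipeline described above: SIFT features are detected in the two images from the parallel cameras, matched with a ratio test, and the matched pixels are triangulated into 3-D points. The 3x4 projection matrices P1 and P2 stand for the calibrated parameters of the two cameras (a 20 cm baseline in the paper) and must be supplied by the caller; the ratio threshold is an assumed value.

```python
import cv2
import numpy as np

def relative_points_3d(img_left, img_right, P1, P2, ratio=0.75):
    """img_*: 8-bit grayscale images; P1, P2: 3x4 camera projection matrices."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_left, None)
    k2, d2 = sift.detectAndCompute(img_right, None)

    # SIFT descriptor matching with Lowe's ratio test.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    pts1 = np.float32([k1[m.queryIdx].pt for m in good]).T     # 2 x N pixel coords
    pts2 = np.float32([k2[m.trainIdx].pt for m in good]).T
    X = cv2.triangulatePoints(P1, P2, pts1, pts2)              # 4 x N homogeneous
    return (X[:3] / X[3]).T                                    # N x 3 points
```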

A Study on the Improvement of Geometric Quality of KOMPSAT-3/3A Imagery Using Planetscope Imagery (Planetscope 영상을 이용한 KOMPSAT-3/3A 영상의 기하품질 향상 방안 연구)

  • Jung, Minyoung;Kang, Wonbin;Song, Ahram;Kim, Yongil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.4 / pp.327-343 / 2020
  • This study proposes a method to improve the geometric quality of KOMPSAT (Korea Multi-Purpose Satellite)-3/3A Level 1R imagery, particularly for efficient disaster damage analysis. The proposed method applies a novel grid-based SIFT (Scale Invariant Feature Transform) method to Planetscope ortho-imagery, which overcomes the inherent limitations in acquiring appropriate optical satellite imagery over disaster areas, and to the KOMPSAT-3/3A imagery, in order to extract the GCPs (Ground Control Points) required for RPC (Rational Polynomial Coefficient) bias compensation. To validate its effectiveness, the proposed method was applied to a KOMPSAT-3 multispectral image of Gangneung, which includes the April 2019 wildfire, and to a KOMPSAT-3A image of Daejeon, which was additionally selected in consideration of its diverse land cover types. The proposed method improved the geometric quality of the KOMPSAT-3/3A images by reducing the positioning errors (RMSE: Root Mean Square Error) of the two images from 6.62 pixels to 1.25 pixels for KOMPSAT-3 and from 7.03 pixels to 1.66 pixels for KOMPSAT-3A. A visual comparison of the post-disaster KOMPSAT-3 ortho-image of Gangneung and the pre-disaster Planetscope ortho-image showed geometric quality appropriate for wildfire damage analysis. This paper demonstrates the possibility of using Planetscope ortho-images as an alternative source of GCPs for geometric calibration. Furthermore, the proposed method can be applied to various KOMPSAT-3/3A research studies where Planetscope ortho-images are available.
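A simplified sketch of the grid-based SIFT matching used here for GCP extraction: SIFT matches between the reference Planetscope ortho-image and the KOMPSAT-3/3A image are bucketed into grid cells of the reference image, and only the strongest match per cell is kept so the GCP candidates are spread evenly over the scene. The subsequent RPC bias compensation is not shown, and the grid size and ratio threshold are assumed values rather than the paper's settings.

```python
import cv2
import numpy as np

def grid_based_gcp_matches(ref_img, tgt_img, grid=(8, 8), ratio=0.7):
    """ref_img, tgt_img: 8-bit grayscale images; returns (ref_xy, tgt_xy) pairs."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref_img, None)
    k2, d2 = sift.detectAndCompute(tgt_img, None)
    good = [m for m, n in cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
            if m.distance < ratio * n.distance]

    h, w = ref_img.shape[:2]
    best = {}
    for m in good:
        x, y = k1[m.queryIdx].pt
        cell = (min(int(y / h * grid[0]), grid[0] - 1),
                min(int(x / w * grid[1]), grid[1] - 1))
        if cell not in best or m.distance < best[cell].distance:
            best[cell] = m                       # keep the strongest match per cell

    return [(k1[m.queryIdx].pt, k2[m.trainIdx].pt) for m in best.values()]
```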

A Study on Automatic Coregistration and Band Selection of Hyperion Hyperspectral Images for Change Detection (변화탐지를 위한 Hyperion 초분광 영상의 자동 기하보정과 밴드선택에 관한 연구)

  • Kim, Dae-Sung;Kim, Yong-Il;Eo, Yang-Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.5 / pp.383-392 / 2007
  • This study focuses on co-registration and band selection, which are among the pre-processing steps for applying change detection techniques to hyperspectral images. We carried out automatic co-registration using the SIFT algorithm, whose performance is well established in the computer vision field, and selected the bands for change detection by estimating the image noise through PIFs (pseudo-invariant features) reflecting radiometric consistency. The EM algorithm was also applied to select the bands objectively. Hyperion images were used for the proposed techniques, and the non-calibrated bands and striping noise contained in the Hyperion images were removed. From the results, we developed a reliable co-registration procedure with an accuracy within 0.2 pixels (RMSE) for change detection, and verified that band selection, which previously depended on visual inspection, can be made objective by extracting the PIFs.
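A rough sketch of the band-selection idea: over pseudo-invariant features (PIFs) the two co-registered acquisitions should agree, so the spread of the per-band difference at PIF pixels can serve as a noise estimate and the quietest bands are retained for change detection. The EM-based objective cut-off used by the authors is not reproduced, and the fixed keep fraction below is an assumption.

```python
import numpy as np

def select_low_noise_bands(cube_t1, cube_t2, pif_mask, keep_fraction=0.5):
    """cube_t1, cube_t2: (bands, rows, cols) arrays; pif_mask: boolean (rows, cols)."""
    diff = cube_t2[:, pif_mask] - cube_t1[:, pif_mask]   # per-band differences at PIFs
    noise = diff.std(axis=1)                             # noise estimate per band
    order = np.argsort(noise)                            # quietest bands first
    n_keep = max(1, int(len(order) * keep_fraction))
    return np.sort(order[:n_keep])                       # indices of selected bands
```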