• Title/Summary/Keyword: scale-invariant feature

Search results: 235

An Efficient Monocular Depth Prediction Network Using Coordinate Attention and Feature Fusion

  • Xu, Huihui;Li, Fei
    • Journal of Information Processing Systems
    • /
    • v.18 no.6
    • /
    • pp.794-802
    • /
    • 2022
  • The recovery of reasonable depth information from different scenes is a popular topic in the field of computer vision. For generating depth maps with better details, we present an efficacious monocular depth prediction framework with coordinate attention and feature fusion. Specifically, the proposed framework contains attention, multi-scale and feature fusion modules. The attention module refines features based on coordinate attention to enhance the prediction, whereas the multi-scale module integrates useful low- and high-level contextual features at higher resolution. Moreover, we developed a feature fusion module to combine the heterogeneous features and generate high-quality depth outputs. We also designed a hybrid loss function that measures prediction errors from the perspective of depth and scale-invariant gradients, which contributes to preserving rich details. We conducted experiments on public RGBD datasets, and the evaluation results show that the proposed scheme can considerably enhance the accuracy of depth prediction, achieving 0.051 for log10 and 0.992 for δ < 1.25³ on the NYUv2 dataset.
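
The abstract above mentions a hybrid loss built from depth errors and scale-invariant gradients but does not spell out its form. As a minimal sketch only, here is the commonly used scale-invariant log-depth loss written in PyTorch; the weighting `lam` and the tensor shapes are illustrative assumptions, not the paper's actual formulation.

```python
import torch

def scale_invariant_log_loss(pred, target, lam=0.5, eps=1e-6):
    """Scale-invariant loss on log depths: penalizes per-pixel log errors while
    partially discounting a global scale offset (lam in [0, 1])."""
    d = torch.log(pred.clamp(min=eps)) - torch.log(target.clamp(min=eps))
    return (d ** 2).mean() - lam * d.mean() ** 2

# toy usage with random "depth maps" of shape (batch, 1, H, W)
pred = torch.rand(1, 1, 8, 8) + 0.5
target = torch.rand(1, 1, 8, 8) + 0.5
print(scale_invariant_log_loss(pred, target))
```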

Affine Local Descriptors for Viewpoint Invariant Face Recognition

  • Gao, Yongbin;Lee, Hyo Jong
    • Annual Conference of KIPS
    • /
    • 2014.04a
    • /
    • pp.781-784
    • /
    • 2014
  • Face recognition under controlled settings, such as limited viewpoint and illumination change, can achieve good performance nowadays. However, real-world face recognition is still challenging. In this paper, we use Affine SIFT to detect affine-invariant local descriptors for face recognition under large viewpoint change. Affine SIFT is an extension of the SIFT algorithm. SIFT is scale and rotation invariant, which is sufficient for small viewpoint changes in face recognition, but it fails when large viewpoint changes exist. In our scheme, Affine SIFT is applied to both the gallery face and the probe face, generating a series of different viewpoints using affine transformations. Therefore, Affine SIFT tolerates the viewpoint difference between the gallery face and the probe face. Experimental results show that our framework achieves better recognition accuracy than the SIFT algorithm on the FERET database.
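
Affine SIFT works by simulating a set of camera viewpoints with affine warps and running standard SIFT on each warp. The OpenCV sketch below illustrates that idea only; the tilt and angle sampling is far coarser than in the real ASIFT algorithm, the specific values are assumptions, and back-projection of keypoints into the original image frame is omitted.

```python
import cv2
import numpy as np

def affine_sift_descriptors(gray, tilts=(1.0, 1.4, 2.0), angles=range(0, 180, 30)):
    """Run SIFT on several affine-warped copies of the image to simulate
    viewpoint changes (keypoint coordinates are left in the warped frames)."""
    sift = cv2.SIFT_create()
    all_descs = []
    h, w = gray.shape
    for t in tilts:
        for a in angles:
            # rotate the image, then compress along x to mimic a camera tilt
            rot = cv2.getRotationMatrix2D((w / 2, h / 2), a, 1.0)
            warped = cv2.warpAffine(gray, rot, (w, h))
            warped = cv2.resize(warped, None, fx=1.0 / t, fy=1.0)
            _, descs = sift.detectAndCompute(warped, None)
            if descs is not None:
                all_descs.append(descs)
    return np.vstack(all_descs) if all_descs else None

# toy usage on a synthetic image (replace with real gallery/probe face crops)
gray = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
descs = affine_sift_descriptors(gray)
print(None if descs is None else descs.shape)
```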

Feature Extraction for Endoscopic Image by using the Scale Invariant Feature Transform (SIFT) (SIFT를 이용한 내시경 영상에서의 특징점 추출)

  • Oh, J.S.;Kim, H.C.;Kim, H.R.;Koo, J.M.;Kim, M.G.
    • Proceedings of the KIEE Conference
    • /
    • 2005.10b
    • /
    • pp.6-8
    • /
    • 2005
  • Research that exploits geometric information is active in computer vision, and reliable matching is a prerequisite for such work; good matching in turn requires well-extracted feature points. Many feature extraction methods have been studied, but no single algorithm applies to all kinds of images, and endoscopic images are particularly difficult: it is hard to decide which points should be treated as feature points even by visual inspection, and matching accuracy can only be judged once a sufficient number of feature points are distributed over the whole image. This paper studies how to apply SIFT to endoscopic images. SIFT performs excellently on general images compared with alternatives such as affine-invariant point detectors, but the parameter settings used for general images do not transfer to endoscopic images. The goal of this paper is to extract feature points from endoscopic images by controlling the contrast threshold and the curvature threshold among the SIFT parameters. Experimental results show that, by tuning these parameters, the proposed approach yields feature points that are better distributed, and their number can be controlled more effectively than with the traditional alternatives.

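The contrast and curvature thresholds tuned in the paper correspond to OpenCV's `contrastThreshold` and `edgeThreshold` parameters of SIFT (the latter bounds the ratio of principal curvatures). A minimal sketch with illustrative values follows; the synthetic frame and the threshold settings are assumptions, not the values chosen in the paper.

```python
import cv2
import numpy as np

# Illustrative settings only: a lower contrastThreshold keeps more low-contrast
# keypoints, while a higher edgeThreshold (curvature-ratio bound) is more
# permissive toward edge-like points.
sift = cv2.SIFT_create(contrastThreshold=0.02, edgeThreshold=20)

# synthetic stand-in for an endoscopic frame; replace with a real image
frame = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
keypoints, descriptors = sift.detectAndCompute(frame, None)
print(f"{len(keypoints)} keypoints detected")
```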

Classification of Feature Points Required for Multi-Frame Based Building Recognition (멀티 프레임 기반 건물 인식에 필요한 특징점 분류)

  • Park, Si-young;An, Ha-eun;Lee, Gyu-cheol;Yoo, Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.3
    • /
    • pp.317-327
    • /
    • 2016
  • The extraction of significant feature points from a video is directly tied to the performance of the proposed method. In particular, feature points in occluded regions such as trees or people, or points extracted from background areas such as the sky or mountains rather than from the target object, are insignificant and can degrade matching and recognition performance. This paper classifies the feature points required for building recognition using multiple frames in order to improve recognition performance. First, primary feature points are extracted through SIFT (scale invariant feature transform) and mismatched feature points are removed. To categorize the feature points in occluded regions, RANSAC (random sample consensus) is applied. Since the classified feature points are obtained through matching, a single feature point can have multiple descriptors, so a process that consolidates them is also proposed. Experiments verify that the proposed method performs well.
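
The SIFT-matching-plus-RANSAC filtering step described above can be sketched with OpenCV as follows. The ratio-test value and reprojection threshold are generic defaults rather than the paper's settings, and the multi-frame descriptor consolidation is not included.

```python
import cv2
import numpy as np

def match_and_filter(img1, img2, ratio=0.75, reproj_thresh=5.0):
    """SIFT matching with Lowe's ratio test followed by RANSAC homography
    estimation; only matches consistent with the homography are kept."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return []
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 4:
        return good
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    if H is None:
        return good
    return [m for m, keep in zip(good, mask.ravel()) if keep]
```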

Image Similarity Retrieval using a Scale and Rotation Invariant Region Feature (크기 및 회전 불변 영역 특징을 이용한 이미지 유사성 검색)

  • Yu, Seung-Hoon;Kim, Hyun-Soo;Lee, Seok-Lyong;Lim, Myung-Kwan;Kim, Deok-Hwan
    • Journal of KIISE:Databases
    • /
    • v.36 no.6
    • /
    • pp.446-454
    • /
    • 2009
  • Among the various region detectors and shape feature extraction methods, MSER (Maximally Stable Extremal Region), SIFT, and their variants are popular in computer vision applications. However, since SIFT is sensitive to illumination change and MSER is sensitive to scale change, they are not easy to apply directly to image similarity retrieval. In this paper, we present a Scale and Rotation Invariant Region Feature (SRIRF) descriptor based on a scale pyramid, MSER, and affine normalization. The proposed SRIRF method is robust to scale, rotation, and illumination changes of an image because it uses affine normalization and the scale pyramid. We tested the SRIRF method on various images. Experimental results demonstrate that the retrieval performance of the SRIRF method is about 20%, 38%, 11%, and 24% better than that of traditional SIFT, PCA-SIFT, CE-SIFT, and SURF, respectively.
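
A rough sketch of the multi-scale MSER detection underlying such a descriptor is shown below, using OpenCV's MSER on a Gaussian pyramid. The affine normalization and the descriptor construction that complete SRIRF are omitted, and the number of pyramid levels is an assumption.

```python
import cv2

def multiscale_mser(gray, levels=3):
    """Detect MSER regions on each level of a Gaussian pyramid; returns a list of
    (level, regions) pairs, where regions are arrays of pixel coordinates."""
    mser = cv2.MSER_create()
    results = []
    img = gray.copy()
    for level in range(levels):
        regions, _ = mser.detectRegions(img)
        results.append((level, regions))
        img = cv2.pyrDown(img)  # halve the resolution for the next level
    return results

# usage: regions = multiscale_mser(cv2.imread("query.png", cv2.IMREAD_GRAYSCALE))
```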

Fuzzy Mean Method with Bispectral Features for Robust 2D Shape Classification

  • Woo, Young-Woon;Han, Soo-Whan
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 1999.10a
    • /
    • pp.313-320
    • /
    • 1999
  • In this paper, a translation-, rotation- and scale-invariant system for the classification of closed 2D shapes using the bispectrum of a contour sequence and the weighted fuzzy mean method is derived and compared with a classifier based on a competitive neural algorithm, LVQ (Learning Vector Quantization). The bispectrum based on third-order cumulants is applied to the contour sequences of the images to extract fifteen feature vectors for each planar image. These bispectral feature vectors, which are invariant to shape translation, rotation and scale transformation, can be used to represent two-dimensional planar images and are fed into a classifier using the weighted fuzzy mean method. Experiments with eight different shapes of aircraft images are presented to illustrate the high performance of the proposed classifier.

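As a rough illustration of where the invariances come from, the sketch below builds a centroid-distance contour sequence and a handful of bispectral magnitudes from its FFT. It is a simplified stand-in for the paper's third-order-cumulant formulation; the sequence length and the choice of fifteen diagonal frequencies are assumptions.

```python
import numpy as np

def centroid_distance_sequence(contour, n_samples=128):
    """Contour sequence: distances from the shape centroid to boundary points,
    resampled to a fixed length. `contour` is an (N, 2) array of boundary points."""
    contour = np.asarray(contour, dtype=float)
    dists = np.linalg.norm(contour - contour.mean(axis=0), axis=1)
    idx = np.linspace(0, len(dists) - 1, n_samples).astype(int)
    return dists[idx]

def bispectral_features(seq, n_features=15):
    """Magnitudes |X(f) X(f) X*(2f)| along the bispectrum diagonal of the
    normalized sequence. Mean removal and energy normalization handle scale,
    and the bispectrum is unchanged by circular shifts of the sequence, which is
    where rotation invariance of the shape comes from."""
    x = np.asarray(seq, dtype=float)
    x = x - x.mean()
    x = x / (np.linalg.norm(x) + 1e-12)
    X = np.fft.fft(x)
    n = len(X)
    return np.array([abs(X[f] * X[f] * np.conj(X[(2 * f) % n]))
                     for f in range(1, n_features + 1)])

# toy usage: a wavy circle as the "shape boundary"
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
boundary = np.c_[np.cos(theta), np.sin(theta)] * (10 + 0.5 * np.cos(5 * theta))[:, None]
print(bispectral_features(centroid_distance_sequence(boundary)))
```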

Model-based 3-D object recognition using Hopfield neural network (Hopfield 신경회로망을 이용한 모델 기반형 3차원 물체 인식)

  • 정우상;송호근;김태은;최종수
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.5
    • /
    • pp.60-72
    • /
    • 1996
  • In this paper, a new model-based three-dimensional (3-D) object recognition method using a Hopfield network is proposed. To minimize the deformation of feature values under 3-D rotation, we select 3-D shape features and 3-D relational features that are rotation invariant. These feature values are then normalized so that they are also scale invariant. The input features are matched with model features through the optimization process of a Hopfield network whose neurons are arranged in a two-dimensional array. Experimental results on object classification and object matching with 3-D rotated, scale-changed, and partially occluded objects show the good performance of the proposed method.

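The matching-by-optimization step can be pictured as a two-dimensional array of neurons, one per (scene feature, model feature) pair, relaxed toward a one-to-one assignment. The toy continuous relaxation below is only in the spirit of Hopfield matching; the constants and the update rule are illustrative assumptions, not the paper's energy function.

```python
import numpy as np

def hopfield_match(sim, iters=300, a=1.0, b=2.0, gain=5.0, dt=0.1):
    """sim[i, j]: similarity between scene feature i and model feature j.
    v[i, j] is the activation of the neuron encoding that match hypothesis;
    the update rewards similarity and penalizes rows/columns whose activation
    sums drift away from 1 (a soft one-to-one constraint)."""
    n, m = sim.shape
    v = np.full((n, m), 1.0 / m)
    for _ in range(iters):
        u = (a * sim
             - b * (v.sum(axis=1, keepdims=True) - 1.0)
             - b * (v.sum(axis=0, keepdims=True) - 1.0))
        v += dt * (1.0 / (1.0 + np.exp(-gain * u)) - v)  # damped sigmoid relaxation
    return v.argmax(axis=1)  # model feature assigned to each scene feature

# toy usage: three scene features vs. three model features
sim = np.array([[0.9, 0.1, 0.2],
                [0.2, 0.8, 0.1],
                [0.1, 0.3, 0.7]])
print(hopfield_match(sim))  # the diagonal assignment should dominate here
```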

Image Stabilization Scheme for Arbitrary Disturbance (임의의 외란에 대한 영상 안정화)

  • Kwak, Hwy-Kuen
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.9
    • /
    • pp.5750-5757
    • /
    • 2014
  • This paper proposes an image stabilization method for arbitrary disturbances, such as rotation, translation and zoom movement, using SIFT (Scale Invariant Feature Transform). In addition, image stabilization is carried out using an image division and merge technique when moving objects appear in the scene. Experimental results show that the suggested image stabilization scheme outperforms previous ones.
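
A minimal global-motion version of such a stabilizer can be written with OpenCV: SIFT matches between consecutive frames give a similarity transform (rotation, translation, zoom), and the current frame is warped back onto the previous one. The division-and-merge handling of moving objects described in the abstract is not included, and the matcher settings are assumptions.

```python
import cv2
import numpy as np

def stabilize(prev_gray, curr_gray, curr_frame):
    """Estimate frame-to-frame motion from SIFT matches and warp the current
    frame back onto the previous one (global motion only)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return curr_frame
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    if len(matches) < 3:
        return curr_frame
    src = np.float32([kp2[m.trainIdx].pt for m in matches])  # points in current frame
    dst = np.float32([kp1[m.queryIdx].pt for m in matches])  # points in previous frame
    # similarity transform (rotation + translation + uniform scale) with RANSAC
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return curr_frame
    h, w = curr_frame.shape[:2]
    return cv2.warpAffine(curr_frame, M, (w, h))
```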

A Vehicle Model Recognition using Car's Headlights Features and Homogeneity Information (차량 헤드라이트 특징과 동질성 정보를 이용한 차종 인식)

  • Kim, Mih-Ho;Choi, Doo-Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.10
    • /
    • pp.1243-1251
    • /
    • 2011
  • This paper proposes a new vehicle model recognition method that applies the scale invariant feature transform to images of a car's headlights. The proposed method improves accuracy by using a "homogeneity" measure calculated from the distribution of features. In experiments with 400 test images taken from 54 different vehicles, the proposed method achieves a 90% recognition rate and a homogeneity of 16.45.
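
The abstract does not define its homogeneity measure, so the sketch below only shows SIFT extraction on a headlight crop together with a purely hypothetical spread score standing in for it; both the score and the crop handling are assumptions for illustration.

```python
import cv2
import numpy as np

def headlight_features(headlight_gray):
    """SIFT descriptors from a headlight crop plus a crude spread score
    (std of keypoint coordinates normalized by the crop size). The spread score
    is a hypothetical placeholder, not the paper's homogeneity measure."""
    sift = cv2.SIFT_create()
    kps, descs = sift.detectAndCompute(headlight_gray, None)
    if not kps:
        return descs, 0.0
    pts = np.float32([kp.pt for kp in kps])
    h, w = headlight_gray.shape
    spread = (pts.std(axis=0) / np.float32([w, h])).mean()
    return descs, float(spread)
```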

Robust 2-D Object Recognition Using Bispectrum and LVQ Neural Classifier

  • Han, Soo-Whan;Woo, Young-Woon
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.10a
    • /
    • pp.255-262
    • /
    • 1998
  • This paper presents a translation-, rotation- and scale-invariant methodology for the recognition of closed planar shape images using the bispectrum of a contour sequence and a learning vector quantization (LVQ) neural classifier. The contour sequences obtained from the closed planar images represent the Euclidean distance between the centroid and all boundary pixels of the shape, and are related to the overall shape of the images. The higher-order spectrum based on third-order cumulants is applied to this contour sequence to extract fifteen bispectral feature vectors for each planar image. These feature vectors, which are invariant to shape translation, rotation and scale transformation, can be used to represent two-dimensional planar images and are fed into a neural network classifier. The LVQ architecture is chosen as the neural classifier because the network is easy and fast to train and its structure is relatively simple. Experimental recognition results with eight different shapes of aircraft images are presented to illustrate the high performance of the proposed method, even when the target images are significantly corrupted by noise.

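The LVQ stage can be sketched as a plain LVQ1 learner over fifteen-dimensional bispectral feature vectors. The prototype counts, learning rate, epoch count, and the synthetic data in the usage example are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def train_lvq1(X, y, n_classes, prototypes_per_class=2, lr=0.05, epochs=30, seed=0):
    """Minimal LVQ1: the best-matching prototype moves toward samples of its own
    class and away from samples of other classes."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in range(n_classes):
        idx = rng.choice(np.where(y == c)[0], prototypes_per_class, replace=True)
        protos.append(X[idx].copy())
        labels.extend([c] * prototypes_per_class)
    protos = np.vstack(protos)
    labels = np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            w = d.argmin()                                   # best-matching prototype
            sign = 1.0 if labels[w] == y[i] else -1.0
            protos[w] += sign * lr * (X[i] - protos[w])
    return protos, labels

def predict_lvq(protos, labels, X):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return labels[d.argmin(axis=1)]

# toy usage with random 15-D "bispectral" vectors for 3 well-separated classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(20, 15)) for c in range(3)])
y = np.repeat(np.arange(3), 20)
protos, labels = train_lvq1(X, y, n_classes=3)
print((predict_lvq(protos, labels, X) == y).mean())  # training accuracy
```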