• Title/Summary/Keyword: 영상 디스크립터 (image descriptor)

Real Object Recognition Based Mobile Augmented Reality Game (현실 객체 인식 기반 모바일 증강현실 게임)

  • Lee, Dong-Chun;Lee, Hun-Joo
    • Journal of Korea Game Society / v.17 no.4 / pp.17-24 / 2017
  • This paper describes the general process of building a markerless augmented reality game for real objects. Point cloud data created with SLAM technology is edited with a separate editing tool to optimize performance in a mobile environment. In the game execution stage, a large processing load arises from feature point extraction and descriptor matching; to reduce it, optical flow is used to track the feature points matched in the previous input image.
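As a rough illustration of the tracking step described in this abstract, the sketch below (assuming OpenCV and placeholder frame variables, not the paper's actual code) tracks previously matched keypoints with pyramidal Lucas-Kanade optical flow instead of re-extracting and re-matching descriptors every frame:

```python
# Track previously matched keypoints with optical flow to avoid re-running
# feature extraction and descriptor matching on every frame.
import cv2
import numpy as np

def track_matched_points(prev_gray, curr_gray, prev_pts):
    """prev_pts: Nx1x2 float32 array of keypoints matched in the previous frame."""
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1          # keep only points tracked successfully
    return prev_pts[good], curr_pts[good]

# usage (hypothetical frames):
# prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
# curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
# old_pts, new_pts = track_matched_points(prev_gray, curr_gray, matched_pts)
```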

Image Identifier based on Local Feature's Histogram and Acceleration Technique using GPU (지역 특징 히스토그램 기반 영상식별자와 GPU 가속화)

  • Jeon, Hyeok-June;Seo, Yong-Seok;Hwang, Chi-Jung
    • Journal of KIISE: Computing Practices and Letters / v.16 no.9 / pp.889-897 / 2010
  • Recently, large-scale image database systems have demanded fast search, high accuracy, and efficient storage. An image identifier (descriptor) measures the similarity of two images and plays an important role in such systems. Extraction methods for image identifiers can be roughly classified into local and global methods. In this paper, the proposed image identifier, LFH (Local Feature's Histogram), is obtained as a histogram of robust and distinctive local descriptors (features) accumulated over the sub-divisions of a local region. LFH therefore has the properties of both a local and a global descriptor, and its distance computation is fast and accurate. We also suggest a way to extract LFH on the GPU (OpenGL and GLSL). In our experiments, we compare LFH with SIFT (a local method) and EHD (a global method) in terms of storage capacity, extraction and retrieval time, and accuracy.
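The exact LFH construction is not given in the abstract; the sketch below only illustrates the general "histogram of quantized local descriptors over a spatial sub-division" idea, with an assumed codebook and an assumed 2x2 grid:

```python
# Build per-cell histograms of quantized local descriptors and concatenate
# them into one image-level identifier. Codebook and grid are assumptions.
import cv2
import numpy as np

def local_feature_histogram(gray, codebook, grid=(2, 2)):
    """codebook: KxD float32 array of descriptor codewords (assumed given)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kps, desc = orb.detectAndCompute(gray, None)
    h, w = gray.shape
    K = codebook.shape[0]
    hist = np.zeros((grid[0], grid[1], K), dtype=np.float32)
    if desc is None:
        return hist.ravel()
    for kp, d in zip(kps, desc.astype(np.float32)):
        word = np.argmin(np.linalg.norm(codebook - d, axis=1))   # nearest codeword
        row = min(int(kp.pt[1] * grid[0] / h), grid[0] - 1)
        col = min(int(kp.pt[0] * grid[1] / w), grid[1] - 1)
        hist[row, col, word] += 1
    hist /= max(hist.sum(), 1.0)      # normalize so images of different sizes compare
    return hist.ravel()               # compare identifiers with e.g. L1 distance
```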

Fast Stitching Algorithm by using Feature Tracking (특징점 추적을 통한 다수 영상의 고속 스티칭 기법)

  • Park, Siyoung;Kim, Jongho;Yoo, Jisang
    • Journal of Broadcast Engineering / v.20 no.5 / pp.728-737 / 2015
  • A stitching algorithm obtains descriptors of the feature points extracted from multiple images and creates a single image through matching between those feature points. In this paper, feature extraction and matching techniques for high-speed panorama generation from video input are proposed. Features from Accelerated Segment Test (FAST) is used for fast feature extraction, and a new feature point matching process, different from the conventional method, is proposed: the region containing each feature point is tracked with the mean shift procedure to obtain the vector required for matching, and this vector is used to match the extracted feature points. Outliers are removed with the RANdom SAmple Consensus (RANSAC) method, a homography transformation matrix between the two input images is estimated, and a single panoramic image is generated. Experimental results show that the proposed algorithm generates panoramic images faster than the existing method.
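The sketch below illustrates the overall pipeline (FAST corners, RANSAC outlier removal, homography warping); the paper's mean-shift-based matching is replaced here by ordinary ORB descriptor matching, so this is an approximation rather than the proposed method:

```python
# FAST corners -> descriptor matching -> RANSAC homography -> warp and paste.
import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    gray_l = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)
    fast = cv2.FastFeatureDetector_create(threshold=25)
    orb = cv2.ORB_create()
    kps_l, des_l = orb.compute(gray_l, fast.detect(gray_l, None))   # describe FAST corners
    kps_r, des_r = orb.compute(gray_r, fast.detect(gray_r, None))
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_l, des_r)
    src = np.float32([kps_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kps_l[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)        # outlier removal
    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))          # map right image onto left's plane
    canvas[:, :w] = img_left
    return canvas
```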

Recognition and Pose Estimation of 3-D Objects for Visual Servoing (Visual Servoing을 위한 3차원 물체의 인식 및 자세 추정)

  • Yang, Jae-Ho;Jeong, Moon-Ho;Park, Mig-Non
    • Proceedings of the KIEE Conference / 2006.07d / pp.1931-1932 / 2006
  • When a robot recognizes an object and performs a task on it, several problems must be solved: recognizing the specific object, acquiring 3D information, and estimating the object's pose. In the recognition step, the surrounding background and changes in the object's scale, rotation, and occlusion make recognition difficult. 3D information is generally extracted from 2D images by using two cameras to obtain a stereo image pair, which requires matching between the left and right images. Pose estimation requires the relation between the camera coordinates and the object coordinates. Many factors make visual servoing difficult; this paper presents a method for recognizing a 3D object and estimating its pose with SIFT (Scale Invariant Feature Transform), which uses a descriptor invariant to the object's scale, rotation, and translation. It also presents a method for verifying the matches of 2D keypoints with 3D information for pose estimation. (Points extracted by SIFT are called keypoints.)
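A minimal sketch of the SIFT-based recognition and pose estimation idea, assuming a stored model whose keypoints already have 3D coordinates (e.g., from stereo) and a known camera matrix; the paper's 3D verification step is not reproduced:

```python
# Match scene SIFT keypoints to a stored model with known 3D points, then
# estimate the object's pose relative to the camera with PnP + RANSAC.
import cv2
import numpy as np

def estimate_pose(scene_gray, model_desc, model_pts3d, K_cam):
    """model_desc: NxD float32 SIFT descriptors; model_pts3d: Nx3 object points."""
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(scene_gray, None)
    matches = cv2.BFMatcher().knnMatch(model_desc, desc, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test
    obj = np.float32([model_pts3d[m.queryIdx] for m in good])
    img = np.float32([kps[m.trainIdx].pt for m in good])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K_cam, None)
    return ok, rvec, tvec, inliers    # rotation / translation of the object w.r.t. the camera
```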

Images Grouping Technology based on Camera Sensors for Efficient Stitching of Multiple Images (다수의 영상간 효율적인 스티칭을 위한 카메라 센서 정보 기반 영상 그룹핑 기술)

  • Im, Jiheon;Lee, Euisang;Kim, Hoejung;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.22 no.6 / pp.713-723 / 2017
  • Since a panoramic image overcomes the limited viewing angle of a single camera and provides a wide field of view, it has been studied actively in computer vision and stereo camera research. To generate a panoramic image, stitching images taken by multiple ordinary cameras is widely used instead of a single wide-angle camera, which suffers from distortion. The image stitching technique creates descriptors of the feature points extracted from multiple images, compares their similarity, and links the images into one. Each feature point carries several hundred dimensions of information, so data processing time increases as more images are stitched. In particular, when a panorama of one object is generated from images taken by many unspecified cameras, extracting the overlapping feature points of similar images takes even longer. In this paper, we propose a preprocessing step for efficient stitching of images obtained from many unspecified cameras of one object or environment: images are pre-grouped based on camera sensor information, which reduces the number of images stitched at one time, and the groups are then stitched hierarchically into one large panorama. Experimental results confirm that the proposed grouping preprocessing greatly reduces the stitching time for a large number of images.
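A minimal sketch of the grouping-then-hierarchical-stitching idea, assuming each image comes with a camera heading from sensor metadata; the field names, the bucket size, and the use of OpenCV's high-level Stitcher are illustrative assumptions:

```python
# Pre-group images by camera heading, stitch each small group, then stitch
# the partial panoramas into one large panorama.
import cv2

def group_by_heading(items, bucket_deg=45):
    """items: list of (image, heading_deg) pairs from camera sensor metadata."""
    groups = {}
    for img, heading in items:
        key = int(heading // bucket_deg)           # e.g. 0-44 deg -> bucket 0
        groups.setdefault(key, []).append(img)
    return [groups[k] for k in sorted(groups)]

def hierarchical_stitch(items):
    stitcher = cv2.Stitcher_create()
    partials = []
    for group in group_by_heading(items):
        status, pano = stitcher.stitch(group)      # stitch a small group first
        if status == cv2.Stitcher_OK:
            partials.append(pano)
    status, final = stitcher.stitch(partials)      # then stitch the partial panoramas
    return final if status == cv2.Stitcher_OK else None
```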

3D feature point extraction technique using a mobile device (모바일 디바이스를 이용한 3차원 특징점 추출 기법)

  • Kim, Jin-Kyum;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.256-257 / 2022
  • In this paper, we introduce a method for extracting three-dimensional feature points from the movement of a single mobile device. Using a monocular camera, 2D images are acquired as the camera moves and a baseline is estimated. Stereo matching is then performed on feature points: feature points and descriptors are acquired and matched, the disparity of the matched points is computed, and depth values are generated. The 3D feature points are updated as the camera moves and are reset at scene changes by using scene change detection. Through this process, an average of 73.5% additional storage space can be secured in the keypoint database. Applying the proposed algorithm to the depth ground truth and RGB images of the TUM Dataset, we confirmed an average distance difference of 26.88 mm from the reconstructed 3D feature points.
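A minimal sketch of the depth-from-motion step, assuming the two frames are rectified so that disparity is purely horizontal and that the focal length and estimated baseline are known; ORB is used here in place of whichever detector the paper uses:

```python
# Match feature points between two views taken before and after the device
# moves, then convert disparity to depth with depth = f * baseline / disparity.
import cv2
import numpy as np

def depths_from_two_views(gray1, gray2, focal_px, baseline_m):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    points3d = []
    for m in matches:
        x1, y1 = kp1[m.queryIdx].pt
        x2, _y2 = kp2[m.trainIdx].pt
        disparity = x1 - x2
        if disparity > 1e-3:                        # ignore points with no usable parallax
            z = focal_px * baseline_m / disparity   # depth in meters
            points3d.append((x1, y1, z))
    return np.array(points3d, dtype=np.float32)
```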

Classification of Feature Points Required for Multi-Frame Based Building Recognition (멀티 프레임 기반 건물 인식에 필요한 특징점 분류)

  • Park, Si-young;An, Ha-eun;Lee, Gyu-cheol;Yoo, Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.3 / pp.317-327 / 2016
  • The extraction of significant feature points from a video directly affects the performance of the proposed method. In particular, feature points in occluded regions such as trees or people, or feature points extracted from the background rather than the object, such as the sky or mountains, are insignificant and can degrade matching and recognition performance. This paper classifies the feature points required for building recognition by using multiple frames in order to improve recognition performance. First, primary feature points are extracted with SIFT (scale invariant feature transform) and mismatched feature points are removed. RANSAC (random sample consensus) is then applied to classify the feature points in occluded regions. Since the classified feature points are acquired through matching, one feature point may have multiple descriptors, so a process that consolidates them is also proposed. Experiments verify that the proposed method performs well.
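A minimal sketch of the classification idea: matched points that follow the dominant geometric model across two frames are kept as building points and their descriptors are merged, while occluded or background points fall out as RANSAC outliers. This is an illustration, not the paper's exact procedure:

```python
# Keep SIFT matches consistent with the dominant homography (building surface),
# discard RANSAC outliers, and merge duplicate descriptors by averaging.
import cv2
import numpy as np

def classify_building_points(gray_a, gray_b):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(gray_a, None)
    kp_b, des_b = sift.detectAndCompute(gray_b, None)
    knn = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]      # drop mismatches
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _H, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
    keep, merged = [], []
    for m, inlier in zip(good, mask.ravel()):
        if inlier:                                   # consistent with the dominant plane
            keep.append(kp_a[m.queryIdx])
            merged.append((des_a[m.queryIdx] + des_b[m.trainIdx]) / 2.0)   # merge descriptors
    return keep, np.array(merged, dtype=np.float32)
```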

Study on the Hand Gesture Recognition System and Algorithm based on Millimeter Wave Radar (밀리미터파 레이더 기반 손동작 인식 시스템 및 알고리즘에 관한 연구)

  • Lee, Youngseok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.3 / pp.251-256 / 2019
  • In this paper we propose a system and algorithm to recognize hand gestures based on millimeter wave radar in the 65 GHz band. The proposed system consists of a millimeter wave radar board, an analog-to-digital conversion and data capture board, and a notebook that runs the gesture recognition algorithms. As feature vectors, the proposed algorithm uses global and local Zernike moment descriptors, which are robust to distortion caused by rotation or scaling of 2D data. In experiments, the performance of the proposed algorithm is evaluated and compared with algorithms that use only a single global or local Zernike descriptor as the feature vector. Analysis of the confusion matrices shows that the proposed algorithm achieves better precision, accuracy, and sensitivity, with a total performance index of 95.6% compared with 88.4% and 84% for the other two methods.
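A minimal sketch of the combined feature vector, treating a radar frame as a plain 2D array and concatenating global Zernike moments with local Zernike moments of its quadrants (computed here with the mahotas library); the preprocessing and block layout are assumptions, not the paper's pipeline:

```python
# Global Zernike moments of the whole frame plus local Zernike moments of its
# 2x2 sub-blocks, concatenated into one feature vector for a classifier.
import numpy as np
import mahotas

def global_local_zernike(frame2d, degree=8):
    h, w = frame2d.shape
    feats = [mahotas.features.zernike_moments(frame2d, radius=min(h, w) // 2, degree=degree)]
    for r in (slice(0, h // 2), slice(h // 2, h)):        # local 2x2 blocks (assumed layout)
        for c in (slice(0, w // 2), slice(w // 2, w)):
            block = frame2d[r, c]
            feats.append(mahotas.features.zernike_moments(
                block, radius=min(block.shape) // 2, degree=degree))
    return np.concatenate(feats)      # feed to a classifier such as an SVM
```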

Invariant Classification and Detection for Cloth Searching (의류 검색용 회전 및 스케일 불변 이미지 분류 및 검색 기술)

  • Hwang, Inseong;Cho, Beobkeun;Jeon, Seungwoo;Choe, Yunsik
    • Journal of Broadcast Engineering / v.19 no.3 / pp.396-404 / 2014
  • Clothing search, which is difficult because of the informal nature of the domain, has long sought to reduce recognition error and computational complexity. However, there are few concrete accounts of the whole process of learning and recognizing clothing, and the related technologies still show many limitations. In this paper, the whole classification process, including identifying both the person and the clothing in an image and analyzing its color and texture pattern, is presented in detail. In particular, a deformable search descriptor, LBPROT_35, is proposed for identifying clothing patterns. The proposed method is scale and rotation invariant, so a high detection rate is obtained even when the scale and angle of the image change. In addition, a color classifier based on color space quantization is proposed so that color similarity is not lost. In simulation, a database is built by training a total of 810 clothing images collected from the internet, and some of them are tested. As a result, the proposed method shows good performance with a 94.4% matching rate, while the earlier Dense-SIFT method achieves 63.9%.
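A minimal sketch of a rotation-invariant LBP texture histogram in the spirit of the descriptor above; the specific LBPROT_35 construction is not reproduced here:

```python
# Pool rotation-invariant uniform LBP codes into a normalized histogram that
# can serve as a rotation-invariant texture descriptor.
import numpy as np
from skimage.feature import local_binary_pattern

def rotation_invariant_lbp_hist(gray, P=8, R=1):
    codes = local_binary_pattern(gray, P, R, method="uniform")   # rotation-invariant uniform codes
    n_bins = P + 2                                               # P+1 uniform patterns + 1 "other" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist                                                  # compare with e.g. chi-square distance
```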

A study on image region analysis and image enhancement using detail descriptor (디테일 디스크립터를 이용한 이미지 영역 분석과 개선에 관한 연구)

  • Lim, Jae Sung;Jeong, Young-Tak;Lee, Ji-Hyeok
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.6 / pp.728-735 / 2017
  • With the proliferation of digital devices, considerable additive white Gaussian noise is generated while acquiring digital images. Most well-known denoising methods focus on eliminating the noise, so detail components that carry image information are removed along with it. The proposed algorithm preserves details while effectively removing noise: the goal is to separate meaningful detail information in a noisy image by using edge strength and edge connectivity. Consequently, even as the noise level increases, the method produces better denoising results than the benchmark methods because it extracts connected detail components. The proposed method effectively removes noise at various noise levels and, compared with the benchmark algorithms, achieves higher structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) values; the high SSIM values confirm that the denoising results agree with the human visual system (HVS).
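A minimal sketch of the edge-strength and edge-connectivity idea: strong, well-connected edge responses are kept as detail and the rest of the image is smoothed. The thresholds and the Gaussian smoothing stand in for the paper's actual detail descriptor and denoising filter:

```python
# Build a detail mask from edge strength (Sobel magnitude) and edge
# connectivity (connected-component size), then smooth only the non-detail area.
import cv2
import numpy as np

def detail_preserving_denoise(gray, strength_thresh=40, min_component_px=20):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    strength = cv2.magnitude(gx, gy)                            # edge strength
    edge_map = (strength > strength_thresh).astype(np.uint8)
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(edge_map, connectivity=8)
    detail = np.zeros_like(edge_map)
    for i in range(1, n_labels):                                # drop small, noise-like components
        if stats[i, cv2.CC_STAT_AREA] >= min_component_px:
            detail[labels == i] = 1
    detail = cv2.dilate(detail, np.ones((3, 3), np.uint8))      # protect a small neighborhood
    smoothed = cv2.GaussianBlur(gray, (5, 5), 1.5)
    return np.where(detail == 1, gray, smoothed)                # keep detail, smooth the rest
```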