• Title/Summary/Keyword: SIFT algorithm

125 search results, processing time 0.027 seconds

Image Mosaicking Using Feature Points Based on Color-invariant (칼라 불변 기반의 특징점을 이용한 영상 모자이킹)

  • Kwon, Oh-Seol;Lee, Dong-Chang;Lee, Cheol-Hee;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.2 / pp.89-98 / 2009
  • In the field of computer vision, image mosaicking is a common method for effectively extending the restricted field of view of a camera by combining a set of separate images into a single seamless image. Image mosaicking based on feature points has recently been a focus of research because the geometric transformation can be estimated simply, regardless of the distortions and intensity differences produced by camera motion between consecutive images. Yet, since most feature-point matching algorithms extract feature points from gray values, identifying corresponding points becomes difficult under changing illumination or in images with similar intensities. Accordingly, to solve these problems, this paper proposes an image mosaicking method based on feature points that uses the color information of the images. Essentially, the digital values acquired from a digital color camera are converted to the values of a virtual camera with distinct narrow bands. Values that are based on the surface reflectance and invariant to the chromaticity of various illuminants are then derived from the virtual camera values and defined as color-invariant values. The validity of these color-invariant values is verified in a test using a Macbeth ColorChecker under simulated illuminants. The test also compares the proposed method, which uses the color-invariant values, with the conventional SIFT algorithm. The matching accuracy of the feature points extracted by the proposed method is increased, while image mosaicking using color information is also achieved.
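The abstract does not spell out the paper's narrow-band color-invariant derivation, but the core idea of an illumination-invariant color value can be illustrated with the simplest such quantity, normalized chromaticity, which cancels a uniform scaling of the illuminant. This is a minimal sketch, not the paper's actual transform:

```python
def chromaticity(rgb):
    """Normalized chromaticity (r, g): invariant to uniform illumination scaling."""
    r, g, b = rgb
    s = r + g + b
    if s == 0:
        return (0.0, 0.0)
    return (r / s, g / s)

# The same surface under a twice-as-bright illuminant maps to the same chromaticity,
# so feature matching on such values is less sensitive to illumination changes.
patch_dim = (60, 90, 30)
patch_bright = (120, 180, 60)
assert chromaticity(patch_dim) == chromaticity(patch_bright)
```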

A Hybrid Feature Selection Method using Univariate Analysis and LVF Algorithm (단변량 분석과 LVF 알고리즘을 결합한 하이브리드 속성선정 방법)

  • Lee, Jae-Sik;Jeong, Mi-Kyoung
    • Journal of Intelligence and Information Systems / v.14 no.4 / pp.179-200 / 2008
  • We develop a feature selection method that can improve both the efficiency and the effectiveness of a classification technique; in this research, we employ case-based reasoning as the classifier. Basically, this research integrates two existing feature selection methods, i.e., univariate analysis and the LVF algorithm. First, we sift predictive features from the whole feature set using univariate analysis. Then, we generate all possible subsets of these predictive features and measure the inconsistency rate of each subset using the LVF algorithm. Finally, the subset with the lowest inconsistency rate is selected as the best feature subset. We measure the performance of our feature selection method on data from the UCI Machine Learning Repository and compare it with that of existing methods. Both the number of selected features and the accuracy are satisfactory, so improvements in both efficiency and effectiveness are achieved.
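The LVF criterion used above can be sketched concisely: instances that agree on every selected feature but disagree on the class label count as inconsistencies, and the subset with the lowest rate wins. This is a generic illustration of the inconsistency rate, not the paper's implementation:

```python
from collections import Counter, defaultdict

def inconsistency_rate(rows, labels, subset):
    """LVF-style inconsistency rate: instances that share values on the
    selected features but carry a minority class label, over the total count."""
    groups = defaultdict(list)
    for row, label in zip(rows, labels):
        key = tuple(row[i] for i in subset)
        groups[key].append(label)
    inconsistent = sum(len(g) - Counter(g).most_common(1)[0][1]
                       for g in groups.values())
    return inconsistent / len(rows)

rows = [(0, 1), (0, 1), (0, 0), (1, 0)]
labels = ['a', 'a', 'b', 'b']
# Feature 0 alone leaves one minority label among the three rows with value 0.
assert inconsistency_rate(rows, labels, [0]) == 0.25
# Both features together separate the classes perfectly.
assert inconsistency_rate(rows, labels, [0, 1]) == 0.0
```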


Matching Points Filtering Applied Panorama Image Processing Using SURF and RANSAC Algorithm (SURF와 RANSAC 알고리즘을 이용한 대응점 필터링 적용 파노라마 이미지 처리)

  • Kim, Jeongho;Kim, Daewon
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.4 / pp.144-159 / 2014
  • Techniques for making a single panoramic image from multiple pictures are widely studied in areas such as computer vision and computer graphics. A panoramic image can be applied in fields like virtual reality and robot vision that require wide-angle shots, as a useful way to overcome limitations such as the picture angle, resolution, and internal information of an image taken with a single camera. It is also meaningful in that a panoramic image usually provides a better feeling of immersion than a plain image. Although there are many ways to build a panoramic image, most of them extract feature points and matching points from each image, then use the RANSAC (RANdom SAmple Consensus) algorithm on the matching points to estimate the homography matrix that transforms the images. The SURF (Speeded Up Robust Features) algorithm, used in this paper to extract feature points, relies on an image's grayscale and local spatial information. SURF is widely used since it is robust to changes in image size and viewpoint and is, additionally, faster than the SIFT (Scale Invariant Feature Transform) algorithm. However, SURF tends to produce erroneous feature points, which slows down the RANSAC stage and may increase CPU usage. Such matching errors can be a critical cause of degraded accuracy and clarity in the panoramic image. In this paper, to minimize matching-point errors, we use the RGB pixel values of the $3{\times}3$ region around each matching point's coordinates in an intermediate filtering step that removes wrong matches. We also present analysis and evaluation results on the improved speed of producing a panoramic image, the CPU usage rate, the reduction rate of extracted matching points, and the accuracy.
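One plausible reading of the 3×3 RGB filtering step is to compare the mean color of the 3×3 neighborhoods around each putative match and reject pairs whose local color statistics disagree. The function names and the tolerance are illustrative assumptions, not the paper's definitions:

```python
def patch_mean(img, x, y):
    """Mean RGB over the 3x3 neighborhood centred at (x, y); img is a 2-D
    list of (r, g, b) tuples indexed as img[y][x]."""
    pix = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return tuple(sum(p[i] for p in pix) / 9.0 for i in range(3))

def keep_match(img_a, pt_a, img_b, pt_b, tol=20.0):
    """Accept a putative match only if the local colour statistics agree
    channel-wise within the tolerance."""
    ma = patch_mean(img_a, *pt_a)
    mb = patch_mean(img_b, *pt_b)
    return all(abs(a - b) <= tol for a, b in zip(ma, mb))
```

Pre-filtering matches this way shrinks the input to RANSAC, which is where the reported speed and CPU-usage gains would come from.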

A Study on the Compensating of the Dead-reckoning Based on SLAM Using the Inertial Sensor (관성센서를 이용한 SLAM 기반의 위치 추정 보정 기법에 관한 연구)

  • Kang, Shin-Hyuk;Yeom, Moon-Jin;Kwon, Oh-Sang;Lee, Eung-Hyuk
    • Proceedings of the KIEE Conference / 2008.10b / pp.85-86 / 2008
  • A robot can estimate its position using odometry information. However, because of the slip that occurs while driving, odometry alone cannot localize the robot accurately. One approach to accurate localization corrects the odometry using an inertial sensor. For an indoor mobile robot the inertial sensor must be small, which makes it noisier and less accurate. To raise the localization accuracy despite these shortcomings, current studies combine such inertial sensors with non-inertial sensors or camera images. However, most of these studies draw their conclusions from sensor performance experiments and simulations, so their accuracy in real experiments cannot be confirmed. Inaccurate localization also remains a problem in recent SLAM studies that apply the image-based SIFT algorithm. Therefore, to minimize the localization inaccuracy that is problematic in SLAM, this paper aims at accurate localization using a gyroscope and an accelerometer.
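The abstract does not give the fusion scheme, but a standard way to combine a drifting gyro rate with a noisy, drift-free accelerometer angle is a complementary filter; this one-step sketch is a generic illustration, not the paper's method:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step fusing a gyro rate (smooth but drifting when integrated)
    with an accelerometer-derived angle (noisy but drift-free)."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# With the gyro reporting zero rate, repeated updates converge to the
# accelerometer angle instead of drifting.
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=10.0, dt=0.01)
```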


Improved Similarity Detection Algorithm of the Video Scene (개선된 비디오 장면 유사도 검출 알고리즘)

  • Yu, Ju-Won;Kim, Jong-Weon;Choi, Jong-Uk;Bae, Kyoung-Yul
    • The Journal of the Korea Contents Association / v.9 no.2 / pp.43-50 / 2009
  • In this paper, we propose a similarity detection method for video frame data that extracts feature data from each video frame and converts it into a 1-D signal. To extract the similarity between videos, we find similar frame boundaries and build representative frames within each boundary. The representative frames are blurred, and feature data are extracted using DoG (Difference of Gaussians) values. Finally, we convert the feature data into a 1-D signal and compare the content similarity. Experimental results show that the proposed algorithm achieves a similarity value above 0.9 under noise addition, rotation, size change, frame deletion, and frame cutting.
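The DoG values mentioned above are the difference of two Gaussian blurs at different scales, which acts as a band-pass filter. A minimal 1-D version (the paper's exact scales and border handling are not given, so those choices here are assumptions):

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius`."""
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Convolution with replicated borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog(signal, sigma1=1.0, sigma2=2.0, radius=4):
    """Difference of Gaussians: band-pass response of a 1-D feature signal."""
    b1 = convolve(signal, gaussian_kernel(sigma1, radius))
    b2 = convolve(signal, gaussian_kernel(sigma2, radius))
    return [a - b for a, b in zip(b1, b2)]
```

A flat signal yields a near-zero DoG response, while a step produces a strong response near the edge, which is why the resulting 1-D signal characterizes scene content.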

Localization Algorithm for Lunar Rover using IMU Sensor and Vision System (IMU 센서와 비전 시스템을 활용한 달 탐사 로버의 위치추정 알고리즘)

  • Kang, Hosun;An, Jongwoo;Lim, Hyunsoo;Hwang, Seulwoo;Cheon, Yuyeong;Kim, Eunhan;Lee, Jangmyung
    • The Journal of Korea Robotics Society / v.14 no.1 / pp.65-73 / 2019
  • In this paper, we propose an algorithm that estimates the location of a lunar rover using an IMU and a vision system, instead of dead reckoning with an IMU and encoders, which struggles to estimate the exact distance due to accumulated error and slip. First, since the magnetic field in the lunar environment is not uniform, unlike on Earth, only accelerometer and gyro data were used for localization. These data were fed to an extended Kalman filter to estimate the roll, pitch, and yaw Euler angles of the exploration rover. Also, the lunar module has a distinctive color that does not occur elsewhere in the lunar environment, so it was recognized correctly by applying an HSV color filter to the stereo images taken by the rover. The distance between the rover and the lunar module was then estimated through SIFT feature-point matching and stereo geometry. Finally, the estimated Euler angles and distances were used to estimate the current position of the rover relative to the lunar module. The performance of the proposed algorithm was compared with that of a conventional algorithm to show its superiority.
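The stereo-geometry step that turns matched SIFT points into a distance typically reduces to the classic disparity relation Z = fB/d. A minimal sketch (the focal length and baseline values below are illustrative, not from the paper):

```python
def stereo_distance(focal_px, baseline_m, x_left, x_right):
    """Depth from stereo disparity: Z = f * B / d, with the focal length f in
    pixels, the baseline B in metres, and d = x_left - x_right in pixels."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity

# A matched SIFT point seen at x=340 px (left) and x=320 px (right) with a
# 700 px focal length and 12 cm baseline lies about 4.2 m away.
z = stereo_distance(700.0, 0.12, 340.0, 320.0)
```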

Framework Implementation of Image-Based Indoor Localization System Using Parallel Distributed Computing (병렬 분산 처리를 이용한 영상 기반 실내 위치인식 시스템의 프레임워크 구현)

  • Kwon, Beom;Jeon, Donghyun;Kim, Jongyoo;Kim, Junghwan;Kim, Doyoung;Song, Hyewon;Lee, Sanghoon
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.11 / pp.1490-1501 / 2016
  • In this paper, we propose an image-based indoor localization system using parallel distributed computing. To reduce the computation time for indoor localization, the scale-invariant feature transform (SIFT) algorithm is executed in parallel using Apache Spark. Toward this goal, we propose a novel image processing interface for Apache Spark. Experimental results show that the proposed system is about 3.6 times faster than the conventional system.
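The paper's Spark interface is not described in the abstract; the general pattern, though, is a data-parallel map of a per-image feature extractor over the image collection (in Spark, `sc.parallelize(paths).map(extract)`). This stdlib sketch shows the same map pattern with a placeholder extractor standing in for SIFT:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_features(image_id):
    """Placeholder for per-image SIFT extraction; returns (id, feature count).
    A real system would load the image and run a SIFT detector here."""
    return (image_id, (image_id * 37) % 100)

def extract_all(image_ids, workers=4):
    """Run the extractor over all images in parallel and collect the results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(extract_features, image_ids))
```

Because each image is processed independently, the speedup scales with the number of workers until I/O or scheduling overhead dominates, which is consistent with the reported 3.6x gain.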

Shape Based Framework for Recognition and Tracking of Texture-free Objects for Submerged Robots in Structured Underwater Environment (수중로봇을 위한 형태를 기반으로 하는 인공표식의 인식 및 추종 알고리즘)

  • Han, Kyung-Min;Choi, Hyun-Taek
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.6 / pp.91-98 / 2011
  • This paper proposes an efficient and accurate vision-based recognition and tracking framework for texture-free objects. We approach this problem with a two-phase algorithm: a detection phase and a tracking phase. In the detection phase, the algorithm extracts shape context descriptors, which are used to classify objects into predetermined targets of interest; the matching result is then further refined by a minimization technique. In the tracking phase, we use the mean-shift tracking algorithm based on the Bhattacharyya coefficient. In summary, the contributions of our method to underwater robot vision are fourfold: 1) it can handle camera motion and object scale changes in the underwater environment; 2) it is an inexpensive vision-based recognition algorithm; 3) a shape-based method has advantages over a distinctive feature-point method (SIFT) in underwater environments with varying turbidity; 4) we provide a quantitative comparison of our method with several well-known methods. The results are quite promising for the map-based underwater SLAM task that is the goal of our research.
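The Bhattacharyya coefficient that drives the mean-shift tracker above is a standard similarity measure between two normalized histograms; this sketch shows the measure itself, not the paper's tracker:

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms:
    1.0 for identical distributions, 0.0 for disjoint support."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

# Identical target and candidate histograms score 1.0; non-overlapping ones 0.0.
target = [0.5, 0.5, 0.0]
candidate = [0.5, 0.5, 0.0]
```

Mean-shift tracking moves the candidate window toward the location that maximizes this coefficient against the target model.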

Multi-Object Detection Using Image Segmentation and Salient Points (영상 분할 및 주요 특징 점을 이용한 다중 객체 검출)

  • Lee, Jeong-Ho;Kim, Ji-Hun;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.2 / pp.48-55 / 2008
  • In this paper, we propose a novel image retrieval method using image segmentation and salient points. The proposed method consists of four steps. First, images are segmented into several regions by the JSEG algorithm. Second, dominant colors and the corresponding color histograms are constructed for the segmented regions; using them, we identify candidate regions where objects may exist. Third, real object regions are detected among the candidate regions by SIFT matching. Finally, we measure the similarity between the query image and a database image using the color correlogram technique, computing the correlogram over the query image and the object region of the database image. Experimental results show that the proposed method detects multiple objects reliably and provides better retrieval performance than object-based retrieval systems.
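A color autocorrelogram, the simplest form of the correlogram used in the final step, records the probability that a pixel at a given distance from a pixel of color c also has color c. A small sketch over a grid of color indices, assuming Chebyshev distance (the paper's exact distance set and quantization are not given):

```python
def autocorrelogram(img, colors, d=1):
    """P(neighbour at Chebyshev distance d has colour c | pixel has colour c),
    computed over a 2-D grid of colour indices img[y][x]."""
    h, w = len(img), len(img[0])
    hits = {c: 0 for c in colors}
    total = {c: 0 for c in colors}
    for y in range(h):
        for x in range(w):
            c = img[y][x]
            for dy in range(-d, d + 1):
                for dx in range(-d, d + 1):
                    if max(abs(dy), abs(dx)) != d:
                        continue  # keep only the ring at exactly distance d
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total[c] += 1
                        hits[c] += img[ny][nx] == c
    return {c: hits[c] / total[c] if total[c] else 0.0 for c in colors}
```

Unlike a plain histogram, this captures the spatial coherence of each color, which is what makes it effective for comparing object regions.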

Semi-automatic 3D Building Reconstruction from Uncalibrated Images (비교정 영상에서의 반자동 3차원 건물 모델링)

  • Jang, Kyung-Ho;Jang, Jae-Seok;Lee, Seok-Jun;Jung, Soon-Ki
    • Journal of Korea Multimedia Society / v.12 no.9 / pp.1217-1232 / 2009
  • In this paper, we propose a semi-automatic 3D building reconstruction method using uncalibrated images that include the facade of the target building. First, we extract feature points in all images and find corresponding points between each pair of images. Second, we extract lines in each image, estimate the vanishing points, and group the extracted lines by their corresponding vanishing points. An adjacency graph is used to organize the image sequence based on the number of corresponding points between image pairs, and camera calibration is performed. An initial solid model can then be generated with some user interaction, using the grouped lines and camera pose information. From the initial solid model, a detailed building model is reconstructed by a combination of predefined basic Euler operators on a half-edge data structure. Automatically computed geometric information is visualized to assist the user's interaction during the detailed modeling process. The proposed system allows the user to obtain a 3D building model with less interaction by augmenting various automatically generated geometric information.
