• Title/Summary/Keyword: Scene Matching

Implementation of Path Finding Method using 3D Mapping for Autonomous Robotic (3차원 공간 맵핑을 통한 로봇의 경로 구현)

  • Son, Eun-Ho;Kim, Young-Chul;Chong, Kil-To
    • Journal of Institute of Control, Robotics and Systems / v.14 no.2 / pp.168-177 / 2008
  • Path finding is a key element in the navigation of a mobile robot. To find a path, the robot must know its position accurately, since position error exposes it to many hazards: it could move in the wrong direction and be damaged by collision with surrounding obstacles. We propose a method for obtaining an accurate robot position. The localization of a mobile robot in its working environment is performed using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks located in the environment, and image processing and neural-network pattern-matching techniques are applied to find its location. After the self-positioning procedure, the 2-D scene from the vision system is overlaid onto a VRML scene. This paper describes how to realize the self-positioning and shows the overlay between the 2-D and VRML scenes. The suggested method defines a robot's path successfully. An experiment applying the suggested algorithm to a mobile robot has been performed, and the result shows good path tracking.

Feature-based Image Analysis for Object Recognition on Satellite Photograph (인공위성 영상의 객체인식을 위한 영상 특징 분석)

  • Lee, Seok-Jun;Jung, Soon-Ki
    • Journal of the HCI Society of Korea / v.2 no.2 / pp.35-43 / 2007
  • This paper presents a system for image matching and recognition based on feature detection and description techniques applied to artificial satellite photographs. We propose several parameters derived from the varied environmental factors that arise during image handling, and the core of the experiment is an analysis of how changing each parameter affects the match rate and recognition accuracy. The proposed system is inspired by Lowe's SIFT (Scale-Invariant Feature Transform) algorithm. Descriptors extracted from local affine-invariant regions are saved into a database defined by k-means clustering performed on the 128-dimensional descriptor vectors from satellite photographs taken from Google Earth. A label is then attached to each cluster of the feature database and serves as guidance for identifying a building appearing in the camera scene. The experiments vary these parameters and compare their effects on the image matching and recognition process. Finally, the implementation and the experimental results for several queries are shown.
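
As a rough illustration of the descriptor-clustering step the abstract describes, the sketch below runs a minimal k-means over 128-dimensional vectors standing in for SIFT descriptors. The toy data, the value of k, and the deterministic initialization are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def kmeans(descriptors, k, iters=20):
    """Minimal k-means over descriptor vectors (e.g. 128-D SIFT descriptors).

    Centers are initialized deterministically from evenly spaced samples."""
    idx = np.linspace(0, len(descriptors) - 1, k).astype(int)
    centers = descriptors[idx].copy()
    for _ in range(iters):
        # assign each descriptor to its nearest center (Euclidean distance)
        dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its members; keep it if the cluster empties
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

# Toy stand-in for SIFT descriptors: two well-separated 128-D blobs.
rng = np.random.default_rng(1)
descs = np.vstack([rng.normal(0.0, 0.1, (50, 128)),
                   rng.normal(5.0, 0.1, (50, 128))])
centers, labels = kmeans(descs, k=2)
```

In a real vocabulary, each resulting cluster would receive a label tied to a known building, and a query descriptor would be matched to its nearest cluster center.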

Research for Generation of Accurate DEM using High Resolution Satellite Image and Analysis of Accuracy (고해상도 위성영상을 이용한 정밀 DEM 생성 및 정확도 분석에 관한 연구)

  • Jeong, Jae-Hoon;Lee, Tae-Yoon;Kim, Tae-Jung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.26 no.4 / pp.359-365 / 2008
  • This paper focuses on generating more accurate DEMs and analyzing their accuracy. We applied a sensor modeling technique suitable for each satellite image, together with automatic matching over image pyramids; a matching algorithm based on epipolarity and scene geometry was also applied for stereo matching. IKONOS, QuickBird, SPOT-5, and KOMPSAT-2 images were used in the experiments. In particular, we applied an orbit-attitude sensor modeling technique to KOMPSAT-2 and performed DEM generation successfully. All generated DEMs show good quality. Accuracy was assessed against USGS DTED, and we also compared the DEMs generated in this research with DEMs generated by commercial software. All DEMs had a mean absolute error of 9 m to 12 m and an RMS error of 13 m to 16 m. The experimental results show that the generated DEMs perform similarly to, or better than, the DEMs produced by commercial software.
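
The two error measures reported above can be computed as below; the sample elevation values are made up for illustration and are not the paper's data.

```python
import numpy as np

def mean_absolute_error(dem, reference):
    """Mean absolute difference between DEM heights and reference heights."""
    return float(np.mean(np.abs(dem - reference)))

def rms_error(dem, reference):
    """Root-mean-square difference between DEM heights and reference heights."""
    return float(np.sqrt(np.mean((dem - reference) ** 2)))

# Made-up elevation samples (metres) for illustration.
dem = np.array([100.0, 105.0, 98.0, 110.0])
ref = np.array([102.0, 104.0, 101.0, 106.0])
print(mean_absolute_error(dem, ref))  # 2.5
print(rms_error(dem, ref))
```

RMSE penalizes large deviations more heavily than MAE, which is why the two figures in the abstract differ.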

ORMN: A Deep Neural Network Model for Referring Expression Comprehension (ORMN: 참조 표현 이해를 위한 심층 신경망 모델)

  • Shin, Donghyeop;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.7 no.2 / pp.69-76 / 2018
  • Referring expressions are natural language constructions used to identify particular objects within a scene. In this paper, we propose a new deep neural network model for referring expression comprehension. The proposed model locates the region of the referred object in a given image by making use of rich information about the referred object itself, the context object, and the relationship with the context object mentioned in the referring expression. In the proposed model, an object matching score and a relationship matching score are combined to compute the fitness score of each candidate region according to the structure of the referring expression sentence. The proposed model therefore consists of four sub-networks: a Language Representation Network (LRN), an Object Matching Network (OMN), a Relationship Matching Network (RMN), and a Weighted Composition Network (WCN). We demonstrate that our model achieves state-of-the-art comprehension results on three referring expression datasets.
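
A minimal sketch of the score-composition idea: each candidate region's fitness is a weighted sum of its object- and relationship-matching scores, and the best region wins. The weights and scores here are invented stand-ins, not the learned outputs of the paper's sub-networks.

```python
def best_region(candidates, w_obj=0.6, w_rel=0.4):
    """Return the index of the candidate region with the highest
    combined fitness (weighted sum of the two matching scores)."""
    fitness = [w_obj * c["obj"] + w_rel * c["rel"] for c in candidates]
    return max(range(len(candidates)), key=fitness.__getitem__)

# Made-up scores for three candidate regions.
cands = [
    {"obj": 0.90, "rel": 0.20},  # strong object match, weak relationship
    {"obj": 0.50, "rel": 0.95},  # weaker object match, strong relationship
    {"obj": 0.30, "rel": 0.30},
]
print(best_region(cands))  # 1
```

In the actual model the weights come from the Weighted Composition Network and depend on the structure of the referring expression, rather than being fixed constants.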

Effective Marker Placement Method By De Bruijn Sequence for Corresponding Points Matching (드 브루인 수열을 이용한 효과적인 위치 인식 마커 구성)

  • Park, Gyeong-Mi;Kim, Sung-Hwan;Cho, Hwan-Gue
    • The Journal of the Korea Contents Association / v.12 no.6 / pp.9-20 / 2012
  • In computer vision, it is very important to obtain reliable corresponding feature points, yet finding exact correspondences is difficult under changes in scale, lighting, viewpoint, etc. SIFT-based methods are invariant to image scale, rotation, and changes in illumination because their feature vectors are extracted from corners or edges of objects. However, SIFT cannot find feature points in areas where no edges exist. In this paper, we present a new marker placement method to improve the performance of SIFT feature detection and matching between different views of an object or scene. The markers used in the proposed method are semicircular, so that the dominant direction vector detected by the SIFT algorithm depends on the marker's orientation. We applied a De Bruijn sequence to the placement of marker orientations to improve matching performance. The experimental results show that the proposed method is more accurate and effective than the existing method.
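
The De Bruijn sequence underlying the orientation placement can be generated with the standard Lyndon-word construction. This is a generic B(k, n) generator, not the paper's specific marker layout; in a B(k, n) sequence every length-n window is unique, which is what lets a local group of markers identify its position.

```python
def de_bruijn(k, n):
    """Generate the cyclic De Bruijn sequence B(k, n): every length-n word
    over the alphabet {0, ..., k-1} appears exactly once as a window."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

s = de_bruijn(2, 3)
print(s)  # [0, 0, 0, 1, 0, 1, 1, 1]
```

Reading any three consecutive symbols (cyclically) of this sequence identifies a unique position, so a camera seeing only a few neighboring markers can still localize them.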

AUTOMATIC PRECISION CORRECTION OF SATELLITE IMAGES

  • Im, Yong-Jo;Kim, Tae-Jung
    • Proceedings of the KSRS Conference / 2002.10a / pp.40-44 / 2002
  • Precision correction is the process of geometrically aligning images to a reference coordinate system using GCPs (Ground Control Points). Many applications of remote sensing data, such as change detection, mapping, and environmental monitoring, rely on the accuracy of precision correction. However, it is a very time-consuming and laborious process: it requires GCP collection, i.e. the identification of image points and their corresponding reference coordinates. At typical satellite ground stations, GCP collection consumes most of the manpower spent on processing satellite images, so a method for automatic registration of satellite images is in demand. In this paper, we propose a new algorithm for automatic precision correction using GCP chips and RANSAC (Random Sample Consensus). The algorithm is divided into two major steps. The first is the automated generation of ground control points, using automated stereo matching based on normalized cross correlation. We improved the accuracy of stereo matching by determining the size and shape of the match windows according to the incidence angle and scene orientation from ancillary data. The second is the robust estimation of a mapping function from the control points. We used the RANSAC algorithm for this step and effectively removed outliers from the matching results. We carried out experiments with SPOT images over three test sites, taken at different times and look angles. The left image was used to select GCP chipsets, and the right image was matched against the GCP chipsets to perform automatic registration. The results show that our approach of automated matching and robust estimation worked well for automated registration.
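
A minimal sketch of the normalized cross correlation used in the first matching step; the patch contents are illustrative, and window sizing from ancillary data is omitted.

```python
import numpy as np

def ncc(patch, window):
    """Normalized cross correlation between two equal-size image patches.

    Returns a value in [-1, 1]; 1 means a perfect match up to a
    brightness offset and contrast gain."""
    p = patch - patch.mean()
    w = window - window.mean()
    denom = np.sqrt((p ** 2).sum() * (w ** 2).sum())
    return float((p * w).sum() / denom) if denom else 0.0

a = np.array([[1.0, 2.0], [3.0, 4.0]])
print(ncc(a, a))          # 1.0 -- identical patches
print(ncc(a, 2 * a + 5))  # 1.0 -- NCC ignores gain and offset
```

This gain/offset invariance is what makes NCC suitable for matching GCP chips against imagery taken at a different time or illumination.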

Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction (3차원 장면 복원을 위한 강건한 실시간 시각 주행 거리 측정)

  • Kim, Joo-Hee;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering / v.4 no.4 / pp.187-194 / 2015
  • In this paper, we present an effective visual odometry estimation system that tracks the real-time pose of a camera moving in 3D space. To meet the real-time requirement while making full use of the rich information in color and depth images, our system adopts a feature-based sparse odometry estimation method. After matching features extracted across image frames, it repeatedly refines both the inlier set and the motion estimate to obtain a more accurate camera odometry. Moreover, even when the remaining inlier set is insufficient, our system weights the final odometry estimate in proportion to the size of the inlier set, which greatly improves the tracking success rate. Through experiments with the TUM benchmark datasets and an implementation of a 3D scene reconstruction application, we confirmed the high performance of the proposed visual odometry estimation method.

Automated Satellite Image Co-Registration using Pre-Qualified Area Matching and Studentized Outlier Detection (사전검수영역기반정합법과 't-분포 과대오차검출법'을 이용한 위성영상의 '자동 영상좌표 상호등록')

  • Kim, Jong Hong;Heo, Joon;Sohn, Hong Gyoo
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.4D / pp.687-693 / 2006
  • Image co-registration is the process of overlaying two images of the same scene, one of which serves as the reference image while the other is geometrically transformed to match it. To improve the efficiency and effectiveness of co-registration, the authors propose a pre-qualified area matching algorithm composed of feature extraction with the Canny operator and area matching based on the cross correlation coefficient. To refine the matching points, outlier detection using studentized residuals is applied, iteratively removing outliers beyond three standard deviations. Through the pre-qualification and refinement processes, computation time was significantly improved and registration accuracy was enhanced. A prototype of the proposed algorithm was implemented, and a performance test on three Landsat images of Korea showed: (1) the average RMSE of the approach was 0.435 pixel; (2) the average number of matching points was over 25,573; (3) the average processing time was 4.2 minutes per image on a regular workstation equipped with a 3 GHz Intel Pentium 4 CPU and 1 GB of RAM. The proposed approach achieved robustness, full automation, and time efficiency.
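
A simplified sketch of the iterative three-sigma outlier rejection described above, using ordinary standardized residuals as a stand-in for true studentized residuals; the residual values are made up for illustration.

```python
import numpy as np

def reject_outliers(residuals, k=3.0):
    """Iteratively flag residuals lying more than k standard deviations
    from the mean of the surviving points; returns a boolean keep-mask."""
    r = np.asarray(residuals, dtype=float)
    keep = np.ones(len(r), dtype=bool)
    while True:
        m = r[keep].mean()
        s = r[keep].std(ddof=1)
        flagged = keep & (np.abs(r - m) > k * s)
        if not flagged.any():
            return keep
        keep &= ~flagged

# Fifteen small matching residuals (pixels) plus one gross blunder.
res = [0.1, -0.2, 0.05, 0.0, 0.15, -0.1, 0.2, -0.05,
       0.1, 0.0, -0.15, 0.05, 0.1, -0.1, 0.0, 50.0]
keep = reject_outliers(res)
print(keep.sum())  # 15 -- only the blunder is rejected
```

Iterating matters because a gross blunder inflates the standard deviation on the first pass; once it is removed, the threshold tightens around the true residual spread.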

A Study on the 3D Video Generation Technique using Multi-view and Depth Camera (다시점 카메라 및 depth 카메라를 이용한 3 차원 비디오 생성 기술 연구)

  • Um, Gi-Mun;Chang, Eun-Young;Hur, Nam-Ho;Lee, Soo-In
    • Proceedings of the IEEK Conference / 2005.11a / pp.549-552 / 2005
  • This paper presents a 3D video content generation technique and system that uses multi-view images and a depth map. The proposed system takes 3-view video and depth inputs from a 3-view video camera and a depth camera for 3D video content production. Each camera is calibrated using Tsai's calibration method, and its parameters are used to rectify the multi-view images for multi-view stereo matching. The depth and disparity maps for the center view are obtained from both the depth camera and the multi-view stereo matching technique, and the two maps are fused to obtain a more reliable depth map. The fused depth map is used not only to insert a virtual object into the scene based on depth keying, but also to synthesize virtual viewpoint images. Some preliminary test results are given to show the functionality of the proposed technique.

On the Recognition of the Occluded Objects Using Matching Probability (정합확률을 이용한 겹쳐진 물체의 인식에 대하여)

  • Nam, Ki-Gon;Lee, Soo-Dong;Lee, Ryang-Sung
    • Journal of the Korean Institute of Telematics and Electronics / v.26 no.1 / pp.20-28 / 1989
  • The recognition of partially occluded objects is of prime importance for industrial machine vision applications and for solving real problems in factory automation. This paper describes a method to solve the problem of occlusion in a two-dimensional scene. The technique consists of three steps: searching for borders, extracting line segments, and clustering hypotheses by matching probability. Computer simulation was tested on 20 scenes containing 80 models, obtaining a 95% correct recognition rate on average.
