• Title/Summary/Keyword: Scene Matching

Search Results: 156

High Speed Construction Method of Panoramic Images Using Scene Shot Guider (촬영 장면 가이더를 이용한 고속 파노라마 영상 생성 방법)

  • Kim, Tae-Woo;Yoo, Hyeon-Joong;Sohn, Kyu-Seek
    • Journal of the Korea Academia-Industrial cooperation Society / v.8 no.6 / pp.1449-1457 / 2007
  • A panorama image is constructed by merging several overlapping images into a single large image. There are two kinds of construction methods: feature-based and direct. The feature-based method has the merit of faster processing speed than the direct method, but it is still difficult to implement in slow processing environments such as mobile devices. This paper proposes a high-speed construction method for panoramic images. The algorithm greatly improves matching speed by reducing the number of matching parameters using a scene shot guider, and additionally adopts a local matching technique to reduce the matching error caused by the reduced parameter set. In the experiments, the proposed method required about 0.078 seconds of processing time for 24-bit color images of 320×240 size, about 17 times shorter than the feature-based method. An illustrative sketch of the guided local matching follows this entry.

  • PDF
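
Below is a minimal, illustrative sketch (Python/OpenCV, not the paper's implementation) of the guided local-matching idea: if a shot guider constrains the camera to a roughly known pan, the offset between adjacent shots can be refined by a small template search around the guided estimate instead of a full feature-based match. The function name, patch size, and search range are assumptions made for the example.

```python
# Hedged sketch: refine a horizontal pan shift between two overlapping gray
# images, searching only a small window around the guider's rough estimate.
import cv2

def refine_pan_shift(left, right, shift_guess, search=16, patch=64):
    """Refine the shift between `left` and `right` (8-bit gray images),
    searching only +/- `search` pixels around the guided estimate."""
    h, w = left.shape[:2]
    # Patch taken from the middle of the assumed overlap strip of the left image.
    x0 = shift_guess + (w - shift_guess - patch) // 2
    y0 = (h - patch) // 2
    template = left[y0:y0 + patch, x0:x0 + patch]
    # Expected position of that patch in the right image, plus a small margin.
    ex, ey = x0 - shift_guess, y0
    win = right[max(0, ey - search):ey + patch + search,
                max(0, ex - search):ex + patch + search]
    score = cv2.matchTemplate(win, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (mx, my) = cv2.minMaxLoc(score)
    found_x = max(0, ex - search) + mx
    return x0 - found_x  # refined shift in pixels

# Usage (hypothetical): shift = refine_pan_shift(img_a, img_b, shift_guess=200),
# then blend img_b onto img_a offset by `shift`.
```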

Analysis of Quantization Error in Stereo Vision (스테레오 비젼의 양자화 오차분석)

  • 김동현;박래홍
    • Journal of the Korean Institute of Telematics and Electronics B / v.30B no.9 / pp.54-63 / 1993
  • Quantization error, generated by the quantization process of an image, is inherent in computer vision. Because the quantization error in a 2-D image results, especially in stereo vision, in position errors in the reconstructed 3-D scene, it is necessary to analyze it mathematically. In this paper, an analysis of the probability density function (pdf) of quantization error for a line-based stereo matching scheme is presented. We show that the theoretical pdf of the quantization error in the reconstructed 3-D position information has a more general form than the conventional analysis for pixel-based stereo matching schemes. Computer simulation is observed to support the theoretical distribution. A small numerical illustration of the effect follows this entry.

  • PDF
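
The following Monte Carlo snippet is only an illustration, not the paper's closed-form derivation: it shows how a uniform half-pixel quantization error on disparity produces an asymmetric, depth-dependent error in the reconstructed depth Z = fB/d of a standard parallel-axis stereo pair. The focal length, baseline, and depth values are assumed.

```python
# Illustrative simulation of depth error caused by disparity quantization.
import numpy as np

f = 800.0      # focal length in pixels (assumed)
B = 0.12       # baseline in meters (assumed)
true_Z = 5.0   # true depth of a scene point in meters (assumed)

true_d = f * B / true_Z                        # continuous disparity
# Rounding the disparity to the pixel grid adds an error ~ U(-0.5, 0.5).
eps = np.random.uniform(-0.5, 0.5, size=100_000)
Z_hat = f * B / (true_d + eps)                 # depth from quantized disparity

err = Z_hat - true_Z
print(f"disparity at {true_Z} m: {true_d:.2f} px")
print(f"depth error: mean {err.mean():+.4f} m, std {err.std():.4f} m")
# The error distribution is asymmetric: the same half-pixel disparity error
# stretches depth more on the far side than it compresses it on the near side.
```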

Feature based Object Tracking from an Active Camera (능동카메라 환경에서의 특징기반의 이동물체 추적)

  • 오종안;정영기
    • Proceedings of the IEEK Conference / 2002.06d / pp.141-144 / 2002
  • This paper describes a new feature-based tracking system that can track moving objects with a pan-tilt camera. We extract corner features of the scene and track them using filtering. The global motion energy caused by camera movement is eliminated by finding the maximal matching position between consecutive frames using pyramidal template matching. The region of the moving object is segmented by clustering the motion trajectories, and the pan-tilt controller is commanded to follow the object so that it always lies at the center of the camera view. The proposed system has demonstrated good performance on several video sequences. A sketch of the pyramidal matching step follows this entry.

  • PDF
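
As a rough illustration of the pyramidal template matching mentioned above, the sketch below estimates the dominant frame-to-frame shift caused by camera pan/tilt in a coarse-to-fine manner. The central-block template, pyramid depth, and search radius are assumptions, not the paper's parameters.

```python
# Coarse-to-fine (pyramidal) template matching for the global camera shift.
import cv2

def global_shift(prev, curr, levels=3, search=8):
    """Estimate the dominant (dx, dy) between two gray frames, refining the
    estimate from the coarsest pyramid level down to full resolution."""
    pyr_prev, pyr_curr = [prev], [curr]
    for _ in range(levels - 1):
        pyr_prev.append(cv2.pyrDown(pyr_prev[-1]))
        pyr_curr.append(cv2.pyrDown(pyr_curr[-1]))

    dx = dy = 0
    for lvl in reversed(range(levels)):             # coarse -> fine
        p, c = pyr_prev[lvl], pyr_curr[lvl]
        h, w = p.shape
        m = min(h, w) // 4                          # central template half-size
        tpl = p[h//2 - m:h//2 + m, w//2 - m:w//2 + m]
        # Search a small window around the estimate carried from the coarser level.
        cy, cx = h//2 + dy, w//2 + dx
        y0, x0 = max(0, cy - m - search), max(0, cx - m - search)
        win = c[y0:cy + m + search, x0:cx + m + search]
        res = cv2.matchTemplate(win, tpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, (mx, my) = cv2.minMaxLoc(res)
        dx = (x0 + mx + m) - w//2                   # shift of the block center
        dy = (y0 + my + m) - h//2
        if lvl > 0:                                 # propagate to the finer level
            dx, dy = dx * 2, dy * 2
    return dx, dy
```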

A Rate Control Algorithm of MPEG-2 Video Encoding Based Target Bit Matching at Scene Changes (장면전환 발생시 예상 비트 조정을 통한 MPEG-2 비디오 부호화 비트율 제어 알고리즘)

  • Moon Ho-seok;Park Sang-sung;Sohn Myung-ho;Jang Dong-sik
    • Journal of KIISE:Software and Applications / v.31 no.12 / pp.1621-1627 / 2004
  • Visual quality decreases at a scene change when the difference between the target bit allocation and the bits actually produced by coding is large. In particular, a scene change at a P-picture can severely degrade the visual quality of that picture and of the pictures that reference it. In this paper, we propose a new method, based on an analysis of the inaccurate bit allocation of existing schemes, to improve the visual quality of the scene-changed picture and of the following pictures when a scene change occurs. The method allocates extra bits to the scene-changed picture, raising its target up to the level of an intra-picture complexity, and also adjusts the target bits of the following pictures according to the complexity of the pictures prior to the scene change, as sketched below. Computer simulations show that the proposed method achieves 0.5-1.2 dB higher PSNR than the TM5 method.
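
A hedged sketch of the idea, using the standard TM5 Step-1 target-bit formulas: when a scene change is detected at a P-picture, its target is recomputed as if it were an intra picture so that it receives extra bits. The scene-change branch below is an assumed reading of the abstract, not the paper's exact rule.

```python
# TM5-style Step-1 target bit allocation with a hypothetical scene-change branch.
KP, KB = 1.0, 1.4   # standard TM5 constants

def tm5_target(ptype, R, Np, Nb, Xi, Xp, Xb, bit_rate, pic_rate,
               scene_change=False):
    """Target bits T for the next picture of type 'I', 'P', or 'B'.
    R: remaining bits for the GOP; Np, Nb: remaining P/B pictures;
    Xi, Xp, Xb: global complexity measures of the last I/P/B pictures."""
    floor = bit_rate / (8.0 * pic_rate)          # TM5 lower bound on T
    if ptype == 'P' and scene_change:
        # Assumed reading of the abstract: allocate the scene-changed
        # P-picture like an intra picture so it receives extra bits.
        ptype = 'I'
    if ptype == 'I':
        T = R / (1.0 + Np * Xp / (Xi * KP) + Nb * Xb / (Xi * KB))
    elif ptype == 'P':
        T = R / (Np + Nb * KP * Xb / (KB * Xp))
    else:  # 'B'
        T = R / (Nb + Np * KB * Xp / (KP * Xb))
    return max(T, floor)
```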

Embedding of Objects Using SFM Analysis in Synthetic Image Sequences (합성영상에서의 이동물체의 SFM분석을 통한 물체의 삽입)

  • 최경업;김용철
    • Proceedings of the IEEK Conference / 2000.11c / pp.181-184 / 2000
  • This paper presents an experimental system in which an object extracted from an image sequence is embedded into a desired position in a scene. First, a moving object is detected and its 3-D structure is obtained by SFM analysis of corner trajectories; the motion is constrained to be translational only. Extracted objects are classified by matching with 3-D models, and the structure of the occluded part is then restored. Camera calibration is performed for the background scene into which the object will be embedded. Finally, the object is embedded into the scene. In the experiments, we used synthetic image sequences generated with the OpenGL library for easy evaluation of the 3-D structure estimation. A minimal depth-from-translation sketch follows this entry.

  • PDF
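
As a minimal illustration of structure recovery under the abstract's purely translational motion constraint: for a pinhole camera translating laterally by Tx, a corner's horizontal image displacement is dx = -f·Tx/Z, so relative depth follows directly from the tracked trajectories. The focal length and translation below are assumed to be known; the model matching and occlusion restoration steps are not reproduced.

```python
# Depth of tracked corners from a purely lateral camera translation.
import numpy as np

def depths_from_tracks(x_first, x_last, f, Tx):
    """Estimate depth Z of each tracked corner from its horizontal image
    displacement over a lateral camera translation Tx (pinhole model)."""
    dx = np.asarray(x_last, dtype=float) - np.asarray(x_first, dtype=float)
    dx[np.abs(dx) < 1e-6] = np.nan      # points with no parallax are undetermined
    return -f * Tx / dx

# Usage (assumed calibration values): Z = depths_from_tracks(x0, x1, f=700.0, Tx=0.05)
```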

Fast Scene Change Detection Algorithm

  • Khvan, Dmitriy;Ng, Teck Sheng;Jeong, Jechang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2012.11a / pp.259-262 / 2012
  • In this paper, we propose a new fast algorithm for effective scene change detection. The proposed algorithm exploits the previously proposed Otsu threshold matching technique. In this method, the current and reference frames are divided into square blocks of a particular size, and the pixel histogram of each block is generated. Following the Otsu method, every histogram distribution is assumed to be bimodal, i.e. the pixel distribution can be divided into two groups based on the within-group variance; the pixel value that minimizes the within-group variance is the Otsu threshold. After the Otsu threshold of each block in the current frame is found, the same procedure is performed on the reference frame. If the difference between the Otsu threshold of a block in the current frame and that of the co-located block in the reference frame is larger than a predefined threshold, a scene change between those two blocks is detected. A block-wise sketch of this test follows this entry.

  • PDF
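
A compact sketch of the block-wise Otsu comparison described in the abstract: compute the Otsu threshold of every co-located block in the current and reference frames and flag a scene change when enough block thresholds differ. The block size and the two decision thresholds are illustrative assumptions.

```python
# Block-wise Otsu threshold comparison for scene change detection.
import cv2
import numpy as np

def block_otsu_thresholds(gray, block=32):
    """Otsu threshold of every non-overlapping block x block tile (8-bit gray)."""
    h, w = gray.shape
    th = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            tile = gray[by*block:(by+1)*block, bx*block:(bx+1)*block]
            t, _ = cv2.threshold(tile, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            th[by, bx] = t
    return th

def scene_changed(curr, ref, block=32, diff_th=20, block_ratio=0.5):
    """Declare a scene change if enough co-located blocks differ in Otsu threshold."""
    d = np.abs(block_otsu_thresholds(curr, block) -
               block_otsu_thresholds(ref, block))
    return np.mean(d > diff_th) > block_ratio
```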

Automatic partial shape recognition system using adaptive resonance theory (적응공명이론에 의한 자동 부분형상 인식시스템)

  • 박영태;양진성
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.3 / pp.79-87 / 1996
  • A new method for recognizing and locating partially occluded or overlapped two-dimensional objects, regardless of their size, translation, and rotation, is presented. Dominant points approximating the occluding contours of objects are generated by finding local maxima of a smoothed k-cosine function and are then used to guide the contour segment matching procedure. Primitives between the dominant points are produced by projecting the local contours onto the line between the dominant points. Robust classification of primitives, which is crucial for reliable partial shape matching, is performed using adaptive resonance theory (ART2). The matched primitives having similar scale factors and rotation angles are detected in the Hough space to identify the presence of the given model in the object scene. Finally, the translation vector is estimated by minimizing the mean squared error of the matched contour segment pairs. This model-based matching algorithm may be used in diverse factory automation applications, since models can be added or changed simply by training ART2 adaptively, without modifying the matching algorithm. A sketch of the dominant-point step follows this entry.

  • PDF
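
The sketch below covers only the dominant-point step described above: the k-cosine at contour point i is the cosine of the angle between the vectors to points i-k and i+k, and corners appear as local maxima of its smoothed profile. The values of k and the smoothing width are assumptions; the ART2 primitive classification and Hough-space verification are not reproduced.

```python
# Dominant points of a closed contour via local maxima of the smoothed k-cosine.
import numpy as np

def k_cosine(contour, k=7):
    """contour: (N, 2) array of ordered boundary points of a closed contour."""
    pts = np.asarray(contour, dtype=float)
    a = np.roll(pts, k, axis=0) - pts      # vector to point i-k
    b = np.roll(pts, -k, axis=0) - pts     # vector to point i+k
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12
    return num / den                        # near +1 at sharp corners

def dominant_points(contour, k=7, smooth=5):
    cos = k_cosine(contour, k)
    kernel = np.ones(smooth) / smooth
    padded = np.concatenate([cos[-smooth:], cos, cos[:smooth]])
    cos_s = np.convolve(padded, kernel, mode='same')[smooth:-smooth]
    # Local maxima of the smoothed k-cosine mark candidate dominant points.
    left, right = np.roll(cos_s, 1), np.roll(cos_s, -1)
    return np.where((cos_s > left) & (cos_s >= right))[0]
```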

A NEW LANDSAT IMAGE CO-REGISTRATION AND OUTLIER REMOVAL TECHNIQUES

  • Kim, Jong-Hong;Heo, Joon;Sohn, Hong-Gyoo
    • Proceedings of the KSRS Conference / v.2 / pp.594-597 / 2006
  • Image co-registration is the process of overlaying two images of the same scene, one of which is a reference image while the other (the sensed image) is geometrically transformed to match it. Numerous methods have been developed for automated image co-registration, which is known to be a time-consuming and/or computation-intensive procedure. In order to improve the efficiency and effectiveness of the co-registration of satellite imagery, this paper proposes a pre-qualified area matching, which is composed of feature extraction with a Laplacian filter and an area matching algorithm using the correlation coefficient. Moreover, to improve the accuracy of co-registration, the outliers among the initial matching points should be removed. For this, two outlier detection techniques, the studentized residual and a modified RANSAC algorithm, are used in this study. Three pairs of Landsat images were used for a performance test, and the results were compared and evaluated in terms of robustness and efficiency. A sketch of the pre-qualified area matching idea follows this entry.

  • PDF
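
A sketch of the pre-qualified area matching idea described above: candidate windows are pre-qualified by their Laplacian response in the reference image and then matched in the sensed image by the correlation coefficient (normalized cross-correlation). The window size, search radius, and acceptance threshold are illustrative assumptions; the studentized-residual and modified-RANSAC outlier removal is not reproduced.

```python
# Pre-qualified area matching: Laplacian pre-qualification + NCC matching.
import cv2
import numpy as np

def prequalified_matches(ref, sensed, win=31, search=64, n_points=50):
    """ref, sensed: roughly aligned gray images of the same size."""
    lap = np.abs(cv2.Laplacian(ref, cv2.CV_32F, ksize=3))
    half, margin = win // 2, win // 2 + search
    # Exclude a border so every window and its search region stay inside the image.
    lap[:margin, :] = 0
    lap[-margin:, :] = 0
    lap[:, :margin] = 0
    lap[:, -margin:] = 0
    # Pre-qualify: keep only the strongest Laplacian responses as candidates.
    ys, xs = np.unravel_index(np.argsort(lap, axis=None)[-n_points:], lap.shape)

    matches = []
    for y, x in zip(ys, xs):
        tpl = ref[y - half:y + half + 1, x - half:x + half + 1]
        region = sensed[y - half - search:y + half + search + 1,
                        x - half - search:x + half + search + 1]
        res = cv2.matchTemplate(region, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, (mx, my) = cv2.minMaxLoc(res)
        if score > 0.7:                          # assumed acceptance threshold
            matches.append(((x, y), (x - search + mx, y - search + my)))
    return matches   # [((ref_x, ref_y), (sensed_x, sensed_y)), ...]
```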

A New Landsat Image Co-Registration and Outlier Removal Techniques

  • Kim, Jong-Hong;Heo, Joon;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing / v.22 no.5 / pp.439-443 / 2006
  • Image co-registration is the process of overlaying two images of the same scene, one of which is a reference image while the other (the sensed image) is geometrically transformed to match it. Numerous methods have been developed for automated image co-registration, which is known to be a time-consuming and/or computation-intensive procedure. In order to improve the efficiency and effectiveness of the co-registration of satellite imagery, this paper proposes a pre-qualified area matching, which is composed of feature extraction with a Laplacian filter and an area matching algorithm using the correlation coefficient. Moreover, to improve the accuracy of co-registration, the outliers among the initial matching points should be removed. For this, two outlier detection techniques, the studentized residual and a modified RANSAC algorithm, are used in this study. Three pairs of Landsat images were used for a performance test, and the results were compared and evaluated in terms of robustness and efficiency.

Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo;Park, Seho;Lee, KyungTaek
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.8 / pp.3011-3024 / 2021
  • Pose estimation of a sensor is an important issue in many applications such as robotics, navigation, tracking, and augmented reality. This paper proposes a visual-inertial integration system appropriate for dynamically moving sensor conditions. The orientation estimated from the Inertial Measurement Unit (IMU) sensor is used to calculate the essential matrix based on the intrinsic parameters of the camera. Using epipolar geometry, outliers of the feature point matching are eliminated in the image sequences, and the pose of the sensor can then be obtained from the feature point matching. The use of the IMU sensor helps to initially eliminate erroneous point matches in images of a dynamic scene. After the outliers are removed from the feature points, the selected feature point matching relations are used to calculate a precise fundamental matrix. Finally, with the feature point matching relation, the pose of the sensor is estimated, as sketched in the generic example below. The proposed procedure was implemented and tested in comparison with existing methods, and the experimental results show the effectiveness of the proposed technique.
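
The sketch below is a generic version of the final matching-and-pose stage, not the paper's IMU-derived essential matrix: outlier matches are rejected with a RANSAC epipolar fit and the relative pose is recovered from the surviving inliers, assuming the intrinsic matrix K is known from calibration.

```python
# Generic epipolar outlier rejection and relative pose recovery with OpenCV.
import cv2
import numpy as np

def pose_from_matches(pts_prev, pts_curr, K):
    """pts_prev, pts_curr: (N, 2) float arrays of matched feature points."""
    E, inlier_mask = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                          method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
    # Keep only the matches consistent with the epipolar geometry,
    # then recover the rotation R and translation direction t.
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inlier_mask)
    return R, t, inlier_mask.ravel().astype(bool)
```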