• Title/Summary/Keyword: Image Matching.


Low Complexity Motion Estimation Based on Spatio-Temporal Correlations (시간적-공간적 상관성을 이용한 저 복잡도 움직임 추정)

  • Yoon Hyo-Sun;Kim Mi-Young;Lee Guee-Sang
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.9
    • /
    • pp.1142-1149
    • /
    • 2004
  • Motion estimation (ME) has been developed to reduce temporal redundancy in digital video signals and to increase the data compression ratio. ME is an important part of video encoding systems, since it can significantly affect the output quality of encoded sequences. However, because ME requires high computational complexity, it is difficult to apply to real-time video transmission. For this reason, motion estimation algorithms with low computational complexity are viable solutions. In this paper, we present an efficient method with low computational complexity based on spatial and temporal correlations of motion vectors. The proposed method uses temporally and spatially correlated motion information, namely the motion vector of the block at the same coordinates in the reference frame and the motion vectors of neighboring blocks around the current block in the current frame, to adaptively decide the search pattern and the location of the search starting point. Experiments show that the proposed method improves image quality over MVFAST (Motion Vector Field Adaptive Search Technique) and PMVFAST (Predictive Motion Vector Field Adaptive Search Technique) by 0.01~0.3 dB and is about 1.12~1.33 times faster owing to its lower computational complexity.
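
A minimal sketch of the kind of predictor-based search initialization this abstract describes: the co-located motion vector from the reference frame and the neighboring vectors already estimated in the current frame are combined to pick a search starting point, and a small or large search pattern is chosen depending on how well the predictors agree. The block layout, median combination, and spread threshold are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def predict_start_mv(mv_field_prev, mv_field_cur, bx, by):
    """Predict a starting motion vector from spatio-temporal neighbours.

    mv_field_prev : (H, W, 2) motion vectors of the reference (previous) frame
    mv_field_cur  : (H, W, 2) motion vectors already estimated in the current frame
    (bx, by)      : block coordinates in block units
    """
    candidates = [mv_field_prev[by, bx]]                 # temporally co-located block
    if bx > 0:
        candidates.append(mv_field_cur[by, bx - 1])      # left neighbour
    if by > 0:
        candidates.append(mv_field_cur[by - 1, bx])      # top neighbour
        if bx + 1 < mv_field_cur.shape[1]:
            candidates.append(mv_field_cur[by - 1, bx + 1])  # top-right neighbour
    candidates = np.array(candidates)
    start = np.median(candidates, axis=0).astype(int)    # robust combination of predictors
    # If predictors agree, a small search pattern suffices; otherwise use a larger one.
    spread = np.abs(candidates - start).max()
    pattern = "small" if spread <= 1 else "large"
    return start, pattern
```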

Virtual core point detection and ROI extraction for finger vein recognition (지정맥 인식을 위한 가상 코어점 검출 및 ROI 추출)

  • Lee, Ju-Won;Lee, Byeong-Ro
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.10 no.3
    • /
    • pp.249-255
    • /
    • 2017
  • Finger vein recognition is a technology that acquires a finger vein image by illuminating the finger with infrared light and authenticates a person through processes such as feature extraction and matching. To detect the finger edges for recognition, a 2D mask-based two-dimensional convolution can be used, but it takes too much computation time when applied to a low-cost microprocessor or microcontroller. To solve this problem and improve the recognition rate, this study proposed a method that extracts the region of interest based on virtual core points and moving-average filtering driven by a threshold on the absolute difference between pixels, without using 2D convolution or 2D masks. To evaluate the performance of the proposed method, 600 finger vein images were used to compare the edge extraction speed and the accuracy of ROI extraction between the proposed method and existing methods. The comparison showed that the processing speed of the proposed method was at least twice that of the existing methods and the accuracy of ROI extraction was 6% higher. From these results, the proposed method is expected to offer high processing speed and a high recognition rate when applied to inexpensive microprocessors.
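
A rough sketch of the convolution-free edge search the abstract alludes to: each row's intensity profile is smoothed with a 1D moving average, the absolute difference between neighbouring samples is thresholded, and the outermost transitions are taken as the finger boundaries. The window size and threshold are assumed values, and this is not the authors' exact formulation.

```python
import numpy as np

def finger_edges_1d(image, win=9, thresh=12):
    """Locate left/right finger boundaries per row without 2D convolution.

    image  : grayscale finger vein image as a 2D uint8 array
    win    : 1D moving-average window size (assumed)
    thresh : threshold on the absolute difference of smoothed pixels (assumed)
    """
    h, w = image.shape
    edges = np.zeros((h, 2), dtype=int)
    kernel = np.ones(win) / win
    for y in range(h):
        smooth = np.convolve(image[y].astype(np.float32), kernel, mode="same")
        diff = np.abs(np.diff(smooth))
        strong = np.where(diff > thresh)[0]
        if strong.size >= 2:
            edges[y] = (strong[0], strong[-1])   # leftmost and rightmost transitions
    return edges
```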

Evaluation on Tie Point Extraction Methods of WorldView-2 Stereo Images to Analyze Height Information of Buildings (건물의 높이 정보 분석을 위한 WorldView-2 스테레오 영상의 정합점 추출방법 평가)

  • Kim, Yeji;Kim, Yongil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.33 no.5
    • /
    • pp.407-414
    • /
    • 2015
  • Interest points are generally located at pixels where height changes occur, so they can be significant pixels for DSM generation and play an important role in producing accurate and reliable matching results. Manual operation is widely used to extract interest points and to match stereo satellite images for generating height information, but it is costly and time consuming. Thus, in this study, a tie point extraction method using the Harris-affine detector and SIFT (Scale Invariant Feature Transform) descriptors is suggested to analyze the height information of buildings. Interest points on buildings were extracted by the Harris-affine detector, and tie points were collected efficiently by SIFT descriptors, which are scale invariant. A search window for each interest point was used, and the direction of tie point pairs was considered for more efficient tie point extraction. The tie point pairs estimated by the proposed method were used to analyze the height information of buildings. The result had RMSE values of less than 2 m compared to the height information estimated by the manual method.
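
A short sketch of descriptor-based tie point collection in the spirit of this abstract. OpenCV has no built-in Harris-affine detector, so SIFT keypoints stand in for the interest points; only the descriptor-matching stage with Lowe's ratio test mirrors the described workflow, and the ratio value is an assumption.

```python
import cv2

def extract_tie_points(img_left, img_right, ratio=0.75):
    """Match tie points between a stereo pair with SIFT descriptors."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)

    tie_points = []
    for m, n in matches:
        if m.distance < ratio * n.distance:            # ratio test rejects ambiguous matches
            tie_points.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return tie_points

# Usage: pts = extract_tie_points(cv2.imread("left.tif", 0), cv2.imread("right.tif", 0))
```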

Motion Linearity-based Frame Rate Up Conversion Method (선형 움직임 기반 프레임률 향상 기법)

  • Kim, Donghyung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.7
    • /
    • pp.734-740
    • /
    • 2017
  • A frame rate up-conversion scheme is needed when moving pictures with a low frame rate are played on devices with a high frame rate. Frame rate up-conversion methods interpolate a frame from two consecutive frames of the original source. They can be divided into frame repetition methods and motion estimation-based frame interpolation methods. Frame repetition has very low complexity, but it can yield jerky artifacts. Interpolation methods based on motion estimation and compensation can be further divided into pixel-based and block-based interpolation. In pixel-based interpolation, the interpolated frame is classified into four areas, which are interpolated using different methods. Block-based interpolation has relatively low complexity, but it can yield blocking artifacts. The proposed method is a frame rate up-conversion method based on block motion estimation and compensation that exploits the linearity of motion. It uses two previous frames and one next frame for motion estimation and compensation. The simulation results show that the proposed algorithm effectively enhances objective quality, particularly for high-resolution images. In addition, the proposed method has similar or higher subjective quality than other conventional approaches.
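
A simplified two-frame sketch of motion-compensated frame interpolation under a linear-motion assumption: block motion vectors are estimated by full search between two consecutive frames, and each block is placed halfway along its motion vector in the interpolated frame. The paper's scheme uses two previous frames and one next frame; this sketch uses only two frames, and block size, search range, and the uncovered-area handling are assumptions.

```python
import numpy as np

def block_me(prev, cur, bsize=16, search=8):
    """Full-search block motion estimation (SAD criterion) from cur to prev."""
    h, w = cur.shape
    mvs = np.zeros((h // bsize, w // bsize, 2), dtype=int)
    for by in range(h // bsize):
        for bx in range(w // bsize):
            y, x = by * bsize, bx * bsize
            blk = cur[y:y + bsize, x:x + bsize].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - bsize and 0 <= xx <= w - bsize:
                        cand = prev[yy:yy + bsize, xx:xx + bsize].astype(np.int32)
                        cost = np.abs(blk - cand).sum()
                        if best is None or cost < best:
                            best, best_mv = cost, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs

def interpolate_frame(prev, cur, mvs, bsize=16):
    """Place each block halfway along its motion vector, assuming linear motion."""
    out = np.copy(prev)                                  # uncovered areas fall back to prev
    h, w = cur.shape
    for by in range(h // bsize):
        for bx in range(w // bsize):
            dy, dx = mvs[by, bx]
            y, x = by * bsize, bx * bsize
            ty = min(max(y + dy // 2, 0), h - bsize)     # halfway position
            tx = min(max(x + dx // 2, 0), w - bsize)
            out[ty:ty + bsize, tx:tx + bsize] = (
                prev[y + dy:y + dy + bsize, x + dx:x + dx + bsize] // 2
                + cur[y:y + bsize, x:x + bsize] // 2     # average both references
            )
    return out
```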

A Study on the Integrated System Implementation of Close Range Digital Photogrammetry Procedures (근거리 수치사진측량 과정의 단일 통합환경 구축에 관한 연구)

  • Yeu, Bock-Mo;Lee, Suk-Kun;Choi, Song-Wook;Kim, Eui-Myoung
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.7 no.1 s.13
    • /
    • pp.53-63
    • /
    • 1999
  • For close-range digital photogrammetry, multi-step procedures should be embodied in an integrated system. However, it is hard to construct an integrated system through conventional procedural processing. Using object-oriented programming (OOP), photogrammetric processing can be classified into corresponding subjects, and it is easy to construct an integrated system for digital photogrammetry as well as to add newly developed classes. In this study, a 3-dimensional mathematical model is developed for immediate calibration of a CCD camera whose focus distance varies according to the distance to the object. Classes for the input and output of images are also generated to carry out the close-range digital photogrammetric procedures with OOP. Image matching, coordinate transformation, direct linear transformation and bundle adjustment are performed by producing classes corresponding to each part of the data processing. The bundle adjustment, which adds principal point coordinate and focal length terms for the non-photogrammetric CCD camera, is found to increase the usability of the CCD camera and the accuracy of object positioning. In conclusion, classes and their hierarchies for digital photogrammetry are designed to manage the multi-step procedures using OOP, and the close-range digital photogrammetric process is implemented with a CCD camera in an integrated system.
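
A hypothetical class layout in the spirit of this abstract, showing how each photogrammetric procedure can be wrapped in its own class so that new processing steps slot into the integrated pipeline. Class names and the pipeline interface are illustrative; the processing bodies are omitted.

```python
from abc import ABC, abstractmethod

class ProcessingStep(ABC):
    """Each photogrammetric procedure is encapsulated in its own class."""
    @abstractmethod
    def run(self, data: dict) -> dict:
        ...

class ImageMatching(ProcessingStep):
    def run(self, data):
        # find conjugate points between overlapping images (body omitted)
        data["tie_points"] = []
        return data

class DirectLinearTransformation(ProcessingStep):
    def run(self, data):
        # compute approximate orientation from control points (body omitted)
        data["approx_orientation"] = None
        return data

class BundleAdjustment(ProcessingStep):
    def run(self, data):
        # refine orientation; extra principal point and focal length parameters
        # model the non-metric CCD camera (body omitted)
        data["adjusted"] = True
        return data

class PhotogrammetryPipeline:
    """The integrated environment chains the steps; newly developed classes slot in freely."""
    def __init__(self, steps):
        self.steps = steps
    def process(self, data):
        for step in self.steps:
            data = step.run(data)
        return data

pipeline = PhotogrammetryPipeline(
    [ImageMatching(), DirectLinearTransformation(), BundleAdjustment()])
result = pipeline.process({"images": []})
```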


Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4A
    • /
    • pp.239-249
    • /
    • 2012
  • Recently, virtual view generation using depth data has been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user during 3D video rendering, its accuracy is very important since it determines the quality of the generated virtual view image. Many works address such depth enhancement by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution cameras at both sides. Since we need depth data for both color cameras, we obtain the two views' depth data from the center view using a 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. To reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate views. To realize a fast capturing system, we implemented the proposed system using multi-threading. Experimental results show that the proposed system captures the two viewpoints' color and depth videos in real time and generates the 10 additional views at 7 fps.
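
A rough sketch of the depth refinement steps named in the abstract, assuming 8-bit depth and color images and the availability of opencv-contrib-python for cv2.ximgproc. The row-wise background fill and the filter parameters are illustrative stand-ins, not the authors' exact procedure.

```python
import cv2
import numpy as np

def refine_warped_depth(warped_depth, color, hole_value=0):
    """Fill warping holes, then align depth edges to the color image.

    warped_depth : 2D uint8 depth map produced by 3D warping (holes = hole_value)
    color        : corresponding 8-bit color image used as the guidance image
    """
    depth = warped_depth.copy()
    h, w = depth.shape
    for y in range(h):
        row = depth[y]
        for x in np.where(row == hole_value)[0]:
            left = row[:x][row[:x] != hole_value]
            right = row[x + 1:][row[x + 1:] != hole_value]
            candidates = []
            if left.size:
                candidates.append(left[-1])
            if right.size:
                candidates.append(right[0])
            if candidates:
                # assume smaller depth value means farther away, i.e. background
                row[x] = min(candidates)
    # joint bilateral filter from opencv-contrib; parameters are assumed values
    return cv2.ximgproc.jointBilateralFilter(color, depth, 9, 25, 9)
```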

Development and Manufacture of W-band MMIC Chip and Manufacture of Transceiver (W-대역 MMIC 칩 국내 개발 및 송수신기 제작)

  • Kim, Wansik;Jung, Jooyong;Kim, Younggon;Kim, Jongpil;Seo, Mihui;Kim, Sosu
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.6
    • /
    • pp.175-181
    • /
    • 2019
  • For application to small radar sensors, the MMIC chips that form the core components of the W-band were designed in Korea according to the characteristics of the transceiver, manufactured in a 0.1㎛ GaAs pHEMT process, and compared with MMIC chips purchased overseas. The noise figure of the low noise amplifier, the insertion loss of the switch, and the image rejection of the down-conversion mixer MMIC chip showed better characteristics than those of the commercial chips. The domestically developed MMIC chips were applied to the transmitter and receiver through a low-loss W-band waveguide transition structure design and impedance matching; the performance verified after fabrication was 9.17 dB, which is close to the analysis result. As a result, it is judged that the transceiver can be applied to small radar sensors, performing better than with the MMIC chips purchased overseas.

Localization of A Moving Vehicle using Backward-looking Camera and 3D Road Map (후방 카메라 영상과 3차원 도로지도를 이용한 이동차량의 위치인식)

  • Choi, Sung-In;Park, Soon-Yong
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.3
    • /
    • pp.160-173
    • /
    • 2013
  • In this paper, we propose a new visual odometry technique that combines a forward-looking stereo camera and a backward-looking monocular camera. The main goal of the proposed technique is to identify the location of a moving vehicle that travels a long distance and comes back to its initial position in urban road environments. While the vehicle is moving toward the destination, a global 3D map is updated continuously by a stereo visual odometry technique based on a graph structure. Once the vehicle reaches the destination and begins to return to the initial position, a map-based monocular visual odometry technique is used. To estimate the position of the returning vehicle accurately, 2D features in the backward-looking camera image are matched to the global map. In addition, we utilize the previously matched nodes to limit the search range for the next vehicle position in the global map. We analyze the accuracy of the proposed method on two navigation paths.
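
A generic 2D-3D localization sketch of the map-based step described above: image descriptors are matched against descriptors stored with the 3D map landmarks, and the camera pose is recovered with RANSAC PnP. This is not the authors' exact pipeline; the matcher, ratio test, and map layout are assumptions.

```python
import cv2
import numpy as np

def localize_against_map(map_points_3d, map_descriptors,
                         img_keypoints, img_descriptors, K, ratio=0.8):
    """Estimate camera pose by matching image features to a global 3D map.

    map_points_3d   : (N, 3) 3D positions of map landmarks
    map_descriptors : (N, D) descriptors stored with the map
    img_keypoints   : list of cv2.KeyPoint from the backward-looking image
    img_descriptors : (M, D) descriptors of those keypoints
    K               : 3x3 camera intrinsic matrix
    """
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(img_descriptors, map_descriptors, k=2)

    obj_pts, img_pts = [], []
    for m, n in matches:
        if m.distance < ratio * n.distance:              # keep unambiguous matches only
            obj_pts.append(map_points_3d[m.trainIdx])
            img_pts.append(img_keypoints[m.queryIdx].pt)
    if len(obj_pts) < 4:                                 # PnP needs at least 4 points
        return None

    obj_pts = np.asarray(obj_pts, dtype=np.float32)
    img_pts = np.asarray(img_pts, dtype=np.float32)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return (rvec, tvec) if ok else None
```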

A Feature Point Extraction and Identification Technique for Immersive Contents Using Deep Learning (딥 러닝을 이용한 실감형 콘텐츠 특징점 추출 및 식별 방법)

  • Park, Byeongchan;Jang, Seyoung;Yoo, Injae;Lee, Jaechung;Kim, Seok-Yoon;Kim, Youngmo
    • Journal of IKEEE
    • /
    • v.24 no.2
    • /
    • pp.529-535
    • /
    • 2020
  • As a main technology of the 4th industrial revolution, immersive 360-degree video content is drawing attention. The worldwide market size of immersive 360-degree video content is projected to increase from $6.7 billion in 2018 to approximately $70 billion in 2020. However, most immersive 360-degree video content is distributed through illegal distribution networks such as Webhard and Torrent, and the damage caused by illegal reproduction is increasing. The existing 2D video industry uses copyright filtering technology to prevent such illegal distribution. The technical difficulties in dealing with immersive 360-degree videos arise because they require ultra-high quality pictures and contain images captured by two or more cameras merged into one image, which creates distortion regions. There are also technical limitations such as the increase in the amount of feature point data due to the ultra-high definition and the required processing speed. These considerations make it difficult to apply the same 2D filtering technology to 360-degree videos. To solve this problem, this paper suggests a feature point extraction and identification technique that selects object identification areas excluding regions with severe distortion, recognizes objects in those areas using deep learning, and extracts feature points from the identified object information. Compared with the previously proposed method of extracting feature points from the stitching area of immersive content, the proposed technique shows an excellent performance gain.
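
A schematic sketch of the pipeline the abstract outlines: exclude heavily distorted areas of the equirectangular frame, run an object detector on what remains, and extract feature points only inside the detected object regions. The detector is passed in as a callable placeholder rather than a specific network, the pole fraction is an assumed proxy for the distortion regions, and SIFT stands in for the paper's feature extractor.

```python
import cv2

def extract_identification_features(frame_equirect, detect_objects,
                                    pole_fraction=0.15):
    """Extract feature points only from detected objects in low-distortion areas.

    frame_equirect : equirectangular 360-degree frame (H x W x 3, BGR)
    detect_objects : callable returning [(x, y, w, h), ...] bounding boxes;
                     stands in for the deep-learning detector in the abstract
    pole_fraction  : top/bottom fraction treated as severely distorted (assumed)
    """
    h, w = frame_equirect.shape[:2]
    top, bottom = int(h * pole_fraction), int(h * (1 - pole_fraction))
    identification_area = frame_equirect[top:bottom]      # exclude distorted poles

    sift = cv2.SIFT_create()
    features = []
    for (x, y, bw, bh) in detect_objects(identification_area):
        roi = identification_area[y:y + bh, x:x + bw]
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        kps, des = sift.detectAndCompute(gray, None)
        if des is not None:
            features.append({"box": (x, y + top, bw, bh), "descriptors": des})
    return features
```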

A Robust Algorithm for Tracking Feature Points with Incomplete Trajectories (불완전한 궤적을 고려한 강건한 특징점 추적 알고리즘)

  • Jeong, Jong-Myeon;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.37 no.6
    • /
    • pp.25-37
    • /
    • 2000
  • The trajectories of feature points are defined by the correspondences between points in consecutive frames. The correspondence problem is known to be difficult to solve because false positives and false negatives almost always exist in real image sequences. In this paper, we propose a robust feature tracking algorithm that considers incomplete trajectories such as entering and/or vanishing trajectories. The trajectories of feature points are determined by calculating a matching measure, defined as the minimum weighted Euclidean distance between two feature points. The weights are updated automatically in order to properly reflect the motion characteristics. We solve the correspondence problem as an optimal graph search problem, considering that the existence of false feature points may have a serious effect on the correspondence search. The proposed algorithm finds a locally optimal correspondence so that the effect of false feature points is minimized in the decision process. The time complexity of the proposed graph search algorithm is O(mn) in the best case and O($m^2n$) in the worst case, where m and n are the numbers of feature points in two consecutive frames. By considering false feature points and properly reflecting motion characteristics, the proposed algorithm finds trajectories correctly and robustly, as shown by experimental results.
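
A simplified sketch of weighted-distance point correspondence with room for incomplete trajectories: the matching measure is a weighted Euclidean distance, and points whose best match exceeds a gating distance are left unmatched, so entering and vanishing points stay incomplete. A Hungarian assignment replaces the paper's graph search, and the weights and gate value are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_feature_points(prev_pts, cur_pts, weights=(1.0, 1.0), gate=30.0):
    """Correspond feature points between two frames by weighted Euclidean distance.

    prev_pts, cur_pts : (m, 2) and (n, 2) arrays of point coordinates
    weights           : per-axis weights of the distance measure (assumed)
    gate              : distance above which a point is left unmatched, so that
                        entering and vanishing trajectories remain incomplete
    """
    wx, wy = weights
    diff = prev_pts[:, None, :] - cur_pts[None, :, :]
    cost = np.sqrt(wx * diff[..., 0] ** 2 + wy * diff[..., 1] ** 2)

    rows, cols = linear_sum_assignment(cost)             # globally optimal assignment
    matches, unmatched_prev = [], set(range(len(prev_pts)))
    for r, c in zip(rows, cols):
        if cost[r, c] <= gate:                           # reject implausibly long jumps
            matches.append((r, c))
            unmatched_prev.discard(r)
    unmatched_cur = set(range(len(cur_pts))) - {c for _, c in matches}
    return matches, sorted(unmatched_prev), sorted(unmatched_cur)  # vanishing / entering
```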
