• Title/Summary/Keyword: Affine transform

Soccer Image Sequences Mosaicing Using Reverse Affine Transform

  • Yoon, Ho-Sub;Jung Soh;Min, Byung-Woo;Yang, Young-Kyu
    • Proceedings of the IEEK Conference / 2000.07b / pp.877-880 / 2000
  • In this paper, we develop an algorithm for mosaicing soccer image sequences using a reverse affine transform. Continuous mosaic images of the soccer field allow the viewer to see a “wide picture” of the players’ actions. The first step of our algorithm is the automatic detection and tracking of the players, the ball, and field lines such as the center circle, sidelines, and penalty lines. For this purpose, we use a field extraction algorithm based on color information together with a player and line detection algorithm using four P-rules and two L-rules. The second step is an affine transform that maps points from image coordinates to model coordinates using four predefined, pre-detected points. A forward affine transformation leaves many holes in the target image; to eliminate these holes, we use the reverse affine transform. We tested our method on real image sequences, and the experimental results are reported.
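The hole problem described above is the classic forward-versus-backward mapping issue: pushing source pixels through the affine map leaves gaps in the target, while pulling each target pixel back through the inverse map does not. Below is a minimal numpy sketch of that backward-mapping idea, assuming a 3x3 homogeneous affine matrix and nearest-neighbor sampling; it illustrates the general technique rather than the paper's implementation.

```python
import numpy as np

def reverse_affine_warp(src, A, out_shape):
    """Fill every target pixel by pulling a value through the inverse affine
    map (backward mapping), so the warped image contains no holes."""
    H, W = out_shape
    A_inv = np.linalg.inv(A)                         # target coords -> source coords
    ys, xs = np.mgrid[0:H, 0:W]
    ones = np.ones(H * W)
    tgt = np.stack([xs.ravel(), ys.ravel(), ones])   # homogeneous coords, 3 x N
    src_xy = A_inv @ tgt
    sx = np.round(src_xy[0]).astype(int)             # nearest-neighbor sampling
    sy = np.round(src_xy[1]).astype(int)
    valid = (sx >= 0) & (sx < src.shape[1]) & (sy >= 0) & (sy < src.shape[0])
    out = np.zeros(out_shape, dtype=src.dtype)
    out[ys.ravel()[valid], xs.ravel()[valid]] = src[sy[valid], sx[valid]]
    return out

# Map a 100x100 patch into a 200x300 mosaic with a scale-and-shift affine.
A = np.array([[1.5, 0.0, 40.0],
              [0.0, 1.5, 20.0],
              [0.0, 0.0, 1.0]])
patch = np.random.rand(100, 100)
mosaic = reverse_affine_warp(patch, A, (200, 300))
```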

Viewpoint Unconstrained Face Recognition Based on Affine Local Descriptors and Probabilistic Similarity

  • Gao, Yongbin;Lee, Hyo Jong
    • Journal of Information Processing Systems / v.11 no.4 / pp.643-654 / 2015
  • Face recognition under controlled settings, such as limited viewpoint and illumination change, can achieve good performance nowadays. However, real-world face recognition is still challenging. In this paper, we propose combining Affine Scale Invariant Feature Transform (SIFT) and probabilistic similarity for face recognition under large viewpoint changes. Affine SIFT is an extension of the SIFT algorithm that detects affine-invariant local descriptors; it generates a series of different viewpoints using affine transformations and thereby allows for a viewpoint difference between the gallery face and the probe face. However, the human face is not planar, as it contains significant 3D depth, so Affine SIFT does not work well for large pose changes. To complement it, we combine it with a probabilistic similarity, which obtains the log-likelihood between the probe and gallery faces based on a sum-of-squared-differences (SSD) distribution learned in an offline process. Our experimental results show that the framework achieves noticeably better recognition accuracy than the compared algorithms on the FERET database.
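The probabilistic-similarity step can be read as scoring a probe/gallery pair by the likelihood of its SSD distance under distributions learned offline. The sketch below assumes, purely for illustration, one-dimensional Gaussian models of intra-class and extra-class SSD values; the distributions actually learned by the paper may differ.

```python
import numpy as np

def gaussian_loglik(x, mean, var):
    """Log-likelihood of an SSD value under a 1-D Gaussian."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def probabilistic_similarity(probe_desc, gallery_desc, intra, extra):
    """Score a probe/gallery pair by comparing the SSD's log-likelihood under
    the intra-class ("same person") and extra-class ("different person")
    distributions learned offline. Higher scores mean "more likely the same"."""
    ssd = np.sum((probe_desc - gallery_desc) ** 2)
    return gaussian_loglik(ssd, *intra) - gaussian_loglik(ssd, *extra)

# Offline: estimate (mean, variance) of SSD for same-identity and
# different-identity descriptor pairs from training data (values made up here).
intra_params = (0.8, 0.2)   # hypothetical
extra_params = (2.5, 0.9)   # hypothetical

probe = np.random.rand(128)
gallery = np.random.rand(128)
score = probabilistic_similarity(probe, gallery, intra_params, extra_params)
```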

Adaptive Error Concealment Method Using Affine Transform in the Video Decoder (비디오 복호기에서의 어파인 변환을 이용한 적응적 에러은닉 기법)

  • Kim, Dong-Hyung;Kim, Seung-Jong
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.9C / pp.712-719 / 2008
  • Temporal error concealment refers to algorithms that restore lost video data using the temporal correlation between the previous frame and the current frame containing the loss. Such methods can be categorized as block-based or pixel-based concealment. The method proposed in this paper performs pixel-based temporal error concealment using an affine transform. It performs especially well when the object or background in the lost block has undergone a geometric transform that can be modeled by an affine transform, such as rotation, magnification, or reduction. Furthermore, to maintain good performance even when the available motion vectors represent the motion of different objects, we define a cost function; based on this cost, the proposed method applies affine error concealment adaptively. Simulation results show that the proposed method yields up to 1.9 dB better performance than the method embedded in the H.264/AVC reference software.
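The central operation, concealing a lost block by warping the corresponding area of the previous frame through an affine model, can be sketched as follows. The rotation-plus-scaling model and nearest-neighbor sampling are simplifying assumptions; the paper additionally derives the model from neighboring motion vectors and switches concealment modes with its cost function.

```python
import numpy as np

def conceal_block_affine(prev_frame, x0, y0, size, angle_deg, scale):
    """Fill a lost size x size block at (x0, y0) by backward-mapping its
    pixels through a rotation/scaling affine model into the previous frame."""
    theta = np.deg2rad(angle_deg)
    cx, cy = x0 + size / 2.0, y0 + size / 2.0       # block center
    # Inverse model: current-frame coords -> previous-frame coords.
    A_inv = (1.0 / scale) * np.array([[ np.cos(theta), np.sin(theta)],
                                      [-np.sin(theta), np.cos(theta)]])
    block = np.zeros((size, size), dtype=prev_frame.dtype)
    Hp, Wp = prev_frame.shape
    for dy in range(size):
        for dx in range(size):
            px, py = A_inv @ np.array([x0 + dx - cx, y0 + dy - cy])
            sx, sy = int(round(px + cx)), int(round(py + cy))
            if 0 <= sx < Wp and 0 <= sy < Hp:
                block[dy, dx] = prev_frame[sy, sx]
    return block

# Conceal a 16x16 block assuming a 5-degree rotation and 5% zoom between frames.
prev = np.random.rand(288, 352)
patch = conceal_block_affine(prev, x0=160, y0=120, size=16, angle_deg=5.0, scale=1.05)
```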

Motion Compensation by Affine Transform using Polygonal Matching Algorithm (다각형 정합 알고리듬을 이용한 affine 변환 움직임 보상)

  • Park, Hyo-Seok;Hwang, Chan-Sik
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.1 / pp.60-69 / 1999
  • Motion compensation by affine transform has been proposed as a solution to the artifact problems in very low bit rate video coding, and a hexagonal matching algorithm (HMA) has been proposed for refined motion estimation. However, when an image is divided for affine-transform-based compensation, image objects do not necessarily conform to fixed triangular patterns. In this paper, we propose a method that first divides an image into triangular patches according to its edge information and then further subdivides the image into finer triangular patches where more complicated edge information occurs. Because neighboring blocks then have different triangle patterns, we also propose a polygonal matching algorithm (PMA) for refined motion estimation, and its performance is compared with H.263.
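Triangle-patch motion compensation rests on recovering the six affine coefficients from three matched vertices, which reduces to two small linear solves. The sketch below shows that step in numpy; the vertex format is an illustrative assumption, not the paper's code.

```python
import numpy as np

def affine_from_triangle(src_tri, dst_tri):
    """Solve x' = a*x + b*y + c, y' = d*x + e*y + f from three matched
    triangle vertices; returns the 2x3 affine coefficient matrix."""
    src = np.asarray(src_tri, dtype=float)   # 3 x 2 source vertices
    dst = np.asarray(dst_tri, dtype=float)   # 3 x 2 destination vertices
    M = np.hstack([src, np.ones((3, 1))])    # rows of the form [x, y, 1]
    # Two independent 3x3 systems, one per output coordinate.
    abc = np.linalg.solve(M, dst[:, 0])
    def_ = np.linalg.solve(M, dst[:, 1])
    return np.vstack([abc, def_])            # 2 x 3

# Example: the affine carrying one triangle onto a translated, sheared copy.
src_tri = [(0, 0), (16, 0), (0, 16)]
dst_tri = [(2, 3), (18, 4), (1, 20)]
A = affine_from_triangle(src_tri, dst_tri)
print(A @ np.array([0, 0, 1]))   # -> approximately [2, 3]
```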

A LOCALIZED GLOBAL DEFORMATION MODEL TO TRACK MYOCARDIAL MOTION USING ECHOCARDIOGRAPHY

  • Ahn, Chi Young
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.18 no.2 / pp.181-192 / 2014
  • In this paper, we propose a robust real-time myocardial border tracking algorithm for echocardiography. Commonly, after an initial contour of the LV border is traced at one or two frames of the cardiac cycle, LV contour tracking is performed over the remaining frames. Among the variety of tracking techniques, the optical flow method is the most widely used for motion estimation of moving objects. However, when echocardiography data is heavily corrupted in some local regions, the errors push the tracking points off the endocardial border, resulting in distorted LV contours. This shape distortion often occurs in practice, since data acquisition is affected by ultrasound artifacts, dropout, or shadowing of the cardiac walls. The proposed method deals with this shape distortion problem and preserves a realistic LV shape by applying a global deformation, modeled as an affine transform, piecewise to the contour. We partition the tracking points on the contour into a few groups and determine the affine transform governing the motion of each group of contour points. To compute the coefficients of each affine transform, we use the least squares method with equality constraints given by the relationship between the coefficients and a few contour points showing good tracking results. Experiments on real data show that the proposed method performs better than existing methods.
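The equality-constrained least-squares step can be posed as a small KKT system: minimize ||Ap - b||^2 subject to Cp = d, where p holds the six affine coefficients, (A, b) are built from all tracked points in a partition, and (C, d) force the transform to map a few well-tracked points exactly. The numpy sketch below solves that system under this assumed formulation; it is not the paper's code.

```python
import numpy as np

def constrained_affine_fit(src, dst, fixed_src, fixed_dst):
    """Fit p = (a, b, tx, c, d, ty) minimizing ||A p - b||^2 over all point
    pairs, subject to mapping the well-tracked points exactly (KKT system)."""
    def design(pts):
        # Two rows per point (for x' and y'), in parameter order (a, b, tx, c, d, ty).
        rows = []
        for x, y in pts:
            rows.append([x, y, 1, 0, 0, 0])
            rows.append([0, 0, 0, x, y, 1])
        return np.array(rows, dtype=float)

    A = design(src)
    b = np.asarray(dst, dtype=float).ravel()          # (x1', y1', x2', y2', ...)
    C = design(fixed_src)
    d = np.asarray(fixed_dst, dtype=float).ravel()
    # KKT system: [[2 A^T A, C^T], [C, 0]] [p; lambda] = [2 A^T b; d]
    n, m = A.shape[1], C.shape[0]
    K = np.block([[2 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                                    # the six affine coefficients

# Example: noisy point pairs plus one exactly pinned correspondence.
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(1.1, 2.0), (11.0, 2.1), (0.9, 12.2), (11.1, 12.0)]
p = constrained_affine_fit(src, dst, fixed_src=[(0, 0)], fixed_dst=[(1.0, 2.0)])
```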

Content Based Mesh Motion Estimation in Moving Pictures (동영상에서의 내용기반 메쉬를 이용한 모션 예측)

  • 김형진;이동규;이두수
    • Proceedings of the IEEK Conference / 2000.06d / pp.35-38 / 2000
  • Content-based triangular mesh representation of moving pictures gives better performance in prediction error and visual quality than classical block matching. In particular, if the background and objects can be separated in the image, the objects are modeled by an irregular mesh, which has the advantage of increasing video coding efficiency. This paper presents techniques for mesh generation and mesh-based motion estimation, uses an image warping transform such as the affine transform for image reconstruction, and evaluates the content-based mesh design through computer simulation.
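Reconstructing a frame from a deformed mesh amounts to warping every triangle with its own affine map, which can be done through barycentric coordinates: a pixel's barycentric weights in the destination triangle locate its sample point in the source triangle. The following sketch of a single-triangle warp uses nearest-neighbor sampling and is illustrative rather than the paper's implementation.

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of point p w.r.t. triangle tri (3 x 2 array)."""
    T = np.array([[tri[0][0] - tri[2][0], tri[1][0] - tri[2][0]],
                  [tri[0][1] - tri[2][1], tri[1][1] - tri[2][1]]], dtype=float)
    w = np.linalg.solve(T, np.asarray(p, dtype=float) - tri[2])
    return np.array([w[0], w[1], 1.0 - w[0] - w[1]])

def warp_triangle(src_img, src_tri, dst_tri, dst_img):
    """Fill the destination triangle by pulling each of its pixels back to
    the source triangle through the shared barycentric coordinates."""
    dst_tri = np.asarray(dst_tri, dtype=float)
    src_tri = np.asarray(src_tri, dtype=float)
    xmin, ymin = np.floor(dst_tri.min(axis=0)).astype(int)
    xmax, ymax = np.ceil(dst_tri.max(axis=0)).astype(int)
    for y in range(ymin, ymax + 1):
        for x in range(xmin, xmax + 1):
            w = barycentric((x, y), dst_tri)
            if np.all(w >= -1e-9):                      # pixel lies inside the triangle
                sx, sy = w @ src_tri                    # same weights in the source triangle
                sy, sx = int(round(sy)), int(round(sx))
                if 0 <= sy < src_img.shape[0] and 0 <= sx < src_img.shape[1]:
                    dst_img[y, x] = src_img[sy, sx]

# Warp one mesh triangle of a random frame into its new position.
src = np.random.rand(64, 64)
dst = np.zeros_like(src)
warp_triangle(src, [(5, 5), (40, 10), (10, 45)], [(8, 6), (45, 12), (12, 50)], dst)
```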

Affine Transform Coding for Image Transmission (영상 전송을 위한 어핀변환 부호화)

  • 김정일
    • Journal of the Korea Society of Computer and Information / v.4 no.2 / pp.135-140 / 1999
  • This paper describes an affine transform coding scheme that reduces the long encoding time of image coding by using a scaling method and a limited search area technique. To evaluate its performance, the proposed algorithm is compared with Jacquin's method, which uses the traditional affine transform coding approach. Simulation results show that the proposed algorithm considerably reduces encoding time through the scaling and limited search area methods, providing a much shorter encoding time than Jacquin's method with only a slight degradation of decoded image quality.
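In Jacquin-style affine (fractal) coding, each small range block is matched against larger domain blocks; a limited search area restricts that search to a window around the range block, and each candidate is fitted with a least-squares intensity scale and offset. The toy sketch below assumes grayscale images, 4x4 range blocks, 8x8 domain blocks, and an arbitrary window radius; these values are illustrative, not the paper's settings.

```python
import numpy as np

def best_domain_match(img, rx, ry, bs=4, radius=8):
    """For the range block at (rx, ry), search only a limited window of
    domain blocks, fit intensity scale s and offset o by least squares, and
    return the best (error, position, s, o)."""
    R = img[ry:ry + bs, rx:rx + bs].astype(float).ravel()
    best = (np.inf, None, 0.0, 0.0)
    H, W = img.shape
    for dy in range(max(0, ry - radius), min(H - 2 * bs, ry + radius) + 1):
        for dx in range(max(0, rx - radius), min(W - 2 * bs, rx + radius) + 1):
            dom = img[dy:dy + 2 * bs, dx:dx + 2 * bs].astype(float)
            # Shrink the 8x8 domain block to 4x4 by 2x2 averaging.
            D = dom.reshape(bs, 2, bs, 2).mean(axis=(1, 3)).ravel()
            s, o = np.polyfit(D, R, 1)          # least-squares fit R ~ s*D + o
            err = np.sum((s * D + o - R) ** 2)
            if err < best[0]:
                best = (err, (dx, dy), s, o)
    return best

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
err, pos, s, o = best_domain_match(img, rx=20, ry=20)
```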

Improvement of ASIFT for Object Matching Based on Optimized Random Sampling

  • Phan, Dung;Kim, Soo Hyung;Na, In Seop
    • International Journal of Contents / v.9 no.2 / pp.1-7 / 2013
  • This paper proposes an efficient matching algorithm based on ASIFT (Affine Scale-Invariant Feature Transform), which is fully invariant to affine transformation. Our approach reduces both the similarity-measure matching cost and the number of outliers. First, we replace the Euclidean metric with a linear combination of the Manhattan and Chessboard metrics for measuring the similarity of keypoints. These two metrics are simple but efficient; with them, the computation time of the matching step is reduced and the number of correct matches is increased. By applying an Optimized Random Sampling Algorithm (ORSA), we then remove most of the outlier matches to make the result meaningful. The method was tested on various combinations of affine transforms, and the experimental results show that it is superior to SIFT and ASIFT.
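The proposed similarity measure replaces the Euclidean distance with a linear combination of the Manhattan (L1) and Chessboard (Chebyshev) metrics. The sketch below applies such a combined metric inside a standard nearest-neighbor matcher with a ratio test; the 0.5/0.5 weights and the 0.8 threshold are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def combined_distance(d1, d2, w1=0.5, w2=0.5):
    """Linear combination of the Manhattan (L1) and Chessboard (Chebyshev) distances."""
    diff = np.abs(d1 - d2)
    return w1 * diff.sum() + w2 * diff.max()

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor matching with a Lowe-style ratio test, using the
    combined metric instead of the Euclidean distance."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = np.array([combined_distance(da, db) for db in desc_b])
        j, k = np.argsort(dists)[:2]          # best and second-best candidates
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

# Toy descriptor sets standing in for ASIFT keypoint descriptors.
desc_a = np.random.rand(50, 128)
desc_b = np.random.rand(60, 128)
matches = match_descriptors(desc_a, desc_b)
```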

Modified Particle Filtering for Unstable Handheld Camera-Based Object Tracking

  • Lee, Seungwon;Hayes, Monson H.;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing / v.1 no.2 / pp.78-87 / 2012
  • In this paper, we address the tracking problems caused by camera motion and by the rolling shutter effects of the CMOS sensors used in consumer handheld cameras, such as mobile cameras, digital cameras, and digital camcorders. A modified particle filtering method is proposed for simultaneously tracking objects and compensating for the effects of camera motion. The proposed method uses an elastic registration (ER) algorithm that considers global affine motion as well as the brightness and contrast between images, under the assumption that camera motion produces an affine transform of the image between two successive frames. Because the camera motion is modeled globally by an affine transform, only the global affine model is considered rather than a local model, and only the brightness parameter is used for intensity variation; the contrast parameters of the original ER algorithm are ignored because the illumination change between temporally adjacent frames is small. The proposed particle filtering consists of four steps: (i) prediction, (ii) compensation of the prediction-state error based on camera motion estimation, (iii) update, and (iv) re-sampling. More particles are needed when camera motion introduces a prediction-state error for an object at the prediction step. The proposed method tracks the object of interest robustly by compensating for this prediction-state error using the affine motion model estimated by ER. Experimental results show that the proposed method outperforms the conventional particle filter and can track moving objects robustly in consumer handheld imaging devices.
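Step (ii), compensating the predicted particle states for camera motion, amounts to pushing each particle's position through the global affine transform estimated between consecutive frames. The sketch below assumes a particle state of (x, y, vx, vy) and a 2x3 affine matrix supplied by the registration step; both are illustrative assumptions.

```python
import numpy as np

def compensate_particles(particles, affine_2x3):
    """Map each particle's predicted (x, y) through the global affine camera
    motion estimated by elastic registration, leaving other state dims alone."""
    xy = particles[:, :2]                                  # N x 2 positions
    ones = np.ones((xy.shape[0], 1))
    xy_h = np.hstack([xy, ones])                           # homogeneous, N x 3
    compensated = particles.copy()
    compensated[:, :2] = xy_h @ affine_2x3.T               # apply [A | t]
    return compensated

# Example: 100 particles with state (x, y, vx, vy); camera pans 3 px right, 1 px down.
particles = np.random.rand(100, 4) * np.array([352, 288, 2, 2])
A = np.array([[1.0, 0.0, 3.0],
              [0.0, 1.0, 1.0]])
particles = compensate_particles(particles, A)
```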

Affine Local Descriptors for Viewpoint Invariant Face Recognition

  • Gao, Yongbin;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.781-784 / 2014
  • Face recognition under controlled settings, such as limited viewpoint and illumination change, can achieve good performance nowadays. However, real-world face recognition is still challenging. In this paper, we use Affine SIFT to detect affine-invariant local descriptors for face recognition under large viewpoint changes. Affine SIFT is an extension of the SIFT algorithm. SIFT is scale and rotation invariant, which makes it effective for small viewpoint changes in face recognition, but it fails when a large viewpoint change exists. In our scheme, Affine SIFT is applied to both the gallery face and the probe face, generating a series of different viewpoints using affine transformations. Affine SIFT therefore tolerates a viewpoint difference between the gallery and probe faces. Experimental results show that our framework achieves better recognition accuracy than the SIFT algorithm on the FERET database.
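Affine SIFT builds its simulated viewpoints by rotating the image and then compressing it along one axis (a "tilt") before running SIFT on each view. The sketch below generates such views with OpenCV warps; the tilt and angle schedule is a simplified stand-in for the sampling actually used by ASIFT, not the paper's exact parameters.

```python
import numpy as np
import cv2

def simulate_views(image, tilts=(1.0, 1.4, 2.0), angles=(0, 45, 90, 135)):
    """Generate affine-simulated viewpoints: rotate the image, then apply a
    directional tilt (anisotropic subsampling of the x-axis)."""
    h, w = image.shape[:2]
    views = []
    for t in tilts:
        for a in angles:
            R = cv2.getRotationMatrix2D((w / 2, h / 2), a, 1.0)   # 2x3 rotation
            rotated = cv2.warpAffine(image, R, (w, h))
            # Tilt t compresses the x-axis by 1/t; INTER_AREA roughly anti-aliases.
            tilted = cv2.resize(rotated, (max(1, int(w / t)), h),
                                interpolation=cv2.INTER_AREA)
            views.append(((t, a), tilted))
    return views

# Synthetic stand-in for a face image; run SIFT on every simulated view of both
# the gallery and the probe face, then match descriptors across all view pairs.
face = (np.random.rand(128, 128) * 255).astype(np.uint8)
views = simulate_views(face)
```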