• Title/Summary/Keyword: 광류 추정 방법 (optical flow estimation methods)


Algorithm for Arbitrary Point Tracking using Pyramidal Optical Flow (피라미드 기반 광류 추정을 이용한 영상 내의 임의의 점 추적 알고리즘)

  • Lee, Jae-Kwang;Park, Chang-Joon
    • Journal of Korea Multimedia Society / v.10 no.11 / pp.1407-1416 / 2007
  • This paper describes an algorithm for tracking arbitrary points using pyramidal optical flow. The optical flow is calculated based on the Lucas-Kanade optical flow estimation. The image pyramid is employed so that large motions can be handled while remaining sensitive to small motions. Furthermore, a rectification process is proposed to reduce the error that accumulates as the computation proceeds down to the lower levels of the image pyramid. The accuracy of the optical flow estimation is increased by using several constraints and sub-pixel interpolation of the optical flow, which allows the algorithm to track points that do not lie on features such as edges or corners. The proposed algorithm has been implemented, and preliminary results are presented.
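
A minimal sketch of pyramidal Lucas-Kanade tracking of arbitrary, user-chosen points with OpenCV; the pyramid depth, window size, termination criteria, and the file name are illustrative placeholders, not the parameters used in the paper.

```python
import cv2
import numpy as np

# Pyramidal Lucas-Kanade parameters (illustrative values, not the paper's).
lk_params = dict(
    winSize=(21, 21),          # integration window per pyramid level
    maxLevel=3,                # number of pyramid levels above the base image
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01),
)

def track_points(prev_gray, next_gray, points):
    """Track arbitrary (x, y) points from prev_gray to next_gray.

    `points` is an (N, 2) float32 array; the points need not lie on corners,
    since calcOpticalFlowPyrLK refines the flow to sub-pixel accuracy.
    """
    pts = points.reshape(-1, 1, 2).astype(np.float32)
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None, **lk_params)
    ok = status.ravel() == 1
    return next_pts.reshape(-1, 2), ok

# Usage: track two hand-picked points across consecutive frames of a video.
cap = cv2.VideoCapture("input.avi")            # hypothetical file name
ret, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
points = np.array([[120.0, 80.0], [200.5, 150.25]], dtype=np.float32)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    points, ok = track_points(prev_gray, gray, points)
    prev_gray = gray
```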


An Adaptive Block Matching Motion Estimation Method Using Optical Flow (광류를 이용한 적응적인 블록 정합 움직임 추정 기법)

  • Kim, Kyoung-Kyoo;Park, Kyung-Nam
    • Journal of Korea Society of Industrial Information Systems / v.13 no.1 / pp.57-67 / 2008
  • In this paper, we present an adaptive block matching motion estimation method using optical flow. In the proposed algorithm, the temporal and spatial gradient values are calculated for each pixel with a differential filter, and the optical flow, which is used to decide the location and size of the search region, is estimated from these gradients by the least-squares optical flow algorithm. The proposed algorithm shows particularly good performance on sequences with fast and complex motion. Computer simulations on sequences with various motion characteristics show that the proposed algorithm achieves a significant PSNR improvement over previous block matching algorithms.
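
A rough sketch of a least-squares (Lucas-Kanade style) flow estimate from spatial and temporal gradients as described above; the per-block 2x2 normal equations and the rule that turns the flow magnitude into a search range are my own illustrative choices, not the paper's exact formulation.

```python
import cv2
import numpy as np

def block_flow(prev, curr, x, y, block=16):
    """Least-squares optical flow for one block from image gradients."""
    p = prev[y:y + block, x:x + block].astype(np.float32)
    c = curr[y:y + block, x:x + block].astype(np.float32)
    # Spatial gradients from a differential (Sobel) filter, temporal gradient from the frame difference.
    ix = cv2.Sobel(p, cv2.CV_32F, 1, 0, ksize=3)
    iy = cv2.Sobel(p, cv2.CV_32F, 0, 1, ksize=3)
    it = c - p
    # Solve [sum(IxIx) sum(IxIy); sum(IxIy) sum(IyIy)] [u v]^T = -[sum(IxIt) sum(IyIt)].
    A = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    u, v = np.linalg.lstsq(A, b, rcond=None)[0]
    return u, v

def search_region(u, v, min_range=4, max_range=32):
    """Center offset and size of the block-matching search window derived from the flow (illustrative rule)."""
    r = int(np.clip(max(abs(u), abs(v)) * 2, min_range, max_range))
    return (int(round(u)), int(round(v))), r
```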


A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi;Chun, Junchul
    • Journal of Internet Computing and Services / v.14 no.6 / pp.117-124 / 2013
  • In this paper, we present a new method that efficiently estimates face direction from a sequence of input video images in real time. The proposed method first detects the facial region and the major facial features, namely both eyes, the nose, and the mouth, using Haar-like features, which are relatively insensitive to lighting variation. The feature points are then tracked from frame to frame using optical flow, and the direction of the face is determined from the tracked feature points. To prevent the facial features from being recognized at false positions when their coordinates are lost during optical flow tracking, the method validates the feature locations in real time by template matching against the detected facial features. Depending on the correlation score of this template matching, the face direction estimation process either re-detects the facial features or keeps tracking them while determining the direction of the face. The template matching stage initially stores the locations of four facial features, namely the left eye, the right eye, the tip of the nose, and the mouth, during the feature detection phase, and re-evaluates this information by detecting new facial features from the input image whenever the similarity between the stored information and the features traced by optical flow exceeds a certain threshold. The proposed approach automatically alternates between the feature detection phase and the feature tracking phase, and enables stable face pose estimation in real time. Experiments show that the proposed method estimates face direction efficiently.
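
A condensed sketch of the detect-track-validate loop described above, using OpenCV's Haar cascades, pyramidal Lucas-Kanade tracking, and normalized cross-correlation template matching; the cascade files, the 0.7 correlation threshold, the template size, and the use of eye centers as the example features are placeholders chosen for illustration.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 20, 0.03))
TEMPLATE = 24        # template patch size in pixels (illustrative)
THRESHOLD = 0.7      # correlation threshold for accepting a tracked feature (illustrative)

def detect_features(gray):
    """Detect the face and return feature points (here: eye centers) plus their templates."""
    points, templates = [], []
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.2, 5):
        roi = gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi, 1.1, 5):
            cx, cy = fx + ex + ew // 2, fy + ey + eh // 2
            points.append((cx, cy))
            templates.append(gray[cy - TEMPLATE // 2:cy + TEMPLATE // 2,
                                  cx - TEMPLATE // 2:cx + TEMPLATE // 2].copy())
        break
    return np.float32(points).reshape(-1, 1, 2), templates

def track_and_validate(prev_gray, gray, points, templates):
    """Track features with optical flow; signal re-detection if template matching rejects a feature."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None, **lk_params)
    for pt, st, tmpl in zip(new_pts.reshape(-1, 2), status.ravel(), templates):
        x, y = int(pt[0]), int(pt[1])
        patch = gray[y - TEMPLATE // 2:y + TEMPLATE // 2, x - TEMPLATE // 2:x + TEMPLATE // 2]
        if st == 0 or patch.shape != tmpl.shape:
            return None  # lost feature: caller should re-run detect_features
        score = cv2.matchTemplate(patch, tmpl, cv2.TM_CCOEFF_NORMED).max()
        if score < THRESHOLD:
            return None  # correlation too low: caller should re-run detect_features
    return new_pts
```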

3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.27 no.6 / pp.618-631 / 2000
  • In this paper, we suggest a method of 3D facial synthesis using the motion of 2D facial images. We use an optical flow-based method for motion estimation. Parameterized motion vectors are extracted using the optical flow between adjacent images in the sequence in order to estimate the facial features and the facial motion in the 2D image sequence. The parameters of these motion vectors are then combined to estimate the facial motion information. We use parameterized vector models for the facial features; the motion vector models cover the eye area, the lip and eyebrow area, and the face area. By combining the 2D facial motion information with the action units of a 3D facial model, we synthesize the 3D facial model.
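
The parameterized motion of a facial region can be sketched as an affine fit to the optical flow inside that region; the six-parameter affine model and the dense Farneback flow below are a common stand-in for the paper's region-specific parameterization, not its exact models.

```python
import cv2
import numpy as np

def affine_params_from_flow(prev_gray, gray, region):
    """Fit a 6-parameter affine motion model to the dense optical flow inside `region` (x, y, w, h)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x, y, w, h = region
    xs, ys = np.meshgrid(np.arange(x, x + w), np.arange(y, y + h))
    u = flow[y:y + h, x:x + w, 0].ravel()
    v = flow[y:y + h, x:x + w, 1].ravel()
    # Model: u = a0 + a1*x + a2*y,  v = a3 + a4*x + a5*y  (least squares).
    A = np.column_stack([np.ones(xs.size), xs.ravel(), ys.ravel()])
    ax, _, _, _ = np.linalg.lstsq(A, u, rcond=None)
    ay, _, _, _ = np.linalg.lstsq(A, v, rcond=None)
    return np.concatenate([ax, ay])   # (a0..a5), e.g. mapped to 3D action units downstream
```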


A Study on the Estimation of Smartphone Movement Distance using Optical Flow Technology on a Limited Screen (제한된 화면에 광류 기술을 적용한 스마트폰 이동 거리 추정에 관한 연구)

  • Jung, Keunyoung;Oh, Jongtaek
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.4 / pp.71-76 / 2019
  • Research on indoor location tracking technology using smartphones is being carried out actively. In particular, the movement distance of the smartphone should be measured accurately, and the movement route of the user should be displayed on a map. Location tracking based on the sensors mounted on smartphones has been used for a long time, but its accuracy is not good enough to measure the moving distance of the user from the sensors alone. Therefore, an appropriate algorithm must be researched and developed to measure the distance accurately when the user moves the smartphone in a certain posture. In this paper, we propose a method that reduces the moving distance estimation error in pyramid-based optical flow estimation by limiting the region of the smartphone screen so that the shape of the user's foot is excluded.
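
A minimal sketch of restricting the optical flow computation to part of the frame so that the user's foot is excluded; the cropping fraction, feature counts, and the use of the median flow as the per-frame displacement are illustrative assumptions, not the paper's calibration.

```python
import cv2
import numpy as np

def frame_displacement(prev_gray, gray, exclude_bottom=0.4):
    """Per-frame pixel displacement from pyramidal LK flow, ignoring the bottom of the frame
    (where the user's foot typically appears when the phone is pointed at the floor)."""
    h, w = prev_gray.shape
    mask = np.zeros_like(prev_gray)
    mask[:int(h * (1.0 - exclude_bottom)), :] = 255     # keep only the upper part of the frame
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=7, mask=mask)
    if pts is None:
        return 0.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    if not good.any():
        return 0.0
    d = (nxt[good] - pts[good]).reshape(-1, 2)
    return float(np.median(np.linalg.norm(d, axis=1)))  # pixels; scaling to metres is done separately
```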

Recognition of Human Facial Expressions using Optical Flow of Feature Regions (얼굴 특징영역상의 광류를 이용한 표정 인식)

  • Lee Mi-Ae;Park Ki-Soo
    • Journal of KIISE: Software and Applications / v.32 no.6 / pp.570-579 / 2005
  • Facial expression recognition technology, which has potential applications in various fields, is being applied to man-machine interface development, human identification, restoration of facial expressions with virtual models, and so on. Using sequential facial images, this study proposes a simpler method for detecting human facial expressions such as happiness, anger, surprise, and sadness. Moreover, the proposed method can detect facial expressions in image sequences whose motion is not rigid. We identify the face and the elements of facial expressions, and then estimate the feature regions of these elements using information about color, size, and position. In the next step, the direction patterns of the feature regions of each element are determined using optical flow estimated with gradient-based methods. Each direction pattern is matched against the direction model proposed in this study. The method identifies a facial expression based on the minimum combined score between the direction model and the pattern matching for each facial expression. Experiments verify the validity of the proposed method.
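
A rough sketch of turning the gradient-based optical flow of a feature region into a quantized direction pattern and scoring it against expression models; the eight-direction histogram and the absolute-difference score below are generic choices, not necessarily the direction model defined in the paper.

```python
import cv2
import numpy as np

def direction_pattern(prev_gray, gray, region, bins=8):
    """Histogram of flow directions inside a facial feature region (x, y, w, h)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x, y, w, h = region
    fx = flow[y:y + h, x:x + w, 0]
    fy = flow[y:y + h, x:x + w, 1]
    mag, ang = cv2.cartToPolar(fx, fy, angleInDegrees=True)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 360), weights=mag)
    return hist / (hist.sum() + 1e-9)   # normalized direction pattern for matching against models

def match_expression(patterns, models):
    """Pick the expression whose direction models best match the observed patterns (minimum total distance)."""
    scores = {name: sum(np.abs(patterns[r] - model[r]).sum() for r in patterns)
              for name, model in models.items()}
    return min(scores, key=scores.get)
```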

Fast Natural Feature Tracking Using Optical Flow (광류를 사용한 빠른 자연특징 추적)

  • Bae, Byung-Jo;Park, Jong-Seung
    • The KIPS Transactions: Part B / v.17B no.5 / pp.345-354 / 2010
  • Visual tracking techniques for augmented reality are classified as either marker tracking approaches or natural feature tracking approaches. Marker-based tracking algorithms can be implemented efficiently enough to run in real time on mobile devices. On the other hand, natural feature tracking methods require many computationally expensive procedures. Most previous natural feature tracking methods include heavy feature extraction and pattern matching procedures for each input image frame, so it is difficult to implement real-time augmented reality applications with natural feature tracking on low-performance devices. The required computational cost is also proportional to the number of patterns to be matched. To speed up the natural feature tracking process, we propose a novel fast tracking method based on optical flow. We implemented the proposed method on mobile devices so that it runs in real time and can be used with mobile augmented reality applications. Moreover, during tracking, we keep the total number of feature points constant by inserting new feature points in proportion to the number of vanished feature points. Experimental results show that the proposed method reduces the computational cost and also stabilizes the camera pose estimation results.
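
A compact sketch of a flow-based tracking loop with feature replenishment of the kind described above; the target point count, corner detector parameters, and the masking used to avoid re-detecting corners next to surviving points are illustrative choices, not the paper's implementation.

```python
import cv2
import numpy as np

MAX_POINTS = 300   # target number of tracked feature points (illustrative)

def replenish(gray, points, max_points=MAX_POINTS):
    """Insert new corners so the total number of feature points stays roughly constant."""
    missing = max_points - len(points)
    if missing <= 0:
        return points
    mask = np.full(gray.shape, 255, np.uint8)
    for x, y in points.reshape(-1, 2):
        cv2.circle(mask, (int(x), int(y)), 7, 0, -1)   # don't re-detect next to surviving points
    new = cv2.goodFeaturesToTrack(gray, missing, 0.01, 7, mask=mask)
    return points if new is None else np.vstack([points, new.astype(np.float32)])

def step(prev_gray, gray, points):
    """One tracking step: pyramidal LK flow, drop lost points, then top the set back up."""
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None,
                                              winSize=(15, 15), maxLevel=2)
    survived = nxt[status.ravel() == 1].reshape(-1, 1, 2)
    return replenish(gray, survived)
```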

Using play-back image sequence to detect a vehicle cutting in a line automatically (역방향 영상재생을 이용한 끼어들기 차량 자동추적)

  • Rheu, Jee-Hyung;Kim, Young-Mo
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.2 / pp.95-101 / 2014
  • This paper describes an effective method for automatically tracking a vehicle cutting into a line on the road. The method employs KLT tracking based on optical flow over a play-back image sequence. The main contribution of this paper is the play-back image sequence, which consists of image frames ordered in the rewind direction from a reference point in time. The reference point is usually the moment when the recognition camera can read the license plate best, which is also the moment when the largest images of the tracked object can be obtained. When optical flow is applied, the larger the image of the tracked object, the more feature points can be obtained, and more feature points lead to better tracking results. After the recognition camera reads the license plate of a vehicle suspected of a cut-in-line violation, the system extracts the play-back image sequence from the wide-range tracking cameras. This paper compares tracking with the play-back image sequence, as the proposed method, against tracking with the play-forward image sequence, as the conventional method, and the experimental results show that the proposed algorithm performs well enough to be applied to an unmanned system for detecting cut-in-line violations.
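
The play-back idea can be sketched as buffering frames up to the reference moment and then running optical-flow (KLT-style) tracking backwards through the buffer; the feature parameters, the plate-box mask, and the use of the mean point as the object position are illustrative assumptions.

```python
import cv2
import numpy as np

def track_backwards(frames, plate_box):
    """Track a vehicle backwards in time, starting from the reference frame in which
    the license plate was read (the last frame in `frames`, where the vehicle appears largest)."""
    ref = cv2.cvtColor(frames[-1], cv2.COLOR_BGR2GRAY)
    x, y, w, h = plate_box
    mask = np.zeros_like(ref)
    mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(ref, 200, 0.01, 5, mask=mask)   # many features: the object is large here
    if pts is None:
        return []
    trajectory = []
    prev = ref
    for frame in reversed(frames[:-1]):                           # play-back: rewind direction
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
        pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)
        if len(pts) == 0:
            break
        trajectory.append(pts.reshape(-1, 2).mean(axis=0))        # object position in this earlier frame
        prev = gray
    return trajectory
```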

Robust Estimation of Camera Motion Using A Local Phase Based Affine Model (국소적 위상기반 어파인 모델을 이용한 강인한 카메라 움직임 추정)

  • Jang, Suk-Yoon;Yoon, Chang-Yong;Park, Mig-Non
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.1 / pp.128-135 / 2009
  • Techniques that track the same region of physical space across a temporal sequence of images by matching contours of constant phase show robust and stable performance relative to tracking techniques that use or assume constant intensity. Using this property, we describe an algorithm for obtaining robust motion parameters for global camera motion. First, we obtain the optical flow based on the phase of the spatially filtered sequential images, for each component of a Gabor filter bank, in the direction orthogonal to its orientation. We then apply the least-squares method to the optical flow to determine the affine motion parameters. We demonstrate that the proposed method can be applied to a vision-based pointing device that estimates its motion from images containing a display device, which causes lighting variation and noise.
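
A loose sketch of the two ingredients named above: constructing a Gabor filter bank at several orientations, and fitting global affine motion parameters to an optical flow field by least squares. The phase differencing that produces the component flow normal to each orientation is not shown and would need the paper's formulation; filter sizes, wavelengths, and orientation count are placeholders.

```python
import cv2
import numpy as np

def gabor_bank(orientations=4, ksize=31, sigma=4.0, lambd=10.0):
    """Bank of Gabor filters at evenly spaced orientations (construction only)."""
    return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, 0)
            for theta in np.linspace(0, np.pi, orientations, endpoint=False)]

def affine_from_flow(flow):
    """Least-squares affine motion parameters (a0..a5) for a dense flow field of shape (H, W, 2)."""
    h, w = flow.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    A = np.column_stack([np.ones(h * w, np.float32), xs.ravel(), ys.ravel()])
    ax, _, _, _ = np.linalg.lstsq(A, flow[..., 0].ravel(), rcond=None)
    ay, _, _, _ = np.linalg.lstsq(A, flow[..., 1].ravel(), rcond=None)
    return np.concatenate([ax, ay])   # camera-induced global motion model
```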

An User-Friendly Method of Image Warping for Traffic Monitoring System (실시간 교통상황 모니터링 시스템을 위한 유저 친화적인 영상 변형 방법)

  • Yi, Chuho;Cho, Jungwon
    • Journal of Digital Convergence / v.14 no.12 / pp.231-236 / 2016
  • Currently, traffic monitoring services using surveillance cameras are provided over the Internet. In general, if the user points to a certain location on a map, the service shows the real-time image of the camera mounted there. In this paper, we propose an intuitive surveillance monitoring system that displays a real-time camera image on the map by warping it to a bird's-eye view with the top of the image pointing north. In order to robustly estimate the road plane from the camera image, we use motion vectors detected from changes in brightness. We apply a re-adjustment process so that the warped image has the same orientation as the map, and present a user-friendly interface that displays it on the map. In the experiments, the proposed method produced warped images that the user can easily read like a map.
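
A minimal sketch of the bird's-eye-view warping step itself, using a planar homography; in practice the four road-plane correspondences would come from the motion-vector-based plane estimation described above, and the coordinates, output size, and north alignment shown here are placeholders.

```python
import cv2
import numpy as np

def birdseye(frame, road_quad, out_size=(400, 600)):
    """Warp a camera frame to a top-down (bird's-eye) view of the road plane.

    `road_quad` is the four image points of a rectangle on the road plane,
    ordered top-left, top-right, bottom-right, bottom-left.
    """
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(road_quad), dst)
    top_down = cv2.warpPerspective(frame, H, (w, h))
    # A further rotation would align the top of the warped image with north before map overlay.
    return top_down

# Placeholder correspondences for illustration only.
# quad = [(320, 420), (960, 420), (1200, 700), (80, 700)]
# warped = birdseye(frame, quad)
```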