• Title/Summary/Keyword: Natural Feature Tracking


Fast Natural Feature Tracking Using Optical Flow (광류를 사용한 빠른 자연특징 추적)

  • Bae, Byung-Jo;Park, Jong-Seung
    • The KIPS Transactions:PartB
    • /
    • v.17B no.5
    • /
    • pp.345-354
    • /
    • 2010
  • Visual tracking techniques for augmented reality are classified as either marker tracking approaches or natural feature tracking approaches. Marker-based tracking algorithms can be implemented efficiently enough to run in real time on mobile devices. Natural feature tracking methods, on the other hand, involve computationally expensive procedures: most previous methods perform heavy feature extraction and pattern matching for every input image frame, which makes it difficult to implement real-time augmented reality applications with natural feature tracking on low-performance devices. The required computation time is also proportional to the number of patterns to be matched. To speed up the natural feature tracking process, we propose a novel fast tracking method based on optical flow. We implemented the proposed method on mobile devices so that it runs in real time and can be used in mobile augmented reality applications. Moreover, during tracking, we maintain the total number of feature points by inserting new feature points in proportion to the number of feature points that have vanished. Experimental results show that the proposed method reduces the computational cost and also stabilizes the camera pose estimation results.
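
Below is a minimal sketch (Python/OpenCV, not taken from the paper) of optical-flow tracking with feature replenishment in the spirit of the abstract above: points are tracked with pyramidal Lucas-Kanade, and new corners are detected in proportion to the number of points that vanished. The parameter values (MAX_POINTS, window size, corner quality) are illustrative assumptions.

```python
# Optical-flow feature tracking with replenishment of vanished points.
import cv2
import numpy as np

MAX_POINTS = 200  # assumed target number of tracked features

def track_and_replenish(prev_gray, curr_gray, prev_pts):
    """Track prev_pts (Nx1x2 float32) into curr_gray and top up lost points."""
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    good = next_pts[status.ravel() == 1]

    lost = MAX_POINTS - len(good)
    if lost > 0:
        # Insert new corners in proportion to the number of vanished points,
        # avoiding areas already covered by surviving tracks.
        mask = np.full(curr_gray.shape, 255, dtype=np.uint8)
        for x, y in good.reshape(-1, 2):
            cv2.circle(mask, (int(x), int(y)), 10, 0, -1)
        new_pts = cv2.goodFeaturesToTrack(
            curr_gray, maxCorners=lost, qualityLevel=0.01,
            minDistance=10, mask=mask)
        if new_pts is not None:
            good = np.vstack([good, new_pts.reshape(-1, 1, 2)])
    return good.astype(np.float32)
```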

Facial Feature Tracking Using Adaptive Particle Filter and Active Appearance Model (Adaptive Particle Filter와 Active Appearance Model을 이용한 얼굴 특징 추적)

  • Cho, Durkhyun;Lee, Sanghoon;Suh, Il Hong
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.2
    • /
    • pp.104-115
    • /
    • 2013
  • For natural human-robot interaction, we need to know the location and shape of facial features in real environments. Facial features can be tracked robustly by a method that combines a particle filter with an active appearance model; however, the processing speed of this method is too slow. In this paper, we propose two ideas to improve its efficiency. The first is to change the number of particles according to the situation, and the second is to switch the prediction model according to the situation. Experimental results are presented to show that the proposed method is about three times faster than the combined particle filter and active appearance model method while maintaining comparable performance.
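
A rough sketch of the two ideas described in the abstract, under assumed details: the particle count is adapted to the current tracking confidence, and the prediction (motion) model is switched depending on how fast the target moved in the previous frame. The `measure` callback stands in for the paper's measurement model (e.g., AAM fitting error); thresholds and noise levels are illustrative, not taken from the paper.

```python
import numpy as np

def resample(particles, weights, n_out):
    idx = np.random.choice(len(particles), size=n_out, p=weights)
    return particles[idx]

def adaptive_pf_step(particles, weights, measure, last_motion,
                     n_min=100, n_max=1000):
    # Idea 1: fewer particles when the weights concentrate on a few strong
    # hypotheses (confident), more when they are spread out (uncertain).
    ess = 1.0 / np.sum(weights ** 2)          # effective sample size
    n_next = int(np.clip(n_max * ess / len(particles), n_min, n_max))
    particles = resample(particles, weights, n_next)

    # Idea 2: switch the prediction model by situation -- a near-static
    # random-walk model for slow motion, a constant-velocity model otherwise.
    if np.linalg.norm(last_motion) < 2.0:
        particles = particles + np.random.normal(0, 1.0, particles.shape)
    else:
        particles = particles + last_motion + np.random.normal(
            0, 3.0, particles.shape)

    # Re-weight with the measurement model (hypothetical callback).
    weights = measure(particles)
    weights = weights / np.sum(weights)
    return particles, weights
```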

A study on the eye Location for Video-Conferencing Interface (화상 회의 인터페이스를 위한 눈 위치 검출에 관한 연구)

  • Jung, Jo-Nam;Gang, Jang-Mook;Bang, Kee-Chun
    • Journal of Digital Contents Society
    • /
    • v.7 no.1
    • /
    • pp.67-74
    • /
    • 2006
  • In current video-conferencing systems, the user's face movements are restricted by a fixed camera, which is inconvenient for users. Solving this problem requires tracking of face movements. Tracking the whole face requires much computing time, and the whole face is difficult to define as a single feature, so using several facial feature points is more desirable for tracking face movements efficiently. This paper addresses an effective eye location algorithm, an essential step of an automatic face tracking system for natural video-conferencing. The eye location is very important information for face tracking, as the eyes have the clearest and simplest attributes in the face. The proposed algorithm is applied to candidate face regions obtained from face region extraction. It is not sensitive to lighting conditions and imposes no restriction on face size or faces with glasses. The proposed algorithm shows very encouraging results in experiments in video-conferencing environments.
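
A small illustrative sketch (not from the paper) of why a few feature points suffice to follow face movement for a video-conferencing interface: the midpoint of the two eyes gives the face position, and its offset from the image centre can drive a virtual pan/tilt. The eye coordinates are assumed to come from an eye-location step such as the one described above.

```python
import numpy as np

def face_offset(left_eye, right_eye, frame_size):
    """Return the normalized (-1..1) horizontal/vertical face offset."""
    w, h = frame_size
    center = (np.asarray(left_eye, float) + np.asarray(right_eye, float)) / 2.0
    dx = (center[0] - w / 2.0) / (w / 2.0)
    dy = (center[1] - h / 2.0) / (h / 2.0)
    return dx, dy

# Example: eyes located at (280, 230) and (360, 232) in a 640x480 frame.
print(face_offset((280, 230), (360, 232), (640, 480)))  # -> (0.0, -0.0375)
```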


Optimizations of Air-trap Locations in the Speaker Encloser of Mobile Phone by Injection Molding Simulations (사출성형 시뮬레이션에 의한 휴대폰 스피커 인클로저의 에어트랩 위치 최적화)

  • Park, Ki-Yoon;Park, Jong-Cheon
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.10 no.5
    • /
    • pp.85-90
    • /
    • 2011
  • In this paper, a design procedure based on computer-aided molding simulation is presented to optimize the air-trap locations in the speaker enclosure of a mobile phone. The molding flow simulation reveals that the race-tracking phenomenon is the dominant feature in the current mold design. To obtain an optimal filling pattern, local modifications of the wall thickness, such as attaching a flow leader, are considered the primary control factor, while the gate position and the filling time are treated as secondary control factors. Using the one-at-a-time approach, the last location to be filled in the mold cavity could be moved to the extremities of the part, allowing natural venting of entrapped air through the mold parting plane.
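
A schematic sketch of the one-at-a-time approach mentioned in the abstract: each control factor (wall-thickness modification, gate position, filling time) is varied in turn while the others are held fixed, and the setting that moves the last-filled location closest to the part edge is kept. `run_molding_simulation` is a hypothetical stand-in for the actual injection-molding simulation, and the factor levels are assumed for illustration.

```python
def one_at_a_time(factors, levels, run_molding_simulation):
    """factors: dict of current settings; levels: dict factor -> candidate values."""
    best = dict(factors)
    # Hypothetical score, e.g. distance of the air trap from the parting plane.
    best_score = run_molding_simulation(best)
    for name in ["wall_thickness", "gate_position", "filling_time"]:
        for value in levels[name]:
            trial = dict(best)
            trial[name] = value
            score = run_molding_simulation(trial)
            if score < best_score:  # smaller = air trap closer to the part edge
                best, best_score = trial, score
    return best, best_score
```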

Eye Location Algorithm For Natural Video-Conferencing (화상 회의 인터페이스를 위한 눈 위치 검출)

  • Lee, Jae-Jun;Choi, Jung-Il;Lee, Phill-Kyu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3211-3218
    • /
    • 1997
  • This paper addresses an eye location algorithm, an essential step of a face tracking system for natural video-conferencing. In current video-conferencing systems, the user's facial movements are restricted by a fixed camera, which is inconvenient for users. We propose an eye location algorithm for automatic face tracking, since the locations of other facial features and the scale of the face in the image can be estimated from the eye locations using the inter-ocular distance. Most previous feature extraction methods for face recognition assume that an approximate face region or the location of each facial feature is already known. The proposed algorithm uses no prior information about the given image and is not sensitive to backgrounds or lighting conditions. It uses the valley representation as the major cue for locating the eyes. Experiments performed on 213 frames of 17 people show very encouraging results.
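
A minimal sketch of using a grey-level "valley" image to find eye candidates, in the spirit of the abstract: eyes appear as small dark valleys, which a morphological bottom-hat transform emphasises. The bottom-hat operator is an assumed realization of the valley representation, and the structuring-element size and threshold are illustrative.

```python
import cv2
import numpy as np

def eye_candidates(gray_face):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    # Bottom-hat: closing(image) - image highlights dark regions (valleys).
    valley = cv2.morphologyEx(gray_face, cv2.MORPH_BLACKHAT, kernel)
    _ret, mask = cv2.threshold(valley, 40, 255, cv2.THRESH_BINARY)
    # Connected components of the valley mask are the eye candidates.
    n, _labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Drop the background component (index 0) and tiny specks.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > 20]
```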


Context Aware Feature Selection Model for Salient Feature Detection from Mobile Video Devices (모바일 비디오기기 위에서의 중요한 객체탐색을 위한 문맥인식 특성벡터 선택 모델)

  • Lee, Jaeho;Shin, Hyunkyung
    • Journal of Internet Computing and Services
    • /
    • v.15 no.6
    • /
    • pp.117-124
    • /
    • 2014
  • A cluttered background is a major obstacle in developing a salient object detection and tracking system for natural-scene video frames captured by mobile devices. In this paper, we propose a context-aware feature vector selection model that provides efficient noise filtering through machine-learning-based classifiers. Since context awareness for feature selection is achieved by searching nearest neighborhoods, known as an NP-hard problem, we apply a fast approximation method and analyze its complexity in detail. The separability enhancement in the feature vector space obtained by adding the context-aware feature subsets is studied rigorously using principal component analysis (PCA). The overall performance enhancement is quantified with statistical measures across various machine learning models, including MLP, SVM, naïve Bayes, and CART. A summary of computational costs and performance enhancement is also presented.
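
A hedged sketch of the general recipe in the abstract: augment each feature vector with context gathered from its nearest neighbours, check separability with PCA, and quantify the gain with a standard classifier. A KD-tree is used here as a simple stand-in for the paper's fast approximate neighbour search; the data, labels, and parameters are placeholders.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def add_context(X, k=5):
    nn = NearestNeighbors(n_neighbors=k + 1, algorithm="kd_tree").fit(X)
    _dist, idx = nn.kneighbors(X)
    # Context subset: mean of the k neighbours (excluding the point itself).
    context = X[idx[:, 1:]].mean(axis=1)
    return np.hstack([X, context])

X = np.random.rand(500, 16)                # placeholder feature vectors
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # placeholder labels

X_ctx = add_context(X)
print("PCA explained variance:",
      PCA(n_components=2).fit(X_ctx).explained_variance_ratio_)
print("SVM accuracy:", cross_val_score(SVC(), X_ctx, y, cv=5).mean())
```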

Augmented Reality System using Planar Natural Feature Detection and Its Tracking (동일 평면상의 자연 특징점 검출 및 추적을 이용한 증강현실 시스템)

  • Lee, A-Hyun;Lee, Jae-Young;Lee, Seok-Han;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.4
    • /
    • pp.49-58
    • /
    • 2011
  • Typically, vision-based AR systems operate on the basis of prior knowledge of the environment, such as a square marker. Traditional marker-based AR systems have the limitation that the marker must remain within the sensing range. There have therefore been considerable research efforts on techniques known as real-time camera tracking, in which the system attempts to add unknown 3D features to its feature map; these then provide registration even when the reference map is out of the sensing range. In this paper, we describe a real-time camera tracking framework specifically designed to track a monocular camera in a desktop workspace. The basic idea of the proposed scheme is that real-time camera tracking is achieved on the basis of a plane tracking algorithm. We also suggest a method for re-detecting features to maintain registration of virtual objects, which copes with the case in which features can no longer be tracked because they have left the sensing range. The main advantages of the proposed system are its low computational cost and convenience, making it applicable to augmented reality systems in mobile computing environments.
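
A condensed sketch of the plane-based camera tracking idea in the abstract: the homography between the reference plane and the current frame is estimated with RANSAC, and a camera pose is recovered from it given the intrinsic matrix K. This is the textbook homography decomposition, offered as an illustration rather than the authors' exact pipeline.

```python
import cv2
import numpy as np

def pose_from_plane(ref_pts, cur_pts, K):
    """ref_pts: Nx2 points on the reference plane (metric coordinates, z = 0);
       cur_pts: Nx2 matched image points in the current frame."""
    H, inliers = cv2.findHomography(ref_pts, cur_pts, cv2.RANSAC, 3.0)
    B = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(B[:, 0])
    r1, r2, t = lam * B[:, 0], lam * B[:, 1], lam * B[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    # Re-orthonormalise the rotation (numerical noise breaks orthogonality).
    U, _S, Vt = np.linalg.svd(R)
    R = U @ Vt
    return R, t, inliers
```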

A Hybrid Positioning System for Indoor Navigation on Mobile Phones using Panoramic Images

  • Nguyen, Van Vinh;Lee, Jong-Weon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.3
    • /
    • pp.835-854
    • /
    • 2012
  • In this paper, we propose a novel positioning system for indoor navigation that helps a user navigate easily to desired destinations in an unfamiliar indoor environment using a mobile phone. The system requires only the user's mobile phone and its basic built-in sensors, such as a camera and a compass. The system tracks the user's position and orientation using a vision-based approach that utilizes 360° panoramic images captured in the environment. To improve the robustness of the vision-based method, we exploit the digital compass that is widely installed on modern mobile phones. This hybrid solution outperforms existing mobile phone positioning methods by reducing the position estimation error to around 0.7 meters. In addition, to enable the proposed system to work independently on a mobile phone without requiring additional hardware or external infrastructure, we employ a modified version of a fast and robust feature matching scheme using the Histogrammed Intensity Patch. The experiments show that the proposed positioning system achieves good performance while running on a mobile phone, with a response time of around 1 second.
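
A simplified sketch of the hybrid idea in the abstract: vision-based candidates (from matching the current view against reference panoramas) are gated by the phone's digital compass so that matches whose stored heading disagrees with the measured heading are discarded. The data structures and the 30-degree gate are assumptions for illustration, not the paper's values.

```python
import math

def heading_diff(a, b):
    """Smallest absolute difference between two headings in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def select_position(candidates, compass_heading, gate_deg=30.0):
    """candidates: list of dicts {'pos': (x, y), 'heading': deg, 'score': matches}."""
    plausible = [c for c in candidates
                 if heading_diff(c["heading"], compass_heading) < gate_deg]
    if not plausible:            # fall back to pure vision if the compass disagrees
        plausible = candidates
    best = max(plausible, key=lambda c: c["score"])
    return best["pos"], best["heading"]

# Example: three panorama matches, compass reads 95 degrees.
cands = [{"pos": (3.0, 1.0), "heading": 270, "score": 41},
         {"pos": (3.2, 1.1), "heading": 100, "score": 38},
         {"pos": (9.0, 4.0), "heading": 92,  "score": 12}]
print(select_position(cands, 95))   # -> ((3.2, 1.1), 100)
```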

Realtime Facial Expression Recognition from Video Sequences Using Optical Flow and Expression HMM (광류와 표정 HMM에 의한 동영상으로부터의 실시간 얼굴표정 인식)

  • Chun, Jun-Chul;Shin, Gi-Han
    • Journal of Internet Computing and Services
    • /
    • v.10 no.4
    • /
    • pp.55-70
    • /
    • 2009
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In that sense, inferring a person's emotional state from facial expression recognition is an important issue. In this paper, we present a novel approach to recognizing facial expressions from a sequence of input images using emotion-specific HMMs (Hidden Markov Models) and facial motion tracking based on optical flow. Conventionally, in an HMM consisting of basic emotional states, transitions between emotions are forced to pass through the neutral state. In this work, however, we propose an enhanced transition framework that, in addition to the traditional transition model, allows direct transitions between emotional states without passing through the neutral state. For the localization of facial features in the video sequence, we exploit template matching and optical flow. The facial feature displacements traced by optical flow are used as input parameters to the HMM for facial expression recognition. Experiments show that the proposed framework can effectively recognize facial expressions in real time.
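
A toy sketch contrasting the two transition models described above for a 4-state HMM over {neutral, happy, sad, angry}: in the conventional model every change of emotion must pass through "neutral", while the enhanced model also allows direct emotion-to-emotion transitions. The probability values are illustrative only, and the observation likelihoods would in practice come from the optical-flow feature displacements.

```python
import numpy as np

states = ["neutral", "happy", "sad", "angry"]

# Conventional: non-neutral states may only stay put or return to neutral.
A_conventional = np.array([
    [0.40, 0.20, 0.20, 0.20],   # from neutral
    [0.30, 0.70, 0.00, 0.00],   # from happy
    [0.30, 0.00, 0.70, 0.00],   # from sad
    [0.30, 0.00, 0.00, 0.70],   # from angry
])

# Enhanced: small direct transition probabilities between emotional states.
A_enhanced = np.array([
    [0.40, 0.20, 0.20, 0.20],
    [0.20, 0.64, 0.08, 0.08],
    [0.20, 0.08, 0.64, 0.08],
    [0.20, 0.08, 0.08, 0.64],
])

def forward_step(alpha, A, b_t):
    """One step of the HMM forward recursion: alpha_t = (alpha_{t-1} A) * b_t."""
    alpha = (alpha @ A) * b_t
    return alpha / alpha.sum()

alpha = np.array([1.0, 0.0, 0.0, 0.0])            # start in neutral
alpha = forward_step(alpha, A_enhanced, np.array([0.1, 0.7, 0.1, 0.1]))
print(dict(zip(states, np.round(alpha, 3))))
```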


Natural Feature Tracking Using Optical Flow On Mobile Devices (광류 추적 기법을 사용한 모바일 기기에서의 자연 특징 추적)

  • Bae, Byeong-Jo;Park, Jong-Seung
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2010.04a
    • /
    • pp.562-565
    • /
    • 2010
  • For a vision-based augmented reality system, extracting feature points and repeating the pattern matching process for every incoming camera frame is not suitable for low-performance mobile devices. To address this problem, once a pattern has been recognized in the camera image, the pattern recognition step is skipped for subsequent frames, and the feature points matched in the previous frame are tracked using an optical-flow-based tracking method. In addition, the loss of feature points caused by tracking failures during the pattern tracking procedure makes it difficult to estimate an accurate homography matrix and camera pose. To handle this, we establish a criterion for judging whether pattern tracking has succeeded or failed, and propose a natural feature tracking based augmented reality system using optical flow that runs fast on mobile devices.
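
A hedged sketch of the success/failure test described in the abstract: tracking is considered to have failed when too few feature points survive the optical-flow step or when the RANSAC homography explains too few of them, in which case the system would fall back to full pattern re-detection. The thresholds are illustrative assumptions, not the paper's values.

```python
import cv2
import numpy as np

MIN_TRACKED = 30        # assumed minimum number of surviving points
MIN_INLIER_RATIO = 0.6  # assumed minimum homography inlier ratio

def tracking_succeeded(pattern_pts, tracked_pts, status):
    """Judge whether pattern tracking succeeded; return (ok, H)."""
    ok = status.ravel() == 1
    if ok.sum() < MIN_TRACKED:
        return False, None
    H, inlier_mask = cv2.findHomography(
        pattern_pts[ok], tracked_pts[ok], cv2.RANSAC, 3.0)
    if H is None or inlier_mask.sum() / ok.sum() < MIN_INLIER_RATIO:
        return False, None
    return True, H   # H maps pattern coordinates to the current frame
```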