• Title/Summary/Keyword: mean shift tracking algorithm


Interface Implementation using Facial Feature Tracking (얼굴 특징 추적을 이용한 인터페이스 구현)

  • Shin, Yun-Hee;Kang, Sin-Kuk;Kim, Eun-Yi
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.06b
    • /
    • pp.274-276
    • /
    • 2006
  • This paper proposes a new interface using facial feature tracking. Existing systems driven by eye movement alone needed a speed improvement because of the waiting time imposed by mouse click events. To address this, this paper develops a system that recognizes mouth movement as well as eye movement to handle user requests. The proposed system consists of five modules: face detection, eye detection, mouth detection, facial feature tracking, and mouse control. First, the face is detected using a skin-color model and connected-component analysis, and the eyes and mouth are located within the detected face region using a neural-network-based classifier and an edge detector. In subsequent frames, the eyes and mouth are tracked accurately using the mean-shift algorithm and template matching, so that eye movement moves the mouse pointer and mouth movement clicks menus or icons. To verify the effectiveness of the proposed system, it was used as an interface for a web browser. An experiment with 25 users showed that the proposed system can serve as a more convenient and familiar interface.

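
The mean-shift tracking step mentioned in the entry above can be sketched in a few lines: starting from the last known position, a window is repeatedly shifted to the weighted centroid of a likelihood map (e.g. a skin-color back-projection) until it stops moving. This is an illustrative sketch, not the authors' implementation; the map and window sizes are invented.

```python
import numpy as np

def mean_shift(weights, center, half, iters=10, eps=0.5):
    """Shift a square window over `weights` (e.g. a histogram
    back-projection) to its weighted centroid until convergence."""
    cy, cx = center
    for _ in range(iters):
        y0, y1 = max(cy - half, 0), min(cy + half + 1, weights.shape[0])
        x0, x1 = max(cx - half, 0), min(cx + half + 1, weights.shape[1])
        w = weights[y0:y1, x0:x1]
        total = w.sum()
        if total == 0:
            break  # target left the window entirely
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny, nx = (ys * w).sum() / total, (xs * w).sum() / total
        done = abs(ny - cy) < eps and abs(nx - cx) < eps
        cy, cx = int(round(ny)), int(round(nx))
        if done:
            break
    return cy, cx

# A bright blob centered at (30, 40); start the window at (25, 30).
w = np.zeros((64, 64))
w[28:33, 38:43] = 1.0
print(mean_shift(w, (25, 30), half=12))  # → (30, 40)
```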

Context Driven Real-Time Laser Pointer Detection and Tracking (상황 기반의 실시간 레이저 포인터 검출과 추적)

  • Kang, Sung-Kwan;Chung, Kyung-Yong;Park, Yang-Jae;Lee, Jung-Hyun
    • Journal of Digital Convergence
    • /
    • v.10 no.2
    • /
    • pp.211-216
    • /
    • 2012
  • Detecting the laser pointer involves two processes: one detects the location of the pointer, and the other converts the coordinates of the detected laser pointer into monitor coordinates. The conventional Mean-Shift algorithm is not appropriate for real-time video because of its heavy computation. In this paper, we propose context driven real-time laser pointer detection and tracking. The proposed method produces stable results even when a complicated background moves dynamically, and gives consistent results when the object enters or leaves the predicted boundary. Finally, this paper presents an empirical application to verify the adequacy and validity of the proposed method. Accordingly, the accuracy and quality of image recognition in surveillance systems can be improved.
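
The two processes the abstract separates, locating the pointer in the camera frame and mapping that location to monitor coordinates, can be sketched as follows. This is a minimal illustration assuming the pointer appears as a small saturated blob and a plain linear camera-to-monitor map; the paper's context-driven logic is not reproduced, and real setups would calibrate a homography.

```python
import numpy as np

def detect_pointer(gray, threshold=240):
    """Locate the laser dot as the centroid of near-saturated pixels."""
    ys, xs = np.nonzero(gray >= threshold)
    if ys.size == 0:
        return None  # no pointer visible in this frame
    return ys.mean(), xs.mean()

def to_monitor(pt, cam_size, mon_size):
    """Rescale a camera coordinate into monitor coordinates
    (a simple linear map, assumed axis-aligned)."""
    (y, x), (ch, cw), (mh, mw) = pt, cam_size, mon_size
    return y * mh / ch, x * mw / cw

frame = np.zeros((120, 160), dtype=np.uint8)
frame[50:53, 80:83] = 255                       # synthetic laser dot
p = detect_pointer(frame)
print(p)                                        # → (51.0, 81.0)
print(to_monitor(p, (120, 160), (1080, 1920)))  # → (459.0, 972.0)
```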

Design of Pedestrian Detection and Tracking System Using HOG-PCA and Object Tracking Algorithm (HOG-PCA와 객체 추적 알고리즘을 이용한 보행자 검출 및 추적 시스템 설계)

  • Jeon, Pil-Han;Park, Chan-Jun;Kim, Jin-Yul;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.66 no.4
    • /
    • pp.682-691
    • /
    • 2017
  • In this paper, we propose a fused design methodology for a pedestrian detection and object tracking system realized with the aid of a HOG-PCA based RBFNN pattern classifier. The proposed system includes detection and tracking parts. In the detection part, HOG features are extracted from input images for pedestrian detection. Dimensionality reduction is also carried out to improve detection performance as well as processing speed by using PCA, a typical dimensionality reduction method. The reduced features are used as the input of the FCM-based RBFNN pattern classifier to carry out pedestrian detection. The FCM-based RBFNN pattern classifier consists of condition, conclusion, and inference parts, and the FCM clustering algorithm is used as the activation function of the hidden layer. In the conclusion part of the network, polynomial functions such as constant, linear, quadratic, and modified quadratic are regarded as connection weights, and the coefficients of the polynomial functions are estimated by LSE-based learning. In the tracking part, object tracking algorithms such as mean shift (MS) and CAMShift (CS) trace one of the pedestrian candidates nominated in the detection part. Finally, the INRIA person database is used to evaluate the pedestrian detection performance of the proposed system, while the MIT pedestrian video as well as indoor and outdoor videos obtained from the IC&CI laboratory at Suwon University are exploited to evaluate the tracking performance.
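
The PCA dimension-reduction step described above (high-dimensional HOG features reduced before classification) can be sketched with a plain SVD. This is a generic illustration, not the authors' pipeline; the sample count and the 3780-dimensional feature size are invented for the example.

```python
import numpy as np

def pca_reduce(X, k):
    """Project each row of X (one HOG feature vector per sample)
    onto the top-k principal components of the data."""
    Xc = X - X.mean(axis=0)                      # center the data
    # Right singular vectors of the centered data are the PCs.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # (n_samples, k) scores

rng = np.random.default_rng(0)
feats = rng.random((200, 3780))       # e.g. 3780-dim HOG vectors
print(pca_reduce(feats, 40).shape)    # → (200, 40)
```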

Vehicle Tracking System using HSV Color Space at nighttime (HSV 색 공간을 이용한 야간 차량 검출시스템)

  • Park, Ho-Sik
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.8 no.4
    • /
    • pp.270-274
    • /
    • 2015
  • We suggest a nighttime vehicle detection system based on the HSV color space. When a vehicle is under surveillance, it is essential to extract its license plate; to do so, the license plate can be enlarged to a usable size after the target vehicle is photographed from a distance with a Pan-Tilt-Zoom camera. Either the Mean-Shift or the Optical Flow algorithm is generally used for vehicle detection and tracking, but both have difficulty detecting and tracking a vehicle at night. By exploiting the fact that a vehicle's headlights and taillights stand out when the input image is converted into the HSV color space, we improve these algorithms for nighttime vehicle detection and tracking. In this paper, we show that at night the suggested method detects a vehicle with 93.9% accuracy from the front and 97.7% from the back.
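
The key observation above, that vehicle lamps dominate the frame once it is converted to HSV, can be illustrated with a per-pixel test: a headlight pixel is near-white, i.e. very high value and low saturation. The thresholds below are illustrative assumptions, not values from the paper (taillights, being saturated red, would need a separate hue test not shown here).

```python
import colorsys

def is_headlight_pixel(r, g, b, v_min=0.9, s_max=0.3):
    """True if an RGB pixel looks like a headlight in HSV terms:
    high value (bright) and low saturation (near-white)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return v >= v_min and s <= s_max

print(is_headlight_pixel(250, 248, 240))  # bright white lamp → True
print(is_headlight_pixel(40, 40, 60))     # dark background  → False
```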

Approximate Front Face Image Detection Using Facial Feature Points (얼굴 특징점들을 이용한 근사 정면 얼굴 영상 검출)

  • Kim, Su-jin;Jeong, Yong-seok;Oh, Jeong-su
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.05a
    • /
    • pp.675-678
    • /
    • 2018
  • Since the face has unique properties that identify a human, face recognition is actively used in security and authentication areas such as access control, criminal search, and CCTV. A frontal face image carries the most facial information, so it is desirable to acquire face images as close to frontal as possible for face recognition. In this study, the face region is detected using the AdaBoost algorithm with Haar-like features and tracked using the mean-shift algorithm. Then, feature points of facial elements such as the eyes and mouth are extracted from the face region; the ratio between the two eyes and the degree of rotation of the face are calculated from their geometric information, and an approximate frontal face image is presented in real time.

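
The geometric step in the entry above, estimating face rotation from eye feature points, reduces (for in-plane rotation) to the angle of the line through the two eye centers. A small sketch with invented coordinates; the paper's use of the eye-distance ratio for out-of-plane rotation is not shown.

```python
import math

def face_roll(left_eye, right_eye):
    """In-plane rotation (roll) of the face, in degrees, from the two
    eye centers in (x, y) image coordinates; 0 means the eyes are level."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    return math.degrees(math.atan2(ry - ly, rx - lx))

print(face_roll((100, 120), (160, 120)))  # level eyes → 0.0
print(face_roll((100, 120), (160, 130)))  # tilted head, ≈ 9.46 degrees
```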

A Framework of Recognition and Tracking for Underwater Objects based on Sonar Images : Part 2. Design and Implementation of Realtime Framework using Probabilistic Candidate Selection (소나 영상 기반의 수중 물체 인식과 추종을 위한 구조 : Part 2. 확률적 후보 선택을 통한 실시간 프레임워크의 설계 및 구현)

  • Lee, Yeongjun;Kim, Tae Gyun;Lee, Jihong;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.3
    • /
    • pp.164-173
    • /
    • 2014
  • In underwater robotics, vision is a key element for recognition in underwater environments. However, due to turbidity, an underwater optical camera is rarely usable, and an underwater imaging sonar, the usual alternative, delivers low-quality sonar images that are not stable or accurate enough for natural objects to be identified by image processing. For this reason, artificial landmarks based on the characteristics of ultrasonic waves, together with a recognition method based on a shape matrix transformation, were proposed and proven in Part 1. However, that approach does not work properly on an undulating and dynamically noisy sea bottom. To solve this, we propose a framework consisting of a likelihood-candidate selection phase, a final-candidate selection phase, a recognition phase, and a tracking phase over image sequences, along with a particle-filter-based selection mechanism to eliminate false candidates and a mean-shift-based tracking algorithm. All four steps run in parallel and in real time. The proposed framework is flexible, making it easy to add and modify internal algorithms. A pool test and a sea trial were carried out to prove the performance, and a detailed analysis of the experimental results is given. The information obtained in the tracking phase, such as relative distance and bearing, is expected to be used for the control and navigation of underwater robots.
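
The particle-filter-based candidate selection described above hinges on resampling: candidates are duplicated in proportion to their weights, so false candidates with low likelihood die out. A generic systematic-resampling sketch (not the authors' code; candidate labels and weights are invented):

```python
import random

def resample(particles, weights):
    """Systematic resampling: draw n evenly spaced pointers into the
    cumulative weights, so low-weight (fake) candidates disappear."""
    n = len(particles)
    step = sum(weights) / n
    u = random.uniform(0, step)
    out, cum, i = [], weights[0], 0
    for _ in range(n):
        while cum < u:
            i += 1
            cum += weights[i]
        out.append(particles[i])
        u += step
    return out

# Candidate 'c' carries almost all the likelihood.
kept = resample(['a', 'b', 'c', 'd'], [0.001, 0.001, 0.997, 0.001])
print(kept.count('c'))  # almost always 4
```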

Implementation of an Immersive Game System Using Facial Feature Tracking for the Physically Disabled (신체 장애우를 위한 얼굴 특징 추적을 이용한 실감형 게임 시스템 구현)

  • Ju, Jin-Sun;Shin, Yun-Hee;Kim, Eun-Yi
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10a
    • /
    • pp.475-478
    • /
    • 2006
  • An immersive game is a game that pursues reality by reflecting the player's body movements and senses as much as possible. Currently available immersive games are designed for non-disabled players and therefore require a great deal of movement, which makes them difficult for people with physical disabilities to use. This paper therefore proposes an immersive game system that can be operated on a PC with minimal facial movement. The proposed system extracts the eye region from images captured by a web camera using a neural-network-based texture classifier. The extracted eye region is tracked in real time with the Mean-shift algorithm, and the tracking result controls the mouse movement, so that a linked Flash game can be controlled by eye movement alone. To verify the effectiveness of the proposed system, its performance was evaluated separately for disabled and non-disabled users. The results demonstrated that the proposed system offers the physically disabled a more convenient and familiar means of access, and that reliable face tracking allows the immersive game system to run even in complex environments.


Implementation of Finger-Gesture Game Controller using CAMShift and Double Circle Tracing Method (CAMShift와 이중 원형 추적법을 이용한 손 동작 게임 컨트롤러 구현)

  • Lee, Woo-Beom
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.15 no.2
    • /
    • pp.42-47
    • /
    • 2014
  • A finger-gesture game controller using a single camera is implemented in this paper, based on recognizing the number of fingers and the moving direction of the index finger. The proposed method uses the CAMShift algorithm to trace the end point of the index finger effectively, and the number of fingers is recognized using a double circle tracing method. An HSI color model transformation is performed for the CAMShift algorithm, and the YCbCr color model is used in the double circle tracing method. All processing tasks are implemented using the Intel OpenCV library and C++. To evaluate the performance of the proposed method, we developed a shooting game simulator and validated the proposed method on it. The proposed method showed an average recognition ratio of more than 90% for each game command mode.
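
CAMShift extends mean shift with a scale step, which is what lets it follow a fingertip that moves toward or away from the camera: after each shift, the window side is recomputed from the zeroth moment (total probability mass) inside it. A sketch of that step alone, assuming a back-projection normalized to [0, 1] and the classic side = 2·sqrt(M00) heuristic; window placement and the double circle tracing are not shown.

```python
import numpy as np

def camshift_scale(backproj, y, x, half):
    """Recompute the window side from the zeroth moment M00 inside
    the current window (back-projection values assumed in 0..1)."""
    win = backproj[max(y - half, 0):y + half + 1,
                   max(x - half, 0):x + half + 1]
    m00 = win.sum()
    return max(int(round(2 * np.sqrt(m00))), 1)

bp = np.zeros((100, 100))
bp[40:50, 40:50] = 1.0                       # 10x10 blob, mass 100
print(camshift_scale(bp, 45, 45, half=20))   # → 20
```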

Welfare Interface using Multiple Facial Features Tracking (다중 얼굴 특징 추적을 이용한 복지형 인터페이스)

  • Ju, Jin-Sun;Shin, Yun-Hee;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.1
    • /
    • pp.75-83
    • /
    • 2008
  • We propose a welfare interface using multiple facial feature tracking, which can efficiently implement various mouse operations. The proposed system consists of five modules: face detection, eye detection, mouth detection, facial feature tracking, and mouse control. The facial region is first obtained using a skin-color model and connected-component analysis (CCA). Thereafter the eye regions are localized using a neural network (NN)-based texture classifier that discriminates the facial region into eye and non-eye classes, and then the mouth region is localized using an edge detector. Once the eye and mouth regions are localized, they are continuously and correctly tracked by the mean-shift algorithm and template matching, respectively. Based on the tracking results, mouse operations such as movement or click are implemented. To assess the validity of the proposed system, it was applied to an interface system for a web browser and tested on a group of 25 users. The results show that our system achieves 99% accuracy and processes more than 21 frames/sec on a PC for 320×240 input images, so it can provide user-friendly and convenient access to a computer in real time.
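
The template matching used above for the mouth can be sketched as an exhaustive SSD search: the template slides over the search image and the window with the smallest squared difference wins. Illustrative only; the image and patch are synthetic, and real trackers restrict the search to a neighborhood of the last position.

```python
import numpy as np

def match_template(image, template):
    """Return the top-left corner (y, x) of the window with the
    smallest sum of squared differences to the template."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = None, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            d = ((image[y:y + h, x:x + w] - template) ** 2).sum()
            if best is None or d < best:
                best, best_pos = d, (y, x)
    return best_pos

rng = np.random.default_rng(1)
img = rng.random((40, 40))
mouth = img[22:28, 15:25].copy()   # pretend this patch is the mouth template
print(match_template(img, mouth))  # → (22, 15)
```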

Design and Implementation of a Real-Time Lipreading System Using PCA & HMM (PCA와 HMM을 이용한 실시간 립리딩 시스템의 설계 및 구현)

  • Lee, Chi-Geun;Lee, Eun-Suk;Jung, Sung-Tae;Lee, Sang-Seol
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.11
    • /
    • pp.1597-1609
    • /
    • 2004
  • Many lipreading systems have been proposed to compensate for the drop in speech recognition rate in noisy environments. Previous lipreading systems work only under specific conditions such as artificial lighting and a predefined background color. In this paper, we propose a real-time lipreading system which allows the speaker to move and relaxes the restrictions on color and lighting conditions. The proposed system extracts the face and lip region and the essential visual information from an input video sequence captured with a common PC camera, and recognizes uttered words from that visual information, all in real time. It uses a hue histogram model to extract the face and lip region, and the mean shift algorithm to track the face of a moving speaker. It uses PCA (Principal Component Analysis) to extract the visual information for learning and testing, and HMM (Hidden Markov Model) as the recognition algorithm. The experimental results show that our system achieves a recognition rate of 90% for speaker-dependent lipreading and increases the speech recognition rate by 40~85% depending on the noise level when combined with audio speech recognition.

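
The hue histogram model used above for face and lip extraction works by back-projection: build a hue histogram from a model patch, then score every frame pixel by how common its hue is in that model. A minimal sketch, assuming OpenCV-style hue values in 0..179; the bin count and pixel values are invented.

```python
import numpy as np

def hue_hist(patch, bins=16):
    """Normalized hue histogram of a model patch (hue in 0..179)."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 180))
    return h / max(h.sum(), 1)

def backproject(hue, hist, bins=16):
    """Score each pixel by the model frequency of its hue bin."""
    idx = np.clip(hue.astype(int) * bins // 180, 0, bins - 1)
    return hist[idx]

lips = np.full((8, 8), 10)         # model patch: hue 10 everywhere
model = hue_hist(lips)
frame = np.array([[10, 100]])      # one lip-hue pixel, one background pixel
print(backproject(frame, model))   # → [[1. 0.]]
```

The resulting score map is exactly the kind of likelihood image the mean shift tracker in this entry climbs.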