• Title/Abstract/Keyword: Optical flow algorithm

Search results: 189 items (processing time: 0.027 s)

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal / Vol. 36, No. 6 / pp.913-923 / 2014
  • This paper proposes a global mapping algorithm for multiple robots based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, using an object-extraction method that combines Lucas-Kanade optical flow motion detection with images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data of all the individual robots. Global mapping takes a long time because map data are exchanged among the robots while all areas are searched. An omnidirectional image sensor has many advantages for object detection and mapping because it measures all the information around a robot simultaneously. The computational cost of the correction algorithm is reduced compared with existing methods by correcting only the object's feature points. The proposed algorithm has two steps: first, a local map is created by the omnidirectional-vision SLAM approach for each robot; second, a global map is generated by merging the individual maps. The reliability of the proposed mapping algorithm is verified by comparing the maps it produces with real maps.
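The Lucas-Kanade step used here for motion detection can be sketched in NumPy. This is a minimal single-window version on synthetic frames, not the paper's multi-robot implementation; the Gaussian-blob frames and window size are assumptions for illustration.

```python
import numpy as np

def lucas_kanade_patch(I0, I1):
    """Estimate a single translation (u, v) over a patch via Lucas-Kanade.

    Solves the least-squares brightness-constancy system: the rows of A are
    the spatial gradients and b is the negated temporal difference.
    """
    Iy, Ix = np.gradient(I0.astype(float))   # axis 0 = rows (y), axis 1 = cols (x)
    It = I1.astype(float) - I0.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v) in pixels

# Toy frames: a smooth Gaussian blob, then the same blob shifted 1 px right
y, x = np.mgrid[0:64, 0:64]
frame0 = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 50.0)
frame1 = np.exp(-((x - 33) ** 2 + (y - 32) ** 2) / 50.0)

u, v = lucas_kanade_patch(frame0, frame1)
print(u, v)  # u close to 1, v close to 0
```

The least-squares solve is the whole of single-window Lucas-Kanade; real pipelines add windowing around feature points and pyramid levels for larger motions.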

Optical flow 이론을 이용한 움직이는 객체의 자동 추출에 관한 연구 (A study on automatic extraction of a moving object using optical flow)

  • 정철곤;김경수;김중규
    • Proceedings of the 2000 IEIE Summer Conference (4) / pp.50-53 / 2000
  • In this work, a new algorithm that automatically extracts a moving object from a video sequence is presented. To extract the moving object, velocity vectors are estimated for each frame of the video. Using the estimated velocity vectors, the position of the object is determined; the coordinates of the object initialize a seed, and the moving object is then automatically segmented in the image plane by region growing. Applied to sequential images, the method successfully extracts the moving object.
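The region-growing step of the pipeline above (seed from the velocity vectors, then growth in the image plane) can be sketched as a breadth-first fill. This is a minimal NumPy/stdlib version on a toy frame; the intensity tolerance `tol` is an assumed parameter, not from the paper.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed` by 4-connectivity, accepting neighbours
    whose intensity differs from the seed value by at most `tol`."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(float(img[nr, nc]) - seed_val) <= tol:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Toy frame: a bright 4x4 "object" on a dark background; seed inside it
img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 3:7] = 200
obj_mask = region_grow(img, seed=(4, 4), tol=30)
print(obj_mask.sum())  # → 16, the full object and nothing else
```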


안정적인 실시간 얼굴 특징점 추적과 감정인식 응용 (Robust Real-time Tracking of Facial Features with Application to Emotion Recognition)

  • 안병태;김응희;손진훈;권인소
    • The Journal of Korea Robotics Society / Vol. 8, No. 4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in real applications, because the positions of the facial features are extracted unstably owing to the limited number of iterations in the ASM fitting algorithm, and the inaccurate feature positions degrade emotion-recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework that combines ASM with LK optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid the tracking failures caused by partial occlusions, which are a serious problem for tracking-based algorithms. Emotion-recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
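The k-NN classification stage can be sketched with plain NumPy. The two-dimensional geometric features and their labels below are illustrative stand-ins, not the paper's data.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training samples (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy geometric features (e.g. mouth-corner distance, brow height) per emotion
X = np.array([[1.0, 0.1], [1.1, 0.2], [0.2, 1.0],
              [0.3, 1.1], [0.9, 0.9], [1.0, 1.0]])
y = np.array(["joy", "joy", "anger", "anger", "disgust", "disgust"])

print(knn_predict(X, y, np.array([1.05, 0.15])))  # → joy
```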

Hand Gesture Recognition using Optical Flow Field Segmentation and Boundary Complexity Comparison based on Hidden Markov Models

  • Park, Sang-Yun;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / Vol. 14, No. 4 / pp.504-516 / 2011
  • In this paper, we present a method to detect the human hand and recognize hand gestures. For detecting the hand region, we use human skin color and hand features (boundary complexity) to locate the hand in the input image, and an optical flow algorithm to track the hand movement. Hand gesture recognition is composed of two parts: posture recognition and motion recognition. To describe the hand posture, we employ the Fourier descriptor because it is rotation invariant, and we apply PCA to extract features across gesture frame sequences. An HMM is finally used to classify these features and make the final decision on a hand gesture. Experiments show that the proposed method achieves a 99% recognition rate in an environment with a simple background and no face region, dropping to 89.5% with a complex background that includes a face region. These results suggest that the proposed algorithm is practical for deployment.
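The rotation invariance that motivates the Fourier descriptor can be checked directly: rotating a contour multiplies its complex signal by a unit phase factor, which leaves the FFT magnitudes unchanged. A NumPy sketch on a synthetic contour (not the paper's hand boundaries):

```python
import numpy as np

def fourier_descriptor(contour_xy):
    """Magnitude spectrum of the complex contour signal z = x + iy;
    the magnitudes are invariant to rotation about the origin."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    return np.abs(np.fft.fft(z))

# A closed star-like contour sampled at 64 points
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour = np.stack([np.cos(t) + 0.3 * np.cos(3 * t),
                    np.sin(t) - 0.3 * np.sin(3 * t)], axis=1)

theta = 0.7  # rotate the whole contour by 0.7 rad
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotated = contour @ R.T

fd0 = fourier_descriptor(contour)
fd1 = fourier_descriptor(rotated)
print(np.allclose(fd0, fd1))  # → True: rotation leaves the magnitudes unchanged
```

Rotation by `theta` maps z to `exp(1j*theta) * z`, and the FFT is linear, so every coefficient picks up the same unit factor and the magnitudes are untouched.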

옵티컬 플로우 분석을 통한 불법 유턴 차량 검지 (Detection of Illegal U-turn Vehicles by Optical Flow Analysis)

  • 송창호;이재성
    • The Journal of Korean Institute of Communications and Information Sciences / Vol. 39C, No. 10 / pp.948-956 / 2014
  • Today's intelligent vehicle detection systems aim to go beyond gathering traffic-flow information and to reduce negative factors such as congestion and accidents. This paper proposes an algorithm that detects illegal U-turn vehicles, one of the road-traffic violations most likely to cause fatal accidents. The work builds on the observation that if optical flow vectors appear along an illegal U-turn path, they are likely to have been produced by an illegally turning vehicle. To reduce computation before obtaining the optical flow vectors, feature points such as corners are selected in advance and only those points are tracked with the pyramid Lucas-Kanade algorithm. Because this algorithm is still computationally heavy, the center line is first detected using color information and the progressive probabilistic Hough transform, and tracking is applied only to the region around it. Among the detected vectors, those lying on the illegal U-turn path are selected, and their reliability is verified to confirm that they were produced by an illegal U-turn vehicle. Finally, the processing time of each stage was measured to evaluate performance, demonstrating the efficiency of the proposed algorithm.
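Pre-selecting corner-like feature points before pyramid Lucas-Kanade tracking can be illustrated with a plain Harris detector. This is a NumPy toy on a synthetic square, not the paper's pipeline; the sensitivity `k` and window size are assumed values.

```python
import numpy as np

def harris_corners(img, k=0.04, win=2):
    """Harris corner response R = det(M) - k*trace(M)^2, where M is the
    structure tensor summed over a (2*win+1)^2 window."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    h, w = img.shape
    R = np.zeros((h, w))
    for r in range(win, h - win):
        for c in range(win, w - win):
            sxx = Ixx[r - win:r + win + 1, c - win:c + win + 1].sum()
            syy = Iyy[r - win:r + win + 1, c - win:c + win + 1].sum()
            sxy = Ixy[r - win:r + win + 1, c - win:c + win + 1].sum()
            R[r, c] = (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2
    return R

# A white square on black: corners respond positively, edges negatively,
# and flat regions not at all
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_corners(img)
print(R[5, 5] > 0, R[5, 10] < 0, R[10, 10] == 0)
```

Points with a large positive response have gradients in two directions and are the ones worth handing to the tracker; edge points (one gradient direction) score negative and are ambiguous for matching.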

Motion Field Estimation Using U-Disparity Map in Vehicle Environment

  • Seo, Seung-Woo;Lee, Gyu-Cheol;Yoo, Ji-Sang
    • Journal of Electrical Engineering and Technology / Vol. 12, No. 1 / pp.428-435 / 2017
  • In this paper, we propose a novel motion-field estimation algorithm that applies a U-disparity map and forward-backward error removal in a vehicular environment. In general, vehicle movement induces motion in images captured by a vehicle-mounted camera; however, the obtained motion vectors are inaccurate because of environmental factors such as illumination changes and vehicle shaking. It is particularly difficult to extract accurate motion vectors on the road surface, where adjacent pixel values are similar. The proposed algorithm therefore first removes the road-surface region from the image using a U-disparity map, and then computes the optical flow, which represents the motion vectors of objects, on the remaining part of the image. The algorithm also applies a forward-backward error-removal technique to improve motion-vector accuracy, and the vehicle's movement is predicted by applying RANSAC (RANdom SAmple Consensus) to the resulting motion vectors, yielding a motion field. Experimental results show that the proposed algorithm outperforms an existing algorithm.
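The RANSAC idea of extracting a dominant translation from noisy motion vectors can be sketched as follows. This is a minimal NumPy version on synthetic inliers and outliers; the inlier threshold and iteration count are assumed, and the paper's full model is richer than a pure translation.

```python
import numpy as np

def ransac_translation(flow, n_iter=200, thresh=0.5, rng=None):
    """Estimate a dominant 2-D translation from noisy motion vectors:
    hypothesise from one random vector, keep the hypothesis with the
    most inliers, and refit on that inlier set."""
    rng = np.random.default_rng(rng)
    best_t, best_inliers = None, -1
    for _ in range(n_iter):
        cand = flow[rng.integers(len(flow))]
        inliers = np.linalg.norm(flow - cand, axis=1) < thresh
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            best_t = flow[inliers].mean(axis=0)
    return best_t

# 80 vectors from camera ego-motion (~[2, 0]) plus 20 large outliers
data_rng = np.random.default_rng(0)
ego = np.array([2.0, 0.0]) + 0.1 * data_rng.standard_normal((80, 2))
outliers = 5.0 * data_rng.standard_normal((20, 2))
t = ransac_translation(np.vstack([ego, outliers]), rng=1)
print(t)  # close to [2, 0] despite 20% outliers
```

A plain mean over all vectors would be dragged toward the outliers; the consensus step is what makes the estimate robust.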

연속 영상에서의 얼굴표정 및 제스처 인식 (Recognizing Human Facial Expressions and Gesture from Image Sequence)

  • 한영환;홍승홍
    • Journal of Biomedical Engineering Research / Vol. 20, No. 4 / pp.419-425 / 1999
  • In this paper, we developed a system that recognizes facial expressions and gestures in real time from grayscale video. For face recognition, we combine template matching with prior knowledge of the geometric properties of the face. This hybrid method localizes the face region in the input image, and optical flow is applied to that region to recognize the facial expression. For gesture recognition, we propose an entropy-analysis method that separates the hand region from a complex background, and improve it to recognize hand-motion gestures. Experimental results show that, largely independent of the background of the input image, the system detects the regions with large motion and recognizes facial expressions and hand gestures in real time.
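The entropy analysis used to separate the hand from the background can be sketched block-wise: textured regions have a spread-out grey-level histogram and hence high Shannon entropy, while flat background scores near zero. A NumPy toy on a synthetic frame; block and histogram sizes are assumed.

```python
import numpy as np

def block_entropy(img, block=8, bins=16):
    """Shannon entropy (bits) of the grey-level histogram of each block."""
    h, w = img.shape
    ent = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            patch = img[i*block:(i+1)*block, j*block:(j+1)*block]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]                      # 0*log(0) contributes nothing
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

rng = np.random.default_rng(0)
img = np.full((32, 32), 40, dtype=np.uint8)        # flat background
img[8:24, 8:24] = rng.integers(0, 256, (16, 16))   # textured "hand" region
ent = block_entropy(img)
print(ent[0, 0], ent[2, 2])  # flat block scores 0, textured block scores high
```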


어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘 (Localization using Ego Motion based on Fisheye Warping Image)

  • 최윤원;최경식;최정원;이석규
    • Journal of Institute of Control, Robotics and Systems / Vol. 20, No. 1 / pp.70-77 / 2014
  • This paper proposes a novel ego-motion-based localization algorithm that uses Lucas-Kanade optical flow and warped images obtained through fish-eye lenses mounted on robots. An omnidirectional image sensor is desirable for real-time view-based recognition because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether captured via a mirror reflection or by combining multiple camera images, is essential because it is difficult to extract information from the raw image. The proposed algorithm can be summarized as follows. First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through downward-facing fish-eye lenses. Second, we extract motion vectors from the preprocessed images using Lucas-Kanade optical flow. Third, we estimate the robot's position and heading with an ego-motion method that uses the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed localization algorithm by comparing the positions and angles it estimates with those measured by a Global Vision Localization System.
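The vanishing-point step can be illustrated with a least-squares intersection of the flow rays: each motion vector defines a line, and the point minimizing the summed squared perpendicular distances to all lines is found in closed form. A NumPy sketch on synthetic rays; the paper uses RANSAC on real vectors, which this sketch omits.

```python
import numpy as np

def vanishing_point(points, dirs):
    """Least-squares intersection of lines p_i + t*d_i: minimise the sum
    of squared perpendicular distances to every line."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)  # projector onto the line's normal space
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Flow rays all emanating from a focus of expansion at (3, 2)
foe = np.array([3.0, 2.0])
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, (30, 2))
dirs = pts - foe                 # each vector points away from the FOE
print(vanishing_point(pts, dirs))  # recovers (3, 2)
```

Setting the gradient of the summed squared distances to zero gives exactly the 2x2 linear system solved above, which is why no iteration is needed when there are no outliers.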

천정부착 랜드마크 위치와 에지 화소의 이동벡터 정보에 의한 이동로봇 위치 인식 (Mobile Robot Localization using Ceiling Landmark Positions and Edge Pixel Movement Vectors)

  • 진홍신;아디카리 써얌프;김성우;김형석
    • Journal of Institute of Control, Robotics and Systems / Vol. 16, No. 4 / pp.368-373 / 2010
  • A new indoor mobile-robot localization method is presented. The robot recognizes well-designed single-color landmarks on the ceiling with its vision system and uses them as references to compute its precise position. The proposed likelihood-prediction-based method enables the robot to estimate its position from the orientation of a landmark alone. The use of single-color landmarks reduces the complexity of the landmark structure and makes the landmarks easy to detect. Edge-based optical flow is further used to compensate for landmark-recognition errors. The technique is applicable to navigation in indoor spaces of unlimited size. A prediction scheme and a localization algorithm are proposed, and edge-based optical flow and data fusion are presented. Experimental results show that the proposed method estimates the robot position accurately, with a localization error within 5 cm and a directional error of less than 4 degrees.

A New Ocular Torsion Measurement Method Using Iterative Optical Flow

  • Lee InBum;Choi ByungHun;Kim SangSik;Park Kwang Suk
    • Journal of Biomedical Engineering Research / Vol. 26, No. 3 / pp.133-138 / 2005
  • This paper presents a new method for measuring ocular torsion using optical flow. Images of the iris were cropped and transformed into rectangular, orientation-invariant images. Feature points in the iris region were selected from a reference and a target image according to the strength of the corners in the iris image, and the shift of each feature was calculated with the iterative Lucas-Kanade method. The accuracy of the algorithm was tested on printed eye images, in which torsion was measured with $0.15^{\circ}$ precision. The proposed method remains robust under gaze-direction changes and pupillary reflex in a real-time processing environment.
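The polar unwrap that makes the iris image orientation invariant can be sketched as follows: after resampling the iris annulus into a rectangle, a torsion of the eye becomes a horizontal shift, recoverable here by circular cross-correlation. This is a NumPy toy on a synthetic angular pattern, not the paper's iterative Lucas-Kanade measurement; the annulus radii and sampling counts are assumed.

```python
import numpy as np

def unwrap_annulus(img, center, r_in, r_out, n_theta=360, n_r=16):
    """Resample an annulus around `center` into a (n_r, n_theta) rectangle
    by nearest-neighbour lookup; rotation about the center becomes a
    horizontal shift of the rectangle."""
    cy, cx = center
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rect = np.zeros((n_r, n_theta))
    for i, r in enumerate(np.linspace(r_in, r_out, n_r)):
        ys = np.round(cy + r * np.sin(thetas)).astype(int)
        xs = np.round(cx + r * np.cos(thetas)).astype(int)
        rect[i] = img[ys, xs]
    return rect

# Synthetic iris pattern whose intensity depends only on the polar angle
y, x = np.mgrid[0:101, 0:101]
ang = np.arctan2(y - 50.0, x - 50.0)

def pattern(a):
    return np.cos(a) + 0.5 * np.cos(5 * a)

iris0 = pattern(ang)
iris1 = pattern(ang - np.deg2rad(10))   # eye torted by 10 degrees

r0 = unwrap_annulus(iris0, (50, 50), 20, 40)
r1 = unwrap_annulus(iris1, (50, 50), 20, 40)

# Recover the torsion by circular cross-correlation (1 column = 1 degree)
score = [np.sum(r1 * np.roll(r0, s, axis=1)) for s in range(360)]
torsion = int(np.argmax(score))
print(torsion)  # close to the injected 10 degrees
```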