Motion Field Estimation Using U-disparity Map and Forward-Backward Error Removal in Vehicle Environment

  • Received : 2015.07.20
  • Accepted : 2015.10.06
  • Published : 2015.12.30

Abstract

In this paper, we propose a novel motion field estimation method that uses a U-disparity map and forward-backward error removal in a vehicle environment. In general, motion vectors in images captured by a vehicle-mounted camera arise from the vehicle's own movement, but their accuracy is degraded by the surrounding environment. In particular, accurate motion vectors are difficult to extract on the road surface, where adjacent pixels are similar to one another. The proposed method therefore removes the road surface using a U-disparity map and computes optical flow only on the remaining region. Forward-backward error removal is then applied to improve the accuracy of the motion vectors. Finally, we estimate the vehicle's motion by applying RANSAC (RANdom SAmple Consensus) to the acquired motion vectors and generate the motion field. Experimental results show that the proposed algorithm outperforms existing schemes.
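As a rough illustration of the road-removal step: a U-disparity map accumulates, for each image column, a histogram of the disparity values in that column. Vertical structures put many rows of a column at a single disparity and produce high counts, while road pixels spread their disparities across many bins, so thresholding the counts discards the road surface. The following NumPy sketch is a minimal illustration of that idea, not the authors' implementation; the function names, the integer-disparity assumption, and the threshold are ours:

```python
import numpy as np

def u_disparity(disp, max_d):
    """Accumulate a U-disparity map from an integer disparity image.

    disp:  (H, W) integer disparity map; negative values mean invalid.
    max_d: number of disparity bins.
    Returns a (max_d, W) array where entry (d, u) counts how many
    pixels in column u have disparity d.
    """
    h, w = disp.shape
    umap = np.zeros((max_d, w), dtype=np.int32)
    valid = (disp >= 0) & (disp < max_d)
    rows, cols = np.nonzero(valid)
    # scatter-add one count per valid pixel into its (disparity, column) bin
    np.add.at(umap, (disp[rows, cols], cols), 1)
    return umap

def obstacle_mask(disp, max_d, count_thresh):
    """Keep pixels whose (disparity, column) bin is heavily populated.

    Vertical obstacles concentrate many rows of a column at one
    disparity, giving high counts; road pixels spread thinly across
    disparities and fall below the threshold, which removes the road.
    """
    umap = u_disparity(disp, max_d)
    valid = (disp >= 0) & (disp < max_d)
    r, c = np.nonzero(valid)
    mask = np.zeros(disp.shape, dtype=bool)
    mask[r, c] = umap[disp[r, c], c] >= count_thresh
    return mask
```

Optical flow would then be computed only where `obstacle_mask` is true, avoiding the ambiguous, self-similar road texture the abstract describes.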

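The forward-backward check can be illustrated independently of any particular tracker: a point is tracked from frame t to t+1, then tracked back, and its motion vector is rejected when the round trip does not return close to the starting point. Below is a minimal NumPy sketch over dense flow fields; the function name and threshold are hypothetical, and a point-track version (as in the cited Kalal et al. paper) works the same way:

```python
import numpy as np

def forward_backward_mask(flow_fwd, flow_bwd, threshold=1.0):
    """Reject motion vectors with a large forward-backward error.

    flow_fwd: (H, W, 2) flow from frame t to t+1, stored as (dx, dy).
    flow_bwd: (H, W, 2) flow from frame t+1 back to frame t.
    A pixel p moved forward lands at p + flow_fwd[p]; adding the
    backward flow sampled there should return it to p. The norm of the
    residual is the forward-backward error.
    """
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # nearest-neighbour position each pixel reaches in frame t+1
    dst_x = np.clip(np.rint(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    dst_y = np.clip(np.rint(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    # forward flow plus backward flow at the destination should cancel
    residual = flow_fwd + flow_bwd[dst_y, dst_x]
    fb_error = np.linalg.norm(residual, axis=-1)
    return fb_error < threshold
```

Only vectors that survive this mask would be handed to RANSAC for the final motion-model fit.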

References

  1. Y. Zhang, M. Xie, and D. Tang, "A central sub-image based global motion estimation method for in-car video stabilization," in Proc. ACM SIGKDD KDD, Phuket, Thailand, pp. 204-207, Jan. 2010.
  2. K. Yamaguchi, T. Kato, and Y. Ninomiya, "Vehicle ego-motion estimation and moving object detection using a monocular camera," in Proc. Int. Conf. Pattern Recognition, Hong Kong, pp. 610-613, Aug. 2006.
  3. O. Pink, F. Moosmann, and A. Bachmann, "Visual features for vehicle localization and ego-motion estimation," in Proc. IEEE Intell. Veh. Symp., Xi'an, China, pp. 254-260, Jun. 2009.
  4. G. Ligorio and A. M. Sabatini, "Extended Kalman filter-based methods for pose estimation using visual, inertial and magnetic sensors: comparative analysis and performance evaluation," Sensors, vol. 13, no. 2, pp. 1919-1941, Feb. 2013. https://doi.org/10.3390/s130201919
  5. F. J. Sharifi and M. Marey, "A Kalman-filter-based method for pose estimation in visual servoing," IEEE Trans. Robotics, vol. 26, no. 5, pp. 939-947, Oct. 2010. https://doi.org/10.1109/TRO.2010.2061290
  6. M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, vol. 24, no. 6, pp. 381-395, Jun. 1981. https://doi.org/10.1145/358669.358692
  7. V. Lippiello, B. Siciliano, and L. Villani, "Adaptive extended Kalman filtering for visual motion estimation of 3D objects," Control Eng. Practice, vol. 15, no. 1, pp. 123-134, Jan. 2007. https://doi.org/10.1016/j.conengprac.2006.05.006
  8. W. Jang, C. Lee, and Y. Ho, "Efficient depth map generation for various stereo camera arrangements," J. KICS, vol. 37, no. 6, pp. 458-463, Jun. 2012. https://doi.org/10.7840/KICS.2012.37.6A.458
  9. E. Baek and Y. Ho, "Stereo image composition using poisson object editing," J. KICS, vol. 39, no. 8, pp. 453-458, Aug. 2014.
  10. Z. Hu and K. Uchimura, "U-V-disparity: an efficient algorithm for stereo vision based scene analysis," in Proc. IEEE Intell. Veh. Symp., Las Vegas, USA, pp. 48-54, Jun. 2005.
  11. B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. Int. Joint Conf. Artificial Intell., Vancouver, Canada, pp. 674-679, Aug. 1981.
  12. C. Song and J. Lee, "Detection of illegal u-turn vehicles by optical flow analysis," J. KICS, vol. 39, no. 10, pp. 948-956, Oct. 2014.
  13. Z. Kalal, K. Mikolajczyk, and J. Matas, "Forward-backward error: automatic detection of tracking failures," in Proc. Int. Conf. Pattern Recognition, Istanbul, Turkey, pp. 23-26, Aug. 2010.
  14. C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. Alvey Vision Conf., Manchester, UK, pp. 147-151, Aug. 1988.
  15. B. K. P. Horn and B. G. Schunck, "Determining optical flow," Artificial Intell., vol. 17, no. 1-3, pp. 185-203, Aug. 1981. https://doi.org/10.1016/0004-3702(81)90024-2
  16. H. C. Longuet-Higgins and K. Prazdny, "The interpretation of a moving retinal image," Proc. Royal Soc. London B, vol. 208, no. 1173, pp. 385-397, Jul. 1980. https://doi.org/10.1098/rspb.1980.0057
  17. C. Keller, M. Enzweiler, and D. M. Gavrila, "A new benchmark for stereo-based pedestrian detection," in Proc. IEEE Intell. Veh. Symp., Baden-Baden, Germany, Jun. 2011.

Cited by

  1. Motion Field Estimation Using U-Disparity Map in Vehicle Environment vol.12, pp.1, 2015, https://doi.org/10.5370/jeet.2017.12.1.428