
Passive Ranging Based on Planar Homography in a Monocular Vision System

  • Wu, Xin-mei (School of Information Engineering, Zhejiang Agriculture and Forestry University) ;
  • Guan, Fang-li (State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University) ;
  • Xu, Ai-jun (School of Information Engineering, Zhejiang Agriculture and Forestry University)
  • Received : 2018.07.30
  • Accepted : 2018.12.30
  • Published : 2020.02.29

Abstract

Passive ranging is a critical component of machine vision measurement. Most passive ranging methods based on machine vision use binocular technology, which requires strict hardware conditions and lacks universality. To measure the distance of an object placed on a horizontal plane, we present a passive ranging method based on a monocular vision system using a smartphone. Experimental results show that, for image points with the same abscissa, the ordinates are linearly related to their actual imaging angles. Based on this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of special conjugate points into that function. The vertical distance of the target object to the optical axis is then calculated according to the imaging principle of the camera, and the range is derived from the depth and this vertical distance. Experiments show that ranging by this method is more accurate than methods based on binocular vision systems. The mean relative error of the depth measurement is 0.937% when the distance is within 3 m; at distances of 3-10 m, the mean relative error is 1.71%. Compared with other methods based on monocular vision systems, our method requires no calibration before ranging and avoids the error caused by data fitting.
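The abstract's pipeline can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the linear ordinate-to-angle relation described above, a known camera height, and that the imaging angle is measured below the horizontal so depth follows from simple trigonometry. All function names, variable names, and numeric values are illustrative assumptions.

```python
import math

def fit_linear_angle(v1, theta1, v2, theta2):
    """Solve theta = a*v + b from two conjugate points whose
    ordinates (v1, v2) and actual imaging angles (theta1, theta2)
    are known, per the assumed linear relation."""
    a = (theta2 - theta1) / (v2 - v1)
    b = theta1 - a * v1
    return a, b

def depth_from_ordinate(v, a, b, cam_height):
    """Depth along the ground plane: the imaging angle below the
    horizontal is recovered from the ordinate, then projected
    through the (assumed known) camera height."""
    theta = a * v + b  # imaging angle in radians
    return cam_height / math.tan(theta)

def ground_range(depth, lateral):
    """Combine depth with the lateral (vertical-to-optical-axis)
    distance to obtain the final range."""
    return math.hypot(depth, lateral)

# Made-up example: two reference rows of the image whose true
# imaging angles were measured beforehand, camera 1.5 m high.
a, b = fit_linear_angle(v1=400, theta1=math.radians(10),
                        v2=800, theta2=math.radians(30))
d = depth_from_ordinate(v=600, a=a, b=b, cam_height=1.5)
r = ground_range(d, lateral=0.8)
print(round(d, 3), round(r, 3))  # depth and range in metres
```

Because the two conjugate points fully determine the linear map, no prior camera calibration or curve fitting is needed in this sketch, which mirrors the advantage claimed in the abstract.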
