A kinect-based parking assistance system

  • Bellone, Mauro (Department of Engineering for Innovation, Università del Salento) ;
  • Pascali, Luca (Department of Engineering for Innovation, Università del Salento) ;
  • Reina, Giulio (Department of Engineering for Innovation, Università del Salento)
  • Received : 2013.08.01
  • Accepted : 2014.02.01
  • Published : 2014.04.25

Abstract

This work presents an IR-based system for parking assistance and obstacle detection in the automotive field that employs the Microsoft Kinect camera for fast 3D point cloud reconstruction. In contrast to previous research that attempts to explicitly identify obstacles, the proposed system aims to detect "reachable regions" of the environment, i.e., those regions that the vehicle can reach from its current position. A user-friendly 2D traversability grid of cells is generated and used as a visual aid for parking assistance. Given a raw 3D point cloud, each point is first mapped into an individual cell; then the elevation information is used within a graph-based algorithm to label each cell as traversable or non-traversable. Following this rationale, positive and negative obstacles, as well as unknown regions, can be implicitly detected. Additionally, no flat-world assumption is required. Experimental results, obtained from the system in typical parking scenarios, are presented, showing its effectiveness for scene interpretation and detection of several types of obstacles.
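The pipeline described above (mapping each point into a grid cell, keeping per-cell elevation, then growing the reachable region from the vehicle's cell with a graph traversal) can be illustrated with a minimal sketch. The cell size, the elevation-step threshold, and the breadth-first region growing below are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from collections import deque

def traversability_grid(points, cell_size=0.1, max_step=0.05, seed=(0, 0)):
    """Label grid cells as traversable (1), observed-but-unreachable (0),
    or unknown (-1), starting from the vehicle's seed cell.

    points: (N, 3) array of x, y, z coordinates; z is elevation.
    """
    # Map each 3D point into a 2D grid cell index.
    xy = np.floor(points[:, :2] / cell_size).astype(int)
    xy -= xy.min(axis=0)                 # shift indices to start at (0, 0)
    nx, ny = xy.max(axis=0) + 1

    # Keep the maximum elevation observed in each cell; NaN marks no data.
    elev = np.full((nx, ny), np.nan)
    for (i, j), z in zip(xy, points[:, 2]):
        if np.isnan(elev[i, j]) or z > elev[i, j]:
            elev[i, j] = z

    grid = np.full((nx, ny), -1, dtype=int)   # -1 = unknown (no points)
    grid[~np.isnan(elev)] = 0                 # observed, not yet reached

    # Breadth-first region growing from the seed cell: a neighbouring cell
    # is traversable if the elevation step to it is small enough.
    queue = deque([seed])
    grid[seed] = 1
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < nx and 0 <= nj < ny and grid[ni, nj] == 0:
                if abs(elev[ni, nj] - elev[i, j]) <= max_step:
                    grid[ni, nj] = 1
                    queue.append((ni, nj))
    return grid
```

Because cells are only marked traversable when reached through small elevation steps, both positive obstacles (walls, curbs) and negative ones (drops, holes, which appear as large downward steps) are excluded implicitly, and cells with no returns stay unknown, mirroring the behaviour described in the abstract.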

References

  1. Bellone, M., Reina, G., Giannoccaro, N.I. and Spedicato, L. (2013), "Unevenness point descriptor for terrain analysis in mobile robot applications", Int. J. Adv. Robot. Syst., 10, 284. https://doi.org/10.5772/56240
  2. Bouguet, J., "Camera Calibration Toolbox for Matlab". http://www.vision.caltech.edu/bouguetj/calib_doc/
  3. Choi, J., Kim, D., Yoo, H. and Sohn, K. (2012), "Rear obstacle detection system based on depth from Kinect", 15th International IEEE Conference on Intelligent Transportation Systems, Anchorage, Alaska, USA.
  4. Cousins, S. and Rusu, R.B. (2011), "3D is here: Point Cloud Library (PCL)", IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China.
  5. Gennery, D. (1999), "Traversability analysis and path planning for a planetary rover", Auto. Robot., 6, 131-146. https://doi.org/10.1023/A:1008831426966
  6. Hartley, R. and Zisserman, A. (2004), Multiple View Geometry in Computer Vision, Cambridge University Press, USA.
  7. Hatano, H., Yamazato, T. and Katayama, M. (2007), "Automotive ultrasonic array emitter for short-range targets detection", IEEE International Symposium on Wireless Communication and Systems, Trondheim, Norway.
  8. Hsu, C.M., Lian, F.L., Ting, J.A., Liang, J.A. and Chen, B.C. (2011), "Road detection based on breadth-first search in urban traffic scenes", Asian Control Conference (ASCC), Kaohsiung, Taiwan.
  9. Knuth, D.E. (1997), The Art of Computer Programming, Addison-Wesley, Boston.
  10. Lalonde, J., Laganiere, R. and Martel, L. (2012), "Single-view obstacle detection for smart back-up camera system", IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Ottawa, Canada.
  11. Lovegrove, S., Davison, A.J. and Ibanez-Guzman, J. (2011), "Accurate visual odometry from a rear parking camera", IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany.
  12. Marton, Z.C., Blodow, N., Dolha, M., Beetz, M. and Rusu, R.B. (2008), "Toward 3D point cloud based object maps for household environments", Robot. Auto. Syst., 56, 927-941. https://doi.org/10.1016/j.robot.2008.08.005
  13. Milella, A., Reina, G. and Siegwart, R. (2006), "Computer vision methods for improved mobile robot state estimation in challenging terrains", J. Multimedia, 1(7), 49-61.
  14. Reina, G., Ishigami, G., Nagatani, K. and Yoshida, K. (2010), "Odometry correction using visual slip-angle estimation for planetary exploration rovers", Adv. Robot., 24(3), 359-385. https://doi.org/10.1163/016918609X12619993300548
  15. Reina, G., Milella, A. and Underwood, J. (2012), "Self-learning classification of radar features for scene understanding", Robot. Auto. Syst., 60(11), 1377-1388. https://doi.org/10.1016/j.robot.2012.03.002
  16. Reina, G. and Milella, A. (2012), "Towards autonomous agriculture: automatic ground detection using trinocular stereovision", Sensors, 12(9), 12405-12423. https://doi.org/10.3390/s120912405
  17. Reisman, P., Mano, O., Avidan, S. and Shashua, A. (2004), "Crowd detection in video sequences", IEEE Intelligent Vehicles Symposium, Parma, Italy.
  18. Spedicato, L., Giannoccaro, N.I., Reina, G. and Bellone, M. (2013), "Clustering and PCA for reconstructing two perpendicular planes using ultrasonic sensors", Int. J. Adv. Robot. Syst., 10, 210. https://doi.org/10.5772/55606
  19. Vandapel, N., Huber, D., Kapuria, A. and Hebert, M. (2004), "Natural terrain classification using 3D ladar data", IEEE International Conference on Robotics and Automation, 1, 5117-5122.
  20. Vestri, C., Bougnoux, S., Bendahan, R., Fintzel, K., Wybo, S., Abad, F. and Kakinami, T. (2005), "Evaluation of a vision-based parking assistance system", IEEE Conference on Intelligent Transportation Systems, Vienna, Austria.
  21. Zhang, Z. (2000), "A flexible new technique for camera calibration", IEEE Trans. Pattern Anal. Mach. Intell., 22(11), 1330-1334. https://doi.org/10.1109/34.888718

Cited by

  1. Reducing the minimum range of a RGB-depth sensor to aid navigation in visually impaired individuals, vol.57, no.11, 2018, https://doi.org/10.1364/ao.57.002809