Point Cloud Data Driven Level of Detail Generation in Low Level GPU Devices

A Study on Level-of-Detail Generation Using Point Clouds on Low-Level GPU Devices

  • Kam, JungWon (The 1st Research and Department of Electronic Engineering, Changwon National University) ;
  • Gu, BonWoo (Department of M&S 1, SIMNET Corporation) ;
  • Jin, KyoHong (The 1st Research and Department of Electronic Engineering, Changwon National University)
  • Received : 2020.08.27
  • Accepted : 2020.11.13
  • Published : 2020.12.05

Abstract

Virtual worlds and simulations require rendering of large-scale maps. However, rendering too many vertices is computationally complex and time-consuming. Some game development companies have developed 3D LOD (level-of-detail) objects for high-speed rendering based on the distance between the camera and the 3D object. Terrain physics simulation researchers need a way to recognize the original object shape from 3D LOD objects. In this paper, we propose a simple automatic LOD framework that uses point cloud data (PCD). The PCD is created by six-directional orthographic ray casting. Various experiments are performed to validate the effectiveness of the proposed method. We hope the proposed automatic LOD generation framework can play an important role in game development and terrain physics simulation.
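The abstract does not detail the six-directional ray-casting step, so the following is a minimal sketch of one plausible interpretation: orthographic rays are cast on a regular grid from each face of the mesh's bounding box along +X, -X, +Y, -Y, +Z, and -Z, and the first surface hit of each ray becomes a point of the cloud. All names (Vec3, sampleMeshToPointCloud, the grid resolution) are illustrative assumptions, not the authors' implementation.

```cpp
// Hypothetical sketch: sampling a triangle mesh into a point cloud by casting
// axis-aligned orthographic rays from the six faces of the mesh's bounding box.
#include <array>
#include <cmath>
#include <cstdio>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v0, v1, v2; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore ray/triangle intersection; returns hit distance t, or -1 on miss.
static float intersect(Vec3 orig, Vec3 dir, const Triangle& tri) {
    const float kEps = 1e-7f;
    Vec3 e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return -1.0f;   // ray parallel to triangle plane
    float inv = 1.0f / det;
    Vec3 t = sub(orig, tri.v0);
    float u = dot(t, p) * inv;
    if (u < 0.0f || u > 1.0f) return -1.0f;
    Vec3 q = cross(t, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    return dot(e2, q) * inv;                   // distance along the ray
}

// Cast a grid x grid set of parallel (orthographic) rays along each of the six
// axis directions; keep the first surface hit of each ray as one cloud point.
std::vector<Vec3> sampleMeshToPointCloud(const std::vector<Triangle>& mesh,
                                         Vec3 bbMin, Vec3 bbMax, int grid) {
    const std::array<Vec3, 6> dirs = {{{1, 0, 0}, {-1, 0, 0}, {0, 1, 0},
                                       {0, -1, 0}, {0, 0, 1}, {0, 0, -1}}};
    Vec3 size = sub(bbMax, bbMin);
    std::vector<Vec3> cloud;
    for (Vec3 dir : dirs) {
        for (int i = 0; i < grid; ++i) {
            for (int j = 0; j < grid; ++j) {
                float a = (i + 0.5f) / grid, b = (j + 0.5f) / grid;
                // Ray origin lies just outside the bounding box, on the face
                // that the ray enters through.
                Vec3 o;
                if (dir.x != 0.0f)
                    o = {dir.x > 0 ? bbMin.x - 1 : bbMax.x + 1,
                         bbMin.y + a * size.y, bbMin.z + b * size.z};
                else if (dir.y != 0.0f)
                    o = {bbMin.x + a * size.x,
                         dir.y > 0 ? bbMin.y - 1 : bbMax.y + 1,
                         bbMin.z + b * size.z};
                else
                    o = {bbMin.x + a * size.x, bbMin.y + b * size.y,
                         dir.z > 0 ? bbMin.z - 1 : bbMax.z + 1};
                float best = std::numeric_limits<float>::max();
                for (const Triangle& tri : mesh) {   // brute-force nearest hit
                    float t = intersect(o, dir, tri);
                    if (t > 0.0f && t < best) best = t;
                }
                if (best < std::numeric_limits<float>::max())
                    cloud.push_back({o.x + best * dir.x,
                                     o.y + best * dir.y,
                                     o.z + best * dir.z});
            }
        }
    }
    return cloud;
}

int main() {
    // Toy one-triangle "mesh"; a real LOD pipeline would load a full model.
    std::vector<Triangle> mesh = {{{0, 0, 0}, {1, 0, 0}, {0, 1, 0}}};
    std::vector<Vec3> cloud = sampleMeshToPointCloud(mesh, {0, 0, 0}, {1, 1, 1}, 8);
    std::printf("sampled %zu points\n", cloud.size());
    return 0;
}
```

The sketch tests every ray against every triangle for clarity; a practical version aimed at low-level GPU devices would likely use a spatial acceleration structure or a GPU depth-buffer pass per direction instead of CPU-side brute force.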
