Refinement of Projection Map Based on Artificial Neural Networks to Represent Noise-Reduced Foam Effects

  • Received: 2021.07.01
  • Accepted: 2021.08.27
  • Published: 2021.09.01

Abstract

In this paper, we propose an artificial neural network framework that represents the foam effects of liquid simulations in detail and without noise. The positions and advection of foam particles are computed with the existing screen-space projection method, and the noise that arises in this process is removed by the proposed network. The key component of the screen-space projection approach is the projection map; however, noise appears in this map when momentum is projected onto the discretized screen space, and we solve this problem efficiently with a neural denoising network. Once the foam-generating regions have been selected from the projection map, the 2D map is inversely transformed into 3D space to generate foam particles. We also address a shortcoming of existing denoising networks, in which small-scale foam particles disappear. Moreover, by integrating the proposed algorithm into the screen-space projection framework, we retain all the advantages of that approach. Finally, various experiments demonstrate that our method stably represents not only clean foam effects but also the foam particles that would otherwise be lost during denoising.
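The pipeline described above (momentum splatted into a discretized projection map, then the selected 2D regions inversely transformed back into 3D) can be sketched compactly. The code below is a minimal illustration, not the authors' implementation: the camera model, the grid resolution, the threshold, and the function names (`build_projection_map`, `emit_foam_particles`) are all assumptions made for the example.

```python
import numpy as np

def build_projection_map(positions, momenta, view_proj, res=(256, 256)):
    """Splat particle momentum magnitudes onto a discretized screen grid.

    positions: (N, 3) world-space foam-source particle positions
    momenta:   (N, 3) particle momentum vectors
    view_proj: (4, 4) combined view-projection matrix (assumed camera model)
    """
    hom = np.concatenate([positions, np.ones((len(positions), 1))], axis=1)
    clip = hom @ view_proj.T
    ndc = clip[:, :2] / clip[:, 3:4]          # normalized device coords in [-1, 1]

    # Quantize NDC into pixel indices; this discretization is precisely
    # where the noise that the denoising network removes is introduced.
    px = ((ndc + 1.0) * 0.5 * np.array(res)).astype(int)
    ok = (px >= 0).all(axis=1) & (px < np.array(res)).all(axis=1)

    pmap = np.zeros(res, dtype=np.float32)
    np.add.at(pmap, (px[ok, 0], px[ok, 1]), np.linalg.norm(momenta[ok], axis=1))
    return pmap

def emit_foam_particles(pmap, depth, inv_view_proj, threshold=0.5):
    """Select foam-generating cells of the (denoised) map and unproject to 3D.

    depth: per-pixel NDC depth of the liquid surface, same resolution as pmap
    """
    ix, iy = np.nonzero(pmap > threshold)
    ndc_xy = np.stack([ix, iy], axis=1) / np.array(pmap.shape) * 2.0 - 1.0
    clip = np.concatenate([ndc_xy, depth[ix, iy][:, None],
                           np.ones((len(ix), 1))], axis=1)
    world = clip @ inv_view_proj.T
    return world[:, :3] / world[:, 3:4]       # (K, 3) foam spawn positions
```

For the denoising stage, the abstract says only that an artificial-neural-network-based denoising network is applied to the projection map; a DnCNN-style residual CNN is one plausible stand-in. The sketch below assumes single-channel maps and residual (noise-prediction) learning; `make_denoiser` and its depth and width are hypothetical choices, not the paper's architecture.

```python
import tensorflow as tf

def make_denoiser(depth=6, width=32):
    """DnCNN-style residual denoiser for single-channel projection maps.

    The network predicts the noise component and subtracts it from the
    input, so the output is the refined (denoised) projection map.
    """
    inp = tf.keras.Input(shape=(None, None, 1))
    x = tf.keras.layers.Conv2D(width, 3, padding='same', activation='relu')(inp)
    for _ in range(depth - 2):
        x = tf.keras.layers.Conv2D(width, 3, padding='same', use_bias=False)(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.ReLU()(x)
    noise = tf.keras.layers.Conv2D(1, 3, padding='same')(x)
    return tf.keras.Model(inp, tf.keras.layers.Subtract()([inp, noise]))

# Hypothetical training setup: supervised pairs of noisy and clean maps.
model = make_denoiser()
model.compile(optimizer='adam', loss='mse')
```

In a full system, the trained network would sit between the two functions above: the noisy map from `build_projection_map` is denoised and only then passed to `emit_foam_particles`. The paper's refinement step for recovering small-scale foam lost during denoising is its own contribution and is not reproduced in this sketch.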
