Acknowledgement
This work was supported by the Samsung Research Funding & Incubation Center of Samsung Electronics (SRFC-IT1702-54). This work was also supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (Ministry of Science and ICT) (2020-0-01389, Artificial Intelligence Convergence Research Center (Inha University)).