References
- Bradley, A., Klivington, J., Triscari, J., & van der Merwe, R. (2021). Cinematic-L1 video stabilization with a log-homography model. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1041-1049).
- Chen, Y. T., Tseng, K. W., Lee, Y. C., Chen, C. Y., & Hung, Y. P. (2021, September). PixStabNet: Fast multi-scale deep online video stabilization with pixel-based warping. In 2021 IEEE International Conference on Image Processing (ICIP) (pp. 1929-1933). IEEE.
- Choi, J., & Kweon, I. S. (2020). Deep iterative frame interpolation for full-frame video stabilization. ACM Transactions on Graphics (TOG), 39(1), 1-9. https://doi.org/10.1145/3363550
- Choi, J., Park, J., & Kweon, I. S. (2021). Self-supervised real-time video stabilization. arXiv preprint arXiv:2111.05980.
- Grundmann, M., Kwatra, V., & Essa, I. (2011, June). Auto-directed video stabilization with robust L1 optimal camera paths. In CVPR 2011 (pp. 225-232). IEEE.
- Liu, F., Gleicher, M., Jin, H., & Agarwala, A. (2009). Content-preserving warps for 3D video stabilization. ACM Transactions on Graphics (TOG), 28(3), 1-9.
- Liu, S., Wang, Y., Yuan, L., Bu, J., Tan, P., & Sun, J. (2012, June). Video stabilization with a depth camera. In 2012 IEEE Conference on Computer Vision and Pattern Recognition (pp. 89-95). IEEE.
- Liu, S., Yuan, L., Tan, P., & Sun, J. (2013). Bundled camera paths for video stabilization. ACM Transactions on Graphics (TOG), 32(4), 1-10.
- Liu, Y. L., Lai, W. S., Yang, M. H., Chuang, Y. Y., & Huang, J. B. (2021). Hybrid neural fusion for full-frame video stabilization. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 2299-2308).
- Shen, X., Wang, C., Li, X., Yu, Z., Li, J., Wen, C., ... & He, Z. (2019). RF-Net: An end-to-end image matching network based on receptive field. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8132-8140).
- Shi, Z., Shi, F., Lai, W. S., Liang, C. K., & Liang, Y. (2022). Deep online fused video stabilization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1250-1258).
- Sun, D., Yang, X., Liu, M. Y., & Kautz, J. (2018). PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 8934-8943).
- Wang, M., Yang, G. Y., Lin, J. K., Zhang, S. H., Shamir, A., Lu, S. P., & Hu, S. M. (2018). Deep online video stabilization with multi-grid warping transformation learning. IEEE Transactions on Image Processing, 28(5), 2283-2292.
- Xu, S. Z., Hu, J., Wang, M., Mu, T. J., & Hu, S. M. (2018, October). Deep video stabilization using adversarial networks. In Computer Graphics Forum (Vol. 37, No. 7, pp. 267-276).
- Xu, Y., Zhang, J., Maybank, S. J., & Tao, D. (2022). DUT: Learning video stabilization by simply watching unstable videos. IEEE Transactions on Image Processing, 31, 4306-4320. https://doi.org/10.1109/TIP.2022.3182887
- Yu, J., & Ramamoorthi, R. (2020). Learning video stabilization using optical flow. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8159-8167).
- Zhang, L., Chen, X. Q., Kong, X. Y., & Huang, H. (2017). Geodesic video stabilization in transformation space. IEEE Transactions on Image Processing, 26(5), 2219-2229. https://doi.org/10.1109/TIP.2017.2676354