References
- Gao, H., Tian, Y., Xu, F., & Zhong, S. (2020). Survey of deep learning model compression and acceleration. Journal of Software, 32(1), 68-92.
- Xu, P., Cao, J., Shang, F., Sun, W., & Li, P. (2020). Layer pruning via fusible residual convolutional block for deep neural networks. arXiv preprint arXiv:2011.14356.
- Han, Y., Huang, G., Song, S., Yang, L., Wang, H., & Wang, Y. (2021). Dynamic neural networks: A survey. arXiv preprint arXiv:2102.04906.
- Mostafa, H., & Wang, X. (2019). Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization. arXiv preprint arXiv:1902.05967.
- Liu, X., Pool, J., Han, S., & Dally, W. J. (2018). Efficient sparse-winograd convolutional neural networks. arXiv preprint arXiv:1802.06367.
- Guo, Y., Yao, A., & Chen, Y. (2016). Dynamic network surgery for efficient DNNs. arXiv preprint arXiv:1608.04493.
- Liu, J., Xu, Z., Shi, R., Cheung, R. C., & So, H. K. (2020). Dynamic sparse training: Find efficient sparse network from scratch with trainable masked layers. arXiv preprint arXiv:2005.06870.
- Hoefler, T., Alistarh, D., Ben-Nun, T., Dryden, N., & Peste, A. (2021). Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. arXiv preprint arXiv:2102.00554.
- Atashgahi, Z., Pechenizkiy, M., Veldhuis, R., & Mocanu, D. C. (2023). Adaptive sparsity level during training for efficient time series forecasting with transformers. arXiv preprint arXiv:2305.18382.
- Sokar, G., Mocanu, E., Mocanu, D. C., Pechenizkiy, M., & Stone, P. (2021). Dynamic sparse training for deep reinforcement learning. arXiv preprint arXiv:2106.04217.
- Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
- Huang, Q., Zhou, K., You, S., & Neumann, U. (2018). Learning to prune filters in convolutional neural networks. arXiv preprint arXiv:1801.07365.
- Zhang, C., Bengio, S., Hardt, M., Recht, B., & Vinyals, O. (2016). Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530.
- Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.