References
- A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in 26th Annual Conf. Neural Information Process. Sys. (NIPS) 2012, Stateline, Nevada, USA, Dec. 3-8, 2012, pp. 1106-1114.
- C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proc. 2015 IEEE Conf. Comput. Vision Pattern Recogn. (CVPR), Boston, USA, June 7-12, 2015, pp. 1-9.
- K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. 3rd Int. Conf. Learning Represent. (ICLR), San Diego, USA, May 7-9, 2015, pp. 1-14.
- K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. 2016 IEEE Conf. Comput. Vision Pattern Recogn. (CVPR), Las Vegas, USA, Jun. 26-Jul. 1, 2016, pp. 770-778.
- G. Huang, Z. Liu, L. van der Maaten, and K. Weinberger, "Densely connected convolutional networks," in Proc. 2017 IEEE Conf. Comput. Vision Pattern Recogn. (CVPR), Honolulu, USA, Jul. 21-26, 2017, pp. 2261-2269.
- ImageNet, Large Scale Visual Recognition Challenge (ILSVRC), http://www.image-net.org/challenges/LSVRC/
- O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet large scale visual recognition challenge," Int. J. Comput. Vis. (IJCV), vol. 115, no. 3, pp. 211-252, 2015. https://doi.org/10.1007/s11263-015-0816-y
- G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531, pp. 1-19, 2015.
- J. Yim, D. Joo, J.-H. Bae, and J. Kim, "A gift from knowledge distillation: Fast optimization, network minimization, and transfer learning," in Proc. 2017 IEEE Conf. Comput. Vision Pattern Recogn. (CVPR), Honolulu, USA, Jul. 21-26, 2017, pp. 7130-7138.
- A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, "Fitnets: Hints for thin deep nets," in Proc. 3rd Int. Conf. Learning Represent. (ICLR), San Diego, USA, May 7-9, 2015, pp. 1-13.
- J.-H. Bae, D. Yeo, J. Yim, N.-S. Kim, C.-S. Pyo, and J. Kim, "Densely distilled flow-based knowledge transfer in teacher-student framework for image classification," IEEE Trans. Image Process., vol. 29, pp. 5698-5710, 2020. https://doi.org/10.1109/tip.2020.2984362
- K. Kim and J.-H. Bae, "Important parameter optimized flow-based transfer learning technique supporting heterogeneous teacher network based on deep learning," Journal of KIIT, vol. 18, no. 3, pp. 21-29, 2020.
- I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., "Generative adversarial nets," in Advances in Neural Information Process. Sys. (NIPS), Montreal, Canada, Dec. 2014, pp. 2672-2680.
- D. Yeo and J.-H. Bae, "Multiple flow-based knowledge transfer via adversarial networks," Electronics Letters, vol. 55, no. 18, pp. 989-992, Sept. 2019.
- S. Lee and B.-C. Song, "Knowledge transfer via decomposing essential information in convolutional neural networks," IEEE Trans. Neural Netw. Learn. Syst., pp. 1-12, 2020.