References
- L.A. Gatys et al., Texture and Art with Deep Neural Networks, Current Opinion in Neurobiology, 2017
- R. Geirhos et al., ImageNet-Trained CNNs Are Biased Towards Texture; Increasing Shape Bias Improves Accuracy and Robustness, ICLR, 2019
- J. Devlin et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, NAACL, 2019
- A. Dosovitskiy et al., Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks, NIPS, 2014
- C. Doersch et al., Unsupervised Visual Representation Learning by Context Prediction, ICCV, 2015
- M. Noroozi et al., Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles, ECCV, 2016
- S. Gidaris et al., Unsupervised Representation Learning by Predicting Image Rotations, ICLR, 2018
- R. Zhang et al., Colorful Image Colorization, ECCV, 2016
- D. Pathak et al., Context Encoders: Feature Learning by Inpainting, CVPR, 2016
- T. Chen et al., A Simple Framework for Contrastive Learning of Visual Representations, ICML, 2020
- Z. Wu et al., Unsupervised Feature Learning via Non-Parametric Instance Discrimination, CVPR, 2018
- I. Misra et al., Self-Supervised Learning of Pretext-Invariant Representations, CVPR, 2020
- K. He et al., Momentum Contrast for Unsupervised Visual Representation Learning, CVPR, 2020
- J. B. Grill et al., Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning, NeurIPS, 2020
- M. Caron et al., SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, NeurIPS, 2020
- X. Chen and K. He, Exploring Simple Siamese Representation Learning, CVPR, 2021
- S. Atito et al., SiT: Self-Supervised Vision Transformer, ArXiv, 2021
- M. Caron et al., Emerging Properties in Self-Supervised Vision Transformers, ICCV, 2021
- H. Bao et al., BEiT: BERT Pre-Training of Image Transformers, ICLR, 2022
- K. He et al., Masked Autoencoders Are Scalable Vision Learners, CVPR, 2022