Self-training with Noisy Student improves ImageNet classification
Idea: use a teacher to pseudo-label unlabeled images, train a noised student no smaller than the teacher on them, then make the student the new teacher and repeat.
Algorithm: while not converged, generate pseudo-labels with the teacher, train a noised student on labeled plus pseudo-labeled data, and promote the student to teacher.
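A minimal sketch of this loop; `train`, `pseudo_label`, and `build_student` are hypothetical helpers, not the paper's code:

```python
# Sketch of the Noisy Student loop; train, pseudo_label, and
# build_student are hypothetical helpers, not the authors' code.
def noisy_student(labeled, unlabeled, iterations=3):
    # Teacher is trained on labeled data only, without noise.
    teacher = train(build_student(capacity=1.0), labeled, noised=False)
    for _ in range(iterations):
        # Teacher assigns (soft or hard) pseudo-labels to unlabeled images.
        pseudo = pseudo_label(teacher, unlabeled)
        # Student is no smaller than the teacher and trained with noise
        # (strong augmentation, dropout, stochastic depth).
        student = train(build_student(capacity=1.5),
                        labeled + pseudo, noised=True)
        teacher = student  # the student becomes the next teacher
    return teacher
```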
Embedding Expansion: Augmentation in Embedding Space for Deep Metric Learning
Idea: Proposes an augmentation performed directly in embedding space and combines the synthetic points with pair-based metric learning losses.
Motivation: To generate hard synthetic samples from easy samples without using a GAN.
Related Work: query expansion and database augmentation.
Method: linearly interpolate internal points between two same-class embeddings, then mine the hardest synthetic point as the negative pair for the loss (sketch below).
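A minimal sketch of the interpolate-and-mine step, assuming L2-normalized embeddings; the function names are illustrative, not from the authors' code:

```python
import torch
import torch.nn.functional as F

def internal_points(e1, e2, n=8):
    """Synthetic points on the segment between two same-class embeddings.

    e1, e2: (d,) L2-normalized embeddings. Returns (n + 1, d) interpolations,
    re-normalized onto the unit hypersphere.
    """
    lam = torch.linspace(0.0, 1.0, n + 1).unsqueeze(1)  # (n + 1, 1)
    points = lam * e1 + (1.0 - lam) * e2                # linear interpolation
    return F.normalize(points, dim=1)

def hardest_negative(anchor, neg1, neg2, n=8):
    """Pick the synthetic negative closest to the anchor (hardest pair)."""
    candidates = internal_points(neg1, neg2, n)         # (n + 1, d)
    sims = candidates @ anchor                          # cosine similarities
    return candidates[sims.argmax()]
```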
Momentum Contrast for Unsupervised Visual Representation Learning
Motivation: Close the gap between unsupervised learning and supervised learning.
Idea: Reformulate contrastive matching as dictionary look-up
Method
- Loss function: InfoNCE, a softmax cross-entropy that maximizes the probability of the single positive key against the queued negatives
- Implement the dictionary as a queue and maintain a momentum update on the key encoder (sketch after this list)
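A minimal sketch of both pieces, assuming L2-normalized features and a precomputed negative queue; names and hyperparameter values are illustrative, not MoCo's official code:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    # theta_k <- m * theta_k + (1 - m) * theta_q: the key encoder trails
    # the query encoder slowly, keeping queued keys consistent.
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

def moco_loss(q, k_pos, queue, tau=0.07):
    """InfoNCE over one positive key and K queued negatives.

    q: (N, d) queries; k_pos: (N, d) positive keys; queue: (d, K) negatives.
    All inputs assumed L2-normalized. After each step the new keys are
    enqueued and the oldest keys dequeued (not shown).
    """
    l_pos = torch.einsum("nd,nd->n", q, k_pos).unsqueeze(1)  # (N, 1)
    l_neg = q @ queue                                        # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long)        # positive at index 0
    return F.cross_entropy(logits, labels)
```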
A Simple Framework for Contrastive Learning of Visual Representations
Method
- Data augmentation plays a crucial role
- A nonlinear projection head between the representation and the contrastive loss is crucial (see the sketch after this list)
- Needs larger batch sizes and more training steps than supervised learning
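A minimal sketch of the projection head and the NT-Xent loss, assuming two augmented views per image; the class and function names and the dimensions are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Nonlinear MLP between the encoder output and the contrastive loss."""
    def __init__(self, dim=2048, hidden=2048, out=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, out))

    def forward(self, h):
        return self.net(h)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent over a batch of N positive pairs (2N augmented views).

    z1, z2: (N, d) projections of two views of the same images.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d)
    sim = z @ z.t() / tau                               # pairwise similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
    n = z1.size(0)
    # Row i's positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```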
Learning Diverse Fashion Collocation by Neural Graph Filtering
Motivation: To meet the compatibility, diversity, and flexibility requirements of fashion collocation.
Highlights:
1. Edge-centric graph operations with a permutation-invariant symmetric aggregation function
2. Use focal loss to handle class imbalance (a minimal sketch follows this list).
3. New dataset for style classification.
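Focal loss itself is standard (Lin et al., 2017); a minimal binary sketch, with `gamma` focusing training on hard examples and `alpha` balancing classes:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    logits: (N,) raw scores; targets: (N,) float labels in {0, 1}.
    The (1 - p_t)**gamma factor down-weights easy examples so training
    focuses on hard, minority-class ones.
    """
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1.0 - p) * (1.0 - targets)            # prob of true class
    alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()
```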