MAML (Finn et al., 2017) 🔗
Learn a good weight initialization \(\color{blue}{\omega^*}\) on source tasks at meta-training time, such that fine-tuning from it on target tasks at meta-testing time takes only a few gradient steps. See §bayesian_meta_learning for more.
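A minimal sketch of the meta-training loop on toy 1-D regression tasks (PyTorch; every name here — `predict`, `inner_adapt`, the task distribution — is illustrative, not from the paper):

```python
import torch

# Toy task family: 1-D linear regression, y = a*x + b.
def predict(params, x):
    w, b = params
    return x * w + b

def loss_fn(params, x, y):
    return ((predict(params, x) - y) ** 2).mean()

def inner_adapt(params, x, y, inner_lr=0.1, steps=1):
    """Task adaptation: a few gradient steps from the shared initialization."""
    adapted = list(params)
    for _ in range(steps):
        grads = torch.autograd.grad(loss_fn(adapted, x, y), adapted, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(adapted, grads)]
    return adapted

# Meta-training: learn the initialization omega.
omega = [torch.zeros(1, requires_grad=True), torch.zeros(1, requires_grad=True)]
meta_opt = torch.optim.Adam(omega, lr=1e-2)

for _ in range(1000):
    meta_opt.zero_grad()
    meta_loss = torch.zeros(1)
    for _ in range(4):                                   # a batch of sampled tasks
        a, b = torch.randn(1), torch.randn(1)
        x_s, x_q = torch.randn(10, 1), torch.randn(10, 1)
        y_s, y_q = a * x_s + b, a * x_q + b
        adapted = inner_adapt(omega, x_s, y_s)           # adapt on the support set
        meta_loss = meta_loss + loss_fn(adapted, x_q, y_q)  # evaluate on the query set
    meta_loss.backward()   # second-order gradients flow back through inner_adapt
    meta_opt.step()
```

The `create_graph=True` in the inner loop is what lets the meta-gradient flow back through the adaptation step — the second-order term that distinguishes MAML from joint training.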
Probabilistic extensions 🔗
Slides by Sangwoo Mo: Bayesian Model-Agnostic Meta-Learning
LLAMA (Recasting MAML as hierarchical Bayes) (Grant et al., 2018) 🔗
- Reframed MAML as MAP inference over task-specific parameters in a hierarchical Bayesian model (HBM).
- Showed that MAML learns meta-parameters \(\omega\) such that, at test time, the inner-loop adaptation performs MAP inference under a Gaussian prior centered at the learned meta-parameters (spelled out below).
- Used a local Laplace approximation to model the task parameters (the post-update parameters), which requires approximating a high-dimensional covariance matrix.
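Concretely, with \(\phi\) the task-specific (post-update) parameters and \(\omega\) the meta-learned initialization, the claim (in my notation, following the paper's setup) is that fast adaptation solves

\[
\phi^{\mathrm{MAP}} = \arg\max_{\phi}\; \big[\log p(\mathcal{D} \mid \phi) + \log \mathcal{N}(\phi;\, \omega,\, \Sigma)\big],
\]

i.e. the Gaussian prior regularizes \(\phi\) toward \(\omega\). For linear models, truncated gradient descent started at \(\omega\) recovers exactly such a MAP estimate, with \(\Sigma\) determined by the inner learning rate and number of steps.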
PLATIPUS (Probabilistic Model-Agnostic Meta-Learning) (Finn et al., 2018) 🔗
Approximately infers a distribution over the pre-update parameters; inference is made tractable by choosing an approximate posterior whose mean is parameterized by gradient operations (a gradient step on the meta-learned mean), as sketched below.
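A rough sketch of that posterior choice, reusing `loss_fn` from the MAML sketch above; the function name and the diagonal-Gaussian form are my own shorthand for a minimal version:

```python
import torch

def sample_pre_update(mu, log_sigma, x_s, y_s, gamma=0.1):
    """Sample pre-update parameters theta ~ q(theta | support set).

    The posterior mean is itself a gradient step on the meta-learned mean
    mu (a list of tensors with requires_grad=True), which is what
    "parameterized by gradient operations" refers to.
    """
    loss = loss_fn(mu, x_s, y_s)  # support-set loss at the meta-learned mean
    grads = torch.autograd.grad(loss, mu, create_graph=True)
    post_mu = [m - gamma * g for m, g in zip(mu, grads)]
    # Reparameterized draw from N(post_mu, exp(log_sigma)^2).
    return [m + torch.exp(ls) * torch.randn_like(m)
            for m, ls in zip(post_mu, log_sigma)]
```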
EMAML (Ensemble of MAML) (Yoon et al., 2018) 🔗
Train an ensemble of independently initialized MAML models and combine their predictions (see the sketch below).
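A minimal test-time sketch, reusing `predict` and `inner_adapt` from the MAML sketch above; `emaml_predict` and the averaging rule are illustrative assumptions:

```python
import torch

def emaml_predict(inits, x_s, y_s, x_q):
    """Adapt each independently meta-trained init on the support set,
    then average the ensemble members' query-set predictions."""
    preds = [predict(inner_adapt(omega_m, x_s, y_s), x_q) for omega_m in inits]
    return torch.stack(preds).mean(dim=0)
```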
BMAML (Bayesian MAML) (Yoon et al., 2018) 🔗
Replaces MAML's deterministic inner-loop gradient step with Stein variational gradient descent (SVGD), adapting a set of parameter particles that together approximate the task posterior; one SVGD update is sketched below.
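A self-contained sketch of one SVGD update with an RBF kernel — the operation BMAML applies to a set of task-parameter particles in place of MAML's single gradient step; the toy target and all names are mine:

```python
import torch

def rbf_kernel(x, bandwidth=1.0):
    # Pairwise RBF kernel k[j, i] = exp(-||x_j - x_i||^2 / (2 h^2)), plus the
    # repulsive term sum_j grad_{x_j} k(x_j, x_i). (BMAML uses a median
    # heuristic for the bandwidth; a fixed one keeps the sketch short.)
    diffs = x.unsqueeze(1) - x.unsqueeze(0)              # diffs[j, i] = x_j - x_i
    k = torch.exp(-(diffs ** 2).sum(-1) / (2 * bandwidth ** 2))
    grad_k = (-(k.unsqueeze(-1) * diffs) / bandwidth ** 2).sum(dim=0)
    return k, grad_k

def svgd_step(particles, log_prob, step_size=0.1):
    """One SVGD update: particles climb log_prob while the kernel term
    pushes them apart, so the set approximates the target posterior."""
    particles = particles.detach().requires_grad_(True)
    score = torch.autograd.grad(log_prob(particles).sum(), particles)[0]
    k, grad_k = rbf_kernel(particles.detach())
    phi = (k @ score + grad_k) / particles.shape[0]
    return (particles + step_size * phi).detach()

# Toy usage: ten 2-D particles drift toward a standard normal, staying spread out.
particles = 3 * torch.randn(10, 2)
log_p = lambda x: -0.5 * (x ** 2).sum(-1)   # unnormalized N(0, I) log-density
for _ in range(200):
    particles = svgd_step(particles, log_p)
```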
Bibliography
Finn, C., Abbeel, P., & Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning (ICML) (pp. 1126–1135). ↩
Grant, E., Finn, C., Levine, S., Darrell, T., & Griffiths, T. (2018). Recasting gradient-based meta-learning as hierarchical Bayes. CoRR. ↩
Finn, C., Xu, K., & Levine, S. (2018). Probabilistic model-agnostic meta-learning. CoRR. ↩
Yoon, J., Kim, T., Dia, O., Kim, S., Bengio, Y., & Ahn, S. (2018). Bayesian model-agnostic meta-learning. In Advances in Neural Information Processing Systems (NeurIPS).