MAML (Finn et al., 2017)

Learn a good weight initialization \(\color{blue}{\omega^*}\) on source tasks at meta-training time, which is then fine-tuned on the target task at meta-testing time. See §bayesian_meta_learning for more.
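
A minimal sketch of the resulting bi-level loop, in PyTorch (the toy regressor, step sizes, and the task sampler `sample_task()` are illustrative assumptions, not from the paper): the inner loop adapts a copy of the initialization to a task's support set, and the outer loop backpropagates the query-set loss through that adaptation back to the initialization.

```python
import torch
from torch.func import functional_call

# Toy regressor whose initialization omega* is meta-learned (architecture illustrative).
model = torch.nn.Sequential(
    torch.nn.Linear(1, 40), torch.nn.ReLU(), torch.nn.Linear(40, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, loss_fn = 0.01, torch.nn.MSELoss()

def inner_adapt(params, x_s, y_s):
    """One inner-loop gradient step on a task's support set."""
    loss = loss_fn(functional_call(model, params, (x_s,)), y_s)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    return {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

for step in range(1000):
    meta_opt.zero_grad()
    for _ in range(4):  # meta-batch of tasks
        x_s, y_s, x_q, y_q = sample_task()  # hypothetical task sampler
        adapted = inner_adapt(dict(model.named_parameters()), x_s, y_s)
        # Outer loss: query-set loss of the adapted parameters; its gradient
        # flows through the inner step back to the initialization.
        loss_fn(functional_call(model, adapted, (x_q,)), y_q).backward()
    meta_opt.step()
```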

Probabilistic extensions

Slides by Sangwoo Mo: Bayesian Model-Agnostic Meta-Learning

LLAMA (Recasting MAML as hierarchical Bayes) (Grant et al., 2018)

Interprets MAML's inner-loop adaptation as approximate MAP inference of task-specific parameters in a hierarchical Bayesian model; LLAMA then replaces the point estimate with a Laplace approximation to each task's marginal likelihood.
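
As a rough illustration of the Laplace step (a sketch under stated assumptions, not the paper's full procedure; in the paper the curvature is a Kronecker-factored approximation rather than an explicit matrix):

```python
import math
import torch

def laplace_log_evidence(nll_at_mode, H):
    """Laplace approximation to a task's log marginal likelihood.

    nll_at_mode: negative log joint at the adapted parameters phi*.
    H: (d, d) positive-definite curvature estimate (e.g. Fisher) at phi*.
    log p(D) ~= -nll(phi*) + (d/2) log(2*pi) - 0.5 * log det(H)
    """
    d = H.size(0)
    return -nll_at_mode + 0.5 * (d * math.log(2 * math.pi) - torch.logdet(H))
```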

PLATIPUS (Probabilistic Model-Agnostic Meta-Learning) (Finn et al., 2018)

Approximately infers a distribution over the pre-update parameters, made tractable by choosing an approximate posterior parameterized by gradient operations: the posterior mean is produced by a gradient step on the support set, so sampling amounts to a gradient step plus injected noise.
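
A minimal sketch of that sampling step (illustrative names; the full PLATIPUS algorithm also learns the noise scales and trains everything with a variational lower bound, omitted here):

```python
import torch

def sample_pre_update(mu, log_sigma, support_loss, inner_lr=0.01):
    """Draw theta ~ q(theta | support set): a Gaussian whose mean sits one
    gradient step from the meta-learned mean mu -- the "approximate
    posterior parameterized by gradient operations".

    mu, log_sigma: lists of tensors (mu requires grad);
    support_loss: callable mapping parameters to a scalar support-set loss.
    """
    grads = torch.autograd.grad(support_loss(mu), mu)
    mean = [m - inner_lr * g for m, g in zip(mu, grads)]
    return [m + torch.exp(ls) * torch.randn_like(m)
            for m, ls in zip(mean, log_sigma)]
```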

EMAML (Ensemble of MAML) (Yoon et al., 2018)

Train an ensemble of independent MAML models and aggregate their adapted predictions at meta-test time.
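
A minimal sketch under stated assumptions (same toy regressor as above; each member would first be meta-trained with its own MAML loop):

```python
import torch

def make_model():
    # Same toy architecture as the MAML sketch above (illustrative).
    return torch.nn.Sequential(
        torch.nn.Linear(1, 40), torch.nn.ReLU(), torch.nn.Linear(40, 1))

ensemble = [make_model() for _ in range(5)]  # M = 5 independent members
# ... meta-train each member with the MAML loop sketched above ...

def ensemble_predict(models, x_query):
    """Average the members' (post-adaptation) predictions on query inputs."""
    with torch.no_grad():
        return torch.stack([m(x_query) for m in models]).mean(dim=0)
```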

BMAML (Bayesian MAML) (Yoon et al., 2018)

Replaces the inner-loop gradient steps with Stein variational gradient descent (SVGD), so that a set of particles, rather than a single point estimate, approximates each task's posterior.
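
A minimal sketch of one SVGD update over flattened parameter particles (RBF kernel with a fixed bandwidth; SVGD implementations typically set the bandwidth by a median-distance heuristic, and BMAML applies this update as MAML's inner loop, both omitted here):

```python
import torch

def svgd_step(particles, log_prob, step_size=1e-2, bandwidth=1.0):
    """One SVGD update. particles: (M, D), one particle per row; log_prob
    maps an (M, D) batch to (M,) per-particle log densities.

    Each particle follows a kernel-smoothed gradient of log p (attraction
    toward high density) plus the kernel's own gradient (repulsion that
    keeps the particle set diverse).
    """
    x = particles.detach().requires_grad_(True)
    scores = torch.autograd.grad(log_prob(x).sum(), x)[0]          # (M, D)
    k = torch.exp(-torch.cdist(x, x) ** 2 / (2 * bandwidth ** 2))  # (M, M) RBF
    # Repulsive term: sum_j grad_{x_j} k(x_j, x_i), using the kernel's symmetry.
    grad_k = (k.sum(1, keepdim=True) * x - k @ x) / bandwidth ** 2
    phi = (k @ scores + grad_k) / x.size(0)
    return (x + step_size * phi).detach()
```

For example, iterating `svgd_step` with `log_prob = lambda x: -0.5 * (x ** 2).sum(-1)` drives the particles toward a standard normal; the `grad_k` term is what keeps them from collapsing onto a single mode.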

Bibliography

Finn, C., Abbeel, P., & Levine, S. (2017), Model-agnostic meta-learning for fast adaptation of deep networks, In International Conference on Machine Learning (ICML) (pp. 1126–1135). ↩

Grant, E., Finn, C., Levine, S., Darrell, T., & Griffiths, T. (2018), Recasting Gradient-Based Meta-Learning As Hierarchical Bayes, CoRR. ↩

Finn, C., Xu, K., & Levine, S. (2018), Probabilistic Model-Agnostic Meta-Learning, CoRR. ↩
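
Yoon, J., Kim, T., Dia, O., Kim, S., Bengio, Y., & Ahn, S. (2018), Bayesian Model-Agnostic Meta-Learning, In Advances in Neural Information Processing Systems (NeurIPS). ↩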