In recent years, machine learning, and in particular deep learning, has undergone tremendous growth, much of it driven by advances that are computational in nature, including software and hardware infrastructures that support increasingly complex models and enable the use of ever greater compute power. As a result, the field is becoming more computational in nature. In this talk I would like to highlight the continuing importance of statistical thinking in deep learning, drawing on examples from my research blending probabilistic modelling, Bayesian nonparametrics and deep learning. In particular, I will talk about neural processes, which use neural networks to parameterise and learn flexible stochastic processes and apply them to meta-learning (also known as learning to learn), and about the use of probabilistic symmetries in answering recent questions about which neural network architecture choices satisfy certain invariance properties.

(Galashov et al., 2019)
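The abstract only names neural processes in passing, so here is a minimal sketch of the idea: an encoder maps each observed context pair to a representation, a permutation-invariant aggregation (here a mean) pools them, and a decoder predicts a distribution over targets. This is a hedged illustration assuming PyTorch; the class name, layer sizes, and output parameterisation are illustrative choices, not the implementation from the talk or the cited paper.

```python
# Minimal sketch of a conditional-neural-process-style forward pass (assumes PyTorch).
# All names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeuralProcessSketch(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=64, h_dim=64):
        super().__init__()
        # Encoder: maps each context pair (x_i, y_i) to a representation r_i.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, r_dim),
        )
        # Decoder: maps the pooled representation plus a target input x* to a
        # predictive mean and scale for y*.
        self.decoder = nn.Sequential(
            nn.Linear(r_dim + x_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, 2 * y_dim),
        )

    def forward(self, x_context, y_context, x_target):
        # Encode the context set and pool with a mean, which makes the model
        # invariant to the ordering of context points (a permutation symmetry).
        r_i = self.encoder(torch.cat([x_context, y_context], dim=-1))
        r = r_i.mean(dim=0, keepdim=True).expand(x_target.shape[0], -1)
        out = self.decoder(torch.cat([r, x_target], dim=-1))
        mean, log_scale = out.chunk(2, dim=-1)
        sigma = 0.1 + 0.9 * F.softplus(log_scale)  # keep the scale positive
        return mean, sigma


# Usage: condition on 10 observed points of a 1-D function, predict at 50 targets.
model = NeuralProcessSketch()
xc, yc = torch.randn(10, 1), torch.randn(10, 1)
xt = torch.linspace(-2, 2, 50).unsqueeze(-1)
mu, sigma = model(xc, yc, xt)
print(mu.shape, sigma.shape)  # torch.Size([50, 1]) torch.Size([50, 1])
```

The mean aggregation is one concrete example of the kind of invariance-respecting architecture choice the abstract alludes to: any permutation of the context points yields the same pooled representation and hence the same predictions.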

Resources

Bibliography

Galashov, A., Schwarz, J., Kim, H., Garnelo, M., Saxton, D., Kohli, P., Eslami, S. M. A., … (2019). Meta-learning surrogate models for sequential decision making. CoRR.