Runa Eschenhagen
Practical deep learning with Bayesian principles
K Osawa, S Swaroop, A Jain, R Eschenhagen, RE Turner, R Yokota, ...
NeurIPS 2019, 2019
Laplace Redux – Effortless Bayesian Deep Learning
E Daxberger*, A Kristiadi*, A Immer*, R Eschenhagen*, M Bauer, ...
NeurIPS 2021, 2021
Continual deep learning by functional regularisation of memorable past
P Pan, S Swaroop, A Immer, R Eschenhagen, RE Turner, ME Khan
NeurIPS 2020, 2020
Mixtures of Laplace Approximations for Improved Post-Hoc Uncertainty in Deep Learning
R Eschenhagen, E Daxberger, P Hennig, A Kristiadi
Bayesian Deep Learning Workshop, NeurIPS 2021, 2021
Benchmarking neural network training algorithms
GE Dahl, F Schneider, Z Nado, N Agarwal, CS Sastry, P Hennig, ...
arXiv preprint arXiv:2306.07179, 2023
Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks
A Kristiadi, R Eschenhagen, P Hennig
NeurIPS 2022, 2022
Kronecker-Factored Approximate Curvature for Modern Neural Network Architectures
R Eschenhagen, A Immer, RE Turner, F Schneider, P Hennig
NeurIPS 2023, 2023
Approximate Bayesian neural operators: Uncertainty quantification for parametric PDEs
E Magnani, N Krämer, R Eschenhagen, L Rosasco, P Hennig
arXiv preprint arXiv:2208.01565, 2022
Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization
A Kristiadi, A Immer, R Eschenhagen, V Fortuin
AABI 2023, 2023
Structured Inverse-Free Natural Gradient: Memory-Efficient & Numerically-Stable KFAC for Large Neural Nets
W Lin, F Dangel, R Eschenhagen, K Neklyudov, A Kristiadi, RE Turner, ...
ICML 2024, 2024
Natural Gradient Variational Inference for Continual Learning in Deep Neural Networks
R Eschenhagen
University of Osnabrück, 2019
Can We Remove the Square-Root in Adaptive Gradient Methods? A Second-Order Perspective
W Lin, F Dangel, R Eschenhagen, J Bae, RE Turner, A Makhzani
ICML 2024, 2024