Sinho Chewi
Title · Cited by · Year
Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions
S Chen, S Chewi, J Li, Y Li, A Salim, AR Zhang
International Conference on Learning Representations, 2023
Cited by 233 · 2023
Gradient descent algorithms for Bures-Wasserstein barycenters
S Chewi, T Maunu, P Rigollet, AJ Stromme
Conference on Learning Theory 125, 1276-1304, 2020
Cited by 107 · 2020
Analysis of Langevin Monte Carlo from Poincaré to log-Sobolev
S Chewi, MA Erdogdu, MB Li, R Shen, M Zhang
Conference on Learning Theory, 1-2, 2022
Cited by 102 · 2022
Optimal dimension dependence of the Metropolis-adjusted Langevin algorithm
S Chewi, C Lu, K Ahn, X Cheng, T Le Gouic, P Rigollet
Conference on Learning Theory, 1260-1300, 2021
Cited by 73 · 2021
SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence
S Chewi, T Le Gouic, C Lu, T Maunu, P Rigollet
Advances in Neural Information Processing Systems 33, 2098-2109, 2020
Cited by 73 · 2020
The probability flow ODE is provably fast
S Chen, S Chewi, H Lee, Y Li, J Lu, A Salim
Advances in Neural Information Processing Systems 36, 2024
Cited by 71 · 2024
Variational inference via Wasserstein gradient flows
M Lambert, S Chewi, F Bach, S Bonnabel, P Rigollet
Advances in Neural Information Processing Systems 35, 14434-14447, 2022
Cited by 69 · 2022
Towards a theory of non-log-concave sampling: first-order stationarity guarantees for Langevin Monte Carlo
K Balasubramanian, S Chewi, MA Erdogdu, A Salim, M Zhang
Conference on Learning Theory, 2896-2923, 2022
Cited by 66 · 2022
Efficient constrained sampling via the mirror-Langevin algorithm
K Ahn, S Chewi
Advances in Neural Information Processing Systems 34, 28405-28418, 2021
Cited by 60 · 2021
Log-concave sampling
S Chewi
Book draft available at https://chewisinho.github.io, 2023
Cited by 54* · 2023
Improved analysis for a proximal algorithm for sampling
Y Chen, S Chewi, A Salim, A Wibisono
Conference on Learning Theory, 2984-3014, 2022
Cited by 52 · 2022
Exponential ergodicity of mirror-Langevin diffusions
S Chewi, T Le Gouic, C Lu, T Maunu, P Rigollet, A Stromme
Advances in Neural Information Processing Systems 33, 19573-19585, 2020
Cited by 52 · 2020
Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent
J Altschuler, S Chewi, PR Gerber, A Stromme
Advances in Neural Information Processing Systems 34, 22132-22145, 2021
Cited by 47 · 2021
Dimension-free log-Sobolev inequalities for mixture distributions
HB Chen, S Chewi, J Niles-Weed
Journal of Functional Analysis 281 (11), 109236, 2021
Cited by 39 · 2021
Faster high-accuracy log-concave sampling via algorithmic warm starts
JM Altschuler, S Chewi
Journal of the ACM 71 (3), 1-55, 2024
Cited by 37 · 2024
Learning threshold neurons via the "edge of stability"
K Ahn, S Bubeck, S Chewi, YT Lee, F Suarez, Y Zhang
Advances in Neural Information Processing Systems 36, 2022
Cited by 35 · 2022
Fast and smooth interpolation on Wasserstein space
S Chewi, J Clancy, T Le Gouic, P Rigollet, G Stepaniants, A Stromme
International Conference on Artificial Intelligence and Statistics, 3061-3069, 2021
Cited by 30 · 2021
Forward-backward Gaussian variational inference via JKO in the Bures-Wasserstein Space
MZ Diao, K Balasubramanian, S Chewi, A Salim
International Conference on Machine Learning, 7960-7991, 2023
Cited by 28 · 2023
Improved discretization analysis for underdamped Langevin Monte Carlo
S Zhang, S Chewi, M Li, K Balasubramanian, MA Erdogdu
Conference on Learning Theory, 36-71, 2023
Cited by 27 · 2023
An entropic generalization of Caffarelli’s contraction theorem via covariance inequalities
S Chewi, AA Pooladian
Comptes Rendus. Mathématique 361 (G9), 1471-1482, 2023
Cited by 27 · 2023
Articles 1–20