Sergey Edunov
Facebook AI Research
Llama 2: Open foundation and fine-tuned chat models
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
fairseq: A fast, extensible toolkit for sequence modeling
M Ott, S Edunov, A Baevski, A Fan, S Gross, N Ng, D Grangier, M Auli
arXiv preprint arXiv:1904.01038, 2019
Dense passage retrieval for open-domain question answering
V Karpukhin, B Oğuz, S Min, P Lewis, L Wu, S Edunov, D Chen, W Yih
arXiv preprint arXiv:2004.04906, 2020
Multilingual denoising pre-training for neural machine translation
Y Liu, J Gu, N Goyal, X Li, S Edunov, M Ghazvininejad, M Lewis, ...
Transactions of the Association for Computational Linguistics 8, 726-742, 2020
Understanding back-translation at scale
S Edunov, M Ott, M Auli, D Grangier
arXiv preprint arXiv:1808.09381, 2018
Beyond English-centric multilingual machine translation
A Fan, S Bhosale, H Schwenk, Z Ma, A El-Kishky, S Goyal, M Baines, ...
Journal of Machine Learning Research 22 (107), 1-48, 2021
Scaling Neural Machine Translation
M Ott, S Edunov, D Grangier, M Auli
arXiv preprint arXiv:1806.00187, 2018
One trillion edges: Graph processing at facebook-scale
A Ching, S Edunov, M Kabiljo, D Logothetis, S Muthukrishnan
Proceedings of the VLDB Endowment 8 (12), 1804-1815, 2015
No Language Left Behind: Scaling Human-Centered Machine Translation
NLLB Team, MR Costa-jussà, J Cross, O Çelebi, M Elbayad, K Heafield, ...
arXiv preprint arXiv:2207.04672, 2022
Facebook FAIR's WMT19 news translation task submission
N Ng, K Yee, A Baevski, M Ott, M Auli, S Edunov
arXiv preprint arXiv:1907.06616, 2019
Cloze-driven pretraining of self-attention networks
A Baevski, S Edunov, Y Liu, L Zettlemoyer, M Auli
arXiv preprint arXiv:1903.07785, 2019
CCMatrix: Mining billions of high-quality parallel sentences on the web
H Schwenk, G Wenzek, S Edunov, E Grave, A Joulin
arXiv preprint arXiv:1911.04944, 2019
Classical structured prediction losses for sequence to sequence learning
S Edunov, M Ott, M Auli, D Grangier, MA Ranzato
arXiv preprint arXiv:1711.04956, 2017
Pre-trained language model representations for language generation
S Edunov, A Baevski, M Auli
arXiv preprint arXiv:1903.09722, 2019
Playing the lottery with rewards and multiple languages: lottery tickets in rl and nlp
H Yu, S Edunov, Y Tian, AS Morcos
arXiv preprint arXiv:1906.02768, 2019
On the evaluation of machine translation systems trained with back-translation
S Edunov, M Ott, MA Ranzato, M Auli
arXiv preprint arXiv:1908.05204, 2019
Facebook AI WMT21 news translation task submission
C Tran, S Bhosale, J Cross, P Koehn, S Edunov, A Fan
arXiv preprint arXiv:2108.03265, 2021
Three and a half degrees of separation
S Edunov, C Diuk, IO Filiz, S Bhagat, M Burke
Research at Facebook 694, 2016
Effective long-context scaling of foundation models
W Xiong, J Liu, I Molybog, H Zhang, P Bhargava, R Hou, L Martin, ...
arXiv preprint arXiv:2309.16039, 2023