Mingxuan Wang
ByteDance LLM Team
Verified email at bytedance.com
Title · Cited by · Year
On the sentence embeddings from pre-trained language models
B Li, H Zhou, J He, M Wang, Y Yang, L Li
arXiv preprint arXiv:2011.05864, 2020
Cited by 676 · 2020
Deep semantic role labeling with self-attention
Z Tan, M Wang, J Xie, Y Chen, X Shi
Proceedings of the AAAI conference on artificial intelligence 32 (1), 2018
Cited by 398 · 2018
Contrastive learning for many-to-many multilingual neural machine translation
X Pan, M Wang, L Wu, L Li
arXiv preprint arXiv:2105.09501, 2021
Cited by 191 · 2021
Towards Making the Most of BERT in Neural Machine Translation
J Yang, M Wang, H Zhou, C Zhao, W Zhang, Y Yu, L Li
Cited by 171* · 2020
Glancing transformer for non-autoregressive neural machine translation
L Qian, H Zhou, Y Bao, M Wang, L Qiu, W Zhang, Y Yu, L Li
arXiv preprint arXiv:2008.07905, 2020
Cited by 152 · 2020
Encoding source language with convolutional neural network for machine translation
F Meng, Z Lu, M Wang, H Li, W Jiang, Q Liu
arXiv preprint arXiv:1503.01838, 2015
Cited by 149 · 2015
Pre-training multilingual neural machine translation by leveraging alignment information
Z Lin, X Pan, M Wang, X Qiu, J Feng, H Zhou, L Li
arXiv preprint arXiv:2010.03142, 2020
Cited by 128 · 2020
Syntax-based deep matching of short texts
M Wang, Z Lu, H Li, Q Liu
arXiv preprint arXiv:1503.02427, 2015
Cited by 101 · 2015
A hierarchy-to-sequence attentional neural machine translation model
J Su, J Zeng, D Xiong, Y Liu, M Wang, J Xie
IEEE/ACM Transactions on Audio, Speech, and Language Processing 26 (3), 623-632, 2018
Cited by 100 · 2018
Imitation learning for non-autoregressive neural machine translation
B Wei, M Wang, H Zhou, J Lin, J Xie, X Sun
arXiv preprint arXiv:1906.02041, 2019
Cited by 99 · 2019
STEMM: Self-learning with speech-text manifold mixup for speech translation
Q Fang, R Ye, L Li, Y Feng, M Wang
arXiv preprint arXiv:2203.10426, 2022
Cited by 92 · 2022
Learning language specific sub-network for multilingual machine translation
Z Lin, L Wu, M Wang, L Li
arXiv preprint arXiv:2105.09259, 2021
Cited by 87 · 2021
Memory-enhanced decoder for neural machine translation
M Wang, Z Lu, H Li, Q Liu
arXiv preprint arXiv:1606.02003, 2016
Cited by 80 · 2016
Cross-modal contrastive learning for speech translation
R Ye, M Wang, L Li
arXiv preprint arXiv:2205.02444, 2022
Cited by 77 · 2022
Learning shared semantic space for speech-to-text translation
C Han, M Wang, H Ji, L Li
arXiv preprint arXiv:2105.03095, 2021
Cited by 77 · 2021
End-to-end speech translation via cross-modal progressive training
R Ye, M Wang, L Li
arXiv preprint arXiv:2104.10380, 2021
Cited by 75 · 2021
Rethinking document-level neural machine translation
Z Sun, M Wang, H Zhou, C Zhao, S Huang, J Chen, L Li
arXiv preprint arXiv:2010.08961, 2020
Cited by 66 · 2020
LightSeq: A high performance inference library for transformers
X Wang, Y Xiong, Y Wei, M Wang, L Li
arXiv preprint arXiv:2010.13887, 2020
Cited by 64 · 2020
Listen, understand and translate: Triple supervision decouples end-to-end speech-to-text translation
Q Dong, R Ye, M Wang, H Zhou, S Xu, B Xu, L Li
Proceedings of the AAAI Conference on Artificial Intelligence 35 (14), 12749 …, 2021
Cited by 61 · 2021
Deep neural machine translation with linear associative unit
M Wang, Z Lu, J Zhou, Q Liu
arXiv preprint arXiv:1705.00861, 2017
Cited by 54 · 2017
Articles 1–20