Nuo Xu
Verified email at m.fudan.edu.cn
Publications
A comprehensive capability analysis of GPT-3 and GPT-3.5 series models
J Ye, X Chen, N Xu, C Zu, Z Shao, S Liu, Y Cui, Z Zhou, C Gong, Y Shen, ...
arXiv preprint arXiv:2303.10420, 2023
Cited by: 188*
How robust is GPT-3.5 to predecessors? A comprehensive study on language understanding tasks
X Chen, J Ye, C Zu, N Xu, R Zheng, M Peng, J Zhou, T Gui, Q Zhang, ...
arXiv preprint arXiv:2303.00293, 2023
Cited by: 61*
Secrets of RLHF in large language models part I: PPO
R Zheng, S Dou, S Gao, Y Hua, W Shen, B Wang, Y Liu, S Jin, Q Liu, ...
arXiv preprint arXiv:2307.04964, 2023
Cited by: 53
Secrets of RLHF in large language models part II: Reward modeling
B Wang, R Zheng, L Chen, Y Liu, S Dou, C Huang, W Shen, S Jin, E Zhou, ...
arXiv preprint arXiv:2401.06080, 2024
Cited by: 27
LLM-DA: Data augmentation via large language models for few-shot named entity recognition
J Ye, N Xu, Y Wang, J Zhou, Q Zhang, T Gui, X Huang
arXiv preprint arXiv:2402.14568, 2024
Cited by: 3
Delve into PPO: Implementation matters for stable RLHF
R Zheng, S Dou, S Gao, Y Hua, W Shen, B Wang, Y Liu, S Jin, Y Zhou, ...
NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023
Cited by: 2
An exploration of prompt-based zero-shot relation extraction method
J Zhao, Y Hu, N Xu, T Gui, Q Zhang, Y Chen, X Gao
China National Conference on Chinese Computational Linguistics, 81-95, 2022
Cited by: 1
Advancing Translation Preference Modeling with RLHF: A Step Towards Cost-Effective Solution
N Xu, J Zhao, C Zu, W Qin, T Gui, Q Zhang, X Huang
arXiv preprint arXiv:2402.11525, 2024
Abstains from Prediction: Towards Robust Relation Extraction in Real World
J Zhao, Y Zhang, N Xu, T Gui, Q Zhang, Y Chen, X Gao
China National Conference on Chinese Computational Linguistics, 96-111, 2022