WizardLM: Empowering large language models to follow complex instructions. C Xu, Q Sun, K Zheng, X Geng, P Zhao, J Feng, C Tao, D Jiang. arXiv preprint arXiv:2304.12244, 2023. Cited by 647.
WizardCoder: Empowering code large language models with Evol-Instruct. Z Luo, C Xu, P Zhao, Q Sun, X Geng, W Hu, C Tao, J Ma, Q Lin, D Jiang. arXiv preprint arXiv:2306.08568, 2023. Cited by 479.
Feature selection for ranking. X Geng, TY Liu, T Qin, H Li. Proceedings of the 30th annual international ACM SIGIR conference on …, 2007. Cited by 347.
WizardMath: Empowering mathematical reasoning for large language models via Reinforced Evol-Instruct. H Luo, Q Sun, C Xu, P Zhao, J Lou, C Tao, X Geng, Q Lin, S Chen, et al. arXiv preprint arXiv:2308.09583, 2023. Cited by 285.
On the robustness of ChatGPT: An adversarial and out-of-distribution perspective. J Wang, X Hu, W Hou, H Chen, R Zheng, Y Wang, L Yang, H Huang, et al. arXiv preprint arXiv:2302.12095, 2023. Cited by 220.
Query dependent ranking using k-nearest neighbor. X Geng, TY Liu, T Qin, A Arnold, H Li, HY Shum. Proceedings of the 31st annual international ACM SIGIR conference on …, 2008. Cited by 215.
A new probabilistic model for rank aggregation. T Qin, X Geng, TY Liu. Advances in Neural Information Processing Systems 23, 2010. Cited by 104.
Multi-task learning for conversational question answering over a large-scale knowledge base. T Shen, X Geng, T Qin, D Guo, D Tang, N Duan, G Long, D Jiang. arXiv preprint arXiv:1910.05069, 2019. Cited by 99.
WizardLM: Empowering large pre-trained language models to follow complex instructions. C Xu, Q Sun, K Zheng, X Geng, P Zhao, J Feng, C Tao, Q Lin, D Jiang. The Twelfth International Conference on Learning Representations, 2024. Cited by 87.
PromDA: Prompt-based data augmentation for low-resource NLU tasks. Y Wang, C Xu, Q Sun, H Hu, C Tao, X Geng, D Jiang. arXiv preprint arXiv:2202.12499, 2022. Cited by 87.
Improving zero-shot cross-lingual transfer for multilingual question answering over knowledge graph. Y Zhou, X Geng, T Shen, W Zhang, D Jiang. Proceedings of the 2021 Conference of the North American Chapter of the …, 2021. Cited by 72.
ClarET: Pre-training a correlation-aware context-to-event transformer for event-centric generation and classification. Y Zhou, T Shen, X Geng, G Long, D Jiang. arXiv preprint arXiv:2203.02225, 2022. Cited by 60.
EventBERT: A pre-trained model for event correlation reasoning. Y Zhou, X Geng, T Shen, G Long, D Jiang. Proceedings of the ACM Web Conference 2022, 850-859, 2022. Cited by 53.
MPC-BERT: A pre-trained language model for multi-party conversation understanding. JC Gu, C Tao, ZH Ling, C Xu, X Geng, D Jiang. arXiv preprint arXiv:2106.01541, 2021. Cited by 50.
Multimodal dialogue response generation. Q Sun, Y Wang, C Xu, K Zheng, Y Yang, H Hu, F Xu, J Zhang, X Geng, et al. arXiv preprint arXiv:2110.08515, 2021. Cited by 47.
Learning neural templates for recommender dialogue system. Z Liang, H Hu, C Xu, J Miao, Y He, Y Chen, X Geng, F Liang, D Jiang. arXiv preprint arXiv:2109.12302, 2021. Cited by 47.
Modeling event-pair relations in external knowledge graphs for script reasoning. Y Zhou, X Geng, T Shen, J Pei, W Zhang, D Jiang. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2021. Cited by 43.
DC-BERT: Decoupling question and document for efficient contextual encoding. P Nie, Y Zhang, X Geng, A Ramamurthy, L Song, D Jiang. Proceedings of the 43rd international ACM SIGIR conference on research and …, 2020. Cited by 40.
Augmented large language models with parametric knowledge guiding. Z Luo, C Xu, P Zhao, X Geng, C Tao, J Ma, Q Lin, D Jiang. arXiv preprint arXiv:2305.04757, 2023. Cited by 37.
Large language models are strong zero-shot retriever. T Shen, G Long, X Geng, C Tao, T Zhou, D Jiang. arXiv preprint arXiv:2304.14233, 2023. Cited by 37.