Shihan Dou
The rise and potential of large language model based agents: A survey
Z Xi, W Chen, X Guo, W He, Y Ding, B Hong, M Zhang, J Wang, S Jin, ...
arXiv preprint arXiv:2309.07864, 2023
VulCNN: An image-inspired scalable vulnerability detection system
Y Wu, D Zou, S Dou, W Yang, D Xu, H Jin
Proceedings of the 44th International Conference on Software Engineering …, 2022
Secrets of RLHF in Large Language Models Part I: PPO
R Zheng, S Dou, S Gao, Y Hua, W Shen, B Wang, Y Liu, S Jin, Q Liu, ...
arXiv preprint arXiv:2307.04964, 2023
SCDetector: Software functional clone detection based on semantic tokens analysis
Y Wu, D Zou, S Dou, S Yang, W Yang, F Cheng, H Liang, H Jin
Proceedings of the 35th IEEE/ACM international conference on automated …, 2020
IntDroid: Android malware detection based on API intimacy analysis
D Zou, Y Wu, S Yang, A Chauhan, W Yang, J Zhong, S Dou, H Jin
ACM Transactions on Software Engineering and Methodology (TOSEM) 30 (3), 1-32, 2021
Secrets of RLHF in Large Language Models Part II: Reward Modeling
B Wang, R Zheng, L Chen, Y Liu, S Dou, C Huang, W Shen, S Jin, E Zhou, ...
arXiv preprint arXiv:2401.06080, 2024
MINER: Improving out-of-vocabulary named entity recognition from an information theoretic perspective
X Wang, S Dou, L Xiong, Y Zou, Q Zhang, T Gui, L Qiao, Z Cheng, ...
arXiv preprint arXiv:2204.04391, 2022
LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment
S Dou, E Zhou, Y Liu, S Gao, J Zhao, W Shen, Y Zhou, Z Xi, X Wang, ...
arXiv preprint arXiv:2312.09979, 2023
Obfuscation-resilient android malware analysis based on contrastive learning
Y Wu, S Dou, D Zou, W Yang, W Qiang, H Jin
arXiv preprint arXiv:2107.03799, 2021
Towards understanding the capability of large language models on code clone detection: a survey
S Dou, J Shan, H Jia, W Deng, Z Xi, W He, Y Wu, T Gui, Y Liu, X Huang
arXiv preprint arXiv:2308.01191, 2023
Contrastive learning for robust android malware familial classification
Y Wu, S Dou, D Zou, W Yang, W Qiang, H Jin
IEEE Transactions on Dependable and Secure Computing, 2022
Loose lips sink ships: Mitigating length bias in reinforcement learning from human feedback
W Shen, R Zheng, W Zhan, J Zhao, S Dou, T Gui, Q Zhang, X Huang
arXiv preprint arXiv:2310.05199, 2023
CodeChameleon: Personalized encryption framework for jailbreaking large language models
H Lv, X Wang, Y Zhang, C Huang, S Dou, J Ye, T Gui, Q Zhang, X Huang
arXiv preprint arXiv:2402.16717, 2024
Kernel-whitening: Overcome dataset bias with isotropic sentence embedding
S Gao, S Dou, Q Zhang, X Huang
arXiv preprint arXiv:2210.07547, 2022
Decorrelate irrelevant, purify relevant: Overcome textual spurious correlations from a feature perspective
S Dou, R Zheng, T Wu, S Gao, J Shan, Q Zhang, Y Wu, X Huang
arXiv preprint arXiv:2202.08048, 2022
Open the Pandora's Box of LLMs: Jailbreaking LLMs through Representation Engineering
T Li, S Dou, W Liu, M Wu, C Lv, X Zheng, X Huang
arXiv preprint arXiv:2401.06824, 2024
Tailoring Personality Traits in Large Language Models via Unsupervisedly-Built Personalized Lexicons
T Li, S Dou, C Lv, W Liu, J Xu, M Wu, Z Ling, Z Xiaoqing, X Huang
arXiv preprint arXiv:2310.16582, 2024
EasyJailbreak: A Unified Framework for Jailbreaking Large Language Models
W Zhou, X Wang, L Xiong, H Xia, Y Gu, M Chai, F Zhu, C Huang, S Dou, ...
arXiv preprint arXiv:2403.12171, 2024
ToolEyes: Fine-grained evaluation for tool learning capabilities of large language models in real-world scenarios
J Ye, G Li, S Gao, C Huang, Y Wu, S Li, X Fan, S Dou, Q Zhang, T Gui, ...
arXiv preprint arXiv:2401.00741, 2024
Improving generalization of alignment with human preferences through group invariant learning
R Zheng, W Shen, Y Hua, W Lai, S Dou, Y Zhou, Z Xi, X Wang, H Huang, ...
arXiv preprint arXiv:2310.11971, 2023