Yi Tay
Research Scientist, Google Brain
Verified email at google.com - Homepage
Title · Cited by · Year
PaLM: Scaling language modeling with pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
Journal of Machine Learning Research 24 (240), 1-113, 2023
4710 · 2023
Deep learning based recommender system: A survey and new perspectives
S Zhang, L Yao, A Sun, Y Tay
ACM Computing Surveys, 2017
3542 · 2017
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
Journal of Machine Learning Research 25 (70), 1-53, 2024
2662 · 2024
Emergent abilities of large language models
J Wei, Y Tay, R Bommasani, C Raffel, B Zoph, S Borgeaud, D Yogatama, ...
Transactions on Machine Learning Research (TMLR), 2022
2646* · 2022
PaLM 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
1312 · 2023
Efficient Transformers: A Survey
Y Tay, M Dehghani, D Bahri, D Metzler
ACM Computing Surveys, 2022
1285* · 2022
Long Range Arena: A Benchmark for Efficient Transformers
Y Tay*, M Dehghani*, S Abnar, Y Shen, D Bahri, P Pham, J Rao, L Yang, ...
ICLR 2021, 2020
617 · 2020
Quaternion Knowledge Graph Embedding
S Zhang*, Y Tay*, L Yao, Q Liu
NeurIPS 2019, 2019
581 · 2019
The Flan Collection: Designing data and methods for effective instruction tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
International Conference on Machine Learning, 22631-22648, 2023
522 · 2023
Challenging BIG-Bench tasks and whether chain-of-thought can solve them
M Suzgun, N Scales, N Schärli, S Gehrmann, Y Tay, HW Chung, ...
arXiv preprint arXiv:2210.09261, 2022
463 · 2022
Scaling vision transformers to 22 billion parameters
M Dehghani, J Djolonga, B Mustafa, P Padlewski, J Heek, J Gilmer, ...
International Conference on Machine Learning, 7480-7512, 2023
397 · 2023
UL2: Unifying Language Learning Paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, J Wei, X Wang, HW Chung, ...
ICLR 2023, 2022
382* · 2022
Synthesizer: Rethinking self-attention in transformer models
Y Tay, D Bahri, D Metzler, DC Juan, Z Zhao, C Zheng
ICML 2021, 2020
374 · 2020
Multi-Pointer Co-Attention Networks for Recommendation
Y Tay, LA Tuan, SC Hui
KDD 2018, 2018
354 · 2018
Latent Relational Metric Learning via Memory-based Attention for Collaborative Ranking
Y Tay, LA Tuan, SC Hui
Proceedings of WWW 2018, 2018
344 · 2018
Sparse Sinkhorn Attention
Y Tay, D Bahri, L Yang, D Metzler, DC Juan
ICML 2020, 2020
323 · 2020
Next item recommendation with self-attention
S Zhang, Y Tay, L Yao, A Sun
arXiv preprint arXiv:1808.06414, 2018
262* · 2018
Dive into Deep Learning: Recommender Systems
S Zhang, A Zhang, Y Tay
244* · 2019
Larger language models do in-context learning differently
J Wei, J Wei, Y Tay, D Tran, A Webson, Y Lu, X Chen, H Liu, D Huang, ...
arXiv preprint arXiv:2303.03846, 2023
225 · 2023
Learning to Attend via Word-Aspect Associative Fusion for Aspect-based Sentiment Analysis
Y Tay, AT Luu, SC Hui
Proceedings of AAAI 2018, 2018
212 · 2018