I am a Principal Researcher at Microsoft Research. My recent research focuses on large-scale natural language processing and multimodal learning, including:
- LLM distillation and adaptation [1, 2, 3, 4]
- LLM test-time scaling [5, 6, 7]
- Building specialized foundation models [8, 9, 10, 11, 12]
Selected Publications [See Google Scholar for the full publication list]
- Scaling medical imaging report generation with multimodal reinforcement learning
  Qianchu Liu*, Sheng Zhang*, Guanghui Qin*, Yu Gu, Ying Jin, Sam Preston, Yanbo Xu, Sid Kiblawi, Wen-wai Yim, Timothy Ossowski, Tristan Naumann, Mu Wei, Hoifung Poon (*equal contribution)
  [ MSR Blog ]
- OctoMed: Data Recipes for State-of-the-Art Multimodal Medical Reasoning
  Timothy Ossowski*, Sheng Zhang*, Qianchu Liu, Guanghui Qin, Reuben Tan, Tristan Naumann, Junjie Hu, Hoifung Poon (*equal contribution)
  [ Model | Blog ]
- Be My Eyes: Extending Large Language Models to New Modalities Through Multi-Agent Collaboration
  James Y. Huang*, Sheng Zhang*, Qianchu Liu, Guanghui Qin, Tinghui Zhu, Tristan Naumann, Muhao Chen, Hoifung Poon (*equal contribution)
  [ Blog ]
- X-Reasoner: Towards Generalizable Reasoning Across Modalities and Domains
  Qianchu Liu*, Sheng Zhang*, Guanghui Qin*, Timothy Ossowski, Yu Gu, Ying Jin, Sid Kiblawi, Sam Preston, Mu Wei, Paul Vozila, Tristan Naumann, Hoifung Poon (*equal contribution)
  [ Project Page ]
- Generative Medical Event Models Improve with Scale
  Shane Waxler, Paul Blazek, Davis White, Daniel Sneider, Kevin Chung, Mani Nagarathnam, Patrick Williams, Hank Voeller, Karen Wong, Matthew Swanhorst, Sheng Zhang, Naoto Usuyama, Cliff Wong, Tristan Naumann, Hoifung Poon, Andrew Loza, Daniella Meeker, Seth Hain, Rahul Shah
- Exploring Scaling Laws for EHR Foundation Models
  Sheng Zhang, Qin Liu, Naoto Usuyama, Cliff Wong, Tristan Naumann, Hoifung Poon
- BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs
  NEJM AI
  Sheng Zhang*, Yanbo Xu*, Naoto Usuyama*, Hanwen Xu*, Jaspreet Bagga, Robert Tinn, Sam Preston, Rajesh Rao, Mu Wei, Naveen Valluri, Cliff Wong, Andrea Tupini, Yu Wang, Matt Mazzola, Swadheen Shukla, Lars Liden, Jianfeng Gao, Angela Crabtree, Brian Piening, Carlo Bifulco, Matthew P. Lungren, Tristan Naumann, Sheng Wang, Hoifung Poon (*equal contribution)
  [ Model | Data ]
- LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day
  NeurIPS 2023 Datasets & Benchmarks (Spotlight)
  Chunyuan Li*, Cliff Wong*, Sheng Zhang*, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, Jianfeng Gao (*equal contribution)
  [ Project Page ]
Service
- Area Chair: NeurIPS 2023; ARR; ACL 2024; NAACL 2021, 2024; EMNLP 2022; IJCNLP-AACL 2023
- Tutorial: KDD 2023
- Organizer: Workshop on COmmonsense INference in NLP (COIN) at EMNLP 2019
- (S)PC Member/Reviewer: ACL 2017-2023; EMNLP 2018-2021; AAAI 2020-2024; CVPR 2025; ICCV 2023-2025; NAACL 2018-2021; EACL 2017, 2021; AACL-IJCNLP 2020; COLM 2024; COLING 2020; CoNLL 2019; IJCNLP 2017; IWCS 2017; TACL; Computational Linguistics; ARR; BMC Bioinformatics; NLE