Willie Neiswanger

Machine learning at Stanford Computer Science SAIL / StatsML

I am a postdoc in computer science at Stanford University, working with Stefano Ermon and affiliated with the StatsML Group, Stanford AI Lab, and SLAC.

Research: I develop machine learning methods to perform efficient optimization and experimental design in costly real-world settings, where resources are limited. My work spans topics in active learning, uncertainty quantification, Bayesian decision making, and reinforcement learning. I apply these methods downstream to solve problems in science and engineering, for example in the physical sciences and machine learning systems.

I have also worked on distributed algorithms for scalable machine learning, and I develop and maintain software libraries for multilevel optimization, uncertainty quantification, AutoML, and Bayesian optimization.

Education: I completed my PhD in Machine Learning at Carnegie Mellon University, where I was advised by Eric Xing and collaborated with Jeff Schneider and Barnabas Poczos.

Previously, I studied at Columbia University, where I worked with Chris Wiggins and Frank Wood.


  • Oct 20, 2022 New paper on uncertainty quantification with pre-trained language models in EMNLP 2022.
  • Oct 10, 2022 New paper (+ code) on trajectory information planning for exploration in RL in NeurIPS 2022.
  • Oct 10, 2022 New paper on decision-theoretic entropies for Bayesian optimization in NeurIPS 2022.
  • July 22, 2022 Co-organized the Real World Experiment Design and Active Learning Workshop at ICML 2022.
  • May 15, 2022 New paper (+ website) on likelihood-free Bayesian optimization in ICML 2022 (long talk).
  • May 15, 2022 New paper on a modular conformal calibration framework for UQ in ICML 2022.
  • Jan 28, 2022 New paper (+ blog post) on experimental design and reinforcement learning in ICLR 2022.
  • Jan 1, 2022 New paper (+ website) on large-scale object counting in satellite images, in AAAI 2022 (oral).
  • Oct 15, 2021 New paper (+ code) on quantile methods for calibrated uncertainties in NeurIPS 2021.
  • Oct 15, 2021 Two papers on explainable ML and personalized benchmarking in NeurIPS 2021.
  • July 14, 2021 Our paper on Pollux was awarded the Jay Lepreau Best Paper Award at OSDI 2021.
  • June 10, 2021 New paper (+ website) on Bayesian Algorithm Execution (BAX) and InfoBAX, in ICML 2021.
  • June 1, 2021 I co-organized the Machine Learning for Data Workshop at ICML 2021.
  • Apr 1, 2021 New paper (+ AdaptDL) on Pollux, a deep learning cluster scheduler/tuner, in OSDI 2021.
  • Mar 16, 2021 New paper (+ code) on uncertainty quantification with martingales for GPs in ALT 2021.
  • Mar 9, 2021 New paper on active classification for catalyst discovery in the Journal of Chemical Physics.
  • Jan 12, 2021 New paper (+ code) on interactive weak supervision in ICLR 2021.
  • Dec 22, 2020 Released Uncertainty Toolbox, for predictive UQ, calibration, metrics, and visualization.
  • Dec 2, 2020 New paper (+ code) on BANANAS for neural architecture search in AAAI 2021.
  • Sep 25, 2020 New paper on encodings for neural architecture search in NeurIPS 2020 (spotlight).
  • June 1, 2020 I co-organized the Real World Experiment Design and Active Learning Workshop at ICML 2020.
  • Mar 6, 2020 New paper on uncertainty quantification for materials property predictions in MLST.
  • Mar 5, 2020 New paper on Dragonfly, a system for scalable and robust Bayesian optimization in JMLR.
  • Jan 7, 2020 New paper on molecular optimization and synthesis route design in AISTATS 2020.



A full list of my publications can be found here.