Kaiwen Wu

PhD Student

Computer and Information Science
University of Pennsylvania

Email: kaiwenwu@seas.upenn.edu

How to pronounce my first name

CV  /  GitHub  /  Google Scholar  /  Twitter  /  Blog

About Me

I am a fourth-year PhD student in the Department of Computer and Information Science at the University of Pennsylvania, where I am advised by Jacob Gardner. Previously, I completed my MMath degree in Computer Science at the University of Waterloo. I did my undergraduate studies at Nanjing University.

I am interested in machine learning and optimization. My recent work focuses on scaling up computation in probabilistic machine learning. Specifically, I work on Gaussian processes, variational inference, and Bayesian optimization. I am also interested in convex optimization and deep generative modeling.

Research

* indicates equal contribution. See Google Scholar for a complete list of publications.

Publications

  • Understanding Stochastic Natural Gradient Variational Inference
    Kaiwen Wu and Jacob R. Gardner
    International Conference on Machine Learning (ICML 2024)
    [paper]

  • Large-Scale Gaussian Processes via Alternating Projection
    Kaiwen Wu, Jonathan Wenger, Haydn Jones, Geoff Pleiss and Jacob R. Gardner
    International Conference on Artificial Intelligence and Statistics (AISTATS 2024)
    [paper] [code]

  • The Behavior and Convergence of Local Bayesian Optimization
    Kaiwen Wu, Kyurae Kim, Roman Garnett and Jacob R. Gardner
    Advances in Neural Information Processing Systems (NeurIPS 2023)
    Spotlight Presentation
    [paper] [code]

  • On the Convergence of Black-Box Variational Inference
    Kyurae Kim, Jisu Oh, Kaiwen Wu, Yian Ma and Jacob R. Gardner
    Advances in Neural Information Processing Systems (NeurIPS 2023)
    [paper] [code]

  • Local Bayesian Optimization via Maximizing Probability of Descent
    Quan Nguyen*, Kaiwen Wu*, Jacob R. Gardner and Roman Garnett
    Advances in Neural Information Processing Systems (NeurIPS 2022)
    Oral Presentation
    [paper] [code]

  • Stronger and Faster Wasserstein Adversarial Attacks
    Kaiwen Wu, Allen Houze Wang and Yaoliang Yu
    International Conference on Machine Learning (ICML 2020)
    [paper] [code]

  • On Minimax Optimality of GANs for Robust Mean Estimation
    Kaiwen Wu, Gavin Weiguang Ding, Ruitong Huang and Yaoliang Yu
    International Conference on Artificial Intelligence and Statistics (AISTATS 2020)
    [paper] [code]

Lightly Reviewed Papers

  • A Fast, Robust Elliptical Slice Sampling Implementation for Linearly Truncated Multivariate Normal Distributions
    Kaiwen Wu and Jacob R. Gardner
    Workshop on Bayesian Decision-Making and Uncertainty at NeurIPS 2024
    [paper] [code]

  • Newton-type Methods for Minimax Optimization
    Guojun Zhang, Kaiwen Wu, Pascal Poupart and Yaoliang Yu
    Workshop on Beyond First-Order Methods in ML Systems at ICML 2021
    [paper] [code]

Miscellaneous

The following notes are for self-reference only. Some notes take forever to finish, unfortunately.

Writing pet peeves
  • Leaving LaTeX compilation errors unfixed in Overleaf. It is a felony.
  • Orphan lines.

I have reviewed (or will review) for the following conferences: AAAI 2021, AISTATS 2021, ICML 2023, NeurIPS 2023, ICLR 2024, AISTATS 2024, ICML 2024, NeurIPS 2024, ICLR 2025, ICML 2025.

I have reviewed (or will review) for the following journal(s): TMLR 2025.

A website calculating an upper bound on my Erdős number. (Yes, it overestimates my Erdős number.)

Modified from Jon Barron.