I am an associate professor in the Department of Electrical and Systems Engineering (primary appointment), as well as in the Department of Computer and Information Science and the Department of Statistics and Data Science at the Wharton School.
I am also affiliated with Google Research (NYC) as a visiting faculty researcher. In addition, I am the Penn site lead of EnCORE, the Institute for Emerging CORE Methods of Data Science, and co-lead of foundations at the AI Institute for Learning-enabled Optimization at Scale (NSF-TILOS).
Before joining Penn, I was a research fellow at the Simons Institute, UC Berkeley (Foundations of Machine Learning program). Prior to that, I was a postdoctoral scholar and lecturer at the Institute for Machine Learning at ETH Zürich. I received my Ph.D. in Computer and Communication Sciences from EPFL. Here is my Curriculum Vitae.
Updates
- New: ICML 2025 Tutorial on AI Safety and Jailbreaking
- Honored to be selected as the recipient of the IEEE Information Theory Society James L. Massey Research and Teaching Award for Young Scholars, 2023.
- EnCORE, the Institute for Emerging CORE Methods of Data Science, officially launched in October 2022. Excited to serve as the Penn site lead.
- Our paper received the IEEE Communications Society & Information Theory Society Joint Paper Award, 2023.
- 2023 IEEE North American School of Information Theory, held at Penn.
- Honored to be selected as a Distinguished Lecturer of the IEEE Information Theory Society in 2022–2023.
- Invited talks at UT Austin (FoDS Seminar), Chalmers University (ML seminar), MIT (OPT-ML++ Seminar), Rutgers (MSIS Seminar), University of Wisconsin–Madison (SILO), University of Washington (ML Seminar) — video (2021).
- ICML 2020 Tutorial on Submodular Optimization.
- Selected among Intel’s 2020 Rising Star Faculty Awardees.
- NeurIPS 2020: Works on Submodular Meta-Learning, Efficiently Computing Sinkhorn Barycenters, and "A Natural Gradient Method to Compute the Sinkhorn Distance in Optimal Transport" (Spotlight).
- COLT 2020: Precise Tradeoffs for Adversarial Training.
- ICML 2020: Decentralized Optimization over Directed Networks.
- NSF CAREER Award, 2019.
- AISTATS 2020: Works on Federated Learning, Quantized and Distributed Frank–Wolfe, Black-Box Submodular Maximization, and One-Sample Frank–Wolfe.
- AFOSR Young Investigator Award, 2019.
- NeurIPS 2019: Works on Stochastic Conditional Gradient++, Lipschitz Constants for DNNs, Robust & Communication-Efficient Collaborative Learning, and Bandit Submodular Maximization.
- ICML 2019: Hessian-Aided Policy Gradient, Sample Likelihoods in GANs.
- Data Science Summer School, École Polytechnique, 2019: 6 lectures on “Theory and Applications of Submodularity.”
- Allerton 2018: Organized a session on “Submodular Optimization.”
- ICML 2018: Decentralized Submodular Maximization, Projection-Free Online Optimization.
- ISIT 2018: Scaling of Reed–Muller Codes; A New Coding Paradigm for the Primitive Relay Channel.
- Invited talks: MIT workshop on local algorithms (2018), Harvard CMSA and UMD ECE (coding and information theory), Santa Fe Institute, Dagstuhl Seminar and ITA 2018, UPenn ESE (Coding for IoT), INFORMS Optimization Society (session on Submodular Optimization), AAAI 2018 (Learning to Interact with Learning Agents), MIT EECS, and Yale YINS. Also: PC member for ISIT 2018; papers at NIPS 2017.
Some Recent Publications
- A. Robey, L. Chamon, G. Pappas, H. Hassani — Probabilistically Robust Learning, 2022.
- H. Hassani, A. Javanmard — Curse of Overparameterization in Adversarial Training, 2022.
- A. Zhou, F. Tajwar, A. Robey, T. Knowles, G. Pappas, H. Hassani, C. Finn — Do Deep Networks Transfer Invariances Across Classes?, 2022.
- A. Robey, G. Pappas, H. Hassani — Model-Based Domain Generalization, 2021.
- A. Adibi, A. Mokhtari, H. Hassani — Minimax Optimization: The Case of Convex–Submodular, 2021.
- L. Collins, H. Hassani, A. Mokhtari, S. Shakkottai — Shared Representations for Personalized Federated Learning, 2021.
- E. Lei, H. Hassani, S. Saeedi Bidokhti — OOD Robustness in Deep Learning Compression, 2021.
- P. Delgosha, H. Hassani, R. Pedarsani — Robust Classification Under L0 Attack, 2021.
- A. Mitra, R. Jaafar, G. Pappas, H. Hassani — Linear Convergence in Federated Learning, 2021.
- Z. Shen, H. Hassani, S. Kale, A. Karbasi — Federated Functional Gradient Boosting, 2021.
- Z. Shen, Z. Wang, A. Ribeiro, H. Hassani — Sinkhorn Natural Gradient for Generative Models, 2020.
- A. Reisizadeh, I. Tziotis, H. Hassani, A. Mokhtari, R. Pedarsani — Straggler-Resilient Federated Learning, 2020.
- A. Robey, H. Hassani, G. Pappas — Model-Based Robust Deep Learning, 2020.
- A. Javanmard, M. Soltanolkotabi, H. Hassani — Adversarial Training Tradeoffs for Linear Regression, 2020.
- Z. Shen, Z. Wang, A. Ribeiro, H. Hassani — Sinkhorn Barycenter via Functional Gradient Descent, 2020.
- A. Adibi, A. Mokhtari, H. Hassani — Submodular Meta-Learning, 2020.
- E. Dobriban, H. Hassani, D. Hong, A. Robey — Provable Tradeoffs in Robust Classification, 2020.
- X. Chen, K. Gatsis, H. Hassani, S. Saeedi Bidokhti — Age of Information in Random Access Channels, 2020.
- H. Hassani, A. Karbasi, A. Mokhtari, Z. Shen — Stochastic Conditional Gradient++, 2019.
- M. Fazlyab, A. Robey, H. Hassani, M. Morari, G. Pappas — Estimating Lipschitz Constants for DNNs, 2019.
- A. Robey, A. Adibi, B. Schlotfeldt, G. Pappas, H. Hassani — Optimal Algorithms for Submodular Maximization with Distributed Constraints, 2019.
- A. Reisizadeh, H. Taheri, A. Mokhtari, H. Hassani, R. Pedarsani — Robust & Communication-Efficient Collaborative Learning, 2019.
- Z. Shen, H. Hassani, A. Ribeiro — Hessian Aided Policy Gradient, 2019.
- M. Zhang, L. Chen, A. Mokhtari, H. Hassani, A. Karbasi — Quantized Frank–Wolfe, 2019.
- A. Gotovos, H. Hassani, A. Krause, S. Jegelka — Discrete Sampling Using Semigradient-based Product Mixtures, 2018.
- A. Mokhtari, H. Hassani, A. Karbasi — Stochastic Conditional Gradient Methods, 2018.
- Y. Balaji, H. Hassani, R. Chellappa, S. Feizi — Entropic GANs meet VAEs, 2018.
- K. Gatsis, H. Hassani, G. J. Pappas — Latency–Reliability Tradeoffs for State Estimation, 2018.
- A. Mokhtari, H. Hassani, A. Karbasi — Decentralized Submodular Maximization, 2018.
- M. Fereydounian, V. Jamali, H. Hassani, H. Mahdavifar — Channel Coding at Low Capacity, 2018.
- A. Fazeli, H. Hassani, M. Mondelli, A. Vardy — Polar Codes with Large Kernels, 2018.
- H. Hassani, S. Kudekar, O. Ordentlich, Y. Polyanskiy, R. Urbanke — Scaling of Reed–Muller Codes, 2018.
- L. Chen, C. Harshaw, H. Hassani, A. Karbasi — Projection-Free Online Optimization, 2018.
- M. Hayhoe, F. Barreras, H. Hassani, V. M. Preciado — SPECTRE: Seedless Network Alignment, 2018.
- Y. Chen, H. Hassani, A. Krause — Near-optimal Bayesian Active Learning, 2017.
Contact
- In person: Room 465C, 3401 Walnut St.
- Cell: 650 666 5254
- Email: hassani@seas.upenn.edu
- Mail: Dept. of Electrical & Systems Engineering, University of Pennsylvania, Room 465C, 3401 Walnut Street, Philadelphia, PA 19104