I am an associate professor in the Department of Electrical and Systems Engineering (primary appointment), with secondary appointments in the Department of Computer and Information Science and the Department of Statistics and Data Science at the Wharton School.

I am the Penn site-lead at EnCORE: Institute for Emerging CORE Methods of Data Science.

Before joining Penn, I was a research fellow at the Simons Institute, UC Berkeley (program: Foundations of Machine Learning). Prior to that, I was a postdoctoral scholar and lecturer at the Institute for Machine Learning at ETH Zürich. I received my Ph.D. in Computer and Communication Sciences from EPFL. Here is my Curriculum Vitae.






Updates

Honored to be selected as the recipient of the IEEE Information Theory Society James L. Massey Research and Teaching Award for Young Scholars, 2023.

EnCORE, the Institute for Emerging CORE Methods of Data Science, will officially start in October 2022. Excited to be the Penn site lead.

Our paper received the IEEE Communications Society & Information Theory Society Joint Paper Award, 2023.

The 2023 IEEE North American School of Information Theory will be hosted at Penn.

Honored to be selected as a Distinguished Lecturer of the IEEE Information Theory Society in 2022-2023.

Invited talks at UT Austin (Foundations of Data Science Seminar), Chalmers University (ML Seminar), MIT (OPT-ML++ Seminar), Rutgers University (Business School MSIS Seminar), University of Wisconsin-Madison (SILO Seminar), and University of Washington, Seattle (ML Seminar; video). 2021.

 

New: ICML 2020 Tutorial on Submodular Optimization

   
Honored to be selected among "Intel’s 2020 Rising Star Faculty Awardees".

NeurIPS 2020: We will present our work on Submodular Meta-Learning, Efficiently Computing Sinkhorn Barycenters, and A Natural Gradient Method to Compute the Sinkhorn Distance in Optimal Transport (as a spotlight).

COLT 2020: We will present our work on Precise Tradeoffs for Adversarial Training.

ICML 2020: We will present our work on Decentralized Optimization over Directed Networks.


NSF CAREER Award, 2019.

AISTATS 2020: We will present our work on Federated Learning, Quantized and Distributed Frank-Wolfe, Black-Box Submodular Maximization, and One Sample Frank-Wolfe.


AFOSR Young Investigator Award, 2019.

NeurIPS 2019: We will present our work on Stochastic Conditional Gradient++, Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks, Robust and Communication-Efficient Collaborative Learning, and Bandit Submodular Maximization.


ICML 2019: We will present our work on Hessian-Aided Policy Gradient and A Statistical Approach to Compute Sample Likelihoods in GANs.


Data Science Summer School, Ecole Polytechnique, 2019: I will be giving 6 lectures on "Theory and Applications of Submodularity: From Discrete to Continuous and Back".


Allerton 2018: We will be holding a session on "Submodular Optimization" with five great speakers.


ICML 2018: We will present our work on Decentralized Submodular Maximization and Projection-Free Online Optimization.


ISIT 2018: We will present our work on The Scaling of Reed-Muller Codes and A New Coding Paradigm for the Primitive Relay Channel.


Invited talk at the workshop on local algorithms (MIT) on "Submodular Maximization: The Decentralized Setting" (June 15th).


AISTATS 2018: We will present our work on The Stochastic Frank-Wolfe Method and Online Submodular Maximization.


Invited talks at the workshop on coding and information theory (Harvard CMSA) and the University of Maryland (ECE) on "Non-asymptotic Analysis of Codes and its Practical Significance" (April 13th and March 29th).


NSF CISE Research Initiation Initiative (NSF-CRII) award, 2018.


Invited talk at the Santa Fe Institute on "Sequential Information Maximization: From Theory to Designs" (Feb 21st).


Talk at the Dagstuhl Seminar and ITA 2018 on "Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings" (Feb 16th).


Talk at UPenn ESE on "Coding for IoT" (Jan 26th).


We are organizing a session on "Submodular Optimization" at the 2018 INFORMS Optimization Society Conference.


AAAI 2018: We will present our work on Learning to Interact with Learning Agents.


I will serve as a program committee member for the IEEE International Symposium on Information Theory (ISIT'18). Please consider submitting your work to ISIT!


NIPS 2017: We will present our work on Stochastic Submodular Maximization and Gradient Methods for Submodular Maximization.


Invited talk at MIT EECS on "Recent Advances in Channel Coding" (Nov 1st).


Invited talk at the Yale Institute for Network Science (YINS) on "K-means: A Nonconvex Problem with Fast and Provable Algorithms" (Oct 25th).




Some Recent Publications


A. Robey, L. Chamon, G. Pappas, H. Hassani Probabilistically Robust Learning: Balancing Average- and Worst-case Performance, 2022.


H. Hassani, A. Javanmard The curse of overparametrization in adversarial training: Precise analysis of robust generalization for random features regression, 2022.


A. Zhou, F. Tajwar, A. Robey, T. Knowles, G. Pappas, H. Hassani, C. Finn Do Deep Networks Transfer Invariances Across Classes?, 2022.


A. Robey, G. Pappas, H. Hassani Model-Based Domain Generalization, 2021.


A. Adibi, A. Mokhtari, H. Hassani Minimax Optimization: The Case of Convex-Submodular, 2021.


L. Collins, H. Hassani, A. Mokhtari, S. Shakkottai Exploiting Shared Representations for Personalized Federated Learning, 2021.


E. Lei, H. Hassani, S. Saeedi Bidokhti Out-of-Distribution Robustness in Deep Learning Compression, 2021.


P. Delgosha, H. Hassani, R. Pedarsani Robust Classification Under L_0 Attack for the Gaussian Mixture Model, 2021.


A. Mitra, R. Jaafar, G. Pappas, H. Hassani Achieving Linear Convergence in Federated Learning under Objective and Systems Heterogeneity, 2021.


Z. Shen, H. Hassani, S. Kale, A. Karbasi Federated Functional Gradient Boosting, 2021.


Z. Shen, Z. Wang, A. Ribeiro, H. Hassani Sinkhorn Natural Gradient for Generative Models, 2020.


A. Reisizadeh, I. Tziotis, H. Hassani, A. Mokhtari, R. Pedarsani Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity, 2020.


A. Robey, H. Hassani, G. J. Pappas Model-Based Robust Deep Learning, 2020.


A. Javanmard, M. Soltanolkotabi, H. Hassani Precise Tradeoffs in Adversarial Training for Linear Regression, 2020.


Z. Shen, Z. Wang, A. Ribeiro, H. Hassani Sinkhorn Barycenter via Functional Gradient Descent, 2020.


A. Adibi, A. Mokhtari, H. Hassani Submodular Meta-Learning, 2020.


E. Dobriban, H. Hassani, D. Hong, A. Robey Provable tradeoffs in adversarially robust classification, 2020.


X. Chen, K. Gatsis, H. Hassani, S. Saeedi Bidokhti Age of Information in Random Access Channels, 2020.


H. Hassani, A. Karbasi, A. Mokhtari, Z. Shen Stochastic Conditional Gradient++, 2019.


M. Fazlyab, A. Robey, H. Hassani, M. Morari, G. Pappas Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks, 2019.


A. Robey, A. Adibi, B. Schlotfeldt, G. Pappas, H. Hassani Optimal Algorithms for Submodular Maximization with Distributed Constraints, 2019.


A. Reisizadeh, H. Taheri, A. Mokhtari, H. Hassani, R. Pedarsani Robust and Communication-Efficient Collaborative Learning, 2019.


Z. Shen, H. Hassani, A. Ribeiro Hessian Aided Policy Gradient, 2019.


M. Zhang, L. Chen, A. Mokhtari, H. Hassani, A. Karbasi Quantized Frank-Wolfe: Communication-Efficient Distributed Optimization, 2019.


A. Gotovos, H. Hassani, A. Krause, S. Jegelka, Discrete Sampling Using Semigradient-based Product Mixtures, 2018.


A. Mokhtari, H. Hassani, A. Karbasi, Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization, 2018.


Y. Balaji, H. Hassani, R. Chellappa, S. Feizi, Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs, 2018.


K. Gatsis, H. Hassani, G. J. Pappas, Latency-Reliability Tradeoffs for State Estimation, 2018.


A. Mokhtari, H. Hassani, A. Karbasi, Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings, 2018.


M. Fereydounian, V. Jamali, H. Hassani, H. Mahdavifar, Channel Coding at Low Capacity, 2018.


A. Fazeli, H. Hassani, M. Mondelli, A. Vardy, Binary Linear Codes with Optimal Scaling: Polar Codes with Large Kernels, 2018.


H. Hassani, S. Kudekar, O. Ordentlich, Y. Polyanskiy, R. Urbanke, Almost Optimal Scaling of Reed-Muller Codes on BEC and BSC Channels, 2018.


L. Chen, C. Harshaw, H. Hassani, A. Karbasi, Projection-Free Online Optimization with Stochastic Gradient: From Convexity to Submodularity, 2018.


M. Hayhoe, F. Barreras, H. Hassani, V. M. Preciado, SPECTRE: Seedless Network Alignment via Spectral Centralities, 2018.


Y. Chen, S. H. Hassani, A. Krause, Near-optimal Bayesian Active Learning with Correlated and Noisy Tests, 2017.





Contact

In person: 465C (3401 Walnut St.)
Cell: 650 666 5254
Email: hassani@seas.upenn.edu

Mail: Dept. of Electrical & Systems Engineering
 University of Pennsylvania
 Room 465C
 3401 Walnut Street
 Philadelphia, PA 19104