Ignacio Hounie



Ph.D. Candidate
University of Pennsylvania

ihounie [AT] seas.upenn.edu

Bio

I'm a third-year Ph.D. student at the University of Pennsylvania, advised by Prof. Alejandro Ribeiro. I am broadly interested in machine learning, signal processing, and optimization, and have spent the last few years developing tools for learning under constraints. In this line of work I have tackled problems ranging from invariance and data augmentation to model quantization and federated learning. If you are not entirely sure what Constrained Learning is, or why it is relevant, I encourage you to read this.

I earned my BSc. in Electrical Engineering from Udelar in Montevideo, Uruguay, which is my hometown. During my time there, I worked on various ML applications including environmental sound monitoring, image restoration, and genome-enabled prediction.

Publications

Full list of publications on Google Scholar.

Resilient Constrained Learning

Ignacio Hounie, Alejandro Ribeiro, Luiz F. O. Chamon

To Appear at NeurIPS 2023.

Automatic Data Augmentation via Invariance Constrained Learning

Ignacio Hounie, Luiz F. O. Chamon, Alejandro Ribeiro

ICML 2023.

Neural Networks with Quantization Constraints

Ignacio Hounie, Juan Elenter, Alejandro Ribeiro

ICASSP 2023.

Image inpainting using patch consensus and DCT priors

Ignacio Ramírez Paulino, Ignacio Hounie

Image Processing On Line 2021.

DCASE-models: a Python library for computational environmental sound analysis using deep-learning models

Pablo Zinemanas, Ignacio Hounie, Pablo Cancela, Frederic Font Corbera, Martín Rocamora, Xavier Serra

DCASE 2020.

Paco and paco-dct: Patch consensus and its application to inpainting

Ignacio Ramírez Paulino, Ignacio Hounie

ICASSP 2020.

Something old, something new, something borrowed: Evaluation of different neural network architectures for genomic prediction

Maria Inés Fariello, Lucía Arboleya, Diego Belzarena, Leonardo De Los Santos, Juan Elenter, Guillermo Etchebarne, Ignacio Hounie, Gabriel Ciappesoni, Elly Navajas, Federico Lecumberry

Plant & Animal Genome Conference 2023.

Graph Neural Networks for genome enabled prediction of complex traits

Ignacio Hounie, Juan Elenter, Guillermo Etchebarne, María Inés Fariello, Federico Lecumberry

Probabilistic Modeling in Genomics 2021.

On two dimensional mappings of SNP marker data and CNNs: Overcoming the limitations of existing methods using Fermat distance

Juan Elenter, Guillermo Etchebarne, Ignacio Hounie, María Inés Fariello, Federico Lecumberry

Probabilistic Modeling in Genomics 2021.

Vitæ

Full Resume in PDF.

Miscellaneous

A Constrained Learning Evangelization.

For the skeptical reader who may harbor reasonable doubts about the relevance of this line of work to their own, let me share my two cents. If you're looking for a more thorough, less opinionated, and hopefully more convincing argument, I encourage you to delve into our papers, attend our talks, or simply reach out :)

Let’s start with why.

Machine Learning (ML), as we know it, is all about attaining objectives. The pluralization of the word objectives is not coincidental. More often than not, and as in any other system that we build and design, these objectives comprise several aspects or metrics that have to be accounted for. They also entail desiderata that are more naturally expressed as requirements, whether from social, business, or engineering perspectives. Regardless, current approaches require the daunting task of heuristically adjusting weights, learning curricula, or schedules to integrate requirements into ML pipelines. More importantly, they provide few guarantees about satisfying each individual requirement. This should be relatable to anyone who has had to tune weights for conflicting objectives or regularization coefficients.
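To see why weight tuning is so unsatisfying, consider a toy scalar problem of my own construction (not from any of the papers above): minimizing a weighted sum of two objectives, where the weight maps to the actual requirement level only opaquely.

```python
# Toy illustration: penalty weights map opaquely to requirement levels.
# Minimize (x - 2)^2 + w * x^2, which has closed-form minimizer x*(w) = 2 / (1 + w).
# Suppose the actual requirement we care about is x^2 <= 0.25, i.e. x <= 0.5.

def x_star(w):
    """Closed-form minimizer of (x - 2)^2 + w * x^2."""
    return 2.0 / (1.0 + w)

for w in [0.5, 1.0, 3.0, 10.0]:
    x = x_star(w)
    print(f"w = {w:5.1f} -> x* = {x:.3f}, requirement met: {x <= 0.5}")
```

The weight w says nothing direct about whether the requirement holds; you only find out by solving the problem and checking (here, w must reach 3 before x* drops to 0.5). A constrained formulation states "x^2 <= 0.25" up front, so the specification itself is the knob.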


[Figure] Life is complex and multi-dimensional, why would ML be any different?


ML is also, essentially, an optimization problem. Thus, it would be naive to ignore the vast body of knowledge about constrained optimization that has been developed (mostly) over the last century. We are not reinventing the wheel here; the ML and optimization fields (if one is inclined to believe that these are two entirely separate communities) have nurtured each other for years. Surprisingly, in the deep learning era, incredibly powerful algorithmic tools from constrained optimization have received little attention from both practitioners and researchers. Instead, the focus has been predominantly on unconstrained optimization.
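For the curious, the workhorse behind many of these tools is surprisingly simple. Below is a minimal primal-dual (gradient descent-ascent) sketch on a scalar toy problem; the problem, step size, and iteration count are all of my choosing, and this is only meant to show the mechanics, not any particular paper's algorithm.

```python
# Minimal primal-dual sketch: minimize f(x) = (x - 2)^2 subject to x <= 1.
# Lagrangian: L(x, lam) = (x - 2)^2 + lam * (x - 1), with lam >= 0.
# Alternate a gradient descent step on x with a projected ascent step on lam.

eta = 0.05          # step size for both players (small, for stability)
x, lam = 0.0, 0.0   # primal and dual initial iterates

for _ in range(5000):
    grad_x = 2.0 * (x - 2.0) + lam         # dL/dx
    x -= eta * grad_x                      # primal descent
    lam = max(0.0, lam + eta * (x - 1.0))  # dual ascent, projected to lam >= 0

# The iterates approach the constrained optimum x = 1, with multiplier lam = 2.
print(round(x, 3), round(lam, 3))  # prints: 1.0 2.0
```

The dual variable lam acts as an automatically tuned penalty weight: it grows while the constraint is violated and settles at exactly the value needed to enforce it.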


[Figure] Constrained optimization methods have been relegated to older, well-studied, and more tractable problems, and we have made all this progress, seemingly, without them.


It comes as no surprise that, upon examining various existing ML techniques, we discover that many of them, despite their initial motivations, end up tackling constrained learning problems. Rather than being discouraging, this suggests that constrained learning offers an expressive framework that can provide principled and interpretable approaches to several relevant problems. By unifying these existing techniques, which are often highly domain- and application-dependent, we also aim to develop more general tools. I do so in the belief that we will once again learn the bitter lesson: in the long run, the more general approach prevails.




(Only) (Some) (Current) Applications

This is a biased, incomplete, unordered, and by no means representative enumeration of applications (references and links coming soon). I've put it together simply to illustrate that Constrained Learning already has a myriad of applications.

Still Lots to Explore!

What sets Constrained Learning apart from other constrained optimization problems is that its objectives and requirements are statistical. In other words, data, and let's not forget computation, play crucial roles in defining requirements and enabling their fulfillment. Fueled by impressive achievements in ML, both the data and the models the field deals with have grown exponentially more complex. This presents new challenges and possibilities which, if you have read this far, you probably agree are, at least to some extent, worth exploring.

The blueprint for this website was shamelessly taken from Juan Elenter's, who in turn took it from Martin Zaveski; it can be found in this GitHub repo. Feel free to use it.