I'm a 3rd-year Ph.D. student at the University of Pennsylvania, where I mostly think about constrained optimization, LLM fine-tuning, active learning, and continual learning with Prof. Ribeiro. I'm also working towards a Master's degree in Data Science and Statistics from The Wharton School. Before graduate school, I visited the Kundaje Lab at Stanford, where I worked with Anshul Kundaje on genomic motif discovery through ATAC-seq analysis. I also worked at CERN, contributing to the CMS Open Data initiative with Dr. Lassila-Perini.
I'm originally from the beautiful city of Montevideo, Uruguay, where I obtained a BSc. in Electrical Engineering from UdelaR. During my undergrad, I was a software developer at IBM. If any of this sparks your interest, or if you want to share some Yerba Mate, feel free to reach out.
On the theory side, I work on duality-based constrained optimization, which is particularly relevant to LLM fine-tuning and foundation models. The goal is to obtain models that not only excel at a main task, but also adhere to requirements such as safety, invariance, robustness, and fairness. For instance, my most recent paper shows that, in many learning problems, dual subgradient methods yield near-feasible solutions without randomization, despite non-convexity.
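To give a flavor of the approach, here is a minimal sketch of dual subgradient constrained learning on a toy problem. Everything here (the linear model, the L2-norm stand-in for a requirement, the step sizes) is an illustrative assumption, not the setup from the paper: the model parameters descend on the Lagrangian, while the multiplier performs projected subgradient ascent on the constraint slack.

```python
# A minimal sketch of dual subgradient constrained learning (toy setup, not
# the paper's exact algorithm): minimize a task loss subject to a
# requirement loss staying below a tolerance epsilon.
import torch

torch.manual_seed(0)

# Toy data and model; in practice these would be, e.g., an LLM and its
# fine-tuning data.
X, y = torch.randn(256, 10), torch.randn(256, 1)
model = torch.nn.Linear(10, 1)

epsilon = 0.1                       # constraint level: requirement_loss <= epsilon
lam = torch.tensor(0.0)             # Lagrange multiplier (dual variable)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
dual_lr = 1e-2                      # dual step size

def task_loss(m):
    return torch.nn.functional.mse_loss(m(X), y)

def requirement_loss(m):
    # Stand-in for a safety/robustness/fairness requirement:
    # here, simply an L2 penalty on the weights.
    return sum(p.pow(2).sum() for p in m.parameters())

for step in range(1000):
    # Primal step: gradient descent on the Lagrangian f + lam * (g - epsilon).
    opt.zero_grad()
    lagrangian = task_loss(model) + lam * (requirement_loss(model) - epsilon)
    lagrangian.backward()
    opt.step()

    # Dual step: subgradient ascent on lam, projected onto lam >= 0.
    with torch.no_grad():
        slack = requirement_loss(model) - epsilon
        lam = torch.clamp(lam + dual_lr * slack, min=0.0)
```

The intuition is that the multiplier grows whenever the requirement is violated and shrinks toward zero when it is slack, so the iterates are pushed toward (near-)feasible solutions even though neither loss need be convex in the model parameters.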
On the applications side, I have delved into data-centric ML and explored the following questions:
- Which set of samples should I label next so that my model is the best possible at the lowest labelling cost?
- What are the most impactful samples in my dataset? Which ones should I discard?
- How should I continuously fine-tune my model as new data, with different properties, gets collected?
These questions have led to my work on active and continual learning; a toy sketch of the sample-selection step is below.
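As an illustration of the first question, here is a minimal uncertainty-based pool selection loop. This is a standard active learning baseline, not the Lagrangian criterion from the NeurIPS paper below, and the model, pool, and budget are made-up placeholders.

```python
# A minimal sketch of uncertainty-based active learning: query the unlabeled
# points where the current model is least confident (highest predictive entropy).
import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 3)      # toy 3-class classifier
pool = torch.randn(500, 10)         # unlabeled pool of candidate samples

with torch.no_grad():
    probs = model(pool).softmax(dim=-1)
    # Predictive entropy per sample; clamp avoids log(0).
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

budget = 16
query_idx = entropy.topk(budget).indices   # the most uncertain samples
# query_idx now indexes the pool points to send for labeling.
```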
I have a particular interest in problems involving biological signals such as genome sequences, medical images, brain images, and the gut microbiome.
Near-Optimal Solutions of Constrained Learning Problems.
Juan Elenter, Luiz Chamon, Alejandro Ribeiro
International Conference on Learning Representations (ICLR), 2024.
Primal-Dual Continual Learning: Stability and Plasticity through Lagrange Multipliers.
Juan Elenter, Navid NaderiAlizadeh, Tara Javidi, Alejandro Ribeiro
Preprint, under review.
A Lagrangian Duality Approach to Active Learning.
Juan Elenter, Navid NaderiAlizadeh, Alejandro Ribeiro
Neural Information Processing Systems (NeurIPS), 2022.
Neural Networks with Quantization Constraints.
Ignacio Hounie*, Juan Elenter*, Alejandro Ribeiro
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023.
Graph Neural Networks for genome-enabled prediction of complex traits.
Juan Elenter, Ignacio Hounie, Guillermo Etchebarne, María Inés Fariello, Federico Lecumberry
Probabilistic Modeling in Genomics, CSHL, 2021.