## Large-Scale Learning

Many
machine learning and signal processing problems can be formulated as
convex optimization problems whose objective is a sum of
predictive risks evaluated at each sample point of a training data set.
This formulation is variously referred to as empirical risk minimization
or sample average approximation. When the number of sample points is
fixed and not too large, deterministic iterative algorithms such as
gradient descent or Newton's method provide effective tools for
solving this class of problems.
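As a concrete illustration of the deterministic approach (a minimal sketch I am supplying for exposition, with synthetic data and illustrative names), consider gradient descent on a least-squares empirical risk:

```python
import numpy as np

def empirical_risk_grad(w, X, y):
    """Gradient of the average squared loss (1/2n) * ||Xw - y||^2."""
    return X.T @ (X @ w - y) / X.shape[0]

def gradient_descent(X, y, step=0.1, iters=500):
    """Deterministic gradient descent over the full training set."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w -= step * empirical_risk_grad(w, X, y)
    return w

# Synthetic regression data with a known ground-truth model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w_hat = gradient_descent(X, y)
```

Each iteration touches every sample point, which is exactly what becomes untenable when the data set grows without bound.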

When the number of sample
points is huge, possibly infinite, stochastic approximation
becomes necessary. Stochastic gradient methods operate on stochastic
gradients in lieu of true gradients, which allows data points to be
processed one at a time and, under suitable conditions, yields
convergence to the minimizer of an expected risk over all data
realizations. This methodology dates back to a landmark 1951 paper of
Robbins and Monro, but has become increasingly popular in the "Big
Data" era.
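A hedged toy sketch of the Robbins-Monro scheme (the mean-estimation example and all names here are illustrative, not from any particular system): step sizes of the form a/t applied to a gradient computed from a single streaming sample.

```python
import numpy as np

def sgd(grad_sample, w0, data_stream, a=1.0):
    """Stochastic approximation with Robbins-Monro step sizes a/t:
    each step uses the gradient at one streaming sample only."""
    w = float(w0)
    for t, sample in enumerate(data_stream, start=1):
        w -= (a / t) * grad_sample(w, sample)
    return w

# Toy expected risk: E[(w - x)^2 / 2] for x ~ N(3, 1). The stochastic
# gradient at sample x is (w - x), and the minimizer is the mean, 3.
rng = np.random.default_rng(1)
stream = (rng.normal(loc=3.0) for _ in range(20000))
w_star = sgd(lambda w, x: w - x, 0.0, stream)
```

With a = 1 on this toy problem the iterate is exactly the running sample mean, so the 1/t schedule makes the Robbins-Monro conditions concrete.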

Within this problem class, I have
worked on (1) how to learn intrinsic low-dimensional geometric
structure underlying streaming data, and (2) how to handle learning
problems where the number of predictive parameters, or features, is
comparable to the (huge-scale) number of sample points. For the first
problem, I am developing an online kernelized learning algorithm
with a pruning procedure that represents the sequentially observed data
as efficiently as possible in a function space, which in some cases
yields improved empirical accuracy together with a significant gain in
efficiency over standard kernelized methods. For the second, I am
working on parallelized doubly stochastic approximation algorithms,
which allow for operating on random subsets of both
sample points and features. [Video]
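To convey the flavor of the online kernelized approach with pruning, here is a simplified sketch I am supplying for exposition (not the actual algorithm): functional stochastic gradient descent in a reproducing kernel Hilbert space, where each step adds a kernel atom at the new sample and atoms with negligible weight are pruned to keep the representation parsimonious.

```python
import numpy as np

def gauss_kernel(x, x2, bw=1.0):
    """Gaussian kernel between two points."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(x2)) ** 2)
                  / (2 * bw ** 2))

class OnlineKernelRegressor:
    """Functional SGD in an RKHS with coefficient-based pruning."""

    def __init__(self, step=0.5, tol=1e-4, bw=1.0):
        self.step, self.tol, self.bw = step, tol, bw
        self.centers, self.weights = [], []

    def predict(self, x):
        return sum(w * gauss_kernel(x, c, self.bw)
                   for w, c in zip(self.weights, self.centers))

    def update(self, x, y):
        # Functional SGD step for the squared loss: append a new
        # kernel atom centered at x with weight -step * residual.
        err = self.predict(x) - y
        self.centers.append(np.asarray(x, dtype=float))
        self.weights.append(-self.step * err)
        # Pruning: drop atoms whose weight is negligible, keeping
        # the function representation compact.
        kept = [(c, w) for c, w in zip(self.centers, self.weights)
                if abs(w) >= self.tol]
        self.centers = [c for c, _ in kept]
        self.weights = [w for _, w in kept]

# Repeatedly observing (x, y) = (0, 1): predictions approach 1 while
# the kernel dictionary stays small.
model = OnlineKernelRegressor()
x = np.array([0.0])
for _ in range(30):
    model.update(x, 1.0)
```

Without the pruning step the dictionary grows by one atom per sample, which is the efficiency bottleneck of standard kernelized methods on streams.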

## Multi-Agent Optimization

Multi-agent systems refer to settings in which distinct computing nodes, which may communicate with one another according to some network structure, aim to solve a task that is global to the network. Sensor networks, computer networks, and robotic teams may all be described from this perspective. Classically, this problem class has led to the development of decentralized optimization algorithms, in which distinct nodes operate on local information only. Through appropriate information mixing strategies, agents may learn a decision variable that is optimal in terms of observations aggregated globally over the network.
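A minimal sketch of the classical information mixing scheme, assuming a toy network of four agents with quadratic local costs (all specifics here are illustrative): each agent averages its neighbors' variables via a doubly stochastic weight matrix, then takes a local gradient step.

```python
import numpy as np

def dgd(W, grads, x0, step, iters):
    """Decentralized gradient descent: each agent mixes neighbors'
    variables through the doubly stochastic matrix W, then takes a
    step along its own local gradient."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = W @ x - step * np.array([g(xi) for g, xi in zip(grads, x)])
    return x

# Four agents on a ring; agent i holds f_i(x) = 0.5 * (x - b_i)^2,
# so the network-wide optimum is the mean of b.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
b = np.array([0.0, 1.0, 2.0, 3.0])
grads = [lambda xi, bi=bi: xi - bi for bi in b]
x_final = dgd(W, grads, np.zeros(4), step=0.01, iters=1000)
```

With a constant step size the agents converge only to a neighborhood of the global optimum, one of the limitations that motivates alternative mixing strategies.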

My
contributions to this problem domain are in developing new information
mixing strategies. Prior approaches are based upon schemes in which
agents combine a weighted average of their neighbors' decision
variables with a local gradient step ("the consensus protocol"),
whereas the method I propose for this setting, called the saddle point
algorithm, exploits duality in convex optimization theory to have
agents approximately enforce local agreement constraints while learning
from local information. This saddle point method can solve more general
classes of decentralized optimization problems than consensus methods,
e.g., problems in which each agent's cost function is parameterized by
a random variable with a possibly distinct distribution function, or
problems with more general types of network constraints. Moreover,
these primal-dual algorithms are well-suited to online or dynamic
settings in which training observations become sequentially available
and agents must repeatedly adapt to new information. [Video]
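A simplified sketch of the primal-dual idea, assuming quadratic local costs on a line network (an illustration of gradient descent-ascent on the Lagrangian of edge-wise agreement constraints, not the exact algorithm): agents descend in their primal variables while dual variables on each edge are ascended to enforce local agreement.

```python
import numpy as np

# min sum_i f_i(x_i)  s.t.  x_i = x_j on each edge, with
# f_i(x) = 0.5 * (x - b_i)^2; the constrained optimum is the mean of b.
b = np.array([0.0, 1.0, 2.0, 3.0])
edges = [(0, 1), (1, 2), (2, 3)]        # line network

x = np.zeros(4)                          # primal variables
lam = np.zeros(len(edges))               # one multiplier per edge
step = 0.05
for _ in range(4000):
    grad = x - b                         # local gradients
    for e, (i, j) in enumerate(edges):   # dual pressure on each edge
        grad[i] += lam[e]
        grad[j] -= lam[e]
    x = x - step * grad                  # primal descent
    for e, (i, j) in enumerate(edges):   # dual ascent on x_i - x_j
        lam[e] += step * (x[i] - x[j])
```

The multipliers encode how strongly each agreement constraint must be pushed, which is what lets the scheme accommodate more general network constraints than plain averaging.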

## Robotic Learning-Based Control

Robotics is an exciting field for signal processing researchers because there are so many frontiers to explore and contribute to. My particular focus has been on learning-based control in single-robot and multi-robot systems. Specifically, I study information processing strategies for streaming sensory data, such that useful intelligence may be extracted from the robot's operating domain for adapting its control strategies. This problem breaks down into three components: (1) how to train a statistical model online relating sensory data to some label or output variable space, (2) how to map the predictions of this statistical model into the control space, and (3) how to tailor the robotic control strategy to the statistics of its operating environment.

For problem (1), I have developed extensions of task-driven dictionary learning to decentralized settings. Task-driven dictionary learning is a method for jointly learning a signal representation and a statistical model relating a feature space to a predictive variable, tuned to minimize the expected prediction risk. By creating a framework for this task in multi-agent systems, I have given robotic networks the capability to handle streaming data and to collaboratively exchange information so as to maximize the informativeness of the paths they traverse for a particular prediction task.

The task of mapping statistical predictions into the control space of an autonomous system (2) has classically been treated with robust control techniques, in which one adds chance constraints or stochastic variables to the objective of an optimal control formulation. While doing so yields robust performance for linear systems, most robotic kinematics are highly nonlinear, in which case the tried-and-true methods for linear systems are not appropriate. To circumvent this issue, I am developing new formulations of receding horizon control for nonlinear systems in which the cost functional automatically switches between optimal and robust control based on the level of predicted uncertainty in the environment. By approaching task (3) in this manner, I am developing a method by which an autonomous system may learn online to compensate for the simplifications made in the kinematic models of robot platforms typically used to generate control signals, and thus adjust in real time to unexpected changes in its operating environment.
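One way to picture the uncertainty-based switching is the hypothetical stage cost below; the function names, threshold, and penalty form are all illustrative assumptions supplied for exposition, not the actual formulation.

```python
import numpy as np

def stage_cost(state, control, target, sigma, threshold=0.5,
               robust_margin=1.0):
    """Hypothetical uncertainty-gated stage cost for receding
    horizon control. Below `threshold`, the nominal quadratic
    tracking cost is used; above it, the tracking error is
    inflated by the predicted uncertainty `sigma` as a crude
    worst-case penalty."""
    effort = 0.1 * np.sum(control ** 2)
    if sigma <= threshold:
        return np.sum((state - target) ** 2) + effort
    worst = np.sum((np.abs(state - target) + robust_margin * sigma) ** 2)
    return worst + effort

# With low predicted uncertainty the controller optimizes the nominal
# objective; with high uncertainty the robust term dominates.
low = stage_cost(np.array([1.0]), np.array([0.0]), np.array([0.0]),
                 sigma=0.1)
high = stage_cost(np.array([1.0]), np.array([0.0]), np.array([0.0]),
                  sigma=1.0)
```

Summing such a stage cost over a prediction horizon and re-solving at each step recovers the receding horizon structure.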

## Earlier Work

Prior to coming to UPenn, the unifying theme of my research
projects was Monte Carlo and stochastic diffusion approximation
algorithms. My undergraduate research project in the Math Department at
Washington University in St. Louis was to study probabilistic models of
predator-prey interactions in migrational settings. To do so, I made
use of tools in stochastic simulation and diffusion approximation to
discern the conditions under which population stability, explosion, and
extinction occurred across different sites in a nonlinear networked
dynamical system.

I spent 2011-2012 at the U.S. Army Research Laboratory ALC as part of the Science Outreach for Army Research (SOAR) program, investigating different frameworks for modeling robotic networks in complex domains. The first project I worked on was to evaluate a "bio-inspired" control strategy on an asynchronous parallel computing architecture. Numerically, I was able to conclude that if the amount of information processed passed a certain threshold, the system was controllable. The other project was to generalize a leader-follower framework for a robotic network to maintain communication connectivity while exploring the environment. I implemented a simulated annealing strategy to overcome the tendency of follower robots to become trapped at obstacles in the environment, which is a consequence of the non-convexity of the feasible space.

I also worked as a biostatistical research intern at Washington University School of Medicine's Summer Institute for Training in Biostatistics (SIBS), where I performed statistical analyses and explored latent variable patterns in clinical trials via independent component analysis, and presented my findings to a team of physicians.