Humans learn to solve increasingly complex tasks by
continually building upon and refining knowledge over a
lifetime of experience. This process of continual
learning and transfer allows us to rapidly learn new tasks,
often with very little training. Over time, it enables
us to develop a wide variety of complex abilities across
many domains.
Despite recent advances in transfer learning and
representation discovery, lifelong machine learning remains
a largely unsolved problem, yet it holds enormous
potential to enable versatile systems that can learn a
large variety of tasks and rapidly acquire new
abilities. Such systems would benefit
numerous applications, such as medical diagnosis, virtual
personal assistants, autonomous robots, visual scene
understanding, language translation, and many others.
Learning over a lifetime of experience involves a number of
procedures that must be performed continually, including:
- discovering representations from raw sensory data that
  capture higher-level abstractions,
- transferring knowledge learned on previous tasks to
  improve learning on the current task,
- maintaining the repository of accumulated knowledge, and
- incorporating external guidance and feedback from
  humans or other agents.
Each of these procedures encompasses one or more subfields
of machine learning and artificial intelligence. The
primary goal of this symposium is to bring together
practitioners in each of these areas and focus discussion on
combining these lines of research toward lifelong machine
learning.
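The four procedures above can be viewed as one continual loop. The following Python skeleton is purely illustrative; every class, method, and representation here is invented for this sketch and does not describe any particular system presented at the symposium:

```python
# A minimal sketch of a lifelong learning loop: discover a
# representation, transfer from prior tasks, maintain a knowledge
# repository, and incorporate external feedback. All names are
# illustrative placeholders, not an actual algorithm.

class LifelongLearner:
    def __init__(self):
        self.knowledge = {}  # repository of accumulated knowledge

    def discover_representation(self, raw_data):
        # Placeholder: abstract raw sensory data into a feature set.
        return tuple(sorted(set(raw_data)))

    def learn_task(self, task_id, raw_data):
        features = self.discover_representation(raw_data)
        # Transfer: identify previously learned tasks whose stored
        # knowledge overlaps with the current task's features.
        prior = [k for k, v in self.knowledge.items()
                 if set(v) & set(features)]
        # Maintain: store (or revise) knowledge for this task.
        self.knowledge[task_id] = features
        return prior

    def incorporate_feedback(self, task_id, correction):
        # External guidance from a human or agent revises knowledge.
        self.knowledge[task_id] = correction

learner = LifelongLearner()
learner.learn_task("task1", [3, 1, 2])
print(learner.learn_task("task2", [2, 4]))  # → ['task1']
```

The second task shares a feature with the first, so the stored knowledge from "task1" is flagged for transfer before the new task's knowledge is added to the repository.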
Topics
The symposium will include paper presentations, talks, and
discussions on a variety of topics related to lifelong
learning, including but not limited to:
knowledge transfer
- active transfer learning
- multi-task learning
- cross-domain transfer
- knowledge/schema mapping
- source knowledge selection
- one-shot learning
- transfer over long sequences of tasks

continual learning
- online multi-task learning
- online representation learning
- knowledge maintenance/revision
- developmental learning
- scalable transfer learning
- task/concept drift
- self-selection of tasks

representation discovery
- learning from raw sensory data
- deep learning
- latent representations
- multi-modal/multi-view learning
- multi-scale representations

incorporating guidance from external teachers
- learning from demonstration
- skill shaping
- curriculum-based training
- interactive learning
- corrective feedback
- agent-teacher communication

frameworks for lifelong learning
- architectures
- software frameworks
- testbeds
- evaluation methodology

applications of lifelong learning
- data sets
- application domains/environments
- simulators
- deployed applications
Within these topics, the symposium will explore lifelong
learning in different problem formats, including
classification, regression, and sequential decision-making
problems.
Invited Speakers
Rich Sutton, University of Alberta
Jeff Dean, Google: "Large-Scale Learning from
Multimodal Data"
Paul Ruvolo, Bryn Mawr College: "Efficient
Lifelong Machine Learning"
Matthew Taylor, Washington State University:
"Agents as Teachers and Learners"

Organizing Committee
Adam Coates (Stanford University)
Alan Fern (Oregon State University)
Quoc Le (Stanford University)
Clayton Morrison (University of Arizona)
Diane Oyen (University of New Mexico)
Paul Ruvolo (Bryn Mawr College)
Danny Silver (Acadia University)
Monica Vroman (Rutgers University)