[PAL group photo]

Some more pictures of us: 1, 2, 3, 4, 5, 6, 7

[PAL logo]

Welcome to the Perception, Action, & Learning Research Group at UPenn. Our goal is to build general-purpose, visually equipped autonomous robots that acquire and perform many task skills with ease in our homes, offices, farms, and hospitals, much as humans do. Biology has long pointed to the critical role of visual perception in achieving this, and we now have cheap camera sensors and performant deep learning-based visual perception systems. Yet these advances have not translated to robots, which continue to struggle in dynamically evolving, uncertainty-ridden, open-world settings. What are the missing pieces of the puzzle?

A core theme in our group is that, rather than relying on monolithic and inflexible control loops, robots benefit from various forms of attention that steer flexible abstractions on the fly throughout the robot learning loop: observation, state representation, and policy mapping & learning. Traditional pre-determined modular controllers are too information-limited, and monolithic black-box end-to-end learning systems are too sample-inefficient, to be practical. Our research combines the best of both worlds to afford: (1) flexible pre-training of modules from diverse data sources, (2) flexible task specification that lets robots acquire new skills for diverse use cases, even from layperson trainers, and (3) flexible resource allocation for efficiency in compute, energy, and training data. This is an exciting research area at the intersection of computer vision, machine learning, and robotic control, and we benefit from our many collaborators at UPenn’s GRASP lab and beyond.

Some recent projects in our group have explored:

  • Learning visual representations shared with humans, by exploiting vision-language datasets and the object-centric structure of the visual world.
  • Teaching robots in flexible ways, by showing them image goals, language goals, physical object goals, demonstrations, and more.
  • Focusing visual model learning in model-based RL on task-relevant transitions and image regions.
  • Developing exploration algorithms for robots to autonomously discover and learn how to perform new tasks in unknown environments.

Note: A more complete research statement, written in August 2023, is here.

Some recent talks that might give you more of a sense of what we work on:

Our work is made possible by our funding sources.

Current PhD Students in the Lab

Current MS and Undergraduate Students in the Lab

  • James Springer (MS)
  • Vaidehi Som (MS)
  • Yunshuang Li (MS)
  • Tasos Panagopoulos (undergraduate)
  • Jason Yan (undergraduate)
  • Will Liang (undergraduate)
  • Fiona Luo (undergraduate)
  • Hungju Wang (undergraduate)

Student Collaborators

  • Kaustubh Sridhar (PhD student advised by Insup Lee and Jim Weimer)
  • Chris Watson (PhD student advised by Rajeev Alur)
  • Kyle Vedder (PhD student advised by Eric Eaton)
  • Nachi Stern (Postdoc advised by Andrea Liu)
  • Souradeep Dutta (Postdoc advised by Insup Lee)

Visiting Students and Postdocs

  • Chuan Wen (PhD student at Tsinghua University, advisor: Yang Gao, 2020-now)
  • Weilin Wan (PhD student at University of Hong Kong, advisor: Taku Komura, 2023-now)
  • Jingxi Xu (PhD student at Columbia University, advisors: Matei Ciocarlie, Shuran Song, 2020)
  • Oleh Rybkin (PhD student collaborator 2020-now, next postdoc at UC Berkeley)

Past Students

  • Srinath Rajagopalan (MS 2020, next at Amazon Robotics)
  • Adarsh Modh (MS 2020, next at NEC Research Labs America)
  • Kun Huang (MS 2022, next at Cruise Automation), winner of the SEAS MS Research Award
  • Andrew Shen (visiting undergrad 2021, next at CMU MS in ML)
  • Lloyd Acquaye Thomson (visiting MS student 2021, African Masters in Machine Intelligence program)
  • Kausik Sivakumar (MS 2023, next at Tutor Intelligence), winner of the GRASP MS Research Award

Principal Investigator

  • Dinesh Jayaraman