Oleh Rybkin

Hi! I am Oleh, a Ph.D. student in the GRASP laboratory at the University of Pennsylvania advised by Kostas Daniilidis. I work on visual model-based reinforcement learning.

I am building agents that gain understanding of their environment by learning internal world models. Recently, I've been working on self-supervised reinforcement learning agents, long-horizon planning, and building better predictive models.

I received my bachelor's degree from Czech Technical University in Prague, where I worked with Tomas Pajdla. I've spent time at INRIA with Josef Sivic, TiTech with Akihiko Torii, and UC Berkeley with Sergey Levine, Dinesh Jayaraman, and Chelsea Finn.

Google Scholar  /  GitHub  /  CV  /  Email  /  Twitter

  • May 2021: Two ICML papers accepted: one on simple and effective VAE training, and another coming soon.
  • Dec 2020: New workshop poster (video, paper) on latent collocation.
  • Oct 2020: New talk (video, slides) on visual model-based RL, given at GRASP and Berkeley!
  • Oct 2020: A new paper on learning from interaction and observation is accepted to CoRL 2020 as an oral!
  • Sep 2020: A new blog post at the BAIR and CMU ML blogs about our Plan2Explore agent!
  • Sep 2020: Our paper on hierarchical goal-conditioned prediction and planning will be presented at NeurIPS 2020.
  • Jun 2020: A preprint on simple and effective VAE training is out.

Simple and Effective VAE Training with Calibrated Decoders
Oleh Rybkin, Kostas Daniilidis, Sergey Levine
International Conference on Machine Learning (ICML), 2021
project page & videos / arXiv / code

Commonly used uncalibrated decoders adversely affect the training of VAEs and sequence VAEs. Learning an appropriately calibrated decoder instead produces better samples, is simple to implement, and removes the need for the common heuristic weight on the KL-divergence term.
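As a rough illustration of the idea (a toy sketch, not the paper's implementation), here is a minimal NumPy example of a Gaussian decoder negative log-likelihood where the shared variance is set to its analytic optimum, the reconstruction MSE; the array sizes and variable names are hypothetical.

```python
import numpy as np

np.random.seed(0)

def gaussian_nll(x, mu, sigma2):
    # Negative log-likelihood of x under a Gaussian decoder N(mu, sigma2 * I).
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + (x - mu) ** 2 / sigma2)

x = np.random.rand(64)               # flattened target image (toy size)
mu = x + 0.1 * np.random.randn(64)   # decoder's mean prediction

# Calibrated decoder: shared variance set to its analytic optimum, the MSE.
mse = np.mean((x - mu) ** 2)
nll_calibrated = gaussian_nll(x, mu, mse)

# Uncalibrated decoder: variance fixed at 1, which reduces to a plain MSE
# loss (up to constants) and mis-weights reconstruction against the KL term.
nll_uncalibrated = gaussian_nll(x, mu, 1.0)
```

Because the MSE minimizes the Gaussian NLL over the variance, the calibrated loss automatically balances the reconstruction and KL terms of the ELBO, with no hand-tuned weight.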


Reinforcement Learning with Videos: Combining Offline Observations with Interaction
Karl Schmeckpeper, Oleh Rybkin, Kostas Daniilidis, Sergey Levine, Chelsea Finn
Conference on Robot Learning (CoRL), 2020 (oral presentation, 4% acceptance rate)
project page & videos / arXiv / video (5 minutes) / code

We combine offline observations of humans with online robot interaction data in a joint reinforcement learning algorithm. The resulting approach learns from real-world human videos to solve challenging robotic tasks.


Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors
Karl Pertsch*, Oleh Rybkin*, Frederik Ebert, Chelsea Finn, Dinesh Jayaraman, Sergey Levine
Neural Information Processing Systems (NeurIPS), 2020
project page & videos / arXiv / demo video (1 minute) / talk (5 minutes) / code

We propose a hierarchical goal-conditioned predictive model that scales to very long-horizon visual prediction (more than 500 frames). Leveraging this model, we also propose a hierarchical visual planning algorithm that is effective for long-horizon control.


Learning Predictive Models From Observation and Interaction
Karl Schmeckpeper, Annie Xie, Oleh Rybkin, Stephen Tian, Kostas Daniilidis, Sergey Levine, Chelsea Finn
European Conference on Computer Vision (ECCV), 2020
project page & videos / arXiv / demo video (1 minute) / talk (8 minutes) / workshop version

We learn action representations that generalize between robot data and passive observations of other agents (e.g., humans). This enables the use of additional, diverse sources of data to train models for visual robotic control.


Planning to Explore via Self-Supervised World Models
Ramanan Sekar*, Oleh Rybkin*, Kostas Daniilidis, Pieter Abbeel, Danijar Hafner, Deepak Pathak
International Conference on Machine Learning (ICML), 2020
project page & videos / arXiv / demo video (2 minutes) / talk (10 minutes) / VentureBeat / blog / code

We propose a visual model-based agent for self-supervised reinforcement learning. Our agent adapts in a zero/few-shot setup, achieving performance comparable to state-of-the-art supervised RL.


Keyframing the Future: Keyframe Discovery for Visual Prediction and Planning
Karl Pertsch*, Oleh Rybkin*, Jingyun Yang, Shenghao Zhou, Kosta Derpanis, Kostas Daniilidis, Joseph Lim, Andrew Jaegle
Conference on Learning for Dynamics and Control (L4DC), 2020
project page & videos / arXiv / poster / slides / video (5 minutes)

We discover keyframes in videos by learning to select frames that enable prediction of the entire sequence. By using the keyframe structure of the data for prediction, our method is further able to perform planning for longer horizons.


Learning what you can do before doing anything
Oleh Rybkin*, Karl Pertsch*, Kosta Derpanis, Kostas Daniilidis, Andrew Jaegle
International Conference on Learning Representations (ICLR), 2019
project page & videos / paper / arXiv / poster / slides

We learn to discover an agent's action space along with a dynamics model from pure video data. The model can be used for model predictive control, requiring orders of magnitude fewer action-annotated videos than other methods.


The reasonable ineffectiveness of pixel metrics for future prediction

MSE loss and its variants are commonly used for training and evaluation of future prediction. But is this the right thing to do?
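To make the point concrete, here is a toy sketch (mine, not from the post) of why MSE rewards blurry predictions when the future is multimodal: against two equally likely sharp futures, their blurry average scores a lower expected MSE than either plausible outcome.

```python
import numpy as np

# Two equally likely sharp futures (e.g., an object moving left or right),
# represented as toy 1-D "images".
future_a = np.zeros(10); future_a[2] = 1.0
future_b = np.zeros(10); future_b[7] = 1.0
futures = [future_a, future_b]

# A blurry prediction: the pixel-wise mean of the two futures.
blurry = 0.5 * (future_a + future_b)

def expected_mse(pred):
    # Expected MSE of a prediction over the distribution of true futures.
    return np.mean([np.mean((pred - f) ** 2) for f in futures])
```

Under this metric the blurry mean is "better" than either sharp, physically plausible future, which is exactly the failure mode the essay questions.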


Science reading list

This is a list of some books and shorter papers that changed my understanding of AI and of science in general. I hope you find it useful too! The list is evolving, and suggestions for new reading material are welcome.

The Structure of Scientific Revolutions, Thomas S. Kuhn.
Vision, David C. Marr.

Short works:
Computing Machinery and Intelligence, Alan M. Turing.
The importance of stupidity in scientific research, Martin A. Schwartz.
You and Your Research, Richard W. Hamming.
As We May Think, Vannevar Bush.
Levels of Analysis for Machine Learning, Jessica Hamrick, Shakir Mohamed.

Undergraduate/Master's students

I am actively looking for students who are strongly motivated to work on a research project, including students who want to do a Master's thesis. Check out some of my work above, and if you find it interesting, do send me an email!

Past (current affiliation):

Inspired by this template. Hosted on Eniac.