
Vision-Based Contact Localization Without Touch or Force Sensing

Nov 1, 2023

LIV: Language-Image Representations and Rewards for Robotic Control

Jul 1, 2023

VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training

Jan 1, 2023

Planning Goals for Exploration

Jan 1, 2023

Learning Policy-Aware Models for Model-Based Reinforcement Learning via Transition Occupancy Matching

Jan 1, 2023

Learning a Meta-Controller for Dynamic Grasping

Jan 1, 2023

Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning

Sep 13, 2022

Discovering Deformable Keypoint Pyramids

Jul 25, 2022

How Far I'll Go: Offline Goal-Conditioned Reinforcement Learning via $f$-Advantage Regression

Jul 1, 2022

Fighting Fire with Fire: Avoiding DNN Shortcuts through Priming

May 15, 2022

Femtomolar SARS-CoV-2 Antigen Detection Using the Microbubbling Digital Assay with Smartphone Readout Enables Antigen Burden Quantitation and Dynamics Tracking

Sep 1, 2021

An Exploration of Embodied Visual Exploration

Mar 1, 2021

MAVRIC: Morphology-Agnostic Visual Robotic Control

We demonstrate visual control within 20 seconds on a robot with unknown morphology, from a single uncalibrated RGBD camera.

May 17, 2020

DIGIT: A Novel Design for a Low-Cost Compact High-Resolution Tactile Sensor with Application to In-Hand Manipulation

We design and demonstrate a new tactile sensor for in-hand tactile manipulation in a robotic hand.

May 17, 2020

More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

By exploiting high-precision tactile sensing with deep learning, robots can iteratively adjust their grasp configurations, boosting grasping success from 65% to 94%.

Jan 1, 2018

End-to-End Policy Learning For Active Visual Categorization

Active visual perception with realistic and complex imagery can be formulated as an end-to-end reinforcement learning problem, whose solution further benefits from the auxiliary task of action-conditioned future prediction.

Jan 1, 2018

Learning Image Representations Tied to Egomotion from Unlabeled Video

An agent's continuous visual observations include information about how the world responds to its actions. This can provide an effective source of self-supervision for learning visual representations.

Jan 1, 2017