Training generalist robot agents is immensely difficult because they must perform a huge range of tasks across many different environments. We propose instead training robots selectively, based on end-user preferences.
Given a factory model that lets an end user instruct a robot to perform lower-level actions (e.g., ‘Move left’), we show that end users can collect demonstrations through language to train their home model on higher-level tasks specific to their needs (e.g., ‘Open the top drawer and put the block inside’). We demonstrate this hierarchical robot learning framework on robot manipulation tasks in RLBench environments. Our method yields a 16% improvement in skill success rates over a baseline method.
In further experiments, we explore using a large vision-language model (VLM), Bard, to automatically break tasks down into sequences of lower-level instructions, aiming to bypass end-user involvement. The VLM is unable to decompose tasks down to our lowest level, but it achieves good results breaking high-level tasks into mid-level skills.
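The VLM-driven decomposition described above can be sketched as a simple prompt-and-parse loop. This is a minimal illustration, not the paper's actual interface: the prompt wording, the `query_vlm` callable, and the stubbed reply are all hypothetical, standing in for a real call to a VLM such as Bard.

```python
# Hypothetical sketch of breaking a high-level task into mid-level skills
# with a VLM. `query_vlm` is an injected callable (prompt -> reply string),
# so any VLM client could be plugged in; the prompt format is illustrative.

DECOMPOSE_PROMPT = (
    "You control a robot arm. Break the high-level task below into a "
    "numbered list of mid-level skills (e.g., 'grasp the handle').\n"
    "Task: {task}"
)

def parse_skills(vlm_reply: str) -> list[str]:
    """Extract numbered skill lines from the VLM's free-form reply."""
    skills = []
    for line in vlm_reply.splitlines():
        line = line.strip()
        # Accept lines like "1. grasp the handle" or "2) pull the drawer open"
        if line and line[0].isdigit():
            skills.append(line.lstrip("0123456789.) ").strip())
    return skills

def decompose(task: str, query_vlm) -> list[str]:
    """Ask the VLM to decompose `task` into an ordered list of skills."""
    return parse_skills(query_vlm(DECOMPOSE_PROMPT.format(task=task)))

# Stubbed VLM reply for illustration; a real system would query the model.
def stub_vlm(prompt: str) -> str:
    return (
        "1. grasp the top drawer handle\n"
        "2. pull the drawer open\n"
        "3. pick up the block\n"
        "4. place the block in the drawer"
    )

print(decompose("Open the top drawer and put the block inside", stub_vlm))
```

Each returned mid-level skill would then be executed by the home model, which maps it to the lower-level actions learned from end-user demonstrations.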
There is an enormous number of tasks an end user might want a robot to complete in their home; we cannot pretrain a model for every possibility. Even for tasks that are pretrained, the end user might want to alter how the task is performed.
Teleoperation requires additional hardware, which can be expensive and challenging to set up. Even relatively cheap and user-friendly teleoperation hardware does not match the convenience and ease of use of natural language.
We show the first 5 evaluation episodes for the multi-skill and multi-task models that had the highest average success rates across all skills or tasks.
This project is funded by the MnRI Seed Grant from the Minnesota Robotics Institute. We thank Chahyon Ku for insightful discussions and for proofreading this paper.
@ARTICLE{10608414,
  author={Winge, Carl and Imdieke, Adam and Aldeeb, Bahaa and Kang, Dongyeop and Desingh, Karthik},
  journal={IEEE Robotics and Automation Letters},
  title={Talk Through It: End User Directed Manipulation Learning},
  year={2024},
  pages={1-8},
  keywords={Robots;Task analysis;Production facilities;Training;Natural languages;Grippers;Cognition;Learning from Demonstration;Incremental Learning;Human-Centered Robotics},
  doi={10.1109/LRA.2024.3433309}
}