By W. Bradley Knox, Stephane Hatgis-Kessell, Serena Booth, Scott Niekum, Peter Stone, and Alessandro Allievi

PDF: https://openreview.net/pdf?id=hpKJkVoThY
TMLR 2024: https://openreview.net/forum?id=hpKJkVoThY
arXiv: https://arxiv.org/abs/2206.02231

An early version was presented as a Spotlights paper at RLDM 2022, under the title "Partial return poorly explains human preferences."

Dataset of human preferences between pairs of trajectory segments
GitHub repository, containing code for learning and for re-running our main experiments, as well as our interface for training subjects and for preference elicitation

Try out our preference elicitation interface.


The figure shows segment pairs for which the common partial return preference model poorly explains intuitive human preferences. The task gives -1 reward each time step, penalizing the time taken to reach the goal. In both pairs, the two segments have the same partial return (-2), yet the segment on the right is the intuitive choice. Additionally, the right segment in each pair consists only of optimal actions, whereas the left segment includes at least one suboptimal action. Regret, on which our proposed preference model is based, measures a segment's deviation from optimal decision-making, so a regret preference model is more likely to prefer the right segment in each pair. In the left pair, the preferred segment ends in a higher-value state. In the right pair, the preferred segment starts in a lower-value state, indicating a lower opportunity cost (i.e., it did not waste a more valuable start state).
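To make the caption's arithmetic concrete, here is a minimal sketch assuming the deterministic, undiscounted form of segment regret, V*(start state) - (partial return + V*(end state)). The specific state values below are hypothetical and chosen only to illustrate how segments with equal partial returns can have very different regrets; they are not taken from the figure itself.

```python
# Hypothetical goal-reaching task with -1 reward per step, so the optimal value
# V*(s) is minus the number of steps from s to the goal under an optimal policy.

def partial_return(rewards):
    """Sum of rewards along a segment (undiscounted)."""
    return sum(rewards)

def regret(v_start, rewards, v_end):
    """Regret of a segment in the deterministic, undiscounted case:
    V*(start) - (partial return + V*(end)), i.e., how much worse the segment
    did than behaving optimally from its start state."""
    return v_start - (partial_return(rewards) + v_end)

# Left pair: same start state and same partial return (-2), but the right
# segment ends in a higher-value state (closer to the goal).
print(regret(v_start=-4, rewards=[-1, -1], v_end=-4))  # left segment:  regret 2
print(regret(v_start=-4, rewards=[-1, -1], v_end=-2))  # right segment: regret 0

# Right pair: same partial return (-2), but the right segment starts in a
# lower-value state, so less value was "wasted" by its two steps.
print(regret(v_start=-2, rewards=[-1, -1], v_end=-2))  # left segment:  regret 2
print(regret(v_start=-4, rewards=[-1, -1], v_end=-2))  # right segment: regret 0
```

In both pairs, partial return cannot separate the segments, while the optimal segment has zero regret and the suboptimal one does not.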


Paper summary: The utility of reinforcement learning is limited by the alignment of reward functions with the interests of human stakeholders. One promising method for alignment is to learn the reward function from human-generated preferences between pairs of trajectory segments.

These human preferences are typically assumed to be informed solely by partial return, the sum of rewards along each segment. We find this assumption to be flawed and propose modeling preferences instead as arising from a different statistic: each segment’s regret, a measure of a segment’s deviation from optimal decision-making. Given infinitely many preferences generated according to regret, we prove that we can identify a reward function equivalent to the reward function that generated those preferences. We also prove that the previous partial return model lacks this identifiability property without preference noise that reveals rewards’ relative proportions, and we empirically show that our proposed regret preference model outperforms it with finite training data in otherwise the same setting. Additionally, our proposed regret preference model better predicts real human preferences and also learns reward functions from these preferences that lead to policies that are better human-aligned.
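For concreteness, the sketch below shows the two competing preference models as logistic (Bradley-Terry-style) choice rules over each segment's statistic: partial return in one case, negative regret in the other. This is an illustrative simplification; the paper's exact formulation (discounting, noise temperature, and the handling of stochastic transitions) may differ.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_prefer_partial_return(return_1, return_2, temperature=1.0):
    """Partial return model: segment 1 is preferred with probability that grows
    with its return advantage over segment 2."""
    return logistic((return_1 - return_2) / temperature)

def p_prefer_regret(regret_1, regret_2, temperature=1.0):
    """Regret model: segment 1 is preferred with probability that grows as its
    regret falls below segment 2's (lower regret = closer to optimal behavior)."""
    return logistic((regret_2 - regret_1) / temperature)
```

Under either model, a reward function is learned by maximizing the likelihood of the observed human preferences; the models differ only in which segment statistic is assumed to drive those preferences.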

Overall, this work establishes that the choice of preference model is impactful, and our proposed regret preference model improves upon a core assumption of recent research.