Computational Rationalization: The Inverse Equilibrium Problem
Abstract

Cited by 13 (2 self)
Modeling the purposeful behavior of imperfect agents from a small number of observations is a challenging task. When restricted to the single-agent decision-theoretic setting, inverse optimal control techniques assume that observed behavior is an approximately optimal solution to an unknown decision problem. These techniques learn a utility function that explains the example behavior and can then be used to accurately predict or imitate future behavior in similar observed or unobserved situations. In this work, we consider similar tasks in competitive and cooperative multi-agent domains. Here, unlike single-agent settings, a player cannot myopically maximize its reward — it must speculate on how the other agents may act to influence the game’s outcome. Employing the game-theoretic notion of regret and the principle of maximum entropy, we introduce a technique for predicting and generalizing behavior, as well as recovering a reward function in these domains.
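The combination of regret and maximum entropy described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact formulation; it is an assumed simplification for a two-player matrix game, where each joint action is scored by the larger of the two players' unilateral-deviation regrets, and a maximum-entropy (Gibbs) distribution over joint actions penalizes high regret. The payoff matrices and the temperature parameter lam are hypothetical.

```python
import numpy as np

# Hypothetical payoff matrices for a 2x2 game (row player A, column player B);
# a prisoner's-dilemma-like example chosen only for illustration.
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])
B = np.array([[3.0, 5.0],
              [0.0, 1.0]])

def joint_regret(A, B, i, j):
    """Max over players of the gain from unilaterally deviating from (i, j)."""
    regret_row = A[:, j].max() - A[i, j]   # row player's best deviation gain
    regret_col = B[i, :].max() - B[i, j]   # column player's best deviation gain
    return max(regret_row, regret_col)

def maxent_over_joint_actions(A, B, lam=1.0):
    """Gibbs distribution exp(-lam * regret) / Z over all joint actions."""
    R = np.array([[joint_regret(A, B, i, j) for j in range(A.shape[1])]
                  for i in range(A.shape[0])])
    W = np.exp(-lam * R)
    return W / W.sum()

P = maxent_over_joint_actions(A, B, lam=2.0)
print(P)  # low-regret joint actions receive higher probability
```

In this toy game the zero-regret joint action (the Nash equilibrium at indices (1, 1)) receives the highest probability, while higher-regret outcomes retain some mass — matching the abstract's premise that observed agents are purposeful but imperfect.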