Humans depart from optimal computational models of socially interactive decision-making under partial information

Abstract

Decision-making under uncertainty and incomplete evidence in multiagent settings is of increasing interest in decision science, assistive robotics, and machine-assisted cognition. The degree to which human agents depart from computationally optimal solutions in socially interactive settings is largely unknown, yet this knowledge is critical for advances in these areas. Such understanding also provides insight into how competition and cooperation affect human interaction and into the underlying contributions of Theory of Mind. In this paper, we adapt the well-known ‘Tiger Problem’ from artificial-agent research to human participants in single-agent and interactive, dyadic settings under both competition and cooperation. A novel element of the adaptation required participants to predict the actions of their dyadic partners in the interactive Tiger Tasks, thereby eliciting explicit Theory of Mind processing. Compared to computationally optimal solutions, participants gathered less information before outcome-related decisions when competing with others and collected more evidence when cooperating with others. These departures from optimality were not haphazard: performance improved through learning across sessions. Competition produced costly errors, yielding both lower rates of rewarding actions and lower accuracy in predicting the actions of others than cooperation did. Taken together, the experiments and collected data provide a novel approach to, and insights into, studying human social interaction and human-machine interaction when shared information is partial.
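The evidence-gathering trade-off at the core of the Tiger Problem can be sketched as a Bayesian belief update over which door hides the tiger. The sketch below uses the conventional 0.85 listening accuracy from the artificial-agent literature; it is an illustrative assumption, not a parameter reported in this paper.

```python
def update_belief(b_left: float, obs: str, p_correct: float = 0.85) -> float:
    """Posterior probability that the tiger is behind the left door after
    one 'listen' action yielding obs in {'hear-left', 'hear-right'}.

    The observation is correct with probability p_correct (assumed 0.85,
    the customary value in the POMDP literature)."""
    like_left = p_correct if obs == "hear-left" else 1.0 - p_correct
    like_right = (1.0 - p_correct) if obs == "hear-left" else p_correct
    unnorm_left = like_left * b_left
    unnorm_right = like_right * (1.0 - b_left)
    return unnorm_left / (unnorm_left + unnorm_right)

# Starting from a uniform belief, repeated consistent observations sharpen
# the belief; an optimal agent opens a door only once the belief is extreme
# enough that the expected reward outweighs the risk of meeting the tiger.
b = 0.5
for obs in ["hear-left", "hear-left"]:
    b = update_belief(b, obs)
```

After one consistent observation the belief reaches 0.85; a second pushes it to roughly 0.97, which is why optimal policies typically listen more than once before acting.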

Related