Noisy Foresight
Anujit Chakraborty & Chad Kendall
Abstract
In a controlled experiment, we show that decision-makers in a one-player, dynamic setting often fail to think through their own future actions before making initial decisions. This failure to plan at future contingencies implies a lack of perfect foresight, violating a fundamental assumption in dynamic decision problems. We show that neither experience nor prompting subjects to think about their future actions improves behavior. Instead, the problem stems from failing to think through how future actions translate into optimal actions in the first period. We then turn to the question of how to model the foresight of such boundedly rational agents. Using the rich dataset we collect, we find that, across the five behavioral models we consider, behavior is best fit by a model in which subjects expect to make fewer mistakes when the utility consequences of their future actions are more disparate.