Faculty of Economics and Business Administration Publications Database

Learning by Doing and the Value of Optimal Experimentation

Authors: Wieland, Volker
Source: Journal of Economic Dynamics and Control
Volume: 24
Number: 4
Pages: 501 - 534
Month: April
ISSN-Print: 0165-1889
Year: 2000
Keywords: Bayesian learning; Optimal control with unknown parameters; Learning by doing; Experimentation; Dynamic programming
Abstract: Recent research on learning by doing has provided the limit properties of beliefs and actions for a class of learning problems in which experimentation is an important aspect of optimal decision making. Under these conditions, however, the optimal policy cannot be derived analytically, because Bayesian learning about unknown parameters introduces a nonlinearity into the dynamic optimization problem. This paper uses numerical methods to characterize the optimal policy function for a learning-by-doing problem that is general enough for practical economic applications. The optimal policy is found to incorporate a substantial degree of experimentation under a wide range of initial beliefs about the unknown parameters. Dynamic simulations indicate that optimal experimentation dramatically improves the speed of learning and the stream of future payoffs. Furthermore, these simulations reveal that a policy that separates control and estimation, and thus does not incorporate experimentation, frequently induces a long-lasting bias in the control and target variables. While these sequences tend to converge steadily under the optimal policy, they frequently exhibit non-stationary behavior when estimation and control are treated separately.
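The abstract contrasts a policy that separates estimation from control with one that deliberately experiments. As a rough illustration of that distinction (not the paper's actual model, which is not reproduced in this record), the following sketch assumes a simple linear relation y_t = alpha + beta*x_t + eps_t with a quadratic loss around a target, conjugate normal Bayesian updating of (alpha, beta), and a crude probing term standing in for optimal experimentation. All parameter values, function names, and the probing rule are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a decision maker sets a control x_t to keep
# y_t = alpha + beta * x_t + eps_t near a target while learning (alpha, beta)
# by Bayesian updating. This is NOT the paper's exact specification.

rng = np.random.default_rng(0)
alpha_true, beta_true = 1.0, -0.5      # unknown to the decision maker
sigma = 0.3                            # known noise std (assumption)
y_star = 0.0                           # target for y
T = 50                                 # horizon

def simulate(experiment_scale):
    """Run one path. experiment_scale > 0 adds a probing perturbation to the
    certainty-equivalent control, a stand-in for deliberate experimentation."""
    # Normal prior on theta = (alpha, beta): mean m, covariance P
    m = np.array([0.0, -1.0])
    P = np.eye(2) * 4.0
    losses = []
    for t in range(T):
        a_hat, b_hat = m
        # Certainty-equivalent control: treat current estimates as if true
        x = (y_star - a_hat) / b_hat if abs(b_hat) > 1e-3 else 1.0
        # Probing term that shrinks as the slope estimate becomes precise
        x += experiment_scale * np.sqrt(P[1, 1]) * rng.choice([-1.0, 1.0])
        y = alpha_true + beta_true * x + sigma * rng.normal()
        losses.append((y - y_star) ** 2)
        # Conjugate Bayesian (Kalman-style) update of the regression coefficients
        h = np.array([1.0, x])                 # regressor vector
        S = h @ P @ h + sigma ** 2             # predictive variance of y
        K = P @ h / S                          # gain
        m = m + K * (y - h @ m)
        P = P - np.outer(K, h @ P)
    return np.mean(losses)

ce_loss = np.mean([simulate(0.0) for _ in range(200)])
probe_loss = np.mean([simulate(0.5) for _ in range(200)])
print(f"avg loss, certainty-equivalent policy: {ce_loss:.3f}")
print(f"avg loss, policy with probing:         {probe_loss:.3f}")
```

The paper itself solves for the fully optimal experimentation policy by dynamic programming rather than using an ad hoc probing rule; the sketch merely illustrates why separating estimation from control can leave beliefs, and hence the control, biased when the control never varies enough to identify the parameters.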