Optimal Learning with Costly Adjustment
Mark Feldman and Michael Spagat
Vol. 6, No. 3 (Nov. 1995), pp. 439-451
Published by: Springer
Stable URL: http://www.jstor.org/stable/25054891
Page Count: 13
We formulate an infinite-horizon Bayesian learning model in which the planner faces a cost from switching actions that does not approach zero as the size of the change vanishes. We recast the model as a dynamic programming problem that always has a continuous value function and an optimal policy. We show that the planner's beliefs eventually converge to some stochastic limit belief, which, however, is not necessarily a point mass on the "truth". The planner's actions also converge, although not necessarily to an optimal action given the truth. A key implication of adjustment costs is that the planner changes her action only finitely many times. We present a simple example illustrating how adjustment costs can lead the planner to settle in the long run on an action far from the optimal action given the "truth", one that yields a reward significantly below that of the optimal action.
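The lock-in mechanism the abstract describes can be illustrated with a toy simulation (a hedged sketch under assumed specifics, not the paper's model): a planner holds a two-point prior over an unknown state theta in {0, 1}, earns quadratic reward -(a - theta)^2 observed with Gaussian noise, and pays a lump-sum cost c whenever she changes her action, however small the change. The myopic switching rule below, the quadratic payoff, and all parameter values are assumptions for illustration only.

```python
import math
import random

# Toy illustration, not the paper's formulation: fixed adjustment cost c
# is charged on any change of action, no matter how small.
random.seed(0)
theta_true = 1.0   # the "truth"
c = 0.05           # lump-sum adjustment cost
sigma = 0.5        # std dev of reward noise
p = 0.6            # prior probability that theta = 1
a = p              # start at the myopic optimum under the prior
switches = 0

def lik(r, act, theta):
    # Gaussian likelihood of observing reward r at action act if state is theta
    mu = -(act - theta) ** 2
    return math.exp(-((r - mu) ** 2) / (2 * sigma ** 2))

for t in range(200):
    r = -(a - theta_true) ** 2 + random.gauss(0.0, sigma)
    # Bayes update of the two-point belief
    num = p * lik(r, a, 1.0)
    p = num / (num + (1 - p) * lik(r, a, 0.0))
    # Myopic rule: the posterior-optimal action is a* = E[theta] = p;
    # switch only if the per-period gain (a* - a)^2 exceeds the cost c.
    if (p - a) ** 2 > c:
        a = p
        switches += 1

print(f"switches={switches}, final action={a:.3f}, belief={p:.3f}")
```

Under these assumptions the belief converges toward the truth, yet the planner switches only a handful of times and can settle permanently at an action up to sqrt(c) away from the action that is optimal given the truth, mirroring the abstract's long-run lock-in result.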
Economic Theory © 1995 Springer