Multichain Markov Decision Processes with a Sample Path Constraint: A Decomposition Approach
Keith W. Ross and Ravi Varadarajan
Mathematics of Operations Research
Vol. 16, No. 1 (Feb., 1991), pp. 195-207
Published by: INFORMS
Stable URL: http://www.jstor.org/stable/3689856
Page Count: 13
We consider finite-state finite-action Markov decision processes which accumulate both a reward and a cost at each decision epoch. We study the problem of finding a policy that maximizes the expected long-run average reward subject to the constraint that the long-run average cost be no greater than a given value with probability one. We establish that if there exists a policy that meets the constraint, then there exists an ε-optimal stationary policy. Furthermore, an algorithm is outlined to locate the ε-optimal stationary policy. The proof of the result hinges on a decomposition of the state space into maximal recurrent classes and a set of transient states.
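The decomposition the abstract mentions rests on a standard structural fact about finite Markov chains: the state space splits into closed communicating (recurrent) classes plus a set of transient states. The sketch below is a hypothetical illustration of that decomposition for a fixed transition matrix, not the paper's algorithm; the function name `recurrent_classes` and the reachability-based method are this note's own choices.

```python
def recurrent_classes(P):
    """Decompose a finite Markov chain into recurrent classes and transient
    states, given a row-stochastic transition matrix P (list of lists).

    A state i is recurrent iff every state reachable from i can reach i back;
    recurrent states that communicate form one recurrent class.
    """
    n = len(P)

    def reachable(i):
        # States reachable from i (including i itself), by depth-first search
        # over the edges j with P[u][j] > 0.
        seen, frontier = {i}, [i]
        while frontier:
            u = frontier.pop()
            for v in range(n):
                if P[u][v] > 0 and v not in seen:
                    seen.add(v)
                    frontier.append(v)
        return seen

    reach = [reachable(i) for i in range(n)]

    # i is recurrent iff the chain can always return: every j in reach[i]
    # has i in reach[j].
    recurrent_states = {i for i in range(n) if all(i in reach[j] for j in reach[i])}
    transient = [i for i in range(n) if i not in recurrent_states]

    # Group mutually reachable recurrent states into classes.
    classes, unassigned = [], set(recurrent_states)
    while unassigned:
        i = min(unassigned)
        cls = sorted(j for j in unassigned if j in reach[i] and i in reach[j])
        classes.append(cls)
        unassigned -= set(cls)
    return classes, transient
```

For example, with state 0 feeding into a closed class {1, 2} and an absorbing state 3, the chain `P = [[0,1,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]]` decomposes into recurrent classes `[[1, 2], [3]]` with transient states `[0]`.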
Mathematics of Operations Research © 1991 INFORMS