Decision Processes with Total-Cost Criteria
Stephen Demko and Theodore P. Hill
The Annals of Probability
Vol. 9, No. 2 (Apr., 1981), pp. 293-301
Published by: Institute of Mathematical Statistics
Stable URL: http://www.jstor.org/stable/2243461
Page Count: 9
Topics: Total costs, Minimization of cost, Transition probabilities, Dynamic programming, Markov processes, Markov chains, Algebra, Optimal strategies, Stationary processes
By a decision process is meant a pair (X, Γ), where X is an arbitrary set (the state space), and Γ associates to each point x in X an arbitrary nonempty collection of discrete probability measures (actions) on X. In a decision process with nonnegative costs depending on the current state, the action taken, and the following state, there is always available a Markov strategy which uniformly (nearly) minimizes the expected total cost. If the costs are strictly positive and depend only on the current state, there is even a stationary strategy with the same property. In a decision process with a fixed goal g in X, there is always a stationary strategy which uniformly (nearly) minimizes the expected time to the goal, and, if X is countable, such a stationary strategy exists which also (nearly) maximizes the probability of reaching the goal.
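The model in the abstract can be illustrated with a small sketch. The example below is not from the paper: it fixes a hypothetical three-state space with state 2 as an absorbing, cost-free goal, represents each action in Γ(x) as a cost together with a discrete transition measure, and uses value iteration to approximate the minimal expected total cost, from which a greedy stationary strategy is read off.

```python
# Illustrative sketch only, not the paper's construction: a decision
# process (X, Gamma) on X = {0, 1, 2}, where state 2 is an absorbing,
# cost-free goal. Gamma[x] is a list of actions; each action is a pair
# (cost, transition dict mapping next state -> probability).
Gamma = {
    0: [(1.0, {1: 1.0}),           # pay 1, move surely to state 1
        (3.0, {2: 1.0})],          # pay 3, jump straight to the goal
    1: [(1.0, {2: 0.5, 0: 0.5})],  # pay 1, reach the goal with prob 1/2
    2: [(0.0, {2: 1.0})],          # goal: stay put at no cost
}

def value_iteration(Gamma, sweeps=200):
    """Approximate V(x) = inf over strategies of the expected total cost."""
    V = {x: 0.0 for x in Gamma}
    for _ in range(sweeps):
        # Jacobi sweep: each new V(x) minimizes immediate cost plus
        # expected continuation cost under the previous V.
        V = {x: min(c + sum(p * V[y] for y, p in trans.items())
                    for c, trans in Gamma[x])
             for x in Gamma}
    return V

def greedy_strategy(Gamma, V):
    """Stationary strategy: in each state, pick a (near-)minimizing action."""
    return {x: min(range(len(Gamma[x])),
                   key=lambda i: Gamma[x][i][0]
                   + sum(p * V[y] for y, p in Gamma[x][i][1].items()))
            for x in Gamma}

V = value_iteration(Gamma)
print(V)                       # approximate minimal expected total costs
print(greedy_strategy(Gamma, V))
```

In this toy instance the stationary strategy pays 3 at state 0 to jump directly to the goal, since the cheaper-looking route through state 1 has a higher expected total cost once the chance of bouncing back is accounted for.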
The Annals of Probability © 1981 Institute of Mathematical Statistics