
Decision Processes with Total-Cost Criteria

Stephen Demko and Theodore P. Hill
The Annals of Probability
Vol. 9, No. 2 (Apr., 1981), pp. 293-301
Stable URL: http://www.jstor.org/stable/2243461
Page Count: 9

Abstract

By a decision process is meant a pair (X, Γ), where X is an arbitrary set (the state space), and Γ associates to each point x in X an arbitrary nonempty collection of discrete probability measures (actions) on X. In a decision process with nonnegative costs depending on the current state, the action taken, and the following state, there is always available a Markov strategy which uniformly (nearly) minimizes the expected total cost. If the costs are strictly positive and depend only on the current state, there is even a stationary strategy with the same property. In a decision process with a fixed goal g in X, there is always a stationary strategy which uniformly (nearly) minimizes the expected time to the goal, and, if X is countable, such a stationary strategy exists which also (nearly) maximizes the probability of reaching the goal.
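As a purely illustrative sketch (not taken from the paper), the expected-time-to-goal criterion described above can be approximated on a small finite decision process by value iteration: compute the minimal expected number of steps V(x) to reach the goal from each state, then read off a stationary strategy by picking, at each state, an action achieving the minimum. The states, actions, and transition probabilities below are hypothetical examples.

```python
# Gamma[x] is the set of actions available at state x; each action is a
# discrete probability measure on the state space, written {next_state: prob}.
Gamma = {
    0: [{1: 0.5, 0: 0.5},      # action A: may loop at 0
        {1: 0.9, 2: 0.1}],     # action B: small chance of jumping to goal
    1: [{2: 1.0}],             # state 1 moves to the goal in one step
    2: [{2: 1.0}],             # goal state: absorbing
}
goal = 2

def near_optimal_stationary_strategy(Gamma, goal, iters=200):
    """Approximate V(x) = minimal expected number of steps to the goal,
    and a stationary strategy (one fixed action per state) attaining it."""
    V = {x: 0.0 for x in Gamma}
    for _ in range(iters):
        # Bellman update: one step of cost 1, plus expected future cost.
        V = {x: 0.0 if x == goal else
                min(1.0 + sum(p * V[y] for y, p in a.items())
                    for a in Gamma[x])
             for x in Gamma}
    # Stationary strategy: at each non-goal state, pick a minimizing action.
    strategy = {
        x: min(Gamma[x],
               key=lambda a: sum(p * V[y] for y, p in a.items()))
        for x in Gamma if x != goal
    }
    return V, strategy

V, strategy = near_optimal_stationary_strategy(Gamma, goal)
```

Here state 1 reaches the goal in exactly one step, so V(1) = 1; at state 0, action B gives 1 + 0.9·1 + 0.1·0 = 1.9 expected steps, beating action A's fixed point of 3, so the stationary strategy selects B. The paper's results concern much more general (arbitrary, possibly uncountable) state spaces, where such a strategy is shown to exist only up to an epsilon.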
