An Entropy Concentration Theorem: Applications in Artificial Intelligence and Descriptive Statistics
Journal of Applied Probability
Vol. 27, No. 2 (Jun., 1990), pp. 303-313
Published by: Applied Probability Trust
Stable URL: http://www.jstor.org/stable/3214649
Page Count: 11
Topics: Entropy, Statistical theories, Statistical discrepancies, Topological theorems, Mathematical theorems, Reasoning, Shannon entropy, Artificial intelligence, Descriptive statistics, Probabilities
The maximum entropy principle models uncertainty by the maximum entropy distribution subject to appropriate linear constraints. We give an entropy concentration theorem (whose proof is based on large-deviation techniques) that provides a mathematical justification for this statistical modelling principle. We then indicate how it can be used in artificial intelligence, and how relevant prior knowledge is supplied by some classical descriptive statistical methods. Furthermore, the maximum entropy principle leads to a natural connection between descriptive methods and certain statistical structures.
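The modelling recipe in the abstract, choosing the distribution of maximum Shannon entropy among those meeting given linear constraints, can be made concrete with a small example. The sketch below, in Python with NumPy (an illustrative choice; the paper itself contains no code), computes the maximum entropy distribution on the faces {1, ..., 6} of a die under a single mean constraint E[X] = 4.5, the classic Brandeis-dice value, which is not taken from this paper. The solution has the exponential form p_i proportional to exp(lam * x_i), and the scalar Lagrange multiplier lam is found by bisection, using the fact that the constrained mean is monotone increasing in lam.

```python
# A minimal sketch, assuming the Brandeis-dice setup: maximize Shannon
# entropy over distributions on {1, ..., 6} subject to one linear
# constraint, E[X] = 4.5. The maxent solution is p_i ~ exp(lam * x_i);
# we solve for the multiplier lam by bisection.

import numpy as np

def maxent_with_mean(x, target_mean, lo=-50.0, hi=50.0, tol=1e-12):
    """Maximum entropy distribution on points x with E[X] = target_mean."""
    x = np.asarray(x, dtype=float)

    def mean_at(lam):
        w = np.exp(lam * (x - x.mean()))   # centre x for numerical stability
        p = w / w.sum()
        return p @ x

    # The constrained mean is strictly increasing in lam (its derivative
    # is the variance under p), so bisection converges.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_at(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(lam * (x - x.mean()))
    return w / w.sum()

x = np.arange(1, 7)
p = maxent_with_mean(x, 4.5)
H = -(p * np.log(p)).sum()
print("p =", np.round(p, 4))             # skewed toward the large faces
print("mean =", p @ x, " entropy =", H)
```

For intuition on the concentration side: Jaynes' classical concentration theorem states that among N-trial frequency distributions satisfying the constraints, the entropy shortfall from the maximum, scaled as 2N(H_max - H), is asymptotically chi-squared distributed, so almost all admissible frequency distributions have entropy close to H_max for large N. The theorem of this paper justifies the principle in that spirit via large deviations.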
Journal of Applied Probability © 1990 Applied Probability Trust