An Application of Information Theory to Multivariate Analysis
The Annals of Mathematical Statistics
Vol. 23, No. 1 (Mar., 1952), pp. 88-102
Published by: Institute of Mathematical Statistics
Stable URL: http://www.jstor.org/stable/2236403
Page Count: 15
The problem considered is that of finding the "best" linear function for discriminating between two multivariate normal populations, π1 and π2, without limitation to the case of equal covariance matrices. The "best" linear function is found by maximizing the divergence, J'(1, 2), between the distributions of the linear function. Comparison with the divergence, J(1, 2), between π1 and π2 offers a measure of the discriminating efficiency of the linear function, since J(1, 2) ≥ J'(1, 2). The divergence, a special case of which is Mahalanobis's Generalized Distance, is defined in terms of a measure of information which is essentially that of Shannon and Wiener. Appropriate assumptions about π1 and π2 lead to discriminant analysis (Sections 4, 7), principal components (Section 5), and canonical correlations (Section 6).
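As a rough numerical illustration of the quantities named in the abstract, the sketch below assumes the divergence J(1, 2) is the symmetric Kullback-Leibler divergence, which for two multivariate normal populations has a closed form, and that J'(1, 2) is the same divergence computed for the univariate distributions of a linear function y = a'x under π1 and π2. The example populations, variable names, and the numerical maximization over a are illustrative assumptions only; the paper itself derives the "best" linear function analytically.

```python
# Minimal sketch: divergence J(1, 2) between two multivariate normal
# populations and the divergence J'(1, 2) of a linear function y = a'x,
# maximized numerically over the direction a.  Example parameters are
# illustrative, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

def divergence(mu1, S1, mu2, S2):
    """Symmetric KL divergence J(1, 2) between N(mu1, S1) and N(mu2, S2)."""
    k = len(mu1)
    S1_inv, S2_inv = np.linalg.inv(S1), np.linalg.inv(S2)
    d = mu1 - mu2
    trace_term = 0.5 * (np.trace(S2_inv @ S1) + np.trace(S1_inv @ S2) - 2 * k)
    mean_term = 0.5 * d @ (S1_inv + S2_inv) @ d
    return trace_term + mean_term

def projected_divergence(a, mu1, S1, mu2, S2):
    """Divergence J'(1, 2) between the univariate distributions of y = a'x."""
    m1, v1 = a @ mu1, a @ S1 @ a
    m2, v2 = a @ mu2, a @ S2 @ a
    return 0.5 * (v1 / v2 + v2 / v1 - 2) + 0.5 * (m1 - m2) ** 2 * (1 / v1 + 1 / v2)

# Two example populations with unequal covariance matrices.
mu1, S1 = np.array([0.0, 0.0]), np.array([[1.0, 0.3], [0.3, 1.0]])
mu2, S2 = np.array([1.0, 0.5]), np.array([[2.0, -0.4], [-0.4, 0.5]])

J_full = divergence(mu1, S1, mu2, S2)

# "Best" linear function: maximize J'(1, 2) over the direction a.
# J' is scale-invariant in a, so an unconstrained search suffices here.
res = minimize(lambda a: -projected_divergence(a, mu1, S1, mu2, S2),
               x0=np.ones(2))
J_best = -res.fun

print(f"J(1, 2)  = {J_full:.4f}")   # divergence between pi_1 and pi_2
print(f"J'(1, 2) = {J_best:.4f}")   # best linear function; J(1, 2) >= J'(1, 2)
```

Running the script shows J(1, 2) ≥ J'(1, 2), with the ratio J'/J giving the kind of discriminating-efficiency measure the abstract describes.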
The Annals of Mathematical Statistics © 1952 Institute of Mathematical Statistics