Relative Entropy Measures of Multivariate Dependence

Harry Joe
Journal of the American Statistical Association
Vol. 84, No. 405 (Mar., 1989), pp. 157-164
DOI: 10.2307/2289859
Stable URL: http://www.jstor.org/stable/2289859
Page Count: 8

Abstract

There has been a lot of work on measures of dependence or association for bivariate probability distributions or bivariate data. These measures usually assume that the variables are both continuous or both categorical. In comparison, there is very little work on multivariate or conditional measures of dependence. The purpose of this article is to discuss measures of multivariate dependence and measures of conditional dependence based on relative entropies. These measures are conceptually very general, as they can be used for a set of variables that can be a mixture of continuous, ordinal-categorical, and nominal-categorical variables. For continuous or ordinal-categorical variables, a certain transformation of relative entropy to the interval [0, 1] leads to generalizations of the correlation, multiple-correlation, and partial-correlation coefficients. If all variables are nominal categorical, the relative entropies are standardized to take a maximum of 1 and then transformed so that in the bivariate case, there is a relative reduction in variability interpretation like that for the correlation coefficient. The relative entropy measures of dependence are compared with commonly used bivariate measures of association such as Kendall's τb and Goodman and Kruskal's λ and with measures of dependence based on Pearson's φ2 distance. Examples suggest that these new measures of dependence should be useful additional summary values for nonmonotonic or nonlinear dependence. Assuming that the multivariate data are a random sample, the statistical measures of dependence with estimated probability density or mass functions can be studied asymptotically. Standard errors are obtained when all variables are categorical, and an outline of what must be done in the case of all continuous variables is given.
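To make the idea concrete, here is a minimal sketch (not the article's own code) of a relative-entropy dependence measure for a bivariate categorical table: the relative entropy of the joint distribution from the product of its margins is the mutual information I, and a transformation of the form √(1 − e^(−2I)) maps it to [0, 1), recovering |ρ| in the bivariate normal case. The function name and this specific transformation are assumptions for illustration; consult the article for the exact definitions used there.

```python
import numpy as np

def relative_entropy_dependence(joint):
    """Illustrative sketch (assumed form, not the article's code).

    Computes the mutual information I = sum p * log(p / (p_x * p_y))
    of a joint probability table relative to independence, then maps
    it to [0, 1) via delta = sqrt(1 - exp(-2 * I)).
    """
    p = np.asarray(joint, dtype=float)
    p = p / p.sum()                      # normalize to a joint pmf
    px = p.sum(axis=1, keepdims=True)    # row margin
    py = p.sum(axis=0, keepdims=True)    # column margin
    mask = p > 0                         # 0 * log 0 treated as 0
    mi = np.sum(p[mask] * np.log(p[mask] / (px * py)[mask]))
    return np.sqrt(1.0 - np.exp(-2.0 * mi))

# Independent 2x2 table: measure is 0
indep = np.outer([0.5, 0.5], [0.5, 0.5])
print(relative_entropy_dependence(indep))   # -> 0.0

# Perfectly dependent (diagonal) 2x2 table: I = log 2,
# so the measure is sqrt(1 - 1/4) = sqrt(0.75) ~ 0.866
diag = np.array([[0.5, 0.0], [0.0, 0.5]])
print(relative_entropy_dependence(diag))
```

Unlike Kendall's τ_b, this measure is not restricted to monotonic association, which is the property the abstract highlights for nonmonotonic or nonlinear dependence.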
