# Combining Sample Information in Estimating Ordered Normal Means

Constance Van Eeden and James V. Zidek
Sankhyā: The Indian Journal of Statistics, Series A (1961-2002)
Vol. 64, No. 3, In Memory of D. Basu, Part 1 (Oct., 2002), pp. 588-610
Stable URL: http://www.jstor.org/stable/25051416
Page Count: 23


## Abstract

In this paper we answer a question concerned with the estimation of $\theta_{1}$ when $Y_{i}\stackrel{ind}{\sim}\mathcal{N}(\theta_{i},\sigma_{i}^{2})$, $i=1,2$, are observed and $\theta_{1}\leq \theta_{2}$. In this case $\theta_{2}$ contains information about $\theta_{1}$ and we show how the relevance weights in the so-called weighted likelihood might be selected so that $Y_{2}$ may be used together with $Y_{1}$ for effective likelihood inference about $\theta_{1}$. Our answer to this question uses the Akaike entropy maximization criterion to find the weights empirically. Although the problem of estimating $\theta_{1}$ under these conditions has a long history, our estimator appears to be new. Unlike the MLE it is continuously differentiable. Unlike the Pitman estimator for this problem, but like the MLE, it has a simple form. The paper describes the derivation of our estimator, presents some of its properties and compares it with some obvious competitors. One of these competitors is the inadmissible maximum likelihood estimator, for which we present a dominator. Finally, a number of open problems are presented.
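For context, the maximum likelihood estimator that the abstract refers to has a well-known closed form in this two-sample setting: when the observations already satisfy the order constraint it leaves them unchanged, and otherwise it pools them with precision weights (the standard isotonic-regression solution). The sketch below illustrates that baseline MLE only; it is not the paper's new weighted-likelihood estimator, and the function name is ours.

```python
def mle_ordered_means(y1, y2, var1, var2):
    """Constrained MLE of (theta1, theta2) under theta1 <= theta2,
    given Y_i ~ N(theta_i, var_i) independently.

    Standard isotonic-regression result, shown for context; this is
    NOT the new estimator proposed in the paper.
    """
    if y1 <= y2:
        # Order constraint already satisfied: MLE is the unrestricted one.
        return y1, y2
    # Constraint violated: pool the observations with precision weights,
    # so both estimates equal the weighted average of Y1 and Y2.
    w1, w2 = 1.0 / var1, 1.0 / var2
    pooled = (w1 * y1 + w2 * y2) / (w1 + w2)
    return pooled, pooled
```

Note that the pooled branch makes the estimator continuous in $(Y_1, Y_2)$ but not continuously differentiable at $Y_1 = Y_2$, which is the defect of the MLE that the abstract contrasts with the proposed estimator.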
