Large-Sample Estimation of an Unknown Discrete Waveform Which is Randomly Repeating in Gaussian Noise
Melvin Hinich
The Annals of Mathematical Statistics
Vol. 36, No. 2 (Apr., 1965), pp. 489–508
Published by: Institute of Mathematical Statistics
Stable URL: http://www.jstor.org/stable/2238154
Page Count: 20
Abstract
Suppose we have an input X(t) made up of an unknown waveform θ(t) of known length, which is repeated randomly and is imbedded in Gaussian noise with a known covariance function. The rate of recurrence of the waveform is a known small constant. In addition, the signal-to-noise ratio of the input X(t) is quite low. We wish to estimate the waveform θ(t) and its autocorrelation ψ(τ) = ∫ θ(t + τ)θ(t) dt. Restricting ourselves to discrete-time observations on X(t), we shall derive an optimal estimator of the discrete version of ψ(τ). This estimator is a weighted average of the sample autocorrelation and the square of a linear estimator of the time average (the zero-frequency or DC value) of the waveform. For the estimation of θ, the problem is more complicated. The optimality concept (asymptotic efficiency) used in this work is based upon large-sample theory and the Cramér-Rao Inequality (Chernoff [2] and Cramér [4]). The problem stated above was motivated by a problem of electronic surveillance of an enemy communication system based upon pulse position modulation (PPM). To illustrate this system, suppose station A is sending a message, which we wish to intercept and decode, to station B by PPM over a certain FM bandwidth. A continues to repeat a fixed pulse-type waveform $\theta(t) = \sum_{i=1}^n \theta_i H[t - (i - 1)T]$, where $H(t) = 1$ if $0 \leqq t < T$ and $H(t) = 0$ otherwise. The vector θ' = (θ1, ⋯, θn), the parameter n, and T (the pulse width) are known to both A and B. Notice that nT is the time duration (length) of θ(t). Since θ, n, and T are known to the receiver B, one may ask what the coding scheme is for the information that A is sending. The answer is that the length of time between successive recurrences of the waveform is the variable which contains the information. In many applications the average length of time between successive occurrences of θ(t) is around 10² times nT.
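The two central quantities above, the sampled pulse-train waveform and its discrete autocorrelation, can be sketched concretely. The following is a minimal illustration (function names are our own, not the paper's), taking one sample per pulse width T so that the sampled waveform is just the pulse vector θ' = (θ1, ⋯, θn):

```python
def autocorrelation(theta):
    """Discrete version of psi(tau) = sum_t theta[t + tau] * theta[t], tau >= 0.

    With one sample per pulse width T, the sampled waveform is simply the
    pulse vector theta, so the integral defining psi becomes a finite sum.
    """
    n = len(theta)
    return [sum(theta[t + tau] * theta[t] for t in range(n - tau))
            for tau in range(n)]

# Example pulse vector theta' = (1, 2, 1):
psi = autocorrelation([1.0, 2.0, 1.0])
# psi = [6.0, 4.0, 1.0]; psi[0] is the maximum, as the detection argument
# in the abstract requires.
```

Note that ψ(0) = Σ θᵢ² is always the global maximum, which is the property exploited later when ψ(0) must dominate the relative maxima of ψ.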
While the intervals between repetitions are fundamental in the transmission of information, someone who does not yet know the waveform may regard the recurrences as purely random in time. This modulation technique has the effect of spreading the power in the FM bandwidth over a wider swath of the frequency scale. This makes surveillance more difficult because it requires that we somehow determine the actual bandwidth being used. Moreover, the spreading of the power makes jamming of the channel difficult. To further complicate matters, A transmits the pulses with low power, so that B picks up an input X(t) with a low signal-to-noise ratio. Since B knows θ(t), he uses matched filtering to detect the times of occurrence of the θ's. If the noise is assumed to be additive Gaussian noise with known covariance, then matched filtering is optimal in a decision-theoretic sense (Wainstein and Zubakov [8]). But suppose that we are listening in on this channel without knowing θ and we wish to find out what A is saying. First we must determine the frequency band of the channel which A is using. Then we must detect the times of occurrence of the θ's, although we do not know θ. However, let us assume that we have already determined n and T, although it will turn out that n is not a vital parameter in the estimator developed in this paper. Jakowatz, Shuey, and White [6] present a special discrete-time (sampled-data) system, called the Adaptive Filter, which estimates an unknown waveform which is repeating in additive noise. The system uses a complicated stochastic iterative procedure. The Filter obtains a crude estimate of the waveform from the initial input and uses the discrete cross-correlation between this estimate and the input to detect the times of occurrence of the waveform. When it decides that a waveform is present in the input, it refines the estimate by averaging it with the section of input where the waveform is thought to be present.
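The detection step described above — matched filtering realized as a discrete cross-correlation, with a threshold deciding when the waveform is present — can be sketched as follows. This is an illustrative simplification, not the paper's construction; the threshold and all names are our own assumptions:

```python
def cross_correlation(x, theta):
    """Slide the reference waveform theta along the input x and return the
    correlation at every admissible alignment (the matched-filter output)."""
    n = len(theta)
    return [sum(theta[i] * x[t + i] for i in range(n))
            for t in range(len(x) - n + 1)]

def detect_occurrences(x, theta, threshold):
    """Flag the alignments whose correlation reaches the threshold."""
    return [t for t, v in enumerate(cross_correlation(x, theta))
            if v >= threshold]

# Noiseless toy input containing the pulse (1, 2, 1) at positions 2 and 7:
x = [0, 0, 1, 2, 1, 0, 0, 1, 2, 1, 0]
detect_occurrences(x, [1, 2, 1], 5.0)  # → [2, 7]
```

In the low signal-to-noise setting of the paper the correlation peaks are of course buried in noise, which is why the threshold choice and the statistics of these correlations matter.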
Provided the autocorrelation of the waveform, ψ(τ), has its maximum ψ(0) a good deal larger than the relative maxima of ψ, and provided the noise is well behaved, this iterative procedure results in an asymptotically stable estimate of the waveform. This stable estimate is then used as the matching element in matched-filter detection of the waveform. Thus we could call the Adaptive Filter an adaptive matched filter. A partial analysis of this system is given by Hinich [5]. The estimate of the discrete autocorrelation of the waveform is helpful to the analysis of systems which are based upon discrete-time cross-correlation, such as the Adaptive Filter, since the autocorrelation is a basic parameter in the distributions of the random variables (correlations) which arise in the operation of these systems. Suppose we can obtain an expression for θ in terms of ψ, θ = f(ψ). Once we have obtained an asymptotically efficient estimator of ψ, call it $\hat\psi$, then $\hat\theta = f(\hat\psi)$ would be an asymptotically efficient estimator of θ. Unfortunately, there is a multiplicity of θ's which have ψ as their autocorrelation. In Section 6 we will present a method for obtaining the correct θ (the one which appears in the input) from ψ by using the observations on X(t). Unfortunately, this method does not seem to be sufficiently practical. To conclude, let us outline the rest of this paper. In Section 2 we give a formal statement and description of the problem posed above. In Section 3 we state the Cramér-Rao theorem and derive the information matrices relevant to the estimation of θ and ψ, as well as their inverses. In Section 4 we present the optimal estimator of ψ. We also show that while the normalized sample correlation is an unbiased estimator of ψ, it is not efficient. In Section 5 we discuss three examples.
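The Adaptive Filter's refinement step — averaging the current waveform estimate with each newly accepted input segment — amounts to a running elementwise mean. A minimal sketch, under that assumption (the actual system in [6] is a more complicated stochastic iteration, and the names here are illustrative):

```python
def refine_estimate(estimate, segment, k):
    """Update a waveform estimate that has already absorbed k accepted
    segments by averaging in one more: a sample-by-sample running mean."""
    return [(k * e + s) / (k + 1) for e, s in zip(estimate, segment)]

# Crude initial estimate, then one accepted segment (k = 1 prior segment):
est = refine_estimate([2.0, 4.0, 2.0], [4.0, 2.0, 4.0], 1)
# → [3.0, 3.0, 3.0], the elementwise mean of the two segments
```

Under well-behaved additive noise, each accepted segment is (waveform + noise), so this running mean drives the noise contribution toward zero, which is the intuition behind the asymptotic stability claimed above.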
These are white noise in the general case, white noise when the discrete waveform has only two components θ1 and θ2, and Gaussian Markov noise $(EN(t + \tau)N(t) = \rho^{\tau}, 0 < \rho < 1)$. In Section 6 we discuss the problem of estimating θ after the autocorrelation ψ has been estimated.
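In discrete time, noise with covariance EN(t + τ)N(t) = ρ^τ is a stationary first-order autoregressive (AR(1)) Gaussian process, N(t + 1) = ρN(t) + √(1 − ρ²)W(t) with W standard white Gaussian noise. A sketch of generating such noise, assuming unit sampling and unit marginal variance:

```python
import random

def gauss_markov_noise(length, rho, seed=0):
    """Stationary Gauss-Markov noise with E[N(t + tau) N(t)] = rho**tau.

    Realized as a unit-variance AR(1) recursion: starting from a standard
    normal draw, each step keeps rho of the previous value and adds
    sqrt(1 - rho**2) of fresh white Gaussian noise.
    """
    rng = random.Random(seed)
    n = [rng.gauss(0.0, 1.0)]
    for _ in range(length - 1):
        n.append(rho * n[-1] + (1.0 - rho ** 2) ** 0.5 * rng.gauss(0.0, 1.0))
    return n

noise = gauss_markov_noise(1000, 0.5)
```

The case ρ → 0 recovers white noise, connecting this third example to the first two.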
The Annals of Mathematical Statistics © 1965 Institute of Mathematical Statistics