Exam zone

Sampling of Random Signals

In class we saw a frequency-domain argument showing that if we sample a zero-mean, WSS, continuous-time random signal x_c(t) at twice the largest frequency component present in its PSD P_{x_cx_c}(\Omega), i.e., at \Omega_s = 2\Omega_m, then the discrete-time random sequence x[n] obtained via impulse sampling of x_c(t) contains all the information present in the continuous-time signal. This was a consequence of the alias-sum PSD formula:

\[ P_{xx}(e^{j\omega}) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty} P_{x_cx_c}\!\left(\frac{\omega - 2k\pi}{T_s}\right), \qquad \omega \in [-\pi,\pi]. \]
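As a sanity check on the alias-sum formula, the short sketch below works a hypothetical example (not from the notes): a WSS process with an assumed triangular PSD P_{x_cx_c}(\Omega) = 2(1 - |\Omega|/\pi) supported on [-\pi, \pi], whose ACF is R_{x_cx_c}(\tau) = sinc^2(\tau/2). Sampling at T_s = 1 is then exactly the Nyquist rate \Omega_s = 2\pi = 2\Omega_m, and the DTFT of the sampled ACF should match the alias sum with no spectral overlap.

```python
import numpy as np

# Assumed example (not from the notes): triangular PSD bandlimited to
# Omega_m = pi, with ACF R(tau) = sinc^2(tau/2).  Ts = 1 is the Nyquist rate.
Ts = 1.0

def R_c(tau):
    """Continuous-time ACF of the assumed triangular-PSD process."""
    return np.sinc(tau / 2.0) ** 2       # np.sinc(x) = sin(pi x)/(pi x)

def P_c(Omega):
    """Assumed triangular PSD, bandlimited to Omega_m = pi."""
    return np.where(np.abs(Omega) <= np.pi, 2.0 * (1 - np.abs(Omega) / np.pi), 0.0)

# Left side: DTFT of the sampled ACF R[k] = R_c(k*Ts), truncated at |k| <= K.
K = 20000
k = np.arange(1, K + 1)
Rk = R_c(k * Ts)
omega = np.linspace(-np.pi, np.pi, 101)
P_dtft = R_c(0.0) + 2.0 * (Rk[None, :] * np.cos(np.outer(omega, k))).sum(axis=1)

# Right side: alias-sum formula (1/Ts) * sum_k P_c((omega - 2*pi*k)/Ts).
P_alias = sum(P_c((omega - 2 * np.pi * kk) / Ts) for kk in range(-3, 4)) / Ts

max_err = np.max(np.abs(P_dtft - P_alias))
print(max_err)   # should be small (limited only by DTFT truncation)
```

Since the assumed PSD vanishes at the band edges, the shifted copies in the alias sum do not overlap at the Nyquist rate, which is why the two sides agree up to truncation error.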

In this exercise, we shall look at a time-domain argument that establishes the same result. First consider the interpolation sum described in class:

\[ \hat{x}_c(t) = \sum_{k=-\infty}^{\infty} x_c(kT_s)\,\mathrm{sinc}\!\left(\frac{t}{T_s}-k\right). \tag{1} \]

The equality above, however, holds in the mean-square sense and not in a pointwise sense. From the discussion in class we saw that the ACF of the continuous-time random signal is also sampled during the process:

\[ R_{xx}[k] = E\{x[n]\,x^{*}[n-k]\} = R_{x_cx_c}(kT_s). \]

Consequently, if we sample the continuous-time random signal at an appropriate rate, we can recover the ACF of the continuous-time random signal, i.e.,

\[ \hat{R}_{x_cx_c}(\tau) = \sum_{k=-\infty}^{\infty} R_{x_cx_c}(kT_s)\,\mathrm{sinc}\!\left(\frac{\tau}{T_s}-k\right). \tag{2} \]

Note that this expansion holds for any lowpass waveform that is bandlimited and satisfies the Nyquist sampling criterion. The signal R_{x_cx_c}(\tau - t), being a lowpass waveform, can therefore be expanded via:

\[ \hat{R}_{x_cx_c}(\tau - t) = \sum_{k=-\infty}^{\infty} R_{x_cx_c}(kT_s - t)\,\mathrm{sinc}\!\left(\frac{\tau}{T_s}-k\right). \]
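To make the interpolation of the ACF from its samples concrete, here is a small numerical sketch under an assumed example (not from the notes): the hypothetical ACF R(\tau) = sinc^2(\tau/2) is bandlimited to \Omega_m = \pi, so its Nyquist-rate samples at T_s = 1 should reconstruct it at off-grid lags via a truncated sinc sum.

```python
import numpy as np

# Assumed example: R(tau) = sinc^2(tau/2) is bandlimited to Omega_m = pi,
# so samples at Ts = 1 (the Nyquist rate) determine it everywhere.
Ts = 1.0

def R_c(tau):
    return np.sinc(tau / 2.0) ** 2

K = 5000                                   # truncation of the interpolation sum
k = np.arange(-K, K + 1)
tau = np.linspace(-2.5, 2.5, 41)           # off-grid lags
R_hat = (R_c(k * Ts)[None, :] *
         np.sinc(tau[:, None] / Ts - k[None, :])).sum(axis=1)

err = np.max(np.abs(R_hat - R_c(tau)))
print(err)   # should be small (truncation error only)
```

At integer lags the sinc kernel reduces to a Kronecker delta, so the reconstruction passes exactly through the samples; the nontrivial check is at the off-grid lags.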

Replacing the variable \tau with \tau + t we obtain:

\[ \hat{R}_{x_cx_c}(\tau) = \sum_{k=-\infty}^{\infty} R_{x_cx_c}(kT_s - t)\,\mathrm{sinc}\!\left(\frac{\tau + t}{T_s}-k\right). \]

If the above expression is evaluated at \tau = 0 we obtain:

\[ \hat{R}_{x_cx_c}(0) = \sum_{k=-\infty}^{\infty} R_{x_cx_c}(kT_s - t)\,\mathrm{sinc}\!\left(\frac{t}{T_s}-k\right). \tag{3} \]

If the sum in Eq. (1) were to converge to the continuous-time random signal in the MS sense, we require that:

\[ \lim_{N\to\infty} E\left\{\left| x_c(t) - \sum_{p=-N}^{N} x_c(pT_s)\,\mathrm{sinc}\!\left(\frac{t}{T_s}-p\right) \right|^2\right\} = 0. \]

Expanding the square in the expectation and rewriting the requirement:

\[ \lim_{N\to\infty} E\{[x_c(t) - \hat{x}_c(t)]\,x_c(t)\} - \lim_{N\to\infty} E\{\hat{x}_c(t)\,[x_c(t) - \hat{x}_c(t)]\} = 0. \tag{4} \]

If we look at just the first term in Eq. (4) and incorporate Eq. (3):

\[ \lim_{N\to\infty} E\{[x_c(t) - \hat{x}_c(t)]\,x_c(t)\} = R_{x_cx_c}(0) - \lim_{N\to\infty} \sum_{p=-N}^{N} R_{x_cx_c}(t - pT_s)\,\mathrm{sinc}\!\left(\frac{t}{T_s}-p\right) = 0, \]

where the last equality follows because, by Eq. (3) and the even symmetry of the ACF, the sum converges to \hat{R}_{x_cx_c}(0) = R_{x_cx_c}(0).

In a similar fashion the second term in Eq. (4) can be rewritten in the following form:

\[ \lim_{N\to\infty} E\{\hat{x}_c(t)\,[x_c(t) - \hat{x}_c(t)]\} = \sum_{m=-\infty}^{\infty} E\{x_c(mT_s)\,(x_c(t) - \hat{x}_c(t))\}\,\mathrm{sinc}\!\left(\frac{t}{T_s}-m\right). \tag{5} \]

Expanding R_{x_cx_c}(t - mT_s) in terms of the sinc function basis we obtain:

\[ R_{x_cx_c}(t - mT_s) = \sum_{n=-\infty}^{\infty} R_{x_cx_c}((n-m)T_s)\,\mathrm{sinc}\!\left(\frac{t}{T_s}-n\right). \tag{6} \]
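The sinc-basis expansion of the shifted ACF can be spot-checked numerically. The sketch below assumes the same hypothetical ACF R(\tau) = sinc^2(\tau/2) with T_s = 1 (a made-up example, not from the notes) and compares R(t - mT_s) against its truncated sinc-basis expansion at one off-grid t.

```python
import numpy as np

# Assumed example: R(tau) = sinc^2(tau/2), bandlimited to pi, Ts = 1 (Nyquist).
Ts, t, m = 1.0, 0.7, 3   # t and m are arbitrary illustrative choices

def R_c(tau):
    return np.sinc(tau / 2.0) ** 2

n = np.arange(-4000, 4001)
rhs = (R_c((n - m) * Ts) * np.sinc(t / Ts - n)).sum()   # truncated expansion
lhs = R_c(t - m * Ts)                                   # the shifted ACF itself
print(abs(lhs - rhs))   # should be small (truncation error only)
```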

Expanding the expectation in Eq. (5) and incorporating Eq. (6) we have:

\[ \lim_{N\to\infty} E\{x_c(mT_s)\,(x_c(t) - \hat{x}_c(t))\} = R_{x_cx_c}(t - mT_s) - \lim_{N\to\infty} \sum_{n=-N}^{N} R_{x_cx_c}((m-n)T_s)\,\mathrm{sinc}\!\left(\frac{t}{T_s}-n\right) = 0, \]

where the last equality follows from Eq. (6) together with the even symmetry of the ACF, R_{x_cx_c}((m-n)T_s) = R_{x_cx_c}((n-m)T_s).

Since both terms in Eq. (4) vanish, the mean-square convergence requirement is satisfied. The frequency-domain interpretation of the sampling theorem, which dealt with the alias-sum formula and the minimum sampling rate required for signal recovery, therefore has a time-domain counterpart: \hat{x}_c(t) converges to x_c(t), i.e., is equivalent to x_c(t), in the mean-square sense, provided we sample at the appropriate rate dictated by the sampling theorem.
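The whole argument can be checked numerically: because the MS error depends only on second-order statistics, E\{|x_c(t) - \hat{x}_c(t)|^2\} for the N-term truncation can be evaluated directly from the ACF, with no random sample paths needed. The sketch below does this for the assumed triangular-PSD process with R(\tau) = sinc^2(\tau/2) (a made-up example, not from the notes) and shows the error shrinking as N grows.

```python
import numpy as np

# Assumed example: R(tau) = sinc^2(tau/2), bandlimited to pi, Ts = 1 (Nyquist).
# Expanding E{|x_c(t) - x_hat_N(t)|^2} gives R(0) - 2*cross + gram, with every
# term expressible through the ACF alone.
Ts, t = 1.0, 0.4   # t is an arbitrary off-grid evaluation point

def R_c(tau):
    return np.sinc(tau / 2.0) ** 2

def ms_error(N):
    p = np.arange(-N, N + 1)
    s = np.sinc(t / Ts - p)                               # sinc(t/Ts - p)
    cross = (R_c(t - p * Ts) * s).sum()                   # E{x_c(t) x_hat_N(t)}
    gram = s @ R_c((p[:, None] - p[None, :]) * Ts) @ s    # E{x_hat_N(t)^2}
    return R_c(0.0) - 2 * cross + gram

errs = [ms_error(N) for N in (10, 40, 160)]
print(errs)   # the MS error should shrink as N grows
```

The slow (roughly 1/N) decay here reflects the 1/k tails of the sinc kernel; the limit is nonetheless zero, consistent with Eq. (4).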

prob theory & stochastic16
