


Example: Oscillator with Random Phase

Consider the output of a sinusoidal oscillator that has a random phase, of the form $X(t) = \cos(\Omega_c t + \Theta)$, where $\Theta \sim U([0, 2\pi])$. Writing out the explicit dependence on the underlying sample space $S$, the oscillator output can be written as

\[
x(t, \theta) = \cos(\Omega_c t + \theta). \tag{31}
\]
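As a quick numerical illustration (a sketch; the carrier frequency, time grid, and seed below are arbitrary choices, not values from the notes), each draw of $\Theta$ fixes one deterministic sample path:

```python
import math
import random

def sample_path(omega_c, times, seed=0):
    """Draw one realization of X(t) = cos(omega_c * t + Theta) with
    Theta ~ Uniform([0, 2*pi]); the phase is drawn once and then held
    fixed, so each realization is an ordinary deterministic sinusoid."""
    rng = random.Random(seed)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    return theta, [math.cos(omega_c * t + theta) for t in times]

times = [0.01 * k for k in range(1000)]
theta, x = sample_path(omega_c=2.0 * math.pi, times=times)
assert all(-1.0 <= v <= 1.0 for v in x)  # a cosine never leaves [-1, 1]
```

The randomness lives entirely in the phase: changing `seed` shifts the whole waveform, but every path is a pure sinusoid of the same frequency and amplitude.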

This random signal falls in the continuous-time, continuous-parameter, continuous-amplitude category and is useful in modeling propagation phenomena such as multi-path fading. The first-order distribution of this process can be found by looking at the distribution of the R.V. $X_t(\Theta) = \cos(\Theta + \theta_0)$, where $\Omega_c t = \theta_0$ is a nonrandom quantity. This can easily be shown via the derivative method shown in class to be of the form:

\[
f_X(x) = \frac{1}{\pi\sqrt{1 - x^2}}, \qquad |x| < 1. \tag{32}
\]

Note that this distribution depends only on the set of values that the process takes and is independent of the particular sampling instant $t$ and the constant phase offset $\theta_0$. If the second-order distribution is needed, then we use the conditional distribution of $x(t_2)$ as in:

\[
f_{x(t_1),\,x(t_2)}(x_1, x_2) = f_{x(t_2)}(x_2)\, f_{x(t_1) \mid x(t_2)}(x_1 \mid x_2). \tag{33}
\]

If the value of $x(t_2)$ is to equal $x_2$, then we require $\cos(\Theta + \Omega_c t_2) = x_2$. This can happen only when:

\[
\Theta = \cos^{-1}(x_2) - \Omega_c t_2 \quad \text{or} \quad \Theta = 2\pi - \cos^{-1}(x_2) - \Omega_c t_2, \tag{34}
\]

where $0 \le \cos^{-1}(x_2) \le \pi$. All other possible solutions lie outside the desired interval $[0, 2\pi]$. Consequently, the random process at $t = t_1$ can only take on the values:

\[
x(t_1) = \cos\!\left(\Omega_c t_1 + \cos^{-1}(x_2) - \Omega_c t_2\right) \quad \text{or} \quad x(t_1) = \cos\!\left(\Omega_c t_1 - \cos^{-1}(x_2) - \Omega_c t_2\right). \tag{35}
\]
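Both branches of Eq. (34) can be verified directly (a minimal check; the values of $\Omega_c$, $t_2$, and $x_2$ below are arbitrary illustrative choices):

```python
import math

omega_c, t2, x2 = 2.0, 0.3, 0.42   # illustrative values, not from the notes

# The two candidate phases from Eq. (34):
theta_a = math.acos(x2) - omega_c * t2
theta_b = 2.0 * math.pi - math.acos(x2) - omega_c * t2

# Each one reproduces the conditioning value cos(Theta + omega_c * t2) = x2,
# because cos(2*pi - u) = cos(u).
assert abs(math.cos(theta_a + omega_c * t2) - x2) < 1e-12
assert abs(math.cos(theta_b + omega_c * t2) - x2) < 1e-12
```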

Thus the conditional distribution of $x(t_1)$ given that $x(t_2) = x_2$ is of the form:

\[
f_{x(t_1) \mid x(t_2)}(x_1 \mid x_2) = \frac{1}{2}\,\delta\!\left(x_1 - \cos\!\left[\Omega_c t_1 + \cos^{-1}(x_2) - \Omega_c t_2\right]\right) + \frac{1}{2}\,\delta\!\left(x_1 - \cos\!\left[\Omega_c t_1 - \cos^{-1}(x_2) - \Omega_c t_2\right]\right). \tag{36}
\]

Combining Eq. (32) and Eq. (36) we have:

\[
f_{x(t_1),\,x(t_2)}(x_1, x_2) = \frac{1}{2\pi\sqrt{1 - x_2^2}}\,\delta\!\left(x_1 - \cos\!\left[\Omega_c t_1 + \cos^{-1}(x_2) - \Omega_c t_2\right]\right) + \frac{1}{2\pi\sqrt{1 - x_2^2}}\,\delta\!\left(x_1 - \cos\!\left[\Omega_c t_1 - \cos^{-1}(x_2) - \Omega_c t_2\right]\right). \tag{37}
\]
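The marginal density in Eq. (32) can be sanity-checked by Monte Carlo (a sketch; comparing empirical and analytic CDFs avoids binning near the endpoint singularities of the density, and the sample sizes and offsets below are arbitrary choices):

```python
import math
import random

def empirical_cdf(x, theta0, n=100_000, seed=1):
    """Monte Carlo estimate of P(cos(Theta + theta0) <= x) with
    Theta ~ Uniform([0, 2*pi])."""
    rng = random.Random(seed)
    hits = sum(math.cos(rng.uniform(0.0, 2.0 * math.pi) + theta0) <= x
               for _ in range(n))
    return hits / n

def arcsine_cdf(x):
    # CDF implied by Eq. (32): integrating 1/(pi*sqrt(1 - u^2)) from -1
    # to x gives F_X(x) = 1 - arccos(x)/pi for |x| <= 1.
    return 1.0 - math.acos(x) / math.pi

# The marginal law is the same for every phase offset theta0 = omega_c * t.
for theta0 in (0.0, 1.3):
    for x in (-0.5, 0.0, 0.7):
        assert abs(empirical_cdf(x, theta0) - arcsine_cdf(x)) < 0.01
```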


Note here that the second-order PDF depends only on the difference variable $\tau = t_1 - t_2$. Let us look at the first-order and second-order moments of the random process $X(t)$. The mean of the process is obtained by taking the expectation with respect to the random parameter $\Theta$ on both sides of Eq. (31), keeping in mind that the expectation integral is a linear operation:

\[
\mu_X = E_\Theta(x_t(\Theta)) = E_\Theta[\cos(\Omega_c t + \Theta)] = E_\Theta[\cos(\Omega_c t)\cos(\Theta) - \sin(\Omega_c t)\sin(\Theta)] = E_\Theta[\cos(\Theta)]\cos(\Omega_c t) - E_\Theta[\sin(\Theta)]\sin(\Omega_c t). \tag{38}
\]

Since the random parameter $\Theta$ is uniformly distributed, the above expression reduces to:

\[
\mu_X = \cos(\Omega_c t)\left(\frac{1}{2\pi}\right)\int_0^{2\pi} \cos(\theta)\, d\theta - \sin(\Omega_c t)\left(\frac{1}{2\pi}\right)\int_0^{2\pi} \sin(\theta)\, d\theta = 0. \tag{39}
\]
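A quick Monte Carlo confirmation of Eq. (39) (a sketch; the frequency, sampling instants, and sample size are arbitrary choices):

```python
import math
import random

def ensemble_mean(t, omega_c=2.0, n=200_000, seed=2):
    """Monte Carlo estimate of E[cos(omega_c*t + Theta)] over the
    ensemble of phases, Theta ~ Uniform([0, 2*pi])."""
    rng = random.Random(seed)
    return sum(math.cos(omega_c * t + rng.uniform(0.0, 2.0 * math.pi))
               for _ in range(n)) / n

# The ensemble mean vanishes at every sampling instant t.
for t in (0.0, 0.5, 2.7):
    assert abs(ensemble_mean(t)) < 0.01
```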

The variance of the random process $X(t)$ is obtained via:

\[
\sigma_X^2 = E_\Theta\!\left[(x_t(\Theta) - \mu_X)^2\right] = E_\Theta\!\left[(x_t(\Theta))^2\right] - \mu_X^2. \tag{40}
\]

Substituting the mean of the process in the above expression we have:

\[
\sigma_X^2 = \left(\frac{1}{2\pi}\right)\int_0^{2\pi} \cos^2(\Omega_c t + \theta)\, d\theta = \left(\frac{1}{2\pi}\right)\int_0^{2\pi} \frac{1 + \cos(2\Omega_c t + 2\theta)}{2}\, d\theta = \frac{1}{2}. \tag{41}
\]
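The value $\sigma_X^2 = 1/2$ from Eq. (41) is also easy to confirm numerically (a sketch with arbitrary parameter choices; since $\mu_X = 0$, the variance is just the second moment):

```python
import math
import random

def ensemble_variance(t, omega_c=2.0, n=200_000, seed=3):
    """Monte Carlo estimate of Var[X(t)] per Eq. (41); the mean is 0,
    so the variance reduces to E[cos^2(omega_c*t + Theta)]."""
    rng = random.Random(seed)
    return sum(math.cos(omega_c * t + rng.uniform(0.0, 2.0 * math.pi)) ** 2
               for _ in range(n)) / n

# The variance is 1/2 at every sampling instant t.
for t in (0.0, 1.0):
    assert abs(ensemble_variance(t) - 0.5) < 0.01
```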

This means that the average power of the random sinusoidal signal $X(t)$ is:

\[
P_{\text{ave}}^X = \sigma_X^2 = \frac{1}{2}.
\]

Note that this is the same as the average power of a sinusoid whose phase is not random. Let us look at the statistics from the second-order distribution. The correlation between the R.V.s $x(t_1)$ and $x(t_2)$, denoted $R_{XX}(t_1, t_2)$, is obtained via:

\[
R_{XX}(t_1, t_2) = E_\Theta[x(t_1)\, x(t_2)] = \frac{1}{2\pi}\int_0^{2\pi} \cos[\Omega_c t_1 + \theta]\cos[\Omega_c t_2 + \theta]\, d\theta
= \frac{1}{4\pi}\int_0^{2\pi} \cos[\Omega_c(t_1 + t_2) + 2\theta]\, d\theta + \frac{1}{4\pi}\int_0^{2\pi} \cos[\Omega_c(t_1 - t_2)]\, d\theta
= \frac{1}{2}\cos[\Omega_c(t_1 - t_2)]. \tag{42}
\]
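Eq. (42) can be checked against a direct ensemble average (a sketch; the time pairs, frequency, and sample size below are arbitrary choices):

```python
import math
import random

def acf_estimate(t1, t2, omega_c=2.0, n=200_000, seed=4):
    """Monte Carlo estimate of R_XX(t1, t2) = E[X(t1) X(t2)], Eq. (42),
    averaging over the random phase Theta ~ Uniform([0, 2*pi])."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        total += math.cos(omega_c * t1 + theta) * math.cos(omega_c * t2 + theta)
    return total / n

omega_c = 2.0
# The estimate depends only on t1 - t2, matching (1/2) cos(omega_c*(t1 - t2)).
for t1, t2 in ((0.0, 0.0), (1.2, 0.4), (3.0, 1.0)):
    expected = 0.5 * math.cos(omega_c * (t1 - t2))
    assert abs(acf_estimate(t1, t2, omega_c) - expected) < 0.01
```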

The covariance of the R.V.s $X(t_1)$ and $X(t_2)$, denoted $C_{XX}(t_1, t_2)$, is given by:

\[
C_{XX}(t_1, t_2) = R_{XX}(t_1, t_2) - \mu_X(t_1)\,\mu_X(t_2) = \frac{1}{2}\cos[\Omega_c(t_1 - t_2)]. \tag{43}
\]

The correlation coefficient of the R.V.s $X(t_1)$ and $X(t_2)$, denoted $\rho_{XX}(t_1, t_2)$, is:

\[
\rho_{XX}(t_1, t_2) = \cos[\Omega_c(t_1 - t_2)]. \tag{44}
\]

Looking at the mean and the variance of the random process $X(t)$, we can see that they are shift-invariant, and consequently the process is first-order stationary. The ACF and other second-order statistics of the process depend only on the variable $\tau = t_1 - t_2$. The random process $X(t)$ is therefore also a WSS process. The ACF can then be expressed in terms of the variable $\tau = t_1 - t_2$ as:

\[
R_{XX}(\tau) = \frac{1}{2}\cos(\Omega_c \tau). \tag{45}
\]

Let us now look at time averages of a single sample function or realization of the random process $X(t)$. The sample mean of the random process, irrespective of the sample realization that we choose, is:

\[
\langle \mu_X \rangle_T = \frac{1}{T}\int_{-T/2}^{T/2} \cos[\Omega_c t + \Theta]\, dt. \tag{46}
\]

As $T \to \infty$ we have:

\[
\lim_{T \to \infty} \langle \mu_X \rangle_T = 0. \tag{47}
\]
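The limit in Eq. (47) can be seen numerically (a sketch; the window length, step count, and phases are arbitrary choices, and the exact integral is bounded in magnitude by $2/(\Omega_c T)$):

```python
import math

def time_average_mean(theta, T, omega_c=2.0, steps=100_000):
    """Midpoint-rule estimate of (1/T) * integral_{-T/2}^{T/2}
    cos(omega_c*t + theta) dt, the sample mean of Eq. (46)."""
    dt = T / steps
    return sum(math.cos(omega_c * (-T / 2.0 + (k + 0.5) * dt) + theta)
               for k in range(steps)) * dt / T

# The time average dies out as T grows, whatever phase the realization has.
for theta in (0.0, 1.0, 5.0):
    assert abs(time_average_mean(theta, T=1000.0)) < 0.005
```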

The sample mean of the process is therefore independent of the particular ensemble waveform used to calculate the time average, i.e., independent of the value of $\Theta$ for the realization. Consequently we have:

\[
\lim_{T \to \infty} E\{\langle \mu_X \rangle_T\} = \mu_X(t) = 0, \qquad \lim_{T \to \infty} \operatorname{Var}\{\langle \mu_X \rangle_T\} = 0. \tag{48}
\]

The random process $X(t)$ is therefore ergodic in the mean (first-order ergodic). Let us now look at the sample ACF of the random process $X(t)$. The sample ACF is again independent of the particular realization of the process, as is evident from:

\[
\lim_{T \to \infty} \langle R_{XX}(\tau) \rangle_T = \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} \cos[\Omega_c t + \Theta]\cos[\Omega_c(t - \tau) + \Theta]\, dt
= \lim_{T \to \infty} \frac{1}{2T}\int_{-T/2}^{T/2} \cos[2\Omega_c t - \Omega_c \tau + 2\Theta]\, dt + \lim_{T \to \infty} \frac{1}{2T}\int_{-T/2}^{T/2} \cos(\Omega_c \tau)\, dt
= \frac{1}{2}\cos(\Omega_c \tau) = R_{XX}(\tau). \tag{49}
\]
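A numerical check of Eq. (49) (a sketch; window length, step count, lags, and phases below are arbitrary choices) shows the time-averaged ACF matching the ensemble ACF for every realization:

```python
import math

def time_average_acf(tau, theta, omega_c=2.0, T=1000.0, steps=100_000):
    """Midpoint-rule estimate of the sample ACF in Eq. (49):
    (1/T) * integral_{-T/2}^{T/2} x(t) x(t - tau) dt for one
    fixed-phase realization x(t) = cos(omega_c*t + theta)."""
    dt = T / steps
    total = 0.0
    for k in range(steps):
        t = -T / 2.0 + (k + 0.5) * dt
        total += (math.cos(omega_c * t + theta)
                  * math.cos(omega_c * (t - tau) + theta))
    return total * dt / T

omega_c = 2.0
# The result matches (1/2) cos(omega_c * tau) regardless of which
# realization (value of theta) we pick -- ergodicity in the ACF.
for theta in (0.0, 2.0):
    for tau in (0.0, 0.7):
        assert abs(time_average_acf(tau, theta)
                   - 0.5 * math.cos(omega_c * tau)) < 0.005
```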

The random process $X(t)$ is therefore ergodic in the ACF (second-order ergodic). The power spectrum of this random signal, i.e., the Fourier transform of the ensemble ACF, can then be computed as:

\[
P_{XX}(\Omega) = \frac{\pi}{2}\left[\delta(\Omega + \Omega_c) + \delta(\Omega - \Omega_c)\right]. \tag{50}
\]

Note that this expression for the power spectrum is identical to the expression for the spectrum of a deterministic sinusoidal signal.
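The delta functions in Eq. (50) follow from writing the ACF of Eq. (45) in exponential form and applying the standard transform pair $e^{j\Omega_0 \tau} \leftrightarrow 2\pi\delta(\Omega - \Omega_0)$:

```latex
R_{XX}(\tau) = \frac{1}{2}\cos(\Omega_c \tau)
             = \frac{1}{4}\left(e^{j\Omega_c \tau} + e^{-j\Omega_c \tau}\right)
\;\xrightarrow{\;\mathcal{F}\;}\;
P_{XX}(\Omega) = \frac{1}{4}\bigl[2\pi\,\delta(\Omega - \Omega_c) + 2\pi\,\delta(\Omega + \Omega_c)\bigr]
               = \frac{\pi}{2}\bigl[\delta(\Omega + \Omega_c) + \delta(\Omega - \Omega_c)\bigr].
```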

