Regime Switching vs Long Memory: How to deal with them? A generalized ARFIMA process with Markov-switching fractional differencing parameter

Gilles de Truchis - Brown bag session of DEFI

Outline:
- Abstract
- Regime Switching
- Long Memory
- Long memory vs Regime Switching
- The MSM-ARFIMA model
- Appraisal and extensions

Hamilton (1989) and the Markov Switching Theory


The Markov-Switching model: intuitive approach

Interest in finding a more flexible way to capture regime changes grew during the 1990s. One of the most influential answers was suggested by Hamilton (1989): the parameters of an autoregressive (AR) model are viewed as the outcome of a discrete-state Markov process. Hamilton investigates the US real GNP growth rate and highlights unobservable business cycles.

[Figure: Quarterly growth rate of US GNP, 1952-1984 (top panel), and the probability of the economy being in contraction (bottom panel).]


The Markov-Switching model: intuitive approach

Intuitively, Hamilton assumes that GNP is governed by a latent process which affects the probability of the economy being in recession (state 1) or in expansion (state 2). The Markov chain is used to model the switches between periods of contraction and expansion. However, we cannot observe the Markov chain directly, and the state has to be inferred from the GNP series itself.

[Figure: an illustrative series together with its latent two-regime state process.]
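This latent-chain idea can be sketched in a few lines: the code below simulates a two-state Markov chain and an observed series whose mean shifts with the hidden state. All numbers (transition probabilities, state means, noise scale) are invented for illustration, not taken from Hamilton's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transition probabilities (illustrative, not Hamilton's
# estimates): the chain stays in state 0 with prob. 0.95 and in
# state 1 with prob. 0.80.
P = np.array([[0.95, 0.05],
              [0.20, 0.80]])

def simulate_chain(P, T, s0=0, rng=rng):
    """Draw a path s_0, ..., s_{T-1} of a first-order Markov chain."""
    states = np.empty(T, dtype=int)
    states[0] = s0
    for t in range(1, T):
        states[t] = rng.choice(len(P), p=P[states[t - 1]])
    return states

s = simulate_chain(P, 200)

# The chain itself is latent; only a series whose mean shifts with
# the state is observed (state means and noise scale are invented).
mu = np.array([1.2, -0.4])
y = mu[s] + rng.normal(scale=0.8, size=s.size)
```

Only `y` would be available to the econometrician; the path `s` is exactly what the filtering machinery below tries to recover.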


The Markov-Switching model: formal approach

More formally, we can define the MSM-AR(p) model by equation (1):

$$y_t = \mu_{s_t} + \sum_{k=1}^{p} \phi_{k,s_t}\,(y_{t-k} - \mu_{s_{t-k}}) + \sigma_{s_t}\,\varepsilon_t, \qquad \varepsilon_t \sim \text{i.i.d. } N(0,1) \tag{1}$$

where $S_t$ is a first-order stationary (or time-homogeneous) Markov chain defined as follows:

$$\forall\, i, j \in E, \quad P(S_{t+1} = j \mid S_t = i) = P(S_t = j \mid S_{t-1} = i) = p_{i,j}$$

Properties of $S_t$:
- $E$ is the state space, with $(i, j) \in E^2$.
- $p_{i,j}$ is the probability of going from state $i$ to state $j$ in one step.
- $S_t$ is irreducible, since it is possible to reach any state from any state.
- $S_t$ is recurrent, since no state is transient; in other words, every state is persistent: $\sum_{n=0}^{\infty} p_{ii}^{(n)} = \infty$.
- $S_t$ is aperiodic, since returns from state $j$ to state $i$ can occur at irregular times.

All these properties ensure that we can apply the strong law of large numbers (hence almost-sure convergence) and obtain stationarity and ergodicity.

N.B. Markov switching can be modelled both in the mean (MSM) and in the intercept (MSI).
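As a concrete illustration of equation (1), the following sketch simulates a two-state MSM-AR(1). Every parameter value is invented for the example; only the recursion follows the model structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-state MSM-AR(1); all numbers below are invented,
# only the recursion mirrors equation (1).
mu    = np.array([1.0, -0.5])   # state-dependent mean mu_{s_t}
phi   = np.array([0.3,  0.6])   # state-dependent AR coefficient phi_{1,s_t}
sigma = np.array([0.5,  1.2])   # state-dependent innovation scale sigma_{s_t}
P     = np.array([[0.9, 0.1],   # transition matrix, rows sum to one
                  [0.2, 0.8]])

T = 500
s = np.empty(T, dtype=int)
y = np.empty(T)
s[0], y[0] = 0, mu[0]
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])
    # y_t = mu_{s_t} + phi_{s_t} (y_{t-1} - mu_{s_{t-1}}) + sigma_{s_t} eps_t
    y[t] = (mu[s[t]]
            + phi[s[t]] * (y[t - 1] - mu[s[t - 1]])
            + sigma[s[t]] * rng.standard_normal())
```

Note that the conditional mean at time $t$ depends on both the current state $s_t$ and the lagged state $s_{t-1}$, which is why the likelihood later conditions on the pair $(i, j)$.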


The Markov-Switching model: formal approach

$S_t$ is governed by a transition matrix $P$, each row of which sums to 1:

$$\forall\, i \in E, \quad \sum_{j \in E} p_{i,j} = 1$$

The dimension of $P$ depends on the assumption about the number of states. For a two-state Markov chain, $P$ is given by:

$$P = \begin{pmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{pmatrix}$$

$p_{11}$ is the probability of being in state 1 and staying there; $p_{12} = 1 - p_{11}$ is the probability of being in state 1 and moving to state 2. Likewise, $p_{22}$ is the probability of being in state 2 and staying there, and $p_{21} = 1 - p_{22}$ is the probability of being in state 2 and moving to state 1.
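These row constraints, together with irreducibility, pin down the ergodic (unconditional) state probabilities. A quick numerical check, with illustrative values of $p_{11}$ and $p_{22}$:

```python
import numpy as np

# Illustrative staying probabilities for a two-state chain.
p11, p22 = 0.9, 0.8
P = np.array([[p11, 1 - p11],
              [1 - p22, p22]])

# Each row of P must sum to one.
assert np.allclose(P.sum(axis=1), 1.0)

# Ergodic probabilities pi solve pi = pi P; in the two-state case the
# closed form is pi_1 = (1 - p22) / (2 - p11 - p22).
pi1 = (1 - p22) / (2 - p11 - p22)
pi = np.array([pi1, 1 - pi1])
assert np.allclose(pi @ P, pi)   # pi is indeed invariant under P
```

The ergodic probabilities are also the natural initial condition for the filtering recursion discussed below.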


The Markov-Switching model: formal approach

As we said previously, in our case the Markov chain is not directly observable. However, assuming that $y_t$ is well specified, we can compute the most probable sequence of hidden states by observing the sequence of events that occur at each $t$.

[Diagram: hidden states $S_1$ and $S_2$ connected by the transition probabilities $p_{12}$ and $p_{21}$, with emission probabilities $e_{ij}$ linking each state to the observations $y_1, \ldots, y_4$.]


The Markov-Switching model: formal approach

Under the assumption that $\varepsilon_t$ in (1) is normally distributed (conditional upon the history $\Omega_{t-1}$), the density of $y_t$ conditional on the regime $s_t$ is given by:

$$f(y_t \mid s_t = (i,j), \Omega_{t-1}; \theta) = \frac{1}{\sigma_{(i,j)}\sqrt{2\pi}} \exp\left( -\frac{(y_t - \hat{y}_t)^2}{2\sigma_{(i,j)}^2} \right)$$

where $\theta = \{\mu_{s_t=(i,j)},\ \phi_{k,s_t=(i,j)},\ p_{11},\ p_{22},\ \sigma^2_{s_t=(i,j)}\}$ and

$$y_t - \hat{y}_t = y_t - \mu_{s_t} - \sum_{k=1}^{p} \phi_{k,s_t}\,(y_{t-k} - \mu_{s_{t-k}})$$

Finally, $\ln f(y_t \mid \Omega_{t-1}; \theta)$ can be obtained from the joint density of $y_t$ and $s_t$ as follows:

$$f(y_t \mid \Omega_{t-1}; \theta) = f(y_t, s_t = 1 \mid \Omega_{t-1}; \theta) + f(y_t, s_t = 2 \mid \Omega_{t-1}; \theta) = \sum_{s=1}^{2} f(y_t \mid s_t = s, \Omega_{t-1}; \theta) \times P(s_t = s \mid \Omega_{t-1}; \theta)$$

Many Markov-switching extensions have been explored. In particular, we can highlight the MSM-AR with time-varying transition probabilities (TVTP) suggested by Filardo (1994), in which a lagged information variable is used to guide the TVTP.
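The regime-marginalised density above is exactly what the Hamilton filter accumulates period by period. A minimal sketch for a two-state MSM-AR(1), keeping the $(i, j)$ pairing of lagged and current states; any parameter values passed in would be illustrative:

```python
import numpy as np

def hamilton_filter(y, mu, phi, sigma, P):
    """Filtered state probabilities and log-likelihood for a two-state
    MSM-AR(1), following the densities in the text: at each t the joint
    density over the pair (s_{t-1}=i, s_t=j) is formed, summed to obtain
    f(y_t | Omega_{t-1}), and renormalised (Bayes) to update the filter."""
    T = len(y)
    p11, p22 = P[0, 0], P[1, 1]
    # initialise with the ergodic probabilities of the chain
    xi = np.array([1 - p22, 1 - p11]) / (2 - p11 - p22)
    filt = np.zeros((T, 2))
    filt[0] = xi
    loglik = 0.0
    for t in range(1, T):
        pred = filt[t - 1][:, None] * P          # P(s_{t-1}=i, s_t=j | Omega_{t-1})
        # residual y_t - mu_j - phi_j (y_{t-1} - mu_i) for every pair (i, j)
        resid = y[t] - mu[None, :] - phi[None, :] * (y[t - 1] - mu[:, None])
        dens = np.exp(-0.5 * (resid / sigma[None, :]) ** 2) \
               / (sigma[None, :] * np.sqrt(2 * np.pi))
        joint = pred * dens                      # f(y_t, s_{t-1}=i, s_t=j | .)
        f_t = joint.sum()                        # f(y_t | Omega_{t-1})
        loglik += np.log(f_t)
        filt[t] = joint.sum(axis=0) / f_t        # marginalise over s_{t-1}
    return filt, loglik
```

Maximising `loglik` over $\theta$ with a numerical optimiser would give the maximum-likelihood estimates; this sketch only evaluates the likelihood for a given $\theta$.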
