ARIMA and Neural Networks. An application to the real GNP growth rate and the unemployment rate


division and linear activation function. More specifically, the input layer receives the input vector x and distributes the data to the pattern layer. Each neuron i in the pattern layer generates an output

θ_i = exp( −‖x − x_i‖² / (2σ_i²) )

and presents the results to the summation layer. In this layer the numerator and denominator neurons compute, respectively, the weighted and simple sums of the pattern-layer outputs: the numerator is S_j = Σ_i w_ij θ_i and the denominator is S_d = Σ_i θ_i. In the output layer the outputs are computed as Y_j = S_j / S_d. We must mention that the pattern layer consists of 24 units. The smoothing parameter for GNP is set at 0.01 and for the unemployment rate at 0.05, based on the lowest train and test errors. In our case we propose the AR-GRNN model (Li et al., 2007), in which the output is the data vector y_t and the inputs are the lagged data y_{t-1}, y_{t-2}, …, y_{t-p}. So the general form of the AR-GRNN is defined as

y_t = F(y_{t-1}, y_{t-2}, …, y_{t-p})    (4)

where F is the function produced by the GRNN network. In the case of unemployment, however, we consider the first differences, because we suspect that unemployment is probably not stationary, as the KPSS test indicates, so we apply the following AR(p) function:

Δy_t = F(Δy_{t-1}, Δy_{t-2}, …, Δy_{t-p})    (5)
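The pattern/summation/output structure above can be sketched in a few lines of NumPy. This is a minimal illustration of a standard GRNN forward pass, not the authors' code: the function name grnn_predict and the use of the training targets y_i as the weights w_i are assumptions consistent with Specht-type GRNNs.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.01):
    """GRNN prediction.

    Pattern layer:   theta_i = exp(-||x - x_i||^2 / (2*sigma^2))
    Summation layer: S_num = sum_i w_i * theta_i  (here w_i = y_i),
                     S_den = sum_i theta_i
    Output layer:    y_hat = S_num / S_den
    """
    X_train = np.atleast_2d(X_train)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to patterns
        theta = np.exp(-d2 / (2.0 * sigma ** 2))  # pattern-layer outputs
        s_num = np.dot(y_train, theta)            # numerator neuron
        s_den = np.sum(theta)                     # denominator neuron
        preds.append(s_num / s_den if s_den > 0 else y_train.mean())
    return np.array(preds)
```

Because the output is a θ-weighted average of the training targets, a constant target series is reproduced exactly, and the smoothing parameter σ controls how local that averaging is (hence its selection here by train/test error).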

We apply relations (4) and (5) to all the neural network models; specifically, we apply AR(1) for GNP and AR(2) for the first differences of the unemployment rate. The technique is the following. Suppose that we have quarterly output data for a period, e.g. 1948:Q1–2006:Q4, which is the variable y_t. If we have AR(1), then we obtain y_{t-1}, which is the output data with one lag, i.e. the same series shifted back one quarter.
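The lag construction just described can be written as a small helper that builds the AR(p) design for the AR-GRNN: inputs (y_{t-1}, …, y_{t-p}) and target y_t, losing the first p observations. The helper name lagged_matrix is illustrative; for unemployment the same helper would be applied to the first differences (e.g. np.diff of the series) with p = 2.

```python
import numpy as np

def lagged_matrix(y, p):
    """Return (X, target) where column k of X holds lag k+1 of y
    and target is y_t; the first p observations are lost to lagging."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([y[p - k - 1 : len(y) - k - 1] for k in range(p)])
    return X, y[p:]
```

For an AR(1) on GNP, X is simply the series shifted back one quarter; for the AR(2) on differenced unemployment, X has two columns, the first and second lags of Δy_t.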

