
International Journal of Mathematics and Computer Applications Research (IJMCAR)
ISSN 2249-6955, Vol. 3, Issue 2, Jun 2013, 65-78
© TJPRC Pvt. Ltd.

FORECASTING SURFACE AIR TEMPERATURE USING NEURAL NETWORKS

K. ANITHA KUMARI (1), NAVEEN KUMAR BOIROJU (2), T. GANESH (3) & P. RAJASHEKARA REDDY (4)

(1, 3, 4) Department of Statistics, Sri Venkateswara University, Tirupati, Andhra Pradesh, India
(2) Department of Statistics, Osmania University, Hyderabad, Andhra Pradesh, India

ABSTRACT

In this paper, forecasting of the monthly mean of minimum surface air temperature of India using a seasonal autoregressive integrated moving average (SARIMA) model, feed forward neural networks (FFNN) and higher order neural networks (HONN) is discussed. The prediction ability of the models is also tested using the sign test, the Diebold-Mariano test and a bootstrap test procedure for absolute errors. The FFNN and HONN models outperform the SARIMA model in out-of-sample forecasts.

KEYWORDS: Box-Jenkins Methodology, Neural Networks, Prediction Accuracy Tests

INTRODUCTION

Surface air temperature (SAT) is a measurement of the average kinetic energy of the air near the surface of the earth, and it also measures radiation. SAT is very important in all fields of natural science, including physics, geology, chemistry, atmospheric science and biology. SAT plays a vital role in environmental and agricultural issues, the water cycle, the energy cycle, weather forecasts and global climate change. Atmospheric stability is determined by SAT, and hypothermia and frostbite result from changes in SAT. Air temperature prediction is therefore of concern in environmental, industrial, agricultural and health management. Tasadduq et al. (2005) discussed the application of neural networks to the prediction of hourly mean surface temperatures. Afzali et al. (2011) used artificial neural networks to predict the ambient air temperature. Shrivastava et al. (2012) presented a review of applications of neural networks in weather forecasting. Smith (2006) discussed air temperature prediction using neural networks. Stein and Lloret (2001) used the Box-Jenkins methodology for forecasting air and water temperatures. Several other authors, Brunetti et al. (2000), Anisimov (2001), Bodri (2003), Alfaro (2004), Lee and Sohn (2007), Kulkarni et al. (2008), Fan Ke (2009), Hejase and Assi (2012) and Kêmajou (2012), discussed the modelling of air temperatures.

Monthly mean of minimum surface air temperature data (in degrees Celsius) for all India is collected from the Indian Institute of Tropical Meteorology (IITM), Pune, India. The data consist of 1284 monthly observations from 1901 to 2007, of which 101 years of data (1212 monthly observations) during 1901-2001 are used for model fitting and the remaining 6 years of data (72 monthly observations) during 2002-2007 are used as the out-of-sample set to measure the predictability of the selected models using mean absolute error, mean absolute percentage error and root mean squared error. The following section presents the Box-Jenkins and neural network methodologies. Section 3 presents the forecasting models using the seasonal autoregressive integrated moving average (SARIMA) model, feed forward neural networks (FFNN) and higher order neural networks (HONN). Comparison of the models is reported in Section 4 and the final conclusion is presented in Section 5, followed by references.
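As a rough illustration of the sample split described above, the following Python sketch partitions a monthly series into the in-sample and out-of-sample sets. The file name and column names are illustrative assumptions, not the format in which IITM distributes the series.

# A minimal sketch of the in-sample/out-of-sample split described above.
# "min_sat.csv" and its column layout are hypothetical placeholders; the
# IITM series would first need to be arranged as one observation per month.
import pandas as pd

data = pd.read_csv("min_sat.csv", parse_dates=["month"], index_col="month")
series = data["min_sat"]             # monthly mean of minimum SAT (deg C)

in_sample = series["1901":"2001"]    # 1212 observations used for model fitting
out_sample = series["2002":"2007"]   # 72 observations held out for evaluation
print(len(in_sample), len(out_sample))   # expect 1212 and 72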

METHODOLOGY

This section presents the forecasting model building procedures using the Box-Jenkins methodology and neural network methodologies.



Box-Jenkins Methodology

The Box-Jenkins methodology provides a class of univariate time series models, of which the SARIMA model is one; a SARIMA model is often taken as a benchmark for comparison of the models applied to the same data set. The Box-Jenkins methodology is an iterative procedure for building an adequate model for a given time series. The basic class of SARIMA models is denoted SARIMA $(p, d, q) \times (P, D, Q)_s$ and the model is given by

$\phi_p(B)\,\Phi_P(B^s)\,\nabla^d \nabla_s^D Z_t = \theta_q(B)\,\Theta_Q(B^s)\,a_t$

where $Z_t$ is the time series value at time $t$ and $\phi$, $\Phi$, $\theta$ and $\Theta$ are polynomials of order $p$, $P$, $q$ and $Q$ respectively. $B$ is the backward shift operator, $B^s Z_t = Z_{t-s}$, and $\nabla = 1 - B$. The order of seasonality is represented by $s$, the non-seasonal and seasonal difference orders are denoted by $d$ and $D$ respectively, and the white noise process is denoted by $a_t$ (Box et al., 1994).

The Box-Jenkins procedure consists of the following four steps: (1) model identification, where the orders d, D, p, P, q and Q are determined by observing the behaviour of the corresponding autocorrelation function (ACF) and partial autocorrelation function (PACF); (2) estimation, where the parameters of the model are estimated by the maximum likelihood method; (3) diagnostic checking by the "Portmanteau test", where the adequacy of the fitted model is checked by the Ljung-Box statistic applied to the residuals of the model; and (4) forecasting, where forecasts are obtained from an adequate model using the minimum mean squared error method. If the model is judged to be inadequate, steps 1-3 are repeated with different values of d, D, p, P, q and Q until an adequate model is obtained. The detailed procedure of SARIMA model building is explained in Section 3.

Feed Forward Neural Networks

An artificial neural network, usually simply called a neural network, is a mathematical or computational model inspired by the structure and/or functional aspects of biological neural networks. A neural network consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. In a feed forward neural network (FFNN), the only permitted connections are between the outputs of each layer and the inputs of the next layer; no connections exist between the outputs of a layer and the inputs of the same or previous layers. In this topology, the inputs of each neuron are the weighted sum of the outputs from the previous layer, and assigning a weight of zero to a branch is equivalent to having no connection between the corresponding nodes. The inputs are connected to each neuron in the hidden layer via their corresponding weights, and the outputs of the last layer are the outputs of the network. Selecting the best number of hidden neurons involves experimentation; the forward selection method adds hidden neurons until network performance starts deteriorating.

A neural network must be trained before it is applied. Training involves feeding the network with data so that it can learn the relationships among the inputs through its learning rule. The backpropagation algorithm is used for supervised learning of the network; its main idea is to minimize the error, the difference between the expected value and the output of the model, by adjusting the weights between neurons until the error reaches an acceptable value. The output of the network is made to approach the desired output by continually reducing the error between them: the approximation error is calculated, backpropagated from the final layer to the first, and the weights between layers are adjusted so as to reduce it. The approximation error is minimized using the gradient descent optimization technique (Rojas, 1996). Faraway and Chatfield (1998) compared FFNN models with a SARIMA model on their accuracy in forecasting the airline data, and found that the FFNN model also reduces the mean squared errors (MSEs) of out-of-sample prediction. Rao (2011), Naveen Kumar Boiroju (2012) and Zhang et al. (1998) provide comprehensive reviews of the current status of research in this area. Forecasting of minimum SAT using the FFNN model is explained in Section 3.

Higher Order Neural Networks

HONNs have only recently started to be used in time series modelling. A higher order neural network is a feed forward neural network trained with higher order inputs. HONNs use joint activation functions, which reduces the need to establish the relationships between inputs during training. This also reduces the number of free weights, so HONNs are faster to train than even MLPs; however, because the number of inputs can become very large for higher order architectures, orders of 4 and over are rarely used. A further advantage of the reduction in free weights is that the problems of overfitting and local optima affecting the results of neural networks can be largely avoided. For a detailed description of HONNs see Knowles et al. (2005) and Naveen Kumar Boiroju (2012). Forecasting of minimum SAT using HONN is explained in Section 3.

Measures of Errors

The mean absolute error (MAE) measures forecast accuracy by averaging the magnitudes of the forecast errors (the absolute values of the errors):

$\mathrm{MAE} = \frac{1}{N}\sum_{t=1}^{N} \lvert Z_t - \hat{Z}_t \rvert = \frac{1}{N}\sum_{t=1}^{N} \lvert e_t \rvert$

The mean squared error (MSE) is another method for evaluating a forecasting technique:

$\mathrm{MSE} = \frac{1}{N}\sum_{t=1}^{N} (Z_t - \hat{Z}_t)^2 = \frac{1}{N}\sum_{t=1}^{N} e_t^2$

And the root mean squared error (RMSE) is given as

$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{t=1}^{N} (Z_t - \hat{Z}_t)^2} = \sqrt{\frac{1}{N}\sum_{t=1}^{N} e_t^2}$

The mean absolute percentage error is given by

$\mathrm{MAPE} = \frac{1}{N}\sum_{t=1}^{N} \left\lvert \frac{e_t}{Z_t} \right\rvert \times 100.$

These error measures are used to compare the accuracy of the different techniques applied to a time series.
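These definitions translate directly into code; a minimal NumPy sketch with illustrative arrays:

# Forecast accuracy measures defined above; `actual` and `predicted` are
# illustrative arrays of equal length N.
import numpy as np

def mae(actual, predicted):
    return np.mean(np.abs(actual - predicted))

def rmse(actual, predicted):
    return np.sqrt(np.mean((actual - predicted) ** 2))

def mape(actual, predicted):
    return np.mean(np.abs((actual - predicted) / actual)) * 100.0

actual = np.array([10.2, 11.5, 13.1])
predicted = np.array([10.0, 12.0, 12.8])
print(mae(actual, predicted), rmse(actual, predicted), mape(actual, predicted))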

FORECASTING MODELS

This section presents forecasting models for the monthly mean of minimum SAT using the Box-Jenkins methodology and neural networks.



Building SARIMA Model

In this section, the monthly mean of minimum SAT is modeled using the Box-Jenkins methodology. The development of a SARIMA model for any variable involves the following four steps: identification, estimation, diagnostic checking and forecasting.

The time plot of the minimum SAT reveals that the data are seasonal and non-stationary.

Figure 1: Time Plot of Monthly Mean of Minimum Surface Air Temperature in India

The sample autocorrelation function (ACF) is also computed to check whether the time series is non-stationary and seasonal.

Figure 2: Autocorrelation Function for Minimum SAT



From the above ACF, it is observed that the ACF dies out slowly at higher lags, which indicates non-stationarity of the series, and the significant spikes at seasonal lags show that the series is seasonal. A seasonal difference of order one (D = 1) is sufficient to achieve stationarity of the series. The autocorrelation function and partial autocorrelation function (PACF) are computed for the differenced series and are presented below.

Figure 3: Sample ACF for Minimum SAT with Seasonal Difference D=1

Figure 4: Sample PACF for Minimum SAT with Seasonal Difference D=1

From the above ACF and PACF it is observed that the order of p is at most 1, P is at most 4, q is at most 2 and Q is at most 1. All tentative models within these specifications are considered, and the most suitable model is found to be SARIMA $(1, 0, 2) \times (0, 1, 1)_{12}$. The model parameters (without a constant term) are estimated for the selected model using SPSS. The estimates of the parameters are given below.



Table 1: SARIMA Model Parameters (No Transformation)

Parameter             Lag      Estimate     SE         T         Sig.
AR                    Lag 1      0.895     0.040     22.551     0.000
MA                    Lag 1      0.647     0.050     12.949     0.000
MA                    Lag 2      0.113     0.035      3.240     0.001
Seasonal Difference     -        1           -          -         -
MA, Seasonal          Lag 1      0.951     0.012     80.540     0.000

With the above parameters the fitted model is

$(1 - 0.895B)\,\nabla_{12} Z_t = (1 - 0.647B - 0.113B^2)(1 - 0.951B^{12})\,a_t$

The adequacy of the model is tested using the Portmanteau test. For this purpose, the autocorrelations of the residuals are computed for 25 lags, and their significance is tested by the Ljung-Box Q-test statistic. The hypotheses on the model are H0: the selected model is adequate, against H1: the selected model is inadequate.

Table 2: Portmanteau Test

Ljung-Box Q-Test Statistic     DF     Sig.
9.421                          14     0.803

Since the probability corresponding to the Ljung-Box Q-statistic is greater than 0.05, we accept H0 and may conclude that the selected SARIMA model is an adequate model for the given time series on the mean of minimum surface air temperature. One can forecast the future mean of minimum SAT from the fitted model by the minimum mean squared error method.
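The model above is fitted in SPSS; for reference, a hedged sketch of the same steps using statsmodels is given below. The estimates will differ slightly from the SPSS output, and `in_sample` is the 1901-2001 training series assumed in the earlier sketch.

# Fit SARIMA(1,0,2)x(0,1,1)_12 and check adequacy with the Ljung-Box test,
# mirroring the Box-Jenkins steps described above.
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox

model = sm.tsa.SARIMAX(in_sample, order=(1, 0, 2), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)
print(fit.summary())

# Portmanteau (Ljung-Box) test on the residuals at lag 25; a large p-value
# suggests the model is adequate.
print(acorr_ljungbox(fit.resid, lags=[25], return_df=True))

# Out-of-sample forecasts for the 72 months of 2002-2007.
forecasts = fit.forecast(steps=72)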

The above model is used to forecast the future values of the monthly mean of minimum SAT; the out-of-sample forecasts are presented in Table 5. The prediction performance of the model is measured using the error measures and the results are presented in Table 6.

Building FFNN Model

In this section, the building of a forecasting model for minimum SAT using FFNN is discussed. The in-sample data set is partitioned into a training set and a testing set: 70% of the in-sample data is taken as the training set and 30% as the testing set. The feed forward neural network (FFNN) consists of an input layer, a hidden layer and an output layer. The input layer consists of 14 units representing the month (indicators for months 1 to 12) and the values $Z_{t-1}$ and $Z_{t-12}$. The output layer consists of a single neuron representing the forecast value $\hat{Z}_t$ of the series. The number of hidden neurons in the hidden layer is determined using the forward selection method; the optimum number is four. The hyperbolic tangent function is used as the activation function and the scaled conjugate gradient algorithm is used to train the network, until the number of epochs reaches 10,000. SPSS software is used to train the network. With the above specifications the following synaptic weights are obtained.

Table 3: Synaptic Weights of the FFNN Model

Predictor                     H(1:1)    H(1:2)    H(1:3)    H(1:4)
Input Layer
  (Bias)                      -0.118     0.151    -0.298     0.209
  [MONTH=1]                   -0.310    -0.361     0.708     0.144
  [MONTH=2]                   -0.256    -0.043     0.278     0.280
  [MONTH=3]                   -0.360     0.969    -0.278     0.341
  [MONTH=4]                    0.779     0.193    -0.129    -0.384
  [MONTH=5]                    0.715     0.564    -0.031    -0.381
  [MONTH=6]                    0.504     0.207    -0.312    -0.821
  [MONTH=7]                    0.303     0.154    -0.381    -0.567
  [MONTH=8]                    0.288     0.273    -0.422    -0.241
  [MONTH=9]                    0.409    -0.118    -0.371    -0.399
  [MONTH=10]                  -0.253     0.232    -0.286     0.160
  [MONTH=11]                  -0.113    -0.514     0.065     0.151
  [MONTH=12]                  -0.747    -0.457     0.599     0.070
  S(Z_{t-1})                   0.080     0.121     0.260    -0.672
  S(Z_{t-12})                  0.265    -0.187    -0.368     0.103

Output Layer (Min_SAT)
  (Bias)     -0.233
  H(1:1)      0.601
  H(1:2)      0.708
  H(1:3)     -0.411
  H(1:4)     -0.697

The FFNN forecasting model can be constructed from the above synaptic weights as

$\hat{Z}_t = \mu + \sigma \hat{Z}_s$

where $\mu$ and $\sigma$ are the mean and standard deviation of the in-sample data set and

$\hat{Z}_s = -0.233 + 0.601\,H(1{:}1) + 0.708\,H(1{:}2) - 0.411\,H(1{:}3) - 0.697\,H(1{:}4),$

where each hidden unit is the hyperbolic tangent of its weighted inputs from Table 3,

$H(1{:}j) = \tanh\Big(b_j + \sum_{m=1}^{12} w_{mj}\, I(M = m) + w_{13,j}\, S(Z_{t-1}) + w_{14,j}\, S(Z_{t-12})\Big),$

with $M$ = month, $S(Z_{t-1}) = \big(Z_{t-1} - \mu(Z_{t-1})\big)/\sigma(Z_{t-1})$, $S(Z_{t-12}) = \big(Z_{t-12} - \mu(Z_{t-12})\big)/\sigma(Z_{t-12})$ and $I(A)$ an indicator function.

The above model is used to forecast the future values of the monthly mean of minimum SAT; the out-of-sample forecasts are presented in Table 5. The prediction performance of the model is measured using the error measures and the results are presented in Table 6.
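Because the fitted FFNN reduces to the closed-form expression above, a forecast can be computed directly from the weights in Table 3. The sketch below assumes the hidden-unit form given above; the full 4 x 14 hidden weight matrix would be filled in from the table.

# Forward pass of the fitted FFNN: 14 inputs (12 month indicators plus two
# standardized lags) -> 4 tanh hidden units -> 1 linear output, which is
# then de-standardized. W_hidden is a placeholder to be filled from Table 3.
import numpy as np

W_hidden = np.zeros((4, 14))                          # rows H(1:1)..H(1:4)
b_hidden = np.array([-0.118, 0.151, -0.298, 0.209])   # hidden biases (Table 3)
w_out = np.array([0.601, 0.708, -0.411, -0.697])      # output weights (Table 3)
b_out = -0.233                                        # output bias (Table 3)

def ffnn_forecast(month, s_z1, s_z12, mu, sigma):
    """month in 1..12; s_z1, s_z12 are the standardized Z_{t-1}, Z_{t-12}."""
    x = np.zeros(14)
    x[month - 1] = 1.0                     # indicator I(M = month)
    x[12], x[13] = s_z1, s_z12
    h = np.tanh(b_hidden + W_hidden @ x)   # hidden layer activations
    z_s = b_out + w_out @ h                # standardized forecast
    return mu + sigma * z_s                # de-standardized forecast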

Building HONN Model

In this section, the building of a forecasting model for minimum SAT using HONN is discussed. The in-sample data set is partitioned into a training set and a testing set: 70% of the in-sample data is taken as the training set and 30% as the testing set. The HONN model is similar to the FFNN model and consists of an input layer, a hidden layer and an output layer. The input layer consists of 17 units representing the month (indicators for months 1 to 12) and the values $Z_{t-1}$, $Z_{t-12}$, $Z_{t-1}^2$, $Z_{t-12}^2$ and $Z_{t-1} Z_{t-12}$. The output layer consists of a single neuron representing the forecast value $\hat{Z}_t$ of the series. The number of hidden neurons in the hidden layer is determined using the forward selection method; the optimum number is four. The hyperbolic tangent function is used as the activation function and the scaled conjugate gradient algorithm is used to train the network, until the number of epochs reaches 1,000. SPSS software is used to train the network. With the above specifications the following synaptic weights are obtained.

Table 4: Synaptic Weights for the HONN Model

Predictor                     H(1:1)    H(1:2)    H(1:3)    H(1:4)
Input Layer
  (Bias)                       0.232    -0.069    -0.470    -0.355
  [MONTH=1]                   -0.232     0.227     0.274     0.377
  [MONTH=2]                    0.241    -0.220     0.155    -0.070
  [MONTH=3]                   -0.179     0.504    -0.533    -0.549
  [MONTH=4]                   -0.072    -0.821    -0.757     0.092
  [MONTH=5]                   -0.315    -0.064    -0.187    -0.518
  [MONTH=6]                   -0.268    -0.240     0.127    -0.394
  [MONTH=7]                   -0.198    -0.312     0.174    -0.020
  [MONTH=8]                    0.032     0.413     0.465    -0.334
  [MONTH=9]                   -0.059    -0.364     0.230     0.240
  [MONTH=10]                  -0.078    -0.155     0.007     0.893
  [MONTH=11]                   0.156     0.676     0.653    -0.065
  [MONTH=12]                  -0.409     0.169     0.548     0.343
  S(Z_{t-1})                   0.343    -0.394     0.020     0.101
  S(Z_{t-12})                 -0.111    -0.706    -0.139     0.177
  S(Z_{t-1}^2)                 0.544    -0.275    -0.165    -0.469
  S(Z_{t-12}^2)               -0.158    -0.249    -0.099    -0.005
  S(Z_{t-1} Z_{t-12})          0.495     0.094    -0.161     0.086

Output Layer (Min_SAT)
  (Bias)     -0.433
  H(1:1)      0.132
  H(1:2)     -0.585
  H(1:3)     -0.589
  H(1:4)     -0.709

From the above weight matrix the forecasting model can be constructed as

$\hat{Z}_t = \mu + \sigma \hat{Z}_s$

where $\mu$ and $\sigma$ are the mean and standard deviation of the in-sample data set and

$\hat{Z}_s = -0.433 + 0.132\,H(1{:}1) - 0.585\,H(1{:}2) - 0.589\,H(1{:}3) - 0.709\,H(1{:}4),$

where $M$ = month, $S(Z)$ denotes the standardized value of $Z$ and $I(A)$ is an indicator function.

The above model is used to forecast the future values of the monthly mean of minimum SAT; the out-of-sample forecasts are presented in Table 5. The prediction performance of the model is measured using the error measures and the results are presented in Table 6.
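The HONN forecast can be computed in exactly the same way as the FFNN one, with the input vector augmented by the second-order terms; a sketch under the same assumptions (the 4 x 17 hidden weight matrix would be filled in from Table 4):

# HONN forward pass: identical to the FFNN except for the augmented inputs.
import numpy as np

W_hidden = np.zeros((4, 17))                          # placeholder (Table 4)
b_hidden = np.array([0.232, -0.069, -0.470, -0.355])  # hidden biases (Table 4)
w_out = np.array([0.132, -0.585, -0.589, -0.709])     # output weights (Table 4)
b_out = -0.433                                        # output bias (Table 4)

def honn_forecast(month, s_inputs, mu, sigma):
    """s_inputs: standardized [Z_{t-1}, Z_{t-12}, Z_{t-1}^2, Z_{t-12}^2,
    Z_{t-1}*Z_{t-12}]; month in 1..12."""
    x = np.zeros(17)
    x[month - 1] = 1.0                  # month indicator
    x[12:17] = s_inputs                 # five standardized lag terms
    h = np.tanh(b_hidden + W_hidden @ x)
    return mu + sigma * (b_out + w_out @ h)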



Table 5: Out-of-Sample Forecasts from SARIMA, FFNN and HONN Models

COMPARISON OF FORECASTING MODELS

This section presents the error measures and the tests of equal prediction ability of the forecasting models.

Table 6: Error Measures for SARIMA, FFNN and HONN Models

Sample           Error    SARIMA    FFNN     HONN
In-Sample        MAE      0.431     0.425    0.419
                 MAPE     2.809     2.793    2.753
                 RMSE     0.573     0.566    0.557
Out-of-Sample    MAE      0.519     0.440    0.426
                 MAPE     3.186     2.719    2.584
                 RMSE     0.653     0.581    0.558

From the above table, it is observed that the HONN model has the smallest error measures in both the in-sample and out-of-sample sets compared to the SARIMA and FFNN models, and the FFNN model has smaller error measures than the SARIMA model. From this it is observed that the HONN model is good at forecasting minimum SAT.

Testing Equal Forecasting Accuracy with Respect to Absolute Errors

This section presents the results of the sign test, the Diebold-Mariano (DM) test and the bootstrap test for the equal prediction accuracy of the forecasting models with respect to the out-of-sample absolute errors. The detailed procedure of



these tests is presented in the paper of Naveen Kumar Boiroju et al. (2011). The following table presents the test statistic values for testing the null hypothesis of equal prediction accuracy of the models.

Table 7: Equal Prediction Accuracy Tests

Models            Sign Test    DM Test    Bootstrap Test (LDL, UDL)
SARIMA vs FFNN    2.593        2.228      (0.0188, 0.1399)
SARIMA vs HONN    2.828        2.742      (0.0284, 0.1565)
FFNN vs HONN      1.179        1.037      (-0.0109, 0.0385)

The sign test statistic value (2.593) is greater than the critical value (1.96) at the 5% level of significance for the comparison of the SARIMA and FFNN models. Therefore the null hypothesis is rejected and we may conclude that there is a significant difference in forecasting ability between the SARIMA and FFNN models. The sign test statistic value (2.828) is greater than the critical value (1.96) at the 5% level for the comparison of the SARIMA and HONN models. Therefore the null hypothesis is rejected and we may conclude that there is a significant difference in forecasting ability between the SARIMA and HONN models. The sign test statistic value (1.179) is less than the critical value (1.96) at the 5% level for the comparison of the FFNN and HONN models. Therefore the null hypothesis is accepted and we may conclude that there is no significant difference in forecasting ability between the FFNN and HONN models.

The DM test statistic value (2.228) is greater than the critical value (1.96) at the 5% level of significance for the comparison of the SARIMA and FFNN models. Therefore the null hypothesis is rejected and we may conclude that there is a significant difference in prediction ability between the SARIMA and FFNN models. The DM test statistic value (2.742) is greater than the critical value (1.96) at the 5% level for the comparison of the SARIMA and HONN models. Therefore the null hypothesis is rejected and we may conclude that there is a significant difference in prediction ability between the SARIMA and HONN models. The DM test statistic value (1.037) is less than the critical value (1.96) at the 5% level for the comparison of the FFNN and HONN models. Therefore the null hypothesis is accepted and we may conclude that there is no significant difference in prediction ability between the FFNN and HONN models.

The bootstrap test procedure for absolute errors is also applied to test the equal prediction accuracy of the models. The decision limits (0.0188, 0.1399) are obtained using the bootstrap test procedure for the comparison of the SARIMA and FFNN models. Since the hypothetical difference zero does not belong to the interval of decision limits, that is, 0 ∉ (0.0188, 0.1399), the null hypothesis is rejected and we may conclude that there is a significant difference in prediction ability between the SARIMA and FFNN models. The decision limits (0.0284, 0.1565) are obtained for the comparison of the SARIMA and HONN models. Since zero does not belong to this interval, that is, 0 ∉ (0.0284, 0.1565), the null hypothesis is rejected and we may conclude that there is a significant difference in prediction ability between the SARIMA and HONN models. The decision limits (-0.0109, 0.0385) are obtained for the comparison of the FFNN and HONN models. Since zero belongs to this interval, that is, 0 ∈ (-0.0109,



0.0385), the null hypothesis is accepted and we may conclude that there is no significant difference in prediction ability between the FFNN and HONN models. From the above tests, it is observed that the forecasting accuracy of the models is not the same; the empirical evidence shows that the FFNN model forecasts better than the SARIMA model, while the FFNN and HONN models are equally efficient at forecasting minimum SAT.
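For reference, the sign and DM statistics in Table 7 can be computed along the following lines. This is a generic sketch of the two standard tests on paired absolute errors; the bootstrap decision limits follow the procedure of Naveen Kumar Boiroju et al. (2011) and are not reproduced here.

# Sign test and Diebold-Mariano test on paired out-of-sample absolute
# errors e1, e2 of two competing models (NumPy arrays of equal length).
import numpy as np

def sign_test_stat(e1, e2):
    """Normal-approximation sign test on the loss differential d = e1 - e2."""
    d = e1 - e2
    d = d[d != 0]                            # drop exact ties
    n = len(d)
    s = np.sum(d > 0)                        # number of positive differences
    return (s - n / 2.0) / np.sqrt(n / 4.0)  # approximately N(0,1) under H0

def dm_test_stat(e1, e2):
    """DM statistic for one-step forecasts (no autocovariance correction)."""
    d = e1 - e2
    return d.mean() / np.sqrt(d.var(ddof=1) / len(d))

# Usage: compare each statistic with the 5% critical value 1.96, e.g.
# sign_test_stat(abs_err_sarima, abs_err_ffnn)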

CONCLUSIONS

Three different forecasting models, SARIMA, FFNN and HONN, have been developed in this study to predict the mean of minimum surface air temperature in India. The small values of the error measures demonstrate that the developed models are suitable and adequate for forecasting SAT. The forecasting errors of the neural network models are smaller than those of the SARIMA model, and the equal prediction accuracy tests report that the neural network models are significantly different from the SARIMA model. The FFNN and HONN models are equally efficient at forecasting minimum SAT. Hence, neural networks are robust models for forecasting the mean of minimum surface air temperature.

ACKNOWLEDGEMENTS

The author is very thankful to the Department of Science and Technology (DST), India, for providing the INSPIRE fellowship to carry out this research work.

REFERENCES

1. Afzali, M., Afzali, A. & Zahedi, G. (2011). Ambient Air Temperature Forecasting Using Artificial Neural Network Approach. 2011 International Conference on Environmental and Computer Science, IPCBEE Vol. 19, IACSIT Press, Singapore.

2. Alfaro, E. (2004). A Method for Prediction of California Summer Air Surface Temperature. Eos, Vol. 85, No. 51, 21 December 2004.

3. Anisimov, O. A. (2001). Predicting Patterns of Near-Surface Air Temperature Using Empirical Data. Climatic Change, Vol. 50, No. 3, 297-315.

4. Bodri, L. & Čermák, V. (2003). Prediction of Surface Air Temperatures by Neural Network, Example Based on Three-Year Temperature Monitoring at Spořilov Station. Studia Geophysica et Geodaetica, Vol. 47, No. 1, 173-184.

5. Box, G. E. P., Jenkins, G. M. & Reinsel, G. C. (1994). Time Series Analysis: Forecasting and Control, 3rd ed. Englewood Cliffs, N.J.: Prentice Hall.

6. Brunetti, M., Buffoni, L., Maugeri, M. & Nanni, T. (2000). Trends of minimum and maximum daily temperatures in Italy from 1865 to 1996. Theoretical and Applied Climatology, 66, 49-60.

7. Fan, K. (2009). Predicting Winter Surface Air Temperature in Northeast China. Atmospheric and Oceanic Science Letters, Vol. 2, No. 1, 14-17.

8. Faraway, J. & Chatfield, C. (1998). Time series forecasting with neural networks: a comparative study using the airline data. Journal of the Royal Statistical Society, Series C, Vol. 47, No. 2, 231-250.

9. Hejase, H. A. N. & Assi, A. H. (2012). Time-Series Regression Model for Prediction of Mean Daily Global Solar Radiation in Al-Ain, UAE. ISRN Renewable Energy, Vol. 2012, Article ID 412471, 11 pages.

10. Kêmajou, A., Mba, L. & Meukam, P. (2012). Application of Artificial Neural Network for Predicting the Indoor Air Temperature in Modern Building in Humid Region. British Journal of Applied Science & Technology, 2(1), 23-34.

11. Knowles, A., Hussein, A., Deredy, W., Lisboa, P. & Dunis, C. L. (2005). Higher-Order Neural Networks with Bayesian Confidence Measure for Prediction of EUR/USD Exchange Rate. CIBEF Working Papers. Available at www.cibef.com.

12. Kulkarni, M. A., Patil, S., Rama, G. V. & Sen, P. N. (2008). Wind speed prediction using statistical regression and neural network. Journal of Earth System Science, 117, No. 4, 457-463.

13. Lee, J. H. & Sohn, K. (2007). Prediction of monthly mean surface air temperature in a region of China. Advances in Atmospheric Sciences, Vol. 24, No. 3, 503-508.

14. Naveen Kumar Boiroju (2012). Forecasting Foreign Exchange Rates using Neural Networks. Unpublished Ph.D. Thesis, Department of Statistics, Osmania University.

15. Naveen Kumar Boiroju, Ramu, Y., Venugopala Rao, M. & Krishna Reddy, M. (2011). A bootstrap test for equality of mean absolute errors. ARPN Journal of Engineering and Applied Sciences, 6(5), 9-11.

16. Rao, S. S. (2011). Forecasting of Monthly Rainfall in Andhra Pradesh using Neural Networks. Unpublished Ph.D. Thesis, Department of Statistics, Osmania University.

17. Rojas, R. (1996). Neural Networks: A Systematic Introduction. Springer-Verlag.

18. Shrivastava, G., Karmakar, S., Kowar, M. K. & Guhathakurta, P. (2012). Application of Artificial Neural Networks in Weather Forecasting: A Comprehensive Literature Review. International Journal of Computer Applications, Vol. 51, No. 18, August 2012.

19. Smith, B. A. (2006). Air temperature prediction using artificial neural networks. M.Sc. Thesis, University of Georgia.

20. Stein, M. & Lloret, J. (2001). Forecasting of Air and Water Temperatures for Fishery Purposes with Selected Examples from Northwest Atlantic. Journal of Northwest Atlantic Fishery Science, Vol. 29, 23-30.

21. Tasadduq, I., Rehman, S. & Bubshait, K. (2005). Application of neural networks for the prediction of hourly mean surface temperatures in Saudi Arabia. Renewable Energy, 25, 545-554.

22. Zhang, G., Patuwo, B. E. & Hu, M. Y. (1998). Forecasting with Artificial Neural Networks: The State of the Art. International Journal of Forecasting, 14, 35-62.


