Cover

68

vol. 18 oct. '10

This edition:

Modelling the Willingness to Take Part in a Voluntary HIV Test

And:

Interview with Judith Lammers A State-Space Model for Residential Real Estate Valuation Interview with Prof. David A. Jaeger

What if her portion of kibbeling cost 20 euros tomorrow? She can't walk past the fishmonger without picking up a portion of kibbeling. And then she might as well get a herring sandwich and a pound of cod for tonight. Why not. But what if inflation were so high that a portion of kibbeling suddenly cost 20 euros? And a herring ten? That is why De Nederlandsche Bank (DNB), as part of the European System of Central Banks, contributes to sound monetary policy. We use the interest rate to curb inflation and thus safeguard a stable euro. Monetary policy is not DNB's only task. We also supervise financial institutions and contribute to a payment system that is as smooth and secure as possible. In this way we work for the financial stability of the Netherlands, because trust in our financial system is the precondition for prosperity and a healthy economy. Do you want to contribute? Then visit www.werkenbijdnb.nl. | Economists | Econometricians

Working on trust.

Colofon Chief editor Ewout Schotanus Editorial Board Ewout Schotanus Editorial Staff Daniëlla Brals Winnie van Dijk Tara Douma Ron Stoop Chen Yeh Design United Creations © 2009 Lay-out Taek Bijman Maartje Gielen Cover design ©Stockvault (edit by Michael Groen) Circulation 2000

“How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?” – Sherlock Holmes – by: Daniëlla Brals

A free subscription can be obtained at www.aenorm.eu. Advertisers DNB KPMG Towers Watson Information about advertising can be obtained from Axel Augustinus at aaugustinus@vsae.nl. The opinions expressed in the articles are not necessarily those of the board of the VSAE, the board of Kraket or the editorial staff. Nothing from this magazine may be reproduced without permission of the VSAE or Kraket. No rights can be derived from the content of this magazine. ISSN 1568-2188 Editorial staff addresses VSAE Roetersstraat 11, E2.02 1018 WB Amsterdam tel. 020-5254134

Kraket De Boelelaan 1105 1081 HV Amsterdam tel. 020-5986015

The academic year has just started. Probably you are now stuck reading Bain and Engelhardt, Davidson and MacKinnon, Cameron and Trivedi or some other heavy literature. And while you are squeezing the last bit of life out of your grey cells, you might begin to wonder whether any of it makes any sense at all. Why apply linear models to a world that is clearly not flat? Why bother looking for a unit root, if the probability of outcome one is exactly zero? And if you can choose to believe in the axiom of choice, isn't the axiom then true by definition?

If you ever had a question surpassing the curriculum, now is the opportunity to ask. AENORM has introduced a new section on the final pages, where we challenge you to ask questions that are as creative, ingenious and provoking as you can make them. And I will chase distinguished professors of econometrics all over the world to have your queries answered. Don't you wonder why we assume rationality, when the world is filled with so much insanity?

Insanity is definitely what comes to mind thinking about the never-ending Israel-Palestine conflict. An interesting paper about this conflict by David Jaeger is summarized in this AENORM. But there are more incomprehensible things in this world. What about the existence of global poverty? How come we haven't been able to solve that? Perhaps some of you might choose to tackle such issues instead of derivative markets and returns on investment. We are spammed about the credit crisis, but what about the hunger crisis? Did you know that one billion people suffer from hunger and poverty today?

As you will have noticed, this edition of AENORM is about questions and awareness. It is also the Econometric Game special edition and therefore devoted to the HIV problem in Sub-Saharan Africa. We present a very nice article by the winning team from Monash University with their solution to the Econometric Game case.
For those of you interested in issues closer to home, like for example procuring one, there is a very interesting article on house prices in Amsterdam. So enjoy reading our magazine and never forget that econometrics is the perfect tool for answering truly important questions!

AENORM

vol. 18 (68)

October 2010

1

Modelling the Willingness to Take Part in a Voluntary HIV Test

04

by: Team Monash University This is the winning paper of the Econometric Game 2010. The paper focuses on both cases that had to be solved. The first case required an investigation into the modelling of individuals' choices as to whether to take the free test for HIV. The second case focuses, amongst other things, on examining the interdependency of spousal decisions in determining whether the test was taken. Approximating the problem with a linear specification, it was found that there was a positive relationship in both directions. However, the household head has a less significant effect on their spouse than the other way around.

Interview with Prof. David A. Jaeger

12

by: Ewout Schotanus David A. Jaeger received his Ph.D. (economics, 1995) from the University of Michigan and was the first recipient of the W.E. Upjohn Institute Dissertation Award. His research focuses on immigration and migration, education, conflict, and applied econometrics. His work has been published in the American Economic Review, among others. In April 2010 David Jaeger was in Amsterdam to be the chairman of the jury of the Econometric Game 2010. He also gave a presentation on the Econometric Game Congress regarding one of his latest papers.

An Econometric View of Conflict: Dynamics of Violence in the Palestinian-Israeli Conflict

15

Summarized by: Chen Yeh While studying scenarios of violence might not seem the domain of the traditional economist, Jaeger and Paserman (2008, American Economic Review) empirically analyze fatalities in the Palestinian-Israeli conflict by using econometric methods. Their findings are somewhat surprising as they seem to contradict a popular notion of the conflict between these two groups: violence between the Israelis and Palestinians is not likely to be a vicious cycle of ongoing conflict.

A Discrete-Time Queueing Model with Abandonments

18

by: Rein Nobel and Suzanne van der Ster Traditionally, two types of models dominate the queueing literature: either it is assumed that all customers wait in line until they are served successfully (delay models), or the assumption is made that customers who find all servers busy upon arrival are rejected and leave the system forever without being served (loss models). However, it is quite common that customers who find all servers busy upon arrival do not wait in line but leave the system temporarily and try to reenter it some (random) time later. Queueing models which incorporate this phenomenon are called retrial models. Apart from the primary arrivals, this article also copes with the customers who try to enter the system anew after one or more unsuccessful attempts.



BSc - Recommended for readers of Bachelor-level MSc - Recommended for readers of Master-level PhD - Recommended for readers of PhD-level

A State-Space Model for Residential Real Estate Valuation

23

by: Marc Francke All property in the Netherlands has to be appraised yearly. Yearly valuation has only been made possible with the help of models. The number of real estate appraisers is simply too small to value the more than 7 million residential properties. This paper describes the statistical model that is used by Ortec Finance to value residential real estate for the local government, housing corporations, and mortgage providers. Transaction prices are explained by housing characteristics, location and time in a hierarchical trend model. In this state-space model the impact of time on transaction prices is modeled in an advanced and flexible manner. Estimation results are provided for Amsterdam.

Interview with Judith Lammers

27

by: Ewout Schotanus Judith Lammers has done extensive research in the field of the AIDS epidemic in Sub-Saharan Africa. She obtained her PhD at Tilburg University with her dissertation "HIV/AIDS, Risk, and Intertemporal Choice." Her research interests lie in behavioural economics. She studies risk behaviour and risk perception, including the anticipation and prevention of health shocks in Africa. Her article "HIV/AIDS, Risk Aversion and Intertemporal Choice", which she published with Professor Sweder van Wijnbergen, led to the theme of the Econometric Game 2010.

Statement: Publication Bias

30

by: Daniëlla Brals and David Hollanders It is generally more difficult to publish non-significant results than significant findings. Indeed, in small samples (when tests have low power) a null-finding is arguably less convincing than a significant finding. However, when testing a true null hypothesis with several data sets, a significant result will arise sooner or later. Not publishing the non-findings can then create the misleading impression that the published result is significant. Consequently, if only significant results are published, the published results are not truly significant, as it is unknown whether there were any unpublished non-findings. So, a higher willingness to publish null-findings is desirable.

Puzzle

31

Facultive

32


Econometrics

Econometric Game 2010: Modelling the Willingness to Take Part in a Voluntary HIV Test by: Timothy Weterings and Cameron Chisholm (Monash University) The following paper is a summary of the analysis performed by the Monash University team at the Econometric Game in Amsterdam, April 2010. Teams were given a dataset on AIDS prevalence and other characteristics of individuals from an unspecified African nation, as well as a series of questions directing the analysis to be undertaken. At the end of the survey, each participant was given the choice of whether to have a free blood test for AIDS as well as other illnesses. The first day of competition required an investigation into the modelling of individuals' choices as to whether to take the free test for HIV. The second day's analysis focussed, amongst other things, on examining the interdependency of spousal decisions in determining whether the test was taken. Approximating the problem with a linear specification, it was found that there was a positive relationship in both directions. However, the household head has a less significant effect on their spouse than the other way around.

Introduction

Fighting the prevalence of HIV is a great challenge for many of the world's leaders, and defeat in this fight will have grave consequences on the lives of many. The World Health Organization, in its recent report titled "AIDS Epidemic Update 2009", states that in 2008 around 35 million people were living with AIDS (2.7 million more than in 2007); more than two million of them are children under 15. The report also claims that over the last 20 years, HIV testing and counselling have helped millions of people to find out their HIV status. As a result, these people have been able to help slow the spread of this disease and manage the consequences of HIV infection better. This can have significant social and economic benefits, like improvements in labour supply, a decrease in government

Team Monash University The winning team of the Econometric Game 2010 consisted of the following participants: Cameron Chisholm, whose interests lie in microeconometrics and development economics; Taya Dumrongrittikul, whose research interests are development economics and panel data analysis; Behrooz Hassani Mahmooei, who is doing a PhD researching computational modeling of sustainable development, conflict and climate change; Tim Weterings, who is interested in discrete choice modeling and financial econometrics; and Wenying Yao, who is interested in the field of time series analysis and macroeconometrics.


and household spending, an increase in private savings and also reductions in poverty, especially for the countries highly affected by the disease¹. Using the data provided by the Econometric Game 2010 Committee on adult individuals from an anonymous African nation, this report uses econometric techniques to investigate factors associated with taking a HIV test. The second part attempts to analyse whether the decision for the head of a household to take a HIV test is independent of the spouse's decision. In a general overview of the literature, the main factors associated with HIV testing willingness can be categorized in four groups:

1. Personal attributes, like age, sex, marital status and religion;
2. Social factors, such as stigmatization, family matters and neighbourhood;
3. Infrastructure access, like health and education availability;
4. Sex-related knowledge and behaviours, like sexual habits, knowledge of AIDS and use of condoms.

¹ The Impact of HIV/AIDS on the South African Economy: A Review of Current Evidence, by Booysen, F., Geldenhuys, J. and Marinkov, M.

Data Issues

Survey data usually needs to be altered in a number of


ways for econometric analysis to be effective and easier to interpret. In this case, a number of indicator variables were created for demographic factors such as age, marital status and religion. Income was assumed to influence decision making at the household level, so a new variable for total household income (logged) was created. Health variables included indicators for changes in reported health over twelve months, and an indicator for chronic disease. Additional indicators for whether a person has ever had a diabetes test or a blood pressure test are included to account for attitudes towards testing and health in general. There is a strong potential that health-related variables are endogenous, and this was taken into account in the analysis.

Personality traits

Questions were asked of each participant in order to help determine their score on four of the "big five" personality traits described in Srivastava (2010). The survey provided answers to a number of personality-related questions, each with five responses usually ranging from 'strongly agree (5)' to 'strongly disagree (1)'. To reduce the number of variables and hence have a more parsimonious model, principal components analysis (PCA) was run to give each person a 'score' for the four personality traits; this can be thought of as a latent propensity. PCA assumes that the scores are a linear function of the question answers. However, this linear relationship is at odds with the ordinal, rather than cardinal, properties of the Likert scale variables. That is, when answering such a question, participants that give a response of '2', for example, would not necessarily exhibit twice the propensity to exhibit a character trait as a person that answers '1'. To deal with this, each personality question variable was transformed into five binary variables, indicating a person's response. For PCA, one of the five variables is removed for each question to avoid perfect multicollinearity.
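The dummy-expansion step can be sketched as follows. This is a minimal illustration on simulated Likert responses; the sample size, the number of questions and the use of a plain SVD-based first principal component are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_questions = 500, 4
likert = rng.integers(1, 6, size=(n_people, n_questions))   # responses 1..5

# expand each question into four binary indicators, dropping the
# 'strongly disagree (1)' level to avoid perfect multicollinearity
cols = [(likert[:, q] == level).astype(float)
        for q in range(n_questions) for level in (2, 3, 4, 5)]
D = np.column_stack(cols)

# first principal component of the centred indicator matrix,
# used as a latent trait 'score' for each person
Dc = D - D.mean(axis=0)
_, _, vt = np.linalg.svd(Dc, full_matrices=False)
score = Dc @ vt[0]
```

Each question contributes four indicators (levels 2 to 5), with the omitted level acting as the reference category.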
This is the method endorsed by Filmer & Pritchett (2001).

Dealing with missing observations

A number of potentially relevant variables contained missing observations. Including these variables without accounting for the missing observations creates the problem of a reduced sample, as well as potential bias if the occurrence of missing data is endogenous. The missing values for these variables were imputed using a random regression imputation method (Gelman & Hill, 2006). For each variable with missing data:

- The non-missing observations were regressed against the other socio-demographic variables used in the model.
- For each of the non-missing observations, the predicted value from the regression is subtracted from the actual value to obtain a vector of errors.

- For each of the missing observations, a bootstrapping method was then used to obtain an error, which was added to the point prediction found by the linear model.

The random regression imputation method was used rather than a simple regression imputation method in order to ensure a degree of uncertainty in the missing values, and to improve variation.
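A compact sketch of this three-step random regression imputation on simulated data (the covariates, coefficients and the 20% missingness rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # covariates
v = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)      # variable to impute
miss = rng.random(n) < 0.2
v[miss] = np.nan                                             # 20% missing

obs = ~np.isnan(v)
# 1) regress the non-missing observations on the covariates
coef = np.linalg.lstsq(X[obs], v[obs], rcond=None)[0]
# 2) residuals of the non-missing observations
resid = v[obs] - X[obs] @ coef
# 3) point prediction plus a bootstrapped residual for each gap
v_imp = v.copy()
v_imp[~obs] = X[~obs] @ coef + rng.choice(resid, size=(~obs).sum(), replace=True)
```

Drawing residuals rather than using the bare point predictions is what preserves variation in the imputed values.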

Factors influencing individuals' uptake of the free HIV test

In order to model the binary choice of whether to take the test, a "naive" probit model was first estimated. The probit model assumes that there is some underlying propensity for participant i to undertake the blood test, yi*, that is a linear function of parameters:

yi* = xi'β + ui

The choice of outcome, yi, is then expressed as

yi = 1 if yi* > 0; yi = 0 if yi* ≤ 0

Under the assumption that u ~ N(0,1), this leads to the log-likelihood function:

l(β) = Σi { yi lnΦ(xi'β) + (1 − yi) ln[1 − Φ(xi'β)] }

The key issue with this model is that the blood pressure test indicator variable is likely to be endogenous, as unobserved factors that influenced the participant's decision to take a blood pressure test are likely to also influence whether participants take the free HIV test. In order to deal with a single binary endogenous regressor, Dubin & Rivers (1990) suggest a bivariate probit. The framework is:

y1* = x'β + γy2 + ε1
y2* = z'δ + ε2
corr(ε1, ε2) = ρ

where y2* is the latent propensity for an individual to take a blood pressure test, with y2 = 1 if y2* > 0. This implies the following log-likelihood function:

l(θ) = Σ { y1y2 lnΦ2(x'β + γ, z'δ, ρ) + (1 − y1)(1 − y2) lnΦ2(−x'β, −z'δ, ρ) + y1(1 − y2) lnΦ2(x'β, −z'δ, −ρ) + (1 − y1)y2 lnΦ2(−x'β − γ, z'δ, −ρ) }

where Φ2(a, b, ρ) refers to the bivariate normal CDF



evaluated at a and b with correlation ρ. The same logic could be applied to extend to a multivariate probit if more variables are endogenous (Arendt & Holm, 2006). However, it is difficult and time-consuming to estimate multivariate normal integrals, and this was therefore considered infeasible under the conditions of the Econometric Game. An alternative presented by Arendt & Holm (2006) is to use an instrumental variables probit model, implemented with a two-step procedure. This involves first estimating a system of regressions for the binary endogenous variables. The binary equation of interest is then estimated by maximum likelihood, conditional on the estimated parameters from the first stage. For identification purposes, there must be at least as many unique instruments as there are endogenous variables. A full description of the procedure is given in Rivers & Vuong (1988). The first stage assumes the endogenous variables are continuous, which they clearly are not. However, despite the linear probability model having the flaws of unbounded probabilities and heteroskedasticity, it is known to estimate marginal effects close to those of other binary choice models. Given the time constraints, the estimation benefits were considered to outweigh the disadvantages. In addition to the indicator variable measuring whether a person has had a blood pressure test, a diabetes test indicator and a binary variable for chronic disease were also assumed to be endogenous. Unique instruments for those variables include height and weight, household size and previous health conditions; these were all significantly correlated with the endogenous health variables, but not with the HIV test indicator.
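To make the naive probit concrete, its log-likelihood can be maximised directly. The sketch below does so on simulated data; the data-generating process and parameter values are assumptions for illustration, not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
x = np.column_stack([np.ones(n), rng.normal(size=n)])  # constant + one regressor
beta_true = np.array([-0.5, 1.0])
y = (x @ beta_true + rng.normal(size=n) > 0).astype(float)

def neg_loglik(beta):
    xb = x @ beta
    # l(beta) = sum_i [ y_i ln Phi(xi'b) + (1 - y_i) ln(1 - Phi(xi'b)) ],
    # computed stably via logcdf
    return -np.sum(y * norm.logcdf(xb) + (1 - y) * norm.logcdf(-xb))

beta_hat = minimize(neg_loglik, np.zeros(2), method="BFGS").x
```

With a sample of this size the maximiser recovers the true coefficients closely, since the probit log-likelihood is globally concave.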

Estimation results

Estimation of the aforementioned models obtained the results shown in appendix 1. The first notable result is that, at least concerning the significance of variables, endogeneity appears to have had an effect. In fact, while health-related variables are considered strongly significant in the probit model, these variables are not significant once endogeneity is taken into account. As a result, further analysis of results will focus on the bivariate and IV probits. Interestingly, the stigma index did not seem to play a large role, at odds with the findings of the correlation study by Kalichman & Simbayi (2003). An explanation for this could be that, in this case, an index has been constructed, while in Kalichman & Simbayi (2003) each potential source of stigma was considered separately. The fact that many other variables are controlled for in the current model may also have reduced the significance of such stigmas.

The only personality trait found to be significant was conscientiousness. Perhaps more surprising is that the estimated coefficients for this trait are negative (i.e. more conscientious people are less likely to take the test), given that these people may be less likely to be impulsive and risk-taking (Srivastava, 2010). This may be a result of conscientious people liking to be in control and therefore more averse to the risk of being told they have HIV. The most significant variable was the age at which individuals first engaged in sexual intercourse (significant at 1%). The positive coefficient on this variable indicates that the later an individual first has sexual intercourse, the more likely they are to take the HIV test. This may reflect factors such as the individual's sensibility.

Sensitivity analysis

IV probit models were also estimated separately for different genders, for household heads, and for Muslims and non-Muslims, as per appendix 2². Comparing males and females, it is interesting to note that a higher number of sexual partners reduced the likelihood that a female would take a HIV test, but was not significant for males. On the other hand, married men are less likely to take the test compared to single or divorced/widowed men, but this effect was insignificant for females. Muslim men are also less likely to take the test than Muslim women. Running the model on those who are a household head did not show any new significant results, and the results for the Muslim sample were also relatively consistent with the entire sample.

Investigation of the effects of partner decisions on HIV testing uptake

The second round of analysis focussed on the effect of household head/spousal decisions. The problem can be split into two components. Firstly, the latent propensity of the head of household i to take the test for HIV, yi*, is assumed to be determined by some linear function that is dependent on a vector of variables, xi, as well as the (binary) decision of their spouse, zi:

yi* = xi'β1 + γzi + εi

The choice of outcome, yi, is then expressed as

yi = 1 if yi* > 0; yi = 0 if yi* ≤ 0

Then, under the assumption that ε ~ N(0,1), this leads to the following log-likelihood function for the model:

l(β1, γ) = Σi { yi lnΦ(xi'β1 + γzi) + (1 − yi) ln[1 − Φ(xi'β1 + γzi)] }

² Results for non-household heads and non-Muslims displayed little significance and therefore have not been reported.



A similar model, estimated separately, considers the decision for each spouse. Assume the latent propensity equation for the spouse of household i is

zi* = wi'β2 + δyi + ui

with zi = 1 if zi* > 0 and zi = 0 if zi* ≤ 0.

Assuming u ~ N(0,1), the log-likelihood will be similar to the one above. The former model assumes that the spouse's decision is not correlated with the household head's decision, while the latter assumes the household head's decision is uncorrelated with the spouse's decision; in reality this is unlikely to be true. If both decisions are correlated with each other we have the simultaneous equations:

yi* = xi'β1 + γzi + εi
zi* = wi'β2 + δyi + ui

For the household head's propensity:

yi* = xi'β1 + γ(wi'β2 + δyi* + ui) + εi
yi* = xi'β1 + γwi'β2 + γδyi* + γui + εi
yi*(1 − γδ) = xi'β1 + γwi'β2 + γui + εi
yi* = xi'β1/(1 − γδ) + γwi'β2/(1 − γδ) + (γui + εi)/(1 − γδ)
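The substitution can be checked numerically: solving the two latent equations jointly for arbitrary simulated values should reproduce the reduced form. All values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
xb1, wb2, gamma, delta, u, eps = rng.normal(size=6)

# solve the two latent equations jointly:
#   y* = xb1 + gamma*z* + eps,   z* = wb2 + delta*y* + u
A = np.array([[1.0, -gamma], [-delta, 1.0]])
y_star, z_star = np.linalg.solve(A, np.array([xb1 + eps, wb2 + u]))

# reduced form for y* obtained by substitution
y_reduced = (xb1 + gamma * wb2 + gamma * u + eps) / (1 - gamma * delta)
```

The two values agree whenever γδ ≠ 1, which is exactly the condition for the system to have a unique solution.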

As yi is a monotonic function of yi*, assuming γ ≠ 0 the error terms are correlated and the equations are endogenous. In order to deal with this, it may seem reasonable to estimate a bivariate probit (BVP) model, taking into account the correlation of the error terms:

yi* = xi'β1 + γzi + εi
zi* = wi'β2 + δyi + ui
corr(εi, ui) = ρ

However, such a model is not identified, as the BVP requires a triangular (recursive) structure. To estimate a BVP model we would have to assume that one of the two decisions is independent. A three-stage least squares (3SLS) model was instead used; although this is a linear approximation, it does not require a triangular structure. As mentioned previously, linear probability models typically give good approximations of the marginal effects. To improve the results, exclusion restrictions were applied across the equations; wi and xi are not constructed³

using exactly the same variables.

Estimation results

Estimation results from the separate individual choice models, as well as the bivariate model estimated by 3SLS, can be found in appendices 3 to 7³. For the separately estimated models of individual choice, each partner's decision appears to affect the other's, even (asymptotically) significant at the 1% level. This is consistent with the 3SLS model results. However, the partner variable coefficients in the one-way models are more significant than those in the 3SLS case. This suggests that ignoring the endogeneity does have an effect on the outcome. For the 3SLS model results, it should be noted that the unbounded probability problem arose (some calculated probabilities were found to be below zero or above one). If this is ignored, however, an interesting result is found. While the choices made by household heads are well predicted (69.7% correct vs. 63.2% for the constant model), the model only made marginal gains over the constant model when predicting the choices of spouses; it appears that the spouse has more influence on the household head's decision than the other way around, but the effects are still positive and significant in both directions.
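As a rough sketch of the linear simultaneous system, the simulation below estimates each equation by two-stage least squares, which is the per-equation core of 3SLS (full 3SLS adds a GLS step that exploits the cross-equation error covariance). The variable names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)   # exogenous, appears only in the head's equation
w = rng.normal(size=n)   # exogenous, appears only in the spouse's equation
e1 = rng.normal(size=n)
e2 = rng.normal(size=n)

gamma, delta = 0.4, 0.3  # hypothetical simultaneous effects (gamma*delta != 1)
d = 1 - gamma * delta

# reduced form of: y = gamma*z + x + e1,  z = delta*y + w + e2
y = (x + gamma * w + e1 + gamma * e2) / d
z = (w + delta * x + e2 + delta * e1) / d

def two_sls(dep, endog, exog, instr):
    """First stage projects the endogenous regressor on all exogenous
    variables; second stage is OLS with the fitted values."""
    W = np.column_stack([np.ones(n), exog, instr])
    fitted = W @ np.linalg.lstsq(W, endog, rcond=None)[0]
    X = np.column_stack([np.ones(n), fitted, exog])
    return np.linalg.lstsq(X, dep, rcond=None)[0]

g_hat = two_sls(y, z, x, w)[1]  # estimate of gamma
d_hat = two_sls(z, y, w, x)[1]  # estimate of delta
```

The exclusion restrictions (x only in one equation, w only in the other) are what identify both simultaneous effects, mirroring the restriction on wi and xi mentioned above.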

Conclusions

This analysis has involved identifying factors that influence whether an individual in a particular sub-Saharan African country will take a free HIV test when offered. Because of the binary nature of the dependent variable, the econometric analysis is more sophisticated than standard regression techniques. The analysis was complicated further by missing data for a number of observations, a large number of variables, and endogenous binary variables in the model. Imputation techniques were used to deal with the missing data, and principal components analysis was used to significantly reduce the number of parameters to be estimated by constructing indices for personality traits and for HIV knowledge and stigmas. The econometric models used included a binary probit, a bivariate probit and, finally, an instrumental variables probit to account for the endogenous regressors. In addition, although a simultaneous-equation bivariate probit would be ideal in order to deal with simultaneity in partner decision making, given the time constraints faced we estimated a linear three-stage least squares model to approximate the effects. The results found were that a number of variables were significant in explaining whether or not an individual would take the test; however, the HIV stigma index and the personality indices were not significant (apart from conscientiousness). When considering the effect of spousal decisions, it was found that a 'spouse'

³ Appendices 3 to 7 can be obtained by sending an email to aenorm@vsae.nl.



taking a HIV test has a significant positive effect on the ‘household head’ also taking the test. While these results also held in the opposite direction, the effect was weaker.

References

Arendt, J.N. & A. Holm. "Probit Models with Binary Endogenous Regressors." 2006.

Bond, L., J. Lauby & H. Batson. "HIV testing and the role of individual- and structural-level barriers and facilitators." AIDS Care 17 (2005):125-140.

Boyer, S. & F. Marcellin. "Financial barriers to HIV treatment in Yaoundé, Cameroon: first results of a national cross-sectional survey." Bulletin of the World Health Organization 87 (2009):279-287.

Dubin, J.A. & D. Rivers. "Selection Bias in Linear Regression, Logit and Probit Models." Sociological Methods & Research 18.2 (1990):360-390.

Filmer, D. & L. Pritchett. "Estimating wealth effects without expenditure data—or tears: An application to educational enrollments in states of India." Demography 38 (2001):115-132.

Fylkesnes, K. & S. Siziya. "A randomized trial on acceptability of voluntary HIV counselling and testing." Tropical Medicine & International Health 9 (2004):566-572.

Gage, A.J. & D. Ali. "Factors associated with self-reported HIV testing among men in Uganda." AIDS Care: Psychological and Socio-medical Aspects of AIDS/HIV 17.2 (2005):153-165.

Gelman, A. & J. Hill. Chapter 25: Missing-data imputation. New York: Cambridge University Press, 2006.

Homsy, J. & R. King. "The Need for Partner Consent Is a Main Reason for Opting Out of Routine HIV Testing for Prevention of Mother-to-Child Transmission in a Rural Ugandan Hospital." Journal of Acquired Immune Deficiency Syndromes 44.3 (2007):366-369.

Jackman, S. "An Introduction to Factor Analysis." (2005):1-12.

Kalichman, S.C. & L.C. Simbayi. "HIV testing attitudes, AIDS stigma, and voluntary HIV counselling and testing in a black township in Cape Town, South Africa." Sexually Transmitted Infections 79 (2003):442-447.

Mahajan, A.P. & J.N. Sayles. "Stigma in the HIV/AIDS epidemic: A review of the literature and recommendations for the way forward." AIDS 22 (2008):67-79.

Mahal, A. "Economic Implications of Inertia on HIV/AIDS and Benefits of Action." Economic and Political Weekly 39.10 (2004):1049-1063.


Matovu, J.K.B. & F.E. Makumbi. "Expanding access to voluntary HIV counselling and testing in sub-Saharan Africa: alternative approaches for improving uptake." Tropical Medicine & International Health 12.11 (2009):1315-1322.

Matovu, J.K.B., G. Kigozi, F. Nalugoda, F. Wabwire-Mangen & R.H. Gray. "The Rakai Project counseling programme experience." Tropical Medicine and International Health 7 (2002):1064-1067.

Muller, O., L. Barugahare, B. Schwartlander, E. Byaruhanga, P. Kataaha & D. Kyeyune. "HIV prevalence, attitudes and behaviour in clients of a confidential HIV testing and counselling centre in Uganda." AIDS 6 (1992):869-879.

Obermeyer, C.M. & M. Osborn. "The Utilization of Testing and Counseling for HIV: A Review of the Social and Behavioral Evidence." American Journal of Public Health 97.10 (2007):1762-1774.

Rivers, D. & Q.H. Vuong. "Limited information estimators and exogeneity tests for simultaneous probit models." Journal of Econometrics 39 (1988):347-366.

Rou, K. & J. Guan. "Demographic and Behavioral Factors Associated With HIV Testing in China." Journal of Acquired Immune Deficiency Syndromes 55.4 (2009):432-434.

Srivastava, S. "Measuring the Big Five Personality Factors." 2010. 12 April 2010 <http://www.uoregon.edu/~sanjay/bigfive.html>.


Appendix 1: Results from Models assessing different factors affecting test uptake

Dependent Variable

Has chronic disease Had blood pressure check Has had diabetes test Age 18-25 Age 26-35 Age 36-50 Age 51-65 Age 65+ Muslim Female Married/Union Household Head HH size = 1 HH size = 2 HH size = 3 HH size = 4 HH size = 5 HH size = 6 HH size = 7 HH size = 8 or more Has attended school Health: Much better Health: Somewhat better Health: Same Health: Worse Has chronic disease Has had health problem Has fallen pregnant or impregnated someone Log(income) (joint) Age when first had sex How often use condom No. of sexual partners Extraversion index Conscientiousness index Neuroticism index Openness index Weight Knowledge of HIV index Stigma attached to HIV index Constant * for 5% significance level

Probit with missing obs

Probit with imputed obs

participate

participate

Bivariate probit participate

0.460 (2.81)** 0.410 (4.13)** -0.034 (0.32) 0.099 (1.03) 0.138 (2.11)* 1.506 (18.54)** -0.427 (2.01)* -0.164 (1.05) -0.129 (1.22) 0.202 (1.18) 0.325 (2.77)** 0.068 (0.63) -0.092 (0.77) 0.040 (0.48) 0.001 (0.02) Reference (-) Reference (-) Reference (-) 0.031 (0.25) -0.002 (0.03) -0.024 (0.33) -0.201 (1.12) -0.293 (2.89)** -0.200 (2.17)* -0.018 (0.16) -0.061 (0.78) -0.100 (1.47) 0.655 (3.37)** 0.475 (4.83)** 0.099 (1.01) -0.305 (1.19) -0.018 (0.18) -0.269 (3.00)** 0.327 (1.65) 0.307 (3.18)** 0.143 (1.64) 0.180 (0.91) 0.101 (0.77) 0.038 (0.33) 0.079 (0.43) 0.090 (0.80) 0.020 (0.20) -0.186 (1.30) -0.252 (2.72)** -0.163 (1.93) Reference (-) Reference (-) Reference (-) 0.293 (2.05)* 0.120 (1.25) -0.046 (0.52) 0.240 (1.56) 0.082 (0.81) 0.001 (0.01) -0.288 (1.69) -0.002 (0.02) -0.022 (0.22) 0.293 (1.66) 0.310 (2.58)** 0.224 (2.09)* -0.140 (1.29) -0.244 (3.38)** -0.037 (0.52) -0.187 (1.30) -0.074 (0.81) 0.018 (0.22) -0.246 (1.48) -0.093 (0.88) -0.094 (1.01) Reference (-) Reference (-) Reference (-) 0.253 (0.68) 0.243 (1.18) -0.141 (0.79) 0.460 (2.81)** 0.410 (4.13)** -0.034 (0.32) 0.156 (0.63) 0.394 (2.32)* 0.308 (2.10)* -0.075 (0.28) -0.147 (0.93) -0.099 (0.87) 0.090 (1.90) 0.025 (2.34)* -0.080 (0.90) -0.005 (0.09) -0.015 (0.24) -0.018 (0.35) -0.002 (0.03) 0.003 (0.04) 0.005 (1.12) -0.009 (0.20) -0.039 (0.92)

0.007 (0.22) 0.030 (4.07)** -0.153 (2.04)* -0.043 (0.82) 0.023 (0.55) -0.074 (1.96)* 0.030 (0.73) -0.022 (0.50) 0.005 (1.93) -0.031 (0.83) -0.034 (0.91)

-0.875 (1.12) 0.002 (0.00) ** for 1% significance level

-0.008 (0.28) 0.023 (4.34)** -0.114 (2.26)* -0.032 (0.78) -0.018 (0.48) -0.069 (2.06)* 0.044 (1.19) 0.016 (0.41) -0.001 (0.58) -0.020 (0.74) -0.022 (0.83) 0.437 (0.97)

AENORM

IV-probit

Blood pressure

0.543 (6.31)**

1.354 (1.32) 0.163 (0.18) 2.186 (0.75) 0.307 (2.82)** 0.308 (1.60) 0.049 (0.61) 0.084 (0.77) Reference (-) Reference (-) 0.042 (0.52) -0.099 (0.92) -0.030 (0.29) -0.481 (3.29)** 0.101 (1.35) 0.016 (0.13) 0.463 (5.39)** 0.363 (2.53)* 0.467 (5.29)** -0.091 (0.48) 0.145 (1.65) 0.248 (2.43)* 0.050 (0.38) 0.092 (0.85) -0.052 (0.55) 0.245 (2.61)** 0.144 (1.43) 0.051 (0.46) -0.002 (0.01) -0.240 (3.39)** -0.097 (1.08) 0.080 (0.78) Reference (-) 0.488 (2.77)** 0.543 (6.31)** -0.047 (0.30)

0.026 (0.78)

0.070 (0.97) 0.035 (0.74) -0.030 (1.81) -0.078 (1.70) 0.008 (3.58)**

-0.185 (1.19)

-0.223 (1.15) 0.003 (0.09) 0.029 (3.75)** -0.127 (1.54) -0.156 (1.34) 0.017 (0.29) -0.094 (1.99)* 0.035 (0.70) -0.024 (0.47) 0.003 (0.06)

-2.168 (5.00)**

vol. 18 (68)

-0.033 (0.55)

October 2010

9

Appendix 2: IV-Probit for samples by different groups (dependent variable = HIV test participation)

[Table not recoverable from the source layout. It reports coefficients (z-statistics in parentheses) from IV-probit estimates run separately for males (1023 observations), females (1321), household heads (1252) and Muslims (1874), over the same covariates as Appendix 1. * denotes significance at the 5% level, ** at the 1% level.]

Are you interested in joining the editorial staff and having your name in the colophon? If so, please send an e-mail to the chief editor at aenorm@vsae.nl. The staff of Aenorm is looking for people who would like to:
- find articles that can be published in Aenorm;
- conduct interviews for Aenorm;
- write summaries of famous articles;
- maintain the Aenorm website, or even build a new one.
You do not have to live in the Netherlands to join the editorial staff.

Interview with David Jaeger

by: Ewout Schotanus

David Jaeger

David A. Jaeger received his Ph.D. (economics, 1995) from the University of Michigan and was the first recipient of the W.E. Upjohn Institute Dissertation Award. His research focuses on immigration and migration, education, conflict, and applied econometrics. His work has been published in the American Economic Review, among other journals. In April 2010 David Jaeger was in Amsterdam to chair the jury of the Econometric Game 2010. He also gave a presentation at the Econometric Game Congress on one of his latest papers.

Could you give me a short summary of your childhood?
I grew up in northern New Jersey, about 12 miles from New York City. Like most boys in the US at that time, my first interest was baseball, which I played until I was 15. But I also liked computers, as my father was a high school computer science teacher. I started programming in BASIC when I was about 12. In high school, I acted in many theatrical productions and also did a lot of singing. I’m also a bit embarrassed to admit I played the saxophone from the time I was in fifth grade through university.

Did you already have a particular interest in economics in high school?
Ha, no! When I started university, I thought I was going to major in English and write poetry! I took Economics 101 during the second semester of my freshman year from Morton Schapiro (then a professor at Williams College, now the president of Northwestern University). After taking labor economics during my second year of university, I was hooked and wanted to be an economist. That’s a good thing, too – the market for poets is pretty small.

You started your Master’s in Economics in New York, but you got your degree in Michigan. Did you have to switch universities for your Ph.D. program?
I took M.A. classes at NYU while I was working in New York City as a research assistant at MDRC, a policy research company. But I never thought I would do my Ph.D. at NYU, and applied to a variety of places two years after I graduated from college. In the end, it came down to Berkeley and Michigan. Michigan was closer to my girlfriend (now wife), who was doing her Ph.D. at Columbia, in New York, and so I went there. At the time, Michigan was also much better in labor economics than


Berkeley, and so it was the right choice for me – now it would be a much more difficult choice.

Were you asked to follow a Ph.D. program or did you want to do it yourself?
It was all my idea, for better or worse.

What was your Ph.D. research about?
It contained three essays – one on sheepskin effects in the returns to education, which was published in the Review of Economics and Statistics; one on the change in the education question in the Current Population Survey, which was published in the Journal of Business and Economic Statistics; and the third on how immigration affected the wage structure in the U.S. during the 1980s. This is actually the best paper of the three, and although it has been cited a lot, I’ve never published it.

Did you choose this field of research yourself or did some professor provide you with it?
The genesis of the ideas was the result of discussions with my advisor, John Bound. But he didn’t provide me with the topics.

What were the main results of your dissertation?
Well, you can look the papers up… it’s hard to summarize three distinct papers. But overall I was proud of the way it turned out.

You received a couple of prizes for your dissertation. Did you expect this to happen?
No, not at all. I was very pleasantly surprised to be the first winner of the W.E. Upjohn Institute dissertation prize. It was a great honor – there have been a lot of great labor economists who have won or been honorable mentions for this prize since then.

When you finished your Ph.D., you first started working as a research economist at the Bureau of Labor Statistics. At that time, you were not planning to become a professor?
I wanted an academic job from the start, and I got four job offers in my first year on the market – all of them at the BLS! But I knew in the end I’d end up as a professor.

Eventually you returned to the university. You’ve worked at universities in Michigan, New York, New Jersey and Virginia, but then you decided to go to Germany. How did you decide to leave America and go teach in Germany?
Well, it was a complicated decision regarding my wife, who is also a professor, and my kids, who really like living in Germany. For an American, there are many nice things, but also a few challenges, about living in Europe.

I understand you are setting up an econometrics program in Cologne. How is this going?
There is already a statistics program, but there is traditionally less of a focus on econometrics in Germany, particularly applied econometrics. I think the main challenge is changing the mindset a bit about what students should learn and what they are capable of learning as undergraduates. But also, I think applied research in general is undervalued in Germany. This is changing – the move of Gerhard van den Berg from Amsterdam to Mannheim is probably the biggest sign of how econometrics and applied work is now being valued in some places in Germany. I hope that we can establish that in Cologne as well.

Is there going to be a Ph.D. program as well?
There is an interdisciplinary Ph.D. program already, as well as the traditional German way of “apprenticing” for a professor. But it will take some time for us to establish a rigorous Ph.D. program in the Anglo-American tradition.

Are you planning to stay in Cologne or are you planning to go and teach at other universities in Europe as well?
Are you planning on making me a job offer?

Of all the universities you have worked at, which one did you like the most?
I would have to say that I enjoyed teaching at Princeton the most. The other professors and the students are all incredibly smart – and that made me smarter. You have to stay on your toes when surrounded by people like that. Being around so many great labor economists (Orley Ashenfelter, Alan Krueger, Hank Farber, Ceci Rouse, Marianne Bertrand, and Jeff Kling were all there when I was there) was an amazing experience. And it is just an incredible university.

You have published a lot of papers. Which one are you most proud of?
That’s tough. My paper on weak instruments (with John Bound and Virginia Baker) certainly has had the biggest impact, and I am very proud of that work, because it happened so early in my career. But I am perhaps prouder of my paper on the Palestinian-Israeli conflict in the American Economic Review, because although I knew that the topic had the potential to be published in a good journal, it was a big risk to start working on something so far afield from my usual research, as well as something that might not even be considered “economics”. To see the idea through from the flash of insight that I had in the shower one day to publication in the AER was a great thing. Coincidentally, my coauthor Daniele Paserman had come to the same idea independently – it was only later that we discovered that we were working on the same idea and decided to work together. It has been a fantastic collaboration.

Did your article in the American Economic Review on the Palestinian-Israeli conflict receive a lot of criticism?
The biggest critique we got on that paper was “Is this economics?” I think it is, at least to the extent that folks like Nobel-prize winner Thomas Schelling also worked on conflict. I also occasionally got critiques from non-economists who didn’t think that such topics can be examined scientifically and empirically. These tended to be folks who thought that either the Palestinians or the Israelis were “right” in their actions and the other side was “wrong”. The data say what they say, and we try to give the results as objective an interpretation as we can.

Are you planning to do more research in this field?
I am currently working on a paper with Zahra Siddique at IZA on the relationship between U.S. drone strikes in the border regions of Afghanistan and Pakistan and Taliban and al Qaeda attacks in those two countries. There might also be a book in the works on the Second Intifada.
You have done research on instrumental variables, education, labor economics, migration and the economics of conflict. What other fields do you want to do research in in the future?
I am currently working on a project with my wife on the determinants of the locations of monasteries in 12th-century Germany. I think there is a lot of scope for doing quantitative medieval history, but of course one has to be careful to respect the traditions in other fields. I am very excited to be applying modern methods, with data, to something that happened 800 years ago!

What do you like best about working at a university?
It’s a combination of things. As you can tell, my research interests are fairly broad (as long as there is data involved), and so I very much appreciate being able to pursue whatever research ideas I want. I also like meeting students and introducing them to all of the cool things


you can do with econometrics. I like the travel associated with conferences and seminars, and I definitely appreciate the flexibility of an academic job.

Last year you were the chairman of the jury of the Econometric Game. Can you tell us something about how you experienced this event?
I didn’t know the event existed, but I was very pleased to be asked to be on the jury. I thought it was great – I was impressed with how hard the students worked, with their degree of expertise, and also enjoyed meeting the other jury members. And who doesn’t like spending time in Amsterdam?



An Econometric View of Conflict: Dynamics of Violence in the Palestinian-Israeli Conflict

Summarized by: Chen Yeh

The still ongoing conflict between Israelis and Palestinians has been one of the most explosive conflicts in history. While numerous efforts to find a resolution of this violent dispute have been made, a successful peace process still seems a long way off. Nevertheless, understanding the dynamics of violence in the Palestinian-Israeli conflict seems a natural first step. While studying patterns of violence might not seem the domain of the traditional economist, David Jaeger and M. Daniele Paserman (2008, American Economic Review) empirically analyze fatalities in the Palestinian-Israeli conflict using econometric methods. Their findings are somewhat surprising, as they contradict a popular notion of the conflict: the violence between Israelis and Palestinians does not appear to be a vicious cycle of ongoing retaliation.

Introduction

One of the main presuppositions about the Palestinian-Israeli conflict is that there is a cycle of violence, a process of back-and-forth retaliation. It may therefore seem that the parties’ main reason for acts of violence is vengeance. Simply put: violence leads to more violence. On the other hand, violence by one party could also lead to less violence by the other party. Targeted killings during military operations conducted by the Israel Defense Forces are one particular example. The Israeli government often argues that these actions are necessary, as they limit the capabilities of Palestinian forces that might be a threat to the Israeli public. However, the effectiveness of these actions has always been questioned, as convincing evidence on this issue is scarce. In their paper, Jaeger and Paserman (henceforth J&P) investigate these notions of violence by examining whether violence against one party affects the incidence and intensity of the opposite party’s reaction. They apply econometric methods to a dataset that consists of the daily number of casualties of both groups during the period from September 2000 to January 2005. Using a fairly simple vector autoregression (VAR) setup, J&P conclude that the causality of violence is unidirectional and not cyclical.

The B’tselem dataset

The B’tselem (an Israeli human rights organization) dataset used by J&P is a time series describing the daily number of deaths in the Palestinian-Israeli conflict from September 2000 to January 2005. This conflict period is also known as the Second or Al-Aqsa Intifada, during which over 3300 Palestinians and more than 1000 Israelis died in the ongoing violence. Furthermore, J&P divide this period into seven distinct phases. The choice of this particular dataset over official statistics from the Israeli Ministry of Foreign Affairs or the Palestinian National Information Centre was motivated by its comprehensiveness and the fact that both conflict groups are treated symmetrically. The dataset is also considered accurate and reliable, and contains very detailed information on the demographic distribution of each group’s fatalities. Using this demographic information, J&P identify the different strategies adopted by the groups. Israel mainly targets members of militant and terrorist groups, whereas Palestinian groups1 consider attacks aimed at Israeli military and civilian targets to be equally important.

In this issue of AENORM, we continue to present a series of articles. This series contains summaries of articles which have been of great importance in economics or have attracted considerable attention, be it in a positive sense or a controversial one. Reading papers from scientific journals can be quite a demanding task for the beginning economist or econometrician. By summarizing the selected articles in an accessible way, AENORM aims to reach these students in particular and introduce them to the world of academic economics. For questions or criticism, feel free to contact the AENORM editorial board at aenorm@vsae.nl.



Figure 1. Monthly number of fatalities (Source: Jaeger and Paserman, 2008).

Frameworks: theory and empirical strategy

Before presenting their econometric setup, J&P identify three effects of violence for both sides of the conflict. The first is the incapacitation effect: violence simply limits the opposite group’s capability to retaliate; the Israeli example in the introduction is a specific instance of this effect. The second is the deterrent effect: the opposite group refrains from violence for fear of the consequences of possible retaliation. Third, violence by one group can provoke a reaction by the other side out of vengeance. Instead of constructing a theoretical model, J&P suggest empirical reaction functions of the following form. For the Israelis:

$$Pal_t = f^{Isr}(Isr_{t-1}, \ldots, Isr_{t-p}, Pal_{t-1}, \ldots, Pal_{t-p}, X_t)$$

The Palestinian reaction function is defined similarly:

$$Isr_t = f^{Pal}(Pal_{t-1}, \ldots, Pal_{t-p}, Isr_{t-1}, \ldots, Isr_{t-p}, X_t)$$

where $Isr_t$ and $Pal_t$ denote Israeli and Palestinian fatalities at time t, respectively. Furthermore, $X_t$ is a vector of structural variables used as controls to absorb any effects that are not picked up by the lagged values of $Isr_t$ and $Pal_t$. J&P note that the dependent variable is the fatalities of the opposite group, since violence exercised by one group directly produces fatalities in the opposite group. Their primary interest thus lies in the effect of a group’s “own” fatalities on the fatalities of the opposite group. When considering Israel’s perspective, for instance, J&P are primarily interested in questions of the form: how does Palestinian violence (i.e. Israeli fatalities) affect Israel’s own violence “strategy” (i.e. Palestinian fatalities)? To estimate these effects of violence, the authors adopt a VAR framework:

$$\begin{pmatrix} Pal_t \\ Isr_t \end{pmatrix} = A_0 + A_1 \begin{pmatrix} Pal_{t-1} \\ Isr_{t-1} \end{pmatrix} + \cdots + A_p \begin{pmatrix} Pal_{t-p} \\ Isr_{t-p} \end{pmatrix} + B X_t + \varepsilon_t$$

where $A_j$ for j = 0, 1, ..., p and B are coefficient matrices, $X_t$ a vector of exogenous control variables and $\varepsilon_t$ a vector error term. J&P use two basic specifications: fatalities are either dummy variables2 (the incidence specification) or simply the number of fatalities on day t (the levels specification). All models are estimated by ordinary least squares with heteroskedasticity-consistent standard errors. While J&P identify three effects of violence, such a reduced-form VAR approach does not allow the identification of each effect separately; only the net effect can be estimated. If the coefficients on a group’s “own” fatalities are negative, the incapacitation and deterrent effects dominate (Palestinian violence leads to less Israeli violence, ceteris paribus), while if they are positive, vengeance is the dominating rationale for violence (Palestinian violence leads to more Israeli violence, ceteris paribus). Although the signs and magnitudes of the regression coefficients are of interest, the authors’ main question is whether violence by one group causes violence by the other side. To test for causality, J&P adopt the Granger causality test: a joint test of whether the coefficients on one’s “own” fatalities are statistically significantly different from zero3.
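This estimation-plus-test procedure can be sketched in a few lines. The sketch below uses synthetic daily fatality indicators (not the B’tselem data) in which Israeli retaliation follows a Palestinian attack by one day; the lag order (7 rather than the paper’s 14), the data-generating process and the function names are illustrative assumptions. Each equation is estimated by OLS, and a Wald statistic with a White (heteroskedasticity-consistent) covariance tests the joint significance of the other side’s “own”-fatality lags:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily fatality indicators (illustration only): Palestinian
# fatalities (i.e. Israeli violence) respond to a lethal Palestinian
# attack one day earlier; Israeli fatalities are exogenous.
T, p = 1600, 7                                  # the paper uses p = 14
isr = rng.binomial(1, 0.15, T).astype(float)    # Israeli-fatality indicator
pal = np.zeros(T)                               # Palestinian-fatality indicator
for t in range(1, T):
    pal[t] = rng.binomial(1, 0.10 + 0.30 * isr[t - 1])

def lag_matrix(x, p):
    """Columns x[t-1], ..., x[t-p] for t = p, ..., len(x)-1."""
    return np.column_stack([x[p - j:len(x) - j] for j in range(1, p + 1)])

def granger_wald(y, other, p):
    """OLS of y_t on p lags of y and of 'other', then a Wald test
    (White covariance) that the 'other' lags are jointly zero."""
    X = np.column_stack([np.ones(len(y) - p), lag_matrix(y, p),
                         lag_matrix(other, p)])
    Y = y[p:]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    u = Y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    V = bread @ (X.T @ (X * (u ** 2)[:, None])) @ bread   # HC0 sandwich
    sel = np.arange(1 + p, 1 + 2 * p)                     # 'other'-lag slots
    b = beta[sel]
    return float(b @ np.linalg.solve(V[np.ix_(sel, sel)], b))  # ~ chi2(p)

# Israeli reaction function: do Palestinian attacks (Israeli fatalities)
# Granger-cause Israeli violence (Palestinian fatalities)?
w_retaliation = granger_wald(pal, other=isr, p=p)
# Palestinian reaction function: the reverse direction.
w_reverse = granger_wald(isr, other=pal, p=p)
print(f"Wald (Isr lags in Pal equation): {w_retaliation:.1f}")
print(f"Wald (Pal lags in Isr equation): {w_reverse:.1f}")
```

With this data-generating process the first statistic should lie far above the 5% critical value of a chi-squared(7) distribution (about 14.1), mirroring the paper’s unidirectional finding.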

Results: VAR specifications

In their baseline setup, J&P use the abovementioned VAR model with p = 14, thus 14 lags. In the incidence specification, the results for the Israeli empirical reaction function indicate that Israelis retaliate in a regular way that is statistically significant on days one and five after a lethal Palestinian attack: the probability of Palestinian fatalities increases by 7.6 percent on the first day and 6.7 percent on the fifth day. The results for Granger causality are clear, as the test of the joint significance of the coefficients on Israeli fatalities has a p-value of less than 0.001. Thus Palestinian attacks Granger-cause Israeli violence. To check the robustness of their results, J&P add indicators for the days of the week and the seven phases of the Second Intifada. Furthermore, the cumulative length of the separation barrier dividing the West Bank from Israel is added as a variable. Judging by their results, the incidence of violence against Palestinians seems to decline relative to the first phase. Similarly to the previously mentioned results, the extra controls lead to higher probabilities of Israeli violence after fatal

1. These groups include, amongst others, Hamas, the Palestinian Islamic Jihad and the Al-Aqsa Martyrs Brigades.
2. These variables are defined as 1 when there was at least one casualty on that day and 0 otherwise.
3. The null hypothesis is that all the coefficients on one’s “own” fatalities are equal to zero. By rejecting this hypothesis, we establish Granger causality or, to put it bluntly, statistical causation.



attacks by the Palestinians. Most importantly, however, the inclusion of these controls does not change the conclusion that Palestinian attacks Granger-cause Israeli violence. In the levels specification, the results are in line with the incidence specification, although J&P do find that the magnitude of the Israeli response varies. Their results indicate that the most violent response comes five days after a Palestinian attack: each Israeli fatality leads to an additional 0.213 Palestinian fatalities. Once again, the conclusions about Granger causation are unchanged under the levels specification. Estimations for the Palestinian empirical reaction function are done in a similar fashion. In both the incidence and levels specifications, no negative and statistically significant coefficients are found, which suggests that Israeli attacks on Palestinians have, if anything, a net vengeance effect rather than a net deterrent or incapacitation effect. The test for Granger causality, however, is rather surprising: J&P do not find enough statistical evidence to support the hypothesis that Israeli attacks Granger-cause a violent Palestinian response. J&P conclude that Israelis react in a significant and predictable way to Palestinian attacks, but not vice versa. This finding contradicts the popular notion that the two parties of the Palestinian-Israeli conflict are engaged in a vicious cycle of violence!

Results: Robustness of the Granger causality tests

The finding of unidirectional Granger causality from Palestinian violence to Israeli violence was rather unexpected. How robust is this conclusion? In their paper, J&P mention that the results of Granger causality tests are sensitive to the choice of the lag structure, i.e. the choice of p. However, when adopting VAR structures with different lag structures (J&P use combinations of 4, 7, 14 and 21-day lags), the conclusions remain unchanged. Although the results for specifications with 21 lags are not statistically significant, J&P note that it is well known that adding lags reduces the power of the Granger causality test. Another robustness check concerns the frequency at which fatalities are measured. By using daily frequencies, certain characteristics of the Palestinian side may not be captured. As J&P put it themselves, “the decentralized and factional nature of the Palestinian side may dictate longer or less regular response times that may not be captured at a daily frequency.” The previous specifications are re-estimated at weekly, bi-weekly and monthly frequencies. While J&P maintain the conclusion that Israeli attacks do not Granger-cause Palestinian violence, the results for the other direction of Granger causality are not as strong: using a significance level of five percent, only a significant violent Israeli response

is found at the monthly frequency. Their last sensitivity check concerns the degree of time aggregation. Instead of using 28 lagged regressors (i.e. the 14 lagged variables from both sides), J&P use the sums of the lagged values (over days t–1 to t–7 and to t–14) as regressors. The results are very similar to those of the VAR specifications, and the conclusion of unidirectional Granger causality of violence still stands.
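As a small illustration of this aggregation step, the sketch below collapses 14 daily lags of a series into window sums that can replace the individual lag regressors. The window split (t–1..t–7 and t–8..t–14) and the function name are assumptions, since the paper's description of the second window is ambiguous:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.binomial(1, 0.2, 400).astype(float)   # a toy daily fatality indicator

def summed_lag_regressors(x, windows=((1, 7), (8, 14))):
    """Collapse individual daily lags into window sums: one regressor per
    window, e.g. the number of fatality days over t-1..t-7 and t-8..t-14.
    (The window split is an assumption about the paper's aggregation.)"""
    p = max(hi for _, hi in windows)          # longest lag needed
    cols = []
    for lo, hi in windows:
        # elementwise sum of x[t-lo], ..., x[t-hi] for t = p, ..., len(x)-1
        cols.append(sum(x[p - j:len(x) - j] for j in range(lo, hi + 1)))
    return np.column_stack(cols)

Z = summed_lag_regressors(x)
print(Z.shape)                 # (386, 2): two regressors instead of 14
# sanity check: at t = 14, the windows aggregate x[7..13] and x[0..6]
assert Z[0, 0] == x[7:14].sum() and Z[0, 1] == x[0:7].sum()
```

The two columns of `Z` would then enter the OLS regressions in place of the 14 per-side daily lags, and the Wald test applies to two coefficients instead of fourteen.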

Discussion

A violent reaction by either side of the conflict is most likely motivated by vengeance and by the wish to signal that violence is a costly activity. However, J&P clearly show that there are differences in the estimated reaction functions. They argue this is mostly due to the decision-making processes and the technology the two groups have at their disposal. J&P mention that “the Israeli Defence Force is highly organized and centrally commanded, meaning that Israel has the organizational, logistic and technological capabilities to inflict fatalities on the Palestinian side when it wishes”. According to the authors, this might explain why Israeli retaliation is predictable. Palestinian attacks, on the other hand, are more difficult to predict; indeed, the results indicate that Israeli violence does not (Granger-)cause Palestinian violence. J&P give a similar explanation: Palestinians have limited technological resources and are more decentralized (as reflected by the various Palestinian groups mentioned in footnote 1). The Palestinians’ unpredictable strategy might, however, be deliberate. According to J&P, the effectiveness of Palestinian violence (such as suicide attacks) is greater when it is, to some extent, unpredictable. Reasons other than revenge, deterrence and incapacitation must, however, also be considered: resistance to the Israeli military occupation, rather than Palestinian fatalities, might be the main driver of Palestinian violence. Despite popular notions of the Palestinian-Israeli conflict, the violence between the two groups is not likely to be of a “tit-for-tat” form. Instead, causation seems to be unidirectional, from Palestinian attacks to Israeli violence. The results of J&P therefore suggest that ending Palestinian violence against Israel might lead to an overall reduction of the violence in this everlasting conflict. However, as history has shown, this task is easier said than done.

References

Jaeger, D. and M.D. Paserman. “The Cycle of Violence? An Empirical Analysis of Fatalities in the Palestinian-Israeli Conflict.” American Economic Review 98.3 (2008): 1591-1604.


Operations Research and Management

A Discrete-Time Queueing Model with Abandonments

by: Rein Nobel & Suzanne van der Ster

A very important characteristic of queueing situations is the phenomenon of ‘abandonments’: customers waiting in line decide to leave the queue before being served. Traditionally, two types of models dominate the queueing literature: either it is assumed that all customers wait in line until they are served successfully (delay models), or it is assumed that customers who find all servers busy upon arrival are rejected and leave the system forever without being served (loss models). Both types of models have been studied extensively and successfully, but it is clear that these models do not capture the full reality of queueing. For instance, in call centers it is quite common that customers who find all servers busy upon arrival do not wait in line but leave the system temporarily and try to reenter the system some (random) time later. The queueing models which incorporate this phenomenon are called retrial models; they have received much less attention in the literature due to the complicated flow of arrivals: apart from the primary arrivals, we also have to cope with the customers who try to enter the system anew after one or more unsuccessful attempts.

Introduction

Nevertheless, some results have been obtained; see e.g. Artalejo and Gómez-Corral (2008) and Falin and Templeton (1997) for two monographs on retrial queues, and Nobel and Moreno (2008) for the analysis of a retrial model in discrete time. The situation becomes even worse when abandonments are incorporated in the model and, not surprisingly, this important aspect also hardly shows up in the mainstream queueing literature. Palm (1953) was the first author to study a queueing model with abandonments. Recently, mainly triggered by applications in the call-center industry, more authors have studied queueing models with abandonments (see e.g. Garnett, Mandelbaum and Reiman (2002), Mandelbaum, Massey, Reiman, Stolyar and Rider (2002), Mandelbaum and Zeltyn (2007) and references therein). In this paper we discuss a discrete-time analogue of the so-called Erlang-A model (see e.g. Mandelbaum et al. (2007)). This continuous-time queueing model is a

Rein Nobel

Rein Nobel is an assistant professor in operations research at the Free University Amsterdam. He graduated in pure mathematics and in computer science. His main research interests are probability, Markov decision theory, queueing theory and simulation.

variant of the well-known M/M/c model [i.e. customers arrive according to a Poisson process, the individual service times are exponentially distributed, there are c servers, and the waiting space is unlimited], in which every customer in the queue abandons the queue after an exponentially distributed 'patience time' if at that time he is still waiting. The discrete-time model studied in this paper is an extension of the standard 'late-arrival model with delayed access' as discussed in Bruneel and Kim (1993), in which the customers in the queue are allowed to abandon the queue. In Section 2 we give a detailed description of this model; in Section 3 we present a Markov chain analysis to study the steady-state behaviour of the model using the generating function method. The main problem in this approach turns out to be the calculation of the limiting probability that the system is empty. We will show that this problem can be solved by using an 'infinite recursion'. Once this probability is known we can calculate the usual performance measures, such as the mean queue length, the fraction of customers that abandon the system, the throughput, et cetera. In Section 4 we give some numerical results. From these results we can conclude that a system in which the customers have the choice to abandon the queue shows a much better performance than an equivalent system in which no abandonments are allowed and instead the arrival rate is reduced beforehand by the abandonment rate of the former system.

Description of the model We consider a discrete-time queueing model with c servers and an unlimited waiting space. Before we give a

AENORM vol. 18 (68), October 2010 | Operations Research and Management

detailed description of the arrival pattern and the service requirements, we want to stress a few points concerning the way 'discrete time' has to be interpreted. Time is only discussed in discrete terms, so-called time slots, which can be seen as very short (physical) time intervals. Due to this fact more than one event can take place in one time slot [in discrete terms, at the same epoch]. To enable a precise description of the sequence of events which trigger the evolution of the system, it is necessary to fix the precedence relations between the different events (in our model: arrivals, departures and abandonments). We take as our starting point that an arrival at time k physically occurs during time slot k. A service starting at time k physically starts at the beginning of time slot k, and a departure at time k physically occurs at the end of time slot k. An abandonment at time k occurs before any arrival at time k. As a consequence of this choice, an arriving customer at time k sees all the servers who will complete their services at time k as busy, and customers who abandon the queue at time k have arrived before time k. We call this choice the late-arrival and early-abandonment set-up. It is also essential to know that any customer who arrives at time k can start his service at the earliest at time k+1. This is called 'delayed access'. Summarizing, we coin our system with the acronym LAS-DA-EAb, i.e. a late-arrival system with delayed access and early abandonments.

We continue with the precise description of the model. In every time slot customers arrive in batches. The batch-size distribution is {a_k}, k = 0, 1, 2, ..., with probability generating function (p.g.f.)

\[ A(z) := \sum_{k=0}^{\infty} P\{A = k\}\, z^k = \sum_{k=0}^{\infty} a_k z^k. \]

This means that a_k is the probability that a batch of size k arrives in a time slot. The numbers of arriving customers in different time slots are independent. Each customer requires a geometric service time with parameter β from one of the servers: in every time slot, a service in progress is completed with probability β, independently of everything else. There is a group of c servers and an unlimited waiting space. Customers who upon arrival find all servers busy are placed in a queue, which is served in FIFO order. In every time slot each customer in the queue abandons the system with a fixed probability θ, independently of the other customers in the queue.
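The slot dynamics just described are easy to put into code. The sketch below is our own illustration, not part of the paper: it simulates the one-server case (c = 1) in the LAS-DA-EAb order of events (early abandonments, then arrivals, then a possible departure at the end of the slot, with delayed access at the start of the next slot) and checks the bookkeeping identity that every arrival is eventually served, abandons, or is still in the system. The batch distribution, β and θ are arbitrary illustrative values.

```python
import random

def simulate(a, beta, theta, slots, seed=1):
    """Simulate the one-server (c = 1) LAS-DA-EAb queue slot by slot.

    a: dict mapping batch size -> probability (the batch-size distribution a_k)
    beta: per-slot completion probability of a service in progress
    theta: per-slot abandonment probability of each waiting customer
    """
    rng = random.Random(seed)
    sizes, probs = zip(*a.items())
    busy = queue = 0
    arrived = served = abandoned = 0
    for _ in range(slots):
        # early abandonments: each customer in the queue leaves w.p. theta
        stayers = sum(1 for _ in range(queue) if rng.random() >= theta)
        abandoned += queue - stayers
        queue = stayers
        # late arrivals: a batch joins during the slot
        batch = rng.choices(sizes, probs)[0]
        arrived += batch
        queue += batch
        # possible departure at the end of the slot
        if busy and rng.random() < beta:
            served += 1
            busy = 0
        # delayed access: the head of the queue starts service next slot
        if not busy and queue > 0:
            busy, queue = 1, queue - 1
    return arrived, served, abandoned, busy + queue

arrived, served, abandoned, in_system = simulate(
    {0: 0.7, 5: 0.3}, beta=0.5, theta=0.05, slots=20_000)
# conservation: every arrival is served, abandons, or is still in the system
assert arrived == served + abandoned + in_system
```

Note that the promotion of the head of the queue at the end of the loop body coincides with the start of the next slot, so it precedes the abandonments of that next slot, exactly as in the precedence rules above.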

The steady-state behaviour of the LAS-DA-EAb model

To study the steady-state behaviour of the queueing system described in the previous section we define

C_k = the number of busy servers at time k-,
Q_k = the number of customers in the queue at time k-.

Here k- stands for the left boundary of time slot k. This means that at time k- the abandonments and the arrivals at epoch k have not occurred yet, but all services starting at time k have just begun. With this interpretation {(C_k, Q_k) : k = 0, 1, 2, ...} is an irreducible aperiodic discrete-time Markov chain (DTMC). The state space is S = {(0,0), (1,0), ..., (c-1,0)} ∪ {(c,n) | n = 0, 1, 2, ...}. Due to the abandonments this DTMC is positive recurrent, and so we can define the following limiting joint distribution,

\[ \pi(j,n) = \lim_{k\to\infty} P(C_k = j;\ Q_k = n), \qquad (j,n) \in S. \]

To study this limiting distribution we introduce the probability generating function (p.g.f.)

\[ \Pi_c(z) = \sum_{n=0}^{\infty} \pi(c,n)\, z^n. \]

So our first objective is to find the probabilities π(0,0), π(1,0), ..., π(c-1,0) and the p.g.f. Π_c(z). Writing x^+ = max(x,0), we have the following system of balance equations,

\[
\pi(j,0) = \sum_{k=0}^{c-1} \sum_{i=(k-j)^+}^{k} \binom{k}{i} \beta^i (1-\beta)^{k-i} a_{j-k+i}\, \pi(k,0)
+ \sum_{m=0}^{\infty} \sum_{i=c-j}^{c} \binom{c}{i} \beta^i (1-\beta)^{c-i} \sum_{k=(m-j+c-i)^+}^{m} \binom{m}{k} \theta^k (1-\theta)^{m-k} a_{j-c+i-m+k}\, \pi(c,m), \tag{1}
\]

for j = 0, 1, ..., c-1, and

\[
\pi(c,n) = \sum_{k=0}^{c-1} \sum_{i=0}^{k} \binom{k}{i} \beta^i (1-\beta)^{k-i} a_{n+c-k+i}\, \pi(k,0)
+ \sum_{m=0}^{\infty} \sum_{i=0}^{c} \binom{c}{i} \beta^i (1-\beta)^{c-i} \sum_{k=(m-n-i)^+}^{m} \binom{m}{k} \theta^k (1-\theta)^{m-k} a_{n+i-m+k}\, \pi(c,m), \tag{2}
\]

for n = 0, 1, 2, ...

After tedious algebraic manipulations it turns out that the first type of balance equations (1) can be rewritten as

\[
\pi(j,0) = \sum_{k=0}^{c-1} \sum_{i=(k-j)^+}^{k} \binom{k}{i} \beta^i (1-\beta)^{k-i} a_{j-k+i}\, \pi(k,0)
+ \sum_{i=0}^{c} \binom{c}{i} \beta^i (1-\beta)^{c-i} \sum_{r=0}^{j-c+i} \frac{(1-\theta)^r}{r!}\, a_{j-c+i-r}\, \Pi_c^{(r)}(\theta), \qquad j = 0, 1, ..., c-1, \tag{3}
\]

where

\[ \Pi_c^{(r)}(z) = \frac{d^r \Pi_c(z)}{dz^r} \]

is the r-th derivative of the p.g.f. Π_c(z). So we have found a system of c linear equations with 2c unknowns,

\[ \pi(0,0),\ \pi(1,0),\ \ldots,\ \pi(c-1,0); \qquad \Pi_c(\theta),\ \Pi_c^{(1)}(\theta),\ \Pi_c^{(2)}(\theta),\ \ldots,\ \Pi_c^{(c-1)}(\theta). \]

The second type of balance equations (2) can be put in the p.g.f. format

\[
\Pi_c(z) = \sum_{k=0}^{c-1} \pi(k,0) \sum_{i=0}^{k} \binom{k}{i} \beta^i (1-\beta)^{k-i} \frac{z^{k-i}}{z^c} \Big[ A(z) - \sum_{j=0}^{c-k+i-1} a_j z^j \Big]
+ \sum_{i=0}^{c} \binom{c}{i} \beta^i (1-\beta)^{c-i} \frac{z^{c-i}}{z^c} \Big[ A(z)\, \Pi_c(\theta+(1-\theta)z) - \sum_{r=0}^{i-1} \frac{[(1-\theta)z]^r}{r!} \Big( \sum_{j=0}^{i-r-1} a_j z^j \Big) \Pi_c^{(r)}(\theta) \Big]. \tag{4}
\]

Hence we have to calculate 2c unknown quantities: (i) the c probabilities π(0,0), π(1,0), ..., π(c-1,0), and (ii) the values of the p.g.f. Π_c(z) and its first c-1 derivatives at z = θ, i.e. Π_c(θ), Π_c^{(1)}(θ), ..., Π_c^{(c-1)}(θ). For simplicity, below we only discuss the one-server case, i.e. c = 1. For this simplified model equations (3) and (4) become

\[ \pi(0,0) = a_0\, \pi(0,0) + a_0 \beta\, \Pi_1(\theta) \tag{5} \]

and

\[ \Pi_1(z) = \pi(0,0)\, \frac{A(z)-a_0}{z} + \Big(1-\beta+\frac{\beta}{z}\Big) A(z)\, \Pi_1(\theta+(1-\theta)z) - \frac{a_0\beta}{z}\, \Pi_1(\theta). \tag{6} \]

From (5) we have

\[ \Pi_1(\theta) = \frac{1-a_0}{a_0\beta}\, \pi(0,0). \]

Substituting (5) in (6) gives

\[ \Pi_1(z) = \Big(1-\beta+\frac{\beta}{z}\Big) A(z)\, \Pi_1(\theta+(1-\theta)z) + \pi(0,0)\, \frac{A(z)-1}{z}. \tag{7} \]

This last result asks for iteration! Introduce

\[ z^{(k)} = 1 - (1-\theta)^k (1-z), \qquad k = 0, 1, 2, \ldots \]

Since θ + (1-θ)z^{(k-1)} = z^{(k)}, we can rewrite (7) as [k = 1, 2, ...]

\[ \Pi_1(z^{(k-1)}) = \Big(1-\beta+\frac{\beta}{z^{(k-1)}}\Big) A(z^{(k-1)})\, \Pi_1(z^{(k)}) + \pi(0,0)\, \frac{A(z^{(k-1)})-1}{z^{(k-1)}}. \tag{8} \]

So iterating equation (8) n times gives [z = z^{(0)}]

\[ \Pi_1(z) = \prod_{k=0}^{n-1} \Big(1-\beta+\frac{\beta}{z^{(k)}}\Big) A(z^{(k)})\; \Pi_1(z^{(n)})
+ \pi(0,0) \sum_{k=0}^{n-1} \prod_{i=0}^{k-1} \Big(1-\beta+\frac{\beta}{z^{(i)}}\Big) A(z^{(i)})\; \frac{A(z^{(k)})-1}{z^{(k)}}. \tag{9} \]

Now we send n to infinity. Notice that lim_{n→∞} z^{(n)} = 1 and Π_1(1) = 1 - π(0,0). So we get

\[ \Pi_1(z) = (1-\pi(0,0)) \prod_{k=0}^{\infty} \Big(1-\beta+\frac{\beta}{z^{(k)}}\Big) A(z^{(k)})
+ \pi(0,0) \sum_{k=0}^{\infty} \prod_{i=0}^{k-1} \Big(1-\beta+\frac{\beta}{z^{(i)}}\Big) A(z^{(i)})\; \frac{A(z^{(k)})-1}{z^{(k)}}. \tag{10} \]

So taking z = θ in (10), and using (5), we get an expression for π(0,0), the probability that the system is empty, or, in other words, the long-run fraction of time slots that the system is empty. After rearranging terms we find [introduce θ^{(k)} := 1-(1-θ)^{k+1}, k = 0, 1, 2, ..., i.e. θ^{(k)} = z^{(k)} for z = θ],

\[ \pi(0,0) = \frac{a_0\beta \prod_{k=0}^{\infty} \big(1-\beta+\frac{\beta}{\theta^{(k)}}\big) A(\theta^{(k)})}
{1-a_0 + a_0\beta \Big[ \prod_{k=0}^{\infty} \big(1-\beta+\frac{\beta}{\theta^{(k)}}\big) A(\theta^{(k)})
- \sum_{k=0}^{\infty} \prod_{i=0}^{k-1} \big(1-\beta+\frac{\beta}{\theta^{(i)}}\big) A(\theta^{(i)})\, \frac{A(\theta^{(k)})-1}{\theta^{(k)}} \Big]}. \tag{11} \]

Table 1. Load ρ = 0.99 and θ = 0.002

       Delay model   Abandonment model                            Adapted model
  β    Qdelay        Qabandon   P(Abandonment)   P(Wait)          Qadapted
  0.1  242.104       8.20193    0.165696         0.965192         11.2556
  0.3  232.303       14.3522    0.096648         0.978864         19.5737
  0.5  222.502       18.0102    0.072768         0.983592         24.9459
  0.7  212.701       20.4948    0.059148         0.986289         29.0712
  0.9  202.9         22.1971    0.049825         0.988135         32.4573

We can now find Q̄, the long-run average queue length, by differentiating (7) with respect to z and taking z = 1. After rearranging terms we get

\[ \bar{Q} = \Pi_1'(1) = \frac{1}{\theta}\big[A'(1) - \beta(1-\pi(0,0))\big]. \]

This result is not surprising and has a clear-cut interpretation. Notice that A'(1) is the arrival rate, β(1 - π(0,0)) is the throughput and θQ̄ is the abandonment rate, and of course abandonment rate = arrival rate - throughput. In other 'words',

\[ \theta\bar{Q} = A'(1) - \beta(1-\pi(0,0)). \]
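The 'infinite recursion' is straightforward to evaluate numerically. The sketch below is our own check, not part of the paper: it evaluates the closed-form expression (11) for π(0,0), truncating the infinite product and sum after K terms (the factors tend to 1 geometrically fast because θ^(k) → 1), and cross-checks the result against the empty-system probability obtained by solving a truncated version of the Markov chain directly. It also verifies the conservation law θQ̄ = A′(1) − β(1 − π(0,0)). All parameter values are illustrative, not those of the tables.

```python
import numpy as np
from math import comb

# Illustrative parameters (not the values used in the paper's tables)
a = {0: 0.8, 1: 0.2}          # batch-size distribution a_k
beta, theta = 0.5, 0.1        # service and abandonment probabilities
a0 = a[0]

def A(z):                     # p.g.f. of the batch size
    return sum(p * z**k for k, p in a.items())

# pi(0,0) from (11), truncating the infinite product/sum at K terms;
# theta^(k) = 1 - (1 - theta)^(k+1) tends to 1 geometrically fast
K = 400
th = [1.0 - (1.0 - theta) ** (k + 1) for k in range(K)]
fac = [(1.0 - beta + beta / t) * A(t) for t in th]
prod_inf = float(np.prod(fac))
S, prefix = 0.0, 1.0
for k in range(K):
    S += prefix * (A(th[k]) - 1.0) / th[k]
    prefix *= fac[k]
pi00 = a0 * beta * prod_inf / (1.0 - a0 + a0 * beta * (prod_inf - S))

# cross-check: stationary distribution of the chain, queue truncated at N;
# state 0 is the empty system, state 1 + n is (server busy, n in queue)
N = 60
P = np.zeros((N + 2, N + 2))
for m in range(N + 1):                         # from state (1, m)
    for j in range(m + 1):                     # j waiting customers abandon
        pj = comb(m, j) * theta**j * (1 - theta) ** (m - j)
        for b, pb in a.items():                # a batch of size b arrives
            q = min(m - j + b, N)              # queue just before departures
            if q == 0:                         # completion empties the system
                P[1 + m, 0] += pj * pb * beta
            else:                              # head of queue starts next slot
                P[1 + m, q] += pj * pb * beta  # (1, q-1) has index 1 + (q-1)
            P[1 + m, 1 + q] += pj * pb * (1 - beta)
for b, pb in a.items():                        # from the empty system
    P[0, 0 if b == 0 else b] += pb             # (1, b-1) has index 1 + (b-1)

pi = np.ones(N + 2) / (N + 2)
for _ in range(5000):                          # power iteration
    pi = pi @ P
pi /= pi.sum()

assert abs(pi00 - pi[0]) < 1e-6                # (11) matches the direct solution
Aprime = sum(k * p for k, p in a.items())
Qbar = sum(n * pi[1 + n] for n in range(N + 1))
assert abs(theta * Qbar - (Aprime - beta * (1.0 - pi[0]))) < 1e-6
```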

Numerical results for one server

In this section we present some numerical results for the one-server case (c = 1). In all examples the arrival distribution is taken to be a 2-point distribution: a_0 + a_5 = 1. The load ρ = A'(1)/β = 5a_5/β equals 0.99 and 0.95, respectively. The service time parameter β is varied from 0.1 to 0.9. The abandonment probability, i.e. the long-run fraction of customers that leave the system before being served, and the delay probability are given by, respectively,

\[ P(\mathrm{Ab}) = \frac{\theta \bar{Q}_{\mathrm{abandon}}}{A'(1)} \qquad\text{and}\qquad P(\mathrm{Wait}) = 1 - \frac{\pi(0,0)(1-a_0)}{A'(1)}. \]

In the tables below we give the mean queue size Q̄ for three different models:
• the standard delay model without abandonments,
• the model with abandonments, for various values of θ,
• the so-called adapted delay model, i.e. without abandonments, but with an arrival rate decreased by the abandonment rate of the model with abandonments, i.e.

\[ A'_{\mathrm{adapted}}(1) = A'(1) - \theta \bar{Q}_{\mathrm{abandon}}. \]

In Tables 1 and 2 we present the results for a load ρ = 0.99 and two different abandonment parameters, θ = 0.002 and θ = 0.005. We see in both tables that for the abandonment model the fraction of customers who abandon the queue, the abandonment probability, is decreasing in the service parameter β, and that the mean queue length is increasing in β, whereas for the standard delay model the mean queue length is decreasing in β [this latter fact follows simply from the Pollaczek-Khintchine formula]. So, as usual, the conclusion is that the system performs better with a smaller variance of the service time (here a higher value for β). In Tables 1 and 2 we also see the phenomenon that in the adapted model, i.e. a standard delay model with an arrival rate equal to the original arrival rate minus the abandonment rate found for the abandonment model, the mean queue length is larger than for the abandonment model. This result can be phrased as follows: when we give the customers the choice to abandon the system when they want, the system as a whole performs better than when, ceteris paribus, customers are denied admission to the system beforehand. Further numerical results can be found in the thesis of the second author, Van der Ster (2008).

Table 2. Load ρ = 0.99 and θ = 0.005

       Delay model   Abandonment model                            Adapted model
  β    Qdelay        Qabandon   P(Abandonment)   P(Wait)          Qadapted
  0.1  242.104       4.86366    0.24564          0.949363         6.8907
  0.3  232.303       8.82692    0.148601         0.968577         12.3122
  0.5  222.502       11.24770   0.113613         0.975505         15.9014
  0.7  212.701       12.93329   0.093329         0.979521         18.7123
  0.9  202.9         14.1328    0.079309         0.982297         21.0642

References

Artalejo, J.R. and A. Gómez-Corral. Retrial Queueing Systems. Berlin: Springer Verlag, 2008. Print.

Bruneel, H. and B.G. Kim. Discrete-Time Models for Communication Systems Including ATM. Dordrecht: Kluwer Academic Publishers, 1993. Print.

Falin, G. and J. Templeton. Retrial Queues. London: Chapman & Hall, 1997. Print.

Garnett, O., A. Mandelbaum and M. Reiman. "Designing a Call Center with Impatient Customers." Manufacturing & Service Operations Management 4 (2002): 208-227.

Mandelbaum, A., W. Massey, M. Reiman, A. Stolyar and B. Rider. "Queue Lengths and Waiting Times for Multiserver Queues with Abandonment and Retrials." Telecommunication Systems 21 (2002): 149-171.

Mandelbaum, A. and S. Zeltyn. "Service Engineering in Action: The Palm/Erlang-A Queue, with Applications to Call Centers." Advances in Services Innovations (2007): 17-45.

Nobel, R.D. and P. Moreno. "A Discrete-Time Retrial Queueing Model with One Server." European Journal of Operational Research 189.3 (2008): 1088-1103.

Palm, C. "Methods of Judging the Annoyance Caused by Congestion." Tele 4 (1953): 189-208.

Van der Ster, S. "Een discrete-tijd wachtrijmodel met ongeduldige klanten." Bachelor thesis, Vrije Universiteit Amsterdam, 2008. In Dutch.


Econometrics

A State-Space Model for Residential Real Estate Valuation by: Marc Francke All property in the Netherlands has to be appraised yearly. Yearly valuation has only been made possible with the help of models. The number of real estate appraisers is simply too small to value the more than 7 million residential properties. This paper describes the statistical model that is used by Ortec Finance to value residential real estate for the local government, housing corporations, and mortgage providers. Transaction prices are explained by housing characteristics, location and time in a hierarchical trend model. In this state-space model the impact of time on transaction prices is modeled in an advanced and flexible manner. Estimation results are provided for Amsterdam.

Introduction

In the past fifteen years or so a lot has changed in the way real estate in the Netherlands is appraised. Models are playing an ever-increasing role in both determining and validating values. Back in the 1990s, values for the Real Estate Appraisal Law (Wet WOZ), with its main purpose of levying taxes, were still largely determined without the help of models. Starting in 2008, the periodical re-appraisal was replaced by an annual one, a change only made possible thanks to the utilization of models. Important arguments in support of this change are objectivity, reproducibility, efficiency and cost reduction, besides the fact that models have by now proven their worth. Yet such valuation models are not only employed for taxation purposes, but also to calculate the indirect returns of housing associations based on the open market value in non-rented state (Investment Property Databank European Social Property Services). These mass appraisals apply models on a large scale. Model values are furthermore used to obtain financing when purchasing a private residence. Starting January 1st, 2010, it is required that, in order to obtain the National

Marc Francke Marc Francke is head of Real Estate Research at Ortec Finance and full professor Real Estate Valuation at the University of Amsterdam. Francke obtained his PhD in Econometrics in 2006 at VU University Amsterdam and has served there as assistant professor in Econometrics until August 2008. In 2001, he established OrtaX, a commercial venture specialized in the mass-appraisal of real estate, now being part of Ortec Finance.

Mortgage Guarantee, the open market value as determined by an independent appraiser, free from any rent or use, is compared to a model-generated valuation report. This paper describes the hierarchical trend model for real estate valuation. This model has already been operational for more than 10 years with only minor modifications in the model structure. The same model specification has been used to value on a yearly basis more than 1.2 million homes in different municipalities, varying from urban areas like Amsterdam and Zwolle to rural areas like Oldambt and Voerendaal.

The hierarchical trend model

In the hierarchical trend model (HTM) the selling prices of homes are explained by housing characteristics, location and time, where time is measured in months; see Francke (2008). The model contains time-invariant and time-dependent components. The time-invariant component contains the specification of the housing characteristics. The time-dependent component consists of three building blocks: a common trend, district-specific trends and house type group-specific trends. The model can be paraphrased as follows:

log transaction price = influence of individual characteristics (time-invariant) + level common trend + level district trend + level house type group trend + error term.

The district and house type group-specific trends are modeled as random-walk deviations from the common trend. The common trend has a more sophisticated specification, namely a local linear trend model. Neither specification imposes a fixed relation between price and time up front. In the random walk model it is assumed that the expected price level in the coming month is equal to that of the current month; in the local linear trend model it is assumed that the expected price change in the coming month is equal to the change in the current month. These models sit between two extremes: a linear price change (prices always rise or fall by the same percentage) and no structure at all, modeled by dummy variables per month. The latter approach has the disadvantage that a large number of explanatory variables must be included in the model. The random walk and local linear trend specifications are flexible, but at the same time quite parsimonious in the number of variables needed. Modeling the impact of time with the aid of dummy variables is frequently used in hedonic price models and repeat sales models in order to derive a price index. The implicit supposition in these models is that the price level of the current month does not depend on the price level of preceding and subsequent months. In other words, the price level in a specific month is determined solely by the sales prices in that month. If, however, the number of sales in a particular month is low and/or the sales prices include a few outliers, then the estimate of the price level in that month is rather unreliable; see Francke (2009). In the HTM, because of the random walk and local linear trend specifications, transaction prices from preceding and subsequent periods are taken into account in establishing the price level. This reduces the impact of transaction noise, the deviation between market value and transaction prices. To what degree upcoming and previous periods play a role in the determination of the current price level is estimated from the data, so that an optimal trade-off between signal and noise is made.

The HTM is given by

\[
\begin{aligned}
y_t &= \mu_t\,\iota + D_{\theta,t}\,\theta_t + D_{\lambda,t}\,\lambda_t + D_{\varphi,t}\,\varphi + f(X_t,\beta) + \varepsilon_t, & \varepsilon_t &\sim N(0,\sigma_\varepsilon^2 I), \\
\mu_{t+1} &= \mu_t + \kappa_t + \eta_t, & \eta_t &\sim N(0,\sigma_\eta^2), \\
\kappa_{t+1} &= \kappa_t + \zeta_t, & \zeta_t &\sim N(0,\sigma_\zeta^2), \\
\theta_{t+1} &= \theta_t + \omega_t, & \omega_t &\sim N(0,\sigma_\omega^2 I), \\
\lambda_{t+1} &= \lambda_t + \varsigma_t, & \varsigma_t &\sim N(0,\sigma_\varsigma^2 I), \\
\varphi &\sim N(0,\sigma_\varphi^2 I),
\end{aligned}
\tag{1}
\]

where y_t is an n_t×1 vector of log selling prices (ι denotes an n_t×1 vector of ones), μ_t is the common trend, θ_t is a vector of district trends, and λ_t is a vector of house type group trends. The vector φ contains time-invariant random effects for neighborhoods (a sub-classification of districts), and f(X_t, β) is a (partly) nonlinear function of characteristics X_t with corresponding coefficients β. The matrices D are selection matrices, containing 0s and 1s to select the appropriate district, house type group and neighborhood. We impose the restriction μ_1 = λ_{1,1} = 0 for identification reasons. The HTM can be formulated in state-space format. Conditional on the variance parameters, estimates of the state vector α_t = (μ_t, κ_t, θ_t', λ_t', φ', β')' can be obtained by the Kalman filter and smoother. The Kalman filter also directly produces the likelihood function, which can be optimized with respect to the variance parameters. A computationally efficient estimation procedure, based on ordinary least squares in an initial step and the Kalman filter on a reduced dataset in the subsequent step, is provided by Francke and De Vos (2000). Gauss-Newton regression has been applied to deal with the nonlinear function f(X_t, β).

Estimation results for the city of Amsterdam

This section presents a simplified version of the HTM that is used for the valuation of residential real estate in Amsterdam. The Real Estate Appraisal Law requires the properties to be appraised yearly on January 1st. Taxes in the current year are based on the assessed value in the preceding year. Valuations on January 1st in year t are primarily based on transactions in years t and t-1, but in the HTM we use transactions from preceding years as well. The main reason is that in a specific year transactions are available for only a small fraction of the housing stock. In particular this holds for the city of Amsterdam, where the percentage of owner-occupied housing is far below the national average, 23.5 versus 55.9%; see Table 1. An additional reason for using a long time series of transactions is that the recent crisis reduced the number of transactions by 30%. In 2009 the number of transactions is less than 2% of the total housing stock.

Table 1. Number of homes and transactions in Amsterdam

  Year   Number of homes   Owner-occupied %   Number of transactions
  1995   353,660           -                  2,864
  1996   357,767           -                  3,200
  1997   359,797           -                  4,179
  1998   364,417           -                  4,097
  1999   369,064           -                  4,590
  2000   371,800           -                  4,951
  2001   373,198           -                  5,272
  2002   374,952           -                  5,650
  2003   377,069           -                  6,577
  2004   378,573           -                  8,022
  2005   380,143           -                  9,279
  2006   381,832           19.4               10,074
  2007   383,078           20.7               10,489
  2008   387,531           22.3               9,390
  2009   391,181           23.5               7,363
  Source: Statistics Netherlands

For the estimation of the HTM a database containing 91,172 transactions in the period January 1986 to November 2009 has been used. These transactions have been cleaned to meet the restrictions of the Real Estate Appraisal Law. For example, transaction prices are corrected for the impact of land lease contracts: 60% of the present value of future land lease payments is added to the transaction price. The HTM contains 14 district trends and 7 house type group trends, so in total 98 different trends are distinguished. The districts more or less coincide with an old urban district classification. The house type groups are, for multi-family homes: 1) monumental homes, 2) apartments, 3) apartments with shared entrance, 4) gallery apartments, 5) high-rise flats; and for single-family homes: 6) row/corner homes, 7) (semi-)detached homes. Note that the average number of observations per house type group and district per month is very small, approximately 3. The HTM includes 304 random effects for neighborhoods. The nonlinear term in the HTM is specified by

\[ f(X,\beta) = x'\beta + \log g(z), \qquad g(z) = (\mathrm{HouseSize} + \delta_2\,\mathrm{HouseSizeRest})^{\delta_1} \exp(z'\gamma) + \delta_3\,\mathrm{LotSize} + \delta_4\,\mathrm{Park}_{\mathrm{Center}} + \delta_5\,\mathrm{Park}_{\mathrm{NoCenter}}, \tag{2} \]

Table 2. Estimation results from the HTM for Amsterdam

  Variable                                        Coefficient   std. err.   t-value
  HouseSize                            δ1         0.920         0.001       1,209.9
  HouseSizeRest                        δ2         0.569         0.010       55.0
  LotSize                              δ3         0.081         0.002       50.9
  Parking Center                       δ4         10.996        1.326       8.3
  Parking NoCenter                     δ5         3.645         0.271       13.4
  Construction year < 1900 Center      γ1         -0.092        0.004       21.3
  Construction year < 1900 NoCenter    γ2         -0.069        0.004       16.9
  Construction year > 1899 and < 1920  γ3         -0.058        0.004       13.9
  Construction year > 1919 and < 1945  γ4         -0.083        0.004       20.0
  Age (transaction year minus construction
    year; construction year > 1944)    γ5         -0.004        0.000       60.1
  Maintenance poor                     γ7         -0.049        0.002       20.0
  Maintenance good                     γ8         0.030         0.002       15.8
  Location poor                        β1         -0.038        0.003       12.6
  Location good                        β2         0.046         0.002       26.2
  Canal                                β3         0.085         0.002       45.7

  Number of observations                  91,172
  Number of district clusters             14
  Number of house type group clusters     7
  Number of random neighborhood effects   304
  Number of explanatory variables         40

  σε = 0.131, ση = 0.005, σζ = 0.0016, σω = 0.0018, σς = 0.0023, σφ = 0.0373
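Before turning to the interpretation of these estimates: the Kalman filter recursions mentioned above are easy to sketch for the common-trend component alone. The code below is our own minimal illustration, not the Ortec Finance implementation; the noise standard deviations echo the order of magnitude of the σ's reported in Table 2, but the data, initialization and dimensions are made up, and the full HTM state vector would also contain the district, house type group and neighborhood effects and β.

```python
import numpy as np

# Local linear trend: state (level mu_t, slope kappa_t)
T_mat = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
Z = np.array([[1.0, 0.0]])                   # observation picks out the level
Q = np.diag([0.005**2, 0.0016**2])           # state noise (order of sigma_eta, sigma_zeta)
H = np.array([[0.131**2]])                   # measurement noise (order of sigma_eps)

def kalman_filter(y, a0, P0):
    """Univariate observations; returns filtered states and the log-likelihood."""
    a, P, loglik, out = a0, P0, 0.0, []
    for yt in y:
        # prediction step
        a = T_mat @ a
        P = T_mat @ P @ T_mat.T + Q
        # update step
        v = yt - (Z @ a)[0]                  # one-step prediction error
        F = (Z @ P @ Z.T + H)[0, 0]          # its variance
        K = (P @ Z.T / F).ravel()            # Kalman gain
        a = a + K * v
        P = P - np.outer(K, Z @ P)
        loglik += -0.5 * (np.log(2 * np.pi * F) + v**2 / F)
        out.append(a.copy())
    return np.array(out), loglik

rng = np.random.default_rng(0)
y = np.cumsum(0.01 + rng.normal(0, 0.131, 100))   # synthetic trending series
states, ll = kalman_filter(y, np.zeros(2), np.eye(2))
```

In the estimation procedure described in the text, the log-likelihood produced this way would be maximized numerically over the variance parameters.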

where the variables z only refer to the building and the variables x refer to both the building and the land. Table 2 shows the estimation results of the HTM. Not all variables, including the house type dummies, are reported in this table. The coefficient δ1 shows that the value is less than proportional to the house size: an increase of the house size by 10% gives approximately an increase of the building value by 9%. The value of a parking place inside the city center is almost three times higher than the value of a parking place outside the city center. The age variable indicates that the building value decreases by 0.4% a year when the year of construction is larger than 1944. Dummy variables have been included for construction years before 1945. Houses built between 1900 and 1919 are 2.5% more expensive than houses built between 1920 and 1944. Note however that the house type classification partly depends on the construction year, so the interpretation of the construction year variables is not straightforward. The difference in building value between poor and good maintenance is about 7.9%. The difference in total value between a bad and a good location is 8.4%. A home situated at one of Amsterdam's canals is 8.5% more expensive than an equivalent house not situated at a canal. The standard deviation of the measurement error is quite reasonable, 13%.

Figure 1 provides examples of the estimated log price trends.

Figure 1. Common, district and house type group log price trends.

The dotted lines indicate 95% confidence intervals. The upper left panel shows the common price trend and the upper right panel the slope of the common trend. The nominal log price increase between January 1986 and November 2009 is 2.00, corresponding to a price increase of 642%, an average yearly price increase of 8.7%. The recent growth period came to an end in August 2008. The price fall between August 2008 and November 2009 is 7.5%. Note that there was also a price decrease of 5% in the period between August 2002 and December 2004. There are substantial differences in price trends between regions. The lower right panel of Figure 1 shows the district price trends (in deviation from the common trend) for "Oud-West" and "Baarsjes en Bos en Lommer", two regions in the western part of Amsterdam within the ring road. In 1986 the price level in both districts was almost the same, and in 2009 the price difference is almost 15%. The lower left panel shows that the differences between row houses and high-rise flats are less pronounced, and this holds in general for differences between house type group trends within Amsterdam.
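The percentages quoted above follow directly from the coefficients in Table 2 and the estimated trend. A quick check of that arithmetic (our own, working from the rounded reported values, so the outcomes match the text only approximately):

```python
import math

# house-size elasticity of the building value (delta_1 in Table 2)
delta1 = 0.920
size_effect = 1.10 ** delta1 - 1.0          # impact of a 10% larger house
assert 0.08 < size_effect < 0.10            # roughly +9%, as stated in the text

# log coefficients translate to percentages via exp(.) - 1
canal_effect = math.exp(0.085) - 1.0        # canal dummy beta_3: ~8.5-8.9% premium

# nominal log-price increase of 2.00 between Jan 1986 and Nov 2009 (~23.8 years)
total = math.exp(2.00) - 1.0
assert 6.0 < total < 6.6                    # ~640%; the text reports 642% from unrounded data

yearly = math.exp(2.00 / 23.83) - 1.0
assert 0.08 < yearly < 0.09                 # ~8.7-8.8% average yearly increase
```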

Final Remarks

This paper presented the HTM for selling prices of homes, a model that has proven its value for almost a decade. The HTM provides reliable appraisal values for individual homes and makes it possible to produce reliable, up-to-date constant-quality price indices on a very detailed level. Real estate transaction data offers ample possibility to conduct applied research, offering methodological challenges in combining spatial and temporal correlation with cross-sectional analysis.

References

Francke, M.K. "The Hierarchical Trend Model." In T. Kauko and M. D'Amato (eds.), Mass Appraisal Methods: An International Perspective for Property Valuers. Wiley-Blackwell, RICS Research, 2008.

Francke, M.K. "Repeat Sales Index for Thin Markets: A Structural Time Series Approach." Journal of Real Estate Finance and Economics (2009), http://dx.doi.org/10.1007/s11146-009-9203-1.

Francke, M.K. and A.F. de Vos. "Efficient Computation of Hierarchical Trends." Journal of Business and Economic Statistics 18.1 (2000): 51-57.

Acknowledgements

I would like to thank the Amsterdam Council Tax Office for providing the data for this research and for giving me permission to publish the results.

Interview with Judith Lammers by: Ewout Schotanus

Judith Lammers

Judith Lammers (1976) has done extensive research on the AIDS epidemic in Sub-Saharan Africa. She obtained a PhD from Tilburg University with her dissertation "HIV/AIDS, Risk, and Intertemporal Choice." Her research interests lie in behavioural economics. She studies risk behaviour and risk perception, including the anticipation and prevention of health shocks in Africa. Her article "HIV/AIDS, Risk Aversion and Intertemporal Choice", which she published with Professor Sweder van Wijnbergen, led to the theme of the Econometric Game 2010.

Could you give me a short summary of your life before you went to university?

During my time in high school I spent a lot of time dancing and I was very interested in the arts. For a long time I was convinced I wanted to go to either the dance academy or the academy of arts.

Instead of pursuing a career in one of these two fields, you decided to study Econometrics. Could you explain this change of direction?

In high school, my favourite courses together with arts were mathematics and economics. I particularly liked the application of maths to economics and socially relevant issues. That's why I decided to study Econometrics at Tilburg University. I have never regretted my choice of econometrics. To compensate for my dancing aspirations, I teach Pilates at the weekend.

Did you do any other activities besides studying econometrics?

Next to my study, I was an active member of the Tilburg Econometricians Association (TEV), currently known as Asset Econometrics. In the academic year 1998-1999, I was vice president of the board and responsible for external affairs.

Your bachelor thesis was titled "Empirical Analysis of Labor Supply Decisions of Married Women." Did you have a particular interest in this field?

My mother was a strong supporter of women's emancipation. Possibly this influenced my choice of this Bachelor's thesis topic. Although it is an important issue, I am currently not working on this topic.

After you got your bachelor degree, you started a master's in Mathematical and Strategic Economics. What was the main reason you chose this specialization?

Mathematical and Strategic Economics studies game theory in combination with econometrics. The main reason for choosing this specialization was that I wanted to graduate in game theory. I was planning to do this at the London School of Economics (LSE). However, due to some bureaucratic reasons, this was cancelled. At that time, I was working for Prof. Magnus¹ as a student assistant on a Bayesian National Accounts estimation system. After the LSE cancellation, he directly provided me with an alternative subject for my Master's thesis and within a week, I was in Mozambique. In Maputo, I had to collect basic data and priors to test out the sophisticated system. The advantage of the new approach was that the system provides unambiguous estimates, while estimates under previous systems suffered from missing data. Moreover, standard errors of all the estimates are computed as well. In practice the system turned out not to work sufficiently well, due to a lot of zeros in the matrices. After my Master's thesis, the system was improved by using sparse matrices to overcome the problem of many zeros.

After your graduation, did you already know you wanted to do a PhD?

I was in doubt between doing a PhD and working for the Ministry of Social Affairs and Employment. After my graduation I eventually started working for the Ministry but decided to quit after a couple of weeks as it was not the right job for me at that time. I wanted to obtain more in-depth knowledge. After about six months, I started

¹ Prof. Dr. J.R. Magnus is a professor of Econometrics at Tilburg University. He was one of the case makers for the Econometric Game 2007. The theme that year was climate change.

AENORM vol. 18 (68), October 2010, p. 27

my PhD in Economics at Tilburg University. Besides the general M.Phil. courses, I obtained a postgraduate diploma in modelling and accounting for sustainable development at the ISS, and I went to the University of Lugano, Switzerland, to learn more about measurement, data collection and data quality.

During an information lecture for PhD students in which the professors' research topics were explained, I was surprised to see that a lot of research was being done on ageing, while no one seemed to be investigating the rejuvenation of a population. During my stay in Mozambique, I had learned and read a lot about this phenomenon, and it became the foundation of my dissertation. In the beginning, I was mainly interested in anticipatory savings in relation to the HIV/AIDS epidemic. In national accounts compilation in developing countries, it was common practice to set savings equal to zero. Because I saw that poor families own durables, like a refrigerator, I was convinced that poor households save as well, since they cannot buy such durables from their weekly payments. Moreover, the poor are more vulnerable to health shocks, which makes the necessity to save stronger. My main question was whether South Africans' savings differed across risk perceptions with respect to HIV infection during their lifetime. Savings can, however, be affected in different ways. On the one hand, higher expected medical expenses and more volatile productivity due to expected illness would increase savings. On the other hand, getting infected might be strongly related to higher risk preferences, more risk-taking behaviour and higher discount rates; together with the reduced life expectancy, this would reduce savings. Initially, most researchers expected the latter effect to be larger.

What are the main results of your dissertation?

The main result is that I found evidence that people do anticipate the consequences of HIV infection risk. Moreover, the group that is aware of their positive HIV status seemed to anticipate the most.
The first effect turned out to be larger than the second: people living with HIV save significantly more than others. The anticipation behaviour also emerged in conversations with HIV-positive students, who told me they had been studying harder since their infection, which was confirmed by their study results. Despite the significant results, I received a lot of criticism about my findings. The major critique was the relatively small sample size, which would be too small to draw conclusions from. A valid critique would be a potential selection problem: only students were in my sample, which means that ex-students, those who dropped out, were not, while HIV-positive ex-students potentially have different profiles than HIV-positive students who are still in university. To address both potential critiques, after finishing my PhD I analyzed a large dataset on a representative group of households from Windhoek (the capital of Namibia) and found evidence for anticipatory behaviour there as well. In particular, households with an HIV-positive


adult invested significantly more in the education of their children, while their income was already decreasing. This research has also received criticism, but different estimation models lead to the same conclusions.

What were the main obstacles during your research?

There is always the issue of money. The household survey data I use for my current research include a large number of observations, which means that every additional behavioural question results in longer interviews. In addition, the training of interviewers, who are usually used to standard questions, is challenging too. An obstacle I had to overcome during my fieldwork in South Africa, when collecting experimental data for my PhD, was paying participants in small cash. Because the bank systems were down for several days, we ran out of small change. The only way to solve this was to go from store to store buying bottles of water with large notes in order to gather small change. Unfortunately, some white students were reluctant to cooperate because of racial discrimination: the experiments were conducted in English, which was thought of as the inferior language of the native Africans.

What are the main research projects you are working on now?

At the moment I am working on various projects in Namibia, Tanzania and Nigeria, all related to risk behaviour. I am also working on a project in Rwanda on the impact of provider-initiated testing (PIT), which means that every patient who visits a hospital is tested for HIV. The advantage of this method is that a lot of people get tested. With the usual practice of VCT (Voluntary Counselling and Testing), research has shown that it is not necessarily those at risk who come for a test: testing is a risk-averse activity. PIT would overcome this problem. In addition, by testing all clinic clients, HIV stigma could potentially be reduced.

What kind of research are you planning in the near future?
There are several potential topics waiting for me on the shelf. One of these is an in-depth analysis of PIT and HIV stigma, including the potential negative effects of PIT: does it deter households from going to a clinic when they are in need of healthcare? Another topic is moral hazard in micro health insurance, which I can start analyzing as soon as panel data become available. I think it is important to provide information on both the positive and negative effects of interventions in developing countries in order to optimize foreign aid. I have just started cooperating with different universities and disciplines on eliminating malaria in Rwanda. After implementing strategies chosen by the community, we will continuously review both the positive and negative impact of the interventions on behaviour, health and

poverty in order to improve the strategies. The purpose is to revise the approach again and again until we arrive at the best solution. I am curious how this community-based methodology will improve the situation, because usually a single intervention is implemented and its impact measured, without any focus on continuously improving the type of intervention.

Obviously, your research is of great importance, but what is the main goal you want to achieve?

There are a few things I hope my research will accomplish. One is that the element of behaviour will be taken into account by decision makers. For example, the perception (or underestimation) of risk regarding unsafe sex is still ignored in most HIV prevention strategies, which are often focused on transferring facts. One of the purposes of my research is to show that taking behavioural statistics into account is essential in fighting the AIDS epidemic. The same applies to other interventions in developing countries. I am happy to say there are already NGOs becoming more and more convinced that behavioural statistics have to be taken into account. Second, I intend to improve survey designs by deviating from standard practices and adding vital elements needed for answering basic research questions.

What are the biggest concerns for the future of Sub-Saharan Africa?

Let me focus on HIV/AIDS. There are a lot of projects that focus on improving knowledge about HIV/AIDS among the population. However, I think the problem lies in the fact that people are still not aware of the risks of getting infected with HIV; a lot of people underestimate the risk. A positive development is that in many countries in Sub-Saharan Africa medicines for HIV are provided for free. However, there are not enough medicines to treat everyone from the first stage of HIV infection onwards, which sharply increases the chance of contracting another illness for which you have to pay the medical expenses yourself. In addition, productivity drops due to stigma or the inability to work. It is therefore of great importance that people get their HIV medicines at an earlier stage of the disease. The provision of medicines is no doubt a good thing, but it also leads to a financial problem.

Someone infected with HIV will die after approximately ten years without medication, whereas a person who is provided with medicines on a regular basis can potentially live as long as someone who is not infected. If the spread of HIV does not decline, within a few decades a growing number of people will be living with HIV and in need of medicines. Currently, there are already countries that are considering providing only women and children with medicines, because there are not enough financial resources for all. Therefore, it is of great importance that this growing need for medicine in the future is taken into account now.


Econometrics

Statement: Publication Bias by: David Hollanders and Daniëlla Brals It is generally more difficult to publish non-significant results than significant findings. Indeed, in small samples, when tests have low power, a null-finding is arguably less convincing than a significant finding. However, when testing a true null hypothesis on several data sets, a significant result will arise sooner or later. Not publishing the non-findings can then create the misleading impression that the published result is meaningful. Consequently, if only significant results are published, the reported significance levels cannot be taken at face value (as it is unknown whether there were any unpublished non-findings). So, a higher willingness to publish null-findings is desirable.
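The claim that a significant result "will arise sooner or later" is easy to quantify: under a true null hypothesis, p-values are uniformly distributed, so 100 independent tests at the 5% level yield at least one significant result with probability 1 - 0.95^100, roughly 0.99. The short Python sketch below is our own illustration, not part of the original statement:

```python
import random

random.seed(1)
alpha, n_datasets, n_sims = 0.05, 100, 20_000

# Analytic: probability of at least one "significant" result when a
# true null is tested on 100 independent data sets at the 5% level.
p_at_least_one = 1 - (1 - alpha) ** n_datasets
print(round(p_at_least_one, 3))  # 0.994

# Monte Carlo check: under a true null, p-values are uniform on [0, 1],
# so each test is "significant" with probability alpha.
hits = sum(
    any(random.random() < alpha for _ in range(n_datasets))
    for _ in range(n_sims)
)
print(round(hits / n_sims, 2))
```

The simulation confirms the analytic figure: with a hundred tries per "study", a false positive is all but guaranteed.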

Answer Jan Kiviet: Over the last few decades a clear tendency can be observed, especially in the better journals, in which isolated estimation results are no longer just mechanically supplemented by indications such as: one star means significant at the 10% level, two stars at the 5% level and three stars at the 1% level. More and more, the emphasis is on clearly stating and defending all the assumptions that underlie the employed inference technique, supplementing these where possible with diagnostics, which, when insignificant, provide supporting evidence for the interpretability of the obtained results. In addition, robust variance estimators are often used, and their finite-sample properties improved by bootstrapping, all in order to enhance trust when parameter estimates and their estimated standard errors are confronted with prior beliefs and the (in)significance of the outcomes is expressed and discussed. Moreover, many empirical papers these days present results under conflicting sets of assumptions and report findings over various stages of a specification search, in which initially hardly significant outcomes are often shown to become (possibly only seemingly) more significant after imposing particular non-rejected restrictions. Readers having different judgments on the plausibility of the assumptions that unavoidably have to be made, and on the restrictions that might be imposed, may then draw diverging conclusions from such publications.

I think that these are all positive developments, but they call for a proficient readership. Interpreting a test statistic, significant or not, published in isolation and without information on its genesis simply requires the reader's unconditional belief in all the assumptions made by its author, and no well-educated, self-respecting critical scientist should ever be willing to grant that.

Answer James Davidson: I expect it is true that there is a publication bias against insignificant findings, given that referees and editors are human and fallible. However, one could equally well say there is a bias in favour of “significant” findings. Papers using dubious methodologies may find favour because they make interesting claims. How many researchers run a hundred regressions, from which they pick out just one to report in their paper? What was wrong with the other 99? In both cases, the problem is that judgement is being influenced by the outcome of the empirical tests, while it ought to focus only on the virtues of the model and the research methodology. In an ideal world, editors would delete the empirical results from papers before sending them out for review, so that they are judged on the only relevant criteria. We can hope! send your query to aenorm@vsae.nl

Jan Kiviet

James Davidson

Jan F. Kiviet was appointed to the chair of econometrics at the University of Amsterdam (UvA) in 1989. He is a Fellow of the Tinbergen Institute, the joint graduate school of the University of Amsterdam, Erasmus University and the Free University. He teaches Econometrics and is director of UvA-Econometrics. His research focuses on improving inference obtained from small samples - especially panel data - on dynamic simultaneous relationships, and on enhancing Monte Carlo simulation methodology.

James E.H. Davidson is professor of econometrics at the University of Exeter, and he has also taught at the London School of Economics and Cardiff University. He is chiefly interested in econometric time series analysis and much of his recent research has dealt with long memory models. In addition to two books on econometric theory he is the author of the software package Time Series Modelling. His website is at http://people.ex.ac.uk/jehd201/ .


Puzzle On this page you will find a few challenging puzzles. Try to solve them and compete for a prize! But first we provide the answers to the puzzles of the last edition.


Answer to "Cycling or walking?" The distance between Jack's home and his sports club is 2200 metres. This can be found as follows. Jack's cycling speed is 5.5 times his walking speed. Say his walking speed is 1, his cycling speed 5.5, and the distance between his home and the sports club x. After 900 metres it does not matter whether he goes home to get his bike or simply keeps on walking. This means that 900 + x/5.5 must be exactly equal to x - 900. This equation solves to x = 2200. And here are the puzzles of this edition:
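The break-even equation can also be checked mechanically. This short Python sketch is ours, not part of the puzzle; exact fractions avoid any rounding issues:

```python
from fractions import Fraction

# Cycling is 5.5 (= 11/2) times walking speed; take walking speed as 1.
speed_ratio = Fraction(11, 2)

# Turning back costs 900 m of walking plus x metres of cycling;
# walking on costs the remaining x - 900 metres:
#   900 + x/5.5 = x - 900  =>  x = 1800 / (1 - 1/5.5)
x = Fraction(1800) / (1 - 1 / speed_ratio)
print(x)  # 2200

# Both options indeed take the same time at the break-even distance.
assert 900 + x / speed_ratio == x - 900
```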

24-game The rules of the 24-game are as follows: you get four numbers between zero and ten and you have to make 24 with them. You must use each number exactly once. All orders of calculation are allowed; that is, we have ((a . b) . c) . d, (a . (b . c)) . d, a . ((b . c) . d), a . (b . (c . d)) and (a . b) . (c . d), where a, b, c and d stand for the numbers and each dot is one of +, -, x and /. The numbers for this puzzle are 1, 4, 5 and 6. There are two solutions and you have to find them both.
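The five parenthesizations listed above lend themselves to a brute-force search. The Python sketch below is our own illustration (function names are ours); it uses exact fractions so that expressions like 5/4 are not lost to rounding. Running it reveals the solutions in many syntactically equivalent forms, so try the puzzle by hand first:

```python
from fractions import Fraction
from itertools import permutations, product

def apply_op(op, a, b):
    """Apply one of + - * /; return None on division by zero."""
    if a is None or b is None:
        return None
    if op == '+': return a + b
    if op == '-': return a - b
    if op == '*': return a * b
    return a / b if b != 0 else None

def solve24(numbers, target=24):
    """All expressions over the four numbers that evaluate to target."""
    solutions = set()
    for a, b, c, d in permutations(numbers):
        A, B, C, D = (Fraction(x) for x in (a, b, c, d))
        for p, q, r in product('+-*/', repeat=3):
            # the five parenthesizations of "a p b q c r d"
            forms = {
                f'(({a}{p}{b}){q}{c}){r}{d}': apply_op(r, apply_op(q, apply_op(p, A, B), C), D),
                f'({a}{p}({b}{q}{c})){r}{d}': apply_op(r, apply_op(p, A, apply_op(q, B, C)), D),
                f'{a}{p}(({b}{q}{c}){r}{d})': apply_op(p, A, apply_op(r, apply_op(q, B, C), D)),
                f'{a}{p}({b}{q}({c}{r}{d}))': apply_op(p, A, apply_op(q, B, apply_op(r, C, D))),
                f'({a}{p}{b}){q}({c}{r}{d})': apply_op(q, apply_op(p, A, B), apply_op(r, C, D)),
            }
            for expr, value in forms.items():
                if value == target:
                    solutions.add(expr)
    return solutions

print(solve24([1, 4, 5, 6]))
```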


Answer to "Shuffle the cards" There are 4 loops when the cards are shuffled. If we look at the places of the cards before and after they are shuffled, we can find these loops. The card in place 8 stays in place 8, which is a loop of 1. The card in place 1 goes to place 11 and the card in place 11 goes to place 1, which is a loop of 2. The card in place 4 goes to place 13, the card in place 13 goes to place 10 and the card in place 10 ends up in place 4, which is a loop of 3. The 7 remaining cards form the fourth loop: the card in place 2 is back in place 2 after 7 shuffles, having visited places 7, 3, 12, 6, 5 and 9 in between. The deck returns to its original order after the least common multiple of the loop lengths 1, 2, 3 and 7, so we have to shuffle the cards 42 (= 2 x 3 x 7) times.
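The loop argument above amounts to decomposing a permutation into cycles and taking the least common multiple of their lengths. A small Python sketch of this (the permutation is exactly the one described in the answer; the helper name is ours):

```python
from math import lcm

# "Card in place i before the shuffle ends up in place perm[i] after",
# as described in the answer above.
perm = {8: 8, 1: 11, 11: 1, 4: 13, 13: 10, 10: 4,
        2: 7, 7: 3, 3: 12, 12: 6, 6: 5, 5: 9, 9: 2}

def cycle_lengths(perm):
    """Decompose a permutation (given as a dict) into cycle lengths."""
    seen, lengths = set(), []
    for start in perm:
        if start in seen:
            continue
        n, pos = 0, start
        while pos not in seen:       # follow the cycle until it closes
            seen.add(pos)
            pos = perm[pos]
            n += 1
        lengths.append(n)
    return lengths

lengths = cycle_lengths(perm)
print(sorted(lengths))  # [1, 2, 3, 7]
print(lcm(*lengths))    # 42 shuffles restore the original order
```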

[Kendoku grid: the 8x8 cage layout with its arithmetic clues cannot be reproduced in plain text; see the printed edition.]

Kendoku Fill each row and column with the digits 1 to 8; each digit must be used exactly once per row and column. Thick lines indicate 'cages': each cage has an arithmetic clue, which must be satisfied by the digits in the cage, e.g. '5+' means all digits in that cage add up to 5. Digits can be used more than once per cage, as long as each digit appears only once per row or column. For the solution, you have to hand in the eight numbers of the second row of the kendoku.

Solutions Solutions to the two puzzles above can be submitted until November 1st, 2010. You can hand them in at the VSAE room (E2.02/04), mail them to aenorm@vsae.nl or send them to VSAE, for the attention of Aenorm puzzle 68, Roetersstraat 11, 1018 WB Amsterdam, Holland. Among the correct submissions a book token will be raffled. Solutions can be in either English or Dutch.


For the last few months it has been very quiet at the VSAE because of the holidays. In June we had our last activities for our members: we went with thirty members to a campsite near Noordwijk for one night, and we organized the last monthly drink and the alumni drink, both of which were a lot of fun. In August the moment finally arrived: three years after it was first announced, the VSAE has moved to another room in the UvA building. The new room is twice as big and is divided into two separate rooms, one for the board to do some serious work and one for the members to work for their committees or just to hang out. We are convinced this new room will help us connect better with our members. The holidays are over now and there is a lot to look forward to. In September the VSAE organizes the Career Days, in November we will go with 50 people to Berlin for a weekend, and in December we organize the Actuarial Congress. Besides that, there are of course the monthly drinks, the pool tournament in October and the bowling tournament with Kraket in November.

Another year has come to an end. With the closing activity on the beach in Zandvoort, the old board's final activity took place. After a beach volleyball tournament we supported the Dutch soccer team through their match against Cameroon. Thanks to the good weather and the victory over Cameroon, this activity was a great end to the year. In June the traditional Kraket weekend also took place. Apart from the board, nobody knew the location, so it was a big surprise when we ended up in Marknesse, a small village in the Noordoostpolder. With a very nice accommodation and a mix of sports, fun and relaxation, this weekend will be remembered for a long time. At the time of writing, the new board has not yet announced the calendar for next year, so new activities are not known at this time. But there is no doubt that the new year will be another great one in the history of Kraket.

We hope to see you at one of these activities.

Agenda VSAE

• 23 September: Monthly Drink
• 29-30 September: Beroependagen, a career event where students can meet almost 30 companies in presentations, case studies, lunches, dinners and so on
• 9 and 11 November: Consultancy Event, where students can learn more about four strategy consultancies by working on real-life cases
• 19-22 November: Short trip abroad to Berlin with 50 students of our department

Agenda Kraket

• 23-26 August: IDEE-week, introduction period for new students
• 3-5 September: Introduction weekend for new members
• 14 September: General members assembly
• 21 September: LEVT, National Soccer Tournament for Econometrics students

© 2010 KPMG N.V., all rights reserved.

KPMG: anderhalf uur voor de eindbespreking van de jaarrekening van een groot reclamebureau (an hour and a half before the final meeting on the annual accounts of a large advertising agency)

Audit | Tax | Advisory

www.gaaan.nu

Towers Watson

Towers Watson. A clear perspective for concrete solutions.

Towers Perrin and Watson Wyatt are now together Towers Watson: a global company with a single-minded focus on clients and their success. You can rely on 14,000 experienced professionals with both local and international expertise. Our approach is based on collaboration and reliable analysis. We offer a clear perspective that links your specific situation to the bigger picture, leading you to better business results. Towers Watson. Clear results.

towerswatson.nl

Benefits | Risk and Financial Services | Talent and Rewards © 2010 Towers Watson. All rights reserved.