
Magazine for students of Actuarial Science, Econometrics & Operations Research

A publication of

No. 83 | Volume 22 | June 2014

AENORM — THIS EDITION: 2014 FIFA WORLD CUP. Predicting poverty with incomplete data: tackling measurement error with a pseudo copula approach | Comparing long and short term behaviour on the stock markets and investigating the effects of the financial crisis | Predicting FIFA World Cup 2014: a simulation based study

Column: Regularity | Exchange with New York University | Econometric Game 2014



Calm in the financial markets takes hard work. Start your career at DNB. Discover the possibilities at werkenbijdnb.nl.

At DNB you work in the nerve centre of our economy. Every decision we take is critically discussed by the entire financial world: a dynamic world that we must bring to calm. That calls for determination and perseverance, because every day you face a new complex issue and must stay ahead of current events. In doing so you make an important contribution to financial stability and trust. Can you handle that pressure, and do you see it as a challenge to help our economy move forward? Then consider a career at DNB. For more information and opportunities, visit werkenbijdnb.nl.

Working on trust.


New round, new chances

Leo Huberts, Editor-in-chief

Colophon — Editors-in-chief: Leo Huberts, Kasper van Vliet. Editors: Jozef Battjes, Irene Doelman, Marc van Houdt, Bas Koolstra, Florian van der Peet, Kevin Sipin, Simone Spierings. Circulation: 700. The articles in this magazine do not necessarily reflect the views of the VSAE board or the editors. Nothing from this magazine may be reproduced without permission from the VSAE.

Advertisers: DNB, Towers Watson. Design: United Creations, 2013. Editorial address: VSAE, Roetersstraat 11, Room E3.25, 1018 WB Amsterdam. Tel. 020 - 5254134

Calendar: Year-end party | 13 & 14 Jun · Drinks | 11 Jul · Introduction days | 20-22 Aug

Foreword

With summer around the corner, the football World Cup in Brazil is about to begin. For some it is the event of the year; for others it holds no interest at all. But whether you love football or not, it is clear to everyone that this event matters. Be it Sneijder's passes, Van Gaal's one-liners, the protests in Rio or the sky-high viewing figures: nobody can escape it and it fills every newspaper. That is why this edition of the Aenorm is also partly devoted to the World Cup in Brazil. It may be a cliché, but it is no less true: football brings people together. International football in particular creates a sense of cohesion within a country. On a smaller scale, the VSAE fulfils the same function. Our association brings the students of Econometrics, OR and Actuarial Science together, during their studies and afterwards. The same goes for the team behind this edition of the Aenorm: a largely new team, supplemented with some wisdom from the old editorial board. I am convinced that many fine Aenorms are on their way, and we will do our utmost to make every edition a pleasure to read from cover to cover. It is my honour, as chairman, to try to make the most of the editors' talents and to bring the fine new ideas that stand ready to fruition. So here is an Aenorm with a theme, one that will hopefully appeal to many of you. In the article by Bas Koolstra at the back of this edition you will find an extensive econometric analysis of the football festival. Read it carefully if you want to stand a chance in the World Cup pools with your friends or at work. We also surveyed what VSAE members think about the World Cup, and Arjen Krom wrote a column about the regularities of football.
If you have read enough about football for a while, the master's thesis of Maurits Malkus offers a way out. This study of stock-market movements and the influence of the financial crisis on them is worth reading for everyone. An event like the World Cup often also exposes problems in the world. Just as gay rights in Russia dominated the conversation during the Olympic Games, inequality in Brazil is now a frequent topic of discussion. The theme of the Econometric Game, and the resulting article written by the winners, ties in well with these worldwide issues. In the green section you will also find a report on the past Econometric Game, an interview with the brilliant case maker, the report on the Actuariaatcongres, a lively account of Joël van Kesteren's exchange in New York and, of course, the puzzle. Finally, in the spirit of togetherness, let me mention that the year-end party will take place on 13 June, where we will watch the Netherlands–Spain match together. With thanks to everyone who contributed to this edition, I wish you lots of fun reading Aenorm 83. Take it out of the plastic!


Contents

REGULARITY
Column | page 5

ECONOMETRIC GAME 2014
Event report | page 6

THE CASE MAKER SPEAKS
Interview | page 8

WHAT DOES THE VSAE THINK ABOUT THE FOOTBALL WORLD CUP
VSAE research | page 10

FULL TRANSPARENCY AT THE ACTUARIAATCONGRES
Event report | page 12

PUZZLE PAGE
Puzzle | page 14

EXCHANGE WITH NEW YORK UNIVERSITY
Exchange report | page 16

PREDICTING FIFA WORLD CUP 2014: A SIMULATION BASED STUDY — By: Bas Koolstra
English | Econometrics | BSc-level | page 20

PREDICTING POVERTY WITH INCOMPLETE DATA: TACKLING MEASUREMENT ERROR WITH A PSEUDO COPULA APPROACH — By: Alessandro Martinello, Anders Munk-Nielsen, Daniel Safai & Valeria Zhavoronkina
English | Econometrics | MSc-level | page 25

COMPARING LONG AND SHORT TERM BEHAVIOUR ON THE STOCK MARKET AND INVESTIGATING THE EFFECTS OF THE FINANCIAL CRISIS — By: Maurits Malkus
English | Econometrics | MSc-level | page 31



Regularity — By: Arjen Krom

Any moment now, with near certainty, orange streets, orange gadgets and orange foodstuffs will appear. No (economic) model is needed for that; it is a law of nature.

This time, too, it will be classic. His rush down the right flank, a smooth feint and a sprint towards midfield. Teammates screaming for the ball, but he hears nothing. The ball on his left foot, one strike, that is all that counts.

But with a World Cup, the tension among forecasters and speculators rises too. From the casual fan to the match-fixer: all of them take pleasure, or have a stake, in the performance of our Oranje.

While the leather ball is already curling towards the top corner, Iker's mind will wander. The long journey from his players' hotel to here, his wife's visit the night before, and those Brazilian ladies at the edge of the swimming pool. The samba swells as an ecstatic Arjen is already racing towards the corner flag, his shirt pulled over his head.

Is the players' hotel too far from the match venues? Are players' wives allowed to visit their husbands during the tournament? And is there a swimming pool at the hotel?

A law of nature; no model required. If past results offer any guarantee for the future, then these are not necessarily favourable signals. And what about Brazilian exhibitionism on the praias, the humidity of the South American summer, or the samba sounding twenty-four hours a day? Circumstances that speak for themselves. A lot of travelling will already be done in the group stage: from Salvador via Porto Alegre to São Paulo. All relatively close to the coast, so perhaps beneficial sea air for healthy Dutch lads. But the Atlantic Ocean and the grey-brown Wadden Sea or North Sea: can you really compare them?

Besides doubts there will also be certainties. The Oranjecamping will be tremendously convivial, Wolter Kroes will score the hit of the tournament, and Jan Mulder will find the play far below standard. Kevin Strootman will glumly watch his teammates on television at home, Louis van Gaal will clash with the press sooner than expected, and as usual Yolanthe will claim all the attention. And yet: that first match, the ultimate revenge. Nobody talks about Euro 2012 any more (Ukraine is seen in a different light these days). But Arjen Robben still thinks back regularly to that fateful evening. July 2010, late at night in Soweto.

Eye to eye with Iker Casillas, the world at his feet. It could have been so beautiful.

Arjen Krom (1982) studied business administration in Groningen and now works as a business consultant and columnist/copywriter. For more of his stories, see www.arjenkrom.nl


Econometric Game 2014 — By: Kees Ouboter

Event report

This year the study association for Actuarial Science, Econometrics and Operations Research and Management (VSAE) of the University of Amsterdam organised what is already the fifteenth edition of the Econometric Game.

On the 15th, 16th and 17th of April 2014, the most talented econometricians from all over the world came to Amsterdam to compete against each other on a challenging and socially relevant case that could change the public view on solutions to worldwide poverty. With the support of the Faculty of Economics and Business of the University of Amsterdam, the college fund of the University of Amsterdam, the Royal Economic Society and our sponsors ING and SAS, we managed to organise a successful World Championship of Econometrics. With a lot of hard work from both committee and board members of the VSAE, we succeeded in meeting the high standard the Econometric Game is well known for. This Econometric Game brought 150 students from universities all over the world to Amsterdam: from South Africa to Canada, from Korea to Russia, from Spain to America. They all came to contribute their knowledge to econometric science and to the social challenges the modern world is dealing with. This year's social challenge, provided by Professor Menno Pradhan, was to predict poverty numbers in Indonesia with an incomplete dataset.

The starting day. After a year of preparation, on the 15th of April the Econometric Game finally got under way. In the historic, impressive Tuschinski theatre, located at Reguliersbreestraat 26-34, the chairman of the committee officially opened the Econometric Game 2014. Thereafter Han van Dissel welcomed the contestants to Amsterdam and Professor Pradhan elaborated on the topic of this year's case, "Poverty analysis with incomplete consumption data": a case in which the 120 students had to make poverty predictions for Indonesia with a limited amount of data. Finally, John Poppelaars of Ortec explained the importance of econometric research in developing countries and the impact econometricians can have on the further development of these countries. After lunch the participants were guided to the University of Amsterdam, where they worked on the case in the Amsterdam Business School for the next three days. On the first day the participants did not receive the data, so they could start specifying different models but were not able to perform any real analysis; that was left for the second day. After dinner together, the participants went back to the hotel after a very tiring day to prepare themselves for the real work.

The second day of hard work. On Wednesday the 16th of April the participants received the Indonesian data, so they could finally perform their analyses and check whether their hypotheses were correct. After the case maker gave a short introduction to the case, the teams had until 6 pm to hand in their paper. After this long day of hard work, and some stressful moments at the end, the committee guided all the participants to the restaurant. While the participants were enjoying a well-deserved dinner, the hard work for the four jury members began. Around 11 pm the judges joined the participants in the Heeren van Aemstel, near Rembrandt Square, and announced the top 10 universities that proceeded to the finals. The selected universities were: Aarhus University, University Carlos III Madrid, Erasmus University Rotterdam, Harvard University, Tilburg University, University of Amsterdam, University of Bristol, University of Copenhagen, University of St. Gallen and the Warsaw School of Economics. This was the second year in a row that the team of the University of Amsterdam made it through to the finals. It did not come as a surprise to them, and they went back with good confidence for the final day. Most of the other finalists also immediately returned to their hotel. For the non-finalists, and also some of the finalists, the night had only just begun, and the partying went on till the morning light.

The final day and VSAE mini-case. In the early morning of Thursday the 17th of April, while the non-finalists tried to get some more sleep after the night before, the finalists were already on their way to the university. At 8.30 am the case maker handed out the final case, and after a short introduction the finalists began. They had until 5 pm to work on a new paper and prepare a presentation. Meanwhile a group of non-finalists got a guided tour from some of our committee members; Nikki van Ommeren and Joppe Arnold even took them to the other side of "het IJ" to the EYE film museum, and the group loved it. A new element of the Econometric Game was also introduced in this year's event: ING sponsored the VSAE mini-case, a side event in which twenty master's and bachelor's students of the University of Amsterdam also tried to solve the case about poverty in Indonesia. This side event was initiated as a pilot to give the members of the VSAE a feel for the Econometric Game and challenged them to apply their econometric skills in a concrete, real situation. It was hard to say beforehand whether the time available for the case would be sufficient, so they received a couple of tips from Professor Maurice Bun to help them on their way. After a short lunch at café Koosje, Alicia van Waveren and Rob van der Kruijs of ING gave a STAR training in which the participants of the VSAE mini-case could practise for potential job interviews. After this training the students had to finish the case and prepare a presentation of their results. Although the time was still too limited to solve the case properly, the participants were enthusiastic and the event appeared to be quite successful. With some adjustments, this experimental VSAE mini-case can become a proper side event to the Econometric Game. At 4:30 pm the participants of the VSAE mini-case presented their results to each other and the non-finalists. Constant Thoolen, senior manager at ING, announced the winners: Sacha van Duren, Bas Vonk, Andrez Mendez Ruiz and Caroline Goedhart. He also gave an enlightening speech about working at ING and his own experiences in the risk-management field.

At 6 pm the finalists were finally done with their work and started their presentations, six minutes per team. After a dinner at Tara they went to Werck for some drinks, awaiting the jury's final judgement. Around 11 pm the excited crowd was served: Professor Menno Pradhan, Maurice Bun and Chris Elbers named the University of Copenhagen winner of the 2014 edition of the Econometric Game. Aarhus University came second, and a shared third place went to the University of Bristol and the University of Amsterdam: a great result for our university, coming third in this very strong field of contestants. After some final words from the case maker and the chairman, the Econometric Game 2014 was officially closed. Both the committee and the board of the VSAE can be proud of this amazing event, and it is now up to next year's chairman, Wibrand de Reij, to organise another successful and great event.


The case maker speaks — By: Irene Doelman & Kasper van Vliet

Interview

The identity of the Econometric Game's case maker is always kept strictly secret by the committee. Now that the bustle of the event has died down, it is time to get to know the 2014 case maker, Prof. dr. Menno Pradhan, a little better. So Kasper and Irene dropped by his office at the UvA for an interview.

Menno is one of the few professors who works at both the VU and the UvA. He forms a kind of bridge between the two universities thanks to the joint master's in development economics. Because students from both the VU and the UvA take this master's, the research field is well represented in Amsterdam. That was not always the case, says Menno: "It used to be a somewhat woolly-sock, idealistic field. But then a great deal of data was collected and good research was done. That development has led to the current situation, in which development economics is a large research area that many PhD students are involved in." Two types of people work in this field: those who want to improve the world, and those who love travelling and have made it their job. "I am a bit of both. The travelling is fun, but in research terms developing countries are also very interesting because so much is going on. The effect of shocks or policy changes is far more decisive there. There is also much more to do economically, and it is easier to set up studies." After his PhD, Menno worked in academia and then joined the World Bank in Jakarta, which is why he lived in Indonesia for seven years. There he studied poverty and looked for policy strategies to reduce it. During his stay he came to a remarkable insight. "I was in the café where we sometimes had Friday-afternoon drinks, and got talking to the marketing manager of Coca-Cola. At first I thought I would have little to discuss with this man about our work. It turned out, however, that we did very much the same things. We had a completely different focus and problem statement, of course, but we performed the same kinds of analyses of consumption patterns. I had never expected that very similar work would be relevant both to selling products and to making policy." Indonesia also provided the inspiration for the Econometric Game. Four years ago he was already a jury member, but this time the participants got to tackle a case he had made himself. The aim of the case was to investigate whether proxies of wealth are good predictors of welfare statistics. The teams had three days to unleash all their knowledge on the case. "As a case maker it is wonderful to see such enthusiasm for econometrics. It is very motivating. You can see it as something very dull, but when you engage with it in this way it becomes great fun. You are given a problem and try to bring your knowledge to bear on it. On top of that there is a competitive element. You infect each other with your enthusiasm. This is how studying should be." He therefore regrets that UvA students cannot take part en masse themselves. "We have now tried it with the mini-case, but it would be nice if students could benefit even more. Perhaps an adapted version of such a case could be used during the degree programme. You learn a great deal from thinking about what econometrics is good for and where you can apply it. You look at what it all means and whether the outcome makes sense. Then econometrics comes to life a bit more." During the Econometric Game it also became apparent that people sometimes forgot about reality. Besides being case maker, Menno was a jury member and therefore read many of the papers. "Some calculations implied that poverty was 2%, or rather 50%. If you know in advance that this cannot be true, you should scratch your head. But perhaps that was time pressure too." All in all, Menno looks back on this edition of the Econometric Game with great enthusiasm. The participants, for their part, got to experience through this case that as an econometrician you can truly find your niche in development economics.

[Photo: The case maker, Prof. dr. Menno Pradhan, during the introduction of the case.]

[Photo: The winners of the VSAE mini-case.]


What does the VSAE think about… the football World Cup — By: Simone Spierings & Florian van der Peet

VSAE research

To discover the opinion of the Netherlands, a survey was conducted among a group of econometricians, which of course reflects the entire population perfectly. In this edition we look at the predictions of 43 VSAE members about the upcoming football World Cup.

Four years ago, at the World Cup in South Africa, our luck seemed boundless: the orange machine kept thundering on, and in the end our national team only just missed out on the title. This year the VSAE has little hope of a better result: only 4 people think the Netherlands will win the World Cup. This may have to do with the "group of death" in which the Netherlands finds itself, as only 60% of respondents think our team will even survive the group stage. It is also striking that current title holder Spain is not the top favourite to retain the title. Host country Brazil, by contrast, is a clear favourite, having to concede first place only to Germany. A striking outsider is Ecuador, which also received a vote. It looks as if the Spanish reign is finally coming to an end.

[Chart: Which country will become the 2014 world champion? Germany, Brazil, Spain, the Netherlands, Argentina and France received the most votes.]

In any case, the final will be watched closely in our country. According to the VSAE members, on average no fewer than 6 million Dutch people will watch the final, although Tim Hoogland expects as many as 16 million Dutch viewers. Perhaps this is related to the fact that Tim expects 11 different Dutchmen to score at the coming World Cup. How realistic that is remains to be seen: at the last World Cup only 6 Dutchmen (and a Dane) scored, and on average the VSAE expects four different goal scorers. The first goal for Oranje in South Africa was scored by a Dane.

"Tim expects 11 different Dutchmen to score at the coming World Cup"

In total, only two own goals were scored in South Africa. If we are to believe the VSAE members, that number will rise this time: an increase of no less than 150% is expected, since the VSAE expects 5 own goals on average. Marc van Houdt has little faith in the intelligence of footballers, expecting as many as 7 own goals over the whole World Cup. The VSAE also thinks that at most 7 goals will be scored in a single match, the same as the maximum number of goals in one match at the last World Cup. Not everyone is counting on a goal fest: Ruben Walschot expects at most three goals in one match.

Then, of course, the question of who will score these goals for Oranje. According to the VSAE there are two big contenders for the honorary title of top scorer of the Dutch team. Arjen Robben convinced over a quarter of the respondents, but that pales in comparison with Robin van Persie, who captured over half of the votes. This is remarkable, given that Robben knocked Van Persie out of the Champions League. A few people also have faith in Wesley Sneijder and Klaas-Jan Huntelaar. Strikingly, Robben, Sneijder and Huntelaar also rank high on the list of players expected to be substituted most often. But the favourite for the substitution trophy is Van der Vaart, who currently also tops the all-time substitution list. Faith in Van Persie is great here too: only three people think the star striker will be substituted most often in the coming tournament.

At the 2010 World Cup no fewer than 92 red cards and 245 yellow cards were handed out. Opinion within the VSAE on this point is strongly divided. Half of the respondents expect a very sportsmanlike tournament with fewer than 150 cards, with Lisa Schonk leading this fair-play movement: she expects a total of 8 cards to be handed out over the entire tournament. Diametrically opposed to this sportswoman stands Florian Kroese, who expects 600 cards to be handed out, which comes to nearly 10 cards per match. If that turns out to be right, the final will be played with half teams.

[Chart: Expected number of red and yellow cards at the World Cup, in brackets of 0-75, 75-150, 150-225 and >225, with shares ranging from 0% to 50%.]

"Milan Schinkelshoek agrees that he himself has the least footballing talent"

Naturally, we also looked at footballing talent within the VSAE. Who should Van Gaal definitely leave at home, and which VSAE talent should he actually take along? Strikingly, most VSAE members think they themselves have the least footballing talent. Still, there is always someone who is just a bit worse than the rest, namely Milan Schinkelshoek. Fortunately Milan agreed, because he also voted for himself. The VSAE's hopes, on the other hand, rest on Gijs Overgoor and Rozemarijn Veldhuis. So keep a close eye on the Arena, because who knows, you may run into them there soon. Whether the VSAE members actually know anything about football we will have to wait until the summer to find out: the craziest predictions may just turn out to be right. If you missed your chance to voice your opinion this time, be quick next time!


Full transparency at the 2014 Actuariaatcongres — By: Kasper van Vliet & Irene Doelman

Event report

On Wednesday 5 March 2014 the fourteenth edition of the Actuariaatcongres took place in the film museum EYE. This year's theme was Transparency within Financial Products and Institutions, and the aim of the day was to examine the concept of transparency from various angles. Besides actuaries and students, broadcaster Omroep MAX was also interested in this edition of the congress: transparency is the theme of a new documentary the broadcaster is working on as a sequel to "Zwarte Zwanen".

Early in the morning the committee and the board stood on the ferry to EYE. A fresh breeze through the hair made sure every committee member was truly awake and could start the day sharp. Once the IJ had been safely crossed, the committee immediately began the final preparations: the laptops were set up, the reception table was readied and the cloakroom was staffed. We soon realised that EYE was a magnificent venue for the congress, and the positive comments about the location throughout the day more than confirmed that feeling. That morning every ferry was packed with enthusiastic actuaries, the first group arriving around half past eight. After a nice cup of coffee in the foyer, the visitors moved to the Cinema 1 hall and the day could truly begin! Nuria Boot opened the congress with a short word of welcome. It was then up to chairman of the day Jan-Huug Lobregt to introduce the theme, which he did with the necessary humorous remarks. How transparent do you want to be? Do we want to know exactly what our actuaries do on Sunday afternoons? Rather not, right? So Jan-Huug wondered. In keeping with the theme, Jan-Huug introduced the speakers with a story based on Google searches, so we now know, among other things, every speaker's age, birthplace, marital status and number of children. We warmly invite you to type in the speakers' names to get an idea of this style of introduction. At a quarter to ten it was Romke van der Veen's turn to deliver the first plenary session. Jan-Huug stressed that Romke was playing an away game, since he works at Erasmus University Rotterdam. After stating that he is no football fan and certainly does not watch Studio Sport at seven on Sunday evenings (very transparent), he began his lecture on the complexity and transparency of the Dutch health insurance act. After gaining many new insights, it was time for a well-deserved cup of coffee or tea.

The next speaker was not introduced by Jan-Huug with everything the internet has to offer, because that would have kept him busy for the rest of the day: it was Antoinette Hertsenberg's turn to deliver a plenary session. She took us through an overview of people who had appeared on her programme Radar, to show how important transparency is and where things have gone wrong in recent years. She called for a recall campaign to repair past mistakes, just as Toyota once recalled all its Prius cars over possible defects. Naturally a number of critical questions came from the audience, which Antoinette answered expertly. The theme was then approached from a very different angle by Richard Weurding, director of the Dutch Association of Insurers (Verbond van Verzekeraars). He began with a historical perspective on people's trust in insurers and went on to explain the cultural change within the sector since the "woekerpolis" (usurious policy) affair. Thanks in part to several video and audio clips, it was a fascinating and dynamic presentation. After a tasty lunch it was high time for the interactive sessions. Spread over four halls in EYE, two rounds of four workshops each dug deeper into specific topics, including transparency in solvency, the financial assessment framework and realism about pension ambitions. Several discussions got going, and the sessions certainly gave food for thought. The plenary part of the congress then resumed. The floor was given to Maarten Pieter Schinkel, who as a professor at the University of Amsterdam was definitely playing a home game. After some personal transparency he began his fascinating and substantive lecture on competition and transparency in the mortgage market. Even after running a quarter of an hour over schedule, the hall was still listening attentively. We gained insight into the effect of the price-leadership bans on the Dutch mortgage market and the remarkable approach of the Dutch competition authority in this area. After finishing his talk, Maarten Pieter could take a seat straight away on the discussion panel, debating propositions devised by the chairman of the day with Hans de Goeij, Dirk Jan Sloots and Johan de Groot. All four gentlemen had outspoken opinions, and the visitors also got the chance to make their views heard. With the conclusion of the panel the substantive programme came to an end. It soon became clear that students and actuaries alike were in need of an alcoholic refreshment, and the drinks reception formed a fine end to a successful day. Around 7 pm the committee and board stood on the ferry back to the centre with a feeling of satisfaction. We hope all visitors had an instructive and enjoyable day, and look forward to seeing everyone again next year!


Puzzle page — Solution to the Aenorm 82 puzzle. The winner of the previous puzzle is Liselotte Siteur. Congratulations, Liselotte!

Below is the solution to the puzzle from the previous edition of the Aenorm.


7 4 3 8 9 2 6 1 5
1 6 9 3 5 8 7 4 2
5 7 4 1 8 9 3 2 6
9 3 1 2 6 7 4 5 8
3 8 6 4 1 5 2 9 7
8 5 2 9 7 3 1 6 4
2 9 8 6 4 1 5 7 3
4 2 7 5 3 6 9 8 1
6 1 5 7 2 4 8 3 9



Puzzle page — Puzzle submissions. On this page you will find a challenging puzzle. Solutions can be submitted until 30 June 2014: in the VSAE room (E3.25-E3.27), by email to aenorm@vsae.nl, or by post to VSAE, attn. Aenorm puzzle 83, Roetersstraat 11, 1018 WB Amsterdam, the Netherlands. A €10 gift voucher will be raffled among the correct entries.

More puzzles: can't get enough of the puzzles in the Aenorm? Then visit www.aenorm.nl/puzzels for more puzzle fun!

Puzzle explanation: every row, every column and every cluster, including the fragmented green cluster and the red 'X', must contain the numbers 1 up to and including 8.

[Puzzle grid not reproduced]

© Stephen Jones, Muddled Puzzles www.sudokion.com


Exchange with New York University

EXCHANGE REPORT

More and more students these days go on exchange abroad as part of their studies. In February you could read about Mila Harmelink's adventure in Copenhagen; this time you will find a report on studying in New York!

By: Joël van Kesteren

24 December 2012, Christmas Eve; I can still remember it well. I was sitting on the couch at my father's place, my brother next to me, sipping a glass of wine with a laptop on my lap. On that laptop was a world map, with orange dots marking all the UvA's possible exchange universities. Together we went through the continents. Australia? Phew, that's quite far. South America? Meh, my Spanish is non-existent. Asia? Doesn't really attract me either. For me it was actually a fun game, in which my brother, a seasoned traveller, was a useful lifeline. I knew I wanted to start an adventure, but it wasn't much more concrete than that. Eventually my eye fell on four clustered dots just across the Atlantic Ocean, and suddenly I knew: New York. The Big Apple, the centre of the world, the city that never sleeps; by the somewhat arrogant local New Yorker simply called The City. What could be better, after living in Amsterdam for three years, than spending some time in the city that our Dutch forefathers had once renamed 'New Amsterdam'. I would reconquer the city that we had lost to the British in a somewhat unfortunate trade. I suddenly really looked forward to it.

The arrival

Exactly eight months later the time had come. It all still felt as unreal as when I had that digital world map in front of me. I stood at Schiphol, had said goodbye to friends and family, and was about to discover the New World for the first time in my life. I arrived at Newark, from where I had to travel by train and metro to my little room in the East Village. Generally quite comfortable with public transport and finding my way, I hadn't really thought in advance about how I would do this, trusting the signage and my own common sense. Meanwhile I had arrived at Grand Central, the largest and most imposing station Manhattan has, but still quite far from my destination. Tired from the journey I stood there, while people around me, staring straight ahead and looking grumpy, moved past at record speed. I later learned that this was my first encounter with the typical New York face and New York pace, respectively. Nowhere could I find a metro map, and the signs that were there dated from somewhere in the forties and gave information I really couldn't use at that moment. So I decided to just ask. The first person who walked by at a somewhat



manageable pace, a balding man of about forty, I tapped on the shoulder: "Excuse me, do you know where…". "Do I look like a f*cking map?" was snapped in my face before I was even able to ask my question, after which the man walked on, if possible even grumpier. Welcome to New York. In the end I found my little room, which because of its size I would soon lovingly rename 'the cell', and everything turned out fine. New Yorkers actually even proved to be very friendly. A day after arriving I had my first introduction activities at the university where I would be attending lectures for the coming six months, New York University (NYU), and from then on it was one big rollercoaster ride. New York is such an incredibly vibrant, dynamic and inspiring city that I suddenly seemed to have twenty times as much energy. I wanted to meet lots of new people, discover the city, sample the nightlife, do new and strange things; every second of sleep actually felt like a waste. For almost three weeks I was completely swept along in this, after which I needed three full days of sleep to somewhat recover. Before I tell you more about New York, let me first say something about the studying. That was, of course, what I came to New York for. NYU is a colossal university in the middle of Manhattan, which earns a gigantic amount from its students (who pay 60k dollars a year) and with that money is spreading across Manhattan like an oil slick. I was genuinely amazed at how luxurious the university was, from 24/7 private taxis that would take you from A to B for free, to the 28-inch iMacs and MacBook Pros that stood everywhere, available to students. NYU is also a real community, complete with buses, a clothing line, several gyms, hospitals, and so on. Many students therefore did practically nothing in New York outside NYU.
The studies

As for courses, I decided to take a six-month break from econometrics and follow a colourful mix: Mathematics, Philosophy, Spanish and Music. The lectures at NYU were very good: mostly inspiring lecturers, an interesting, interactive way of teaching, and students who actively participated in class. All students worked incredibly hard and were unbelievably motivated and ambitious: if you came back from a long night out at 4:00 AM and walked past the library, it would still be frighteningly full of people 'pulling an all-nighter'. I had never seen anything like it in the Netherlands. Whereas in the Netherlands I was particularly good at procrastinating with a guilty conscience, in New York it was hard to study without feeling guilty: I always felt it was a waste to dive into the library while in New York. And that while the pace in all my courses was quite high. The semester




only lasted from September to mid-December, but in that time more material was covered than in a semester at the UvA that lasts a month and a half longer. It was also more school-like, with lots of interim assignments, essays, oral tests and quizzes: pretty hard work, in other words.


The city of New York

Besides studying, I did my utmost to discover all facets of New York and America. A selection from an endless list: spotting the most bizarre performers and musicians in my personal favourite Washington Square Park, hanging out on the beach and riding rollercoasters at Coney Island, staring in wonder at a rougher neighbourhood like the Bronx, partying like never before among the hipsters of Williamsburg, eating fish straight from the aquarium in Chinatown, constant neck pain from gazing up at all the skyscrapers, the madness of Times Square, live music in one of the many little bars in the East Village, morning runs along the Hudson river, a phenomenal skyline from Brooklyn Bridge Park, getting lost in Central Park, feeling like the king of the world on top of the Empire State Building, and so on, and so on. I also went on road trips with friends to Toronto, Niagara, Washington, Philadelphia and Boston, and escaped the cold of NYC's winters by flying to party walhalla Miami and tropical Puerto Rico. What I ultimately found the very best thing about New York itself was all the different cultures and people who live together so remarkably well, every single one of them wanting to get everything out of life. That was very inspiring. Everyone probably knows Chinatown or Little Italy, but New York also has a Little Thailand, India, Brazil, K-town (Korea), orthodox Jewish neighbourhoods where only men with tall hats and sideburns

walk around, Greek and Russian neighbourhoods; everything and everyone is represented. On my second day I went out for dinner with two guys from Brazil and Australia and a girl from India, which at that moment I found truly bizarre. By now I have friends from all over the world, and I never tire of talking about the differences in culture, types of people and language. New York really is a microcosm, a world of its own. Looking back on this exchange, it honestly all feels too good to be true. It sounds a bit soft, but I can hardly believe I really experienced this; it is like a dream. Everything, really everything, is different for a while: your house, your friends, your university, the language, the culture. You even are someone else for a while. An exchange is a bit scary, but don't let that hold you back. For me it was without doubt the best time of my life so far. If you want to know more about my exchange, or if you are planning to go to New York and want tips, do get in touch with me. I find it nothing but fantastic to talk about it. For now, if you happen to be going to New York soon, my tip would be: forget the Statue of Liberty or Times Square for a moment, take a random bus, put Lou Reed's 'NYC Man' on your headphones, get off at a stop that feels right, walk into the first bar you see, order a beer, and see what happens. I promise you it won't disappoint, because that's impossible in New York.



PREDICTING FIFA WORLD CUP 2014: A SIMULATION BASED STUDY The national football competitions have come to an end and coaches have announced their squads for the World Cup. Football fans from all over the world are getting excited about the upcoming tournament. Based on statistics Bas Koolstra modeled a simulation system to predict the outcomes of the World Cup.

PREDICTING POVERTY WITH INCOMPLETE DATA: TACKLING MEASUREMENT ERROR WITH A PSEUDO COPULA APPROACH The winners of the Econometric Game 2014 (University of Copenhagen) summarized the paper they wrote during the competition. In the resulting article they present their most important results.

COMPARING LONG AND SHORT TERM BEHAVIOUR ON THE STOCK MARKET AND INVESTIGATING THE EFFECTS OF THE FINANCIAL CRISIS The 2008 financial crisis caused a global collapse of stock prices. In this thesis we investigate stock market dynamics. To do this, we modify a heterogeneous agents model described by Boswijk, Hommes and Manzan and extend it to include different time horizons and additional agent types.


Predicting FIFA World Cup 2014: A simulation based study

By: Bas Koolstra

Methodology


This research is simulation based: I modeled the upcoming World Cup in Excel. Using international ratings and rankings, all participating nations are given a value representing their strength. For every match, the relative strength of the home and away team is used to determine the expected number of goals per team. The matches are then simulated using a stochastic process, and after all group stage matches the remainder of the tournament is simulated. In case of penalties in one of the knockout rounds, both teams have a 50% chance of winning. The conclusions of this study are based on 10,000 simulations of the FIFA World Cup. The next paragraphs are devoted to the rankings and to the modeling of the goal scoring process.
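The spreadsheet itself is not included in the article, but the simulation loop described above can be sketched in Python. The expected-goals inputs below (1.8 vs. 0.9) are made-up illustration values, not the model's actual outputs.

```python
import math
import random

random.seed(42)

def poisson_draw(lam):
    """Knuth's method: count uniform draws until their running product
    falls below e^-lam; the count is a Poisson(lam) sample."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def simulate_match(mu_home, mu_away, knockout=False):
    """One match: independent Poisson goal counts for both sides.
    In a knockout round a draw is settled by a 50/50 penalty shoot-out,
    as in the article's model."""
    h, a = poisson_draw(mu_home), poisson_draw(mu_away)
    if h != a:
        winner = "home" if h > a else "away"
    elif knockout:
        winner = "home" if random.random() < 0.5 else "away"
    else:
        winner = "draw"
    return winner, h, a

# Repeating a fixture many times, just as the study repeats the whole
# tournament 10,000 times, gives empirical win frequencies.
results = [simulate_match(1.8, 0.9, knockout=True)[0] for _ in range(10_000)]
share_home = results.count("home") / len(results)
```

The same two building blocks (a Poisson goal draw and a coin-flip shoot-out) are all that is needed to chain group stage and knockout rounds into a full tournament run.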

ELO rating There are several rankings to consider when looking at international teams, of which the FIFA ranking is probably the best known. However, the ELO rating has a few advantages over the FIFA ranking, and therefore the ELO rating1 is used. The system weights matches by importance (World Cup finals count far more heavily than friendly matches), adjusts for home advantage, and adjusts for the goal difference in the match result. Another advantage is the existence of an expected result formula to determine the winning chances of both teams. An important feature of the expected result is the advantage for teams playing at home: they get a bonus score for a home match, giving them a higher probability of winning. I decided to award Brazil


a bonus of 60 points, because the tournament takes place in Brazil. Furthermore, statistics show that World Cups are very often won by a country from the same continent as the host, so teams from South America receive a bonus of 30 points.
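The expected result formula referred to above can be written down directly; the sketch below uses the standard Elo logistic curve on a 400-point scale as published by eloratings.net. The ratings are hypothetical, while the 60-point host bonus is the one chosen in this article.

```python
def elo_expected(rating_a, rating_b, bonus_a=0.0, bonus_b=0.0):
    """Expected score of team A against team B on the standard Elo
    logistic curve (a draw counts as half a win)."""
    diff = (rating_a + bonus_a) - (rating_b + bonus_b)
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

# Hypothetical ratings; Brazil gets the article's 60-point host bonus,
# which lifts its expected score noticeably.
p_brazil = elo_expected(2100, 1950, bonus_a=60)  # roughly 0.77
```

Note how the bonus simply shifts the rating difference: 60 extra points move a 150-point favourite from about a 70% to about a 77% expected score.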

Goals In the end football is always about scoring goals, and that is why modeling the number of goals per team per match is so important. The literature tells us that goals in football matches tend to follow a Poisson distribution; this was shown by Norman (1998) and used in several simulations, for example by Dyte & Clarke (2000). To verify this, I studied all goals scored during the FIFA World Cup 2010 (figure 1). Over the entire tournament, teams scored an average of 1.13 goals per team per match, and the distribution of the goals looks very similar to a Poisson distribution with mean 1.13. The only exception was an outlier of seven goals, scored in the match Portugal vs. North Korea.
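The Poisson probabilities behind the comparison in figure 1 are easy to compute. The snippet below evaluates the Poisson(1.13) probability mass function; the observed 2010 frequencies themselves are only shown in the figure and are not reproduced here.

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 1.13  # average goals per team per match at the 2010 World Cup
pmf = [poisson_pmf(k, lam) for k in range(8)]

# Scoring 0 or 1 goals together covers roughly two thirds of all
# team-matches, while 7 goals (the Portugal vs. North Korea outlier)
# has a probability of well under 0.1% under this distribution.
p_seven = pmf[7]
```

This also explains why a single seven-goal match in a 64-match tournament shows up as a visible outlier against the fitted curve.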

Figure 1: Goals World Cup 2010 - relative frequency of goals per team per match (0 to 7 goals), compared with a Poisson distribution with mean 1.13

1. For more information see: www.eloratings.net

BSC-LEVEL | ECONOMETRICS

The national football competitions have come to an end and coaches have announced their squads for the World Cup. Football fans from all over the world are getting excited about the upcoming tournament. Based on statistics I modeled a simulation system to predict the outcomes of the World Cup. In this article I present my methodology and the most important results.


Unfortunately, the expected result formula of the ELO rating only predicts the probability of winning and does not take the number of goals in the match into account. Combining the expected win formula of the ELO ratings with the knowledge of the distribution of goals enables me to build a new model to predict the tournament. Using several optimization techniques to minimize the difference between my own winning probabilities and those of the ELO rating system, while constraining the average number of goals between boundaries based on previous World Cups, I arrived at a formula for the expected number of goals for both teams.

This provides me with the expected number of goals for the home team and the away team in a specific match. Using these as the means of two Poisson draws, one with mean Goals-home and one with mean Goals-away, the actual number of goals for both teams is determined. In this way, teams with a lower number of expected goals still have a chance to win the match, although they are not the favorite. The bigger the rating difference, the less likely it is that the stronger team will lose.

"There is no doubt about it that Brazil is the biggest favorite to win the World Cup this year"

Predictions

Results

It shouldn't be much of a surprise that 10,000 simulations of the FIFA World Cup lead to a lot of different scenarios. Although I didn't compare every single simulation to every other simulation, it is very unlikely that two simulations are exactly the same. Even the chance that one of the simulations will be replicated exactly at the real World Cup is relatively small, because with eight groups of four teams, the round of 16, quarterfinals, semifinals and the final, there are very many possible ways the tournament can develop. However, 10,000 simulations are more than enough to describe the general expectations for the tournament. Which team is the favorite to win the World Cup? Which nations might better stay home because they will never win, and do certain countries have a disadvantage because of strong opponents in the group stage? The average results of the World Cup simulations are shown in figure 2; the most interesting and most important outcomes are discussed per stage in this section.

Group stage

Traditionally the host of the World Cup is placed in group A, and Brazil proves to be the firm favorite in this group. With an average of 8.2 points (out of 9), a highly positive goal difference and an impressive 93.91% share of first places, they are expected to win this group. The highly desired second place is mainly a competition between Croatia and Mexico, with Mexico as the most likely candidate. Cameroon has to hope for a big surprise, because their chances of making it to the next round are very low.

Group B is often considered the toughest of this year's World Cup. Spain and the Netherlands both made it to the final of the previous World Cup, and Chile is performing very well at the moment. This is a disaster for Australia: with an average of 1.1 points they are the worst-performing team in the group stage simulations. Spain, winner of both the FIFA World Cup 2010 and UEFA Euro 2012, is most likely to take first place in this group. The battle for second place is extremely tight, but Chile has a slightly higher chance of making it to the next round: 59.49% (Chile) versus 54.02% (Netherlands). This is definitely going to be an interesting group!

The countries in group C are of less prestige than the nations in group B, and Colombia seems to profit from this. They have a high chance of progressing to the next round, although it should be noted that many of their good results in the past were inspired by key player Falcao; as he is currently injured, this could be a big problem for Colombia. The others in the group have to fight for second place, with Greece as a slight favorite, but Japan and Ivory Coast definitely have the potential to surprise.

Group D also has a country that will almost certainly return home after the three group stage matches: Costa Rica only makes it to the next round once per eight tournaments. Italy and England are still important football countries, but both have performed only averagely over the last years, and that is visible in the expectations for this tournament in Brazil. The two European countries have to battle for second place, behind strongly performing Uruguay.

In group E France is considered the strongest. Ecuador and Switzerland perform very similarly and have about the same chance of making it to the round of sixteen. Honduras is going to have a hard time in this group, but a surprise should definitely be possible. Group F has a very clear favorite: Argentina is not expected to have many problems with Bosnia and Herzegovina, Iran and Nigeria. Those last three teams have about equal chances of winning the battle for the second ticket to the next round. The best team in group G is Germany, with a first place percentage of about 70%. Portugal might be a problem for the Germans and is the favorite for second place in the group. The United States should be hoping for a small surprise and Ghana for a huge one. The last group, H, is not the strongest in the tournament. Belgium and Russia shouldn't have too much trouble; an average performance is enough for them to progress. However, it will be very interesting to see which of them is going to win this group.

Round of 16

As the first and second team of every group make it to the round of 16, it is no surprise to see the same teams in this round that were mentioned earlier as favorites in their group. Countries like Brazil, Argentina and Germany are almost certain to make it to this round, while important teams like Italy, the Netherlands, England, Ecuador, Switzerland and Chile have serious chances of missing it. Australia is least likely to make it to the second round, followed by Cameroon and Ghana.

Quarterfinals

Not surprisingly, the teams that are present in the second round in most cases are also the teams that mostly play in the quarterfinals. However, there are some very interesting remarks to make about this. Let's have a look at a small comparison between Brazil and Germany. In the round of 16 Brazil played 99.23% of the matches, but they only played 67.55% of the quarterfinals. This means that they were eliminated in about 32% of their matches in the second round. Germany played 91.38% of the matches in the second round and 78.25% of the quarterfinals; they only got eliminated in approximately 14% of their matches in the second round. A logical explanation for this is the World Cup schedule, because the winner of group A always plays the number two of group B, and as mentioned earlier group B is very strong.

Grp  Team                     Pts  GF   GA   GD    Grp1   Grp2   Grp3   Grp4   Q16    Quart  Semi   Final  1st    2nd    3rd    4th
A    Brazil                   8.2  8.7  0.9   7.8  93.91   5.32   0.74   0.03  99.23  67.55  51.43  35.97  23.95  12.02  11.23   4.23
A    Cameroon                 1.6  1.4  6.5  -5.1   0.25   9.37  24.21  66.17   9.62   0.57   0.03   0.01   0.00   0.01   0.00   0.02
A    Croatia                  3.3  2.4  4.2  -1.8   2.12  35.39  42.02  20.47  37.51   6.37   1.78   0.25   0.03   0.22   0.37   1.16
A    Mexico                   3.8  2.8  3.7  -0.9   3.72  49.92  33.03  13.33  53.64  11.29   3.81   1.19   0.20   0.99   0.70   1.92
B    Australia                1.1  1.2  6.2  -5.0   0.61   3.35  12.16  83.88   3.96   0.43   0.09   0.02   0.00   0.02   0.04   0.03
B    Chile                    4.8  3.6  2.5   1.1  23.64  35.85  34.15   6.36  59.49  29.27  17.03   8.82   3.60   5.22   4.32   3.89
B    Netherlands              4.6  3.4  2.6   0.8  20.93  33.09  38.17   7.81  54.02  26.22  15.34   7.42   2.97   4.45   4.26   3.66
B    Spain                    6.1  4.8  1.7   3.1  54.82  27.71  15.52   1.95  82.53  58.30  41.57  26.41  14.67  11.74   9.49   5.67
C    Colombia                 6.2  4.4  1.6   2.8  64.00  22.84   9.26   3.90  86.84  48.84  18.57   9.33   4.12   5.21   4.83   4.41
C    Greece                   3.8  2.6  2.9  -0.3  16.20  31.16  29.66  22.98  47.36  15.94   3.64   1.18   0.27   0.91   0.79   1.67
C    Ivory Coast              3.4  2.4  3.2  -0.9  11.99  26.54  31.12  30.35  38.53  11.74   2.58   0.65   0.15   0.50   0.46   1.47
C    Japan                    2.8  2.0  3.7  -1.6   7.81  19.46  29.96  42.77  27.27   7.50   1.32   0.33   0.04   0.29   0.29   0.70
D    Costa Rica               1.9  1.6  4.6  -3.0   3.29   9.20  22.54  64.97  12.49   4.21   0.65   0.17   0.05   0.12   0.12   0.36
D    England                  4.5  3.2  2.5   0.6  25.73  32.57  28.32  13.38  58.30  32.38  10.19   4.33   1.44   2.89   2.32   3.54
D    Italy                    4.2  2.9  2.7   0.2  20.19  30.08  33.33  16.40  50.27  26.94   8.24   3.30   1.01   2.29   2.13   2.81
D    Uruguay                  5.7  4.0  1.9   2.2  50.79  28.15  15.81   5.25  78.94  52.45  23.73  12.90   5.94   6.96   5.84   4.99
E    Ecuador                  4.4  3.1  2.5   0.5  26.01  30.93  28.08  14.98  56.94  24.66   8.79   2.70   0.75   1.95   2.24   3.85
E    France                   5.2  3.6  2.1   1.5  43.02  28.68  19.51   8.79  71.70  38.83  16.14   5.09   1.77   3.32   4.85   6.20
E    Honduras                 2.2  1.7  4.3  -2.5   4.92  11.30  23.79  59.99  16.22   4.05   0.84   0.13   0.00   0.13   0.10   0.61
E    Switzerland              4.4  3.1  2.6   0.5  26.05  29.09  28.62  16.24  55.14  24.65   8.69   2.54   0.68   1.86   2.19   3.96
F    Argentina                7.2  5.9  1.2   4.6  82.45  13.06   3.52   0.97  95.51  70.01  44.42  24.37  11.63  12.74  11.11   8.94
F    Bosnia and Herzegovina   3.0  2.2  3.8  -1.6   5.57  27.65  32.10  34.68  33.22  11.96   2.78   0.61   0.13   0.48   0.33   1.84
F    Iran                     3.2  2.3  3.7  -1.4   6.64  31.07  31.70  30.59  37.71  14.18   3.37   0.48   0.08   0.40   0.57   2.32
F    Nigeria                  3.0  2.2  3.8  -1.6   5.34  28.22  32.68  33.76  33.56  11.66   2.46   0.50   0.08   0.42   0.46   1.50
G    Germany                  6.7  5.4  1.4   4.0  69.41  21.97   7.50   1.12  91.38  78.25  59.56  33.33  20.66  12.67  18.18   8.05
G    Ghana                    1.5  1.4  5.6  -4.2   0.95   6.19  19.75  73.11   7.14   2.70   0.60   0.17   0.04   0.13   0.09   0.34
G    Portugal                 4.6  3.4  2.6   0.8  19.84  43.12  28.84   8.20  62.96  44.19  23.44   9.86   3.71   6.15   6.24   7.34
G    United States            3.7  2.7  3.3  -0.6   9.80  28.72  43.91  17.57  38.52  23.95  10.43   3.61   1.02   2.59   2.61   4.21
H    Algeria                  2.6  1.9  3.9  -1.9   6.68  15.48  29.80  48.04  22.16   2.71   0.70   0.13   0.00   0.13   0.05   0.52
H    Belgium                  5.4  3.8  2.0   1.8  44.97  31.44  16.11   7.48  76.41  23.60   9.23   2.47   0.59   1.88   2.06   4.70
H    Russia                   5.1  3.5  2.1   1.4  37.68  34.03  19.17   9.12  71.71  20.26   7.41   1.63   0.41   1.22   1.57   4.21
H    South Korea              3.1  2.2  3.5  -1.2  10.67  19.05  34.92  35.36  29.72   4.34   1.14   0.10   0.01   0.09   0.16   0.88

Figure 2: Average simulation results. Grp1 to Grp4: probability (%) of finishing first to fourth in the group; Q16 to Final: probability (%) of reaching that round; 1st to 4th: probability (%) of finishing the tournament in that position.

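The roughly 32% and 14% elimination rates quoted above follow directly from figure 2: the probability of surviving the round of 16, conditional on reaching it, is the ratio of the quarterfinal and round-of-16 columns.

```python
# Round-of-16 and quarterfinal appearance rates from figure 2 (in %).
r16 = {"Brazil": 99.23, "Germany": 91.38}
qf = {"Brazil": 67.55, "Germany": 78.25}

# P(eliminated in round of 16 | reached round of 16) = 1 - qf / r16.
elim = {team: 1.0 - qf[team] / r16[team] for team in r16}
# Brazil is knocked out in about 32% of its second-round matches,
# Germany in only about 14%.
```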

Semifinals As the final gets closer, surprises become rarer. Germany, Brazil, Argentina and Spain are the big four to watch this summer; they reach the semifinals most often. Provided all four of them win their group in the first stage of the tournament, it is even possible for all four to reach the semifinals together. In that case Germany would play Brazil and Argentina would play Spain.

Winner

There is absolutely no doubt about the favorite to win the World Cup this year: Brazil is the most likely contender. With years of strong performance, home advantage and a relatively easy start in the group stage, they are top ranked to take the trophy home. Germany is a strong candidate as well; Brazil seems to be their biggest obstacle on the way to the final. Spain needs a good start, but if they succeed in that, there are great possibilities for them as well.

Most likely tournament

People that like to bet not only on the winner but on the entire tournament need to be careful. Although Germany wins the tournament most often after Brazil, they are not very likely to play the final against Brazil. This has to do with the structure of the tournament: both teams have a large chance of taking first place in their group, and if they both do so, they will already meet in the semifinals, making it impossible for both to reach the final. Another important factor is the absence of the Netherlands in the round of 16; they are the victim of their strong group, and Chile and Spain are more likely to make it through the group stage. The complete 'most likely tournament' is shown in figure 3.

Round of 16: Brazil - Chile, Colombia - England, Spain - Mexico, Uruguay - Greece, France - Iran, Germany - Russia, Argentina - Ecuador, Belgium - Portugal
Quarterfinals: Brazil - Colombia, France - Germany, Spain - Uruguay, Argentina - Portugal
Semifinals: Brazil - Germany, Spain - Argentina
Final: Brazil - Spain
Final ranking: 1 Brazil, 2 Spain, 3 Germany, 4 Argentina

Figure 3: Most likely tournament

Relative performance

Figure 4 shows the relative performance of teams, based on their rating and the number of tournament wins in the simulation process. At first sight this comparison shows that most countries perform as expected. Chile, the Netherlands, Japan and Croatia seem to underperform, while Colombia and Portugal perform better than expected. Top teams like Brazil, Germany and Spain don't seem to over- or underperform when we only look at the difference between rankings based on ratings and on tournament wins. However, a closer look shows something remarkable: Germany's rating is only 0.6% higher than Spain's, yet Germany wins the tournament 599 times more often than Spain, an astonishing difference considering that Spain wins 1,467 out of 10,000 simulations.

As mentioned earlier, group B is very strong, so Brazil is almost certain to have a difficult match in the second round, whereas Germany is not likely to have much trouble with opponents like Russia, Belgium, Algeria or South Korea. Another important lesson from the quarterfinals is that teams that reach the second round by surprise have very little chance of causing another surprise: Cameroon doesn't make it to the second round often, but when they do, they have only a 6% chance of reaching the quarterfinal. Australia performs a little better, with an expectation of about 11%.

"Australia is least likely to make it to the next round!"


Conclusions

About the author Bas Koolstra Bas (23) completed his BSc Actuarial Science at the University of Amsterdam in the summer of 2012. Then he spent a semester abroad at McMaster University, Canada. After returning to the Netherlands he took a full-time position as chairman in the board of study association VSAE. He recently started his MSc Financial Econometrics and is expecting to finish mid 2015. Next to economics Bas is interested in sports, especially football and cycling.

The combination of the expected win formula of the ELO rating and the Poisson distribution of goals in football enables me to simulate the upcoming World Cup. These simulations show that Brazil has a high chance of winning the World Cup; other strong contenders are Germany, Spain and Argentina. The simulations also show that Colombia and Portugal have a high chance of overperforming in this tournament, probably because of an advantageous schedule, while Chile, Japan and Croatia are expected to underperform. A deeper analysis shows that Spain performs worse than its direct opponents for the top places. The simulations furthermore confirm the low expectations for the Netherlands at this World Cup: their group is very strong and the probability of playing Brazil in the round of 16 is high. However, according to the model they still have about a 3% chance of winning the World Cup, which is more than, for example, England, Italy and France. In the end it is important to note that we only have one tournament this summer and all countries have a chance to win. This model predicts probabilities based on very large numbers, so the chance of the tournament developing like the 'most likely tournament' is relatively small.

References Dyte, D., & Clarke, S. R. (2000). A ratings based Poisson model for World Cup soccer simulation. Journal of the Operational Research Society, 51(8), 993-998. Norman, J. M. (1998). Soccer. In: Bennett, J. (ed.). Statistics in Sport. Arnold: London, pp. 105-118.



Figure 4: Relative performance per country


Predicting poverty with incomplete data: tackling measurement error with a pseudo copula approach

By: Alessandro Martinello, Anders Munk-Nielsen, Daniel Safai, Valeria Zhavoronkina

Editorial staff: In April this year the fifteenth edition of the Econometric Game took place. For three days, teams from thirty different universities worked on a socio-economic case and wrote a paper about their results. The team of the University of Copenhagen submitted the paper that proved to be the best of them all. Their creative and inventive approach was rewarded: Alessandro, Anders, Daniel and Valeria became the winners of the Econometric Game 2014. The article below is based on the paper they wrote during the Game.

Introduction

The measurement of poverty using household survey statistics presents a twofold econometric problem: minimizing prediction error in the means and in the tails. This study aims to estimate poverty measures and consequently focuses on the tails. Using survey data from 1996, 1999, and 2002 on household expenditure, demographics, health, education, and labor participation, we estimate average per capita expenditure, the poverty headcount ratio, and the poverty gap ratio for Indonesia in 2002. Pradhan (2001) presents a wealth proxy from aggregated expenditure data that underscores the tradeoff between more detailed questionnaires (referred to as the module) and less costly questionnaires at a higher level of aggregation (referred to as the core), as well as the impact of measurement error on estimation results. Building on this study, we improve our initial expenditure prediction based on household characteristics and expenditure data from 1996 and 1999. Our main contributions are the correction of the distributional properties of expenditure predictions through a copula-inspired approach, and the detection and estimation of non-classical measurement error through a semi-parametric generalized method of moments (GMM) approach.

The fact that measurement error in wealth proxies causes our estimated prediction to be concentrated around the mean motivates our copula-inspired method. Attenuation bias originating from measurement error dampens the explanatory power of our regressors, thus making us underestimate the poverty headcount and gap ratios. We improve the distributional properties of our predictions by combining information from our predictions with the aggregated core data; this improves the fit of the moments of the predicted distribution. The adjustments are derived by applying Kernel Density Estimation (KDE) to the distributions of predicted module data, actual core data, and actual module data from 1996 and 1999. KDE offers us parameters (a mean and a variance) with which we effectively compute a "copula" that best matches the observed, true consumption. The observed consistent relationship between the measures lends support to using these parameters. This method allows us to account for measurement error that would otherwise have led to underpredicting the tails of our distribution.

Despite the greatly improved fit and ability to predict poverty headcount ratios with our approach, we incorrectly assign individuals to either side of the poverty line around 20% of the time. Internal validation checks demonstrate that our adjustments considerably improve the distributional properties of our predictions of consumption in the module data in 1996 and 1999. The assignment issue arises from non-classical measurement error in the core sample of our data. We therefore use the GMM approach to detect and estimate this error.


MSC-LEVEL | econometrics

Introduction

Alessandro Martinello Anders Munk-Nielsen Daniel Safai Valeria Zhavoronkina


This approach fully exploits the randomness of the assignment of the expenditure questionnaire and explains why, despite the improved prediction of the distribution, we underperform at correctly assigning observations above and below the poverty line. We obtain our data from SUSENAS, a socioeconomic survey including variables related to household expenditure, demographics, educational attainment, health care, household assets and characteristics, and labor force participation. We have data for 1996, 1999, and 2002, collected across Indonesia and presented at the provincial and district levels. The core/module design we refer to is described in Pradhan (2001) and Surbakti (1997).



The Phantom Menace: Measurement Error

A standard approach to computing poverty measures when the expenditure distribution is unknown is to predict individual expenditure values and then use them to compute poverty ratios as if they were observed data. While this approach is simple and intuitive (in a simple OLS setup it consists of predicting ŷᵢ = xᵢ'β̂), the predictive power of the model can be low, particularly in the tails of the distribution and in the neighbourhood of the poverty line cutoff, if
• the data generating mechanism is strongly non-linear across the expenditure distribution;
• the variance of the independent portion of the unobservable is relatively large;
• the explanatory variables X are measured with error.
While we can accommodate the first two points by targeting the estimation on relevant subsamples, in the absence of validation or panel data our estimated β̂ will suffer from attenuation bias. This bias attenuates the explanatory power of X and shrinks the distribution of ŷ around its mean. The upper panel of Figure 1 shows that such a bias likely occurs in our predictions, resulting in a severe underestimation of the poverty rate. The goal of this paper is to use information from the core sample of the SUSENAS dataset to correct for this underestimation; this section assesses how best to use it. The lower panel of Figure 1 compares the distributions of expenditure in the core and module sample for 1996 and 1999. While the core sample underestimates average consumption with respect to the module sample, the dispersion of the two distributions appears similar. However, similar empirical variances do not imply that the two datasets are equally precise: aggregated measures of consumption might exhibit non-classical measurement error.
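The attenuation mechanism can be illustrated with a short simulation (a sketch with hypothetical numbers, not the authors' data): a regressor measured with classical error biases the OLS slope toward zero, and the fitted values end up too concentrated around the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True model: y = 1.0 * x + u, but x is observed only with classical error e.
x = rng.normal(0.0, 1.0, n)
y = 1.0 * x + rng.normal(0.0, 1.0, n)
x_obs = x + rng.normal(0.0, 1.0, n)          # Var(e) = Var(x) = 1

# OLS slope of y on the mismeasured regressor.
beta_hat = np.cov(y, x_obs)[0, 1] / np.var(x_obs)

# Theory: the slope is attenuated by the reliability ratio
# Var(x) / (Var(x) + Var(e)) = 0.5.
print(round(beta_hat, 2))                    # close to 0.5, not 1.0

# The fitted values shrink around the mean: their variance understates Var(y),
# so tail quantities (such as a poverty headcount) computed from them are biased.
y_hat = beta_hat * x_obs
print(np.var(y_hat) < np.var(y))             # True
```

This is the same logic as the bullet points above: noise in X does not average out of the slope estimate, it systematically dampens it.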

Figure 1: Kernel density estimate comparisons for 1996 and 1999. In (a): module sample data vs. OLS predictions. In (b): module sample data vs. core sample data.

In a simple, standard measurement error model of the type

ỹᵢ = yᵢ + ρyᵢ + εᵢ, (1)

non-classical measurement error occurs when the coefficient ρ is not equal to zero. Most survey validation studies find that measurement error is non-classical (Bound and Krueger, 1991; Bound et al., 1994; Bollinger, 1998; Kane et al., 1999; Kapteyn and Ypma, 2007; Bricker and Engelhardt, 2008; Abowd and Stinson, 2013) and negatively correlated with the variable of interest. Such measurement error decreases the variance of ỹ with respect to the variance of y. If measurement error in the core sample is non-classical and, as Figure 1 suggests, the dispersion of expenditure in the core and module sample is similar, then the variance of the classical measurement error component must necessarily be large enough to compensate for the shrinking of the distribution around the mean, which in turn would indicate that the core sample has a very poor signal-to-noise ratio. Because the module sample was randomly assigned to respondents, we can test whether the measurement error process differs across the two samples. The intuition of this test is straightforward. As the module questionnaire was distributed randomly, the relationship between any variable and the different measures of y should be identical in the absence of measurement error. Any difference in the relationship between expenditure and any given variable in the dataset must then necessarily stem from different degrees of non-classical measurement error in the two samples. Figure 2 shows that, despite the randomness of the sample, the relationships between the two consumption measures and different variables in our dataset, controlled for year-by-urban-area fixed effects
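The variance argument can be made concrete with a small simulation (all parameter values hypothetical; the error is mean-centred here so that only the dispersion, not the mean, is affected):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(10.0, 2.0, 200_000)           # "true" log expenditure

rho = -0.3                                    # mean-reverting: error negatively correlated with y

def noisy(y, rho, sigma_eps):
    """Report y with mean-reverting error plus classical noise."""
    return y + rho * (y - y.mean()) + rng.normal(0.0, sigma_eps, y.size)

# With little classical noise, the reported measure is compressed around the mean...
y_small_noise = noisy(y, rho, 0.1)
print(np.std(y_small_noise) < np.std(y))      # True

# ...so if the core and module dispersions match empirically, the classical
# component must be large enough to undo the compression, implying a poor
# signal-to-noise ratio. Solve Var(noisy) = Var(y) for sigma_eps:
sigma_eps = np.sqrt(np.var(y) * (1.0 - (1.0 + rho) ** 2))
y_matched = noisy(y, rho, sigma_eps)
print(abs(np.std(y_matched) - np.std(y)) < 0.05)   # True
```

The identity used in the last step is Var(ỹ) = (1 + ρ)²Var(y) + σ²ₑ, which follows directly from the model in (1).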



and population-weighted, differ sharply. This suggests that the core sample suffers from non-classical measurement error (Pradhan, 2001).

Figure 2: Semi-parametric relationships for core and module sample. In (a): number of adults in household. In (b): working hours.

We use this difference to estimate the relative non-classicality of measurement error in the core sample with respect to the module sample. Formally, the covariance between our expenditure measure and a given variable x can be written as an affine function in the sample selection indicator c, as

(2)

(3)

where c indicates the core sample, and the two error parameters are the mean-reverting, non-classical measurement error components in the module and core sample, respectively. Intuitively, this model regresses the sample covariance between the expenditure measure and x on the sample selection indicator c. We do not assume the module data is exactly measured. However, as the full, general model is not identified solely by the comparison of covariances between the samples, we normalize the model by estimating the quantity

(4)

which identifies the relative non-classicality of measurement error in the core sample. Then we only need to identify two parameters (the covariance Cov(yᵢ, x) and the relative non-classicality itself). Because we observe the random variable c, the covariance between the expenditure measure and x provides two moments for identification. We estimate this model with GMM on a random 10% of the dataset to minimize computing time, and report the estimates in Table 1.

Table 1: GMM estimates by variable, on a random 10% sample.

Using either of the two variables reported in Figure 2, we estimate a negative and significant coefficient, suggesting that the core sample is measured with non-classical, mean-reverting error (or at least more so than the module sample). Generally, non-classical measurement error would shrink the distribution of a variable around its mean. Because the standard deviations of the consumption distribution in the core and module sample are very similar, the variance of the classical measurement error component must be large enough to compensate for this shrinking. As a consequence, the signal-to-noise ratio of the core sample expenditure data must be poor. This suggests not only that the core sample data cannot significantly improve our individual predictions, but also that the ordinal succession of observations in the core sample, from the poorest to the richest, can be misleading. With non-classical measurement error, a very poor person will tend to give a response biased towards the mean. Then, because the overall noise-to-signal ratio is large, we might observe this very poor person above the poverty line in the core sample. Because the goal of this paper is to estimate aggregate poverty measures, we choose to shift our focus from predicting individual expenditures to predicting their distribution. Instead of using individual expenditure information from the core sample to improve our individual predictions, in the next section we show how we use information from the distribution of expenditure in the core sample to directly adjust the distribution of our prediction.

A New Hope: A Pseudo Copula Approach

In this section, we present how we combine demographic information and the distribution of expenditure for the core sample to improve our poverty predictions for 2002 in Indonesia. Namely, we shift our focus from trying to predict individual expenditure levels to properly fitting the whole distribution of expenditure in the module sample. We do this by combining information from the distribution of the 2002 core expenditure data with that of predicted expenditure values from our baseline OLS estimates. We map these distributions into a prediction of the true consumption by using these distributions in 1996 and 1999 to compute the “copula” (parametric combination of the two distributions) that best matches the expenditure as observed in the module sample.

A. Intuition

Intuitively, our prediction of the distribution of


consumption is a linear combination of the distribution of the noisy consumption measure and the distribution of the OLS predictions. We combine these distributions in the years 1996 and 1999, and extrapolate the mapping we observe in these years to 2002. The underlying assumption of our approach is thus that the relationship between these three distributions is constant across years.


B. Method

In this section, we formally present how we map the two distributions of consumption (noisy and predicted) into our final prediction of the consumption distribution. In all years, we observe the expenditure measure for the core households, {y_ci | i ∈ C}. In 1996 and 1999, we also observe the expenditure measure for the module households, i ∈ M. Therefore, we can estimate β̂ from our preferred OLS estimation of y_mi on xᵢ for i ∈ M. Given these estimates, we can compute the parametric prediction on the full sample, {m̂(xᵢ) | i ∈ C ∪ M}, where m̂(xᵢ) := xᵢ'β̂. Next, we assume that these distributions are normal, and parametrize them as (5), and that the three distributions are related such that



y_m = α₁ + β₁ · y_c + noise,
y_m = α₂ + β₂ · m̂(x) + noise. (6)

We then use the fact that in 1996 and 1999, we observe all three distributions, and choose parameters such that this correspondence holds. Our final predicted variable then follows the distribution (7)
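In the spirit of the authors' footnote (transforming each predicted value individually rather than returning the fitted normal), such a per-observation adjustment can be sketched as a recentring and rescaling. The function name and all numbers below are ours, purely illustrative:

```python
import numpy as np

def rescale_predictions(pred, mu_target, sd_target):
    """Map each individual prediction onto the adjusted distribution by
    recentring and rescaling; this preserves the shape of the prediction data
    rather than replacing it with an exact normal."""
    pred = np.asarray(pred, dtype=float)
    z = (pred - pred.mean()) / pred.std()
    return mu_target + sd_target * z

# Hypothetical adjusted target moments for the prediction year:
adjusted = rescale_predictions([11.1, 11.3, 11.2, 11.6], mu_target=11.0, sd_target=0.6)
print(round(adjusted.mean(), 2), round(adjusted.std(), 2))   # 11.0 0.6
```

By construction the adjusted values hit the target mean and standard deviation exactly, while keeping the relative positions (and hence the shape) of the original predictions.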

Figure 3: Consumption density estimates for 1996 and 1999 for the module, predicted and core samples.

While this assumption cannot be tested, Figure 3 suggests that the relationship between these three distributions does not change considerably between 1996 and 1999. Figure 3 presents the estimated density of expenditure in 1996 and 1999; the pattern between the two periods seems constant. In the period from 1996 to 1999, the increase in the expected value of household expenditure in the module sample distribution was captured by the core and predicted samples as well. Similarly, while the dispersion of the module sample expenditure distribution seems to decrease slightly between the two years, so does the dispersion of the core sample expenditure distribution. As shown in Figure 1, our OLS estimates do not properly capture the tails of the real expenditure distribution, which results in a severe underestimation of the poverty headcount ratio. Our solution is then to adjust the characteristics of this distribution using information from the core sample expenditure distribution. In practice, we scale the mean and variance of this distribution parametrically by factors that we compute in a regression-based setup, so as to capture dynamics in the differences between the measures.
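Under normality, the poverty measures targeted here have closed forms, which makes the effect of an understated variance easy to see. A sketch with a hypothetical poverty line and hypothetical moments (standard results for a normal distribution, not the authors' numbers):

```python
from math import erf, exp, pi, sqrt

def norm_cdf(a):
    return 0.5 * (1.0 + erf(a / sqrt(2.0)))

def norm_pdf(a):
    return exp(-0.5 * a * a) / sqrt(2.0 * pi)

def headcount(mu, sigma, z):
    """Poverty headcount ratio: share of a N(mu, sigma^2) population below z."""
    return norm_cdf((z - mu) / sigma)

def poverty_gap(mu, sigma, z):
    """Poverty gap ratio: population average of (z - y)/z over y < z,
    using E[(z - y)^+] = (z - mu) * Phi(a) + sigma * phi(a), a = (z - mu)/sigma."""
    a = (z - mu) / sigma
    return ((z - mu) * norm_cdf(a) + sigma * norm_pdf(a)) / z

z = 70.0                                       # hypothetical poverty line
print(round(headcount(100.0, 30.0, z), 3))     # 0.159 with the full dispersion
print(round(headcount(100.0, 15.0, z), 3))     # 0.023 when the spread is halved
```

Halving the standard deviation around the same mean cuts the headcount ratio by a factor of about seven here, which is exactly the kind of underestimation an attenuated prediction produces.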

The intuition behind this approach is that under the hypothesis of normality, all distributions are completely characterized by their first and second moments. We choose our prediction to be the specific linear combination of prediction and noisy consumption measures that, for 1996 and 1999, gives the best fit of the true consumption distribution in those years, given the sample means and standard deviations we compute for each of the three distributions. Namely, given the 1996 and 1999 sample means and standard deviations, we solve the systems (8) and (9)
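A sketch of what solving such exactly identified systems could look like: with two calibration years and two unknown weights per moment, each system is a 2-by-2 linear solve. All moments below are hypothetical, not the paper's values:

```python
import numpy as np

# Hypothetical sample moments (mean, sd) of log expenditure in the two
# calibration years (rows: 1996, 1999).
mu_core = np.array([11.0, 11.4]);  sd_core = np.array([0.62, 0.58])
mu_pred = np.array([11.2, 11.6]);  sd_pred = np.array([0.35, 0.33])   # attenuated
mu_mod  = np.array([11.3, 11.7]);  sd_mod  = np.array([0.60, 0.57])   # target

# Exactly identified 2x2 systems, one per moment:
#   mu_mod,t = w1 * mu_core,t + w2 * mu_pred,t   for each calibration year t
w = np.linalg.solve(np.column_stack([mu_core, mu_pred]), mu_mod)
v = np.linalg.solve(np.column_stack([sd_core, sd_pred]), sd_mod)

# The in-sample fit is exact by construction ("R^2 of exactly 1").
print(np.allclose(np.column_stack([mu_core, mu_pred]) @ w, mu_mod))   # True

# Apply the mapping to a later year's (hypothetical) core and predicted moments:
mu_new = w @ np.array([11.8, 12.0])
sd_new = v @ np.array([0.55, 0.31])
print(round(mu_new, 2), round(sd_new, 2))
```

The resulting weights then convert any future year's core and predicted moments into adjusted target moments for the final distribution.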

The solutions to these systems are shown in Table 2.

Table 2: Mapping parameters.

Slightly abusing standard notation, we call these mapping parameters even though they are not estimated (the systems above are exactly identified and thus, loosely speaking, have an R² of exactly 1). We then use these parameters, shown in Table 2, to adjust the mean and variance of our 2002 parametric prediction m̂(xᵢ).¹ Table 2 shows that the model rests most strongly on the noisy consumption data for correcting the mean income, and most strongly on the OLS prediction for correcting the standard deviation. Figure 4 shows the in-sample fit of our procedure, comparing the adjusted distribution to the precise expenditure measure.

Figure 4: In-sample fit for 1996 and 1999.

As the parameters for the mean and variance shift were chosen so that the normal distribution approximations of the grey shaded graphs would get as close as possible to the true distribution, the fit is much better than that of the baseline OLS prediction. While we slightly overpredict poverty in 1996, Figure 4 shows that by focusing on adjusting the distributional properties of our predictions rather than improving individual predictions (thereby overcoming the individual prediction inconsistencies that arise from the different measurement error structures of the core and module datasets), we can use information from a cheaper, noisier consumption measure to improve our poverty predictions, which depend strongly on the left tail of the distribution.

Return of the Out-of-sample 2002 Predictions

In this section, we compare the new approach described above with our initial OLS predictions. Figure 5 shows our out-of-sample prediction for 2002, using information from both the observed, but noisy, consumption measure and the OLS prediction, estimated on the historical data in which the true consumption measure was available.

Figure 5: Out-of-sample prediction for 2002.

The variance of our adjusted estimates is much larger than that of the baseline OLS prediction, but smaller than that of the noisy consumption measure. Table 3 shows the OLS and copula estimates for the overall welfare measures and the poverty headcount ratio. The overall headcount ratios change significantly: through our approach, we estimate much larger poverty headcount rates than through our baseline OLS estimates. The result is similar for the poverty gap ratio. At the same time, the estimate of average per capita expenditure is stable across the two predictions. Looking at the headcount ratio by province predicted for Indonesia in 2002, shown in Table 3, we see that the estimates differ considerably. The latter method is thereby in greater accordance with our expectations, although we are likely strongly overestimating poverty headcount ratios in some of the provinces. This overestimation likely arises from structural differences by province, which our standard approach cannot take into account. However, our regional poverty predictions can easily be improved by applying our proposed method separately, province by province. This approach, while time consuming, is simple and intuitive, and is equivalent to adjusting the distributions non-parametrically by province. We expect it to considerably improve our regional predictions.

1. Technically speaking, we could simply return the normal distribution implied by the computed mean and variance, but we choose to perform the transformation on each predicted m̂(xᵢ) individually (we could also have done this on y_ci). This way, we preserve more of the data's shape.

Conclusions

This study shows how we can improve poverty estimates obtained from the standard OLS model of expenditure on wealth proxies by incorporating out-of-sample information on aggregated consumption measures. Our analysis acknowledges that our data is likely contaminated with measurement error. In particular, we show through a semi-parametric GMM estimation that aggregated expenditure measures are contaminated with non-classical measurement error, which would undermine attempts to use core sample data to improve our individual estimates of poverty for




the module sample. Therefore, we develop a method that focuses on using information from aggregated expenditure measures to adjust the distribution of our expenditure predictions, rather than individual estimates. We find that by exploiting the consistent relationships between consumption from the module sample, the core sample and our predictions, we strongly improve standard OLS predictions for the years 1996 and 1999. We then apply our model to 2002 data and predict poverty measures for Indonesia as a whole and by province. Our analysis stresses the need for further econometric research on effective poverty estimation using different levels of survey aggregation, taking into account the impact of both classical and non-classical measurement error on the estimation of the consumption mean and tails.

Table 3: Predicted national measures and headcount ratio by province in 2002.

References

Abowd, J. M. and M. H. Stinson (2013). Estimating measurement error in annual job earnings: A comparison of survey and administrative data. Review of Economics and Statistics, forthcoming.
Bollinger, C. R. (1998). Measurement error in the current population survey: A nonparametric look. Journal of Labor Economics, vol. 16, no. 3:576–594.
Bound, J. and A. B. Krueger (1991). The extent of measurement error in longitudinal earnings data: Do two wrongs make a right? Journal of Labor Economics, vol. 9, no. 1:1–24.
Bound, J., C. Brown, G. J. Duncan, and W. L. Rodgers (1994). Evidence on the validity of cross-sectional and longitudinal labor market data. Journal of Labor Economics, vol. 12, no. 3:345–368.
Bricker, J. and G. V. Engelhardt (2008). Measurement error in earnings data in the health and retirement study. Journal of Economic & Social Measurement, vol. 33, no. 1:39–61.
Kane, T. J., C. E. Rouse, and D. Staiger (1999). Estimating returns to schooling when schooling is misreported. Working Paper 7235, National Bureau of Economic Research.
Kapteyn, A. and J. Y. Ypma (2007). Measurement error and misclassification: A comparison of survey and administrative data. Journal of Labor Economics, vol. 25, no. 3:513–551.
Pradhan, M. (2001). Welfare analysis with a proxy consumption measure: evidence from a repeated experiment in Indonesia. Tinbergen Institute Discussion Paper.
Surbakti, P. (1997). Indonesia's national socio-economic survey: a continual data source for analysis on welfare development. The World Bank Economic Review.


Alessandro Martinello, Anders Munk-Nielsen, Daniel Safai and Valeria Zhavoronkina. These four bright students are the winners of the Econometric Game 2014. In the picture above you can see them working on the case. They worked for three days to produce a paper that earned them the unofficial world title in econometrics.


Comparing long and short term behaviour on the stock market and investigating the effects of the financial crisis By:

Introduction

With the recent financial crisis and the huge impact it has had on the world economy, it seems extremely relevant to understand stock market dynamics. Stock market behaviour can be explained using different economic theories. Economic theory was historically based on the assumption of rationality. During the 1950s the term bounded rationality was introduced, which essentially states that economic agents do not act rationally under all circumstances. During the long period of unprecedented growth in the eighties and nineties, and even more so after the collapse of the internet-related stock bubble around 2000 and the financial crisis starting in 2008, bounded rationality gained popularity. One of the main reasons is that rational economic theory cannot explain the formation and collapse of observed economic bubbles, which can be explained using bounded rationality. As explained in, for example, Diba and Grossman (1988), rational bubbles can in fact form under very strict conditions; however, they cannot burst. As such, theories of rational bubbles are not able to explain historically observed financial bubbles. Boswijk, Hommes and Manzan (2007), 'BHM', use a heterogeneous agents model (HAM) to explain stock price movements during bubbles using boundedly rational agents. The general idea of the model is that investor behaviour is driven by past performance: given a specific belief, if the investment strategy resulting from that belief performed well in the recent past, investors will be more likely to adopt that belief than a belief which led to less favourable financial results. In this thesis we extend the HAM used by BHM to include different time horizons, and we investigate whether the inclusion of additional agent types improves the performance of the model.

Rationality

Classical economics is based on the concept of utility maximization: any economic agent, given his or her available resources, will always allocate those resources in such a way that his or her utility is maximal. This concept is closely related to rationality, since it also



Maurits Malkus


The 2008 financial crisis caused a global collapse of stock prices. In this thesis we investigate stock market dynamics. To do this, we take the heterogeneous agents model described by Boswijk, Hommes and Manzan (2007) and extend it to include different time horizons and additional agent types.


assumes that given all available information, an agent will make an optimal choice. In classical economic theory, free markets lead to an optimal equilibrium guided by the 'invisible hand' described by Adam Smith (2003). The assumption of rationality was later formalized by Muth (1961): the Rational Expectations Hypothesis (REH) states that economic agents have perfectly rational expectations regarding future variables, meaning that their forecasts are model-consistent. Building on the REH, Fama (1970) introduced the Efficient Market Hypothesis (EMH), which states that financial markets are informationally efficient, i.e. that prices reflect all available information.



Bounded rationality

The concept of rationality is very appealing, not only because as human beings we like to think of ourselves as rational, but also because from an economic point of view it is much easier to model rational behaviour than boundedly rational behaviour. The term bounded rationality was first used by Herbert Simon (1957). Recall that rationality assumes that agents make the best choice given all available information; this implies not only that all information is available to all agents, but also that all agents are able to process this information without making any mistakes. Bounded rationality weakens the assumptions made for rationality, in general by introducing informational and cognitive limits. It is, for example, very reasonable that information itself has a price and that, partially because of this, there might be an information asymmetry. During the last decades bounded rationality, or behavioural economics, has increased in popularity. This was due, among others, to the following causes:
• In laboratory experiments it was found that humans typically do not act rationally. See for example Tversky and Kahneman (1982).
• Financial markets appear to be affected by excess volatility. The term excess volatility was introduced by Shiller (1981) and describes the phenomenon that stock price movements are much more severe than would be expected based on the movement of dividends or earnings.
• The no-trade theorem described by Milgrom and Stokey (1982) argues that no trade would take place if agents have rational expectations, are risk-averse and start with a Pareto-optimal allocation.
• The assumptions of rationality are very strict; in particular, the assumption that a rational agent knows the beliefs of all other agents is unrealistically strong if it is assumed that part of the agents behave boundedly rationally. This complication is pointed out by, for example, Hommes (2001).

Heterogeneous agent models

Heterogeneous Agent Models (HAMs) are often used to model bounded rationality. HAMs reflect bounded

rationality by allowing for multiple agent types with different beliefs about the current and future state of the world. Hommes (2006) gives an overview of analytically tractable HAMs, and LeBaron (2006), in the same book, gives an overview of HAMs that depend heavily on computational tools. Often two types of agents are used in HAMs: fundamentalists, who believe prices will revert to their fundamental value, and chartists or analysts, who expect a trend (either negative or positive) to continue. Fundamentalists are typically identified with more rational behaviour, and analysts are typically associated with bubble formation. One of the first models that uses both fundamentalists and analysts is by Zeeman (1974). HAMs have been applied to different time series. For example, ter Ellen and Zwinkels (2010) investigate oil price dynamics, and de Grauwe and Grimaldi (2006) and de Jong (2010) investigate foreign exchange markets. This thesis expands on work by Boswijk et al. (2007), who investigate a HAM for financial markets. Their model uses fundamentalist and analyst agent types to describe the behaviour of the S&P 500 index using both the price-earnings and price-dividend ratios. Agents are allowed to change their behaviour or agent type, and do so based on an evolutionary selection model that looks at past performance. A switching parameter is estimated to reflect the intensity of choice. Further work on this model has been performed by Tromp (2005), who used quarterly and semi-annual data but a fixed switching parameter. Hommes and in 't Veld1 also use quarterly data and additionally introduce a memory parameter; there, too, the switching parameter is fixed. In this thesis we compare different time horizons, ranging from one to twelve months, and additionally we investigate the added value of introducing new agent types.

Model In this thesis we build upon the model of Boswijk, Hommes and Manzan (2007) ‘BHM’. The equation below summarizes the model, for a full derivation see Boswijk, Hommes and Manzan (2007). Equation 1 describes that the value of an economic variable (the price-earnings or price-dividend ratio) at time t is determined by the expectation of the future state of this variable by the market. The market is reflected by different agent types h that each represent a part of the market (nh,t ). R* is used to correct for inflation and the (risk-free) interest rate. In Equation 2 it is reflected that the more successful an agent type is, the bigger part of the market it represents. Note that it holds that nh,t = 1. Success is described in Equation 3, where represents the relative recent performance of agenttype h versus agent type i.

1. C. Hommes and D. in ‘t Veld. Behavioural heterogeneity and the financial crisis. Draft.

32


=

(3)

In our study we differentiate between several types of agents. The BHM paper includes fundamentalists, who believe stock prices will return to their fundamental value, and chartists or technical analysts, who use technical analysis and in general hold extrapolative beliefs. Additionally we include a neutral agent type and an agent type that reacts to volatility. The neutral agent has neutral expectations and can be associated with an agent that does not base its expectations on fundamentals, does not actively invest, or is only interested in longer term behaviour. The volatility agent has expectations that are related to stock price volatility, assuming that if volatility increases, so do stock prices. This agent type can be interpreted as an agent that believes it can somehow profit from increased volatility.

We define expectations using the deviation from the fundamental, x_t, as shown in (1). The simplest way is thus to denote expectations as follows:

E_{h,t}[x_{t+1}] = f_h(I_t),  (4)

where I_t denotes the information set at time t. To keep things simple we use I_t = {x_{s−1} | s ≤ t}, which means that agents only use the deviations from the fundamental from previous periods for their future expectations. Note that x_t ∉ I_t, because an agent bases its demand on realized data. Let us now define the expectations for the fundamentalist, analyst and neutral agent types as follows:

E_{h,t}[x_{t+1}] = φ_h · x_{t−1},  (5)

which means that agent types are defined by their parameter φ_h. Note that if only one agent type is used, the model reduces to a linear model. Typically we can discern the following agent types:
• φ_h < 1: fundamentalist
• φ_h = 1: neutral
• φ_h > 1: analyst (chartist)
The volatility agent is defined as follows:

E_{v,t}[x_{t+1}] = x_{t−1} + δ · (x_{t−1} − x_{t−2})².  (6)

The first part corresponds to the expectation of the neutral agent. The second part consists of δ, the agent's parameter, and the squared difference between the last two observations, which acts as a proxy for volatility. As can be seen, the volatility agent is essentially an adjustment to the neutral agent. Note that we can expect the volatility agent to perform well specifically during bubble formation, because of the belief that rising stock prices are caused by volatility. In this thesis we look at the following models:

1. Linear model. The linear model, or one-agent model, is used as a benchmark.
2. Two-agents model. The two-agents model uses both the analyst and fundamentalist agent types, similar to BHM.
3. Three-agents model, neutral. The neutral three-agents model is based on the two-agents model, with a neutral agent type added.
4. Three-agents model, volatility. The volatility three-agents model is similar to the neutral three-agents model; instead of a neutral agent type, however, a volatility agent type is added.

The BHM model is based on yearly data. We use monthly data instead and compare different time intervals, which means some adjustments have to be made. Since we are using monthly data, it is necessary to correct for seasonal patterns; this is done by regressing the fundamental ratios on monthly dummies. We maintain the use of yearly yields in the price to yield ratio, which allows us to easily compare results for different time horizons. However, we do have to adjust the growth rate of yields g and the risk free rate r, which means adjusting R* for different time horizons as follows:

R*_m = (R*)^{m/12}.  (7)

For yearly data (m = 12) we find R*_12 = R*, as we would expect. For earnings and dividends we have to average over a time interval to get useful data, since what matters is the amount of value accrued. For earnings we take an average over a ten-year interval to correct for fluctuations, following Campbell and Shiller (2003). For dividends we average over the time interval of m months to get the total yield acquired over that specific period:

D̄_t = (1/m) Σ_{i=0}^{m−1} D_{t−i}.  (8)

If we look for example at estimations based on dividends with a time horizon of 3 months, we use the P-D ratio that is created by dividing prices by an average of the dividends of the current and previous two



months. Then, in terms of the model depicted in (1), the previous period is the data point 3 months before the current month. We do, however, use all data to get better estimation results; this means there is an overlap in the data for all but the 1-month-ahead expectation estimations. For a linear model it is possible to derive the relationship between the φ's for different time horizons. If there is only one agent type, our model reduces to a first order autoregressive model:

x_t = φ_{l,m} · x_{t−m} + ε_t,  (9)

where φ_{l,m} is the parameter for a linear model with a time horizon of m months. Extrapolating expectations we get:


x_t = (φ_{l,1})^m · x_{t−m} + ε̃_t,  (10)

and we find that the following should hold for the linear model:

φ_{l,m} = (φ_{l,1})^m.  (11)

Interpretation of the model
If we estimate the model, we obtain several parameters. First, the parameters that describe the beliefs of the different agent types: φ_c (corresponding to the analyst or chartist) and φ_f (corresponding to the fundamentalist). For the neutral three-agents model, φ_n (corresponding to the neutral agent) is fixed at 1; for the volatility three-agents model we have δ (corresponding to the volatility agent). Furthermore, we have the parameter β, which is related to the switching rate. For the analyst, fundamentalist and neutral agents we describe the different behaviour by the deviation from the fundamental, as shown in (5), and the parameters can easily be interpreted as autoregressive parameters, as seen in (9). Here 0 < φ < 1 corresponds to a stationary autoregressive model, which is linked to the fundamentalists' belief. For values of φ greater than 1 the behaviour is not stationary, which is related to the analysts' trend-following belief. A value of 1 corresponds to a neutral belief, where the deviation from the fundamental is not expected to influence future stock prices. It should be noted that negative values of φ are not feasible: there would be no reason to believe that stock prices would invert around their fundamental value. Furthermore, the farther φ deviates from 1, the more extreme the behaviour: a φ of zero, for example, corresponds to the belief that the stock price will completely revert to its fundamental, while a φ of 2 corresponds to the belief that the deviation from the fundamental will double next period. For the volatility agent, δ does not have the same interpretation as the φ's, since it does not resemble an autoregressive parameter. Instead the volatility agent acts like a neutral agent, with the addition that its expectations increase when volatility increases. The parameter β can be interpreted as the sensitivity of investors towards the past performance of strategies based on a specific belief: if β is high, an investor is more likely to switch to a strategy that was successful in the past.
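The belief rules and the switching mechanism described above can be sketched in code. This is an illustrative sketch rather than the estimation code of the thesis; the symbols phi (belief parameter), delta (volatility sensitivity) and beta (switching intensity) follow the notation in the text, the logistic switching rule is a common specification in this model class, and all function names are hypothetical.

```python
import numpy as np

def expectation(agent, x_prev, x_prev2, phi=1.0, delta=0.0):
    """Expected next-period deviation from the fundamental, E_t[x_{t+1}].

    agent: 'fundamentalist' (phi < 1), 'neutral' (phi = 1),
           'chartist' (phi > 1) or 'volatility'.
    """
    if agent == 'volatility':
        # Neutral expectation plus a volatility proxy:
        # the squared difference of the last two observations.
        return x_prev + delta * (x_prev - x_prev2) ** 2
    return phi * x_prev

def switching_weight(profit_a, profit_b, beta):
    """Logistic fraction of investors choosing strategy a.

    A higher beta means investors react more strongly to which
    strategy performed better in the past.
    """
    return 1.0 / (1.0 + np.exp(-beta * (profit_a - profit_b)))
```

For example, a fundamentalist with φ = 0.5 expects half of today's deviation to remain next period, while the volatility agent adds δ times the squared last price change to a neutral expectation.
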

Data
We use an updated version of the dataset used by Shiller (2000)². This dataset consists of monthly data on the S&P500 index and the corresponding earnings and dividends, which can both be used to determine the fundamental value of the index; see also Figure 1. The dataset starts in 1870, but it is questionable how accurate the older data is, so we only use data from 1950 onward. The data are adjusted by the Consumer Price Index to correct for inflation; seasonality is filtered out by regressing the data on monthly dummies.

Figure 1: This figure shows the data used for the analysis. The blue line depicts the S&P500, the green line the related earnings and the red line the related dividends. The data is corrected for inflation.
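The seasonal adjustment mentioned above — regressing the series on monthly dummies and keeping the residual — can be sketched as follows. The series here is synthetic, since the sketch only illustrates the mechanics, not the actual Shiller data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_months = 240
months = np.arange(n_months) % 12            # calendar month index 0..11

# Synthetic ratio series: a level, a seasonal pattern, and noise.
seasonal = np.where(months == 11, 2.0, 0.0)  # an artificial December effect
y = 15.0 + seasonal + rng.normal(scale=0.5, size=n_months)

# Regress on twelve monthly dummies; subtracting the fitted seasonal
# means and adding back the overall mean removes the seasonal pattern.
D = np.equal.outer(months, np.arange(12)).astype(float)
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
y_adj = y - D @ coef + y.mean()

# After adjustment, December no longer stands out from the other months.
dec_gap = y_adj[months == 11].mean() - y_adj[months != 11].mean()
print(dec_gap)
```

Because the twelve dummies saturate the calendar, the fitted values are simply the per-month means, so the adjusted series has the same mean in every calendar month.
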

Model estimation
Note that we estimate the model using monthly data, while we differentiate between time horizons ranging from one to twelve months. This is done in such a way that our model should provide results comparable to the BHM results, namely by increasing the length of the time steps to increase the time horizon. The approach we chose is that for a time horizon of m months we use historical variables that are also m time steps in the past; this is illustrated in Figure 2. For example, for a time horizon m of twelve months we also use time steps of twelve months, which implies an overlap of eleven (m − 1) observations.
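Constructing the overlapping horizon-m observation pairs described above can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def horizon_pairs(x, m):
    """Pair each observation x_t with its value m time steps earlier.

    Using every month as a starting point means consecutive pairs
    share m - 1 months of data, i.e. the observations overlap,
    as illustrated in Figure 2.
    """
    x = np.asarray(x)
    return x[m:], x[:-m]           # (current value, value m months before)

x = np.arange(24)                  # 24 months of dummy data
y, y_lag = horizon_pairs(x, 12)
print(len(y))                      # 12 overlapping pairs remain
```

For a one-month horizon no overlap occurs; for a twelve-month horizon every pair shares eleven months with its neighbour.
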

2. http://www.econ.yale.edu/~shiller/data/ie_data.xls.



It should be noted that the two agents model is less complex and could therefore be the preferred approach.

Figure 2: The blue curved arrow illustrates how the model works for a one-month time horizon. When the time horizon is increased to two months, as indicated by the green arrows, all time steps are doubled, which results in overlapping observations.

Figure 3: This figure shows the φ estimates for the different models: (a) two agents, (b) three agents, neutral, (c) three agents, volatility. The black and red dots (the outer dots) represent results using earnings; the green and blue dots (the inner dots) represent results using dividends. The black and green dots represent chartists, while the blue and red dots represent fundamentalists. The horizontal axis denotes the time horizon in months.

Results

In Figure 3 an overview is given of the estimates for the different models. For time horizons ranging from one to four months both φ_c and φ_f are close to 1, and the null hypothesis that the φ's are equal to 1 is not rejected in most cases. Moreover, the difference in value between the two φ's appears to be approximately constant. In the range from eight to twelve months the φ's differ significantly from 1, and the difference between the two φ's again appears to be approximately constant, similar to the time horizons ranging from one to four months. In the intermediate range, with time horizons ranging from five to seven months, the difference between the φ's increases significantly with each time step. The results obtained using dividends are similar to those obtained using earnings, the main difference being that for earnings the difference in the φ's is around 50% larger than for dividends. For the analysis based on the three-agents model using a neutral agent, the resulting φ's are similar to those of the two-agents model for earnings; for dividends, however, this does not hold. For dividends, the analyst agent has a φ that is not significantly higher than 1 at a 95% confidence level, except for the one-month time horizon. The parameter δ for the volatility agent is in most cases significantly higher than zero at a 95% confidence level, except for the one-month time horizon and the four-month time horizon for the model using dividends. Note that a positive coefficient implies that an increase in volatility has a positive effect on the fundamental ratio. Because of the asymmetry of this effect, positive bubbles are more likely to occur than negative ones, which is in line with the historical occurrence of bubbles. Based on test results, the volatility three-agents model seems to perform slightly better; however, the difference is not large enough to prefer one model over the other.
One of the merits of the volatility three agents model is that an asymmetry is introduced that favours the occurrence of positive bubbles over negative bubbles if a positive parameter is found for the volatility agent.


Forecast and backtest
Using the model outcomes it is possible to forecast future PE- and PD-ratios. Furthermore, it is possible to remove part of the most recent observations and assess whether or not realized values fall within a 95% confidence level of the predicted values. We qualitatively assess the forecasts for the most recent one-year period, for the year 2000 and for the year 2008, which correspond to the collapse of the dot-com bubble and the start of the financial crisis respectively. Note that we only show forecasts based on the PE-ratio; forecasts based on the PD-ratio yield similar results and are thus not shown. In Figure 4 we find the forecasts for the most recent year. As can be seen, the forecasts for the different models do not differ significantly, which can be explained by the relative stability (in terms of ratios) of the preceding years.

Figure 4: Forecasted PE-ratios for different model specifications: (a) single agent, (b) two agents, (c) three agents, neutral, (d) three agents, volatility. The black lines indicate realized ratios and the coloured lines indicate forecasted ratios with a 95% confidence level.

In Figure 5, which reflects the start of the financial crisis, we find that the forecasts of all models are well outside the 95% confidence levels, even though in the preceding year the ratios were again relatively stable. This suggests that the HAMs are not able to predict the start of a crisis. The period around the collapse of the dot-com bubble is depicted in Figure 6. Here we find that the single-agent model, which is a first order autoregressive model, performs badly, while the HAMs perform quite well: for the single-agent model only two of twelve observations are within the 95% confidence level, while for the HAMs only one or two are outside of it. This suggests that while HAMs are not able to predict the start of a crisis, they perform relatively well in predicting the continuation of a bubble collapse.

Figure 5: Forecasted PE-ratios for different model specifications: (a) single agent, (b) two agents, (c) three agents, neutral, (d) three agents, volatility. The coloured lines indicate forecasted ratios with a 95% confidence level.

Figure 6: Forecasted PE-ratios for different model specifications: (a) single agent, (b) two agents, (c) three agents, neutral, (d) three agents, volatility. The black lines indicate realized ratios and the coloured lines indicate forecasted ratios with a 95% confidence level.
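The backtest just described — hold out the most recent year of observations and count how many realizations fall inside the 95% forecast band — can be sketched for a first order autoregressive forecaster. This is a simplified stand-in for the HAMs, the series is simulated, and the function and variable names are illustrative.

```python
import numpy as np

def backtest_coverage(x, holdout=12):
    """Fit an AR(1) on the start of the series, forecast the held-out
    months, and count realizations inside the 95% confidence band."""
    train, test = x[:-holdout], x[-holdout:]
    # OLS estimate of the AR(1) parameter and residual spread.
    phi = np.dot(train[1:], train[:-1]) / np.dot(train[:-1], train[:-1])
    resid = train[1:] - phi * train[:-1]
    sigma = resid.std(ddof=1)

    hits = 0
    level = train[-1]
    var = 0.0
    for realized in test:
        level = phi * level                # h-step-ahead point forecast
        var = phi ** 2 * var + sigma ** 2  # forecast variance accumulates
        half = 1.96 * np.sqrt(var)         # 95% band half-width
        if (level - half) <= realized <= (level + half):
            hits += 1
    return hits

rng = np.random.default_rng(2)
eps = rng.normal(size=600)
x = np.zeros(600)
for t in range(1, 600):
    x[t] = 0.9 * x[t - 1] + eps[t]

print(backtest_coverage(x))  # hits out of 12 held-out months
```

On a stable stretch of data most realizations land inside the band; during a regime change, as in 2008, the count drops sharply, which is exactly what the figures above illustrate.
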

Conclusion & discussion
In this thesis we looked at an extended version of the BHM model, which is used to explain the behaviour of the S&P500 index using the fundamental PE- and PD-ratios. The BHM model used yearly data and two different agent types, fundamentalists and analysts. We investigated the model behaviour for different time horizons using monthly data and introduced new agent types, namely a neutral agent type and an agent type sensitive to volatility.

First we compared our results to the results obtained by BHM. Although we find point estimates that differ, the difference is not significant; this shows that our model using monthly data is consistent with the BHM model using yearly data. For time horizons ranging from one to four months, the behaviour of the different agent types is close to that of a neutral agent. We conclude that the model is not fit to describe investor behaviour on these short time horizons, which suggests that investor behaviour is not based on fundamental ratios for short time horizons. For the longer time horizons, ranging from eight to twelve months, we find a significant behavioural difference between the two agent types, suggesting that investor behaviour on those time horizons is affected by fundamental ratios. For time horizons ranging from five to seven months we find a transition that links the behaviour between the short and long time horizons.

In this thesis we investigated the addition of two new agent types. The neutral agent type has a neutral belief and was hypothesised to help explain the more stable periods in the history of the S&P500. Results show, however, that the neutral three-agents model does not perform significantly better than the original two-agents model suggested by BHM. The second agent type we investigated was the volatility agent type. This agent type responds to increased volatility; the rationale here is that the investor believes he is able to profit from increased volatility. The F-test indicates that the addition of this agent type slightly improves performance; it should, however, be noted that this addition also introduces a new parameter. For the volatility agent we found a positive parameter, suggesting that an increase in volatility leads to an increase of the fundamental ratio. The volatility agent introduces an asymmetry in the model that, given a positive parameter for the volatility agent, favours positive bubbles over negative bubbles; this is in line with the historical occurrence of bubbles.

As part of this thesis, sample forecasts were performed. These forecasts indicate that if recent history is relatively stable, the HAMs perform similarly to a simple first order autoregressive model. However, during a crisis, the HAMs perform significantly better than a simple first order autoregressive model.

References
H. P. Boswijk, C. H. Hommes, and S. Manzan, Behavioral heterogeneity in stock prices, Journal of Economic Dynamics and Control, vol. 31, pp. 1938–1970, June 2007.
B. T. Diba and H. I. Grossman, The theory of rational bubbles in stock prices, Economic Journal, vol. 98, pp. 746–754, September 1988.
A. Smith, The Wealth of Nations. Bantam Classics, March 2003.
J. F. Muth, Rational Expectations and the Theory of Price Movements, Econometrica, vol. 29, no. 3, pp. 315–335, 1961.
E. F. Fama, Efficient capital markets: A review of theory and empirical work, Journal of Finance, vol. 25, no. 2, pp. 383–417, 1970.
H. A. Simon, Models of Man: Social and Rational. John Wiley and Sons, 1957.
A. Tversky, D. Kahneman, and P. Slovic, Judgments of and by representativeness, pp. 84–98. New York: Cambridge University Press, 1982.
J. Y. Campbell and R. J. Shiller, Valuation Ratios and the Long-Run Stock Market Outlook: An Update, in Advances in Behavioral Finance, Volume II (N. Barberis and R. Thaler, eds.), New York, NY: Russell Sage Foundation, 2003.
R. J. Shiller, Irrational Exuberance. Princeton University Press, 2000.
R. J. Shiller, Do stock prices move too much to be justified by subsequent changes in dividends?, American Economic Review, vol. 71, no. 3, pp. 421–436, 1981.
P. Milgrom and N. Stokey, Information, trade and common knowledge, Journal of Economic Theory, vol. 26, pp. 17–27, 1982.
C. H. Hommes, Financial markets as nonlinear adaptive evolutionary systems, Quantitative Finance, vol. 1, no. 1, pp. 149–167, 2001.
C. H. Hommes, Heterogeneous agent models in economics and finance, in Handbook of Computational Economics (L. Tesfatsion and K. L. Judd, eds.), vol. 2, ch. 23, pp. 1109–1186, Elsevier, 2006.
B. LeBaron, Agent-based computational finance, in Handbook of Computational Economics (L. Tesfatsion and K. L. Judd, eds.), vol. 2, ch. 24, pp. 1187–1233, Elsevier, 2006.
E. C. Zeeman, On the unstable behaviour of stock exchanges, Journal of Mathematical Economics, vol. 1, pp. 39–49, March 1974.
S. ter Ellen and R. C. Zwinkels, Oil price dynamics: A behavioral finance approach with heterogeneous agents, Energy Economics, vol. 32, pp. 1427–1434, November 2010.
P. De Grauwe and M. Grimaldi, Exchange rate puzzles: A tale of switching attractors, European Economic Review, vol. 50, pp. 1–33, January 2006.
E. de Jong, W. F. Verschoor, and R. C. Zwinkels, Heterogeneity of agents and exchange rate dynamics: Evidence from the EMS, Journal of International Money and Finance, vol. 29, pp. 1652–1669, December 2010.
J. Tromp, Heterogeneity in investors' behavior, Master's thesis, University of Amsterdam, the Netherlands, 2005.
C. Hommes and D. in 't Veld, Behavioural heterogeneity and the financial crisis. Draft.


About the author
Maurits Malkus
Maurits is currently working as a Senior Consultant in the Deloitte Financial Risk Management team. He obtained his Master's degree in Financial Econometrics at the UvA and holds a Bachelor's degree in Physics from Leiden University. This article summarizes part of his master's thesis, which he wrote under the supervision of Prof. H.P. Boswijk.




Hi, I’m

Mark

I work at Towers Watson, and today I did something extraordinary.

You've nearly completed your degree, and you're ready for what's next: a job that will inspire you, make you think and put your skills to the best use. But don't you really want more than that? Go beyond your expectations at Towers Watson. If you join us, you'll often be challenged to do something extraordinary. From the start, you'll team with senior associates to learn on the job and interact with clients on projects that help improve their business. And along the way, you'll be in charge of your own career, working with your manager to decide what's next and how to get there. Sound good? Then plan to Go Beyond at Towers Watson.

Towers Watson. A global company with a singular focus on our clients.

Benefits Risk and Financial Services Talent and Rewards Exchange Solutions towerswatson.com

