Aenorm 75

Page 1

This edition:

The Struggle for Beating Christofides: Approximation of Metric TSP

75

vol. 20 may ‘12

And: Sensitivity Analysis of Quantiles One Man’s Breath... Longevity Swaps: Hedging Longevity Risk Strategic Risk Management and Risk Monitoring for Pension Funds


Jij ziet overal cijfers...

…en de bijbehorende uitdagingen. Want jij ziet dingen die anderen niet zien. Juist dat maakt je zo’n uitmuntende consultant. Bij Mercer waarderen we dat. Werken bij deze internationale autoriteit in financieel-strategische dienstverlening betekent werken in de voorhoede. Terwijl jij samen met je enthousiaste collega’s financiële HR-vraagstukken meetbaar en tastbaar maakt, zorgt Mercer voor een ongeëvenaard klantenpakket én een direct toegankelijk, internationaal kenniscentrum. Ook onze ontspannen werksfeer – even informeel als inhoudelijk – is een begrip in de branche. Allemaal kenmerken die, volgens je toekomstige collega’s, van Mercer een topbedrijf maken.

Junior consultants m/v Die positie willen we graag behouden. We zijn voortdurend op zoek naar junior consultants die zowel individueel als in teamverband kunnen excelleren. Jonge, hoogopgeleide talenten met een flexibele geest, cijfermatig inzicht, kennis en gezond verstand. Menselijke professionals die, net als Mercer, niet terugdeinzen voor uitdagingen. Voldoe jij aan dit boeiende profiel? Dan vind je bij Mercer volop mogelijkheden. Kijk op www.werkenbijmercer.nl of bel 020-4313768.

IT’S TIME T0 CALL MERCER Consulting. Outsourcing. Investments.


Colofon Editorial Board Milan Schinkelshoek Ruben Walschot Editorial Staff Tamer Dilaver Hugo Evers Linda de Koter Milan Schinkelshoek Ruben Walschot Sina Zolnoor Design United Creations © 2009 Lay-out Milan Schinkelshoek Ruben Walschot Kevin Weltevreden Cover design ©iStock/sekulicn Edit by United Creations Circulation 2000 A free subscription can be obtained at www.aenorm.nl. Advertisers DNB Mercer Towers Watson Triple A - Risk Finance Information about advertising can be obtained from Kevin Weltevreden at info@vsae.nl Insertion of an article does not mean that the opinion of the board of the VSAE, the board of Kraket or the redactional staff is verbalized. Nothing from this magazine can be duplicated without permission of VSAE or Kraket. No rights can be taken from the content of this magazine. ISSN 1568-2188 Editorial Staff adresses VSAE Roetersstraat 11, E2.02/04 1018 WB Amsterdam tel. 020-5254134 Kraket De Boelelaan 1105 1081 HV Amsterdam tel. 020-5986015

What can we learn from the crisis? by: Cees Diks Five years ago hardly anyone expected that a financial crisis was about to unfold, let alone one of the magnitude that we have witnessed. On most of us, the developments during the financial crisis have made deep impressions. For instance, bank runs were something you might have read about in textbooks, but weren’t something you should expect to occur in modern, well-developed, economies during your lifetime. Now, with the financial crisis and the bank-runs on Northern Rock in the UK and on the DSB-bank in The Netherlands still fresh in our memories, we know better. But do we also know better how to avoid, or at least predict, the next financial crisis? From an academic research point of view, we are living in extremely interesting times. By forcing economists, as well as their sponsors and policy makers, to re-order their priorities, the crisis has re-defined the research agendas of many economists and econometricians alike. Why was this necessary? In the words of Jean-Claude Trichet: `In the face of the crisis we felt abandoned by conventional tools’. This has led to a more open attitude to alternative approaches to tackling economic problems. For my colleagues and me at the Center for Nonlinear Dynamics in Economics and Finance (CeNDEF, founded by Cars Hommes) this turned out to be good news. We started to observe an increased interest from funding agencies such as NWO and policy makers such as De Nederlandsche Bank, in the alternative approach to economic modelling we were already pursuing. Since the late 90’s we have been developing heterogeneous agent models in which the decision makers are no longer assumed to be rational, but, more realistically, boundedly rational. By following this bottom-up approach, one arrives at nonlinear dynamical systems with multiple equilibria and multiple types of complex dynamic behaviour. These models often display realistic market collapses, as well as sudden transitions from one type of behavior to another. By building nonlinear heterogeneous agent models along these lines, CeNDEF got involved in a large European project (called CRISIS) for the development of a decision making tool for policy makers, based on a simulator for the economy to be developed together with the ICT sector. The results from economic experiments performed in the laboratory will increase the quality of the assumptions regarding the expectation formation and decision making processes of the agents in this model. One of our recent interests is to investigate the role of network properties; the resilience of the models to shocks turns out to depend largely on the network structure, and hence investigating the role of network structure is important for understanding financial instabilities. Besides economic modelling, the crisis also forced us to re-think econometrics. Most of the advanced econometric/statistical techniques have been developed over the relatively calm period between the late 80’s and 2007. With the current financial crisis the econometric methods have been challenged with new data that may require us to re-consider our assumptions. For instance, in time series analysis it is common to assume that the data generating process does not change over time, apart from, perhaps an occasional structural break where a parameter changes its value. How realistic is this in face of the crisis and what our agent-based models tell us? 
It seems high time to shift focus from studying the properties of stationary time series to developing new tools for dealing with non-stationary time series from complex systems.

AENORM

vol. 20 (75)

May 2012

1


00 75 The struggle for beating Christofides: approximation of metric TSP

04

by: René Sitters and Leen Stougie Every student in Operations Research knows the Travelling Salesman Problem. Even its abbreviation TSP says enough for insiders. Solving the problem is NP-hard and it is a benchmark problem for testing new techniques invented for hard combinatorial optimization problems.

Actuarial consequences of the proposed new Dutch pension contract

08

by: Ester Lamerikx This article is a summary of an article about the actuarial consequences of the proposed new Dutch pension contract. (Veerman 2012). With the new proposed Dutch pension contract an attempt is made to find a solution for the challenges that are currently faced in the Dutch pension system. Among those challenges, there are longevity, volatile investment markets and low discount rates. In the past couple of years the coverage ratio of pension funds has declined because of unexpected longevity developments and bad investment returns.

Strategic Risk Management and Risk Monitoring for Pension Funds

12

by: Bert Kramer, Sacha van Hoogdalem and Guus Boender The worldwide credit crisis has also led to financial problems for pension funds. In the Netherlands, in addition to the cost of longevity, pension funds were hit hard by dropping investment returns and low interest rates. Mid 2010, around 65% was underfunded (that is, had a funding ratio below 105%). As a consequence, pension premiums are raised, and pension rights cannot be indexed and in the near future even have to be lowered. The public illusion of a guaranteed pension is shattered.

IBIS UvA: combining theory and practice in the area of quality and efficiency improvement

17

by: Marit Schoonhoven The institute for Business and Industrial Statistics of the University of Amsterdam (IBIS UvA) is an independent consultancy firm. The institute sees the interaction between scientific research, on the one hand, and the application of technology in business and healthcare, on the other, as its core. Research focuses on statistical methodology for quality and efficiency improvement, while consultancy focuses on support during the implementation of Lean Six Sigma programs at companies and healthcare organizations. All IBIS UvA staff have a background in econometrics or mathematics and combine research with consultancy activities. The purpose of this article is to elucidate Lean Six Sigma and explain the author’s PhD research area, namely control charting.

2

AENORM

vol. 20 (75)

May 2012

vol. 20 00 may m. y. ‘12


BSc - Recommended for readers of Bachelor-level MSc - Recommended for readers of Master-level PhD - Recommended for readers of PhD-level

Lee Carter model; Modeling and Forecasting U.S. Mortality

22

by: Linda de Koter Hereby you find the first in a new sequence of articles in which a historical paper is being evaluated. First you get an extensive summary of the selected article, and after that a professor gives his (critical) view on the subject. This time the subject of the article is the for actuaries well known Lee Carter model, written by Ronald Lee and Lawrence Carter. A model that is really important for the determination of stochastic survival rates nowadays. The commenting professor is Michel Vellekoop.

Sensitivity Analysis of Quantiles

26

by: Warren Volk-Makarewicz and Bernd Heidergott Quantiles play an important role in modeling quality of service in the service industry and in modeling risk in the financial industry. While estimating/computing quantiles is already a challenging task we argue in this paper that a sensitivity analysis of quantiles should be performed as well. The paper explains how, using quite natural arguments, a singlerun sensitivity estimator for quantiles can be established. Numerical examples taken from option pricing theory will illustrate the estimator.

One man’s breath...

32

Longevity swaps: hedging longevity risk by: Maaike Schakenbos

Pension funds run a variety of risks regarding their pension obligations - interest rate risks and inflation risks come to mind -, which can be hedged through interest rate swaps and inflation swaps, respectively. Longevity risk is another pension relating risk, and its costs increase rapidly when the age of the participants increases 1.

Generalized Autoregressive Score Models

37

by: Drew Creal, Siem Jan Koopman, AndrĂŠ Lucas To capture the dynamic behavior of univariate and multivariate time series processes, we can allow parameters to be time-varying by having them as functions of lagged dependent variables as well as exogenous variables. Although other approaches of introducing time dependence exists, the GAS models, Generalized Autoregressive Score, particular approach have become popular in applied statistics and econometrics. Here we discuss a further development of Creal, Koopman, and Lucas (2012) which is based on the score function of the predictive model density at time t.

Puzzle

43

Facultive

44 AENORM

vol. 20 (75)

May 2012

3


Operations Research

The struggle for beating Christofides: approximation of metric TSP by: René Sitters and Leen Stougie Every student in Operations Research knows the Travelling Salesman Problem. Even its abbreviation TSP says enough for insiders. Solving the problem is NP-hard and it is a benchmark problem for testing new techniques invented for hard combinatorial optimization problems. Still, let us define the problem rigorously.

Introduction Given a complete undirected graph G = (V, E) on n vertices with a length function, or cost function, on the edges , find a shortest, minimum cost, tour that visits all the vertices of the graph exactly once and returns in the starting point. Such a tour is called a Hamilton Cycle in graph terminology. Thus, TSP is the problem of finding a shortest Hamilton Cycle in a complete graph. The metric TSP is a natural restriction of TSP, in which the lengths on the edges satisfy the triangle inequality, i.e. when for all . This problem students in Operations Research and Computer Science learn when studying approximation algorithms. In contradiction to the general TSP, the metric TSP admits a constant approximation ratio guarantee. It is easy to see that finding a Minimum Spanning Tree and doubling it gives an Euler graph of length no more than twice the length of an optimal TSP tour. Shortcutting it, due to the triangle inequality, gives a TSP-tour of no longer total length. In 1976, Christofides noticed that in order to get an Euler graph it is enough to match the vertices of odd degree in the minimum spanning tree only, adding only at most half the length of an optimal TSP tour. This gives an approximation ratio of 3/2. Christofides thought

that this was just a minor observation and never published it in a journal. Today, more than 35 years later, it is still the best approximation algorithm for metric TSP in terms of approximation guarantees. Improving it is seen as one of the prominent research challenges in combinatorial optimization.

Graph-TSP

In 2005, Gamarnik et al. (the journal version appeared a few years later) were the first to make a small victory in the battle of improving on Christofides. The authors give a 1.487-approximation for a very restricted case of the problem. Graph-TSP is a special case of metric TSP, where, given an undirected, unweighted underlying graph G = (V, E), a complete weighted graph on V is formed by dening the cost between two vertices as the number of edges on the shortest path between them. This new graph is known as the metric completion of G. Equivalently, this can be formulated as the problem of finding a spanning Eulerian multi-subgraph H = (V,E’) of G with a minimum number of edges. The result of Gamarnik et al. is on graph-TSP on 3-edge connected cubic graphs. A graph G = (V,E) is cubic if all of its vertices have degree 3, and subcubic if they have degree at most 3. A graph is 3-edge connected if removal of any two edges keeps the graph connected. A bridge in a connected graph is an Leen Stougie edge whose removal disconnects the graph. A Prof. dr. L. Stougie is Professor of Econometrics graph is called simple if there is at most one at the Vrije Universiteit Amsterdam. His expertise edge between any pair of vertices. Gamarnik comprise Combinatorial Optimization, algorithms, et al. gave a polynomial-time algorithm that control systems and theoretical Computer science. finds a Hamilton cycle of cost at most n for ( ⁄ ⁄ ) He has completed research projects in Combinatorial for graphalgorithms in bioinformatics, Haplotypes and TSP on 3-edge connected cubic graphs. Since Phylogenetic networks, Polyhedra of network n is the obvious lower bound for the optimal problems. His current research consists of Operations value for graph-TSP on such graphs, any tour Research and Information Technology and Mixed of length n, for any value of , results in a Criticality Scheduling. -approximation for the graph-TSP.

4

AENORM

vol. 20 (75)

May 2012


Operations Research

Figure 1: Example of a cubic graph on which Cristofides may attain a ratio of 3/2

Figure 1 shows a cubic graph where the 3-2 ratio of Christofides’ algorithm is tight. Hence, this Christofides’ algorithm is not better than 3-2-approximate even when we restrict to cubic graphs.

TSP Research

The metric TSP is well-known to be NP-hard1. Even stronger, it is APX-hard, i.e., unless P=NP there exists some small number such that no polynomial time algorithm can approximate the TSP problem within a factor 1 + . This even holds for the graph-TSP problem 2 . It is unknown if this applies to cubic graphs as well. However, we do know that it is NP-hard since the Hamilton cycle problem is NP-complete in this case. This was the state of the art when in 2010 Sylvia Boyd visited us at the Vrije Universiteit. Sylvia spent a signicant part of her research on studying the TSP, and specically a question on TSP tightly related to improving Christofides’ result. This related approach for finding approximated TSP solutions is to study the integrality gap α (TSP), which is the worst-case ratio between the optimal solution for the TSP problem and the optimal solution to its linear programming relaxation, the socalled Subtour Elimination Relaxation (henceforth SER) (see 3 for more details), given by

min

c e xe

e∈E

s.t.

xe = 2,

e v

e={u,v} u∈S,v ∈S /

xe ≥ 2,

∀v ∈ V ; ∀S ⊂ V ;

∀e ∈ E. xe ≥ 0, α The value (TSP) gives one measure of the quality of

the lower bound provided by SER for the TSP. For metric TSP, it is known that α(TSP) is at most 3/2 (see 4, 5),

and is at least 4/3 (a ratio of 43 is reached asymptotically by the family of graph-TSP problems consisting of two vertices joined by three paths of length k; see also 6 for a similar family of graphs giving this ratio), but the exact value of (TSP) is not known. A constructive proof for value α(TSP) would most likely provide an α (TSP)-approximation algorithm for the TSP. There is the following well-known conjecture: For the metric TSP, the integrality gap (TSP) for SER is 4/3.

Conjecture 1:

As with the quest to improve upon Christofides’ algorithm, the quest to prove or disprove this conjecture has been open for almost 30 years, with very little progress made. Encouraged by the result of Gamarnik et al., during the visit of Sylvia we set ourselves the goal to settle the SER-conjecture for general cubic graphs, on the way deriving a 4/3-approximation algoroithm for this subclass of graph- TSP. The paper that eventually emerged from our joint research studies the graph-TSP problem on cubic and subcubic graphs. Our main result indeed improves upon Christofides’ algorithm by providing a 4/3-approximation algorithm as well as proving 4/3 as an upper bound in Conjecture 1 for the the special case of graph-TSP for which the underlying graph G = (V,E) is a cubic graph. Like Gamarnik et al.7, our approach is based on the following two observations: 1. Since n is a lower bound for the optimal value for graph-TSP as well as the associated SER8, it is enough to look for a polynomial-time algorithm that nds a Hamilton cycle of cost at most n for some < 3/2. This gives a -approximation for the graph-TSP, as well as a proof that the integrality gap α(TSP) is at most for these graphs.

1 E.L. Lawler, J.K.Lenstra, A.H.G. Rinnooy Kan, and D.B. Shymos, 1985 2 M. Frigni,E. Koutsoupias and C. Papadimitriou 1995; C. Papadimitriou and M. Yannakakis, 1993 3 Sylvia Boyd, René Sitters, Suzanne van der Ster and Leen Stougie, 2011 4 D. Shmoys and D. Williamson, 1990 5 L. Wolsey, 1980 6 G. Benoit and S. Boyd, 2008 7 D. Gamarnik, M. Lewenstein and M. Sviridenko, 2005 8 To see that n is a lower bound fo SER, sum all of the so-called “degree contraints” for SER. Dividing the result by 2 shows that the sum of the edge variables in any feasible SER solution equals n.

AENORM

vol. 20 (75)

May 2012

5


Operations Research

A well-known theorem by Petersen9 states that every bridgeless cubic graph has a perfect matching. 2.

Suppose that we remove a perfect matching from a given cubic graph. Then the remaining graph is a set of cycles which cover all vertices. If the number of cycles is k then we can form a TSP tour of length at most n+2(k-1) by connecting the cycles by a doubled spanning tree. The difficulty is to find a perfect matching that gives a cycle cover with a small number of cycles. We proved as a basic building block for our results the following theorem. Theorem 1: Every bridgeless simple cubic graph G=(V,E)

with n 4/3n-2.

6 has a graph-TSP tour of length at most

Our proof of this theorem is constructive, and provides a polynomial-time 4/3-approximation algorithm for graph-TSP on bridgeless cubic graphs. The proof uses polyhedral techniques in a surprising way, which may be more widely applicable. The result also proves that Conjecture 1 is true for this class of TSP problems. The theorem is indeed central in the sense that the other results in our paper are based upon it. One of them is that we show how to incorporate bridges with the same guarantees. For subcubic graphs it appeared to be harder to obtain the same strong results as for cubic graphs. For this class of graph-TSP we obtain a 7/5- approximation algorithm and prove that the integrality gap is bounded by 7/5, still improving considerably over the existing 3/2 bounds. It is known that 4/3 is a lower bound for α(TSP) on subcubic graphs. We conjectured that 4/3 is the correct ratio to be obtained also for subcubic graphs. Our paper was submitted to the IPCO 2011 conference in November 2010 and was accepted for presentation at the conference in June 2011 at the IBM Watson Labs in Yorktown Heights, USA (conferences like IPCO, SODA, STOC, FOCS typically have acceptance rates of 1 out of 4 papers). We have by now written a journal version.

Other Works As often, rather intriguingly, happens, the problem seemed to have zoomed around in 2010. In January 2011, independent of our work, Aggarwal et al.10 announced an alternative 4n/3 approximation for 3-edge connected cubic graphs only, but with a simpler algorithm. Their algorithm is based on the idea of finding a triangle- and square-free cycle cover, then shrinking and “splitting off” certain 5-cycles in the cover. Their result is restricted 9 J. Petersen, 1891 10 N. Aggerwal and N. Garg and S. Gupta, 2011 11 S.O. Gharan and A. Saberi and M. Singh, 2011 12 Tobias Mömke and Ola Svensson, 2011 13 Marcin Mucha, 2011 14 András Sebö and Jens Vygen, 2012

6

AENORM

vol. 20 (75)

May 2012

to 3-edge connected cubic graphs though. Almost simultaneously with the previous paper, Gharan et al.11 announced a randomized (3/2-e)approximation for graph-TSP for some tiny but strictly positive e > 0. However, this is the very first polynomialtime algorithm with an approximation ratio strictly less than 3/2 for graph-TSP on general graphs. Their approach is very dierent from ours. In the spring of 2011 a real breakthrough in this research area emerged. Mömke and Svensson12 came up with a powerful new approach, which enabled them to prove a 1.461-approximation for graph-TSP for general graphs. In the context of the present paper it is interesting that their approach led to a bound of (4n/3 - 2/3) on the graph-TSP tour for all subcubic bridgeless graphs, thus improving upon our above mentioned (7n/5 - 4/5) bound and settling our conjecture armatively. Their approach was inspired by ours and also used our polyhedral technique. In the meantime, end of 2011, the analysis of Mömke and Svensson was tightened by Mucha13 to attain a -approximation ratio for general graph⁄ TSP. In the very beginning of 2012, Sebö and Vygen14 took a slightly different approach and managed to get the approximation ratio down to 7/5 for general graph-TSP. The battle to beat Christofides has not been come to an end. At least after more than 30 years some progress has been made. There is a victory on graph-TSP. This is not a minor victory, since we suspect that graph-TSP provides the most difficult instances for metric TSP. But also on graph TSP, the battle is not over yet. The quest for a 4/3 approximation ratio and integrality gap for graph-TSP will continue. For general metric TSP not even an e has been scraped off the 3/2. Christodes’ 3/2 approximation still stands firmly and keeps challenging researchers in combinatorial optimization.

References

N. Aggarwal and N. Garg and S. Gupta: “A 4/3-approximation for TSP on cubic 3-edge-connected graphs“, manuscript (2011) G. Benoit and S. Boyd: “Finding the exact integrality gap for small travelling salesman problems“ Math. of Operations Research 33 (2008): 921-931 Sylvia Boyd, René Sitters, Suzanne van der Ster and Leen Stougie: “TSP on Cubic and Subcubic Graphs“ Proceedings of the 15th conference of Integer Programming and Cominatorial Optimization (2011)


Operations Research

Sylvia Boyd, René Sitters, Suzanne van der Ster, and Leen Stougie: “The traveling salesman problem on cubic and subcubic graphs“ http://arxiv.org/abs/1107.1052 (2011) B. Csaba, M. Karpinski, and P. Krysta: “Approximability of dense and sparse instances of minimum 2-connectivity, tsp and path problems“ Proc. 13th ACM-SIAM Symposium on Discrete Algorithms (2002): 74-83 N. Christofides: “Worst case analysis of a new heuristic for the travelling salesman problem“ Report 388, Graduate School of Industrial Administration, Carnegie Mellon University, Pittsbrugh (1976) D. Fulkerson: “Blocking and anti-blocking pairs of polyhedra” Math. Programming 1 (1995): 640-645 D. Gamarnik, M. Lewenstein and M. Sviridenko: “An improved upper bound for the TSP in cubic 3-edgeconnected graphs” OR Letters 33 (2005): 467-474 S. O. Gharan and A. Saberi and M. Singh: “A randomized rounding approach to the traveling salesman problem” Proceedings of the 52nd Annual IEEE Symposium on Foundatiions of Computer Science (FOCS) (2011): 550-559 M. Grigni, E. Koutsoupias and C. Papadimitriou: “An approximation scheme for planar graph TSP” Proc, 36th Annual Symposium on Foundations of Computer Science (1995): 640-645

E.L.Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, and D.B. Schmoys “The Traveling Salesman Problem-A Guided Tour of Combinatorial Optimization” Willey, Chichester (1985) Tobias Mömke and Ola Svensson: “Approximating Graphic TSP by Matchings” Proceedings of the 52nd Annual IEEE Symposium on Foundatiions of Computer Science (FOCS) (2011): 560-569 Marcin Mucha: “A 13/9-approximation for Graphic TSP”http://arxiv.org/abs/1108.1130 (2011) C. Papadimitriou and M. Yannakakis: “The traveling Salesman Problem with distances one and two” Math. Oper. Res. 18 (1993) 1-11 J. Petersen: “Die Theorie der regulären graphen” Acta Math 15 (1891) 193-220 András Sebö and Jens Vygen: “Shorter tours by nicer ears: 7/5-approximation for graphic TSP, 3/2 for the path version and 4/3 for two-edged-connected subgraphs” http://arxiv.org/abs/1201.1870 (2012) D. Shmoys and D. Williamson: “Analyzing the Held-Karp TSP bound: A monotonicity property with application” Information Processing Letters 35 (1990):281-285 L. Wolsey: “Heuristic analysis, linear programming and branch and bound” Math. Programming Study

AENORM

vol. 20 (75)

May 2012

7


Actuarial Science

Actuarial consequences of the proposed new Dutch pension contract by: Ester Lamerikx

This article is a summary of an article about the actuarial consequences of the proposed new Dutch pension contract. (Veerman 2012) With the new proposed Dutch pension contract an attempt is made to find a solution for the challenges that are currently faced in the Dutch pension system. Among those challenges, there are longevity, volatile investment markets and low discount rates. In the past couple of years the coverage ratio of pension funds has declined because of unexpected longevity developments and bad investment returns. The main difference between the current pension contract and the proposed new contract is that within the new contract it will be easier to adjust the benefits if the coverage ratio of the pension fund has decreased as a result of unexpected developments in longevity and investment returns. The main goal of this new pension contract is that pension contributions will be less volatile and that contributions not necessarily need to be increased in case of a too low coverage ratio. What needs to be sorted out is the effect of the new pension contract on risk sharing between generations. In the full article we consider the longevity adjustment mechanism, the stabilization of contributions, the investment return adjustment mechanism, the discount rate and the buffers. In this summary we mainly focus on the effect on risk sharing between generations of the investment return adjustment mechanism. When the coverage ratio of pension find has decreased as a result of bad investment return, this will, under the new pension contract no longer lead to extra contributions but to al lowering of the accrued benefits. This will be accomplished by application of the investment return adjustment mechanism (IRAM).

Ester Lamerikx Ester Lamerikx works as a senior risk consultant at Triple A-Risk Finance. She studied econometrics at Maastricht University and Acuarial Science at Amsterdam University. After finishing the post graduate course actuarial science she also completed the Financial Risk Manager course of GARP. In her role as risk consultant she gives advice to pension funds and corporates about pension policy. She also works as a certifying actuary. In her spare time she likes to sport, running and cycling and listening and playing music.

8

AENORM

vol. 20 (75)

May 2012

The IRAM can be applied in two ways: • based on a nominal coverage ratio • based on a real coverage ratio

In case of a real coverage ratio

When the real coverage ratio is below 100%, the indexation of the accrued benefits will be lowered with an equal percentage during the next ten years. In case of a very low coverage ratio the indexation can also be negative. In that case the accrued benefits will be reduced instead of indexed. The pension fund can construct her indexation, and investment policy in such a way that the chance of decreasing the accrued benefits is lowered. The pension fund can, for example, adjust the indexation scale in such a way that indexation will increase less rapidly when the coverage ratio increases and decreases less rapidly when the coverage ratio decreases.

In case of a nominal coverage ratio

When the nominal coverage ratio is higher than the required coverage ratio (assets are higher than the pension liability including buffers) the pension fund can grant a full indexation. When the coverage ratio is below the required coverage ratio the accrued benefits will be partially indexed. In case the coverage ratio is below 100% the accrued benefits will be reduced in such a way that the coverage ratio is, within period of 5 years, equal to 100%. In the proposal for the new pension contract the pension fund has a maximum period of 10 years to return to the required coverage ratio. In the further of this article we assume that this means that the coverage ratio is on the required level 10 years after the first application.


Actuarial Science

In this example we assume that the IRAM is used with a spread period of ten years. We assume that the pension fund doesn’t apply buffers and that the discount rate is equal to 5% in the first variant and 6% in the second variant. Next to that we assume a transition as per beginning of 2012 and we assume that all accrued benefits will be transferred into the new pension contract. In the figures the bars represent the reduction or indexation of the accrued benefits under the current framework (dark gray) and under the new proposed pension contract (light gray).

We constructed an example to clarify this further

Based on two possible future scenarios we clarify the differences between the current framework and the framework of the new pension contract. The first scenario is a basic scenario with a set of realistic assumptions. The second scenario is a pessimistic scenario where we assume deflation. The assumptions in the two scenarios are shown in table 1 and 2. In the basic scenario the financial markets slowly evaluate to the equilibrium situation where the short and long interest rates slowly have risen to the levels before 2008. Under the deflation scenario, interest rates and inflation continu to decline, stock markets fall significantly for two consecutive years. Thereafter the financial markets evaluate again slowly to the equilibrium situation where the short and long interest rates have risen to the levels before 2008. The initial coverage ratio was 95% in both scenarios. The initial coverage ratio is calculated under the current framework (i.e. a nominal coverage which is discounted by the risk free rate). We also assume that this fictitious pension fund - in line with the majority of pension funds - has set up a recovery plan which describes that at the end of 2013 the pension fund has recovered to a coverage ratio of 105%.

Basic Scenario

In figure 1 the net effect of reductions and indexations in the basic scenario under the current framework en under the new proposed pension contract, with discount rate of 5%, is shown. The current framework leads to an initial reduction of approximately 4% in 2013. In subsequent years within the current framework a (positive) indexation is granted. In case of the new pension contract there are no reductions and net indexation granted until 2014, thereafter a slightly increasing discount is applied. It can be concluded that the reduction in the current framework that is applied to the older generations is converted to a higher reductions applicable to the younger generations in the new pension contract.

Table 1: Realized inflation effects

Realised inflation 2012

2013

2014

Returns & Yields for the year 2015 2016 2017 2018

Equity

5,0%

7,5%

7,5%

8,0%

8,0%

8,0%

8,0%

8,0%

8,0%

8,0%

Shortterm interest

1,6%

2,0%

2,5%

3,0%

3,0%

3,0%

3,0%

3,0%

3,0%

3,0%

Longterm interest

3,7%

4,3%

4,5%

4,5%

4,5%

4,5%

4,5%

4,5%

4,5%

Credit spread

0,9%

0,9%

0,9%

0,9%

0,9%

0,9%

0,9%

0,9%

0,9%

0,9%

Realised inflation

1,5%

1,8%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

Expected shortterm inflation 2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

Expected longterm inflation

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2012

2013

2014

Returns & Yields for the year 2015 2016 2017 2018

2019

2020

2021

4,0%

2019

2020

2021

Table 2: Deflation effects

Deflation Equity

0,0% -10,0% -10,0%

5,0% 15,0%

9,0%

6,5%

6,5%

6,5%

6,5%

Shortterm interest

0,8%

0,5%

1,5%

2,0%

2,3%

2,5%

2,5%

2,5%

2,5%

2,5%

Longterm interest

2,5%

2,0%

2,5%

3,5%

4,0%

4,5%

4,5%

4,5%

4,5%

4,5%

Credit spread

1,5%

2,0%

1,5%

1,2%

0,9%

0,9%

0,9%

0,9%

0,9%

0,9%

Realised inflation

0,0%

0,0%

0,0%

0,5%

1,5%

2,0%

2,0%

2,0%

2,0%

2,0%

Expected shortterm inflation

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

Expected longterm inflation

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

2,0%

AENORM

vol. 20 (75)

May 2012

9


Actuarial Science

If we assume a discount of 6% the figure will change as shown in figure 2. When a discount rate of 6% is applied in the framework of the proposed pension contract no reductions of the accrued benefits are applicable. Therefore, the transfer of wealth from the younger generations to the older generations increases.

Deflation Scenario

In the deflation scenario, the net effect of reductions and indexations for the two frameworks is as follows (shown in figure 3): In the current framework we see a reduction of the benefits of approximately 11% in 2013. In 2017 there is for the first time an (positive) indexation granted. In case of the framework of the new pension contract there are no reductions and no indexations granted until 2015, thereafter a small reduction is applied. It can be concluded that the reduction in 2013 in the current framework is shifted to the younger generations in the framework of the new pension contract. The generation effect has significantly increased compared to the basic scenario. In the previous example we used a discount rate of 5%. If we now assume a discount rate of 6%, the figure is as shown in figure 4. It can be concluded that the differences in reductions of accrued benefits under the current framework and the new framework are significantly. The differences are greater in a scenario where the recession worsens.

10

AENORM

vol. 20 (75)

May 2012

Some other considerations about the proposed new pension contract For the longevity adjustment mechanism partially the same conclusions hold as for the IRAM. For this mechanism also holds that the longer the adjustment period, the greater the solidarity between younger and older generations will be. One of the main goals of the new pension contract is premium stability. We conclude that in the proposal the effect of an aging pension funds population is not sufficiently taken into account. Normally, when the active population of the pension fund ages, the premium increases. Regarding the discount factor, pension fund can opt for a discount factor that is higher than the risk free discount rate. By applying this higher discount rate also a transfer of wealth from the younger generation to the older generation will take place. Finally, at this moment it is quite insecure whether the proposed pension contract will be effectuated or not. The Dutch Government agreed to change the eligible age for the social security pension. The trade unions reacted on this buy saying that because of this decision they do not feel obliged to still agree upon the proposed pension contract.

References

Drs. H. Veerman AAG RBA, drs. E. Lamerikx AAG FRM, “Enige actuariële aspecten van het pensioenakkoord.” Tijdschrift voor pensioenvraagstukken, februari 2012-nummer 1


Actuarial Science

Figure 1 till 4 (downwards counting): Net effects of reductions and indexations. Figure 1: under the basis scenario. Figure 2: discount rate of 6,0%. Figure 3: under the deflation scenario. Figure 4: discount rate of 6,0%

AENORM

vol. 20 (75)

May 2012

11


Economics

Economics

Strategic Risk Management and Risk Monitoring for Pension Funds by: Bert Kramer, Sacha van Hoogdalem and Guus Boender

The worldwide credit crisis has also led to financial problems for pension funds. In a large number of countries, there are doubts about the sustainability of the pension system. In the Netherlands, in addition to the cost of longevity, pension funds were hit hard by dropping investment returns and low interest rates. Mid 2010, the average funding ratio for Dutch pension funds was 100%. Of all Dutch pension funds, around 65% was underfunded (that is, had a funding ratio below 105%). Weighted with the number of participants, these underfunded pension funds represent 88% of the Dutch market1. As a consequence, pension premiums are raised, and pension rights cannot be indexed and in the near future even have to be lowered. The public illusion of a guaranteed pension is shattered. The current crisis reemphasizes the necessity of a well founded strategic risk management that focuses on funded ratio risk, rather than on implementation risk. In this paper we describe our methodology on how strategic risk management of a pension funds should ideally be organized.

Bert Kramer Dr. Bert Kramer (1968) studied econometrics at the University of Groningen. In 1996, he received a PhD degree in Management. Bert joined Ortec Finance in 1995. His current position is team manager Research. His fields of expertise are ALM for non-life insurance companies, housing corporations and property investors. From 1997 – 2000 Bert also was Assistant Professor of Finance at Tilburg University.

Sacha van Hoogdalem Drs. Sacha van Hoogdalem RBA is Head of Products at Ortec Finance. She is responsible for all models and methodologies. Sacha joined ORTEC in 1992. She has been involved in strategic decision support of pension funds since 1994. From 2000 to 2006 Sacha was head of the ALM department. She studied Business Econometrics at Erasmus University in Rotterdam.

Key issues As the return on a risk free investment portfolio will usually be too low to guarantee, at an acceptable contribution level, the long term ambition of a pension fund, a pension fund will have to take risk. The first key question then is how much risk the stakeholders are willing to take (i.e., what is the risk appetite), distinguished in funded ratio risk, contribution risk and pension risk. E.g. in The Netherlands indexation of the pension rights is conditional on the solvency level, so that a limit on maximal indexation is a crucial risk budget within the system.

Guus Boender Prof. Dr. Guus Boender (1955) is board member of Ortec Finance and professor Asset Liability Management at the Free University at Amsterdam. He is one of the co-founders of ORTEC, a specialist in Asset Liability Management, Risk Management and Performance attribution for pension funds, insurance companies and housing corporations. Ortec Finance has now over 160 employees and offices in the Netherlands, Switzerland and UK. Guus gained his PhD in Econometrics and Operations research at The Erasmus University Rotterdam and published over 25 articles in leading international journals.

1 Source: De Nederlandsche Bank (DNB). The funding ratio is measured as the market value of the assets divided by the market value of the liabilities.

12

AENORM

vol. 20 (75)

May 2012


Economics

The second question is how this risk is going to be managed. In particular, in which extent should investment risk be reduced if the funded ratio weakens, and, should these risk-based decisions be influenced by forward looking views on the performance of the financial markets? As a rule of thumb we hold that 1% extra return is equal to 30% higher pensions or 30% lower premiums, thereby demonstrating the crucial role of investment risk in realizing the formulated pension ambitions. However, based on the available historical data, it should be stressed that in any given year a global stock exchange index faces a 2.5% chance of decreasing more than 30% in value and, moreover, that such weak stock markets can persist for a prolonged time, as evidenced by the crisis in Japan during the 1990s and the very slow recovery after the Crash of 1929. It is therefore paramount to determine the investment risk as high as responsible, yet always on the precondition that if this risk actually materializes, it can still be safely absorbed according to previously-agreed upon and clearly-communicated ways. A third issue concerns the struggle of hedging either nominal or real interest rate risk. The impact of this choice between nominal and real can be enormous. For instance, if a pension fund chooses to hedge the nominal interest rate risk, the nominal funding ratio is protected from further drops in the interest rate. However, if instead inflation and interest rates rise, the real funding ratio can drop dramatically. So steering in nominal terms can seriously affect the real ambitions.

Asset Liability Management and Strategic Risk Management The objectives of Strategic Risk Management and Asset Liability Management (ALM) are: 1.

2.

ALM: support the choice of the risk (and return) appetite of all stakeholders, including the policy horizon, and specify an integral ALM policy that given the specified risk limits maximizes the ambition of the fund. Risk Management: manage these risks and returns in an unfolding and changing environment.

This process is illustrated in Figure 1. Asset Liability Management

ALM not only involves establishing the risk appetite of all stakeholders, but also establishing the strategy that takes into account these – potentially conflicting – risk appetites in the best possible way. We define the natural asset mix as the optimal asset mix provided that the pension fund is in a steady state. This is the case when the

Figure 1: Risk Management and ALM

fund is not limited due to violations of the short-term restrictions, and when investment experts consider the financial markets not in a structurally unstable condition. Therefore the natural mix represents the allocation of strategic asset categories that can realize the ambition established in the pension deal while simultaneously respecting its appetite for risk. This allocation is always adhered to as long as the financial environment is considered “neutral” and the risk implications are not considered too high. The natural asset allocation is also referred to as the desired asset mix. To find the desired asset mix of a pension fund, ALM studies utilize the technique of scenario analysis2. Scenarios are future trajectories modeling the external insecurities that managers must take into account in their policy determination and evaluation. They concern inflation, interest rates, currencies, the returns of the various investment categories and styles (such as hedge funds), and the development of instruments deduced from these, such as swaps and options. ALM studies calculate, with the use of a corporate model of the pension fund, for every year and each scenario, what the consequences are of the policy intentions for all stakeholders involved. This is done by taking into account the relevant characteristics of all individual participants, namely the dynamics regarding long-life, career and disability etc., and how these characteristics are translated, given the pension scheme, into premiums, indexations and funding ratios. Strategic Risk Management

There are three main reasons to adjust the natural asset mix, namely (1) when the financial markets are temporarily unstable (for instance, due to quantitative easing by governments), (2) if the economic views are structurally changed (for instance, lower equity risk premium, lower (real) interest rates) and (3) if the risk limits are under threat or even violated (for instance, the probability of underfunding is higher than management considers warranted). 2 Boender, G., C. Dert, F. Heemskerk, and H. Hoek. “A Scenario Approach of ALM.” Chapter 18 in S.A. Zenios and W.T. Ziemba (Eds.). Handbook of Asset and Liability Management Vol. 2. North-Holland: Elsevier, 2007

The decision whether or not to adjust the natural asset

AENORM

vol. 20 (75)

May 2012

13


Economics

mix is part of Strategic Risk Management. Figure 2 illustrates the strategic risk management process. Pension funds should find a balance between continuously Figure 2: Strategic Risk Management

Economics

account long term ambitions. Or the investment committee advises a decision to accommodate the long term ambition while exceeding short term risk limits. 4.

Monitor Which information should the monitor of the strategic risk management process contain?

Monitoring Reports Up-to-date and easy to understand risk monitoring reports are an essential tool to support pension fund management in their Strategic Risk Management. Fund management should be able to develop a considered opinion based on these monitoring reports. Monitoring reports typically contain: adapting their policy based on short term developments and sticking too long to their long term policy. In our opinion, the agreed upon strategy and risk appetite are leading. Furthermore, all assumptions that formed the basis of this strategy should be monitored. Indeed, if the assumptions underlying the strategy are no longer considered valid, the strategy is no longer valid. Temporary deviations should lead to temporary policy adjustments that take into account short term consequences (this is often called risk management). Structural changes to the assumptions should lead to structural adjustments of the pension deal (this is often called ALM). Both aspects are an integral part of Strategic Risk Management. It is important to clearly define the decision making process beforehand. We distinguish four components of strategic risk management: 1.

Risk component a. What actions are undertaken if the risk limits are in danger or violated? b. Is risk management action undertaken irrespective of macroeconomic views? c. Which detection and decision processes are followed? d. Which actions are we allowed to take?

2. Investment

component Does the macroeconomic view play a role? If so, formalize a predefined process of information provision and decision making to temporarily or structurally adjust the strategy

3. Governance

component Establish who is responsible for what, also in case of crises. Otherwise nobody is responsible if things go wrong. Or different committee’s undertaken “incomplete� actions. For instance: the risk committee decides to hedge short term risk without taking into

14

AENORM

vol. 20 (75)

May 2012

1. The status quo with respect to a number of key statistics

like risk limits and the funding ratio. Figure 3 gives a graphical example. With a funding ratio of 115%, this fund does not meet the required buffer of 20%.

Figure 3: Change in solvency of the fund compared to the previous quarter


Economics

Table 1: Causes of the change in the funded ratio Year

2010

Funded ratio 2009 Q4 ∆%

Contribution M1 ∆% -point

Payments M2 ∆% -point

118.5 %

1.0%

0.8%

Indexation M3 ∆% -point

How did the changes in the status quo take place? Table 1 provides an example based on Dutch regulations. In this table, the impact of several contributors to the funding ratio development are exposed in an ex post analysis. 2.

0.0%

Yieldcurve M4 ∆% -point -9.1%

Excessreturn M5 ∆% -point 4.6%

Misc. M6

∆% -point -0.8%

Funded ratio 2010 Q2 % 115.0%

Figure 5: Risk decomposition funding ratio risk

3. Risk decomposition: Where do the current risks

come from? This concerns in particular an overview of the sensitivity of the funding ratio to the most relevant risk drivers and the effectiveness of risk policy. Figure 4 (see bottom of the page) presents a graphical example for interest rate sensitivity and the effectiveness of a hedging strategy using derivatives. Risk decomposition shows the impact of the underlying economic risk factors on the total risk; and the diversification effect between the different risk factors. An example is provided in Figure 5, where risk is defined as the average funding ratio minus the 95% Surplus at Risk.

Figure 4: Interest Rate Sensitivity of funding ratio

AENORM

vol. 20 (75)

May 2012

15


Economics

Future risk-return perspectives. Use ex ante analysis to monitor strategic risk limits. For example, an ex ante analysis of possible future developments of the funding ratio as in Figure 6. 4.

Economics

Figure 6: Future development of nominal funding ratio as of 30 June 2010. Orange rectangle: underfunded (<105%), dark blue rectangle: insufficient risk buffer (<120%)

Analyze consequences of stress scenarios such as hyperinflation, deflation or stagflation. We consider it of crucial importance that the consequences of policy changes triggered by stress scenarios are not only, as usual, evaluated on their short term consequences, but that the long term consequences are taken into account in the decision making as well. 5.

“no long voyage should be navigated fully on autopilot, and one must therefore always keep a close watch on the risk monitoring reports” Conclusions The fundamental choices pension funds will have to make regarding their investment policy are: What are the risk limits of all the stakeholders, both in equilibrium, and on the short term, and which ALM strategy, including the strategic asset allocation satisfies these risk limits in the most efficient way? •

How do we deviate from the strategy from a risk management point of view? •

How do we deviate from the strategy given our view on the financial markets? •

What is the content of the monitor that optimally serves the dynamics of strategic risk management? •

16

What is the governance of strategic risk management?

AENORM

vol. 20 (75)

May 2012

The key message is clear: lay down an ambition that is feasible under the risk appetite of the pension fund and which is accompanied by a suitable investment strategy. But no long voyage should be navigated fully on autopilot, and one must therefore always keep a close watch on the risk monitoring reports, so that, if the situation truly demands it, one has the ability and good sense to redirect its strategic course.


Econometrics

IBIS UvA: combining theory and practice in the area of quality and efficiency improvement by: Marit Schoonhoven The institute for Business and Industrial Statistics of the University of Amsterdam (IBIS UvA) is an independent consultancy firm. The institute sees the interaction between scientific research, on the one hand, and the application of technology in business and healthcare, on the other, as its core. Research focuses on statistical methodology for quality and efficiency improvement, while consultancy focuses on support during the implementation of Lean Six Sigma programs at companies and healthcare organizations. All IBIS UvA staff have a background in econometrics or mathematics and combine research with consultancy activities. The purpose of this article is to elucidate Lean Six Sigma and explain the author’s PhD research area, namely control charting.

Lean Six Sigma Lean Six Sigma provides a framework, including roles and responsibilities, that allows organizations to strive for continuous improvement. Roles are defined for higher management, program management, project sponsors, project leaders and team members, amongst others. Improvement projects are led by people from the line organization, who are known as Green Belts and Black Belts. To facilitate project execution, Lean Six Sigma offers a stepwise procedure consisting of successive stages during which a problem situation is smartly defined, measured, analyzed, improved and controlled. In each stage, tools are provided in order to carry out the given step effectively. One important principle of Lean Six Sigma is that project execution should make use of facts and data, so that the organization’s most important problems are selected and that solutions to those problems are effective. The Lean Six Sigma methodology can be applied to all kinds of process improvements, examples being an expansion of the mortgage client base at a bank or a reduction in the cost of operating the children’s ward of a hospital. Lean Six Sigma is not new: it embodies principles and techniques

- including managerial ideas as well as statistical tools and methods - that have proven themselves in the past few decades. The Lean Six Sigma program has shown its worth in that it has saved significant sums of money at numerous organizations. At the same time, by focusing on the added value for the client, it has often raised customer satisfaction and thereby increased sales. Since the program was introduced in the 1980s, many companies have adopted its methodology. And although Lean Six Sigma has its roots in manufacturing, since 2000, it has also been embraced by companies in the services industry such as banks and insurers. Hospitals have subsequently joined and, currently, we see that governmental bodies are taking steps to implement the program. IBIS UvA supports companies with their implementation of Lean Six Sigma by providing consultancy to higher management during the selection process and by reporting the results of all Black Belt and Green Belt projects carried out during the training period. In addition, a considerable portion of time is spent on training Black and Green Belts and assisting them with their process improvement projects.

Marit Schoonhoven Marit Schoonhoven joined IBIS UvA in September 2007 after obtaining an M.Sc. in Business Mathematics and Informatics from the Vrije Universiteit Amsterdam in 2004 and spending two years in the Corporate Credit Risk Management Department at ING, where she was responsible for the development of various credit risk models. At IBIS UvA, Marit organizes and provides Lean Six Sigma Green Belt and Black Belt courses and coaches participants during the quality improvement project they are required to complete as part of their training. In addition to delivering Lean Six Sigma training, Marit conducts research at the University of Amsterdam. In 2011, she successfully defended her Ph.D. thesis entitled Estimation Methods for Statistical Process Control. Her email address is m.schoonhoven@uva.nl

AENORM

vol. 20 (75)

May 2012

17


Econometrics

Control charts monitoring

for

improved

process

This section describes one of the lines of research pursued by IBIS UvA, namely control charting. It was the subject of the author’s Ph.D. Processes are subject to variation. Whether or not a given process is functioning normally can be evaluated with control charts. Such charts show whether the variation is entirely due to common causes or whether some of the variation is due to special causes. Variation due to common causes is inevitable: it is generated by the design and standard operations of the process. When the process variation is due to common causes only, the process is said to be in statistical control. In this case, the process fluctuates within a predictable bandwidth. Special causes of process variation may consist of such factors as extraordinary events, unexpected incidents, or a new supplier for incoming material. For optimal process performance, such special causes should be detected as soon as possible and prevented from occurring again. Control charts are used to signal the occurrence of a special cause. The power of the control chart lies partly in its simplicity: it consists of a graph of a process characteristic plotted through time. The control limits in the graph provide easy checks on the stability of the process (i.e. no special causes present). The concept of control charts originates with Shewhart (1931) and has been extensively discussed and extended in numerous textbooks (see e.g. Duncan (1986), Does et al. (1999) and Montgomery (2009)). In the standard situation, 20-30 samples of about five units are taken initially to construct a control chart. When a process characteristic is a numerical variable, it is standard practice to control both the mean value of the characteristic and its spread. The control limits of the statistic of interest are calculated as the average of the sample mean or standard deviation plus or minus a

multiplier times the standard deviation of the statistic. The spread parameter of the process is controlled first, followed by the location parameter. An example of such a combined standard deviation and location chart is given in Figure 1. The general set-up of a Shewhart control chart for the dispersion parameter is as follows. Let Yij ,i = 1, 2, 3, ... and j = 1, 2, ..., n, denote samples of size n taken in sequence of the process variable to be monitored. We assume the Yij’s to be independent and ( ( ) ) distributed, where λ is a constant. When λ = 1, the standard deviation of the process is in control; otherwise the standard deviation has changed. Let ̂ be an estimate of λσ based on the i-th sample Yij , j = 1, 2, ...,n. Usually, λσ is estimated by the sample standard deviation S. When the in-control σ is known, the process standard deviation can be monitored by plotting ̂ on a standard deviation control chart with respective upper and lower control limits

When ̂ falls within the control limits, the spread is deemed to be in control. For the location control chart, the Yij’s, i = 1, 2, 3, ... and j = 1, 2, ..., n, again denote samples of the process variable to be monitored. In this case, we assume the Yij’s to be independent and distributed, where is a constant. When = 0, the mean of the process is in control; otherwise the process mean has changed. Let

̅

be

an

estimate

38

16 14

Upper control limit

12 10

of

Upper control limit

36

S

34

8

32

6 4

Lower control limit

30

2

18

̂

40

18

0

(1)

where Un and Ln are factors such that for a chosen type I error probability α we have

Figure 1. Standard deviation and location control charts 20

Econometrics

Lower control limit 2

4

AENORM

6

8

10 12 Sample number

vol. 20 (75)

14

16

May 2012

18

20

28

2

4

6

8

10 12 Sample number

14

16

18

20


Econometrics

based on the i-th sample Yij , j = 1, 2, ..., n. When the in-control and σ are known, the process mean can be monitored by plotting ̅ on a location control chart with respective upper and lower control limits

(2)

where Cn is the factor such that for a chosen type I error probability α we have

̅

When ̅ falls within the control limits, the location of the process is deemed to be in control. The performance of the spread control chart is evaluated in the same way as that of the location control chart. We define Ei as the event that ̂ ( ̅ ) falls beyond the control limits, P(Ei) as the probability that ̂ ̅ falls beyond the limits and RL as the run length, i.e. the number of samples drawn until the first ̂ ( ̅ ) falls beyond the limits. When is known, the events Ei are independent, and therefore RL is geometrically distributed with parameter p = P(Ei) = α . It follows that the average run length (ARL) is given by 1/p and that the standard deviation of the run length (SDRL) is given by . √ In practice, the in-control process parameters are usually unknown. Therefore, they must be estimated from k samples of size n taken when the process is assumed to be in control. This stage in the control charting process is called Phase I (cf. Woodall and Montgomery (1999) and Vining (2009)). The monitoring stage is denoted by Phase II. The samples used to estimate the process parameters are denoted by Xij , i = 1, 2, ..., k and j = 1, 2, ..., n. Define ̂ and ̂ as the unbiased estimates of σ and respectively, based on the Xij. The control limits are estimated by

̂

̂ ̂

̂

(3)

for the standard deviation control chart and

̂

̂

̂ ⁄√

̂

̂

̂ √

(4)

for the location control chart. Note that Un, Ln and Cn in (3) and (4) are not necessarily the same as in (1) and (2) and might be different even when the probability of signaling is the same. Below, we describe how we evaluate the standard deviation control chart with estimated parameters. The location chart with estimated parameters is evaluated in the same way. Let Fi denote the event that ̂ is above ̂ or below ̂ We define ̂ as the probability that sample i generates a signal given ̂ , i.e.

( | ̂)

̂

̂ | ̂)

̂

(5)

Given ̂ , the distribution of the run length is geometric with parameter ̂ . Consequently, the conditional ARL is given by

(

| ̂)

( |̂

(6)

In contrast with the conditional RL distribution, the unconditional RL distribution takes into account the random variability introduced into the charting procedure through parameter estimation. It can be obtained by averaging the conditional RL distribution over all possible values of the parameter estimates. The unconditional p is

( ( | ̂))

(7)

the unconditional average run length is

) | ̂)

(8)

and the unconditional standard deviation of the run length is determined by

√ (

|̂ )

|̂ )

(

(9)

Quesenberry (1993) showed that for the ̅ and X control charts the unconditional ARL is higher than in the -known case. Furthermore, a higher in-control ARL is not necessarily better because the RL distribution will reflect an increased number of short RL’s as well as an increased number of long RL’s. He concluded that, if limits are to behave like known limits, the number of samples (k) in Phase I should be at least 400/(n-1) for ̅ control charts and 300 for X control charts. Chen (1998) studied the unconditional RL distribution of the standard deviation control chart under normality. He showed that if the shift in the standard deviation in Phase II is large, the impact of parameter estimation is small. In order to achieve a performance comparable with known limits, he recommended taking at least 30 samples of size 5 and updating the limits when more samples become available. For permanent limits, at least 75 samples of size 5 should be used. Thus, the situation is somewhat better than for

AENORM

vol. 20 (75)

May 2012

19


Econometrics

20

Econometrics

the X control chart with both process mean and standard deviation estimated. Jensen et al. (2006) conducted a literature survey of the effects of parameter estimation on control chart properties and identified the following issue for future research:

References

“The effect of using robust or other alternative estimators has not been studied thoroughly. Most evaluations of performance have considered standard estimators based on the sample mean and the standard deviation and have used the same estimators for both Phase I and Phase II. However, in Phase I applications it seems more appropriate to use an estimator that will be robust to outliers, step changes and other data anomalies. Examples of robust estimation methods in Phase I control charts include Rocke (1989), Rocke (1992), Tatum (1997), Vargas (2003) and Davis and Adams (2005). The effect of using these robust estimators on Phase II performance is not clear, but it is likely to be inferior to the use of standard estimates because robust estimators are generally not as efficient” (Jensen et al. 2006, p. 360).

Davis, C.M. and Adams, B.M. “Robust Monitoring of Contaminated Data.” Journal of Quality Technology 37 (2005): 163-174

This recommendation is the main subject of the PhD thesis. In particular, we studied alternative estimators in Phase I and the impact of these estimators on the performance characteristics of the Phase II control chart. The study assessed the performance of numerous estimators and their impact on Phase II control chart performance under conditions of normality as well as data contamination based on practical data examples as well as an extensive simulation study. Most of the estimators seemed to be robust against one type of data disturbance but not against more than one type. Therefore, we propose algorithms to estimate the process parameters. The algorithms are robust against several types of disturbances that could be faced in practice. As a consequence, the resulting control charts are more powerful, i.e. will detect changes in the process in an earlier stage. The material presented in the PhD thesis has led to four papers. Two articles on the standard deviation control chart have been published in the Journal of Quality Technology (Schoonhoven et al. (2011b)) and Technometrics (Schoonhoven and Does (2012a)). One article on the location control chart has been published in the Journal of Quality Technology (Schoonhoven et al. (2011a)) and another paper is under review for publication in Technometrics (Schoonhoven and Does (2012b)). The entire thesis can be found on http://dare.uva.nl/ record/398181.

Montgomery, D.C. “Intorduction to Statistical Quality Control” 6th Edition. New York, NY: Wiley

AENORM

vol. 20 (75)

May 2012

Chen, G. “The Run Length Distributions of the R, S and s2 Control Charts when σ Is Estimated.” Canadian Journal of Statistics 26 (1998): 311-322

Does, R.J.M.M; Roes, K.C.B.; and Trip, A. “Statistical Process Control in Industry” Dordrecht, The Netherlands: Kluwer Duncan, A.J. “Quality Control and Industrial Statistics”, 5th Edition. Homewood, IL: R.D. Irwin Inc. Jensen, W.A.; Jones-Farmer, L.A.; Champ, C.W.; and Woodall, W.H. “Effects of Parameter Estimation on Control Chart Properties: A Literature Review” Journal of Quality Technology 38 (2006): 349-364

Quesenberry, C.P. “The Effect of Sample Size on Estimated Limits for ̅ and X Control Charts” Journal of Quality Technology 25 (1993): 237-247 Rocke, D.M. “Robust Control Charts” Technometrics 31 (1989): 173-184 Rocke, D.M. “XQ and RQ Charts: Robust Control Charts” The Statistician 41 (1992): 97-104 Schoonhoven, M. and Does, R.J.M.M. “A Robust Standard Deviation Control Chart” Technometrics 54 (2012a): 73-82 Schoonhoven, M. and Does, R.J.M.M. “A Robust X Control Chart” Technometrics (Under review) (2012b) Schoonhoven, M.; Nazir, H.Z.; Riaz, M.; and Does, R.J.M.M. “Robust Location Estimators for the X Control Chart” Journal of Quality Technology 44 (2011a): 363-379 Schoonhoven, M.; Riaz, M.; and Does, R.J.M.M. “Design and Analysis of Control Charts for Standard Deviation with Estimated Parameters” Journal of Quality Technology 44 (2011b): 307-333


Econometrics

Shewhart, W.A. “Economic Control of Quality Manufactured. Product.” (1931) Princeton, NJ: Van Nostrand Tatum, L.G. “Robust Estimation of the Process Standard Deviation for Control Chards” Technometrics 39 (1997): 127-141 Vargas, J.A. “Robust Estimation in Multivariate Control Chards for Individual Observations” Journal of Quality Technology 35 (2003): 367-376 Vining, G. “Technical Advice: Phase I and Phase II Control Chards” Quality Engeneering 22 (2009): 478479 Woodall, W.H. and Montgomery, D.C. “Research Issues and Ideas in Statistical Process Control” Journal of Quality Technology 31 (1999): 376-386

AENORM

vol. 20 (75)

May 2012

21


Actuarial Science

Lee Carter model; Modeling and Forecasting U.S. Mortality Summarized by: Linda de Koter

Hereby you find the first in a new sequence of articles in which a historical paper is being evaluated. First you get an extensive summary of the selected article, and after that a professor gives his (critical) view on the subject. This time the subject of the article is the for actuaries well known Lee Carter model, written by Ronald Lee and Lawrence Carter. A model that is really important for the determination of stochastic survival rates nowadays. The commenting professor is Michel Vellekoop.

Reason to write

Life expectancy in the United States rose from 47 to 75 years between 1900 and 1988. With the same linear trend you end up with an expected age of 100 in 2065, while the Office of the Actuary only plans on a more modest expectancy of 80,5 years. This is one of the many reasons why it is extremely important to better understand the trend in mortality rates. It would be a nasty surprise for pension paying institutes when the linear trend does continue. That’s why scientists have done a lot of research on behalf of stochastic, instead of deterministic , survival rates. Lee and Carter proposed an all new method that is quite easy to use compared to other methods at that moment. Many methods at that time took a lot of factors into account, like medical, behavioral, or social influences. Where this model only uses time series methods on accrued data, and it is just extrapolative. Lee and Carter where skeptical about the usage of an upper limit to the human life span, therefore their method includes age-specific death rates that decline exponentially without limit.

Analysis of data

The data available in this research is far from perfect. The right data is only available for the years 1933 to 1987. For earlier years the data is cruder. If you analyze it, you find that changes in mortality rates are not the same for all ages, the extremes differ by a factor of 22. Yet the “Low” drop for the older ages is still 42%. There is one particular time span that is not taken into account, namely 1918, because the associated influenza epidemic influenced the survival rates too much.

22

AENORM

vol. 20 (75)

May 2012

Method

There are several models that can describe the mortality as a single parameter. The model that they going to fit is ln[m(x, t)] = ax + bx kt + εx,t where m(x,t) is the central death rate for age x in year t. Here a and b are age-specific constants and k is a time varying constant. The error term εx,t with mean 0 and variance σε2 captures historical patterns that cannot be captured by the model. This error term shouldn’t be too big. To give you more feeling of the meaning of the factors, eax is the general shape across age of the mortality schedule and -bx shows the sensitivity to the time related factor kt. Another nice feature of this model is that, even if kt goes to minus infinity, the agespecific rate goes to zero. In this model there is no negative mortality possible.

Pro’s

The Lee-Carter model has some good characteristics. The method is meant to define a set of central death rates that can be used to derive a life table, but a nice thing of it is that it can be reversed to estimate the population out of which the death rates were observed. Furthermore this method really differs from the usual methods on that date, where age-specific rates are estimated independently. First, you don’t need to calculate as much as n(n-1)/2 covariances of errors, with n the number of age groups. Secondly you don’t need several ARIMA processes for the forecast, which would lead to a lot of parameters. Another good characteristic is that the last year for which the total deaths are available can always be used as base year for the forecast. This is especially useful when there is a lag between the publication of total deaths and age specific death rates.


Actuarial Science

Figuur 1: Deathrates at age 20

Con’s

There are complications with this model, because answers are not unique. Suppose that vectors a, b and k are a solution, then for any scalar c, a-bc, b, k+c must also be an solution. And in that same way, a, bc, k/c is also a solution. Therefore the sum of -bx is normalized to unity and the kt is normalized to sum to zero. Then the singular value decomposition method can be used to fit the model. Another downside is that the model doesn’t give a good fit on the older ages (above 85 years), because there is just not enough data. It is shown by other researchers, Coal and Kisker (1990), that, in contrast to the assumptions of the Gompertz curve, in populations with enough data for older ages mortality increases with a linear decreasing rate. They choose a method developed by Coale and Guo to extend the lifetable to higher ages. This results in negligible change in calculated life expectancy in the sample period, however, the life expectancy in the forecast period does change with a substantial 0,7 years.

Results

When the model is fitted, there is a striking result, time related factor kt is declining roughly linearly from 1900 – 1989 on American data. This is a great result, as the change of life expectancy is definitely not linear. Next to that, the variance of kt is roughly constant. These are really good features for the forecasting. Shown in figure 1, you will find the fitted data compared to the actual death rates for 1950 – 2008 for a 25 year olds in the respectivetly years. Figure 2 shows the same fit for 60 year olds. If we look at all the graphics for all ages we can see how much the model explains. If we ignore the 85+ age group, then the model explains 97,5% of the variance over time. The authors place a remark about how great this result is. The only thing that you need for a forecast, is a forecast of the time related factors kt. Lee and Carter found that a well describing forecast of kt can be obtained by a random walk with drift. It is noted that it is extremely important to choose the length of the period used to forecast kt with care, as it can for example give 30% difference in the predicted kt if you differ from starting point between 1930 and 1980, and 45% if you pick choose between 1970 and 1980.

Figuur 2: Deathrates at age 65

The confidence bands on life expectancy are quite narrow, the authors pose 4 reasons for this. One reason is that kt is very close to the predicted linear trend during the period. Secondly, the entropy of the life table is decreasing, which leads to lower responses to errors in age-specific death rates and forecasting kt. A third reason is the decision to remove (to censor) the influence of the 1918 influenza epidemic. The last reason is that different errors in mortality rates tend to offset each other in the life expectancy calculations.

Other

A nice feature of this article is that the authors dedicate almost half a page to the influence of aids, only to conclude that probably there will be some drug against it someday. A conclusion of this article is that the mortality of the US is probably greatly underestimated by the Social Security Forecast And that is all about the famous Lee-Carter model. After reading this, it is time to hear the critical remarks of our professor Michel Vellekoop.

Interview

When I entered his room to interview Michel, he started to talk with passion about the model, of which you have read the summary above. During the interview I heard one phrase repeatedly, namely “The Lee-Carter model is a remarkably useful model to start with, but it is not perfect”.

A remarkably useful model…

Of course that sentence made me ask why Michel thinks the model is such a useful one. The answer to that was quite broad. First of all, the results can be estimated in at least three different ways and the results of those three estimates are really robust. Next to that, the model is not rocket science, it’s easy accessible for people who are used to the lifetables from for instance the AG-AI. It can even be implemented in Excel. We shouldn’t forget that stochastic life expectancies

AENORM

vol. 20 (75)

May 2012

23


Actuarial Science

is something of the last 20 years, and in practice it has only been used for less than 10 years. But most importantly, the model is used as the basis for almost every other model developed after the publication of this model. On my question of why the model is not introduced in the bachelor, Michel tells me it’s more appropriate for Master students, who have the required background concerning the statistical methods that are needed.

… yet not perfect

This model is not diagnostic, it is based on plain time series theory. That is different than for instance the model proposed by the CBS, which keeps records per cause of death. Yet overall the Lee-Carter model fits the data reasonably well. It is very well possible to improve the model. If for instance you add other time-dependent factors, the model fits a lot better. And the goode thing is, that the age-dependent factors are really different for males and females, perhaps showing a difference between the sexes. This leads to the next question: “if it is possible to show that there is a strong difference between sexes, is it appropriate to introduce unisex tables, as ordered recently by the European Courts?”. Michel answers that this is a tricky thing. The introduction of unisex life tables won’t necessarily lead to extra solidarity between males and females, since unisex tables make some products more expensive for males, and some others more expensive for females. Yet it leads to more insecurity, since in your pricing methods you don’t take into account whether your participant is male or female. And uncertainty costs you money, so by introducing the unisex tables you move to a more expensive product.

Back to the model

There are some remarks you can make about the model. Kappa t seems to be linear and with the ARIMA(0,1,0)

model with drift it is estimated that way. Yet in the last few years Kappa seems to take a different route, more second-order if you like. Next to that there are certain interesting age groups, the group of 10-25 year olds, where the mortality is higher than for surrounding age groups. This socalled “accident hump” may be due to youngsters that are taking a lot of risk because they don’t see the consequences for full yet. It is good to see that this hump gets smaller and smaller over the years; this may be the pay-off of safety campaigns like BOB (driving?, no drinking!). Michel tells me that he recently did a Gompertz fit on the Dutch mortality data and that the fit gets better and better, due to the diminishing accident hump. Another interesting group is the ages 60-85, where we see an almost linear curve for beta, yet there is a decline for ages above 85. Which is logical as the mortality improvement goes way quicker among the first group than the second. Extensions of the Lee-Carter model often have some difficulty in fitting older and younger age with the same kappa process.

A 1000 years old human?

Next I asked how much these models are worth, when you hear people saying that the first man who lives till 1000 years is already born. Michel says he can’t answer that as he is no expert on the medical aspects. Yet he is intrigued by the idea. He refers to prof. Johan Mackenbach of the Erasmus University who wrote a book about our healthier life style and mortality improvements. Yet the same Mackenbach shows that we do get more years to life, but that our average quality of life does not necessarily increase accordingly. Concluding we may state that the stochastic mortality model of Lee and Carter is a really good model, but only a starting point. It is a valuable tool in the process of estimating and forecasting mortality and longevity and their consequences for pension funds and insurance companies.

Michel Vellekoop Michel Vellekoop is full professor in Actuarial Sciences at the Department of Quantitative Economics at the University of Amsterdam. He studied Applied Mathematics at the University of Twente and obtained his PhD. degree in 1998 at Imperial College in London for research on nonlinear filtering problems for stochastic processes. Since then he has focused on applications in finance and insurance, both as assistant and associate professor at the University of Twente and as director of research for the Derivatives Technology Foundation. His main research interests are valuation and risk management problems for contingent claims in complete as well as incomplete markets. Since 2009 he has been theme coordinator for the Netspar research theme “Reconciling short term risks and long term goals for retirement provisions”.

24

AENORM

vol. 20 (75)

May 2012


Actuarial Science

References

Lee R.D. and Carter L. R.(1992), “Modeling and Forecasting U.S. Mortality”, Journal of the American Statistical Association, 419, pp 659-671. Coale, A., and Kisker, E. E. (1990), “Defects in Data on Old Age Mortality in the United States: New Procedures for Calculating Approximately Accurate Mortality Schedules and Life Tables at the Highest Ages”, Asian and Pacific Population Forum, 4, 1-31

AENORM

vol. 20 (75)

May 2012

25


Econometrics

Econometrics

Sensitivity Analysis of Quantiles by: Warren Volk-Makarewicz and Bernd Heidergott Quantiles play an important role in modeling quality of service in the service industry and in modeling risk in the financial industry. While estimating/computing quantiles is already a challenging task we argue in this paper that a sensitivity analysis of quantiles should be performed as well. The paper explains how, using quite natural arguments, a single-run sensitivity estimator for quantiles can be established. Numerical examples taken from option pricing theory will illustrate the estimator.

Introduction

Consider an analyst who has built a financial model that will value a certain financial product (the dependent variable), say, an option given some assumption on the market behavior, such as, for example, the volatility (an independent variable). To better understand the model, the analyst then wants to determine how sensitive the predicted value of the option is with respect to uncertainty of the independent variables. Mathematically speaking, the derivative of the value of the financial product with respect to the volatility is sought for. Sensitivities provide great insight into a model. To see this, suppose that the outcome of a model is highly sensitive with respect to a parameter used in calibrating the model. If now the actual value of the parameter is obtained by statistical analysis and thus only available with some uncertainty, then the outcome of the model can be judged as not reliable (and drawing conclusion from the model could lead to wrong decisions). If, on the other hand, the outcome of a model is insensitive with respect to a parameter used in calibrating the model, then this indicates the parameter as superfluous and the model may be simplified accordingly. In addition, information on sensitivities can also be applied in optimization. In this paper we address the problem of computing sensitivities of quantiles. We set off with providing a formal definition of a quantile. Let Z have distribution function F. The quanitle of Z (resp. F) at a level α (0, 1), denoted by , is defined as the largest value y such that the probability of obtaining a value is less than or equal to α:

qα = sup{y : F (y) ≤ α}.

(1)

measures are common in modeling quality of service (QoS). Indeed, in the call center industry, QoS is typically measured by the fraction of services meeting a predefined service level, which can be expressed in terms of the fraction of customers that could be helped within a pre-specified time (e.g. 90 % of customers are helped within 10 minutes). In public transportation networks, QoS is measured by the achieved punctuality (e.g. 95 % of trains are no more delayed than 2 minutes). In risk analysis, value at risk and conditional value at risk are defined through quantiles. As a final example, note that the 6-σ quality control approach in business management is another example, here it is the goal to guarantee that 99.9996% of produced parts are with a pre-specified range of boundary values.

Quantile Sensitivity Analysis

Let Z = (Zi :1 ≤ i ≤ n) be a sequence of independent and identically distributed copies of Z. For l = 1, . . . , n, the order statistic Zl:n is the lth smallest random variable from the collection Z. The order-statistic vector is given by (3) Note that the order statistic is well-defined with probability one as Z is assumed to be continuous. The order statistic Zl:n is the standard statistical estimator for the quantile. The relationship between quantiles and order statistics was first determined by R. Bahadur1 for i.i.d. random variables by showing that

(1)

Bernd Heidergott

Throughout this paper we will assume that F is continuous, so that Z possess a density function (pdf), denoted by f, and the quantile can be written more simply since F is now a bijection:

qα sup {y : F (y) = α} = F −1 (α) (2) Quantiles and quantile related performance

26

AENORM

vol. 20 (75)

May 2012

(1) (1)

(1)

At the Vrije Universiteit Amsterdam Bernd Heidergott is mainly involved in teaching mathematics and statistics for economists. In addition, he is teaching a course on Convex Analysis and Optimization for econometricians. His main current research directions are Gradient Estimation, Differentiation theory, Taylor series expansions and Maxplus algebra. Bernd Heidergott also has received the Best Lecturer Award of the faculty of Economics and Business Administration of the VU for the academic year 2008/2009.


Econometrics

limx→∞ Z αn :n = qα

(4)

with probability one. The requirement of independence in the result has been weakened progressively, for insttance, by Sen2, for m-dependent random variables3; and, like P. Sen in 19724, for ϕ-mixing random variables in which the random variables are asymptotically independent. The greatest technical weakening was introduced in the paper of C. Hesse (1990), where an almost sure result was proved for particular classes of linear stationary processes. Apart from (4) we will use the following result from the theory of spacings of order statistics of i.i.d. data: ⌈ ⌉ ⌈ ⌉

(5)

as n 1, where denotes convergence in distribution → and is an Exponential random variable with mean one. For the present analysis we assume that the distribution of Z depends on some controllable distributional parameter θ, where we assume for sake of simplicity that

θ ∈ Θ = (a, b) ⊂ R.

(1)

We express the dependency of the distribution of Z on θ through writing Fθ for the c.d.f. and fθ for the density function. Since Z is a continuous random variable, Fθ−1 (x) is differentiable w.r.t. to its argument (1)x, and if Fθ is differentiable w.r.t θ, so is Fθ−1 (y). By (1)definition, see (2),

α = Fθ (qθ (θ))

and we obtain an expression for the quantile sensitivity by differentiating the previous w.r.t θ

0 = ∂θ Fθ (qα (θ)) + fθ (qα (θ))∂θ qα (θ), or

∂θ qα (θ) = −

∂θ Fθ (qα (θ)) (6) fθ (qα (θ)) ∂θ

∂ where ∂θ is a typographic shorthand for which (1) we ∂θ ∂θ (1) will frequently use. If no confusion occurs will also write ∂ ∂ h. Combing the order statistic limit in (4)(2) h′ for ∂θ (2)with ∂θ

the limit of spacings in (5) suggests the following result (7):(1)

(

)

(

)

Observe that proving (7) has two main aspects: the statistical and the distributional differentiation aspect. For the statistical analysis one seeks sufficient conditions for the above limit to hold, which already yields an asymp (θ). Provided that (7) totically unbiased estimator for qα holds, taking averages over i.i.d. realizations of

−m(Z αm :m − Z αm −1:m )Fθ (Z αm :m ),

(8)

(1)

then yields a strongly consistent estimator for qα (θ). As we will comment later on, confidence intervals for qα (θ) can be established as well. The other aspect (1) in (7) (resp. (8)) is that on how to deal with the distributional derivative Fθ (Z αm :m ). In other words, efficient simu- (1) lation of Fθ (qα ) has to be addressed. If Fθ is available (1) in a closed-form analytical expression, then we could of course compute Fθ and apply the estimator directly. On (1) the other hand, in this case quantile sensitivity analysis is pointless as the qα (θ) can be computed as well. As(1) (1)sake of argument that simulation sume, however, for the (1) is applied for evaluating Fθ (qα ). In this paper we will apply measure differentiation for operationalizing of to estimation. In particular, if Fθ(1) exists Fθ (Z αm :m ) (1) then it can under rather weak conditions be written as Fθ = c(Fθ+ − Fθ− ), with cθ a constant and Fθ± distri(1) bution (1)functions6. Inserting the above difference expression for Fθ into (7), we arrive at the estimator (9): (1)

−mcθ (Z αm :m − Z αm −1:m )(Fθ+ (Z αm :m ) − Fθ− (Z αm :m )). (1) that the above estimator is a single-run estimator Note as no additional simulations apart of sampling the order statistics is required. Now suppose that Fθ is not known or computation(1)intractable. In most applications, Z can be written as ally

(1) Z = h(X) , (2)

(10)

with h a measurable mapping and X a simpler random variable with c.d.f. Gθ . Provided that h is invertible (i.e. h is continuous and strictly monotone), it holds that

(1) (1) (1)

1 1 R. Bahadur: “A Note on Quantiles in Large Samples“ The Annals of Mathematicel Statistics, 37 (3): 577-580 2 P. Sen: “Asumptotic Normailty of Sample Quantiles for m-Dependent Processes“ The Annals of Mathematical Statistics, 39 (5) 3 A sequence {Xn} of random variables are called m-dependent if Xn and Xn+k are independent for any n provided that k>= m 4 P. Sen: “On the Bahadur representation of Sample Quantiles for Sewuences of phi-Mixing Random Variables“ Journal of Multivariate Analysis, 2 (1)(1972): 77-95 5 C.H. Hesse: “A Bahadur-Type Representation for Empircal Quantiles of a Large Class of Stationary, Possibly Infinite, Linear 1 Processes“ European Journal of Operations Research, 187 (1990): 1188-1202 6 The Theoretical foundation of this fact and the study of the implications thereof on sensitivity analysis and optimizationis is research done mainly at the Vrije Universiteit, see, for example, [6, 3, 9]. 1

1 1 AENORM

vol. 20 (75)

May 2012

27

(


Econometrics

Fθ (z) = Gθ (h−1 (z)). Differentiating with respect (1) to

(

θ yields

Fθ (z) = G θ (h−1 (z)) or Fθ (z) = Gθ (h−1 (z)). (1) for appropriate c.d.f.

G± θ . This

yields the estimator (11):

−1 (G+ θ (h (Z αm :m ))

−1 G− θ (h (Z αm :m ))).

However, as already noted in 7, the inverse of h may not be available in closed form or the evaluation of it may be numerically infeasible. The estimator can be extended to this case, but this not covered in this article. Denote by Dm,k the sample average over k realization of one of the estimators in (8), (9), and (11). Let n denote the computational budget, i.e., n is total number of realizations of Z that can be used for estimating ∂θ qα (θ) . While taking m = n for the estimators yields the most accurate point estimation for ∂θ qα (θ), no statistical assessment on the quality of the estimator can be made. Therefore, one typically splits the overall budget into parts, namely, n = mk, where m is the number of realizations of Z assigned to the estimator and k is the number of independent replications of the estimator. Letting m and k tend to infinity simultaneously, the limit with respect to m yields that ∂θ qα (θ) will be approximated arbitrarily close and the limit with respect to k allows for constructing confidence intervals for ∂θ qα (θ). Sensitivity analysis of quantiles has been a long standing problem. The breakthrough papers by Hong8,9 were the first results on sensitivity analysis for quantiles. However, the key obstacle of Hong’s approach is the requirement that h(x) is (piece wise differentiable) with respect to x, and that h(X(θ)) is Lipschitz continuous in the above sense. The approach prosed in this paper, does require neither.

Statistical 1 Analysis

For the study of the statistical properties of our estimator we need a set of conditions on the smoothness of the density and derivative of the density of Z with respect to θ. We will 1 spare the reader the details1here and will directly state the properties. 1 The first statistical property of our estimator that it is asymptotically unbiased, i.e., it holds

1

(

)

(1)

as k,m→ ∞, which means that is a strongly consistent estimator. In (1)addition, α letting k1/2/m → 0 as k,m→ ∞, Dm,k follows a central (1) limit theorem, i.e., it holds that Then

α Dm,k (2)

limn→∞

α − q (θ) Dm,k α

α ))1/2 Varθ (Dm,k

d

→ N (0, 1)

(13)

May 2012

(1

(1)

k

k 1 1 α Sm,k = (1) { (dm (i))2 − ( dm (i))} k − 1 i=1 k i=1

(

as standard estimator of Varθ(dm(i), we can construct two-sided confidence interval for qα (θ) as follows. Let β denote the confidence level and denote let tβ,k denote the (1 - β/2) quantile of Student’s t-distribution with k degrees(1) of freedom. Then, it holds asymptotically that (14):

(1)

α qα (θ) ∈ (Dm,k −

tβ,k−1 α tβ,k−1 α α Sm,k , Dm,k + Sm,k ) k k

(

with probability of at least 1 - β.

Distributional Differentiation

Our estimator contains the derivative of a cumulative distribution function as argument. Surprisingly enough, these derivatives can be computed in many cases and enjoy nice properties. As we will work with the normal distribution for our examples, we will use this distribution function for illustration in the following. Let N (µ, σ 2 ) 1 denote the normal distribution with mean μ and standard 1 deviation σ and write N (µ, σ 2 )(x) for 1 the cumulative distribution function (c.d.f.) of N (µ, σ 2 ). The density of N (µ, σ 2 )(x) is given by (1)

1 vol. 20 (75)

(1

In applications, one requires to construct confidence intervals for the unknown quantile sensitivity and using

1

AENORM

(1)

7 J. Hong, G. Liu “Pathwise Estimation of probability sensitivities through terminating or terminating or steady-state simulation“ Operatioins Research, 58 (2) (2010): 357-370 1 Sensitivities of Conditional Value at Risk” Management Science, 55 (2) (2008): 281-293 8 J. Hong, G. Liu ”Simulating 1 9 J. Hong “Estimating Quantile Sensitivities” Operations Research 57 (1) (2009): 118-130

28

(1)

(1)

−mcθ (Z αm :m − Z αm −1:m )

)

Let dm(i), for 1 ≤ i (1) ≤ k , be a realization of one of the above estimators for qα (θ) based on a sample of Z (i.e., α m i.i.d. samples of Z), and denote by Dm,k the sample (1) average

−m(Z αm :m − Z αm −1:m )G θ (h−1 (z αm :m ))

or (12):

1

1

(1


Econometrics

(x−µ)2 1 fµ,σ (x) = √ e− 2σ2 , x ∈ R . σ 2π

Then N (µ, σ 2 )is differentiable with respect to μ and(1) σ; for details, see 10. Differentiating fμ,σ(x) with respect to θ yields (x−µ)2 ∂ 1 fµ,σ (x) = √ e− 2σ2 , x ∈ R. ∂µ σ 3 2π

Integrating the above derivative of the density out yields

∂ 1 N (µ, σ 2 )(x) = √ e 3 ∂µ σ 2π

(x−µ)2 − 2σ 2

, x ∈ R.

Alternatively, we can write

1 ∂ fµ,σ (x) = √ ∂µ σ 2π (

1 ∂ (1) 2 N (µ, σ )(x) = (ds - Maxwellµ,σ2 (x) − N (µ, σ 2 )(x)), ∂σ σ for x R , where ds - Maxwellμ,σ^2 (x) denotes the c.d.f. (1) of the double sided Maxwell distribution. Sampling from the double sided Maxwell distribution is discussed in 11. In terms of minimizing the variance of the estimator it is of interest (1) that the double sided Maxwell distribution and normal distribution can be coupled in a simple way. If X has c.d.f. ds - Maxwellμ,σ^2 (x), then UX, with U being uniformly distributed in [0, 1] and independent of everything else, has c.d.f. N (µ, σ 2 )(x). See, 13 for details. The normal distribution and the double-sided Maxwell distribution (1) can be computed by means of the error function. Fortunately, the expression for ∂N (µ, σ 2 )(x)/∂σ is the difference between the two c.d.f.’s and the corresponding error function terms cancel out. Specifically, it holds

(

∂(1) x − µ (x−µ)2 N (µ, σ 2 )(x) = − √ e− 2σ2 , x ∈ R ∂σ σ 2π

2 2 (µ − x) − (x−µ) (x − µ) − (x−µ) 2σ 2 1x≥µ − 2σ 2 1x≤µ ), x ∈ R e e σ2 σ2

(1

(2)

Suppose that μ and σ are mappings of a common parameter θ, i.e., μ = μ(θ) and σ = σ(θ). Provided that μ(θ) x2 − 2 Note that xe 2σ2 /σ for x ≥ 0 is the density of the (1) and σ(θ) are (1)differentiable with respect to θ, applying the Rayleigh distribution the c.d.f. of which is given by chain rule of differentiation yields for the derivative of N (µ, σ 2 ) with respect to θ (1) x2 Rσ (x) = 1 − e− 2σ2 , for x ≥ 0. Hence, (1) (1)

1 ∂ N (µ, σ 2 )(x) = √ ∂σ σ 2π

(1)

(Rσ (x − µ)1x≥µ − Rσ (|x − µ|)1x≤µ ),

(2)

with Rσ(x - μ) being the Rayleigh distribution shifted by μ, yielding a representation of the derivative of a cumulative distribution function as difference of two distribution functions as required for the estimator in (12). Note that a sample of a shifted Rayleigh-(μ + Rσ) (x - μ)- distributed random variable can be obtained from µ + σ −2ln(1 − U ), for U [0, 1], and that a sam(1) ple of a Rayleigh-μ + Rσ(|x - μ|)-distributed random vari able can be obtained from 1µ + σ −2ln(1 − U ), for U [0, 1]. We now turn to the derivative with respect to σ. It can be shown that

1

∂ N (µ, σ 2 )(x) = ∂σ d ∂ µ(θ) N (µ, σ 2 )(x)+ dθ ∂µ

( (16)

(

∂ d σ(θ) N (µ, σ 2 )(x) dθ ∂σ for x

(

R.

(1)

Applications

Our estimator can be applied in a quite general setting. For illustrative purposes we will however address in the following its application to the case of an option on a single stock. More(1) specifically, we consider the Black1 Scholes-Merton (BSM) model, 3, of a financial market. The BSM model is composed of one stock, having value S(t) at time t ≥ 0, that pays no dividends and a bond,(1)

1

1

10 B. Heidergott, F. Váquez-Abad, G. Pflug, T. Farenhorst-Yuan: “Gradient estimation for discrete-event systems by measurevalued differentiation“ Transactions on Modelling and Computer Simulation, Article 5, Vol. 20, issue 1 (2010) 11 B. Heidergott, F. Váquez-Abad W. Volk-Makarewicz: “Sensitivity estimation for Gaussian systems“ European Journal of Operations Research, 187 (2008): 193-207 1 12 L. Nielsen “Pricing and Hedging of Derivative Securities“ Oxford University Press, Oxford (1999)

1

AENORM

vol. 20 (75)

May 2012

1

29


Econometrics

having value β(t) = e-rt at time t ≥ 0, where r denotes the interest rate. The BSM model is complete which means that every contingent claim can be replicated. For hedging purposes the price of the stock is determined under the risk-neutral or equivalent martingale measure. For this model, this occurs when μ = r, and the value of the stock price becomes

S(t) = S(0)eµ−

σ2 t+σ 2

tX

with X being a standard normal random variable. Alternatively, let Xa,b denote a normal random variable with mean a = ln(S(0)eµ− = σ2, then

σ2 t+σ 2

tX

and variance b2

S(t) = S(0)eXa,b.

. The classical application of the BSM model is Option Pricing, and the classical financial option is the Vanilla call option in which the buyer of the option at maturity time T can purchase a stock at a specified price K. This only makes sense for the option holder if the share price, S(t), is above K and the value, discounted to t = 0, of the contingent amount the buyer receives is

H1 (S(T )) = e−rT max(S(T ) − K, 0) = h1 (Xa,b )

with

h1 (x) = e−rT max(ex − K, 0) (17)

(1) rT h−1 + K), x > 0 1 (x) = ln(K) + ln(xe

(1

. Inserting

the

expression

for

∂N (a, b2 )(x)/∂S(0) for G θ into the estimator (11) (1)

yields (18):

(1)

−1 (Z

(h 1 1 1 √ e m(Z αm ):m − Z( αm )−1:m ) S(0) b 2π

This yields a strongly consistent estimator for ∂qα(1) /∂S(0), where the order statistic is obtained (1) from Z which is sample of m i.i.d. copies of H1(S(T)) = h1(Xa,b). In our analysis, we have chosen a set of parameters for a stock with relatively low volatility, σ = 0.12, for a typical length contract of three months T = 0.25 in a calm macroeconomic environment r = 0.04. The above are on yearly quantities. With an exercise price of K = €10.5, we investigate the quality of estimators when we change our current price S(0), choosing three different values as observed in Table 1. We performed a series of computer simulations and the results are provided in Table 2. The confidence interval (CI) are 95% confidence intervals. We compare mean (1) and the confidence interval of our estimator to the actual value of the quantile sensitivity as well as the mean CPU

(1)

Suppose now that we are interested in the sensitivity of the α-quantile of H1(S(T)) with respect to the initial stock price S(0). In other words, we are interested in the ”α-quantile Delta.” In light of the representation H1(S(T)) = h1(Xa,b), the main task in applying our estimator, is to compute the derivative of the c.d.f. of Xa,b with respect to S(0). We apply (16) to θ = S(0), which yields (x−a)2 ∂ 1 1 √ e− 2b2 , N (a, b2 )(x) = ∂S(0) S(0) b 2π

for x R . Note that the inverse of h1(x) for h1(x) > 0(1) is given by

1

(1) Table 2: The results for the quantile sensitivity w.r.t the current price, S(0), for the Vanilla call option where the price is represented by the Black-Scholes model.

1

1

1 1 Table 1: The parameters for our comparison of quantile sensitivity approaches for the BSM vanilla call option

1 30

AENORM

vol. 20 (75)

1 May 2012

2

( αm :m )−m) 2b2

.


Econometrics

times. The fact that the sensitivity turns out be zero for S(0) = 9.5 and α = 0.9 stems from the fact that in this case S(0) < K and the actual value of the option is zero with probability larger than 0.9.

Conclusion

We have argued that sensitivity analysis is an essential part of validation of a mathematical model. For case of the important class of quantiles, we have shown how their sensitivities can be obtained in an simple and effective way

“Sensitivity analysis is an essential part of validation of a mathematical model” References

R. Bahadur: “A Note on Quantiles in Large Samples”. The Annals of Mathematical Statistics, 33 (3), (1966): 577-580 C.H. Hesse: “A Bahadur-type Representation for Empirical Quantiles of a Large class of Stationary, Possibly infinite, Linear Processes“, The Annals of Statistics, 18 (3), (1990): 1188-1202 B. Heidergott, F. Váquez-Abad, G. Pflug, T. FarenhorstYuan: “Gradient estimation for descrete-event systems by measure-valued differentiation” Transactions on Modelling and Computer Simulation Article No. 5, vol. 20, Issue 1, (2010)

B. Heidergott, F. Váquez-Abad, W. Volk-Makarewicz: “Sensitivity estimation for Gaussian systems” European Journal of Operations Research, 187, (2008): 193-207 J. Hong, G. Liu: “Pathwise Estimation of probability sensitivities through terminating or steady-state simulations“ Operations Research, 58(2), (2010):357370 B. Heidergott, A. Hordijk: “Taylor series expansions for stationary Markov chains” Advances in Applied Probability, 35, (2003): 1046-1070 J. Hong: “Estimating Quantile Sensitivities” Operations Research, 57 (1), (2009): 118-130 J. Hong, G. Liu: ”Simulating Sensitivities of Conditional Value at Risk” Management Science, 55(2), (2008): 281-293 B. Heidergott, H. Leahu: “Weak differentiability of product measure” Mathematics of Operations Research, 35, (2010): 27-51 L. Nielsen: “Pricing and Hedging of Derivative Securities” Oxford University Press, Oxford (1999) P. Sen: “Asymptotic Normality of Smaple Quantiles for m-Dependent Processes” The Annals of Mathematical Statistics, 39 (5), (1968) P. Sen: “On the Bahadur representation of Sample Quantiles for Sequences of ϕ-Mixing Random Variables”, Journal of Multivariate Analysis 2 (1), (1972): 77-95

AENORM

vol. 20 (75)

May 2012

31


Actuarial Science

Actuarial Science

One man’s breath... Longevity swaps: hedging longevity risk by: Maaike Schakenbos Pension funds run a variety of risks regarding their pension obligations - interest rate risks and inflation risks come to mind -, which can be hedged through interest rate swaps and inflation swaps, respectively. Longevity risk is another pension relating risk, and its costs increase rapidly when the age of the participants increases1.

The ‘AG Projection Table 2010-2060’ shows life expectancy to rise even quicker than expected. As pension funds will thus need to pay benefits for longer than they had anticipated, they run a risk: longevity risk – and it is difficult to estimate. This is why, in 2010, the ‘Goudswaard’-committee proposed to include the development of life expectancy in the pensionable age applied in pension plans2. Traditional solutions to cover the longevity risk include the mortality table with age corrections or mortality experience, a buy-out or buy-in. The mortality table ‘weapon’ consists of using higher life expectancy probabilities. We will briefly discuss buy-outs and buyins later in the next section. Longevity swaps, in various shapes and forms, may offer new solutions. This article is about the pros and cons of longevity swaps and our view on the future development. Various longevity swaps have already been closed in the United Kingdom, for example, between BMW and Deutsche Bank, whereas Deloitte UK has assisted Deutsche Bank in developing this swap. So far, no longevity swaps have been conducted in the Netherlands. One of the reasons for this is that the Netherlands has relati-

vely few closed-end funds without contribution inflow compared with the United Kingdom, and longevity risks mostly occur for these funds1. Deloitte and various other parties expect it will not be long before the first longevity swaps will be introduced in the Netherlands as well.

Buy-out and buy-in A pension fund can hedge the longevity risk by transferring its liabilities to an insurer. If it concerns a buy-out the insurer takes all risks, including the longevity risk, and it will pay the pension benefits for life. The pension fund pays a lump sum to that end, which it finances with its pension assets. After that, the pension fund will be wound up. In the event of a buy-in, also called reinsurance, the liabilities are only partially transferred to an insurer. This may concern the liabilities of the pension beneficiaries only or of the inactive participants.

Maaike Schakenbos Maaike Schakenbos (Business Analyst Deloitte Pension Advisory – Actuariaat): After her study mathematics, Maaike Schakenbos started in January 2011 as a Business Analyst / Junior Consultant with Deloitte Pension Advisory. During the past year she not only gathered actuarial knowledge and consulting skills, but also got the opportunity to study interesting topics, such as longevity swaps.

Figure 1: Indication of risk premium for hedging longevity risk3

1 D. Blake, 2010 2 K.P. Goudswaard, R.M.W.J. Beetsma, T.E. Nijman and P. Schnabel, 2010 3 G. Frijters and A. van Haastrecht, 2011/2

32

AENORM

vol. 20 (75)

May 2012


Actuarial Science

Figure 2: Longevity cash flow swap

Longevity swaps

reduced by an extra 1% in this case for the fixed leg. So far the cash flow swap has only been used for retired participants. As the uncertainty is too significant for active participants, high risk premiums are applied that make the swap expensive.

Longevity swap 1: cash flow swap

Figure 3: Longevity cash flow swap for a participant who is 65 years old on date 05.

Longevity swaps are a new method for pension funds to hedge against longevity risk. Contrary to the buy-out and the buy-in, no liabilities or investments are transferred with this swap. The longevity swap is based on the defined pension benefits of the participants at a certain date. The expected cash flows or the expected liabilities are calculated taking into account a number of assumptions, including mortality. Insurers or investment banks can guarantee these cash flows at a certain risk mark-up. Figure 1 shows that due to the larger uncertainty, the price of hedging the longevity risk increases when the duration of the underlying obligation increases. We will now describe two types of longevity swaps, which can be priced according to different methodologies: the cash flow swap and the value hedge swap.

If it concerns the cash flow swap4, see figure 2, the fund makes a series of fixed payments to a counter party (the fixed leg) during a predetermined fixed term for an agreed upon pool of participants. The counter party, in turn, pays the actual pension benefits (the floating leg) to the pension fund, which subsequently pays them to the retirees. Assume the cash flow swap is concluded for a single participant. Should this participant decease within the fixed term agreed upon with the counter party, the fund will have to continue its payments to the counter party until the fixed term expires. If the participant is still alive by the end of the term, the counter party pays the fund during the participant’s remaining life. Figure 3 shows an example of a 65-year-old participant with a pension benefit of EUR 100 per year and 2% fixed inflation rate. The fixed leg is negotiated and recorded beforehand, while the floating leg depends on the participant actually being alive and on the ensuing benefits. The latter may be higher or lower than the fixed leg.   The fixed leg does not squarely halve the range of the floating leg due to the risk premium applied by the counter party: the prudent expected mortality risks are

Longevity swap 2: value hedge swap

The value hedge swap6 could otherwise be described as “pension obligation insurance”. When closing the swap the obligation of a fixed pool of participants is estimated for a future date. Once the swap has expired, the actual obligation is established and if it exceeds the estimate upon concluding the obligation, the counter party will pay the difference to the fund. If lower, the fund pays the difference to the counter party. So, take the example of a 35 year-old man who participates in a pension plan. The obligation that the fund will need to have accrued for him in ten years time is estimated beforehand. When he turns 45 years of age, the actual obligation is calculated, after which one party pays the other party the difference. This swap applies for an agreed upon period. When both the fund and the counter party have fulfilled their commitments, the ‘insurance’ is cancelled. Due to the limited period and the related lower risk, these swaps are offered for active participants as well.

4 See note 1 5 See note 3 6 A.J.G. Cairns, K. Dowd, D. Blake and G.D. Coughlan, 2011

AENORM

vol. 20 (75)

May 2012

33


Actuarial Science

Actuarial Science

Figure 4: Exchange of risks7

Pricing

The swaps require a reference framework. Based on which commitments does one set the premiums? The index based swap is linked to a reference index that states the average life expectancy of, for example, the entire Dutch population or of a certain group of the population. The fund makes periodic payments to the counter party and it is either refunded variable payments (pension benefits based on the reference index) or a one-off difference, depending on the reference index at that moment. The indemnity swap is based on the actual pool of participants and, hence, provides an exact hedge. The pension fund pays premiums during the swap’s term and receives the actual pension benefits (cash flow swap), or the positive or negative difference between the estimated and the actual obligation (value hedge swap).

Viable longevity swaps need a counter party with an opposite interest to that of pension funds regarding longevity risk. In other words, as is shown in figure 4, a counter party is needed that profits from an increasing life expectancy. Life insurance companies insure, for example, the short-life risk and make a profit when the life expectancy rises. Living longer means they need to make less life insurance payments. This is contrary to pension funds, which need to pay pension benefits for a longer period if the life expectancy rises. Some UK industry pension funds have hedged the risk of longevity with investment banks and in turn the investment banks have hedged the risk with (re)insurance companies. 7 See note 3 8 M. Stoeckle, A. Loddo and D. Picone, 2008

AENORM

vol. 20 (75)

Hedging the longevity risk through longevity swaps has a lot of benefits: the company no longer has uncertainty about any additional future payments because of the higher life expectancy, while hedging significant risks creates more stability, which may create more trust and higher share prices. The drawback is the complexity of longevity swaps and the time it takes to implement them, so they are predominantly suited for major pension funds. Any later decision to sell or terminate the longevity swaps is likely to be either impossible or to involve very high bail out costs. After all, since the market for longevity swaps in the Netherlands still needs to be created, so does the marketability of longevity risk.

Price of a longevity swap

Counter party

34

Pros and cons

May 2012

Setting a price for longevity swaps is difficult: it is a new market and there are no empirical figures. By reducing the mortality risk in the mortality table which the pension fund applies, the future cash flows for various scenarios can be estimated. This can be used to determine the premiums. For the time being, though, this seems to be a long shot since no one knows what the actual mortality risks will be. Another idea is to use a model for interest rate swaps8, as the longevity swap is based on the same principle as used for interest rate swaps, where one cash flow is exchanged for the other. A model for interest rate swaps has been converted into a model for credit swaps and volatility swaps before. An interest rate swap depends on


Actuarial Science

the future interest curve and, analogous to that, a longevity swap depends on the simulated, future mortality curve.

Conclusion

The decrease of the mortality rates constitutes a major problem for pension funds, as they need to pay pension benefits for a longer period than they had anticipated. A longevity swap creates the option to exchange the longevity risk for the short-life risk that, e.g., a life insurance company runs. This is a new market, one which has yet to arise in the Netherlands and it is one reason why the pricing is still a tough issue. Various longevity swaps have already been closed in the United Kingdom and we expect this to rapidly happen in the Netherlands too.

References

D. Blake: “Het afdekken van langlevenrisico’s in pensioenregelingen”, presentation, obtained on 14 October 2011 from www.zwitserleven.nl/i_tems/ zwitserleven_pensioen_seminar_2010_(10_06) (June 2010)

K.P. Goudswaard, R.M.W.J. Beetsma, T.E. Nijman and P. Schnabel: "Een sterke tweede pijler, Naar een toekomstbestendig stelsel van aanvullende pensioenen", appendix to the accompanying letter by then minister Donner of the Ministry of Social Affairs and Employment, Parliamentary Papers II (27 January 2010)

J. van As, W. Boeschoten and G.J. van den Brink: "Langleven in Nederland", Amsterdam: Dutch Association of Investment Professionals (VBA) (2010)

G. Frijters and A. van Haastrecht: "Managen van langlevenrisico", Pensioen Bestuur & Management (2011/2): 30-31

A.J.G. Cairns, K. Dowd, D. Blake and G.D. Coughlan: "Longevity Hedge Effectiveness: A Decomposition", working paper, Edinburgh: Heriot-Watt University (2011)

M. Stoeckle, A. Loddo and D. Picone: "A model for longevity swaps: Pricing life expectancy", Frankfurt/Main: Dresdner Kleinwort (2008)




Are you interested in being on the editorial staff and having your name in the colofon? If so, please send an e-mail to the chief editor at aenorm@vsae.nl. The staff of Aenorm is looking for people who like to:
- find or write articles to publish in Aenorm;
- conduct interviews for Aenorm;
- make summaries of (in)famous articles;
- or maintain the Aenorm website.
To be on the editorial board, you do not necessarily have to live in the Netherlands.


Econometrics

Generalized Autoregressive Score Models

by: Drew Creal, Siem Jan Koopman, André Lucas

To capture the dynamic behavior of univariate and multivariate time series processes, we can allow parameters to be time-varying by specifying them as functions of lagged dependent variables as well as exogenous variables. Although other approaches to introducing time dependence exist, the generalized autoregressive score (GAS) approach has become popular in applied statistics and econometrics. Here we discuss this development by Creal, Koopman, and Lucas (2012), which is based on the score function of the predictive model density at time t.

Typical examples are the generalized autoregressive conditional heteroskedasticity (GARCH) models of Engle (1982) and Bollerslev (1986), the autoregressive conditional duration and intensity (ACD and ACI, respectively) models of Engle and Russell (1998), and the dynamic copula models of Patton (2006). Creal, Koopman, and Lucas (2012) argue that the score function is an effective choice for introducing a driving mechanism for time-varying parameters. In particular, by scaling the score function appropriately, standard dynamic models such as the GARCH, ACD and ACI models can be recovered. Application of this framework to other non-linear, non-Gaussian, possibly multivariate, models leads to the formulation of new time-varying parameter models. They have labeled their model the generalized autoregressive score (GAS) model. Here we aim to introduce the GAS model and to illustrate the approach for a class of multivariate point-process models that is used empirically for modeling credit risk. We further aim to show

Siem Jan Koopman Prof. Dr. Siem Jan Koopman has been Professor of Econometrics at the Vrije Universiteit Amsterdam and a research fellow at the Tinbergen Institute since 1999. He obtained his Ph.D. from the London School of Economics (LSE) in 1992. He held positions at the LSE between 1992 and 1997 and at CentER (Tilburg University) between 1997 and 1999. His research interests are the statistical analysis of time series, time series econometrics, financial econometrics, the Kalman filter, simulation-based estimation and forecasting.

that time-varying parameters in a multi-state model for pooled marked point-processes can be introduced naturally in our framework.

The GAS model

Let the N × 1 vector y_t denote the dependent variable of interest, f_t the time-varying parameter vector, x_t a vector of exogenous variables (covariates), all at time t, and θ a vector of static parameters. Define Y^t = {y_1, ..., y_t}, F^t = {f_0, f_1, ..., f_t} and X^t = {x_1, ..., x_t}. The available information set at time t consists of {f_t, \mathcal{F}_t}, where

\mathcal{F}_t = \{ Y^{t-1}, F^{t-1}, X^{t} \}, \quad t = 1, \ldots, n.

We assume that y_t is generated by the observation density

y_t \sim p(y_t \mid f_t, \mathcal{F}_t; \theta).   (1)

Furthermore, we assume that the mechanism for updating the time-varying parameter ft is given by the familiar autoregressive updating equation

f_{t+1} = \omega + \sum_{i=1}^{p} A_i s_{t-i+1} + \sum_{j=1}^{q} B_j f_{t-j+1},   (2)

where ω is a vector of constants, the coefficient matrices A_i and B_j have appropriate dimensions for i = 1, ..., p and j = 1, ..., q, and s_t is an appropriate function of past data, s_t = s_t(y_t, f_t, \mathcal{F}_t; θ). The unknown coefficients in (2) are functions of θ, that is, ω = ω(θ), A_i = A_i(θ) and B_j = B_j(θ) for i = 1, ..., p and j = 1, ..., q.




The approach is based on the observation density (1) for a given parameter f_t. When observation y_t is realized, the time-varying parameter f_t is updated to the next period t + 1 using (2) with

s_t = S_t \cdot \nabla_t, \quad \nabla_t = \frac{\partial \ln p(y_t \mid f_t, \mathcal{F}_t; \theta)}{\partial f_t}, \quad S_t = S(t, f_t, \mathcal{F}_t; \theta),   (3)

where S(·) is a matrix function. Given the dependence of the driving mechanism in (2) on the scaled score vector (3), the equations (1)–(3) define the generalized autoregressive score model with orders p and q. We refer to the model as GAS(p, q) and we typically take p = q = 1.

The use of the score for updating f_t is intuitive. It defines a steepest ascent direction for improving the model's local fit in terms of the likelihood or density at time t, given the current position of the parameter f_t. This provides the natural direction for updating the parameter. In addition, the score depends on the complete density, and not only on the first or second order moments of the observations y_t.

Via its choice of the scaling matrix S_t, the GAS model allows for additional flexibility in how the score is used for updating f_t. In many situations, it is natural to consider a form of scaling that depends on the variance of the score. For example, we can define the scaling matrix as

S_t = I_{t|t-1}^{-1}, \quad I_{t|t-1} = E_{t-1}[\nabla_t \nabla_t'],   (4)

where E_{t-1} denotes the expectation with respect to the density p(y_t \mid f_t, \mathcal{F}_t; \theta). For this choice of S_t, the GAS model encompasses well-known models such as GARCH, ACD and ACI. Another possibility for scaling is

S_t = J_{t|t-1}, \quad J_{t|t-1}' J_{t|t-1} = I_{t|t-1}^{-1},   (5)

where S_t is defined as the square root matrix of the (pseudo-)inverse information matrix for (1) with respect to f_t. An advantage of this specific choice for S_t is that the statistical properties of the corresponding GAS model become more tractable. In particular, the driver s_t becomes a martingale difference with unity variance.

A convenient property of the GAS model is the relatively simple way of estimating parameters by maximum likelihood (ML). This feature applies to all special cases of GAS models. For an observed time series y_1, ..., y_n and by adopting the standard prediction error decomposition, we can express the maximization problem as

\hat{\theta} = \arg\max_{\theta} \sum_{t=1}^{n} \ell_t,   (6)

where \ell_t = \ln p(y_t \mid f_t, \mathcal{F}_t; \theta) for a realization of y_t.
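As a reading aid (our own sketch, not code from the article), the recursion (2)-(3) and the likelihood evaluation (6) fit in a few lines once the model-specific ingredients, the log density and its scaled score, are supplied. The sketch below assumes the GAS(1,1) case and uses the Gaussian volatility model, for which the scaled score with S_t = I_{t|t-1}^{-1} reduces to y_t^2 - f_t (this is the GARCH example discussed below); all parameter values are illustrative.

```python
import numpy as np

def gas_filter(y, omega, A1, B1, f1, scaled_score, logpdf):
    """Run a GAS(1,1) recursion: f_{t+1} = omega + A1 * s_t + B1 * f_t.

    `scaled_score(y_t, f_t)` returns s_t as in (3); `logpdf(y_t, f_t)` returns
    ln p(y_t | f_t), so the accumulated sum is the log-likelihood in (6).
    """
    f, loglik, f_path = f1, 0.0, []
    for y_t in y:
        f_path.append(f)
        loglik += logpdf(y_t, f)           # prediction error decomposition
        s_t = scaled_score(y_t, f)         # scaled score step, equation (3)
        f = omega + A1 * s_t + B1 * f      # updating equation (2), p = q = 1
    return np.array(f_path), loglik

# Gaussian volatility model y_t = sigma_t * eps_t with f_t = sigma_t^2:
# with S_t = I_{t|t-1}^{-1} the scaled score is s_t = y_t^2 - f_t.
gauss_score  = lambda y_t, f_t: y_t**2 - f_t
gauss_logpdf = lambda y_t, f_t: -0.5 * (np.log(2 * np.pi * f_t) + y_t**2 / f_t)

y = np.random.default_rng(1).standard_normal(500)   # toy data with true variance 1
f_path, loglik = gas_filter(y, omega=0.05, A1=0.10, B1=0.95, f1=1.0,
                            scaled_score=gauss_score, logpdf=gauss_logpdf)
print(round(loglik, 2), np.round(f_path[:3], 3))
```

Maximizing the returned log-likelihood over (omega, A1, B1) with a numerical optimizer gives the ML estimator in (6).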

Evaluating the log-likelihood function of the GAS model is particularly simple. It only requires the implementation of the GAS updating equation (2) and the evaluation of \ell_t for a particular value θ* of θ.

Example: GARCH models

Consider the basic model y_t = σ_t ε_t, where the Gaussian disturbance ε_t has zero mean and unit variance, while σ_t is a time-varying standard deviation. It is a basic exercise to show that the GAS(1, 1) model with S_t = I_{t|t-1}^{-1} and f_t = σ_t^2 reduces to

f_{t+1} = \omega + A_1 (y_t^2 - f_t) + B_1 f_t,   (7)

which is equivalent to the standard GARCH(1, 1) model as given by

f_{t+1} = \alpha_0 + \alpha_1 y_t^2 + \beta_1 f_t, \quad f_t = \sigma_t^2,   (8)

where the coefficients \alpha_0 = \omega, \alpha_1 = A_1 and \beta_1 = B_1 - A_1 are unknown. When we assume that ε_t follows a Student's t distribution with ν degrees of freedom and unit variance, the GAS(1, 1) specification for the conditional variance leads to the updating equation

f_{t+1} = \omega + A_1 (1 + 3\nu^{-1}) \left( \frac{(1 + \nu^{-1}) \, y_t^2}{(1 - 2\nu^{-1}) \left( 1 + \nu^{-1} y_t^2 / ((1 - 2\nu^{-1}) f_t) \right)} - f_t \right) + B_1 f_t.   (9)

This model is clearly different from the standard t-GARCH(1, 1) model, which has the Student's t density in (1) combined with the updating equation (7). The denominator of the second term on the right-hand side of (9) causes a more moderate increase in the variance for a large realization of |y_t| as long as ν is finite. The intuition is clear: if the errors are modelled by a fat-tailed distribution, a large absolute realization of y_t does not necessitate a substantial increase in the variance. Multivariate extensions of this approach are developed in Creal, Koopman, and Lucas (2011).

Example: Regression model

The time-varying linear regression model y_t = x_t' β_t + ε_t has a k × 1 vector x_t of exogenous variables, a k × 1 vector of time-varying regression coefficients β_t and normally independently distributed disturbances ε_t ~ N(0, σ²). Let f_t = β_t. The scaled score function based on S_t = J_{t|t-1} for this regression model is given by

s_t = (x_t' x_t)^{-1/2} \, x_t \, (y_t - x_t' f_t) / \sigma,   (10)

where the inverse of I_{t|t-1} used to construct J_{t|t-1} is the Moore-Penrose pseudo-inverse, to account for the singularity of x_t x_t'. The GAS(1, 1) specification for the time-varying regression coefficient becomes


Get a free subscription now! Download and read published articles online at www.aenorm.eu



f_{t+1} = \omega + A_1 \, \frac{x_t}{(x_t' x_t)^{1/2}} \cdot \frac{y_t - x_t' f_t}{\sigma} + B_1 f_t.   (11)

The updating equation (11) can be extended by including σ² as a time-varying factor and by adjusting the scaled score function (10) for the time-varying parameter vector f_t = (β_t', σ_t²)'.
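For completeness, a minimal sketch (our own illustration, with assumed parameter values) of how the scaled score (10) and the update (11) for the time-varying regression coefficients could be coded; it plugs directly into the GAS(1,1) skeleton shown earlier.

```python
import numpy as np

def regression_gas_step(y_t, x_t, f_t, omega, A1, B1, sigma):
    """One GAS(1,1) step for time-varying regression coefficients f_t = beta_t.

    Scaled score (10): s_t = x_t / ||x_t|| * (y_t - x_t' f_t) / sigma
    Update (11):       f_{t+1} = omega + A1 * s_t + B1 * f_t
    """
    s_t = x_t / np.sqrt(x_t @ x_t) * (y_t - x_t @ f_t) / sigma
    return omega + A1 * s_t + B1 * f_t

# Toy path with k = 2 regressors and a fixed "true" coefficient vector.
rng = np.random.default_rng(2)
k, n, sigma = 2, 200, 0.5
beta_true = np.array([1.0, -0.5])
f = np.zeros(k)
for _ in range(n):
    x_t = rng.standard_normal(k)
    y_t = x_t @ beta_true + sigma * rng.standard_normal()
    f = regression_gas_step(y_t, x_t, f, omega=np.zeros(k), A1=0.10, B1=0.98, sigma=sigma)
print(np.round(f, 2))   # has moved from zero towards beta_true
```

With scalar A1 and B1 the update shrinks towards omega/(1 - B1), so the filtered coefficients track, but do not exactly equal, the underlying regression coefficients; matrix-valued A1 and B1 are handled in exactly the same way.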

Illustration: dynamic pooled marked point process

Statistical models with time-varying intensities have received much attention in finance and econometrics. The principal areas of application in economics include intraday trade data (market microstructure), defaults of firms, credit rating transitions and (un)employment spells over time. To illustrate the GAS model in this setting, we consider an application from the credit risk literature in which pooled marked point-processes play an important role. We empirically analyze credit risk and rating transitions within the GAS framework for Moody's data. Let y_{k,t} = (y_{1k,t}, ..., y_{Jk,t})' be a vector of marks of J competing risk processes for firms k = 1, ..., N. We have y_{jk,t} = 1 if event type j out of J materializes for firm k at time t, and zero otherwise, and we assume that the pooled point process is orderly, such that with probability 1 precisely one event occurs at each event time. Let t* denote the last event time before time t and let λ_{k,t} = (λ_{1k,t}, ..., λ_{Jk,t})' be a J × 1 vector of log-intensities. We model the log-intensities by

\lambda_{k,t} = d + Z f_t + X_{k,t} \beta,   (12)

where d is a J × 1 vector of baseline intensities, Z is a J × r matrix of factor loadings, and β is a p × 1 vector of regression parameters for the exogenous covariates X_{k,t}. The r × 1 vector of dynamic factors f_t is specified by the GAS(1, 1) updating equation (2) with ω = 0. Since f_t is not observed directly, we need to impose a sign restriction on Z to obtain economic interpretations for the time-varying parameters. We assume the model has a factor structure: the intensities of all firms are driven by the same vector of time-varying systematic parameters f_t. The log-likelihood specification using (12) is given by

\ell_t = \sum_{j=1}^{J} \sum_{k=1}^{N} \big[ y_{jk,t} \, \lambda_{jk,t} - R_{jk,t} \cdot (t - t^*) \cdot \exp(\lambda_{jk,t^*}) \big],   (13)

where R_{k,t} = (R_{1k,t}, ..., R_{Jk,t})' and R_{jk,t} is a zero-one variable indicating whether company k is potentially subject to risk j at time t. Define P as a J × J diagonal matrix with jth diagonal element

p_{j,t} = \frac{\sum_k R_{jk,t} \exp(\lambda_{jk,t})}{\sum_{j,k} R_{jk,t} \exp(\lambda_{jk,t})} = \Pr\Big[ \sum_k y_{jk,t} = 1 \,\Big|\, \sum_{j,k} y_{jk,t} = 1 \Big],

i.e., the probability that the next event is of type j, given that an event happens.

Figure 1: The estimated intensities (in basis points) for each transition type for the one-factor marked point process model. Moody’s rating histories are for all US corporates between January 1981 and March 2010.


Based on the first and second derivative of \ell_t and setting S_t = J_{t|t-1}, we obtain the score and scaling matrix

\nabla_t = Z' \sum_{k=1}^{N} \big[ y_{k,t} - R_{k,t} \cdot (t - t^*) \cdot \exp(\lambda_{k,t^*}) \big], \quad S_t = (Z' P Z)^{-1/2}.   (14)

By combining these basic elements into a GAS specification, we have obtained a new time-varying parameter model for credit rating transitions. In comparison with related models, parameter estimation for the current model is much easier.
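To illustrate (again as our own sketch, not code from the article), the score and scaling in (14) and the factor update with ω = 0 can be computed directly from the event data; all dimensions and numbers below are made up for illustration, with r = 1 factor and the last loading in Z fixed to one as in the application that follows.

```python
import numpy as np

def mpp_gas_step(y, R, dt, lam_prev, Z, f, A1, B1):
    """One GAS step for the pooled marked point-process model (12)-(14), omega = 0.

    y, R:      N x J marks and risk indicators at the current event time
    dt:        time elapsed since the previous event time t*
    lam_prev:  log-intensities lambda_{k,t*} from (12), broadcastable to N x J
    Z:         J x r factor loadings, f: current r x 1 factor
    """
    ex = R * np.exp(lam_prev)                    # R_{jk,t} * exp(lambda_{jk,t*})
    grad = Z.T @ (y - dt * ex).sum(axis=0)       # score nabla_t, equation (14)
    p = ex.sum(axis=0) / ex.sum()                # diagonal of P: prob. next event is type j
    info = Z.T @ np.diag(p) @ Z                  # Z' P Z
    vals, vecs = np.linalg.eigh(info)            # S_t = (Z' P Z)^{-1/2}; a pseudo-inverse
    S = vecs @ np.diag(vals ** -0.5) @ vecs.T    # would be used if info were singular
    return A1 @ (S @ grad) + B1 @ f              # f_{t+1} = A1 s_t + B1 f_t (omega = 0)

# Illustrative sizes: N = 3 firms, J = 4 transition types, r = 1 factor.
N, J, r = 3, 4, 1
Z = np.array([[0.5], [0.8], [0.3], [1.0]])       # last loading fixed to 1 for identification
d = np.log([1e-3, 5e-4, 2e-3, 1e-3])             # baseline log-intensities
f = np.zeros(r)
R = np.ones((N, J))                              # every firm exposed to every risk
y = np.zeros((N, J)); y[0, 3] = 1.0              # firm 0 experiences the fourth event type
lam = d[None, :] + (Z @ f)[None, :]              # (12) without exogenous covariates
f = mpp_gas_step(y, R, dt=0.02, lam_prev=lam, Z=Z, f=f,
                 A1=np.array([[0.05]]), B1=np.array([[0.95]]))
print(f)
```

Looping this step over the ordered event times and summing the corresponding \ell_t terms from (13) gives the log-likelihood that is maximized over the static parameters.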

Application to Moody’s credit rating data

For our illustration, we adopt the marked point-process model (12), (13) and (2), with ω = 0 and s_t = S_t ∇_t given by (14), for a data set which contains Moody's rating histories of all US corporates over the period January 1981 to March 2010. The initial credit ratings for each firm are known at the beginning of the sample and we observe the transitions from one rating category to another over time. Moody's ratings include 21 different categories, some of which are sparsely populated. For the sake of this illustration, therefore, we pool the ratings into a much smaller set of two credit classes: investment grade (IG) and sub-investment grade (SIG). Default is treated as an absorbing category, which makes for J = 4 possible events. It is often concluded in credit risk studies that default probabilities are countercyclical. We therefore allow the log-intensities (12) to depend on the (standardized) annual growth rate of US industrial production as an exogenous variable. We only present the results for a single factor, r = 1. In order to identify the parameters in the 4 × 1 vector Z, we set its last element to unity, so that our single factor is common to all transition types but its scale is identified by the event representing a move from SIG to default. For the one-factor model (r = 1), we perform a full benchmark analysis of the new GAS model in relation to our benchmark model of Koopman et al. (2008), hereafter referred to as KLM08. The marked point process KLM08 model has the same observation density (13) as the GAS model. However, the time-varying parameter f_t follows an Ornstein-Uhlenbeck process driven by an independent stochastic process. Parameter estimation for the KLM08 model is more involved than for the GAS model due to the presence of a dynamic, non-predictable stochastic component.

Figure 1 compares the estimates of f_t obtained from the two model specifications. For each of the four possible rating transitions, we plot the intensity of the transition (in basis points on a log scale). These intensities, after dividing them by the number of days in a year, can approximately be interpreted as the daily transition probabilities for each rating transition type. We learn from Figure 1 that the estimates of the time-varying probabilities from the GAS model are almost identical to those from the KLM08 model. However, in our current GAS framework the results are obtained without the computationally intensive simulation methods required for models such as KLM08. This underlines an attractive feature of our GAS approach.

References

T. Bollerslev: "Generalized autoregressive conditional heteroskedasticity", Journal of Econometrics 31(3) (1986) 307-327

D.D. Creal, S.J. Koopman and A. Lucas: "A dynamic multivariate heavy-tailed model for time-varying volatilities and correlations", Journal of Business & Economic Statistics 29 (2011) 552-563

D.D. Creal, S.J. Koopman and A. Lucas: "Generalized Autoregressive Score Models with Applications", Journal of Applied Econometrics, forthcoming (2012)

R.F. Engle: "Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation", Econometrica 50(4) (1982) 987-1007

R.F. Engle and J.R. Russell: "Autoregressive conditional duration: a new model for irregularly spaced transaction data", Econometrica 66(5) (1998) 1127-1162

S.J. Koopman, A. Lucas and A. Monteiro: "The multi-state latent factor intensity model for credit rating transitions", Journal of Econometrics 142(1) (2008) 399-424

A.J. Patton: "Modelling asymmetric exchange rate dependence", International Economic Review 47(2) (2006) 527-556



De Nederlandsche Bank: working on trust

DNB is an organisation with important public tasks: we contribute to sound monetary policy and to payments that run as smoothly and securely as possible, and we supervise banks, insurance companies and pension funds. In doing so, we work for the financial stability of the Netherlands, ensuring trust in our financial system and thereby prosperity and a healthy economy.

Modern employment conditions
Not for nothing have we ranked in the top 5 of the Intermediair Best Employers survey for three years in a row: a modern package of employment conditions, a 36-hour working week, the possibility to work flexibly (depending on your duties), a good starting salary plus a thirteenth month, and a bonus as a reward for extra effort.

Development
We consider it important that our employees keep developing themselves, not only to stay up to date professionally, but above all to stimulate your talents and competences. That is why we continuously offer opportunities to learn and grow, and we try to respond to what you consider the most valuable stimulus for your personal and professional development.

Internships
There are various internship possibilities at DNB. We offer both work placements and research internships. Do you have an idea for an internship assignment that fits our field of work? Then we warmly invite you to submit a motivated proposal. Use the digital application form on the DNB website and include, besides your CV and grade lists, a letter containing your internship proposal. We are also curious about your motivation for doing an internship at DNB.

DNB Traineeship
The first DNB traineeship starts in April 2012 and offers a special opportunity for 8 recently graduated, ambitious high potentials. As a trainee at DNB you start doing substantive work right away, you get the chance to learn a great deal from different perspectives, and we train you to become a broadly employable professional. Applications for the first DNB traineeship are now closed. More information about a next traineeship can be found on the DNB website.

In-house days
Would you like to get to know DNB? Then we would be glad to meet you at one of the company days, job fairs, events or presentations at universities where we are present. We also regularly organise an in-house day for study associations or in cooperation with an external agency: a good opportunity to get to know DNB both as guardian of financial stability and as an employer! See http://www.werkenbijdnb.nl for an overview of our events.

More information
For more information about DNB, internships and vacancies, go to http://www.werkenbijdnb.nl or e-mail recruitment@dnb.nl


On this page you find a few challenging puzzles. Try to solve them and compete for a prize! Submit your solution to Aenorm@vsae.nl.

Answer to “Dog’s Mead”


The race of the econometrician and the actuary

Everyone who reads Aenorm has heard of the famous race between the econometrician and the actuary. The econometrician could walk twelve times faster than the actuary, so the great magazine Aenorm arranged for a race in which the actuary would have a twelve mile head start. Aenorm maintained that the econometrician would never overtake the actuary, because while he walked twelve miles, the actuary would advance one mile. Then, when the econometrician went the one mile, the actuary would have gone one-twelfth of a mile. There would always be a small distance between them, although this distance would grow smaller and smaller. We all know, of course, that the econometrician does catch up with the actuary, but it is not always easy in circumstances of this sort to determine the exact point of passing. There is a similarity between the famous race and the movements of the hands on the clock. Since it is now exactly noon, the two hands are together. When will the two hands be together again? (By exactly we mean the time must be expressed accurately to the fraction of a second.)

Dividing farmers earnings

It appears that for five euros Hobbs and Nobbs agreed to plant a field of potatoes for farmer Snobbs. Nobbs can drop a row of potatoes in forty minutes and cover them at the same rate of speed. Hobbs, on the other hand, can drop a row in only twenty minutes, but while he is covering two rows, Nobbs can cover three. Assuming that both men work steadily until the entire field is planted, each man doing his own dropping and covering, and further assuming that the field consists of twelve rows, how should the five euros be divided so that each man is paid in proportion to the work accomplished?

The canals on Mars

The figure above this text shows a map of the newly discovered cities and waterways on our neighbouring planet, Mars. Start at the city marked T, at the south pole, and see if you can spell out a complete English sentence by making a tour of all the cities, visiting each city only once and returning to the starting point.

Solutions Solutions to the two puzzles above can be submitted up to July 31st 2012. You can hand them in at the VSAE room (E2.02/04), mail them to aenorm@vsae.nl or send them to VSAE, for the attention of Aenorm puzzle 72, Roetersstraat 11, 1018 WB Amsterdam, Holland. Among the correct submissions, one will be the winner. Solutions can be both in English and Dutch.




Time flies. On the 1st of February we started as the new VSAE board and a lot has happened since. On the 17th, 18th and 19th of April we welcomed 150 talented Master's and PhD students in econometrics from all over the world for the thirteenth Econometric Game. This edition's case, written by Professor Geert Dhaene and Professor João Santos Silva, was about the effect of smoking on the birth weight of newborn babies. From all the great solutions submitted by the participating teams, this year's jury chose the University of Copenhagen as the winner! On the 9th of May the twelfth edition of our Actuarial Congress took place. Some 170 students and actuaries gathered in Felix Meritis to discuss the Dutch Pension Generation Gap. The congress, ably chaired by Jeroen Breen (General Manager at Actuarieel Genootschap & Actuarieel Instituut), was a great success. We thank both committees for what they have achieved and for their great commitment, and we wish the new committees the very best of luck! Of course, the VSAE also organized many relaxing activities in the last few months. Besides our monthly drinks, we organized a pub quiz, a party and a soccer tournament with Kraket. We also travelled to the lovely city of Prague with 40 students for our yearly, legendary Short Trip Abroad. Last but not least, we (team VSAE-Kraket) took part in the Batavierenrace, the world's largest relay race. The academic year is coming to an end and it is almost time for the summer holidays. We wish all our members good luck with their last exams and hope that everyone enjoys the summer afterwards!

Agenda

• 6 June: Monthly drink
• 7 June: Paintball with Ernst & Young
• 22 June: End-of-year activity: Walibi

Summer is coming soon. This means it's time to relax on the beach, to go on vacation and to enjoy our spare time in the park. It also means that the academic year will soon come to an end. As I write this, we are approaching our third General Members Meeting of the year, during which the next board of Kraket will be announced. From that moment on we will slowly be taking steps back to create space for our successors. We have to close up our portfolios, wrap up the year, and do everything we can to make sure the next board gets off to a great start. There will be room for new opportunities – for change, for expanding our activities and especially for rethinking old habits. I would like to wish the succeeding board the best of luck in the coming year; I have a lot of confidence in their competence and ambition. It is also the time to say our thanks. For the past year our active members have been pouring their collective energy into our projects and activities; they form the backbone of our association and they made our year great. I want to thank them and wish them the best of luck in whatever they wish to pursue.


WE ARE LOOKING FOR YOU! Triple A – Risk Finance is an independent and innovative Risk Management and Actuarial Consultancy firm. We see risk management developing into an important "business driver" for financial institutions. We are able to deliver innovative, tailor-made solutions, transferring as much of our expertise as possible to the client organisation, so that it can continue independently after we leave.

PROFILE
A degree in actuarial science or econometrics, combined with strong personal skills: analytical, good at advising, result-oriented, innovative and entrepreneurial. You are also inspired by Risk Management and know how to distinguish yourself within this field. Thanks to your strong communication skills you are a sparring partner for your colleagues and clients.

WHAT DO WE OFFER YOU?
A challenging position within a close-knit team of professionals, as part of a young, ambitious and driven organisation. Naturally we offer a good base salary, supplemented with an excellent bonus scheme and good fringe benefits. Visit our website www.aaa-riskfinance.nl for more information.

INTERESTED?
Contact Mijke van den Berg. E-mail: info@aaa-riskfinance.nl Telephone: 020 - 707 36 40

> OUT OF THE BOX ACTUARIES AND RISK PROFESSIONALS


Welcome to the world of consulting. You are a consultant through and through. You want to do something with your mathematical background. And you find it interesting to be in contact with clients and with colleagues all over the world. Then Towers Watson is the right place for you!

Scan this QR code with your smartphone.

Benefits | Risk and Financial Services | Talent and Rewards werkenbijtowerswatson.nl

