Aenorm 65



Colofon

Chief editor: Annelies Langelaar
Editorial Board: Annelies Langelaar
Editorial Staff: Erik Beckers, Daniëlla Brals, Lennart Dek, Chen Yeh, Arlette Westdorp, Jacco Meure, Bas Meijers
Design: United Creations © 2009
Lay-out: Taek Bijman
Cover design: Michael Groen
Circulation: 2000

A free subscription can be obtained at www.aenorm.eu.

Advertisers: Aegon, All Options, Aon, Delta Lloyd, DNB, Flow Traders, SNS Reaal, TNO, Towers Perrin, Watson Wyatt. Information about advertising can be obtained from Daan de Bruin at info@vsae.nl.

The publication of an article does not imply that it expresses the opinion of the board of the VSAE, the board of Kraket or the editorial staff. Nothing from this magazine may be duplicated without permission of VSAE or Kraket. No rights can be derived from the content of this magazine.

Tumult at the Gate

by: Erik Beckers

During the last few weeks, the University of Amsterdam (UvA) reached the Dutch news thanks to four professors: Paul Aarts, Michiel Leezenberg, Annelies Moors and Ruud Peters. This was due to an article in a popular university magazine in which they advocated appointing Tariq Ramadan as a visiting professor at the UvA. For those who have missed all the fuss around Tariq, here is a short recap. Already in 2006, Tariq was appointed as visiting professor in 'Identity and Citizenship' at the Erasmus University Rotterdam. During the past years he became a well-known adviser in the discussion on Muslims in The Netherlands. One of his main propositions is that young Muslims who have emigrated to the Western world should try to integrate into Western civilization as quickly as they can, without the obstacles of any cultural traditions of their country of origin. He also shared this vision in his talk show on the channel Press TV, which is subsidized by the Iranian government. His connection with this government, through the show on Press TV, eventually led to Tariq's dismissal at the Erasmus University, a decision the professors at the UvA could not understand: in their opinion, such a measure goes against the academic liberty of professors in The Netherlands. But is this argument strong enough to hold when someone indirectly supports a government which clearly suppresses its citizens, and which is at this very moment even conducting sham trials of the people who dared to demonstrate against the recent election result? These professors are not the only ones who are, in my view, misjudging the situation in Iran. Former Dutch Prime Minister Ruud Lubbers recently stated on the website of World Connectors that the position of women in Iran is an example to Muslims worldwide of how it should be. His argument was that the position of women in Afghanistan is far worse, which is nearly the same as telling people in a Brazilian slum not to complain because there are people in Africa who are far hungrier. It would be in line with the left-wing (was it not even red in the past?) image of the University of Amsterdam to ignore Tariq Ramadan and certainly not offer him a position. After all this heat, let us end with a mathematical twist on religion. In the infamous cult movie 'Pi', a relation is drawn between Jewish religion and mathematics: when trying to decode the stock exchange, the main character ends up with a formula which looks to be the solution to the ultimate question in Jewish religion. Perhaps Tariq could have been more successful in his quest for Muslim integration had he turned to mathematics.

ISSN 1568-2188

Editorial staff addresses:
VSAE, Roetersstraat 11, C6.06, 1018 WB Amsterdam, tel. 020-5254134
Kraket, De Boelelaan 1105, 1018 HV Amsterdam, tel. 020-5986015

www.aenorm.eu

© 2009 VSAE/AENORM, vol. 17 (65), October 2009



Growth in Economics: Coordination Problems in Experiments
In 'real world' phenomena, efficient coordination is often observed in difficult problems. In his paper, Weber (2006) conducts an experiment that seems to comply with the real-world observations of successful coordination in large groups.
by: Chen Yeh

Intergenerational Risk Sharing through Pension Systems in Theory and Practice
One of the motivations for pension systems to exist is the risk sharing or insurance opportunities they offer. Pension systems allow people to pool their resources and to bear the risks on those resources collectively. Other reasons for having a pension system include redistribution of income and preventing myopia in the form of too low savings rates by individuals.
by: Siert Jan Vos

An Actuarial Neutral Method to Raise a Delayed AOW Entitlement
When people postpone their AOW, they receive approximately 5% extra per year of delay. This article relates to this extra bonus. Critics say that a 5% bonus per year is too low and should be 8-9%. The goal of this thesis is to construct an actuarially neutral method that calculates the bonus, and to check whether the government's forecast of a 5% bonus per year is correct.
by: Hans Staring

Fair Value of Life Insurance Contract with Embedded Options
This article gradually highlights the main problems that an insurance company has to face when issuing complex life insurance contracts with embedded options, and presents the basic principles that can be applied in order to price them.
by: Anna Rita Bacinello

Distribution Management in the Food Industry
One of the most challenging tasks in today's food industry is controlling product quality throughout the supply chain. In this article, a short introduction is given to distribution management in the food industry, followed by a discussion of a modelling approach. This article is also meant as an illustration of the interdisciplinary research performed within the FoodDTU research centre at the Technical University of Denmark.
by: Renzo Akkerman

Voting Power: Bribing Lobbyists in Voters' Networks
Voting power has been regarded as one of the most fundamental concepts in social choice theory. Extensive research has resulted in power indices. In the following article, an a posteriori voting power model is presented that focuses on influence power. Voters' social relationships are explicitly modelled and power is decomposed into constitutional and network effects. It is shown that neither of these two effects dominates.
by: Chen Yeh

Interview with Fleur Rieter
Drs. Fleur R.M. Rieter AAG has worked since 2003 as a director at Legal & General. From 1998 to 2003 she worked at PricewaterhouseCoopers, and from 1995 to 1998 she was affiliated with the Actuarial Department of Zwitserleven. She is also a board member of the Dutch Actuarial Association and the Actuarial Institute, where she is responsible for quality assurance.
by: Erik Beckers

Market Consistent Valuation using Stochastic Scenarios
Financial modeling of insurance contracts is central to the contribution of actuaries to the financial measurement and management of insurers. Market consistent valuation is a driver which moves the evolution of financial modeling from a deterministic basis to a stochastic basis. This contribution contains an example of market consistent valuation of an insurance product with a minimum return guarantee.
by: Carlo Jonk



BSc - Recommended for readers of Bachelor-level MSc - Recommended for readers of Master-level PhD - Recommended for readers of PhD-level

Pricing and Hedging Asian Basket Spread Options in a Nutshell
In this paper we study the pricing and hedging of arithmetic Asian basket spread options of the European type and present the main results of Deelstra et al. (2008). Asian basket spread options are written on a multivariate underlying. We first need to specify a financial market model containing multiple stocks, and we choose to use the famous Black and Scholes model.
by: Alexandre Petkovic

Inferences on Some Heterogeneous Models
Homogeneous models are simpler than heterogeneous models and usually generate enough power for prediction or forecasting. However, homogeneous models cannot explain the differences caused by intrinsic differences between consumers, or distinguish agent-based effects. This paper briefly discusses some "compromised complex" models and methodological developments for their inferential processes.
by: Zhengyuan Gao

The Restricted Core for Totally Positive Games with Ordered Players
Recently, applications of cooperative game theory to economic allocation problems have gained popularity. In many such allocation problems, the game is totally positive and there is some hierarchical ordering of the players. This article discusses a refinement of the famous Core for these games, based on the distribution of dividends and taking into account the hierarchical ordering of the players.
by: René van den Brink

Stochastic Rates of Return in a Hybrid Pension Fund
Hybrid pension funds represent a combination of Defined Contribution and Defined Benefit pension funds. In this article, a hybrid pension fund is proposed which is accumulated through time, based on a pre-defined target. The only source of uncertainty assumed in this work is volatile rates of return. The contributions are assumed non-constant and are adjusted to eliminate any deficits that arise, using the modified spreading method developed by Owadally (2003).
by: Denise Gómez-Hernández

Embedded Options and Solidarity Transfers within Pension Deals
The rapid change in socio-economic discussions around pensions as a result of the recent turmoil in financial markets is remarkable. Currently the big question is who will pay the bill now that funding ratios of Dutch pension funds have dropped significantly, whereas 2007 headlines were asking "Who owns the excessive pension fund buffers?" The rapid change and intensity of these discussions in the media and among politicians illustrate that pension deals are often incomplete. This article discusses which loose ends can be identified within pension deals. Subsequently, it is shown how option pricing theory and stochastic discount factors can be useful instruments in an approach to address value.
by: Pim van Diepen

Puzzle

Facultative

Social Security and Temptation Problems
Unfunded Social Security is the largest transfer program in most industrialized countries and has an enormous impact on macroeconomic variables and individual saving decisions. It is therefore important to assess whether it actually improves welfare in the economy.
by: Alessandro Bucciol



Mathematical Economics

Growth in Economics: Coordination Problems in Experiments

by: Chen Yeh

In 'real world' phenomena, efficient coordination is often observed in difficult problems. Think of firms in a particular business deal or individuals being part of communities. In these phenomena, coordination plays a crucial role, and these large groups often manage to coordinate successfully. During experiments under laboratory conditions, however, experimental economists have consistently failed to create a large, efficiently coordinating group of individuals. In his paper, by contrast, Weber (2006) conducts an experiment that seems to comply with the real-world observation of successful coordination in large groups. According to Weber, efficient large-group tacit (i.e. non-communicating) coordination can be reached by letting groups start off small and then adding entrants that are well aware of the group's history. The result is quite unique, as it is the first laboratory demonstration of efficient coordination in large groups.

Introduction

In the past few decades, coordination has been an important topic in economics (Schelling (1960), Arrow (1974)). The case of tacit (i.e. non-communicating) coordination in particular has been popular among economists. In tacit coordination, individuals deal with situations in which they attempt to match the actions of others without any agreement or knowledge about the actions of the other players. Examples of coordination problems include buyers and sellers searching for a particular market, workers with complementary production tasks, or consumers purchasing products with network externalities (Weber, 2006). It is often observed that firms and communities in the 'real' world coordinate successfully. However, this observation seems to contradict the results of experimental economists, as large groups of people that do not communicate almost never coordinate successfully in a laboratory environment. Thus the question remains: if large laboratory groups cannot coordinate successfully, how do firms and communities in daily life manage to coordinate efficiently? In his paper, Weber (2006) demonstrates that tacit

coordination among large groups is possible when a "growth process" is adopted. Previous results have shown that small groups are often capable of coordinating successfully. Thus Weber (2006) hypothesizes that once a small group is coordinating efficiently, this group can establish a set of rules regarding what actions are appropriate. As a result, entrants (new members) will adhere to these rules and the group will continue to coordinate activity successfully. The results seem to confirm his hypothesis and are quite unique, as Weber's experiment is the first laboratory demonstration of the regular occurrence of successful tacit coordination among large groups.

Coordination in experiments: the minimum effort game

In experimental economics, experimental methods are applied to answer economic questions or to test the validity of economic theories. Often the subjects of an experiment play a modified game. In coordination experiments in particular, the minimum effort or weak-link game is often studied. This game was introduced by Van Huyck et al. (1990).

In this issue of AENORM, we continue to present a series of articles. These series contain summaries of articles which have been of great importance in economics or which have attracted considerable attention, be it in a positive sense or a controversial way. Reading papers from scientific journals can be quite a demanding task for the beginning economist or econometrician. By summarizing the selected articles in an understandable way, Aenorm sets itself the goal of reaching these students in particular and introducing them to the world of academic economics. For questions or criticism, feel free to contact the Aenorm editorial board at info@vsae.nl.




Table 1. Pay-off matrix (in dollars) for the minimum-effort coordination game (Source: Weber (2006)). Columns give the minimum choice of all players; rows give the player's own choice.

Player's choice     7      6      5      4      3      2      1
      7           0.90   0.70   0.50   0.30   0.10  -0.10  -0.30
      6                  0.80   0.60   0.40   0.20   0.00  -0.20
      5                         0.70   0.50   0.30   0.10  -0.10
      4                                0.60   0.40   0.20   0.00
      3                                       0.50   0.30   0.10
      2                                              0.40   0.20
      1                                                     0.30

A total of n players pick a number from a set of integers that can be thought of as effort or contribution levels. Every player's pay-off then depends on his own pick and the minimum choice of all n players. This explains the term weak-link, as every player's pay-off is partially dependent upon the lowest choice in the group. The original version of this game by Van Huyck et al. (1990) is also used by Weber (2006); Table 1 shows its pay-off matrix. Pure-strategy Nash equilibria are easily spotted: everyone makes the same choice and therefore receives the same pay-off, so there are 7 equilibria in total. The equilibria differ because those corresponding to higher effort levels also result in higher pay-offs; more efficient coordination is therefore equivalent to all players making higher choices in equilibrium. It is clear that the most efficient (or Pareto-optimal) equilibrium is achieved when all players pick 7 and thus receive $0.90. Note that this minimum effort game does not have the incentive problem that is present in the Prisoner's Dilemma: players have no incentive to deviate from the Pareto-optimal outcome. However, reaching the most efficient equilibrium may not be that easy, as players are faced with strategic uncertainty. While it is fairly easy for a player to recognize the most efficient equilibrium, he or she may be unsure of what others will do. As a result, players may choose something other than 7, especially when they think it is likely that someone else will do so. Thus simply being unsure about the other players' actions may lead to inefficient outcomes. Previous results of the minimum effort game are clear: tacit coordination on the Pareto-optimal outcome seems to be impossible for large groups. The only results that indicate some success on the most efficient outcome are for groups consisting of only 2 players. These results are summarized in Table 2. Following these previous results, a strong negative relationship between a group's size and the ability of this group to coordinate on the most efficient level seems to be present.
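The pay-offs in Table 1 are consistent with a linear rule of the form used by Van Huyck et al. (1990); the coefficients below are read off from the table itself. A minimal sketch (illustrative, not part of the original article):

```python
# Pay-off rule behind Table 1: a * (group minimum) - b * (own choice) + c.
# The coefficients a = 0.20, b = 0.10, c = 0.20 (in dollars) are inferred
# from the table entries, e.g. payoff(7, 7) = 0.90 and payoff(7, 1) = -0.30.
def payoff(own_choice, group_minimum, a=0.20, b=0.10, c=0.20):
    assert 1 <= group_minimum <= own_choice <= 7
    return a * group_minimum - b * own_choice + c

# All players choosing 7 is the Pareto-optimal equilibrium ($0.90 each);
# a single low chooser drags everyone's minimum (and pay-off) down.
print(payoff(7, 7))   # 0.90
print(payoff(7, 1))   # -0.30: high effort wasted against a low minimum
print(payoff(1, 1))   # 0.30: the safe but inefficient equilibrium
```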


Weber's remedy: growing groups

Previous results thus seem to indicate that efficient coordination among large groups is hard or even impossible, contrary to what is seen in real-world groups. Weber (2006) notices that few large groups start off large: most groups in fact begin small. Once a small group is successfully coordinated, these groups might be able to stay coordinated when entrants (new players) are added. This might especially occur when these entrants are aware of the group's previous success (or history). As it later turns out, this latter condition is of crucial importance. To incorporate the idea of adding entrants (or "growing groups") in the minimum effort game, a so-called "growth path" is needed. This growth path defines a sequence of weakly increasing sets of players, i.e. every period the number of players stays the same or increases. The set of players in any particular period then includes three possible kinds of players: incumbents (players who were already present in the previous round), informed entrants (players who did not participate in the previous round, but observed the entire sequence of group minima) and uninformed entrants (players who did not participate in the previous round and did not observe the group's history). To test the hypothesis that growing groups, coupled with the exposure of entrants to the group's history, produce better coordinating groups than "simple" large groups, Weber (2006) uses the following experimental design. Several slow, steady growth rates are adopted, the main characteristic being that at most 1 player per round is added to the game session. Furthermore, in each session of the experiment, a group of 12 students participated in the minimum effort game. These students were numbered, instructions were read aloud, and before starting the actual game participants answered several questions to ensure that they understood how the game worked.




Table 2. Previous results of the minimum-effort coordination game (Source: Weber (2006)). Entries give the distribution of the minimum choice in the fifth period, by group size.

Group size   Number of groups    7     6     5     4     3     2     1
2            37                 86%    3%    3%    3%    0%    0%    5%
3            27                 18%    4%    0%   11%   15%   15%   37%
6            10                  0%    0%    0%    0%   10%   10%   80%
8            5                   0%    0%    0%    0%    0%    0%  100%
9            2                   0%    0%    0%    0%    0%    0%  100%
14-16        7                   0%    0%    0%    0%    0%    0%  100%

In total there were three different kinds of sessions: in the control sessions, no growth was present and the group stayed constant at 12; in the history growth sessions, informed entrants were added; and in the no-history growth sessions, only uninformed entrants were added. Lastly, the control sessions lasted 12 periods and both kinds of growth sessions consisted of 22 periods.
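To make the growth-path idea concrete, here is a small sketch of the bookkeeping it implies; the schedule and the naive choice rule are assumptions for illustration, not Weber's exact design or observed subject behaviour:

```python
# A growth path is a weakly increasing sequence of player sets; each entrant
# is tagged as informed (shown the history of group minima) or uninformed.
def run_growth_session(schedule, informed=True):
    history = []                      # sequence of group minima so far
    players = []                      # list of (player_id, knows_history)
    for period, group_size in enumerate(schedule):
        while len(players) < group_size:
            entrant_id = len(players)
            # founders in period 0 have no history to observe
            players.append((entrant_id, informed and period > 0))
        # Real choices come from subjects; as a toy rule, informed players
        # repeat the last observed minimum and everyone else picks 7.
        choices = [history[-1] if (knows and history) else 7
                   for _, knows in players]
        history.append(min(choices))
    return history

# start with 2 players for a few periods, then add one player per period
schedule = [2] * 3 + list(range(3, 13))
print(run_growth_session(schedule, informed=True))
```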

Efficient coordination in the laboratory: results of Weber's experiment

Five control sessions were conducted; thus the sample for the control sessions consisted of 60 students. These students were undergraduates at Stanford University, the California Institute of Technology and Carnegie Mellon University. The results can be found in figure 1, which shows the minimum choices in all periods for all five control sessions, as well as the average minimum (over 12 periods) and the average effort level. Overall, the results of the control sessions are clear: the minimum effort level fell quickly to 1 in all sessions. Both the average choice and the average minima also ended up near 1. These outcomes thus seem to confirm previous results. As was suggested earlier, more surprising results can be found in the (predetermined) growth sessions. In total,

the sample for the growth sessions consisted of 144 students (12 sessions times 12 students). These sessions were conducted among students from Stanford University (sessions 1 to 4), the University of California, Santa Cruz (sessions 5 to 7) and Carnegie Mellon University (sessions 8 to 12). Nine sessions (1 to 9) were in the history condition, while the remaining three (10 to 12) were in the no-history condition. It should be noted that the growth paths were designed in such a way that successful coordination among a two-person group could first be established; afterwards, growth occurred in a slow and steady manner. Three notable observations can be made. First, the small groups were always able to coordinate efficiently (either at effort level 6 or 7), regardless of whether they were part of a history or no-history session. The second observation can be regarded as Weber's strongest contribution: growth often produces large groups that coordinate at high levels of efficiency. It should be noted that these levels are not always the Pareto-optimal outcome; however, they are higher than the conventionally found minimum levels. While this is a notable result, Weber also observes that growth did not always work, even in history sessions. In three of the nine sessions, the minimum by the first period in which the group played as a 12-person collective was already 1.

Figure 1. Period minima in control sessions (Source: Weber (2006))

Figure 2. Period minima and growth in history sessions 1 and 2 (Source: Weber (2006))





Figure 3. Period minima and growth in history sessions 3 and 4 (Source: Weber (2006))

Weber does admit that these results contradict his hypothesis; however, he defends his experiment by noting that in all the sessions that ended up at a minimum of 1, the minimum was higher, at least through a group size of 9, than in previous experiments. Thus Weber concludes that there is "clear support for the hypothesis, that starting with a two-person group, which can reliably reach efficiency, and then adding informed players at a slow rate enables more efficient coordination than starting with a large group". In the last part of his paper, Weber shows statistically that the minima in the control sessions differ significantly from those in the history sessions. The same result holds for the minima in the no-history versus history sessions.

Figure 4. Period minima and growth in history sessions 5 to 9 (Source: Weber (2006))

Figure 5. Period minima and growth in no-history sessions 10 to 12 (Source: Weber (2006))

Conclusion

In contrast to previous experimental results, efficient tacit coordination is often achieved among firms and communities in the real world. In tacit coordination, individuals deal with situations in which they attempt to match the actions of others without any agreement or knowledge about the actions of the other players. In his paper, Weber (2006) demonstrates that tacit coordination among large groups is possible when a "growth process" is adopted. He first notices that few large groups start off large: most groups in fact begin small. Once a small group is successfully coordinated, these groups might be able to stay coordinated when entrants are added. This might especially occur when these entrants are aware of the group's previous success; in his experiments, it turns out that this condition is critical. To incorporate the idea of adding entrants (or growing groups) in the minimum effort game, Weber introduces growth paths. The experimenter thereby decides when an entrant is allowed to participate in the game. In total, three different kinds of sessions were held: control, history and no-history sessions. The results largely seem to confirm Weber's hypothesis, as large groups do coordinate more efficiently than in previous experiments. However, his results also indicate that growth does not always work. Moreover, Weber carefully mentions that his paper was not meant "to provide a prescription for how a group that is already large and coordinated on an inefficient equilibrium might turn things around [to an efficient equilibrium]". He admits that in practice there are also other ways to improve coordination in large groups. Nonetheless, Weber's results are unique, as his is the first laboratory demonstration of efficient coordination among large groups.

References

Arrow, K.J. The Limits of Organization. New York: W.W. Norton & Company, 1974.

Schelling, T.C. The Strategy of Conflict. Cambridge, MA: Harvard University Press, 1960.

Van Huyck, J.B., et al. "Tacit Coordination Games, Strategic Uncertainty and Coordination Failures." The American Economic Review, 80.1 (1990): 234-248.

Weber, R.A. "Managing Growth to Achieve Efficient Coordination in Large Groups." The American Economic Review, 96.1 (2006): 114-126.



Actuarial Sciences

An Actuarial Neutral Method to Raise a Delayed AOW Entitlement

by: Hans Staring

On 29 August 2008, Minister Donner of Social Affairs announced that citizens in the Netherlands would have the possibility to postpone their AOW entitlement for a maximum of five years. The most important goal of this measure is to make it more normal for people to work after they reach the age of 65. With the aging of the population in mind, the Cabinet Balkenende IV wants to raise workforce participation to 80% in 2016 (Cabinet Balkenende IV, 2008, p.3). Greater workforce participation is necessary to keep government finances healthy and to be able to pay the higher expenditures on the AOW and the health care system. When people postpone their AOW, they receive approximately 5% extra per year of delay. This article concerns this extra bonus. Critics say that a 5% bonus per year is too low and should be 8-9% (Panneman, 2008, p.8). The goal of this thesis is to construct an actuarially neutral method that calculates the bonus, and to check whether the government's forecast of a 5% bonus per year is correct. Firstly, the need to raise workforce participation is discussed. Secondly, we design an actuarially neutral formula to calculate the bonus and subsequently discuss the formula the government will use. This allows a judgment to be made regarding the fairness of the critique on the 5% bonus. Furthermore, because individuals wish to know how they can benefit from the bonus received by delaying their AOW, we calculate the result in the event of a delay of one year. Lastly, a conclusion is given.

Why the AOW must be changed

The AOW was introduced in 1957 by the Cabinet Drees III. The cost to society was low, slightly more than 2% of Gross Domestic Product (GDP) (Van Eekelen & Roeterink, 2007, p.19). Nowadays, the costs are much higher, for several reasons. The most important demographic cause is that the old-age dependency ratio has increased. The old-age dependency ratio is defined as the ratio between the population in the age group 65 and over and the population in the age group 20-64.

Hans Staring

Hans Staring (1988) is currently a Master's student in Actuarial Sciences and Mathematical Finance at the University of Amsterdam. Last summer he obtained his BSc. This article is a summary of his bachelor thesis, written under the guidance of Prof. Dr. J.B. Kuné.


The increase can be seen in figure 1. The old-age dependency ratio has partly increased due to higher life expectancy: figure 2 shows that on average males now enjoy their AOW entitlement three years longer than in 1957, and women more than five years longer. The old-age dependency ratio has also increased due to the strong decrease of the fertility rate between 1965 and 1975, when the contraceptive pill became available to many women. The fertility rate decreased from 3.1 in 1957 to 1.7 in 2009. There are also labour market developments that have made the AOW more costly. Most importantly, people retire years earlier than in 1957, making the AOW less affordable for the smaller workforce. In the Netherlands, workforce participation in the age group 60-64 was only 16% in 2000. After the cancellation of fiscal advantages for the VUT and pre-pension schemes, participation increased to 28% in 2007 and is expected to be 43% in 2011 (Commission Bakker, 2008, appendix 2, p.4). All these developments have made the AOW much more expensive for working people and the government. In 1957, the premium percentage for working people was 6.75%; it grew to 17.90% in 2009, which is the maximum tariff the government has imposed. Because 17.90% is insufficient, the remaining costs are paid by the government; these were 30% of the total costs in 2006 (Van Eekelen & Roeterink, 2007, p.20). The expectation is that, if nothing changes, this will be more than 50% in 2030 (Van Eekelen & Olieman, 2003, pp.8-10). The total costs increased from 2% of GDP in 1957 to 4.5% of GDP in 2006 (Van Eekelen & Roeterink, 2007, p.19). By 2030, the total AOW costs will be 6-8% of GDP (Van Eekelen & Olieman, 2003, p.6). The conclusion is that the time has come to take measures against the ever-increasing costs of the AOW.



Figure 1. Old-age dependency ratio

Figure 2. Life expectancy at birth and at age 65

If a more flexible AOW helps people to delay their AOW, this measure is worth supporting.

Determining an actuarially neutral method for the bonus

The AOW entitlement is paid from the moment a person reaches 65 until the end of his or her life. Payments are received at the end of each month. We will now determine the actuarially neutral formula for the bonus when someone wishes to delay his or her AOW entitlement. We start from the equivalence for a whole life annuity immediate which is delayed n years:

$$U_{65}\, a_{65} = U_{65+n}\, a_{65+n}\, {}_{n}E_{65} \tag{1}$$

We can also write this as

$$U_{65+n} = U_{65} \cdot \frac{a_{65}/{}_{n}E_{65}}{a_{65+n}} \tag{2}$$

and subsequently as

$$\frac{U_{65+n}}{U_{65}} - 1 = \frac{a_{65}/{}_{n}E_{65}}{a_{65+n}} - 1 \tag{3}$$

We now have the formula for the bonus when the entitlement is delayed n years. It is built from a whole life annuity immediate at age 65, a whole life annuity immediate at age 65+n, and an n-year pure endowment of 1 at age 65. This entitlement is paid at the end of each year, whereas the AOW entitlement is paid each month. We then get the formula

$$\frac{U_{65+n}}{U_{65}} - 1 = \frac{a^{(12)}_{65}/{}_{n}E_{65}}{a^{(12)}_{65+n}} - 1 \tag{4}$$

When we hold the interest rate constant, then with the help of some basic actuarial formulas we can write this as a formula that is large, but very easy to use. The formula above belongs to a funded system, whereas the AOW is a pay-as-you-go (PAYG) system. This has an impact on the formula: where in a funded system we have the interest rate i, in a PAYG system the present value of the entitlement of a person is in fact fictive. This fictive value is lower when the price inflation f is higher, and vice versa. Furthermore, the AOW entitlement is periodically raised with a factor R. First, we write formula 4 as

$$\frac{U_{65+n}}{U_{65}} - 1 = \frac{\left(\ddot{a}^{(12)}_{65}(c) - \tfrac{1}{12}\,\ddot{a}_{65}(c)\right)/{}_{n}E_{65}}{\ddot{a}^{(12)}_{65+n}(c) - \tfrac{1}{12}\,\ddot{a}_{65+n}(c)} - 1 \tag{5}$$

using

$$a^{(12)}_{x}(c) = \ddot{a}^{(12)}_{x}(c) - \tfrac{1}{12}\,\ddot{a}_{x}(c), \qquad \ddot{a}^{(12)}_{x}(c) = \alpha(12)\,\ddot{a}_{x}(c) - \beta(12)\,\ddot{a}_{x}(\Delta c),$$

where the annuity over the indexed entitlement path c is valued as

$$\ddot{a}_{x}(c) = \sum_{k=0}^{\infty} c_{0}\,(1+R)^{k}\, v^{k}\, {}_{k}p_{x}, \qquad v^{n} = \left(\frac{1}{1+f}\right)^{n},$$

$\ddot{a}_{x}(\Delta c)$ denotes the same annuity taken over the yearly increases of the entitlement, and

$$\alpha(12) = \frac{f \cdot d}{f^{(12)} \cdot d^{(12)}}, \qquad \beta(12) = \frac{f - f^{(12)}}{f^{(12)} \cdot d^{(12)}}.$$

Substituting these into formula 5 gives

$$\frac{U_{65+n}}{U_{65}} - 1 = \frac{\left(\alpha(12)\,\ddot{a}_{65}(c) - \beta(12)\,\ddot{a}_{65}(\Delta c) - \tfrac{1}{12}\,\ddot{a}_{65}(c)\right)/{}_{n}E_{65}}{\alpha(12)\,\ddot{a}_{65+n}(c) - \beta(12)\,\ddot{a}_{65+n}(\Delta c) - \tfrac{1}{12}\,\ddot{a}_{65+n}(c)} - 1 \tag{6}$$

Formula 6 can then be written, dividing all payments by the base amount c0, as
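As a worked illustration of formulas (1)-(7), the sketch below computes the actuarially neutral bonus directly as a ratio of expected discounted, indexed payment streams. The Gompertz-type mortality curve is a stand-in assumption for the CBS projection table used in the article, so the output will not exactly reproduce the article's figures:

```python
import numpy as np

MAX_AGE = 120

def one_year_survival(ages, a=5e-5, b=0.09):
    # Gompertz-type force of mortality; an illustrative assumption, not CBS data.
    return np.exp(-a * np.exp(b * ages))

def pv_aow(start_age, f=0.0229, R=0.0272):
    """Present value at age 65 of a yearly AOW entitlement of 1 (at the age-65
    level), first paid at start_age, indexed by R and discounted at the price
    inflation f (the 'fictive' PAYG discounting discussed above)."""
    ages = np.arange(65, MAX_AGE)
    kpx = np.concatenate(([1.0], np.cumprod(one_year_survival(ages))))
    k = np.arange(len(kpx))                          # years since age 65
    payment = np.where(k >= start_age - 65, (1 + R) ** k, 0.0)
    return np.sum(payment * (1 + f) ** -k * kpx)

def neutral_bonus(n):
    # Formula (3) in ratio form: U_{65+n} / U_65 - 1
    return pv_aow(65) / pv_aow(65 + n) - 1.0

for n in range(1, 6):
    print(f"delay {n} year(s): bonus {neutral_bonus(n):.2%}")
```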

1 n ) 1+ f




Table 1. Bonus factor AOW with regard to the start of AOW, own method

Age first AOW    Bonus factor (own method)
66               5.14%
67               10.78%
68               17.00%
69               23.88%
70               31.52%

Table 2. Bonus factor AOW with regard to the start of AOW, government method

Age first AOW    Bonus factor (government method)
66               5.31%
67               11.20%
68               17.80%
69               25.24%
70               33.67%

$$\frac{U_{65+n}}{U_{65}} - 1 = \frac{\left(\alpha(12)\,\ddot{a}_{65}\!\left(\tfrac{c}{c_0}\right) - \beta(12)\,\ddot{a}_{65}\!\left(\tfrac{\Delta c}{c_0}\right) - \tfrac{1}{12}\,\ddot{a}_{65}\!\left(\tfrac{c}{c_0}\right)\right)/{}_{n}E_{65}}{\alpha(12)\,\ddot{a}_{65+n}\!\left(\tfrac{c}{c_0}\right) - \beta(12)\,\ddot{a}_{65+n}\!\left(\tfrac{\Delta c}{c_0}\right) - \tfrac{1}{12}\,\ddot{a}_{65+n}\!\left(\tfrac{c}{c_0}\right)} - 1 \tag{7}$$
−1

The bonus method is now independent of the amount c0. In practice this means that the bonus method is independent of the civil status (married, single, perhaps entitled to an extra allowance) of the receiver of the AOW: the actuarially neutral bonus method is equal in all these cases. As it is very difficult to estimate the value of the price inflation f and of the raise R of the entitlement, we chose to assume that these are constant and equal to the averages of the last 10 years (2.72% for R, 2.29% for f). We see that on average the purchasing power of the AOW entitlement increases every year. We have used the survival probabilities from Statistics Netherlands (CBS), taking into account future developments. The resulting bonus factors are shown in table 1; when someone delays his or her AOW for two years, for example, the bonus will be 10.78%.

The bonus method as used by the government

Minister Donner has announced that the following formula will be used to calculate the bonus (Donner & Aboutaleb, 2008, p.7):

$$\frac{LE}{LE - P} \tag{8}$$

Here LE stands for the remaining life expectancy in years at age 65, and P stands for the delay of the AOW entitlement in years. The life expectancy is calculated by the CBS, taking into account future developments. In a memorandum (Van der Meulen & Van Duin, 2009, pp.5-6), the life expectancy to be used in the bonus method was determined; for 2007 it is 19.85 years. With only this data, the government can calculate the bonus for any given delay. In table 2 this is done for whole years of delay. If someone decides to receive AOW at age 67, the bonus they will receive from the government is 11.20%. The first remarkable thing about formula 8 is that it uses neither the price inflation nor the periodic raise by the government. We saw that the raise (on average 2.72%) is higher than the price inflation (on average 2.29%); without incorporating these estimates, postponing the AOW will be beneficial, as it raises purchasing power. The second remarkable thing is the calculation of the remaining life expectancy in the age group 66-70, in the denominator of formula 8. With a life expectancy at age 65 of 19.85 years, the life expectancies for the ages 66-70 are calculated as 18.85, 17.85, 16.85, 15.85 and 14.85 years respectively. This is of course too low, because some people die; those who survive must have a higher than average remaining life expectancy. Because the denominator is too low, the fraction is too high, which is beneficial for people who delay their AOW. When we compare the actuarially neutral method and the government method (table 1 and table 2), we indeed see that the government method gives a consistently higher bonus.
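The government rule is easy to verify; the snippet below reproduces the factors of table 2 exactly from the quoted 2007 life expectancy of 19.85 years:

```python
# Government method (formula 8): the raised entitlement equals the original
# times LE / (LE - P), with LE the remaining life expectancy at 65 and P the
# delay in years. LE = 19.85 is the CBS value for 2007 quoted above.
LE = 19.85
for P in range(1, 6):
    print(f"first AOW at {65 + P}: bonus {LE / (LE - P) - 1:.2%}")
# Prints 5.31%, 11.20%, 17.80%, 25.24%, 33.67%, matching table 2.
```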

Table 3. After-tax differences in income

Income      Civil class    After-tax difference in income
€ 30,500    Married        € 4,233
€ 30,500    Single         € 6,380
€ 45,750    Married        € 4,774
€ 45,750    Single         € 6,550
€ 61,000    Married        € 4,095
€ 61,000    Single         € 5,872



Figure 3. Married People

The criticism that the bonus of the government is too low is incorrect. This is due to the fundamental differences between funded and PAYG pensions. The AOW entitlement is raised every year by 2.72% on average, which explains the main component of the difference between the 5% bonus for the AOW and the 8-9% bonus for funded pensions. The remaining difference is explained by the discount factor used: in a funded pension system the discount factor is 3-4%, while in a PAYG system the discount factor is the price inflation, which is usually lower than 3%. The government chose a formula that is easy to understand and to calculate, which can positively influence the use of the possibility to delay. It is also positive that the government takes future developments in mortality into account. The bonus is slightly higher, but not shockingly so, in the government method. The government's bonus method is not actuarially neutral, but it comes close.

Analysis of a one-year postponement of the AOW

The analysis has been completed for three wage classes and two civil status categories. The wages are € 30,500, € 45,750 and € 61,000; the two civil status categories are married and single. Consider the following situation. Two neighbours, both 65 years old, earn the same wage and share the same civil status. One neighbour delays his AOW for one year, while the other chooses to receive his AOW immediately. The neighbour who receives his AOW puts the difference in after-tax income in a savings account. The first year he receives interest of 4.4% (the rate used in Donner & Aboutaleb (2008, p.33)). After that, each year he withdraws from his savings account the amount his neighbour receives as bonus. From the moment his savings account has insufficient funds to pay the bonus, postponing would have been the better choice. Of course, one does not know in advance whether one will live to the moment when that choice starts to matter. The differences in after-tax income (using the latest Dutch tax rules) can be found in table 3. For married persons, the AOW entitlement was raised over the last ten years by 2.69% per year on average, and for singles by 2.76%.

Figure 4. Singles

Using the government method, the bonus people get for a one-year postponement of their AOW is 5.31%. Now the analysis can be made. The time at which the savings account is depleted is shown in figure 3 for married people and in figure 4 for singles. We can see that the savings account becomes unable to fund the bonus between 2019 and 2021, when the neighbours are between 74 and 76 years old. The conclusion is that for people who work after the age of 65 and expect to live past the age of 76, it is financially interesting to postpone their AOW.
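The break-even comparison can be sketched as follows; the net yearly bonus amount is a hypothetical input (the article derives it from the 5.31% bonus and the Dutch tax rules), so the printed age is only indicative:

```python
# One neighbour banks the after-tax income difference of the non-delayed
# first year at 4.4% interest and withdraws, each year, the net bonus his
# delaying neighbour receives; the depletion age marks the break-even point.
def breakeven_age(first_year_deposit, net_yearly_bonus,
                  rate=0.044, indexation=0.0269):
    balance = first_year_deposit
    withdrawal = net_yearly_bonus
    age = 66                          # bonus payments start after the delay
    while balance >= withdrawal:
        balance = balance * (1 + rate) - withdrawal
        withdrawal *= 1 + indexation  # bonus is indexed with the AOW raise
        age += 1
    return age

# e.g. married, income EUR 30,500: deposit 4,233 (table 3); the net yearly
# bonus of roughly EUR 450 is a hypothetical placeholder value.
print(breakeven_age(4233, 450))
```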

Conclusions

Section two showed that the costs of the AOW as a fraction of GDP have grown over the years and are likely to grow further in the future, while the pool of funds generated to pay these costs is getting smaller. These reasons support the plans of the cabinet to increase workforce participation. Sections three and four showed that the bonuses from the government method are slightly higher than the bonuses calculated by the actuarially neutral method. A positive aspect of the government bonus method is that it is very easy to use and to understand. Under the given assumptions, the bonus lies between 5% and 32%, depending on the length of the delay. Lastly, section five showed that people who postpone their AOW for one year need to live approximately ten more years to justify doing so financially.

References

Central Bureau for Statistics. http://statline.cbs.nl/statweb/. Web. February-May 2009.

Commission Bakker. "Naar een toekomst die werkt." 2008. Print.




Donner, Piet Hein & Ahmed Aboutaleb. "Memorie van toelichting wijziging van de Algemene Ouderdomswet in verband met opname van de mogelijkheid om op verzoek van de pensioengerechtigde het ouderdomspensioen geheel of ten dele op een later tijdstip te laten ingaan." Kamerstuk 2008-2009, 31774, nr. 3/3, 18 November 2008. Print.

Van Eekelen, Lambrecht & R. Olieman. "Voorbij de grenzen van de premie?" In J.B. Kuné (red.), Leven in een ouder wordende samenleving. Generatiebewust vooruitzien in de 21e eeuw, Den Haag: Sdu Uitgevers (2003): 85-100. Print.

Van Eekelen, Lambrecht & Christiaan Roeterink. "Overdrachten tussen generaties." In Tussen groen en grijs: Over ouderen en de verhouding tussen de generaties, Den Haag: De Haagse Hogeschool (2007): 13-27. Print.

Kabinet-Balkenende IV. "Men is zo oud als men zich voelt." Cabinet policy memorandum, 28 May 2008. Print.

Van der Meulen, A. & C. van Duin. "Resterende levensverwachting op 65-jarige leeftijd; een verkennend onderzoek." www.cbs.nl. Web. 26 March 2009.

Panneman, Diede. "AOW-bonus misleidt." Het Financieele Dagblad, 9 September 2008: 8. Print.



Actuariaatcongres 2009: "Actuaris van de Toekomst" (Actuary of the Future)

Tuesday 8 December, 09.00-17.30, Tuschinski, Amsterdam

For more information and registration, see www.actuariaatcongres.nl or www.vsae.nl.


ORM

Distribution Management in the Food Industry by: Renzo Akkerman One of the most challenging tasks in today’s food industry is controlling product quality throughout the supply chain. In this article, a short introduction is given to distribution management in the food industry, followed by a discussion of a modelling approach. This approach allows the modelling of food quality degradation in such a way that it can be integrated in decision support models for production and distribution planning. The article is also meant as an illustration of the interdisciplinary research performed within the FoodDTU research centre at the Technical University of Denmark.

Introduction

Despite the food sector's relevance, food distribution management has received little attention in the literature. The reason may be that the management of food supply chains is complicated by specific product and process characteristics. These characteristics have often also limited the possibilities for supply chain integration in food supply chains (Van Donk et al., 2008), but the inclusion of these food-specific characteristics is needed to develop successful decision support models in this area. For food products, production and distribution systems especially have to focus on food safety and food quality. First, food safety is increasingly important in today's food supply chains, evidenced by food scares related, for example, to the presence of salmonella in poultry products, or cows infected with BSE. Such situations sometimes lead to major product recalls, and can be commercially devastating for the companies involved. To manage food safety, systems such as HACCP (Hazard Analysis and Critical Control Points) were developed, based on risk management principles. Also, legislation currently enforces traceability of food products during all stages of production and distribution. Even though this legislation is in place, complete traceability is still more the exception than the rule (Miller, 2009).

Renzo Akkerman Renzo Akkerman is an associate professor in operations management at the Technical University of Denmark. He obtained his PhD in Operations Management from the University of Groningen in The Netherlands, where he also received his MSc in Econometrics and Operations Research. His main research interests are in operations management and supply chain management in the food industry.


Nevertheless, the presence of the legislation continuously improves information availability, increasing the opportunities for research in the field of production and operations management. Secondly, food quality might be an even more important product characteristic to consider throughout the supply chain, especially as it is significantly influenced by how production and distribution are organized. Here too, recent technological developments have improved information availability, for example statistics related to temperature monitoring throughout distribution systems. Overall, the focus on safety and quality makes the management of food supply chains complex and distribution planning more challenging. Furthermore, small time windows for delivering the products, high customer expectations, low profit margins, and developments related to sustainable supply chains (visible in initiatives like food miles and carbon footprints) make food distribution management a challenging area that has only recently begun to receive more attention in the literature. In the remainder of this article, the focus is on food quality and an example of how it can be integrated in decision-making related to distribution network planning. In general, the phrase network planning refers to a mid-term, tactical decision problem in which decisions on distribution are highly interrelated with decisions on production and inventory. In the following section, some basic principles behind food quality degradation are outlined, as these need to be understood before their integration in production and distribution planning can be discussed.

Food quality degradation

Quality degradation of food products depends mainly on the storage time and the storage environment, where temperature is the main factor in the latter.



Figure 1. Generic food supply chain structure.

This is of course essential in production and distribution planning, as it is here that we decide how long and in which environment products will be stored. Figure 1 shows the generic structure of a food supply chain, and highlights where planning decisions affect the quality degradation of food products, based on time and temperature. The theory of food quality degradation is a field of research in itself, but, in a given environment, quality degradation is mostly linear or exponential in relation to time. For example, most fresh fruits and vegetables follow a linear relationship, and most fresh meat and fish follow an exponential relationship (as this is based on microbial growth). The convenient aspect of this is that we can assume the degradation is linear in models, as the exponential relationships can easily be linearized by taking logarithms. Even though product quality is a very important aspect in food supply chains, there is limited work in production and operations management research that addresses this. A notable exception can be found in Zhang et al. (2003), who considered the design of a distribution system in which product quality is represented as a function of time and temperature for production, transportation, and storage. In the few contributions found in the literature, quality degradation is mainly treated as a given, which was also identified by Van der Vorst et al. (2008), who stress that product quality should be an essential factor in the design of food supply chains. In the remainder of this article, a pro-active approach to managing food quality throughout the supply chain is presented.
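As a small illustration of the two degradation patterns and the log-linearization mentioned above (the rate constants are assumptions for illustration, not values from the article):

```python
import numpy as np

# Linear decay is typical for fresh produce; exponential decay (driven by
# microbial growth) is typical for fresh meat and fish. Taking logarithms
# turns the exponential case into a linear one, which is what allows both
# to be embedded in linear planning models.
hours = np.arange(0, 72, 6.0)

q_linear = 100 - 0.8 * hours                 # e.g. fresh vegetables
q_exponential = 100 * np.exp(-0.03 * hours)  # e.g. fresh fish

# Log-linearization: log(q) = log(100) - 0.03 * t is linear in t
log_q = np.log(q_exponential)
slope = (log_q[-1] - log_q[0]) / (hours[-1] - hours[0])
print(f"recovered rate constant: {-slope:.3f} per hour")  # ~0.030
```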

Modelling food distribution

In Rong et al. (2009), we developed a modelling approach to support food production and distribution planning. The model is based on mixed-integer linear programming, and combines product quality degradation models with the more common logistics models containing the usual decision variables related to production quantities, inventory levels, and product shipments between different stages in the supply chain. We will not discuss the complete modelling approach here, but focus instead, as this is the most distinguishing feature, on how quality degradation is integrated. As described, quality degradation of a food product can be seen as a linear function of time when the products are exposed to an environment with a certain temperature. Using a range of discrete quality levels ranging from 0% to 100%, the quality level of the product decreases through those quality levels as time goes by, whether it remains in storage or is transported to other facilities. However, the degree of change can differ for different stages in the distribution network. We can build on a large body of knowledge available in the food science literature to find the actual relationship between quality and time for the specific product in which we are interested. We can identify minimum quality levels qmin for each actor in the supply chain: production facilities, distribution centres, and retailers. These minima are based on the minimum quality level required by the retailers and could also differ for different retailers; high-end stores might require higher-quality products than discounters.

The research described in this article is performed within the interdisciplinary food research centre FoodDTU, crossing 10 departments at the Technical University of Denmark (DTU). The research, teaching and consulting undertaken in this research centre covers the entire supply chain – from sea and soil to the dinner table. People with a variety of backgrounds are involved, from micro-biology to sociology, and food technology to operations management. For more information about FoodDTU, visit www.fooddtu.dk, or contact the author of this article at renzo@man.dtu.dk.






Figure 2. Illustration of a food production and distribution system (consisting of a production facility P, a distribution centre D, and a retailer R) and the modelling of product quality in discrete quality levels.

The maximum quality level of the product qmax is based on the initial quality of the product at the start of the distribution system. Figure 2 illustrates a possible distribution system and the quality degradation throughout this system, where the temperatures for the various storage and transportation steps are assumed given.

Defining the range of quality levels should be coordinated with the definition of time periods (used for transportation time and storage time), so that the quality level degrades at least one level each time period (either during transportation or during storage). This is necessary to make sure that the model is able to trace product batches of different quality throughout the production and distribution network. It also allows us to distinguish between all product flows by their quality level, which is essential in our work. Another key aspect of our modelling approach is that the temperatures themselves are also decisions. This means that the degradation of the product involved is actively influenced in the model, obviously taking into account the related cost factors (such as the cooling cost at distribution centres). In addition to the quality degradation aspects and the temperature decisions, the remainder of the model is a more traditional mixed-integer linear programming approach to production and distribution planning, with an objective function comprising several cost factors related to production and distribution, and constraints relating to demand fulfilment, inventory balances, production capacities, etc. Obviously the focus on product quality is reflected in most of the model, for example where inventory balances have to consider products with different quality levels, and also include potential loss of product if its quality falls below the specified minimum level.
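A minimal sketch of this discrete quality bookkeeping is given below; the chain, the degradation rates and the retailer minimum are all assumed for illustration, and the actual model optimizes such choices in a MILP rather than simulating a fixed plan:

```python
# A batch moves through storage and transport steps, each with a duration
# and a temperature-dependent degradation rate (levels lost per period).
Q_LEVELS = 10          # quality expressed on levels 0..10 (0%..100%)
Q_MIN_RETAIL = 6       # minimum level the retailer accepts (assumed)

# (stage, periods, levels lost per period at the chosen temperature)
chain = [
    ("plant storage",      1, 1),
    ("transport to DC",    1, 2),   # higher temperature, faster decay
    ("DC storage",         2, 1),
    ("transport to store", 1, 1),
]

level = Q_LEVELS
for stage, periods, loss_per_period in chain:
    level -= periods * loss_per_period
    print(f"after {stage}: level {max(level, 0)}")

print("meets retailer minimum:", level >= Q_MIN_RETAIL)
```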

Discussion

The modelling approach described in this article combines decision-making on traditional logistical issues such as production volumes and transportation flows with decisions on storage and transportation temperatures. The approach has also been applied in a case study, illustrating how to apply the generic model to a specific case. This important addition especially focuses on how the product quality of a specific product is modelled on a discrete scale and what type of result is obtained from the model. In short, the model was able to determine the temperatures for the different storage and transportation steps in the distribution network that, in combination with the resulting production quantities and distribution plans, complied with all quality requirements at minimal cost. Maybe even more importantly, the model could be used to analyse the cost-efficient operation of the distribution network under different circumstances, for example by studying scenarios in which transportation becomes more and more expensive (e.g. related to its unwanted CO2 emissions through carbon taxes). Then, we are able to determine how much more effort (and money) should be spent on increasing the starting quality of the product, or on slowing its quality degradation process.

Both of these options would allow transport at higher temperatures, leading to a decrease in transportation cost and CO2 emissions. In a follow-up project, similar modelling techniques are applied to the distribution system of meal elements (Akkerman et al., 2009). This project is based on a new way of distributing the main components of professionally prepared meals (e.g. served in hospital kitchens or university canteens). Further to detailed modelling of product characteristics during distribution, the main focus is on finding efficient and environmentally friendly distribution strategies. This means that in addition to costs, the models developed also explicitly consider the environmental impact of choosing a certain packaging material or a certain delivery frequency. As in the model described in this article, all these aspects are highly interrelated with the changing product quality, and good decision support models are essential in understanding the relationships and possible trade-offs between cost-efficiency, environmental impact, and food quality.

Acknowledgements

This article is predominantly based on Rong et al. (2009), and the author is therefore also thankful to his co-authors and other colleagues within the FoodDTU research centre.





References

Akkerman, R., Y. Wang, and M. Grunow. "MILP approaches to sustainable production and distribution of meal elements." Proceedings of the 39th International Conference on Computers & Industrial Engineering, July 6-8, 2009: 985-990, Troyes, France.

Miller, D. "Food product traceability: New challenges, new solutions." Food Technology, 63.1 (2009): 32-36.

Rong, A., R. Akkerman and M. Grunow. "An optimization approach for managing fresh food quality throughout the supply chain." Under revision for International Journal of Production Economics (2009).

Van der Vorst, J.G.A.J., S.E. Tromp, and D.J. Van der Zee. "Simulation modelling for food supply chain redesign; integrated decision making on product quality, sustainability and logistics." International Journal of Production Research, article in press (2008).

Van Donk, D.P., R. Akkerman and T. Van der Vaart. "Opportunities and realities of supply chain integration: The case of food manufacturers." British Food Journal, 110.2 (2008): 218-235.

Zhang, G., W. Habenicht and W.E.L. Spieß. "Improving the structure of deep frozen and chilled food chain with tabu search procedure." Journal of Food Engineering, 60.1 (2003): 67-79.



Interview with Fleur Rieter by: Erik Beckers Fleur Rieter Drs. Fleur R.M. Rieter AAG has worked as a director at Legal & General since 2003. From 1998 to 2003 she worked at PricewaterhouseCoopers, and from 1995 to 1998 she was affiliated with the Actuarial Department of Zwitserleven. She is also a board member of the Dutch Actuarial Association and the Actuarial Institute, where she is responsible for quality assurance.

Could you first describe your career? I studied econometrics in Groningen, The Netherlands, from 1987 to 1993. After graduation I first worked for a life insurance company called Swiss Life. There I discovered Actuarial Science, which seemed quite interesting to me. Swiss Life offered me the opportunity to become an actuary. For a period of three years I worked on different processes like reporting and product development, graduated from the University of Amsterdam (Actuarial Science) and became Actuary AG (Dutch Actuarial Society). Because I wanted to experience more of the actuarial profession, I became a consultant for PricewaterhouseCoopers. For more than five years I was involved with mergers and acquisitions, due diligence and international projects, cooperating with accountants. Legal & General (L&G) was one of my clients, and they in fact offered me the position of one of their directors who was leaving. This offer appealed to me and I agreed, although it meant taking a step back from the actuarial profession. My position at L&G combines the activities of financial director and operations director. Besides the actuarial departments, four more departments report to me. Actuarial science provides you with extensive knowledge of product development and reporting. This made it easier to work more broadly in a management position. Being an actuary already makes you a specialist in a lot of different fields of insurance. I have been working for L&G for over 6.5 years now. During this period the company grew strongly for some time. In those days I investigated internally how the actuarial, financial and ICT processes could be improved and professionalized, a task which I still enjoy doing. As a result of the credit crunch and the “woekerpolissen” (usurious policies), the growth of L&G has stabilized. It is challenging to adapt to these kinds of circumstances. Guarding the quality within L&G already appealed to me, and therefore I was immediately interested when

the Dutch Actuarial Society (AG) approached me to perform this task for the Dutch Actuarial Association & Actuarial Institute (AG&AI). I think there are some similarities between the high-quality processes of L&G and the ambition for quality assurance of AG&AI. Five years ago AG&AI was less professional than it is nowadays, and this development has continued during the last two years in which I have been a member of the board of AG&AI. This position gives me the opportunity to return a favour to the actuarial society. As you already stated, you are currently responsible for quality assurance. What does quality assurance stand for at AG? Quality is a difficult concept, since it is not easy to define. A five-star restaurant of course represents quality, but McDonalds may deliver quality too. At a general meeting of the AG I once explained that being a member of AG puts a stamp on your forehead, which should internationally be recognized as a mark of quality. As an actuary AG, the title you obtain when finishing the degree for actuary AG, you should be able to deliver a certain product of an assured quality. After going through the procedure and a few years of training, you should be able to comply with the requirements. Even if you graduated a long time ago, you should still be developing yourself in order to adapt to the changes of the profession. This also applies to other professions, like accountants, who are permanently being educated through the NIVRA in the Netherlands. The Dutch program of Permanent Education (CPD, continuous professional development) is one of the leading programs in the actuarial world, since every actuary has to earn a certain number of points every year, whereas the requirements are more gradual in other countries. As I have been one of the controllers for the implementation, in cooperation with one of the committees of AG, Permanent Education is something I am quite proud of. It is encouraging to see more and more actuaries starting to realize the benefits



of this system.

How do you divide your time between L&G and AG&AI? My prime responsibility is that of full-time director at L&G. My position on the board of AG&AI takes only approximately half a day per week. But this is an average; during quieter times at L&G I am able to spend slightly more time on AG duties. The president of the board has to spend more time on his duties, which consist for a large part of representative ones. The other members of the board, like myself, divide the remaining duties. I enjoy my work for both L&G and AG equally, since they complement each other.

You were just talking about the new regulations on Permanent Education, which have been obligatory for all actuaries AG for over two years now. Through these regulations, the actuary AG is obliged to earn 40 points per year. These points can be earned through a variety of conferences, literature and special duties. Are there any actuaries who find it hard to meet this norm? This will be evaluated on the first of January 2010. At that moment the first three-year term, by which every actuary AG has to have earned all of his points, will have ended. On this date we will probably discover that not every actuary has earned enough points. They will have another year to show improvement to the board and the committee, and will have to hand in a recovery plan (just like Dutch pension funds) for achieving this. Perhaps a few actuaries will fail to show any improvement. Members of AG know that there will be consequences if they do not earn enough points. It is not an option to leave the unsuccessful unpunished, as this cannot be justified given our ambition, nor towards the regulator and other stakeholders. Of course cases of illness and such will be taken into account. Recently NIVRA expelled some of its members from the Dutch registry because they had not earned enough points. To conclude, the Permanent Education program forms a solid step towards quality among Dutch actuaries. It is important to be taken seriously through this mark of quality, not only by the Dutch regulator (DNB) and the Verbond van Verzekeraars (VvV), but internationally as well.

International recognition of the quality of Dutch actuaries is of course of great importance to the Dutch actuarial profession. Does AG&AI communicate with actuarial societies in other countries because of this? Dutch actuaries are internationally well represented. For example, AG is a member of the International Actuarial Association (IAA) and the Groupe Consultatif. The Groupe Consultatif brings together and monitors all European actuarial associations. Furthermore, AG takes part in a gathering with three other European countries (Germany, Austria and Switzerland) to exchange experiences. Next to these two groups, AG is represented in a variety of international consultations, for example on Solvency II, actuarial professionalism (education) and quality control. Moreover, we organise an international seminar in The Netherlands every year. All of these international forms of contact are maintained by senior actuaries. The AAG quality mark means that you are able to work together with actuaries from other countries as well, which already occurs in practice. The Permanent Education program was established in line with international guidelines to ensure that the AAG quality mark would receive international recognition. For example, some of these guidelines concern actuaries in an IFRS environment. AG&AI recently made an online summary to keep Dutch actuaries informed on all of these guidelines. In addition, there are a lot of actuaries AG who work abroad and therefore also contribute to establishing the name of AG&AI internationally.

What is the effect of last year's credit crunch on Dutch actuaries? Does the word “actuary” currently have a more negative association, or do Dutch people think that actuaries will bring security and calmness to today's unstable financial markets? I do not have a specific view on this, but I am sure Dutch people currently have no negative associations with the profession. In fact I believe that we have become even more important, since an actuary is a translator of risk. Nowadays everyone understands that risk translation is important, while for example before the credit crunch people thought it was impossible that a pension fund would ever have a deficit. Therefore, companies have to closely monitor their risk internally, even more so because of the introduction of Solvency II. We have to learn from the mistakes of Basel II, which did not prevent banks from going bankrupt. Outcomes of models have to be interpreted with care; you should ask yourself whether they reflect reality. Since we have a role in monitoring solvency, there is no reason for any negative association with actuaries, but we definitely have to be aware of the lessons learned.

Is AG involved in the implementation of Solvency II? Being an item on the agenda of every international meeting, Solvency II definitely is one of our spearhead


actions. The AG Solvency II committee is engaged in the process by writing and handing in consultation papers to CEIOPS. In addition, we are in consultation with DNB, which is currently taking action on the subject of Solvency II for Dutch insurers. Does AG regularly have contact with the Dutch regulator, DNB? We have an annual consultation at board level, in which I of course join as well. Next to this, there is a lot of contact between several of our committees and DNB. What is your opinion on the current education of actuaries AG in relation to the credit crunch? Would you like to have more attention paid to compliance, for example? In September 2010 AG&AI launches, in cooperation with TiasNimbas Business School, a new and unique Executive Master of Actuarial Science (EMAS). The Master program is designed to meet international standards. In my opinion, the education suffices in its level of compliance and is even more tailor-made for students. We have translated the international standards for actuaries to the Dutch situation. The education is updated and connected once more to practice. We will keep on doing this, since changes keep on coming. In January this year, AG&AI started the renewed education program Actuary AG. For more details and information on this program, other programs and EMAS, I refer to the website of AG&AI, www.ag-ai.nl.

We cannot deny that the credit crunch has harmed the goodwill towards Dutch insurers. Will the association (AG) play a part in restoring the goodwill? This is the task of the insurers themselves, with no direct involvement of members or the association AG. Ultimately, actuaries are concerned primarily with solvency. It is our task to educate and inform our members on the subject and provide them with enough guidance. The association will of course monitor where the credit crunch affects actuaries, where we can improve, and take our responsibility.

Next to your career at L&G and AG&AI, you are also making an effort for the position of women at the higher levels of business. In October 2008, together with 30 top executives, you wrote a letter to the Minister of Finance, Wouter Bos, in which you stated that women could bring more diversity to the necessary reform of the financial sector. What is the bottleneck at the moment? It was a friendly statement, which addressed the fact that more women in higher positions would bring more diversity and could therefore change the current culture in the financial sector. Women are less focused on increasing their bonuses, which was one of the causes of the credit crunch. Although nowadays more and more women are achieving higher positions in the Netherlands, their number could still be improved, for the benefit of the whole financial sector.

What can we expect from AG&AI in the future? Lately, there have been a lot of developments concerning AG&AI and its actuaries. Besides the credit crunch, we should not forget that Solvency II is coming our way. Furthermore, the occupational group keeps on growing, since more and more actuaries are needed. Therefore Dutch actuaries will increasingly be able to raise their voices. Finally, the office of AG&AI has developed itself over the last two years. AG&AI is situated in a new office building in Utrecht, the new education program is upcoming, and we have appointed an advisory board, which assists us on topics like Quality, Public Affairs and Public Relations. Since we are still busy implementing these issues, you can expect a lot from us in the future.



Actuarial Sciences

Intergenerational Risk Sharing through Pension Systems in Theory and Practice by: Siert Jan Vos Why do pension systems exist? There are several reasons for a society to develop a pension system (see Barr & Diamond, 2006, for a non-technical overview). The traditional reason for having a pension system is providing the elderly with a basic income to prevent poverty. A second reason for having a pension system is consumption smoothing: if you spend everything you earn in the years that you work, you have no money left for consumption once you retire and no longer have an income. A pension system provides a way to save money during working life and to use this for consumption during retirement. A third motivation for pension systems to exist is the risk sharing or insurance opportunities they offer. Pension systems allow people to pool their resources and to bear the risks on those resources collectively. Other reasons for having a pension system include redistribution of income (in a supportive role to the tax system) and preventing myopia in the form of too-low savings rates by individuals.

Introduction In this article, we will focus on the third reason for the existence of pension systems: their risk sharing properties. When we are talking about pensions and their associated risks, we typically think about risks that cover a long time period, like demographic risks, asset return risks during your life, and human capital return risks. For an example of demographic risks, have a look at figure 1 from King (2004) to see how hard it is to correctly predict mortality rates. Over the past 50 years, every 10 years mortality forecasts have had to be changed significantly because people lived far longer than we expected them to. In 1955, actuaries projected that males who reached age 60 in 2000 would on average live for another 18 years. In 1999, this projection had been revised to an expectation of another 27 years – an enormous 50% increase. This increase is partly responsible for the ongoing discussions on increasing the retirement age in

Siert Jan Vos Siert Jan Vos received an M.Sc degree in economics in 2006 and in economics and finance of aging in 2007 at Tilburg University. Currently, he is a second year PhD candidate as part of the Netspar (Network for Studies on Pensions, Aging and Retirement) research theme 'The macro economics of pensions and aging.'


almost all West European countries. The reason that pension systems are by far the best way to organize risk sharing is that private risk sharing or insurance can only be achieved if all parties involved are alive both when the risk sharing contract is signed and when the event the contract pertains to has materialized (Hassler & Lindbeck, 1997). This largely rules out private risk sharing between people from different generations. Furthermore, even if it were possible to write such a contract, in many cases private risk sharing breaks down due to asymmetric information problems such as adverse selection (the problem that if you start an insurance scheme you will attract only the people who badly need the insurance) and moral hazard (the problem that once people are insured, their behaviour changes in a way that makes it more likely that the insurer will have to pay). These problems make it highly unlikely that private parties, especially from different generations, will be able to write a risk sharing contract. This is where the pension system becomes very useful, because in essence the government or pension fund running the system is an “infinitely lived” party and can thus write contracts with all private agents, even on behalf of those who are not born yet. Moreover, it can in principle force the entire population to join the pension system, thus overcoming the asymmetric information problems mentioned earlier. Pension systems facilitate risk sharing both within a generation (intragenerational risk sharing) and between generations (intergenerational risk sharing). Most of the literature focuses on intergenerational risk sharing, since this is the most crucial element a pension system adds to the insurance possibilities of individuals. First we will



discuss some of the findings in the literature on theoretical optimal intergenerational risk sharing. Then, we will go on to the different types of pension systems that exist in reality, and finally we will have a brief look at the Dutch pension system and assess some of its characteristics from an intergenerational risk sharing point of view.

Figure 1. Actuarial Profession projections of male life expectancy after age 60. (Source: Continuous Mortality Investigation, Actuarial Profession. Data are for United Kingdom policyholders of Actuarial Profession members.)

Optimal intergenerational risk sharing A simple example of optimal intergenerational risk sharing can be constructed as follows (see Beetsma & Bovenberg (2009) and Beetsma, Romp & Vos (2009)). Suppose we have an economy in which there are two generations alive at the same time, the old generation o and the young generation y. Both generations have expected utility from consumption: E[u(c_i)], i ∈ {o, y}, where the function u is a monotonically increasing and concave function with lim_{c_i → 0} u'(c_i) = ∞ and lim_{c_i → ∞} u'(c_i) = 0. The total amount of goods that is produced in the economy is equal to Y. Total consumption is equal to total production, so total consumption is also equal to Y. Suppose there exists a ‘social planner’, a person who knows the preferences of both generations and has the power to allocate consumption to any individual he wants to. What would a benevolent social planner do to maximize the expected utility of both generations? The answer can be found by constructing a social welfare function, which is given by the total utility the social planner wants to maximize. In this case that is the sum of the utilities of both generations: W(c_y, c_o) = E[n_y u(c_y) + n_o u(c_o)], where n_i is the number of people in generation i. The social planner will maximize the social welfare function with respect to consumption of the old and the young, subject to the restriction that total consumption must equal Y. Constructing the Lagrangian of this problem yields:

max_{c_y, c_o} E[n_y u(c_y) + n_o u(c_o) + λ(Y − n_y c_y − n_o c_o)]

with the corresponding first order conditions:

E[n_y u'(c_y)] − E[λ n_y] = 0
E[n_o u'(c_o)] − E[λ n_o] = 0
⇒ E[u'(c_y)] = E[u'(c_o)]

This is a central result from the literature on optimal intergenerational risk sharing: optimal risk sharing is achieved by writing a contract such that the expected marginal utilities of income are equal for all parties involved.

Although this result does not necessarily mean that the consumption of each individual should be equal (for example, different generations could have different utility functions), it is rather extreme, because implementing it in a real-world setting under the belief that all people are equal would indeed require equal consumption for everyone. The usual interpretation of this result is therefore slightly different: it is not the level of consumption for every individual that should be equal, but the variation in consumption in response to shocks. Take for example a stock market crash. Without a pension system, young workers would hardly be affected by this, while the retired would suffer a large loss of income. The social planner wants the young to absorb a part of this shock. As a result, the effects of both positive and negative exogenous shocks should be borne by as many people as possible to lessen the impact of shocks per individual.
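As a numeric check of this first-order condition, the following sketch (my own illustration, assuming identical CRRA utility and invented cohort sizes) solves the planner's problem for two levels of output Y; a 10% output drop translates into a 10% consumption drop for both generations.

```python
from scipy.optimize import minimize

gamma = 3.0                       # assumed risk aversion
u = lambda c: c**(1 - gamma) / (1 - gamma)
n_y, n_o = 2.0, 1.0               # cohort sizes (assumptions)

def planner_split(Y):
    # choose c_y to maximize n_y*u(c_y) + n_o*u(c_o) subject to the resource constraint
    obj = lambda c_y: -(n_y * u(c_y[0]) + n_o * u((Y - n_y * c_y[0]) / n_o))
    res = minimize(obj, x0=[Y / (n_y + n_o)], bounds=[(1e-6, Y / n_y - 1e-6)])
    c_y = res.x[0]
    return c_y, (Y - n_y * c_y) / n_o

for Y in (100.0, 90.0):           # a 10% "crash" in total output
    c_y, c_o = planner_split(Y)
    print(f"Y={Y:5.1f}: c_y={c_y:.3f}, c_o={c_o:.3f}")
# Both consumptions fall by the same 10%: the shock is borne proportionally.
```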

Pension systems in reality The next step is to find out what pension system arrangements look like in reality. In general, we can distinguish pension systems along at least two dimensions: defined benefit (DB) versus defined contribution (DC), and pay-as-you-go (PAYG) versus funded systems (see Lindbeck & Persson, 2003). The DB/DC distinction tells us the characteristics of the pension arrangement with respect to contributions paid and benefits received. In a defined benefit system, the benefit to be paid out to the retiree is fixed. The contributions are subsequently set such that they are sufficient to pay out the benefits. In a defined contribution system, the contributions are fixed and the benefits of the arrangement are as high as the returns on the contributions allow. The other distinction, PAYG vs. funded, concerns the way a pension system is paid for. In a PAYG system, the currently working population pays for the currently retired. By the time the current working population has retired, a new young population pays for their retirement




benefits, and so on. In a funded system, each individual saves for his own retirement through a pension fund. In this way, each individual pays for his own retirement instead of depending on a new young generation. If we look at these different pension systems from the perspective of intergenerational risk sharing, we see that a DB system puts most of the risk on the working generations. Since the benefits of the retirees are fixed, any changes will have to be absorbed by the working generation. Suppose that people live longer than expected; then the working people will have to pay a higher pension premium to ensure that the retired get their promised benefits. A DC system puts all risks on the retired generations. If people now live longer than expected, they will have to absorb this by consuming less each year once they are retired, so that the amount they saved is still sufficient to finance consumption.

The Dutch pension system

To go with one specific example, we look at the Dutch pension system. The system consists of a PAYG lump-sum state pension, a wage-linked occupational pension, and voluntary savings with a preferential tax treatment (which we won't discuss here). The occupational pension enables employees to save part of their wage through a pension fund and receive pension benefits from the pension fund once they are retired. Occupational pension funds are funded and, although some of them are DC (and this number has been growing in recent years), most of the pension funds can be characterized as a hybrid of a DC and a DB fund. The DC element is that indexation of the accrued rights depends on the funding ratio of the pension fund and is therefore linked to the returns on investment of the pension fund. The DB elements are that pension rights are based on the worker's wage and the number of years worked, and that pension premiums of the workers can be raised in response to a situation of underfunding. The first pillar provides a basic income for everyone (here the old-age poverty prevention goal of pensions is clear), while the occupational pensions provide opportunities for intergenerational risk sharing. In response to a negative shock, a pension fund can simultaneously lower the indexation on accrued rights, which lowers the benefits to the retired, and increase premiums without increasing the accrual rate – premiums go up without the workers building up any additional pension rights – which hurts the workers. The hybrid form of the occupational pension funds allows the effects of shocks to be spread over different generations. One slightly worrying development in recent years is that quite a number of pension funds and arrangements have moved to a completely DC type of pension for the enrolled employees. Such a shift puts all risks on the retirees in the scheme, which cannot be a good idea from a risk sharing point of view.
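The following stylized rule (my own sketch, not an actual fund's policy; the thresholds and rates are invented) illustrates how such a hybrid arrangement can split an asset shock between retirees, via indexation cuts, and workers, via premium increases.

```python
def respond_to_shock(assets, liabilities, base_indexation=0.02, base_premium=0.15):
    """Toy hybrid DB/DC policy rule; all thresholds are illustrative assumptions."""
    funding_ratio = assets / liabilities
    if funding_ratio >= 1.05:
        return base_indexation, base_premium           # healthy: full indexation
    # below the threshold: scale down indexation, raise premiums
    indexation = max(0.0, base_indexation * (funding_ratio - 1.0) / 0.05)
    premium = base_premium + 0.05 * (1.05 - funding_ratio)
    return indexation, premium

for shock in (0.0, -0.2):                              # a 20% asset crash
    idx, prem = respond_to_shock(assets=105.0 * (1 + shock), liabilities=100.0)
    print(f"shock={shock:+.0%}: indexation={idx:.2%}, premium={prem:.2%}")
```

Both cohorts give up something after the crash: retirees lose indexation, workers pay a higher premium, which is exactly the shock-spreading behaviour described above.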


Summary

In this article we discussed the risk sharing role of pension systems, focusing on intergenerational risk sharing as a way of having people share risks that they could not share without the existence of the pension system. We have seen that a theoretically optimal pension system distributes the effects of shocks such that everyone is affected in the same proportion. Next, we took a look at what the main forms of pension systems look like in reality and what their risk sharing potential is. Finally, we looked at the Dutch pension system and its risk sharing properties, which seem to be quite good, although in recent years there has been a trend in some sectors to shift to pension arrangements that do not share risks very well.

References

Barr, N. and P. Diamond. "The economics of pensions." Oxford Review of Economic Policy, 22.1 (2006): 15-39.

Beetsma, R. and A.L. Bovenberg. "Pensions and intergenerational risk sharing in general equilibrium." Economica, forthcoming (2009).

Beetsma, R., W. Romp, and S. Vos. "Intergenerational risk sharing, pensions and endogenous labour supply in general equilibrium." Mimeo, University of Amsterdam (2009).

Hassler, J. and A. Lindbeck. "Intergenerational risk sharing, stability and optimality of alternative pension systems." Mimeo, Institute for International Economic Studies, Stockholm (1997).

King, M. "What fates impose – facing up to uncertainty." Speech at the 8th British Academy Annual Lecture, Bank of England (2004).

Lindbeck, A. and M. Persson. "The gains from pension reform." Journal of Economic Literature, 41 (2003): 74-112.




Actuarial Sciences

Fair Valuation of Life Insurance Contracts with Embedded Options by: Anna Rita Bacinello This article gradually highlights the main problems that an insurance company has to face when issuing complex life insurance contracts with embedded options, and presents the basic principles that can be applied in order to price them. The article is based on some lectures that the author gave as an invited speaker at the Austrian Workshop on Asset Liability Management in Insurance (Wien, 2004), the Workshop on Life Insurance Fair Valuation (Lyon, 2005), the Annual Meeting of the Swiss Association of Actuaries (Lausanne, 2006), the 9th Spanish-Italian Congress of Financial and Actuarial Mathematics (Alcalá de Henares, 2006), and at the University of Ljubljana (2007).

Introduction Life insurance contracts are often rather complex products that embed various kinds of options, more or less explicitly defined. The most popular implicit options are implied by the presence of minimum guarantees in equity-linked life insurance contracts. In particular, Brennan and Schwartz (1976) and Boyle and Schwartz (1977) were the first to recognize such options, in the second half of the seventies of the previous century, and they applied to them the then recent results from Option Pricing Theory, initiated by Black and Scholes (1973) and Merton (1973) in the first half of the same decade. Similar options are also embedded in participating policies, where the guarantees are usually of the cliquet style and the link between benefits and the reference fund, whose performance is taken into account in order to compute bonuses, is often very complex. The interest rate risk, along with the longevity risk, underlies another important option that is sometimes offered to policyholders: the option to convert a future lump sum, typically a survival benefit, into a life annuity at a guaranteed conversion rate. Moreover, several life insurance products are equipped

Anna Rita Bacinello is a Full Professor of Mathematics and Finance in the Faculty of Economics of the University of Trieste, Italy. Her main fields of research are Finance and Insurance, with particular attention to the valuation of life insurance contracts with minimum guarantees and other embedded options.


with a typical American-style option, the surrender option, which entitles the policyholder to terminate the contract early by receiving a (usually guaranteed) cash surrender value. The recent evolution of financial markets has dramatically shown that all these options cannot be ignored, even if they appear very deep out of the money at issuance. Moreover, their valuation and reserving methods must take into account the recent requirements of the International Accounting Standards Board, which has been working over the last years on the proposal of defining a fair value accounting system for both assets and liabilities of insurers. The aim of this article is to gradually highlight the main problems that an insurance company has to face when issuing such complex contracts and to introduce the basic principles that can be applied to price them, without however presenting specific valuation models, for which there is an extensive literature in the actuarial and financial journals. We start with equity-linked life insurance, and in particular with a basic example: a fixed-term contract paid by a single premium at issuance, in which only financial risk is involved and there are (implicit) options of European style. Then we introduce mortality risk, which turns European into Titanic options (see Milevsky and Posner, 2001), and show moreover that, when the contract is paid by periodical premiums, the options involved also acquire an Asian feature. After that we discuss the valuation of the guaranteed annuity option and the surrender option, focusing in particular on the definition of their payoff structure. Finally, we illustrate some examples of participating contracts, which are usually characterized by more complicated payoffs and can involve other implicit options, such as the bonus option and the default option, and hint at some additional sources of risk affecting the liabilities of a pension fund.



Equity-linked life insurance In equity-linked life insurance contracts the benefits are directly linked to the value of a reference portfolio composed of units of a given asset that can be, e.g., a stock, a stock index, a mutual fund or a combination of mutual funds. The cash value of benefits is then stochastic, while premiums are usually deterministic. Premiums, net of insurance and expense loadings, are deemed to be invested in the reference asset. These contracts are generally characterized by a high level of financial risk, which can be totally charged to the policyholder, in pure equity-linked contracts, or can be shared between the policyholder and the insurer, in guaranteed equity-linked contracts. In particular, guarantees can operate only when the benefits become due (terminal or point-to-point guarantees), or contain some ratchet features, which allow the greater of the actual and guaranteed returns to be consolidated periodically.

Basic example Consider a fixed-term contract with maturity T, paid by a single premium at issuance (time 0). Under this contract a benefit is paid with certainty at the maturity date, independently of whether the insured is still alive or dead; hence it is a purely financial contract, without any kind of demographic risk. We introduce the following notation:
- D = initial amount (deemed to be) invested in the reference asset at time 0,
- S_t = unit value of the reference asset at time t ≥ 0,
- F_t = value of the reference portfolio at time t ∈ [0, T],
- U = fair single premium,
- B_T = benefit.

Then the number of units of the reference asset deemed to be acquired at time 0 is

n ≐ D / S_0

and remains constant over time. Moreover, the value of the reference portfolio is proportional to the current unit price:

F_t = n S_t = D (S_t / S_0), t ∈ [0, T].

a) Pure equity-linked contract
The benefit is equal to the value of the reference portfolio: B_T = F_T. In this case the insurance company can perfectly hedge the financial risk by simply investing the amount D at time 0 in the reference asset. Then, under the usual assumptions of perfectly competitive and frictionless markets, free of arbitrage opportunities, the single premium must coincide with the initial investment: U = D.

b) Guaranteed equity-linked contract with a terminal guarantee
We denote by G_T the maturity guarantee, which can be equal, e.g., to all (or part of) the initial investment in the reference asset with (or without) accumulation at a guaranteed minimum interest rate (i.e., G_T = αD e^{gT} with 0 < α ≤ 1 and g ≥ 0). The benefit is now defined as

B_T = max{F_T, G_T} = n max{S_T, K_T}, with K_T = G_T / n,

and can be decomposed in terms of European options on units of the reference asset with maturity T and exercise price K_T:

B_T = F_T + max{G_T − F_T, 0} = n [S_T + max{K_T − S_T, 0}] (European put payoff)
B_T = G_T + max{F_T − G_T, 0} = G_T + n max{S_T − K_T, 0} (European call payoff).

If such options are traded, also in this case the financial risk can be perfectly hedged at time 0 by
- actually investing the amount D in the reference asset,
- buying n European put options on this asset with maturity T and exercise price K_T.

Alternatively, if zero-coupon bonds with maturity T are also traded, the insurance company can instead invest nothing in the reference asset and buy
- n European call options on this asset with maturity T and strike K_T,
- G_T zero-coupon bonds with maturity T and unit face value.

Then, letting
- c_τ(t, K) = market price at time τ (≤ t) of a European call option on a unit of the reference asset with maturity t and exercise price K,
- p_τ(t, K) = market price at time τ (≤ t) of a European put option on a unit of the reference asset with maturity t and exercise price K,
- v_τ(t) = market price at time τ (≤ t) of a default-free zero-coupon bond with maturity t and unit face value,




we have:

U = D + n p_0(T, K_T) = G_T v_0(T) + n c_0(T, K_T).
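As a concrete (and deliberately simplified) illustration of this decomposition, the following sketch prices U = D + n p_0(T, K_T) under Black-Scholes (1973) assumptions; all parameter values are invented, and the guarantee is taken as G_T = αD e^{gT} with α = 1 and g = 0 (a pure money-back guarantee).

```python
from math import exp, log, sqrt
from scipy.stats import norm

def bs_put(S0, K, T, r, sigma):
    """Black-Scholes price of a European put on one unit of the asset."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm.cdf(-d2) - S0 * norm.cdf(-d1)

D, S0, T, r, sigma = 100.0, 10.0, 10.0, 0.03, 0.20   # invented parameters
alpha, g = 1.0, 0.0                                  # guarantee: G_T = alpha*D*e^(g*T)
n = D / S0                                           # units bought at time 0
G_T = alpha * D * exp(g * T)
K_T = G_T / n                                        # per-unit strike
U = D + n * bs_put(S0, K_T, T, r, sigma)
print(f"fair single premium U = {U:.2f}")            # D plus the cost of the guarantee
```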

A crucial problem arises, however, if, as expected, there are no traded options on the reference asset with the same maturity as the contract (or with the required strike). Recall, in fact, that while life insurance policies are usually long-term contracts, traded financial options have very short durations. The insurer then also has the (hard) task of trying to replicate them. In theoretical frameworks characterized by complete markets, such as, e.g., the Black and Scholes (1973) and Merton (1973) model, such options can be perfectly replicated by means of dynamic self-financing strategies in the underlying asset and in riskless bonds. Unfortunately, even if similar frameworks could reasonably be adopted to model the behaviour of asset prices, (perfect) replicating strategies require continuous rebalancing of the hedging portfolio, but trading in the “real world” cannot take place continuously over time and, moreover, the real world is pervaded by transaction costs that could make too frequent rebalancing prohibitively expensive. As a consequence, alternative hedging strategies, taking transaction costs and discrete rebalancing into account, are called for, and their choice becomes particularly delicate if the valuation is plunged into an incomplete market framework.

Introduction of mortality risk Assume now that a guaranteed benefit B_t = max{F_t, G_t} (with G_t suitably specified) is paid:
a) at time t = T, provided that the insured is still alive (pure-endowment contract),
b) at the time of death of the insured, t, provided that he(she) dies within the term of the contract T (term insurance contract),
c) at the time of death of the insured, t, provided that he(she) dies within the term of the contract T, or at maturity t = T, if the insured is still alive (endowment contract),
d) at the time of death of the insured, t, whenever death occurs (whole-life insurance contract).

Remarks:
- In case a) the whole benefit and, in particular, the put options representing the minimum guarantee provision are a sort of European knock-out contingent claim but, differently from the usual barrier options, they expire before maturity if and when the insured dies, instead of if and when the price of the underlying asset hits a given barrier. How should the insurance company hedge this case? One possibility could simply be to ignore the knock-out feature and act exactly as in the basic case of a fixed-term contract, but of course this results in a super-replicating hedging strategy, which could imply a too high premium.
- Also in case b) it may be that the benefit is never paid and hence the guarantees never become effective, since the options knock in (and expire immediately afterwards) only if the insured dies before the term of the contract.
- In cases c) and d), instead, the benefit is paid with certainty, sooner or later.
- In case d) there is no formal maturity of the options, hence they could also be seen as perpetual options. However, one can still think as if there were a maturity date, suitably chosen.
- In cases b), c) and d) the options involved are something in between European- and American-style, which is why Milevsky and Posner (2001) call them Titanic options. They can actually be exercised before maturity, but the early exercise is not the consequence of a rational decision of the option holder, since it is triggered by an event (death of the insured) that can reasonably be assumed independent of the behaviour of financial markets. Moreover, this event also causes the early expiration (knock-out) of the options, if they are not in-the-money. Finally, their exercise price (K_t = G_t/n) is in general time-dependent.

These examples of equity-linked life insurance contracts with demographic risk show that insurance markets are typically incomplete, because it is not possible to exactly replicate a claim contingent on the lifetime of a given person by trading in the classical financial assets. The insurer then has to hedge an integrated risk (financial + demographic) in incomplete markets. The classical way to tackle this problem is to resort to pooling arguments. Recall, in fact, that the insurance company does not need to hedge each single policy issued separately, but its portfolio as a whole. Assume, in particular, that it has issued L identical policies on independent and identically distributed lives (e.g., with the same age at entry, x). Just to fix ideas, consider the most common case of endowment policies with maturity T (coinciding, e.g., with the retirement date of the policyholders) and assume that the benefit is paid at the end of the year of death, i.e., at time t (t = 1, 2, ..., T), if the insured dies between times t−1 and t, and at maturity T, if the insured is still alive (this is the Brennan and Schwartz (1976) and Boyle and Schwartz (1977) contract). By the Law of Large Numbers, if L is sufficiently large, the fraction of policies expiring in a given year t (t = 1, 2, ..., T) approaches (in probability) its expected value, that is

π_0(t) ≐ _{t−1|}q_x, t = 1, 2, ..., T−1,
π_0(T) ≐ _{T−1|}q_x + _T p_x = _{T−1}p_x

(with the usual actuarial notation).
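As a small numeric illustration of π_0(t) (with invented one-year death probabilities, not a real mortality table):

```python
# Expiry distribution pi_0(t) of an endowment with term T on a life aged x.
T = 5
q = {1: 0.010, 2: 0.011, 3: 0.012, 4: 0.013, 5: 0.014}  # assumed one-year death probs

p_surv = 1.0            # probability of being alive at the start of year t
pi0 = {}
for t in range(1, T + 1):
    if t < T:
        pi0[t] = p_surv * q[t]   # expire at t: survive t-1 years, then die
    else:
        pi0[T] = p_surv          # expire at T: die in the last year or survive to T
    p_surv *= 1 - q[t]

print(pi0, "sum =", round(sum(pi0.values()), 12))        # probabilities sum to 1
```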

Then the mortality risk can be, at least in principle, diversified away by issuing a large enough number of independent and identically distributed risks. If the mortality fluctuations are completely eliminated with reference to the whole portfolio of policies, there is no reason for the insurance company to require a compensation for mortality risk (beyond expected values); hence it becomes risk-neutral with respect to mortality. In this case the whole portfolio of L identical endowment policies becomes equivalent to a portfolio of L fixed-term contracts with different maturities t = 1, 2, ..., T. In more detail, for each possible maturity t there are Lπ_0(t) identical fixed-term contracts in the equivalent portfolio. These contracts can be hedged as in the basic example described before. In particular, if the European options and/or the zero-coupon bonds required to exactly replicate their benefit are traded (i.e., if the financial risk can be perfectly hedged), the single premium of each endowment policy must coincide with the average initial cost of the replicating strategy:

U = D + n ∑_{t=1}^{T} π_0(t) p_0(t, K_t) = n ∑_{t=1}^{T} π_0(t) [K_t v_0(t) + c_0(t, K_t)].

Unfortunately, these theoretical arguments can hardly be applied in practice, because the portfolios of life insurance companies are usually not large enough to allow mortality fluctuations to be completely eliminated. The mortality risk can then not be perfectly hedged. As a consequence, insurance companies are not risk-neutral with respect to mortality, and require a compensation for assuming this risk. Such compensation usually derives from an implicit safety loading of the premium, obtained by using a prudential mortality table. This means that (net) premiums are still expressed as expectations with respect to mortality, but the expectations are computed by using probabilities “extracted from a risk-adjusted mortality measure” instead of the true probabilities π_0(t). E.g., in term insurance, whole life insurance or endowment contracts, the premiums are computed by using higher probabilities of death than those actually assessed, while in pure-endowment contracts and annuities the probabilities of death used for computing premiums tend to be smaller than the “actual” ones. Remark: It is quite natural to parallel this measure with the risk-neutral (or equivalent martingale) measure of Financial Economics. A possible objection is that it does not lead to a linear pricing rule and is not the same for all types of life insurance coverage. However, to recover such an appealing parallelism, one could think (as suggested by de Finetti, 1963) that the adjusted probabilities used to price the various types of contracts are all extracted from the same measure but are conditioned on different


information. If, e.g., the policyholder requires a coverage in case of early death, the insurance company can argue that the insured is in a relatively bad health status, and vice versa when a coverage in case of survival is required instead. In addition, a very crucial problem can arise, especially when dealing with portfolios of annuities or pure-endowments: even if the portfolio is large enough to assume that all diversifiable mortality risk has been eliminated, the “total” mortality risk can also comprise a systematic part, affecting all policies in the same direction. This is the case, e.g., of the longevity risk, i.e., the risk of systematic (negative) deviations between expected and actual mortality rates. Stochastic mortality models that capture the evolution of future mortality trends are then called for.

Periodic premium contracts Consider now the same type of endowment policy previously analysed, but assume that it is paid by a sequence of constant annual premiums, due at the beginning of each year of contract, if the insured is still alive. Assume, once again, that a fixed amount, denoted by D, is deemed to be invested in the reference asset at any premium payment date. If zero-coupon bonds with any maturity between 0 and T are traded and the mortality risk can be perfectly hedged, then the time 0 value of the liabilities of the policyholder is given by

P ∑_{t=0}^{T−1} _t p_x v_0(t) ≐ P ä_{x,T},

where P denotes the annual premium and v_0(0) = 1. The annual premium P is fair if and only if the time 0 value of the liabilities of the policyholder equals the time 0 value of the liabilities of the insurance company, i.e., if

P = V_0 / ä_{x,T},

where V_0 denotes the time 0 value of the liabilities of the insurance company. However, a new problem arises now: since there are periodic investments in the reference asset, at the current unit price, the value of the reference portfolio and, consequently, the benefit, become path-dependent:

F_τ = S_τ ∑_{j=0}^{t−1} D/S_j, t = 1, 2, ..., T and τ ∈ (t − 1, t],


B_t = max{F_t, G_t} = D ∑_{j=0}^{t−1} S_t/S_j + max{G_t − D ∑_{j=0}^{t−1} S_t/S_j, 0}
    = G_t + max{D ∑_{j=0}^{t−1} S_t/S_j − G_t, 0}, t = 1, 2, ..., T.

Then the European options required to replicate the benefit of each fixed-term contract composing the “equivalent” portfolio, if seen as derivatives on single units of the reference asset, are Asian-like, so that they present a Titanic + Asian feature. If they are traded (with the mortality risk diversified away), once again we have:

V_0 = ∑_{t=1}^{T} π_0(t) [D ∑_{j=0}^{t−1} v_0(j) + p_0*(t, G_t)] = ∑_{t=1}^{T} π_0(t) [G_t v_0(t) + c_0*(t, G_t)],

so that

P = D + (∑_{t=1}^{T} π_0(t) p_0*(t, G_t)) / ä_{x,T} = (∑_{t=1}^{T} π_0(t) [G_t v_0(t) + c_0*(t, G_t)]) / ä_{x,T},

where c_0*(t, G_t) (respectively, p_0*(t, G_t)) denotes the time 0 price of a European call (put) option on the reference portfolio with maturity t and strike G_t. Unfortunately, it is very unlikely that these options are actually traded and, moreover, the mortality risk may not be perfectly diversifiable. The insurance company then has to face the problem of hedging an integrated risk in an incomplete market (e.g., along the lines of Møller, 1998). This requires the choice of a suitable model for the asset prices (thus also involving model risk), and the choice of a suitable hedging strategy.
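Since closed-form prices for such Asian-like puts are generally unavailable, simulation is a natural tool. Below is a minimal Monte Carlo sketch (my own illustration, not from the article): it assumes lognormal unit prices under the risk-neutral measure, annual premium dates, and a guarantee equal to the sum of the invested amounts; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
D, S0, r, sigma, t_mat = 10.0, 1.0, 0.03, 0.2, 10   # yearly investment D, horizon t
G_t = D * t_mat            # assumed guarantee: premiums returned without interest

n_paths = 100_000
z = rng.standard_normal((n_paths, t_mat))
# unit prices S_1..S_t via annual risk-neutral lognormal steps
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) + sigma * z, axis=1))
S_full = np.hstack([np.full((n_paths, 1), S0), S])   # prepend S_0

units = (D / S_full[:, :t_mat]).sum(axis=1)          # units bought at times 0..t-1
F_t = units * S_full[:, t_mat]                       # portfolio value at time t
p0_star = np.exp(-r * t_mat) * np.maximum(G_t - F_t, 0.0).mean()
print(f"p_0*(t, G_t) ≈ {p0_star:.4f}")
```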

Guaranteed annuity option

Assume now that the insured, in case of survival at maturity T, has the option to convert the benefit B_T into a life annuity at a guaranteed conversion rate. Letting
- ρ = guaranteed conversion rate,
- α_T = market annuity rate at time T (depending on the age of the insured, the current term structure of interest rates and the annuity type),

the liability at maturity of the insurance company (in case of survival of the insured) is usually formalized as

max{B_T, (B_T/ρ) α_T} = B_T + max{(B_T/ρ) α_T − B_T, 0} = B_T + (B_T/ρ) max{α_T − ρ, 0},

with

C_T ≐ (B_T/ρ) max{α_T − ρ, 0}

thus representing the payoff of the guaranteed annuity option.

Remarks:
- This is an explicit (knock-out) European call option with the time T market annuity rate as underlying asset and the guaranteed conversion rate as strike.
- In case of survival at maturity, the liability of the insurer is not simply B_T = max{F_T, G_T}, but B_T plus the payoff of such option.
- While the exercise of the previous embedded options implied by the minimum guarantees is “automatic” and their payoff is preference-free, the exercise of the guaranteed annuity option is “subjective”.
- The structure of the payoff of the guaranteed annuity option is uncertain, since it reflects preferences of the insured and possible asymmetric information (with respect to the insurance company) about his(her) health status.

However, C_T is actually the payoff of such option only if the insured is really willing to employ the amount B_T to buy a life annuity. If instead the insured does not want to buy a life annuity and prefers to collect the amount B_T immediately (because, e.g., he(she) is seriously ill, or he(she) needs money, …), then it could be that he(she) does not exercise the option, even if it is deeply in-the-money, without being irrational. In fact, differently from options on traded financial securities, insurance markets are far from being frictionless, and hence it is usually impossible to exercise the option by buying the life annuity at the “strike” price B_T and immediately afterwards selling it back at its market value (B_T/ρ) α_T (> B_T). Then the “subjective” value of the guaranteed annuity from the point of view of the insured, i.e., the (maximum) price that he(she) is willing to pay for it, can be different from its market value (B_T/ρ) α_T, and even equal to 0 (if the insured does not want the life annuity) or greater than (B_T/ρ) α_T. To formalize this, assume for a moment that the subjective value of the guaranteed annuity from the point of view of the insured is known and denote it by V_T. Moreover, let A_T denote the actual payoff of the guaranteed annuity option. We have:

A_T = C_T if V_T > B_T, and A_T = 0 if V_T ≤ B_T.

In particular, if V_T ≥ (B_T/ρ) α_T, this means that the insured wants the life annuity, and to get it he(she) would be willing to pay even more than its current market price (B_T/ρ) α_T. Of course, since the insured is rational and non-satiated, he(she) chooses the cheapest way to get the annuity, that is by exercising the option if it is in-the-money (i.e., if B_T < (B_T/ρ) α_T), or otherwise by directly buying the annuity in the market. If instead V_T < (B_T/ρ) α_T, it could be


that V_T > B_T, and in this case the (in-the-money) option is also exercised, while if V_T ≤ B_T the option is not exercised, independently of its moneyness. In spite of this, it is immediate to realize that A_T ≤ C_T for all V_T. Then, even if A_T can be different from C_T, it is equally correct to assume C_T as the payoff function in the pricing of the guaranteed annuity option, because:
- the insured's expectations and/or preferences are not known in advance, hence the actual payoff A_T is uncertain, and C_T supplies an upper bound to it (prudential),
- the insured has the right to receive C_T, i.e., to exercise the option if it is in the money (fair).

The valuation/hedging of the guaranteed annuity option is really crucial. Its underestimation has caused several solvency problems for some life insurance companies in the nineties of the previous century, due to a sudden drop in interest rates coupled with a significant increase in life expectancies. In particular, the choice of suitable stochastic models for longevity risk and for the term structure of interest rates is absolutely necessary.

Surrender option

This option gives the policyholder the right to terminate the contract early, before maturity T, and receive a (pre-determined, although not necessarily known in advance) cash amount, called the surrender value.

Remarks:
- It is a non-standard knock-out American put option written on the residual contract, with maturity T and exercise price given by the surrender value.
- Along with the Titanic (+ Asian) feature, we now also have an American feature, because the policyholder can choose the “optimal” exercise time (provided that the insured is still alive).
- Also in this case the payoff structure at a given time of possible exercise is unknown, due to “subjective” preferences/information of the policyholder.
- Moreover, there is not even a market value of the residual contract (the underlying asset) to compare with the surrender value, but only an “internal” value assigned by the insurance company and usually not public.

In order to discuss the construction of the payoff function for pricing purposes we introduce the following notation:
- R_t = surrender value, in case of surrender at time t (< T),
- V_t = “subjective” value of the residual contract from the point of view of the insured,
- I_t = internal value assigned to the residual contract by the insurance company,
- M_t = “payoff” of the surrender option from the point of view of the insurance company, conditioned on the knowledge of V_t.

The behaviour of the policyholder at time t can then be described as follows:
- if R_t ≤ V_t ⇒ do not surrender,
- if R_t > V_t ⇒ surrender,

so that the (conditional) payoff of the surrender option is given by

M_t = 0 if R_t ≤ V_t, and M_t = R_t − I_t if R_t > V_t; hence M_t ≤ max{R_t − I_t, 0} for all V_t.

Remarks:
- Once again, W_t ≐ max{R_t − I_t, 0} is an upper bound to the actual (uncertain) payoff of the surrender option from the insurance company's point of view. It is then both prudential and fair to assume W_t for pricing purposes.
- The value of the residual contract quantifies the future liabilities of the insurance company, including those implied by the exercise of the surrender option at any time after the current time t, net of the future premiums to be collected from the policyholder.
- This value depends on both mortality and financial uncertainty; even if pooling arguments can be invoked in order to hedge the mortality risk, it is not possible to keep these two sources of uncertainty separate in the valuation, as in the Titanic case.
- If the contract is paid by periodical premiums there is an additional complication, because the premium depends on the value of the surrender option, which in turn depends on the premium level.

The valuation of the surrender option is a very delicate problem, which has to be tackled by means of a numerical approach (binomial or multinomial trees, partial differential equations with free boundary problems, least squares Monte Carlo simulation).
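To illustrate the last of these approaches, here is a compact least squares Monte Carlo sketch in the spirit of Longstaff and Schwartz. It is a deliberately simplified stand-in, not the pricing of an actual surrender option: the residual contract value is modelled as a single lognormal asset, the surrender value as a constant strike, and the policyholder is assumed to exercise optimally, so the problem reduces to an American put. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T, steps, paths = 100.0, 100.0, 0.03, 0.2, 5.0, 50, 20_000
dt = T / steps

z = rng.standard_normal((paths, steps))
# lognormal paths of the "residual contract" value at times dt, 2*dt, ..., T
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))

payoff = lambda s: np.maximum(K - s, 0.0)
cash = payoff(S[:, -1])                    # exercise value at maturity
for t in range(steps - 2, -1, -1):
    cash *= np.exp(-r * dt)                # discount continuation back one step
    s_t = S[:, t]
    itm = payoff(s_t) > 0                  # regress only on in-the-money paths
    if itm.any():
        X = np.vander(s_t[itm], 3)         # quadratic basis: [s^2, s, 1]
        beta, *_ = np.linalg.lstsq(X, cash[itm], rcond=None)
        exercise = payoff(s_t[itm]) > X @ beta
        cash[itm] = np.where(exercise, payoff(s_t[itm]), cash[itm])
value = np.exp(-r * dt) * cash.mean()      # discount from the first step to time 0
print(f"surrender (American put) value ≈ {value:.3f}")
```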

Participating policies Participating policies, or policies with profits, are characterized by the fact that the profits of the insurer are shared with the policyholders. There are several ways in which this profit-sharing can be realized. Usually dividends (also called bonuses) are credited to the policy reserves at the end of each year, and this implies the “purchase” of additional insurance, so that the benefits are “adjusted” year by year. This mechanism is often referred to as a reversionary bonus system. Sometimes, in the case of annual premium contracts, the policyholders too are called upon to contribute to the increase of benefits by means of an annual increase of the premium. In this way the policies are allowed to follow the market returns on investments, and both benefits and premiums are kept up to date.



There is usually a minimum interest rate guaranteed. In the past this rate used to be far lower than the market rates, so that the risk associated with the issue of such guarantees used to be completely disregarded, because the implicit options involved were deeply out-of-the-money. However, their moneyness can be driven by changes in financial risk factors, which are far from being unlikely due to the long-term nature of the life insurance business. Moreover, the strong competition among insurers and between insurance and financial markets can force insurers to sell products embedding options that are not at all out-of-the-money even at issuance. As a consequence, in more recent years, after the drop in the market interest rates observed in many industrial countries, these options have become in-the-money, thus causing solvency problems to some insurance companies. This has prompted the intervention of regulatory authorities, which have introduced some upper bounds on the minimum interest rates guaranteed of new participating business, but of course this has created possible inequalities between different cohorts of policies, in particular between the old policies still in force with high minimum interest rates guaranteed and the new ones. Hence the problem of accurately valuing all embedded options (bonus option, default option, …) and assessing the parameters characterizing the guarantees and the participation mechanism is very crucial, and is attracting the interest of a large number of researchers and practitioners. Note that participating policies imply additional practical problems with respect to equity-linked business, because the reference fund is very often an internal segregated fund, directly managed by the insurance company. Then, not being a traded fund, it is even more difficult to use its “units”, together with riskless assets, in order to hedge the embedded options. Moreover, the payoff structures are often rather complicated, with more or less exotic guarantees (e.g. of the ratchet type). To illustrate this, we make some examples of benefits for single premium contracts, after introducing the following notation:
- T = maturity of the contract,
- U = single premium,
- B_t = benefit payable at time t (t = 1, 2, ..., T),
- P_t = policy value at time t (t = 1, 2, ..., T),
- F_t = value of the reference fund at time t (t = 1, 2, ..., T),
- r_G = minimum interest rate guaranteed (annually compounded),
- α = participation coefficient,
- β = terminal bonus rate,
- γ = target “buffer” ratio (i.e., target ratio between F_t − P_t and P_t),
- m = length of a “smoothing” period.

Example 1

$$P_0 = U, \qquad P_t = P_{t-1}\left[1 + \max\left\{r_G,\; \alpha\left(\frac{F_{t-1} - P_{t-1}}{P_{t-1}} - \gamma\right)\right\}\right], \quad t = 1, 2, \dots, T, \qquad B_T = P_T.$$

This is an example of a fixed-term contract (i.e., without mortality risk) sold in Denmark (see Grosen and Jørgensen, 2000). The benefit at maturity, B_T, is given by the value of the policy reserve. The policy reserve is initially equal to the single premium U and is afterwards adjusted, at the end of each year of contract, according to a fraction α of the excess of the ratio between the actual buffer at the end of the preceding year (F_{t-1} − P_{t-1}) and the corresponding policy reserve (P_{t-1}) over the target buffer ratio γ. However, if this adjustment rate is less than the minimum interest rate guaranteed r_G, then the policy reserve earns the minimum interest rate guaranteed.

Example 2

$$P_0 = U, \qquad P_t = P_{t-1}\left[1 + \max\left\{r_G,\; \alpha\,\frac{\sum_{j=1}^{\min\{t,m\}}\left(F_{t-j+1}/F_{t-j} - 1\right)}{\min\{t,m\}}\right\}\right], \quad t = 1, 2, \dots, T,$$

$$B_T = P_T + \beta \max\left\{P_0\,\frac{F_T}{F_0} - P_T,\; 0\right\} - \max\left\{P_T - F_T,\; 0\right\}.$$

This is another example of a fixed-term contract, sold in the UK (see Ballotta, Haberman and Wang, 2006). The adjustment rate of the policy reserve is now given by the average of the rates of return on the reference portfolio over the last m years of contract (if the contract has a duration of at least m years), still with r_G as minimum interest rate guaranteed. The fact that the adjustment is made not only on the basis of the last return on the reference portfolio produces a sort of smoothing of distributed profits, which is of course also present in the previous example, although through a different mechanism. The benefit at maturity is not simply given by the corresponding value of the policy reserve P_T: there is also a participation, at rate β, in the positive excess of the initial reserve P_0, accrued over the period [0,T] at exactly the return on the reference portfolio (F_T/F_0 − 1), over the final value of the policy reserve (terminal bonus). However, if the value of the reference portfolio is insufficient, i.e., if F_T < P_T, then the shortfall is borne by the policyholder (default option).
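To make these two crediting mechanisms concrete, the following sketch values both contracts by plain Monte Carlo under an assumed lognormal risk-neutral fund and hypothetical parameters; as stressed above, the reference fund is often an internal, non-traded fund, so this dynamic is purely illustrative and is not the authors' model.

```python
import numpy as np

# Monte Carlo sketch for Examples 1 and 2 (all parameter values hypothetical).
rng = np.random.default_rng(0)
T, n_paths = 10, 50_000
r, sigma = 0.03, 0.15                      # riskless rate, fund volatility (assumed)
U, rG, alpha, gamma, beta, m = 100.0, 0.02, 0.7, 0.10, 0.8, 3

# Annual fund values F_0,...,F_T under the assumed lognormal risk-neutral model
z = rng.standard_normal((n_paths, T))
F = np.hstack([np.full((n_paths, 1), U),
               U * np.exp(np.cumsum(r - 0.5 * sigma**2 + sigma * z, axis=1))])

# Example 1: crediting driven by the buffer ratio (F_{t-1} - P_{t-1}) / P_{t-1}
P1 = np.full(n_paths, U)
for t in range(1, T + 1):
    P1 = P1 * (1 + np.maximum(rG, alpha * ((F[:, t - 1] - P1) / P1 - gamma)))

# Example 2: crediting driven by the m-year average fund return, plus the
# terminal bonus and the default option at maturity
P2 = np.full(n_paths, U)
for t in range(1, T + 1):
    k = min(t, m)
    avg_ret = (F[:, t - k + 1:t + 1] / F[:, t - k:t] - 1).mean(axis=1)
    P2 = P2 * (1 + np.maximum(rG, alpha * avg_ret))
B2 = P2 + beta * np.maximum(U * F[:, T] / F[:, 0] - P2, 0) - np.maximum(P2 - F[:, T], 0)

disc = np.exp(-r * T)
print("Example 1 value:", disc * P1.mean())
print("Example 2 value:", disc * B2.mean())
```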




Example 3

$$B_0 = \frac{U}{\sum_{t=1}^{T} \pi_0(t)\,(1 + r_G)^{-t}}, \qquad B_t = \frac{B_{t-1}}{1 + r_G}\left[1 + \max\left\{r_G,\; \alpha\left(\frac{F_t}{F_{t-1}} - 1\right)\right\}\right], \quad t = 1, 2, \dots, T.$$

This is an example of a policy sold in the Italian market (see Bacinello, 2001). Differently from the previous examples of fixed-term contracts, it is an endowment policy which pays the benefit B_{t-1} at time t if the insured dies in the t-th year of contract (t = 1,2,...,T), or B_T at maturity if the insured is still alive. The initial benefit B_0 is obtained by computing the actuarial accumulated value of the single premium U according to the technical interest rate r_G and the demographic probabilities π_0(t). Then, if the fraction α of the rate of return on the reference portfolio in year t exceeds r_G, the benefit earns an additional return at rate

$$\frac{\alpha\left(F_t/F_{t-1} - 1\right) - r_G}{1 + r_G},$$

so that the total rate of return credited to the policy in year t is given by $\max\{r_G,\; \alpha(F_t/F_{t-1} - 1)\}$.
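A minimal sketch of the Example 3 recursion along a single, made-up fund path, with hypothetical payout probabilities π0(t):

```python
import numpy as np

# Example 3 benefit recursion (all inputs are hypothetical placeholders).
T, U, rG, alpha = 5, 100.0, 0.02, 0.8
pi0 = np.array([0.01, 0.01, 0.02, 0.02, 0.94])            # payout probabilities
F = np.array([100.0, 108.0, 103.0, 112.0, 118.0, 125.0])  # fund path F_0,...,F_T

# Initial benefit: actuarial accumulated value of the single premium U
B = U / np.sum(pi0 * (1 + rG) ** -np.arange(1.0, T + 1))
for t in range(1, T + 1):
    # total return credited in year t: max{rG, alpha * (F_t / F_{t-1} - 1)}
    B = B / (1 + rG) * (1 + max(rG, alpha * (F[t] / F[t - 1] - 1)))
print("benefit at maturity:", B)
```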

Conclusion The payoff structures are even more complicated if we consider the liabilities of a pension fund, where the following additional sources of risk are sometimes involved:
- the salary risk, since premiums (also called contributions) are usually expressed as a function of salaries, e.g. a linear homogeneous function (rate of contribution) or a spline of linear homogeneous functions;
- the inflation risk, when pension instalments are adjusted according to an inflation index;
- the GDP risk (present in some National Pension Plans), when contributions are accumulated at a rate depending on the performance of the nominal Gross Domestic Product;
- the family risk, because benefits are usually reversible to survivors.


References

Bacinello, A.R. "Fair Pricing of Life Insurance Participating Policies with a Minimum Interest Rate Guaranteed." ASTIN Bulletin, 31.2 (2001): 275-297.

Ballotta, L., S. Haberman and N. Wang. "Guarantees in With-Profit and Unitised With-Profit Life Insurance Contracts: Fair Valuation Problem in Presence of the Default Option." The Journal of Risk and Insurance, 73.1 (2006): 97-121.

Black, F. and M. Scholes. "The Pricing of Options and Corporate Liabilities." Journal of Political Economy, 81.3 (1973): 637-654.

Boyle, P.P. and E.S. Schwartz. "Equilibrium Prices of Guarantees under Equity-Linked Contracts." The Journal of Risk and Insurance, 44 (1977): 639-660.

Brennan, M.J. and E.S. Schwartz. "The Pricing of Equity-Linked Life Insurance Policies with an Asset Value Guarantee." Journal of Financial Economics, 3 (1976): 195-213.

De Finetti, B. "Sul Divario tra Valutazioni di Probabilità per Operazioni Assicurative nei Due Sensi." In: Studi sulle Assicurazioni, I.N.A., Rome, (1963): 531-568.

Grosen, A. and P.L. Jørgensen. "Fair Valuation of Life Insurance Liabilities: The Impact of Interest Rate Guarantees, Surrender Options, and Bonus Policies." Insurance: Mathematics and Economics, 26.1 (2000): 37-57.

Merton, R.C. "Theory of Rational Option Pricing." Bell Journal of Economics and Management Science, 4 (1973): 141-183.

Milevsky, M.A. and S.E. Posner. "The Titanic Option: Valuation of the Guaranteed Minimum Death Benefit in Variable Annuities and Mutual Funds." The Journal of Risk and Insurance, 68.1 (2001): 93-128.

Møller, T. "Risk-Minimizing Hedging Strategies for Unit-Linked Life Insurance Contracts." ASTIN Bulletin, 28.1 (1998): 17-47.


Mathematical Economics

Voting Power: Bribing Lobbyists in Voters’ Networks by: Chen Yeh Voting power has been regarded as one of the most fundamental concepts in social choice theory; its measurement in particular has attracted considerable attention. Extensive research has resulted in a range of power indices. However, these classic measures only focus on a priori voting power, i.e. power that arises purely from the weighted decision rule itself. Thus traditional voting power measures do not account for voters’ interdependencies and preferences. In the following article, an a posteriori voting power model is presented that focuses on I-power (or influence power). Voters’ social relationships are explicitly modelled and power is decomposed into constitutional and network effects. It is shown that neither of these two effects dominates.

Preliminaries and misconceptions The concept of power in voting scenarios has been widely discussed in several areas of science, such as philosophy, sociology and psychology. In the past couple of decades even economists, social choice and game theorists to be exact, have thrown themselves into the ring of the voting power discussion. In most economic frameworks, only simple voting games are considered, which are defined as follows:

Definition – A simple voting game consists of the voter set N = {1,2,...,n}, so there are n voters in total. Each voter has a weight wi ≥ 0 and is either in favour of or against the proposed bill. Voters that are in favour of the proposed bill are gathered in the set S, and the bill is passed if the total weight of the voters in S equals or exceeds some quota q.1

Unfortunately, measuring voting power is not an unambiguous task. A common misconception among the general public, and on occasion even among political experts, is that voting power and voting weight are proportional, i.e. people often think that more voting weight implies more voting power. The following example exposes this widespread fallacy.

Example – Three voters are endowed with the following relative voting weights: 10, 1 and 10. Furthermore the quota is set at 11. Now it is clear that any two voters can form a winning coalition, so power is equally distributed among the three voters: the voter with weight 1 is just as powerful as the remaining voters.

To account for these fallacies, power indices have been created. The most well-known are those of Penrose-Banzhaf (1946, 1965) and Shapley-Shubik (1954).

1. To familiarize the reader with the terminology used, consider simple majority voting: every voter has a weight of 1 and the quota is set at half of all the weights plus one.

These measures are primarily based on the notion of a swing: if a voter is able to change the outcome by changing his own vote, then he has a swing. According to classical voting power measures, more swings imply more power. Although the incorporation of the notion of a swing is a step in the right direction, it is not entirely satisfying. These classic measures only focus on a priori voting power, i.e. power that arises purely from the weighted decision rule itself. Thus traditional voting power measures do not account for voters’ interdependencies and preferences. Social choice theorists and philosophers, however, argue that power is more than simply the by-product of an abstract shell of voting rules. A historical example that illustrates this argument is the case of Luxembourg under the qualified majority voting rule of the 1958 EU Council of Ministers. As opposed to the other five countries, Luxembourg was only endowed with a voting weight of 1. As a consequence, Luxembourg was unable to induce a swing. Thus according to traditional voting power indices, Luxembourg would have no power at all, i.e. a power index of zero. In modern terminology, Luxembourg at the time was considered a dummy voter. However, it is highly questionable whether Luxembourg was indeed powerless. Being one of the world’s wealthiest countries and furthermore part of the Benelux, it could

Chen Yeh Chen Yeh has obtained a Bachelor of Science degree (Cum Laude) in Econometrics at the University of Amsterdam in 2009. In September 2009 he has started with his Master of Science in Econometrics and Mathematical Economics at the London School of Economics. Chen is since 2008 one of the editorial staff of Aenorm. This article is a simplified version of his Bachelor thesis ‘Social interaction in simple voting games’ written under the supervision of dr. M.A.L.Koster.




Figure 1. Network structure of the hierarchy scenario

have influenced the voting behaviour of the Netherlands and Belgium or its neighbouring countries France and Germany substantially. Moreover consider the situation of lobbyists. Although they are not part of the formal voting body (which implies a power of zero according to traditional measures), their power should not be underestimated: approximately 15000 lobbyists reside in Brussels attempting to influence European Union (EU) policy.
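To make the notion of a swing concrete, here is a small self-contained sketch that counts swings in the weighted game from the example above (weights 10, 1 and 10, quota 11):

```python
from itertools import combinations

# A voter i swings in coalition S if S wins with i and loses without i.
weights = {1: 10, 2: 1, 3: 10}
q = 11
swings = {i: 0 for i in weights}
for size in range(1, len(weights) + 1):
    for S in combinations(weights, size):
        total = sum(weights[i] for i in S)
        for i in S:
            if total >= q and total - weights[i] < q:
                swings[i] += 1
print(swings)  # {1: 2, 2: 2, 3: 2}: equal swing counts despite unequal weights
```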

Given an initial vector of a priori approval probabilities p0, it is possible to calculate the expected time that is needed to reach a certain quota q. This can be done for every possible quota q. The relationship between the quota q and the expected time needed to acquire it is captured by the power curve. We adopt the Markovian expected hitting times calculation method as described in Sirl (2005).

Actual voting power: voters’ influence spheres

We know that the incorporation of the notion of a swing is not sufficient for the measurement of a posteriori voting power. Another important factor is of course a voter’s influence sphere, or his relationship with other voters. Intuitively this makes sense: a voter is powerful if he is able to persuade many other voters to vote as he does. To model this, we explicitly define a network of players (or voters) and their relationships. In our framework, this is captured through the social interaction matrix W and a vector of a priori approval probabilities pt. Each element of this matrix, denoted by wij, represents the influence voter j has on voter i.2 Furthermore, the ith element of the vector pt denotes the probability of voter i voting in favour of the proposed bill at time t. The whole network of voters’ influence spheres is then described by the following simple dynamical system: pt+1 = Wpt. Voters’ actions can thus be described to some extent as “monkey see, monkey do” behaviour: every period a voter’s opinion is simply a weighted average of all voters’ behaviour, with the influence parameters wij as weights. Notice that, as opposed to traditional voting power measures, we regard power as a dispositional or predetermined concept: power does not need to be exerted for its measurement (Morris, 2002).

The perspective of the lobbyist

To assign each voter an individual power measure, interpret the voting situation in the context of the following lobbyist. This lobbyist sets the passing of a bill with a certain quota q as his goal. The only problem is that at the start every voter is against this bill. However, the lobbyist has the resources to bribe exactly one voter. A bribed voter will vote in favour of the bill and cannot be persuaded to vote differently anymore. Furthermore, this voter will use his influence sphere to persuade other voters to vote in favour of the bill. Of course the question remains: which voter should the lobbyist bribe? In our framework, the lobbyist should choose the voter that needs the least expected time to acquire a subset of voters. In other words: the lobbyist should bribe the voter whose influence sphere is capable of reaching as many voters as possible within a minimum amount of time.3 In the following sections, we will show two scenarios in which the results differ significantly from those of classical power measures.
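The following toy sketch mimics this bribing experiment in a hypothetical five-voter hierarchy of the kind shown in figure 1. As a deterministic proxy for the power curve it records the first period in which the expected approving weight reaches q, rather than computing the exact Markovian expected hitting times of Sirl (2005):

```python
import numpy as np

def time_to_quota(W, weights, bribed, q, max_t=1000):
    """Periods until the expected approving weight reaches quota q when one
    voter is bribed (approval probability pinned at 1) -- a crude proxy."""
    p = np.zeros(W.shape[0])
    p[bribed] = 1.0
    for t in range(max_t):
        if weights @ p >= q:
            return t
        p = W @ p
        p[bribed] = 1.0          # the bribed voter never changes his vote
    return np.inf

# Hypothetical hierarchy: each voter listens half to himself, half to his senior.
W = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.5, 0.5]])   # stochastic: every row sums to 1
weights = np.ones(5)
for i in range(5):
    curve = [time_to_quota(W, weights, i, q) for q in range(1, 6)]
    print(f"voter {i + 1}:", curve)
# Senior voters reach any quota fastest; voter 5 cannot sway anyone above him.
```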

Hierarchical structures Consider a simple voting game of 5 players, each endowed with a voting weight of 1. However, the network resembles a hierarchical structure with seniority rank 1 > 2 > 3 > 4 > 5. This is illustrated in figure 1. According to classical voting measures that ignore this hierarchical structure, power is distributed equally among the 5 players. However, the results of our model seem to be more intuitive, as can be seen in figure 2. The hierarchical structure of the network is clearly

2. The matrix W is a stochastic matrix, i.e. its elements lie between 0 and 1 and every row adds up to 1. For a more elaborate discussion of how these elements are determined, see Yeh (2009). 3. Thus in our framework, the lobbyist should choose that voter with the “lowest” power curve.




Figure 2. Power curves of the hierarchy scenario

represented in the voters’ power curves. Voter 5 always needs the most time to acquire a certain quota q, followed by voter 4, and so on. Thus voter 1 is considered the most powerful voter, in contrast with traditional power indices. This is in accordance with the seniority rank. Note, however, that our model can only indicate the ranking of the voters’ power; it is not clear by what amount voter 1 is more powerful than the other voters.

Unsocial large weight voters and the smooth talking lobbyist In this scenario, we again consider 5 voters, endowed with the following respective weights: 0, 1, 2, 5 and 6. Furthermore, it is known that most people do not like the large weight voters (voters 4 and 5), so they are somewhat isolated in the network. Voter 1, on the contrary, is considered a smooth talker and thus has good relationships with the other voters. However, this “voter” is a lobbyist and therefore has no constitutional voting weight. The total network structure is illustrated in figure 3.

Classical power measures indicate that voters 4 and 5 are the most powerful, as they have the most opportunities to induce a swing. Voter 1, on the other hand, has voting weight 0 and no possibility to induce a swing at all, so classical power indices would assign this voter no power at all. Once again our model shows different results from those of Penrose-Banzhaf (1946, 1965) and Shapley-Shubik (1954). One striking feature is the lobbyist’s power curve: for small q (usually irrelevant in practical voting situations) the lobbyist needs the most expected time to acquire a group of voters to vote as he does. However, as q increases, the strength of the lobbyist’s social connections becomes visible. Thus even though the lobbyist has no constitutional voting weight, his social connections clearly compensate for this lack of voting weight. Holding large weights while neglecting social relationships (voters 4 and 5), on the other hand, does seem to come at a large cost in voting power. One important insight from figure 4 is the fact that power curves are able to intersect each other. This implies that there is no generally dominant effect when assessing voting power. Before the rules of the voting game are specified, voters should thus care both about acquiring voting weights and about maintaining good relationships with relevant persons in their voting network. Note that we emphasize persons rather than voters, as non-constitutional players such as lobbyists may also be of great importance.

Conclusion In the existing literature regarding voting power measurement, two power indices are considered the standard, namely the Penrose-Banzhaf and the Shapley-Shubik index. These measures are primarily based on the

Figure 3. Network structure of the second scenario (Voters 4 and 5 have large weights and voter 1 resembles the lobbyist)





Figure 4. Power curves of the second scenario (For illustration purposes, only three power curves are shown. From bottom to top: voter 1, 5 and 4 respectively.)

References

Banzhaf, J.F. "Weighted voting does not work: A mathematical analysis." Rutgers Law Review, 19 (1965): 317-343.

Morris, P. Power: A Philosophical Analysis. Manchester University Press, 2002.

Penrose, L.S. "The elementary statistics of majority voting." Journal of the Royal Statistical Society, 109 (1946): 53-57.

notion of a swing: the ability to change the outcome by changing your own vote. However, these measures do not take voters’ interdependencies and preferences into account, as they only focus on a priori voting power, i.e. power that arises purely from the weighted decision rule itself. In the discussed framework, social networks or influence spheres were explicitly modelled. We focused on an a posteriori voting power model based on the concept of dispositional I-power, which was decomposed into two effects: constitutional (voting weights) and network effects. The idea of power in our model combined these two notions: a voter is perceived as powerful when his influence sphere enables him to convince other voters (and acquire their voting weights) within a short amount of time to vote similarly. Formally, this short amount of time was the expected time needed to acquire a certain subset of voters according to the prevailing quota. To clarify the link between expected hitting times and power, the idea was placed in the context of a bribing lobbyist. The bribe can be seen as the catalyst of the voting process: where initially no player voted in favour of the bill, the lobbyist sets the influence sphere of the bribed voter in action, enabling him to reach a situation where eventually enough voters do vote in favour of the bill. The relationship between expected hitting times and power was then captured through the power curve. The voter with the “lowest” power curve was perceived as the most powerful voter in the given network. In some situations, the results differ dramatically from those of earlier work. As opposed to classical measures, our framework is capable of modelling hierarchical structures. In the scenario with the smooth talking lobbyist, it could be observed that neither of the two described effects was dominating. Thus a voter’s priority should not lie in obtaining voting weights alone; the model illustrates that trying to enter infamous old boys’ networks can be just as important.


Shapley, L.S. and M. Shubik. "A method for evaluating the distribution of power in a committee system." American Political Science Review, 48 (1954): 787-792.

Sirl, D. Markov chains: An introduction review. MASCOS Workshop on Markov chains, University of Queensland, 2005.

Yeh, C. Social interaction in simple voting games: Voting power, centrality and Markov chains. B.Sc. thesis, University of Amsterdam, 2009.


Actuarial Sciences

Market Consistent Valuation using Stochastic Scenarios by: Carlo Jonk Financial modeling of insurance contracts is central to the contribution of actuaries to the financial measurement and management of insurers. Until recently, such financial modeling in the life insurance sector has focused principally on deterministic projections of cash flows together with resulting liabilities and values. Market consistent valuation is a driver which moves the evolution of financial modeling from a deterministic basis to a stochastic basis. Market consistent economic scenarios are used for market consistent valuation of embedded options in insurance products. This contribution contains an example of market consistent valuation of an insurance product with a minimum return guarantee. For a thorough discussion of the models used see for example Brigo and Mercurio (2006).

Interest rate models Accurate estimates of the current term structure of interest rates are of crucial importance in many areas of finance. Therefore, it is not surprising that substantial research effort has been devoted to the question of how to optimally model the term structure of interest rates. One model that has the potential of providing satisfactory results is the Svensson model. This model is an extension of the popular Nelson and Siegel model. Svensson proposes to model interest rates with the equation

$$r(t) = \beta_0 + \beta_1\,\frac{1 - e^{-t/\tau_1}}{t/\tau_1} + \beta_2\left(\frac{1 - e^{-t/\tau_1}}{t/\tau_1} - e^{-t/\tau_1}\right) + \beta_3\left(\frac{1 - e^{-t/\tau_2}}{t/\tau_2} - e^{-t/\tau_2}\right).$$

The Svensson model is popular among central banks and widely used by practitioners, which ranks it among the most popular term structure models. A straightforward way to estimate its parameters is to minimize the sum of squared distances between the Svensson curve and the current zero coupon interest rate curve. Applying this minimization procedure to the zero interest rate curve as of June 12th 2009 yields the parameter set for the Svensson model given in table 1.

Table 1. Parameters Svensson model

| Parameter | Value |
|---|---|
| β0 | -2.00% |
| β1 | -1.09% |
| β2 | 2.11% |
| β3 | 7.13% |
| τ1 | 7.63 |
| τ2 | 9.02 |
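A minimal sketch of this least-squares estimation, assuming scipy is available; the zero curve below is made up for illustration and is not the June 12th 2009 curve behind table 1.

```python
import numpy as np
from scipy.optimize import least_squares

def svensson(params, t):
    """Svensson zero rate r(t) for parameters (b0, b1, b2, b3, tau1, tau2)."""
    b0, b1, b2, b3, tau1, tau2 = params
    x1, x2 = t / tau1, t / tau2
    f1 = (1 - np.exp(-x1)) / x1
    f2 = (1 - np.exp(-x2)) / x2
    return b0 + b1 * f1 + b2 * (f1 - np.exp(-x1)) + b3 * (f2 - np.exp(-x2))

maturities = np.array([0.5, 1, 2, 5, 10, 20, 30])
observed = np.array([0.010, 0.012, 0.018, 0.027, 0.036, 0.041, 0.042])  # hypothetical

# Minimize the squared distance between the Svensson curve and the zero curve
fit = least_squares(
    lambda p: svensson(p, maturities) - observed,
    x0=[0.04, -0.03, 0.02, 0.05, 2.0, 10.0],
    bounds=([-1, -1, -1, -1, 1e-3, 1e-3], [1, 1, 1, 1, 50.0, 50.0]),
)
print(dict(zip(["b0", "b1", "b2", "b3", "tau1", "tau2"], fit.x)))
```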

In generating a market consistent economic scenario set, an important decision is which interest rate model to use. When deciding which model to use in certain applications, several questions are relevant, for instance:
• What distribution does the dynamics imply for the short rate?
• Are bond prices and bond option prices explicitly computable from the dynamics?
• Is the model mean reverting?
• Can the model be calibrated fast and accurately?
• Is the model practical and transparent enough?
These questions are essential for understanding the theoretical and practical implementation of any interest rate model. In table 2 different interest rate models are compared according to the above criteria.

Carlo Jonk In the summer of 2009 Carlo finished the Master of Actuarial Sciences at the University of Amsterdam with Cum Laude distinction. Since May 2007 he has worked for Triple A – Risk Finance in the Pensions and Insurance departments. This article is a summary of the Master thesis he wrote under the supervision of drs. A. van Haastrecht (University of Amsterdam) and drs. Frankie Gregorkiewicz (Triple A – Risk Finance).




Table 2. Properties of interest rate models

| Property \ Model | Vasicek | 1-f Hull-White | Black-Karasinski | CIR++ | 2-f Hull-White | LFM/LSM |
|---|---|---|---|---|---|---|
| Basic variable | Short rate | Short rate | Short rate | Short rate | Short rate | LIBOR/swap rate |
| Distribution | Normal | Normal | Lognormal | Chi-squared | Normal | Lognormal |
| Analytic formulas? | Yes | Yes | No | Yes | Bonds: yes; caps etc.: no | Caps (LFM)/swaptions (LSM) |
| Mean reversion? | Yes | Yes | Yes | Yes | Yes | Possibly |
| Calibration quality | + | + | + | ++ | ++ | +++ |
| Transparency/practicality/general acceptance | | + | - | - | + | - |

The market models (LFM/LSM) are qualitatively very good models, but they are also rather complicated. The 1-factor Hull-White model is probably the most suitable model, as it is relatively easy to understand and offers a wide range of closed form formulas. However, the expanded 2-factor Hull-White model, or the 2-factor Gaussian (G2++) model, improves the quality of the model considerably. A disadvantage of this model is the lack of analytic tractability with respect to derivative securities. As only parameter estimates are needed for generating scenarios, recent high-quality swaption price approximation methods can be used for calibrating this model to the swaption market. Assume a calibration routine has provided the parameter set for the G2++ model as in table 3.

Table 3. Parameters G2++ model

| Parameter | Value |
|---|---|
| σ | 0.0031 |
| η | 0.0144 |
| a | 0.1102 |
| b | 0.0735 |
| ρ | -0.8787 |

Equity models Next to the interest rate process, another important decision is the way movements in equity markets are modeled. Many insurance products contain options and guarantees based on the performance of some underlying share or stock index. In order to value these options on a market consistent basis, the underlying payoffs should be projected and discounted using risk neutral scenarios. The famous model by Black and Scholes is a widely used model for equity returns. This model assumes normally distributed equity returns, as well as a constant volatility of these returns. Market data shows equity returns are usually not normally distributed, and volatility is far from constant over time. Stochastic volatility models try to overcome these limitations of the Black and Scholes model. The Heston model is probably the most widely used stochastic volatility model among practitioners. The Heston model can be further extended with jumps in stock prices, which yields the Bates model. Assume a calibration process based on market implied volatilities has given the parameter sets in table 4 for the Black and Scholes, Heston and Bates models. The reported sums of squared errors show that the Bates model gives a better fit than the Heston model, which in turn gives a better fit than the Black and Scholes model. However, this better fit comes at the price of extra parameters to be estimated.




Valuation example Having generated a market consistent simulation set, it can be used for valuation purposes. Given the risk-neutral simulation sets, any contingent claim on the value of the underlying stock index can be priced using the Fundamental Theorem of Asset Pricing. This theorem states that the contingent claim can be replicated by rebalancing a portfolio consisting of the underlying risky asset and the money market account with a self-financing strategy. In general, if C(T, S(T)) is the amount of a contingent payoff at time T, then the time zero price of this contingent claim equals

$$P_0 = E^{\mathbb{Q}}\left[e^{-\int_0^T r(u)\,du}\, C(T, S(T))\right].$$

Intuitively, this theorem states that the current value of a contingent claim is simply the expected discounted payoff under the risk neutral measure, where the discount factor depends on the short rate process. When the payoff of the contingent claim also depends on the mortality of the policyholder, probabilities of survival and death must be taken into account as well.



Table 4. Parameters Black and Scholes model, Heston model and Bates model

| Parameter | Black and Scholes | Heston | Bates |
|---|---|---|---|
| σ | 0.3098 | | |
| κ | | 7.344 | 5.9948 |
| θ | | 0.0931 | 0.0846 |
| ξ | | 1.1121 | 0.9502 |
| ρ | | -0.7589 | -0.8437 |
| v0 | | 0.0586 | 0.0869 |
| λ | | | 0.0067 |
| μ | | | -0.3789 |
| δ | | | 0.0803 |
| Sum of squared errors | 176,491 | 42,512 | 29,776 |

Let the symbols $_t p_x$ and $_t q_x$ represent, respectively, the probability of survival and death, with the convention that $q_x = {}_1q_x$. If T(x) is the remaining lifetime of (x), these probabilities can be written as

$${}_t p_x = \Pr\left[T(x) > t\right] = 1 - {}_t q_x.$$

In what follows, it is assumed that the remaining lifetime T(x) is stochastically independent of the Brownian motions driving the short interest rate and the stock price process. An immediate implication is that the mortality risk is diversifiable by increasing the size of an insurance portfolio. Consider an Equity Indexed Annuity (EIA) contract with investment in one unit of the underlying stock index and with an embedded minimum interest rate guarantee. Using the Fundamental Theorem of Asset Pricing, the time zero value of this contract is given by the following expression:

$$P_0^{EIA} = \sum_{s=1}^{T} E^{\mathbb{Q}}\left[e^{-\int_0^s r(u)\,du}\, C(s, S(s))\right]\, {}_{s-1}p_x \cdot q_{x+s-1} + E^{\mathbb{Q}}\left[e^{-\int_0^T r(u)\,du}\, C(T, S(T))\right]\, {}_T p_x,$$

where the contingent claim C(t, S(t)) depends on the form of the minimum guarantee. Here, two forms of the contingent claim are considered:

$$C_1(t, S(t)) = S_0 \max\left\{(1+G)^t,\; \prod_{s=1}^{t}\left(1 + \alpha R_s\right)\right\},$$

$$C_2(t, S(t)) = S_0 \prod_{s=1}^{t}\max\left\{1+G,\; 1 + \alpha R_s\right\},$$

where R_t equals the return on the equity index in period t. The parameter α is the participation rate in the appropriate stock index, and G is the minimum guarantee. At initiation of the contract, the participation rate, the minimum guarantee and the index fund are specified. The difference between the two forms of the contingent claim is that the second variant has an annual resetting feature: when the stock index does not perform better than the minimum guarantee, the minimum guarantee is the return for that particular year. The first variant does not have this annual resetting feature; there, when the return on the stock index has not outperformed the minimum guarantee, the return on the investment equals the minimum guaranteed return over every past year. Clearly, the second variant is more profitable for the insured. For a fixed minimum guarantee level G, the value of the EIA increases monotonically with the participation rate α. For certain ranges of α the time zero value of the EIA will be less than the initial value of the index. As the participation rate increases, the EIA becomes more valuable and can hence become more expensive than the initial value of the index. There exists a critical value α* such that

$$S_0 = \sum_{s=1}^{T} E^{\mathbb{Q}}\left[e^{-\int_0^s r(u)\,du}\, C(s, S(s))\right]\, {}_{s-1}p_x \cdot q_{x+s-1} + E^{\mathbb{Q}}\left[e^{-\int_0^T r(u)\,du}\, C(T, S(T))\right]\, {}_T p_x.$$

This value of the participation rate is called the fair participation rate, because with participation rate α* the present value of the benefits equals the premium paid by the insured. The expectations under ℚ can be calculated as before, by averaging the discounted payoffs over a large number of simulations.
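A minimal sketch of this computation under strong simplifying assumptions: a constant short rate instead of simulated G2++ scenarios, Black and Scholes index returns, and mortality ignored (only the survival term is kept, as if ${}_T p_x = 1$). All parameter values are hypothetical, so the resulting rates are illustrative only and will not reproduce table 5.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
T, n, r, sigma, S0, G = 20, 10_000, 0.03, 0.2, 100.0, 0.02
z = rng.standard_normal((n, T))
R = np.exp(r - 0.5 * sigma**2 + sigma * z) - 1   # annual risk-neutral index returns

def price(alpha, variant):
    """Time-zero EIA value for participation rate alpha (mortality ignored)."""
    if variant == 1:   # guarantee applies to the period as a whole
        payoff = S0 * np.maximum((1 + G)**T, np.prod(1 + alpha * R, axis=1))
    else:              # variant 2: annual resetting of the guarantee
        payoff = S0 * np.prod(np.maximum(1 + G, 1 + alpha * R), axis=1)
    return np.exp(-r * T) * payoff.mean()

# Fair participation rate: root of price(alpha) - S0, found numerically
for variant in (1, 2):
    a_star = brentq(lambda a: price(a, variant) - S0, 1e-6, 1.0)
    print(f"variant {variant}: fair participation rate {a_star:.1%}")
```

As in table 5, the annual-resetting variant supports a much lower fair participation rate, and a higher guarantee G pushes α* down.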




Table 5. Fair participation rates EIA contract using 10.000 simulations.

| Product | Model | 0% guarantee, age 20 | 0% guarantee, age 60 | 2% guarantee, age 20 | 2% guarantee, age 60 | 4% guarantee, age 20 | 4% guarantee, age 60 |
|---|---|---|---|---|---|---|---|
| Variant 1 | Black and Scholes | 86.1% | 82.2% | 73.4% | 69.0% | 44.3% | 39.4% |
| Variant 1 | Heston | 86.2% | 82.4% | 73.8% | 69.6% | 46.1% | 41.7% |
| Variant 1 | Bates | 86.6% | 82.7% | 74.2% | 70.2% | 47.4% | 43.4% |
| Variant 2 | Black and Scholes | 29.8% | 29.4% | 22.2% | 21.9% | 10.9% | 10.3% |
| Variant 2 | Heston | 30.7% | 30.4% | 23.8% | 23.4% | 13.0% | 12.4% |
| Variant 2 | Bates | 32.4% | 31.9% | 25.2% | 24.8% | 14.2% | 13.6% |

Results Valuation of the EIA contract depends, among other parameters, on which models are used to generate simulations of the interest rate and the stock index. In what follows, it is assumed the EIA has a term of 20 years and the mortality table used is the GBM 2000-2005 table. For simplicity, costs and other fees are not taken into account. Table 5 shows estimated fair participation rates for the two variants of the contract and the three models. The fair participation rate is calculated using a numerical root-finding method. These estimates are based on 10.000 simulations. Table 5 shows the fair participation rates an insurer can offer for the two variants of the EIA, for different yearly guarantees and ages of the insured. As expected, the fair participation rate is lower for higher yearly guarantees. For the first variant of the product with a low guarantee, the difference between the three models is rather small. For higher guarantees, this difference becomes larger. For the second variant of the product, the difference in valuation is substantial for all guarantee levels. The Black and Scholes model gives a lower fair participation level than the Heston model, which in turn gives a lower participation level than the Bates model. Thus, for low guarantees, valuation of the second variant of the contract, which is technically only slightly different from the first variant, is more subject to model risk than the first variant. As the typical guarantee is around 3%, for both products model risk should be taken into account when deciding which model to use for pricing before offering these products to potential policyholders.

References

Black, F. and M. Scholes. "The pricing of options and corporate liabilities." Journal of Political Economy, 81 (1973): 637-659.

Brigo, D. and F. Mercurio. Interest Rate Models – Theory and Practice. Springer-Verlag Berlin Heidelberg, 2006.


Fang, F. and C.W. Oosterlee. "A novel pricing method for European options based on fourier-cosine series expansions." MPRA paper 9319, University Library of Munich, Germany (2008).

Heston, S. "A closed-form solution for options with stochastic volatility with applications to bond and currency options." Review of Financial Studies, 6 (1993): 327-343.

Schrager, D.F. and A. Pelsser. "Pricing swaptions and coupon bond options in affine term structure models." Mathematical Finance, 16.4 (2006): 673-694.

Van Haastrecht, A. and A. Pelsser. "Efficient, almost exact simulation of the Heston stochastic volatility model." Netspar discussion paper, September 2008:044.


Econometrics

Pricing and Hedging Asian Basket Spread Options in a Nutshell by: Griselda Deelstra, Alexandre Petkovic and Michèle Vanmaele In this paper we study the pricing and hedging of arithmetic Asian basket spread options of the European type and present the main results of Deelstra et al. (2008). Asian basket spread options are written on a multivariate underlying. Thus we first need to specify a financial market model containing multiple stocks. We choose to use the famous Black and Scholes model.

The set up and the problem More formally, we assume a financial market composed of m risky assets such that the dynamics of the price of the jth asset under the historical probability measure $\tilde{P}$ are

$$dS_j(t) = \mu_j S_j(t)\,dt + \sigma_j S_j(t)\,d\tilde{B}_j(t),$$

where μ_j is the constant instantaneous return, σ_j the return's volatility and $\tilde{B}_j(t)$ a standard Brownian motion under $\tilde{P}$. Furthermore, we assume that asset returns are correlated according to

$$\mathrm{cov}\left(\tilde{B}_j(t_v), \tilde{B}_i(t_s)\right) = \rho_{ji} \min(t_v, t_s).$$

The final payoff at time T of an Asian basket spread option is of the form

$$\left(\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m} \varepsilon_j a_j S_j(t_i) - K\right)_+$$

with (x)_+ = max(x, 0), and where a_j is the weight given to asset j, ε_j its sign in the spread and K the strike price. We assume that ε_j = 1 for j = 1,...,p and ε_j = −1 for j = p+1,...,m, where p is an integer such that 1 ≤ p ≤ m−1 and t_0 < t_1 < t_2 < ... < t_n = T. Such Asian basket spread options are frequently encountered in the energy markets, where they are used by energy producers to cover their profit margins. From the fundamental theorem of arbitrage free pricing we know that the arbitrage free price of an Asian basket spread option can be obtained by evaluating

$$e^{-rT}\, E_Q\left(\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m} \varepsilon_j a_j S_j(t_i) - K\right)_+ \qquad (1)$$

where EQ denotes the expectation with respect to the risk neutral probability measure Q and r is the risk-free interest rate. The risk neutral probability measure Q is a probability measure equivalent to P such that the discounted asset prices are martingales under Q. Unfortunately, things are not as simple as they seem. Since the underlying is a linear combination of the asset prices, formula (1) does not have a closed form expression. This means that for evaluation we need to resort to numerical methods. One of the most popular methods is the use of Monte Carlo simulations. Unfortunately, Monte Carlo methods can be time consuming, especially in the case of path dependency. This poses some serious problems to the implementation of arbitrage free pricing by traders. Furthermore, financial institutions are not only interested in the evaluation of the option price. In order to control the risk of their position, they also want to evaluate the Greeks: the derivatives of the option price with respect to the parameters of the stock prices. This additional task can considerably increase the computational time, making the use of Monte Carlo methods even more complicated.

Two Solutions In this paper we consider two ways of approximating the

Alexandre Petkovic Alexandre Petkovic has obtained a PhD in Economics from the European Center for Advanced Research in Economics and Statistics (ECARES) at the Université Libre de Bruxelles in 2009. In his thesis he studied the pricing of Asian basket spread option, the modelling of option with multivariate underlying using Lévy processes and the consequences of individual and temporal aggregation in panel data models.




price of an Asian basket spread option: the comonotonic approximations and the moment matching methods. The logic behind both techniques is similar. It consists of replacing the original underlying, denoted by S, by a new one whose structure is simpler.

First: the comonotonic bounds As said above, a first way of approximating (1) is to use the so-called comonotonic approximations. The idea behind the comonotonic approximations is to replace the underlying S in (1) by a new underlying Z of the form

$$Z = \sum_{i=1}^{n}\sum_{j=1}^{m} \varepsilon_j Z_{ji},$$

where the Z_{ji} are chosen such that Z_{ji} and a_j S_j(t_i) have the same marginal distribution. However, the dependence structure between the components of Z will be different from the dependence within S. Indeed, the dependence structure between the components of Z shall be maximal: it shall be comonotone. Two reasons explain the success of the theory of comonotonic random vectors in the option pricing literature. First, it can be shown that if X is an n-dimensional comonotonic random vector with marginal components X_i, then

$$E\left(\sum_{i=1}^{n} X_i - K\right)_+ = \sum_{i=1}^{n} E\left(X_i - \tilde{K}_i\right)_+,$$

where the $\tilde{K}_i$ can be evaluated using the marginal cumulative distribution functions of the vector of a_j S_j(t_i)'s. Thus the stop-loss premium of the sum of the components of a comonotonic random vector can be written as the sum of the marginal stop-loss premia. This is of interest because each marginal stop-loss premium is simply a Black and Scholes price. Second, there are different ways of building comonotonic variables, and they do not lead to the same comonotonic random sum. Furthermore, depending on which comonotonic sum is used, we can tell whether our approximation will be an upper or a lower bound. Using the theory of comonotonic random vectors, we derived, in the first part of Deelstra et al. (2008), four different comonotonic approximations of the real price. In finance comonotonicity is used for pricing and hedging of Asian, basket or Asian basket options (see Simon et al. (2000), Vanmaele et al. (2006), Deelstra et al. (2004), Chen et al. (2008)). To our knowledge, this is the first time that this approach has been used to approximate basket spread or Asian basket spread options.
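As a numerical illustration of this decomposition (not of the four pricing bounds of Deelstra et al. (2008) themselves), the sketch below builds a comonotonic vector from hypothetical lognormal marginals by feeding one common uniform variable into every quantile function, and checks the stop-loss identity:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import lognorm

# Hypothetical lognormal marginals for the terms of the comonotonic sum
mus, sigmas = [0.0, 0.1, 0.2], [0.2, 0.3, 0.25]
dists = [lognorm(s, scale=np.exp(m)) for m, s in zip(mus, sigmas)]
K = 4.0

# Split K into K_i = F_i^{-1}(a*), where a* solves sum_i F_i^{-1}(a) = K
a_star = brentq(lambda a: sum(d.ppf(a) for d in dists) - K, 1e-9, 1 - 1e-9)
K_i = [d.ppf(a_star) for d in dists]

# Comonotonic coupling: one common uniform drives every marginal
U = np.random.default_rng(2).uniform(size=200_000)
S_c = sum(d.ppf(U) for d in dists)
lhs = np.maximum(S_c - K, 0).mean()                                   # E(sum - K)_+
rhs = sum(np.maximum(d.ppf(U) - k, 0).mean() for d, k in zip(dists, K_i))
print(lhs, rhs)   # the two stop-loss premia agree up to Monte Carlo error
```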

Second: moment matching The basic intuition behind the moment matching methods goes as follows: consider the original problem of

44

AENORM

vol. 17 (65)

October 2009

evaluating (1). Since the distribution of a correlated sum of log-normal random variables is not known, we cannot derive a closed form expression for this expectation. However, we can derive an approximation to the price by replacing the original underlying of (1) by a new random variable with a tractable distribution with p parameters whose stop-loss premium has a closed form expression. The p parameters are fixed such that the first p moments of the new random variable are equal to those of the original underlying. This is what is called moment matching. Deelstra et al. (2008) contributes to the literature on moment matching approximations in two ways. First, we study and improve the hybrid moment matching method that was introduced for basket spread options by Castellacci and Siclari (2003). Their original approach is the following: start by noticing that the original underlying of (1) can be split in two parts, one containing the stock prices with a positive sign (S1) and another containing the stock prices with a negative sign (S2):

$$E_Q\Big(\underbrace{\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{p} a_j S_j(t_i)}_{S_1} - \underbrace{\frac{1}{n}\sum_{i=1}^{n}\sum_{j=p+1}^{m} a_j S_j(t_i)}_{S_2} - K\Big)_+$$

Castellacci and Siclari propose to replace S1 and S2 by two log-normal random variables X1 and X2, where Xi has the same mean and variance as Si, for i = 1,2. In doing so, they transform the problem of evaluating the price of a basket spread option into the problem of evaluating the price of a spread option. The evaluation of the price of a spread option is a well studied problem in the literature, and many approximations are available. We extend the approach of Castellacci and Siclari in two ways. First, we improve their method by choosing a better approximation technique for the spread option. Originally, Castellacci and Siclari used the Kirk method to approximate the spread. We use two new approximations for the spread: one is based on a recent approximation due to Li et al. (2008), the other is based on the improved comonotonic upper bound belonging to the comonotonic bounds that we proposed as a first set of approximations. Second, we also study the performance of this so-called hybrid moment matching on Asian basket spread options, thereby extending the results obtained by Castellacci and Siclari, who only considered basket spread options. Second, we mix and extend the approaches of Borovkova et al. (2007) and Zhou and Wang (2008). Borovkova et al. (2007) studied the problem of approximating the price of basket spread options using a shifted log-normal random variable, while Zhou and Wang studied the problem of pricing Asian and basket spread options using a log-skew normal extended distribution (see Azzalini (1985)).





Table 1. Spread option prices: improved comonotonic upper bound (ICUB) versus Monte Carlo (MC).

| Strike price | ICUB | MC |
|---|---|---|
| 35 | 27,4968 | 27,4964 |
| 40 | 25,1757 | 25,1754 |
| 45 | 23,0587 | 23,0585 |
| 50 | 21,1293 | 21,1291 |
| 55 | 19,3715 | 19,3715 |
| 60 | 17,7703 | 17,7704 |
| 65 | 16,3117 | 16,3119 |

Table 2. Asian basket spread option prices: hybrid moment matching with the improved comonotonic upper bound (HybMMICUB) versus Monte Carlo (MC).

| Strike price | HybMMICUB | MC |
|---|---|---|
| -40 | 3,6643 | 3,6659 |
| -50 | 6,2174 | 6,2191 |
| -60 | 9,7098 | 9,7103 |
| -70 | 14,1659 | 14,1661 |
| -80 | 19,5450 | 19,5432 |
| -90 | 25,7600 | 25,7580 |
| -100 | 32,6977 | 32,6946 |

More exactly, we replace the underlying in (1) by a random variable of the form

$$e^{\mu + \sigma X} + \eta,$$

where X has a skew extended distribution and μ, σ and η are respectively location, scale and shift parameters. This is an improvement over the original paper of Borovkova et al. (2007): by using a skew extended distribution for X instead of a normal distribution, we gain two additional parameters to match, which should enhance the quality of our approximation. It is also an extension of the approach of Zhou and Wang (2008), since by introducing a shift parameter we can control for the fact that our underlying can take negative values, so that the method can be applied to basket spreads.
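The log-normal moment matching step itself is easy to make concrete. The sketch below, with hypothetical inputs, computes the first two moments of a positively-weighted sum of correlated lognormal prices in a Black and Scholes world and solves for the lognormal parameters with the same moments, as in the replacement of S1 or S2 by X1 or X2:

```python
import numpy as np

# Hypothetical two-asset basket a1*S1(T) + a2*S2(T) under Black and Scholes
a = np.array([1.0, 2.0])
S0 = np.array([100.0, 50.0])
vol = np.array([0.3, 0.2])
rho = np.array([[1.0, 0.5], [0.5, 1.0]])
r, T = 0.03, 1.0

fwd = a * S0 * np.exp(r * T)            # risk-neutral mean of each term
cov = np.outer(vol, vol) * rho * T      # covariance matrix of the log-prices
m1 = fwd.sum()                          # first moment of the basket
m2 = sum(fwd[i] * fwd[j] * np.exp(cov[i, j])
         for i in range(2) for j in range(2))   # second moment

# Lognormal exp(mu + sigma * Z) with the same first two moments
sigma2 = np.log(m2 / m1**2)
mu = np.log(m1) - 0.5 * sigma2
print(mu, np.sqrt(sigma2))
```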

Numerical Results We compare our approximations using extensive numerical simulations. The simulations are split in three parts. First, we compare the performances of the approximations on spread option prices. We find that the improved comonotonic upper bound is an extremely accurate approximation to the price of a spread option. The quality of the approximation was even superior to the one obtained with the method of Li et al. (2008), which is one of the best approximations that can be found in the literature. This result justified the use of the improved comonotonic upper bound to approximate the spread in the hybrid moment matching method. Second, we study the approximation of the price of a basket spread option. Unfortunately, it was not optimal to use a single approximation technique when dealing with basket spread options. The best approximation technique seems to be a combination of hybrid moment matching with the improved comonotonic upper bound and the shifted log-normal approximation. Finally, we study the approximation of Asian basket spread options. In this case we find that hybrid moment matching combined with the improved comonotonic upper bound (HybMMICUB) is the best approximation, and thus we recommend its use in this case. The performances of our approximations are briefly illustrated in tables 1 and 2. The second column of table 1 gives the approximated price of a spread option when the price is approximated using the improved comonotonic upper bound (ICUB). The second column of table 2 contains the approximated HybMMICUB price of an Asian basket spread option. The third column reports in


both tables the “real” price which is computed through Monte Carlo simulations (MC).

Greeks and Options Written in a Foreign Currency In the last two sections of Deelstra et al. (2008), we derive the Greeks of the approximate option prices and explain how the approximation techniques we developed could be applied to price options written in a foreign currency. To compute the approximations of the Greeks, we differentiate the approximation obtained by hybrid moment matching when the spread is approximated by an improved comonotonic upper bound. It should be emphasized that two things make this computation feasible. First, the improved comonotonic upper bound we use to approximate the spread has a nice closed form expression that can easily be differentiated. Second, since we used log-normal random variables to approximate S1 and S2, we only had to solve a linear system to compute the matching distribution. This linearity introduces considerable simplifications in our problem and allows us to use the chain rule to compute the approximation of the Greeks.

References

Azzalini, Adelchi. "A Class of Distributions which Includes the Normal Ones." Scandinavian Journal of Statistics, 12 (1985): 171-178.

Borovkova, Svetlana, Ferry Permana and Hans v.d. Weide. "A Closed-Form Approach to the Valuation and Hedging of Basket and Spread Options." The Journal of Derivatives, 14.4 (2007): 8-24.

Castellacci, Giuseppe and Michael Siclari. "Asian Basket Spreads and Other Exotic Averaging Options." Energy Power Risk Management, March 2003.

Chen, Xinliang, Griselda Deelstra, Jan Dhaene and Michèle Vanmaele. "Static Super-Replicating Strategies for a Class of Exotic Options." Insurance: Mathematics and Economics, 42.3 (2008): 1067-1085.

Deelstra, Griselda, Jan Liinev and Michèle Vanmaele



"Pricing of Arithmetic Basket and Asian Basket Options by Conditioning." Insurance: Mathematics and Economics, 34.1 (2004): 55-77. Deelstra, Griselda, Alexandre Petkovic and Michèle Vanmaele. Pricing and Hedging Asian Basket Spread Options. ECORE discussion paper 2008/2. Li, Minqiang, Shijie Deng and Jieyun Zhou. "Closed-Form Approximations for Spread Option Prices and Greeks." The Journal of Derivatives, 15.3 (2008): 58-80. Simon, Steven, Marc Goovaerts and Jan Dhaene. "An Easy Computable Upper Bound for the Price of an Arithmetic Asian Option." Insurance: Mathematics and Economics, 26.2 (2000): 175-184. Vanmaele, Michèle, Griselda Deelstra, Jan Liinev, Jan Dhaene and Marc Goovaerts. "Bounds for the Price of Discretely Sampled Arithmetic Asian Options." Journal of Computational and Applied Mathematics, 185.1 (2006): 51-90. Zhou, Jinke and Xiaolu Wang. "Accurate Closed-Form Approximation for Pricing Asian and Basket Options." Applied Stochastic Models in Business and Industry, 24.4 (2008): 343-358.





Mathematical Economics

The Restricted Core for Totally Positive Games with Ordered Players by: René van den Brink Recently, applications of cooperative game theory to economic allocation problems have gained popularity. In many such allocation problems, such as river games, queueing games and auction games, the game is totally positive (i.e., all dividends are nonnegative), and there is some hierarchical ordering of the players. In van den Brink, van der Laan and Vasil’ev (2009) the Restricted Core for such totally positive games with ordered players is introduced. This is a refinement of the famous Core of these games which is based on the distribution of dividends taking into account the hierarchical ordering of the players. We discuss properties, provide an axiomatization and apply this solution to river games.

Cooperative games A situation in which a finite set of players can obtain certain payoffs by cooperation can be described by a cooperative game with transferable utility, or simply a TU-game. A TU-game is a pair (N, v), where N = {1,...,n} is a finite set of players and v: 2^N → ℝ is a characteristic function on N such that v(∅) = 0. A subset S ⊆ N is called a coalition. The players in any coalition S can agree to cooperate and generate some joint worth v(S) ∈ ℝ. In a TU-game this worth can be split among the players in S in any possible way. We denote by Ω^N = 2^N \ {∅} the set of all nonempty coalitions. Besides representing a TU-game by its characteristic function v, it often is helpful to use the dividend representation of a game. For every characteristic function v the dividends are defined uniquely as those numbers Δ_v(S), ∅ ≠ S ⊆ N, such that for every coalition S the sum of the dividends of all subsets of S is equal to the worth of S, i.e. the dividends can be found by solving $v(S) = \sum_{T \subseteq S,\, T \neq \emptyset} \Delta_v(T)$ for all S ⊆ N. In other words, we can find the dividends of coalitions recursively, starting with Δ_v({i}) = v({i}) for the single player coalitions, and $\Delta_v(S) = v(S) - \sum_{T \subseteq S,\, T \neq \emptyset,\, T \neq S} \Delta_v(T)$ for S ⊆ N with |S| ≥ 2, see Harsanyi (1959). (For example, if we have a three player game with v({1}) = v({1,2}) = v({1,3}) = v({2,3}) = 1, v({1,2,3}) = 2 and all other coalitions have worth zero, then coalitions {1} and {2,3} both have a dividend of 1, while all other coalitions have dividend zero. The total worth of 2 of the ‘grand coalition’ is generated as follows: player 1 earns 1 on its own, while players 2 and 3 earn 1 when they cooperate.)

When we assume that eventually the ‘grand coalition’ N consisting of all players forms, then the main question is how to allocate the worth v(N) over the individual
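A minimal sketch of this recursion, applied to the three-player game of the example (the helper names are ours, not from the paper):

```python
from itertools import combinations

def dividends(v, players):
    """Harsanyi dividends: v(S) minus the dividends of all proper subsets."""
    delta = {}
    for size in range(1, len(players) + 1):
        for S in combinations(players, size):
            delta[S] = v(S) - sum(d for T, d in delta.items() if set(T) < set(S))
    return delta

def v(S):
    worths = {frozenset({1}): 1, frozenset({1, 2}): 1, frozenset({1, 3}): 1,
              frozenset({2, 3}): 1, frozenset({1, 2, 3}): 2}
    return worths.get(frozenset(S), 0)

print(dividends(v, (1, 2, 3)))
# Only {1} and {2, 3} carry a dividend of 1; all other coalitions get 0.
```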

players, taking into account what they can earn when forming different coalitions. Various solutions have been proposed in the literature. A solution for TU-games assigns a set of payoff vectors (possibly empty or consisting of a unique element) to every TU-game. In a payoff vector provided by a solution, the payoff assigned to a particular player depends on the payoffs that can be obtained by any coalition of players. Here we mention three such solutions. First, the Shapley value Sh(v) (Shapley, 1953) distributes the dividend of any coalition S equally among the players in S (so players outside S do not share in the dividend of S):

$$Sh_i(v) = \sum_{\{S \subseteq N \,\mid\, i \in S\}} \frac{\Delta_v(S)}{|S|} \quad \text{for every } i \in N.$$

Note that the Shapley value is well defined and assigns a unique payoff vector to every game. The Core (Gillies, 1953) of TU-game v is the set Core(v) of all efficient payoff vectors that are stable in the sense that no coalition can do better by separating and redistributing its own

René van den Brink Rene van den Brink is Associate Professor in Mathematical Economics at the Department of Econometrics of VU University, and fellow of the Tinbergen Institute. His main research interests are in (cooperative) game theory, social choice theory and network theory.




worth, i.e.,

$$Core(v) = \left\{x \in \mathbb{R}^n \,\Big|\, \sum_{i \in N} x_i = v(N) \text{ and } \sum_{i \in S} x_i \geq v(S) \text{ for all } S \subset N\right\}.$$

One problem with the Core is that it might be empty. Finally, the Harsanyi set (Vasil’ev, 1978) H(v) is the set of payoff vectors obtained by distributing for any coalition the dividend of that coalition in any possible way among its players. To define this solution, for coalition T ∈ Ω^N the set of sharing vectors is defined as

$$P^T = \left\{p^T \in \mathbb{R}^n \,\middle|\, (i)\ p_i^T = 0 \text{ for all } i \in N \setminus T,\ (ii)\ p_i^T \geq 0 \text{ for all } i \in T, \text{ and } (iii)\ \sum_{i \in T} p_i^T = 1\right\},$$

and a sharing system is a tuple $p = [p^T]_{T \in \Omega^N}$ with $p^T \in P^T$ for every T ∈ Ω^N. Every sharing system p then yields the corresponding Harsanyi payoff vector given by

$$\varphi_i^p(v) = \sum_{T \in \Omega^N} \Delta_v(T)\, p_i^T \quad \text{for all } i \in N,$$

i.e., the payoff to player i is given by the sum of its shares in the dividends of the game. The Harsanyi set H(v) is the collection of all Harsanyi payoff vectors1, thus H(v) = {φ^p(v) ∈ ℝ^n | p^T ∈ P^T for any T ∈ Ω^N}. The Shapley value of a game always belongs to the Harsanyi set (using the equal sharing system p_i^T = 1/|T| for all i ∈ T). Moreover, for every game the Harsanyi set contains the Core, and it is equal to the Core if and only if all dividends of coalitions with at least two players are nonnegative.

Totally positive games with ordered player set In this article we only consider totally positive games. A cooperative TU-game is called totally positive if all dividends are nonnegative. From the above it follows that for such games the Core and Harsanyi set coincide. Many economic allocation problems are modeled as totally positive games. Moreover, in several of these applications there is some hierarchical ordering of the players. For example, in the water allocation problem of Ambec and Sprumont (2002) agents are located along a river from upstream to downstream, in sequencing situations as considered in, e.g., Curiel et al. (1989) the players (jobs)

are ordered in an initial queue, in auction situations of Graham et al. (1990) the agents can be ordered by their valuations of the good to be auctioned, and in the airport game of Littlechild and Owen (1973) the airplanes can be ordered by the cost of the landing strip necessary to build for these airplanes. Therefore, in this paper we assume that the players are part of some hierarchical structure that is represented by a directed graph or digraph, being a pair (N, D) where N = {1,...,n} is a finite set of nodes (representing the players) and D ⊆ N×N is a binary relation on N. In such games with ordered players (v, D), the payoffs of players may depend both on the worths of the coalitions in the game v and on their positions in the digraph D. When distributing dividends over players, we take account of the ordering of the players by distributing the dividends in such a way that the share of a player that is dominated in a coalition is at most equal to the share of a player by which it is dominated. So, for digraph D and coalition T ∈ Ω^N we consider the restricted set of sharing vectors

$$P_D^T = \left\{p^T \in P^T \,\middle|\, p_i^T \geq p_j^T \text{ for all } i, j \in T \text{ with } (i, j) \in D\right\}.$$

Obviously, since P_D^T ⊆ P^T for all T ∈ Ω^N, by only allowing sharing systems from the sets P_D^T we obtain a refinement of the Harsanyi set. Since the Core and Harsanyi set coincide for totally positive games, we obtain a refinement of the Core for totally positive games, and therefore we refer to the set of payoff vectors

RC(v,D) = {x ∈ ℝⁿ | x = ∑_{T∈Ω^N} Δ_v(T) p^T, p^T ∈ P_D^T for any T ∈ Ω^N}

as the Restricted Core of totally positive game with ordered players (v,D). Since the equal sharing system respects the restrictions of P_D^T, the Shapley value of the unrestricted game v always belongs to the Restricted Core. So, for every totally positive game with ordered players (v,D) it holds that

Sh(v) ∈ RC(v,D) ⊆ Core(v).

Further, it is clear that the Shapley value is the unique element in the Restricted Core if the digraph is complete (i.e. D = {(i,j) | i,j ∈ N, i ≠ j}), and the Restricted Core equals the unrestricted Core if the digraph is empty (i.e. D = ∅).

Properties We refer to the solution that assigns to every totally

1. It should be noticed that the Harsanyi set of a game is equal to its Selectope, as introduced in Hammer et al. (1977), see also Derks et al. (2000).


positive game with ordered players its Restricted Core as the Restricted Core solution. Let F be a generic solution that assigns a set of payoff vectors F(v,D) ⊆ ℝⁿ to any totally positive game with ordered players (v,D). The first four axioms are generalizations of standard axioms in cooperative game theory.

Efficiency: ∑_{i∈N} x_i = v(N) for all x ∈ F(v,D).

Null player property: x_i = 0 for all x ∈ F(v,D) whenever i is a null player in v. (Player i ∈ N is a null player in game v if v(S) = v(S\{i}) for all S ⊆ N.)

Additivity: F(v + w, D) = F(v,D) + F(w,D).

Nonnegativity: F(v,D) ⊆ ℝⁿ₊.

Next we introduce a property that reflects the hierarchical dominance. If a player vetoes one of its successors, then we require that this vetoing predecessor earns at least as much as its vetoed successor². Player i vetoes player j ∈ N\{i} in game v if v(S) = v(S\{j}) for all S ⊆ N\{i}, i.e., i vetoes j if the marginal contribution of j to any coalition not containing i is equal to zero.

Structural monotonicity: x_i ≥ x_j for every x ∈ F(v,D) whenever (i,j) ∈ D and i vetoes j in v.

In the paper we show the following results:
1. On the class of totally positive games with ordered players, the Restricted Core solution RC satisfies efficiency, the null player property, additivity, nonnegativity and structural monotonicity.
2. If solution F satisfies efficiency, the null player property, additivity, nonnegativity and structural monotonicity, then F(v,D) ⊆ RC(v,D) for all totally positive games with ordered players (v,D).

Adding to these properties the properties of nonemptiness, convexity and a coalitional consistency property, we obtain a full axiomatization of the Restricted Core solution with 8 logically independent axioms.

An application: the water distribution problem

In their paper 'Sharing a river', Ambec and Sprumont (2002) consider the problem of the optimal distribution of water to agents located along a river from upstream to downstream. Let N = {1,...,n} be the set of players representing the agents on the river, numbered successively from upstream to downstream, and let e_i ≥ 0 be the flow of water entering the river between player i−1 and i, i = 1,...,n, with e_1 the inflow before the most upstream player 1. Further, it is assumed that each player i has a quasilinear utility function given by u^i(x_i, t_i) = b^i(x_i) + t_i, where t_i is a monetary compensation to player i, x_i is the amount of water allocated to player i, and b^i: ℝ₊ → ℝ is a continuous nondecreasing function yielding the benefit b^i(x_i) to player i of the consumption x_i of water. An allocation is a pair (x,t) ∈ ℝⁿ₊ × ℝⁿ of a water distribution and a compensation scheme, satisfying

∑_{i=1}^n t_i ≤ 0 and ∑_{i=1}^j x_i ≤ ∑_{i=1}^j e_i, j = 1,...,n.

The first condition is a budget condition and says that the total amount of compensations is nonpositive, i.e., the compensations only redistribute the total welfare. The second condition reflects that any player can use the water that entered upstream, but that the water inflow downstream of some player cannot be allocated to this player. So, for any j, the sum of the water uses x_1,...,x_j is at most equal to the sum of the inflows e_1,...,e_j. Because of the quasilinearity and the possibility of making money transfers, an allocation is Pareto optimal (efficient) if and only if the distribution of the water streams maximizes the total benefits, i.e., the optimal water distribution x* ∈ ℝⁿ₊ solves the maximization problem:

max_{x_1,...,x_n} ∑_{i=1}^n b^i(x_i) s.t. ∑_{i=1}^j x_i ≤ ∑_{i=1}^j e_i, j = 1,...,n, and x_i ≥ 0, i = 1,...,n.

A welfare distribution distributes the total benefits of an optimal water distribution x* over the players, i.e., it is a vector z ∈ ℝⁿ assigning utility z_i to player i and satisfying ∑_{i=1}^n z_i = ∑_{i=1}^n b^i(x*_i). Clearly, any welfare distribution z can be implemented by the allocation (x,t) with x_i = x*_i and t_i = z_i − b^i(x*_i), i = 1,...,n.

The problem of finding a 'fair' welfare distribution can be modelled by the following game (N,v). Obviously, the worth v(N) is given by v(N) = ∑_{i=1}^n b^i(x*_i), with x* ∈ ℝⁿ₊ a solution of the maximization problem above. Further, for any pair of players i,j with j > i it holds that water inflow entering the river before the upstream player i can only be allocated to the downstream player j if all players between i and j cooperate; otherwise any player between i and j can take the flow from i to j for its own use. Hence, only coalitions [i,j] := {i, i+1,...,j} of consecutive players are admissible. For any coalition [i,j] its worth v([i,j]) is given by

v([i,j]) = ∑_{h=i}^j b^h(x_h^{[i,j]}), where x^{[i,j]} = (x_h^{[i,j]})_{h=i}^j solves

max_{x_i,...,x_j} ∑_{h=i}^j b^h(x_h) s.t. ∑_{k=i}^h x_k ≤ ∑_{k=i}^h e_k, h = i,...,j, and x_k ≥ 0, k = i,...,j.

2. For point-valued solutions this is weaker than a similar property introduced in van den Brink and Gilles (1996), who do not require the predecessor to veto the successor.
3. A subset T of S is maximal consecutive if T is consecutive, i.e., T = [i,j] for some i,j ∈ S, and T ∪ {h} is not consecutive for any h ∈ S \ T.



For any other coalition S it holds that v(S) is equal to the sum of the worths of its maximal consecutive subsets.³ We refer to this game as the river game. In case all functions b^i are differentiable with derivative going to infinity as x_i tends to zero, strictly increasing and strictly concave, Ambec and Sprumont (2002) have shown that this river game is convex. It can even be shown that every river game is totally positive, i.e. all dividends are nonnegative, and only coalitions [i,j] of consecutive players can have a positive dividend. But this implies that the Core of a river game is equal to its Harsanyi set, i.e. any distribution of dividends yields a Core payoff vector.

Ambec and Sprumont (2002) propose as solution for the water distribution problem the marginal vector m(v) given by m_1(v) = v({1}) and m_i(v) = v([1,i]) − v([1,i−1]), i = 2,...,n, being the unique payoff vector that is both stable (i.e. lies in the Core) and, what they call, fair. This fairness means that no coalition S gets a payoff above its aspiration level, defined as the maximum worth S can attain by an optimal distribution among its members of the water inflows of all players 1,...,s, where s = max{h | h ∈ S}, so also using the water inflows of the players not in S but upstream of the most downstream member of S. Since for coalition [1,j] the aspiration level is given by v([1,j]), stability and fairness imply that the total payoff to the players in coalition [1,j] should be equal to v([1,j]), which yields m(v) as the unique outcome.

Although core stability and fairness seem to be reasonable properties, the unique outcome resulting from these two properties is quite counterintuitive. For every i < n, we have that the total payoff of the players in the consecutive coalition [1,i] upstream of i (including i itself) is equal to v([1,i]), while the total payoff to the downstream coalition [i+1,n] is equal to v(N) − v([1,i]) ≥ v([i+1,n]), i.e., all additional profit that is realised when the two coalitions [1,i] and [i+1,n] merge into the grand coalition N goes to the downstream coalition. However, any upstream coalition [1,i] can prevent coalition [i+1,n] from getting more than v([i+1,n]) by using all inflows e_1,...,e_i by itself. So, although the coalition [1,i] can play some type of ultimatum game by claiming that it will use its total water inflow ∑_{h=1}^i e_h by itself unless the players of the downstream coalition [i+1,n] are willing to give almost all profit of cooperation to the upstream coalition, the solution proposed by Ambec and Sprumont does the reverse: all profit goes to the downstream coalition.

Putting it differently, the solution m(v) is the payoff vector in the Core that is obtained by giving any dividend Δ_v([i,j]) fully to the last player j in the coalition [i,j]. Therefore, as a drawback it can be argued that a player is not rewarded for letting the water pass from its upstream players to its downstream players, and thus does not give a player any incentive to cooperate


with its successors downstream on the river. Even if one agrees with the idea that downstream players should get a share in the dividends at least as high as the shares of upstream players, it seems that giving dividends fully to downstream players is too extreme. We conclude that the unique stable and fair solution according to Ambec and Sprumont (2002) does not seem to be a very attractive and reasonable outcome. Instead of assigning the dividend of any consecutive coalition to the most downstream player, it seems more intuitive to assign that dividend to the most upstream player. Maybe assigning all dividends to the most upstream player is also too extreme (though certainly not more extreme than assigning them to the most downstream player, as done by Ambec and Sprumont (2002)), but it seems reasonable to assign to upstream players a share that is at least equal to the share of more downstream players. This is done by the Restricted Core where the ordering of the players is from upstream to downstream. Besides the payoff vector that assigns dividends fully to the most upstream player in the corresponding coalitions, another extreme point of this set is the Shapley value, which distributes the dividend of any consecutive coalition equally among all players in that coalition.

Example

Consider the water distribution problem among players N = {1,2,3}, such that e_1 = 1, e_2 = e_3 = 0, b^1(x) = 0, b^2(x) = 0 and b^3(x) = x for all x ≥ 0. So, the only water inflow is at the most upstream player 1, while the most downstream player 3 is the only one that obtains a positive benefit from water consumption. Player 2 has neither a water inflow nor a positive benefit from water consumption, but is needed to let water pass from player 1 to player 3. The corresponding river game is the unanimity game of the 'grand' coalition, given by v(N) = 1 and v(S) = 0 otherwise. The Core is the full set of efficient nonnegative payoff vectors Core(v) = {z ∈ ℝⁿ₊ | ∑_{i∈N} z_i = 1}.

The solution m(v) of Ambec and Sprumont (2002) assigns the worth v(N) fully to the most downstream player 3, while players 1 and 2 get nothing for letting the water pass through to player 3. This payoff vector is an extreme point of the Restricted Core corresponding to digraph D⁻ = {(3,2), (2,1)}, where the ordering of the players is from downstream to upstream. Since Δ_v(N) = 1 and Δ_v(S) = 0 otherwise, the Restricted Core is given by

RC(v,D⁻) = P_{D⁻}^N = {p ∈ P^N | p_3 ≥ p_2 ≥ p_1} = Conv({(0,0,1)', (0,1/2,1/2)', (1/3,1/3,1/3)'}).

On the other hand, the Restricted Core for the more natural ordering from upstream to downstream, reflected by the digraph D = {(1,2), (2,3)}, is given by

RC(v,D) = P_D^N = {p ∈ P^N | p_1 ≥ p_2 ≥ p_3} = Conv({(1,0,0)', (1/2,1/2,0)', (1/3,1/3,1/3)'}),

see Figure 1. This is also a subset of the Core with three extreme points. One of these extreme points is the allocation assigning the dividend fully to the most upstream player 1. Note that the Shapley value, which distributes the worth equally among the three players, is an extreme point of the Restricted Core corresponding to both orderings D and D⁻.
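As a quick check of these numbers, here is a small sketch (assuming NumPy; illustrative only) that computes the Ambec and Sprumont marginal vector m(v) for this river game and verifies the extreme points of RC(v,D) given above.

```python
import numpy as np

# The river game of the example: the unanimity game v(N) = 1, v(S) = 0
# otherwise, with N = {1,2,3} ordered from upstream to downstream.
def v(S):
    return 1.0 if set(S) == {1, 2, 3} else 0.0

# Marginal vector of Ambec and Sprumont (2002):
# m_1 = v({1}), m_i = v([1,i]) - v([1,i-1]).
m = [v(range(1, i + 1)) - v(range(1, i)) for i in range(1, 4)]
print(m)  # [0.0, 0.0, 1.0]: all surplus goes to the most downstream player

# Extreme points of RC(v,D) for the upstream ordering D = {(1,2),(2,3)}:
# sharing vectors p of the single dividend with p1 >= p2 >= p3 >= 0.
verts = np.array([[1, 0, 0], [0.5, 0.5, 0], [1/3, 1/3, 1/3]])
for p in verts:                      # each vertex is itself a Core element
    assert np.isclose(p.sum(), v([1, 2, 3])) and p[0] >= p[1] >= p[2]
print("extreme points of RC(v,D):\n", verts)
```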



Figure 1. The Restricted Core RC(v,D) of the example.

We end with the remark that our critique of the solution of Ambec and Sprumont (2002) concerns the modelling, i.e., ordering the players from downstream to upstream. As shown in the example above, the Restricted Core can be applied to this ordering (yielding m(v) as one of its extreme points), but also to the reverse ordering from upstream to downstream, which seems more intuitive. For any ordering, the Shapley value is always an extreme point of the Restricted Core.

References

Ambec, S., and Y. Sprumont. "Sharing a river." Journal of Economic Theory, 107 (2002): 453-462.

Brink, R. van den, and R.P. Gilles. "Axiomatizations of the conjunctive permission value for games with permission structures." Games and Economic Behavior, 12 (1996): 113-126.

Brink, R. van den, G. van der Laan, and V.A. Vasil'ev. "Component efficient solutions in line-graph games with applications." Economic Theory, 33 (2007): 349-364.

Brink, R. van den, G. van der Laan, and V.A. Vasil'ev. The Restricted Core for totally positive games with ordered players. Tinbergen Discussion Paper 09-038/1, Tinbergen Institute and Free University, Amsterdam, 2009.

Curiel, I., G. Pederzoli, and S. Tijs. "Sequencing games." European Journal of Operational Research, 40 (1989): 344-351.

Derks, J., H. Haller, and H. Peters. "The selectope for cooperative TU-games." International Journal of Game Theory, 29 (2000): 23-38.

Gillies, D.B. Some Theorems on n-Person Games, Princeton University Press, Princeton, NJ, 1953.

Graham, D.A., R.C. Marshall, and J.F. Richard. "Differential Payments within a Bidder Coalition and the Shapley Value." American Economic Review, 80 (1990): 493-510.

Hammer, P.L., U.N. Peled, and S. Sorensen. "Pseudo-Boolean functions and game theory. I. Core elements and Shapley value." Cahiers du CERO, 19 (1977): 159-176.

Harsanyi, J.C. "A bargaining model for cooperative n-person games." In: Contributions to the Theory of Games IV (eds. A.W. Tucker and R.D. Luce). Princeton University Press, Princeton (1959): 325-355.

Littlechild, S.C., and G. Owen. "A simple expression for the Shapley value in a special case." Management Science, 20 (1973): 370-372.

Shapley, L.S. "A value for n-person games." In: Contributions to the Theory of Games, Vol. II (eds. H.W. Kuhn and A.W. Tucker). Princeton University Press, Princeton (1953): 307-317.

Vasil'ev, V.A. "Support function of the core of a convex game." Optimizacija Vyp., 21 (1978): 30-35 (in Russian).



Actuarial Sciences

Stochastic Rates of Return in a Hybrid Pension Fund

by: Denise Gómez-Hernández

Hybrid pension funds represent a combination of Defined Contribution and Defined Benefit pension funds. In this article, a hybrid pension fund is proposed which is accumulated through time based on a pre-defined target. The only source of uncertainty assumed in this work is volatile rates of return. The contributions are assumed non-constant and are adjusted to eliminate any deficits that arise, using the modified spreading method developed by Owadally (2003). A static asset allocation is assumed; however, the effects of varying the proportion of the fund invested in UK equities on the value of the fund, the contribution and the deficits are investigated. Also, when the adjustment to the contribution is made, a certain proportion of deficits has to be paid off during a certain period of time; the effects on the fund and contribution variance when this proportion is varied are also investigated. The results obtained show that the higher the proportion of the hybrid fund invested in UK equities, the lower the values of the contributions and the deficits. Also, the variance of the contribution decreases as the proportion of deficits paid off through time is increased, at the expense of an increase in the variance of the fund.

Introduction

In response to the uncertainty that both employers and employees face with Defined Benefit (DB) and Defined Contribution (DC) pension schemes, many authors, such as Khorasanee (1995), Khorasanee and Ng (2000) and Blake et al. (2001), have proposed alternative models which offer a combination of these pension plans. In this article a hybrid scheme is also proposed, which consists of accumulating a pension fund similar to a DC plan but with non-constant contributions, adjusted at each point in time t depending on the value of a notional target based on a DB fund. From the employee's point of view, these variable contributions may reduce the uncertainty about the benefit at retirement that constant contributions represent. Moreover, the employee is willing to pay more into his or her fund when it is in deficit, provided that he or she can afford it, or to reduce the value of the contributions when there are surpluses. This change in the contributions is made by assuming a method called modified spreading, developed by Owadally (2003).

Denise Gómez-Hernández Denise is a full-time professor at the Universidad Autónoma de Querétaro in Mexico. She has a Ph.D. in Actuarial Science with a specialization in pensions from Cass Business School in London and an M.Sc. in Actuarial Science from Heriot-Watt University in Edinburgh. Her research interests are: funding of pension funds, smoothing of contributions in pension funds, stochastic interest rates, investment risks and mortality rates.


The practical implications of an individual using this modified spreading method while accumulating a pension fund over the working lifetime are that he or she must be willing to increase the value of the contributions when the value of the fund falls below the value of the pre-defined target, and must accept that the value of the contributions may also decrease for some periods of time, when the value of the fund exceeds the value of the pre-defined target. Another practical consideration is that when an individual leaves or changes jobs, he or she must keep making contributions into the fund.

The Model

The hybrid pension fund proposed assumes an individual starting a pension scheme at age 25 (time 0) and retiring at age 65 (time 40). To simplify the model, salary growth is not modelled in the simulations; that is, the projected final salary at retirement is assumed to be equal to 1. The pension fund has market value f_t at time t, with initial value f_0 = 0. The contribution C_t is paid at the start of year (t, t+1), where 0 ≤ t ≤ 39. The fund is then governed by the following recursive equation:

f_{t+1} = (f_t + C_t)·(p_e(1 + i_{e,t+1}) + p_g(1 + i_{g,t+1})),  (1)

where p_e is the proportion of the fund invested in equities and p_g the proportion invested in gilts, with p_e + p_g = 1. Also, i_{e,t} and i_{g,t} are the UK real rates of return in year t for equities and gilts, respectively. These returns are stochastic because the bootstrap sampling method with historical rates of return is used.



Figure 1. Fund and contribution values and the size of the deficit for a stochastic fund with K1 = 5 and K2 = 0.8.

The contribution that should be paid by the individual in year t (where 0 ≤ t ≤ 39) is as follows:

C_t = c + S_t,  (2)

where c is a fixed contribution calculated by assuming the Entry Age (EA) method, and S_t is the adjustment to the contribution, which depends on the deviation of the actual value of the fund from the value of the pre-defined target and is simulated as:

S_t = λ₁D_t + λ₂ ∑_{j=0}^∞ D_{t−j},  (3)

where λ₁ = 1 − u_A K₁K₂ and λ₂ = v_A(1 − u_A K₁)(1 − u_A K₂), with u_A = 1 + i_A, v_A = (1 + i_A)⁻¹, and i_A the actual rate of return on assets. K₁ and K₂ represent, respectively, the period of time over which any deficits are spread and the proportion of these deficits to be paid off through time. The deficits are then defined as:

D_t = F_t − f_t,  (4)

where Dt measures how far the actual fund ft is from a notional target Ft. This target represents the standard fund (or actuarial liability), which will be compared to the fund ft, in order to make any necessary adjustment to the value of the contributions. This Ft is calculated under the Entry Age method.
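A minimal simulation sketch of recursion (1) with deficits (4) follows. Its assumptions are all hypothetical: a made-up history of real returns for the bootstrap, a linear target F_t, and a simple proportional adjustment S_t = λD_t standing in for the modified-spreading coefficients λ₁, λ₂, so it only illustrates the mechanics, not Owadally's method or the article's calibration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical historical real returns used for bootstrap resampling.
hist_eq = np.array([0.12, -0.08, 0.20, 0.05, -0.15, 0.09])   # UK equities
hist_gilt = np.array([0.03, 0.01, 0.04, 0.02, 0.00, 0.03])   # gilts

T, pe, c, lam = 40, 0.6, 0.02, 0.3
F = np.linspace(0.0, 1.0, T + 1)         # hypothetical standard fund F_t

f = 0.0                                  # f_0 = 0
for t in range(T):
    D = F[t] - f                         # deficit, equation (4)
    C = c + lam * D                      # contribution, simplified (2)-(3)
    k = rng.integers(len(hist_eq))       # bootstrap: resample one past year
    growth = pe * (1 + hist_eq[k]) + (1 - pe) * (1 + hist_gilt[k])
    f = (f + C) * growth                 # fund recursion, equation (1)

print(round(f, 3), "against a target of", F[T])
```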

Results

The assumptions made in this section are: an individual accumulating a fund during his/her working life over a total of 40 years; a stochastic version of the hybrid pension fund obtained by applying the bootstrap sampling method to rates of return; a conservative scenario for the returns, for the sake of brevity; 10,000 simulations for each scenario; and different values of p_e, K₁ and K₂, in order to investigate the effects on the values of the fund, the contribution and the deficits, and on the variance of the fund and the contribution.

The results in Figure 1 show that as we decrease the value of p_e, that is, as we decrease the proportion of the fund invested in UK equities, a higher value of the contribution is required to match the value of the actual fund with the standard fund. This happens because when p_e is small, fewer returns are accumulated on the fund, making it smaller in value. The opposite happens when a high proportion of the fund is invested in equities: the higher this proportion, the higher the returns on the fund and the lower the required contribution. Note that the size of the deficits is also smaller when assuming a higher value of p_e; that is, when a high proportion of the fund is invested in equities, higher returns and more gains are obtained.

Figure 2, on the other hand, shows the effects on the fund and contribution volatility when the value of the proportion of deficits to be paid off through time (i.e. K₂) is varied. For this purpose, the value of K₁ (i.e. the period of time over which deficits are spread) has been fixed at 5, as this lies within the interval of feasible values.¹ The results show that the variance of the fund increases as the value of K₂ increases, whereas the variance of the contribution decreases: as we increase the value of K₂, higher proportions of deficits are spread through time, making the volatility of the fund increase and the volatility of the contributions decrease. Thus, as we use more modified spreading, the variance of the contribution decreases at the expense of an increasing variance of the fund. This result suggests that the individual would have to choose a value of K₂ which balances the trade-off between the fund and contribution volatility. The interval of feasible values of K₂ may be [0.4, 0.6], as shown in Figure 2.²

1. Gomez-Hernandez (2008).




Figure 2. Fund and contribution variance for a stochastic fund with K1 = 5 and pe = 0.6.

Conclusions

The main conclusions drawn from the simulations in this work are as follows. First, the higher the proportion of the hybrid pension fund invested in UK equities, the lower the contributions required for the value of the fund to reach the pre-defined target. At some periods of time these even become lower than zero; that is, the individual is able to withdraw money from his or her fund. As a consequence, the value of the deficits becomes negative (positive gains). Second, when different values of the proportion of deficits to be paid off through time, i.e. K₂, are investigated, the variance of the contribution decreases as we use 'more' of the modified spreading method (i.e. for higher values of K₂). This is at the expense of a higher variance of the fund. An 'optimal' value of this proportion can then be found at around 0.5, when looking at the trade-off between the fund and contribution variance.

References

Blake, David, Andrew Cairns, and Kevin Dowd. "Pensionmetrics: stochastic pension plan design and value-at-risk during the accumulation phase." Insurance: Mathematics and Economics, 29 (2001): 187–215.

Gomez-Hernandez, Denise. Pension Funding and Smoothing of Contributions. Ph.D. thesis, City University, 2008.

Khorasanee, Zaki. "Applying the defined benefit principle to a defined contribution scheme." Actuarial Reports, City University, 77 (1995): 30 pages.

Khorasanee, Zaki, and Ho Kuen Ng. "A retirement plan based on fixed accumulation and variable accrual." North American Actuarial Journal, 4.1 (2000): 63–79.

Owadally, M.I. "Pension Funding and the Actuarial Assumption Concerning Investment Returns." ASTIN Bulletin, 33.2 (2003): 289–312.

2. The interested reader should refer to Gomez-Hernandez (2008) for a more detailed explanation of the model.




Econometrics

Social Security and Temptation Problems

by: Alessandro Bucciol

Unfunded Social Security is the largest transfer program in most industrialized countries and has an enormous impact on macroeconomic variables and individual saving decisions. It is therefore important to assess whether it actually improves welfare in the economy. A standard result in the macroeconomic literature is that unfunded Social Security is detrimental to welfare, and that individuals would be better off if they were allowed to use payroll taxes for their own purposes. The result was first theorized by comparing steady-state scenarios in models with two overlapping generations under partial or general equilibrium, and later supported quantitatively by means of computer simulations. With the development of modern technology, new effort has been made to set up models as close as possible to reality, notably by expanding the number of generations and introducing uncertainty. Applications repeatedly draw (say, 10,000 times) random numbers from a data generating process. The average individual behavior resulting from these simulations determines the welfare in the economy, which is then compared under different levels of Social Security taxation. Even if one takes into account those sources of uncertainty that a pension program is able to reduce through inter-generational risk sharing (particularly in income and death age), the general conclusion that Social Security reduces welfare still holds true (for a quantitative analysis see, e.g., Auerbach and Kotlikoff, 1987). The main reason for this result is that the forced intertemporal transfer of resources from working years to retirement years makes the youngest liquidity-constrained, hence substantially worse off. One might argue, however, that Social Security exists primarily for a paternalistic reason, as some individuals lack the foresight necessary to accumulate savings for retirement. Previous research on this topic, however, found that, to justify a Social Security program with small tax rates, the economy should include a large fraction of

Alessandro Bucciol Alessandro Bucciol is currently post-doc researcher of Macroeconomics at the University of Amsterdam, and assistant professor of Econometrics at the University of Verona. He holds a first degree in Statistical and Economic Sciences, taken in 2003 at the University of Padua, and a Ph.D. in Economics and Management, taken in 2007 at the same university. His research interests include Social Security design, households' saving and portfolio choice, and behavioral economics.


short-sighted individuals, or at least some should be completely myopic, that is, in any given year they should consume exactly their income and make no savings at all (Feldstein, 1985). Although either assumption is hard to reconcile with reality, the paternalistic motive is the key argument to justify Social Security, and focusing on the mechanisms behind individual choices seems the best way to tackle the issue.

Standard modeling

In economic modeling, individual choices arise from the maximization of a utility function U(.). The lifetime consumption stream c*_t, t = 1,...,T, results from an intertemporal problem like

V_t = max_{{c_s}_{s=t}^T} E_t[∑_{s=t}^T β^{s−t} U(c_s)], t = 1,...,T,  (1)

where future utility functions are discounted exponentially using a factor β. The maximization of equation (1) is subject to a budget constraint like

x_{s+1} ≤ r(x_s − c_s) + y_{s+1},  (2)

where r is the return on savings, y_{s+1} is income and x_{s+1} is cash-on-hand, that is, the sum of assets and income. Every year t an individual solves (1) subject to (2) to find the optimal consumption at time t, c*_t, and to make predictions on its future realizations, c*_s, s = t+1,...,T. Dynamic programming shows that the solution to this model is implicit in the so-called Euler equation

U'(c_t) = βr E_t[U'(c_{t+1})],  (3)

which links consumption at time t with consumption at time t+1. An implication of (3) is that, while the actual



consumption choice at different points in time may differ because the initial resources are different, the trade-off between the marginal utility of consumption in two successive periods remains identical and equal to βr.

Preference reversal

Growing experimental evidence from psychology nevertheless suggests that 'preference reversal' is a common occurrence in intertemporal decision-making. Agents resolve the same intertemporal trade-off differently depending on when the decision is made. In experiments, subjects would choose the larger and later of two prizes when both lie in the distant future, but they would prefer the smaller and earlier one as both prizes draw nearer to the present (Rabin, 1998). In terms of consumption-saving decisions, individuals featuring preference reversal enter retirement with fewer assets than necessary to support their consumption during seniority. Imrohoroglu et al. (2003) simulate welfare in an economy with preference reversal. Despite the promise, they also find welfare gains when Social Security is abolished. Their conclusion may be driven by their choice to model preferences using the popular quasi-hyperbolic discounting model (QHD; Laibson, 1997). QHD agents form expectations discounting the future at non-exponential rates {1, δβ, δβ², ...} with δ < 1, which gives more importance to present choices. Preference reversal arises since consumption at time t weighs 1/(δβ) times consumption at time t+1 when the comparison is made at time t itself, but only 1/β (as in the benchmark, exponential case) when the comparison is made at any earlier time. This creates time inconsistency of preferences: current predictions of future consumption choices are not consistent with the choices that will actually be made. Importantly, time inconsistency casts doubts on how welfare should be measured, because standard measures, such as the objective function (1) at time t = 1, do not describe the actual behavior, and measures incorporating non-exponential discounting do not predict future choices correctly.
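To see the mechanism in numbers, here is a small self-contained sketch (not from the article; the parameter values and prizes are purely illustrative):

```python
# Preference reversal under the quasi-hyperbolic weights
# {1, delta*beta, delta*beta**2, ...} with delta < 1.
beta, delta = 0.95, 0.7
small, large = 100.0, 110.0       # prize at time t versus prize at time t+1

def weight(s):
    """Discount weight put today on utility s periods ahead."""
    return 1.0 if s == 0 else delta * beta ** s

# Viewed from today with both prizes far away (t = 10), the trade-off is
# the exponential one (1/beta), so the larger-later prize wins.
t = 10
print(weight(t) * small, "<", weight(t + 1) * large)

# Viewed at time t itself, the earlier prize is immediate and weighted 1
# against delta*beta for the later one: the smaller-sooner prize now wins.
print(1.0 * small, ">", weight(1) * large)
```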

Temptation

A more recent approach, based on the seminal work of Gul and Pesendorfer (2001), is the temptation and self-control preference model. Rather than changing the discount factor, it sets up a different utility function. Under the temptation framework, utility includes not only the choice variable but also its 'most tempting alternative', namely, the choice arising from a purely short-sighted, one-period perspective. In terms of consumption, this tempting alternative is described by the sum of all the available resources (cash-on-hand x_t) in a given year t. The utility function U(.) then becomes

U(c_t, x_t) = u(c_t) − (v(x_t) − v(c_t)),  (4)

with the difference v(x_t) − v(c_t) > 0 measuring the 'cost of self-control', i.e., the cost in terms of utility of choosing c_t rather than its tempting alternative x_t. Temptation affects the optimal choice, as the decision-maker aims to reduce the cost of self-control. The implication is that the Euler equation (3) rewrites as

U'(c_t) = βr E_t[U'(c_{t+1}) + U'(x_{t+1})],  (5)

with U'(x_{t+1}) the derivative of the utility function with respect to its second argument x_{t+1}. Equation (5) differs from the standard Euler equation by the presence of the next-period most tempting alternative. Under the common assumption that u(c_t) is a Constant Relative Risk Aversion (CRRA) utility function with parameter γ,

u(c_t) = (c_t^{1−γ} − 1)/(1 − γ),

and v(c_t) = λu(c_t), equation (5) simplifies into

c_t^{−γ} = βr E_t[c_{t+1}^{−γ} − τ x_{t+1}^{−γ}],  (6)

with τ = λ/(1+λ) ∈ [0,1] the degree of temptation. The extremes of this parameter represent two stylized types of behavior: an agent with τ = 0 is the forward-looking decision maker of standard life-cycle models, whereas an overwhelmingly tempted agent with τ = 1 is a completely myopic decision maker who every year consumes exactly all her income. Any value of the parameter between 0 and 1 describes a mixed behavior driven by both forward-looking and myopic forces, consistently with psychological theories (DellaVigna, 2009). Although the temptation model also features preference reversal (the trade-off between consumption in two periods depends on the size of the temptation), it is time-consistent. This fact is not negligible, as it determines a completely different behavior: while time-inconsistent individuals want to save for the future but are incapable






of doing so, 'tempted' individuals rationally choose to postpone saving, and therefore the burden of self-control costs, to the years immediately before retirement. See for instance Figure 1, which compares the optimal age-savings profile (normalized to income) for individuals with τ = 0 and a small τ = 0.10. Savings of tempted agents are here more highly concentrated around the retirement age of 65. It should also be noticed that another advantage of time consistency is that it provides a natural framework for welfare analysis, as the intertemporal utility function at any point in time consistently predicts future choices.

Figure 1. Age-savings profile.
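For readers who want to experiment, here is a compact, deliberately simplified sketch of how such an age-savings profile can be computed. The income path, grid and parameters are illustrative assumptions, and the deterministic toy problem below is not the calibrated stochastic model behind Figure 1.

```python
import numpy as np

# Finite-horizon consumption with the temptation felicity
# u(c) - lam*(u(x) - u(c)) from (4), solved by backward induction.
beta, r, gamma, lam, T = 0.96, 1.02, 2.0, 0.1, 40
income = np.ones(T + 1)                    # flat working-life income

def u(c):
    return (c ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

grid = np.linspace(0.05, 8.0, 200)         # cash-on-hand x
V = np.zeros_like(grid)                    # value after the last period
policy = []                                # consumption rules, backwards

for t in range(T, -1, -1):
    newV, newC = np.empty_like(grid), np.empty_like(grid)
    for i, x in enumerate(grid):
        c = np.linspace(0.01, x, 150)      # feasible consumption levels
        nxt = np.interp(r * (x - c) + (income[t + 1] if t < T else 0.0),
                        grid, V)
        val = (1 + lam) * u(c) - lam * u(x) + beta * nxt
        j = val.argmax()
        newV[i], newC[i] = val[j], c[j]
    V = newV
    policy.append(newC)

# Forward-simulate from x_0 = income[0] and print saving x - c by age.
x = income[0]
for t in range(T + 1):
    c = np.interp(x, grid, policy[T - t])
    if t % 10 == 0:
        print("age 25 +", t, "saving:", round(x - c, 3))
    x = r * (x - c) + (income[t + 1] if t < T else 0.0)
```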

Temptation and Social Security

A crucial implication of the temptation model is that it generates a demand for commitment assets. Commitment improves welfare when it limits the set of available consumption choices, reducing the cost of self-control in (4). In principle, there should then be welfare gains from having a Social Security program in an economy populated by tempted agents. However, Kumru and Thanopoulos (2008) find that only unreasonably large degrees of temptation can justify a program providing a small pension at retirement. To obtain this result, the authors simulate a realistic model with many overlapping generations and compare welfare with and without Social Security, in economies with various degrees of temptation τ. In all cases, they let the discount factor β vary with τ to match some macro data observed in reality, but keep the parameter γ fixed at an exogenously given value.

In comparing scenarios where one preference parameter changes, however, one should carefully take into account the potential interplay of that parameter with all the other parameters of the utility function. This lesson is valid not only for temptation models, but in general for any model featuring two or more parameters in the utility function. With regard to the temptation model, the parameter γ measures the agent's willingness to substitute current temptation for future temptation. It is then likely that it varies when the degree of temptation changes. Failing to consider this variation generates a different individual behavior and different welfare implications: tempted agents with preference parameters fixed at those of non-tempted agents bear costly self-control only for a few years, hence the gain derived from the commitment to a Social Security program is limited.

In fact, Bucciol (2009) draws a different conclusion from a model with temptation preferences similar to Kumru and Thanopoulos (2008). The paper simulates a quantitative overlapping-generations model for a closed economy with idiosyncratic income and mortality risk. Output arises from an aggregate production technology to which agents rent capital and labor. There is a fixed retirement age, after which tempted agents stop working and start to receive a benefit from an unfunded Social Security scheme. The government funds the benefits with a constant tax on labor income. The paper avoids making concrete assumptions on the agents' preference parameters and considers a range of possible values for the degree of temptation, setting the remaining preference parameters to reproduce key features of the real US economy, such as the aggregate consumption-output and capital-output ratios. The paper then simulates and compares steady states with different Social Security arrangements. Welfare is measured using the objective function (1) at time t = 1, where the utility function includes temptation as in (4). This measure of welfare incorporates general equilibrium, insurance and distortionary effects of Social Security. This numerical exercise finds that a Social Security program with a small tax rate increases the welfare of agents with a degree of temptation of τ = 0.11 or higher. This degree is within the range of values estimated in the literature (from 0.073 in DeJong and Ripoll, 2007, to 0.206 in Huang et al., 2005).

Conclusions

Assuming temptation preferences, it is thus possible to find a rationale for the provision of Social Security, once one recognizes the interplay among the various preference parameters of the utility function. However, it is still unclear whether a mandatory, unfunded Social Security scheme is the most efficient commitment device available in the economy to support retirement consumption. Under this framework it is quite possible that alternative sources of commitment, even voluntary ones such as housing or pension funds, widespread in practice, create larger welfare improvements. Future research will address this issue.

References Auerbach, Alan J. and Laurence J. Kotlikoff. Dynamic Fiscal Policy. Cambridge, MA: Cambridge University Press, 1987. Bucciol, Alessandro. “Social Security, Self-Control Problems and Unknown Preference Parameters.” Netspar Discussion Paper, 01/2009-001 (2009).




DeJong, David N. and Maria Ripoll. “Do Self-Control Preferences Help Explain the Puzzling Behavior of Asset Prices?” Journal of Monetary Economics, 54.4 (2007): 1035-1050.

DellaVigna, Stefano. “Psychology and Economics: Evidence from the Field.” Journal of Economic Literature, 47.2 (2009): 315-372.

Feldstein, Martin. “The Optimal Level of Social Security Benefits.” Quarterly Journal of Economics, 100.2 (1985): 303-320.

Gul, Faruk and Wolfgang Pesendorfer. “Temptation and Self-Control.” Econometrica, 69.6 (2001): 1403-1435.

Huang, Kevin X.D., Zheng Liu, and Qi Zhu. “Temptation and Self-Control: Some Evidence from the Consumer Expenditure Survey.” Emory Economics Discussion Paper, 0507 (2005).

Imrohoroglu, Ayse, Selahattin Imrohoroglu, and Douglas H. Joines. “Time Inconsistent Preferences and Social Security.” Quarterly Journal of Economics, 118.2 (2003): 745-784.

Kumru, Cagri S. and Athanasios C. Thanopoulos. “Social Security and Self Control Preferences.” Journal of Economic Dynamics and Control, 32.3 (2008): 757-778.

Laibson, David I. “Golden Eggs and Hyperbolic Discounting.” Quarterly Journal of Economics, 112.2 (1997): 443-477.

Rabin, Matthew. “Psychology and Economics.” Journal of Economic Literature, 36.1 (1998): 11-46.



Econometrics

Inferences on some Heterogeneous Models

by: Zhengyuan Gao

Homogeneous models are simpler than heterogeneous models and usually generate enough power for prediction or forecasting. However, homogeneous models cannot explain the differences caused by intrinsic differences between consumers, or distinguish agent-based effects. For example, an economist may be interested in how a triggering policy affects different groups of people, while a decision maker in a company worries about how his strategy influences his counterparts, large and small. Heterogeneous models based on micro-level theory can exploit the complexity of rational agents' models. This is extremely important for inferential work on micro-econometric models as well as on aggregated macro data. An econometrician conducts a test on a model and makes a suggestion on how good the model is. But prior to this, the econometrician should bear in mind that the model should be useful for interpreting a specific issue. It often happens that a simple linear homogeneous model has a satisfactory p-value, but this does not mean that the model is suitable for illustrating a nonlinear dynamic phenomenon. Thus inferential processes for heterogeneous models are essential. This paper briefly discusses some “compromised complex” models and methodological developments for their inferential processes.

Introduction

A simple enough setting is y = f(x, ε), where f is any kind of function and ε is the disturbance. The explanatory variables x can be a vector of independent variables or a lag of y. For a structural model, y = [y₁, x₁]′ and x = [x₁, y₁]′. The complexity of this setting depends on three parts: 1. what kind of ε; 2. what kind of combination between ε and x; 3. what kind of f. In the following, we give a detailed discussion of how these deviations affect the complexity and how they relate to heterogeneity.

For the first question, it is common to assume that agents' choices are distributed as multinomial logit/probit in discrete choice models. But such an assumption sacrifices heterogeneity by imposing that every agent is identical. A parametric assumption on ε asks for a full specification; however, a heterogeneous model is usually too complicated to be specified fully. An eclectic way is to extract some common features among the agents and express the remaining behavior in a distribution-free way. If the common features can be expressed in terms of moment conditions, the model becomes more flexible and inherits essential properties from the original “full” model. A moment-based method, unfortunately, unlike Maximum Likelihood (ML), will not necessarily be efficient in both the asymptotic and the computational sense. We are doing a series of works on improving computational

efficiency while preserving the ML asymptotic properties and robust features under a moment-based method.

For the second question, to set up a likelihood or an objective function of ε for optimizing, one needs to extract ε from the rest of f; therefore, for simplicity, the disturbance ε is usually assumed to be an additive term in the function f(x). However, a non-additive ε is necessary in some cases, for example in dynamic games. In dynamic games, the private shocks of each company only affect its current decision, while future values depend on the history of decision rules; the current private shocks thus have indirect effects on the future values. Apparently, these effects of ε on y are nonlinear, so the model is highly nonlinear with respect to (w.r.t.) the parameters and disturbances. Standard inferential processes are not directly applicable, and a special treatment has to be applied to this endogenous dynamic model.

For the last question, in classical models f(x) is a deterministic function. To improve the fitting ability, f(x) can be set as a random function f(x,β)

Zhengyuan Gao Zhengyuan Gao is a Ph.D. candidate at the Tinbergen Institute and the University of Amsterdam. Before his Ph.D., he obtained a B.Sc. in Computing Mathematics at Xi'an Jiaotong University (China) and an M.Sc. in Economics and Econometrics at the University of Southampton (UK). He spent one year working as a consultant in KPMG's risk management department in China. His research interests are econometrics and computational economics.




where the function f is parameterized by a random β, called a random coefficient. This model evolves from the original deterministic model via f(x) = ∫f(x,β)dG(β), where G is a cumulative distribution function (CDF) of β. It is called a mixture model. The mixture model can approximate a complicated shape with a simple functional form. For example, nonlinear data are fitted poorly if only a single linear regression is used, but the fit improves consistently with the number of linear regressions. The idea is to decompose the nonlinear data into several seemingly linear regressions and then give weights to the different regressions, where the total weights sum up to one. Therefore, the unknown function G is the most essential element of this problem. But identification of G is a non-trivial problem. Gao (2008) develops a method to approximate G, and the approximation holds even if the data are only partially observable.

We can see that a small extension of a simple model generates significant difficulties due to the increasing complexity, but the extensions allow the models to capture more complex phenomena in the real world. The next section illustrates a semi-parametric likelihood inference and its improvement for some specific heterogeneous models. Such a method is applied to a dynamic discrete choice model in the section on dynamic games. The section on mixture models describes a variation of this inference and its application to censored mixture models.

Robust Semi-parametric Inference

In neo-classical or new classical economics, models show that series of actions of agents, households or firms initiate specific economic phenomena in society. Unfortunately, these micro-level behaviors depend on some unobservable elements in the economy, and therefore it is hard to test the theories against feasible information. Econometricians should extract the useful information from noisy observable events. Since people have little knowledge about what kind of random patterns are embedded in human behavior and how the useful randomness can be extracted, specifications of the randomness are crucial.

A simple specification of the random variable (r.v.) often uses a parametric assumption and obtains parameter values based on likelihood optimization. If the r.v. is assumed to be normally distributed, the least squares method is sufficient. But a small deviation from this fully parametric assumption may have a disastrous effect on recovering the random behavior of the agents. One can alter the optimization criterion function in order to obtain statistical robustness (see Huber, 1981), or one can discard redundant strong specifications and only use the information in lower-order moments in order to obtain robustness in the economic sense (see Hansen, 1982), or one can do both.

The statistical robustness depends on the sensitivity of the derivative of the objective function w.r.t. the parameters (the Jacobian). If one applies the delta method to analyze the


asymptotic behavior of the estimator, one worries about the sensitivity of the estimator when the input variables of the Jacobian change slightly. When the analytical Jacobian is not available or the delta method is infeasible, a more general functional derivative should be considered, e.g. the Fréchet or Hadamard derivative. An ideal robust estimator in the statistical sense should have a stable Jacobian for a class of similar r.v.'s; meanwhile, the objective function should be able to detect the dissimilarity once the distribution of the underlying r.v. is quite different from the specified distribution.

A simple example is Least Squares (LS) estimation. It uses the L2 norm in the objective function, [f − g(x,θ)]², and its Jacobian/Influence Function (IF) depends on ∂g(x,θ)/∂θ. The difference between f and g(·) is detected by the squaring operation. However, ∂g(x,θ)/∂θ is very sensitive to a “big” x. An alternative is to use the L1 norm for the criterion function, namely least absolute deviation (LAD). Like LS, LAD can measure the difference between f and g(·), but it does not amplify the effect of outliers. However, the IF of LAD is more complicated. Since the absolute value operation is not differentiable, the derivative has to be extended to a more general space. In a Banach space, an existence theorem guarantees a differentiation operator for LAD, and a perturbation theorem shows that the derivative is robust in a more general space, even with infinite x values, which coincides with our expectation.

Robustness from the economics viewpoint requires different techniques. The specification of an r.v. has direct effects on the economic model's outputs. A too restrictive specification not only rules out many interesting applications but also carries a bigger risk of misspecification. Therefore, instead of assuming a fully specified r.v., i.e. assuming all moments are known, one only focuses on the specification of lower-order moments. First-order and second-order moments can capture much of the information in the stochastic behavior, and they are easily derived from economic theory. Hansen (1982) gives an example on asset pricing models, where fat-tail phenomena are quite common. In the model, he derives a first-order moment condition of the stochastic process based on the Euler equations of the dynamic system. There is no assumption on higher-order moments, so that a broader class of models can be taken into account. The parameters of the system can be identified and estimated with relatively little information. This method is robust because a class of models with similar properties achieves approximately the same results; that is, a small perturbation cannot lead to a disastrous change.

It is essential to obtain an estimation method that is robust in both senses. Gao (2009) constructs a Local Empirical Likelihood (LEL) estimator based on an empirical local log-likelihood criterion subject to moment conditions. The likelihood function comes from the moment conditions and inherits ML properties. It does not require a fully specified model and has a robust score function. The idea of the method is to separate the parameter space into several grids. In each grid, we consider a



fixed parameter value and evaluate the corresponding likelihood function on the neighborhoods of this value. There are two advantages: the objective function is more tractable, and the IF is robust under many imperfect situations. In the local neighborhoods, the log-likelihood function approximates a linear-quadratic function with a non-degenerate Hessian matrix in the quadratic term. Therefore, for any optimization software the Hessian matrix is available and a Newton-type algorithm is feasible, which achieves a higher-order convergence rate. Moreover, the robust IF implies that this estimation can be applied to a broader class of problems.
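The following sketch illustrates, under strong simplifying assumptions, the empirical-likelihood computation behind this grid idea for a single scalar moment m(x,θ) = x − θ; it is only a stylized illustration and not the LEL estimator of Gao (2009).

```python
import numpy as np

# For each fixed theta on a grid, the dual problem reduces to one equation
# in the Lagrangian gamma,
#     sum_i m_i / (1 + gamma * m_i) = 0,
# solved here by a few Newton steps; the implied weights are then
# p_i = 1 / (n * (1 + gamma * m_i)).
rng = np.random.default_rng(2)
x = rng.normal(0.5, 1.0, size=200)

def el_loglik(theta, x, steps=30):
    m = x - theta
    gamma = 0.0
    for _ in range(steps):
        g = np.sum(m / (1 + gamma * m))           # first-order condition
        H = -np.sum(m**2 / (1 + gamma * m) ** 2)  # its derivative
        gamma -= g / H                            # Newton step
    p = 1.0 / (len(x) * (1 + gamma * m))
    return np.sum(np.log(p))                      # empirical log-likelihood

grid = np.linspace(0.3, 0.7, 9)
best = max(grid, key=lambda th: el_loglik(th, x))
print(best, "close to the sample mean", round(x.mean(), 3))
```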

Dynamic Games

In general, a differentiability assumption on the objective function and constraints is necessary for computing a feasible solution. But such an assumption may not be suitable in a complicated model, e.g. one with a highly nonlinear moment constraint. In LEL, the nonlinear moment condition can be reduced to a linear condition on Lagrangians or slack variables. This property allows us to apply LEL to some special models where the moment condition is non-differentiable.

In dynamic games, the model is based on agents' interactions. The future value of a single agent depends on his state, his current decision and the decisions of his opponents. The public states are known to all the players in the game, while the private states are only known to the individuals and are usually considered as random effects. Unlike theoretical economists, who want to solve the dynamic game, econometricians are interested in estimating the parameters of the model. The estimation is difficult to implement if one does not impose a parametric assumption on the unknown random variables. Gao (2009) proposes a process to construct moment conditions rather than assume full parametric forms of the r.v. The moment condition is non-differentiable, so we use LEL to estimate the parameters.

The dynamic choice model generates a Bellman equation in a dynamic programming problem such that V = TV′, where T is a non-linear operator and V′ is the next-period value. It is possible to use a nonparametric approximation to solve the system. The difference between the parametric V and the nonparametric solution T̂V generates a moment condition function. Since both V and T̂V depend on the structural parameters and T̂V is nonlinear w.r.t. them, the moment condition function is non-differentiable. LEL separates the parameter space and considers a fixed value of θ in each grid, so the moment function m(x,θ) is a numerical vector in this grid. The moment condition ∫m(x,θ)dF(x) = 0 is a linear constraint on dF(·), and the dual representation only depends on a linear constraint on the Lagrangian γ such that ∑γm(x,θ) = 0. Putting these linear constraints into the optimization problem, we can estimate F and, furthermore, the structural parameters θ. Therefore, LEL mitigates the computational burden and solves seemingly infeasible problems.

Mixture Models

Finally, we consider a distribution F with mixing function G such that F(x) = ∫f(x,β)dG(β). The mixing function G may be a conditional distribution such that G(β) = G(β | L < β < U), where U and L are threshold values. In this case, one needs to impose an additional restriction to identify F. As in the previous models, the mixture model also has a moment restriction ∫m(x,θ)dF(x). The difference is that F here is a mixture and it is quite possible that the distribution G is censored. Econometricians can consider this model with some clusters and investigate the clustering structure.

An example is demand estimation with housing market data. A potential buyer may have a private valuation β for the housing market. This valuation is independent of other factors such as income, marital status or educational background, but the valuation will affect his intention of buying a house. In addition, people whose valuation is below a certain threshold level will not appear in the dataset: because the data are collected by housing companies, households who are not interested in this market may not visit the companies. So the valuation is unobservable and censored.

The approach to identifying the mixing distribution requires an additional constraint called a self-consistency condition. The constraint basically says that there exists one and only one mixing density dG satisfying a surjective mapping to the mixture distribution F. Empirical likelihood can incorporate this condition as a linear constraint on F, so the implementation is straightforward. Asymptotic theory for the estimator can be derived via the functional delta method on G.
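A tiny simulation of the random-coefficient idea f(x) = ∫f(x,β)dG(β) follows; the two-point mixing distribution and all numbers are chosen purely for illustration.

```python
import numpy as np

# With component lines f(x, b) = b * x and a two-point mixing distribution
# G, the mixture regression function is the G-weighted average of the lines;
# a Monte Carlo average over draws b ~ G recovers it.
rng = np.random.default_rng(3)
support, weights = np.array([0.5, 2.0]), np.array([0.3, 0.7])

x = np.linspace(0.0, 1.0, 5)
exact = (weights @ support) * x            # E[b] * x for linear components

b = rng.choice(support, p=weights, size=100_000)
mc = (b[:, None] * x).mean(axis=0)         # Monte Carlo approximation

print(np.round(exact, 3))
print(np.round(mc, 3))                     # close to the exact values
```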

Conclusion

This paper briefly discussed three types of heterogeneous models. The models emphasize different aspects of deviation from the standard homogeneous model. We then introduced three inferential processes that all evolve from Empirical Likelihood. Among these estimators, robustness is the crucial point on which we focus. Besides robustness, these estimators achieve the standard consistency and efficiency properties.

References

Gao, Z. "Empirical Likelihood of Censored Mixture Models." Working Paper (2008).

Gao, Z. "Robust Semi-parametric Inference on Dynamic Models." Job Market Paper (2009).

Hansen, L. "Large Sample Properties of Generalized Method of Moments Estimators." Econometrica, 50 (1982): 1029-1054.

Huber, P. Robust Statistics, Wiley, 1981.



Actuarial Sciences

Embedded Options and Solidarity Transfers within Pension Deals

by: Pim van Diepen

The rapid change in the socio-economic discussion around pensions as a result of the recent turmoil in financial markets is remarkable. Currently the big question is who will be paying the bill, as funding ratios of Dutch pension funds have dropped significantly, whereas 2007 headlines were quoting “Who owns the excessive pension fund buffers?” The rapid change and intensity of these discussions in the media and among politicians illustrate that pension deals are often incomplete. Many actions under normal circumstances have been agreed upon in contracts, but often it is not specified how stakeholders within a pension deal should handle certain extreme events. This article discusses which loose ends can be identified within pension deals. Subsequently, it is shown how option pricing theory and stochastic discount factors can be useful instruments in an approach to assess value.

Pim van Diepen

Pim van Diepen obtained a Master of Science degree (cum laude) in Actuarial Science at the University of Amsterdam in May 2009. This article is a summary of his master thesis, written under the supervision of Prof. Dr. A.A.J. Pelsser. For the past five years he has been working for Mercer in the Amstelveen office, currently as a senior ALM consultant. Besides this, he is a member of the examiners board Mathematics & Statistics at the Dutch Actuarial Institute.

Embedded options

The incompleteness of pension deals could imply actions by stakeholders that are not predetermined, but still required under stressed circumstances. An example of such an action is the 'pension put', which can be described as the obligation of a sponsoring corporate to pay an additional premium in case the funding ratio of the pension fund falls below a certain required solvency level (see figure 1). For a Dutch pension scheme this level is generally the minimal required solvency (MVEV) as stated in the Financial Assessment Framework (FTK), which is in turn based on the EU Directive (2003/41/EC). Another kind of option is the compensation for price inflation that pension fund boards can choose to award to participants. This typically Dutch system is called a conditional indexation policy, since the choice is highly related to the level of the nominal funding ratio. In case of very high funding ratios, sponsors of the pension plan are often granted a premium reduction or even a refund. The value of this possible cashflow is often referred to as the 'corporate call' (see figure 1). This third option has a strike that is in many cases linked to the real funding ratio or full indexation funding ratio of the pension fund, mainly due to the restrictions imposed on refunds by article 129 of the Dutch pension act.

All three examples can be summarized as embedded options within pension deals. Especially the 'pension put' is currently an important topic for most (company-wide) pension funds providing defined benefit plans, mainly because this embedded option is deep in-the-money as a result of the significant decrease in nominal funding ratios during 2008 (the Dutch Central Bank stated on its website an average decrease from 144% to 95%). In case the 'pension put' is not defined in the financing agreement, the pension fund board and the sponsoring corporate will have to debate who pays the bill. Is the sponsor willing and able to raise premium levels, or will the participants have their accrued benefits cut? These questions should also be addressed in the recovery plans of Dutch pension funds.

Valuing embedded options

The described embedded options can be (approximately) valued using option pricing theory, for example with the analytical formula for a plain vanilla European option (Black and Scholes, 1973). The starting point is the assumption that the value of the pension assets (S) follows a geometric Brownian motion. The following process is obtained under the real-world probability measure P:

dS = μS·S·dt + σS·S·dz^P



Figure 1: Graphical presentation of the ‘pension put’ and ‘corporate call’

with dz = ε·√dt and ε~φ(0,1) (a standard normal distribution). After applying Itô's Lemma it can be derived that under the measure P:

d ln(S) = (μS − σS²/2)·dt + σS·dz^P

Under the risk-neutral measure Q, this same process can be described as:

d ln(S) = (r − σS²/2)·dt + σS·dz^Q

A European put option has a payoff of the form f = max(K − S, 0). By constructing the replicating portfolio, determining the riskless payoff of this portfolio and discounting that payoff at the risk-free rate (all under a no-arbitrage assumption), Black and Scholes found the following closed-form solution for the value of a European put option:

put = e^(−rT)·K·φ(−d2) − S0·φ(−d1)

with d1 = (ln(S0/K) + (r + σS²/2)·T) / (σS·√T) and d2 = d1 − σS·√T.

The parameters S0 (current value of pension assets), K (strike price), σS (volatility of pension asset returns), r (risk-free rate) and T (time to maturity) are input variables; φ(∙) is the standard normal cumulative distribution function. By setting T equal to the recovery period and K equal to the expected pension liability value after this period plus the minimal required solvency, one could (approximately) value the 'pension put'; a numerical sketch follows after the list below. The same method can be applied to the 'corporate call' by using the closed-form solution for a European call option. If the embedded option is structured in a more complex way, one can often still use the analytical formulas of exotic options. Focussing on the 'pension put', one could use:

● European options - in case the employer has to pay the negative difference between the value of the pension assets and the value of the pension liabilities after a certain maximum recovery period;
● Lookback options - in case the employer has to pay the negative difference between the minimum value of the pension assets and the value of the pension liabilities after a recovery period;
● Asian options - in case the employer has to pay the negative difference between the geometric average value of the pension assets and the value of the pension liabilities after a recovery period;
● Basket options - in case the employer has to pay the negative difference between the arithmetic average value of the pension assets and the value of the pension liabilities after a recovery period (rough approximation); and
● Binary options - in case the employer has to pay a fixed additional premium in case the market value of the pension assets is lower than the value of the pension liabilities after a recovery period.

More information on these exotic option pricing formulas can be found in Hull (2006).
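As a rough numerical sketch of the plain European case above (all figures are invented for illustration; the 5% MVEV loading is an assumption, not the FTK value):

import math
from statistics import NormalDist

def pension_put(S0, L_T, mvev, r, sigma_s, T):
    """Black-Scholes value of the 'pension put': a European put on the pension
    assets, struck at the expected liabilities plus a required solvency loading."""
    K = L_T * (1.0 + mvev)    # strike: expected liabilities plus MVEV loading
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma_s**2) * T) / (sigma_s * math.sqrt(T))
    d2 = d1 - sigma_s * math.sqrt(T)
    N = NormalDist().cdf
    return math.exp(-r * T) * K * N(-d2) - S0 * N(-d1)

# Illustrative numbers: assets 95, expected liabilities 100 after a 5-year
# recovery period, 5% MVEV loading, 4% risk-free rate, 10% asset volatility.
print(pension_put(S0=95.0, L_T=100.0, mvev=0.05, r=0.04, sigma_s=0.10, T=5.0))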

The Margrabe result

One could argue that the presented pricing of embedded options only works when liabilities are valued against a fixed interest rate. Since the implementation of the FTK on January 1st, 2007, pension liabilities are valued against market interest rates, which implies an additional source of volatility. Using European option pricing formulas could then lead to counterintuitive results. For example, investing in long-term government bonds would increase the volatility of the pension assets, but these bonds would also be a good match with respect to the market value of the pension liabilities. This would reduce the overall volatility of the funding ratio under the FTK framework and should thus reduce the value of a 'pension put'. European options would incorrectly result in a higher value of the 'pension put', since the volatility of the pension liabilities is not incorporated. An interesting solution can be found in the Margrabe (1978) result on how to value options that exchange one asset for another. An additional geometric Brownian motion can be assumed for the value of the pension liabilities: dL = μL·L·dt + σL·L·dz. Then a closed-form solution for the 'pension put' exists:

put = L0·φ(−d2) − S0·φ(−d1)

with d1 = (ln(S0/L0) + (σS,L²/2)·T) / (σS,L·√T), d2 = d1 − σS,L·√T and

σS,L = √(σS² + σL² − 2·ρS,L·σS·σL).

The parameter σS,L should be interpreted as the volatility of the pension assets with respect to the market value of the pension liabilities, or as the standard deviation of the nominal funding ratio. This measure is often referred to as the 'tracking error', a risk measure that is especially important to pension funds that have abandoned traditional fixed asset allocations and switched to a modern risk-budget driven investment policy.
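A small sketch of the Margrabe valuation above, with invented inputs; note how a high correlation (a well-matched bond portfolio) shrinks the tracking error and hence the put value:

import math
from statistics import NormalDist

def margrabe_pension_put(S0, L0, sigma_s, sigma_l, rho, T):
    """Value of the option to exchange pension assets for liabilities,
    i.e. payoff max(L_T - S_T, 0); sigma_sl is the 'tracking error'."""
    sigma_sl = math.sqrt(sigma_s**2 + sigma_l**2 - 2.0 * rho * sigma_s * sigma_l)
    d1 = (math.log(S0 / L0) + 0.5 * sigma_sl**2 * T) / (sigma_sl * math.sqrt(T))
    d2 = d1 - sigma_sl * math.sqrt(T)
    N = NormalDist().cdf
    return L0 * N(-d2) - S0 * N(-d1)

print(margrabe_pension_put(S0=95.0, L0=100.0, sigma_s=0.12, sigma_l=0.10, rho=0.9, T=5.0))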

Fair Value ALM

Asset Liability Management (ALM) is developing from the 'classical' approach towards a more market consistent, value based approach. Classical ALM can be characterized by expressing results in ratios and probabilities. Over the projection horizon many figures are calculated, such as the average funding ratio, underfunding probabilities, cumulative indexation realization and premiums as a percentage of the pension base. Fair Value ALM, however, has an additional focus on consistently pricing solidarity transfers and embedded options. Solidarity transfers are value transfers between stakeholders within a pension deal as a result of a certain policy decision. The construction of a Fair Value ALM framework starts with a market model that generates consistent economic scenarios under the risk-neutral probability measure Q or the real-world probability measure P. To be able to calculate realistic 'classical' ALM results, it is necessary to simulate at least under the real-world measure. Ultimately it should not matter whether one calculates the value of a derivative under the risk-neutral measure (by discounting at the risk-free rate) or under the real-world measure (by using a stochastic discount factor or deflator): both should lead to the same result (Pelsser, 2003).
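A minimal Monte Carlo check of this equivalence in a plain Black-Scholes toy setting (our own illustration, not the article's full model): under P, the deflator D_T = exp(−rT − ½κ²T − κW_T) with market price of risk κ = (μ − r)/σ reproduces the risk-neutral prices.

import numpy as np

# Deflator pricing check in a plain Black-Scholes world (toy example).
mu, r, sigma, S0, T, n = 0.07, 0.03, 0.20, 100.0, 5.0, 400_000
rng = np.random.default_rng(42)
W = rng.standard_normal(n) * np.sqrt(T)                     # P-Brownian motion at T
S_T = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W)    # real-world asset values
kappa = (mu - r) / sigma                                    # market price of risk
D_T = np.exp(-r * T - 0.5 * kappa**2 * T - kappa * W)       # stochastic deflator

# E^P[D_T * S_T] should recover S0, and E^P[D_T] the risk-free discount factor.
print(np.mean(D_T * S_T), "~", S0)
print(np.mean(D_T), "~", np.exp(-r * T))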

The BSHW model

In this article a Hull-White (one-factor) model is used to simulate interest rates:

dr_t = (θ_t − a·r_t)·dt + σr·dB_t^Q

An interesting property of this model is that it can initially be calibrated to an actual interest rate curve derived from the swap or bond market. In this setting a Nelson-Siegel (1987) expression has been used for the instantaneous forward rate. By integrating this expression the following model for the spot rate can be derived:

r(0,t) = b0 + (b1 + b2)·(1 − e^(−t/τ))/(t/τ) − b2·e^(−t/τ)

with b0, b1, b2 and τ respectively the level parameter, slope parameter, curvature parameter and scale parameter. These parameters can be calibrated to the current term structure of interest rates by using a Nonlinear Least Squares (NLS) approach. The (risky) pension assets follow a combined Black-Scholes Hull-White (BSHW) process:

dS_t = r_t·S_t·dt + √(1 − ρ²)·σS·S_t·dW̃_t^Q + ρ·σS·S_t·dB̃_t^Q
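The NLS calibration step could look as follows (a sketch; the observed zero rates below are invented):

import numpy as np
from scipy.optimize import curve_fit

def ns_spot(t, b0, b1, b2, tau):
    """Nelson-Siegel spot rate r(0,t), as integrated from the forward curve."""
    x = t / tau
    return b0 + (b1 + b2) * (1.0 - np.exp(-x)) / x - b2 * np.exp(-x)

# Hypothetical observed zero-coupon rates (maturity in years, rate).
t_obs = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 30.0])
r_obs = np.array([0.025, 0.029, 0.035, 0.040, 0.043, 0.044])

# Nonlinear least squares fit of the four Nelson-Siegel parameters.
params, _ = curve_fit(ns_spot, t_obs, r_obs, p0=[0.04, -0.02, 0.01, 2.0])
b0, b1, b2, tau = params
print(dict(b0=b0, b1=b1, b2=b2, tau=tau))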

Both processes are formulated here under the risk-neutral measure Q. One should define the link between the Q-measure and the P-measure, so that one can switch between these measures by using the Radon-Nikodym derivative (for a more extensive derivation see Yip, 2005). This link is defined here by the market prices of risk κi,t:

dW̃_t^Q = dW̃_t^P + κ1,t·dt,  with κ1,t = (μS − r_t)/(σS·√(1 − ρ²)) − ρ·κ2,t/√(1 − ρ²)

dB̃_t^Q = dB̃_t^P + κ2,t·dt,  with κ2,t = (r̃_t − θ_t)/σr

Important to note is that the simulation of these two processes requires sampling correlated drawings from a normal distribution by means of a Cholesky decomposition. After several mathematical operations the stochastic discount factors under the real-world probability measure P can be derived. The HW deflator is calculated as:

D_T^HW = exp( −½·(κ2²·T − (2·κ2·σr/a)·(aT − (1 − e^(−aT)))) )
· exp( −(σr²/(2·a³))·(aT − 2·(1 − e^(−aT)) + ½·(1 − e^(−2aT))) )
· exp( −κ2·B_T + σr·∫0T B(s,T)·dB_s )
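A bare-bones Euler sketch of the two correlated processes, with made-up parameters and a constant θ for brevity (the article calibrates a time-varying θt to the Nelson-Siegel curve). Mixing the independent drivers with the weights (ρ, √(1 − ρ²)) is exactly the Cholesky factor of the 2x2 correlation matrix:

import numpy as np

# Euler simulation of the HW short rate and the BSHW asset value (sketch).
a, sigma_r, sigma_s, rho = 0.1, 0.01, 0.15, 0.3
r0, S0, theta, dt, steps, n = 0.03, 100.0, 0.004, 1.0 / 12.0, 180, 10_000
rng = np.random.default_rng(7)

r = np.full(n, r0)
S = np.full(n, S0)
for _ in range(steps):
    z1, z2 = rng.standard_normal((2, n))   # independent N(0,1) drawings
    dB = z1 * np.sqrt(dt)                  # driver of the short rate
    dW = z2 * np.sqrt(dt)                  # independent extra driver of the asset
    # Asset shock rho*dB + sqrt(1-rho^2)*dW: the Cholesky mix of the two drivers.
    S = S * (1.0 + r * dt + sigma_s * (rho * dB + np.sqrt(1.0 - rho**2) * dW))
    r = r + (theta - a * r) * dt + sigma_r * dB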

The BSHW deflator is calculated as:

D_T^BSHW = exp( −(1/(2·(1 − ρ²)))·(κ1² + κ2² − 2·ρ·κ1·κ2)·T )
· exp( −((ρ·κ1 − κ2)/(2·(1 − ρ²)))·(2·σr/a)·(aT − (1 − e^(−aT))) )
· exp( −(σr²/(2·(1 − ρ²)·a³))·(aT − 2·(1 − e^(−aT)) + ½·(1 − e^(−2aT))) )
· exp( −(1/√(1 − ρ²))·((κ1 − ρ·κ2)·W_T + ρ·σr·∫0T B(s,T)·dW_s) )
· exp( −κ2·B_T + σr·∫0T B(s,T)·dB_s )

Consistency checks

Simulated results should be checked by performing several tests, such as an interest rate test, a stock price test and an option price test. The interest rate test is based on the HW deflator, from which the initial interest rate curve can be derived:

R_T^(ZC,HW) = −ln( (1/N)·Σ_{i=1..N} D_T,i^HW ) / T

The stock price test is based on the BSHW deflator. On average, the simulated future stock prices discounted with the BSHW deflator should equal the initial stock price at all times T:

S0 = (1/N)·Σ_{i=1..N} D_T,i^BSHW · S_T,i^BSHW

The option price test is based on comparing the value of European options computed with deflators to the same value computed with analytical formulas. In the latter, a correction must be made that allows for the changing volatility structure of interest rates over time in the Hull-White model (mainly due to the mean-reversion parameter a). The performed simulation (2,000 scenarios over a 15-year time horizon) of this market model showed correct results at a 5% significance level.
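The first two checks are straightforward to express in code; the sketch below assumes arrays D_hw, D_bshw and S_T of simulated horizon deflators and asset values, as produced by a simulation like the one above:

import numpy as np

def interest_rate_test(D_hw, T):
    """Recover the zero-coupon rate implied by the simulated HW deflators."""
    return -np.log(np.mean(D_hw)) / T

def stock_price_test(D_bshw, S_T):
    """Deflated future stock prices should average back to the initial price."""
    return np.mean(D_bshw * S_T)

# e.g. compare interest_rate_test(D_hw, 15.0) with the Nelson-Siegel r(0,15),
# and stock_price_test(D_bshw, S_T) with S0, at a chosen significance level.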

Table 1: An example of a Fair Value ALM approach

Pension Fund Balance Sheet

Assets                                        Liabilities
Pension Assets (t=0)                 1,000    Pension Liabilities (t=0)              1,000
Contributions (actuarial required)     368    Future Pension accrual                   368
Contributions (additional solvency)      0    Limit pension accrual - actives            0
Contributions (recovery premiums)        0    Cutting benefits - actives                 0
Benefit payments                      -414    Cutting benefits - pensioners              0
                                              Future indexation (x%) - pensioners        1
                                              Future Benefit payments                 -414
Option on deficit (T=15)                21    Option on surplus (T=15)                  20
- option value at t=0                    0    - option value at t=0                      0
- delta option value (t=0, T=15)        21    - delta option value (t=0, T=15)          20
Total Pension Assets (T: 0-15)         976    Total Pension Liabilities (T: 0-15)      976

Nominal funding ratio (t=0)                100.0%
Nominal funding ratio (T=15)               115.9%
Fair Value funding ratio (T=15)             99.9%

Prob. (FR<100% | T=15)                     29.35%
Probability of indexation in a year        16.44%
Probability of cutting benefits in a year   0.84%

Business Case 2009 - link to actuality

Currently the Dutch commissions Don and Goudswaard are evaluating the FTK parameters and solvency requirements. The investment manager (APG) of the largest Dutch pension fund (ABP) has recently suggested reducing the funding ratio volatility by discounting liabilities against a 7-year moving average market interest rate. This is a very interesting case for further research with a Fair Value ALM model, because such a decision could imply huge value transfers between stakeholders.

Deflators in ALM context

The consistent economic scenarios and derived deflators are then applied to the asset and liability modules of a pension ALM model. All future cashflows, such as employer contributions, pension accrual, indexations, benefit cuts and benefit payments, can then be discounted using the appropriate deflators from every year t back to t=0. The surpluses and deficits that remain in the different scenarios after the projection horizon can be discounted with the applicable deflators, leading to the values of the 'option on deficit' and the 'option on surplus' respectively.

The example in table 1 illustrates an initial nominal funding ratio (NFR = pension assets / pension liabilities) of 100%. The same ratio after the 15-year projection horizon is 115.9%. The Fair Value funding ratio (FVFR = (pension assets − 'option on deficit' + 'option on surplus') / pension liabilities) is 99.9%. How can this be? The NFR is a non risk-adjusted figure simulated under the real-world probability measure, benefiting from the risk premium on risky pension assets, whereas the FVFR is a risk-adjusted figure constructed by discounting values with the appropriate deflators. The reason that the ratio is slightly below 100% is that in this pension deal all employer contributions are exactly equal to the value of the pension accrual (no solvency loading), while the indexation is financed from the fund's own assets (capital drain). This means that a value transfer takes place, resulting in a higher value of the 'option on deficit' compared to the 'option on surplus' and thus an FVFR slightly below 100%.

This approach is very interesting when it comes to investigating different policy decisions and how they would affect the different stakeholders within the pension deal. Policy decisions one could think of are: de-risking the investments, increasing premiums, cutting benefits, limiting pension accrual and limiting future indexations. The stakeholders included in the example are the employer, the actives and the pensioners. In more advanced modelling one could even allocate value transfers to different age cohorts of active and inactive participants.
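Once the deflated option legs are available, the FVFR calculation is compact; a sketch with assumed inputs (horizon deflators D_T and funding surpluses surplus_T per scenario, names invented):

import numpy as np

def fair_value_funding_ratio(A0, L0, D_T, surplus_T):
    """FVFR = (assets - option on deficit + option on surplus) / liabilities,
    with both option legs valued by deflating the horizon payoffs."""
    opt_deficit = np.mean(D_T * np.maximum(-surplus_T, 0.0))   # deflated shortfalls
    opt_surplus = np.mean(D_T * np.maximum(surplus_T, 0.0))    # deflated surpluses
    return (A0 - opt_deficit + opt_surplus) / L0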


Conclusion

The objective of this article was to identify options that are embedded in many pension deals, and to present modern techniques for attaching value to these embedded options and solidarity transfers, both through closed-form solutions and in a path-dependent ALM context. Closed-form valuation can be performed using traditional Black-Scholes option pricing techniques and their exotic variants. In an ALM context it is necessary to build a market model that can simulate under the real-world probability measure P, which also creates the possibility to derive 'classical' ALM results. One then needs to define stochastic discount factors (deflators) to ultimately derive risk-neutral values of options and solidarity transfers. The model discussed was a Black-Scholes Hull-White framework for risky assets and a Hull-White (one-factor) model for interest rates, combined with the Nelson-Siegel expression to calibrate to an observed market interest rate structure. The comprehensive model presents a framework for optimizing pension policy decisions by taking into account the value transfers that would result between stakeholders.

Important to note is that finding the best possible model for interest rates and asset values was explicitly not a goal. Models that allow for non-constant volatility structures and switching correlations have not been investigated, but could have dynamics that better represent today's market circumstances. These models would, however, lead to far more complex definitions of the stochastic discount factors. In further research it is also advisable to perform a sensitivity analysis on the input parameters of the market model and on how they affect the values and standard errors of the embedded options and solidarity transfers.

References

Black, F. and M. Scholes. "The Pricing of Options and Corporate Liabilities." Journal of Political Economy, 1973.

Hull, J.C. Options, Futures and Other Derivatives (sixth edition). Prentice Hall, 2006.

Margrabe, W. "The Value of an Option to Exchange One Asset for Another." Journal of Finance, 1978.

Nelson, C.R. and A.F. Siegel. "Parsimonious Modelling of Yield Curves." Journal of Business, 1987.

Pelsser, A.A.J. "Waarderen van derivaten: risiconeutraal of deflators?" De Actuaris, January 2003: 29-31.

Yip, H.W. Deflators in a Black-Scholes Hull-White Model. M.Sc. thesis, Erasmus University Rotterdam, 2005.


Puzzle

No correct solutions to last edition's puzzles were submitted, so we have come up with some easier ones in this edition. These puzzles should be solvable for most of you. But first the correct solutions to the puzzles of last edition.

Annual event
900 students started for the event in 100 buses, 9 to a bus.

Strange clock
If the minute hand goes twelve times as fast as the hour hand, then they will meet eleven times during every 12-hour period. By taking the eleventh part of 12 hours as a constant, you will find there is a meeting of the hands every 65 minutes and 300/11 seconds. The hands will therefore next come together at 5 minutes and 300/11 seconds past 7 o'clock.

New puzzles of this edition:

A long walk
You and a friend take a long walk from the university to the library in the center of the city, and you are both planning to drink a beer in a pub on the way, but only your friend is aware of all the distances in between. After you have been walking for forty minutes, you ask your friend how far you have gone. He replies: "Just half as far as it is to the pub." After walking seven kilometres more, you ask your friend how far it is to the library. He replies as before: "Just half as far as it is to the pub." In another hour you both arrive at the library. Are you able to determine the distance between the university and the library?

Budget decrease
Last week I spent half of my budget in two different stores, leaving me with as many cents as I had euros before, and half as many euros as I had cents before. What was my budget and how much of it did I spend?

Solutions
Solutions to the two puzzles above can be submitted up to December 1st. You can hand them in at the VSAE room (C6.06), mail them to info@vsae.nl or send them to VSAE, attn. Aenorm puzzle 65, Roetersstraat 11, 1018 WB Amsterdam, Holland. One book token will be raffled among the correct submissions. Solutions may be submitted in either English or Dutch.



During the last few months the VSAE members have enjoyed their summer holiday, some abroad and some in Amsterdam itself. At the end of August a new group of more than 100 freshmen started their study in Econometrics, Actuarial Science or Operations Research and Management at the University of Amsterdam. Before they started, the VSAE organized introduction days in Friesland, where 56 new freshmen got to know each other better.

The coming period will be filled with interesting projects. At the beginning of October the Beroependagen will take place in the Krasnapolsky Grand NH Hotel in Amsterdam. This two-day career event is organized annually together with the study association FSA (Financial Study Association Amsterdam). At the end of November a group of 50 VSAE members will visit Cologne for four days. In December the annual Actuarial Congress will be organized in Tuschinski, Amsterdam. The theme of the day is 'Actuary of the future' and several interesting speakers will give presentations on this theme. As the VSAE board we look forward to welcoming all members at our projects in the upcoming months.

With the start of the new academic year our study association welcomed a lot of new students. The new students started the year with the famous introduction period of the VU, followed by a weekend on Texel. A new academic year also means a change of board at Kraket. The new board has planned a lot of great activities for the upcoming months, like a kart tournament and a pool tournament. There is also the National Econometricians Soccer Tournament, which will take place in Rotterdam this year. The Kraket board hopes this will be a very successful year for Kraket.

Agenda VSAE

6-7 October     Beroependagen
13 October      Monthly drink
17 November     Monthly drink
20-23 November  Short trip abroad
8 December      Actuarial Congress: Actuary of the future
15 December     General Members Meeting

Agenda Kraket

October         Pool Tournament
October         New Students Activity



