

Computing Equilibria in Anonymous Games

Abstract: We present efficient approximation algorithms for finding Nash equilibria in anonymous games, that is, games in which the players' utilities, though different, do not differentiate between other players. Our results pertain to such games with many players but few strategies. We show that any such game has an approximate pure Nash equilibrium, computable in polynomial time, with approximation O(s²λ), where s is the number of strategies and λ is the Lipschitz constant of the utilities. Finally, we show that there is a PTAS for finding an ε-approximate Nash equilibrium when the number of strategies is two.

1 Introduction

Will you come to FOCS? This decision depends on many factors, but one of them is how many other theoreticians will come. Now, whether each of them will come depends in a small way on what you will do, and hence this aspect of the decision to go to FOCS is game-theoretic — and in fact of a particular specialized sort explored in this paper: Each player has a small number of strategies (in this example, two), and the utility of each player depends on her/his own decision, as well as on how many other players will choose each of these strategies. It is crucial that the utilities do not depend on the identity of the players making these choices (that is, we ignore here your interest in whether your friends will come). Such games are called anonymous games, and in this paper we give two polynomial algorithms for computing approximate equilibria in these games. In fact, our algorithms work in a generalized framework: The players can be divided into a few types (e.g., colleagues, students, big shots, etc.), and your utility depends on how many of the players of each type choose each of the strategies.* Notice that this is a much more general framework than that of symmetric games (where all players are identical); each player can have a very individual way of evaluating the situation, and her/his utility can depend on the choices of others in an individual, arbitrary way; in particular, there may be no monotonicity: For example, a player may prefer a mob with 1000 attendees, mostly students, to a tiny workshop of 20, while a medium-sized conference of 200 may be more attractive than either; a second player may order these in the exact opposite way. Anonymous games comprise a broad and well studied class of games (see e.g. [4, 5, 16] for recent work on this subject by economists) which are of special interest to the Algorithmic Game Theory community, as they capture important aspects of auctions and markets, as well as of Internet congestion.

* The authors were supported through NSF grant CCF-0635319, a gift from Yahoo! Research and a MICRO grant.

Our interest lies in computing Nash equilibria in such games. The problem of computing Nash equilibria in a game was recently shown to be PPAD-complete in general [9], even for the two-player case [6]. Since that negative result, the research effort in this area was, quite predictably, directed towards two goals: (1) computing approximate equilibria (mixed strategy profiles from which no player has incentive more than ε to defect), and (2) exploring the algorithmic properties of special cases of games. The approximation front has been especially fertile, with several positive and negative results shown recently [19, 7, 18, 10, 11, 13]. What is known about special cases of the Nash equilibrium problem? Several important cases are now known to be generic; these include, beyond the aforementioned 2-player games, win-lose games (games with 0-1 utilities) [1], and several kinds of succinctly representable games such as graphical games and anonymous games (actually, the even more specialized symmetric games [14]). For anonymous games, the genericity argument goes as follows: Any general game can be made anonymous by expanding the strategy space so that each player first chooses an identity (and is punished if s/he fails to choose her/his own) and then a strategy; it is easy to see that, in this expanded strategy space, the utilities can be rendered in the anonymous fashion. Note, however, that this manoeuvre requires a large strategy space; in contrast, for other succinct games such as the graphical ones, genericity persists even when the number of strategies is two [15]. Are anonymous games easier when the number of strategies is fixed? We shall see that this is indeed the case.

How about tractable special cases? Here there is a relative poverty of results. The zero-sum two-player case is, of course, well known [22]. It was generalized in [17] to low-rank games (the matrix A + B is not quite zero, but has fixed rank), a case in which a PTAS for the Nash equilibrium problem is possible. It was also known that symmetric games with (about logarithmically) few strategies per player can be solved exactly in polynomial time by a reduction to the theory of real closed fields [23]. For congestion games we can find in polynomial time a pure Nash equilibrium if the game is a symmetric network congestion game [12], and an approximate pure Nash equilibrium if the congestion game is symmetric (but not necessarily network) and the utilities are somehow "continuous". Finally, it is known that anonymous congestion games in graphs consisting of parallel links (equivalently, anonymous games in which the utility of a player, for each choice made by the player, is a nondecreasing function of the number of players who have chosen the same strategy) have pure Nash equilibria which can be computed in polynomial time by a natural greedy algorithm.

In this paper we prove two positive approximation results for anonymous games with a fixed number s of strategies. Our first result states that any such game has a pure Nash equilibrium that is ε-approximate, where ε is bounded from above by a function of the form f(s)·λ. Here λ is the Lipschitz constant of the utility functions (a measure of continuity of the utility functions of the players, assumed to be such that for any partitions x and y of the players into the s strategies, |u(x) − u(y)| ≤ λ·‖x − y‖₁). To get a sense of scale for λ, note that the arguments of u_i range from 0 to n, and so, if u_i were a linear function with range [0, 1], λ would be at most 1/n. f(s) is — unfortunately, still at the time of writing — a quadratic function of the number of strategies. That ε cannot be smaller than λ is easy to see (the matching pennies problem provides an easy example); the results of [8] for congestion games show a similar dependence on λ (what they call "the bounded jump property"). We conjecture that the dependence on s can be improved to sλ. Our proof uses Brouwer's fixed point theorem on an interpolation of the (discrete) best-response function to identify a simplex of pure strategy profiles and from that produce, by a geometric argument, a pure strategy profile that is ε-approximate, with ε bounded as above.

Our second result is a PTAS for the case of two strategies. The main idea is to round the mixed strategies of the players to some nearby multiple of 1/k; then each such quantized mixed strategy can be considered a pure strategy, and, with finitely many — in particular O(k) — pure strategies, an anonymous game can be solved exhaustively in polynomial time in n, the number of players. The only problem is, why should the expected utilities before and after the quantization be close? Here we rely on a probabilistic lemma that may be of much more general interest: Given n Bernoulli random variables with probabilities p_1, …, p_n, there is a way to round the probabilities to multiples of 1/k, for any k, so that the distribution of the sum of these n variables is affected only by an additive O(1/√k) in total variation distance (no dependence on n). This implies that the expected utilities of the quantized version are within an additive O(1/√k) of the original ones, and an O(n^{1/ε²}) PTAS for two-strategy anonymous games is immediate. We feel that a more sophisticated proof of the same kind can establish a similar result for multinomial distributions, thus extending our PTAS to games with any fixed number of strategies.


1.1 Definitions and Notation

An anonymous game G = (n, s, {u^p_i}) consists of a set [n] = {1, …, n} of n ≥ 2 players, a set [s] = {1, …, s} of s ≥ 2 strategies, and a set of ns utility functions, where u^p_i, with p ∈ [n] and i ∈ [s], is the utility of player p when she plays strategy i, a function mapping the set of partitions Π^s_{n−1} = {(x_1, …, x_s) : x_i ∈ ℕ₀ for all i ∈ [s], Σ_{i=1}^s x_i = n − 1} to the interval [0, 1].¹ Our working assumptions are that n is large and s is fixed; notice that, in this case, anonymous games are succinctly representable [23], in the sense that their representation requires specifying O(n^{s+1}) numbers, as opposed to the n·s^n numbers required for general games (arguably, succinct games are the only multiplayer games that are computationally meaningful; see [23] for an extensive discussion of this point). For our approximate pure Nash equilibrium result we shall also be assuming that the utility functions are continuous, in the following sense: There is a real λ > 0, presumably very small, such that |u^p_i(x) − u^p_i(y)| ≤ λ·‖x − y‖₁ for every p ∈ [n], i ∈ [s], and x, y ∈ Π^s_{n−1}. This continuity concept is similar to the "bounded jump" assumption of [8]. The convex hull of the set Π^s_{n−1} will be denoted by Δ^s_{n−1} = {(x_1, …, x_s) : x_i ≥ 0, i = 1, …, s, Σ_{i=1}^s x_i = n − 1}.

¹ In the literature on Nash approximation, utilities are usually normalized this way so that the approximation error is additive.
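To make the definition concrete, here is a minimal illustrative sketch in Python; it is not part of the paper, and the class and function names are hypothetical. Utilities are stored as an explicit table indexed by the partitions of the other n − 1 players, which is exactly the succinct representation discussed above.

from itertools import combinations_with_replacement

def partition_of_others(profile, p, s):
    """x[S, p]: how the players other than p are split among the s strategies."""
    counts = [0] * s
    for q, strategy in enumerate(profile):
        if q != p:
            counts[strategy] += 1
    return tuple(counts)

class AnonymousGame:
    """u[p][i] maps each partition of the other n - 1 players to a payoff in [0, 1]."""
    def __init__(self, n, s, u):
        self.n, self.s, self.u = n, s, u

    def payoff(self, p, i, partition):
        return self.u[p][i][partition]

    def partitions(self):
        """All C(n + s - 2, s - 1) ways to split the other n - 1 players among s strategies."""
        for combo in combinations_with_replacement(range(self.s), self.n - 1):
            counts = [0] * self.s
            for strategy in combo:
                counts[strategy] += 1
            yield tuple(counts)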


A pure strategy profile in such a game is a mapping S from [n] to [s]. A pure strategy profile S is an ε-approximate pure Nash equilibrium, where ε > 0, if, for all p ∈ [n], u^p_{S(p)}(x[S, p]) + ε ≥ u^p_i(x[S, p]) for all i ∈ [s], where x[S, p] ∈ Π^s_{n−1} is the partition (x_1, …, x_s) such that x_i is the number of players q ∈ [n] − {p} with S(q) = i. A mixed strategy profile is a set of n distributions δ_p, p ∈ [n], over [s]. A mixed strategy profile is an ε-approximate mixed Nash equilibrium if, for all p ∈ [n] and j ∈ [s], E_{δ_1,…,δ_n} u^p_i(x) + ε ≥ E_{δ_1,…,δ_n} u^p_j(x), where, for the purposes of the expectation, i is drawn from [s] according to δ_p and x is drawn from Π^s_{n−1} by drawing n − 1 random samples from [s] independently according to the distributions δ_q, q ≠ p, and forming the induced partition. These definitions can be extended to games in which there is also a finite number of types of players, and utilities depend on how each type is partitioned into strategies; all our algorithms, being exhaustive, can be easily generalized to this framework, with the number of types multiplying the exponent.
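A hedged sketch of the ε-approximate pure equilibrium condition just defined, reusing the AnonymousGame representation and the partition_of_others helper from the previous sketch (both are illustrative names, not the paper's):

def is_eps_approximate_pure_nash(game, profile, eps):
    """True iff no player can gain more than eps by a unilateral deviation,
    holding the partition of the other n - 1 players fixed."""
    for p in range(game.n):
        x = partition_of_others(profile, p, game.s)
        current = game.payoff(p, profile[p], x)
        for i in range(game.s):
            if game.payoff(p, i, x) > current + eps:
                return False
    return True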


2 Approximate Pure Equilibria

In this section we prove the following result:

Theorem 2.1 In any anonymous game with s strategies and Lipschitz constant λ there is an ε-approximate pure Nash equilibrium, where ε = (s² + 6)λ.

Proof: We first define a function Φ from Π^s_{n−1} to itself: For any x ∈ Π^s_{n−1}, Φ(x) is defined to be the (y_1, …, y_s) ∈ Π^s_{n−1} such that, for all i ∈ [s], y_i is the number of all those players p among {1, …, n − 1} (notice that player n is excluded) such that, for all j < i, u^p_i[x] > u^p_j[x], and, for all j > i, u^p_i[x] ≥ u^p_j[x]. In other words, Φ(x) is the partition induced among the first n − 1 players by their best response to x, where ties are broken lexicographically. We next interpolate Φ to obtain a continuous function Φ̂ from Δ^s_{n−1}, the convex hull of Π^s_{n−1}, to itself as follows: For each x ∈ Δ^s_{n−1} let us break x into its integer and fractional parts, x = x_I + x_F, where x_I has integer coordinates and 0 ≤ x_{F,i} ≤ 1 for all i = 1, …, s − 1. Let C[x_I] be the cell of x_I, the set of all x′ ∈ Π^s_{n−1} such that, for all i ≤ s − 1, x′_i = x_{I,i} or x′_i = x_{I,i} + 1. Then it is clear that x can be written as a convex combination of the elements of C[x_I]: x = Σ_{x_j ∈ C[x_I]} α_j x_j. We define Φ̂(x) to be Σ_{x_j ∈ C[x_I]} α_j Φ(x_j). It is possible to define the interpolation at each point x ∈ Δ^s_{n−1} in a consistent way so that the resulting Φ̂(x) is a continuous function from the compact set Δ^s_{n−1} to itself, and so, by Brouwer's Theorem, it must have a fixed point, that is, a point x* ∈ Δ^s_{n−1} such that Φ̂(x*) = x*.
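The discrete map Φ is straightforward to compute; the sketch below is an illustration (not the paper's code, with 0-indexed players and strategies) that makes the lexicographic tie-breaking explicit: each of the first n − 1 players is counted toward the smallest-index strategy among her best responses to x.

def best_response_partition(game, x):
    """Phi(x): the partition induced among players 0..n-2 (player n excluded)
    by their best responses to x, ties broken lexicographically."""
    y = [0] * game.s
    for p in range(game.n - 1):
        payoffs = [game.payoff(p, i, x) for i in range(game.s)]
        best = min(i for i in range(game.s) if payoffs[i] == max(payoffs))
        y[best] += 1
    return tuple(y)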


That is,

Σ_{x_j ∈ C[x*_I]} α_j Φ(x_j) = x* = Σ_{x_j ∈ C[x*_I]} α_j x_j.   (1)

By Carathéodory's lemma, equation (1) can be expressed as the sum of only s of the Φ(x_j)'s, x* = Σ_{j=1}^s α_j Φ(x_j), and it is easy to see that it can be rewritten as

x* = Φ(x_1) + Σ_{j=2}^s α_j (Φ(x_j) − Φ(x_1)),   (2)

for some α_j ≥ 0 with Σ_j α_j = 1. Recall that, in order to prove the theorem, we need to exhibit an ε-approximate pure strategy profile. If x* were an integer point, then we would be almost done (modulo the n-th player, of whom we take care last), and x* itself (actually, the strategy profile suggested by the partition Φ(x*)) would be essentially a pure Nash equilibrium, because of the equation x* = Φ(x*). But in general x* will be fractional, and the various Φ(x_j)'s will be very far from x* (except that they happen to have x* in their convex hull). Our plan is to show that x_1 (a vertex in the cell of x*) is an approximate pure Nash equilibrium (again, considered as a pure strategy profile and forgetting for a moment the n-th player). The term Φ(x_1) in equation (2) can be seen as a pure strategy profile P: Each of the n − 1 players chooses the strategy that is her/his best response to x_1. Therefore, in this strategy profile everybody would be happy if everybody else played according to x_1. The problem is, of course, that Φ(x_1) can be very far from x_1. We shall next use equation (2) to "move it" close to x_1 (more precisely, close to x*, which we know is 2s-close in L_1 distance to x_1) without changing the utilities much.


Looking at one of the other terms of (2), (Φ(x_j) − Φ(x_1)), we can think of it as the act of switching certain players from their best response to x_1 to their best response to x_j. The crucial observation is that, since x_1 and x_j are at most 2s apart in L_1 distance (they both belong to the same cell), the change in utility for the switching players would be at most 4sλ. So, equation (2) suggests that a strategy profile close to x* can be obtained from P by combining these s − 1 flows, with little harm in utility for all players involved. The problem is how to combine them so that the right individual players are switched (the situation is akin to integer multicommodity flow). We write each flow (Φ(x_j) − Φ(x_1)) as the sum of O(s²) terms of the form f^j_{i,i′}, signifying the number of individual players moved from strategy i (negative components) to strategy i′ (positive components). We know that, for each such nonzero flow, there is a set of players S^j_{i,i′} ⊆ [n] which can be moved with only 4sλ loss in utility. The union of the s − 1 sets S^j_{i,i′} is denoted by S_{i,i′}. Now we need this lemma, which can be proved by a maximum flow argument:

Lemma 2.2 For any i, i′ there are disjoint subsets T^j_{i,i′} ⊆ S^j_{i,i′} with cardinality ⌊α_j |S_{i,i′}|⌋, j = 2, …, s.
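The feasibility behind Lemma 2.2 rests on a counting fact: since the weights α_2, …, α_s are nonnegative and sum to at most 1, subsets of sizes ⌊α_j |S_{i,i′}|⌋ always fit disjointly inside S_{i,i′}. The sketch below (an illustration with hypothetical names, not from the paper) shows only this counting step on a single set of players; the lemma itself is stronger, because each T^j_{i,i′} must additionally be drawn from the corresponding S^j_{i,i′}, which is where the maximum flow argument is needed.

def disjoint_blocks(players, alphas):
    """Carve disjoint blocks of sizes floor(alpha_j * |players|) out of one set.
    Feasible because the alphas are nonnegative and sum to at most 1."""
    assert all(a >= 0 for a in alphas) and sum(alphas) <= 1 + 1e-12
    players = list(players)
    blocks, start = [], 0
    for a in alphas:
        size = int(a * len(players))   # floor(alpha_j * |S|)
        blocks.append(players[start:start + size])
        start += size
    return blocks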


Thus, by moving the players in T^j_{i,i′} from i to i′ for j = 2, …, s, we obtain from strategy profile P a new strategy profile P̃ in which each player's best response is within 4sλ of the response to x_1, and such that the corresponding partition x̃* is, by equation (2) and the roundings in the lemma, at most (s − 1)(s − 2)²/2 away from x*, and hence at most 2s more away from x_1. By a slightly more sophisticated argument involving an improved lemma, and taking into account that Σ_{j=2}^s α_j can be assumed to be at most (s − 1)/s by choosing α_1 to be the largest weight, whose details we omit, this can be improved to s². Furthermore, we have argued that, for all players (except for the last, of course), P̃ is a 4sλ-approximate response to x_1. Thus x_1 is an (s² + 4)λ-approximate best response to itself. Finally, we turn to player n. Adding the best response of player n to x_1, and subtracting what player i plays in x_1, we get a profile that is two away, in L_1 distance, from x_1, thus making x_1 an (s² + 6)λ-approximate Nash equilibrium and completing the proof. □

Since Π^s_{n−1} has O(n^s) points, and this is the length of the input, the algorithmic implication is immediate:

Corollary 2.3 In any anonymous game, an ε-approximate pure Nash equilibrium, where ε is as in the Theorem, can be found in linear time.

3 Approximate Mixed Equilibria

3.1 A Probabilistic Lemma

We start with a definition. The total variation distance between two distributions P and Q supported on a finite set A is

‖P − Q‖ = (1/2) Σ_{α∈A} |P(α) − Q(α)|.
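In code, the definition just given reads as follows; this is a small illustrative helper (not from the paper), with distributions represented as dictionaries from outcomes to probabilities.

def total_variation(P, Q):
    """||P - Q|| = (1/2) * sum over the joint support of |P(a) - Q(a)|."""
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(a, 0.0) - Q.get(a, 0.0)) for a in support)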


Theorem 3.1 Let {p_i}_{i=1}^n be arbitrary probabilities, p_i ∈ [0, 1] for i = 1, …, n, let {X_i}_{i=1}^n be independent indicator random variables such that X_i has expectation E[X_i] = p_i, and let k be a positive integer. Then there exists another set of probabilities {q_i}_{i=1}^n, q_i ∈ [0, 1], i = 1, …, n, which satisfy the following properties:

1. |q_i − p_i| = O(1/k), for all i = 1, …, n;
2. q_i is an integer multiple of 1/k, for all i = 1, …, n;
3. if {Y_i}_{i=1}^n are independent indicator random variables such that Y_i has expectation E[Y_i] = q_i, then
‖Σ_i X_i − Σ_i Y_i‖ = O(k^{−1/2}),
and, moreover, for all j = 1, …, n,
‖Σ_{i≠j} X_i − Σ_{i≠j} Y_i‖ = O(k^{−1/2}).

From this, the main result of this section follows:

Corollary 3.2 There is a PTAS for the mixed Nash equilibrium problem for two-strategy anonymous games.

Proof: Let (p_1, …, p_n) be a mixed Nash equilibrium of the game. We claim that (q_1, …, q_n), where the q_i's are the multiples of 1/k specified by Theorem 3.1, constitute an O(1/√k)-approximate mixed Nash equilibrium.


Indeed, for every player i ∈ [n] and every strategy m ∈ {1, 2} for that player, let us track the change in the expected utility of the player when the distribution over Π²_{n−1} defined by the {p_j}_{j≠i} is replaced by the distribution defined by the {q_j}_{j≠i}. It is not hard to see that the absolute change is bounded by the total variation distance between the distributions of Σ_{j≠i} X_j and Σ_{j≠i} Y_j,² where X_j, Y_j are indicators corresponding to whether player j plays strategy 2 in the distribution defined by the p_i's and the q_i's respectively, i.e. E[X_j] = p_j and E[Y_j] = q_j. Hence, the change in utility is at most O(1/√k), which implies that the q_i's constitute an O(1/√k)-approximate Nash equilibrium of the game, modulo the following observation: with a slight modification in the proof of Theorem 3.1 we can make sure that, when switching from the p_i's to the q_i's, for every i, the support of q_i is a subset of the support of p_i. To compute a quantized approximate Nash equilibrium of the original game, we proceed to define a related (k + 1)-strategy game, where k = O(1/ε²), and treat the problem as a pure Nash equilibrium problem. It is not hard to see that the latter is efficiently solvable if the number of strategies is a constant.

² Recall that all utilities have been normalized to take values in [0, 1].
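The distributions of Σ_{j≠i} X_j and Σ_{j≠i} Y_j appearing above are sums of independent indicators, and they can be computed exactly by an elementary O(n²) dynamic program. The sketch below is an illustration of that computation (not code from the paper):

def sum_distribution(probs):
    """dist[l] = Pr[sum of independent Bernoulli(p) indicators equals l]."""
    dist = [1.0]
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for l, mass in enumerate(dist):
            new[l] += mass * (1.0 - p)      # this indicator came out 0
            new[l + 1] += mass * p          # this indicator came out 1
        dist = new
    return dist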


The new game is defined as follows: the i-th pure strategy, i = 0, …, k, corresponds to a player in the original game playing strategy 2 with probability i/k. Naturally, the payoffs resulting from a pure strategy profile in the new game are defined to be equal to the corresponding payoffs in the original game, by the translation of the pure strategy profile of the former into a mixed strategy profile of the latter. In particular, for any player p, we can compute its payoff given any strategy i ∈ {0, …, k} for that player and any partition x ∈ Π^{k+1}_{n−1} of the other players into the k + 1 strategies, in time n^{O(1/ε²)} overall, by a straightforward dynamic programming algorithm; see for example [24]. The remaining details are omitted. □
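As an illustration of the dynamic programming step mentioned above: the expected payoff of a player in a two-strategy anonymous game against any profile of (possibly quantized) mixed strategies can be evaluated with the sum_distribution helper from the previous sketch; in the quantized game, the probabilities mixed[q] are simply restricted to multiples of 1/k. This is a sketch under the representation assumed earlier, not the paper's code.

def expected_payoff(game, p, own_strategy, mixed):
    """Expected utility of player p playing own_strategy in a 2-strategy anonymous
    game, when each other player q plays strategy 1 with probability mixed[q]."""
    others = [prob for q, prob in enumerate(mixed) if q != p]
    dist = sum_distribution(others)        # Pr[l of the others play strategy 1]
    m = len(others)
    return sum(mass * game.payoff(p, own_strategy, (m - l, l))
               for l, mass in enumerate(dist))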


Remark: Note that it is crucial for the proof of the Corollary that the bound on the total variation distance between Σ_i X_i and Σ_i Y_i in the statement of the theorem does not depend on the number n of random variables which are being rounded, but only on the accuracy 1/k of the rounding. Because of this requirement, several simple methods of rounding are easily seen to fail:

• Rounding to the Closest Multiple of 1/k: An easy counterexample for this method arises when p_i := 1/n for all i. In this case, the trivial rounding would make q_i := 0 for all i, and the total variation distance between Σ_i X_i and Σ_i Y_i would become arbitrarily close to 1 − 1/e as n goes to infinity (see the sketch after this list).

• Randomized Rounding: An argument employing the probabilistic method could start by independently rounding each p_i to some random q_i which is a multiple of 1/k, in such a way that E[q_i] = p_i. This seems promising since, by independence, for any ℓ = 0, …, n, the random variable Pr[Σ_i Y_i = ℓ], which is a function of the q_i's, has the correct expectation, i.e. E[Pr[Σ_i Y_i = ℓ]] = Pr[Σ_i X_i = ℓ]. The trouble is that the expectation of the random variable Pr[Σ_i Y_i = ℓ] is very small: less than 1 for all ℓ and, in fact, of the order of O(1/n) for many terms. Moreover, the function itself comprises sums of products of the random variables q_i, in fact exponentially many terms for some values of ℓ. Concentration seems to require a k which scales polynomially in n.
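The first failure mode is easy to reproduce numerically; the snippet below (an illustration, reusing sum_distribution from above) rounds each p_i = 1/n to the closest multiple of 1/k, which is 0 once k < n/2, and the resulting total variation distance approaches 1 − 1/e.

def naive_rounding_gap(n, k):
    """TV distance between the sum of n Bernoulli(1/n) indicators and the sum
    obtained after rounding each probability to the closest multiple of 1/k."""
    ps = [1.0 / n] * n
    qs = [round(p * k) / k for p in ps]
    P, Q = sum_distribution(ps), sum_distribution(qs)
    m = max(len(P), len(Q))
    P, Q = P + [0.0] * (m - len(P)), Q + [0.0] * (m - len(Q))
    return 0.5 * sum(abs(a - b) for a, b in zip(P, Q))

# naive_rounding_gap(1000, 100) is roughly 0.632, i.e. about 1 - 1/e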


Proof Technique: We follow instead a completely different approach which aims at directly approximating the distribution of Σ_i X_i. The intuition is the following: The distribution of Σ_i X_i should be close in total variation distance to a Poisson distribution of the same mean Σ_i p_i. Hence, it seems that, if we define q_i's — which are multiples of 1/k — in such a way that the means Σ_i p_i and Σ_i q_i are close, then the distribution of Σ_i Y_i should be close in total variation distance to the same Poisson distribution, and hence to the distribution of Σ_i X_i by the triangle inequality. There are several complications, of course, the main one being that the distribution of Σ_i X_i can be well approximated by a Poisson distribution of the same mean only when the p_i's are relatively small. When the p_i's take arbitrary values in [0, 1] and n scales, the Poisson distribution can be very far from the distribution of Σ_i X_i. In fact, we wouldn't expect the Poisson distribution to approximate the distribution of arbitrary sums of indicators, since its mean and variance are the same. To counter this we resort to a special kind of distributions, called translated Poisson distributions, which are Poisson distributions appropriately shifted on their domain. An arbitrary sum of indicators can now be approximated as follows: a Poisson distribution is defined with mean — and, hence, variance — equal to the variance of the sum of the indicators; then the distribution is appropriately shifted on its domain so that its new mean coincides with the mean of the sum of the indicators being approximated.


The translated Poisson approximation will outperform the Poisson approximation for intermediate values of the p_i's, while the Poisson approximation will remain better near the boundaries, i.e. for values of p_i close to 0 or 1. Even for the intermediate region of values for the p_i's, the translated Poisson approximation is not sufficient, since it only succeeds when the number of indicators being summed is relatively large compared to the minimum expectation. A different argument is required when this is not the case. Our bounding technique has to interleave these considerations in a very delicate fashion to achieve the approximation result. At a high level, we treat separately the X_i's with small, medium or large expectation; in particular, for some β ∈ (0, 1) to be fixed later, we define the following subintervals of [0, 1]:

1. L(k) := [0, ⌊k^β⌋/k): interval of small expectations;
2. M_1(k) := [⌊k^β⌋/k, 1/2): first interval of medium expectations;
3. M_2(k) := [1/2, 1 − ⌊k^β⌋/k): second interval of medium expectations;


4. H(k) := [1 − ⌊k^β⌋/k, 1]: interval of high expectations.

Denoting L*(k) := {i | E[X_i] ∈ L(k)}, we establish (Lemma 3.9) that

‖Σ_{i∈L*(k)} X_i − Σ_{i∈L*(k)} Y_i‖ = O(k^{−1/2}),

and similarly for M_1(k) (Lemma 3.12). Symmetric arguments (setting X′_i = 1 − X_i and Y′_i = 1 − Y_i) imply the same bounds for the intervals M_2(k) and H(k). Therefore, an application of the coupling lemma implies that

‖Σ_i X_i − Σ_i Y_i‖ = O(k^{−1/2}),

which concludes the proof. The details of the proof are postponed to Section 3.3.


The proof for the partial sums Σ_{i≠j} X_i and Σ_{i≠j} Y_i follows easily from the analysis of Section 3.3, and its details are skipped in this extended abstract. The next section provides the required background on Poisson approximations.

3.2 Poisson Approximations

The following theorem is classical in the theory of Poisson approximations.

Theorem 3.3 ([2]) Let J_1, …, J_n be a sequence of independent random indicators with E[J_i] = p_i. Then

‖Σ_{i=1}^n J_i − Poisson(Σ_{i=1}^n p_i)‖ ≤ (Σ_{i=1}^n p_i²) / (Σ_{i=1}^n p_i).

As discussed in the previous section, the above bound is sharp when the indicators have small expectations, but loose when the indicators are arbitrary. The following approximation bound becomes sharp when the previous one is not. But first let us formally define the translated Poisson distribution.

Definition 3.4 ([25]) We say that an integer random variable Y has a translated Poisson distribution with parameters μ and σ² and write L(Y) = TP(μ, σ²) if L(Y − ⌊μ − σ²⌋) = Poisson(σ² + {μ − σ²}), where {μ − σ²} represents the fractional part of μ − σ².
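A quick numerical sanity check of Theorem 3.3 (an illustrative snippet, not from the paper; it reuses sum_distribution from Section 3.1 and truncates the Poisson tail, which only adds the leftover mass to the distance):

import math

def poisson_pmf(lam, length):
    return [math.exp(-lam) * lam ** l / math.factorial(l) for l in range(length)]

def poisson_approx_check(probs):
    """Return (TV distance to Poisson(sum p_i), the bound sum p_i^2 / sum p_i)."""
    P = sum_distribution(probs)
    lam = sum(probs)
    Q = poisson_pmf(lam, len(P))
    tv = 0.5 * (sum(abs(a - b) for a, b in zip(P, Q)) + (1.0 - sum(Q)))
    bound = sum(p * p for p in probs) / lam
    return tv, bound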


Theorem 3.5 provides an approximation result for the translated Poisson distribution, obtained using Stein's method.

Theorem 3.5 ([25]) Let J_1, …, J_n be a sequence of independent random indicators with E[J_i] = p_i. Then

‖Σ_{i=1}^n J_i − TP(μ, σ²)‖ ≤ (√(Σ_{i=1}^n p_i³(1 − p_i)) + 2) / (Σ_{i=1}^n p_i(1 − p_i)),

where μ = Σ_{i=1}^n p_i and σ² = Σ_{i=1}^n p_i(1 − p_i).

Lemmas 3.6 and 3.7 provide, respectively, bounds for the total variation distance between two Poisson distributions and between two translated Poisson distributions with different parameters. The proof of 3.6 is postponed to the appendix, while the proof of 3.7 is provided in [3].

Lemma 3.6 Let λ_1, λ_2 ∈ ℝ⁺ ∖ {0}. Then

‖Poisson(λ_1) − Poisson(λ_2)‖ ≤ e^{|λ_1 − λ_2|} − e^{−|λ_1 − λ_2|}.
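Definition 3.4 translates directly into code; the following small helper (an illustration, not from the paper) gives the probability mass function of TP(μ, σ²) by shifting a Poisson(σ² + {μ − σ²}) by ⌊μ − σ²⌋:

import math

def translated_poisson_pmf(mu, sigma2, l):
    """Pr[Y = l] when Y - floor(mu - sigma2) ~ Poisson(sigma2 + frac(mu - sigma2))."""
    shift = math.floor(mu - sigma2)
    lam = sigma2 + ((mu - sigma2) - shift)   # sigma2 plus the fractional part
    j = l - shift
    if j < 0:
        return 0.0
    return math.exp(-lam) * lam ** j / math.factorial(j)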


Lemma 3.7 ([3]) Let μ_1, μ_2 ∈ ℝ and σ_1², σ_2² ∈ ℝ⁺ ∖ {0} be such that ⌊μ_1 − σ_1²⌋ ≤ ⌊μ_2 − σ_2²⌋. Then

‖TP(μ_1, σ_1²) − TP(μ_2, σ_2²)‖ ≤ |μ_1 − μ_2| / σ_1 + (|σ_1² − σ_2²| + 1) / σ_1².

3.3 Proof of the Main Result

In this section we complete the proof of Theorem 3.1. As argued above, it is enough to round the random variables {X_i}_{i∈L*(k)} into random variables {Y_i}_{i∈L*(k)} so that the total variation distance between the random variables X_L := Σ_{i∈L*(k)} X_i and Y_L := Σ_{i∈L*(k)} Y_i is small, and similarly for the subinterval M_1(k). Our rounding will have a different objective in the two regions. When rounding the X_i's with expectations in L(k) we aim to approximate the mean of X_L as tightly as possible. On the other hand, when rounding the X_i's with expectations in M_1(k), we give up on approximating the mean very tightly in order to also approximate the variance well. The details of the rounding follow. Some notation first: Let us partition the interval [0, 1/2] into subintervals I_0, I_1, …, I_{⌊k/2⌋}, where

I_0 = [0, 1/k), I_1 = [1/k, 2/k), …, I_{⌊k/2⌋} = [⌊k/2⌋/k, 1/2].


The intervals I_0, …, I_{⌊k/2⌋} define the partition of L*(k) ∪ M*_1(k) into the subsets I*_0, I*_1, …, I*_{⌊k/2⌋}, where I*_j = {i | E[X_i] ∈ I_j}, j = 0, 1, …, ⌊k/2⌋. For all j ∈ {0, …, ⌊k/2⌋} with I*_j ≠ ∅, let I*_j = {j_1, j_2, …, j_{n_j}} and, for all i ∈ {1, …, n_j}, let p^j_i := E[X_{j_i}] and δ^j_i := p^j_i − j/k. We proceed to define the "rounding" of the X_i's into the Y_i's in the intervals L(k) and M_1(k) separately.


Interval L(k) := [0, ⌊k^β⌋/k) of small expectations. Observe first that L(k) ⊆ I_0 ∪ … ∪ I_{⌊k^β⌋−1} and define the corresponding subset of the indices L*(k) := I*_0 ∪ … ∪ I*_{⌊k^β⌋−1}. We define the Y_i, i ∈ L*(k), via the following iterative procedure. Our ultimate goal is to round the X_i's into Y_i's appropriately so that the sums of the expectations of the X_i's and of the Y_i's are as close as possible. The rounding procedure is as follows.

i. ρ_0 := 0;
ii. for j := 0 to ⌊k^β⌋ − 1:
 (a) S_j := ρ_j + Σ_{i=1}^{n_j} δ^j_i;
 (b) m_j := ⌊S_j · k⌋ and ρ_{j+1} := S_j − m_j · (1/k); {assertion: m_j ≤ n_j — see justification next}
 (c) set q^j_i := (j+1)/k for i = 1, …, m_j and q^j_i := j/k for i = m_j + 1, …, n_j;
 (d) for all i ∈ {1, …, n_j}, let Y_{j_i} be a {0, 1}-random variable with expectation q^j_i;
iii. suppose that the random variables Y_i, i ∈ L*(k), are mutually independent.
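The procedure is short enough to transcribe directly; the sketch below is one reading of Steps i–iii (the function name, input conventions, and handling of β are assumptions, not the paper's). It expects every input probability to lie in L(k), buckets the inputs by the subintervals I_j, and carries the leftover ρ_j from bucket to bucket so that the rounded expectations track the original ones to within 1/k.

import math

def round_small(ps, k, beta):
    """Round probabilities in L(k) = [0, floor(k^beta)/k) to multiples of 1/k."""
    limit = int(math.floor(k ** beta))           # buckets j = 0, ..., limit - 1
    assert all(0.0 <= p < limit / k for p in ps)
    buckets = {j: [] for j in range(limit)}
    for idx, p in enumerate(ps):
        buckets[min(int(p * k), limit - 1)].append(idx)
    qs = [0.0] * len(ps)
    rho = 0.0
    for j in range(limit):
        deltas = [ps[idx] - j / k for idx in buckets[j]]
        S = rho + sum(deltas)                    # step ii(a)
        m = int(math.floor(S * k))               # step ii(b): m_j = floor(S_j * k) <= n_j
        rho = S - m / k                          #              rho_{j+1} = S_j - m_j / k
        for t, idx in enumerate(buckets[j]):
            qs[idx] = (j + 1) / k if t < m else j / k   # step ii(c)
    return qs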


It is easy to see that, for all j ∈ {0, …, ⌊k^β⌋ − 1}, ρ_j < 1/k; this follows immediately from the description of the procedure, in particular Steps i and ii(b). This further implies that m_j ≤ n_j for all j, since at Step ii(b) we have S_j · k = (ρ_j + Σ_{i=1}^{n_j} δ^j_i) · k < 1 + n_j. Hence, the assertion following Step ii(b) is satisfied. Finally, note that, for all j,

Σ_{i=1}^{n_j} q^j_i = m_j · (j+1)/k + (n_j − m_j) · j/k = n_j · j/k + m_j/k = n_j · j/k + S_j − ρ_{j+1} = n_j · j/k + Σ_{i=1}^{n_j} δ^j_i + ρ_j − ρ_{j+1} = Σ_{i=1}^{n_j} p^j_i + ρ_j − ρ_{j+1}.


Therefore,

Σ_{j=0}^{⌊k^β⌋−1} Σ_{i=1}^{n_j} q^j_i = Σ_{j=0}^{⌊k^β⌋−1} Σ_{i=1}^{n_j} p^j_i + ρ_0 − ρ_{⌊k^β⌋},

which implies

Lemma 3.8 |Σ_{i∈L*(k)} E[Y_i] − Σ_{i∈L*(k)} E[X_i]| ≤ 1/k.

The following lemma characterizes the total variation distance between Σ_{i∈L*(k)} X_i and Σ_{i∈L*(k)} Y_i.

Lemma 3.9 ‖Σ_{i∈L*(k)} X_i − Σ_{i∈L*(k)} Y_i‖ ≤ 3/k^{1−β}.


Proof: By Theorem 3.3 we have that

‖Σ_{i∈L*(k)} X_i − Poisson(Σ_{i∈L*(k)} p_i)‖ ≤ (Σ_{i∈L*(k)} p_i²) / (Σ_{i∈L*(k)} p_i)

and

‖Σ_{i∈L*(k)} Y_i − Poisson(Σ_{i∈L*(k)} q_i)‖ ≤ (Σ_{i∈L*(k)} q_i²) / (Σ_{i∈L*(k)} q_i),

where p_i := E[X_i] and q_i := E[Y_i] for all i.


The following lemma is proven in the appendix.

Lemma 3.10 For any u > 0 and any set {p_i}_{i∈I} with p_i ∈ [0, u] for all i ∈ I,

(Σ_{i∈I} p_i²) / (Σ_{i∈I} p_i) ≤ u.

Using Lemmas 3.10, 3.8 and 3.6 and the triangle inequality we get that

‖Σ_{i∈L*(k)} X_i − Σ_{i∈L*(k)} Y_i‖ ≤ 2/k^{1−β} + (e^{1/k} − e^{−1/k}) ≤ 3/k^{1−β},

where we used the fact that e^{1/k} − e^{−1/k} ≤ 3/k ≤ 1/k^{1−β} for sufficiently large k. □


Interval M_1(k) := [⌊k^β⌋/k, ⌊k/2⌋/k) of medium expectations. Observe first that M_1(k) ⊆ I_{⌊k^β⌋} ∪ … ∪ I_{⌊k/2⌋} and define the corresponding subset of the indices M*_1(k) := I*_{⌊k^β⌋} ∪ … ∪ I*_{⌊k/2⌋}. We define the Y_i, i ∈ M*_1(k), via the following procedure, which is slightly different from the one we used for the set of indices L*(k). Our goal here is to approximate well both the mean and the variance of the sum Σ_{i∈M*_1(k)} X_i. In fact, we will give up on approximating the mean as tightly as possible, which we did above, in order to achieve a good approximation of the variance. The rounding procedure is as follows.

for j := ⌊k^β⌋ to ⌊k/2⌋:
 (a) S_j := Σ_{i=1}^{n_j} δ^j_i;
 (b) m_j := ⌊S_j · k⌋;
 (c) set q^j_i := (j+1)/k for i = 1, …, m_j and q^j_i := j/k for i = m_j + 1, …, n_j;
 (d) for all i ∈ {1, …, n_j}, let Y_{j_i} be a {0, 1}-random variable with expectation q^j_i.

Suppose that the random variables Y_i, i ∈ M*_1(k), are mutually independent.


Lemma 3.11 characterizes the quality of the rounding procedure in terms of mean and variance. Defining Δ_j := Σ_{i∈I*_j} E[X_i] − Σ_{i∈I*_j} E[Y_i], we have

Lemma 3.11 For all j ∈ {⌊k^β⌋, …, ⌊k/2⌋}:

(a) Δ_j = Σ_{i=1}^{n_j} δ^j_i − m_j · (1/k);
(b) 0 ≤ Δ_j ≤ 1/k;


(c) Σ_{i∈I*_j} Var[X_i] = n_j · (j/k) · (1 − j/k) + (1 − 2j/k) · Σ_{i=1}^{n_j} δ^j_i − Σ_{i=1}^{n_j} (δ^j_i)²;
(d) Σ_{i∈I*_j} Var[Y_i] = n_j · (j/k) · (1 − j/k) + m_j · (1/k) · (1 − (2j+1)/k);


(e) Σ_{i∈I*_j} Var[X_i] − Σ_{i∈I*_j} Var[Y_i] = (1 − 2j/k) · Δ_j + (m_j/k² − Σ_{i=1}^{n_j} (δ^j_i)²).

The following lemma bounds the total variation distance between the random variables Σ_{i∈M*_1(k)} X_i and Σ_{i∈M*_1(k)} Y_i.

Lemma 3.12 ‖Σ_{i∈M*_1(k)} X_i − Σ_{i∈M*_1(k)} Y_i‖ ≤ O(k^{−(α+β−1)/2}) + O(k^{−β}) + O(k^{−1/2}) + O(k^{−(1−α)}).


Proof: We distinguish two cases for the size of M*_1(k). For some α ∈ (0, 1) such that α + β > 1, let us distinguish two possibilities for the size of |M*_1(k)|:

a. |M*_1(k)| ≤ k^α;
b. |M*_1(k)| > k^α.

Let us treat each case separately in the following lemmas.

Lemma 3.13 If |M*_1(k)| ≤ k^α then ‖Σ_{i∈M*_1(k)} X_i − Σ_{i∈M*_1(k)} Y_i‖ ≤ 1/k^{1−α}.

Proof: The proof follows from the coupling lemma and an easy coupling argument. The details are postponed to the appendix. □

Lemma 3.14 If |M*_1(k)| > k^α then ‖Σ_{i∈M*_1(k)} X_i − Σ_{i∈M*_1(k)} Y_i‖ ≤ O(k^{−(α+β−1)/2}) + O(k^{−β}) + O(k^{−1/2}).


Proof: By Theorem 3.5 we have that

‖Σ_{i∈M*_1(k)} X_i − TP(μ_1, σ_1²)‖ ≤ (√(Σ_{i∈M*_1(k)} p_i³(1 − p_i)) + 2) / (Σ_{i∈M*_1(k)} p_i(1 − p_i))


and

‖Σ_{i∈M*_1(k)} Y_i − TP(μ_2, σ_2²)‖ ≤ (√(Σ_{i∈M*_1(k)} q_i³(1 − q_i)) + 2) / (Σ_{i∈M*_1(k)} q_i(1 − q_i)),

where μ_1 = Σ_{i∈M*_1(k)} p_i, μ_2 = Σ_{i∈M*_1(k)} q_i, σ_1² = Σ_{i∈M*_1(k)} p_i(1 − p_i), σ_2² = Σ_{i∈M*_1(k)} q_i(1 − q_i), and p_i := E[X_i], q_i := E[Y_i] for all i. The following lemma is proven in the appendix.

Lemma 3.15 For any u ∈ (0, 1/2) and any set {p_i}_{i∈I} with p_i ∈ [u, 1/2] for all i ∈ I,


√(Σ_{i∈I} p_i³(1 − p_i)) / (Σ_{i∈I} p_i(1 − p_i)) ≤ (1 + 2u + 4u² − 8u³) / √(16 · |I| · u · (1 − u − 4u² + 4u³)).

Applying the above lemma with u = ⌊k^β⌋/k and I = M*_1(k), where, recall, |M*_1(k)| > k^α, the above bound becomes

(1 + 2u + 4u² − 8u³) / √(16 · |I| · u · (1 − u − 4u² + 4u³)) = O(k^{−(α+β−1)/2}),

which implies

‖Σ_{i∈M*_1(k)} X_i − TP(μ_1, σ_1²)‖ = O(k^{−(α+β−1)/2})   (1)


and

‖Σ_{i∈M*_1(k)} Y_i − TP(μ_2, σ_2²)‖ = O(k^{−(α+β−1)/2}),   (2)

where we used that, for any set of values {p_i ∈ M_1(k)}_{i∈M*_1(k)},

2 / (Σ_{i∈M*_1(k)} p_i(1 − p_i)) ≤ 2 / (|M*_1(k)| · (⌊k^β⌋/k) · (1 − ⌊k^β⌋/k)) = O(k^{−(α+β−1)}),

and similarly for any set of values {q_i ∈ M_1(k)}_{i∈M*_1(k)}.


All that remains is to bound the total variation distance between the distributions TP(μ_1, σ_1²) and TP(μ_2, σ_2²) for the parameters μ_1, σ_1², μ_2, σ_2² specified above. The following claim is proved in the appendix.

Claim 3.16 For the parameters specified above,

‖TP(μ_1, σ_1²) − TP(μ_2, σ_2²)‖ ≤ O(k^{−β}) + O(k^{−1/2}).

Claim 3.16 along with Equations (1) and (2) and the triangle inequality imply that

‖Σ_{i∈M*_1(k)} X_i − Σ_{i∈M*_1(k)} Y_i‖ = O(k^{−(α+β−1)/2}) + O(k^{−β}) + O(k^{−1/2}),

which completes the proofs of Lemmas 3.14 and 3.12. □


Putting Everything Together. Suppose that the random variables {Y_i}_i defined above are mutually independent. It follows that

‖Σ_i X_i − Σ_i Y_i‖ = O(k^{−(1−β)}) + O(k^{−(α+β−1)/2}) + O(k^{−β}) + O(k^{−1/2}) + O(k^{−(1−α)}).

Setting α = β = 3/4 we get a total variation distance of O(k^{−1/4}). A more delicate argument establishes an exponent of −1/2.

4 Open Problems

Can our PTAS be extended to an arbitrary fixed number of strategies? We believe so. A more sophisticated technique would subdivide,


instead of the interval [0, 1] as our proof did, the (s − 1)-dimensional simplex into domains in which multinomial (instead of binomial) distributions would be approximated in different ways, possibly using the techniques of [26]. This way of extending our result already seems to work for s = 3, and we are hopeful that it will work for general fixed s. Can the quadratic, in s, approximation bound of our pure Nash equilibrium algorithm be improved to linear? We believe so, and we conjecture that sλ is a lower bound. Finally, we hope that these ways of thinking about anonymous games will eventually lead to algorithms for their practical solution.

APPENDIX

A Missing Proofs

Proof of Lemma 3.6: Without loss of generality assume that 0 < λ_1 ≤ λ_2 and denote δ = λ_2 − λ_1. For all i ∈ {0, 1, …}, denote p_i = e^{−λ_1} λ_1^i / i! and q_i = e^{−λ_2} λ_2^i / i!. Finally, define I⁺ = {i : p_i ≥ q_i}. We have

Σ_{i∈I⁺} |p_i − q_i| = Σ_{i∈I⁺} (p_i − q_i) ≤ Σ_{i∈I⁺} (1/i!) (e^{−λ_1} λ_1^i − e^{−λ_1−δ} λ_1^i) = Σ_{i∈I⁺} (1/i!) e^{−λ_1} λ_1^i (1 − e^{−δ}) ≤ (1 − e^{−δ}) Σ_{i≥0} (1/i!) e^{−λ_1} λ_1^i = 1 − e^{−δ}.

On the other hand,

Σ_{i∉I⁺} |p_i − q_i| = Σ_{i∉I⁺} (q_i − p_i) ≤ Σ_{i∉I⁺} (1/i!) (e^{−λ_1} (λ_1 + δ)^i − e^{−λ_1} λ_1^i) = Σ_{i∉I⁺} (1/i!) e^{−λ_1} ((λ_1 + δ)^i − λ_1^i) ≤ Σ_{i≥0} (1/i!) e^{−λ_1} ((λ_1 + δ)^i − λ_1^i) = e^δ Σ_{i≥0} (1/i!) e^{−(λ_1+δ)} (λ_1 + δ)^i − Σ_{i≥0} (1/i!) e^{−λ_1} λ_1^i = e^δ − 1.

Combining the above we get the result. □

Proof of Lemma 3.10: For all i ∈ I and any choice of values p_j ∈ [0, u], j ∈ I ∖ {i}, define the function f(x) = (x² + A)/(x + B), where A = Σ_{j∈I∖{i}} p_j² and B = Σ_{j∈I∖{i}} p_j; observe that A ≤ uB. If A = B = 0 then f(x) = x, so f achieves its maximum at x = u. If A ≠ 0 ≠ B, the derivative of f is

f′(x) = (−A + x(2B + x)) / (B + x)².

Denoting by h(x) the numerator of the above expression, the derivative of h is h′(x) = 2B + 2x > 0 for all x. Therefore, h is increasing, which implies that h(x) = 0 has at most one root, and hence f′(x) = 0 has at most one root, since the denominator in the above expression for f′(x) is always positive. Note that f′(0) < 0 whereas f′(u) > 0.


f(x) = maxff(0); f(u)g: But f(u) = u2+A u+B _ A B = f(0) since A _ uB. Therefore, maxx f(x) = f(u). From the above it follows that, independent of the values of P j2Infig p2 i and P j2Infig pi, f(x) is maximized at x = u. Therefore, the expression P i2I p2 Pi i2I pi is maximized when pi = u for all i 2 I. This implies P i2I p2 iP i2I pi _ jIju2 jIju _ u: _ Proof of lemma 3.11: For (a) we have _j = X i2I_ j E[Xi] � X i2I_ j E[Yi] = Xnj i=1 pj i�


Xnj i=1 qj i = Xnj i=1 _ j k + _j i _ � _ mj j+1 k + (nj � mj) j k _ = Xnj i=1 _j i � mj 1 k : To get (b) we notice that Xnj i=1 _j i � mj 1 k = Sj � bSjkc 1 k


= Sj � (Sjk � (Sjk � bSjkc)) 1 k = (Sjk � bSjkc)) 1 k : For (c), (d) and (e), observe that, for all ji 2 I_ j , i 2 f1; : : : ; njg, Var[Xji ] = pj i (1 � pj i ) = pj i � (pj i )2 = E[Xji ] � _ j k + _j i _2 =j k + _j i� _ j k + _j i _2 Var[Yji ] = qj i � (qj i )2 = ( E[Yji ] � (j+1)2 k2 ; i 2 f1; : : : ;mjg E[Yji ] � j2 k2 ; i 2 fmj + 1; : : : ; njg =


( j+1 k � (j+1)2 k2 ; i 2 f1; : : : ;mjg j k � j2 k2 ; i 2 fmj + 1; : : : ; njg 12 Hence, X i2I_ j Var[Xi] = Xnj i=1 Var[Xji ] = Xnj i=1 E[Xji ] � nj j2 k2 �2j k Xnj i=1 _j i� Xnj i=1 (_j i )2 = nj j k + Xnj i=1 _j i � nj


j2 k2 �2j k Xnj i=1 _j i� Xnj i=1 (_j i )2 = nj j k _ 1� j k _ + _ 1� 2j k _Xnj i=1 _j i� Xnj i=1 (_j i )2: X i2I_ j Var[Yi] = Xnj i=1


Var[Yji ] = Xnj i=1 E[Yji ] � nj j2 k2 � mj 2j + 1 k2 = nj j k + mj 1 k � nj j2 k2 � mj 2j + 1 k2 = nj j k _ 1� j k _ + mj 1 k _ 1� 2j + 1 k _ : and


X i2I_ j Var[Xi] � X i2I_ j Var[Yi] = = nj j k _ 1� j k _ + _ 1� 2j k _Xnj i=1 _j i� Xnj i=1 (_j i )2 ! � _ nj j k _ 1�


j k _ + mj 1 k _ 1� 2j + 1 k __ = _ 1� 2j k _ Xnj i=1 _j i � mj 1 k ! + mj 1 k2 � Xnj i=1 (_j i )2 ! = _ 1� 2j k


_ _j + mj 1 k2 � Xnj i=1 (_j i )2 ! : _ Proof of Lemma 3.13: The coupling lemma implies that for any joint distribution on fXigi [ fYigi the following is satisfied ______ ______ X i2M_ 1 (k) Xi � X i2M_ 1(k) Yi ______ ______ _ Pr 2 4 X i2M_ 1(k) Xi 6= X i2M_ 1 (k) Yi 3


5: A union bound further implies Pr 2 4 X i2M_ 1 (k) Xi 6= X i2M_ 1 (k) Yi 3 5 _ Pr 2 4 _ i2M_ 1 (k) Xi 6= Yi 3 5_ X i2M_ 1(k) Pr [Xi 6= Yi] : Hence for any joint distribution on fXigi [ fYigi the following is satisfied ______ ______ X i2M_ 1(k) Xi � X i2M_ 1 (k) Yi


______ ______ _ X i2M_ 1 (k) Pr [Xi 6= Yi]: (3) Let us now choose a joint distribution on fXigi [ fYigi in which, for all i, Xi and Yi are coupled in such a way that Pr[Xi 6= Yi] _ 1 k : 13 This is easy to do since by construction jpi�qij _ 1 k , for all i. Plugging in Formula (3) the particular joint distribution just described yields ______ ______ X i2M_ 1 (k) Xi � X i2M_ 1 (k) Yi ______ ______ _ jM_ 1 (k)j k _ 1 k1�_ : _ Proof of lemma 3.15: For all i 2 I and any choice of values pj 2 [u; 1 2 ], j 2 I n fig, define the function


f(x) = x3(1 � x) + A (x(1 � x) + B)2 ; x 2 _ u; 1 2 _ where A = P j2Infig p3 j (1 � pj) and B = P j2Infig pj(1 � pj). For the sake of the argument let us extend the range of f to [0; 1 2 ]. The derivative of f is f0(x) = A(�2 + 4x) + x2(3B + x � 4Bx � x2) (B + x � x2)3 ; where note that the denominator is positive for all x 2 [0; 1 2 ]. Denoting by h(x) the numerator of the above expression, the derivative of h is h0(x) = 4A + (1 � 2x)x2 + 2x(x � x2) + 6Bx(1 � 2x) > 0; 8x 2 _ 0; 1 2 _ : Therefore, h is increasing in (0; 1 2 ) which implies that h(x) = 0 has at most one root in (0; 1 2 ) and hence f0(x) = 0 has at most one root in (0; 1 2 ) since the denominator in the above expression for f0(x) is always positive. Note that f0(0) < 0 whereas f0( 1 2 ) > 0. Therefore, there exists a unique _ 2 (0; 1 2 ) such that f0(x) < 0, 8x 2 (0; _), f0(_) = 0 and f0(x) > 0, 8x 2 (_; 1 2 ), i.e. f is decreasing in (0; _) and increasing in (_; 1 2 ). This implies that


max x2[u;1=2] f(x) = max ff(u); f(1=2)g : Hence, the expression P i2I p3 i (1�pi) ( P i2I pi(1�pi))2 is bounded by max m2f0;:::;jIjg m 16 + (jIj � m)u3(1 � u) (m 4 + (jIj � m)u(1 � u))2 which we further bound by max x2[0;jIj] g(x) where g(x) := x 16 + (jIj � x)u3(1 � u) (x 4 + (jIj � x)u(1 � u))2 : The derivative of g is g0(x) = 4jIju(1 + u � 6u2 + 12u3 � 8u4) � (1 � 2u � 16u3 + 48u4 � 32u5)x (1 � 2u)�1(4jIj(1 � u)u + (1 � 2u)2x)3 ; where note that the denominator is positive for all u 2 [0; 1 2 ] and the numerator is of the form A(u) � B(u)x where A(u) := 4jIju(1 + u � 6u2 + 12u3 � 8u4) > 0; 8u 2 _ 0; 1 2 _ and


B(u) := 1 � 2u � 16u3 + 48u4 � 32u5 > 0; 8u 2 _ 0; 1 2 _ : 14 Hence, if we take _ := A(u) B(u) , g0(x) > 0, 8x 2 (0; _), g0(_) = 0 and g0(x) < 0, 8x 2 (_;+1), i.e. g is increasing in (0; _) and decreasing in (_;+1). This implies that g achieves its maximum at x := _. The maximum value itself is g(_) = (1 + 2u + 4u2 � 8u3)2 16jIju(1 � u � 4u2 + 4u3) : This concludes the proof. _ Proof of claim 3.16: From Lemma 3.7 if follows that __ __ TP(_1; _2 1) � TP(_2; _2 2) __ __ _ max _ j_1 � _2j _1 + j_2 1 � _2 2j + 1 _2 1 ; j_1 � _2j _2 + j_2 1 � _2


2j + 1 _2 2 _ : Denoting J = fbk_c; : : : ; bk=2c � 1g, we have that _1 � _2 = X i2M_ 1 (k) E[Xi] � X i2M_ 1 (k) E[Yi] = X j2J X i2I_ j (E[Xi] � E[Yi]) = X j2J _j : _2 1= X j2J X i2I_ j Var[Xi] = X j2J nj j k _


1� j k _ + _ 1� 2j k _Xnj i=1 _j i� Xnj i=1 (_j i )2 ! _ X j2J _ nj j k _ 1� j k _ � nj 1 k2 _ _ X j2J nj k2 (j (k � j) � 1) :


Similarly _2 2= X j2J X i2I_ j Var[Yi] = X j2J _ nj j k _ 1� j k _ + mj 1 k _ 1� 2j + 1 k __ _ X j2J _ nj j k _ 1� j k


__ _ X j2J nj k2 j (k � j) _ X j2J nj k2 (j (k � j) � 1) : Finally, _2 1 � _2 2= X j2J 0 @ X i2I_ j Var[Xi] � X i2I_ j Var[Yi] 1 A= X j2J _ 1� 2j k _ _j + mj


1 k2 � Xnj i=1 (_j i )2 !! ; where observe that _2 1 � ______2 2 _ 0, since _ 1� 2j k _ _j + mj 1 k2 � Xnj i=1 (_j i )2 ! _ _ 1� 2j k _ _j + mj 1 k2


� 1 k Xnj i=1 _j i ! = _ 1� 2j k _ _j � 1 k _j = = _ 1� 2j + 1 k _ _j _ 0: 15 We proceed to bound each of the terms j_1�_2j _1 , j_2 1�_2 2j+1 _2 1 , j_1�_2j _2 and j_2 1�_2 2j+1 _2 2 separately. We have


j_1 � _2j _1 = _1 � _2 _1 _ P j2J _j qP j2J nj k2 (j (k � j) � 1) _ k P j2J _j qP j2J:nj_1 (j (k � j) � 1) _ Z qP j2J:nj_1 (j (k � j) � 1) where Z = jfj 2 Jjnj _ 1gj = 1 r P j2J:nj_1 _ j(k�j)�1 Z2 _ _ 1 r Pbk_c+Z�1 j=bk_c _ j(k�j)�1 Z2 _ _ 1


r min1_Z_k=2�bk_c nPbk_c+Z�1 j=bk_c _ j(k�j)�1 Z2 _o: Note that j2+XZ�1 j=j1 (j (k � j) � 1) = 1 6Z(�7 � 6j2 1 + 6j1(1 + k � Z) + 3k(�1 + Z) + 3Z � 2Z2): (4) For j1 = bk_c, let us define F(Z) := 1 Z2 j2+XZ�1 j=j1 (j (k � j) � 1) _ 1 6Z (�7 � 6j2 1 + 6j1(1 + k � Z) + 3k(�1 + Z) + 3Z � 2Z2) The derivative of F is F0(Z) = 7 + 6j2 1 + 3k � 6j1(1 + k) � 2Z2 6Z2 = 7 + 6bk_c2 + 3k � 6bk_c(1 + k) � 2Z2 6Z2 < 0; for large enough k: Hence F is decreasing in [1; k=2 � bk_c] so it achieves its minimum at Z = k=2 � bk_c. The minimum itself is F(k=2 � bk_c) = �14 + 6bk_c � 4bk_c2 � 3k + 4bk_ck + 2k2 6k � 12bk_c = (k): Hence,


j_1 � _2j _1 = O(k�1=2): Similarly, we get j_1 � _2j _2 = O(k�1=2): It remains to bound the terms j_2 1�_2 2j+1 _2 1 and j_2 1�_2 2j+1 _2 2 . We have 16 j_2 1 � _2 2j + 1 _2 1 _ 1+ P j2J (_j + mj 1 k2 ) P j2J nj k2 (j (k � j) � 1) _ 1+k P j2J (k_j) +M P


j2J nj (j (k � j) � 1) (where M := X j mj ) _ P 1 + kZ +M j2J nj (j (k � j) � 1) (where Z = jfj 2 Jjnj _ 1gj) = P kZ j2J nj (j (k � j) � 1) + P 1 +M j2J nj (j (k � j) � 1) The second term of the above expression is bounded as follows P 1 +M j2J nj (j (k � j) � 1) _ P1 j2J nj (j (k � j) � 1) +PM j2J nj (j (k � j) � 1) _ P1 j2J nj (j (k � j) � 1) +PM j2J nj (j (k � j) � 1) _ 1 bk_c(k � bk_c) � 1 + 1P j2J mj M (bk_c(k � bk_c) � 1) _ 2 bk_c(k � bk_c) � 1 = O(k�1�_): The first term is bounded as follows


P kZ j2J nj (j (k � j) � 1) _ P kZ j2J:nj_1 (j (k � j) � 1) _ 1 Pbk_c+Z�1 j=bk_c _ j(k�j)�1 kZ _ _ 1 min1_Z_k=2�bk_c nPbk_c+Z�1 j=bk_c _ j(k�j)�1 kZ _o: From above j2+XZ�1 j=j1 (j (k � j) � 1) = 1 6Z(�7 � 6j2 1 + 6j1(1 + k � Z) + 3k(�1 + Z) + 3Z � 2Z2): (5) For j1 = bk_c, let us define G(Z) := 1 Zk j2+XZ�1 j=j1 (j (k � j) � 1) _ 1 6k (�7 � 6j2 1 + 6j1(1 + k � Z) + 3k(�1 + Z) + 3Z � 2Z2)


The derivative of G is G0(Z) = 3 � 6j1 + 3k � 4Z 6k = 3 � 6bk_c + 3k � 4Z 6k > 0; for large enough k since Z _ k=2: G is increasing in [1; k=2 � bk_c] so it achieves its minimum at Z = 1. The minimum itself is G(1) = �6 � 6bk_c2 + 6bk_ck 6k = (k_): 17 Hence, P kZ j2J nj (j (k � j) � 1) = O(k�_); which together with the above implies j_2 1 � _2 2j + 1 _2 1 = O(k�_) Similarly, we get j_2 1 � _2 2j + 1 _2 2 = O(k�_): Putting everything together we get that __ __ TP(_1; _2 1) � TP(_2; _2 2) __


__ _ O(k�_) + O(k�1=2): The results will enhance not only the technology between Meme and Lemma but the variable will cross references x1 and x2 increasing speed with less power and less memory. A. Mandarino

