VOLUMEN 9 NÚMERO 1 ENERO A JUNIO DE 2005 ISSN: 1870-6525

Morfismos
Comunicaciones Estudiantiles
Departamento de Matemáticas
Cinvestav

Editores Responsables • Isidoro Gitler • Jesús González

Consejo Editorial • Luis Carrera • Samuel Gitler • Onésimo Hernández-Lerma • Héctor Jasso Fuentes • Miguel Maldonado • Raúl Quiroga Barranco • Enrique Ramírez de Arellano • Enrique Reyes • Armando Sánchez • Martín Solis • Leticia Zárate

Editores Asociados • Ricardo Berlanga • Emilio Lluis Puebla • Isaías López • Guillermo Pastor • Víctor Pérez Abreu • Carlos Prieto • Carlos Rentería • Luis Verde

Secretarias Técnicas • Roxana Martínez • Laura Valencia

Morfismos puede ser consultada electrónicamente en "Revista Morfismos" de la dirección http://www.math.cinvestav.mx. Para mayores informes dirigirse al teléfono 50 61 38 71. Toda correspondencia debe ir dirigida a la Sra. Laura Valencia, Departamento de Matemáticas del Cinvestav, Apartado Postal 14-740, México, D.F. 07000, o por correo electrónico: laura@math.cinvestav.mx.


Información para Autores

El Consejo Editorial de Morfismos, Comunicaciones Estudiantiles del Departamento de Matemáticas del CINVESTAV, convoca a estudiantes de licenciatura y posgrado a someter artículos para ser publicados en esta revista bajo los siguientes lineamientos:

• Todos los artículos serán enviados a especialistas para su arbitraje. No obstante, los artículos serán considerados sólo como versiones preliminares y por tanto pueden ser publicados en otras revistas especializadas.

• Se debe anexar, junto con el nombre del autor, su nivel académico y la institución donde estudia o labora.

• El artículo debe empezar con un resumen en el cual se indique de manera breve y concisa el resultado principal que se comunicará.

• Es recomendable que los artículos presentados estén escritos en LaTeX y sean enviados a través de un medio electrónico. Los autores interesados pueden obtener el formato LaTeX 2ε utilizado por Morfismos en "Revista Morfismos" de la dirección web http://www.math.cinvestav.mx, o directamente en el Departamento de Matemáticas del CINVESTAV. La utilización de dicho formato ayudará en la pronta publicación del artículo.

• Si el artículo contiene ilustraciones o figuras, éstas deberán ser presentadas de forma que se ajusten a la calidad de reproducción de Morfismos.

• Los autores recibirán un total de 15 sobretiros por cada artículo publicado.

• Los artículos deben ser dirigidos a la Sra. Laura Valencia, Departamento de Matemáticas del Cinvestav, Apartado Postal 14-740, México, D.F. 07000, o a la dirección de correo electrónico laura@math.cinvestav.mx.

Author Information

Morfismos, the student journal of the Mathematics Department of Cinvestav, invites undergraduate and graduate students to submit manuscripts to be published under the following guidelines:

• All manuscripts will be refereed by specialists. However, accepted papers will be considered to be "preliminary versions", in that authors may republish their papers in other journals, in the same or similar form.

• In addition to their affiliation, authors must state their academic status (student, professor, ...).

• Each manuscript should begin with an abstract summarizing the main results.

• Morfismos encourages electronically submitted manuscripts prepared in LaTeX. Authors may retrieve the LaTeX 2ε macros used for Morfismos through the web site http://www.math.cinvestav.mx, at "Revista Morfismos", or by direct request to the Mathematics Department of Cinvestav. The use of these macros will help in the production process and also minimize publishing costs.

• All illustrations must be of professional quality.

• 15 offprints of each article will be provided free of charge.

• Manuscripts submitted for publication should be sent to Mrs. Laura Valencia, Departamento de Matemáticas del Cinvestav, Apartado Postal 14-740, México, D.F. 07000, or to the e-mail address laura@math.cinvestav.mx.

Lineamientos Editoriales

"Morfismos" es la revista semestral de los estudiantes del Departamento de Matemáticas del CINVESTAV, que tiene entre sus principales objetivos el que los estudiantes adquieran experiencia en la escritura de resultados matemáticos. La publicación de trabajos no estará restringida a estudiantes del CINVESTAV; deseamos fomentar también la participación de estudiantes en México y en el extranjero, así como la contribución por invitación de investigadores.

Los reportes de investigación matemática o resúmenes de tesis de licenciatura, maestría o doctorado pueden ser publicados en Morfismos. Los artículos que aparecerán serán originales, ya sea en los resultados o en los métodos. Para juzgar esto, el Consejo Editorial designará revisores de reconocido prestigio y con experiencia en la comunicación clara de ideas y conceptos matemáticos.

Aunque Morfismos es una revista con arbitraje, los trabajos se considerarán como versiones preliminares que luego podrán aparecer publicados en otras revistas especializadas. Si tienes alguna sugerencia sobre la revista, hazla saber a los editores y con gusto estudiaremos la posibilidad de implementarla. Esperamos que esta publicación propicie, como una primera experiencia, el desarrollo de un estilo correcto de escribir matemáticas.


Editorial Guidelines “Morfismos” is the journal of the students of the Mathematics Department of CINVESTAV. One of its main objectives is for students to acquire experience in writing mathematics. Morfismos appears twice a year. Publication of papers is not restricted to students of CINVESTAV; we want to encourage students in Mexico and abroad to submit papers. Mathematics research reports or summaries of bachelor, master and Ph.D. theses will be considered for publication, as well as invited contributed papers by researchers. Papers submitted should be original, either in the results or in the methods. The Editors will assign as referees well–established mathematicians. Even though Morfismos is a refereed journal, the papers will be considered as preliminary versions which could later appear in other mathematical journals. If you have any suggestions about the journal, let the Editors know and we will gladly study the possibility of implementing them. We expect this journal to foster, as a preliminary experience, the development of a correct style of writing mathematics.


Contenido

Approximation of general optimization problems
Jorge Álvarez-Mena and Onésimo Hernández-Lerma . . . . . . . . . . . . . . . . . . . . . . 1

Linear programming relaxations of the mixed postman problem
Francisco Javier Zaragoza Martínez . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

A nonmeasurable set as a union of a family of increasing well-ordered measurable sets
Juán González-Hernández and César E. Villarreal . . . . . . . . . . . . . . . . . . . . . . . 35

Noncooperative continuous-time Markov games
Héctor Jasso-Fuentes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

Morfismos, Vol. 9, No. 1, 2005, pp. 1–20

Approximation of general optimization problems∗

Jorge Álvarez-Mena

Onésimo Hernández-Lerma†

Abstract

This paper concerns the approximation of a general optimization problem (OP) for which the cost function and the constraints are defined on a Hausdorff topological space. This degree of generality allows us to consider OPs for which other approximation approaches are not applicable. First we obtain convergence results for a general OP, and then we present two applications of these results. The first application is to approximation schemes for infinite-dimensional linear programs. The second is on the approximation of the optimal value and the optimal solutions for the so-called general capacity problem in metric spaces.

2000 Mathematics Subject Classification: 90C48. Keywords and phrases: minimization problem, approximation, infinite linear programs, general capacity problem.

1 Introduction

A constrained optimization problem (OP) is, in general, difficult to solve in closed form, and so one is naturally led to consider ways to approximate it. This in turn leads to obvious questions: how "good" are the approximations? Do they "converge" in some suitable sense? These are the questions studied in this paper for a general constrained OP, where general means that the cost function and the constraints are defined on a Hausdorff topological space. This degree of generality

∗ Invited Article.
† Partially supported by CONACyT grant 37355-3.



is important because then our results are applicable to large classes of OPs, even in infinite-dimensional spaces. For instance, as shown in Section 3, we can deal with approximation procedures for infinite linear programming problems in vector spaces with (dual) topologies which are Hausdorff but, say, not necessarily metrizable.

To be more specific, consider a general constrained OP

IP∞ : minimize {f∞(x) : x ∈ F∞},

and a sequence of approximating problems

IPn : minimize {fn(x) : x ∈ Fn}.

(The notation is explained in section 2.) The questions we are interested in are: (i) the convergence of the sequence of optimal values {min IPn} —or subsequences thereof— to min IP∞, and (ii) the convergence of sequences of optimal solutions of {IPn} —or subsequences thereof— to optimal solutions of IP∞.

We give conditions under which the convergence in (i) and (ii) holds —see Theorem 2.3. We also develop two applications of these results. The first one is on aggregation (of constraints) schemes to approximate infinite-dimensional linear programs (l.p.'s). In the second application we study the approximation of the optimal value and the optimal solutions for the so-called general capacity (GC) problem in metric spaces.

This paper is an extended version of [2], which presents the main theoretical results concerning (i) and (ii), including of course detailed proofs. Here, we are mainly interested in the applications mentioned in the previous paragraph. The main motivation for this paper was that the convergence in (i) and (ii) is directly related to some of our work on stochastic control and Markov games [1, 3], but in fact general OPs appear in many branches of mathematics, including probability theory, numerical analysis, optimal control, game theory, mathematical economics and operations research, to name just a few [4, 5, 12, 13, 14, 15, 16, 17, 18, 19, 21, 22, 23].

The problem of finding conditions under which (i) and (ii) hold is of great interest, and it has been studied in many different settings —see e.g. [7, 8, 11, 14, 15, 16, 17, 19, 21, 22, 23] and their references. In


particular, the problem can be studied using the notion of Γ-convergence (or epi-convergence) of sequences of functionals [6, 10]. However, the approach used in this paper is more direct and generalizes several known results [6, 10, 11, 19, 21, 22] —see Remark 2.4. Moreover, Example 4.7 shows that our assumptions are strictly weaker than those considered in the latter references. Namely, in Example 4.7 we study a particular GC problem in which our assumptions are satisfied, but the assumptions considered in those references fail to hold. The GC problem has been previously analyzed in e.g. [4, 5, 13] from different viewpoints.

The remainder of the paper is organized as follows. In section 2 we present our main results on the convergence and approximation of general OPs. These results are applied in section 3 to the aggregation schemes introduced in [15] to approximate infinite l.p.'s. In section 4 our results are applied to the GC problem, and a particular case of the GC problem is analyzed.

2 Convergence of general OPs

We shall use the notation IN := {1, 2, . . .}, ĪN := IN ∪ {∞} and ĪR := IR ∪ {−∞, +∞}. Let X be a Hausdorff topological space. For each n ∈ ĪN, consider a function fn : X → ĪR, a set Fn ⊂ X, and the optimization problem

IPn : Minimize fn(x) subject to: x ∈ Fn.

We call Fn the set of feasible solutions for IPn. If Fn is nonempty, the (optimum) value of IPn is defined as inf IPn := inf{fn(x) | x ∈ Fn}; otherwise, inf IPn := +∞. The problem IPn is said to be solvable if there is a feasible solution x∗ that achieves the optimum value. In this case, x∗ is called an optimal solution for IPn, and the value inf IPn is then written as min IPn = fn(x∗). We shall denote by Mn the minimum set, that is, the set of optimal solutions for IPn.

To state our assumptions we will use Kuratowski's [20] concept of the outer and inner limits of {Fn}, denoted by OL{Fn} and IL{Fn}, respectively, and defined as follows:

OL{Fn} := {x ∈ X | x = lim_{i→∞} xni, where {ni} ⊂ IN is an increasing sequence such that xni ∈ Fni for all i}.

Thus a point x ∈ X is in OL{Fn} if x is an accumulation point of a sequence {xn} with xn ∈ Fn for all n. On the other hand, if x is the limit of the sequence {xn} itself, then x is in the inner limit IL{Fn}, i.e.

IL{Fn} := {x ∈ X | x = lim_{n→∞} xn, where xn ∈ Fn for all but a finite number of n}.

In these definitions we may, of course, replace {Fn} with any other sequence of subsets of X. Also note that IL{·} ⊂ OL{·}. We shall consider two sets of hypotheses.

Assumption 2.1

(a) The minimum sets Mn satisfy

(1) OL{Mn} ⊂ F∞.

(b) If xni is in Mni for all i and xni → x (so that x is in OL{Mn}), then

(2) lim inf_{i→∞} fni(xni) ≥ f∞(x).

(c) For each x ∈ F∞ there exist N ∈ IN and a sequence {xn} with xn ∈ Fn for all n ≥ N, and such that xn → x and lim_{n→∞} fn(xn) = f∞(x).

Assumption 2.2 Parts (b) and (c) are the same as in Assumption 2.1. Moreover,

(a) The minimum sets Mn satisfy

(3) IL{Mn} ⊂ F∞.
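As a toy illustration of the outer and inner limits just defined, the following sketch (our own construction, not from the paper) takes Fn := {(−1)^n, 1/n} in X = IR, so that 0 ∈ IL{Fn} while ±1 belong only to OL{Fn}. The greedy nearest-point choice and the tolerance are ad hoc numerical stand-ins for the topological definitions.

```python
def F(n):
    # toy sequence of feasible sets F_n = {(-1)^n, 1/n} in X = R
    return {(-1) ** n, 1.0 / n}

def in_inner_limit(x, n_max=10_000, tol=1e-3):
    # x is in IL{F_n} if one can pick x_n in F_n with x_n -> x;
    # here: take the nearest point of F_n and check the whole tail is close
    tail = [min(F(n), key=lambda p: abs(p - x)) for n in range(n_max - 100, n_max)]
    return all(abs(p - x) < tol for p in tail)

def in_outer_limit(x, n_max=10_000, tol=1e-3):
    # x is in OL{F_n} if some subsequence of points x_n in F_n converges to x;
    # here: check that the tail gets close to x at least once
    dists = [abs(min(F(n), key=lambda p: abs(p - x)) - x) for n in range(n_max - 100, n_max)]
    return min(dists) < tol

assert in_inner_limit(0.0)                            # x_n = 1/n -> 0
assert in_outer_limit(1.0) and in_outer_limit(-1.0)   # even / odd subsequences
assert not in_inner_limit(1.0)                        # 1 lies only in even-indexed sets
```

This also exhibits IL{·} ⊂ OL{·} with a strict inclusion: here IL{Fn} = {0} while OL{Fn} = {−1, 0, 1}.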


Note that Assumption 2.1(c) implies, in particular, F∞ ⊂ IL{Fn}, but equality need not hold. In fact, in section 4 we give an example in which Assumption 2.1 is satisfied, in particular F∞ ⊂ IL{Fn}, but F∞ ̸= IL{Fn} (see Example 4.7). On the other hand, note that Assumptions 2.2(a),(c) yield that IL{Mn} ⊂ F∞ ⊂ IL{Fn}.

Theorem 2.3 (a) If Assumption 2.1 holds, then

(4) OL{Mn} ⊂ M∞.

In other words, if {xn} is a sequence of minimizers of {IPn}, and a subsequence {xni} of {xn} converges to x ∈ X, then x is optimal for IP∞. Furthermore, the optimal values of IPni converge to the optimal value of IP∞, that is,

(5) min IPni = fni(xni) → f∞(x) = min IP∞.

(b) Suppose that Assumption 2.2 holds. Then IL{Mn} ⊂ M∞. If in addition IL{Mn} is nonempty, then

(6) min IPn → min IP∞.

Proof: We only prove (a) because the proof of (b) is quite similar.

To prove (a), let x ∈ X be in the outer limit OL{Mn}. Then there is a sequence {ni} ⊂ IN and xni ∈ Mni for all i such that

(7) xni → x.

Moreover, by Assumption 2.1(a), x is in F∞. To prove that x is in M∞, choose an arbitrary x′ ∈ F∞ and let {x′n} and N be as in Assumption 2.1(c) for x′, that is, x′n is in Fn for all n ≥ N, x′n → x′, and fn(x′n) → f∞(x′). Furthermore, if {ni} ⊂ IN is as in (7), then the subsequence {x′ni} of {x′n} also satisfies

(8) x′ni is in Fni, x′ni → x′, and fni(x′ni) → f∞(x′).

´ Jorge Alvarez–Mena and On´esimo Hern´andez–Lerma

6

Combining the latter fact with Assumption 2.1(b) and the optimality of each xni we get

f∞(x) ≤ lim inf_{i→∞} fni(xni)    (by (2))
      ≤ lim inf_{i→∞} fni(x′ni)
      = f∞(x′)                    (by (8)).

Hence, as x′ ∈ F∞ was arbitrary, it follows that x is in M∞, that is, (4) holds.

To prove (5), suppose again that x is in OL{Mn} and let xni ∈ Mni be as in (7). By Assumption 2.1(c), there exists a sequence x′ni ∈ Fni that satisfies (8) with x in place of x′; thus

f∞(x) ≤ lim inf_{i→∞} fni(xni)    (by (2))
      ≤ lim sup_{i→∞} fni(xni)
      ≤ lim sup_{i→∞} fni(x′ni)
      = f∞(x)                     (by (8)).

This proves (5). ✷
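A minimal numerical check of Theorem 2.3(a), on hypothetical data of our own choosing: take fn(x) = (x − 1 − 1/n)² and Fn = [0, 2] for every n, with f∞(x) = (x − 1)². Assumption 2.1 clearly holds, and a crude grid search (standing in for an exact solver) shows the minimizers converging as in (4) and the values as in (5).

```python
def f(n, x):
    # f_n(x) = (x - 1 - 1/n)^2; n = None plays the role of n = infinity
    shift = 0.0 if n is None else 1.0 / n
    return (x - 1.0 - shift) ** 2

def minimize_on_grid(n, lo=0.0, hi=2.0, steps=200_001):
    # brute-force grid search standing in for solving IP_n on F_n = [0, 2]
    xs = [lo + (hi - lo) * k / (steps - 1) for k in range(steps)]
    x_star = min(xs, key=lambda x: f(n, x))
    return x_star, f(n, x_star)

x_inf, v_inf = minimize_on_grid(None)
for n in (10, 100, 1000):
    x_n, v_n = minimize_on_grid(n)
    assert abs(x_n - x_inf) <= 1.0 / n + 1e-4   # minimizers converge, as in (4)
    assert abs(v_n - v_inf) <= 1e-6             # optimal values converge, as in (5)
```

Here the minimizer of IPn is 1 + 1/n, which converges to the minimizer 1 of IP∞, and min IPn = 0 = min IP∞ for every n.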

In Theorem 2.3 the solvability of each IPn is not assumed. Thus OL{Mn} might be empty; in fact, it might be empty even if each IPn is solvable. In this case, the (convergence of minimizers) inclusion (4) trivially holds. In the convergence of the optimal values (5) and (6), unlike the convergence of minimizers, it is implicitly assumed that OL{Mn} is nonempty.

Remark 2.4 (i) Parts (a) and (b) of Theorem 2.3 generalize in particular some results in [21, 22] and [11, 19], respectively. Indeed, using our notation, in [11, 19, 21, 22] it is assumed that the cost functions fn are continuous and converge uniformly to f∞. On the other hand, with respect to the feasible sets Fn, in [11] it is assumed that IL{Fn} = F∞, whereas in [19, 21, 22] it is required that Fn → F∞ in the Hausdorff metric. These hypotheses trivially yield the following conditions:

(C1) The inner and/or the outer limit of the feasible sets Fn coincides with F∞, i.e.

(9) IL{Fn} = F∞

or

(10) OL{Fn} = IL{Fn} = F∞.

(C2) For every x in X and for every sequence {xn} in X converging to x, it holds that

(11) lim_{n→∞} fn(xn) = f∞(x).

However, instead of (10) and (11) we require (the weaker) Assumption 2.1, and instead of (9) and (11) we require (the weaker) Assumption 2.2.

(ii) Theorem 2.3 generalizes the results in [6, 10], where the notion of Γ-convergence is used. Indeed, [6, 10] study problems of the form

(12) min_{x∈X} Fn(x).

Each of our problems IPn can be put in the form (12) by letting

Fn(x) := fn(x) if x ∈ Fn, and Fn(x) := ∞ if x ∉ Fn,

and then, when the space X is first countable, the assumptions in [6, 10] can be translated to this context as follows: the sequence {Fn} Γ-converges to F∞ —see Theorems 7.8 and 7.18 in [10]. On the other hand, when X is first countable, the sequence {Fn} Γ-converges to F∞ if and only if

(C3) For every x in X and for every sequence {xn} in X converging to x, it holds that lim inf_{n→∞} Fn(xn) ≥ F∞(x).

(C4) For every x in X there exists a sequence {xn} in X converging to x such that lim_{n→∞} Fn(xn) = F∞(x).

See Proposition 8.1 in [10]. It is natural to assume that fn(x) < ∞ for each x ∈ Fn and n ∈ IN, and that F∞ is nonempty. In this case, (C3) implies part (b) of Assumptions 2.1 and 2.2, (C4) implies part (c), and (C3) together with (C4) imply part (a). Indeed, the last statement


can be proved as follows. Let x ∈ X be in OL{Mn}. Then there is a sequence {ni} ⊂ IN and xni ∈ Mni for all i such that xni → x. Now, as F∞ is nonempty we can take x′ in F∞. For this x′, let {x′n} ⊂ X be as in (C4), so that

F∞(x) ≤ lim inf_{i→∞} Fni(xni)     (by (C3))
      ≤ lim inf_{i→∞} Fni(x′ni)    (because xni is in Mni)
      = lim_{i→∞} Fni(x′ni)        (by (C4))
      = F∞(x′) < ∞                 (by (C4)).

Hence x is in F∞. On the other hand, if in addition we assume that fn(x) ≤ K for all x ∈ Fn, n ∈ IN and some K ∈ IR, then (C3) and (C4) imply (10). In fact, (C4) implies the inclusion F∞ ⊂ IL{Fn} ⊂ OL{Fn}, and (C3) together with the uniform boundedness condition implies the reverse inclusion.

In the next two sections we present applications of Theorem 2.3. We also show, in Example 4.7, a particular problem in which Assumption 2.1 is satisfied, but the assumptions considered in [6, 10, 11, 19, 21, 22] do not hold.

3 Approximation schemes for l.p.'s

As a first application of Theorem 2.3, in this section we consider the aggregation (of constraints) schemes introduced in [15] to approximate infinite linear programs (l.p.'s). (See also [17] or chapter 12 in [16] for applications of the aggregation schemes to some stochastic control problems.) Our main objective is to show that the convergence of these schemes can be obtained from Theorem 2.3. First we introduce the l.p. we shall work with. Let (X , Y) and (Z, W) be two dual pairs of vector spaces. The spaces X and Y are assumed to be endowed with the weak topologies σ(X , Y) and σ(Y, X ), respectively. Thus, in particular, the topological spaces X and Y are Hausdorff. We denote by ⟨·, ·⟩ the bilinear form on both X × Y and Z × W. Let A : X → Z be a weakly continuous linear map with adjoint


A∗ : W → Y, i.e.

⟨x, A∗w⟩ := ⟨Ax, w⟩ ∀ x ∈ X , w ∈ W.

We denote by K a positive cone in X . For given vectors c ∈ Y and b ∈ Z, we consider the (primal) l.p.

(13) LP : Minimize ⟨x, c⟩
(14) subject to: Ax = b, x ∈ K.

A vector x ∈ X is said to be a feasible solution for LP if it satisfies (14), and we denote by F the set of feasible solutions for LP. The program LP is called consistent if it has a feasible solution, i.e. F is nonempty. The following assumption ensures that LP is solvable.

Assumption 3.1 LP has a feasible solution x0 with ⟨x0, c⟩ > 0 and, moreover, the set ∆0 := {x ∈ K | ⟨x, c⟩ ≤ ⟨x0, c⟩} is weakly sequentially compact.

Remark 3.2 Assumption 3.1 implies that the set ∆r := {x ∈ K | ⟨x, c⟩ ≤ r} is weakly sequentially compact for every r > 0, since ∆r = (r/⟨x0, c⟩)∆0.

Lemma 3.3 If Assumption 3.1 holds, then LP is solvable.

For a proof of Lemma 3.3, see Theorem 2.1 in [15]. If E is a subset of a vector space, then sp(E) denotes the space spanned (or generated) by E.

Aggregation schemes. The main assumption needed to define the aggregation schemes concerns the vector space W.

Assumption 3.4 There is an increasing sequence of finite sets En in W such that W∞ := ∪_{n=1}^∞ Wn is weakly dense in W, where Wn := sp(En).


For each n ∈ IN, let Zn be the algebraic dual of Wn, that is, Zn := {f : Wn → IR | f is a linear functional}. Thus (Zn, Wn) is a dual pair of finite-dimensional vector spaces with the natural bilinear form ⟨f, w⟩ := f(w) ∀ w ∈ Wn, f ∈ Zn. Now let An : X → Zn be the linear operator given by

(15) An x(w) := ⟨Ax, w⟩ ∀ w ∈ Wn.

The adjoint A∗n : Wn → Y of An is the adjoint A∗ of A restricted to Wn, that is, A∗n := A∗|Wn. Finally, we define bn ∈ Zn by bn(·) := ⟨b, ·⟩|Wn. With these elements we can define the aggregation schemes as follows. For each n ∈ IN,

LPn : Minimize ⟨x, c⟩
(16) subject to: An x = bn, x ∈ K,

which has the form of our problem IPn (in section 2) with fn(x) := ⟨x, c⟩ and Fn the set of vectors x ∈ X that satisfy (16). The l.p. LPn is called an aggregation (of constraints) of LP. Moreover, from Proposition 2.2 in [15] we have the following.

Lemma 3.5 Under Assumptions 3.1 and 3.4, the l.p. LP∞ is equivalent to LP in the sense that (using Lemma 3.3)

(17) min LP = min LP∞.

The following lemma provides the connection between the aggregation schemes and Theorem 2.3.

Lemma 3.6 Assumptions 3.1 and 3.4 imply that the aggregation schemes LPn satisfy Assumption 2.1.

Proof: To check parts (a) and (b) of Assumption 2.1, for each n ∈ IN let xn ∈ Fn be such that xn → x weakly in X . Thus, by definition of the weak topology on X , ⟨xn, y⟩ → ⟨x, y⟩ for all y ∈ Y, which in particular yields

lim_{n→∞} ⟨xn, c⟩ = ⟨x, c⟩.


This implies part (b) of Assumption 2.1, and also that the sequence xn lies in the weakly sequentially compact set ∆r for some r > 0 (see Remark 3.2). In particular, x is in K, and from (16) and the definitions of An and bn we get

A∞x(w) = lim_{n→∞} An xn(w) = lim_{n→∞} bn(w) = b∞(w) ∀ w ∈ W∞.

Thus x is in F∞, which yields that OL{Fn} ⊂ F∞, and so Assumption 2.1(a) follows.

Finally, to verify part (c) of Assumption 2.1, choose an arbitrary x ∈ F∞. Then, by (15) and the definition of bn,

A∞x(w) = ⟨Ax, w⟩ = ⟨b, w⟩ = b∞(w) ∀ w ∈ W∞.

In particular, if w ∈ Wn for some n ∈ IN, the latter equation becomes An x(w) = bn(w). Hence An x = bn. It follows that F∞ ⊂ Fn for all n ∈ IN and, moreover, the sets Fn form a nonincreasing sequence, i.e.

(18) Fn ⊇ Fn+1 ∀ n ∈ IN,

which implies part (c) of Assumption 2.1. ✷

✷

To summarize, from Lemma 3.6 and Theorem 2.3, together with (17) and (18), we get the following.

Theorem 3.7 Suppose that Assumptions 3.1 and 3.4 are satisfied. Then

(a) The aggregation LPn is solvable for every n ∈ IN.

(b) For every n ∈ IN, let xn ∈ Fn be an optimal solution for LPn. Then, as n → ∞,

(19) ⟨xn, c⟩ ↑ min LP∞ = min LP,

and, furthermore, every weak accumulation point of the sequence {xn} is an optimal solution for LP.


Proof: (a) It is clear that Assumption 3.1 also holds for each aggregation LPn. Thus the solvability of LPn follows from Lemma 3.3.

(b) From Lemma 3.6, we see that Theorem 2.3(a) holds for the aggregations LPn. Hence to complete the proof we only need to verify (19). To do this, note that (18) yields min LPn ≤ min LPn+1 for each n ∈ IN and, moreover, the sequence of values min LPn is bounded above by min LP∞. This fact together with (5) and Lemma 3.5 gives (19). ✷

Theorem 3.7 was obtained in [15] using a different approach.

Remark 3.8 In the aggregation schemes LPn, the vector spaces Zn and Wn are finite-dimensional for n ∈ IN, and so each LPn is a so-called semi-infinite l.p. Hence Theorem 3.7 can be seen as a result on the approximation of the infinite-dimensional l.p. LP by semi-infinite l.p.'s. On the other hand, a particular semi-infinite l.p. arises when the vector space X of decision variables (or just the cone K) is finite-dimensional, but the vector b lies in an infinite-dimensional space W [5, 14, 18]. In the latter case, the aggregation schemes would be approximations to LP by finite l.p.'s.
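To make the aggregation idea concrete, here is a small hypothetical instance of our own (not from [15]): equality constraints x1 + w·x2 = 1 + w, indexed by a growing family of test values w, are enforced only for the first n of them, and a brute-force grid search over K ∩ [0, 2]² (standing in for an LP solver) exhibits the nested feasible sets (18) and the monotone convergence of values (19).

```python
# Toy aggregation of constraints (hypothetical instance):
# LP: minimize x1 + x2 over x >= 0 subject to x1 + w*x2 = 1 + w for all w,
# approximated by LP_n, which keeps only the first n test values of w.
W = [0.5, 2.0 / 3.0, 0.75]              # first few members of a growing family

def min_LP_n(n, step=0.01, tol=1e-6):
    # brute-force grid search over [0, 2]^2, standing in for an LP solver
    best = float("inf")
    for i in range(201):
        for j in range(201):
            x1, x2 = i * step, j * step
            if all(abs(x1 + w * x2 - (1 + w)) <= tol for w in W[:n]):
                best = min(best, x1 + x2)
    return best

values = [min_LP_n(n) for n in (1, 2, 3)]
assert values == sorted(values)         # nested feasible sets give monotone values
assert abs(values[0] - 1.5) < 1e-3      # one aggregated constraint: optimum at (1.5, 0)
assert abs(values[-1] - 2.0) < 1e-3     # enough constraints pin x = (1, 1), value 2
```

With a single test value the feasible set is a whole segment and the value is 1.5; adding a second test value already pins down the full solution x = (1, 1), so the values increase to min LP = 2 as in (19).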

4 The GC problem

The general capacity (GC) problem is related to the problem of determining the electrostatic capacity of a conducting body; in fact, it originated in that electrostatic capacity problem —see, for instance, [4, 5]. Let X and Y be metric spaces endowed with their corresponding Borel σ-algebras B(X) and B(Y ). We denote by M(Y ) the vector space of finite signed measures on Y , and by M+(Y ) the cone of nonnegative measures in M(Y ). Now let b : X → IR, c : Y → IR, and g : X × Y → IR be nonnegative Borel-measurable functions. Then the GC problem can be stated as follows.

GC : Minimize ∫Y c(y) µ(dy)
subject to: ∫Y g(x, y) µ(dy) ≥ b(x) ∀ x ∈ X, µ ∈ M+(Y ).


In this section we study the convergence problem (see (i) and (ii) in section 1) in which g and c are replaced with sequences of nonnegative measurable functions gn : X × Y → IR and cn : Y → IR, for n ∈ IN, such that gn → g∞ =: g and cn → c∞ =: c uniformly. Thus we shall deal with the GC problems

GCn : Minimize ∫Y cn(y) µ(dy)
(20) subject to: ∫Y gn(x, y) µ(dy) ≥ b(x) ∀ x ∈ X, µ ∈ M+(Y ),

for n ∈ IN. For each n ∈ IN, we denote by Fn the set of feasible solutions for GCn, that is, the set of measures µ that satisfy (20) and, in addition, ∫Y cn dµ < ∞.
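For intuition about the structure of GC, note that when X reduces to a single point there is only one linear constraint, and (assuming g > 0) an optimal measure can be taken to be a point mass at a minimizer of c/g. The sketch below uses hypothetical data of our own on a finite grid standing in for Y; it is an illustration, not the paper's method.

```python
# One-constraint GC toy (|X| = 1): minimize sum_y c(y)*mu({y})
# subject to sum_y g(y)*mu({y}) >= b, mu >= 0, on a finite grid Y.
Y = [k / 10 for k in range(1, 21)]        # grid standing in for Y = (0, 2]
c = {y: 1 + (y - 1.3) ** 2 for y in Y}    # hypothetical cost function
g = {y: 1 + y for y in Y}                 # hypothetical kernel g(x, y), x fixed
b = 2.0

y_star = min(Y, key=lambda y: c[y] / g[y])
mu = {y: 0.0 for y in Y}
mu[y_star] = b / g[y_star]                # candidate optimum: scaled point mass

value = sum(c[y] * mu[y] for y in Y)
assert sum(g[y] * mu[y] for y in Y) >= b - 1e-12          # feasibility
# no other single-atom feasible measure does better:
assert all(value <= c[y] * (b / g[y]) + 1e-12 for y in Y)
```

The point-mass form reflects that the optimum of a linear program with one constraint is attained at an extreme ray; the full GC problem, with infinitely many constraints indexed by x ∈ X, is genuinely infinite-dimensional.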

Convergence. We shall study the convergence issue via Theorem 2.3. First, we introduce assumptions that guarantee the solvability of the GC problems. We shall distinguish two cases for the cost functions cn, the bounded case and the unbounded case, which require slightly different hypotheses. For the bounded case we suppose the following.

Assumption 4.1 (Bounded case) For each n ∈ IN:

(a) Fn is nonempty.

(b) The function gn(x, ·) is bounded above and upper semicontinuous (u.s.c.) for each x ∈ X.

(c) The function cn is bounded and lower semicontinuous (l.s.c.). Further, cn is bounded away from zero, that is, there exists δn > 0 such that cn(y) ≥ δn for all y ∈ Y .

In addition,

(d) The space Y is compact.

For the unbounded case, we replace parts (c) and (d) with an inf-compactness hypothesis.

Assumption 4.2 (Unbounded case) Parts (a)-(c) are the same as in Assumption 4.1. Moreover,


(d) For each n ∈ IN, the function cn is inf-compact, which means that, for each r ∈ IR, the set {y ∈ Y | cn(y) ≤ r} is compact. Further, cn is bounded away from zero.

Observe that the inf-compactness condition implies that cn is l.s.c. We next introduce the assumptions for our convergence and approximation results. As above, we require two sets of assumptions depending on whether the cost functions cn are bounded or unbounded. (See Remark 4.8 for alternative sets of assumptions.)

Assumption 4.3 (Bounded case)

(a) (Slater condition) There exist µ ∈ F∞ and η > 0 such that

∫Y g∞(x, y) µ(dy) ≥ b(x) + η ∀ x ∈ X.

(b) gn → g∞ uniformly on X × Y .

(c) cn → c∞ uniformly on Y .

Assumption 4.4 (Unbounded case) Parts (a) and (b) are the same as in Assumption 4.3. Moreover,

(c) cn ↓ c∞ uniformly on Y .

Before stating our main result for the GC problem we recall some facts on the weak convergence of measures (for further details see [9] or chapter 12 in [16], for instance).

Definition 4.5 Let Y , M(Y ) and M+(Y ) be as at the beginning of this section. A sequence {µn} in M+(Y ) is said to be bounded if there exists a constant m such that µn(Y ) ≤ m for all n. Let Cb(Y ) be the vector space of continuous bounded functions on Y . We say that a sequence {µn} in M(Y ) converges weakly to µ ∈ M(Y ) if µn → µ in the weak topology σ(M(Y ), Cb(Y )), i.e.

∫Y u dµn → ∫Y u dµ ∀ u ∈ Cb(Y ).


A subset M0 of M+(Y ) is said to be relatively compact if for any sequence {µn} in M0 there is a subsequence {µm} of {µn} and a measure µ in M+(Y ) (but not necessarily in M0) such that µm → µ weakly. In the latter case, we say that µ is a weak accumulation point of {µn}. We now state our main result in this section.

Theorem 4.6 Suppose that either Assumptions 4.1 and 4.3, or 4.2 and 4.4, hold. Then

(a) GCn is solvable for every n ∈ IN.

(b) The optimal value of GCn converges to the optimal value of GC∞, i.e.

(21) min GCn −→ min GC∞.

Furthermore, if µn ∈ M+(Y ) is an optimal solution for GCn for each n ∈ IN, then the sequence {µn} is relatively compact, and every weak accumulation point of {µn} is an optimal solution for GC∞.

(c) If GC∞ has a unique optimal solution, say µ, then for any µn in the set of optimal solutions for GCn, with n ∈ IN, the sequence {µn} converges weakly to µ.

For a proof of Theorem 4.6 the reader is referred to [1]. We shall conclude this section with an example which satisfies our hypotheses, Assumption 2.1, while the hypotheses used in [6, 10, 11, 19, 21, 22] do not hold.


(i) For each sequence µn ∈ M+(Y ) such that µn → µ weakly we have

lim_{n→∞} ∫Y cn dµn = ∫Y c∞ dµ.

(ii) IL{Fn} = F∞.

Consider the spaces X = Y = [0, 2], and for each n ∈ IN let gn ≡ 1 and

cn(x) := 1 if x = 0,  1/x if x ∈ (0, 1],  1 if x ∈ (1, 2].

Let b ≡ 0. With these elements, the set Fn of feasible solutions for each problem GCn is given by

Fn := {µ ∈ M+([0, 2]) : ∫ gn dµ = µ([0, 2]) ≥ b, ∫ cn dµ < ∞}.

As the cost functions cn are unbounded, we consider the Assumptions 4.2 and 4.4, which are obviously true in the present case, and which in turn imply Assumption 2.1 —see Lemma 3.11 in [1]. Next we show that (i) and (ii) do not hold. Let µ be the lebesgue measure on Y = [0, 2], and for each n ∈ IN let µn be the restriction of µ to [1/n, 2], i.e. µn (B) := µ(B ∩ [1/n, 2]) for all B ∈ B(Y ). Thus µn is in Fn for each n ≥ 2, and µn% → µ weakly. Therefore µ is in IL{Fn }, but µ is not in F∞ because c∞ dµ = ∞. Hence (ii) does not hold. Similarly, let µ′n := (1/kn )µn with kn := 1 + ln(n). Then µ′n is in Fn for all n ≥ 2, and µ′n → 0 =: µ′ weakly, but ! ! ′ ̸ c∞ dµ′ = 0. cn dµn = 1 −→

Thus (i) is not satisfied.
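The computations in this example can be checked numerically. The sketch below (a rough midpoint-rule integration in Python; the cost function c and the constants kn follow the example, while the code itself is ours) verifies that the integral of c with respect to µn equals 1 + ln(n), so it diverges, while the integral with respect to µ′n equals 1 for every n:

```python
import math

def integral_c(a, b, steps=200_000):
    # Midpoint-rule approximation of the integral over [a, b] of
    # c(x) = 1/x on (0, 1] and c(x) = 1 on (1, 2].
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h
        total += (1.0 / x if x <= 1.0 else 1.0) * h
    return total

for n in (2, 10, 100):
    k_n = 1.0 + math.log(n)
    val = integral_c(1.0 / n, 2.0)       # integral of c with respect to mu_n
    assert abs(val - k_n) < 1e-3         # equals 1 + ln(n), diverges with n
    assert abs(val / k_n - 1.0) < 1e-3   # integral w.r.t. mu'_n = 1 for all n
```

So the scaled measures µ′n all assign total cost exactly 1, while their weak limit µ′ = 0 assigns cost 0, which is precisely the failure of condition (i).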

Now we compare our assumptions with those in [6, 10]. This can be done because, as M([0, 2]) is metrizable, the space X = M([0, 2]) is first countable; see Remark 2.4. We take X, Y, cn, gn and Fn as above, but now we take b = 1/2. As in the former case, Assumption 2.1 holds. Now we slightly modify the set of feasible solutions:

F̃n := {µ ∈ M+([0, 2]) : ∫ gn dµ = µ([0, 2]) ≥ 1/2, ∫ cn dµ < 1}.

Approximation of general optimization problems


Notice that F̃n ≠ ∅ and F̃n ⊂ Fn for all n ≥ 2. The sequence of modified GC problems, say {G̃Cn}, also satisfies Assumption 2.1. Indeed, parts (a) and (b) of Assumption 2.1 hold because the minimum sets have not changed (Mn = M̃n), and part (c) is true since it holds for F∞ and F̃∞ ⊂ F∞. Next we show that condition (C3) in Remark 2.4 does not hold. For each n ∈ IN, let

Fn(µ) := ∫ cn dµ if µ ∈ F̃n,  and  Fn(µ) := ∞ if µ ∉ F̃n.

Moreover, for each n ∈ IN, let µ′′n be the restriction of µ to [1 + 1/n, 2], and let µ′′ be the restriction of µ to [1, 2]. Hence we have µ′′n([0, 2]) = (n − 1)/n ≥ 1/2 and ∫ cn dµ′′n = (n − 1)/n < 1 for all n ≥ 2. Therefore µ′′n is in F̃n for each n ≥ 2, and µ′′n → µ′′ weakly. Thus µ′′ is in IL{F̃n}, but µ′′ is not in F̃∞ because ∫ c∞ dµ′′ = 1. Hence

lim inf_{n→∞} Fn(µ′′n) = lim inf_{n→∞} ∫ cn dµ′′n = 1 < ∞ = F∞(µ′′),

and so (C3) is not satisfied. It follows that the Fn do not Γ-converge to F∞; that is, the assumptions in [6, 10] do not hold.

Remark 4.8 Suppose that cn → c∞ uniformly on Y. Then the following holds.

• If c∞ is bounded away from zero, then so is cn for all n sufficiently large. Hence, in part (b) of Theorem 4.6 it suffices to require (only) that c∞ be bounded away from zero, in both the bounded and the unbounded case.

• If the sequence {cn} is uniformly bounded away from zero (that is, there exists δ > 0 such that cn(y) ≥ δ for each n ∈ IN and all y ∈ Y), then c∞ is also bounded away from zero.

On the other hand, if cn → c∞ uniformly and gn → g∞ uniformly, then the following holds.

• If the Slater condition holds for GC∞ (see Assumption 4.3(a)), then GCn also satisfies the Slater condition for all n large enough, say for all n ≥ N. It follows that, for each n ≥ N, GCn is consistent, i.e., Fn ≠ ∅. Then Assumption 4.3 implies part (a) of Assumptions 4.1 and 4.2 for each n ≥ N. Hence, in part (b) of Theorem 4.6, Assumptions 4.1(a) and 4.2(a) are not required.


• If the Slater condition holds uniformly for the sequence {GCn} (that is, for each n ∈ IN there exist µn ∈ F∞ and η > 0 such that

∫_Y gn(x, y) µn(dy) ≥ b(x) + η  for all x ∈ X),

then the Slater condition also holds for GC∞.

Jorge Álvarez-Mena¹
Programa de Investigación en Matemáticas Aplicadas y Computación, IMP, A.P. 14-805, México, D.F. 07730, México
jamena@imp.mx

Onésimo Hernández-Lerma
Departamento de Matemáticas, CINVESTAV-IPN, A.P. 14-470, México, D.F. 07000, México.
ohernand@math.cinvestav.mx

References

[1] Álvarez-Mena J.; Hernández-Lerma O., Convergence of the optimal values of constrained Markov control processes, Math. Meth. Oper. Res. 55 (2002), 461–484.
[2] Álvarez-Mena J.; Hernández-Lerma O., Convergence and approximation of optimization problems, SIAM J. Optim. 15 (2005), 527–539.
[3] Álvarez-Mena J.; Hernández-Lerma O., Existence of Nash equilibria for constrained stochastic games, Math. Meth. Oper. Res. 62 (2005).
[4] Anderson E. J.; Lewis A. S.; Wu S. Y., The capacity problem, Optimization 20 (1989), 725–742.
[5] Anderson E. J.; Nash P., Linear Programming in Infinite-Dimensional Spaces, Wiley, Chichester, U.K., 1987.
[6] Attouch H., Variational Convergence for Functions and Operators, Applicable Mathematics Series, Pitman (Advanced Publishing Program), Boston, MA, 1984.

¹ Current address: Departamento de Ciencias Básicas, UAT, Apartado Postal 140, Apizaco, Tlaxcala 90300, México.


[7] Back K., Convergence of Lagrange multipliers and dual variables for convex optimization problems, Math. Oper. Res. 13 (1988), 74–79.
[8] Balayadi A.; Sonntag Y.; Zalinescu C., Stability of constrained optimization problems, Nonlinear Analysis, Theory Methods Appl. 28 (1997), 1395–1409.
[9] Billingsley P., Convergence of Probability Measures, Wiley, New York, 1968.
[10] Dal Maso G., An Introduction to Γ-convergence, Birkhäuser, Boston, MA, 1993.
[11] Dantzig G. B.; Folkman J.; Shapiro N., On the continuity of the minimum set of a continuous function, J. Math. Anal. Appl. 17 (1967), 519–548.
[12] Dontchev A. L.; Zolezzi T., Well-Posed Optimization Problems, Lecture Notes in Math. 1543, Springer-Verlag, Berlin, 1993.
[13] Gabriel J. R.; Hernández-Lerma O., Strong duality of the general capacity problem in metric spaces, Math. Meth. Oper. Res. 53 (2001), 25–34.
[14] Goberna M. A.; López M. A., Linear Semi-Infinite Optimization, Wiley, New York, 1998.
[15] Hernández-Lerma O.; Lasserre J. B., Approximation schemes for infinite linear programs, SIAM J. Optim. 8 (1998), 973–988.
[16] Hernández-Lerma O.; Lasserre J. B., Further Topics on Discrete-Time Markov Control Processes, Springer-Verlag, New York, 1999.
[17] Hernández-Lerma O.; Lasserre J. B., Linear programming approximations for Markov control processes in metric spaces, Acta Appl. Math. 51 (1998), 123–139.
[18] Hettich R.; Kortanek K. O., Semi-infinite programming: theory, methods, and applications, SIAM Review 35 (1993), 380–429.
[19] Kanniappan P.; Sundaram M. A., Uniform convergence of convex optimization problems, J. Math. Anal. Appl. 96 (1983), 1–12.
[20] Kuratowski K., Topology I, Academic Press, New York, 1966.


[21] Schochetman I. E., Convergence of selections with applications in optimization, J. Math. Anal. Appl. 155 (1991), 278–292.
[22] Schochetman I. E., Pointwise versions of the maximum theorem with applications in optimization, Appl. Math. Lett. 3 (1990), 89–92.
[23] Vershik A. M.; Telmel't V., Some questions concerning the approximation of the optimal values of infinite-dimensional problems in linear programming, Siberian Math. J. 9 (1968), 591–601.

Morfismos, Vol. 9, No. 1, 2005, pp. 21-34

Linear programming relaxations of the mixed postman problem

Francisco Javier Zaragoza Martínez


Abstract

The mixed postman problem consists of finding a minimum cost tour of a connected mixed graph traversing all its vertices, edges, and arcs at least once. We prove in two different ways that the linear programming relaxations of two well-known integer programming formulations of this problem are equivalent. We also give some properties of the extreme points of the polyhedra defined by one of these relaxations and its linear programming dual.

2000 Mathematics Subject Classification: 05C45, 90C35. Keywords and phrases: Eulerian graph, integer programming formulation, linear programming relaxation, mixed graph, postman problem.

1 Introduction

We study a class of problems collectively known as postman problems [6]. As the name indicates, these are the problems faced by a postman who needs to deliver mail to all streets in a city, starting and ending his labour at the city's post office, and minimizing the length of his walk. In graph theoretical terms, a postman problem consists of finding a minimum cost tour of a graph traversing all its arcs (one-way streets) and edges (two-way streets) at least once. Hence, we can see postman problems as generalizations of Eulerian problems. The postman problem when all streets are one-way, known as the directed postman problem, can be solved in polynomial time by a network

This work was partly funded by UAM Azcapotzalco research grant 2270314 and CONACyT doctoral grant 69234.


flow algorithm, and the postman problem when all streets are two-way, known as the undirected postman problem, can be solved in polynomial time using Edmonds’ matching algorithm, as shown by Edmonds and Johnson [3]. However, Papadimitriou showed that the postman problem becomes NP-hard when both kinds of streets exist [9]. This problem, known as the mixed postman problem, is the central topic of this paper. We study some properties of the linear programming relaxations of two well-known integer programming formulations for the mixed postman problem — described in Section 3. We prove that these linear programming relaxations are equivalent (Theorem 4.1.1). In particular, we show that the polyhedron defined by one of them is essentially a projection of the other (Theorem 4.1.2). We also give new proofs of the half-integrality of one of these two polyhedra (Theorem 4.2.1) and of the integrality of the same polyhedron for mixed graphs with vertices of even degree (Theorem 4.2.2). Finally, we prove that the corresponding dual polyhedron has integral optimal solutions (Theorem 4.3.1).

2 Preliminaries

A mixed graph M is an ordered triple (V(M), E(M), A(M)) of three mutually disjoint sets: V(M) of vertices, E(M) of edges, and A(M) of arcs. When it is clear from the context, we simply write M = (V, E, A). Each edge e ∈ E has two ends u, v ∈ V, and each arc a ∈ A has a head u ∈ V and a tail v ∈ V. Each edge can be traversed from one of its ends to the other, while each arc can be traversed from its tail to its head. The associated directed graph M⃗ = (V, A ∪ E+ ∪ E−) of M is the directed graph obtained from M by replacing each edge e ∈ E with two oppositely oriented arcs e+ ∈ E+ and e− ∈ E−.

Let S ⊆ V. The undirected cut δE(S) determined by S is the set of edges with one end in S and the other end in S̄ = V \ S. The directed cut δA(S) determined by S is the set of arcs with tails in S and heads in S̄. The total cut δM(S) determined by S is the set δE(S) ∪ δA(S) ∪ δA(S̄). For single vertices v ∈ V(M) we write δE(v), δA(v), δM(v) instead of δE({v}), δA({v}), δM({v}), respectively. We also define the degree of S as dE(S) = |δE(S)|, and the total degree of S as dM(S) = |δM(S)|.

A walk from v0 to vn is an ordered tuple W = (v0, e1, v1, ..., en, vn) on V ∪ E ∪ A such that, for all 1 ≤ i ≤ n, ei can be traversed from vi−1 to vi. If v0 = vn, W is said to be a closed walk. If, for any two vertices u and v, there is a walk from u to v, we say that M is strongly connected. If W is closed and uses all vertices of M, we call it a tour, and if it traverses each edge and arc exactly once, we call it Eulerian. If e1, ..., en are pairwise distinct, W is called a trail. If W is a closed trail and v1, ..., vn are pairwise distinct, we call it a cycle.

Given a matrix A ∈ Q^{n×m} and a vector b ∈ Q^n, the polyhedron determined by A and b is the set P = {x ∈ R^m : Ax ≤ b}. A vector x ∈ P is called an extreme point of P if x is not a convex combination of vectors in P \ {x}. For our purposes, P is integral if all its extreme points have integer coordinates, and it is half-integral if all its extreme points have coordinates which are integer multiples of 1/2.

Let S be a set, and let T ⊆ S. If x ∈ R^S, we define x(T) = Σ_{t∈T} xt. The characteristic vector χT of T with respect to S is defined by the entries χT(t) = 1 if t ∈ T, and χT(t) = 0 otherwise. If T = S we write 1S or 1 instead of χS; if T consists of only one element t we write 1t instead of χ{t}; and if T is empty we write 0S or 0 instead of χ∅. If x ∈ R^n, the positive support of x, denoted supp+(x), is the vector y ∈ R^n such that yi = 1 if xi > 0 and yi = 0 otherwise. The negative support supp−(x) is defined similarly.
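The cut operations can be made concrete with a small data structure. The following Python sketch is a hypothetical minimal representation (the class and method names are ours, chosen to mirror the notation δE, δA, and dM defined above):

```python
# Minimal mixed graph M = (V, E, A): edges as frozensets, arcs as (tail, head).
class MixedGraph:
    def __init__(self, V, E, A):
        self.V = set(V)
        self.E = list(E)
        self.A = list(A)

    def delta_E(self, S):
        # undirected cut: edges with exactly one end in S
        return [e for e in self.E if len(set(e) & S) == 1]

    def delta_A(self, S):
        # directed cut: arcs with tail in S and head in the complement of S
        return [(t, h) for (t, h) in self.A if t in S and h not in S]

    def d_M(self, S):
        # total degree: |delta_E(S)| + |delta_A(S)| + |delta_A(S-bar)|
        S_bar = self.V - S
        return (len(self.delta_E(S)) + len(self.delta_A(S))
                + len(self.delta_A(S_bar)))

# a triangle with one edge {1, 2} and arcs (2, 3) and (3, 1)
M = MixedGraph({1, 2, 3}, [frozenset({1, 2})], [(2, 3), (3, 1)])
assert M.delta_A({2}) == [(2, 3)]   # the only arc leaving {2}
assert M.d_M({1}) == 2              # edge {1, 2} plus the arc (3, 1) into {1}
```

The total cut of a singleton counts edges at the vertex, arcs out of it, and arcs into it, which is exactly the quantity dM(v) used in the parity arguments later on.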

3 Integer programming formulations

Let M = (V, E, A) be a strongly connected mixed graph, and let c ∈ Q^{E∪A}_+ be a vector of nonnegative costs. A postman tour of M is a tour that traverses all edges and arcs of M at least once. The cost of a postman tour is the sum of the costs of all edges and arcs traversed, counting repetitions. The mixed postman problem is to find the minimum cost of a postman tour. We present two integer programming formulations of the mixed postman problem.

3.1 First formulation

The first integer programming formulation we give is due to Kappauf and Koehler [7] and to Christofides et al. [1]. Similar formulations were given by other authors [3, 5, 10]. All these formulations are based on the following characterization of mixed Eulerian graphs.

Theorem 3.1.1 (Veblen [11]) A connected mixed graph M is Eulerian if and only if M is the disjoint union of some cycles.

Let M⃗ = (V, A ∪ E+ ∪ E−) be the associated directed graph of M. For every e ∈ E, let ce+ = ce− = ce. A nonnegative integer circulation x of M⃗ (a vector on A ∪ E+ ∪ E− such that x(δ⃗(v̄)) = x(δ⃗(v)) for every v ∈ V; for more on the theory of flows see [4]) is the incidence vector of a postman tour of M if and only if xa ≥ 1 for all a ∈ A, and xe+ + xe− ≥ 1 for all e ∈ E. Therefore, we obtain the integer program:

(1) MMPT1(M, c) = min cA⊤ xA + cE⊤ xE+ + cE⊤ xE−
subject to
(2) x(δ⃗(v̄)) − x(δ⃗(v)) = 0 for all v ∈ V,
(3) xa ≥ 1 for all a ∈ A,
(4) xe+ + xe− ≥ 1 for all e ∈ E, and
(5) xa ≥ 0 and integer for all a ∈ A ∪ E+ ∪ E−.

Let P1MPT(M) be the convex hull of the feasible solutions to the integer program above, and let Q1MPT(M) be the set of feasible solutions to its linear programming relaxation:

(6) LMMPT1(M, c) = min cA⊤ xA + cE⊤ xE+ + cE⊤ xE−
subject to
(7) x(δ⃗(v̄)) − x(δ⃗(v)) = 0 for all v ∈ V,
(8) xa ≥ 1 for all a ∈ A,
(9) xe+ + xe− ≥ 1 for all e ∈ E, and
(10) xa ≥ 0 for all a ∈ A ∪ E+ ∪ E−.
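To make the formulation concrete, the integer program (1)–(5) can be solved by brute-force enumeration on a tiny instance. The sketch below uses our own choice of graph and costs (not an example from the paper): two vertices u, v, one arc a = (u, v), and one edge e = {u, v} split into copies e+ (u to v) and e− (v to u), all with unit cost.

```python
import itertools

# Enumerate small integer vectors (x_a, x_{e+}, x_{e-}) against (2)-(5).
best = None
for x_a, x_ep, x_em in itertools.product(range(4), repeat=3):
    if x_a + x_ep != x_em:        # flow conservation (2) at u (and hence at v)
        continue
    if x_a < 1:                   # arc traversed at least once, constraint (3)
        continue
    if x_ep + x_em < 1:           # edge traversed at least once, constraint (4)
        continue
    cost = x_a + x_ep + x_em      # unit costs on every arc and edge copy
    if best is None or cost < best:
        best = cost

assert best == 2  # optimal tour: arc u -> v once, edge back v -> u once
```

Conservation forces x_{e-} = x_a + x_{e+}, so the cost 2x_a + 2x_{e+} is minimized at x_a = 1, x_{e+} = 0; on this even instance the LP relaxation (6)–(10) attains the same value.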

3.2 Second formulation

The second integer programming formulation we give is due to Nobert and Picard [8]. Their approach is based on the following characterization of mixed Eulerian graphs.

Theorem 3.2.1 (Ford and Fulkerson [4, page 60]) Let M be a connected mixed graph. Then M is Eulerian if and only if, for every subset S of vertices of M, the number of arcs and edges from S̄ to S minus the number of arcs from S to S̄ is a nonnegative even number.

The vector x ∈ Z^{E∪A}_+ is the incidence vector of a postman tour of M if and only if xe ≥ 1 for all e ∈ E ∪ A, x(δE∪A(v)) is even for all v ∈ V, and x(δA(S̄)) + x(δE(S)) ≥ x(δA(S)) for all S ⊆ V. Therefore we obtain the integer program:

(11) MMPT2(M, c) = min c⊤ x
subject to
(12) x(δE∪A(v)) ≡ 0 (mod 2) for all v ∈ V,
(13) x(δA(S̄)) + x(δE(S)) ≥ x(δA(S)) for all S ⊆ V, and
(14) xe ≥ 1 and integer for all e ∈ E ∪ A.

Note that the parity constraints (12) are not in the required form for integer programming; however, this is easily remedied by noting that, for all v ∈ V,

(15) x(δE∪A(v)) ≡ x(δA(v̄)) + x(δE(v)) − x(δA(v)) (mod 2),

and introducing a slack variable sv ∈ Z+ to obtain the equivalent constraint

(16) x(δA(v̄)) + x(δE(v)) − x(δA(v)) − 2sv = 0 for all v ∈ V.
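The congruence (15) rests only on the fact that −t ≡ t (mod 2). A quick randomized sanity check (purely illustrative, with arbitrary nonnegative integer values at a single vertex) confirms it:

```python
import random

random.seed(0)
for _ in range(1000):
    into_v = random.randint(0, 9)    # x(delta_A(v-bar)): arcs entering v
    out_of_v = random.randint(0, 9)  # x(delta_A(v)): arcs leaving v
    at_v = random.randint(0, 9)      # x(delta_E(v)): edges incident to v
    total = into_v + out_of_v + at_v         # x(delta_{E u A}(v))
    signed = into_v + at_v - out_of_v        # right-hand side of (15)
    assert total % 2 == signed % 2           # the congruence (15) holds
```

Since the two sides always agree modulo 2, requiring the signed expression to equal 2sv in (16) is exactly the parity requirement of (12).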

Let P2MPT(M) be the convex hull of the feasible solutions to the integer program above, and let Q2MPT(M) be the set of feasible solutions to its linear programming relaxation:

(17) LMMPT2(M, c) = min c⊤ x
subject to
(18) x(δA(S̄)) + x(δE(S)) − x(δA(S)) ≥ 0 for all S ⊆ V, and
(19) xe ≥ 1 for all e ∈ E ∪ A.

Note that the constraints (12) were relaxed to x(δE∪A(v)) ≥ 0 for all v ∈ V, but these constraints are redundant in the linear program LMMPT2(M, c). We reach the same conclusion if we use the formulation with slacks and then discard the slack variables.

4 Linear programming relaxations

In the previous section we gave two integer programming formulations for the mixed postman problem, as well as their linear programming relaxations. One of the first questions we might ask is whether one of the relaxations is tighter than the other, or whether they are in fact equivalent. We answer this question by showing, in two rather different ways, that the relaxations are equivalent. A third, different proof is due to Corberán et al. [2]. With this result in hand, we study some of the properties of the extreme points of the set Q1MPT(M) of solutions to our first formulation.

4.1 Equivalence

We give two proofs that LMMPT1(M, c) and LMMPT2(M, c) are essentially equivalent. Our first result says that solving the two linear programs gives the same objective value.

Theorem 4.1.1 For every x1 ∈ Q1MPT(M) there exists x2 ∈ Q2MPT(M) such that c⊤x1 = c⊤x2, and conversely, for every x2 ∈ Q2MPT(M) there exists x1 ∈ Q1MPT(M) such that c⊤x1 = c⊤x2. Moreover, in both cases, x1a = x2a for all a ∈ A and x1e+ + x1e− = x2e for all e ∈ E.

Proof: First note that x1a = x2a for all a ∈ A and x1e+ + x1e− = x2e for all e ∈ E imply c⊤x1 = c⊤x2 for every vector of costs c. (⇒) Let x1 ∈ Q1MPT(M) and define x2 as above. It is clear that x2 ∈ R^{E∪A}_+, so we only have to prove (18). Let S ⊆ V, and let δ⃗B(S) denote the set of arcs in E+ ∪ E− with tail in S and head in S̄. Then

(20) 0 ≤ 2 x1(δ⃗B(S))
(21) = Σ_{v∈S} [x1(δ⃗(v̄)) − x1(δ⃗(v))] + 2 x1(δ⃗B(S))
(22) = x1(δA(S̄)) − x1(δA(S)) + x1(δ⃗B(S̄)) + x1(δ⃗B(S))
(23) = x2(δA(S̄)) + x2(δE(S)) − x2(δA(S)).

(⇐) Let x2 ∈ Q2MPT(M), and assume x2 is rational. Let N be a positive integer such that each component of x = N x2 is an even integer. Consider the graph M^N that contains xe copies of each e ∈ E ∪ A. Note that M^N is Eulerian and xe ≥ N for all e ∈ E ∪ A. Hence we can direct some of the copies of each e ∈ E in one direction and the rest in the other (say xe+ and xe−, respectively) to obtain an Eulerian tour of M^N. Therefore x ∈ Q1MPT(M^N), xe ≥ N for all e ∈ A, and xe+ + xe− ≥ N for all e ∈ E; hence x1 = (1/N)x ∈ Q1MPT(M). Note that x1 satisfies the properties in the statement. □

(⇐) Let x2 ∈ Q2M P T (M ) and assume x2 is rational. Let N be a positive integer such that each component of x = N x2 is an even integer. Consider the graph M N that contains xe copies of each e ∈ E ∪ A. Note that M N is Eulerian, and xe ≥ N for all e ∈ E ∪ A. Hence we can direct some of the copies of e ∈ E in one direction and the rest in the other (say xe+ and xe− , respectively) to obtain an Eulerian tour of M N . Therefore, x ∈ Q1M P T (M N ), xe ≥ N for all e ∈ A, and xe+ + xe− ≥ N for all e ∈ E, and hence x1 = N1 x ∈ Q1M P T (M ). Note that x1 satisfies the properties in the statement. ! Theorem 4.1.1 implies that, for every vector c, LMMPT1(M, c) = LMMPT2(M, c), that is, it is equivalent to optimize over either polyhedron. Our second result goes a bit further: we show that Q2M P T (M ) is

Linear relaxations of the mixed postman problem

27

essentially a projection of Q1M P T (M ). Let A be the incidence matrix of the directed graph D = (V, A), and let D be the incidence matrix of the directed graph D + = (V, E + ). Let Q3M P T (M ) be the set of solutions + − x ∈ RA∪E∪E ∪E of the system: (24) (25)

AxA + D(xE + − xE − ) = 0V xE − xE + − xE −

= 0E

(26)

xA ≥ 1A

(27)

xE

≥ 1E

(28)

xE +

≥ 0E

(29)

xE −

≥ 0E

Note that this system is a reformulation of (7)–(10) in which all the constraints have been written in vector form, and we have included an additional variable xe for each edge e. The following is a consequence of Theorem 4.1.1, but we give a different proof.

Theorem 4.1.2 The projection of the polyhedron Q3MPT(M) onto xE+ = 0E and xE− = 0E is Q2MPT(M).

Proof: Let Q be the projection of Q3MPT(M) onto xE+ = 0E and xE− = 0E (which can be obtained with an application of the Fourier–Motzkin elimination procedure), that is, let

Q = {x ∈ R^{A∪E} : (A⊤zV + zA)⊤ xA + (zB + zE)⊤ xE ≥ zA⊤1A + zE⊤1E for all z ∈ R},

where

R = {(zV, zB, zA, zE) ∈ R^{V∪E+∪A∪E} : zA ≥ 0A, zE ≥ 0E and zB ≥ |D⊤zV|}.

We verify first that (18) and (19) are valid inequalities for Q:

(18) Let S ⊆ V, and consider the element of R given by zV = χS, zB = χδE(S), zA = 0A, and zE = 0E. This yields the constraint (χS)⊤A xA + (χδE(S))⊤ xE ≥ 0, that is, x(δA(S̄)) + x(δE(S)) − x(δA(S)) ≥ 0.

(19) Let a ∈ A, and consider the element of R given by zV = 0V, zB = 0E, zA = 1a, and zE = 0E. This yields the constraint 1a⊤ xA ≥ 1a⊤ 1A, that is, xa ≥ 1. Let e ∈ E, and consider the element of R given by zV = 0V, zB = 0E, zA = 0A, and zE = 1e. This yields the constraint 1e⊤ xE ≥ 1e⊤ 1E, that is, xe ≥ 1.


Now we verify that every element of R can be written as a nonnegative linear combination of the following elements of R:

(S1) For S ⊆ V, let zV = χS, zB = χδE(S), zA = 0A, and zE = 0E.
(S2) For S ⊆ V, let zV = −χS, zB = χδE(S), zA = 0A, and zE = 0E.
(A) For a ∈ A, let zV = 0V, zB = 0E, zA = 1a, and zE = 0E.
(E1) For e ∈ E, let zV = 0V, zB = 0E, zA = 0A, and zE = 1e.
(E2) For e ∈ E, let zV = 0V, zB = 1e, zA = 0A, and zE = 0E.

If any component of zA or zE is positive, we can use (A) or (E1) to reduce it to zero, so we only consider the set of solutions of zB ≥ |D⊤zV| with zB and zV free. Let S+ = supp+(zV) and S− = supp−(zV). If both S+ and S− are empty, then we can reduce the components of zB using (E2). Otherwise, assume that S+ is nonempty and that the minimal positive component of zV is 1. For every edge e ∈ δE(S+) with endpoints u ∈ S+ and v ∉ S+ we have

(30) (zB)e ≥ |(D⊤zV)e| = |(zV)u − (zV)v| ≥ |(zV)u| = (zV)u ≥ 1.

Therefore, the vectors

(31) zB* := zB − χδE(S+) and zV* := zV − χS+

satisfy zB* ≥ |D⊤zV*| and have fewer nonzero components. So we can reduce (zB, zV) using (S1). Similarly, if S− is nonempty, we can reduce (zB, zV) using (S2). □

4.2 Half-integrality

Now we explore the structure of the extreme points of Q1MPT(M). To start, we offer a simple proof of the following result, due independently to several authors. We say that e ∈ E is tight if xe+ + xe− = 1.

Theorem 4.2.1 (Kappauf and Koehler [7], Ralphs [10], Win [12]) Every extreme point x of the polyhedron Q1MPT(M) has components whose values are either 1/2 or a nonnegative integer. Moreover, fractional components occur only on tight edges.

Proof: Let x be an extreme point of Q1MPT(M). We say that a ∈ A is fractional if xa is not an integer. Similarly, we say that e ∈ E is fractional if at least one of xe+ or xe− is not an integer. Let F = {e ∈ E ∪ A : e is fractional}. We will show that F ⊆ E and that each e ∈ F is tight.

Assume that for some v ∈ V, dF(v) = 1. Let e be the unique element of F incident to v. Since the total flow into v is integral, the only possibility is that e ∈ E. Moreover, both xe+ and xe− must be fractional. If e is not tight, the vectors x1 and x2 obtained from x by replacing the entries in e+ and e− by

(32) x1e+ = xe+ + ϵ, x1e− = xe− + ϵ, x2e+ = xe+ − ϵ, x2e− = xe− − ϵ

(where ϵ = min{xe+, xe−, (xe+ + xe− − 1)/2} > 0) would be feasible, with x = (1/2)(x1 + x2), contradicting the choice of x. Hence e is a tight edge, and it satisfies xe+ = xe− = 1/2. Delete e from F and repeat the above argument until F is empty or F induces an undirected graph with minimum degree 2. (Deletion of e does not alter the argument, since e contributes zero net flow at both its ends.) Suppose F contains a cycle C. Assign an arbitrary orientation (say, positive) to C. We say that an arc in C is forward if it has the same orientation as C, and we call it backward otherwise. Partition C as follows:

(33) CA+ = {e ∈ C ∩ A : e is forward},
(34) CA− = {e ∈ C ∩ A : e is backward},
(35) CE= = {e ∈ C ∩ E : e is tight},
(36) CE> = {e ∈ C ∩ E : e is not tight},

and define

(37) ϵ+ = min_{e ∈ CA+} (⌈xe⌉ − xe),
(38) ϵ− = min_{e ∈ CA−} (xe − ⌊xe⌋),
(39) ϵ= = min_{e ∈ CE=} min{xe+, xe−},
(40) ϵ> = min_{e ∈ CE>} min{⌈xe⌉ − xe, xe − ⌊xe⌋},
(41) ϵ1 = min{ϵ+, ϵ−, 2ϵ=, ϵ>}.


The choice of C implies ϵ1 > 0. Now we define a new vector x1 as follows:

(42) x1e = xe + ϵ1 if e ∈ CA+ or e is forward in CE>,
x1e = xe − ϵ1 if e ∈ CA− or e is backward in CE>,
x1e = xe + (1/2)ϵ1 if e is the forward copy of an edge in CE=,
x1e = xe − (1/2)ϵ1 if e is the backward copy of an edge in CE=,
x1e = xe otherwise.

This is equivalent to pushing ϵ1 units of flow in the positive direction of C, and therefore it is easy to verify that x1 ∈ Q1MPT(M). Similarly, define ϵ2 and a vector x2 using the other (negative) orientation of C. But now x is a convex combination of x1 and x2 (in fact, by choosing ϵ = min{ϵ1, ϵ2} and pushing ϵ units of flow in both directions we would have x = (1/2)(x1 + x2)), contradicting the choice of x. Therefore F is empty. □

A similar idea allows us to prove a sufficient condition for Q1MPT(M) to be integral. A mixed graph M = (V, E, A) is even if the total degree dE∪A(v) is even for every v ∈ V.

Theorem 4.2.2 (Edmonds and Johnson [3]) If M is even, then the polyhedron Q1MPT(M) is integral. Therefore the mixed postman problem can be solved in polynomial time for the class of even mixed graphs.

Proof: Let x be an extreme point of Q1MPT(M). We say that a ∈ A is even if xa is even. We say that e ∈ E is even if xe+ − xe− is even. For a contradiction, assume x is not integral, and define F as in the proof of Theorem 4.2.1. Let N = {e ∈ E ∪ A : e is even}. Note that, by Theorem 4.2.1, F ⊆ N. Hence N is not empty. We now show that M[N] has minimum degree 2, and hence contains a cycle C. Let v ∈ V. If dF(v) ≥ 2 then certainly dN(v) ≥ 2. If dF(v) = 1 then

(43) x(δ⃗(v)) − x(δ⃗(v̄)) = Σ_{a ∈ δA(v) ∪ δA(v̄)} ±xa + Σ_{e ∈ δE(v)} ±(xe+ − xe−)

is the sum of an even number of integer terms (one term per arc a ∈ δA(v) ∪ δA(v̄) and one term per edge e ∈ δE(v)), and one of them is equal to zero (the one in δF(v)); therefore another term must be even. The same argument works for a vertex v not in V(F), that is, with dF(v) = 0, having at least one element of N incident to it, that is, dN(v) ≥ 1.


As before, assign an arbitrary (positive) orientation to C and partition it into the classes CA+, CA−, CE=, CE>. Note that all e ∈ C \ CE= satisfy xe ≥ 2. Hence the vector x1 defined as

(44) x1e = xe + 1 if e ∈ CA+ or e is forward in CE>,
x1e = xe − 1 if e ∈ CA− or e is backward in CE>,
x1e = xe + 1/2 if e is the forward copy of an edge in CE=,
x1e = xe − 1/2 if e is the backward copy of an edge in CE=,
x1e = xe otherwise,

as well as the vector x2 obtained from the negative orientation of C, belong to Q1MPT(M) and satisfy x = (1/2)(x1 + x2). This contradiction implies that F must be empty. □

4.3 Dual integrality

Now we consider the dual of the linear relaxation LMMPT1 (6)–(10):

(45) DMMPT1(M, c) = max 1⊤ z
subject to
(46) yu − yv + za ≤ ca for all a ∈ A with tail u and head v,
(47) yu − yv + ze ≤ ce for all e ∈ E with ends u and v,
(48) −yu + yv + ze ≤ ce for all e ∈ E with ends u and v,
(49) yv free for all v ∈ V, and
(50) ze ≥ 0 for all e ∈ A ∪ E.

Theorem 4.3.1 Let M = (V, E, A) be strongly connected, and let c ∈ Z^{E∪A}_+. Then DMMPT1 has an integral optimal solution (y*, z*).

Proof: Since LMMPT1 is feasible and bounded, DMMPT1 is also feasible and bounded, and both problems have optimal solutions. Choose an extreme point optimal solution x* of the primal. Without loss of generality, we can assume that x*e+ and x*e− are not both positive unless e ∈ E is tight. We construct an integral solution (y*, z*) to the dual satisfying the complementary slackness conditions:

1. for all (u, v) = a ∈ A, x*a > 0 implies y*u − y*v + z*a = ca,
2. for all {u, v} = e ∈ E, x*e+ > 0 implies y*u − y*v + z*e = ce,
3. for all {u, v} = e ∈ E, x*e− > 0 implies −y*u + y*v + z*e = ce,


4. for all a ∈ A, z*a > 0 implies x*a = 1, and
5. for all e ∈ E, z*e > 0 implies x*e+ + x*e− = 1.

Note that x*a > 0 for all a ∈ A; hence condition (1) implies that y*u − y*v + z*a = ca for all (u, v) = a ∈ A. Also note that, for any e ∈ E, at least one of x*e+ > 0 and x*e− > 0 holds. Moreover, the only case in which both hold is when e is a fractional tight edge. In this case, conditions (2) and (3) imply that y*u = y*v and z*e = ce. Hence, to obtain a feasible solution to the dual satisfying complementary slackness, we can set z*e = ce for each fractional tight edge e, and then contract each connected component (Vi, Fi) of the fractional graph (V, F) into a single super-vertex vi, creating a new dual variable yvi for it. Once we are done with the rest of the construction, we set y*v = y*vi for each vertex v ∈ Vi.

At this point, every remaining edge e satisfies either xe+ = 0 or xe− = 0. For each such edge, delete the copy whose variable is zero, and let D = (V′, A′) be the directed graph thus obtained. Observe that the restriction x of x* to the arcs of D is an optimal integer circulation of D with costs c restricted to the arcs of D. But the minimum cost circulation problem has integral optimal dual solutions; let (y, z) ∈ Z^{V′∪A′} be one such solution. Let y* be the extension of y as described in the previous paragraph. Let z* be the extension of z obtained as follows. For each a ∈ A \ A′, let z*a have the integer value implied by condition (1). For each e ∉ F, let z*e have the integer value implied by either condition (2) or (3). Now, using the interpretation of (y, z) as a potential in D, it is not hard to verify that the vector (y*, z*) satisfies (4) and (5), and hence it is an integral optimal solution to DMMPT1. □
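On a two-vertex instance with one arc a = (u, v), one edge e = {u, v}, and unit costs (our illustrative example, not taken from the paper), the integral dual solution promised by Theorem 4.3.1 can be written down explicitly and checked against dual feasibility, strong duality, and the complementary slackness conditions:

```python
c_a = c_e = 1
# primal optimum of LMMPT1: x_a = 1, x_{e+} = 0, x_{e-} = 1 (cost 2)
x = {"a": 1, "e+": 0, "e-": 1}
# an integral dual solution: y_u = y_v = 0, z_a = z_e = 1 (objective 2)
y_u = y_v = 0
z_a = z_e = 1

# dual feasibility, constraints (46)-(48) and (50)
assert y_u - y_v + z_a <= c_a
assert y_u - y_v + z_e <= c_e
assert -y_u + y_v + z_e <= c_e
assert z_a >= 0 and z_e >= 0

# strong duality: dual objective equals primal cost
assert z_a + z_e == c_a * x["a"] + c_e * (x["e+"] + x["e-"])

# complementary slackness conditions (1)-(5) of the proof
assert x["a"] == 0 or y_u - y_v + z_a == c_a
assert x["e-"] == 0 or -y_u + y_v + z_e == c_e
assert z_a == 0 or x["a"] == 1
assert z_e == 0 or x["e+"] + x["e-"] == 1
```

Here the optimal tour uses the arc once and the edge once, the dual prices both at their full cost, and every slackness condition holds with integral values, as the theorem guarantees.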

5 Open problems

One of the most interesting open problems is that of a full characterization of the integrality of the polyhedron Q1MPT(M). Another interesting option is to add a set of valid inequalities to obtain a tighter relaxation. For example, we can add the well-known odd-cut constraints to obtain another polyhedron O1MPT(M), and ask again for a full characterization of the integrality of this polyhedron. Finally, we may ask whether our knowledge about the extreme points of the primal and dual polyhedra could lead us to a primal-dual approximation algorithm for the mixed postman problem.


Acknowledgements

The author would like to thank Bill Cunningham, Joseph Cheriyan, Jim Geelen, Bertrand Guenin, Miguel Anjos, and Antoine Vella at the University of Waterloo for their continuous support and their insightful comments during countless discussions.

Francisco Javier Zaragoza Martínez
Departamento de Sistemas, Universidad Autónoma Metropolitana, Unidad Azcapotzalco, Av. San Pablo 180, Edificio H 2do Piso, Col. Reynosa Tamaulipas, Deleg. Azcapotzalco, 02200, México, D.F.
franz@correo.azc.uam.mx

References

[1] Christofides N.; Benavent E.; Campos V.; Corberán A.; Mota E., An optimal method for the mixed postman problem, Lecture Notes in Control and Inform. Sci. 59 (1984), 641–649.
[2] Corberán A.; Mota E.; Sanchis J. M., A comparison of two different formulations for arc routing problems on mixed graphs, available online in Comput. Oper. Res., 2005.
[3] Edmonds J.; Johnson E. L., Matching, Euler tours and the Chinese postman, Math. Programming 5 (1973), 88–124.
[4] Ford L. R. Jr.; Fulkerson D. R., Flows in Networks, Princeton University Press, Princeton, N.J., 1962.
[5] Grötschel M.; Win Z., A cutting plane algorithm for the windy postman problem, Math. Programming Series A 55 No. 3 (1992), 339–358.
[6] Guan M. G., Graphic programming using odd or even points, Chinese Math. 1 (1960), 273–277.
[7] Kappauf C. H.; Koehler G. J., The mixed postman problem, Discrete Appl. Math. 1 No. 1-2 (1979), 89–103.
[8] Nobert Y.; Picard J.-C., An optimal algorithm for the mixed Chinese postman problem, Networks 27 No. 2 (1996), 95–108.
[9] Papadimitriou C. H., On the complexity of edge traversing, J. ACM 23 No. 3 (1976), 544–554.


[10] Ralphs T. K., On the mixed Chinese postman problem, Oper. Res. Lett. 14 No. 3 (1993), 123–127.
[11] Veblen O., An application of modular equations in analysis situs, Ann. of Math. 2 No. 14 (1912/1913), 86–94.
[12] Win Z., On the windy postman problem on Eulerian graphs, Math. Programming Series A 44 No. 1 (1989), 97–112.

Morfismos, Vol. 9, No. 1, 2005, pp. 35–38

A nonmeasurable set as a union of a family of increasing well-ordered measurable sets∗

Juán González-Hernández

César E. Villarreal

Abstract

Given a measurable space (X, A) in which every singleton is measurable and which contains a nonmeasurable subset, we prove the existence of a nonmeasurable set which is the union of a well-ordered increasing family of measurable sets.

2000 Mathematics Subject Classification: 28A05, 06A05. Keywords and phrases: well order, measurable space.

1

Introduction

Using the well order principle (Zermelo's theorem) we prove, for a very general measurable space (X, A), that there exists a family of measurable sets, well ordered under inclusion, whose union is nonmeasurable. This study is motivated by the determination of the existence of solutions in a Markov decision problem with constraints (see [3] for this topic). The problem we faced was to find an optimal stochastic kernel supported on a measurable function. This led us to try to extend the domain of a measurable function to the union of a well-ordered family of measurable sets. However, the union of such a family may fail to be measurable, as we show below. We also give an example of a set A contained in a measurable space where each singleton is measurable, but nevertheless A cannot be expressed as a well-ordered union of measurable sets.

∗ Work partially sponsored by CONACYT grant SEP-2003-C02-45448/A-1 and PAICYT-UANL grant CA826-04.


Let us start by recalling some basic terminology and the statement of the well order principle. Let X be a set.

(a) A relation ≼ is called a partial order on X if it is reflexive, antisymmetric and transitive. In this case, X is said to be partially ordered by ≼.

(b) Let A be a subset of X. If there exists x ∈ A such that x ≼ a for all a ∈ A, then x is called the first element of A (with respect to the partial order ≼).

(c) A partial order ≼ on X is called a total order if for each x, y ∈ X we have x ≼ y or y ≼ x.

(d) A total order ≼ on a set X is called a well order if every nonempty subset of X has a first element. In this case, X is said to be well ordered.

Theorem 1.1 (Well order principle) Let X be a set. There is a well order ≼ on X.

The proof of this principle can be found, for instance, in [1, Well ordering theorem] or [2].
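Condition (d) can be made concrete on a small finite set, where a total order is automatically a well order: every nonempty subset has a first element, and this can be checked by brute force. The following sketch is purely illustrative; the function names `first_element` and `is_well_ordered` are ours, not standard terminology from the text.

```python
from itertools import chain, combinations

def first_element(A, leq):
    """Return the first element of A w.r.t. the order leq, or None if there is none."""
    for x in A:
        if all(leq(x, a) for a in A):
            return x
    return None

def is_well_ordered(X, leq):
    """Check that every nonempty subset of the finite set X has a first element."""
    subsets = chain.from_iterable(combinations(X, r) for r in range(1, len(X) + 1))
    return all(first_element(set(A), leq) is not None for A in subsets)

X = {0, 1, 2, 3}
print(is_well_ordered(X, lambda a, b: a <= b))  # the usual order on integers
print(is_well_ordered(X, lambda a, b: a == b))  # equality only: not total, so not a well order
```

The second call fails precisely because a partial order that is not total leaves two-element subsets without a first element.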

2

The result

Theorem 2.1 Let (X, A) be a measurable space such that, for each x ∈ X, the set {x} is measurable, and X contains a nonmeasurable set. Then there is a collection I of measurable subsets of X, well ordered by containment (⊂), such that ∪_{C∈I} C is nonmeasurable.

Proof: Let A ⊂ X be a nonmeasurable set. By the well order principle, there is a well order ≼ on A. Denote by ≺ the relation a ≺ b ⇐⇒ (a ≼ b and a ≠ b). For each d ∈ A let us define A_d := {x ∈ A : x ≼ d}. Set E := {A_d : d ∈ A} and note that this set is well ordered by ⊂. If all the A_d are measurable, then we take I = E. Otherwise, there is a d∗ ∈ A such that A_{d∗} is nonmeasurable. Let A′ := {d ∈ A : A_d is nonmeasurable}. Since A′ ⊂ A is nonempty, there exists the first element d′ of A′. Now, A_{d′} is nonmeasurable and so is A_{d′} \ {d′}, since the two sets differ only by the measurable singleton {d′}. Moreover, taking I = {A_d : d ≺ d′}, we have

A_{d′} \ {d′} = {d ∈ A : d ≺ d′} = ∪_{d≺d′} A_d = ∪_{C∈I} C,

and, therefore, we can conclude that the set ∪_{C∈I} C is nonmeasurable. Noting again that I is well ordered by ⊂, the proof is complete. ✷

3

An example

We shall give an example of a measurable space in which each singleton is measurable, but which contains a nonmeasurable set A that is not the union of the members of a well-ordered (under ⊂) family of measurable sets. For every set B, let #B denote the cardinality of B and 2^B the power set of B. Let X be a set such that #X > #IR (we can take X = 2^IR, for instance). Define the σ-algebra A as the family of subsets A of X such that A ∈ A ⇐⇒ A is countable or X \ A is countable. We can take A ⊂ X such that #A > #IR and #(X \ A) > #IR. Let I be a well-ordered index set, and assume that (A_i)_{i∈I} is any strictly increasing net of measurable sets such that ∪_{i∈I} A_i = A. As each X \ A_i ⊃ X \ A is uncountable, each A_i is countable. From Theorem 14, p. 179 in [2], we can see that #I = #A > #IR, so the set J := {i ∈ I : #{j ∈ I : j ≼ i} > #IN} is nonempty. Let i∗ be the first element of J and observe that #{j ∈ I : j ≼ i∗} > #IN. Now, by the axiom of choice (see [1] or [2]), for each i ∈ I we can choose x_i ∈ A_i \ ∪_{j≺i} A_j, so that the sets {j ∈ I : j ≼ i∗} and ∪_{j≼i∗} {x_j} have the same cardinality. However, ∪_{j≼i∗} {x_j} ⊂ ∪_{j≼i∗} A_j = A_{i∗}, and so #A_{i∗} ≥ #{j ∈ I : j ≼ i∗} > #IN; that is to say, the set A_{i∗} is uncountable, and we arrive at a contradiction because each A_i is countable. Hence, A cannot be the union of the measurable sets of a well-ordered family.

We would like to conclude by posing a question. Consider the measurable space (IR, M), where M is the Lebesgue σ-algebra, and let A be an arbitrary nonmeasurable subset of IR (for an example of a non-Lebesgue-measurable set see [4]). Is it always possible to express A as the limit of an increasing net (A_i)_{i∈I} of elements of M for some well-ordered set I?
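The countable/co-countable σ-algebra A used in the example above can be modeled symbolically: a measurable set is either countable or the complement of a countable set, and this dichotomy is preserved under complement and countable union. The encoding and function names below are our own, purely for illustration: a set is stored as `('ctble', S)` for a countable set S, or `('co', S)` for the complement of a countable set S.

```python
def complement(a):
    """Complement within A: swaps the two kinds of measurable set."""
    kind, s = a
    return ('co', s) if kind == 'ctble' else ('ctble', s)

def union(sets):
    """Union of (finitely many stand-ins for) countably many measurable sets."""
    co_parts = [s for kind, s in sets if kind == 'co']
    if co_parts:
        # complement of the union = intersection of countable complements,
        # minus any points supplied by the countable members
        inter = co_parts[0]
        for s in co_parts[1:]:
            inter = inter & s
        ctble = set().union(*(s for kind, s in sets if kind == 'ctble'))
        return ('co', inter - ctble)   # still co-countable
    return ('ctble', set().union(*(s for _, s in sets)))  # countable union of countable sets

a = ('ctble', {1, 2})
b = ('co', {2, 3})       # X minus {2, 3}
print(union([a, b]))     # X minus {3}: co-countable, hence measurable
```

The closure of both branches under union is exactly why every A_i in a net of measurable sets inside an uncountable-complement set A must fall on the countable side.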


Juan González-Hernández
Departamento de Probabilidad y Estadística, IIMAS-UNAM, A. P. 20-726, México, D. F., 01000, México.
juan@sigma.iimas.unam.mx

César E. Villarreal
División de Posgrado en Ingeniería de Sistemas, FIME-UANL, A. P. 66450, San Nicolás de los Garza, N. L., México.
cesar@yalma.fime.uanl.mx

References

[1] Halmos P. R., Naive Set Theory, Van Nostrand Reinhold, New York, 1960.
[2] Just W.; Weese M., Discovering Modern Set Theory I: The Basics, American Mathematical Society, Providence, RI, 1996.
[3] Piunovsky A. B., Optimal Control of Random Sequences in Problems with Constraints, Kluwer Academic Publishers, Dordrecht, 1997.
[4] Royden H. L., Real Analysis, Macmillan, New York, 1968.

Morfismos, Vol. 9, No. 1, 2005, pp. 39–54

Noncooperative continuous-time Markov games ∗

Héctor Jasso-Fuentes

Abstract This work concerns noncooperative continuous-time Markov games with Polish state and action spaces. We consider finite-horizon and infinite-horizon discounted payoﬀ criteria. Our aim is to give a unified presentation of optimality conditions for general Markov games. Our results include zero-sum and nonzero-sum games.

2000 Mathematics Subject Classification: 91A25, 91A15, 91A10. Keywords and phrases: Continuous-time Markov games, noncooperative games.

1

Introduction

Continuous-time Markov games form a class of dynamic stochastic games in which the state evolves as a Markov process. The class of Markov games includes (deterministic) differential games, stochastic differential games, jump Markov games and many others, but these are usually studied as separate, different types of games. In contrast, we propose here a unified presentation of optimality conditions for general Markov games. In fact, we only consider noncooperative games, but the same ideas can be extended in an obvious manner to the cooperative case. As already mentioned, our presentation and results hold for general Markov games, but we have to pay a price for such generality; namely, we restrict ourselves to Markov strategies, which depend only

∗ Research partially supported by a CONACYT scholarship. This paper is part of the author's M. Sc. thesis presented at the Department of Mathematics of CINVESTAV-IPN.


on the current state. More precisely, at each decision time t, the players choose their corresponding actions (independently and simultaneously) depending only on the current state X(t) of the game. Hence, this excludes some interesting situations, for instance, some hierarchical games in which some players "go first". Our references mainly concern noncooperative continuous-time games. However, for cooperative games the reader may consult Filar/Petrosjan [2], Gaidov [3], Haurie [5] and their references. For discrete-time games see, for instance, Basar/Olsder [1], González-Trejo et al. [4]. A remark on terminology: The Borel σ-algebra of a topological space S is denoted by B(S). A complete and separable metric space is called a Polish space.

2

Preliminaries

Throughout this section we let S be a Polish space, and X(·) = {X(t), t ≥ 0} an S-valued Markov process defined on a probability space (Ω, F, IP). Denote by IP(s, x, t, B) := IP(X(t) ∈ B | X(s) = x), for all t ≥ s ≥ 0, x ∈ S and B ∈ B(S), the transition probability function of X(·).

2.1

Semigroups

Definition 2.1 Let M be the linear space of all real-valued measurable functions v on Sˆ := [0, ∞) × S such that

∫_S IP(s, x, t, dy) |v(s, y)| < ∞ for all 0 ≤ s ≤ t and x ∈ S.

For each t ≥ 0 and v ∈ M, we define a function T_t v on Sˆ as

(1) T_t v(s, x) := ∫_S IP(s, x, s + t, dy) v(s + t, y).

Proposition 2.2 The operators Tt , t ≥ 0, defined by (1), form a semigroup of operators on M , that is, (i) T0 = I, the identity, and (ii) Tt+r = Tt Tr . For a proof of this proposition see, for instance, Jasso-Fuentes [7], Proposition 1.2.2.
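For a finite state space and a time-homogeneous process, Proposition 2.2 can be checked numerically: the semigroup acts by the transition matrices P(t) = exp(tQ) for a generator (rate) matrix Q, and the semigroup law T_{t+r} = T_t T_r is the Chapman-Kolmogorov identity P(t + r) = P(t)P(r). A minimal sketch; the matrix Q below is an arbitrary illustrative choice, not taken from the text.

```python
import numpy as np
from scipy.linalg import expm

# generator of a Markov jump process on S = {0, 1, 2}: rows sum to 0
Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.5, -1.0,  0.5],
              [ 1.0,  1.0, -2.0]])

def T(t):
    """Transition matrix P(t) = exp(tQ); acts on functions v on S by T(t) @ v."""
    return expm(t * Q)

t, r = 0.3, 0.7
v = np.array([1.0, -1.0, 2.0])   # a function ("reward") on S

assert np.allclose(T(t + r), T(t) @ T(r))   # semigroup property T_{t+r} = T_t T_r
assert np.allclose(T(0.0), np.eye(3))       # T_0 = I
print(T(t + r) @ v)                          # the function T_{t+r} v
```

Here T_0 = I and T_{t+r} = T_t T_r are exactly parts (i) and (ii) of the proposition, restricted to this finite-dimensional setting.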

2.2

The extended generator

Definition 2.3 Let M0 ⊂ M be the family of functions v ∈ M for which the following conditions hold:

a) lim_{t↓0} T_t v(s, x) = v(s, x) for all (s, x) ∈ Sˆ;

b) there exist t0 > 0 and u ∈ M such that T_t |v|(s, x) ≤ u(s, x) for all (s, x) ∈ Sˆ and 0 ≤ t ≤ t0.

Now let D(L) ⊂ M0 be the set of functions v ∈ M0 for which:

a) the limit

(2) Lv(s, x) := lim_{t↓0} [T_t v(s, x) − v(s, x)]/t = lim_{t↓0} (1/t) ∫_S IP(s, x, s + t, dy)[v(s + t, y) − v(s, x)]

exists for all (s, x) ∈ Sˆ;

b) Lv ∈ M0; and

c) there exist t0 > 0 and u ∈ M such that |T_t v(s, x) − v(s, x)|/t ≤ u(s, x) for all (s, x) ∈ Sˆ and 0 ≤ t ≤ t0.

The operator L in (2) will be referred to as the extended generator of the semigroup {T_t}, and the set D(L) is called the domain of L. The following lemma (which is proved in [7], Lemma 1.3.2, for instance) summarizes some properties of L.

Lemma 2.4 For each v ∈ D(L), the following conditions hold:

a) (d⁺/dt) T_t v := lim_{h↓0} h⁻¹ [T_{t+h} v − T_t v] = T_t Lv;

b) T_t v(s, x) − v(s, x) = ∫_0^t T_r (Lv)(s, x) dr;

c) if ρ > 0 and v_ρ(s, x) := e^{−ρs} v(s, x), then v_ρ is in D(L) and Lv_ρ(s, x) = e^{−ρs} [Lv(s, x) − ρv(s, x)].

2.3

Expected rewards

Let X(·) = {X(t), t ≥ 0} be as in the previous paragraphs, that is, a Markov process with values in a Polish space S and with transition probabilities IP(s, x, t, B) for all t ≥ s ≥ 0, x ∈ S and B ∈ B(S). Recalling Definitions 2.1 and 2.3, the semigroup defined in (1) becomes

T_t v(s, x) = IE_{sx} [v(s + t, X(s + t))],

where IE_{sx}(·) := IE[· | X(s) = x] is the conditional expectation given X(s) = x. Similarly, we can rewrite part b) of Lemma 2.4 as

(3) IE_{sx} [v(s + t, X(s + t))] − v(s, x) = IE_{sx} [ ∫_0^t Lv(s + r, X(s + r)) dr ]

for each v ∈ D(L). We shall refer to (3) as Dynkin's formula. The extended generator L of the semigroup {T_t} will also be referred to as the extended generator of the Markov process X(·). The following fact will be useful in later sections.

Proposition 2.5 Fix numbers ρ ∈ IR and τ > 0. Let R(s, x) and K(s, x) be measurable functions on S_τ := [0, τ] × S, and suppose that R is in M0. If a function v ∈ D(L) satisfies the equation

(4) ρv(s, x) = R(s, x) + Lv(s, x)

on S_τ, with the "terminal" condition

(5) v(τ, x) = K(τ, x),

then, for every (s, x) ∈ S_τ,

(6) v(s, x) = IE_{sx} [ ∫_s^τ e^{−ρ(t−s)} R(t, X(t)) dt + e^{−ρ(τ−s)} K(τ, X(τ)) ].

If the equality in (4) is replaced with the inequality "≤" or "≥", then the equality in (6) is replaced with the same inequality, that is, "≤" or "≥" respectively.

Proof: Suppose that v satisfies (4) and let v_ρ(s, x) := e^{−ρs} v(s, x). Then, by (4) and Lemma 2.4 c), we obtain

(7) Lv_ρ(s, x) = e^{−ρs} [Lv(s, x) − ρv(s, x)] = −e^{−ρs} R(s, x).

Therefore, applying Dynkin's formula (3) to v_ρ and using (7),

(8) IE_{sx} [e^{−ρ(s+t)} v(s + t, X(s + t))] − e^{−ρs} v(s, x)
      = −IE_{sx} [ ∫_0^t e^{−ρ(s+r)} R(s + r, X(s + r)) dr ]
      = −IE_{sx} [ ∫_s^{s+t} e^{−ρr} R(r, X(r)) dr ].

The latter expression, with s + t = τ, and (5) give

IE_{sx} [e^{−ρτ} K(τ, X(τ))] − e^{−ρs} v(s, x) = −IE_{sx} [ ∫_s^τ e^{−ρr} R(r, X(r)) dr ].

Finally, multiply both sides of this equality by e^{ρs} and then rearrange terms to obtain (6). Concerning the last statement in the proposition, suppose that instead of (4) we have ρv ≥ R + Lv. Then (7) becomes −e^{−ρs} R(s, x) ≥ Lv_ρ(s, x), and the same calculations in the previous paragraph show that the equality in (6) should be replaced with "≥". For "≤", the result is obtained similarly. ✷

Observe that the number ρ in Proposition 2.5 can be arbitrary, but in most applications in later sections we will require either ρ = 0 or ρ > 0. In the latter case ρ is called a "discount factor". On the other hand, if the function R(s, x) is interpreted as a "reward rate", then (6) represents an expected total reward during the time interval [s, τ] with initial condition X(s) = x and terminal reward K. This expected reward will be associated with finite-horizon games. In contrast, the expected reward in (11), below, will be associated with infinite-horizon games.

Proposition 2.6 Let ρ > 0 be a given number, and R ∈ M0 a function on Sˆ := [0, ∞) × S. If a function v ∈ D(L) satisfies

(9) ρv(s, x) = R(s, x) + Lv(s, x) for all (s, x) ∈ Sˆ

and is such that, as t → ∞,

(10) e^{−ρt} T_t v(s, x) = e^{−ρt} IE_{sx} [v(s + t, X(s + t))] → 0,

then

(11) v(s, x) = IE_{sx} [ ∫_s^∞ e^{−ρ(t−s)} R(t, X(t)) dt ] = ∫_0^∞ e^{−ρt} T_t R(s, x) dt.

Moreover, if the equality in (9) is replaced with the inequality "≤" or "≥", then the equality in (11) should be replaced with the same inequality.

Proof: Observe that the equations (9) and (4) are essentially the same, the only difference being that the former is defined on Sˆ and the latter on S_τ. At any rate, the calculations in (7)-(8) are also valid in the present case. Hence, multiplying both sides of (8) by e^{ρs}, then letting t → ∞ and using (10), we obtain (11). The remainder of the proof is as in Proposition 2.5. ✷
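Proposition 2.6 has a transparent finite-state analogue: for a time-homogeneous chain with generator matrix Q and reward vector R, equation (9) becomes the linear system ρv = R + Qv, so v = (ρI − Q)⁻¹R, and (11) says that this same v equals the discounted integral of T_t R. The sketch below confirms the two agree numerically; Q, R and ρ are arbitrary illustrative data, not from the text.

```python
import numpy as np
from scipy.linalg import expm

rho = 0.8
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])   # generator on S = {0, 1}: rows sum to 0
R = np.array([1.0, 3.0])        # reward rate on S

# (9) in vector form: rho*v = R + Q v  =>  v = (rho*I - Q)^{-1} R
v = np.linalg.solve(rho * np.eye(2) - Q, R)

# crude left-endpoint quadrature of the discounted integral in (11)
dt, horizon = 0.005, 25.0
integral = sum(np.exp(-rho * t) * (expm(t * Q) @ R) * dt
               for t in np.arange(0.0, horizon, dt))

print(v, integral)  # the two agree up to discretization error
```

The truncation at a finite horizon is harmless here because the integrand decays like e^{−ρt}, which is the finite-dimensional shadow of condition (10).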

3

The game model and strategies

For notational ease, we shall restrict ourselves to the two-player situation. However, the extension to any finite number ≥ 2 of players is completely analogous.

3.1

The game model

Some of the main features of a (two-player) continuous-time Markov game can be described by means of the game model

(12) GM := {S, (A_i, R_i)_{i=1,2}, L^{a1,a2}}

with the following components.

• S denotes the game's state space, which is assumed to be a Polish space.

• Associated with each player i = 1, 2, we have a pair

(13) (A_i, R_i),

where A_i is a Polish space that stands for the action space (or control set) for player i. Let A := A1 × A2, and K := S × A.


The second component in (13) is a real-valued measurable function R_i on [0, ∞) × K = [0, ∞) × S × A = Sˆ × A (Sˆ := [0, ∞) × S), which denotes the reward rate function for player i. (Observe that R_i(s, x, a1, a2) depends on the actions (a1, a2) ∈ A of both players.)

• For each pair a = (a1, a2) ∈ A there is a linear operator L^a with domain D(L^a), which is the extended generator of an S-valued Markov process with transition probability IP^a(s, x, t, B).

The game model (12) is said to be time-homogeneous if the reward rates are time-invariant and the transition probabilities are time-homogeneous, that is,

R_i(s, x, a) = R_i(x, a) and IP^a(s, x, t, B) = IP^a(t − s, x, B).

Summarizing, the game model (12) tells us where the game lives (the state space S) and how it moves (according to the players' actions a = (a1, a2) and the Markov process associated to L^a). The reward rates R_i are used to define the payoff function that player i (i = 1, 2) wishes to "optimize"; see for instance (15) and (16) below. To do this optimization each player uses, when possible, suitable "strategies", such as those defined next.

3.2

Strategies

We will only consider Markov (also known as feedback) strategies, namely, for each player i = 1, 2, measurable functions π_i from Sˆ := [0, ∞) × S to A_i. Thus, π_i(s, x) ∈ A_i denotes the action of player i prescribed by the strategy π_i if the state is x ∈ S at time s ≥ 0. In fact, we will restrict ourselves to classes Π1, Π2 of Markov strategies that satisfy the following.

Assumption 3.1 For each pair π = (π1, π2) ∈ Π1 × Π2, there exists a strong Markov process X^π(·) = {X^π(t), t ≥ 0} such that:

a) Almost all the sample paths of X^π(·) are right-continuous, have left-hand limits, and have only finitely many discontinuities in any bounded time interval.

b) The extended generator L^π of X^π(·) satisfies L^π = L^a if (π1(s, x), π2(s, x)) = (a1, a2) = a.

The set Π1 × Π2 in Assumption 3.1 is called the family of admissible pairs of Markov strategies. A pair (π1, π2) ∈ Π1 × Π2 is said to be stationary if π_i(s, x) ≡ π_i(x) does not depend on s ≥ 0. Clearly, the function spaces M ⊃ M0 ⊃ D(L) introduced in Section 2 depend on the pair π = (π1, π2) ∈ Π1 × Π2 of strategies being used, because so does IP^π. Hence, these spaces will now be written as M^π, M0^π, D(L^π), and they are supposed to verify the following conditions.

Assumption 3.2 a) There exist nonempty spaces M ⊃ M0 ⊃ D, which do not depend on π, such that, for all π = (π1, π2) ∈ Π1 × Π2, M ⊂ M^π, M0 ⊂ M0^π, D ⊂ D(L^π) and, in addition, the operator L^π is the closure of its restriction to D.

b) For π = (π1, π2) ∈ Π1 × Π2 and i = 1, 2, the reward rate R_i(s, x, a1, a2) is such that R_i^π is in M0, where R_i^π(s, x) := R_i(s, x, π1(s, x), π2(s, x)).

Sometimes we shall use the notation

(14) R_i^π(s, x) := R_i(s, x, π1, π2) for π = (π1, π2), i = 1, 2.

If the game model is time-homogeneous and the pair (π1 , π2 ) is stationary, then (14) reduces to Riπ (x) := Ri (x, π1 (x), π2 (x)) = Ri (x, π1 , π2 ). Throughout the remainder of this paper we consider the game model GM in (12) under Assumptions 3.1 and 3.2.

4

Noncooperative equilibria

Let GM be as in (12). In this work, we are concerned with the following two types of payoff functions, where we use the notation (14). For each pair of strategies (π1, π2) ∈ Π1 × Π2 and each player i = 1, 2:

• The finite-horizon payoff

(15) V_τ^i(s, x, π1, π2) := IE_{sx}^{π1,π2} [ ∫_s^τ e^{−ρ(t−s)} R_i(t, X(t), π1, π2) dt + e^{−ρ(τ−s)} K_i(τ, X(τ)) ],

where 0 ≤ s ≤ τ, x ∈ S, K_i is a function in M (the space in Assumption 3.2 a)), and ρ ≥ 0 is a "discount factor". The time τ > 0 is called the game's horizon or "terminal time", and K_i is a "terminal reward".

• The infinite-horizon discounted payoff

(16) V^i(s, x, π1, π2) := IE_{sx}^{π1,π2} [ ∫_s^∞ e^{−ρ(t−s)} R_i(t, X(t), π1, π2) dt ],

where s ≥ 0, x ∈ S, and ρ > 0 is a (fixed) discount factor.

Each player i = 1, 2 wishes to "optimize" his payoff in the following sense.

Definition 4.1 For i = 1, 2, let V_τ^i be as in (15), and define S_τ := [0, τ] × S. A pair (π1∗, π2∗) ∈ Π1 × Π2 of admissible strategies is said to be a noncooperative equilibrium, also known as a Nash equilibrium, if for all (s, x) ∈ S_τ

(17) V_τ^1(s, x, π1∗, π2∗) ≥ V_τ^1(s, x, π1, π2∗) for all π1 ∈ Π1

and

(18) V_τ^2(s, x, π1∗, π2∗) ≥ V_τ^2(s, x, π1∗, π2) for all π2 ∈ Π2.

Hence, (π1∗, π2∗) is a Nash equilibrium if for each i = 1, 2, π_i∗ maximizes over Π_i the payoff function V_τ^i of player i when the other player, say j ≠ i, uses the strategy π_j∗. For the infinite-horizon payoff function in (16), the definition of Nash equilibrium is the same as in Definition 4.1 with V^i and Sˆ := [0, ∞) × S in lieu of V_τ^i and S_τ, respectively.

Zero-sum games. For i = 1, 2, let F_i(s, x, π1, π2) be the payoff function in either (15) or (16). The game is called a zero-sum game if

F_1(s, x, π1, π2) + F_2(s, x, π1, π2) = 0 for all s, x, π1, π2,


that is, F_1 = −F_2. Therefore, if we define F := F_1 = −F_2, it follows from (17) and (18) that player 1 wishes to maximize F(s, x, π1, π2) over Π1, whereas player 2 wishes to minimize F(s, x, π1, π2) over Π2, so (17) and (18) become

(19) F(s, x, π1, π2∗) ≤ F(s, x, π1∗, π2∗) ≤ F(s, x, π1∗, π2)

for all π1 ∈ Π1 and π2 ∈ Π2, and all (s, x). In this case the Nash equilibrium (π1∗, π2∗) is called a saddle point. In the zero-sum case, the functions

(20) L(s, x) := sup_{π1∈Π1} inf_{π2∈Π2} F(s, x, π1, π2)

and

(21) U(s, x) := inf_{π2∈Π2} sup_{π1∈Π1} F(s, x, π1, π2)

play an important role. The function L(s, x) is called the game's lower value (with respect to the payoff F(s, x, π1, π2)) and U(s, x) is the game's upper value. Clearly, we have

(22) L(s, x) ≤ U(s, x) for all (s, x).

If the upper and lower values coincide, then the game is said to have a value, and the value of the game, call it V(s, x), is the common value of L(s, x) and U(s, x), i.e. V(s, x) := L(s, x) = U(s, x) for all (s, x). On the other hand, if (π1∗, π2∗) satisfies (19), a trivial calculation yields

U(s, x) ≤ F(s, x, π1∗, π2∗) ≤ L(s, x) for all (s, x),

which together with (22) gives the following.

Proposition 4.2 If the zero-sum game with payoff function F has a saddle point (π1∗, π2∗), then the game has the value V(s, x) = F(s, x, π1∗, π2∗) for all (s, x).

The next proposition gives conditions for a pair of strategies to be a saddle point.


Proposition 4.3 Suppose that there is a pair of admissible strategies (π1∗, π2∗) that satisfies, for all (s, x),

(23) F(s, x, π1∗, π2∗) = sup_{π1∈Π1} F(s, x, π1, π2∗) = inf_{π2∈Π2} F(s, x, π1∗, π2).

Then (π1∗, π2∗) is a saddle point.

Proof: Let (π1∗, π2∗) be a pair of admissible strategies that satisfies (23). Then, for all (s, x), from the first equality in (23) we obtain

F(s, x, π1∗, π2∗) ≥ F(s, x, π1, π2∗) for all π1 ∈ Π1,

which is the first inequality in (19). Similarly, the second equality in (23) yields the second inequality in (19), and it follows that (π1∗, π2∗) is a saddle point. ✷

In the next section we give conditions for a pair of strategies to be a saddle point, and in Section 6 we study the so-called nonzero-sum case as in (17), (18).
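The objects in (19)-(23) already have content in the static, finite case: in a zero-sum matrix game, a pure saddle point is an entry that is simultaneously the minimum of its row and the maximum of its column, and at such an entry the lower value (20) and the upper value (21) coincide, as in Proposition 4.2. A small illustration; the payoff matrix is an arbitrary example of our own.

```python
# Player 1 picks a row to maximize F; player 2 picks a column to minimize F.
F = [[3, 1, 4],
     [1, 0, 2],
     [5, 2, 6]]
n = len(F)

lower = max(min(row) for row in F)                        # sup_pi1 inf_pi2, as in (20)
upper = min(max(F[i][j] for i in range(n)) for j in range(n))  # inf_pi2 sup_pi1, as in (21)

# entry (2, 1): minimum of its row and maximum of its column -> a saddle point
assert F[2][1] == min(F[2]) == max(F[i][1] for i in range(n))
print(lower, upper)   # equal: the game has a value, namely F[2][1]
```

Without a saddle point, only the inequality lower ≤ upper of (22) would be guaranteed for pure strategies.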

5

Zero-sum games

In this section we study the existence of saddle points for the finite-horizon and infinite-horizon payoffs in (15) and (16), respectively.

Finite-horizon payoff. As in (19)-(21), the finite-horizon payoff (15), in the zero-sum case, does not depend on i = 1, 2. Hence, we have the payoff

V_τ(s, x, π1, π2) := IE_{sx}^{π1,π2} [ ∫_s^τ e^{−ρ(t−s)} R(t, X(t), π1, π2) dt + e^{−ρ(τ−s)} K(τ, X(τ)) ].

This function V_τ plays now the role of F in (19)-(23). Recall that Assumptions 3.1 and 3.2 are supposed to hold.


Theorem 5.1 Consider ρ ∈ IR and τ > 0 fixed. Moreover, let R(s, x, a1, a2) and K(s, x, a1, a2) be measurable functions on S_τ × A, where S_τ := [0, τ] × S and A := A1 × A2. Suppose that for each pair (π1, π2) ∈ Π1 × Π2, the function R(s, x, π1, π2) is in M0. In addition, suppose that there is a function v(s, x) ∈ D and a pair of strategies (π1∗, π2∗) ∈ Π1 × Π2 such that, for all (s, x) ∈ S_τ,

(24) ρv(s, x) = inf_{π2∈Π2} {R(s, x, π1∗, π2) + L^{π1∗,π2} v(s, x)}

(25)          = sup_{π1∈Π1} {R(s, x, π1, π2∗) + L^{π1,π2∗} v(s, x)}

(26)          = R(s, x, π1∗, π2∗) + L^{π1∗,π2∗} v(s, x),

with the boundary condition

(27) v(τ, x) = K(τ, x) for all x ∈ S.

Then:

a) v(s, x) = V_τ(s, x, π1∗, π2∗) for all (s, x) ∈ S_τ;

b) (π1∗, π2∗) is a saddle point and v(s, x) is the value of the game.

Proof: a) Comparing (26)-(27) with (4)-(5), we conclude that part a) follows from Proposition 2.5.

b) Assume for a moment that, for all (s, x) ∈ S_τ and all pairs (π1, π2) of admissible strategies, we have

(28) V_τ(s, x, π1, π2∗) ≤ v(s, x) ≤ V_τ(s, x, π1∗, π2).

If this is indeed true, then b) will follow from part a) together with (19) and Proposition 4.2. Hence it suffices to prove (28). To this end, let us call F(s, x, π1, π2) the function inside the brackets in (24)-(25), i.e.

(29) F(s, x, π1, π2) := R(s, x, π1, π2) + L^{π1,π2} v(s, x).

Interpreting this function as the payoff of a certain game, it follows from (24)-(26) and Proposition 4.3 that the pair (π1∗, π2∗) is a saddle point, that is, F(s, x, π1∗, π2∗) = ρv(s, x) satisfies (19). More explicitly,


from (29) and the equality F(s, x, π1∗, π2∗) = ρv(s, x), (19) becomes: for all π1 ∈ Π1 and π2 ∈ Π2,

R(s, x, π1, π2∗) + L^{π1,π2∗} v(s, x) ≤ ρv(s, x) ≤ R(s, x, π1∗, π2) + L^{π1∗,π2} v(s, x).

These two inequalities together with the second part of Proposition 2.5 give (28). ✷

Infinite-horizon discounted payoff. We now consider the infinite-horizon payoff in (16), which in the zero-sum case can be interpreted as

V(s, x, π1, π2) = IE_{sx}^{π1,π2} [ ∫_s^∞ e^{−ρ(t−s)} R(t, X(t), π1, π2) dt ].

Exactly the same arguments used in the proof of Theorem 5.1, but replacing Proposition 2.5 with Proposition 2.6, give the following result in the infinite-horizon case.

Theorem 5.2 Suppose ρ > 0. Let R(s, x, a1, a2) be as in Assumption 3.2 b). Suppose that there exist a function v ∈ D and a pair of strategies (π1∗, π2∗) ∈ Π1 × Π2 such that, for all (s, x) ∈ Sˆ := [0, ∞) × S,

(30) ρv(s, x) = inf_{π2∈Π2} {R(s, x, π1∗, π2) + L^{π1∗,π2} v(s, x)}
              = sup_{π1∈Π1} {R(s, x, π1, π2∗) + L^{π1,π2∗} v(s, x)}
              = R(s, x, π1∗, π2∗) + L^{π1∗,π2∗} v(s, x)

and, moreover, for all (s, x) ∈ Sˆ and (π1, π2) ∈ Π1 × Π2,

(31) e^{−ρt} IE_{sx}^{π1,π2} [v(s + t, X(s + t))] → 0 as t → ∞.

Then:

a) v(s, x) = V(s, x, π1∗, π2∗) for all (s, x) ∈ Sˆ;

b) (π1∗, π2∗) is a saddle point for the infinite-horizon discounted payoff, and v(s, x) is the value of the game.

Proof: Comparing (30)-(31) with (9)-(10), we can use Proposition 2.6 to obtain a). To obtain b), we follow the same steps used in the proof of Theorem 5.1 but replacing Proposition 2.5 with Proposition 2.6, and S_τ with Sˆ. ✷

6

Nonzero-sum games

An arbitrary game which does not satisfy the zero-sum condition is called a nonzero-sum game. In this section we are concerned with the existence of Nash equilibria for nonzero-sum continuous-time Markov games with the payoff functions (15) and (16).

Finite-horizon payoff. For i = 1, 2, let V_τ^i(s, x, π1, π2) be the finite-horizon payoff in (15). In this setting, the following theorem gives sufficient conditions for the existence of a Nash equilibrium; see Definition 4.1.

Theorem 6.1 Suppose that for i = 1, 2, there are functions v_i(s, x) in D and strategies π_i∗ ∈ Π_i that satisfy, for all (s, x) ∈ S_τ, the equations

(32) ρv_1(s, x) = max_{π1∈Π1} {R_1(s, x, π1, π2∗) + L^{π1,π2∗} v_1(s, x)}
               = R_1(s, x, π1∗, π2∗) + L^{π1∗,π2∗} v_1(s, x)

and

(33) ρv_2(s, x) = max_{π2∈Π2} {R_2(s, x, π1∗, π2) + L^{π1∗,π2} v_2(s, x)}
               = R_2(s, x, π1∗, π2∗) + L^{π1∗,π2∗} v_2(s, x),

as well as the boundary (or "terminal") conditions

(34) v_1(τ, x) = K_1(τ, x) and v_2(τ, x) = K_2(τ, x) for all x ∈ S.

Then (π1∗, π2∗) is a Nash equilibrium and for each player i = 1, 2 the expected payoff is

(35) v_i(s, x) = V_τ^i(s, x, π1∗, π2∗) for all (s, x) ∈ S_τ.

Proof: From the second equality in (32) together with the first boundary condition in (34), Proposition 2.5 gives (35) for i = 1. A similar argument gives of course (35) for i = 2. On the other hand, from the first equality in (32) we obtain

ρv_1(s, x) ≥ R_1(s, x, π1, π2∗) + L^{π1,π2∗} v_1(s, x) for all π1 ∈ Π1.

Thus, using again Proposition 2.5, we obtain v_1(s, x) ≥ V_τ^1(s, x, π1, π2∗) for all π1 ∈ Π1, which combined with (35) for i = 1 yields (17). A similar argument gives (18) and the desired conclusion follows. ✷

Infinite-horizon discounted payoff. Let us now consider the infinite-horizon payoff V^i(s, x, π1, π2) in (16). The corresponding analogue of Theorem 6.1 is as follows.

Theorem 6.2 Suppose that, for i = 1, 2, there are functions v_i(s, x) ∈ D and strategies π_i∗ ∈ Π_i that satisfy, for all (s, x) ∈ Sˆ, the equations (32) and (33) together with the condition

e^{−ρt} T_t^{π1,π2} v_i(s, x) → 0 as t → ∞

for all π1 ∈ Π1, π2 ∈ Π2, i = 1, 2, and (s, x) ∈ Sˆ. Then (π1∗, π2∗) is a Nash equilibrium for the infinite-horizon discounted payoff (16) and the expected payoff is

v_i(s, x) = V^i(s, x, π1∗, π2∗) for all (s, x) ∈ Sˆ, i = 1, 2.

We omit the proof of this theorem because it is essentially the same as the proof of Theorem 6.1 (using Proposition 2.6 in lieu of Proposition 2.5).

7

Concluding remarks

In this paper we have presented a unified formulation of continuous-time Markov games, similar to the one-player (or control) case in Hernández-Lerma [6]. This formulation is quite general and it includes practically any kind of Markov game, but of course this generality comes at a price: we have restricted ourselves to Markov strategies, which are memoryless. In other words, our players are not allowed to use past information; they base their decisions on the current state only. This is a serious restriction that needs to be eliminated, and so it should lead to future work.

Acknowledgement. Thanks to Prof. Onésimo Hernández-Lerma for valuable comments and discussions on this work.


Héctor Jasso-Fuentes
Departamento de Matemáticas, CINVESTAV-IPN, A.P. 14-470, México D.F. 07000, México.

References

[1] Basar T.; Olsder G. J., Dynamic Noncooperative Game Theory, Second Edition, SIAM, Philadelphia, 1999.
[2] Filar J. A.; Petrosjan L. A., Dynamic cooperative games, International Game Theory Review 2 (2000), 47–65.
[3] Gaidov S. D., On the Nash-bargaining solution in stochastic differential games, Serdica 16 (1990), 120–125.
[4] González-Trejo J. I.; Hernández-Lerma O.; Hoyos-Reyes L. F., Minimax control of discrete-time stochastic systems, SIAM J. Control Optim. 41 (2003), 1626–1659.
[5] Haurie A., A historical perspective on cooperative differential games, in: Advances in Dynamic Games and Applications (Maastricht, 1998), 19–29, Birkhäuser, Boston, 2001.
[6] Hernández-Lerma O., Lectures on Continuous-Time Markov Control Processes, Sociedad Matemática Mexicana, México D.F., 1994.
[7] Jasso-Fuentes H., Noncooperative continuous-time Markov games, Tesis de Maestría, CINVESTAV-IPN, México D.F., 2004.

Morfismos, Comunicaciones Estudiantiles del Departamento de Matemáticas del CINVESTAV, se terminó de imprimir en el mes de diciembre de 2005 en el taller de reproducción del mismo departamento localizado en Av. IPN 2508, Col. San Pedro Zacatenco, México, D.F. 07300. El tiraje en papel opalina importada de 36 kilogramos de 34 × 25.5 cm consta de 500 ejemplares en pasta tintoreto color verde.

Apoyo técnico: Omar Hernández Orozco.

Contenido

Approximation of general optimization problems
Jorge Álvarez-Mena and Onésimo Hernández-Lerma . . . . . . . . . . . . . . . . . . . 1

Linear programming relaxations of the mixed postman problem
Francisco Javier Zaragoza Martínez . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

A nonmeasurable set as a union of a family of increasing well-ordered measurable sets
Juan González-Hernández and César E. Villarreal . . . . . . . . . . . . . . . . . . . . 35

Noncooperative continuous-time Markov games
Héctor Jasso-Fuentes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39