D6.4 - Adaptive open social orchestration: methods and integrated prototypes


SmartSociety: Hybrid and Diversity-Aware Collective Adaptive Systems. When People Meet Machines to Build a Smarter Society.
Grant Agreement No. 600854

Deliverable D6.4 (Work Package WP6)
Adaptive open social orchestration: methods and integrated prototypes

Dissemination Level (Confidentiality): PU
Delivery Date in Annex I: 30/06/2016
Actual Delivery Date: 08/07/2016
Status: F (Final)
Total Number of pages: 87
Keywords: social orchestration, adaptation and learning, preference elicitation, recommender system, optimisation

Dissemination levels: PU: Public; RE: Restricted to Group; PP: Restricted to Programme; CO: Consortium Confidential as specified in the Grant Agreement
Status codes: F: Final; D: Draft; RD: Revised Draft


© SmartSociety Consortium 2013-2017

http://www.smart-society-project.eu



Disclaimer

This document contains material which is the copyright of SmartSociety Consortium parties, and no copying or distribution, in any form or by any means, is allowed without the prior written agreement of the owner of the property rights. The commercial use of any information contained in this document may require a license from the proprietor of that information. Neither the SmartSociety Consortium as a whole, nor any individual party of the SmartSociety Consortium, warrants that the information contained in this document is suitable for use, or that the use of the information is free from risk, and none of them accepts liability for loss or damage suffered by any person using this information. This document reflects only the authors' view. The European Community is not liable for any use that may be made of the information contained herein.

Full project title: SmartSociety: Hybrid and Diversity-Aware Collective Adaptive Systems: When People Meet Machines to Build a Smarter Society
Project Acronym: SmartSociety
Grant Agreement Number: 600854
Number and title of workpackage: WP6 Compositionality and Social Orchestration
Document title: Adaptive open social orchestration: methods and integrated prototypes
Work-package leader: Michael Rovatsos, UEDIN
Deliverable owner: Michael Rovatsos, UEDIN
Quality Assessor: Agnes Grünerbl, DFKI



List of Contributors

Contributor                 Partner Acronym
Sofia Ceppi                 UEDIN
Pavlos Andreadis            UEDIN
Shaona Ghosh                UOXF
Zhenyu Wen                  UEDIN
Michael Rovatsos            UEDIN
Kevin Page                  UOXF
Subramanian Ramamoorthy     UEDIN



Executive Summary

This deliverable documents the final results of the work conducted in work package WP6 Compositionality and Social Orchestration, describing the final algorithms and architectures the work package has produced. Building on the abstract models for social computation developed in deliverable D6.1, on the basic initial orchestration architecture proposed in D6.2, and on the algorithmic foundations for social allocation and preference learning presented in D6.3, this deliverable completes the development of an integrated adaptive social orchestration architecture by integrating and expanding upon our earlier work.

Since the abstract conceptual idea “compositionality = context + collectives” formulated in D6.1, we have significantly refined our notions of hybridity and diversity over the course of the project, toward a more formally and computationally grounded framework of context and collectives. This improved understanding underpins the algorithms and architectures described as the final contribution of WP6 in this deliverable, which provide significant improvements over previous designs.

The centrepiece of the results of WP6 is the adaptive social orchestration architecture SmartOrch, which provides an integrated framework for adaptation algorithms at three different levels:

• the process layer, at which primitive computational processing steps are located;
• the orchestration workflow layer, which embeds an orchestration lifecycle consisting of different such processes; and
• the user interaction workflow layer, which enacts the interaction model that governs the way users work with the platform.

The fundamental idea behind adaptation in SmartOrch is to extract orchestration patterns from observed traces of previous operation of the HDA-CAS at each of these layers, and to use adaptive decision-making algorithms to perform effective adaptation.

After introducing the overall architecture, the deliverable presents techniques that can be used to embed adaptation capabilities at each of these layers: group recommendation and preference elicitation algorithms that enable process-level adaptation to optimise human team task composition processes; methods for long-term optimisation based on distributed machine learning over process networks that can be used for orchestration-level adaptation; and theoretical results on different interaction workflows for coalition formation that can be used for workflow-level adaptation. Five original research papers, included as appendices to the document, provide technical details and evaluation results for each of these component methods and for the overall architecture.

Overall, the deliverable describes a number of significant contributions that extend existing methods to address hybridity and diversity with a clearly human-centric outlook, going significantly beyond previous work in accounting for these phenomena. To our knowledge, the work reported here is the first attempt to develop adaptive methods for orchestrating HDA-CASs in a principled, rigorous and comprehensive way.




Table of Contents

1 Introduction . . . . . . . . . . . . . . . . . . 7
  1.1 Evolution of the work package . . . . . . . . . . . . . . . . . . 7
  1.2 Broader project context . . . . . . . . . . . . . . . . . . 8
  1.3 Hybridity and Diversity-Awareness in Social Orchestration . . . . . . . . . . . . . . . . . . 9
  1.4 Contents and structure of this deliverable . . . . . . . . . . . . . . . . . . 12
2 Adaptive social orchestration architecture . . . . . . . . . . . . . . . . . . 12
3 Process-level adaptation . . . . . . . . . . . . . . . . . . 15
  3.1 Adaptive task allocation . . . . . . . . . . . . . . . . . . 15
  3.2 Preference elicitation . . . . . . . . . . . . . . . . . . 18
4 Orchestration-level adaptation . . . . . . . . . . . . . . . . . . 20
5 Workflow-level adaptation . . . . . . . . . . . . . . . . . . 22
6 Conclusions . . . . . . . . . . . . . . . . . . 26
A Adaptive social orchestration paper . . . . . . . . . . . . . . . . . . 28
B Recommender system paper . . . . . . . . . . . . . . . . . . 39
C Preference elicitation paper . . . . . . . . . . . . . . . . . . 49
D Orchestration adaptation paper . . . . . . . . . . . . . . . . . . 60
E Workflow adaptation paper . . . . . . . . . . . . . . . . . . 81


1 Introduction

This deliverable documents the final results of the work conducted in work package WP6 Compositionality and Social Orchestration, describing the final algorithms and architectures the work package has produced. From the outset, the core aim of our work has been to develop adaptive social orchestration methods that would help us understand and support the operation and evolution of hybrid and diversity-aware collective adaptive systems (HDA-CASs). This has led to a broad range of activities: it began with the conceptual design of abstract models of social computation, which allowed us to identify the key processes and entities involved in orchestrating such computations; continued with implementations of system prototypes that enabled practical experimentation with these conceptual designs; and culminated in the development of advanced optimisation and adaptation algorithms informed by insights into hybridity and diversity in human-centric collective computation. We start by briefly recapping the evolution of WP6 and its role within the broader context of SmartSociety, and by outlining the view of hybridity and diversity that has emerged from our work and how this has affected our research methodology.

1.1 Evolution of the work package

The work in WP6 has evolved through several stages over the course of the project, which we recap here to situate the contribution of this deliverable within the overall programme of work of the work package:

• In deliverable D6.1, we presented an abstract model for social computation, which introduced key concepts and gave an abstract formal account of the building blocks of any social orchestration process. We also presented the Play-By-Data architecture, which maps peer-to-peer collaboration to a purely data-driven interaction framework for large-scale collectives.

• Deliverable D6.2 provided SmartShare, a concrete (static) social orchestration architecture that was a first instantiation of the principles laid out in D6.1, and evaluated its computational scalability. This provided a prototypical system to be used for later validation of adaptive methods introduced in social orchestration systems.

• The subsequent deliverable D6.3 focused on algorithmic methods for constructing collectives, both in the sense of forming task-performing coalitions (social allocation) and in the sense of identifying groups through stratification based on profiling of preferences and behaviours (preference elicitation).

Taken together, these algorithmic foundations underpin our design of adaptive social orchestration algorithms. The present and final deliverable of the work package completes the cycle by bringing the advances of D6.2 and D6.3 together in an integrated adaptive social orchestration architecture called SmartOrch, aligning them with the algorithmic and architectural principles described abstractly in D6.1, and combining the results of D6.2 (which focused on architectural concerns) with those of D6.3 (which addressed algorithmic issues). Moreover, it advances previous




preliminary work on both algorithms and architecture presented in D6.2 and D6.3, based on the improved understanding of multiple levels of adaptation that we have developed since the earlier prototypes. In this regard, the deliverable provides not only an integration of previous achievements, but also an account of substantial additional results, in terms of both algorithmic and architectural advances.

1.2 Broader project context

Within SmartSociety, WP6 has played a dual role. In terms of scientific focus, it has sought to investigate the principles underlying compositionality in HDA-CASs, i.e. to answer the question: under what circumstances can we expect collectives to form, and how can we use computational means to analyse and support this process? In terms of contribution to the overall SmartSociety vision, it has provided orchestration architectures and prototypical implementations that play a key role in the “big picture”, as no social computations can be performed without the composition of tasks and the agreement of human participants to them.

To some extent, this has led to many components needed for the overall SmartSociety architecture being “emulated”, at an often rudimentary level, by prototypes needed to demonstrate an overall orchestration system, and to WP6 engaging in extensive interactions with other work packages to align its designs with the longer-term research and development conducted by them. This approach has also been motivated by the need to develop applications such as the SmartShare ridesharing system to demonstrate overall functionality and run real-world user validation trials. We briefly review the specific interactions with other work packages that have taken place in this process:

• Formal models for data-driven peer-to-peer interactions were developed with WP2, alongside experimental integration with their provenance and reputation services; a methodology for the principled design of data and process models for social computation systems was derived from the experiences gathered through this cross-WP collaboration.

• We contributed significantly to the formal models of tasks developed in the project, which have informed the development and implementation of the Task Execution Manager by WP3, to be described in the final WP3 deliverable.

• We implemented a rudimentary Peer Manager in our orchestration system that served as a template for the components being developed by WP4, which will be described in the final WP4 deliverable.

• We developed methods for indirect incentivisation that modify the properties of tasks proposed to human users; these are complementary to the direct incentives investigated by WP5, which rely on messages and task-external rewards for users.

• We provided a customisable orchestration backend for a limited number of concrete orchestration patterns, situated below the abstraction level of the programming model developed by WP7; these patterns are now being “bundled” for use as orchestration primitives by that programming model.




• Our architecture prototypes served as an early, simple version of (and, in some sense, a blueprint for) the integration of many components of the overall SmartSociety architecture developed by WP8, and helped validate and refine the project's overall view of the interactions between individual work package components.

At the time of writing, the objective of T6.4, namely "to integrate the outcomes of T6.1-T6.3 with the components developed in WP3-WP5", has not been fully completed, as the delivery of the respective components has been slightly delayed beyond their initial submission deadline. However, as the description in the sections below and the technical papers included in the appendices show, the WP6 architecture and algorithms have reached a stage of maturity at which this integration will be possible, with WP8 taking the lead in this process after WP6 has completed its work.

1.3 Hybridity and Diversity-Awareness in Social Orchestration

In order to explain how the different lines of work described in this document fit together, it is worth grounding them in the phenomena that motivate all of them, hybridity and diversity in collective adaptive systems, and in the target technological advancement they all aim at, namely the development of novel adaptive social orchestration methods. Since the abstract conceptual idea “compositionality = context + collectives” formulated in D6.1, we have significantly refined our notions of hybridity and diversity over the course of the project, toward a more formally and computationally grounded framework of context and collectives. This improved understanding underpins the algorithms and architectures described as the final contribution of WP6 in this deliverable.

Regarding hybridity, we have identified a need to address adaptivity in a holistic way that encompasses both human and machine activity. Once we take the idea of hybrid collectives seriously, we obtain a view of adaptation that involves both machine and human “resources”, from the low-level processing activities of sensors and computing nodes all the way up to human-to-human negotiation. This has two important implications. Firstly, we need to provide adaptation mechanisms that address all levels of information processing steps and interactions between system components, for example:

• adaptive optimisation of the computational distribution of algorithmic processes across processing nodes;
• the discovery of patterns in user behaviour that can be exploited to optimise system performance;
• the allocation of tasks to different machine peers and human task-performing peers based on past performance; and
• adaptive selection of appropriate interaction patterns to manage communication between human participants in a social computation.
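The third of these mechanisms, allocation based on past performance, can be sketched very simply: keep a running success record per peer and prefer peers with the best smoothed success rate. The sketch below is a minimal illustration only; the peer names, histories, and the choice of Laplace smoothing are our own assumptions, not taken from this deliverable.

```python
# Minimal sketch of performance-based task allocation: rank candidate peers
# (human or machine) by their smoothed historical success rate on past tasks.
# Peer names and histories are invented for illustration.

history = {  # peer -> (tasks completed successfully, tasks attempted)
    "peer_a": (9, 10),
    "peer_b": (1, 10),
    "peer_c": (0, 0),   # newcomer: no track record yet
}

def score(peer):
    done, tried = history[peer]
    # Laplace smoothing gives newcomers a neutral 0.5 rather than zero,
    # so they are not locked out before they ever receive a task.
    return (done + 1) / (tried + 2)

ranked = sorted(history, key=score, reverse=True)
print(ranked)  # peer_a first, then the untried newcomer, then peer_b
```

A real orchestrator would additionally weight recency and task category, but the ranking principle is the same.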
Realistically, these mechanisms will vary in design and generalisability across the different levels of this “adaptation ladder”: some are specialised algorithms tailored to a particular adaptation scenario (e.g. our group task recommendation system described in section 3), while others provide generic methods for data-driven adaptation (e.g. the application of network Lasso methods to orchestration-level adaptation described in section 4). In this sense, the specific methods we have developed cover a broad range of adaptation approaches for orchestrating HDA-CASs, and our contribution lies mainly in combining them coherently within the same overall architecture, rather than in advancing each particular method to full maturity.




A second consequence of hybridity is that we have to consider a mix of computational and human/societal objectives for adaptation and optimisation. By this we mean that, since an HDA-CAS relies both on purely computational performance and on the participation of human peers, the metrics used to guide adaptation and optimisation need to account for both the machine and the human side of social computation.

In fact, the importance of different types of system objectives stemming from hybridity can be seen as an instance of a much broader challenge rooted in diversity, namely the problem of having to consider the objectives of different stakeholders in parallel when trying to adapt to emergent patterns in system behaviour. More specifically, we have individual participant objectives (e.g. enjoyment, remuneration, self-actualisation) as opposed to global system-level objectives, which can correspond to the objectives of the individual or organisation running the platform (e.g. minimising the churn rate, maximising the percentage of tasks executed to a high quality standard, or simply monetary profit), to shared objectives of a community (e.g. fairness in a variety of senses, achievement of a joint goal, balanced representation of different views), or to a mix of these.

Very often, there will be tension between these objectives, and tradeoffs have to be made when trying to accommodate them. In section 3, we present an optimisation method that illustrates how such tradeoffs can be implemented in the case of task/resource allocation; there are also cases in which the achievement of very high-level objectives (for example, guaranteeing that task coalitions generated by an interaction workflow are stable) may depend on the design of higher-level tiers of an orchestration architecture, as shown in section 5. Beyond the challenge of satisfying multiple objectives in parallel, diversity also creates a challenge with regard to complexity.
Supporting a broader range of human skills, preferences, objectives, and behaviours naturally comes with increased demands in terms of representing this variability and of identifying and adapting to variations among individuals and groups. At the “big data” scale of modern-day CASs, this becomes computationally intractable if we try to create diversity-awareness by naively allowing thousands or millions of variations to be considered by, for example, a recommendation system, or an algorithm that tries to learn huge numbers of different user behaviours. One part of our approach to tackling this problem has been to focus on collectives through three mechanisms:

• user typing, which allows us to group users by similar behaviour;
• coarse preferences, which allow us to reduce the number of different situations considered relevant to users; and
• team formation, which enables us to treat social computation tasks as composed of individuals playing specific roles in specific activities, so that large numbers of tasks can be composed efficiently by considering only one category of task at any given point in time.

The principles of these mechanisms were described in the previous deliverable D6.3, and they are extended by more advanced algorithmic methods in the present deliverable (see section 3).

Over and above these previously developed methods, the work reported here adds another perspective to dealing with diversity, one that focuses on context. This is motivated by critically investigating the role of man-machine hybridity: more specifically, by the inability of any a priori design of a computer model of users and the tasks they engage with to capture all relevant contextual variables, and by the insight that, when considering this in a




diversity-aware way, not only may a system fail to adequately represent all context relevant to its users, but what is relevant may also differ from user to user. As a simple example, consider a collaborative activity that involves some cost, some timing, and a set of people involved. One (type of) user may care mostly about cost; another about cost and, to a lesser extent, timing; and yet another may care much more about the identities of the other participants. Additionally, there may be latent contextual variables, e.g. the difficulty of the task, that matter to some users but are not captured by the system.

How can we devise suitable diversity-aware orchestration methods to address this problem? The solution we propose is to explicitly maintain uncertainty about the actual preferences of users, in order to make allowances for relevant variables that are not captured, and to develop preference elicitation methods that permit uncertainty over user types while also supporting coarseness of preferences as defined above, in order to exploit the indifference of some users to specific details of a task.

Following through with the idea that we do not expect the system to fully understand and predict user behaviour, we also need to allow choice, i.e. not to expect that a machine-determined solution to a problem will simply be carried out by humans as anticipated. Instead, the system needs to produce a variety of possible solutions to cater for the unpredictability inherent in autonomous decision making. Moreover, if global objectives are to be supported that may not fully overlap with the combined self-interest of all participants, we must attempt to influence behaviour through “soft” interventions that do not limit individual and collective choice, but provide incentives or disincentives to encourage globally beneficial behaviours. The algorithms for task recommendation and preference elicitation presented in section 3 embed these principles.
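The idea of maintaining uncertainty over user types, rather than committing to a single, possibly wrong, user model, can be sketched as a simple Bayesian update. This is an illustrative toy, not the elicitation algorithm of Appendix C; the type names and acceptance likelihoods are invented.

```python
# Minimal Bayesian sketch of preference elicitation under type uncertainty.
# Each candidate user type assigns a probability of accepting a proposed task;
# observed accept/reject responses sharpen the posterior over types.
# Types and acceptance probabilities are invented for illustration.

def accept_prob(user_type, task):
    """P(accept | type, task): e.g. a 'cost_sensitive' type dislikes expensive tasks."""
    if user_type == "cost_sensitive":
        return 0.9 if task["cost"] < 10 else 0.2
    if user_type == "time_sensitive":
        return 0.9 if task["delay_min"] < 15 else 0.2
    return 0.5  # "indifferent" type

def update(posterior, task, accepted):
    """One Bayes step: new posterior is proportional to prior times response likelihood."""
    unnorm = {
        t: p * (accept_prob(t, task) if accepted else 1.0 - accept_prob(t, task))
        for t, p in posterior.items()
    }
    z = sum(unnorm.values())
    return {t: v / z for t, v in unnorm.items()}

posterior = {"cost_sensitive": 1 / 3, "time_sensitive": 1 / 3, "indifferent": 1 / 3}
# The user rejects a cheap but slow task, then accepts a cheap and fast one:
posterior = update(posterior, {"cost": 5, "delay_min": 40}, accepted=False)
posterior = update(posterior, {"cost": 5, "delay_min": 10}, accepted=True)
print(max(posterior, key=posterior.get))  # prints "time_sensitive"
```

Because the posterior never collapses to certainty, the system retains exactly the kind of allowance for uncaptured context argued for above.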
Additionally, we present theoretical results that examine the nature of coalition formation under “limited reporting” of user preferences, which is essentially what we can expect in systems that only imperfectly capture all contextual variables. This work (presented in section 5) approaches the problem of diversity from the standpoint of game-theoretic mechanism design, investigating under what circumstances negotiation workflows guarantee truthful reporting of preferences and stability of the resulting coalitions (in the sense that their members have no incentive to deviate from the proposed solution). Specifically, we establish that different types of user preferences call for different protocols, depending on whether preferences are topological (i.e. they depend on metric properties of the task) or hedonic (i.e. they depend only on who shares a task with whom). This provides a foundation for selecting protocols based on the observed behaviour of participants in an HDA-CAS, so that different workflows could be chosen dynamically over time.

In summary, all of the contributions presented in this deliverable address key aspects of hybridity and diversity that have influenced our work on adaptive social orchestration. Taken together, they constitute the first attempt to investigate these phenomena in a comprehensive and principled way in the context of the kind of social orchestration that we encounter in HDA-CASs.
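The stability notion invoked here can be made concrete for the hedonic case with a small check: a partition of users into coalitions is Nash-stable if no individual would rather be alone or join another existing coalition. The sketch below uses an invented additive utility model, far simpler than the setting analysed in Appendix E.

```python
# Illustrative check of Nash stability for hedonic preferences, where a user's
# utility depends only on *who* shares the task with her. The pairwise "liking"
# numbers below are invented for illustration.

liking = {  # liking[i][j]: how much i values sharing a task with j
    "ann": {"bob": 2, "cat": -1},
    "bob": {"ann": 2, "cat": 1},
    "cat": {"ann": -1, "bob": 1},
}

def utility(i, coalition):
    """Hedonic utility: sum of i's likings of her co-members (alone = 0)."""
    return sum(liking[i].get(j, 0) for j in coalition if j != i)

def nash_stable(partition):
    """True if no agent strictly prefers being alone or joining another coalition."""
    for coalition in partition:
        for i in coalition:
            here = utility(i, coalition)
            if here < 0:  # deviating to a singleton coalition yields utility 0
                return False
            for other in partition:
                if other is not coalition and utility(i, other | {i}) > here:
                    return False
    return True

print(nash_stable([{"ann", "bob"}, {"cat"}]))  # True: nobody wants to move
print(nash_stable([{"ann", "cat"}, {"bob"}]))  # False: ann prefers being alone
```

A negotiation workflow that only ever outputs partitions passing such a check gives participants no incentive to deviate, which is the guarantee the theoretical results are after.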



1.4 Contents and structure of this deliverable

As anticipated in the SmartSociety Description of Work, this deliverable “encompasses a description of the algorithms for adaptive open social orchestration designed in T6.3, together with their prototypical implementation, already adapted to integrate the components developed in WP3-WP5”, and includes an evaluation of the features shown by the prototype. It documents the achievement of the objectives of task T6.3, specifically to “develop mechanisms for adaptive social orchestration, which enable a complete cycle of automated design-observation-redesign in accordance with global design objectives”.

The centrepiece of the results of WP6 is the adaptive social orchestration architecture SmartOrch, introduced in section 2 and presented in detail in a paper included in Appendix A (submitted to the ASE 2016 conference). Each of the subsequent sections focuses on adaptation methods for one of the three levels of this overall architecture:

• In section 3, we give an account of the group recommendation and preference elicitation methods we have developed and implemented to enable process-level adaptation, which optimises human team task composition processes based on observation. The technical details are given in the papers attached as Appendices B (submitted to the DIVERSITY 2016 workshop at ECAI 2016) and C (in preparation for submission to AAAI 2017).

• In section 4, we summarise work on orchestration-level adaptation, which provides a more general method for learning global patterns from collective behaviour, regardless of the algorithms used to compose individual tasks, and which can be used for long-term optimisation of any orchestration process. A technical paper included in Appendix D (submitted to the Journal of Knowledge Discovery and Data Mining) provides the details.

• Finally, in section 5, we report on work conducted toward workflow-level adaptation, where we have produced theoretical results on the properties of different interaction workflows for coalition formation; these could serve as a basis for switching among such workflows dynamically based on observed user behaviour. The details are described in a paper (in preparation for submission to AAMAS 2017) in Appendix E.

2 Adaptive social orchestration architecture

As stated above, the adaptive social orchestration architecture we have developed, SmartOrch, is the centrepiece of the WP6 work reported in this deliverable. Its structure, described in detail in the paper included in Appendix A, reflects the integration steps conducted while consolidating and combining the individual contributions of the research conducted in WP6 since the beginning of the SmartSociety project:

• It extends the previous static social orchestration architecture SmartShare presented in D6.2, inheriting the previously implemented Play-By-Data principles, which allow agent coordination protocols representing social orchestration lifecycles to be mapped onto an asynchronous, event-driven processing framework that peers and components can interact with via RESTful APIs.

• It includes implementations of the task allocation and preference learning algorithms first outlined in D6.3. These are embedded in the Task Composition Manager, Negotiation Manager, and User Interaction Workflow Manager components of the SmartOrch architecture.

• It specifies the interfaces between the Execution Engine, which is the key runtime scheduling and execution component of the orchestration architecture, and both the other WP6 components and the components provided by other work packages (e.g. the Peer Manager, Task Execution Manager, Provenance Manager, etc.), thus providing the functionality of an Orchestration Manager as described in the overall SmartSociety architecture developed by WP8.

• It adds a new Optimiser component, which combines our three-tiered view of adaptation at the process, orchestration workflow, and user interaction workflow layers.

Figure 1: The SmartOrch architecture (components shown: the client applications SmartShare, AskSmartSociety and AskAndShare; Client REST APIs; the Execution Engine with Workflow Executor, Job Scheduler, Fetch Process and Job Repo; the Optimizer; the Orchestration Patterns; the User Interaction Workflow Manager; and, via Platform REST APIs, the Peer Manager, Task Composition Manager, Negotiation Manager and Provenance Manager).

The overall architecture is shown in figure 1, and described in more detail in Appendix A. To summarise the main novelty compared to our previous work, we focus here on the Optimiser component, starting with an introduction to the three conceptual layers of adaptation it involves. These are:

1. The process layer, at which primitive computational processing steps are located (e.g. authentication, matchmaking). These cannot be broken down into separately orchestrated sub-steps, and need to be allocated computational resources per platform job


that corresponds to a process instance to be executed. Such steps can be performed by internal components of the platform, or delegated to external third-party services (e.g. a reputation service).

2. The orchestration workflow layer, which embeds a specific control flow of jobs enacted by a specific “manager” component, typically one for each stage of the social orchestration lifecycle (peer discovery, task composition, task allocation, task execution, and feedback; see previous WP6 deliverables). For example, a task composition manager might receive a request from a peer to generate suitable tasks for achieving a certain objective, and handle contacting the right peers, performing matchmaking operations, and consulting reputation information to rank the results.

3. The user interaction workflow layer, which enacts the interaction model that governs the way users work with the platform, providing the input/output interfaces to the humans participating in the computation. For example, the negotiation process in a group activity might require all participants in a task to agree to it explicitly, and the process of tracking this agreement would be handled by a negotiation manager accepting agree/reject messages and updating the state of the task. It is at this layer that managers expose APIs toward client applications, whereas the lower-level layers only involve APIs used by internal platform components.

The fundamental idea behind the Optimiser is to extract orchestration patterns from observed traces of previous operation of SmartOrch at each of these layers, and to use adaptive decision-making algorithms to adapt to observed behaviour. An algorithm that learns user preferences in order to assign users to social computation tasks they would prefer, for example, would be located at the process layer (specifically, within the Task Composition Manager), and would derive patterns of behaviour from observed past acceptance or rejection of proposed tasks. An algorithm that detects groups of users who rarely interact with each other, and concludes that the negotiations within these groups could be managed by different computational nodes, would instead be located at the orchestration workflow layer. And if we observe that some users participate more reliably when they use a different confirmation procedure after having agreed to a task, an adaptive selection procedure at the user interaction workflow layer can decide which of these procedures to use for which type of user.

The paper included in Appendix A gives specific examples of simple adaptation algorithms that can be used at each level to respond to the three phenomena just mentioned. Each of them has been evaluated through simulation experiments using SmartOrch in our ridesharing scenario. These experiments, which show that significant performance improvements can be obtained by using adaptive social orchestration methods, serve as a proof of concept of the overall potential of our approach. In the following three sections, we report on our development of more principled algorithmic solutions for adaptation at these three levels of social orchestration, going beyond these simple examples. Their mathematical foundations provide a solid basis for more generalisable adaptation algorithms tailored to the setting of social orchestration.
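The adaptive selection of confirmation procedures described above for the user interaction workflow layer can be realised with standard bandit-style techniques. The sketch below is our own minimal illustration, not the algorithm evaluated in Appendix A; the procedure names and completion rates are invented.

```python
import random

# Epsilon-greedy sketch of adaptation at the user interaction workflow layer:
# for each user type, prefer the confirmation procedure with the best observed
# completion rate, while occasionally exploring alternatives.

class WorkflowSelector:
    def __init__(self, procedures, epsilon=0.1, seed=0):
        self.procedures = list(procedures)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.stats = {}  # (user_type, procedure) -> (completions, trials)

    def _rate(self, user_type, procedure):
        done, trials = self.stats.get((user_type, procedure), (0, 0))
        return done / trials if trials else 0.5  # neutral prior for untried options

    def select(self, user_type):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.procedures)  # explore
        return max(self.procedures, key=lambda p: self._rate(user_type, p))  # exploit

    def record(self, user_type, procedure, completed):
        done, trials = self.stats.get((user_type, procedure), (0, 0))
        self.stats[(user_type, procedure)] = (done + int(completed), trials + 1)

sel = WorkflowSelector(["one_click_confirm", "reminder_then_confirm"])
true_rate = {"one_click_confirm": 0.4, "reminder_then_confirm": 0.9}
for _ in range(500):  # simulated interaction trace for one user type
    proc = sel.select("casual")
    sel.record("casual", proc, sel.rng.random() < true_rate[proc])

sel.epsilon = 0.0  # act greedily once enough evidence has accumulated
print(sel.select("casual"))
```

The same selector could be keyed on any user-typing scheme; in a deployed orchestrator the reward signal would come from the observed completion events in the platform's trace log rather than a simulated rate.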

3 Process-level adaptation

3.1 Adaptive task allocation

The aim of adaptive task allocation in HDA-CASs is to offer users the possibility to find peers to share a task or resource with. Essentially, it has to propose possible collaborative solutions to users, i.e., identify coalitions of users each of which can share a task or resource. As we are going to discuss, these solutions should be carefully chosen by a system that takes into account user diversity and uncertainty regarding human behaviour. A system used for these applications can be considered successful if it satisfies (to a certain degree) a global system-level objective representing the needs of a collective of users. For example, in the ridesharing scenario in which users aim to find someone to share a ride with, the global system-level objective could take into account the number of users who find peers and their overall level of satisfaction. However, even if driven by the same global goal, each user aims to maximise her own satisfaction with the service, i.e., each individual has the objective to maximise her utility. In the ridesharing case, the utility of a user may depend on the distance between her location and the pick-up point and between her final destination and the drop-off point, for example. A system that does not take into account these individual participant objectives but focuses only on maximising the global system-level objective risks incurring a high churn rate, which, in the long term, puts the overall social computation provided by the HDA-CAS at risk. Given this multi-criteria optimisation problem and the tension between different types of objectives, it is fundamental for the system to provide solutions that offer a trade-off between them. Each user may have different preferences over tasks/resources and peers, and her utility for a specific solution is affected by how much the proposed solution satisfies her preferences.
Crucially, due to this, a system must take into account the diversity among users in terms of variation between their individual preferences. In order to do so, it has to learn users’ individual preferences and adapt the computation of user utilities and the identification of solutions accordingly. Indeed, it is unrealistic to assume that a user would accept any solution proposed by the system, especially if such a solution does not account for her preferences. For example, consider the ridesharing scenario and assume that user i has no strict preferences over the duration of the ride, while user j wants to reach her destination minimising travel time. User i would accept a ride to go from location A to location B that takes more time than a train journey between the same two locations, while user j would reject it. Moreover, apart from the diversity across users (inter-user diversity), a system should also account for the possibility that a user would behave in different ways when facing (apparently) the same situation multiple times. This intra-user diversity is due to the complexity with which humans make decisions. Several factors affect human decision making, e.g., social, psychological, cultural, and temporal aspects of the situation. One of the key assumptions behind our work is that identifying all these contextual factors for a specific user in a specific situation is not feasible for a computer-based system. For example, in proposing a ride, a system can take into account pick-up location, drop-off location, and time. However, it is possible that user i has no preferences about the factors identified


by the system and instead cares about the size of the cars and the number of people she is going to share the car with. To overcome the problem of capturing the exact factors that affect a user’s decision, a system should account for a certain degree of uncertainty in human behaviour and, thus, be flexible enough to successfully address situations in which users act in unpredicted ways. The problem of identifying a possible solution for users while taking into account (up to a certain degree) their preferences can be solved by using matching algorithms that identify coalitions of users that can share a task/resource. However, such an approach would not fully consider the diversity among users and, more importantly, would not provide the degree of flexibility required to face the uncertainty inherent to human behaviour. To address this shortcoming, we propose a diversity-aware recommender system that, instead of offering a single solution to users and unrealistically assuming that each of them would accept it, allows users to choose among a set of possibilities, observes their behaviour, and on this basis improves its knowledge about users’ preferences. It is important to highlight that in such a recommender system each user independently chooses the solution she prefers, but a task that involves several users will only be performed if it is selected by all of them, and hence the global computation performed by the system will depend on the coalitions that are formed by users agreeing to share tasks. For example, assume that two solutions are proposed to two users (i and j) in the ridesharing scenario such that they can either share the ride or be in separate cars. If user i chooses the former option while user j chooses the latter, then the ride chosen by user i never takes place.
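The ridesharing example above can be made concrete with a small executable sketch. The task names, participants, and choices below are invented for illustration and are not taken from the deliverable; the only point is that a multi-user task is realised only if every participant independently selects it.

```python
# Toy illustration of the coordination problem: a task shared by several
# users only takes place if every one of them independently selects it.
# Task names and user choices are illustrative, not from the deliverable.

def realised_tasks(tasks, choices):
    """Return the tasks whose participants all chose that task."""
    return [t for t, participants in tasks.items()
            if all(choices[u] == t for u in participants)]

# Two users, i and j, can share one car or take separate cars.
tasks = {
    "shared_ride": ["i", "j"],  # requires agreement of both users
    "separate_i": ["i"],
    "separate_j": ["j"],
}

# User i picks the shared ride, user j picks a separate car:
choices = {"i": "shared_ride", "j": "separate_j"}
print(realised_tasks(tasks, choices))  # ['separate_j'] - the shared ride never happens
```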
Assuming that all users would independently choose the same solution (i.e., that one of the solutions always maximises the utility of every user) is unrealistic, as it would require, in extreme cases, that they would accept any solution proposed to them. Thus, there is a coordination problem related to the independent user selections that must be addressed by the system. We propose a framework to compute the set of solutions to recommend to the users that simultaneously tackles the multi-criteria optimisation problem and the coordination problem. Indeed, an attempt to solve these two problems sequentially would fail because properties that would be satisfied when the multi-criteria optimisation is solved may not hold anymore if a coordination mechanism is put in place afterward. The reason behind this is that both these issues and their solutions are related to the same function, i.e., the user’s utility. Our framework allows the application designer to specify (i) the global system-level objective, (ii) the individual participants’ objectives, and (iii) the degree of trade-off between these two types of objectives. Moreover, the system supports user coordination by introducing a taxation mechanism that alters the utility users have for the solutions such that, if the preferences of the users are correctly captured by the system, they will all prefer the same global solution. Note that such taxation does not limit the autonomy of users; they can still choose any solution from those computed by the system – it only serves to nudge them toward the globally optimal solution. The process users apply when selecting an option among a set of recommendations based on their utilities affects their coordination and thus has an impact on the taxation mechanism. For this reason, in our work we consider three different user response models


that aim to capture users’ selection behaviour. These models are the noiseless model, in which each user deterministically chooses the option that gives her the highest utility; the constant noise model, where each user selects the solution that gives her the highest utility with a very high probability, and any other solution with a uniform, very small, probability; and the logit model, which prescribes that the selection probability of a solution is proportional to the utility that a user ascribes to that solution. More concretely, we iteratively construct the recommendation set by sequentially solving three Mixed Integer Linear Programs (MILPs), each of them guaranteeing different solution properties. In this way, we can deal with both the multi-criteria optimisation and the computational complexity of finding an exact solution. In order to account for the conflict of interest between collective and individual participants, we initially construct a program called MILP_system that aims to maximise the global system-level objective (e.g. maximisation of the number of users who are assigned a task that solves their problem, or domain-specific objectives, e.g. maximising the occupancy of all available cars in the case of ridesharing). The computed solution has the highest possible value V* for the global objective the system can achieve. A second program called MILP_first takes V* as an input and guarantees that the computed solution achieves at least a fixed percentage of V* in terms of the global objective, while the objective function of the program focuses on maximising a different property that accounts for individual participant objectives, e.g., fairness. The advantage of this approach is that the application designer controls exactly the extent to which the maximisation of the system-level and the individual participant objectives are satisfied.
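The interplay between the global objective, the fairness criterion, and the taxation of non-sponsored recommendations (the “sponsored” solution is the single fair solution left untaxed, as described next in the text) can be sketched with a toy enumeration. The real framework solves MILPs; the function below merely enumerates a small hand-written candidate set, so all names, utilities, and the simple taxation rule are illustrative assumptions, not the deliverable’s actual formulation.

```python
# Simplified sketch of the three-step pipeline: (1) find the best global
# value V*, (2) pick a fair "sponsored" solution achieving >= alpha * V*,
# (3) tax the remaining feasible options so every user weakly prefers the
# sponsored one.  Enumeration replaces the MILPs; toy instances only.

def recommend(candidates, alpha, k, epsilon=0.01):
    # Step 1 (role of MILP_system): the best achievable global value V*.
    v_star = max(c["global"] for c in candidates)
    # Step 2 (role of MILP_first): among candidates achieving at least
    # alpha * V*, sponsor the fairest one (maximise the worst-off user).
    feasible = [c for c in candidates if c["global"] >= alpha * v_star]
    sponsored = max(feasible, key=lambda c: min(c["utils"].values()))
    # Step 3 (role of MILP_others): recommend up to k - 1 further
    # feasible options, taxing any utility above the sponsored one.
    others = []
    for c in feasible:
        if c is sponsored or len(others) >= k - 1:
            continue
        taxed = {u: min(v, sponsored["utils"][u] - epsilon)
                 for u, v in c["utils"].items()}
        others.append({**c, "utils": taxed})
    return sponsored, others

candidates = [
    {"name": "A", "global": 10, "utils": {"i": 5, "j": 1}},
    {"name": "B", "global": 9,  "utils": {"i": 4, "j": 4}},
    {"name": "C", "global": 8,  "utils": {"i": 6, "j": 3}},
]
sponsored, others = recommend(candidates, alpha=0.8, k=3)
print(sponsored["name"])  # 'B': fair, and achieves at least 80% of V*
```

Note how solution A has the highest global value but is sacrificed for fairness, and how, after taxation, every user strictly prefers the sponsored solution B.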
To address the lack of coordination among users, we aim to modify their utility for the recommended allocations such that they all prefer the same solution, which is called the sponsored solution. This solution is the one computed by MILP_first and, since it is the one we want to sponsor, we do not alter the utility the users have for it. Instead, we apply taxation to all other recommended solutions. Since we need to solve the problem of identifying the solutions with desired properties and apply taxation simultaneously, we design a third program, MILP_others, for this purpose. In particular, a solution obtained with this program aims to be similar to the one of MILP_first in terms of users’ utilities, guarantees a minimum level of satisfaction of the global objective, and imposes taxes computed on the basis of the specific user response model considered. Given this, we can view our framework as composed of three steps: In the first step, MILP_system is executed in order to identify the highest possible global utility that can be achieved. In the second step, MILP_first is used to identify the sponsored solution, and in the third step all the remaining non-sponsored solutions are computed by executing MILP_others as many times as the number of recommendations we want to provide (this also allows us to calibrate the computation time needed when computing recommendations). Note that users may have requirements that the MILPs must satisfy; for example, they may have constraints regarding the characteristics of users they are willing to share a task with. In our work, we show that the proposed approach has a high flexibility in terms of constraints that can be included in the MILPs. In order to demonstrate the effectiveness of our approach, we ran two sets of simulated experiments. In the first set, we compared the recommended set of solutions generated


with our diversity-aware approach with a set of solutions that maximise the global system-level objective and provide no support for user coordination. The results of this experiment show that, when users face a recommendation set, the taxation mechanism we use to indirectly coordinate users’ choices is successful and that, by accurately choosing the trade-off between objectives, our system can perform better than the benchmark both in terms of the global system-level objective and fairness. In the second set of experiments, we compare our approach to that of directly allocating users to a single solution that maximises the global system-level objective. In this experiment, we assume that users are characterised by a utility threshold that controls whether or not they will accept a single specific solution assigned to them, where this threshold is not known to the system. For the benchmark, we assume that a user would accept the solution proposed as long as the utility threshold is satisfied. The results of this experiment show that recommending a set of solutions through our approach can produce results that are as good as those obtained by the benchmark, which offers no options and assumes an unrealistic user behaviour.

3.2 Preference elicitation

The orchestration of collective adaptive systems requires making decisions over a number of users, often with limited resources, and with inter-dependencies among the solutions afforded to each user, as we have seen in the previous section. When considering the problem of learning preferences from observation (which is a necessary ingredient for the adaptive task allocation procedures just introduced), this leads to a decision problem whose complexity is combinatorial in the complexity of user preference representations. As a result, it becomes important that these preference models are not only precise but also minimally complex, in a way that allows for faster preference elicitation and decision making. In the paper included in Appendix C, we develop a decision-theoretic model of coarse preferences, which provides a formal methodology for eliciting preferences over, and making decisions with, structured representations of the space of solutions. Our approach aims to capture the effect of Categorical Thinking in humans. This well-studied effect implies that an abstraction of the solution space to one of lower dimensionality precedes many human decisions. Our learning models capture this effect, are compliant with the von Neumann-Morgenstern Utility Theorem, and act in a complementary way to existing methods of reducing complexity in the representation of preference functions, such as the Additive Independence and Generalised Additive Independence models discussed in D6.3. Further, we propose methods for stratifying users in this context by utilising similarities in the way they abstract the solution space. Within the broader context of the work of WP6, these methods are important for embedding diversity-awareness in learning user models without placing excessive demands on computation and the volume of information required for each individual user in order to learn a suitable model of preferences.
In a sense, they demonstrate what kinds of abstractions, generalisations, and simplifications can be employed when we are faced with large numbers of potential contributors to social computations who are acting autonomously and rationally.


Figure 2: Factor graphs for updating our belief over user preferences after the user has rated a potential solution, defined over the original solution domain.

Figures 2 and 3 present the basic principles behind our work. The former is a factor graph representing a typical approach to modelling a user’s preferences in terms of utility and query response over a point from a discrete solution space. Figure 3 depicts a representation of the same preferences when coarseness is taken into account. Solutions in the original multivariate solution space are depicted as x = {x_1, x_2, ..., x_n}, whereas the mapping of x through ϕ_τ to the abstracted solution space of a type τ is depicted as c_τ = {c_τ^1, c_τ^2, ..., c_τ^{h_τ}}. u is the user’s utility over the specific solution, which is here computed by implementing an Additive Independent model through the use of Conditional Gaussians. Note how the domain of each variable changes from one representation to the other, and how this representation depends on the user’s type τ. By utilising this partitioning of the solution space into categories, we are able to speed up elicitation procedures by generalising information over a sampled point. Our current work assumes access to a corpus of user interactions with the system, and presents procedures for learning the underlying abstraction from it. Further, we assume that these abstractions remain relatively constant in the population. Future work will examine the on-line elicitation of these abstractions, doing away with both assumptions. We tested our approach on two real-world datasets, where we sequentially ask users to rate one out of a limited set of available solutions, using a standard approach from the literature, with and without our methods of abstraction and stratification. We demonstrate a significant computational reduction, and improved recommendation efficiency, over two different decision scenarios: a) predicting the rating of a solution and b) recommending the best solution out of a set.
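The mapping ϕ_τ from raw solutions to categories can be sketched in a few lines. The user type, its abstraction, and the category utilities below are all invented for illustration (the actual models in Appendix C are learned, probabilistic, and compliant with utility theory); the sketch only shows why utilities over a small category space generalise across many raw solutions.

```python
# Minimal sketch of coarse preferences: a user of type tau perceives a
# multivariate solution x = (x_1, ..., x_n) only through an abstraction
# phi_tau(x) into a small category space, and holds utilities over
# categories rather than raw solutions.  Mapping and utilities invented.

def phi_tau(x):
    """Abstraction for a hypothetical 'time-sensitive' user type: rides
    are categorised by duration alone, ignoring all other attributes."""
    duration = x["duration_min"]
    if duration <= 20:
        return "short"
    elif duration <= 45:
        return "medium"
    return "long"

# Utility is defined over the (much smaller) category space.
category_utility = {"short": 1.0, "medium": 0.5, "long": 0.1}

def utility(x):
    return category_utility[phi_tau(x)]

# Two solutions that differ in many raw attributes but fall in the same
# category receive the same utility - rating one generalises to the other.
a = {"duration_min": 15, "pickup_dist": 2.0, "car_size": 4}
b = {"duration_min": 18, "pickup_dist": 0.5, "car_size": 7}
assert utility(a) == utility(b) == 1.0
```

Eliciting three category utilities here replaces eliciting a utility for every combination of duration, pick-up distance, and car size, which is the source of the computational savings discussed above.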
Furthermore, we present theoretical evidence for the ability of our methods to scale to multiple user decision problems over larger datasets, where standard approaches struggle to compete. To conclude, by augmenting existing preference learning models with coarse preferences, we achieve better quality of recommendation in on-line scenarios with a restricted number of user-system interactions, while also reducing the computational cost of both learning users’ preferences, and making decisions with them. These results indicate not


only that we can utilise representations of human categorical thinking to better understand and learn what users want, but also that the resulting representations can make otherwise intractable orchestration problems tractable if combined with the task allocation mechanisms introduced in the previous section.

Figure 3: Factor graphs for updating our belief over user preferences after the user has rated a potential solution, defined using coarse preferences.

4 Orchestration-level adaptation

The above process-level adaptation methods focus on specific algorithmic functions to be performed by machine peers as part of a social computation, in particular processes like task composition and allocation, which emphasise modelling users and their preferences, and satisfying system-level and user-level objectives. However, they are specific to individual stages of the overall social orchestration lifecycle (and similar adaptive methods could be designed for stages other than the ones we have considered above), and do not consider patterns that govern the allocation and distribution of computational resources to this overall orchestration process. To address this, we require a more general-purpose orchestration-level adaptation method that aims at identifying recurring patterns in the overall behaviour of the population of human and machine peers in the HDA-CAS. Within hybrid and diversity-aware social systems, such orchestration-level adaptation mechanisms should be scalable and robust in order to maximise system performance and responsiveness. They must be able to perform optimisation in a distributed way across multiple processing units. There is a need to dynamically execute different optimisations with varying objective functions on multiple processing nodes in parallel in order to scale to large human user populations and machine resources. This is the motivation behind the work presented in the paper included in Appendix D, which applies scalable, robust, distributed, and generalised optimisation algorithms within the context of an HDA-CAS, again using ridesharing as an example.


Figure 4: Network Lasso Optimizer. The individual node objectives are indicated by f(x_i) for peer i. The similarity between peers (i, j) is indicated by the weight w_{i,j}. The node objectives and the similarity objectives are solved in parallel over multiple processing units.

As described in section 2, the orchestration workflow layer is responsible for observing patterns of behaviour across the overall social orchestration process in order to perform optimisations for achieving certain objectives, e.g. when contacting groups of peers or performing matchmaking and task composition for them. Identifying the right group of peers to contact, or performing matchmaking, can be facilitated if the technique can perform concurrent optimisation and clustering over the entire peer population. Dynamic clustering of the peers based on historical orchestration patterns can reduce the search space of the optimisation by a cubic factor. Furthermore, the optimisation and clustering should be distributed and scalable over different processing nodes. Referring back to the orchestration workflow level decisions to be made in SmartOrch mentioned in section 2, we illustrate the techniques that we propose using the example of detecting groups of users based on their interactions with each other. An illustration of one of the methods applied in this context is shown in Figure 4. The distributed optimisation mechanism not only optimises individual peers per node; it also jointly optimises the peers based on the similarity between them. Such joint optimisation results in a neighbourhood topological structure within otherwise unstructured peer networks, thus enabling the detection of clusters or groups. The applications of the mechanisms we propose in this work highlight the benefit of three algorithms within a case study of ridesharing systems. The algorithms discussed in the paper are types of statistical regression analysis techniques based on the Network Lasso technique that are known to generalise well. However, these methods are generic enough to handle any HDA-CAS, as the optimisation is not specific to any particular use case.
The synthetic experiments are designed to simulate arbitrary types of peers with multidimensional attributes. Network Lasso addresses diversity-awareness through its capability of distributed optimisation. Each peer is encoded by its parameters and its own objective function. However,


this does not necessarily mean that all peers need a different objective function. This is achieved by the dual decomposition enabled ADMM optimisation technique used inside Network Lasso, where each of these peer (=node) objectives can be run on a different processing unit and solved in parallel. What this means is that the iterative updates at different stages of the optimisation can be performed in parallel (asynchronously) for each node objective until convergence is achieved. The algorithm parameter λ controls the degree of coupling between the node objectives. With λ = 0, each node objective is solved independently of the others, and as λ → ∞, all the node objectives are in consensus with each other. This method of decoupled optimisation is extremely advantageous for large numbers of users in the system, as Network Lasso is capable of reducing the time to convergence by a cubic factor. The results of the synthetic experiments we have conducted demonstrate the efficiency with which the optimisation technique quickly converges to minimum error over 1500 peers. It is also interesting to note that the mechanism presented in the paper is capable of automatic feature selection, with about 30% of the features detected from previous patterns and used for accurate prediction. This is a novel contribution in terms of the reliability of HDA-CASs, handling situations where, for example, peers who were previously online are no longer online, or peers change their preferences. The second part of the paper uses real-world data in the form of trip record datasets for various taxi rides. The records are used to encode realistic task requests, with the goal of clustering the peers that have not been served and optimising the clusters to achieve both local and global objectives. In these experiments, we show that the results achieve 99.8% accuracy within 50 iterations and 4 clusters.
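The two extremes of λ can be demonstrated numerically. The sketch below is a deliberate simplification of the Network Lasso / ADMM machinery used in the paper: it assumes scalar peer parameters, quadratic node objectives f_i(x_i) = (x_i − a_i)², a chain graph with unit edge weights, a smooth squared-difference coupling in place of the lasso penalty, and plain gradient descent in place of ADMM. Its only purpose is to show λ = 0 yielding independent node solutions and a large λ driving the peers into consensus.

```python
# Tiny numerical sketch of the role of lambda: node objectives
# f_i(x_i) = (x_i - a_i)^2 on a chain graph, with a smooth
# squared-difference coupling term, minimised by gradient descent.
# A simplification of the paper's Network Lasso / ADMM solver.

def solve(a, lam, steps=2000, lr=0.002):
    x = list(a)  # start from the uncoupled optima
    n = len(x)
    for _ in range(steps):
        grad = [2 * (x[i] - a[i]) for i in range(n)]  # node objectives
        for i in range(n - 1):                        # chain coupling
            d = 2 * lam * (x[i] - x[i + 1])
            grad[i] += d
            grad[i + 1] -= d
        x = [x[i] - lr * grad[i] for i in range(n)]
    return x

a = [0.0, 1.0, 2.0, 3.0]

# lambda = 0: each node objective is solved independently of the others.
assert solve(a, lam=0.0) == a

# large lambda: the peers are driven into consensus, close to the mean.
x = solve(a, lam=100.0)
assert max(x) - min(x) < 0.1
```

In the real system each per-node update would run on its own processing unit, which is the parallelism the ADMM decomposition provides.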
The work concludes with an in-depth analysis of various scalable and distributed optimisation algorithms within the context of the generalised adaptation goals of the SmartOrch architecture. The orchestration-level optimisation is abstract and can handle any social scenario, with the capacity to automatically detect groups of peers, as well as the peers that are important and will participate in the optimisation model. Compared to the adaptation layers discussed in sections 3 and 5, it is worth emphasising that the orchestration-level adaptation methods we have developed are agnostic to the specific decision problems, interaction patterns, and preference structures used in a given HDA-CAS. All they require as input is data obtained from past social orchestrations, the specification of a network structure that captures the process model underlying a given system structure and information flow, and the definition of objective functions for individual peers, groups of peers, and/or the entire system. Naturally, this comes at the cost of not being able to tailor more specific process- or workflow-level adaptation decisions to a specific scenario. In return for this limitation, however, it allows us to use this optimisation method across any HDA-CAS scenario, in particular addressing hybridity, as we can describe all participating nodes in the system uniformly, regardless of whether they represent human or machine peers.

5 Workflow-level adaptation

The final layer of adaptation we consider concerns the user interaction workflow, i.e. the process model that governs how human peers interact with the system in the process of


social orchestration. Here, we are effectively interested in going “one level up” from the orchestration level, and considering what different types of social orchestration lifecycles might be considered by the HDA-CAS platform. At this level, we are looking at different ways in which the user population might engage with the process of organising the social computation. We consider systems where there is a choice between different user interaction workflows, for example different ways for a human peer

• to be discovered (e.g. advertising interest in participating in certain types of tasks rather than posting a specific request for a goal the user wants achieved),

• to confirm their participation in a task (e.g. selecting a task directly to confirm participation, or negotiating precise terms of a task with other participants), or

• to give feedback (e.g. once after completing a task, or arbitrarily often during its execution).

The question of adaptation here becomes one of selecting from different variations of such workflows based on observed behaviour in order to satisfy various objectives that may be considered relevant to the performance of the HDA-CAS. In the paper included in Appendix E, we study issues related to the selection of user interaction workflows for the specific case of coalition formation, where we are looking for a one-to-n allocation of tasks to users, and we are interested in how all users in a system would be allocated to a set of tasks in an optimal way during a single allocation step. This case is relevant, for example, to sharing economy applications, where people act as providers and users of services, e.g. ridesharing services like Uber and BlaBlaCar, group purchase schemes like GroupOn, or accommodation services such as Couchsurfing.
As it is typically not possible for each individual to reach out to people with similar or complementary needs, this type of application is necessary to ensure that each user has reliable access to services. In this setting, to coordinate the activities of users, we would run a centralised allocation mechanism like that presented in section 3.1, but at the user interaction workflow level, the question arises as to what procedure we should apply to interact with these users. Specifically, we look at the problem of whether users should respond to an allocation offered by the system sequentially to confirm whether they are in agreement with this allocation, or whether they can be automatically allocated concurrently. As an example of an important system-level property we would like to satisfy, we investigate stability, i.e. designing the workflow such that no user should prefer a coalition different from the one she has been assigned to. Note that each agent is strategic and has her own private preferences, and agents could communicate preferences to the system that are different from their actual preferences if this seems beneficial, so we also want the mechanism we propose to guarantee truthfulness. While stability and truthfulness are central concerns in the area of mechanism design, in HDA-CASs the design problem is significantly confounded by diversity. Even though we must elicit each agent’s preferences over all her possible coalitions, i.e. allocations the


system might propose to her, preference elicitation is made particularly difficult by the number of parameters that characterise any given task and the possibility of trade-offs between these parameters. Moreover, humans are not always willing to disclose these preferences completely, and even if they were, obtaining complete preference profiles from all participants would require a highly complex and tedious procedure, as we would have to calculate all possible coalition structures over all users engaged with the system at any given point in time. Last but not least, as explained in the introduction chapter, in a truly diversity-aware system, we cannot even realistically rely on the system designer having considered all contextual variables that might be relevant to any participant and over which preferences should be elicited. This difficulty in obtaining and exploiting complete preference profiles suggests adopting approaches that are based on partial preference profiles, i.e. adopting mechanisms where agents only provide limited reports on their preferences. However, limited reporting introduces its own difficulties. In particular, it is challenging to provide stability guarantees and ensure that the limited reports are truthful, i.e., agents have no incentives to misreport their preferences. In the work presented here, we aim to address this shortcoming and design mechanisms capable of forming stable coalitions under limited reporting. The mechanisms we propose are based on a signalling protocol called the posted goods protocol (PGP). The PGP can be viewed as a generalisation of the signalling protocol customarily adopted in the posted price mechanism. In designing such a protocol, we need to take into account that, due to limited reporting, a mechanism cannot compute an allocation that is guaranteed to be stable without allowing the users to communicate with it. 
For this reason, the protocol allows the mechanism to receive acceptance or rejection signals from the agents about possible allocations that have been proposed to them. More specifically, the PGP prescribes the following five interaction phases between each user i and the mechanism:

Phase 1 User i sends a report that contains information about her preferences. In general, this step will only allow limited reporting, i.e. the number of available reports will be much smaller than the possible preferences the agents may hold.

Phase 2 The mechanism computes an allocation, which is constrained by user i’s report and the reports of any other users being allocated simultaneously.

Phase 3 The mechanism sends offers to user i. Depending on the setting, it may be sufficient for the mechanism to compute just one allocation, or it may be necessary to present multiple offers to the users.

Phase 4 User i responds by informing the mechanism whether she accepts any of the offers. If she accepts, then the mechanism assigns her to the accepted offer. Once a user has accepted an offer, she cannot be re-allocated.

Phase 5 After the task has been executed, user i informs the mechanism whether it was executed correctly and/or she was satisfied with it. This phase ultimately determines whether or not the proposed coalitions were stable.
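The five phases above can be traced through a minimal executable sketch of a single user-mechanism interaction. The message vocabulary, the allocation rule, and the user’s acceptance behaviour below are all stand-ins invented for illustration; a real PGP mechanism (M, f) would compute offers from the reports of many users at once.

```python
# Executable sketch of the five PGP phases for a single user.
# Messages, offers, and the acceptance rule are illustrative stand-ins.

class Mechanism:
    def __init__(self, messages, allocate):
        self.messages = messages  # M: the limited report vocabulary
        self.allocate = allocate  # f: report -> offered coalitions

    def run(self, user):
        # Phase 1: the user sends a limited report on her preferences.
        report = user.report(self.messages)
        # Phase 2: the mechanism computes offers constrained by the report.
        offers = self.allocate(report)
        # Phase 3: the offers are posted to the user.
        # Phase 4: the user accepts one offer or rejects them all.
        accepted = user.choose(offers)
        # Phase 5: after execution the user signals satisfaction, which
        # reveals whether the proposed coalition was stable.
        satisfied = user.feedback(accepted)
        return accepted, satisfied

class User:
    def __init__(self, liked):
        self.liked = liked  # private preferences, unknown to the mechanism
    def report(self, messages):
        return "prefers_short" if "prefers_short" in messages else messages[0]
    def choose(self, offers):
        for o in offers:
            if o in self.liked:
                return o   # accept: the user is now committed
        return None        # reject all offers
    def feedback(self, accepted):
        return accepted in self.liked

mech = Mechanism(
    messages=["prefers_short", "prefers_cheap"],
    allocate=lambda r: ["ride_A", "ride_B"] if r == "prefers_short" else ["ride_C"],
)
accepted, satisfied = mech.run(User(liked={"ride_B"}))
print(accepted, satisfied)  # ride_B True
```

Because the user can reject every offer in Phase 4, misreporting in Phase 1 can only shrink the set of acceptable offers she is shown, which is the intuition behind the truthfulness property discussed next.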

Given this protocol, we can define a mechanism based on it as a pair (M, f), where M is the set of messages that agents can report in Phase 1 and f is the allocation function, which maps user reports to coalition structures. A convenient feature of the PGP is that it is straightforward to guarantee truthful reporting, due in part to the fact that agents can accept and reject coalition offers. Therefore, the challenge is to ensure that this truthful report is sufficient to guarantee stability of each coalition of users. Indeed, if the mechanism is not properly designed to account for limited reporting, then users may be allocated to services they do not want. As such, the key problem we address can be stated as follows: How should the messages users can report, the allocation of the users, and the side information of the mechanism be designed so that each user is allocated a service that she prefers over the possible alternatives? This problem does not have a single solution: because different families of user preferences are structured differently, limited reports that suffice to guarantee stability for one family are not sufficient for another. Fortunately, the PGP represents a family of concrete mechanisms. Any specific instantiation of it can vary, for example, in terms of whether all agents perform Phase 1 simultaneously before the protocol enters Phase 2, whether their input is processed sequentially, or whether, in Phase 2, we also allow the mechanism to communicate with agents that have already been allocated to coalitions before computing an allocation. We can use this flexibility to design different mechanisms, each of which ensures stability under specific circumstances, and select a different mechanism based on the preferences we have discovered to govern users' choices using procedures like that proposed in Section 3.2.
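As a toy instantiation of such a pair (M, f), suppose the message set M contains a few coarse labels and the allocation function simply groups users who reported the same message into one coalition. This is our own illustration of the formalism, not one of the deliverable's constructions.

```python
# Illustrative message set M: coarse time-of-day labels (our assumption).
M = {"morning", "afternoon", "evening"}

def f(reports):
    """Allocation function: map user reports (user -> message in M)
    to a coalition structure, grouping users by identical reports."""
    coalitions = {}
    for user, message in reports.items():
        assert message in M, "agents may only send one of the limited reports"
        coalitions.setdefault(message, set()).add(user)
    return list(coalitions.values())
```

Note how limited reporting appears directly: agents cannot express arbitrary preferences, only one of the |M| = 3 labels, and f must still produce coalitions the agents are willing to accept in Phase 4.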
In particular, in order to investigate the effect of different user preferences on the design of mechanisms that yield stable coalitions, we focus on two common classes of user preferences, topological and hedonic preferences. Topological preferences are characterised by service features that can be decomposed into metrisable topological spaces. In ridesharing, for example, topological user preferences would depend on, inter alia, the day and time they require the service, and the location of pick-up and drop-off points. This suggests that we can treat the set of desirable journeys as being defined over R^2 for the locations, or over the real projective space RP^1 for times, leading to metric spaces and hence topological preferences. For this case, we show that the stable coalition formation problem reduces to the design of the messages each user can report. We show that this design problem is equivalent to a (e.g., sphere) packing problem, and give explicit solutions in the case of R^n and RP^(n-1). Hedonic preferences, on the other hand, capture features of services that do not lie in a metrisable topological space. In particular, hedonic preferences depend on the characteristics of other users that share the service. A motivating example is the allocation of employees to shared offices, in which each employee's preferences are completely dictated by the other employees they share the office with, e.g. depending on the volume of music they listen to, or the amount of time they are in the office. In this case, the crucial design problem is how to allocate each user to a coalition such that the user is willing to share the service with the existing users in the coalition. For general hedonic preferences, we show that this can be achieved by applying an insertion-based approach where each agent has to be added to a coalition sequentially and this
can happen only if all agents that are already in the chosen coalition approve the new person joining them. We also show that for a restricted class of hedonic preferences, i.e. single-subset-peaked preferences, more general allocation algorithms are possible that do not necessitate insertion-based mechanisms. These analytical results provide a foundation upon which adaptation at the user interaction workflow level can be accomplished in a meaningful way: if we discovered, for example, that different sets of users have a mostly topological preference profile, we could apply a different workflow to these users than to other users who have mostly hedonic preferences. In reality, of course, things are not that simple – users may have partially hedonic and partially topological preferences, and we may not want to split them into disjoint subsets, since doing so would lose allocations in which coalitions span the two subsets. Nonetheless, our work on this specific case exemplifies a more general methodology for designing and utilising the options to be made available to HDA-CASs in terms of user interaction workflows: different variations of these workflows have to be evaluated (empirically, or, as shown in our work, analytically) based on high-level properties the system is supposed to satisfy, and the assumptions and conditions have to be established under which each of these variations satisfies them. The system must then be provided with a mechanism to determine which of these conditions hold in the HDA-CAS at runtime, and deploy the respective workflow as appropriate.
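A minimal sketch of the two regimes may help fix ideas; the names and parameters below are ours and do not reproduce the paper's explicit constructions. For topological preferences in R^2, reports can be cells of a square grid whose diagonal equals the users' tolerance eps, so that any two users reporting the same cell are guaranteed to be within eps of each other (a simple packing-style message design). For hedonic preferences, the insertion-based rule admits a newcomer to a coalition only if every current member approves.

```python
import math

def cell_message(point, eps):
    """Topological case: quantise a pick-up point in R^2 to a grid cell.
    The cell diagonal is eps, so two users reporting the same cell are
    at distance at most eps from each other."""
    side = eps / math.sqrt(2)
    return (math.floor(point[0] / side), math.floor(point[1] / side))

def insert_agent(newcomer, coalitions, approves):
    """Hedonic case: insertion-based formation. The newcomer joins the
    first coalition whose members all approve her; otherwise a new
    singleton coalition is opened. `approves` is an assumed oracle."""
    for coalition in coalitions:
        if all(approves(member, newcomer, coalition) for member in coalition):
            coalition.add(newcomer)
            return coalition
    coalitions.append({newcomer})
    return coalitions[-1]
```

The grid construction shows why message design alone suffices in the topological case, while the hedonic case needs the extra approval round, since compatibility there depends on who else is in the coalition rather than on a metric.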

6 Conclusions

In this deliverable, we have reported on the final results of the research conducted by WP6 on tasks T6.3 and T6.4, which completes the development of a comprehensive adaptive social orchestration architecture for SmartSociety. As the individual parts of this document have shown, the overall problem of developing adaptive mechanisms to manage context and collectives in HDA-CASs in ways that support compositionality among human and machine peers in large-scale, data-driven systems is a multi-faceted challenge. It encompasses the definition of an overall architectural framework to manage the social orchestration lifecycle in a scalable and efficient way, and the development of adaptation mechanisms at all levels of this lifecycle: within its individual processing steps, across an entire lifecycle, and over different interaction models that enact such lifecycles with users.

The overarching contribution of the work performed in WP6 has been to develop a set of algorithmic and architectural methods to address this challenge. To achieve this, we have leveraged a breadth of techniques from multiagent systems, mechanism design, decision theory, Bayesian and statistical machine learning, combinatorial optimisation, distributed and service-oriented computing, and Web architectures. To our knowledge, our work has been the first attempt to integrate all these techniques in order to address the problem of producing adaptive methods to orchestrate HDA-CASs in a comprehensive, principled, and rigorous way. Due to the multi-faceted nature of the problem, the conceptual (and computational) complexity of the proposed methods, when viewed as an overall framework, is significant,
and it is not possible to develop a one-size-fits-all solution for arbitrary types of systems. However, we believe that significant contributions have been made that extend existing methods to address hybridity and diversity with a clearly human-centric outlook, going significantly beyond what previous work has to offer in accounting for these phenomena. Examples of technical advances that have been accomplished to address these phenomena include our diversity-aware recommendation algorithm, preference elicitation methods that account for coarse preferences, multi-objective distributed network optimisation techniques, and the study of coalition formation protocols under limited reporting. Even though none of these methods alone can “solve” the problems hybridity and diversity create once and for all, the systematic identification of different adaptation problems at different layers of the social orchestration process, and the integration of individual adaptation methods in a coherent overall architecture, provide a strong methodological foundation on which future work on the topic can build.

That said, there are aspects of the overall challenge that we have not managed to tackle within the scope of this work package alone. We have not been able to integrate all components from other work packages at the implementation level to the extent that would have been desirable, but this work will continue in the context of the overall SmartSociety architecture (WP8). Also, we have not been in a position to validate our components against large-scale real-world data – this validation will have to be completed with data provided through the scenarios and demonstrators of WP9. However, these shortcomings are only due to dependencies within the broader SmartSociety project, and all techniques that were envisioned to be delivered by WP6 for integration and validation purposes are in place as expected.

A Adaptive social orchestration paper

SmartOrch: Flexible, Adaptive Orchestration for Human-Centric Task Workflows

ABSTRACT

Web-based collaborative systems where most computation is performed by human users have distinctly different requirements from traditional workflow orchestration systems, as humans have to be mobilized to perform computations and the system has to adapt to their behavior at runtime. In this paper, we present a social orchestration system called SmartOrch, which has been designed specifically for social computation platforms in which human participation is at the core of the overall distributed computation. SmartOrch provides a flexible and customizable workflow composition framework that has multi-level optimization capabilities. These features allow us to manage the uncertainty that social orchestration systems need to deal with in a principled way. We demonstrate the benefits of SmartOrch with simulation experiments in a ridesharing domain. Our experiments show that SmartOrch is able to respond flexibly to variation in human behavior, and to adapt to observed behavior at different levels. This is accomplished by learning how to propose and route human-based tasks, how to allocate computational resources when managing these tasks, and how to adapt the overall interaction model of the platform based on past performance. By contributing solid engineering principles for these kinds of systems, SmartOrch addresses shortcomings of previous work that mostly focused on ad hoc, application-specific, and non-adaptive solutions.

Keywords: Distributed systems, workflow orchestration, workflow composition, workflow optimization, social computation

1. INTRODUCTION

In recent years, a new type of massive-scale distributed system has emerged in which most computation is performed by humans. Examples include social computing platforms such as Facebook and Twitter, collaborative content creation systems such as Wikipedia, crowdsourcing systems like Yahoo! Answers, human-based computation platforms such
as Amazon Mechanical Turk, sharing economy applications such as Uber and AirBnB, and other similar systems that involve collectives of humans performing cognitive and physical activities. While specific programming frameworks have been proposed for such applications or to facilitate the use of existing platforms to enable a specific “social computation” [1, 3, 10, 11], the design of generic platforms to orchestrate human-centric collaborative computation has not received much attention in the literature. Viewing social orchestration as the process of organizing the interactions of humans with a computational platform and with each other, while at the same time managing the computational resources used by the system to support this organization of activity, we obtain a set of requirements that are different from those of workflow management and enactment systems that are mostly “machine-centric” [2, 4, 20]. First, provisioning the system with human resources requires mobilizing humans to contribute to the platform, and hence its performance will invariably depend on (collective) human behavior. Therefore, any optimization of the system has to account for the needs and preferences of its user population, rather than just focus on computational performance. Second, the system must be able to deal with large numbers of different users engaging with different stages of complex distributed workflows (e.g. searching for relevant information or activities, contributing to these, negotiating collaborations with each other, providing feedback) at different times, while ensuring reasonable responsiveness and robustness to variations in user behavior. Third, it must allow for natural interaction of users with the platform using modalities they are familiar with from common Web platforms, and must be easily combinable with third-party Web applications to enable developers to build mashups or run different variations/types of apps on the same platform.
Therefore, it cannot rely on bespoke languages and architectures and must be deployable on the standard architecture of the Web [6]. In this paper, we present a novel social orchestration architecture called SmartOrch (Smart Architecture for Human-Centric Task Orchestration) that provides the above features. Its overall purpose is to manage a workflow, involving the performance of human and machine activities, to support the completion of domain tasks (e.g. getting a question answered, organizing a meeting, renting a property), which may include steps that take place outside the platform in the real world. The contributions of these peers are enacted through API-level interactions with data resources exposed by the architecture, whose state is handled by
managers, each of which is a machine peer responsible for different parts of the workflow. The orchestration manager is a specific peer that allocates backend computational resources to the operations that need to be performed for the overall orchestration of such a workflow. It iterates over an asynchronous event processing loop, serving platform jobs maintained on dynamic process queues, one for every type of interaction with the system. In terms of communication, all interaction is via linked data resources and relies exclusively on basic HTTP operations. This allows the system to make use of the standard Web architecture, exploit its robustness and scalability (using caches and proxies, not relying on always-on communication channels), and ensure interoperability with standard Web technologies and third-party services. This, in turn, facilitates integration with, e.g., social media and client apps.

Beyond these core benefits, however, the key contribution of SmartOrch is that it enables adaptation to human behavior and computational performance at all levels of the architecture through a uniform treatment of data collected by the system. At the level of individual processes, optimization can be performed to take account of observed user behavior in order to achieve the global design objective of the system, e.g. maximize uptake or resource sharing. At the orchestration level, the ways in which computational resources are allocated to different processes in terms of scheduling, parallelization, and delegation can be optimized based on observed human and machine performance. At the interaction level, different patterns that determine how the platform interacts with its users can be selected dynamically based on requirements and performance measures and be deployed in parallel on the platform.
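The event processing loop with one dynamic queue per job type can be sketched as follows. This is our own simplification of the mechanism described above; the class name and the priority scheme are illustrative assumptions, not SmartOrch's actual API.

```python
from collections import deque

class Orchestrator:
    """Minimal sketch: one queue per platform job type, served
    asynchronously in priority order (lower number = served first)."""

    def __init__(self, priorities):
        self.priorities = priorities                      # job type -> priority
        self.queues = {jt: deque() for jt in priorities}  # dynamic process queues

    def submit(self, job_type, payload):
        # A client-side interaction or a manager side-effect enqueues a job.
        self.queues[job_type].append(payload)

    def step(self):
        # Serve the highest-priority non-empty queue, so that e.g. quick
        # document inspections pre-empt non-time-critical provenance logging.
        for jt in sorted(self.queues, key=self.priorities.get):
            if self.queues[jt]:
                return jt, self.queues[jt].popleft()
        return None
```

Usage: submitting a provenance job and then an inspection job would still serve the inspection first under priorities like `{"inspect": 0, "provenance": 9}`, mirroring the prioritisation rationale given later for the Scheduler.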
At all three levels, information about past operations (internal and external interactions with the system, runtime profiling data, data flow) is tracked through provenance traces and computation profiling and can be inspected by the maintainer of the platform, its users, and automated internal optimizer modules. This, in turn, allows an existing system configuration to be adapted flexibly, both by humans and by algorithmic procedures that perform automated adaptations. We believe that this multi-level adaptability is vital in social computation systems, where overall performance largely depends on potentially volatile collective human behavior. The remainder of this paper is structured as follows: In Section 2 we discuss the background and principles that underpin the design of SmartOrch. In Section 3 we describe the design and implementation of the SmartOrch architecture. Section 4 provides examples of adaptation at the process, orchestration, and user interaction level, and simulation experiments that demonstrate the impact of such optimizations in an example scenario. In Section 5 we review related work, and Section 6 concludes.

2. BACKGROUND

Social computation [15], understood as the process of hybrid human-machine computation that results in the collaborative performance of a domain task, involves complex operations performed by human and machine peers, interacting over a digital (normally Web-based) platform. While there is considerable variation in how such applications are organized, the overall lifecycle of performing a collaborative activity generally comprises (some or all of) the following stages: peer discovery, which identifies human peers that might be interested in contributing to a social computation, task composition, which generates possible (computational or physical) tasks these peers might perform individually or jointly, task allocation, which produces a concrete assignment of peers to activities (and may involve processes such as contracting or negotiation), task execution, which tracks the performance of these tasks, and feedback, which allows peers to report on their experiences, e.g. rate each other. Naturally, different aspects of these might be organized very differently across different applications. For example, in a meeting scheduling application peer discovery is organized by a user contacting specific peers, while on a citizen science web site the platform might perform a search to identify relevant contributors. Task composition might return matching residential properties using a filter-based search on a property sharing portal, or simply suggest relevant questions to target users with relevant interests on a question answering site. Task execution may be monitored by tracking location in a taxi sharing app, or simply by humans confirming successful completion or failure of a task. The social orchestration process managed by a platform deals with organizing all this activity using computational means and through appropriate interaction with the users. It should be able to cater for differences in the specific requirements of each scenario while providing sufficient structure to support all stages of the above lifecycle. To conceptualize this, we distinguish between three vertical layers of computation involved in the horizontal lifecycle stages: (1) The process layer, at which primitive computational processing steps are located (e.g. authentication, matchmaking).
These cannot be broken down into separately orchestrated sub-steps, and need to be allocated computational resources per platform job that corresponds to a process instance to be executed. They can be performed by internal components of the platform, or delegated to external third-party services (e.g. a reputation service). (2) The orchestration workflow layer, which embeds a specific control flow of jobs enacted by a specific “manager” component, typically one for each of the stages of the lifecycle described above. For example, a task composition manager might obtain a request from a peer to generate suitable tasks to achieve a certain objective, and handle contacting the right peers, performing matchmaking operations, and consulting reputation information to rank the results. (3) The user interaction workflow layer, finally, enacts the interaction model that governs the way users will work with the platform, providing the input/output interfaces to humans participating in the computation. For example, the negotiation process in a group activity might involve all participants in a task agreeing to it explicitly, and the process of tracking this agreement would be handled by a negotiation manager accepting agree/reject messages and updating the state of the task. It is at this layer that managers expose APIs toward client applications, whereas the lower-level layers only involve APIs used by platform components. While this kind of horizontal and vertical composition of distributed processes has a lot in common with general distributed workflow orchestration systems, the paradigm of social computation brings new challenges with it that call for (1) more flexibility in terms of workflow composition and (2) adaptability required to be able to respond to observed behavior. As pointed out in [3, ?, 18], once human activity needs to be orchestrated, it is difficult to predict patterns

[Figure 1: Team task protocol, adapted from [16]. Swimlanes represent the peer role p and the orchestrator role o, boxes their internal processing steps (peer discovery, task composition, task allocation, task execution, feedback), and diamonds choice points; the messages exchanged are ADVERTISE(C), REQUEST(D), SOLUTIONS(T), NO SOLUTION, AGREE(t), REJECT(t), FAILED, START(t), UPDATE(t), and FEEDBACK(t, F).]

of use, quantity, timescales and quality of contributions at design time. This implies that we need to be able to flexibly extend, modify, and respond to emergent human behavior. Consider, for example, the simple case of Web search, which provides a single-step, stateless user interaction workflow repeated millions of times every day. If users experience difficulty in finding what they are looking for, they will resort to using other applications, e.g. a crowdsourcing platform that utilizes human input where automated search does not produce good results. The disconnect between the two systems involved implies that the search engine provider does not receive feedback from users to modify the calculation of search results, or utilize crowdsourcing to complement them. Conversely, the crowdsourcing platform is unaware of the original queries posed to the search engine and cannot mobilize its users at the time of the initial query. This simple example illustrates a much broader problem: Without the capability to compose more complex human-centric workflows and to adapt them to user behavior at runtime based on data observed across different parts of the overall task environment, it is hard to manage such systems effectively. A formal framework that allows general user interaction models, described as multiagent protocols with different roles and activities, to be mapped onto orchestration architectures in ways that permit composition of complex application workflows has been previously described in [16]. Figure 1 shows an example of such a protocol, where a peer role p interacts with an orchestrator role o, traversing the different stages of the social computation lifecycle. In this team task protocol, peers can advertise capabilities C which the Task Composition Manager can use to generate a set of possible solutions T , i.e. tasks satisfying the description D contained in a task request. 
The participants of any proposed task t ∈ T can agree or refuse to participate in it. Once all of them agree (a process tracked by the Negotiation Manager) they can start providing updates on the execution status of the task to the Task Execution Manager and feedback to the Reputation Manager. Note that any of the processing steps and interactions can be managed by human and/or machine
peers. For example, updates on execution status could come from a sensor, or, conversely, suggestions of possible tasks could come from human users in response to a task request. The workflow composition capabilities of SmartOrch, which is based on this model, arise from using a single orchestrator not just to manage interactions with user clients contributing to different segments of the protocol at the same time, but also to run different protocols in parallel, reusing manager components where possible, and managing system resources in an integrated fashion. This is exemplified by three concrete systems we have built using SmartOrch: SmartShare implements a ridesharing scenario, where drivers and passengers can post ride requests that involve various constraints regarding origin and destination, timing of the ride, and other preferences. Task composition generates all solutions (combinations of drivers and passengers) that are possible given all existing requests. Negotiation of a ride is successful once all participants have agreed to it, and all participants of a ride can post feedback about each other from that point onwards. SmartShare implements the protocol shown in Figure 1, omitting the task execution stage. AskSmartSociety implements a crowdsourcing scenario that uses a much simpler task composition pattern. Here, a request is a question posed by a user that is immediately converted to a task, and all users that have advertised matching interests or expertise automatically join the task, without the task requiring further negotiation to be agreed. Respondents can then submit answers to the question, and rate each other's answers. This scenario applies a much more open notion of task, which does not rely on specific participants' contributions, and does not involve solving a combinatorial constraint matching/resource allocation problem.
AskAndShare combines the workflow patterns of SmartShare and AskSmartSociety, demonstrating how SmartOrch allows for easy reuse and composition of existing patterns. It uses the AskSmartSociety pattern to enable users to get recommendations for tourist activities in a city, and employs a variation of SmartShare that enables users to propose group activities (such as museum visits, meals, excursions) either based on responses to the initial question (for example, if a respondent recommends a museum), or by creating activities from scratch. Constraint matching is limited to checking times and dates, and agreement to a task is automatic once a minimum number of participants have joined. These examples illustrate the flexibility SmartOrch provides even when using only slight variations of a single interaction model. In Section 4, we will revisit the example of SmartShare to demonstrate the second main contribution of the architecture: the ease with which it permits optimization and adaptation functionality to be built into SmartOrch-based systems at three different levels – the process layer (using the example of task composition algorithms), the orchestration layer (using the example of decomposition-based parallelization of task composition), and the user interaction workflow layer (using the example of varying protocol structure for different sub-populations).
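As a rough sketch, the team task protocol these systems enact can be encoded as a transition table over the lifecycle stages. The stage and message names come from Figure 1, but the transition table below is our own reading of the diagram, not code from SmartOrch.

```python
# Each entry maps (current stage, incoming message) to the next stage;
# unknown messages leave the stage unchanged. Illustrative sketch only.
TRANSITIONS = {
    ("peer discovery", "ADVERTISE"): "peer discovery",
    ("peer discovery", "REQUEST"): "task composition",
    ("task composition", "NO SOLUTION"): "peer discovery",
    ("task composition", "SOLUTIONS"): "task allocation",
    ("task allocation", "REJECT"): "task allocation",
    ("task allocation", "AGREE"): "task allocation",
    ("task allocation", "START"): "task execution",
    ("task execution", "UPDATE"): "task execution",
    ("task execution", "FEEDBACK"): "feedback",
}

def advance(stage, message):
    """Advance the protocol state on receipt of a message."""
    return TRANSITIONS.get((stage, message), stage)
```

Variations like SmartShare (which omits the task execution stage) or AskSmartSociety (which skips negotiation) would correspond to removing or short-circuiting entries in this table, which is one way to picture the "slight variations of a single interaction model" mentioned above.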

3.

THE SMARTORCH ARCHITECTURE

3.1 Basics SmartOrch has been developed as a purely event-driven, asynchronous framework that follows a fully RESTful design. At the level of backend processing, the orchestrator

http://www.smart-society-project.eu


3.2

Architecture structure

Figure 2 shows a high-level overview of SmartOrch and the main interactions between its components. Core components are shown inside the dashed box, and supporting services are shown at the bottom of the figure. Interactions between core components and client applications as well as supporting services are via client and platform RESTful APIs, where only the client interfaces are exposed for external use, whereas the platform APIs require privileges only provided to components used by the SmartOrch platform internally. Creation of a specific orchestration system starts with the developer defining individual platform jobs from scratch or fetching existing implementations already published on a Job Repository like GitHub1 and binding them so that different user interaction workflows can be created. Such jobs will typically be wrapped in orchestration workflows capturing a response to an API call. Consider the example of handling an AGREE(t) call during negotiation, which indicates that a peer p wants to join task t: An example orchestration workflow for this might involve authenticating p on the 1

https://github.com/

c SmartSociety Consortium 2013-2017

Job Repo

APP

Execution Engine

SmartShare APP AskSmartSociety APP

Workflow Executor

User Interaction Workflow Manager

Job Scheduler

Fetch Process

manages individual platform jobs by maintaining dynamic queues, one for every type of platform job. Jobs can be created due to a client-side interaction, they can be triggered as a side-effect of managers’ activities (e.g. to synchronize different resources), or can be delegated to third-party services (e.g. for authentication). Events corresponding to steps in the user interaction workflows as perceived by the clients of the system are communicated to SmartOrch through RESTful APIs, as are internal events resulting from the operation of managers and other third-party services described below. SmartOrch responds to any such event by generating a sequence of jobs that need to be executed. Example jobs that occur as a result of almost all client-centric events are authentication, access control, and document validation. These sequences of jobs give rise to the orchestration workflows executed whenever calls to exposed APIs are received. Developers can either use predefined jobs from a job repository to construct such workflows, or customize the response of SmartOrch further by writing custom jobs for a specific application. We expect that, for a broad range of application, many of the standard inspection/update client actions needed to manage global state can be implemented using job types already provided, and that custom ones will mostly be have to be defined for algorithmic optimization processes. Distributed state is maintained through a set of exposed resources (documents), such as task requests that correspond to requests from users for a new task, tasks corresponding to composed tasks, and task records that are used to record execution updates. These documents are fully versioned, access-controlled, and linked to each other to reflect the lifecycle of their creation (essentially, each of them corresponds to a “message” arc in Figure 1 and contains the data corresponding to the content of the respective message). 
In our implementation, peer profiles (used for peer discovery), reputation information (used in the feedback stage), and provenance tracking (used for optimization, as explained in Section 4) are handled by services running on remote servers, while task composition, negotiation, and task execution management are handled by manager components running on the orchestration server. However, SmartOrch is indifferent to the physical distribution of these services.

c SmartSociety Consortium 2013-2017

Client REST APIs

Deliverable D6.4

Optimizer

AskAndShare APP

Orchestration Patterns

... Platform REST APIs

...

Peer Manager

Task Composition Manager

Negotiation Manager

...

Provenance Manager

Figure 2: The Architecture of SmartOrch

Consider, for example, the orchestration workflow triggered when a peer p agrees to a proposed task t: it involves authenticating p with the platform and checking whether p has write access to the document representing t, validating the format and contents of the data p is sending, recording p's agreement by updating t, and recording provenance information for this operation, which logs which peer (including machine peers) performed what operation on which data structure. This particular workflow is handled by the Negotiation Manager, and contains calls to the Peer Manager (authentication, access control) and the Provenance Manager. If p is the final peer to agree to the task, the agreement will also trigger a call to the Task Execution Manager, as a new task record for t must be created. In other words, individual orchestration workflows are connected with each other to enact the user interaction workflow defined by the client APIs.

The implementations of these orchestration workflows are stored in the User Interaction Workflow Manager, which is also used to select different workflows in different situations in the process of optimization (cf. Section 4), and passes them on to the Execution Engine, which, in turn, contains the Workflow Executor and the Scheduler. The Workflow Executor receives orchestration workflows, dispatches the jobs they contain to the correct queues, and handles dependencies between them (for example, locking a specific data resource while it is being updated). The Scheduler determines which pending job should be executed next by applying a prioritisation schedule over the different job queues. For example, simple document inspection jobs (e.g. when a user wants to display the list of current tasks on their client app) can be given higher priority than provenance recording, which is not time-critical, in order to ensure responsive behavior of user interfaces.
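The queue-per-job-type scheduling just described can be sketched as follows (queue names and the priority order are illustrative, not SmartOrch's actual configuration):

```javascript
// Each job type has its own queue; queues are served in a fixed priority
// order, so e.g. document inspection pre-empts provenance recording.
const queues = {
  documentInspection: [],   // time-critical: keeps client UIs responsive
  negotiation: [],
  provenanceRecording: [],  // not time-critical
};
const priority = ["documentInspection", "negotiation", "provenanceRecording"];

function enqueue(kind, job) {
  queues[kind].push(job);
}

// Return the next pending job from the highest-priority non-empty queue.
function nextJob() {
  for (const kind of priority) {
    if (queues[kind].length > 0) return queues[kind].shift();
  }
  return null; // no pending jobs
}
```

With this rule, a document inspection job enqueued after a provenance recording job is still served first.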
The component that adds genuinely novel functionality to SmartOrch is the Optimizer, which extracts patterns from the past operation of the platform that can be used to optimize the system at the process, orchestration, and user interaction workflow layers. To this end, it continually fetches provenance data and updates so-called orchestration patterns that describe what has been learned so far. The interactions between the Optimizer, the orchestration patterns, and other components of the architecture are explained in detail in Section 4.1. The output of the Optimizer consists of decisions used as input for the User Interaction Workflow Manager and the Execution Engine.

The boxes at the bottom of Figure 2 represent manager components that support and use the orchestration backend, but whose internal structure is not controlled by this backend. Some of these, e.g. the Task Composition Manager, Negotiation Manager, and Task Execution Manager, are implemented using the SmartOrch Execution Engine, while others are third-party services interacting with SmartOrch only at the API level. In the implementation of our example scenarios, these are the Peer Manager, Provenance Manager, and Reputation Manager. The Peer Manager handles user registration and thus maintains profiles for human users, including credentials and scenario-specific profiles used for peer discovery. Any machine peers using platform APIs are also registered with the Peer Manager. The Provenance Manager we use is implemented using a remote PROV [13] server, and plays a key role in recording all operations at the level of accessing and manipulating data resources. It serves as the fundamental data source for the Optimizer's adaptation and optimization functions, providing a uniform way of capturing observed system operation. Finally, the Reputation Manager is responsible for computing numerical reputation scores for each human user based on feedback supplied by participants in tasks that user was involved in. This can be used, for example, to rank the tasks proposed by the Task Composition Manager, or, as we will show below, to make decisions regarding which user interaction workflow to use for different users in the system. All these services are peers on the platform, authenticated internally, subject to explicit access control policies, and treated as agents that perform operations on data resources when recording provenance. SmartOrch allows arbitrary numbers and types of such services to be connected to the orchestration backend.
It is important to emphasize that the configuration described here merely reflects what was implemented in the three systems described earlier.

3.3 RESTful APIs

Every interaction with SmartOrch is bundled with a JSON document and a suitable verb (HEAD, GET, POST, PUT, and DELETE) for the exposed operation. The JSON document carries the actual message sent by the peer, and verbs permit overloading of URL endpoints by indicating different operations on exposed resources: HEAD and GET are associated with read operations, POST or DELETE can be used, for example, to create or remove a task using the same base URL, and PUT is used for negotiation as well as for updates performed on the task record during task execution. Listing 1 presents a simplified example in which a human participates in the negotiation concerning a proposed task.

1  api.put("/apps/:app/tasks/:taskIndex", function (request, response) {
2    var newJob = {
3      "application": app,
4      "kind": "authentication",
5      "id": getProvToken("authentication"),
6      "purpose": ["negotiation"],
7      "inputAdapter": request,
8      "outputAdapter": response,
9      "credentials": getCredntls(request),
10     "ip": getIP(request),
11     "index": taskIndex,
12     "version": getTaskVersion(request),
13     "createdOn": new Date(),
14     "enqueued": new Date(),
15     "triggeredBy": null
16   };
17   loadJob(newJob);
18 });

Listing 1: Translating an API call to a platform job

When the HTTP request is received at the endpoint /apps/:app/tasks/:taskIndex with the verb PUT, the property purpose indicates that negotiation needs to be performed on this task. The listing shows the very first job that is created, authentication. Since authentication is a common job, additional information is needed so that the Scheduler can execute the desired workflow; this is captured by the property purpose (we omit further details that might be needed by the Scheduler). Each job obtains a unique ID, as well as additional properties such as timestamps, which are used for job profiling. Finally, the triggeredBy pointer yields a linked list of the platform jobs created during execution, which in turn allows a full provenance trace for analysis purposes. (In Listing 1, triggeredBy is null, as this is the very first job, triggered by an exogenous event.) To account for additional constraints imposed by the system, the workflow that is triggered consists of a more complex sequence of jobs, discussed in the following section.
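The provenance chain induced by triggeredBy can be illustrated with a few lines of JavaScript (a sketch over plain objects, not SmartOrch's actual job representation): walking the pointers from the most recent job back to the exogenous one reconstructs the full chain of platform jobs behind a single API call.

```javascript
// Follow the triggeredBy pointers of a platform job back to the first
// job (whose triggeredBy is null), returning job kinds oldest-first.
function traceJobs(job) {
  const chain = [];
  for (let j = job; j !== null; j = j.triggeredBy) {
    chain.unshift(j.kind); // prepend, so the exogenous job comes first
  }
  return chain;
}

// Example chain: authentication -> access control -> record provenance.
const auth = { kind: "authentication", triggeredBy: null };
const ac = { kind: "accessControl", triggeredBy: auth };
const rp = { kind: "recordProvenance", triggeredBy: ac };
```

Here `traceJobs(rp)` recovers the ordered list of jobs that led to the provenance-recording step.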

3.4 Event-driven processing
Different orchestration queues are associated with the different platform jobs that are the building blocks of the orchestration workflows executed in response to HTTP requests received by the system. In our design, these workflows are partitioned by API call. In the above example, upon receipt of the HTTP call, the User Interaction Workflow Manager passes an orchestration workflow composed of the sequence authentication (Auth); access control (AC); document validation (DV); apply negotiation (AN); record provenance (RP) to the Scheduler, as shown in Figure 3. Line 17 of Listing 1 results in loading a new job onto one of the orchestration queues and a call to the Scheduler.

Every time the Scheduler is invoked, it checks whether there are any pending jobs in the queues. If so, it passes the job with the highest priority to the Workflow Executor. When the Workflow Executor is invoked by the Scheduler, it either executes the job locally or uses a delegated service (a different manager) to compute the job. Since authentication is the first job of the orchestration workflow in our example, the job will be delegated to the Peer Manager in order to verify the identity of the client that triggered the execution of Listing 1. Once this is done, the Workflow Executor creates a new job (access control) that is pushed to the appropriate orchestration queue, as dictated by the next step in the orchestration workflow, and a new call to the Scheduler is made. Allowing the Workflow Executor to also create platform jobs makes workflow partitioning easier, especially when SmartOrch has to invoke remote services. It is then again the turn of the Scheduler to select the job with the highest priority for execution. Processing continues in this manner, as at the end of every job (apart from the final step in a given orchestration workflow) the Scheduler is called again to serve the highest-priority job.
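The stepwise advancement of a workflow by the Executor/Scheduler pair can be caricatured as follows (illustrative code, ignoring queues and priorities for brevity): after each job completes, the job for the next step of the workflow is created, until the workflow is exhausted.

```javascript
// The five-step orchestration workflow from the example above.
const workflow = [
  "authentication", "accessControl", "documentValidation",
  "applyNegotiation", "recordProvenance",
];

// Execute the job at position `index`, then advance to the next step.
// In SmartOrch this would enqueue a new job and re-invoke the Scheduler;
// here we simply recurse to keep the sketch self-contained.
function executeStep(wf, index, executed) {
  executed.push(wf[index]); // run (or delegate) the current job
  const next = index + 1;
  if (next < wf.length) {
    executeStep(wf, next, executed);
  }
  return executed;
}
```

Calling `executeStep(workflow, 0, [])` processes the five jobs in workflow order, mirroring the sequence in which SmartOrch serves them.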
Figure 3: From workflows to job queues

Eventually, the platform jobs authentication, access control, document validation, apply negotiation, and record provenance are processed in this order, and the orchestration workflow triggered by the original HTTP request is executed successfully. Note that, while the control flow of an individual workflow (which may contain branches and loops) is respected by the Execution Engine, many such workflows are managed in parallel, e.g. when different users negotiate a task, different groups of users negotiate different tasks, or users engage with different stages of the lifecycle of different tasks. Depending on the logical constraints governing global state (which would be captured in the pre- and post-conditions of messages in protocols like that of Figure 1, as described in [16]), the developer must ensure that all data resources are kept in a consistent, synchronized state by using appropriate semaphores. The RESTful design of SmartOrch facilitates this by avoiding direct peer-to-peer messaging and managing the global state of shared resources (such as task requests, tasks, and task records) solely through server-side versioned documents.

Currently, the Execution Engine runs as a single thread on top of a Node.js asynchronous event loop. Hence, both the Scheduler and the Workflow Executor are invoked through function calls passed as events to this event loop. When asynchronous computations are required (e.g. to delegate a job to a manager), function calls that are pending on the loop are executed in the meantime. As a consequence, as long as there are jobs that are not blocked due to synchronization constraints dictated by (potentially different) orchestration workflows, the computational power of the Execution Engine is fully utilized.
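Since the engine runs on a Node.js event loop, delegating a job to a remote manager is naturally expressed as an asynchronous call that does not block other pending jobs. A sketch with hypothetical names (SmartOrch's actual delegation interface may differ):

```javascript
// Simulate delegating a job to a remote manager: the Promise resolves on
// a later event-loop turn, so other pending jobs keep running meanwhile.
function delegate(manager, job) {
  return new Promise((resolve) => {
    setImmediate(() => resolve({ job: job.kind, handledBy: manager }));
  });
}

// Execute a job locally, or delegate it when a remote service owns it.
async function executeJob(job) {
  if (job.kind === "authentication") {
    return delegate("PeerManager", job); // remote service
  }
  return { job: job.kind, handledBy: "local" }; // executed locally
}
```

While the delegated authentication call is awaited, the event loop is free to serve other queued jobs, which is exactly the behavior described above.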
However, improving the design and implementation of the interaction between the Scheduler and the Workflow Executor in the Execution Engine, and making further use of the Optimizer to address scheduling-related issues when efficient use of computational resources through parallelization is needed, is an interesting direction for future work, hinted at in our initial experiments with parallelization described in Section 4.

4. ADAPTATION CAPABILITIES

The most distinctive feature of SmartOrch is the way in which it embeds adaptation capabilities in the orchestration system. These enable it to respond to variations and changes in observed human behavior in order to ensure achievement of global system objectives and to optimize the computational performance of the platform. While data-driven optimization can in principle be performed in any orchestration architecture, by linking a component that can modify decisions at the processing, orchestration workflow, and user interaction workflow levels, and which operates on a single data source (provenance data describing all operations in the system), we enable a much more pervasive and principled capacity for runtime adaptation, suitable for social computation systems, where we expect variability in system behavior to be the rule.

Figure 4: The SmartOrch Optimizer

4.1 Optimizer

The Optimizer follows the general design shown in Figure 4: provenance data, describing all operations in terms of peers, resources, and activities performed on these resources, is continually gathered by the Provenance Manager and can be queried by the platform developer or an automated Optimizer process. For example, we might want to know how soon users stop using the platform after they have not found tasks that match their requests for a while. Optimization algorithms (in a very broad sense of the term) make decisions based on the results of these queries, using them as parameters that inform their decisions. In our example, if we find out that most users leave after three consecutive unsuccessful attempts, we should give those users priority over the use of some resource (e.g. allocate them to a car in a ridesharing domain) after they have been unlucky twice.

However, to discover such patterns, a learning process must run in the background. This involves the use of learning patterns, which define what data patterns are queried to obtain learning data, and the application of learning algorithms to extract pertinent orchestration patterns. In the above example, a suitable learning pattern might be

(REQUEST → NO SOLUTION)^n → REQUEST → SOLUTION

matching all cases where no solution was found n times in a row and another request was later posted by the same user (using the notation of Figure 1). The orchestration pattern extracted from this might be

(REQUEST → NO SOLUTION)^3 → ⊥

where ⊥ indicates no match against a future REQUEST. When these forms of background learning from data are applied, optimization algorithms query the library of extracted orchestration patterns rather than the original provenance data directly.

It is worth noting that we use the notion of “orchestration pattern” very liberally here. For example, it could apply to a recurring pattern in a user interaction workflow as in the example above, to regularities in the performance of certain platform jobs (e.g. because of persistently high loads on specific services), and to many other similar cases. By and large, however, we are mostly interested in optimizations that aim at adapting to human behavior.
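A minimal sketch of how such a learning pattern could be mined from per-user event histories follows; the event encoding and field names are hypothetical, not the PROV representation SmartOrch actually uses.

```javascript
// Length of the longest run of consecutive NO_SOLUTION outcomes in a
// user's event history (a SOLUTION resets the run).
function maxConsecutiveFailures(events) {
  let run = 0, max = 0;
  for (const e of events) {
    if (e === "NO_SOLUTION") { run += 1; max = Math.max(max, run); }
    else if (e === "SOLUTION") run = 0;
  }
  return max;
}

// Learn a dropout threshold over all users: the smallest failure run
// observed among users who never posted another REQUEST afterwards.
function learnDropoutThreshold(histories) {
  const dropoutRuns = histories
    .filter((h) => h.droppedOut)
    .map((h) => maxConsecutiveFailures(h.events));
  return dropoutRuns.length ? Math.min(...dropoutRuns) : Infinity;
}
```

A real implementation would derive the histories from provenance queries; the point here is only the shape of the pattern-extraction step.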

4.2 Simulation Experiments

In this section, we present three different optimizations performed in the SmartShare ridesharing system, one for each of the three layers of SmartOrch operation (process, orchestration workflow, user interaction workflow). The purpose of these experiments is to demonstrate the breadth of optimization opportunities SmartOrch offers as one of its main contributions, and the benefits these can bring in terms of managing social computation systems. The algorithms we use are deliberately kept simple and do not constitute novel contributions in themselves.

As mentioned in Section 2, SmartShare aims to support people who are willing to share a ride. In our simulation scenario, we consider users who want to go from location O to location D within the next 24 hours. Each of them is characterized by role (driver or commuter), gender (male or female), and age (young or adult). The aim of the system is to compute a global ride allocation, a solution that groups people into cars given their requirements and preferences. The pickup location O, the drop-off location D, and the time of the ride are requirements of the users that impose hard constraints on the allocation proposed by the system. This information is elicited from the users during the peer discovery stage, and during task composition SmartOrch proposes to users only solutions that satisfy their requirements. However, each user also has preferences about whom to travel with, i.e., hedonic preferences. We assume a setting in which young users want to travel only with other young users, adults only with adults, and female users only with other females, while male users are indifferent between female and male fellow travellers. This information is not directly elicited, so the system has to learn these hidden preferences and understand which type of implicit constraints they impose.
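The hidden hedonic preferences described above can be encoded as a symmetric compatibility test; the following is our own sketch, not part of SmartShare:

```javascript
// Two users can share a car only if neither's hedonic preference is
// violated: ages must match, and a female user rides only with females
// (male users are indifferent, but must still satisfy the other side).
function compatible(a, b) {
  if (a.age !== b.age) return false;
  if (a.gender === "female" && b.gender !== "female") return false;
  if (b.gender === "female" && a.gender !== "female") return false;
  return true;
}
```

Note that although male users are themselves indifferent, the check is symmetric: a male user still cannot be placed with a female user, because her preference would be violated.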
In particular, to provide a solution that satisfies the users, the system needs to determine whether they would accept solutions that violate these preferences, i.e. whether the preferences constitute hard or soft constraints in practice. Assume that SmartShare proposes to the users a unique allocation A computed by a heuristic that maximizes the sum of the utilities of all drivers, i.e., the drivers' social welfare (called “driver welfare” below). Formally, the utility u(A, d) of a driver d given allocation A is

u(A, d) = k − k/|S(A, d)|

where k is the cost of the ride and S(A, d) is the set of users allocated to the car of driver d, assuming that the cost is spread equally among all passengers. The total welfare of all drivers D can be trivially computed as

U(A) = Σ_{d∈D} u(A, d).
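As a quick check of these formulas, we can represent an allocation as a map from drivers to car occupants (an illustrative encoding, not SmartOrch's internal representation):

```javascript
// u(A, d) = k - k/|S(A, d)|: the ride cost k is split equally over the
// occupants of driver d's car (the occupant set includes the driver).
function driverUtility(k, carOccupants) {
  return k - k / carOccupants.length;
}

// U(A) = sum of u(A, d) over all drivers d in the allocation.
function driverWelfare(k, allocation) {
  return Object.values(allocation)
    .reduce((sum, occupants) => sum + driverUtility(k, occupants), 0);
}
```

For instance, with k = 10, a driver riding alone gains nothing (10 − 10/1 = 0), while a driver sharing with one commuter gains 5.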

The heuristic the system uses is shown in Algorithm 1:

Algorithm 1: Greedy task allocation algorithm
1: Input: a vector of drivers D, a vector of commuters M
2: Output: set of teams T that compose allocation A
3: initialize(T, D)
4: C ← Σ_{i=1}^{|D|} capacity(D_i)
5: while M ≠ ∅ and Σ_{i=1}^{|D|} (|T_i| − 1) < C do
6:   ⟨τ, m⟩ ← findmax(T, M)
7:   M ← M \ {m}
8:   T_τ ← T_τ ∪ {m}
9: end while
10: return T

In line 3, a set of teams T with one driver d ∈ D in each team is created. A team τ ∈ T represents a group of people that share the same car in the solution proposed by the system. In line 4, given the capacity of the car of each driver D_i, the system computes the total capacity C. The next steps

identify the pair of commuter m and team τ that maximizes the gain in driver welfare using the function findmax(T, M) (line 6), remove commuter m from the set of commuters that still need to be allocated (line 7), and add commuter m to team τ (line 8). These steps are repeated until no more commuters need to be allocated or the overall capacity C has been reached (line 5). By randomly selecting the order in which users with identical requirements and preferences join a team, the algorithm can be proven to be fair in expectation; that is, in expectation, identical users achieve the same utility. Focusing on this task composition heuristic, we demonstrate the benefits of SmartOrch adaptation at the process, orchestration, and user interaction workflow levels.

Process layer. At the level of processes, we consider the example introduced in Section 4.1, whereby, after several iterations, the system learns that users drop out of the system when they are not matched to a car three times in a row. Given this, we introduce priorities for users in Algorithm 1, which depend on how many times in a row they have not been matched. This results in a drastic reduction of the dropout rate: in a simple case with 100 commuters and 16 drivers offering 3 seats each, the user dropout probability is ∼11% (i.e., (48/100)^3) after three iterations with Algorithm 1, and 0 when user priorities are introduced. SmartOrch allows for this adaptation by modifying the job of computing tasks within the Composition Manager without reengineering the system: we simply store an additional attribute for every user in the Peer Manager that represents the number k of consecutive unsuccessful ride requests, and replace the findmax function by a procedure that sorts commuters by their priority, which increases with k.

Orchestration layer. To consider an example of adaptation at the orchestration level, we look at the performance of task composition in terms of driver welfare U(A).
Note that, given an allocation A, we can compute the highest value U*(A) of this quantity that would be achieved if all users accepted the proposed rides. However, we use the actual welfare U(A) achieved given the rides actually accepted to evaluate the system, since this is a better measure of the quality of the proposed solutions. To evaluate user satisfaction we consider the following two measures: first, given an allocation A, the number of people N(A) who accept rides (and thus contribute to U(A)), compared to the maximum number of people C̄ that could be sharing a ride (including drivers) in the best case, i.e., C̄ = C + |D|. Second, the difference between the number of users N*(A) to whom the system proposes a ride and the number of users who actually accept it, i.e., N*(A) − N(A). Evaluating this difference is important because the rejection of a ride can be interpreted as a failure of the system to understand user preferences. Finally, we measure the computation time t of the algorithm to assess how much the efficiency of the system is improved.

We compare the performance of SmartOrch with a system G that simply applies Algorithm 1. In what follows, we denote by A_S and A_G the allocations computed by SmartOrch and G, respectively. In our example, the Optimizer knows that there are four subsets of users that should be considered separately in order to avoid ride rejections: young female users, adult female users, young male users, and adult male users. Given this, SmartOrch can dynamically learn to decompose the input set of the task composition problem into four non-overlapping subsets of users (who would not ride with each other anyway), and to run four instances of the algorithm in parallel, one for each subset of users, on a separate Composition Manager process.

As a result, we expect to observe that U(A_S) ≥ U(A_G) even if U*(A_S) ≤ U*(A_G). Indeed, by discovering users' preferences and the type of constraints they impose, SmartOrch can avoid proposing rides that would be rejected. This guarantees that U(A_S) ≥ U(A_G). However, since no constraint related to preferences affects the allocation A_G, U*(A_G) corresponds to the upper bound of the driver welfare achievable by any system, and thus, typically, U*(A_S) < U*(A_G). Given the heuristic used in this example, the observations about driver welfare trivially imply that N(A_S) ≥ N(A_G) and N*(A_S) − N(A_S) ≤ N*(A_G) − N(A_G), i.e., users are more satisfied with the solutions proposed by SmartOrch than with those proposed by system G because, with the former, (i) more users are allocated and (ii) fewer users reject the solution proposed to them.
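The decomposition step can be sketched as follows, using our own illustrative encoding of user attributes (the real system derives the grouping from learned orchestration patterns):

```javascript
// Group users by the learned (gender, age) pattern so that independent
// instances of the composition algorithm can run on each subgroup.
function partitionUsers(users) {
  const groups = {};
  for (const u of users) {
    const key = `${u.gender}-${u.age}`; // e.g. "female-young"
    (groups[key] = groups[key] || []).push(u);
  }
  return groups;
}
```

Since no ride can span two of the resulting subgroups anyway, matching each subgroup independently loses no acceptable solutions while enabling parallel execution.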
We observe that SmartOrch can flexibly decide whether it should run one instance of the algorithm or multiple instances in parallel, and thus opt for the solution that, e.g., reduces the computation time of the algorithm without obtaining a driver welfare lower than that of G, i.e., it guarantees t_S ≤ t_G and U(A_S) ≥ U(A_G). The functions that implement this work as follows:

1  function doMatching(pattern, tasks, users) {
2    if (Object.keys(pattern).length > 0) {
3      for (var i = 0; i < Object.keys(pattern).length; i++)
4        parallel(matching(pattern[i].tasks, pattern[i].users));
5    } else
6      matching(tasks, users);
7  }
8  function parallel(async_calls) {
9    var results;
10   for (var i = 0; i < async_calls.length; i++) {
11     results = async_calls[i];
12   }
13   return shared_callback(null, results);
14 }

Listing 2: Parallel and sequential task composition

The noteworthy aspect of this function is the pattern parameter, which acts as input to the function doMatching. It applies the orchestration patterns stored in the Optimizer, which allow for the identification of subgroups into which the users can be divided in order to obtain a more efficient matching. Given this information, an instance of the algorithm is executed in parallel for each subgroup (lines 2-5).

Before discussing the advantages of SmartOrch in terms of user interaction workflows, we present the results of the simulation of the example described above. The users considered in the simulation are randomly generated such that each of them is male or female with 50% probability, young or adult with 50% probability, and a driver with 20% probability or a commuter with 80% probability. For the sake of simplicity, we assume all users want to go from location O to location D during the same time interval. We consider different population sizes (50, 100, 500, 1000, 3000, 5000, and 10000 users), and the results we present are averaged over 100 instances for each population size.

Figure 5: Driver welfare achieved by SmartOrch compared to non-adaptive system G for different population sizes.

First, we analyse the driver welfare shown in Figure 5, where U(A_G) and U(A_S) are compared to the upper bound U*(A_G). The figure shows only results for populations of up to 1000 users. It is easy to observe that the welfare achieved by SmartOrch is higher than that achieved by G; we obtain similar results for every population size. Indeed, the average ratio between actual driver welfare and the upper bound, taken over all population sizes, is 0.4777 for system G and 0.9829 for SmartOrch, with standard deviations of 0.0037 and 0.0080, respectively. Note that, in this example, U*(A_S) = U(A_S), and thus U(A_S) in Figure 5 also represents the maximum welfare achievable when constraints due to preferences are taken into account. Thus, U*(A_G) − U(A_S) indicates how much preferences affect the maximum achievable welfare. In our example, this effect is not significant.

We can make similar observations regarding user satisfaction. Figure 6 shows the number of users who accept the ride proposed by G and SmartOrch compared to C̄, i.e. the maximum number of people that can take part in a ride. We observe that N(A_S) ≥ N(A_G), i.e.
the users who accept the ride under allocation A_S are no fewer than those who accept it under A_G. Moreover, more users reject the ride proposed by system G: in particular, N*(A_G) − N(A_G) > 0 (the light blue area in Figure 6), while N*(A_S) = N(A_S) (the empty yellow area).

Figure 6: Number of users allocated by SmartOrch and a generic system G for different population sizes. Users are divided into two groups, those who accept the ride and those who reject it.

Figure 7: SmartShare workflow in which high-reputation users do not have to accept rides explicitly

population size | t_G         | t_parallel
50              | 3.85·10^2   | 18.82·10^2
100             | 10.69·10^2  | 20.74·10^2
500             | 16.90·10^3  | 3.49·10^3
1000            | 65.18·10^3  | 4.51·10^3
3000            | 59.79·10^4  | 1.14·10^4
5000            | 157.75·10^4 | 1.87·10^4
7000            | 316.16·10^4 | 2.59·10^4
10000           | 636.87·10^4 | 3.61·10^4

Table 1: Computation times with system G, which runs a single instance of the algorithm (t_G), and with SmartOrch, which parallelizes execution whenever possible (t_parallel), for different population sizes.

Finally, we discuss the results for the computation time of the algorithm, shown in Table 1. Here, we compare the situation
in which a system (like system G) always runs a single instance of the algorithm with that in which execution is parallelised whenever this reduces the expected number of rejected rides. When the system has 500 users or more, the parallel approach requires less time than the one used by G; this is not the case when there are no more than 100 users. However, since SmartOrch can dynamically adapt to specific situations, if it identifies that the computation time of the algorithm is more critical than, e.g., driver welfare, it can still decide to behave exactly like G, i.e., SmartOrch can decide how to run the algorithm by imposing t_S = min{t_G, t_parallel}. This highlights the flexibility of SmartOrch in making decisions about performance trade-offs depending on the situation at hand.

User interaction workflow layer. In the final set of simulation experiments, we focus on optimizations at the user interaction workflow level, specifically when observing unreliable users. In the ridesharing scenario, in order to reach agreement, users have to accept a proposed ride in the task allocation stage. This requires a lot of explicit communication between the system and the users, and may lead to delays if users take some time to accept a ride. If we assume that each user i has a reputation r_i, we could reduce


this communication overhead by selecting a threshold θ and requiring that only users with a reputation below θ must explicitly accept a ride. The idea is that users with a sufficiently high reputation are reliable enough that it is unlikely they will not accept a proposed ride. Note that the potential loss in driver welfare this could incur is due to users who do not take the proposed ride (whether explicitly accepted or not), even if their requirements and preferences are satisfied, e.g. due to a last-minute change of plans that implicitly affects their reliability (and reputation). A user interaction workflow model that captures these two alternatives is shown in Figure 7.

We measure the impact of this more flexible workflow model on driver welfare as follows: we assign a reputation level r_i ~ U(1, 100) to each user i. Since we assume that users' requirements and preferences are satisfied, the probability p_i that a user i will not take the ride depends only on exogenous factors, and thus on r_i. We model this probability as p_i = (100 − r_i)/100, i.e., the complement of the reputation level. Moreover, we assume that if a user explicitly accepts a ride, her probability to “fail” to complete it correctly is reduced by 90%. For example, if r_i = 20, then p_i = 80% when i does not explicitly accept the ride and p_i = 8% when i accepts it. In our simulation, we use these probabilities to randomly select the users that do not take a proposed ride. For every population size and instance considered in the previous simulation, we present results averaged over 100 random selections. Driver welfare is computed by considering only users that actually perform a ride. Table 2 shows the proportion of driver welfare obtained, compared to the maximally achievable value (reached when θ = 100 and every user has to explicitly accept the ride), for different threshold values.

size | θ = 0  | θ = 25 | θ = 50 | θ = 75 | θ = 100
50   | 0.0438 | 0.4807 | 0.7500 | 0.9111 | 1
100  | 0.0444 | 0.4913 | 0.7598 | 0.9156 | 1
500  | 0.0472 | 0.5073 | 0.7753 | 0.9230 | 1
1000 | 0.0475 | 0.5088 | 0.7762 | 0.9233 | 1
3000 | 0.0476 | 0.5109 | 0.7778 | 0.9240 | 1

Table 2: Driver welfare for different population sizes and reputation threshold values

These results demonstrate the advantages of an adaptive system that performs optimization at the user interaction workflow level, i.e. the flexibility of the system in deciding the threshold level depending on the particular situation it is exposed to. Indeed, SmartOrch can set the value of θ as required to trade off driver welfare against the number of messages exchanged in the system to orchestrate rides.
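The acceptance rule and failure model used in this simulation can be transcribed directly (a sketch; the function and field names are ours, not SmartShare's):

```javascript
// p_i = (100 - r_i)/100 when user i skips explicit acceptance
// (reputation >= theta); explicit acceptance cuts the risk by 90%.
function failureProbability(reputation, threshold) {
  const base = (100 - reputation) / 100;
  const mustAccept = reputation < threshold;
  return mustAccept ? base * 0.1 : base;
}
```

This reproduces the example in the text: with r_i = 20 and θ = 100 (everyone must accept), p_i = 8%; with θ = 0 (nobody must accept), p_i = 80%.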


5. RELATED WORK

Recently, there has been much interest in systems that perform collective computation with significant human involvement, resulting in a growing body of research on human computation [9], social machines [7], human-agent collectives [8], and social collective intelligence [12]. Somewhat surprisingly, relatively little work in this space has focused on orchestration frameworks, let alone on optimization and adaptation capabilities for such orchestration.

Programming frameworks for human computation such as TurKit [10], Jabberwocky [1], and AutoMan [3] provide high-level language support for “programming with people”, using systems like Amazon Mechanical Turk or social networking platforms like Facebook as their backends. The proposed languages allow some flexibility, for example continuing a computation until a desired confidence level is achieved, but do not offer any facilities for composing complex multi-user workflows and optimizing them over time. CrowdLang [11] offers somewhat more extensive workflow composition functionality based on a number of generic collaborative patterns (e.g., iterative, contest, collection, divide-and-conquer), yet these are limited to crowdsourcing and are not intended for other types of social computation. The same is true of the more comprehensive orchestration architecture proposed by Tranquillini et al. [18], even though it allows for much more flexibility in the definition and enactment of custom task workflows; in particular, their system enables developers to define complex crowdsourcing workflows using a BPMN-based modelling language.

Another line of work concerns specification languages for social computation-like systems. Murray-Rust et al. [14] use a declarative process calculus for describing coordination protocols to capture collaborative processes among “social compute units”, programmable abstractions of loosely coupled human teams.
Their protocols are very similar to the models of user interaction workflows we presented in Section 2 (e.g. the one shown in Figure 1), and the experiments they conduct in a collaborative software development application scenario exhibit some adaptation dynamics in the ways in which the workflow is organised (the collective changes structure over time based on different events). However, their system is presented as a simulation prototype rather than as a reusable architecture, and the adaptations they describe are hardcoded into the system by the developer, not extracted from observed behavioral patterns.

Singh [17] presents "Local State Transfer" (LoST) as an architectural style for collaborative systems. He proposes a similar agent protocol-oriented description language for distributed process management and communication, which shares our focus on a data-centric, RESTful framework for communication and synchronization. However, his framework does not map the formal enactment model onto a concrete computational orchestration architecture.

Beyond the area of social computation, there is of course a substantial body of work on general workflow management and service composition platforms [5], where the systems that share most characteristics with the kind of architecture required in our domains are those used for scientific collaboration, such as Kepler [2], Triana [4], or Taverna [20]. Here, there is extensive work on adaptive workflow orchestration and optimization, which, however, mostly focuses on scheduling the jobs of a workflow over a set of resources. For example, Wen et al. [19] propose a method to partition scientific


workflows over federated clouds to meet security and reliability requirements while minimizing monetary cost. The authors of [21] introduce a method to transform the structure of a workflow such that the monetary cost of executing it on a public cloud is minimized while providing performance guarantees. These workflow optimizations are similar to some of the examples we present. However, they do not address the processing and user interaction workflow layers, whose performance depends heavily on human behavior and has not been the focus of these systems.

6. CONCLUSIONS

In this paper, we have presented SmartOrch, a novel orchestration architecture for social computation systems in which most computation is performed by human users. The design of this architecture is heavily influenced by its human-centric focus in several ways: By implementing an asynchronous, event-driven orchestration loop, SmartOrch is able to cope with different user interaction workflows in parallel and with users engaging in different stages of the social computation lifecycle concurrently. These users will generally not exhibit the regularity of behavior one might expect from computational processes. By applying a purely RESTful approach to managing state and shared data resources, SmartOrch exploits the standard architecture of the Web. This makes it easy to integrate different external services, to build client apps for SmartOrch-based systems using simple, language- and platform-independent APIs, and to exploit the scalability and robustness inherent to the infrastructure of the Web. By embedding adaptation and optimization features at all levels of social orchestration, SmartOrch is capable of improving global performance based on continual observation of human behavior, both in terms of the computational characteristics of the system and the quality of the social computations its orchestration activities result in. To our knowledge, SmartOrch is the first social orchestration system that combines these features and provides a framework generic enough to implement and combine a broad variety of social computation applications. Beyond the examples provided in the paper, SmartOrch has been used to implement further scenarios like meeting scheduling and chat-based group activity creation, and we are currently working on implementing a peer teaching app with it.
Future work will focus on developing a description language for SmartOrch user interaction and orchestration workflows that allows for (semi-)automated generation of code that packages platform jobs in orchestration workflows and maps new user interaction workflows to APIs. We would also like to enable developers to specify which aspects of the system’s operation should be tracked in terms of provenance (in the examples above, the provenance templates used to capture provenance information have to be defined in an application-specific way). This would obviate the need for the developer to understand the PROV format, and would also lead to a smaller “provenance footprint”, which is currently often substantial as we track almost all operations in the system. Finally, there is still a lot of work to be done in terms of generalizing the ways in which we describe learning and orchestration patterns, all of which were designed in an ad hoc fashion for the examples in the paper based on human insight, and only scratch the surface of the full potential of truly adaptive social orchestration.


7. ADDITIONAL AUTHORS

8. REFERENCES

[1] S. Ahmad, A. Battle, Z. Malkani, and S. Kamvar. The Jabberwocky Programming Environment for Structured Social Computing. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pages 53–64. ACM, 2011.
[2] I. Altintas, C. Berkley, E. Jaeger, M. Jones, B. Ludascher, and S. Mock. Kepler: An Extensible System for Design and Execution of Scientific Workflows. In Proceedings of the 16th International Conference on Scientific and Statistical Database Management (SSDBM '04), pages 423–, Washington, DC, USA, 2004. IEEE Computer Society.
[3] D. W. Barowy, C. Curtsinger, E. D. Berger, and A. McGregor. AutoMan: A Platform for Integrating Human-Based and Digital Computation. ACM SIGPLAN Notices, 47(10):639–654, 2012.
[4] D. Churches, G. Gombas, A. Harrison, J. Maassen, C. Robinson, M. Shields, I. Taylor, and I. Wang. Programming Scientific and Distributed Workflow with Triana Services. Concurrency and Computation: Practice and Experience, 18(10):1021–1037, 2006.
[5] E. Deelman, D. Gannon, M. Shields, and I. Taylor. Workflows and e-Science: An Overview of Workflow System Features and Capabilities. Future Generation Computer Systems, 25(5):528–540, 2009.
[6] R. T. Fielding and R. N. Taylor. Principled Design of the Modern Web Architecture. ACM Transactions on Internet Technology, 2(2):115–150, 2002.
[7] J. Hendler and T. Berners-Lee. From the Semantic Web to Social Machines: A Research Challenge for AI on the World Wide Web. Artificial Intelligence, 174(2):156–161, 2010.
[8] N. R. Jennings, L. Moreau, D. Nicholson, S. Ramchurn, S. Roberts, T. Rodden, and A. Rogers. Human-Agent Collectives. Communications of the ACM, 57(12):80–88, Nov. 2014.
[9] E. Law and L. von Ahn. Human Computation. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2011.
[10] G. Little, L. B. Chilton, M. Goldman, and R. C. Miller. TurKit: Tools for Iterative Tasks on Mechanical Turk. In Proceedings of the ACM SIGKDD Workshop on Human Computation, pages 29–30. ACM, 2009.
[11] P. Minder and A. Bernstein. CrowdLang: A Programming Language for the Systematic Exploration of Human Computation Systems. In Social Informatics, pages 124–137. Springer, 2012.
[12] D. Miorandi, V. Maltese, M. Rovatsos, A. Nijholt, and J. Stewart, editors. Social Collective Intelligence: Combining the Powers of Humans and Machines to Build a Smarter Society. Computational Social Sciences Series. Springer-Verlag, 2014.
[13] L. Moreau and P. Groth. Provenance: An Introduction to PROV. Morgan & Claypool, 2013.
[14] D. Murray-Rust, O. Scekic, H.-L. Truong, D. Robertson, and S. Dustdar. A Collaboration Model for Community-Based Software Development with Social Machines. In Proceedings of the 10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing, 2014.
[15] Reference details removed to maintain author anonymity.
[16] Reference details removed to maintain author anonymity.
[17] M. P. Singh. LoST: Local State Transfer - An Architectural Style for the Distributed Enactment of Business Protocols. In Proceedings of the 9th International Conference on Web Services (ICWS), pages 57–64, Washington, DC, 2011.
[18] S. Tranquillini, F. Daniel, P. Kucherbaev, and F. Casati. Modeling, Enacting, and Integrating Custom Crowdsourcing Processes. ACM Transactions on the Web (TWEB), 9(2):7, 2015.
[19] Z. Wen, J. Cala, P. Watson, and A. Romanovsky. Cost Effective, Reliable and Secure Workflow Deployment over Federated Clouds. IEEE Transactions on Services Computing, PP(99):1–1, 2016.
[20] K. Wolstencroft, R. Haines, D. Fellows, A. Williams, D. Withers, S. Owen, S. Soiland-Reyes, I. Dunlop, A. Nenadic, P. Fisher, J. Bhagat, K. Belhajjame, F. Bacall, A. Hardisty, A. Nieva de la Hidalga, M. P. Balcazar Vargas, S. Sufi, and C. Goble. The Taverna Workflow Suite: Designing and Executing Workflows of Web Services on the Desktop, Web or in the Cloud. Nucleic Acids Research, 41(W1):W557–W561, 2013.
[21] A. C. Zhou and B. He. Transformation-Based Monetary Cost Optimizations for Workflows in the Cloud. IEEE Transactions on Cloud Computing, 2(1):85–98, Jan 2014.


B Recommender system paper

Diversity-Aware Recommendation for Human Collectives

Pavlos Andreadis, Sofia Ceppi, Michael Rovatsos and Subramanian Ramamoorthy¹

Abstract. Sharing economy applications need to coordinate humans, each of whom may have different preferences over the provided service. Traditional approaches model this as a resource allocation problem and solve it by identifying matches between users and resources. These require knowledge of user preferences and, crucially, assume that users act deterministically or, equivalently, that each of them is expected to accept the proposed match. This assumption is unrealistic for applications like ridesharing and house sharing (like Airbnb), where user coordination requires handling the diversity and uncertainty in human behaviour. We address this shortcoming by proposing a diversity-aware recommender system that leaves the decision power with users but still assists them in coordinating their activities. We achieve this through taxation, which indirectly modifies users' preferences over options by imposing a penalty on them. This is applied to options that, if selected, are expected to lead to less favourable outcomes from the perspective of the collective. The framework we use to identify the options to recommend is composed of three optimisation steps, each of which has a mixed integer linear program at its core. Using a combination of these three programs, we are also able to compute solutions that permit a good trade-off between satisfying the global goals of the collective and the individual users' interests.
We demonstrate the effectiveness of our approach with two experiments in a simulated ridesharing scenario, showing: (a) significantly better coordination results with the approach we propose than with a set of recommendations in which taxation is not applied and each solution maximises the goal of the collective; (b) that we can propose a recommendation set to users instead of imposing a single allocation on them, at no loss to the collective; and (c) that our system allows for an adaptive trade-off between conflicting criteria.

1 Introduction

Sharing economy applications constitute an interesting domain for multi-agent resource allocation and coalition formation. In these applications, users act as producers and consumers of resources, aiming to find peers to share the resources with, while a platform supports them during peer discovery and resource sharing. These fundamental aspects of sharing applications highlight how the decisions of the collective of users lead to a globally desirable outcome, while the choices of a single user alone have no such power. However, the services the sharing applications provide should leave the decision-making power to each user in order to allow her to express her preferences and satisfy her individual needs. Consequently, instead of facing the problem of identifying a solution for the collective of users, the platform needs to help them coordinate their individual choices in such a way that the goal of the collective can still be achieved. In this work, we tackle this issue by providing a recommender system that accounts for user preferences and facilitates coordination among users, in scenarios where users perform joint tasks in subgroups consisting of members of a larger collective. In the example of a ridesharing application, each user could be aiming to achieve the best fit between his schedule and the planned ride. However, since rides cannot be achieved without the collaboration of multiple users, the collective goal of facilitating as many users as possible will come into conflict with this individual preference.

Many multi-agent applications face the problem of coordinating autonomous agents that aim to share resources, which can be seen as a resource allocation problem. Traditional approaches to this problem typically exert a degree of centralised control in order to provide functional, viable solutions to most, if not all, participating users. In particular, several algorithms have been designed that identify stable matches between users and resources [14, 18, 9]. However, users cannot affect the algorithm. The most flexible approaches proposed in the literature make use of sequential mechanisms that allow users to accept or reject the solution currently proposed to them [11, 1]. Finally, some of the existing approaches assume that the system knows the complete preference ordering of users over, e.g., other users [8]. The crucial drawback of this type of work is that it focuses on problems like how to assign children to schools, how to allocate students to shared rooms, and how to match donors with patients in the kidney exchange market.

¹ School of Informatics, University of Edinburgh, United Kingdom; email: p.andreadis@sms.ed.ac.uk, sceppi@inf.ed.ac.uk, mrovatso@inf.ed.ac.uk, sramamoo@staffmail.ed.ac.uk
In these scenarios, the possibility that users might prefer not to be allocated at all, rather than to be allocated as prescribed by the algorithm, is not considered. However, this assumption makes the adoption of such algorithms unrealistic for several sharing applications, e.g. ridesharing and joint event planning. Indeed, these approaches lack a crucial characteristic that systems mediating between humans should have: the ability to model human diversity and consider the uncertainty of human behaviour. Indeed, human decision making is affected by multiple factors (social, cultural, psychological, personal, and the available information [16]) that are unique to each individual. These create variations among individuals in terms of preferences over given characteristics of the peers and resources, leading to diversity across users. Moreover, the variability of these factors adds complexity to the decision-making process within each individual, to the extent that near-identical situations may lead to significantly different behaviour. This leads to uncertainty regarding user behaviour. Crucially, a sharing application that does not account for the diversity and uncertainty of human behaviour is likely to fail in supporting large-scale coordination within human collectives

effectively. Ideally, a system could address user diversity and provide a highly personalised service by eliciting information from users and understanding their general preferences over some defined characteristics of the services. In the literature, there is ample work providing techniques for learning user preferences accurately [17, 7, 4, 12, 10, 5, 13, 21, 15] and for delivering personalised services [2, 10, 19, 6]. However, apart from the tricky task of eliciting information from users and understanding how any given factors affect user preferences, a system has to deal with the problem of understanding which factors affect human behaviour. This is a currently open problem that is attracting attention from researchers interested in, e.g., social computation and psychology. Given this, a sharing application should account for uncertainty in human behaviour. In particular, in designing such an application, the designer has to (i) pay attention to the type of interaction between the system and the users (both individually and as a collective) and (ii) allow for flexibility such that the system can adapt to unforeseen behaviour. In this work, in order to provide such flexibility, instead of offering users a single option computed with the techniques discussed above, we focus on the problem of recommending multiple options to users.

The allocation problem faced by sharing applications, whose users aim to find peers to share a resource or a task with, is of a combinatorial nature. As such, when a system offers multiple options, all the users assigned to a task have to agree to it, i.e. they all have to choose the same task for it to happen. Since there is no guarantee that users' independent choices are consistent with one another, the system has to provide a coordination mechanism.
This problem can be seen as a coalition formation problem [20] in which incentives to stay in a suggested coalition may be provided to users who would otherwise reject it. Cost of stability [3] and taxation [23] are two techniques proposed to provide such incentives and achieve the desired effect by artificially modifying users' preferences. In this work, instead of using explicit coordination techniques that require communication with users, we provide an indirect coordination mechanism based on the techniques used in coalition formation problems. More specifically, we introduce a taxation mechanism in the options computation process. Note that a sharing application that aims to adapt to the user collective but also wants to account for the interests of individual users faces a multi-criteria optimisation problem [17]. Indeed, the interest of the collective (which requires users' collaboration) is in conflict with the interest of individual users, whose aim is to obtain the best option for themselves. The approach we propose allows the sharing application to specify to which extent it wants to account for individual users' interests and to identify options that achieve the desired trade-off between conflicting interests.

The three main contributions of this work are:
• A formulation of the user coordination problem faced by sharing economy applications that allows for the explicit representation of the diversity and uncertainty in human behaviour;
• A diversity-aware system for the coordination of users in sharing economy applications that does not require communication between users;
• Experimental evidence for the necessity of taking human diversity and uncertainty into account when coordinating such applications. Specifically, we demonstrate that we can replace direct allocations with recommendation sets at no cost, while also allowing for adaptively trading off between various criteria of optimality.
The remainder of this paper is organised as follows. In Section 2, we provide a formal description of the allocation problem that characterises sharing applications. In Section 3, we propose a diversity-aware approach that accounts for the diversity and uncertainty of human behaviour, and describe our framework. Section 4 provides a detailed description and formulation of the mixed integer linear programs used in our optimisation framework. The experimental evaluation and obtained results are described in Section 5. Finally, Section 6 concludes the paper.

2 Formal Model

In this section, we formalise the resource allocation problem that characterises sharing economy applications. Consider a set of tasks J = {1, . . . , |J|}. Each task j ∈ J is associated with one and only one user who owns the task; for example, the user is the owner of the resource that will be shared, or whoever initiated a concrete sharing task. Since we assume that each owner has one and only one task associated with her, for the sake of simplicity we interchangeably refer to j ∈ J as both the task and its owner. Let I = {1, . . . , |I|} be the set of users who do not own a task, and K the set of all users, i.e. K = I ∪ J. Each owner j ∈ J aims to find non-owners to share her task with, and each non-owner i ∈ I aims to find one task to join. All users have requirements and preferences about tasks and about the users to share a task with. Let x = (x_{1,1}, . . . , x_{1,|J|}, . . . , x_{|I|,1}, . . . , x_{|I|,|J|}) ∈ X define which non-owners join each task, i.e., x is an allocation of non-owners to tasks, where X is the set of allocations. In particular, if i is allocated to task j then x_{i,j} = 1, otherwise x_{i,j} = 0. The preferences of each user k ∈ K are represented by a utility function u_k : X → R, which provides a complete ranking over potential allocations x ∈ X. Similarly, the system-level utility function is defined as U_s : X → R. Crucially, in sharing economy applications, single users and the collective of users have conflicting interests. While a user i aims to maximise her utility u_i(·), the interest of the collective of users, represented by the system-level utility function U_s(·), is related to the overall benefit the users can achieve. For example, U_s(·) may consider the sum of the users' utilities or the number of users that are allocated to tasks. Given this, it is obvious that the maximisation of U_s(·) provides no guarantee to individual users in terms of achieved utility.
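The basic objects of this model can be sketched in code. The following is a minimal illustration only; all names and the concrete utility functions are ours, not from the paper's implementation:

```python
# Sketch of the formal model: an allocation x as a 0/1 matrix over
# non-owners I (rows) and tasks J (columns), with per-user utility
# functions u_k and an example system-level utility U_s.

def make_allocation(n_nonowners, n_tasks, pairs):
    """Build x with x[i][j] = 1 iff non-owner i is allocated to task j."""
    x = [[0] * n_tasks for _ in range(n_nonowners)]
    for i, j in pairs:
        x[i][j] = 1
    return x

def system_utility(x, user_utils):
    """Example U_s: sum of user utilities plus number of allocated non-owners."""
    allocated = sum(sum(row) for row in x)
    return sum(u(x) for u in user_utils) + allocated

x = make_allocation(2, 2, [(0, 0), (1, 0)])   # both non-owners join task 0
u0 = lambda x: 3 if x[0][0] else 0            # toy utility functions
u1 = lambda x: 1 if x[1][0] else 2
print(system_utility(x, [u0, u1]))            # -> 6
```

Note that U_s here is only one instance of the family of system-level utilities discussed above; any function of the allocation matrix would fit the same interface.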
In order to provide such a guarantee, the application designer should, e.g., maximise the fairness of the solutions (i.e., minimise the difference between the utilities achieved by the users) or maximise the minimum single-user utility. However, in this case, no guarantee is given in terms of system-level utility. The aim of the application is to aid users in finding compatible peers by suggesting allocations while accounting for this conflict of interest. However, it is fundamental to highlight that not all allocations are guaranteed to occur. Indeed, each user k ∈ K selects an allocation from a set R of recommended solutions independently and without direct coordination with other users, according to a user response model. The three user response models typically used in the literature are [21]:

• noiseless response model: each user acts deterministically and always selects the solution that maximises her utility. Formally, if x ∈ R is such that u_k(x) ≥ u_k(x') for all x' ∈ R, then p_k(x) = 1, otherwise p_k(x) = 0, for all k ∈ K, where p_k(x) is the probability with which user k ∈ K chooses solution x.

• constant noise response model: each user selects the solution that would maximise her utility the majority of the time, irrespective of the utility of other solutions. Each of the remaining solutions

is chosen with an equal, small probability. Formally, if x ∈ R is such that u_k(x) ≥ u_k(x') for all x' ∈ R, then p_k(x) = α, otherwise p_k(x) = β, with α ≫ β and Σ_{x∈R} p_k(x) = 1, for all k ∈ K.

• logit response model: each user selects an allocation from the set R proportionally to its utility value. Formally, p_k(x) = u_k(x) / Σ_{x'∈R} u_k(x'), ∀k ∈ K.

Given that, with each of these response models, every user selects a solution without reasoning about other users' choices but, rather, makes her decision by exclusively considering her utility over each allocation, in order for a task to occur all the users, owner and non-owners, allocated to that task in a given allocation x have to select x. For example, if allocation x assigns to task j the subset of non-owners Ĩ ⊆ I, then, in order for task j to occur, all non-owners i ∈ Ĩ must select allocation x.
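The three response models can be sketched as choice-probability functions over the recommendation set. Function names are ours; the logit variant assumes strictly positive utilities, since the definition divides by their sum:

```python
# Choice probabilities p_k(x) over a recommendation set R, one function
# per response model; `utils` holds the user's utility for each x in R.

def noiseless(utils):
    """p = 1 for the utility-maximising option, 0 otherwise (ties -> first)."""
    best = max(range(len(utils)), key=lambda i: utils[i])
    return [1.0 if i == best else 0.0 for i in range(len(utils))]

def constant_noise(utils, alpha=0.9):
    """p = alpha for the best option; remaining mass split equally (alpha >> beta)."""
    best = max(range(len(utils)), key=lambda i: utils[i])
    beta = (1.0 - alpha) / (len(utils) - 1)
    return [alpha if i == best else beta for i in range(len(utils))]

def logit(utils):
    """p proportional to the option's utility value (assumes positive utilities)."""
    total = sum(utils)
    return [u / total for u in utils]

utils = [4.0, 1.0, 3.0]      # the user's utility for each recommended allocation
print(noiseless(utils))      # -> [1.0, 0.0, 0.0]
print(logit(utils))          # -> [0.5, 0.125, 0.375]
```

Each function returns a distribution over R, which is what the task-occurrence condition above is evaluated against.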

3 Diversity-Aware Approach

The problem described in the previous section can be approached as a resource allocation problem, in which users are implicitly assumed to be compliant with any solution proposed to them, and are therefore not afforded any alternatives. The results of such an approach are constrained to those of a matching between users and resources. Consequently, there is no consideration of the inherent uncertainty in user behaviour, or of the fact that users could simply refuse to participate in systems that do not satisfy their needs. A system that realistically addresses human diversity and the uncertainty in human behaviour requires an explicit representation of user preferences and their responses to different decision scenarios. Furthermore, the decision scenario needs to be formulated so that it allows for the recommendation of solutions to users, while accounting for their possible deviations from expected behaviour. In this section, we provide a detailed framework for the representation of the diversity and uncertainty in user behaviour, and outline our approach, which focuses on the problems of recommending alternatives and facilitating the coordination of users. In particular, our system intends to present a set of allocations R = {x_1, x_2, . . . , x_{|R|}} of fixed size |R|. To achieve this goal we need to deal with two issues. The first is the multi-criteria optimisation problem in which we have to balance the conflicting interests of each single user (represented by her utility function u_i(·)) and the interest of the collective of users (represented by the system-level utility function U_s(·)). We overcome this problem by computing solutions that guarantee a minimum level of system utility and maximise, e.g., the fairness of the solution with respect to the allocated resources.
The second is that the uncoordinated selection of allocations by the users, along with their diversity in preferences, makes it unlikely that users will select allocations such that the task can actually occur. In order to help the system in the process of coordinating users' selections, we introduce a taxation mechanism, so as to influence user selection behaviour by artificially modifying the utility they have for the recommended solutions, i.e. by modifying their preferences. Effectively, taxation allows the system to impose a penalty on allocations users are better off not selecting. Generally, the tax imposed is different for each user and for each allocation, and must guarantee that users still have multiple options (e.g. the system cannot impose an infinite tax). We develop a different taxation mechanism for each of the user response models described in Section 2. Crucially, these two problems must be tackled simultaneously, otherwise properties that are satisfied when the first issue is solved,


e.g. fairness, may not hold anymore if taxation is applied in a separate step. The reason behind this is that both problems, and the solutions to them, involve the same function, i.e. user utility. We now describe our approach, which aims to optimise the recommendation set R while simultaneously dealing with the problems due to the users' and the collective's conflicting interests and with the lack of coordination in user selection. In order to handle this problem, we iteratively construct the recommendation set by sequentially executing three Mixed Integer Linear Programs (MILPs), each of them guaranteeing different solution properties. In this way we can deal with both the multi-criteria optimisation and the computational complexity of finding an exact solution. In particular, in order to account for this conflict of interest, we initially construct a program called MILP_system that aims to maximise the system utility U_s(·) and thus find the highest utility V* that the system can achieve. A second program, called MILP_first, takes V* as input and guarantees that the computed solution achieves at least a given percentage of V* in terms of system utility, while the objective function of the program is focused on maximising a different property, e.g. fairness. The advantage of this approach is that the application controls exactly to which extent the maximisation of the system-level utility and the fairness are satisfied. The alternative would have been to use a single MILP whose objective function accounts for both the system utility and the fairness. However, in this case (i) the best trade-off between the two factors would have been decided by the program and not by the application designer, and (ii) it would be possible to obtain solutions completely unbalanced towards one of the two factors.
To face the lack of coordination among users, we aim to modify their utility for the recommended allocations such that they all prefer the same solution, termed the sponsored solution. This solution is the one computed by MILP_first and, since it is the one we want to sponsor, we do not alter the utility the users have for it. Instead, we apply taxation to all other recommended solutions. Since we need to identify the solutions with the desired properties and apply taxation simultaneously (as explained above), we design a third program, MILP_others, dedicated to this. In particular, a solution obtained with this program aims to be similar to the one of MILP_first in terms of users' utility, guarantees a minimum level of system utility, and has taxes computed on the basis of the specific user response model considered. Given this, we can view our framework as composed of three steps. In the first step, MILP_system is executed in order to identify the highest system utility achievable; in the second step, MILP_first is used to identify the sponsored solution; and in the third step, the remaining |R| − 1 non-sponsored solutions are computed by executing MILP_others |R| − 1 times. Note that users may have other requirements that the system should satisfy; for example, they may have constraints regarding the characteristics of the users they are willing to share a task with. Thus, all the MILPs must satisfy these requirements in order to compute a feasible solution. In the next section, we provide a description of the constraints needed to satisfy the properties our framework necessitates and of the different types of user requirements.
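To make the three-step structure concrete, here is a hedged, brute-force stand-in in which exhaustive search over a tiny set of candidate allocations plays the role of each MILP. The taxation step is omitted, and the threshold `gamma` and the max-min fairness proxy are our assumptions, not the paper's exact formulation:

```python
# Illustrative three-step pipeline over an explicitly enumerated
# allocation space (the paper solves each step as a MILP instead).

def pipeline(allocations, user_utils, system_util, R_size=3, gamma=0.8):
    # Step 1 (MILP_system): highest achievable system utility V*.
    v_star = max(system_util(x) for x in allocations)
    # Step 2 (MILP_first): among solutions reaching gamma * V*, sponsor
    # the fairest one (here: maximise the minimum user utility).
    feasible = [x for x in allocations if system_util(x) >= gamma * v_star]
    sponsored = max(feasible, key=lambda x: min(u(x) for u in user_utils))
    # Step 3 (MILP_others): add |R| - 1 further feasible solutions, here
    # the ones closest to the sponsored one in terms of user utilities.
    others = sorted(
        (x for x in feasible if x != sponsored),
        key=lambda x: sum(abs(u(x) - u(sponsored)) for u in user_utils),
    )[: R_size - 1]
    return [sponsored] + others

# Toy instance: four candidate allocations with given utilities.
Us = {"a": 10, "b": 9, "c": 8, "d": 4}.__getitem__
u1 = {"a": 1, "b": 3, "c": 2, "d": 5}.__getitem__
u2 = {"a": 5, "b": 3, "c": 1, "d": 5}.__getitem__
print(pipeline(["a", "b", "c", "d"], [u1, u2], Us, R_size=2))   # -> ['b', 'c']
```

In the toy run, "d" is excluded by the system-utility threshold, "b" is sponsored as the fairest feasible solution, and "c" fills the remaining slot.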

4 Optimisation Problem Formulation

In this section we present the details of the Mixed Integer Linear Programs (MILPs) that compose the framework described in the previous section. For the sake of clarity and without loss of generality, we present the MILPs for the ridesharing scenario.


Ridesharing is a sharing application that can be modelled as specified above. Indeed, a set of passengers I and a set of drivers J aim to find people to share a ride with. Each driver is the owner of a task, i.e., a car, and each passenger aims to join one car. We assume that drivers impose their pick-up point, drop-off point, and pick-up time on the passengers they are sharing the ride with. Passengers, in turn, have preferences over the pick-up point and drop-off point, and their utility decreases with the distance between their preferences and what the driver they are assigned to imposes on them. The pick-up time does not affect users' utility; however, the system imposes a threshold on the maximum difference between the pick-up time desired by a passenger and the one specified by the driver she is assigned to. Note that this is a hard requirement, and thus no allocation that violates it can be recommended. Moreover, both passengers and drivers may require to be in a car without smokers. Finally, they may also require either to share the ride with, or not to be in the same car as, a specific other user.

The requirements just described for the ridesharing scenario are examples of three different types of constraints that users of sharing applications may have. Thus, even though in the following formulations we focus on ridesharing, the constraints presented can be used for a wide range of applications. In order to illustrate the expressiveness of the MILP formulations and the requirements that can be captured, in what follows we describe the characteristics of each possible type of constraint and show how to formulate it using an example.
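The hard requirements in this example can be sketched as a feasibility filter over a candidate car assignment. Field names, time units (minutes), and the 30-minute threshold are illustrative assumptions, not values from the paper:

```python
# Feasibility check for adding one passenger to a driver's car, covering
# the three hard-requirement types: pick-up time threshold, smoking
# requirements, and per-user exclusion ("avoid") lists.

def feasible(passenger, driver, co_riders, max_time_gap=30):
    cabin = [driver] + co_riders + [passenger]
    # Hard threshold on pick-up time difference (system-imposed).
    if abs(passenger["pickup_time"] - driver["pickup_time"]) > max_time_gap:
        return False
    # Smoking requirement: no smoker may ride in a car where anyone forbids it.
    if any(p.get("no_smokers") for p in cabin) and any(p.get("smoker") for p in cabin):
        return False
    # Per-user exclusion lists ("never in the same car as user ...").
    for p in cabin:
        if any(q["id"] in p.get("avoid", ()) for q in cabin if q is not p):
            return False
    return True

driver = {"id": 1, "pickup_time": 540, "no_smokers": True}
print(feasible({"id": 2, "pickup_time": 550}, driver, []))                  # True
print(feasible({"id": 3, "pickup_time": 600}, driver, []))                  # False (time gap)
print(feasible({"id": 4, "pickup_time": 545, "smoker": True}, driver, []))  # False (smoking)
```

In the MILP formulations that follow, such requirements appear as constraints that rule out infeasible allocations rather than as a post-hoc filter.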

4.1 Maximising collective-level objectives

The MILP presented in this section is used in the first step of our framework and aims to compute the maximum system utility achievable without violating any requirement. We start by defining the utility function u_i(x) of a passenger i ∈ I. Her utility is affected by how well the allocation x satisfies her preferences. In particular, here we assume that users have preferences over two aspects that characterise a task: the pick-up point and the drop-off point. Without loss of generality, assume the utility function u_i(x) of each passenger i ∈ I is a sum of partial utility functions α_i(x) and β_i(x), as shown in Equation 1. In particular, α_i(x) is the contribution to the utility of agent i that depends on the difference between the pick-up point of i and that of the driver assigned to her by allocation x. Similarly, β_i(x) depends on the difference between their drop-off points. We assume that these differences are divided into intervals and that all differences in the same interval affect the user's utility in the same way.

u_i(x) = α_i(x) + β_i(x)   (1)

The utility function of the system is a linear weighted combination, as shown by Equation 2, where w_1, w_2, and w_3 are the weights and 1[·] denotes the indicator function. The first weight multiplies the sum of passenger utilities, i.e., the social welfare, the second the number of passengers that are allocated to a car, and the third the number of drivers with allocated passengers. The idea is that the system cares about the sum of the utility achieved by passengers, but also about the number of users that get the possibility of a ride.

U_s(x) = w_1 · Σ_{i∈I} u_i(x) + w_2 · Σ_{i∈I} Σ_{j∈J} x_{i,j} + w_3 · Σ_{j∈J} 1[Σ_{i∈I} x_{i,j} ≥ 1]   (2)

We are now ready to define the objective function of MILP^system, which is to maximise the system's utility:

max_{x∈X} U_s(x)   (3)

We start the description of the constraints by focusing on the allocation variables x_{i,j} ∈ {0, 1}, ∀i ∈ I, ∀j ∈ J. Since each car j ∈ J has a capacity c_j, we need to guarantee that no more than c_j passengers are allocated to j (Constraint 4). We also need to guarantee that each passenger is allocated to at most one car (Constraint 5).

Σ_{i∈I} x_{i,j} ≤ c_j, ∀j ∈ J   (4)

Σ_{j∈J} x_{i,j} ≤ 1, ∀i ∈ I   (5)

We introduce a second set of binary variables h_{i,i′,j}, one for each pair of passengers i, i′ ∈ I and each driver j ∈ J, indicating whether the two passengers share the same car. In particular, Constraints 6 guarantee that h_{i,i′,j} = 1 if passengers i and i′ are both allocated to car j, and h_{i,i′,j} = 0 otherwise.

h_{i,i′,j} ≤ x_{i,j}, ∀i, i′ ∈ I, ∀j ∈ J
h_{i,i′,j} ≤ x_{i′,j}, ∀i, i′ ∈ I, ∀j ∈ J   (6)
h_{i,i′,j} ≥ x_{i,j} + x_{i′,j} − 1, ∀i, i′ ∈ I, ∀j ∈ J

We now describe the constraints needed to guarantee that the partial utility α_i(x) correctly reflects the distance between the pick-up preference of passenger i and what the driver she is assigned to imposes on her; similar constraints are used to compute the partial utility β_i(x). As mentioned before, we consider the possible pick-up distances as divided into |T| intervals, where T is the set of intervals. For each interval n ∈ T, the parameter α_{i,n} indicates i's partial utility if the pick-up distance falls in interval n, and the parameters α_{n,lower} and α_{n,upper} denote the lower and upper bound of interval n, respectively. A set of binary variables k^α_{i,n}, one for each n ∈ T, is used to select the right interval: k^α_{i,n} = 0 if the pick-up distance is in interval n, and k^α_{i,n} = 1 otherwise. Given this, the partial utility α_i(x) is given by Equation 7, and Constraints 8 guarantee that the only variable k^α_{i,n} that equals zero is the one of the interval n to which the pick-up distance belongs. Note that M is a very large number, as typically used in the Big M method [22], while Δ^α_i(x) measures the pick-up distance. In this particular case, we consider the latitude (p_{i,lat,pu} and p_{j,lat,pu}) and longitude (p_{i,long,pu} and p_{j,long,pu}) of the pick-up points, and compute the Manhattan distance between them as shown in Equation 9. This equation is particularly interesting because it shows how we impose a zero partial utility on a passenger when she is not assigned to any car: a fictitious pick-up distance interval n = |T| is introduced in the set T such that the large number M (bigger than any pick-up distance) lies in interval n = |T|, α_{i,|T|} = 0, and Δ^α_i(x) = M if i is not allocated to any j ∈ J.

α_i(x) = Σ_{n∈T} α_{i,n} · (1 − k^α_{i,n}), ∀i ∈ I   (7)

M · k^α_{i,n} + (α_{n,upper} − Δ^α_i(x)) ≥ 0, ∀i ∈ I, ∀n ∈ T
M · k^α_{i,n} + (Δ^α_i(x) − α_{n,lower}) ≥ 0, ∀i ∈ I, ∀n ∈ T   (8)
Σ_{n∈T} k^α_{i,n} ≤ |T| − 1, ∀i ∈ I

Δ^α_i(x) = Σ_{j∈J} x_{i,j} · (|p_{i,lat,pu} − p_{j,lat,pu}| + |p_{i,long,pu} − p_{j,long,pu}|) + (1 − Σ_{j∈J} x_{i,j}) · M, ∀i ∈ I   (9)
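To make the interval mechanics of Equations 7–9 concrete, the following sketch computes the pick-up distance Δ^α_i and the resulting partial utility α_i for a toy passenger. Function names, the interval table, and all numbers are our own illustrative assumptions; in the actual MILP this selection is of course done by the k^α_{i,n} variables rather than a loop.

```python
# Illustrative sketch of Equations 7-9: interval-based partial utility.
# All names and numbers are hypothetical, not taken from the deliverable.
M = 10**6  # "Big M": larger than any real pick-up distance

def manhattan(p, q):
    """Manhattan distance between two (lat, long) points (Equation 9)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def delta_alpha(passenger_pu, driver_pu):
    """Pick-up distance; M when the passenger is unallocated (driver_pu is None)."""
    if driver_pu is None:
        return M
    return manhattan(passenger_pu, driver_pu)

def partial_utility(distance, intervals):
    """Equation 7: return the utility of the interval the distance falls into.

    `intervals` is a list of (lower, upper, utility) triples covering [0, M];
    the last, fictitious interval contains M and carries utility 0."""
    for lower, upper, utility in intervals:
        if lower <= distance <= upper:
            return utility
    raise ValueError("intervals must cover the distance")

intervals = [(0, 2, 10), (2, 5, 6), (5, 20, 1), (20, M, 0)]  # last one fictitious
print(partial_utility(delta_alpha((0, 0), (1, 2)), intervals))  # distance 3 -> 6
print(partial_utility(delta_alpha((0, 0), None), intervals))    # unallocated -> 0
```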

In defining constraints due to requirements, we differentiate between strict constraints, non-strict constraints, and potential constraints.

Strict constraints impose that every group of users sharing a car must have the same value for a given user parameter. For example, in the ridesharing scenario, a strict constraint is imposed on the day of the ride and, thus, all users allocated to the same car must have the same value for the parameter p^day. In particular, if the two users are passengers, then the strict constraint is Constraint 10, while if the two users are a passenger and a driver, then Constraint 11 must be imposed.

|h_{i,i′,j} · p^day_i − h_{i,i′,j} · p^day_{i′}| ≤ 0, ∀i, i′ ∈ I, ∀j ∈ J   (10)

|x_{i,j} · p^day_i − x_{i,j} · p^day_j| ≤ 0, ∀i ∈ I, ∀j ∈ J   (11)

Non-strict constraints impose a threshold on the degree to which a passenger's requirement may be missed. In the ridesharing scenario, this type of constraint is applied to the difference between the pick-up time specified by the passenger and the one of the driver she is sharing the car with. We formulate this type of constraint as shown by Constraint 12, where p^time_i and p^time_j are parameters indicating the pick-up time of passenger i and driver j, respectively, and p^time_threshold is the threshold.

Σ_{j∈J} x_{i,j} · |p^time_i − p^time_j| ≤ p^time_threshold, ∀i ∈ I   (12)

Finally, we discuss potential constraints. Constraints of this type do not always impose a condition that must be satisfied by a solution. Indeed, a user may have specific requirements about a characteristic of the users she is sharing the task with, or she may be indifferent with respect to this characteristic. For example, in the ridesharing scenario, a user may require to be in a car without smokers, while another user, even if she is not a smoker, may not have such a requirement. In order to formulate the constraint that guarantees these requirements, we introduce a new binary variable v^smoker_j, one for each car j ∈ J. Constraints 13 impose that v^smoker_j = 1 if at least one user among the passengers and the driver sharing car j requires to be in a car without smokers. Parameter p^reqNoSmoke_i indicates the "no smoker request" of passenger i ∈ I: if p^reqNoSmoke_i = 1 the passenger requires to be in a car with no smokers, while if p^reqNoSmoke_i = 0 the passenger has no such preference; p^reqNoSmoke_j is similarly defined for driver j ∈ J. Constraint 14 guarantees that if none of the users in a car j has a "no smokers" requirement, then v^smoker_j = 0. Now, if v^smoker_j = 1, then we need to impose that none of the users in car j smokes during the ride. This is achieved by Constraint 15, where parameter p^NoSmoke_i = 0 if passenger i ∈ I wants to smoke in the car and p^NoSmoke_i = 1 otherwise; p^NoSmoke_j is similarly defined for driver j ∈ J.

v^smoker_j ≥ x_{i,j} · p^reqNoSmoke_i, ∀j ∈ J, ∀i ∈ I
v^smoker_j ≥ p^reqNoSmoke_j, ∀j ∈ J   (13)

p^reqNoSmoke_j + Σ_{i∈I} x_{i,j} · p^reqNoSmoke_i ≥ v^smoker_j, ∀j ∈ J   (14)

p^NoSmoke_j + Σ_{i∈I} x_{i,j} · p^NoSmoke_i ≥ (c_j + 1) · v^smoker_j, ∀j ∈ J   (15)

To conclude, we consider the case in which the system allows a user to specify whether she wants, or does not want, to share a task with another specific user. The constraints used to guarantee these requirements are strict constraints and are formalised as follows. Constraint 16 guarantees that passengers i and i′ are allocated to the same car, and Constraint 17 that passenger i and driver j are; Constraints 18 and 19 guarantee the opposite.

Σ_{j∈J} h_{i,i′,j} ≥ 1   (16)

x_{i,j} ≥ 1   (17)

Σ_{j∈J} h_{i,i′,j} ≤ 0   (18)

x_{i,j} ≤ 0   (19)
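The logic that the smoking-related Constraints 13–15 encode can be checked directly on a concrete car. The sketch below is our own illustration of that logic (function and field names are assumptions), not the MILP formulation itself.

```python
# Sketch of the "no smokers" potential constraint (Constraints 13-15).
# Function and field names are ours, for illustration only.

def smoking_feasible(driver, passengers):
    """Return True if the car satisfies the logic of Constraints 13-15.

    Each user is a dict with:
      req_no_smoke: 1 if the user requires a smoke-free car (p^reqNoSmoke)
      no_smoke:     1 if the user will not smoke during the ride (p^NoSmoke)
    """
    users = [driver] + passengers
    # Constraints 13-14: v_j = 1 iff someone in the car requires no smokers.
    v = 1 if any(u["req_no_smoke"] == 1 for u in users) else 0
    # Constraint 15: if v_j = 1, every user in the car must be a non-smoker.
    if v == 1 and any(u["no_smoke"] == 0 for u in users):
        return False
    return True

non_smoker = {"req_no_smoke": 1, "no_smoke": 1}
tolerant   = {"req_no_smoke": 0, "no_smoke": 1}
smoker     = {"req_no_smoke": 0, "no_smoke": 0}

print(smoking_feasible(tolerant, [smoker]))    # True: nobody objects
print(smoking_feasible(non_smoker, [smoker]))  # False: driver requires no smokers
```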

4.2 Maximising fairness among users

MILP^first constitutes the second step of our framework. The aim of this program is to identify the first solution that will be presented to users; note that this is the solution we would like all users to choose among the ones in the recommendation set. The solution the program provides guarantees a minimum level of system utility while focusing on an objective function that is oriented towards being beneficial to the users. In the following formulation we assume, without loss of generality, that the program aims to maximise user fairness. This translates into an objective function that minimises the difference between the utilities of every pair of passengers i, i′ ∈ I (Equation 20).

min_{x∈X} Σ_{i∈I} Σ_{i′∈I | i′>i} |u_i(x) − u_{i′}(x)|   (20)

As mentioned before, the solution provided by this program must guarantee a minimum level of system utility. Given the maximum utility V* the system can achieve (computed by MILP^system) and a parameter d ∈ [0, 1], the required guarantee can be obtained by imposing Constraint 21.

U_s(x) ≥ V* · d   (21)
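The fairness objective of Equation 20 and the utility floor of Constraint 21 are easy to evaluate for a candidate allocation. A minimal sketch, with our own function names and toy utility values:

```python
from itertools import combinations

# Sketch of Equation 20 (pairwise fairness objective) and Constraint 21
# (system-utility floor). Names and numbers are illustrative assumptions.

def unfairness(utilities):
    """Sum of |u_i - u_i'| over unordered passenger pairs (Equation 20)."""
    return sum(abs(a - b) for a, b in combinations(utilities, 2))

def meets_utility_floor(system_utility, v_star, d):
    """Constraint 21: U_s(x) >= V* . d."""
    return system_utility >= v_star * d

print(unfairness([4, 4, 4]))               # 0: perfectly fair
print(unfairness([10, 0, 0]))              # 20: |10-0| + |10-0| + |0-0|
print(meets_utility_floor(80, 100, 0.75))  # True: 80 >= 75
```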

In addition to this, all the constraints described for MILP^system must also hold for MILP^first.

4.3 Maximising user coordination

The last step of our framework aims to compute the remaining |R| − 1 solutions (one has already been identified by MILP^first). To achieve this, MILP^others is executed |R| − 1 times. A solution identified by MILP^others has the following three characteristics: it guarantees a minimum level of system utility (as MILP^first does); it is different from the solutions previously computed; and it artificially modifies the utility each passenger i ∈ Ī has for this solution such that all of them would prefer the solution x* identified by MILP^first. Note that the set Ī ⊆ I is composed of all passengers i ∈ I such that Σ_{j∈J} x*_{i,j} = 1, i.e., only the utility of passengers who are assigned to a driver in solution x* is artificially modified. This last characteristic depends on the response model of the users, and is achieved by using taxation. When considering the noiseless response model and the constant noise model, in order for a passenger i ∈ Ī to select/prefer the solution computed by MILP^first, we need to guarantee that her utility for the solution x currently being computed is lower than her utility for allocation x*, i.e., the solution computed by MILP^first. We achieve this by imposing a tax τ_i(x) on passenger i for allocation x that decreases the utility i has for x. Constraint 23 guarantees that x* is the preferred solution of passenger i. In this constraint, ε is a very small number used to express the fact that u_i(x*) must be strictly higher than u_i(x) − τ_i(x). However, this constraint imposes a lower bound on the tax τ_i(x) but no upper bound; thus, potentially, the program could assign an infinite value to τ_i(x). This is not desirable, because no real options would effectively be given to the passengers if all but one could be infinitely taxed. Moreover, we also want to avoid the case in which τ_i(x) is higher than the minimum tax required, as the system should not aim to unnecessarily extract excessive utility from its participants. Thus, the upper bound of τ_i(x) should be equal to its lower bound. We obtain this by using the Big M method [22], which involves changing the objective function as shown by Equation 22.

min Σ_{i∈I} |u_i(x*) − u_i(x) + τ_i(x)| + M · Σ_{i∈I} (u_i(x*) − ε − u_i(x) + τ_i(x))   (22)

u_i(x*) − ε ≥ u_i(x) − τ_i(x), ∀i ∈ Ī   (23)
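Under the constant noise model, Constraint 23 together with the minimisation above drives each tax to its lower bound, i.e. the smallest non-negative value that makes x* strictly preferred. A minimal sketch of that resulting tax (the function name and the value of ε are our own assumptions):

```python
# Sketch: the minimal tax implied by Constraint 23 once the Big M objective
# pushes the tax's upper bound down to its lower bound. Names are ours.
EPS = 1e-3  # the small constant epsilon in Constraint 23 (assumed value)

def minimal_tax(u_sponsored, u_alternative, eps=EPS):
    """Smallest tau >= 0 such that u_sponsored - eps >= u_alternative - tau."""
    return max(0.0, u_alternative - u_sponsored + eps)

# Passenger already prefers x*: no tax is needed.
print(minimal_tax(u_sponsored=10.0, u_alternative=7.0))  # 0.0
# Passenger prefers the alternative by 2: a tax just above 2 restores x*.
print(minimal_tax(u_sponsored=5.0, u_alternative=7.0))   # 2.001
```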

Similarly to the constant noise case, MILP^others needs to impose a lower bound on the tax also for the logit response model (Constraint 25), and to modify the objective function (Equation 24) such that the tax is the lowest possible. However, since in this case the probability with which a passenger selects solution x is proportional to the utility that x represents for that passenger, imposing a lower bound on the tax means imposing a lower bound on the selection probability of x* for each passenger i ∈ Ī. In Constraint 25, ψ is the minimum selection probability required for x*. Note that this constraint is linear, because everything except u_i(x) and τ_i(x) is a parameter given as input to the program.

min Σ_{i∈I} |u_i(x*) − u_i(x) + τ_i(x)| + M · Σ_{i∈I} ( u_i(x*) − ψ · ( Σ_{x′∈R} (u_i(x′) − τ_i(x′)) + u_i(x) − τ_i(x) ) )   (24)

u_i(x*) ≥ ψ · ( Σ_{x′∈R} (u_i(x′) − τ_i(x′)) + u_i(x) − τ_i(x) ), ∀i ∈ Ī   (25)

Finally, MILP^others needs to guarantee that solution x is different from the ones previously computed. Depending on the required degree of difference between two solutions, we can formulate the constraints as follows. Constraint 26 guarantees that the solution x differs from the ones already computed, i.e., the ones in set R, at least in the allocation of one passenger, while Constraint 27 requires that each ride (except the one with no passenger allocated to it) of solution x differs from the corresponding ride in every solution x′ ∈ R at least in the allocation of one passenger.

Σ_{i∈I} Σ_{j∈J} |x_{i,j} − x′_{i,j}| > 1, ∀x′ ∈ R   (26)

Σ_{i∈I} |x_{i,j} − x′_{i,j}| > 1, ∀x′ ∈ R, ∀j ∈ J   (27)
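The left-hand side of Constraint 26 is simply the element-wise difference between two binary allocation matrices. The following sketch (representation and names are our own) shows that moving a single passenger to another car changes two entries, so the total difference is 2:

```python
# Sketch of the difference measure in Constraints 26-27. Allocations are
# represented as dicts passenger -> driver (absent = unallocated); the
# encoding and names are illustrative assumptions.

def entry_matrix(allocation, passengers, drivers):
    """Binary x_{i,j} matrix from a passenger -> driver map."""
    return {(i, j): int(allocation.get(i) == j) for i in passengers for j in drivers}

def total_difference(x, x_prime):
    """Left-hand side of Constraint 26: sum over i, j of |x_ij - x'_ij|."""
    return sum(abs(x[k] - x_prime[k]) for k in x)

passengers, drivers = ["p1", "p2"], ["d1", "d2"]
x  = entry_matrix({"p1": "d1", "p2": "d1"}, passengers, drivers)
x2 = entry_matrix({"p1": "d2", "p2": "d1"}, passengers, drivers)  # p1 moved car

print(total_difference(x, x2))  # 2: p1 left d1 and joined d2
```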

In addition to these, all the constraints described for MILP^system and MILP^first must also hold for MILP^others.

5 Experimental Evaluation

In order to demonstrate the effectiveness of our approach, we run two sets of simulated experiments. In the first set, we compare the recommendation set generated by our diversity-aware approach with a set of solutions that maximise the system's utility and provide no support for user coordination; essentially, the benchmark set is produced without considering the need for consistency across users' selections. In this way, we will demonstrate that our diversity-aware approach is strictly better for the generation of recommendation sets. In the second set of experiments, we compare our approach to that of allocating a single solution that maximises system utility, assuming users are characterised by a utility threshold of acceptance that is unknown to the system. This latter set of experiments will show that recommending a set of solutions through our approach can produce results that are as good as those a direct allocation would have produced.

5.1 Experiment design

For both types of experiments we consider different configurations, each of which is characterised by: the population of users, specifically the number of users and the percentage of drivers among them; the value of the threshold d used in MILP^first and MILP^others; the user response model; and, for the logit model, the probability with which the sponsored allocation x* is selected. In particular, we run experiments for 10 and 20 users, for each percentage of drivers among 20, 30, and 40 percent. We vary the utility threshold d for MILP^first and MILP^others between the values 0.5, 0.75, and 1. The user models evaluated are the constant noise model and the logit model and, for the latter, the probability of users selecting the sponsored recommendation is either 60 or 80 percent. Every configuration is repeated 100 times. We choose not to evaluate the noiseless response model because, by construction, it is a special case of the constant noise model with the best performance, i.e., with the probability of the most preferred option set to 1. Without loss of generality, we set the weights w_1 = w_2 = w_3 = 1 in the system-level utility function. The metrics used for each experiment are system utility, fairness (computed as described in the previous section), number of drivers with allocated passengers, and number of allocated passengers. Note that all evaluations are performed after user selections have been made; thus, rides that have not been chosen by all the users allocated to them are not considered in the performance evaluation. Finally, we highlight that the metrics proposed account for the taxation imposed on the solutions: each user, given her final allocation, has any taxation imposed on her deducted from her effective utility, which in turn affects the evaluation of the system-level utility (Eq. 2) and the fairness of the allocation (Eq. 20).
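The experimental grid described above can be enumerated directly; the encoding below is our own (the deliverable specifies the values but not this representation).

```python
from itertools import product

# Sketch of the experimental configurations described above.
n_users = [10, 20]
driver_pct = [0.2, 0.3, 0.4]
thresholds_d = [0.5, 0.75, 1.0]
# Response models: constant noise, and logit with sponsored-selection
# probability 0.6 or 0.8.
models = [("constant", None), ("logit", 0.6), ("logit", 0.8)]

configs = list(product(n_users, driver_pct, thresholds_d, models))
print(len(configs))  # 54 configurations, each repeated 100 times
```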
The procedure used to obtain the experimental results is the following: First we generate the desired number of users, divided into drivers and passengers as prescribed by the configuration. For each user, we randomly generate the latitude and longitude of the pick-up point and drop-off point (we restrict the variability of this coordinate to 50), the pick-up time, whether she allows smokers in the car, and whether she

[Figure 1: Results on System Utility (a), Fairness (b), and the number of Allocated Passengers and Drivers with Passengers (c) for the experiments with a recommendation set benchmark, with 20 users, 30% drivers, and constant noise.]

wants to smoke. Then we generate |R| = 7 solutions (where possible) both for the case without coordination support in which the goal of the application is solely to maximise the system’s utility, and following our approach. Finally, we simulate user behaviour according to the respective user response model and compute the metrics listed above. This final step of our evaluation changes slightly depending on the benchmark we are comparing our approach to. As mentioned before, we consider two benchmarks: a benchmark with a recommendation set and a benchmark where a single allocation is proposed to the users. Note that in both cases, the solutions computed aim to maximise system-level utility. When we consider the benchmark with a recommendation set, we compute the two sets of |R| solutions computed as described above. Then, both for the benchmark case and for our approach, we recommend to each passenger the rides she is allocated to by these solutions. Each passenger then, independently, and without knowledge of other passengers’ behaviour, selects a solution in accordance with her utility over each option, and her response model. Given these independent choices, we identify which rides have been selected by all the users allocated to them and, on the basis of this, we evaluate the performance of both approaches. In the comparison with the benchmark with a single allocation, we assume that passengers select rides that satisfy a minimum level of user utility, i.e. there is a threshold over the user utility and solutions that do not satisfy this threshold cannot be selected. For example, in the case of ridesharing, we can assume that if the utility of a passenger for a ride is lower than her utility for taking the train, then she chooses


not to join that ride. Given this, we consider the set of |R| solutions computed using our diversity-aware approach and, for the benchmark, the solution computed by MILP^system. We assume that users apply this utility threshold and thus all the rides that do not satisfy the threshold are removed from the ones that can be selected. This can be understood as users implicitly rejecting these solutions, and we therefore make use of this label in the corresponding figures. After this, each user selects one of the remaining options (note that in the benchmark case each user has either one ride or no option available). Now, as in the case of the previous benchmark, given these independent choices, we identify which rides have been selected by all users allocated to them and, on the basis of this, we evaluate the performance of both approaches. A key point to remember is that our diversity-aware approach influences users' utility over allocations through the use of taxation, and that this directly impacts selection behaviour. We expect that this will aid in aligning user selections, and therefore lead to greater performance in terms of allocated passengers and drivers with passengers and, consequently, in terms of system-level utility.
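The rejection step just described can be sketched as follows: options below the passenger's (system-unknown) acceptance threshold are discarded, and the passenger picks among the rest. Function names and the uniform pick among admissible options are our own simplifying assumptions.

```python
import random

# Sketch of simulated selection with rejection: a passenger discards options
# below her utility threshold and chooses among the remainder. Names are ours.

def select_with_rejection(option_utilities, threshold, rng):
    """Return the index of the chosen option, or None if all are rejected."""
    admissible = [k for k, u in enumerate(option_utilities) if u >= threshold]
    if not admissible:
        return None  # the passenger implicitly rejects every option
    return rng.choice(admissible)

rng = random.Random(0)
print(select_with_rejection([3.0, 8.0, 5.0], threshold=4.0, rng=rng) in (1, 2))  # True
print(select_with_rejection([3.0, 2.0], threshold=4.0, rng=rng))                 # None
```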

5.2 Results

Below we present and discuss the results of our experiments for the case of 20 users with 30% drivers, as representative of all experiments; the results for other population sizes and driver percentages were qualitatively equivalent. We analyse each set of experiments (recommendation set and single allocation) separately, and present the constant and logit noise cases for each of them.

[Figure 2: Results on System Utility (a), Fairness (b), and the number of Allocated Passengers and Drivers with Passengers (c) for the experiments with a recommendation set benchmark, with 20 users, 30% drivers, and logit noise.]

5.2.1 Experiments with set recommendation benchmark

This subsection discusses the experimental results from the set of experiments where the benchmark also presents the passengers with a recommendation set. Focusing first on the constant noise case (Fig. 1), we notice that there is a near-linear trade-off between fairness and system utility (Fig. 1a, 1b), a trade-off that can be modulated by changing the utility threshold d. We also note that when d = 1, which forces the MILP^first and MILP^others optimisations to prioritise system-utility-maximising solutions, the diversity-aware approach slightly outperforms the benchmark set in terms of system utility, number of allocated passengers, and number of drivers with passengers (Fig. 1a, 1c). Reducing the value of parameter d offers better results on fairness (Fig. 1b) in comparison to the benchmark, but at the cost of reduced system utility (Fig. 1a). Moving on to the logit noise case (Fig. 2), we notice once more the role of the utility threshold parameter d in trading off system utility and fairness (Fig. 2a, 2b). Further, the only scenario in which we perform slightly worse than the benchmark, in terms of user allocation and system utility, is for ε = 0.6 and d = 0.5, i.e., when we optimise mostly for fairness and try not to influence user decisions too much. In terms of fairness, we only under-perform for d = 1, a result emerging from the large number of users with zero utility that the benchmark produces (Fig. 2a, 2c). Otherwise, we notice that the diversity-aware procedure significantly outperforms the set recommendation benchmark in terms of system utility, number of allocated passengers, and number of drivers with passengers (Fig. 2a, 2c).

Summarising the results of this subsection, we note the significant improvement in performance afforded by our diversity-aware system. This demonstrates how important it is to be aware of the diversity among users when coordinating sharing economy applications. These results increase in significance once we consider that there is no coordination between users, and all the improvement is the result of implicit system interactions with each individual user; users remain free to choose amongst the recommended alternatives. We conclude that making set recommendations by simply listing a set of system-optimal alternatives is a significantly sub-optimal procedure.

5.2.2 Experiments with single allocation benchmark, and rejection

This subsection discusses the experimental results from the set of experiments involving a diversity-aware set recommendation and a benchmark single allocation, while considering passengers that can reject rides. Focusing first on the constant noise case (Fig. 3), we notice that our system under-performs in terms of system utility (Fig. 3a), a loss that is compensated by its increased performance in terms of fairness (Fig. 3b). We further notice, as above, that there is a near-linear trade-off between fairness and system utility, which can be modulated by changing the utility threshold d (Fig. 3a, 3b). Finally, in the logit noise case for the second set of experiments (Fig. 4), we notice once more the role of the utility threshold parameter d

[Figure 3: Results on System Utility (a), Fairness (b), and the number of Allocated Passengers and Drivers with Passengers (c) for the experiments with a single allocation benchmark, and rejection, with 20 users, 30% drivers, and constant noise.]

in trading off system utility and fairness (Fig. 4a, 4b). Importantly, we note that our diversity-aware approach, which recommends a set and allows users to freely choose their preferred option, matches the performance of the benchmark, which allocates a single solution to each user (Fig. 4a, 4c). These results hold when we do not wish to emphasise fairness, i.e., in the d = 1 scenario. The results of this subsection show that our diversity-aware set recommendation system can consistently provide results that are equivalent to those of allocating a single item to each user. This shows that providing users with options can be essentially free in terms of system utility, even before considering any other beneficial effects that could result from allowing a system to recommend, rather than allocate, solutions to its user base. Moreover, we are afforded additional options in trading off system utility against fairness.

6 Conclusion

We presented a methodology for the coordination of user collectives, in the absence of communication among agents. Our diversity-aware approach significantly outperforms the system utility maximising procedure, demonstrating that the recommendation of sets of solutions in sharing applications requires explicitly handling the uncertainty over user behaviour. Furthermore, we showed how our procedure can match the performance of a direct allocation of users to resources. This significant result demonstrates that we can allow users to have a choice in their alternatives, at no loss to the system. Lastly, our procedure allows for the adaptive trade-off between system-level utility and fairness of final allocation.


Future work will examine handling beliefs over user preferences in the context of recommending a set of options for sharing economies. Specifically, we are studying the inclusion of active learning procedures in the mixed integer linear program formulations. Further, we are interested in studying the robustness of our procedures to varying degrees of incorrect assumptions.

REFERENCES

[1] Haris Aziz, Markus Brill, Felix A. Fischer, Paul Harrenstein, Jérôme Lang, and Hans Georg Seedig, ‘Possible and necessary winners of partial tournaments’, J. Artif. Intell. Res. (JAIR), 54, 493–534, (2015).
[2] Yoram Bachrach, Sofia Ceppi, Ian A. Kash, Peter Key, Filip Radlinski, Ely Porat, Michael Armstrong, and Vijay Sharma, ‘Building a personalized tourist attraction recommender system using crowdsourcing’, in Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems, pp. 1631–1632, (2014).
[3] Yoram Bachrach, Edith Elkind, Reshef Meir, Dmitrii Pasechnik, Michael Zuckerman, Jörg Rothe, and Jeffrey S. Rosenschein, ‘The Cost of Stability in Coalitional Games’, 122–134, Springer Berlin Heidelberg, (2009).
[4] C. Boutilier, ‘A POMDP formulation of preference elicitation problems’, in Proceedings of the 18th National Conference on Artificial Intelligence, pp. 239–246, (2002).
[5] D. Braziunas, ‘Computational approaches to preference elicitation’, Technical report, University of Toronto, (2006).
[6] Sofia Ceppi and Ian Kash, ‘Personalized payments for storage-as-a-service’, SIGMETRICS Perform. Eval. Rev., 43(3), 83–86, (2015).
[7] Urszula Chajewska, Daphne Koller, and Ronald Parr, ‘Making rational decisions using adaptive utility elicitation’, in Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pp. 363–369, AAAI Press, (2000).
[8] Kim-Sau Chung, ‘On the existence of stable roommate matchings’, Games and Economic Behavior, 33(2), 206–230, (2000).
[9] John P. Dickerson, Ariel D. Procaccia, and Tuomas Sandholm, ‘Optimizing kidney exchange with transplant chains: theory and reality’, in International Conference on Autonomous Agents and Multiagent Systems, AAMAS, pp. 711–718, (2012).
[10] Krzysztof Gajos and Daniel S. Weld, ‘Preference elicitation for interface optimization’, in Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, UIST ’05, pp. 173–182, New York, NY, USA, ACM, (2005).
[11] D. Gale and L. S. Shapley, ‘College admissions and the stability of marriage’, The American Mathematical Monthly, 69(1), 9–15, (1962).
[12] Christophe Gonzales and Patrice Perny, ‘GAI networks for utility elicitation’, KR, 4, 224–234, (2004).
[13] Shengbo Guo and Scott Sanner, ‘Real-time multiattribute Bayesian preference elicitation with pairwise comparison queries’, in International Conference on Artificial Intelligence and Statistics, pp. 289–296, (2010).
[14] Dan Gusfield and Robert W. Irving, The Stable Marriage Problem: Structure and Algorithms, MIT Press, Cambridge, MA, USA, (1989).
[15] Neil Houlsby, Ferenc Huszar, Zoubin Ghahramani, and Jose M. Hernández-Lobato, ‘Collaborative Gaussian processes for preference learning’, in Advances in Neural Information Processing Systems, pp. 2096–2104, (2012).
[16] Daniel Kahneman and Amos Tversky, ‘Choices, values, and frames’, American Psychologist, 39(4), 341, (1984).
[17] R. L. Keeney and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Trade-Offs, Cambridge University Press, (1993).

47 of 87


Deliverable D6.4

500

2500

400

2000 Fairness

System Utility

c SmartSociety Consortium 2013-2017

300 200 100 0

1500 1000 500

Limit 0.50

Limit 0.75 Utility Threshold d

0

Limit 1.00

Limit 0.50

Diversity-Aware Set, with rejection, ǫ = 0.6 Diversity-Aware Set, with rejection, ǫ = 0.8 Benchmark Single, with rejection

Limit 0.75 Utility Threshold d

Limit 1.00

Diversity-Aware Set, with rejection, ǫ = 0.6 Diversity-Aware Set, with rejection, ǫ = 0.8 Benchmark Single, with rejection

(a)

Number of Users

(b)

10

5

0 Limit 0.50 no. no. no. no. no. no.

Limit 0.75 Utility Threshold d

Limit 1.00

Drivers with Passengers; Diversity-Aware Set, with rejection, ǫ = 0.6 Allocated Passengers; Diversity-Aware Set, with rejection, ǫ = 0.6 Drivers with Passengers; Diversity-Aware Set, with rejection, ǫ = 0.8 Allocated Passengers; Diversity-Aware Set, with rejection, ǫ = 0.8 Drivers with Passengers; Benchmark Single, with rejection Allocated Passengers; Benchmark Single, with rejection

Figure 4: Results on System Utility (a), Fairness (b), and the number of Allocated Passengers and Drivers with Passengers (c) for the experiments with single allocation benchmark, and rejection, with 20 users, 30% drivers, and logit noise.






C  Preference elicitation paper

Coarse Preferences for Computationally Efficient Preference Elicitation and Optimisation Procedures

Pavlos Andreadis, Michael Rovatsos and Subramanian Ramamoorthy¹

Abstract. Decisions dependent on users' preferences can be made more efficient by exploiting the structure of each individual user's representation of the space of solutions to a choice problem. Moreover, inference regarding preferences can be made more efficient by exploiting that same structure. For instance, in recommender systems, knowing which products users do (not) distinguish between can help reduce the number of queries required before being able to make an optimal recommendation, while also reducing the time required for determining that recommendation. We propose a decision theoretic model for representing the choice problem, personalised to each potential user, in terms of partitions over the system's original space of outcomes. To the extent that an individual user may distinguish between fewer of these outcomes, we say that their representation of preferences over their unique partition is an instance of "coarse preferences". We present an elicitation procedure for this model. Starting with a family of user-specific partitions over a common space of outcomes, our approach achieves better convergence to a belief regarding the representation for a new unknown user than the alternative of solving this problem over the original space of outcomes without reasoning about this aspect of structure. Moreover, the reduced dimensionality of our model allows for a significant reduction in the computational time required for making recommendations, especially to groups of users. We present methodologies for learning the coarse structure offline and validate our approach on two real-world data-sets.

Categories and Subject Descriptors [Information systems] → Recommender systems; [Theory of computation] → Online learning algorithms, Active learning

General Terms Algorithms; Theory

Keywords Categorical Thinking; Coarse Preferences; Decision Theory; Group Recommendation; Preference Elicitation; Recommender Systems

1  Introduction

In order for a system to make effective decisions concerning a user, it needs access to an accurate description of that user's preferences.¹

¹ School of Informatics, University of Edinburgh, United Kingdom, email: p.andreadis@sms.ed.ac.uk, mrovatso@inf.ed.ac.uk, sramamoo@staffmail.ed.ac.uk


Often, the interaction with the user is limited, forcing the system to construct this description from minimal information. Further, many modern applications require making decisions over a number of users, using a limited amount of resources, and with inter-dependencies among the solutions afforded to each user. This can lead to decisions over users with complexity that is combinatorial in the complexity of user preference representations. As a result, it becomes important that these models are not only precise, but also minimally complex, in a way that allows for faster decision making. We develop a decision theoretic model of coarse preferences: a formal methodology for eliciting preferences over, and making decisions with, structured representations of the space of solutions. We show how this results in improved quality of elicitation under limited data scenarios. Further, we demonstrate how our approach can significantly reduce the computational time spent on making belief-state updates and evaluations. Moreover, we give theoretical guarantees for significant reduction in computational complexity when performing multiple-user preference elicitation over larger data-sets. This work focuses on the dual problem of learning a user preference model, and making a decision given that model. When this decision concerns the identification of high quality items for recommendation, it falls under the purview of the Recommender Systems (RS) literature [27, 39]. Though there is a vast amount of work in this field, we focus on Preference Elicitation (PE), which can be cast as a RS sub-area. PE allows for constructing formal decision theoretic models that provide a number of advantages over commonly used commercial approaches, such as Collaborative Filtering [3, 4]. Specifically:

• Active Learning [11, 23] - the ability to reason about which interaction with the user(s) will lead to a better understanding of their behaviour.
• Modular user response models [6, 18, 22] - the ability to update our models using interactions from across a system, thus easily integrating a variety of user behavioural data (including from newly identified data sources) into the same model.
• Probabilistic belief models [11, 35] - explicit representations of uncertainty over user preferences.

A crucial aspect in determining the underlying parametric model by which to represent a user's preferences is the trade-off between ease of elicitation and generality of the model [21]. The work presented here is complementary to existing forms of model abstraction in the preference elicitation literature, such as those making use of Additive Independence [29] and Generalised Additive Independence [2, 7, 21]. We present a novel abstraction methodology and decision theoretic model based on the Cognitive Science concept of Categorical Thinking [40, 45]. Work in this area shows how people use simplified representations of reality when making decisions. Our model is able



to represent such behaviour in a way that is consistent with the von Neumann-Morgenstern utility theorem [44], and captures the personalised aspect of such representations while still allowing for decision making across different users. We show that we can match, and conditionally beat, the elicitation performance of standard, non-coarse approaches, with the added benefit of reduced elicitation and optimisation times. We demonstrate these results on two real-world data-sets, namely the Sushi [28] and Housing [24] data-sets. Complementary analysis on generated ridesharing data demonstrates how the difference in performance between standard and coarse approaches could potentially increase in other domains. A limitation, shared by e.g. Collaborative Filtering (though less so by the standard elicitation model), is the need for access to a corpus of data for computing the prior coarse structures. Moreover, we assume that these structures remain constant in the population, and are therefore indicative of the behaviour of future users. The next section outlines existing work in the Preference Elicitation literature, as well as work that is related to the issue of modelling categorical thinking in humans. Section 3 formally outlines the problem of eliciting one or more users' preferences with the aim of making a recommendation, while also presenting a benchmark model from the literature. Section 4 details the problem of representing users as categorical decision makers, which introduces Section 5, where we present our theory of coarse preferences. The latter includes a proof of conformance to the von Neumann-Morgenstern Utility Theorem [44], a procedure for the representation of coarse user types, and a demonstration of compatibility with additive models of utility. Section 6 presents a general framework for the elicitation of coarse preferences, as well as its instantiation for our experiments in Section 8. Section 7 provides theoretical guarantees on the computational benefits that can be achieved by utilising coarse preferences in multi-agent preference elicitation and recommendation problems. Finally, Section 9 concludes.

2  Related Work

Decision Theory [17, 44] deals with the analysis of the goal-directed behaviour of an individual faced with non-strategic uncertainty [20]. Preference Elicitation is the process of learning an agent's decision model in decision theoretic problems. This decision model usually takes the form of a utility function: a mapping from the space of solutions to the space of real numbers. The prior over utility functions is usually defined by a parametric model that constrains it according to some behavioural assumptions about the user. Solutions are then identified by assignments to a set of variables. The typical approach is to assume that these exhibit some structural independence, usually additive independence [29]. Constraining the space of utility functions in this manner explicitly addresses an expressivity-tractability trade-off. Examples of such work can be found in [7, 14, 18, 22, 37], among others. Non-parametric approaches, such as Gaussian Processes [38], have also been used to represent utility functions (e.g. in [23, 26]). There is also work on eliciting preferences in the form of rankings of alternatives, without the mediation of a utility function (e.g. in [26]), though such rankings do not generalise beyond the set of ranked outcomes. Another way of reducing the space of possible utility functions is by assuming the existence of a limited number of possible user profiles. This user typing makes use of available historical data in order to cluster users according to their behaviour in the domain [13]. Making a decision while taking a user's preferences into account can be cast as making a recommendation. Under this perspective, preference elicitation is part of the Recommender Systems (RS) literature [27, 39]. The most widely and successfully applied RS approach is that of Collaborative Filtering [3, 4]. We do not compare our work with the broader scope of RS for the reasons outlined in Section 1. Though the approaches described above successfully capture the effect different variable assignments have on the user's preferences, they fail to capture dependencies within their individual or combined variable domains. Such dependencies typically emerge because the user does not differentiate between some assignments. Cognitive science describes this phenomenon as categorical thinking [40, 45]. Coarse thinking [33, 34] is one attempt at modelling such behaviour, where decision makers predict outcomes by using the probability distribution of the most likely category, rather than employing a Bayesian model. The notion of concept [1, 25] is also relevant to our approach. Concepts are essentially subsets of the outcome space that satisfy some specified formulas. The problem of concept learning involves the identification of a concept given a sequence of labeled samples or an interactive querying procedure. Recently, Boutilier et al. [8] presented an on-line feature elicitation procedure, where uncertainty over concepts was reduced enough to make an optimal decision. They expand on this in [9, 10] by introducing uncertainty over the utility function. Though we are interested in eliciting utility functions over a coarser representation of the outcome space, which a set of concepts could be, we do not require the user to label these subspaces. Our approach to coarse preferences is related to Bjorndahl et al. [5]'s language-based games. A generalization of psychological games [19], their approach can capture dependent preferences [31] as well as coarse beliefs [33] or categorical thinking. We borrow from their approach in referring to a group of assignments as situations.
We note the general interest in the literature in the behaviour we describe as coarse preferences. Other work that tries to model decision makers with similar behaviour can be found, e.g., in [15, 16, 36, 42]. To the best of our knowledge, this is the first time this concept has received a general, formal treatment in the context of Decision Theory.

3  Problem Description

We assume a system tasked with making decisions for one or more users in specific decision scenarios, each one defined by a fixed set of choices with known, possibly stochastic, effects. For simplicity of presentation, we will assume that decisions result in outcomes deterministically, and use the two concepts interchangeably. Moreover, in the context of multiple users, we will refer to a one-to-one mapping of decisions to users as a plan. Restrictions over the set of available plans will, in some cases, prohibit its decomposition into a set of independent decisions. Users participate in the system in an attempt to maximise their expected utility, with respect to a utility function defining their preferences over outcomes (though their behaviour could be stochastic). The system will generally not have complete knowledge of this function, but can, through interaction with the user, elicit enough preference information to make a good, if not optimal, decision. We will assume that this interaction is manifested through a set of queries available to the system, and that the user's responses to these can be translated into updates of its knowledge of her utility function. The system's goal is to, in expectation, maximise a value function defined over the space of individual user outcomes for a specific decision scenario. A common example is that of maximising expected Social Welfare [32], i.e. the sum of participating users' utility. Formally, assume a, potentially unknown, set of users I, where each




user i ∈ I aims to select, or be assigned, an outcome x ∈ X, where X is a multivariate space of outcomes. Each user i is characterised by a utility function u_i : X → R, with u_i ∈ U drawn from the space of possible utility functions U. The values u_i(x), ∀x ∈ X, define how much the user prefers each multivariate assignment; in other words, u_i(x) > u_i(x′) ⇔ x ≻ x′, for user i and ∀x, x′ ∈ X.² Given a group of users J ⊆ I, the system makes decisions so as to optimise a value function v : Λ_J → R, where Λ_J ⊆ X^{|J|} is the space of available plans. For simplicity, and without loss of generality, we define the value of a plan λ ∈ Λ_J as the social welfare v(λ) = Σ_{i∈J} u_i(x_i), where λ = [x_i, ∀i ∈ J] and x_i ∈ X, ∀i ∈ J (which reduces to u_i(x_i) in the single user case). We will also examine the case where plans are constrained so that x_i = x_{i′}, ∀i, i′ ∈ J, i.e. when each user in the group is assigned the same outcome. The following section focuses on the problem of eliciting the preference model of a single user. We will expand on this standard construction in Section 5, where we present the basics of our theory of Coarse Preferences. Section 7 extends the problem of preference elicitation to multiple users, examining the effects the coarse representation has on computational costs.
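The social-welfare value function above can be illustrated concretely. The sketch below (names are ours, not from the paper) evaluates plans by v(λ) = Σ_{i∈J} u_i(x_i) and brute-forces the best feasible plan on a toy instance; the `feasible` predicate stands in for restrictions over the set of available plans, e.g. the constraint x_i = x_{i′} for all users.

```python
from itertools import product

def social_welfare(plan, utilities):
    """plan: {user: outcome}; utilities: {user: {outcome: utility}}."""
    return sum(utilities[i][x] for i, x in plan.items())

def best_plan(users, outcomes, utilities, feasible=lambda plan: True):
    """Exhaustive search over the plan space Lambda_J (toy sizes only).
    `feasible` encodes restrictions over the set of available plans."""
    best, best_value = None, float("-inf")
    for assignment in product(outcomes, repeat=len(users)):
        plan = dict(zip(users, assignment))
        value = social_welfare(plan, utilities)
        if feasible(plan) and value > best_value:
            best, best_value = plan, value
    return best, best_value
```

Passing `feasible=lambda p: len(set(p.values())) == 1` recovers the constrained case where every user in the group is assigned the same outcome.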

3.1  Preference Elicitation

User i's utility function u_i can generally be defined as a vector of parameters. In the trivial single-user paradigm where Λ ≜ X is a set of possible outcomes with cardinality m, we can assume the m-dimensional vector u_i, with jth element u_{i,j} = u_i(x^j) simply being the utility of the outcome x^j ∈ Λ. The optimal decision w.r.t. u_i is x* = argmax_{x∈Λ} u_i(x), giving utility u_i(x*). Though we have assumed a parametrisation by the number of possible outcomes, our model still holds for more structured representations, such as those making use of additive [30] or generalised additive independence [7] of variables. u_i is then simply the vector of parameters required for that model, and all results below apply. The system does not in general have complete knowledge of u_i but, at each time-step t, maintains a density b_i^t over the space U, indicating its current belief over the user's utility function, with b_i^0 representing its prior knowledge. The expected utility of an outcome x given density b_i^t over U is

EU(x, b_i^t) = ∫ u_i(x) b_i^t(u_i) du_i.   (1)

In such a state of uncertainty, the optimal decision is the x* with maximum expected utility EU(x*, b_i^t). We denote by MEU(b_i^t) the value of being in state b_i^t, assuming one is forced to make a decision: MEU(b_i^t) = EU(x*, b_i^t). The system can at each time-step t, up to a horizon H, present the user with a query q_t ∈ Q_t from the space of available queries Q_t. Each query q_t elicits a response r_t ∈ R(q_t), where R(q_t) is the space of possible responses to the query. These query-response set pairs can represent a number of system-user interactions; specifically, interactions in which the system presents a set, or space, of choices to the user, anticipating a response. The user's response can be used to update our belief over her utility function, in accordance with Bayes' rule:

b_i^{t+1}(u_i) = P(u_i | r_t) = P(r_t | q_t, u_i) b_i^t(u_i) / ∫ P(r_t | q_t, u_i) b_i^t(u_i) du_i.   (2)

² We point to the seminal work of von Neumann and Morgenstern [44] for a concise analysis of the concept of utility functions.


The (myopic) expected value of information (EVOI) of a query can be defined by considering the difference between MEU(b_i^t) and the expectation (w.r.t. r_t) of MEU(b_i^{t+1}). A myopically optimal elicitation strategy involves asking queries with maximal EVOI at each time step [14].
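The belief update of Eq. 2 and the myopic EVOI criterion can be sketched for the simplified case of a *finite* hypothesis space of candidate utility functions (the paper maintains a density b_i^t over U; all names below are illustrative, not the paper's code):

```python
def meu(belief, outcomes):
    """Maximum expected utility over a discrete belief [(u, prob), ...]."""
    return max(sum(p * u[x] for u, p in belief) for x in outcomes)

def posterior(belief, response_model, query, response):
    """Bayes' rule (Eq. 2) for a finite hypothesis space."""
    weighted = [(u, p * response_model(response, query, u)) for u, p in belief]
    z = sum(p for _, p in weighted)
    return [(u, p / z) for u, p in weighted] if z > 0 else belief

def myopic_evoi(belief, query, responses, response_model, outcomes):
    """EVOI(q) = E_r[ MEU(b^{t+1}) ] - MEU(b^t)."""
    expected_meu = 0.0
    for r in responses:
        pr = sum(p * response_model(r, query, u) for u, p in belief)  # P(r | q)
        if pr > 0:
            expected_meu += pr * meu(
                posterior(belief, response_model, query, r), outcomes)
    return expected_meu - meu(belief, outcomes)
```

With two equiprobable hypotheses that disagree on the best outcome and a noiseless pairwise query, the EVOI works out to the full gap between acting informed and acting under the prior, which is what makes such a query maximally informative.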

3.1.1  Benchmark

We are interested in demonstrating the compatibility of our methods with existing approaches in the literature, as well as the benefits resulting from this integration. For our experiments, we have adapted the work in [22] by considering utility queries rather than pairwise comparison queries. Specifically, at each time-step t, we select an item for the user to rate, i.e. we present a query q_t ∈ Q ≜ Λ ⊆ X. We consider noisy user responses r(q_t), which we model as Gaussians with a predefined variance σ_r², i.e. N(r(q_t), σ_r²). As in [22], we make the assumption of Additive Independence (AI) of all variables for a user i's preferences, i.e. the utility function can be written in the form u_i(x) = Σ_{j=1}^k u_i^j(x^j).³ Assuming a discrete domain X_j for each variable x^j, we can represent our belief over the partial utilities u_i^j(x^j), ∀x^j ∈ X_j, with a uni-variate Gaussian N(u_i^j(x^j); μ_i^j(x^j), (σ_i^j(x^j))²). Equivalently, our belief over the utility function can be written as

b_i^t = ∏_{j=1}^k ∏_{x^j=1}^{|X_j|} N(u_i^j(x^j); μ_i^j(x^j), (σ_i^j(x^j))²),   (3)

i.e. a Gaussian with diagonal covariance, where the time index has been omitted for readability. Under these assumptions, the user's utility u(x) over a specific complete assignment x ∈ X, x = [x^1, x^2, ..., x^k], is the sum of k Gaussians and, hence, a Gaussian, with mean Σ_{j=1}^k μ_i^j(x^j) and variance Σ_{j=1}^k (σ_i^j(x^j))². We can compute the posterior over our belief of the user's utility function, after querying an item x_t, by constructing the factor graph [35] for the respective variable assignments and response. Our inputs at each time-step are a prior over each partial utility for each partial assignment, and the user's evaluation of that complete assignment, or r(x_t). Utilising a message passing algorithm, specifically Loopy Belief Propagation [35], we can compute the posterior over each partial utility, in closed form, and transfer it to the next iteration. At this point a new factor graph is constructed to match the new complete assignment and we continue as above. An example of this factor graph is given in Fig. 1.
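For a single rating the factor graph of Fig. 1 is a tree, so the message-passing update reduces to closed-form conditioning of jointly Gaussian variables. The sketch below is our own simplification of that step (illustrative names, not the paper's code): each queried partial utility carries an independent Gaussian N(μ_j, σ_j²), and the rating observes their sum plus N(0, σ_r²) noise.

```python
def update_partial_utilities(priors, rating, var_r):
    """priors: [(mu_j, var_j)] for the queried item's partial utilities.
    Returns the posterior [(mu_j, var_j)] after observing the noisy
    rating r = sum_j u_j + eps, eps ~ N(0, var_r)."""
    total_mu = sum(mu for mu, _ in priors)
    total_var = sum(var for _, var in priors) + var_r  # Var(r)
    innovation = rating - total_mu                     # r - E[r]
    return [(mu + var * innovation / total_var,        # E[u_j | r]
             var - var * var / total_var)              # Var(u_j | r)
            for mu, var in priors]
```

Each partial mean moves towards the observed rating in proportion to its share of the total variance, and every queried partial variance shrinks, which matches the closed-form posterior described in the text.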

4  Users as Categorical Thinkers

Categorical thinking can be understood as humans remapping the representation of the solutions at hand to a lower dimensionality space, before making a decision. The subjective nature of this mapping is key. Essentially, for any specific user, there are "significant" subspaces of X over which her utility remains unchanged: a partitioning of X into preferential equivalence classes. If we knew which these were, we could collapse our outcome space into a situation space C of preferential equivalence classes. Formally, ∃C_k ⊆ X, ∀k ∈ {1, ..., z} : u_i(x) = u_i^k, ∀x ∈ C_k, with ∪_{k=1}^z C_k = X and C_k ∩ C_{k′} = ∅, ∀k, k′ ∈ {1, ..., z}, k ≠ k′. Having access to this information,

³ Two attributes are called additive independent if the preference between two lotteries (joint probability distributions) on the two attributes depends only on their marginal probability distributions [29].
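The collapse from |X| utility parameters to z situation values can be sketched as follows (a toy construction with names of our own choosing, assuming the utility function is known):

```python
def situation_partition(outcomes, utility, tol=1e-9):
    """Group outcomes with (numerically) equal utility into situations.
    Returns (phi, coarse_u): phi maps outcome -> situation index,
    coarse_u[k] is the single utility value of situation C_k."""
    coarse_u, phi = [], {}
    for x in outcomes:
        for k, value in enumerate(coarse_u):
            if abs(utility[x] - value) <= tol:
                phi[x] = k          # x joins an existing equivalence class
                break
        else:
            phi[x] = len(coarse_u)  # x opens a new equivalence class
            coarse_u.append(utility[x])
    return phi, coarse_u
```

The resulting `coarse_u` has one entry per preferential equivalence class, so downstream optimisation and belief maintenance only touch z parameters instead of |X|.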


prior set of possible partitions of the outcome space, the elicitation and multiple-user decision problems require a similar solution approach. Section 5 will detail a theory of Coarse Preferences, prove the equivalence of typical utility function representations with our model, and propose extensions for combination with additive models of preferences. Section 6 will provide a general methodology for eliciting coarse preferences, while also detailing procedures that are more specific to our experiments in this work.


Figure 1: A factor graph representing the joint distribution for our belief over the part of a user's utility function corresponding to a utility query q_t = x_t, at time t, and his response r(q_t). We use this representation to update our belief over u to b^{t+1}(u), using a standard message passing algorithm.

we can reduce the number of parameters for u, in this case from |X| to z.⁴ In this case of a single user with known utility function, reducing the set of parameters can greatly speed up decision making and preference elicitation, and reduce memory requirements. However, interesting problems arise when we consider problems where

1. We do not know the partition for the user, but we have access to a prior over partitionings: either a generative prior, or a library of candidate partitions (e.g. garnered from historical data and/or representative of the problem domain);
2. We need to make a decision over multiple users, each one with his own representation of the outcome space;

or a combination of both. This paper will present solutions to both issues, postponing the presentation of a generative prior, however, for later work. To illustrate the second point, consider the social welfare optimisation problem argmax_{x∈X} Σ_{i∈J} u_i(x). This is straightforward in outcome space, but if mapped into situation space it transforms into

argmax_{c_i ∈ C_i, ∀i∈J : ∃o∈O : o∈C_i, ∀i∈J}  Σ_{i∈J} u_i(c_i),   (4)

where C_i, ∀i ∈ J, are the new utility function parameters, or situation spaces, as defined by the coarse preferences of each user, and c_i is a variable denoting preferential equivalence class C_i in the original outcome space. Given that each user would view the problem differently, and therefore exhibit different coarseness, each user's utility function would be represented in a different parameter space. Any computational gain from the parameter transformation is lost when multiple users have to be considered in this manner. A similar problem arises during preference elicitation: while knowledge of the mapping from outcomes to situations would, again, make this easily handled by traditional approaches, this is invalidated by the fact that every user exhibits a different structure over the outcome space. Coarseness needs to be elicited in parallel with utility.⁵ We will see that, given a

⁴ Note that we have only made the typical assumption of stationarity of the user's utility function. Even a strictly monotonic function u can be represented in this manner, in the limit. Practical use of this mapping would, of course, require stricter conditions (without some form of approximation). This is what is implied by "significant".
⁵ An alternative approach would be to learn the mapping first, before proceeding to elicit utilities. However, this would represent a different decision scenario, with explicit elicitation of solution space abstractions. Many rec-
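The optimisation of Eq. (4) can be illustrated for the constrained case where all users share one outcome: each user i evaluates a candidate outcome only through her own mapping φ_i and the handful of coarse values over C_i. This is a hedged sketch with hypothetical names, not the paper's implementation:

```python
def best_shared_outcome(outcomes, phis, coarse_us):
    """Pick the outcome maximising coarse social welfare.
    phis[i]: {outcome: situation} is user i's mapping phi_i;
    coarse_us[i]: {situation: utility} is user i's coarse utility u_i^c."""
    def welfare(x):
        # Each user contributes u_i^c(phi_i(x)); no per-outcome parameters.
        return sum(cu[phi[x]] for phi, cu in zip(phis, coarse_us))
    return max(outcomes, key=welfare)
```

Note that only the coarse values are ever stored or updated; the outcome space is touched purely through the mappings φ_i, which is where the computational savings discussed in Section 7 come from.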


5  Coarse Preferences

Key to our approach for coarse preferences is the idea that a user's preferences are coarser than what the outcome space might imply. Specifically, we assume that her evaluation of an outcome is not as fine as the combination of variable domains describing it but, rather, that she understands alternatives in terms of the situations they represent. We first give the common definitions of binary preference relations, before proceeding to give a definition of coarse preferences. Assume two outcomes x, x′ ∈ X and an arbitrary user. A weak preference x ≽ x′ of outcome x over x′ is a binary relation indicating that the user weakly prefers outcome x to x′, or, in other words, x is at least as good as x′, for that user. The weak preference relation is typically expected to satisfy the properties of comparability and transitivity, if it is to be considered rational [44]. The binary relations of indifference and strict preference are defined in terms of the weak preference relation as:

x ∼ x′ ⇔ x ≽ x′ ∧ x′ ≽ x   (5)
x ≻ x′ ⇔ ¬(x′ ≽ x).   (6)

We are now ready to give a definition of coarse preferences:

Definition 5.1 (Coarse preferences). We say that a Decision Maker (DM) exhibits coarse preferences φ over outcomes x ∈ X (or is a φ-coarse DM) if, given a many-to-one mapping φ : X → C from a space of outcomes to a space of situations, with φ(x) = c, φ(x′) = c′, we have:

x ≽ x′ ⇔ c ≽ c′   (7)

Intuitively, a user identifies in each outcome a corresponding situation, according to a mapping φ, and maintains a preference ranking over these situations rather than directly over the space of outcomes. The ranking over outcomes is a result of inverting this mapping. An alternative view of situations is that of preferential equivalence classes over the space of outcomes. Essentially, every situation defines a subset of the space of outcomes, all members of which are equally preferred. The binary relations of indifference and strict preference extend to coarse preferences:

Corollary 5.2 (Indifference and strict preference). If a DM exhibits coarse preferences then it follows from Def. 5.1 and Eqs. 5 and 6 that:

x ∼ x′ ⇔ c ∼ c′   (8)
x ≻ x′ ⇔ c ≻ c′   (9)

⁵ (cont.) ommender systems applications are incompatible with such a procedure, since they tend to be based on either implicit feedback from user selection behaviour, or on the explicit rating of items. We point towards the Concept Learning literature (e.g. [1, 25]) for more.

The next subsection details the proof of the existence of a coarse utility function over situations, representing the same preferences as a




given utility function over outcomes. We can therefore, given a utility function u : X → R defining a coarseness φ : X → C, write the utility function

u^c : C → R,  with  u^c(φ(x)) = u(x), ∀x ∈ X.   (10)

5.1  Existence Proof for Coarse Utility Functions

We next constructively prove that, if a φ-coarse user's preferences over outcomes can be represented by a utility function u, there exists a coarse utility function u^c that can represent preferences over the space of situations, in accordance with the von Neumann and Morgenstern [44] expected utility representation theorem. To start off, we denote lotteries over situations as l_c = ⟨p_{c_1}, c^1; p_{c_2}, c^2; ...; p_{c_z}, c^z⟩, where situation c^j is realised with probability p_{c_j}.

Proposition 5.3 (Equivalence of simple lotteries over outcomes and situations). Assume a set of outcomes X and situations C, and a many-to-one mapping φ : X → C. Every lottery over outcomes l = ⟨p_1, x^1; ...; p_n, x^n⟩ with x^i ∈ X, ∀i ∈ {1, ..., n}, defines a lottery over situations l_c = ⟨p_{c_1}, c^1; ...; p_{c_z}, c^z⟩ with c^j ∈ C, ∀j ∈ {1, ..., z}, such that p_{c_j} = Σ_{x^i ∈ X^{c_j}} p_i, where X^{c_j} ⊆ X : ∀x ∈ X^{c_j}, φ(x) = c^j. We write φ : L → L_c, where L, L_c are the sets of simple lotteries over outcomes and situations, respectively.

Corollary 5.4. Given Prop. 5.3, and since the X^{c_j} are disjoint sets with union X, we have

Σ_{i=1}^n p_i = Σ_{j=1}^z p_{c_j}.   (11)

We will now use the above to prove that there indeed exists a coarse utility function u^c that represents the same preference ordering as u:

Theorem 5.1 (Utility functions representing coarse preferences). Assume a φ-coarse DM, φ : X → C, with a utility function u : L → R representing ≽, and with sets of simple lotteries over outcomes and situations L, L_c, respectively. There exists a utility function u^c : L_c → R representing ≽, such that u(l) = u^c(φ(l)).

Proof. Define a function u^c : C → R with u^c(φ(x)) = u(x), and denote u^c(l_c) = Σ_{j=1}^z p_{c_j} u^c(c^j), where l_c = ⟨p_{c_1}, c^1; ...; p_{c_z}, c^z⟩ with c^j ∈ C, ∀j ∈ {1, ..., z}. Also assume the lottery l = ⟨p_1, x^1; ...; p_n, x^n⟩ with x^i ∈ X, ∀i ∈ {1, ..., n}, and l_c = φ(l). We know from the von Neumann and Morgenstern [44] Utility Theorem that u(l) = Σ_{i=1}^n p_i u(x^i), but from Cor. 5.4, and since we defined u^c(φ(x)) = u(x), we have:

Σ_{i=1}^n p_i u(x^i) = Σ_{j=1}^z Σ_{x^i ∈ X^{c_j}} p_i u^c(c^j) = Σ_{j=1}^z p_{c_j} u^c(c^j) ⇔ u(l) = u^c(l_c).   (12)

From the expected utility definition, given lotteries l and l′ we have l ≽ l′ ⇔ u(l) ≥ u(l′), but because u(l) = u^c(l_c), and denoting l_c = φ(l) and l′_c = φ(l′), we finally get:

l ≽ l′ ⇔ u(l) ≥ u(l′) ⇔ u^c(l_c) ≥ u^c(l′_c) ⇔ l_c ≽ l′_c.   (13)
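A small numerical illustration of Prop. 5.3 and Theorem 5.1 (all names and values below are made up): pushing a lottery over outcomes through φ yields a lottery over situations with the same expected utility.

```python
def lift_lottery(lottery, phi):
    """lottery: {outcome: prob}; returns {situation: prob}, with
    p_{c_j} = sum of p_i over the outcomes mapped to c_j (Prop. 5.3)."""
    lc = {}
    for x, p in lottery.items():
        lc[phi[x]] = lc.get(phi[x], 0.0) + p
    return lc

phi = {"x1": "c1", "x2": "c1", "x3": "c2"}        # many-to-one mapping
u_c = {"c1": 5.0, "c2": 1.0}                       # coarse utility u^c
u = {x: u_c[c] for x, c in phi.items()}            # u = u^c o phi (Eq. 10)
l = {"x1": 0.2, "x2": 0.3, "x3": 0.5}              # lottery over outcomes
lc = lift_lottery(l, phi)

eu_outcomes = sum(p * u[x] for x, p in l.items())
eu_situations = sum(p * u_c[c] for c, p in lc.items())
assert abs(eu_outcomes - eu_situations) < 1e-12    # u(l) == u^c(l_c), both 3.0
```

The final assertion is exactly Eq. (12): the expected utility is unchanged whether computed over n outcomes or over the (fewer) z situations.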

5.2

Coarse User Types

This section investigates how to make decisions while having to compromise multiple different coarse representations of the outcome space, as will generally be the case when multiple users are involved. In order to do so, we first introduce the concept of a coarse user type; a generalisation of coarse preferences to a set of users. The following subsection will then examine integrating the theory of coarse preferences with models of additive independence through the construction of partial situations. Let us assume that users belong to one of a known set of types T , with each Ď„ ∈ T defined by a coarse representation, φτ : X → CĎ„ . Each such mapping, represents a partitioning of the space of outcomes X into a set of preferential equivalence classes CĎ„ = {X c , ∀c ∈ CĎ„ } : X c ⊆ X with ∀x ∈ X c , φ(x) = c. We can define the pointwise intersection of all user types’ equivalence class partitionings as CT = âˆŠâˆ€Ď„ ∈T CĎ„ or, equivalently, CT = {X cT , ∀cT ∈ CT } : X cT ⊆ X and ∀x ∈ X cT , âˆ€Ď„ ∈ T, φτ (x) = cĎ„ . We will refer to CT as the type-set situation space. It is not necessary for a user to exhibit the exact same coarseness as a type in order to belong to it. In fact, it is enough that any decisions made with the use of a type’s coarse representation are consistent with decisions one would have made if she had direct access to the user’s coarse representation: Definition 5.5 (Coarse Preference User Types). Assume a partitioning A = {A1 , ..., Ak } of the space of outcomes X, and the coarse preferences partitioning of the outcome space Ci , for some user i ∈ I. We will say that user i is of Coarse Preference User Type A, iff the pair-wise point intersection of Ci with A results in A. We can relax the definition above by allowing approximate user types with bounded loss of accuracy in terms of utility values:

Definition 5.6 (ε-close Coarse Preference User Types). Assume a partitioning A = {A_1, ..., A_k} of the space of outcomes X, and the coarse preferences partitioning of the outcome space C_i, for some user i ∈ I, with φ_i : X → C_i and u^c_i : C_i → R. We will say that user i is of ε-close Coarse Preference User Type A if there exists a partitioning of the outcome space C′_i, for a mapping φ′_i : X → C′_i, such that the pair-wise point intersection of C′_i with A results in A, where each c′ ∈ C′_i is the union of equivalence classes c_1, c_2 ∈ C_i such that ∀x, x′ ∈ X : φ_i(x) = c_1, φ_i(x′) = c_2, with |u^c_i(c_1) − u^c_i(c_2)| ≤ ε, we have φ′_i(x) = φ′_i(x′) = c′.

We will examine decisions affecting a user of unknown type, drawn from a known set of coarse user types with some known prior. A key issue is that, even though coarseness produces a preferential discretisation of the outcome space, this partitioning is, in general, unique to each user type. Examining the problem of recommendation, we can see how this becomes problematic: it prohibits us from making decisions over the specific situation space C_τ of any particular user type τ. This becomes obvious if we consider the evaluation of two outcomes x, x′ such that both belong to the same preferential equivalence class, defining situation c_{11} (i.e. φ_1(x) = φ_1(x′) = c_{11}), for user 1, but belong to different situations c_{21}, c_{22} for user 2 (i.e. φ_2(x) = c_{21}, φ_2(x′) = c_{22}). We should not expect a general evaluation of recommendations to be indifferent between outcomes in c_{11}. We could, of course, optimise over the original outcome space; however, this would defeat the purpose of reducing the parametric representation of the utility function. The question then arises: is there a way of maintaining some of the benefits of the outcome space partitioning, while still making consistent decisions given a set of viable user types? The above example points towards such an approach:



Theorem 5.2 (Coarse Representation for Sets of User Types). Assume a set of user types T, representing coarse preferences φ_τ : X → C_τ, with some ranking over C_τ, ∀τ ∈ T, given a space of potential outcomes X. The point-wise intersection of the user types' preferential equivalence class partitionings results in the maximally coarse representation of the outcome space, φ_T : X → C_T, such that each user of type τ ∈ T is indifferent among outcomes in each non-empty preferential equivalence class in the resulting partition.

Proof. Proving that each user of a specific type is indifferent among outcomes in any of the resulting preferential equivalence classes is trivial: each such equivalence class is a subspace of the user equivalence class from which it was constructed, and the preferential indifference among all outcomes of that class still holds for any of its non-empty subspaces. Proving that the above construction is maximal requires proving that adding any outcome to any resulting equivalence class would make it lose the above property. Assume that it did not. Then, for each user, that outcome would have belonged to the respective equivalence class from which the resulting equivalence class was constructed. It would therefore have been included in it, and consequently it is not a new addition to it. □

We will refer to the set of user types T defining φ_T : X → C_T as the seed group of the partitioning C_T, and refer to the corresponding variable assignment space C_T as the set-type situation space. A consequence of Theorem 5.2 is that any pair of mappings φ_τ and φ_T, for any τ ∈ T, uniquely defines a mapping φ_{T→τ} : C_T → C_τ. If the provided rankings over situations allow indifference, and we are provided with rankings of the type ≽ rather than ≻, then we can obtain the maximally coarse representation by, for each user type, re-mapping any outcomes belonging to preferential equivalence classes between which that user is indifferent to the same situation.
The above theorem will then apply. Using Theorem 5.2 we can now make decisions for a group of users in a reduced space of solutions C_T, and rewrite the social welfare example from Eq. 4 as

argmax_{c ∈ C_T} Σ_{i ∈ I} u^c_i(c_i),   (14)

where c_i, ∀i ∈ I, is the variable denoting the equivalence class, in outcome space, that is a super-set of the space defined by c. This section has so far presented the details of making decisions while reconciling multiple different coarse representations of the outcome space, and in doing so introduced the definition of coarse user types. The latter will also form the basis of our approach to the elicitation of coarse preferences, as detailed in Section 6. We end our presentation of the theory of coarse preferences by examining its integration with models of additive independence through the construct of partial situations.
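As an illustration of Theorem 5.2 and Eq. 14, the following minimal Python sketch (all mappings, utilities, and names are hypothetical, not part of the deliverable's implementation) intersects two toy type partitionings point-wise and maximises the summed coarse utilities over the resulting set-type situation space rather than over raw outcomes:

```python
# Hypothetical illustration of Theorem 5.2 and Eq. 14: outcomes are integers,
# and each user type's coarse mapping phi_tau assigns an outcome to a situation
# label. The point-wise intersection of the type partitionings is obtained by
# mapping each outcome to the *tuple* of its per-type situation labels.

def set_type_situation(outcome, type_mappings):
    """Map an outcome to its set-type situation c_T = (phi_tau(x) for all tau)."""
    return tuple(phi(outcome) for phi in type_mappings)

def social_welfare_argmax(outcomes, type_mappings, type_utilities):
    """Maximise the sum of per-type coarse utilities over the set-type
    situation space C_T (Eq. 14), evaluating each situation only once."""
    best_c, best_welfare = None, float("-inf")
    seen = set()
    for x in outcomes:
        c_T = set_type_situation(x, type_mappings)
        if c_T in seen:          # each set-type situation is evaluated once
            continue
        seen.add(c_T)
        welfare = sum(u[c] for u, c in zip(type_utilities, c_T))
        if welfare > best_welfare:
            best_c, best_welfare = c_T, welfare
    return best_c, best_welfare

# Two toy user types: one partitions outcomes by parity, one by magnitude.
phi_parity = lambda x: "even" if x % 2 == 0 else "odd"
phi_size = lambda x: "big" if x >= 5 else "small"
u_parity = {"even": 1.0, "odd": 0.0}
u_size = {"big": 2.0, "small": 0.5}

c_star, w_star = social_welfare_argmax(range(10), [phi_parity, phi_size],
                                       [u_parity, u_size])
print(c_star, w_star)  # ("even", "big") maximises 1.0 + 2.0 = 3.0
```

Note that only as many welfare evaluations are performed as there are non-empty set-type situations, which is the computational point of Theorem 5.2.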

5.3 Partial situations

Our theory on coarse preferences considers a many-to-one mapping from a space of outcomes to a space of situations. Though there are no constraints, other than the user's preferences, as to what form this mapping may take, it would be useful to exploit any structure present in the respective outcome space utility function. In particular, we are interested in maintaining the decomposition of outcomes into assignments to a set of variables, as with additive decompositions, even when working in the space of situations. Failing to do so would mean that we would have to map each individual outcome, separately, to a situation, therefore relinquishing any benefits resulting from the original additive decomposition.

In order to achieve this goal, we will examine mapping assignments to subsets of variables X_g ⊆ {x_1, ..., x_m} to partial situations. Intuitively, this reflects the fact that the user is able to independently evaluate assignments to these subsets. This makes our approach complementary to additive decompositions, such as that in Section 3.1.1. For this purpose, let us assume that situations can be represented by a vector of partial situations, such that the mapping φ can be decomposed into a set of functions φ_g, with group index g ∈ {1, ..., h}. Each φ_g is a many-to-one mapping from a space X^g of potential assignments to a set of variables X_g ⊆ {x_1, ..., x_m}, to a space of partial situations C_g:

φ_g : X^g → C_g,   (15)

with corresponding coarse sub-utility functions

u^c_g : C_g → R,   (16)

such that

u^c(o) = Σ_{g=1}^{h} u^c_g(x_g),   (17)

where x_g ∈ X^g. We term the X_g, ∀g ∈ {1, ..., h}, Partial Situation Variable Sets (PSVS) and require that their union fully covers (though is not necessarily a partition of) the set of variables {x_1, ..., x_m}:

∪_{g=1}^{h} X_g = {x_1, ..., x_m}.   (18)

The variable subsets over which the original sub-utility functions are defined need not be the same as those over which the coarse sub-utility functions are defined. However, the equivalence of u and u^c forces some constraints. Essentially, a user cannot have an outcome space utility function that is decomposed in such a way that utilities cannot be computed for partial situations. Therefore, every variable subset over which u's sub-utility functions are defined must be equal to the union of one or more PSVS. These need not necessarily be disjoint. Formally:

Proposition 5.7. If, given a preference ordering ≽, there exists any preferentially independent variable set A such that ∄K ⊆ {1, ..., h} : A = ∪_{k∈K} X_k, then the situation decomposition φ_g, ∀g ∈ {1, ..., h}, is not compatible with that ordering.
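The decomposition of Eqs. 15-18 can be made concrete with a short Python sketch. All groups, mappings, and utilities below are illustrative assumptions, not the deliverable's own model; the point is only that the coarse utility of an outcome is computed as a sum of sub-utilities over partial situations, without enumerating full outcomes:

```python
# Hypothetical sketch of Eqs. 15-18: an outcome is an assignment to variables
# (x1, x2, x3); each Partial Situation Variable Set (PSVS) is mapped by its own
# coarse mapping phi_g to a partial situation, and the coarse utility is the
# sum of the coarse sub-utilities u^c_g.

PSVS = [(0,), (1, 2)]  # variable index groups; their union covers {x1, x2, x3}

def phi_1(xs):   # partial situation for group {x1}: sign of x1
    return "pos" if xs[0] > 0 else "nonpos"

def phi_2(xs):   # partial situation for group {x2, x3}: whether they agree
    return "match" if xs[0] == xs[1] else "mismatch"

sub_utility = [
    {"pos": 1.0, "nonpos": -1.0},        # u^c_1 over C_1
    {"match": 0.5, "mismatch": 0.0},     # u^c_2 over C_2
]
mappings = [phi_1, phi_2]

def coarse_utility(outcome):
    """u^c(o) = sum_g u^c_g(phi_g(x_g))  (Eq. 17)."""
    total = 0.0
    for g, group in enumerate(PSVS):
        x_g = tuple(outcome[i] for i in group)
        total += sub_utility[g][mappings[g](x_g)]
    return total

# Coverage requirement of Eq. 18: the PSVS union equals the full variable set.
assert set().union(*(set(g) for g in PSVS)) == {0, 1, 2}

print(coarse_utility((2, 7, 7)))   # 1.0 + 0.5 = 1.5
print(coarse_utility((-1, 0, 3)))  # -1.0 + 0.0 = -1.0
```

As per Eq. 18, the groups here overlap-free by choice, but the coverage check would pass equally for overlapping PSVS.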

6 Eliciting Coarse Preferences

This section presents a general framework for the elicitation of coarse preferences, along with a detailed instantiation for our experiments in Section 8. We also present a few necessary transformations of belief states from, and to, the outcome, user-type, and set-type situation spaces. Below, we omit the user index i for improved readability.

Because of our construction of coarse preference types in Section 5.2, we can select queries from the space of set-type situations without loss of information. This reduces the cardinality of the (discrete) solution space from |X| to |C_T|. Moreover, the structure inherent in the coarse user types forces groups of set-type situations c_T ∈ C_T to exhibit the same utility. Any inconsistencies will be detected during elicitation and used to update our belief over types. The combination of reduced cardinality and structural inference should speed up the convergence of the elicitation significantly, compared to approaches defined over the original outcome space. However, in limited-data scenarios, such as some of those presented in Section 8, where only a small set of outcomes is available to be queried over, we will not need to reason about the set-type situation space; mapping outcomes to type-specific situations will suffice.

With the exception of a categorical distribution over the family of partitions, all probabilities can be selected to fit a specific elicitation scenario. For simplicity of presentation, we consider utility queries and noisy user responses, as in Section 3.1.1. The conditional beliefs over the utility function are updated over individual type situation spaces, also similarly to that section.

Figure 2: A factor graph representing the joint distribution for our belief over the part of a user's utility function corresponding to a utility query q_t = x_t, at time t, and his response r(q_t), as represented in user type situation space. We use this representation to update our belief over u to b^{t+1}(u), using a standard message passing algorithm. Shaded circles represent variables with a known assignment.

Given a distribution over types b_T, and a conditional distribution on the utility function in either set-type situation space b(u^c_T | τ), or user type situation space b(u^c_τ | τ), we can represent the density over a user's utility function as a mixture over |T| models:

b_{u^c} = b_T Π_{τ ∈ T} b(u^c_τ | τ).   (19)

We can represent our belief over the user type using a Categorical distribution, b_T = Cat(w), with mixture component weights w = {w_1, w_2, ..., w_{|T|}}. Moreover, since we are working over a discrete domain, we opt to use a Gaussian distribution for representing our belief over the utility of each assignment to each situation variable c_g^τ (summarised as a Conditional Gaussian in Figure 2). Specifically, we write b(u^c_τ | τ) = Π_{g=1}^{h_τ} Π_{c_g^τ ∈ C_g^τ} N(u_{c_g^τ}; µ_{c_g^τ}, σ²_{c_g^τ}), ∀τ ∈ T. Here, µ_{c_g^τ} and σ²_{c_g^τ} are the utility mean and variance for a specific assignment to c_g^τ, while h_τ is the number of situation variables for coarse type τ. We can now rewrite Eq. 19 as:

b_{u^c} = Cat(w) Π_{τ ∈ T} Π_{g=1}^{h_τ} Π_{c_g^τ ∈ C_g^τ} N(u_{c_g^τ}; µ_{c_g^τ}, σ²_{c_g^τ}).   (20)

Figure 2 represents the joint density over the user type τ and the utility functions b(u^c_τ | τ), ∀τ ∈ T. We will detail how to learn the necessary mappings φ_τ, ∀τ ∈ T, in Section 6.1. We compute priors over our densities b⁰_T and b⁰(u^c_T | τ) from available data, prior to the elicitation process. Given a belief b_{u^c}, we can compute the expectation over the utility of a specific solution x ∈ X as:

EU(x | b_{u^c}) = Σ_{τ ∈ T} w_τ Σ_{g=1}^{h_τ} µ_{c_g^τ},   (21)

where the c_g^τ are the components of the situation φ_τ(x). When doing computations with utility across a number of users, or user types, we can map our belief from the user type to the set-type situation space by reversing the mapping φ_{T→τ}:

b(u^c_T | τ) = φ^{-1}_{T→τ}(b(u^c_τ | τ)).   (22)

When working with sets of outcomes, as in our examples, one can easily represent Eq. 22 using matrix products. By moving freely between the set-type situation space and the user type situation spaces, we can make decisions across user types by working with the former, while maintaining densities over the latter. Since our user types only define the coarseness across users, but not necessarily the utility itself, we can use the responses r_t a user gives to our queries q_t, at time-step t, to update our belief over the utility function independently for each user type, as below:

b^{t+1}(u^c_τ | τ) = b(u^c_τ | τ, r_t, q_t) = P(r_t | u^c_τ, q_t) b^t(u^c_τ | τ) / ∫ P(r_t | u^c_τ, q_t) b^t(u^c_τ | τ) du^c_τ.   (23)

The updated belief over user types can then be computed from the responsibility of each mixture component τ ∈ T for the user's response r_t:

b^{t+1}_T(τ) = b(τ | r_t, q_t) = w_τ = P(r_t | τ, q_t) b^t_T(τ) / Σ_{τ' ∈ T} P(r_t | τ', q_t) b^t_T(τ'),   (24)

where P(r_t | τ, q_t) = ∫ b^{t+1}(u^c_τ | τ) · u^c_τ(r_t) du^c_τ. The next section will present a methodology for learning the family of coarse user types and corresponding coarse mappings in an off-line fashion.
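The update cycle of Eqs. 23-24 can be sketched in a few lines of Python. This is a hedged illustration under strong simplifying assumptions not made in the text above: a single situation variable per type, conjugate Gaussian beliefs, a Gaussian response-noise model, and entirely made-up priors and mappings:

```python
import math

# Hedged sketch of the belief updates in Eqs. 23-24: per-type Gaussian beliefs
# N(mu, var) over each situation's utility, and a Gaussian noise model for the
# user's utility responses. All numbers and mappings are illustrative only.

NOISE_VAR = 0.1  # assumed variance of the user's noisy utility responses

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def update(belief, type_weights, type_mappings, query_outcome, response):
    """One elicitation step: per-type conjugate Gaussian update (Eq. 23)
    followed by re-weighting the type mixture by responsibility (Eq. 24)."""
    new_weights = {}
    for tau, phi in type_mappings.items():
        c = phi(query_outcome)                  # situation the query falls in
        mu, var = belief[tau][c]
        # Marginal likelihood of the response under type tau.
        evidence = normal_pdf(response, mu, var + NOISE_VAR)
        new_weights[tau] = type_weights[tau] * evidence
        # Standard Gaussian posterior for a noisy observation of the utility.
        post_var = 1.0 / (1.0 / var + 1.0 / NOISE_VAR)
        post_mu = post_var * (mu / var + response / NOISE_VAR)
        belief[tau][c] = (post_mu, post_var)
    z = sum(new_weights.values())               # normalise responsibilities
    return {tau: w / z for tau, w in new_weights.items()}

# Two toy types that partition outcomes differently; type B's prior already
# expects "odd" outcomes to be valued near 1.0.
type_mappings = {"A": lambda x: "low" if x < 5 else "high",
                 "B": lambda x: "even" if x % 2 == 0 else "odd"}
belief = {"A": {"low": (0.0, 1.0), "high": (0.0, 1.0)},
          "B": {"even": (0.0, 1.0), "odd": (1.0, 1.0)}}
weights = {"A": 0.5, "B": 0.5}

weights = update(belief, weights, type_mappings, query_outcome=7, response=0.9)
print(weights)  # type B gains weight: the response matches its prior better
```

A full implementation would replace the scalar Gaussian update with message passing over the factor graph of Figure 2 and one situation variable per PSVS.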

6.1 Learning the Coarse Prior

In order to learn the coarseness prior b⁰_T and the mappings φ_τ, we assume access to a data-set D of user-outcome-rating triples. The specific procedure followed differs with the model, but a basic piece of functionality used in all cases is the Regression Tree (RT) algorithm [12]. This procedure receives a set of outcome-rating pairs as input and outputs a decision tree with a continuous target variable, in this case the utility. This decision tree defines a partitioning of the original outcome space by use of axis-parallel linear constraints.

In the single user type models, it is enough to run the RT algorithm once over the whole data-set D (ignoring user information). For the non-additive case, we map each leaf node to a situation. For the AI case, we maintain a list of all split-points per variable and use them to partition the respective variable domain. Each equivalence class in each variable's partitioning then maps to a partial situation, as introduced in Section 5.3.

For the multiple user type scenarios, we first need to cluster the users in D according to their ratings of outcomes. However, we do not assume that users have rated the same set of items. Instead, we run an RT algorithm independently over each user-specific data subset D_i ⊆ D, then use the resulting regression tree RT_i, ∀i ∈ J, to generate a number of outcome-rating pairs. We subsequently construct a vector of these ratings for each user, and run a clustering procedure over the vectors. The number of user types |T| is predetermined, and the subset of data D_τ ⊆ D belonging to each cluster is used to generate the respective coarse mapping φ_τ, as above. Future research will examine generative models of coarseness.
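The single-type, non-additive case above can be sketched as follows. Note that this is a toy stand-in: a hand-rolled depth-1 regression stump replaces the full Regression Tree algorithm of [12], and the data is synthetic:

```python
# Illustrative sketch of the offline procedure in Section 6.1, with a
# single-split regression stump standing in for the full Regression Tree
# algorithm of [12]. All data below is synthetic.

def best_split(pairs):
    """Find the axis-parallel split on a 1-D outcome that minimises the
    summed squared error around the two leaf means (a depth-1 tree)."""
    xs = sorted(set(x for x, _ in pairs))
    best = (float("inf"), None)
    for i in range(1, len(xs)):
        thr = (xs[i - 1] + xs[i]) / 2.0
        left = [r for x, r in pairs if x < thr]
        right = [r for x, r in pairs if x >= thr]
        sse = sum((r - sum(left) / len(left)) ** 2 for r in left) + \
              sum((r - sum(right) / len(right)) ** 2 for r in right)
        if sse < best[0]:
            best = (sse, thr)
    return best[1]

def learn_coarse_mapping(pairs):
    """Return phi: outcome -> situation label, from outcome-rating pairs.
    Each leaf of the (stump) tree becomes one preferential equivalence class."""
    thr = best_split(pairs)
    return lambda x: "below" if x < thr else "above"

# Synthetic ratings: outcomes < 5 are rated around 1, outcomes >= 5 around 4,
# so the learned partitioning should split near 5.
data = [(1, 1.0), (2, 1.2), (3, 0.9), (6, 4.1), (7, 3.9), (9, 4.0)]
phi = learn_coarse_mapping(data)
print(phi(2), phi(8))  # below above
```

For the multi-type case, the same learner would be run per user, the per-user trees used to generate rating vectors, and those vectors clustered into |T| groups before re-running the learner per cluster.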



c SmartSociety Consortium 2013-2017 7

Deliverable D6.4

Multi-user Preference Elicitation: Computational cost under each model

This section examines the problem of simultaneous preference elicitation across multiple users. Though some real-world applications might allow for asynchronous handling of user queries, there will be scenarios where this is not a viable approach, especially when queries have the additional role of acting as recommendations of outcomes that depend on other users' behaviour. Consider the general problem description of Section 3 and the following preference elicitation scenario: at each time step t, we present each user i ∈ J with a query q^i_t ∈ Q^i, i.e. the system selects and presents a query q_t ∈ ×_{i∈J} Q^i.⁶ Assuming users respond independently to their respective queries, if an exhaustive enumeration of Q^{|J|} is performed for each next-query estimation, then we can perform belief updates independently for each user i, considering all possible queries q^i_t ∈ Q^i and respective responses r(q^i_t) ∈ R(q^i_t). In this section, we compare the computational cost, in time, of selecting a next query, while assuming that the original outcome space over which standard utility functions are defined is discrete. Where this is not the case, coarse preferences can be utilised as a formal way of constructing personalised outcome space discretisations that allow us to elicit information with algorithms such as that of Section 3.1.1.

7.1 Time spent on updating belief functions

Let t^alg_b represent the average time it takes for the system to compute a belief update under algorithm alg. Moreover, define the number of belief updates required during an EVOI computation for user i under algorithm alg, and its respective preference model, as n^alg_i. The average time spent on computing belief updates for deciding on a query q_t for a set of users J is then:

t^alg_b · Σ_{i∈J} n^alg_i.   (25)

n^alg_i will depend on the type of query presented to each user and, for many commonly used queries, on the cardinality of the outcome space and, potentially, on the available user responses. In the simple scenario we examine in Section 8, we apply utility queries over discretised outcome spaces and consider only the most probable user response. In this case we have:

n^alg_i = |Q^alg_i|,   (26)

where Q^alg_i denotes the available set of utility queries for user i under algorithm alg. Assuming all of the outcome space is made available to each user, the benchmark algorithm of Section 3.1.1 performs n^out_i = n^out = |O| belief updates. Using a coarse preference elicitation scheme with mapping φ_i : O → C_i for each user, this reduces to n^coa_i = |C_i|. We see that the reduction in the computational time required for belief updates during the selection of the next query is linear in the difference of the outcome space and situation space cardinalities. However, in the case of binary comparison queries (e.g. in [22]) we have n^out_i = n^out = 2 · |O| · (|O| − 1) and n^coa_i = 2 · |C_i| · (|C_i| − 1), if users are allowed one of two responses R_t = {≻, ≺}. More generally, considering the presentation of recommendation sets of k items, where users get to select one item from the set (e.g. in [37, 43]), we have n^out_i = n^out = k · Π_{h=1:k}(|O| − h + 1) and n^coa_i = k · Π_{h=1:k}(|C_i| − h + 1). We will show through our experiments in Section 8 that t^out_b ≥ t^coa_b; in fact, for the simplest model, of one user type and a situation space of one variable, t^out_b ≫ t^coa_b. We next examine the computational time required for the evaluation of belief states during the query selection procedure.

⁶ We assume that user responses are independent of queries presented to other users.
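The belief-update counts above are easy to check numerically. The following back-of-the-envelope sketch uses illustrative, assumed sizes for |O| and |C_i| (they are not figures from the experiments):

```python
from math import prod

# Back-of-the-envelope check of the per-user belief-update counts n_i in
# Section 7.1, for an assumed outcome-space size m = |O| or m = |C_i|.

def n_utility(m):    # utility queries: one belief update per candidate query
    return m

def n_pairwise(m):   # binary comparison queries with two possible responses
    return 2 * m * (m - 1)

def n_choice(m, k):  # recommendation sets of k items, ordered without repeats
    return k * prod(m - h + 1 for h in range(1, k + 1))

O, C = 100, 5        # |O| outcomes vs |C_i| situations (illustrative sizes)
for name, f in [("utility", n_utility), ("pairwise", n_pairwise)]:
    print(name, f(O), "->", f(C))
print("3-sets", n_choice(O, 3), "->", n_choice(C, 3))
```

The linear saving for utility queries becomes a quadratic one for pairwise queries and grows with k for choice sets, matching the formulas above.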

7.2 Time spent on query belief evaluation

The process of selecting an optimal query q_t to present to users J also involves evaluating all potential belief states resulting from the combinations of user responses r_t ∈ R(q_t) = Π_{i∈J} R(q^i_t), ∀q_t ∈ Q_t. In the elicitation scenario described earlier, this equates to the following number of belief state evaluations:

n^alg_eval = |Q_t| = Π_{i∈J} |Q^i_t|.   (27)

Therefore, for the benchmark in Section 3.1.1 we have n^out_eval = |O|^{|J|}, and for a coarse algorithm n^coa_eval = Π_{i∈J} |C_i|. Assuming that a belief state evaluation takes t^alg_eval time on average for algorithm alg, the time spent on evaluating belief states for algorithms out and coa is, respectively, t^out_eval · |O|^{|J|} and t^coa_eval · Π_{i∈J} |C_i|. We expect |C_i| to generally be much smaller than |O|, but this will of course depend on the problem at hand and the accuracy required. We examine the relation between t^out_eval and t^coa_eval in the following section, where we show that t^out_eval ≥ t^coa_eval.
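The gap described by Eq. 27 can be illustrated numerically; the sizes below are assumptions for the sake of the example, not taken from the experiments:

```python
from math import prod

# Sketch of the belief-state evaluation counts of Eq. 27: the benchmark
# enumerates |O|^{|J|} joint response combinations, while the coarse
# algorithm enumerates the product of per-user situation-space sizes.

def n_eval_benchmark(n_outcomes, n_users):
    return n_outcomes ** n_users

def n_eval_coarse(situation_sizes):
    return prod(situation_sizes)

O = 50                              # assumed |O|
C = [4, 6, 5]                       # assumed |C_i| for three users
print(n_eval_benchmark(O, len(C)))  # 125000
print(n_eval_coarse(C))             # 120
```

Because the benchmark count is exponential in |J| while the coarse count is a product of small per-user factors, the relative saving grows with the number of users elicited simultaneously.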

8 Experiments

We first examine our elicitation models, as presented in Section 6, in comparison with the respective standard approach, as detailed in Section 3.1.1. We make use of two data-sets. The first is the Sushi data-set [28], where 5000 users each rate 10 out of 100 predefined sushi dishes. Each sushi dish, i.e. outcome, is defined by 7 variables, such as its heaviness/oiliness and frequency of consumption. In summary, there are 3 categorical variables, out of which 2 are binary, and 4 continuous-space variables. Sushi is rated on an integer scale of 0 to 4; however, we treat the rating as a continuous-space variable, for model simplicity. Note that, prior to any operation, we perform a discretisation of the outcome spaces, as in [22], in order to be able to apply the benchmark described in Section 3.1.1.

We first test our approaches on a revised decision scenario, where we are trying to accurately predict the user's utility u(x_test) over a withheld test outcome x_test. Since |Λ| = 9 (10 minus the withheld test outcome), this scenario is very restricted in terms of available outcomes to query. Similar restrictions appear in real-world applications, especially when queries take the form of recommendations from a set of available items. Our metric is average normalised loss, where the loss function is defined as the absolute distance of our prediction from the test point's evaluation. We ran the following algorithms for 5000 experiment instances. Each time we uniformly selected a user and one of his rated items, x_test. We withhold his data D_i and learn all coarse priors over D_{−i}. The algorithms we tested are:

1. over Outcomes (the benchmark from Section 3.1.1);
2. over Situations (single coarse user type and single-variable situation space, i.e. no decomposition);
3. over Additive Independent (AI) situations (single coarse user type, AI decomposition);
4. over a "filtered" version of Additive Independent (AI) situations (single coarse user type, AI decomposition, removal of variables with a domain of cardinality 1);
5. over a mixture of situation models (a mixture of |T| single-variable situation space coarse user types).

The plot in Fig. 3 depicts the average performance of each algorithm over 5000 experiment instances. The performance of our AI-Situations algorithms is better than that of the benchmark, nearly tripling the reduction in loss, although the effect is small across algorithms. This difference in performance is statistically significant (Wilcoxon Matched-Pairs Signed-Ranks Test [41], p < 0.001) from time-step 3 onwards. The basic coarse model under-performs as a result of the limited number of data-points belonging to the same situation as the test-point, which makes it an unsuitable procedure for this decision scenario.

Figure 3: Sushi data-set (prediction): Average normalised loss, estimated from the offset of a model's prediction from the withheld user test-point evaluation, over 5000 experiment instances.

Fig. 4 shows the time, in seconds, spent per elicitation time-step, for each algorithm. Averages have been computed across time-steps. The decision scenario examined here does not exhibit the combinatorial properties analysed in Section 7. Computational gains for the Situations and "filtered" AI-Situations algorithms are therefore the result of the reduced number of variables considered by these models; specifically, the direct effect this has on the time spent for each belief-state evaluation and update.

Figure 4: Sushi data-set: Computational time used by each algorithm per time-step (in seconds).

We next examine the above scenario over the Sushi data-set, but this time considering the problem of identifying an optimal recommendation. For this purpose, we define normalised loss over the absolute distance of the actual utility of our recommendation at each time-step from the best available recommendation. The rest of the procedure is identical to the one described above, and is also run over 5000 experiment instances. Figure 5 presents our results. We note the good performance of the simple Situations model, which reduces the solution space to a single dimension. The AI Situation models slightly under-perform compared to the benchmark. The difference in performance between the benchmark and the simple Situations model is statistically significant (Wilcoxon Matched-Pairs Signed-Ranks Test [41], p < 0.115 for t = 4, and p < 0.05 for t ≥ 5) from t = 4 onwards.

Figure 5: Sushi data-set (recommendation): Average normalised loss, estimated from the offset of the utility of a model's recommendation to the user from the best available recommendation, over 5000 experiment instances.

Next, we run the prediction scenario (1000 experiment instances) over the Housing data-set. The main difference is that, since the data represents a single agent, we cannot make use of user-typing; Algorithm 5 above was therefore excluded. The Housing data-set [24] consists of the financial evaluation of 506 homes, each one described by 14 variables, out of which 13 are continuous and 1 is binary. As with the Sushi data-set, we discretise the data prior to any operations. Evaluations are given in thousands of dollars. Examples of these variables are the per capita crime rate by town, the average number of rooms per dwelling, and the index of accessibility to radial highways.

The results in Fig. 6 show our additive independent models performing equally well to the benchmark for the prediction scenario. Since there were no situation variables with a domain of cardinality 1, there is no distinction between the "filtered" and "unfiltered" versions.

Figure 6: Housing data-set (prediction): Average normalised loss, estimated from the offset of a model's prediction from the withheld user test-point evaluation, over 1000 experiment instances.

Fig. 7 shows the average computational time spent per time-step for the identification of a query and the update of the belief-state after the user's response. The results are similar to those of Fig. 4, though the increased number of available queries allows the AI Situations algorithm to provide computational benefits. We point out that in a scenario like the ones described in Section 7, there would be additional computational benefits from adopting the AI Situations model.

Figure 7: Housing data-set: Computational time used by each algorithm per time-step (in seconds).

9 Conclusion and Future Work

We presented a decision theoretic model for the representation of human categorical thinking, termed coarse preferences, capturing how people abstract solutions into lower-dimensional spaces by using partitionings of the original solution space. The theory conforms to the von Neumann-Morgenstern utility theorem [44] and is compatible with models of additive independence and generalised additive independence.

Coarse preferences allow for significant gains in computational time: we provide theoretical guarantees for the reduction of computational cost in large multiple-user preference elicitation and optimisation problems, and experimentally demonstrate significant computational reductions per belief update and belief-state estimation. In terms of predictive capability, we present promising results on a real-world data-set, namely the Sushi data-set [28]. In aiming to predict a user's evaluation of a specific solution, under a scenario of limited availability of queries, we show significant improvement with a single user type, additive independent model of coarseness, compared to an additive independent model defined over the original solution space. When the problem description changes to that of making an optimal recommendation, under an otherwise identical scenario, a simple coarse model which reduces the problem to a single variable, with one user type, is enough to significantly improve upon the benchmark.

The extent to which users act as coarse decision makers will depend on the specific application at hand. With this paper we demonstrate that there are significant benefits to be gained the more this is the case, while providing the tools for realising these benefits. Moreover, we prove that the loss in utility when dealing with multiple coarse user types is no more than the worst loss in utility among the initial approximations of individual users. Complementarily, the theory provides an approach for trading off the accuracy of utility function representations against computational efficiency, through the use of ε-close Coarse Preference User Types.

Going forward, we are interested in experimentally demonstrating the benefits of our approach on large multi-agent platforms. Moreover, we are currently developing generative models of coarse preferences, in order to allow for the elicitation of coarseness on-line.

REFERENCES

[1] D. Angluin. Queries and concept learning. Machine Learning, 2:319–342, 1988.
[2] F. Bacchus and A. Grove. Graphical models for preference and utility. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 3–10. Morgan Kaufmann Publishers Inc., 1995.
[3] R. Bell, J. Bennett, Y. Koren, and C. Volinsky. The million dollar programming prize. IEEE Spectrum, 46(5):28–33, 2009.
[4] R. M. Bell, Y. Koren, and C. Volinsky. The BellKor solution to the Netflix Prize, 2007.
[5] A. Bjorndahl, J. Y. Halpern, and R. Pass. Language-based games. In Proceedings of the 23rd International Joint Conference on Artificial Intelligence, pages 2967–2971. AAAI Press, 2013.
[6] C. Boutilier. A POMDP formulation of preference elicitation problems. In Proceedings of the 18th National Conference on Artificial Intelligence, pages 239–246, 2002.
[7] C. Boutilier, F. Bacchus, and R. I. Brafman. UCP-networks: A directed graphical representation of conditional utilities. In Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence, UAI'01, pages 55–64, Seattle, WA, 2001.
[8] C. Boutilier, K. Regan, and P. Viappiani. Online feature elicitation in interactive optimization. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML-09, pages 73–80, Montreal, Quebec, Canada, 2009.
[9] C. Boutilier, K. Regan, and P. Viappiani. Preference elicitation with subjective features. In Proceedings of the 3rd ACM Conference on Recommender Systems, pages 23–25, New York, NY, USA, October 2009.
[10] C. Boutilier, K. Regan, and P. Viappiani. Simultaneous elicitation of preference features and utility. In Proceedings of the 24th National Conference on Artificial Intelligence, AAAI-10, pages 1160–1197, 2010.
[11] D. Braziunas. Computational approaches to preference elicitation. Technical report, University of Toronto, 2006.
[12] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen. Classification and Regression Trees. CRC Press, 1984.
[13] U. Chajewska, L. Getoor, J. Norman, and Y. Shahar. Utility elicitation as a classification problem. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, pages 79–88. Morgan Kaufmann Publishers Inc., 1998.
[14] U. Chajewska, D. Koller, and R. Parr. Making rational decisions using adaptive utility elicitation. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 363–369. AAAI Press, 2000.
[15] H. Crès. Aggregation of coarse preferences. Social Choice and Welfare, 18(3):507–525, 2001.
[16] J. Farfel and V. Conitzer. Aggregating value ranges: Preference elicitation and truthfulness. Autonomous Agents and Multi-Agent Systems, 22(1):127–150, 2011.



c SmartSociety Consortium 2013-2017

[17] Simon French. Decision Theory: An Introduction to the Mathematics of Rationality. Halsted Press, 1986.
[18] Krzysztof Gajos and Daniel S. Weld. Preference elicitation for interface optimization. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, UIST '05, pages 173–182, New York, NY, USA, 2005. ACM.
[19] J. Geanakoplos, D. Pearce, and E. Stacchetti. Psychological games and sequential rationality. Games and Economic Behavior, 1(1):60–79, 1989.
[20] Herbert Gintis. The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences. Princeton University Press, 2009.
[21] Christophe Gonzales and Patrice Perny. GAI networks for utility elicitation. KR, 4:224–234, 2004.
[22] Shengbo Guo and Scott Sanner. Real-time multiattribute Bayesian preference elicitation with pairwise comparison queries. In International Conference on Artificial Intelligence and Statistics, pages 289–296, 2010.
[23] Shengbo Guo, Scott Sanner, and Edwin V. Bonilla. Gaussian process preference elicitation. In Advances in Neural Information Processing Systems, pages 262–270, 2010.
[24] David Harrison and Daniel L. Rubinfeld. Hedonic housing prices and the demand for clean air. Journal of Environmental Economics and Management, 5(1):81–102, 1978.
[25] D. Haussler. Learning conjunctive concepts in structural domains. Machine Learning, 4:7–40, 1989.
[26] Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and José M. Hernández-Lobato. Collaborative Gaussian processes for preference learning. In Advances in Neural Information Processing Systems, pages 2096–2104, 2012.
[27] D. Jannach, M. Zanker, A. Felfernig, and G. Friedrich. Recommender Systems: An Introduction. Cambridge University Press, 2010.
[28] Toshihiro Kamishima. Nantonac collaborative filtering: recommendation based on order responses. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 583–588. ACM, 2003.
[29] R. L. Keeney and H. Raiffa. Decisions with Multiple Objectives: Preferences and Value Trade-Offs. Cambridge University Press, 1993.
[30] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[31] B. Kőszegi and M. Rabin. A model of reference-dependent preferences. Quarterly Journal of Economics, pages 1133–1165, 2006.
[32] Jian Li and Amol Deshpande. Maximizing expected utility for stochastic combinatorial optimization problems. In Foundations of Computer Science (FOCS), 2011 IEEE 52nd Annual Symposium on, pages 797–806. IEEE, 2011.
[33] S. Mullainathan. Thinking through categories. Unpublished manuscript, 2002. URL http://www.haas.berkeley.edu/groups/finance/cat3.pdf.
[34] S. Mullainathan. Coarse thinking and persuasion. Quarterly Journal of Economics, 123(2):577–619, 2008.
[35] Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
[36] M. Öztürk, M. Pirlot, and A. Tsoukiàs. Representing preferences using intervals. Artificial Intelligence, 175(7):1194–1222, 2011.
[37] R. Price and P. R. Messinger. Optimal recommendation sets: Covering uncertainty over user preferences. In Proceedings of the 20th National Conference on Artificial Intelligence, volume 10. AAAI Press, 2005.
[38] Carl Edward Rasmussen. Gaussian processes for machine learning. In: Adaptive Computation and Machine Learning. Citeseer, 2006.
[39] F. Ricci, L. Rokach, B. Shapira, and P. B. Kantor. Recommender Systems Handbook. Springer-Verlag New York, Inc., New York, NY, USA, 1st edition, 2010.
[40] Eleanor Rosch and Barbara B. Lloyd. Cognition and Categorization. Hillsdale, New Jersey, 1978.
[41] Sidney Siegel. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill, 1956.
[42] E. G.-L. Strugeon. Using value ranges to reduce user effort in preference elicitation. Group Decision and Negotiation: A Process-Oriented View, 180:211–218, 2014.
[43] Paolo Viappiani and Craig Boutilier. Optimal Bayesian recommendation sets and myopically optimal choice query sets. In Advances in Neural Information Processing Systems, pages 2352–2360, 2010.
[44] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, Princeton, 1953.
[45] R. Wilson and F. Keil. The MIT Encyclopedia of the Cognitive Sciences. The MIT Press, Cambridge, MA, 2001.

Orchestration adaptation paper

An Application of Network Lasso Optimization for Ride Sharing Prediction

Shaona Ghosh · Kevin Page · David De Roure

Abstract Ride sharing has important implications for environmental, social and individual goals: it reduces carbon footprints, fosters social interactions and economizes commuter costs. The ride sharing systems that are commonly available lack adaptive and scalable techniques that can simultaneously learn from large-scale data and make predictions in a dynamic, real-time fashion. In this paper, we study such a problem towards a smart city initiative: a generic ride sharing system is conceived that is capable of making predictions about ride share opportunities based on historically recorded data while satisfying real-time ride requests. Underpinning the system is an application of a powerful machine learning convex optimization framework called Network Lasso, which uses the Alternating Direction Method of Multipliers (ADMM) optimization for learning and dynamic prediction. We propose the application of this robust and scalable unified optimization framework within the ride sharing case study. The Network Lasso framework jointly optimizes and clusters different rides based on their spatial and model similarity. The framework clusters new ride requests, makes accurate price predictions based on the clusters, detects hidden correlations in the data, and converges quickly thanks to the network topology. We provide an empirical evaluation of ADMM Network Lasso on real trip records and simulated data, demonstrating its effectiveness: the mean squared error of the algorithm's predictions is minimised on held-out test rides.

Keywords Machine Learning · Networks · Ridesharing · Optimization

Shaona Ghosh, University of Oxford, UK. E-mail: shaona.ghosh@oerc.ox.ac.uk
Kevin Page, University of Oxford, UK. E-mail: kevin.page@oerc.ox.ac.uk
David De Roure, University of Oxford, UK. E-mail: david.deroure@oerc.ox.ac.uk


1 Introduction

City councils and private commercial companies with a smart city initiative have recently been aiming to facilitate ride sharing across public and private transportation systems, which provides significant environmental benefit in terms of reduced energy consumption and carbon footprint. Ride sharing is a service that arranges shared rides or carpooling on very short notice through better utilization of empty seats in vehicles. This is especially important during rush hours, when a significant surge in demand for public transport leads to long waiting times and higher tariff rates for commuters. An elevated supply of vehicles then aggravates traffic congestion and carbon emissions, while lowering the net income of drivers once demand falls again after the rush hour. It is therefore imperative to develop smart ride sharing algorithms that optimize for the best outcome.

Fig. 1: Smart City Ride Share Prediction System (commuters communicate ride requests to a cloud-based smart prediction and scheduling service that arranges shareable car rides).

Traditionally, such ride sharing systems are accompanied by a smartphone or tablet based application with which potential riders can make real-time requests. The system then dispatches the ride-shared vehicle or taxi for pick-up after a decision-making process that usually takes place in the cloud. The vehicles are also equipped with a version of the application that communicates with the server. Typically, there are two stages to the planning: (i) first, vehicles are searched for that match the different constraints and criteria of the ride share scenario, for example, vehicles within a 0.2 mile radius of a given (lat, lon) position with an available capacity of at least 2; (ii) the search-and-match phase is typically followed by scheduling the pick-up while satisfying the minimum increase in distance and costs and the maximum profit margin for both the riders and the drivers. Figure 1 illustrates such a smart city ride share system; the commuter images are taken from the Pascal VOC 2012 challenge [6]. In reality, most ride requests are generated in real time, almost at commute time. The requests need to be processed with minimum response delay while addressing the dynamic context, such as the current rush-hour surge in demand. In this work, we address this quick-response, dynamic setting by serving shareable rides to new requests using models learnt on historical data.

2 Motivation and Contribution

Most ride sharing systems cannot learn models that facilitate dynamic real-time prediction of ride sharing opportunities before the actual scheduling, searching and matching process. Specifically, learning a model of interactions and using the same model on new data is novel to applications in this field. We are motivated by simplifying the search, match and scheduling phases: we bridge the gap by learning from the correlations in the ride or trip data for optimization and clustering, followed by efficient predictions before the planning and scheduling stage. Intuitively, if latent groupings are detected in the data, then the search space for the optimization problem is drastically reduced. Additionally, jointly optimizing and clustering can avoid the separation of the different phases and save significant delay in the response time to a ride request.

Consider, as an example, pick-up locations A (40.747, -73.893), B (40.69, -73.969), C (40.82, -73.944) and D (40.744, -73.912). A and B are 6.5 miles apart, A and C are 7.1 miles apart, B and C are 13.5 miles apart (route with tolls), D and C are 8.2 miles apart (route with tolls), and D and B are 5.7 miles apart. Further, the routes with tolls have heavy congestion at the time of request C, known from past information. Also, requests within 0.2 miles of C have in the past rated rides shared from around D negatively, and requests from around region B have found rides shared with region A expensive. With all this information, a possible clustering is {A, C} and {B, D}. Knowing the implicit clustering before the planning and scheduling phase saves the costs and delay involved in re-planning, resource allocation, optimizing over all four rides and serving requests.

Lasso is a statistical machine learning technique known for simultaneous variable selection and estimation of a function with added regularization. Automatic variable selection is useful especially when not all the variables pertaining to a ride carry meaningful information or are available. Variants of Lasso can capture correlations between the various parameters of A, B, C and D to optimize and cluster them jointly. To the best of our knowledge, this is the first application of a fully scalable joint optimization, clustering and prediction framework within the ride sharing purview. An emergent property of the system we propose is its ability to predict optimal pricing based on the clusters. Such optimal pricing is imperative for commercial systems like Uber that apply surge pricing during rush hours when demand is high. The danger of surge pricing by a multiplicative factor is that it can reduce rider interest. If the surge pricing can be predicted beforehand by learning patterns from historical data, the supply can be increased adequately, or ride share opportunities can be provided, to offset the surge pricing.
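To make the distance reasoning in the four-location example above concrete, the straight-line (great-circle) separations between the pick-up points can be computed with the haversine formula. This is a minimal sketch for illustration only (the `haversine_miles` helper is our own, not part of any system described here); note that great-circle distances are shorter than the road distances quoted in the text, which follow actual routes.

```python
import math

def haversine_miles(p, q):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 3958.8 * 2 * math.asin(math.sqrt(a))  # Earth radius ~3958.8 miles

# Pick-up locations from the example in the text.
pickups = {
    "A": (40.747, -73.893),
    "B": (40.690, -73.969),
    "C": (40.820, -73.944),
    "D": (40.744, -73.912),
}

# Pairwise straight-line distances; a similarity graph over such pairs is the
# kind of structure the clustering discussed later operates on.
for u, pu in pickups.items():
    for v, pv in pickups.items():
        if u < v:
            print(f"{u}-{v}: {haversine_miles(pu, pv):.1f} miles")
```

In a real system one would use routed road distances (and congestion and rating signals) as edge weights rather than straight-line separations.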


Fig. 2: Network Lasso Ride Share Optimizer (rides x_1, ..., x_n and their parameters, connected by weighted edges w_12, ..., w_(n-1)(n)).

2.1 Summary of Contributions

The main contributions of this paper can be stated as follows:
– We conceive a robust and scalable, machine-learning-enabled, large-scale ride sharing system that learns a model of correlations from historical trip/ride request data and uses it to predict ride sharing opportunities on current data.
– Applying the Network Lasso convex optimization algorithm [9], we jointly optimize model parameters, detect hidden clusters, and predict parameter values on test trip/ride requests, thus reducing the search space for any traditional ride sharing system phases that follow.
– Our empirical evaluation of the Network Lasso algorithm on simulated and real datasets shows efficient grouping (clustering) of training trip records, deduction of the model parameters of a test trip record (not yet served) based on its cluster membership, and accurate prediction of its fare pricing.

3 Related Work

Carpooling and recurring ride sharing systems [14, 3] have studied the ride sharing problem, although they investigate only the daily commute routine, with requests that are preplanned. In works that study the real-time ride share prediction problem, such as [12, 13], the focus is on the searching and scheduling of taxis for ride sharing, for example searching for the ride sharing vehicle closest to the pick-up points, or scheduling vehicles such that the total distance travelled is minimized. In our work, the main focus is on the stage prior to searching, scheduling and matching riders to drivers; the model learnt at this stage can enable real-time prediction at later stages. Research on dial-a-ride problems (DARP) [5, 10] has studied static customer ride requests that are known a priori.


Although these methods perform grouping of requests beforehand, the requests are not served in real time. In the work of Zhu et al. [21], the authors focus on a path planning algorithm for ride share vehicles with minimized detour. Further, in our work the underlying spatio-temporal features of the rides are captured at the stage of inferring similarities between rides by means of grouping, which differs from the spatio-temporal embedding in the work of Ma et al. [12, 13]: they form a topology over rides based on whether rides can be shared together, whereas in our work the topology over rides is used to find latent groups of rides with similar models. Traditionally, as evident from [13], a Poisson model is assumed for the distribution of ride requests; we assume a similar distribution, with which we simulate the real-time requests for prediction. In the work of Santi et al. [15], a network topology is used for ride share prediction in a similar way to ours; however, our method differs in the optimization, which we adapt to ride-related data by jointly optimizing the individual objectives and the neighbouring objectives on the graph. Other literature focuses on efficient scheduling of ride share vehicles (path planning) [13, 12], recommendations for drivers [20, 19], pricing for commuters [11], the impact of ride sharing [11, 4], and static grouping of riders [20, 19].

4 Formulating a Ride Sharing Model

Regression analysis is a well-known machine learning paradigm for the statistical estimation of the relationships among variables. An important aim of this analysis is to establish how the behaviour of the dependent variable changes with respect to multiple independent variables. Supervised regression [7] is capable of inferring the functional relationship between the output variable and the input variables; the learnt model is then used to predict the output response on new input data. Learning is performed by comparing, by means of a loss function, the response value predicted by the algorithm's model against the true response value. The algorithm makes progress by minimizing this loss over all the data that it sees.
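As a minimal illustration of supervised regression as just described, the sketch below fits a linear fare model by minimizing the squared loss and then predicts the response for a new input. The features, coefficients and fare values are invented for illustration and are not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ride records: columns = [distance (miles), rush-hour flag].
X = np.column_stack([rng.uniform(1, 10, 200), rng.integers(0, 2, 200)])
true_w = np.array([2.5, 1.0])  # assumed: $2.50/mile plus a $1 rush-hour bump
y = X @ true_w + 3.0 + rng.normal(0, 0.1, 200)  # $3 base fare plus noise

# Fit by minimizing the squared loss (ordinary least squares, closed form).
Xb = np.column_stack([X, np.ones(len(X))])  # append an intercept column
w_hat, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Use the learnt model to predict the fare of a new 5-mile rush-hour ride.
fare = np.array([5.0, 1.0, 1.0]) @ w_hat
```

The same learn-then-predict pattern, with a sparsity-inducing penalty added to the loss, underlies the Lasso formulation developed next.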

4.1 Learning the Model by Lasso Regression

Lasso-based optimization is a type of regression analysis that can automatically detect which independent variables are important in influencing the behaviour of the dependent variable. Lasso algorithms [17] perform feature selection and supervised regression simultaneously. For example, the variable pertaining to the ride fare might not be available for a particular set of records, whilst the drop-off location might be. Lasso can automatically select, from the variables present, those that strongly influence the model; in the example, it is capable of selecting the drop-off location to interpolate the fare from the distance travelled for those records where the fare is absent. Additionally, Lasso induces a sparse (zeroed-variables) representation within the model, so that the contribution of some variables can be selectively turned off. For example, Lasso may completely ignore the contribution of the variable payment-type by setting it to zero, as payment-type has no correlation with the total fare. This is made possible by a regularization penalty that penalizes complex models (many non-zero variables) in favour of sparse models (many zero variables), so that the model generalizes better to test data.

Fig. 3: Taxi pick-up request distribution: (a) day of the week; (b) hour of the day.

Let us consider a social system where m different ride requests are being orchestrated simultaneously. Let i denote any such ride request that is being optimized within the framework. Let us assume the scenario can be modelled by a function of some linearly independent variables, measured over a period of time, that describe a ride request or a ride served. Let y_i = f_i(x_i) denote the mapping between the variable x_i ∈ R^d, which encodes the behaviour of the system, and the response or outcome y_i ∈ R. The only assumption on the function f is that it is a convex smooth function. Let p be the number of observations of the pair (x_i, y_i). For simplicity, let x_i encode four independent variables describing a ride, such as distance, time-of-day, pick-up location and payment-type. The unconstrained Lasso formulation is given by the following equation.

minimize   (1/2) ||f(x_i) − f(x̃_i)||_2^2 + λ |x_i|_1        (1)

In Equation 1, f(x̃_i) is the predicted response value of the regression model, whereas f(x_i) is the actual value, and λ is the non-negative regularization parameter. The first term of this quadratic programming formulation measures the loss between the predicted response variable and the true response variable, while the second term is the Lasso penalty, or L1 norm, that ensures sparsity in the model (it minimizes the sum of the absolute values of the variables). The output of Lasso is a model that is capable of interpolating the total fare of a ride as a function of the variables of any new ride request.

Fig. 4: Taxi pick-up request distribution over day of the month.

4.2 Alternating Direction Method of Multipliers Optimization

Typically, for large scale problems with immense datasets, where d ≪ p and d is very large, the vanilla Lasso technique is hindered in terms of scaling with the data; that is, the optimization becomes extremely time consuming. In our situation, a full-fledged social system like a ride sharing system may have ride data records in the order of millions of observations of multi-dimensional variables. Essentially, a fully scalable and robust system should scale with both the number of observations p and the number of variables d. Optimization without exploiting structure in the problem makes the convergence time scale with the cube of the problem size [2]. In such situations, optimization in the primal may be cumbersome, and one should resort to optimization techniques in the dual. Dual decomposition ensures that the function can be decomposed and each part solved separately. This leads us to investigate dual decomposition techniques for scaling the computation across multiple computing resources. One such technique is the Alternating Direction Method of Multipliers (ADMM) [18]. This method guarantees decomposition and robustness through the introduction of new augmenting variables and constraints. In other words, under certain assumptions, additional auxiliary variables can be introduced that enable decomposition of the problem, with additional constraints, for distributed optimization. This allows scalability and robustness. The Lasso model of Equation 1 within the ADMM framework can be written as:

minimize   (1/2) ||f(x_i) − f(x̃_i)||_2^2 + λ |z_i|_1
subject to x_i − z_i = 0,        (2)

where z_i, a copy of x_i, is introduced to make the second term decomposable in the minimization, so that the two terms can be minimized in parallel towards minimizing the global objective. In our example, z_i is the copy of the ride x_i. The constraint x_i − z_i = 0 is the consistency constraint linking the two copies. From this point onward, when we refer to f(x), we mean for any ride scenario i. In our example, this is equivalent to distributing the modelling of the total fare of the ride as a function of the ride's different variables over multiple processing units, such that each ride is optimized on a different processor in parallel. This allows the model to converge much faster than non-distributed (non-ADMM) Lasso. As is usual with ADMM, the optimization can be broken down into a series of update steps derived from the augmented Lagrangian of Equation 2. We can rewrite f(x) as f(x) = Ax, where A is the coefficient matrix, such that the system of linear equations is given by Ax = b, where b is the response vector. So Equation 2 can be rewritten as:

minimize   (1/2) ||Ax − b||_2^2 + λ |z|_1
subject to x − z = 0.        (3)

The augmented Lagrangian of Equation 3 is given by:

L_ρ(x, z, u) = (1/2) ||Ax − b||_2^2 + λ |z|_1 + u^T (x − z) + (ρ/2) ||x − z||_2^2,

where L is the Lagrangian, u is the Lagrange multiplier, and ρ is the cost for violating the consistency constraint. Minimizing with respect to x and z separately, and updating u, leads to the following iterates, which can be computed in a distributed way, yielding scalability for very large problems. The main advantage of ADMM-based Lasso, besides robustness and scalability, is its guaranteed global convergence.

x^{k+1} = (A^T A + ρI)^{−1} (A^T b + ρ z^k − u^k)
z^{k+1} = S_{λ/ρ}(x^{k+1} + u^k / ρ)
u^{k+1} = u^k + ρ (x^{k+1} − z^{k+1}).
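The three update steps above translate directly into code. The following numpy sketch implements the x-update (a linear solve), the z-update (the soft-thresholding operator S_{λ/ρ}) and the dual u-update on invented synthetic data; it is a minimal single-machine illustration rather than the distributed implementation the text describes:

```python
import numpy as np

def soft_threshold(v, t):
    """Soft-thresholding operator S_t, the prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """ADMM iterates for 0.5 * ||Ax - b||^2 + lam * |z|_1  s.t.  x - z = 0."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # A^T A + rho*I is positive definite, hence invertible; form it once.
    M = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * z - u)   # x-update (closed form)
        z = soft_threshold(x + u / rho, lam / rho)  # z-update (shrinkage)
        u = u + rho * (x - z)                       # dual (multiplier) update
    return z

rng = np.random.default_rng(2)
A = rng.normal(size=(150, 5))
true_x = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
b = A @ true_x + rng.normal(0, 0.05, 150)

x_hat = admm_lasso(A, b, lam=2.0)  # sparse estimate of true_x
```

In the distributed setting each worker would carry out its own x-update, with the z- and u-updates enforcing consensus between the copies.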


The x update can be derived by minimizing the augmented Lagrangian with respect to x. Using ||u||_2^2 = u^T u, (Ax − b)^T = x^T A^T − b^T and u^T v = v^T u:

L_ρ(x) = (1/2)(Ax − b)^T (Ax − b) + λ|z|_1 + u^T x − u^T z + (ρ/2)(x − z)^T (x − z)
       = (1/2)(x^T A^T A x − 2 x^T A^T b + b^T b) + λ|z|_1 + x^T u − u^T z + (ρ/2)(x^T x − 2 x^T z + z^T z).

We know that for the quadratic term ∂/∂x (x^T M x) = 2Mx, and for the linear term ∂/∂x (x^T a) = a, where a is a vector. So we have:

∂L_ρ(x, z, u)/∂x = A^T A x + ρx − A^T b + u − ρz
  ⟹ (A^T A + ρI) x − (A^T b + ρz − u) = 0
  ⟹ x = (A^T A + ρI)^{−1} (A^T b + ρz − u).

Hence x can be updated at time k + 1 with the values of the iterates z and u from time k. A^T A + ρI is positive definite and hence invertible. The update for z^{k+1} is obtained from a soft shrinkage (soft-thresholding) solution, and the u^{k+1} update follows from the new iterates x^{k+1} and z^{k+1}. In the simulation experiments that we discuss in Section 5.1, we show how the Lasso optimization model is capable of estimating the response variable with respect to the input variables. The Lasso is used mainly as an optimization problem for inferring the model of the variables. As we shall see in Section 5.1, ADMM Lasso is capable of modelling the utility of rides as a function of different synthesized ride parameters using distributed optimization.

4.3 Network Lasso ADMM Optimization

The Network Lasso [9] algorithm is a generalization of the Lasso to a network setting that enables simultaneous optimization and clustering of the observations. Network Lasso [9] extends the power of Lasso algorithms through structured learning over a graph topology; the topological structure allows for joint optimization. In our example, this means groups of rides get automatically clustered and optimized to have the same models of the total fare as a function of their ride parameters. Not only is the grouping of rides performed based on similar rides being grouped together; the optimization problem also computes the fare model across such groups in consensus.

Fig. 5: Correlations between the pick-up and drop-off times of trips, with pick-up times on the x axis and drop-off times on the y axis.

Let a graph be described by G = (V, E), where V is the set of vertices and E is the set of edges connecting neighbouring vertices. The graph or network, as shown in Figure 2, encodes the input data such that each data point is represented as a vertex. In our case, each vertex represents a ride trip record or a ride request. The similarity between trip records or requests is encoded as an edge. The objective that Network Lasso solves is expressed in the equation below. The variables are x_1, ..., x_m ∈ R^p, where m = |V| is the number of trip records or ride requests and p is the number of features of a ride, giving a total of mp variables for optimization [9]:

minimize   Σ_{i ∈ V} f_i(x_i) + λ Σ_{(j,k) ∈ E} w_jk ||x_j − x_k||_2.        (4)

Similar to Hallac et al. [9], the function f_i at node i is the local objective function for data point i, whereas g_jk = λ w_jk ||x_j − x_k||_2 is the global objective function associated with each edge, with λ ≥ 0 and edge weight (a measure of similarity) w_jk ≥ 0. The edge objective function penalizes differences between the variables of adjacent nodes, thus inducing similar behaviour and leading to groups of nodes, or clusters, that behave similarly; the solution to the optimization problem is the same across all nodes x_i in a cluster. In other words, each cluster has the same model (functional mapping between the output and input variables). In our example, this implies that similar ride share records are grouped into clusters, and hence similar plans and schedules can be allocated to these clusters. The only assumption is the convexity of the function f_i.
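To make the interplay of the node and edge terms in Equation 4 concrete, the sketch below evaluates the Network Lasso objective on a tiny invented four-ride graph (all targets, weights and the quadratic local losses are our own illustrative choices). It only computes the objective rather than solving the optimization problem, and shows that for a large λ a consensus (clustered) assignment achieves a lower objective than letting every node fit its own local loss exactly:

```python
import numpy as np

def network_lasso_objective(x, node_losses, edges, lam):
    """Objective (4): sum_i f_i(x_i) + lam * sum_{(j,k)} w_jk ||x_j - x_k||_2."""
    node_term = sum(f(x[i]) for i, f in enumerate(node_losses))
    edge_term = sum(w * np.linalg.norm(x[j] - x[k]) for j, k, w in edges)
    return node_term + lam * edge_term

# Four rides, each with a 2-dim model vector and a quadratic local loss.
targets = [np.array([1.0, 0.0]), np.array([1.1, 0.1]),
           np.array([5.0, 2.0]), np.array([5.2, 1.9])]
node_losses = [lambda v, t=t: float(np.sum((v - t) ** 2)) for t in targets]
# Edges weighted by (assumed) ride similarity: rides 0-1 and 2-3 are alike.
edges = [(0, 1, 1.0), (2, 3, 1.0), (1, 2, 0.1)]

x_separate = np.array(targets)                   # each node fits its own loss
x_clustered = np.array([targets[0], targets[0],  # consensus within clusters
                        targets[2], targets[2]])

obj_sep = network_lasso_objective(x_separate, node_losses, edges, lam=10.0)
obj_clu = network_lasso_objective(x_clustered, node_losses, edges, lam=10.0)
# For this lam, the clustered assignment attains the lower objective value.
```

An actual Network Lasso solver alternates ADMM updates over nodes and edges to find the minimizing assignment; here the two candidate assignments are fixed by hand purely to expose the node/edge trade-off.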


It is important to note the role of λ, the regularization parameter that controls the optimization process. Based on the value of λ, the optimization trades off optimizing the node variables against the edge variables. For smaller values of λ, optimization is performed at the node level, while for larger values the edge optimization comes into play, inducing adjacent nodes to have similar models. The edge cost is a sum of norms measuring how different adjacent ride records are from each other, and the penalty that must be paid within the model for large differences; in other words, the edge objectives encourage nodes to be in consensus (to have a similar model). In our application, we use regularization such that the edge penalties are active. The vanilla Lasso technique discussed above essentially maps to the scenario where λ = 0, when the individual nodes are optimized independently without edge optimization. With the Network Lasso formulation (4), we not only obtain a robust, scalable and distributed optimization algorithm, but are also guaranteed global convergence. The edge objective function g_jk = λ w_jk ||x_j − x_k||_2 adds the "network" aspect to vanilla Lasso optimization by inducing a relationship between individual node variables; in our example, this edge objective minimization allows for clustering of rides based on their optimization models. In the following section we evaluate the Lasso, ADMM and Network Lasso techniques on synthesized and real datasets of ride observations, illustrating the accuracy of the optimization while presenting ride sharing opportunities.

5 Model Validation and Experimental Evaluation

In this section, we discuss the experiments conducted on synthetic and real-world datasets to validate the efficiency of the techniques proposed in the previous section. The section begins with a description of the synthetic experiment we design to evaluate the modelling, feature selection and prediction accuracy of the vanilla ADMM Lasso technique applied to an unknown linear model of variables encoding a ride. Following this discussion, we explain the real-dataset experiments, where the open trip record dataset of the green taxis from the New York Taxi and Limousine Commission [16] is used. We apply Network Lasso to this dataset to extend the capabilities of vanilla ADMM Lasso in modelling, while being in consensus with the models of the neighbourhood trips.

5.1 Synthetic Dataset and Experiment

The synthetic dataset is constructed by exploring the linear relationship between the multidimensional variables of the underlying ride sharing model that we assume. For a simplified linear model for a ride request i,

f_i(x_i) = a_i · ratings + b_i · preferences + c_i · pickuptime + d_i · pickuploc − e_i · cost        (5)


4.5

14

90

4 12

80

2.5 2

1.5

4

2

1

0

100

200

300

iter (k) Iterations

(a)

400

500

600

60

50

40

30

0.5 0

70

k

k

6

3

k

8

k

10

Objective Value f(x ) + g(z )

f(x ) + g(z ) Objective Value

k

k

Objective Value f(x ) + g(z )

3.5

0

20

0

50

100

150

Iterations iter (k)

200

(b)

250

0

2

4

6

Iterations iter (k)

8

10

12

(c)

Fig. 6: Convergence of Lasso ADMM over iterations for varying λ: (a) λ = 0.0001, very early convergence but a complex model; (b) λ = 0.001, convergence at 50 iterations with a simpler model; (c) λ = 0.01, no convergence.

we assume variables such as ratings ∈ [1, 10] ⊂ Z, preferences ∈ [1, 10] ⊂ Z, pickuptime ∈ {0, 1}, pickuploc ∈ [0, 30] ⊂ R, and cost uniformly distributed in [0, 1] ⊂ R. The variable ratings encodes past feedback on the shared ride experience. The variable preferences encodes the choices of the commuter, for example sharing with one other passenger or with more than one; pickuptime and pickuploc are the requested time and location of pickup respectively, while cost is the expense related to the ride. The value f_i(x_i) encodes the utility of the ride as a function of all these variables. The variables or regressors are related through the parameters of the ride, given by [a_i, b_i, c_i, d_i, e_i], which are not available to the algorithm: it must deduce the latent model from the given final fare values f_i(x_i) and the regressors. Applying the ADMM Lasso algorithm enables the relationships between the ride variables to be learned by modelling them efficiently. The model is improved over many such ride request records so as to minimize the error between the algorithm's internal model and the true model in hindsight. To this end we apply the Lasso regression technique discussed in Equation 1. For robustness we randomly sample 1500 examples with 5000 features. The coefficient matrix A constituting the ride variables is generated as a Gaussian distributed sparse matrix with a known sparsity density; we use a sparsity density of 0.02. The output variable b corresponding to the input variables is computed as a linear combination of the coefficient matrix and a random sparse vector, plus Gaussian noise. We vary the Lasso parameter λ over {0.0001, 0.001, 0.01} to evaluate its influence on the smoothness of the optimization. The ADMM parameter ρ is fixed to 1.2 and α is fixed to 1.8.
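The synthetic setup above can be sketched in Python/NumPy as follows. This is our illustrative re-implementation (the paper's experiments adapt the Matlab scripts of [1]); dimensions are scaled down from 1500 × 5000 so the sketch runs quickly, and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scaled-down stand-in for the paper's setup (1500 examples, 5000 features,
# sparsity density 0.02); we use 150 x 300 so the sketch runs quickly.
m, n, density = 150, 300, 0.02
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
support = rng.choice(n, size=max(1, int(density * n)), replace=False)
x_true[support] = rng.normal(size=support.size)
b = A @ x_true + 0.01 * rng.normal(size=m)

def admm_lasso(A, b, lam, rho=1.2, alpha=1.8, iters=300):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z,
    with over-relaxation parameter alpha (cf. the scripts of [1])."""
    m, n = A.shape
    x = z = u = np.zeros(n)
    # Factorise (A^T A + rho*I) once; it is reused in every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        x_hat = alpha * x + (1 - alpha) * z          # over-relaxation
        v = x_hat + u
        # Soft-thresholding (the proximal operator of the l1 norm):
        z = np.maximum(v - lam / rho, 0.0) - np.maximum(-v - lam / rho, 0.0)
        u = u + x_hat - z
    return z

z = admm_lasso(A, b, lam=1.0)
print("non-zeros:", np.count_nonzero(z), "of", n)
```

The returned z is sparse: most coordinates are driven exactly to zero by the soft-thresholding step, mirroring the feature selection behaviour reported below.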
For all our simulation experiments, we adapted the ADMM Lasso reference code [1] to our data; the adaptation code was written by us. The experiments were carried out in Matlab on a Windows desktop PC with 16GB RAM and an i7 processor. The results of our simulation experiments are shown in Figure 6. Higher values of λ induce more sparsity by penalizing complex models, allowing a simpler model in which most of the variables are zero. This is desirable for generalization and high accuracy on (new) test data. Lower values of the penalty parameter allow denser solutions with more non-zeros. A trade-off is often desired to have simpler

models that do not over-fit the training data and generalize to test data. Figure 6 plots the minimum value of the loss function in Equation 1 (vertical axis) against iterations (horizontal axis); the lower the difference between the predicted fare value and the ground-truth fare value, the better. In (a), the solution converges quickly for λ = 0.0001. However, this model is complex and can overfit, performing poorly on test data; the total number of non-zero variables is 3062. In (b), with λ = 0.001, the solution takes longer to converge than (a), with higher prediction error. The number of non-zeros is 1673, about 30 percent of the 5000 variables. This shows how Lasso is capable of capturing the roughly 30 percent of ride variables that contribute most to the model. In (c), for the higher value λ = 0.01, the solution is very sparse, but the error increases as the model is unable to fit the data, with only 636 non-zeros. It is important to note that although Lasso by itself can induce generalized solutions, the ADMM approach is desirable for large datasets because of its faster convergence, as shown here where convergence happens within 50 iterations in Figure 6(b).
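The sparsity behaviour described above comes from the soft-thresholding (proximal) operator used in the ADMM z-update. The following minimal sketch, with an arbitrary example vector of ours, shows how a larger threshold (λ/ρ) zeroes more coordinates:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrinks v toward zero and sets
    entries with |v_i| <= t exactly to zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

v = np.array([3.0, -1.5, 0.4, -0.05, 2.2, -0.6])
for t in (0.01, 0.5, 2.0):  # larger threshold (larger lambda) -> sparser
    print(t, np.count_nonzero(soft_threshold(v, t)))
```

Running this, the count of surviving non-zeros drops as the threshold grows, which is exactly the λ-versus-sparsity trade-off seen across panels (a) to (c) of Figure 6.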

5.2 New York Taxi Data Experiment

5.2.1 Dataset

The real-world dataset, with attributes as shown in Table 1, comprises about one billion records of taxi trips recorded over the entirety of 2015 [16] in the city of New York. Here, we use only the green taxi trip records for January 2015. Each record pertains to a ride served by a green taxi. The attributes of a ride are defined by variables such as pick-up time, drop-off time, pick-up location, drop-off location, base fare, tips, tax, passenger count, trip distance and trip type, among some 20 variables in total. It is important to note that the green cabs do not serve the Manhattan area, as can be seen later in the plots overlaid on the maps. We first visualize the dataset to look for obvious patterns. Figure 3 shows the distribution of users requesting a taxi ride, plotted per day of the week and per hour of the day. As expected, the distribution over days of the week in (a) shows the weekend and Fridays having increased taxi demand. The distribution over hours of the day in (b) shows a sharp increase in demand during the morning rush hour, as well as a surge in the evening. Most demand occurs in the early morning hours, when it is difficult to take public transport, continuing into the morning rush hour; a second surge occurs in the evening rush hour, peaking at around 19:00. The plot over the week is for a random week in January, and the plot over the day for a random day of that month. In Figure 4, we plot the distribution of ride requests throughout January 2015. The 27th day of the month is an exception, as severe travel restrictions were in place that day due to heavy snowfall.
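The per-hour demand profiles discussed in this section reduce to a simple grouping of pickup timestamps by hour. The sketch below uses synthetic epoch seconds as a hypothetical stand-in for the TLC pickup timestamps (in the real experiment these come from the green-taxi trip records [16]), and then resamples a synthetic day of demand from a Poisson model, as done later when generating test data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the TLC pickup timestamps: synthetic epoch
# seconds within a single day, used only to show the grouping step.
pickup_seconds = rng.integers(0, 24 * 3600, size=10_000)

hours = pickup_seconds // 3600                # map each request to its hour
demand = np.bincount(hours, minlength=24)     # empirical requests per hour

# The per-hour counts can then serve as rates of a Poisson demand model,
# from which synthetic test days are sampled.
sampled_day = rng.poisson(demand)
```

The `demand` array corresponds to the hourly histograms of the figures in this section; `sampled_day` is one synthetic day drawn from the fitted Poisson rates.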
Figure 5 shows the correlations between the pick-up and drop-off times of over 2000 ride requests randomly sampled from days 1 and 2, which are used for modelling.


Fig. 7: Correlation across different trip record optimization parameters (rate-code, passenger, distance, fare, extras, tax, tip, toll, surcharge, total fare).

Table 1: NYC Taxi Dataset Attributes

vendorId, pickuptime, dropofftime, storeflag, ratecode, pickuplong, pickuplat, dropofflong, dropofflat, passengercnt, tripdistance, fareamt, extra, mtatax, tipamt, tollamt, ehailfee, surcharge, totalamt, paytype, triptype

As expected, the pick-up and drop-off times are positively correlated. Figure 7 shows the correlations between the different feature variables. As expected, fare amount and trip distance are positively correlated with each other; the surcharge is also positively correlated with trip distance and fare amount, and the tip in turn is positively correlated with trip distance and fare amount. Figure 8 shows the pick-up requests generated per hour on the first day of January 2015. Interestingly, the shape of the curve suggests that the distribution of pick-ups behaves like a Poisson distribution; in practice, a Poisson distribution is indeed often used to model pick-up requests. Since such a distribution is available from the data, we use this realistic distribution to sample our test data when evaluating the predictions of the algorithms. In Figure 9, we show the heatmap of the network obtained from the dataset. The network is relatively dense, with distances varying between 0 kilometres (distance from itself), indicated in dark blue, and 30 kilometres, indicated in yellow; ride parameters that are close to each other appear in similar colours. In Figure 10, we show the network as generated from the data, based on the spatial information encoded in it. Basic modularity analysis of this spatial information shows that the network is formed of dense clusters; 8 clusters are formed, each indicated with a different colour. It is interesting to note that since these are only spatial clusters,

Fig. 8: Frequency distribution of pickup requests over the hours of the day.

optimizing based solely on these clusters would not factor consensus into the models. For example, two spatially distant rides may have the same underlying optimization model, with parameters behaving similarly.

5.2.2 Network Lasso Experiments

We apply the Network Lasso technique to data sampled randomly from the taxi trips dataset described in the previous section. Our training set comprises random subsets sampled from different times of the first and second days of the month. The Network Lasso algorithm [9] learns the model on the training set with a known output response variable. Once the model is learnt, its predictions are evaluated on a test set, again randomly sampled and disjoint from the training data. The optimized data attributes are all attributes other than the spatio-temporal data, which is instead fed to the algorithm as the network information; each trip record is a node in the network. The algorithm optimizes the total fare value at each node while ensuring consensus among neighbouring nodes. The result is a grouping of the network into clusters with similar models, and the advantage is that these clusters can be used for prediction on any new data. For every instance in the test set, the error between the predicted and true total fare values is calculated, and the mean squared error is reported for different values of λ. Varying λ tunes how much consensus is desired between the node models. We adapt the code provided by Hallac et al. [8]. All the experiments are run in Python on a Linux desktop PC with 12 GB RAM and an i5 processor. In Figure 11(a), we show how consensus over the test set varies with λ; a higher consensus value indicates that more nodes in the network

are in sync. Figure 11(b) shows how the mean squared error (MSE) varies with λ: in the right range of λ, the MSE falls to its minimum as λ slowly increases. This demonstrates the prediction accuracy of the algorithm and its applicability to modelling the economic interrelationships of the underlying ride parameters, as well as its robustness, with early convergence resulting from the network consensus. Without the network consensus, convergence would take cubic time. Such accurate fare prediction can be used by a ride sharing application for efficient fare pricing, by jointly factoring in that rides with similar models can be grouped together and shared.

Fig. 9: Heat map of the network derived from the random subset of trips.

5.3 Discussion

In Figure 12, we show the clustering of a portion of the test data set as performed by the algorithm. Each test data point is indicated by an asterisk marker whose colour is deduced by the algorithm when deciding its cluster membership. There are four clusters (each indicated with a different colour) to which the algorithm can assign a test point, such that its variables can be deduced from the cluster to which it belongs, or to which it is closest in terms of the similarity of their models. Panel (a) uses a five-neighbourhood structure and (b) a ten-neighbourhood structure, resulting in more overlap. Two observations are important. First, spatially distant rides can be grouped together when their model parameters are similar, in addition to grouping by spatial closeness. This is a unique emergent property of this work in the context of ride sharing: traditional ride sharing systems group only rides that are geographically close, whereas the method we discuss is capable of both. Second, in

Fig. 10: Spatial clusters without optimization. Network rendered using the Gephi software; edge colours match the cluster membership colour of the source node.

traditional systems the grouping and the optimization are usually separate processes, and optimization decoupled from the network structure takes longer to converge, delaying the responsiveness of serving rides. In Figure 13, we magnify the cluster to which the algorithm decides the test point belongs. The test point is indicated by the black marker: the test point in (a) belongs to the red cluster and the test point in (b) to the blue cluster. Figure 14 shows an alternative illustration of the clustering detected by the algorithm, where the grouping is not overlaid on the map but shown purely on the basis of the predicted values; similar models predict similar values, and the colour indicates the value range to which each trip node belongs. Figure 15 shows the vanilla Lasso prediction without the network data, to validate the accuracy of prediction: vanilla Lasso converges to the minimum value of the objective function in terms of learning the model, but is incapable of finding any clusters or groupings among the rides.

6 Conclusion

In this work, we make a novel connection between the ride sharing scenario and a scalable and robust optimization technique, Network Lasso. We apply several well-known techniques from statistical regression analysis and machine learning to synthetic and real-world data encoding ride-related


Fig. 11: Consensus in the optimization model and error in prediction: (a) consensus over λ; (b) mean squared error over λ.
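A minimal sketch of the two quantities plotted in Figure 11, assuming per-node model vectors x and an edge list; the function names and the agreement tolerance are our choices, not from the reference code:

```python
import numpy as np

def consensus_fraction(x, edges, tol=1e-3):
    """Fraction of edges whose endpoint models agree to within tol;
    a simple stand-in for the consensus measure of Fig. 11(a)."""
    agree = sum(np.linalg.norm(x[j] - x[k]) <= tol for (j, k) in edges)
    return agree / len(edges)

def mean_squared_error(y_pred, y_true):
    """Test-set MSE between predicted and true total fares, Fig. 11(b)."""
    y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
    return float(np.mean((y_pred - y_true) ** 2))
```

Sweeping λ, recomputing the node models, and evaluating both functions at each value reproduces the shape of curves like those in Figure 11.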

Fig. 12: Clustering membership deduced for test data based on training clusters: (a) 5-neighbourhood structure; (b) 10-neighbourhood structure.

attributes and variables, in order to perform joint optimization and clustering of similar rides that share similar models. To the best of our knowledge, this work is the first attempt at applying techniques that jointly optimize and cluster rides in order to predict new ride sharing opportunities based on a network topology of rides. Rides whose parameter models are similar get grouped together and can hence be shared. We evaluate the accuracy of Lasso, ADMM and Network Lasso on a synthetic dataset and on the real-world dataset of green taxi trip records from the New York Taxi and Limousine Commission open data. We observe

Fig. 13: Deducing cluster membership for (a) Test trip node 77 indicated in black (b) Test trip node 193 indicated in black.

Fig. 14: Clustering based on Predicted Values on Training Set.

that vanilla ADMM Lasso achieves convergence within 50 iterations, but cannot sufficiently exploit the network topology. Network Lasso, however, achieves an accuracy of 99.8 percent with efficient clustering (8 clusters). We also note that Lasso by itself ensures a sparse, generalized model, with automatic feature selection retaining 30 percent of the variables for rides spanning 5000 dimensions in the synthesized dataset. We conclude that ADMM Network Lasso in particular is an efficient framework for large-scale ride sharing systems that require distributed, scalable optimization with sufficient exploitation of the network topology of similarity among rides for predicting new sharing opportunities. As future work, we would like to extend the algorithms to prediction in low-information environments, where there is not enough training data due to the lack of deployed real-life systems.


Fig. 15: Objective values across iterations with ADMM Lasso.

References

1. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Matlab scripts for alternating direction method of multipliers. https://web.stanford.edu/~boyd/papers/admm/.
2. Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
3. Roberto Wolfler Calvo, Fabio de Luigi, Palle Haastrup, and Vittorio Maniezzo. A distributed geographic information system for the daily car pooling problem. Computers & Operations Research, 31(13):2263–2278, 2004.
4. Serdar Colak, Antonio Lima, and Marta C. Gonzalez. Understanding congested travel in urban areas. Nature Communications, 7, 2016.
5. Jean-François Cordeau and Gilbert Laporte. The dial-a-ride problem: models and algorithms. Annals of Operations Research, 153(1):29–46, 2007.
6. M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascalnetwork.org/challenges/VOC/voc2012/workshop/index.html.
7. Jerome H. Friedman. Fast sparse regression and classification. International Journal of Forecasting, 28(3):722–738, 2012.
8. David Hallac, Jure Leskovec, and Stephen Boyd. Network lasso: Clustering and optimization in large graphs (code). https://github.com/davidhallac/NetworkLasso.
9. David Hallac, Jure Leskovec, and Stephen Boyd. Network lasso: Clustering and optimization in large graphs. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 387–396. ACM, 2015.
10. Mark E. T. Horn. Fleet scheduling and dispatching for demand-responsive passenger services. Transportation Research Part C: Emerging Technologies, 10(1):35–63, 2002.
11. Lauren Alexander and Marta C. González. Assessing the impact of real-time ridesharing on urban traffic using mobile phone data. Transportation Research Part A: Policy and Practice, 2015.
12. Shuo Ma, Yu Zheng, and Ouri Wolfson. T-share: A large-scale dynamic taxi ridesharing service. In Proceedings of the IEEE International Conference on Data Engineering (ICDE), pages 410–421, 2013.
13. Shuo Ma, Yu Zheng, and Ouri Wolfson. Real-time city-scale taxi ridesharing. IEEE Transactions on Knowledge and Data Engineering, 27(7):1782–1795, 2015.
14. Roberto Baldacci, Vittorio Maniezzo, and Aristide Mingozzi. An exact method for the car pooling problem based on Lagrangean column generation. Operations Research, 52(3):422–439, 2004.

15. Paolo Santi, Giovanni Resta, Michael Szell, Stanislav Sobolevsky, Steven Strogatz, and Carlo Ratti. Taxi pooling in New York City: a network-based approach to social sharing problems. arXiv preprint, 2013.
16. NYC Taxi and Limousine Commission. TLC trip record data. http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml.
17. Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pages 267–288, 1996.
18. Bo Wahlberg, Stephen Boyd, Mariette Annergren, and Yang Wang. An ADMM algorithm for a class of total variation regularized estimation problems. arXiv preprint arXiv:1203.1828, 2012.
19. Wei Wu, Wee Siong Ng, Shonali Krishnaswamy, and Abhijat Sinha. To taxi or not to taxi? Enabling personalised and real-time transportation decisions for mobile users. In Mobile Data Management (MDM), 2012 IEEE 13th International Conference on, pages 320–323, 2012.
20. Desheng Zhang, Tian He, Yunhuai Liu, and John A. Stankovic. CallCab: A unified recommendation system for carpooling and regular taxicab services. In Big Data, 2013 IEEE International Conference on, pages 439–447, 2013.
21. M. Zhu, X. Y. Liu, F. Tang, M. Qiu, R. Shen, W. Shu, and M. Y. Wu. Public vehicles for future urban transportation. IEEE Transactions on Intelligent Transportation Systems, PP(99):1–10, 2016.

E Workflow adaptation paper

Stable Coalition Formation for Sharing Services with Limited Reporting

Paper 2067

Abstract

Sharing economy applications constitute interesting domains for multiagent resource allocation and coalition formation. In many of these applications, users cannot completely specify their preferences to a centralized allocation mechanism because the information they can report is limited. In this setting, stable formation of coalitions among users sharing services or engaging in collaborative activity is not well understood. We propose two mechanisms that provide stability guarantees under limited reporting, based on the posted goods protocol, a multidimensional generalization of the posted price mechanism. Each of our two mechanisms applies to a different family of user preferences: hedonic and topological preferences. Our results demonstrate that each family of preferences requires addressing different issues in order to guarantee stability, which suggests that a single mechanism is unlikely to be suitable for all sharing systems.

1 Introduction

Sharing economy applications constitute interesting domains for multiagent resource allocation [Hamari et al., 2015]. In these applications, ordinary citizens act as providers and users of services. As it is typically not possible for each individual to reach out to people with similar or complementary needs, sharing is necessary to ensure that each user has reliable access to services. To coordinate sharing between agents, it is common to adopt a centralized allocation mechanism, which, with the aid of digital media, matches providers with users. This matching is not necessarily one-to-one: key examples are the matching problems arising in ridesharing services like Uber and BlaBlaCar, group purchase schemes like GroupOn, or accommodation services such as Couchsurfing. In these domains, the problem of allocating users to groups that will share the same service is often one of coalition formation. An important feature of the coalition formation problem is that each agent is strategic, with their own private preferences. Thus, to ensure that coalitions are stable, i.e. that no user prefers a coalition different from the one she has been assigned to, the centralized allocation mechanism must elicit each agent's preferences over all her possible allocations. However, preference elicitation is made particularly difficult in these types of applications by the number of parameters that characterize any given service and the possibility of trade-offs between these parameters. To obtain complete


preference profiles would require a highly involved procedure, even if users were willing to disclose these preferences completely and truthfully. Even in small examples, where a few dozen users choose from, say, ten options, and coalitions are composed of three or four users (e.g. the people sharing a ride, or travellers sharing lodgings), there are already thousands of possible combinations over which each user would need to report a full preference ordering. While this problem also occurs in traditional coalition formation settings due to the exponential number of possible coalitions, in service sharing it is aggravated further by the need to consider how the coalition will perform its joint activity. The parameters that specify the details of a concrete service often include times, locations, pricing, and contractual arrangements, as well as many other parameters that may vary across domains. The difficulty in obtaining and exploiting complete preference profiles suggests adopting approaches based on partial preference profiles; that is, adopting mechanisms where agents only provide limited reports on their preferences. However, limited reporting introduces its own difficulties. In particular, it is challenging to provide stability guarantees and to ensure that the limited reports are truthful, i.e. that agents have no incentive to misreport their type. Indeed, there are presently no mechanisms that provide stability guarantees under limited preference reporting. The state-of-the-art mechanisms in [Kamar and Horvitz, 2009; Kleiner et al., 2011; Zhao et al., 2014] are all based on variants of the Vickrey-Clarke-Groves mechanism [Vickrey, 1961; Clarke, 1971; Groves, 1973], which requires participants to be able to completely report their valuation function. Although Dütting et al. [Dütting et al., 2011] have investigated the impact of limited reporting on the equilibria of combinatorial auctions, their results do not address the stable coalition formation problem. The huge number of service parameters also creates challenges for the allocation mechanism, which has to decide which coalitions will be formed: even if all user preferences were known, applying existing techniques often leads to complexity problems. Although steps to resolve this issue have been suggested when coalitions are constrained by an underlying social network [Bistaffa et al., 2015], in general settings, resolving the combinatorial nature of the coalition formation problem and its dependence on preference elicitation remains an open problem. In this paper, we address this problem by designing mechanisms capable of forming stable coalitions under limited reporting. Our mechanisms are based on a new signaling protocol, which we call the posted goods protocol (PGP). The PGP can be viewed as a generalization of the signaling protocol in the popular posted price mechanism (see, e.g., [Chawla et al.,

2010]). A convenient feature of the PGP is that it is straightforward to guarantee truthful reporting, due in part to the fact that agents can accept and reject coalition offers. The challenge, therefore, is to ensure that these truthful reports are sufficient to guarantee the stability of each coalition of users. Indeed, if the mechanism is not properly designed to account for limited reporting, users may be allocated to services they do not want, which may mean they are not willing to make service requests in the future. As such, the key problem we address in this paper can be stated as follows: how should the messages users can report, the allocation of the users, and the side information of the mechanism be designed so that each user is allocated a service that is not dominated by other alternatives? This key problem does not have a single solution, because the structure of different families of user preferences means that limited reports sufficient to guarantee stability for one family are not sufficient for another. To investigate the effect of different user preferences on the design of mechanisms that yield stable coalitions, we focus on two classes of user preferences, topological and hedonic. Topological preferences are characterized by service features that can be decomposed into metrizable topological spaces. In ridesharing, for example, user preferences depend on, inter alia, the day and time they require the service, and the location of pick-up and drop-off points. This suggests that we can treat the set of desirable journeys as being defined over R^2 for the locations, or in the real projective space RP^1 for times, leading to metric spaces and hence topological preferences. Hedonic preferences, on the other hand, capture features of services that do not lie in a metrizable topological space. In particular, hedonic preferences depend on the characteristics of the other users that share the service.
A motivating example is the allocation of employees to shared offices, in which each employee's preferences are completely dictated by the other employees they share the office with, e.g. depending on the volume of music they listen to, or the amount of time they are in the office. While stability of coalitions in games defined by hedonic preferences (hedonic games) has been studied in [Aziz and Brandl, 2012; Bloch and Diamantoudi, 2011; Bogomolnaia and Jackson, 2002], this is under the assumption that agents completely report their preferences. The key contributions of this paper are: 1. The introduction of the PGP, which guarantees that users report truthfully and forms the basis for our mechanisms. 2. The design of a mechanism for stable coalition formation with topological preferences. For this case, we show that the stable coalition formation problem reduces to the design of the messages each user can report. We show that this design problem is equivalent to a (e.g., sphere) packing problem (see [Conway and Sloane, 1988] for a more detailed introduction), and give explicit solutions in the case of R^n and RP^(n-1). 3. The design of a mechanism for stable coalition formation with hedonic preferences. In this case, the crucial

design problem is how to allocate each user to a coalition such that the user is willing to share the service with the existing users in the coalition. Under general hedonic preferences we show that this can be achieved by an insertion-based approach, in which agents are added to a coalition sequentially. We also show that for a restricted class of hedonic preferences, i.e. single-subset-peaked preferences, more general allocation algorithms are possible that do not necessitate insertion-based mechanisms. The remainder of the paper is structured as follows. In Section 2, we introduce the Posted Goods Protocol that defines how users and the mechanism interact in our setting. Section 3 introduces the formal material necessary for describing our approach. In Sections 4 and 5, we design mechanisms that yield stable coalitions for topological and hedonic preferences, respectively. Section 6 concludes the paper.
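To give some intuition for the message-design problem of contribution 2, the following sketch constructs a trivial grid-based message set on [0,1]^2: reporting the nearest grid point bounds how far a reported type can lie from the true one. This toy construction is ours; the packing-based designs of Section 4 are more refined:

```python
import itertools
import math

def grid_messages(dim, spacing):
    """Hypothetical message-set design on [0,1]^dim: a finite grid of
    reportable points. Reporting the nearest grid point means the
    mechanism knows each user's type up to half the grid diagonal."""
    points = [i * spacing for i in range(int(1 / spacing) + 1)]
    return list(itertools.product(points, repeat=dim))

def nearest(msgs, t):
    """The limited report a user with true type t would send."""
    return min(msgs, key=lambda m: math.dist(m, t))

msgs = grid_messages(2, 0.25)       # 25 reportable messages instead of
t = (0.4, 0.7)                      # a full preference ordering
m = nearest(msgs, t)
# Worst-case report error is half the grid diagonal: spacing*sqrt(dim)/2.
assert math.dist(m, t) <= 0.25 * math.sqrt(2) / 2 + 1e-9
```

The finite message set stands in for the limited reports of Phase 1; the density of the grid (i.e. the packing) controls the trade-off between report size and the precision available to the mechanism.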

2 Posted Goods Protocol

Consider a system for sharing services, a set of users I = {1, 2, . . . , I}, and a centralized allocation mechanism that allocates users to coalitions based on their requests. As we will show below, due to limited reporting, a mechanism cannot compute an allocation that is guaranteed to be stable without allowing the users to communicate with it. For this reason we introduce a signaling protocol, called the Posted Goods Protocol (PGP), that allows the mechanism to receive acceptance or rejection signals from the agents about possible allocations that have been proposed to them. More specifically, the PGP prescribes the following five interaction phases between each user i and the mechanism:

Phase 1: User i sends a report, which contains information about her preferences. In general, this step will only allow limited reporting, i.e. the number of available reports will be much smaller than the number of possible preference orderings the agents may hold.

Phase 2: The mechanism computes an allocation, which is constrained by user i's report and the reports of any other users being allocated simultaneously.

Phase 3: The mechanism sends offers to user i. Depending on the setting, it may be sufficient for the mechanism to compute just one allocation, or it may be necessary to present multiple offers to the users.

Phase 4: User i responds by informing the mechanism whether they accept any of the offers. If they accept, the mechanism assigns them to the accepted offer. Once a user has accepted an offer, they cannot be re-allocated.

Phase 5: After the service has been provided, user i informs the mechanism whether they were satisfied with it. This phase ultimately determines whether or not the coalitions are stable.

Given this protocol, we can define a mechanism designed on this basis as a pair (M, f), where M is the set of messages that agents can report in Phase 1 and f is the allocation function, which maps user reports to coalition structures.
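The five phases above can be sketched as a single round of interaction. This is an illustrative Python rendering of ours, assuming dictionary-based user records and callable `allocate` and `accepts` functions; none of these names are prescribed by the protocol definition:

```python
def run_pgp(users, allocate, accepts):
    """One round of the Posted Goods Protocol (illustrative sketch).

    users    : list of user records, each carrying a limited "report"
    allocate : maps the list of Phase-1 reports to per-user offer lists
    accepts  : models a user's private Phase-4 accept/reject decision
    """
    reports = [u["report"] for u in users]               # Phase 1: limited reports
    offer_lists = allocate(reports)                      # Phase 2: compute allocation
    assignment = {}
    for i, (u, offers) in enumerate(zip(users, offer_lists)):
        for offer in offers:                             # Phase 3: post offers
            if accepts(u, offer):                        # Phase 4: accept/reject
                assignment[i] = offer                    # binding once accepted
                break
    return assignment                                    # Phase 5: feedback follows later
```

A toy instantiation: give every user the offers [1, 2] and let each accept the first offer meeting a private threshold. The mechanism only ever sees the coarse report and the accept/reject signals, never the threshold itself, which is exactly the limited-reporting setting.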


In reality, the PGP represents a family of concrete mechanisms. Any specific instantiation of it can vary, for example, in terms of whether all agents perform Phase 1 simultaneously before the protocol enters Phase 2, or whether their input is processed sequentially. In particular, in Phase 2, we also allow the mechanism to communicate with agents that have already been allocated to coalitions before computing an allocation. We shall see below why this may be necessary under certain circumstances to define mechanisms that ensure stability. Finally, we shall also examine cases where the mechanism has access to side information about agents’ preferences, e.g. based on historical data from previous allocations.

3 Formal Model

In the PGP, user preferences are not necessarily completely reported to the mechanism. This may affect what each user decides to report, and it is therefore important to verify that users have incentives to report truthfully. To this end, we now develop a formal model for user preferences and show that the PGP satisfies the truthfulness requirement in this setting. Consider the set S which contains every possible service that a user might require. For instance, if a user only cares about pick-up location, S can be modeled as R^2. More generally, S will typically have a more complicated structure. Each user i has a preference ordering ≽_i over services in S (which can be interpreted as the user's type) and seeks a service in the set of undominated services, which is defined as

P_i = {s ∈ S | s ≽_i s′, ∀s′ ∈ S}.   (1)

Now, we assume that users cannot completely report their preferences in Phase 1 of the PGP. Thus, the set of services S can be decomposed into a set Ψ_i corresponding to the aspects of the service that can be reported to the mechanism, and Φ_i corresponding to the aspects that cannot. More precisely, we assume that S = Ψ_i × Φ_i, which means that the set of undominated services can be written as

P_i = {(ψ, φ) ∈ Ψ_i × Φ_i | (ψ, φ) ≽_i (ψ′, φ′), ∀ψ′ ∈ Ψ_i, ∀φ′ ∈ Φ_i}.   (2)

The reports ψ ∈ Ψ_i of user i impose constraints on the allocation computed by the mechanism. Indeed, if a user i reports ψ, then the mechanism will only allocate services (ψ, φ) for some φ ∈ Φ_i. This property of our mechanisms is known to the users, and therefore we can guarantee that they are truthful. To see why this is the case, observe that if a user reports ψ such that for all φ ∈ Φ_i the pair (ψ, φ) does not lie in P_i, then the user is guaranteed to obtain a service she does not want. This implies that the limited report in Phase 1 is truthful. A similar argument guarantees that the accept-or-reject report in Phase 4 is also truthful: if a user i reports untruthfully, she will either obtain a service that does not lie in P_i, or may miss out on the opportunity to receive a service in P_i. The remainder of this paper is concerned with the design of mechanisms with limited reports ψ for two specific families of preferences:


• Topological preferences, where Ψ_i and Φ_i are assumed to be topological spaces [Munkres, 2000] and Ψ_i is metrizable.

• Hedonic preferences, where Φ_i is the set {N ⊆ I | i ∈ N}. Here the set Ψ_i captures other aspects of the service that can be completely reported to the mechanism, as opposed to the users with whom one prefers to form coalitions. This situation often arises when the set of users that might request a service is not initially known to the mechanism.

In order to ensure that coalitions are stable, i.e. users are offered services (ψ, φ) ∈ P_i, we design our mechanisms by exploiting the three degrees of freedom that are available:

1. the message sets; i.e., from which set of reports ψ ∈ Ψ_i can each user i choose what to send to the mechanism?

2. the allocation; i.e., what are the rules that constrain how users can be inserted into coalitions?

3. side information; i.e., what does the mechanism know about the structure of the user preferences, beyond the basic structure of S?

4 Topological Preferences

In this section, we design mechanisms that yield stable coalitions for users with topological preferences. We show that, in this case, the design problem is primarily one of choosing an appropriate message set M that users can report from in Phase 1 of the PGP. To begin, we formally define topological preference relations.

Definition 1. User i has a topological preference relation ≽_i over the space of services Ψ_i × Φ_i if Ψ_i and Φ_i are topological spaces with topologies T_{Ψ_i} and T_{Φ_i}, and Ψ_i is metrizable.

Given this type of preference, we consider the following notion of stability.

Definition 2. A family of disjoint coalitions {C_1, C_2, . . .} with ∪_i C_i = I is Nash stable with respect to the topological preference relations ≽_i if (ψ_j, φ_j) ∈ P_j for each j ∈ I.

In order for the mechanism to form stable coalitions, we assume that it has access to side information about the users' sets of undominated services. Let

Ψ_i^P = {ψ ∈ Ψ_i | (ψ, φ) ≽_i (ψ′, φ), ∀ψ′ ∈ Ψ_i},   (3)

which for a given unreported φ ∈ Φ_i is the set of most preferable reports. The side information available to the mechanism is the value of an upper bound α ∈ R_+ with

sup_{t,t′ ∈ Ψ_i^P} d_{Ψ_i}(t, t′) ≤ α,   (4)

where d_{Ψ_i} is the metric on Ψ_i. The side information (4) informs the mechanism of the size of the set of candidate reports in Phase 1 of the PGP, for any given unreported φ ∈ Φ_i. This is not a strong assumption, as the bound can in principle be learned from surveys or from previous successful allocations. We also assume that the mechanism has knowledge of local sets of undominated services, which are defined as follows.

Definition 3. A local set of undominated services G_{ψ̂_i} is the subset of P_i given by {(ψ, φ) ∈ P_i | ψ = ψ̂_i}, for a point ψ̂_i ∈ Ψ_i^P.



Intuitively, the local set of undominated services provides information about the set of acceptable services, given that ψ̂_i is reported. Essentially, this knowledge means that we know the range of similar "variations" of undominated solutions between which the users are indifferent. Note that this is a much weaker condition than assuming knowledge of the full set of undominated services, as knowledge of the local sets is only assumed for the reports ψ̂_i, whose number we assume to be relatively small.

4.1 The Mechanism

The mechanism design problem at hand is to exploit the mechanism's side information to ensure that user coalitions are stable. We only consider mechanisms where a single service is offered to each user in Phase 3 of the PGP. This is due to the following observation: if the mechanism could be sure whether or not a user will accept an offer, then it could allocate multiple users at once. However, if users are simultaneously presented with multiple offers that satisfy the constraints imposed by their limited reports, it is not possible to guarantee which offer each user will accept. Under our side information assumptions, the problem is to ensure that each user reports a ψ̂_i for which the corresponding local set of undominated services G_{ψ̂_i} is known. We show that, as a consequence, the mechanism design problem reduces to designing the message set M from which users can report in Phase 1 of the PGP.

The first step in designing a mechanism that forms stable coalitions is to ensure that user reports are consistent. This notion is defined as follows.

Definition 4. User i's report is said to be consistent if she sends the same report ψ̂_i irrespective of her beliefs and higher-order beliefs about the preferences of the other users and about the allocation used by the mechanism.

Since each user generally has more than one acceptable service in her set of undominated services for a fixed unreported φ ∈ Φ_i, she may report differently depending on her beliefs and higher-order beliefs. For example, she might be inclined to counter-speculate depending on the reports she believes other users might send. Ensuring that the user's report is consistent avoids this problem. It also means that, to ensure each user is satisfied with her offer, the mechanism only needs to know the local set of undominated services for each consistent report ψ̂_i.
We now show that by exploiting the side information assumption in (4), it is possible to guarantee that users report consistently. This is proven in the following lemma.

Lemma 1. Suppose that sup_{t,t′ ∈ Ψ_i^P} d_{Ψ_i}(t, t′) ≤ α. If for all pairs of report options ψ̂_i, ψ̂_i′ ∈ M the condition d_{Ψ_i}(ψ̂_i, ψ̂_i′) > α holds, then user i will report consistently.

Proof. In order to guarantee that reporting is consistent, we need to ensure that at most one report ψ̂_i ∈ M lies in every possible Ψ_i^P satisfying sup_{t,t′ ∈ Ψ_i^P} d_{Ψ_i}(t, t′) ≤ α. This is immediately guaranteed by the assumption: any two reports in M are more than α apart, whereas any two points of Ψ_i^P are at most α apart, so Ψ_i^P can contain at most one report, and reporting is in fact consistent.


It is now possible to prove the following theorem, which shows that with the side information assumption in (4) and knowledge of a small number of local sets of undominated services (corresponding to the possible reports in M), it is possible to guarantee that the coalitions formed by the mechanism are stable.

Theorem 1. Suppose that sup_{t,t′ ∈ Ψ_i^P} d_{Ψ_i}(t, t′) ≤ α and the mechanism has knowledge of each local set of undominated services G_{ψ̂_i} for all possible reports ψ̂_i ∈ M. If, for all pairs of report options ψ̂_i, ψ̂_i′ ∈ M, the condition d_{Ψ_i}(ψ̂_i, ψ̂_i′) > α holds, then the allocation is guaranteed to be Nash stable with respect to the topological preference relations ≽_i.

Proof. By Lemma 1, consistent reporting is guaranteed. Together with the knowledge of the local sets of undominated services, the mechanism can completely characterize (ψ̂_i, φ) ∈ P_i for every consistent report ψ̂_i ∈ M. Therefore, each user is guaranteed to accept her offer, and the resulting coalitions are Nash stable with respect to the topological preference relations ≽_i.

Theorem 1 shows that when the message set M is designed appropriately, i.e. such that reports are consistent, it is possible to guarantee stability of the resulting coalitions. An interesting consequence is that it is in fact better, from the perspective of stability, to allow users to provide less information, but to make sure that the reports they can send are sufficiently different from each other in terms of the preference orderings they imply. This complements the result presented in [Dütting et al., 2011], where it was shown that restricting the possible bids in combinatorial auctions can remove undesirable equilibria.

4.2 Designing Message Sets

We have shown that the mechanism design problem for topological preferences is equivalent to the problem of constructing message sets, one for each agent, such that users report consistently. We now show how to construct these message sets in two specific, but common, scenarios: Ψ_i = R^n, n ∈ N, the n-dimensional real vector space; and Ψ_i = RP^n, n ∈ N, the n-dimensional real projective space. The first scenario can arise when users report distances or locations, and the second can arise when users report times (since RP^1 is homeomorphic to the circle).

For Ψ_i = R^n, the metric d_{Ψ_i} is the Euclidean distance d_{Ψ_i}(ψ_i, ψ_i′) = ||ψ_i − ψ_i′||_2. In order to guarantee consistent reports, we need to ensure that d_{Ψ_i}(ψ̂_i, ψ̂_i′) > α for all messages ψ̂_i, ψ̂_i′ ∈ M. This can be achieved in a straightforward way by choosing M to lie on a scaled integer lattice. If n = 2, this lattice is αZ × αZ.

If Ψ_i = RP^n, we consider the metric d_{Ψ_i} to be the chordal distance [Conway et al., 1996]; i.e., d_{Ψ_i}(ψ̂_i, ψ̂_i′) = √(1 − |ψ̂_i^T ψ̂_i′|²), where ψ̂_i, ψ̂_i′ ∈ R^{n+1} with ||ψ̂_i|| = ||ψ̂_i′|| = 1 and (·)^T denoting vector transpose. In order to guarantee consistent reports satisfying the condition d_{Ψ_i}(ψ̂_i, ψ̂_i′) > α, we construct the message set M using real equiangular tight frames (ETFs) [Sustik et al., 2007; Mondal et al., 2007],


where each pair of messages ψ̂_i, ψ̂_i′ satisfies

|ψ̂_i^T ψ̂_i′|² = (|M| − n) / (n(|M| − 1)).   (5)

This means that to ensure d_{Ψ_i}(ψ̂_i, ψ̂_i′) > α, the size |M| of the message set must be chosen appropriately (i.e. not too large). Algebraic constructions of real ETFs have been derived in [Sustik et al., 2007], and algorithms to construct them are given in [Tropp et al., 2005]. We note that other topological spaces and associated metrics may arise in practice. This does not mean our approach is inapplicable, but alternative constructions of the message sets will be required. These constructions can be obtained in many cases by exploiting known packings in the relevant spaces; see, e.g., [Conway and Sloane, 1988].
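The trade-off in (5) between message set size and separation can be explored numerically. The sketch below is illustrative only: it assumes a real ETF of the given size exists (which is only true for certain pairs (|M|, n)), and the function names are invented here.

```python
import math

def etf_chordal_distance(m, n):
    """Chordal distance between any two unit vectors of a real equiangular
    tight frame with m = |M| vectors in R^n, if such an ETF exists.
    Uses the squared inner product from Eq. (5)."""
    coherence_sq = (m - n) / (n * (m - 1))
    return math.sqrt(1.0 - coherence_sq)

def largest_message_set(n, alpha, cap=10_000):
    """Largest |M| <= cap keeping all pairwise chordal distances strictly
    above alpha; returns None if even |M| = n + 1 is too close."""
    if etf_chordal_distance(n + 1, n) <= alpha:
        return None
    m = n + 1
    while m < cap and etf_chordal_distance(m + 1, n) > alpha:
        m += 1
    return m

# With n = 2 and alpha = 0.8, up to 4 messages keep every pair of
# reports more than alpha apart.
print(largest_message_set(2, 0.8))  # → 4
```

Note that the distance in (5) decreases as |M| grows, which is why the message set "must not be too large" for a given α.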

5 Hedonic Preferences

In the case of hedonic preferences, the preference relation is defined as follows.

Definition 5. Define N_i = {N ⊆ I | i ∈ N}. User i's preference relation ≽_i is hedonic if it is a binary reflexive relation over the sets N_i ∪ {∂}, where ∂ corresponds to the scenario in which the user prefers to be allocated to no coalition (i.e. not to share the service and to perform the task alone). If user i is indifferent between two sets N, N′ ∈ N_i ∪ {∂}, we write N ∼_i N′.

The set of undominated services for user i consists of all the coalitions that the user prefers to not being allocated at all. Formally:

P_i = {N ∈ N_i | N ≽_i {∂}}.   (6)

Definition 5 for hedonic preferences is motivated by the theory of hedonic games [Aziz and Brandl, 2012]; however, we depart from this theory by introducing the possibility that a user is not allocated to any coalition at all. It is necessary to account for this situation in our model because the PGP allows users to reject their offers, i.e. to accept no coalition and "go it alone" by finding an alternative means of obtaining the service (e.g., from a different set of providers). In fact, allowing users not to be allocated to any coalition ("not to play the game") means that a new game formulation is also required. For this reason we introduce hedonic games on all restrictions (HGRs), which are defined as follows.

Definition 6. A hedonic game on all restrictions (HGR) is a pair ⟨2^I ∪ {∂}, ≽⟩, where I = {1, 2, . . . , I} is a set of users and ≽ is a profile of hedonic preferences. A partition on restriction k, corresponding to the unique set S_k ⊆ 2^I with k = 1, 2, . . . , |2^I|, is denoted by π^(k) and consists of disjoint coalitions C_1^(k), C_2^(k), . . . of users in S_k. Each coalition C_i^(k) in π^(k) satisfies C_i^(k) ∩ C_j^(k) = ∅, for all j ≠ i and for all k.

Unlike standard hedonic games, there are two classes of stability concepts in HGRs: ex post and ex interim. The ex post stability concept is used when all users have decided whether to accept or reject offers (in Phase 4 of the PGP) and full information about the service is available to users. The ex post stability concept is analogous to the stability concept in standard hedonic games and is defined as follows.


Definition 7. Consider a family of disjoint coalitions {C_1, C_2, . . .} that partition a set S ⊂ I. The coalitions are individually stable (IS) if for each user i in coalition C_k and each j ≠ k, either C_k ≽_i C_j ∪ {i}, or there exists at least one user l in coalition C_j such that C_j ∪ {i} ≺_l C_j. The coalitions are said to be Nash stable (NS) if for each user i in coalition C_k, C_k ≽_i C_j for all j ≠ k.

On the other hand, the ex interim stability concept is required when users have not yet accepted or rejected an offer. This means that for a coalition to be ex interim stable, it must be stable irrespective of which users ultimately accept. This concept is formally defined as follows.

Definition 8. Consider a family of partitions Π = {π^(k)}_{k=1,2,...,2^|I|} for the HGR ⟨2^I ∪ {∂}, ≽⟩. Π is iteratively individually stable (I-IS) if for all users i ∈ I, for all B_k ⊆ C_k such that i ∈ B_k, and for all B_j ⊆ C_j with j ≠ k, we have B_k ≽_i {∂} and at least one of the following holds:

(i) B_k ≽_i B_j.

(ii) B_j ≻_l B_j ∪ {i} for all l ∈ B_j.

Π is iteratively Nash stable (I-NS) if for all users i ∈ I, for all B_k ⊆ C_k such that i ∈ B_k, and for all B_j ⊆ C_j with j ≠ k, we have B_k ≽_i {∂} and B_k ≽_i B_j.

We note that a game similar to the HGR was considered in [Bloch and Diamantoudi, 2011]; however, their stability concept was based on the core rather than I-IS and I-NS. We focus on stability concepts based on a single user defecting because in our setting all coalition formation is mediated by a centralized mechanism, which does not support users collaborating to defect as a group. We also note that if a coalition is iteratively Nash (or individually) stable, then it will also be Nash (or individually) stable.

5.1 The Mechanism

We now turn to the problem of designing mechanisms that guarantee stable coalitions. In the case of hedonic preferences, this involves a careful choice of the allocation function f. We first show that simultaneously allocating more than one user to a coalition using the PGP is not guaranteed to yield stable coalitions under general hedonic preferences.

Theorem 2. Consider the HGR ⟨2^I ∪ {∂}, ≽⟩. Suppose that there exist IS coalitions {C_1, C_2, . . .} and that the users in L, with |L| > 1, are allocated and sent offers via the PGP, where ∪_i C_i ∪ L = J ⊆ I. Then, the family of partitions Π is not guaranteed to be I-IS.

Proof. We prove the result with a counterexample, in which each user i accepts an offer based on preferences over the set J (and not over all restrictions). Without loss of generality, let each user i ∈ L be offered a service in a single coalition C. Consider the scenario in which each user i ∈ L accepts the offer if C ∪ L ≽_i {∂}, and that only the users in L′ ⊂ L accept. As there exist hedonic preferences such that C ∪ L′ ≺_i {∂} for some i ∈ L′, it follows that the family of partitions Π is not guaranteed to be I-IS.
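The counterexample behind Theorem 2 can be made concrete with a small sketch. The preferences below are invented purely for illustration and are not from the paper: users 1 and 2 are simultaneously offered an existing coalition C = {0}, but user 1 only prefers joining to going it alone if user 2 joins as well.

```python
def prefers_to_alone(user, coalition):
    """Made-up hedonic preferences: True means the user prefers
    `coalition` to not being allocated at all (the outcome ∂)."""
    if user == 1:
        return 2 in coalition   # user 1 needs user 2 on board
    return True                 # user 2 always prefers joining

C = frozenset({0})
offered = {1, 2}

# Both users accept, judging the *offered* coalition C ∪ {1, 2} ...
assert all(prefers_to_alone(u, C | offered) for u in offered)

# ... but if only user 1's acceptance goes through (user 2 rejects,
# which the mechanism cannot rule out), the coalition that actually
# forms violates user 1's preferences, so the partition is not I-IS.
formed = C | {1}
assert not prefers_to_alone(1, formed)
```

The instability arises precisely because each user's acceptance depends on other simultaneous acceptances the mechanism cannot observe in advance.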


While it is not possible to provide stability guarantees for general hedonic preferences, specific classes of hedonic preferences do yield stable coalitions. In particular, we demonstrate that this is the case for single-subset-peaked preferences, defined as follows.

Definition 9. A hedonic preference relation ≽_i is single-subset-peaked if for all N, N′ ∈ 2^I ∪ {∂}, N ≽_i N′ implies that there exist unique sets N_p ⊆ N and N_p′ ⊆ N′ such that N_p ≽_i N_p′, where N_s ≼_i N_p for all N_s ⊆ N and N_s′ ≼_i N_p′ for all N_s′ ⊆ N′.

We then have the following result.

Theorem 3. Consider the HGR ⟨2^I ∪ {∂}, ≽⟩, where ≽_j is single-subset-peaked for all j ∈ I. Suppose that there exist IS coalitions {C_1, C_2, . . .} and that the users in L, with |L| > 1, are allocated and sent offers of coalitions to join via the PGP, where ∪_i C_i ∪ L = J ⊆ I. Also, suppose that the mechanism can consult with the users in existing coalitions C_i as to whether it is acceptable to insert the users in L. Then, the family of partitions Π is guaranteed to be I-IS.

Proof. Without loss of generality, let each user i ∈ L be offered a service in a single coalition C. Note that C ∪ L′ ∼_j C for all L′ ⊆ L and all j ∈ C, by the consultation hypothesis. Since each ≽_i is single-subset-peaked, it follows that for each i ∈ L there exists a set L_p ⊆ L such that C ∪ L′ ≽_i {∂} if and only if C ∪ L_p ≽_i {∂}. As such, if user i accepts, she will have no incentive not to commit to C ∪ L′, for all L′ ⊆ L. This implies that the family of partitions Π of I is I-IS.

Although simultaneously allocating users to a coalition is ruled out for general hedonic preferences by Theorem 2, inserting one user at a time is not. Therefore, we consider the following insertion mechanism.

1. User i sends a message requesting a service to the mechanism.

2. The mechanism computes K allocations {C_k}_{k=1}^K that satisfy both user i's request and, via consultation, the users already allocated (already-allocated users only have knowledge of their own coalition). The only new user in each of the allocations {C_k}_{k=1}^K is user i.

3. The mechanism informs user i of the potential allocations {C_k}_{k=1}^K.

4. User i accepts at most one of the service allocations in {C_k}_{k=1}^K.

5. If user i accepted a service, she informs the mechanism whether she is satisfied with it.
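One round of the insertion mechanism can be sketched as follows. This is an illustrative sketch, not the paper's specification: the predicates `user_accepts` (modeling C ∪ {i} ≽_i {∂}) and `member_accepts` (modeling the consultation step) are invented names for the preference queries.

```python
def insert_user(coalitions, user, user_accepts, member_accepts, k=3):
    """Insert one new user into a list of coalitions (sets of user ids),
    offering at most k candidate coalitions whose members all consent."""
    # Steps 2-3: compute and offer candidate coalitions; consultation
    # means every current member must accept the insertion.
    offers = [c for c in coalitions
              if all(member_accepts(m, c, user) for m in c)][:k]
    # Step 4: the user joins at most one offered coalition.
    for c in offers:
        if user_accepts(c | {user}):
            c.add(user)
            return coalitions
    # Rejecting every offer corresponds to "going it alone" (∂).
    coalitions.append({user})
    return coalitions
```

Repeating `insert_user` for each arriving user mirrors the greedy, one-at-a-time allocation whose individual stability is established by Theorem 4 below.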

We now show that the insertion mechanism forms stable coalitions.

Theorem 4. Suppose that a family of disjoint coalitions {C_1, C_2, . . .} is formed via the insertion mechanism. Then, the coalitions are IS.


Proof. In the insertion mechanism, users are inserted into coalitions one at a time. As such, a user i will choose to join a coalition C only if C ∪ {i} ≽_i {∂}. Note that the users in the existing coalition C are consulted, which means that if user i is offered a service in C ∪ {i}, then C ∪ {i} ≽_j C for all j ∈ C. It then follows that after every insertion all coalitions are IS (already inserted users only have knowledge concerning their own coalition), which implies that the final family of coalitions {C_1, C_2, . . .} is also IS.

Our results for hedonic preferences show that simultaneous allocation of users may lead to unstable coalitions unless strong assumptions are made regarding the structure of the preferences. This motivates the greedy insertion mechanism we have proposed. Note that our greedy insertion approach is consistent with the interaction mechanisms provided by real-world ridesharing, accommodation sharing and group purchase services, which suggests that our game-theoretic analysis may in fact be a good reflection of the models implicitly used by these platforms. A possible implication is that more efficient allocation strategies could be applied on some of these platforms whenever the conditions that allow simultaneous allocation of users (e.g. single-subset-peaked preferences) hold.

6 Conclusion

Limited information reporting is an important, though not extensively studied, feature of sharing economy applications, which all involve solving team allocation problems with complex preference profiles. Since global allocation mechanisms usually lack detailed information regarding users' preferences in these settings, new coalition formation mechanisms are required. For two families of preferences, topological and hedonic, we proposed mechanisms that yield stable coalitions. To achieve this, we proposed a signaling protocol, the Posted Goods Protocol (PGP), that accounts for the limited reporting capabilities of users. Even though both mechanisms we propose share a common foundation, they differ significantly depending on the type of user preferences. This suggests that it is unlikely that a single mechanism can be broadly used for arbitrary service sharing applications. In future work, we plan to investigate heuristic or approximate methods for reducing the complexity of coalition formation in these settings, and to explore how much information about users' preference profiles has to be gathered in practice in order to be able to make acceptable proposals. Moreover, our use of the PGP, and the inspiration for the insertion mechanisms, were motivated in part by the fact that similar protocols are commonly used on real-world web sites. The results presented in this paper provide some indication that there may be a close connection between the practical design of these applications and the game-theoretic properties of the mechanisms they implement, which is a question we would like to pursue further.


References

[Aziz and Brandl, 2012] H. Aziz and F. Brandl. Existence of stability in hedonic coalition formation games. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2012.

[Bistaffa et al., 2015] F. Bistaffa, A. Farinelli, and S.D. Ramchurn. Sharing rides with friends: a coalition formation algorithm for ridesharing. In Proceedings of the Conference on Artificial Intelligence (AAAI), 2015.

[Bloch and Diamantoudi, 2011] F. Bloch and E. Diamantoudi. Noncooperative formation of coalitions in hedonic games. International Journal of Game Theory, 40:263–280, 2011.

[Bogomolnaia and Jackson, 2002] A. Bogomolnaia and M.O. Jackson. The stability of hedonic coalition structures. Games and Economic Behavior, 38:201–230, 2002.

[Chawla et al., 2010] S. Chawla, J.D. Hartline, D. Malec, and B. Sivan. Multi-parameter mechanism design and sequential posted pricing. In Proceedings of the 42nd ACM Symposium on Theory of Computing (STOC), 2010.

[Clarke, 1971] E.H. Clarke. Multipart pricing of public goods. Public Choice, 11(1):17–33, 1971.

[Conway and Sloane, 1988] J.H. Conway and N.J.A. Sloane. Sphere Packings, Lattices and Groups. Springer, New York, 1988.

[Conway et al., 1996] J.H. Conway, R.H. Hardin, and N.J.A. Sloane. Packing lines, planes, etc.: packings in Grassmannian spaces. Experimental Mathematics, 5(2):139–159, 1996.

[Dütting et al., 2011] P. Dütting, F. Fischer, and D.C. Parkes. Simplicity-expressiveness tradeoffs in mechanism design. In Proceedings of the ACM Conference on Electronic Commerce, 2011.

[Groves, 1973] T. Groves. Incentives in teams. Econometrica, pages 617–631, 1973.

[Hamari et al., 2015] J. Hamari, M. Sjöklint, and A. Ukkonen. The sharing economy: Why people participate in collaborative consumption. Journal of the Association for Information Science and Technology, 2015. Forthcoming.

[Kamar and Horvitz, 2009] E. Kamar and E. Horvitz. Collaboration and shared plans in the open world: studies of ridesharing. In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI), 2009.

[Kleiner et al., 2011] A. Kleiner, B. Nebel, and V. Ziparo. A mechanism for dynamic ride sharing based on parallel auctions. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI), 2011.

[Mondal et al., 2007] B. Mondal, R. Samanta, and R.W. Heath Jr. Congruent Voronoi tessellations from equiangular lines. Applied and Computational Harmonic Analysis, 23(2):254–258, 2007.

[Munkres, 2000] J.R. Munkres. Topology. Prentice Hall, 2000.


[Sustik et al., 2007] M.A. Sustik, J.A. Tropp, I.S. Dhillon, and R.W. Heath Jr. On the existence of equiangular tight frames. Linear Algebra and its Applications, 426(2-3):619–635, 2007.

[Tropp et al., 2005] J.A. Tropp, I.S. Dhillon, R.W. Heath Jr., and T. Strohmer. Designing structured tight frames via an alternating projection method. IEEE Transactions on Information Theory, 51(1):188–209, 2005.

[Vickrey, 1961] W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. The Journal of Finance, 16(1):8–37, 1961.

[Zhao et al., 2014] D. Zhao, D. Zhang, E.H. Gerding, Y. Sakurai, and M. Yokoo. Incentives in ridesharing with deficit control. In Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2014.


