
Research seminar on September 27: Fedor Sandomirskiy

Date & time: September 27 (Thursday), 14:30-16:00. Place: 3A Kantemirovskaya st., room 171
Topic: Fairness-efficiency tradeoff in online fair division
Speaker: Fedor Sandomirskiy (HSE - St. Petersburg)

The International Laboratory of Game Theory and Decision Making invites you to participate in a research seminar by Fedor Sandomirskiy (Research Fellow of the Game Theory Lab) this Thursday.

Title: Fairness-efficiency tradeoff in online fair division (joint work with Herve Moulin and Anna Bogomolnaia)

Abstract: Consider a dynamic allocation problem, where objects arrive sequentially and must be allocated on the spot to a group of agents (e.g., passengers to Uber drivers or other profitable jobs within a firm, food in a food bank, or computational resources in a cloud). Such problems are usually called "online". In online problems, allocation decisions are made under very limited, usually statistical, information about future objects. This makes the designer's main objectives --- fairness "on average" and efficiency --- hard to achieve.
We investigate the tradeoff between fairness on average and welfare maximization in a simple Bayesian model, where there is only one object to allocate, but it has random characteristics. We consider two classes of rules: prior-free (PF) rules, which have no information about the environment, and prior-dependent (PD) rules, which are much better informed. As a fairness concept, we use the ex-ante Equal-Split lower bound, which says that on average every agent must prefer his allocation to a uniformly random allocation.
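As a toy illustration (not taken from the talk), the ex-ante Equal-Split lower bound can be checked by Monte Carlo in a stylized version of this model. The assumptions here are mine: two risk-neutral agents, one object, values drawn i.i.d. uniformly on [0, 1], and a hypothetical rule that simply gives the object to the highest-value agent. The benchmark is a uniformly random allocation, which gives each agent an expected utility of E[v_i]/n = 0.25.

```python
import random

def expected_utilities(rule, n_agents=2, n_samples=100_000, seed=0):
    """Monte Carlo estimate of each agent's expected utility under `rule`.
    Values are i.i.d. uniform on [0, 1]; `rule` maps a value profile to the
    index of the agent who receives the (indivisible) object."""
    rng = random.Random(seed)
    totals = [0.0] * n_agents
    for _ in range(n_samples):
        values = [rng.random() for _ in range(n_agents)]
        winner = rule(values)
        totals[winner] += values[winner]  # only the winner derives utility
    return [t / n_samples for t in totals]

# Hypothetical utilitarian rule: give the object to the highest-value agent.
utilitarian = lambda values: max(range(len(values)), key=values.__getitem__)

utils = expected_utilities(utilitarian)
# Equal-Split benchmark: a uniformly random allocation yields E[v_i]/2 = 0.25.
# By symmetry each agent's expected utility here is E[max(v1, v2)]/2 = 1/3,
# so this particular rule happens to satisfy the ex-ante ES lower bound.
print(utils)
```

In this symmetric toy setting the welfare-maximizing rule is already ES-fair; the tradeoff the talk studies arises once agents or distributions are asymmetric.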
We characterize the fair rules achieving the highest welfare level in each class: for PF, we obtain a new rule that has not appeared in the literature; for PD, the famous Nash rule.
One might expect that imposing the fairness condition on PF rules leads to a huge welfare loss compared to PD rules. We obtain exact values for the Price of Fairness (PoF), the worst-case fraction of maximal welfare attainable under the fairness constraint: for example, for two agents the PoF is 0.91 for PF rules and 0.93 for PD rules. This means that we must sacrifice at most a 0.09 fraction of welfare to guarantee fairness using PF rules, and at most 0.07 using PD rules. Thus the benefit of knowing the exact distribution is tiny. This motivates using PF rules even in cases where the distribution is known: they are simple, do not need detailed information about valuations, and are therefore more robust.
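The arithmetic behind the comparison above can be made explicit (this is simple bookkeeping on the figures quoted in the abstract, not code from the paper): with the PoF read as the guaranteed fraction of maximal welfare, the worst-case sacrifice is one minus that fraction.

```python
# Two-agent PoF figures quoted in the abstract: the worst-case fraction of
# maximal welfare that the best fair rule in each class still guarantees.
pof = {"prior-free": 0.91, "prior-dependent": 0.93}

# Worst-case welfare sacrificed to guarantee fairness: 1 - PoF.
sacrifice = {rule: round(1 - p, 2) for rule, p in pof.items()}
print(sacrifice)  # {'prior-free': 0.09, 'prior-dependent': 0.07}
```

The gap between the two classes is only 0.02, which is the sense in which knowing the distribution buys very little.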
As a by-product, we obtain the first known exact values of the PoF for the classical offline problems of fair cake-cutting and bargaining.

Everyone interested is invited!