We are delighted to inform you that three papers from our lab have been accepted for presentation at the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21):
- Artem Baklanov, Pranav Garimidi, Vasilis Gkatzelis, and Daniel Schoepflin, Achieving Proportionality up to the Maximin Item with Indivisible Goods;
- Egor Ianovski and Aleksei Kondratev, Computing the Proportional Veto Core;
- Omer Ben Porat, Fedor Sandomirskiy, and Moshe Tennenholtz, Protecting the Protected Group: Circumventing Harmful Fairness.
A recent ranking of CS conferences places AAAI at #5 out of more than a thousand venues, based on two impact metrics: h5-index and impact score. This year the conference received 9,034 submissions; after a rigorous blind review process, fewer than 1,700 papers were accepted. Not surprisingly, AAAI has been on the CORE A* list of flagship CS conferences since 2008. Such a low acceptance rate and the conference's top ranking mean that these papers constitute a solid contribution to the modern fields of economics in which computer science methods are becoming crucial.
- Artem Baklanov, together with Pranav Garimidi, Vasilis Gkatzelis, and Daniel Schoepflin from the Drexel Economics and Computation (EconCS) research group (USA), studies the problem of fairly allocating indivisible goods. They relax the classic proportionality notion by introducing proportionality up to the maximin item (PROPm) and show how to reach a PROPm allocation for any instance involving up to five agents with additive valuations.
- Egor Ianovski and Aleksei Kondratev study computational aspects of the proportional veto principle in social choice. They give a polynomial-time algorithm for computing the proportional veto core, along with an anonymous, linear-time algorithm for selecting a candidate from it.
- Omer Ben Porat, Fedor Sandomirskiy, and Moshe Tennenholtz draw on the economic approach to design a family of fairness constraints. In particular, they model a selfish decision-maker and agents with utility functions, and design a fairness constraint under which the disadvantaged group is always better off.
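To make the PROPm notion from the first paper a bit more concrete, here is a minimal Python sketch of a checker, assuming additive valuations and one common reading of PROPm: each agent must receive at least her proportional share minus her "maximin item", the largest value (to her) of a least-valued item in some other agent's bundle. The function name and data layout are our own illustration, not from the paper:

```python
def is_propm(valuations, allocation):
    """Check whether an allocation satisfies PROPm under additive valuations.

    valuations: list of lists, valuations[i][g] = agent i's value for item g.
    allocation: list of lists, allocation[i] = indices of items given to agent i.

    Sketch only: uses the reading of PROPm as
        v_i(A_i) + d_i >= v_i(M) / n,
    where d_i = max over other agents j of min_{g in A_j} v_i(g).
    """
    n = len(valuations)
    for i, v in enumerate(valuations):
        total = sum(v)                               # v_i(M), value of all items to i
        own = sum(v[g] for g in allocation[i])       # v_i(A_i)
        # maximin item: best (for i) among the worst items in others' bundles
        others = [min(v[g] for g in allocation[j])
                  for j in range(n) if j != i and allocation[j]]
        d = max(others, default=0)
        if own + d < total / n - 1e-9:               # tolerance for float rounding
            return False
    return True
```

For example, with two agents valuing three items as `[[3, 1, 1], [1, 1, 3]]`, giving item 0 to the first agent and items 1 and 2 to the second satisfies the condition, while an allocation that leaves an agent far below her proportional share fails it.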
We congratulate the authors!