Programme Committee

  • Thomas Anthony
  • Mohammad Azar
  • David Balduzzi
  • Yoram Bachrach
  • Diana Borsa
  • Branislav Bosansky
  • Nolan Bard
  • Noam Brown
  • Michael Buro
  • Tristan Cazenave
  • Will Dabney
  • Elnaz Davoodi
  • Jakob Foerster
  • Chao Gao
  • Matthieu Geist
  • Ian Gemp
  • Audrunas Gruslys
  • Arthur Guez
  • Daniel Guo
  • Daniel Hennes
  • Pablo Hernandez-Leal
  • Thomas Hubert
  • Rudolf Kadlec
  • Bilal Kartal
  • Guy Lever
  • Viliam Lisy
  • Siqi Liu
  • Edward Lockhart
  • Matej Moravcik
  • Dustin Morrill
  • Martin Mueller
  • Shayegan Omidshafiei
  • Laurent Orseau
  • Andrew Patterson
  • Olivier Pietquin
  • Georgios Piliouras
  • Bilal Piot
  • Mark Rowland
  • Julian Schrittwieser
  • Samuel Sokota
  • Finbarr Timbers
  • Julian Togelius
  • Karl Tuyls
  • James Wright


Organizers

    The organizing committee brings together expertise in games, multiagent reinforcement learning and planning, computational game theory, and machine learning.

    Julien Pérolat is a research scientist at DeepMind and has worked on reinforcement learning in Markov games. He obtained his PhD in 2017 from the University of Lille. He co-organized a Multiagent (Deep) Learning tutorial at AAMAS 2018.

    Martin Schmid is a research scientist at DeepMind. He is a co-author of DeepStack, the first expert-level no-limit poker AI, which brought ideas of local search and value functions from reinforcement learning to imperfect-information games. Before joining DeepMind, he worked as a research scientist at IBM Watson.

    Marc Lanctot is a research scientist at DeepMind focused on general multiagent reinforcement learning. He obtained his PhD in 2013 from the University of Alberta, where he worked on sampling methods for regret minimization in games. Before joining DeepMind, he was a post-doctoral fellow at Maastricht University working on search in games.