Programme Committee

  • David Balduzzi
  • Nolan Bard
  • Yngvi Bjornsson
  • Michael Bowling
  • Noam Brown
  • Michael Buro
  • Trevor Davis
  • Jakob Foerster
  • Matthieu Geist
  • Johannes Heinrich
  • Thomas Hubert
  • Rudolf Kadlec
  • Emilie Kaufmann
  • Ed Lockhart
  • Viliam Lisy
  • Michael Littman
  • Matej Moravcik
  • Martin Mueller
  • Alex Peysakhovich
  • Olivier Pietquin
  • Bilal Piot
  • Tom Schaul
  • Bruno Scherrer
  • Sriram Srinivasan
  • Gerald Tesauro
  • Finbarr Timbers
  • Julian Togelius
  • Karl Tuyls
  • Theophane Weber
  • Vinicius Zambaldi

Organizers

The organizing committee brings together expertise in games, multiagent reinforcement learning and planning, computational game theory, and machine learning.

Marc Lanctot is a research scientist at DeepMind focused on general multiagent reinforcement learning. Marc obtained his PhD in 2013 from the University of Alberta, where he worked on sampling methods for regret minimization in games. Before joining DeepMind, he was a post-doctoral fellow at Maastricht University, working on search in games.

Julien Pérolat is a research scientist at DeepMind and has worked on reinforcement learning in Markov games. He obtained his PhD in 2017 from the University of Lille. He co-organized a Multiagent (Deep) Learning tutorial at AAMAS 2018.

Martin Schmid is a research scientist at DeepMind. He is a co-author of DeepStack, the first expert-level no-limit poker AI, which brings ideas of local search and value functions from RL to imperfect-information games. Before joining DeepMind, he worked as a research scientist at IBM Watson.