Schedule

The detailed schedule will be announced later.

The invited speakers, along with the exact titles and abstracts of their talks, will be announced later.

Accepted presentations

Provably Efficient Decentralized Communication for Multi-Agent RL
Justin Lidard, Udari Madhushani and Naomi Leonard

Subgame solving without common knowledge
Brian Zhang and Tuomas Sandholm

Multi-Agent Learning for Iterative Dominance Elimination: Intrinsic Barriers and New Algorithms
Jibang Wu, Haifeng Xu and Fan Yao

Computing Strategies of American Football via Counterfactual Regret Minimization
Yuki Shimano, Kenshi Abe, Atsushi Iwasaki and Kazunori Ohkawara

Accepted posters

Making Something Out of Nothing: Monte Carlo Graph Search in Sparse Reward Environments
Marko Tot, Michelangelo Conserva, Sam Devlin and Diego Perez Liebana

Improving Sample Efficiency of Value Based Models Using Attention and Vision Transformers
Amir Ardalan Kalantari Dehaghi, Mohammad Amini, Sarath Chandar and Doina Precup

Follow your Nose: Using General Value Functions for Directed Exploration in Reinforcement Learning
Somjit Nath, Omkar Shelke, Durgesh Kalwar, Hardik Meisheri and Harshad Khadilkar

Detecting Influence Structures in Multi-Agent Reinforcement Learning Systems
Fabian Raoul Pieroth, Katherine Fitch and Lenz Belzner

Godot Reinforcement Learning Agents
Edward Beeching, Jilles Dibangoye, Olivier Simonin and Christian Wolf

Learning to Bid Long-Term: Multi-Agent Reinforcement Learning with Long-Term and Sparse Reward in Repeated Auction Games
Jing Tan, Ramin Khalili and Holger Karl

Local Information Based Attentional Opponent Modelling In Multi-agent Reinforcement Learning
Binqiang Chen

Exploiting Opponents under Utility Constraints in Extensive-Form Games
Martino Bernasconi de Luca, Federico Cacciamani, Simone Fioravanti, Alberto Marchesi, Nicola Gatti and Francesco Trovò

The Evolutionary Dynamics of Soft-Max Policy Gradient in Games
Martino Bernasconi de Luca, Federico Cacciamani, Simone Fioravanti, Nicola Gatti and Francesco Trovò

Commonsense Knowledge from Scene Graphs for Textual Environments
Tsunehiko Tanaka, Daiki Kimura and Michiaki Tatsubori

Direct Behavior Specification via Constrained Reinforcement Learning
Julien Roy, Roger Girgis, Joshua Romoff, Pierre-Luc Bacon and Christopher Pal

Equilibrium Computation for Auction Games via Multi-Swarm Optimization
Nils Kohring, Carina Fröhlich, Stefan Heidekrueger and Martin Bichler

Exploring Reward Surfaces in Reinforcement Learning Environments
Ryan Sullivan, J. K. Terry, Benjamin Black and John Dickerson

HiRL: Dealing with Non-stationarity in Hierarchical Reinforcement Learning via High-level Relearning
Yuhang Jiao and Yoshimasa Tsuruoka

Batch Monte Carlo Tree Search
Tristan Cazenave

Computing Distributional Bayes Nash Equilibria in Auction Games via Gradient Dynamics
Maximilian Fichtl, Matthias Oberlechner and Martin Bichler

Fast Payoff Matrix Sparsification Techniques for Structured Extensive-Form Games
Gabriele Farina and Tuomas Sandholm

Dreaming with Transformers
Catherine Zeng, Jordan Docter, Alexander Amini, Igor Gilitschenski, Ramin Hasani and Daniela Rus

Coalitional Negotiation Games with Emergent Communication
Xiaoyang Gao, Siqi Chen, Jie Lin, Yang Yang, Haiying Wu and Jianye Hao

Team Correlated Equilibria in Zero-Sum Extensive-Form Games via Tree Decompositions
Brian Zhang and Tuomas Sandholm

Cooperation Learning in Time-Varying Multi-Agent Networks
Vasanth Reddy Baddam, Almuatazbellah Boker and Hoda Eldardiry

Learning Generalizable Behavior via Visual Rewrite Rules
Mingxuan Li, Yiheng Xie, Shangqun Yu and Michael Littman

Graph augmented Deep Reinforcement Learning in the GameRLand3D environment
Edward Beeching, Maxim Peter, Philippe Marcotte, Jilles Dibangoye, Olivier Simonin, Joshua Romoff and Christian Wolf

Anytime Optimal PSRO for Two-Player Zero-Sum Games
Stephen McAleer, Kevin Wang, Marc Lanctot, John Lanier, Pierre Baldi and Roy Fox

On the Use and Misuse of Absorbing States in Multi-agent Reinforcement Learning
Andrew Cohen, Ervin Teng, Vincent-Pierre Berges, Ruo-Ping Dong, Hunter Henry, Marwan Mattar, Alexander Zook and Sujoy Ganguly

The Partially Observable History Process
Dustin Morrill, Amy Greenwald and Michael Bowling

A Review for Deep Reinforcement Learning in Atari: Benchmarks, Challenges and Solutions
Jiajun Fan

Continual Depth-limited Responses for Computing Counter-strategies in Extensive-form Games
David Milec and Viliam Lisy

Learning from Ambiguous Demonstrations with Self-Explanation Guided Reinforcement Learning
Yantian Zha, Lin Guan and Subbarao Kambhampati

Stackelberg MADDPG: Learning Emergent Behaviors via Information Asymmetry in Competitive Games
Boling Yang, Liyuan Zheng, Lillian Ratliff, Byron Boots and Joshua Smith

Faster No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium
Ioannis Anagnostides, Gabriele Farina, Tuomas Sandholm and Christian Kroer

Connecting Optimal Ex-Ante Collusion in Teams to Extensive-Form Correlation: Faster Algorithms and Positive Complexity Results
Gabriele Farina, Andrea Celli, Nicola Gatti and Tuomas Sandholm

GDI: Rethinking What Makes Reinforcement Learning Different from Supervised Learning
Jiajun Fan, Changnan Xiao and Yue Huang

Where, When & Which Concepts does AlphaZero Learn? Lessons from the Game of Hex
Jessica Forde, Charles Lovering, Ellie Pavlick and Michael Littman

MDP Abstraction with Successor Features
Dongge Han, Michael Wooldridge and Sebastian Tschiatschek

Fast Algorithms for Poker Require Modelling it as a Sequential Bayesian Game
Vojtech Kovarik, David Milec, Michal Sustr, Dominik Seitz and Viliam Lisy

A Deep Reinforcement Learning Agent with Bayesian Policy Reuse for Bilateral Negotiation Games
Xiaoyang Gao, Siqi Chen, Yan Zheng and Jianye Hao

An Adaptive State Aggregation Algorithm for Markov Decision Processes
Guanting Chen, Johann Gaebler, Matt Peng, Chunlin Sun and Yinyu Ye

Deep Catan
Brahim Driss and Tristan Cazenave