
Code for "Two-Memory Reinforcement Learning", COG 2023. A general framework that combines a non-parametric episodic memory method with a parametric deep reinforcement learning method.


Two-Memory Reinforcement Learning

IEEE Conference on Games 2023

Zhao Yang, Thomas Moerland, Mike Preuss, Aske Plaat



If you find our paper or code useful, please cite us:

@inproceedings{yang2023two,
  title={Two-Memory Reinforcement Learning},
  author={Yang, Zhao and Moerland, Thomas and Preuss, Mike and Plaat, Aske},
  booktitle={IEEE Conference on Games},
  year={2023}
}

Dependencies Installation

Create the conda environment by running:

conda env create -f environment.yml

In order to run experiments on MinAtar tasks, you need to install MinAtar by following the instructions provided in its repository.

Running Experiments


The code base uses wandb for logging all results. To use it, you need to register a wandb account; you can then pass --wandb to enable wandb logging.

You can run the main experiments with:

python train_2m.py --wandb

and the tabular experiments presented in the paper with:

python tabular/train_tab.py --wandb

Please note that the hyper-parameters in this work are quite sensitive; in order to fully reproduce the results presented in the paper, you need to set the hyper-parameters exactly as given in the file.

Code Overview


The structure of the code base:

2m/
  |- train_2m.py            # start training
  |- DQN.py                 # implementation of DQN agent
  |- MFEC_atari.py          # implementation of model-free episodic control agent for MinAtar tasks
  |- tabular/               # folder of tabular implementations
  |- RB.py                  # implementation of replay buffers
  |- utils.py               # utils functions
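To illustrate how a non-parametric episodic memory can be combined with a parametric value function, here is a minimal, hypothetical Python sketch. It is not the implementation in DQN.py or MFEC_atari.py: the memory interface, the blending weight beta, and the state-embedding format are all assumptions made for illustration, and the paper's actual 2M scheme for coordinating the two memories may differ.

```python
import numpy as np

class EpisodicMemory:
    """Non-parametric memory: stores state embeddings together with the
    discounted returns observed from them (MFEC-style sketch, simplified)."""

    def __init__(self, k=3):
        self.k = k          # number of neighbours used in the estimate
        self.keys = []      # stored state embeddings
        self.values = []    # returns associated with each embedding

    def write(self, key, ret):
        self.keys.append(np.asarray(key, dtype=float))
        self.values.append(float(ret))

    def estimate(self, key):
        # k-nearest-neighbour average over stored returns
        key = np.asarray(key, dtype=float)
        dists = [np.linalg.norm(key - stored) for stored in self.keys]
        nearest = np.argsort(dists)[: self.k]
        return float(np.mean([self.values[i] for i in nearest]))

def select_action(state_key, memories, parametric_q, beta=0.5):
    """Blend the episodic estimate with a parametric Q-value per action.

    `memories` maps action -> EpisodicMemory; `parametric_q` maps
    action -> scalar Q-value (a stand-in for a DQN forward pass).
    `beta` weights the two memories; all three are illustrative choices."""
    scores = {}
    for action, memory in memories.items():
        scores[action] = (beta * memory.estimate(state_key)
                          + (1.0 - beta) * parametric_q[action])
    return max(scores, key=scores.get)
```

For example, if the episodic memory for action 0 has seen higher returns than the one for action 1 from a similar state, and the parametric estimates are equal, select_action returns action 0.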

Acknowledgements


2M builds on many prior works, and we thank the authors for their contributions.

  • MinAtar for simplified Atari tasks
  • MFEC for the implementation of model-free episodic control agent
  • PEG for their nice READMEs
