Directly applying RL algorithms to a complex grid control problem is difficult: the policy search space is large, renewable generation is uncertain, and the system dynamics are nonlinear, so training a satisfactory policy typically requires extensive tuning. To address this challenge, this repository provides an example of using the curriculum learning (CL) technique to design a training curriculum that introduces a simpler stepping-stone problem, guiding the RL agent to solve the original hard problem in a progressive and more effective manner.
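To make the curriculum idea concrete, here is a minimal, self-contained sketch on a toy problem (not this repo's CLR environment): a tabular Q-learning agent on a 1-D chain MDP, where early curriculum stages start episodes near the goal and later stages move the start farther away, so learned values propagate backward stage by stage. Everything in this block (the chain MDP, the stage schedule, all names) is a hypothetical illustration, not code from this repository.

```python
import random

# Toy chain MDP: states 0..N-1, goal at state N-1.
# Actions: 1 = move right (toward goal), 0 = move left.
N = 12
GOAL = N - 1

def step(state, action):
    """One transition; reward 1.0 only upon reaching the goal."""
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(q, start_state, episodes, eps=0.3, alpha=0.5, gamma=0.95):
    """Tabular Q-learning with epsilon-greedy exploration from a fixed start."""
    for _ in range(episodes):
        s, done, steps = start_state, False, 0
        while not done and steps < 4 * N:
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2, r, done = step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1

random.seed(0)
q = [[0.0, 0.0] for _ in range(N)]

# Curriculum: stages start near the goal (easy) and move the start state
# farther away (hard), mirroring the stepping-stone idea described above.
for start in range(GOAL - 2, -1, -2):
    train(q, start, episodes=200)

# Evaluate: greedy rollout from the hardest start state (0).
s, steps = 0, 0
while s != GOAL and steps < 2 * N:
    s, _, _ = step(s, 0 if q[s][0] > q[s][1] else 1)
    steps += 1
```

In the actual CLR problem the stepping-stone stage is of course a simpler grid control task rather than a shorter chain, but the training pattern (solve the easy stage first, then reuse what was learned on the hard stage) is the same.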
Specifically, the optimal grid control problem considered is the critical load restoration (CLR) problem that arises after a distribution system is islanded by a substation outage. Because we provide a reinforcement learning control (RLC) solution to the CLR problem, the repository is named RLC4CLR.
Please refer to our published paper and preprint on arXiv for more details.
Prepare the environment
```shell
git clone https://github.com/NREL/rlc4clr.git
cd rlc4clr
conda create -n rlc4clr python=3.10
conda activate rlc4clr
pip install -r requirements.txt
cd rlc4clr
pip install -e .
```
Download the renewable generation profiles and synthetic forecast data from the OEDI website, unzip the data file, and place it in a desired folder. Then configure the path to the renewable data in DEFAULT_CONFIG.py.
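The entry in DEFAULT_CONFIG.py might look like the fragment below. Note that the variable name here is an assumption for illustration only; check DEFAULT_CONFIG.py for the actual name the repo uses.

```python
# DEFAULT_CONFIG.py (illustrative fragment; variable name is hypothetical)
# Point this at the folder containing the unzipped OEDI renewable data.
RENEWABLE_DATA_PATH = "/path/to/renewable_data"
```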
To test whether the environment is properly installed, run the explore_env.ipynb notebook under the train folder.
This work was authored by the National Renewable Energy Laboratory (NREL), operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC36-08GO28308. Funding provided by the U.S. Department of Energy Office of Electricity (OE) Advanced Grid Modeling (AGM) Program.
If citing this work, please use the following:
@article{zhang2022curriculum,
  title={Curriculum-based reinforcement learning for distribution system critical load restoration},
  author={Zhang, Xiangyu and Eseye, Abinet Tesfaye and Knueven, Bernard and Liu, Weijia and Reynolds, Matthew and Jones, Wesley},
  journal={IEEE Transactions on Power Systems},
  year={2022},
  publisher={IEEE}
}
@misc{osti_1887968,
  title = {RLC4CLR (Reinforcement Learning Controller for Critical Load Restoration Problems)},
  author = {Zhang, Xiangyu and Eseye, Abinet Tesfaye and Knueven, Bernard and Liu, Weijia and Reynolds, Matthew and Jones, Wesley and USDOE Office of Electricity},
  doi = {10.11578/dc.20220919.5},
  url = {https://www.osti.gov/biblio/1887968},
  year = {2022},
  month = {3},
}