The simplest implementation of Pensieve (SIGCOMM '17) via state-of-the-art RL algorithms, including DQN, A2C, PPO, and SAC


Pensieve PPO

About Pensieve-PPO

This is a simple TensorFlow implementation of Pensieve [1]. Specifically, we train Pensieve with PPO rather than A3C. It is a stable version that already includes the training set and the test set, so you can run the repo with a single command:

python train.py

Results are evaluated on the test set (HSDPA traces) every 300 epochs.
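For context, PPO replaces A3C's policy-gradient update with a clipped surrogate objective. The sketch below shows that loss in plain Python/NumPy; the function name, argument names, and the clip range eps=0.2 are illustrative assumptions, not code from this repository.

import numpy as np

# Minimal sketch of PPO's clipped surrogate loss (illustrative only;
# not this repository's actual training code).
def ppo_clip_loss(log_probs, old_log_probs, advantages, eps=0.2):
    # Probability ratio between the current policy and the old (rollout) policy.
    ratio = np.exp(log_probs - old_log_probs)
    # Unclipped vs. clipped surrogate objectives.
    surr1 = ratio * advantages
    surr2 = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # PPO maximizes the elementwise minimum; the loss is its negation.
    return -np.mean(np.minimum(surr1, surr2))

In practice, an entropy bonus (scaled by the entropy weight beta, whose curve is reported below) is typically added to this loss to keep the policy exploratory during training.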

Experimental Results

We report the training curves of the entropy weight beta, the reward, and the entropy, respectively. Results are evaluated over the Oboe network traces.

Note: the orange curves show Pensieve-PPO; the blue curves show Pensieve-A2C.

Pretrained Model

We have also added a pretrained model at https://github.com/godka/Pensieve-PPO/tree/master/src/pretrain

This model improves average QoE by 7.03% (0.924 -> 0.989) compared with the original Pensieve model [1].
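For reference, the linear QoE metric from [1] rewards bitrate, penalizes rebuffering, and penalizes bitrate switches. The helper below is a minimal sketch of that metric; the function name and the per-chunk aggregation are our assumptions about how the score is computed, with mu = 4.3 matching the paper's QoE_lin.

# Sketch of the linear QoE metric from [1]:
#   QoE = sum(bitrate) - mu * sum(rebuffer time) - sum(|bitrate changes|)
# mu = 4.3 matches the paper's QoE_lin; everything else is illustrative.
def linear_qoe(bitrates_mbps, rebuffer_s, mu=4.3):
    quality = sum(bitrates_mbps)
    rebuffer_penalty = mu * sum(rebuffer_s)
    smoothness_penalty = sum(
        abs(nxt - cur) for cur, nxt in zip(bitrates_mbps, bitrates_mbps[1:])
    )
    return quality - rebuffer_penalty - smoothness_penalty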

Please feel free to let me know if you have any questions.

[1] Mao H., Netravali R., Alizadeh M. Neural Adaptive Video Streaming with Pensieve. In Proceedings of the Conference of the ACM Special Interest Group on Data Communication (SIGCOMM '17). ACM, 2017: 197-210.
