diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index 25196524..2ef10126 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -3,8 +3,8 @@ Changelog
 Best viewed on `the website`_.
-Removed dependency on pygraphviz
+Removed dependency on pygraphviz;
+Added utils.debugging.TreeDebugger, which makes it easier to inspect the search tree.
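The TreeDebugger entry above is about interactive inspection of the planner's search tree. As a rough illustration of the kind of inspection this enables (a self-contained sketch, not pomdp_py's actual API — the `Node` class and `print_tree` helper below are hypothetical stand-ins; the real class lives at `pomdp_py.utils.debugging.TreeDebugger` and wraps the agent's tree after planning):

```python
# Hypothetical stand-in for a planner search tree node; pomdp_py's real
# TreeDebugger instead wraps agent.tree after running a planner.
class Node:
    def __init__(self, name, value=0.0, visits=0):
        self.name = name          # action or observation label
        self.value = value        # estimated value at this node
        self.visits = visits      # visit count from the search
        self.children = {}        # edge label -> child Node

def print_tree(node, depth=0, max_depth=3):
    """Recursively print a search tree, one indented line per node."""
    if depth > max_depth:
        return
    print("  " * depth + f"{node.name} (V={node.value:.2f}, N={node.visits})")
    for child in node.children.values():
        print_tree(child, depth + 1, max_depth)

# Build a tiny two-level tree and inspect it.
root = Node("root", value=1.5, visits=10)
root.children["a1"] = Node("a1", value=2.0, visits=6)
root.children["a2"] = Node("a2", value=0.5, visits=4)
root.children["a1"].children["o1"] = Node("o1", value=1.8, visits=3)
print_tree(root)
```

With the real debugger, the same kind of indented value/visit-count dump is obtained from the tree built by a planner rather than from hand-made nodes.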
diff --git a/docs/html/index.html b/docs/html/index.html
index b8c19301..09c344fa 100644
--- a/docs/html/index.html
+++ b/docs/html/index.html
@@ -136,7 +136,7 @@
pomdp_py is a general purpose POMDP library written in Python and Cython. It features simple and comprehensive interfaces to describe POMDP or MDP problems. Originally written to support POMDP planning research, the interfaces also allow extensions to model-free or model-based learning in (PO)MDPs, multi-agent POMDP planning/learning, and task transfer or transfer learning.
Why pomdp_py? It provides a POMDP framework in Python with clean and intuitive interfaces. This makes POMDP-related research or projects accessible to more people. It also helps with sharing code and developing a community.
-POMDP stands for Partially Observable Markov Decision Process [2].
+POMDP stands for Partially Observable Markov Decision Process [2].
The code is available on github. We welcome contributions to this library in:
Implementation of additional POMDP solvers (see Existing POMDP Solvers)