
Commit

docs reupdate
csinva committed Jan 3, 2023
1 parent e95a650 commit c18078e
Showing 28 changed files with 1,649 additions and 283 deletions.
20 changes: 10 additions & 10 deletions docs/discretization/discretizer.html
@@ -183,7 +183,7 @@
max values for the range of x

keep_pointwise_bins : boolean
- If True, treat duplicate bin_edges as a pointiwse bin,
+ If True, treat duplicate bin_edges as a pointwise bin,
i.e., [a, a]. If False, these bins are in effect ignored.

Returns
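The `keep_pointwise_bins` behavior documented in the hunk above can be sketched as follows; `split_pointwise_bins` is a hypothetical helper for illustration, not part of the imodels API:

```python
import numpy as np

# Hypothetical illustration of the documented behavior: a duplicated
# consecutive bin edge [a, a] is kept as a "pointwise" bin containing
# exactly the value a when keep_pointwise_bins=True, and dropped otherwise.
def split_pointwise_bins(bin_edges, keep_pointwise_bins=True):
    edges = np.asarray(bin_edges, dtype=float)
    pairs = [(float(a), float(b)) for a, b in zip(edges[:-1], edges[1:])]
    regular = [(a, b) for a, b in pairs if a < b]
    pointwise = [(a, b) for a, b in pairs if a == b]
    return regular + pointwise if keep_pointwise_bins else regular

print(split_pointwise_bins([0.0, 2.0, 2.0, 5.0]))
# -> [(0.0, 2.0), (2.0, 5.0), (2.0, 2.0)]
```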
@@ -480,7 +480,7 @@

manual_discretizer_ : dictionary
Provides bin_edges to feed into _quantile_discretization()
- and do quantile discreization manually for features where
+ and do quantile discretization manually for features where
KBinsDiscretizer() failed. Ignored if strategy != 'quantile'
or no errors in KBinsDiscretizer().

@@ -515,7 +515,7 @@
self
"""

- # initalization and error checking
+ # initialization and error checking
self._fit_preprocessing(X)

# apply KBinsDiscretizer to the selected columns
@@ -734,7 +734,7 @@

Parameters
----------
- X : data frame of shape (n_samples, n_fatures)
+ X : data frame of shape (n_samples, n_features)
Training data used to fit RF

y : array-like of shape (n_samples,)
@@ -1119,7 +1119,7 @@ <h2 id="params">Params</h2>
max values for the range of x

keep_pointwise_bins : boolean
- If True, treat duplicate bin_edges as a pointiwse bin,
+ If True, treat duplicate bin_edges as a pointwise bin,
i.e., [a, a]. If False, these bins are in effect ignored.

Returns
@@ -1287,7 +1287,7 @@ <h2 id="attributes">Attributes</h2>
<dd>Primary discretization method used to bin numeric data</dd>
<dt><strong><code>manual_discretizer_</code></strong> :&ensp;<code>dictionary</code></dt>
<dd>Provides bin_edges to feed into _quantile_discretization()
- and do quantile discreization manually for features where
+ and do quantile discretization manually for features where
KBinsDiscretizer() failed. Ignored if strategy != 'quantile'
or no errors in KBinsDiscretizer().</dd>
<dt><strong><code>onehot_</code></strong> :&ensp;<code>object</code> of <code>class OneHotEncoder()</code></dt>
@@ -1352,7 +1352,7 @@ <h2 id="examples">Examples</h2></div>

manual_discretizer_ : dictionary
Provides bin_edges to feed into _quantile_discretization()
- and do quantile discreization manually for features where
+ and do quantile discretization manually for features where
KBinsDiscretizer() failed. Ignored if strategy != &#39;quantile&#39;
or no errors in KBinsDiscretizer().

@@ -1387,7 +1387,7 @@ <h2 id="examples">Examples</h2></div>
self
&#34;&#34;&#34;

- # initalization and error checking
+ # initialization and error checking
self._fit_preprocessing(X)

# apply KBinsDiscretizer to the selected columns
@@ -1509,7 +1509,7 @@ <h2 id="returns">Returns</h2>
self
&#34;&#34;&#34;

- # initalization and error checking
+ # initialization and error checking
self._fit_preprocessing(X)

# apply KBinsDiscretizer to the selected columns
@@ -2135,7 +2135,7 @@ <h2 id="attributes">Attributes</h2>

Parameters
----------
- X : data frame of shape (n_samples, n_fatures)
+ X : data frame of shape (n_samples, n_features)
Training data used to fit RF

y : array-like of shape (n_samples,)
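The manual quantile fallback that `manual_discretizer_` is described as enabling above can be sketched like this. This is an illustrative reimplementation under stated assumptions, not the routine imodels actually uses:

```python
import numpy as np

# Illustrative manual quantile discretization: derive bin edges from
# quantiles (dropping duplicate edges, as can happen with skewed data),
# then assign each value to a bin. Not the library's internal code.
def quantile_discretize(x, n_bins=4):
    edges = np.unique(np.quantile(x, np.linspace(0, 1, n_bins + 1)))
    # interior edges only: values below the first interior edge get bin 0
    return np.digitize(x, edges[1:-1])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
print(quantile_discretize(x))  # -> [0 0 1 1 2 2 3 3]
```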
2 changes: 1 addition & 1 deletion docs/discretization/index.html
@@ -36,7 +36,7 @@ <h2 class="section-title" id="header-submodules">Sub-modules</h2>
<dt><code class="name"><a title="imodels.discretization.mdlp" href="mdlp.html">imodels.discretization.mdlp</a></code></dt>
<dd>
<div class="desc"><p>Discretization MDLP
- Python implementation of Fayyad and Irani's MDLP criterion discretiation algorithm …</p></div>
+ Python implementation of Fayyad and Irani's MDLP criterion discretization algorithm …</p></div>
</dd>
<dt><code class="name"><a title="imodels.discretization.simple" href="simple.html">imodels.discretization.simple</a></code></dt>
<dd>
8 changes: 4 additions & 4 deletions docs/discretization/mdlp.html
@@ -18,7 +18,7 @@
<article id="content">
<section id="section-intro">
<h1 id="discretization-mdlp">Discretization MDLP</h1>
- <p>Python implementation of Fayyad and Irani's MDLP criterion discretiation algorithm</p>
+ <p>Python implementation of Fayyad and Irani's MDLP criterion discretization algorithm</p>
<p><strong>Reference:</strong>
Irani, Keki B. "Multi-interval discretization of continuous-valued attributes for classification learning." (1993).</p>
<details class="source">
@@ -27,7 +27,7 @@ <h1 id="discretization-mdlp">Discretization MDLP</h1>
</summary>
<pre><code class="python">&#39;&#39;&#39;
# Discretization MDLP
- Python implementation of Fayyad and Irani&#39;s MDLP criterion discretiation algorithm
+ Python implementation of Fayyad and Irani&#39;s MDLP criterion discretization algorithm

**Reference:**
Irani, Keki B. &#34;Multi-interval discretization of continuous-valued attributes for classification learning.&#34; (1993).
@@ -138,7 +138,7 @@ <h1 id="discretization-mdlp">Discretization MDLP</h1>
&#39;&#39;&#39;
Given an attribute, find all potential cut_points (boundary points)
:param feature: feature of interest
- :param partition_index: indices of rows for which feature value falls whithin interval of interest
+ :param partition_index: indices of rows for which feature value falls within interval of interest
:return: array with potential cut_points
&#39;&#39;&#39;
# get dataframe with only rows of interest, and feature and class columns
@@ -839,7 +839,7 @@ <h2 id="params">Params</h2>
&#39;&#39;&#39;
Given an attribute, find all potential cut_points (boundary points)
:param feature: feature of interest
- :param partition_index: indices of rows for which feature value falls whithin interval of interest
+ :param partition_index: indices of rows for which feature value falls within interval of interest
:return: array with potential cut_points
&#39;&#39;&#39;
# get dataframe with only rows of interest, and feature and class columns
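The cut-point search documented above rests on the Fayyad–Irani boundary-point idea: candidate cuts sit at midpoints between consecutive sorted feature values whose class labels differ. A minimal sketch of that idea (not the repository's implementation, which also handles tied feature values with mixed labels):

```python
import numpy as np

# Illustrative boundary-point search: sort by feature value and take the
# midpoint wherever both the value and the class label change.
def potential_cut_points(feature, labels):
    order = np.argsort(feature)
    x = np.asarray(feature, dtype=float)[order]
    y = np.asarray(labels)[order]
    cuts = [(x[i] + x[i + 1]) / 2
            for i in range(len(x) - 1)
            if x[i] != x[i + 1] and y[i] != y[i + 1]]
    return np.array(cuts)

print(potential_cut_points([3.0, 1.0, 4.0, 2.0], [1, 0, 1, 0]))  # -> [2.5]
```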
4 changes: 2 additions & 2 deletions docs/experimental/bartpy/data.html
@@ -99,7 +99,7 @@
self._n_features = X.shape[1]
self._mask = mask

- # Cache iniialization
+ # Cache initialization
if unique_columns is not None:
self._unique_columns = [x if x is True else None for x in unique_columns]
else:
@@ -524,7 +524,7 @@ <h2 class="section-title" id="header-classes">Classes</h2>
self._n_features = X.shape[1]
self._mask = mask

- # Cache iniialization
+ # Cache initialization
if unique_columns is not None:
self._unique_columns = [x if x is True else None for x in unique_columns]
else:
14 changes: 7 additions & 7 deletions docs/experimental/bartpy/diagnostics/diagnostics.html
@@ -30,8 +30,8 @@
from sklearn.metrics import mean_squared_error

from imodels.util.tree_interaction_utils import get_interacting_features
- from ..diagnostics.residuals import plot_qq, plot_homoskedasity_diagnostics
- from ..diagnostics.sampling import plot_tree_mutation_acceptance_rate, plot_tree_likelihhod, plot_tree_probs
+ from ..diagnostics.residuals import plot_qq, plot_homoscedasticity_diagnostics
+ from ..diagnostics.sampling import plot_tree_mutation_acceptance_rate, plot_tree_likelihood, plot_tree_probs
from ..diagnostics.sigma import plot_sigma_convergence
from ..diagnostics.trees import plot_tree_depth
from ..initializers.sklearntreeinitializer import SklearnTreeInitializer
@@ -44,9 +44,9 @@
plot_qq(model, ax1)
plot_tree_depth(model, ax2)
plot_sigma_convergence(model, ax3)
- plot_homoskedasity_diagnostics(model, ax4)
+ plot_homoscedasticity_diagnostics(model, ax4)
plot_tree_mutation_acceptance_rate(model, ax5)
- # plot_tree_likelihhod(model, ax6)
+ # plot_tree_likelihood(model, ax6)
# plot_tree_probs(model, ax7)

plt.show()
@@ -108,7 +108,7 @@
# plot_tree_depth(bart_figs, ax2,
# f&#34;FIGS initialization (MSE: {np.round(mean_squared_error(bart_figs_preds, y_test), 4)}&#34;
# f&#34;, FIGS MSE: {np.round(mean_squared_error(figs_preds, y_test), 2)})&#34;, x_label=True)
- # plt.title(f&#34;Bayesian tree with different initilization of Friedman 1 dataset n={n}&#34;)
+ # plt.title(f&#34;Bayesian tree with different initialization of Friedman 1 dataset n={n}&#34;)

plt.show()

@@ -138,9 +138,9 @@ <h2 class="section-title" id="header-functions">Functions</h2>
plot_qq(model, ax1)
plot_tree_depth(model, ax2)
plot_sigma_convergence(model, ax3)
- plot_homoskedasity_diagnostics(model, ax4)
+ plot_homoscedasticity_diagnostics(model, ax4)
plot_tree_mutation_acceptance_rate(model, ax5)
- # plot_tree_likelihhod(model, ax6)
+ # plot_tree_likelihood(model, ax6)
# plot_tree_probs(model, ax7)

plt.show()</code></pre>
4 changes: 2 additions & 2 deletions docs/experimental/bartpy/diagnostics/motivation.html
@@ -141,7 +141,7 @@

for c in range(n_chains):
clr = next(color)
- chain_preds = model.chain_precitions(X, c)
+ chain_preds = model.chain_predictions(X, c)
mean_pred = np.array(chain_preds).mean(axis=0)

y_plt = [mean_squared_error(mean_pred, p) for p in chain_preds]
@@ -445,7 +445,7 @@ <h2 class="section-title" id="header-functions">Functions</h2>

for c in range(n_chains):
clr = next(color)
- chain_preds = model.chain_precitions(X, c)
+ chain_preds = model.chain_predictions(X, c)
mean_pred = np.array(chain_preds).mean(axis=0)

y_plt = [mean_squared_error(mean_pred, p) for p in chain_preds]
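The per-chain loop in this file compares each posterior sample's predictions against the chain's mean prediction. A self-contained sketch of that inner computation, with synthetic arrays standing in for `model.chain_predictions(X, c)`:

```python
import numpy as np

# Synthetic stand-in for one chain's posterior predictions: each entry is
# one posterior sample's predictions over 50 data points.
rng = np.random.default_rng(0)
chain_preds = [rng.normal(0.0, 1.0, 50) for _ in range(20)]

# Chain-mean prediction, then each sample's MSE against that mean -- the
# per-chain convergence quantity plotted in the diagnostics above.
mean_pred = np.array(chain_preds).mean(axis=0)
y_plt = [float(np.mean((mean_pred - p) ** 2)) for p in chain_preds]
print(len(y_plt))  # -> 20
```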
10 changes: 5 additions & 5 deletions docs/experimental/bartpy/diagnostics/residuals.html
@@ -37,7 +37,7 @@
return ax


- def plot_homoskedasity_diagnostics(model: SklearnModel, ax=None):
+ def plot_homoscedasticity_diagnostics(model: SklearnModel, ax=None):
if ax is None:
_, ax = plt.subplots(1, 1, figsize=(5, 5))
sns.regplot(model.predict(model.data.X.values), model.residuals(model.data.X.values), ax=ax)
@@ -54,16 +54,16 @@
<section>
<h2 class="section-title" id="header-functions">Functions</h2>
<dl>
- <dt id="imodels.experimental.bartpy.diagnostics.residuals.plot_homoskedasity_diagnostics"><code class="name flex">
- <span>def <span class="ident">plot_homoskedasity_diagnostics</span></span>(<span>model: <a title="imodels.experimental.bartpy.sklearnmodel.SklearnModel" href="../sklearnmodel.html#imodels.experimental.bartpy.sklearnmodel.SklearnModel">SklearnModel</a>, ax=None)</span>
+ <dt id="imodels.experimental.bartpy.diagnostics.residuals.plot_homoscedasticity_diagnostics"><code class="name flex">
+ <span>def <span class="ident">plot_homoscedasticity_diagnostics</span></span>(<span>model: <a title="imodels.experimental.bartpy.sklearnmodel.SklearnModel" href="../sklearnmodel.html#imodels.experimental.bartpy.sklearnmodel.SklearnModel">SklearnModel</a>, ax=None)</span>
</code></dt>
<dd>
<div class="desc"></div>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
- <pre><code class="python">def plot_homoskedasity_diagnostics(model: SklearnModel, ax=None):
+ <pre><code class="python">def plot_homoscedasticity_diagnostics(model: SklearnModel, ax=None):
if ax is None:
_, ax = plt.subplots(1, 1, figsize=(5, 5))
sns.regplot(model.predict(model.data.X.values), model.residuals(model.data.X.values), ax=ax)
@@ -109,7 +109,7 @@ <h1>Index 🔍</h1>
</li>
<li><h3><a href="#header-functions">Functions</a></h3>
<ul class="">
- <li><code><a title="imodels.experimental.bartpy.diagnostics.residuals.plot_homoskedasity_diagnostics" href="#imodels.experimental.bartpy.diagnostics.residuals.plot_homoskedasity_diagnostics">plot_homoskedasity_diagnostics</a></code></li>
+ <li><code><a title="imodels.experimental.bartpy.diagnostics.residuals.plot_homoscedasticity_diagnostics" href="#imodels.experimental.bartpy.diagnostics.residuals.plot_homoscedasticity_diagnostics">plot_homoscedasticity_diagnostics</a></code></li>
<li><code><a title="imodels.experimental.bartpy.diagnostics.residuals.plot_qq" href="#imodels.experimental.bartpy.diagnostics.residuals.plot_qq">plot_qq</a></code></li>
</ul>
</li>
10 changes: 5 additions & 5 deletions docs/experimental/bartpy/diagnostics/sampling.html
@@ -38,7 +38,7 @@
ax.set_ylim((0, 1.1))
return ax

- def plot_tree_likelihhod(model: SklearnModel, ax=None):
+ def plot_tree_likelihood(model: SklearnModel, ax=None):
if ax is None:
fig, ax = plt.subplots(1, 1)

@@ -70,16 +70,16 @@
<section>
<h2 class="section-title" id="header-functions">Functions</h2>
<dl>
- <dt id="imodels.experimental.bartpy.diagnostics.sampling.plot_tree_likelihhod"><code class="name flex">
- <span>def <span class="ident">plot_tree_likelihhod</span></span>(<span>model: <a title="imodels.experimental.bartpy.sklearnmodel.SklearnModel" href="../sklearnmodel.html#imodels.experimental.bartpy.sklearnmodel.SklearnModel">SklearnModel</a>, ax=None)</span>
+ <dt id="imodels.experimental.bartpy.diagnostics.sampling.plot_tree_likelihood"><code class="name flex">
+ <span>def <span class="ident">plot_tree_likelihood</span></span>(<span>model: <a title="imodels.experimental.bartpy.sklearnmodel.SklearnModel" href="../sklearnmodel.html#imodels.experimental.bartpy.sklearnmodel.SklearnModel">SklearnModel</a>, ax=None)</span>
</code></dt>
<dd>
<div class="desc"></div>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
- <pre><code class="python">def plot_tree_likelihhod(model: SklearnModel, ax=None):
+ <pre><code class="python">def plot_tree_likelihood(model: SklearnModel, ax=None):
if ax is None:
fig, ax = plt.subplots(1, 1)

@@ -152,7 +152,7 @@ <h1>Index 🔍</h1>
</li>
<li><h3><a href="#header-functions">Functions</a></h3>
<ul class="">
- <li><code><a title="imodels.experimental.bartpy.diagnostics.sampling.plot_tree_likelihhod" href="#imodels.experimental.bartpy.diagnostics.sampling.plot_tree_likelihhod">plot_tree_likelihhod</a></code></li>
+ <li><code><a title="imodels.experimental.bartpy.diagnostics.sampling.plot_tree_likelihood" href="#imodels.experimental.bartpy.diagnostics.sampling.plot_tree_likelihood">plot_tree_likelihood</a></code></li>
<li><code><a title="imodels.experimental.bartpy.diagnostics.sampling.plot_tree_mutation_acceptance_rate" href="#imodels.experimental.bartpy.diagnostics.sampling.plot_tree_mutation_acceptance_rate">plot_tree_mutation_acceptance_rate</a></code></li>
<li><code><a title="imodels.experimental.bartpy.diagnostics.sampling.plot_tree_probs" href="#imodels.experimental.bartpy.diagnostics.sampling.plot_tree_probs">plot_tree_probs</a></code></li>
</ul>
22 changes: 11 additions & 11 deletions docs/experimental/bartpy/initializers/sklearntreeinitializer.html
@@ -102,14 +102,14 @@
return


- def enumarate_tree(tree: Node, num_iter=iter(range(int(1e+06)))):
+ def enumerate_tree(tree: Node, num_iter=iter(range(int(1e+06)))):
if tree is None:
return
tree.number = next(num_iter)
# if hasattr(tree, &#39;left&#39;):
- enumarate_tree(get_child(tree, &#39;left&#39;), num_iter)
+ enumerate_tree(get_child(tree, &#39;left&#39;), num_iter)
# if hasattr(tree, &#39;right&#39;):
- enumarate_tree(get_child(tree, &#39;right&#39;), num_iter)
+ enumerate_tree(get_child(tree, &#39;right&#39;), num_iter)


def fill_nodes_dict(tree: Node, node_dict: dict):
@@ -125,7 +125,7 @@
class SkTree:
def __init__(self, tree: Node):
nodes_dict = {}
- enumarate_tree(tree, num_iter=iter(range(int(1e+06))))
+ enumerate_tree(tree, num_iter=iter(range(int(1e+06))))
fill_nodes_dict(tree, nodes_dict)
self.children_left = []
self.children_right = []
@@ -231,23 +231,23 @@
<section>
<h2 class="section-title" id="header-functions">Functions</h2>
<dl>
- <dt id="imodels.experimental.bartpy.initializers.sklearntreeinitializer.enumarate_tree"><code class="name flex">
- <span>def <span class="ident">enumarate_tree</span></span>(<span>tree: <a title="imodels.tree.figs.Node" href="../../../tree/figs.html#imodels.tree.figs.Node">Node</a>, num_iter=&lt;range_iterator object&gt;)</span>
+ <dt id="imodels.experimental.bartpy.initializers.sklearntreeinitializer.enumerate_tree"><code class="name flex">
+ <span>def <span class="ident">enumerate_tree</span></span>(<span>tree: <a title="imodels.tree.figs.Node" href="../../../tree/figs.html#imodels.tree.figs.Node">Node</a>, num_iter=&lt;range_iterator object&gt;)</span>
</code></dt>
<dd>
<div class="desc"></div>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
- <pre><code class="python">def enumarate_tree(tree: Node, num_iter=iter(range(int(1e+06)))):
+ <pre><code class="python">def enumerate_tree(tree: Node, num_iter=iter(range(int(1e+06)))):
if tree is None:
return
tree.number = next(num_iter)
# if hasattr(tree, &#39;left&#39;):
- enumarate_tree(get_child(tree, &#39;left&#39;), num_iter)
+ enumerate_tree(get_child(tree, &#39;left&#39;), num_iter)
# if hasattr(tree, &#39;right&#39;):
- enumarate_tree(get_child(tree, &#39;right&#39;), num_iter)
+ enumerate_tree(get_child(tree, &#39;right&#39;), num_iter)</code></pre>
</details>
</dd>
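The numbering that `enumerate_tree` applies is a pre-order traversal: a node gets its number before its left subtree, which is numbered before its right subtree. A minimal self-contained sketch (the `Node` class here is a stand-in, not imodels' own):

```python
from itertools import count

# Stand-in binary-tree node; imodels' Node class carries more fields.
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right, self.number = left, right, None

# Pre-order numbering, mirroring enumerate_tree above.
def enumerate_tree(tree, num_iter):
    if tree is None:
        return
    tree.number = next(num_iter)
    enumerate_tree(tree.left, num_iter)
    enumerate_tree(tree.right, num_iter)

root = Node(left=Node(), right=Node(left=Node()))
enumerate_tree(root, count())
print(root.number, root.left.number, root.right.number, root.right.left.number)
# -> 0 1 2 3
```

Note that the library version's default `num_iter=iter(range(int(1e+06)))` is a mutable default that would persist across calls, which appears to be why `SkTree.__init__` in this diff passes a fresh iterator explicitly.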
<dt id="imodels.experimental.bartpy.initializers.sklearntreeinitializer.fill_nodes_dict"><code class="name flex">
@@ -427,7 +427,7 @@ <h2 class="section-title" id="header-classes">Classes</h2>
<pre><code class="python">class SkTree:
def __init__(self, tree: Node):
nodes_dict = {}
- enumarate_tree(tree, num_iter=iter(range(int(1e+06))))
+ enumerate_tree(tree, num_iter=iter(range(int(1e+06))))
fill_nodes_dict(tree, nodes_dict)
self.children_left = []
self.children_right = []
@@ -580,7 +580,7 @@ <h1>Index 🔍</h1>
</li>
<li><h3><a href="#header-functions">Functions</a></h3>
<ul class="">
- <li><code><a title="imodels.experimental.bartpy.initializers.sklearntreeinitializer.enumarate_tree" href="#imodels.experimental.bartpy.initializers.sklearntreeinitializer.enumarate_tree">enumarate_tree</a></code></li>
+ <li><code><a title="imodels.experimental.bartpy.initializers.sklearntreeinitializer.enumerate_tree" href="#imodels.experimental.bartpy.initializers.sklearntreeinitializer.enumerate_tree">enumerate_tree</a></code></li>
<li><code><a title="imodels.experimental.bartpy.initializers.sklearntreeinitializer.fill_nodes_dict" href="#imodels.experimental.bartpy.initializers.sklearntreeinitializer.fill_nodes_dict">fill_nodes_dict</a></code></li>
<li><code><a title="imodels.experimental.bartpy.initializers.sklearntreeinitializer.get_bartpy_tree_from_sklearn" href="#imodels.experimental.bartpy.initializers.sklearntreeinitializer.get_bartpy_tree_from_sklearn">get_bartpy_tree_from_sklearn</a></code></li>
<li><code><a title="imodels.experimental.bartpy.initializers.sklearntreeinitializer.get_child" href="#imodels.experimental.bartpy.initializers.sklearntreeinitializer.get_child">get_child</a></code></li>