auto update
elkoz authored and github-actions[bot] committed Nov 21, 2023
1 parent 1943f69 commit cb92f4c
Showing 3 changed files with 32 additions and 6 deletions.
28 changes: 22 additions & 6 deletions docs/data/torch.html
@@ -209,7 +209,7 @@ <h1 class="title">Module <code>proteinflow.data.torch</code></h1>
mask_all_cdrs : bool, default False
if `True`, all CDRs are masked instead of just the sampled one
classes_dict_path : str, optional
- path to the pickled classes dictionary
+ path to the pickled classes dictionary; if not given, we will try to find split dictionaries in the parent folder of `dataset_folder`
load_ligands : bool, default False
if `True`, the ligands will be loaded from the PDB files and added to the features
cut_edges : bool, default False
@@ -414,6 +414,14 @@ <h1 class="title">Module <code>proteinflow.data.torch</code></h1>
&#34;&#34;&#34;
self.debug = debug_verbose

+ if classes_dict_path is None:
+     dataset_parent = os.path.dirname(dataset_folder)
+     classes_dict_path = os.path.join(
+         dataset_parent, &#34;splits_dict&#34;, &#34;classes.pickle&#34;
+     )
+     if not os.path.exists(classes_dict_path):
+         classes_dict_path = None

alphabet = ALPHABET
self.alphabet_dict = defaultdict(lambda: 0)
for i, letter in enumerate(alphabet):
@@ -515,7 +523,7 @@ <h1 class="title">Module <code>proteinflow.data.torch</code></h1>
classes_to_exclude = []
elif classes_dict_path is None:
raise ValueError(
- &#34;classes_to_exclude is not None, but classes_dict_path is None&#34;
+ &#34;The classes_to_exclude parameter is not None, but classes_dict_path is None. Please provide a path to a pickled classes dictionary.&#34;
)
if clustering_dict_path is not None:
if entry_type == &#34;pair&#34;:
@@ -1329,6 +1337,14 @@ <h2 id="parameters">Parameters</h2>
&#34;&#34;&#34;
self.debug = debug_verbose

+ if classes_dict_path is None:
+     dataset_parent = os.path.dirname(dataset_folder)
+     classes_dict_path = os.path.join(
+         dataset_parent, &#34;splits_dict&#34;, &#34;classes.pickle&#34;
+     )
+     if not os.path.exists(classes_dict_path):
+         classes_dict_path = None

alphabet = ALPHABET
self.alphabet_dict = defaultdict(lambda: 0)
for i, letter in enumerate(alphabet):
@@ -1430,7 +1446,7 @@ <h2 id="parameters">Parameters</h2>
classes_to_exclude = []
elif classes_dict_path is None:
raise ValueError(
- &#34;classes_to_exclude is not None, but classes_dict_path is None&#34;
+ &#34;The classes_to_exclude parameter is not None, but classes_dict_path is None. Please provide a path to a pickled classes dictionary.&#34;
)
if clustering_dict_path is not None:
if entry_type == &#34;pair&#34;:
@@ -2215,7 +2231,7 @@ <h2 id="parameters">Parameters</h2>
mask_all_cdrs : bool, default False
if `True`, all CDRs are masked instead of just the sampled one
classes_dict_path : str, optional
- path to the pickled classes dictionary
+ path to the pickled classes dictionary; if not given, we will try to find split dictionaries in the parent folder of `dataset_folder`
load_ligands : bool, default False
if `True`, the ligands will be loaded from the PDB files and added to the features
cut_edges : bool, default False
@@ -2355,7 +2371,7 @@ <h2 id="parameters">Parameters</h2>
<dt><strong><code>mask_all_cdrs</code></strong> :&ensp;<code>bool</code>, default <code>False</code></dt>
<dd>if <code>True</code>, all CDRs are masked instead of just the sampled one</dd>
<dt><strong><code>classes_dict_path</code></strong> :&ensp;<code>str</code>, optional</dt>
- <dd>path to the pickled classes dictionary</dd>
+ <dd>path to the pickled classes dictionary; if not given, we will try to find split dictionaries in the parent folder of <code>dataset_folder</code></dd>
<dt><strong><code>load_ligands</code></strong> :&ensp;<code>bool</code>, default <code>False</code></dt>
<dd>if <code>True</code>, the ligands will be loaded from the PDB files and added to the features</dd>
<dt><strong><code>cut_edges</code></strong> :&ensp;<code>bool</code>, default <code>False</code></dt>
@@ -2446,7 +2462,7 @@ <h2 id="parameters">Parameters</h2>
mask_all_cdrs : bool, default False
if `True`, all CDRs are masked instead of just the sampled one
classes_dict_path : str, optional
- path to the pickled classes dictionary
+ path to the pickled classes dictionary; if not given, we will try to find split dictionaries in the parent folder of `dataset_folder`
load_ligands : bool, default False
if `True`, the ligands will be loaded from the PDB files and added to the features
cut_edges : bool, default False
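The hunks at `-414,6 +414,14` and `-1329,6 +1337,14` add a fallback for `classes_dict_path`: when it is not given, the loader looks for `splits_dict/classes.pickle` in the parent folder of `dataset_folder`. Below is a minimal standalone sketch of that fallback; the helper name `resolve_classes_dict_path` and the example paths are hypothetical and only mirror the logic shown in the diff.

```python
import os


def resolve_classes_dict_path(dataset_folder, classes_dict_path=None):
    """Hypothetical helper mirroring the fallback added in this commit."""
    if classes_dict_path is None:
        # assume the split dictionaries live in a `splits_dict` folder
        # next to the dataset folder (i.e. in its parent directory)
        candidate = os.path.join(
            os.path.dirname(dataset_folder), "splits_dict", "classes.pickle"
        )
        classes_dict_path = candidate if os.path.exists(candidate) else None
    return classes_dict_path


# e.g. with data/my_dataset/ and data/splits_dict/classes.pickle on disk,
# resolve_classes_dict_path("data/my_dataset") returns the classes.pickle path;
# if the file is missing, the path stays None, as before this commit.
```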
6 changes: 6 additions & 0 deletions docs/index.html
@@ -44,6 +44,8 @@ <h2 id="installation">Installation</h2>
<p>docker:</p>
<pre><code class="language-bash">docker pull adaptyvbio/proteinflow
</code></pre>
+ <p>By default installing <code><a title="proteinflow" href="#proteinflow">proteinflow</a></code> with conda or pip will only load the dependencies that are required for the main functions of the package: downloading, generating and splitting datasets. If you are interested in using other functions like visualization, metrics and other data processing methods, please install the package with <code>pip install <a title="proteinflow" href="#proteinflow">proteinflow</a>[<a title="proteinflow.processing" href="processing/index.html">proteinflow.processing</a>]</code> or use the docker image.</p>
+ <p>Some metric functions also have separate requirements, see the documentation for details.</p>
<h3 id="troubleshooting">Troubleshooting</h3>
<ul>
<li>If you are using python 3.10 and encountering installation problems, try running <code>python -m pip install prody==2.4.0</code> before installing <code><a title="proteinflow" href="#proteinflow">proteinflow</a></code>.</li>
@@ -265,6 +267,10 @@ <h2 id="proteinflow-stable-releases">ProteinFlow Stable Releases</h2>
docker pull adaptyvbio/proteinflow
```

+ By default installing `proteinflow` with conda or pip will only load the dependencies that are required for the main functions of the package: downloading, generating and splitting datasets. If you are interested in using other functions like visualization, metrics and other data processing methods, please install the package with `pip install proteinflow[processing]` or use the docker image.
+
+ Some metric functions also have separate requirements, see the documentation for details.

### Troubleshooting
- If you are using python 3.10 and encountering installation problems, try running `python -m pip install prody==2.4.0` before installing `proteinflow`.
- If you are planning to generate new datasets and installed `proteinflow` with `pip` (or with `conda` on Mac OS with an M1 processor), you will need to additionally install [`mmseqs`](https://github.com/soedinglab/MMseqs2).
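The paragraph added to the installation docs distinguishes the core install from the optional extras. As a consolidated reference (a sketch using only the commands already named in the docs), the three routes read:

```bash
# core dependencies only: downloading, generating and splitting datasets
pip install proteinflow

# optional extras for visualization, metrics and other data processing methods
# (quoted so the brackets survive shells like zsh)
pip install "proteinflow[processing]"

# or use the prebuilt docker image
docker pull adaptyvbio/proteinflow
```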
4 changes: 4 additions & 0 deletions proteinflow/__init__.py
@@ -29,6 +29,10 @@
docker pull adaptyvbio/proteinflow
```
+ By default installing `proteinflow` with conda or pip will only load the dependencies that are required for the main functions of the package: downloading, generating and splitting datasets. If you are interested in using other functions like visualization, metrics and other data processing methods, please install the package with `pip install proteinflow[processing]` or use the docker image.
+ Some metric functions also have separate requirements, see the documentation for details.
### Troubleshooting
- If you are using python 3.10 and encountering installation problems, try running `python -m pip install prody==2.4.0` before installing `proteinflow`.
- If you are planning to generate new datasets and installed `proteinflow` with `pip` (or with `conda` on Mac OS with an M1 processor), you will need to additionally install [`mmseqs`](https://github.com/soedinglab/MMseqs2).
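The troubleshooting bullets kept as context above imply an ordering for the Python 3.10 workaround; the sketch below only sequences the commands already given in the docs:

```bash
# Python 3.10: pin prody before installing proteinflow
python -m pip install prody==2.4.0
pip install proteinflow

# generating new datasets after a pip install additionally requires mmseqs
# (see https://github.com/soedinglab/MMseqs2 for installation instructions)
```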
