Update README, use env variable for max torch, add typing
dsikka committed Jun 22, 2023
1 parent 3cdacb1 commit a2907bd
Showing 5 changed files with 41 additions and 11 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -107,7 +107,7 @@ SparseML enables you to create a sparse model trained on your dataset in two ways
This repository is tested on Python 3.8-3.10, and Linux/Debian systems.

It is recommended to install in a [virtual environment](https://docs.python.org/3/library/venv.html) to keep your system in order.
-Currently supported ML Frameworks are the following: `torch>=1.1.0,<=2.0`, `tensorflow>=1.8.0,<2.0.0`, `tensorflow.keras >= 2.2.0`. NOTE: `CLIP` examples under `integrations/clip` require torch nightly to be installed.
+Currently supported ML Frameworks are the following: `torch>=1.1.0,<=2.0`, `tensorflow>=1.8.0,<2.0.0`, `tensorflow.keras >= 2.2.0`.

Install with pip using:

26 changes: 26 additions & 0 deletions integrations/clip/README.md
@@ -0,0 +1,26 @@
<!--
Copyright (c) 2021 - present / Neuralmagic, Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# CLIP Export Examples

The examples in `clip_onnx_export.py` provide the steps needed to export a CLIP model using SparseML's ONNX export functionality. The models and pretrained weights are pulled in from [OpenCLIP](https://github.com/mlfoundations/open_clip/tree/main), and the provided command-line tools allow exporting a given model's text and visual branches. See the OpenCLIP repository for a full list of available models. For the CoCa models available in OpenCLIP, an additional text-decoder is also exported.
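
For context, the following is a minimal sketch (not part of the repository) of pulling a model and its preprocessing transforms from OpenCLIP; the model name and pretrained tag are example values from the OpenCLIP registry, and any pair listed by `open_clip.list_pretrained()` should work the same way:

```python
import open_clip
import torch

# Example model/pretrained tag from the OpenCLIP registry
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

# The text and visual branches exported by the script correspond to these
# two encoding paths on the OpenCLIP model.
image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image batch
text = tokenizer(["a photo of a cat"])
with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
```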

## Installation

The examples provided require torch nightly and `open_clip_torch==2.20.0` to be installed. To work within the `sparseml` environment, be sure to set the environment variable `MAX_TORCH` to your installed torch version when installing torch nightly.

Example: `MAX_TORCH="2.1.0.dev20230613+cpu"`
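
As a quick sanity check (a hypothetical snippet, not part of the repository), you can confirm that `MAX_TORCH` matches the installed build:

```python
import os

import torch

# Hypothetical check: MAX_TORCH should mirror the installed nightly build,
# e.g. "2.1.0.dev20230613+cpu", so SparseML's version guard accepts it.
expected = os.environ.get("MAX_TORCH")
if expected != torch.__version__:
    print(f"MAX_TORCH={expected!r} does not match installed torch {torch.__version__!r}")
```
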
14 changes: 7 additions & 7 deletions integrations/clip/clip_models.py
@@ -17,7 +17,7 @@


class VisualModel(nn.Module):
-    def __init__(self, visual_model, output_tokens):
+    def __init__(self, visual_model: torch.nn.Module, output_tokens: bool):

        super().__init__()

@@ -31,12 +31,12 @@ def forward(self, x):
class TextModel(nn.Module):
    def __init__(
        self,
-        token_embedding,
-        positional_embedding,
-        transformer,
-        ln_final,
-        text_projection,
-        attn_mask,
+        token_embedding: torch.nn.Embedding,
+        positional_embedding: torch.nn.parameter.Parameter,
+        transformer: torch.nn.Module,
+        ln_final: torch.nn.LayerNorm,
+        text_projection: torch.nn.parameter.Parameter,
+        attn_mask: torch.Tensor,
    ):

        super().__init__()
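
The diff above elides the class bodies. As a rough guide to what the wrapped attributes are for, here is a hedged sketch of the standard OpenCLIP text-encoding path these components would implement; `TextModelSketch` is hypothetical and may differ from the repository's actual `TextModel.forward`:

```python
import torch
import torch.nn as nn


class TextModelSketch(nn.Module):
    # Hypothetical reconstruction of the standard OpenCLIP text forward pass;
    # the real TextModel body is elided from the diff above.
    def __init__(self, token_embedding, positional_embedding, transformer,
                 ln_final, text_projection, attn_mask):
        super().__init__()
        self.token_embedding = token_embedding
        self.positional_embedding = positional_embedding
        self.transformer = transformer
        self.ln_final = ln_final
        self.text_projection = text_projection
        self.attn_mask = attn_mask

    def forward(self, text: torch.Tensor) -> torch.Tensor:
        x = self.token_embedding(text)  # [batch, seq_len, width]
        x = x + self.positional_embedding
        x = x.permute(1, 0, 2)  # NLD -> LND, as in classic OpenCLIP
        x = self.transformer(x, attn_mask=self.attn_mask)
        x = x.permute(1, 0, 2)  # LND -> NLD
        x = self.ln_final(x)
        # Pool at the end-of-text token (the highest token id) and project
        x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection
        return x
```
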
6 changes: 5 additions & 1 deletion integrations/clip/clip_onnx_export.py
@@ -84,6 +84,10 @@ def _export_text(
    **export_kwargs,
):
    module_name = "clip_text.onnx"
+    # If the model is a CLIP CoCa model, store the text model as is. For non-CoCa
+    # models, OpenCLIP does not provide access to the text model, only the
+    # transformer; therefore, in that case, create a new TextModel object to wrap
+    # the transformer and all relevant properties needed for the forward pass.
    if is_coca:
        text_model = model.text
    else:
@@ -144,7 +148,7 @@ def main():
    output nodes of the graph can also be assigned, using the `input_name` and
    `output_name` arguments.
-    Specifically fo CoCa models, an additional text-decoder is also exported and saved
+    Specifically for CoCa models, an additional text-decoder is also exported and saved
    in the same folder. Currently, only coca_ViT-B-32 and coca_ViT-L-14 are supported.
    Example:
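
For orientation, here is a hedged sketch of exporting the visual branch with plain `torch.onnx.export`; the repository's script routes through SparseML's export utilities instead, and the output file name and I/O names below are assumptions:

```python
import open_clip
import torch

model, _, _ = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model.visual,             # visual branch of the CLIP model
    dummy_input,
    "clip_visual.onnx",       # assumed output file name
    input_names=["input"],    # analogous to the script's `input_name` argument
    output_names=["output"],  # analogous to the script's `output_name` argument
    opset_version=14,
)
```
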
4 changes: 2 additions & 2 deletions src/sparseml/pytorch/base.py
@@ -12,8 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.


import functools
+import os
from typing import Optional

from sparseml.base import check_version
Expand Down Expand Up @@ -49,7 +49,7 @@


_TORCH_MIN_VERSION = "1.0.0"
_TORCH_MAX_VERSION = "2.1.0.dev20230613+cpu"
_TORCH_MAX_VERSION = os.environ.get("MAX_TORCH", "2.0.100")


def check_torch_install(
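
The change above makes the torch version ceiling overridable at runtime, which is what lets a nightly build pass the check. A minimal sketch of the idea, assuming `packaging` is available (`torch_version_ok` is a hypothetical helper, not SparseML's actual `check_torch_install`):

```python
import os

import torch
from packaging.version import parse

_TORCH_MIN_VERSION = "1.0.0"
# Reading the ceiling from MAX_TORCH lets nightly builds such as
# "2.1.0.dev20230613+cpu" pass the guard without editing the source.
_TORCH_MAX_VERSION = os.environ.get("MAX_TORCH", "2.0.100")


def torch_version_ok() -> bool:
    # Hypothetical helper mirroring the min/max bound enforced by check_version
    installed = parse(torch.__version__)
    return parse(_TORCH_MIN_VERSION) <= installed <= parse(_TORCH_MAX_VERSION)
```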
