[CLIP] Captioning Pipeline (#1145)
* initial refactor

* move BasePipeline to a new file

* test fix

* another test fix

* fix import

* revert

* initial refactor

* add tests for BasePipeline

* move BasePipeline to a new file

* initial refactor

* update test; finish off initial refactoring changes post local testing

* initial commit for clip zero-shot

* add basic structure for text branch and zeroshot

* add schema details

* update pipelines after running mock engine tests

* add zeroshot tests

* rebase fix

* clean-up comments; add note about onnx export issue

* move paths to fixtures

* rebase fix

* rebase fix

* refactor pipelines to separate visual, text, and zeroshot. also add pytest skips until model issues are resolved

* fix rebase

* initial refactor

* move BasePipeline to a new file

* initial refactor

* move BasePipeline to a new file

* initial refactor

* rebase fix

* move paths to fixtures

* initial refactor

* initial caption functionality

* debugging

* more debugging

* post debugging code

* fix imports

* cleanup post model fix

* fix variable names, some clean-up

* remove image embs loading

* update dimensions

* rebase

* remove extra param

* remove typo

* update README instructions; fix linalg import

* clean-up pipelines, update typing and descriptions

* rebase fix

* expose pipeline engine args
dsikka committed Aug 7, 2023
1 parent d7f037c commit ffeb98f
Showing 9 changed files with 587 additions and 60 deletions.
6 changes: 5 additions & 1 deletion setup.py
@@ -162,7 +162,11 @@ def _parse_requirements_file(file_path):
"haystack_reqs.txt",
)
_haystack_integration_deps = _parse_requirements_file(_haystack_requirements_file_path)
_clip_deps = ["open_clip_torch==2.20.0", "scipy==1.10.1"]
_clip_deps = [
"open_clip_torch==2.20.0",
"scipy==1.10.1",
f"{'nm-transformers' if is_release else 'nm-transformers-nightly'}",
]

_torch_deps = ["torch>=1.7.0,<=2.0"]

59 changes: 54 additions & 5 deletions src/deepsparse/clip/README.md
@@ -4,6 +4,7 @@
DeepSparse allows inference on [CLIP](https://github.com/mlfoundations/open_clip) models.

The CLIP integration currently supports the following tasks:
- **Zero-shot Image Classification** - Classifying images given possible classes
- **Caption Generation** - Generating a caption for a given image

## Getting Started

@@ -13,24 +14,38 @@
Before you start your adventure with the DeepSparse Engine, make sure that your machine is compatible with our hardware requirements.
```pip install deepsparse[clip]```

### Model Format
To deploy CLIP models using the DeepSparse Engine, the models must be supplied in the ONNX format. This grants the engine the flexibility to serve any model in a framework-agnostic environment. To see examples of pulling CLIP models and exporting them to ONNX, please see the [sparseml documentation](https://github.com/neuralmagic/sparseml/tree/main/integrations/clip).

For the Zero-shot image classification workflow, two ONNX models are required: a visual model for CLIP's visual branch and a text model for CLIP's text branch. Both of these models can be produced through the sparseml integration linked above. For caption generation, specific models called CoCa models are required; instructions on how to export CoCa models are provided in the same sparseml documentation. The CoCa export pathway generates one additional decoder model, along with the text and visual models.
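For reference, the deployment examples below assume the exported ONNX files are organized as follows (these folder names are simply the ones used in this README, not a requirement; any layout works as long as the paths passed to the pipeline match):

```
zeroshot_research/
├── visual/model.onnx
└── text/model.onnx

caption_models/
├── clip_visual.onnx
├── clip_text.onnx
└── clip_text_decoder.onnx
```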

### Deployment Examples
The following examples use pipelines to run the CLIP models for inference. For Zero-shot prediction, the pipeline ingests a list of images and a list of possible classes, and a class is returned for each of the provided images. For caption generation, only an image file is required.

If you don't have images ready, pull down the sample images using the following commands:

```bash
wget -O basilica.jpg https://github.com/raw/neuralmagic/deepsparse/main/src/deepsparse/yolo/sample_images/basilica.jpg
```

```bash
wget -O buddy.jpeg https://github.com/raw/neuralmagic/deepsparse/main/tests/deepsparse/pipelines/sample_images/buddy.jpeg
```

```bash
wget -O thailand.jpg https://github.com/raw/neuralmagic/deepsparse/main/src/deepsparse/yolact/sample_images/thailand.jpg
```

<p float="left">
<img src="https://github.com/raw/neuralmagic/deepsparse/main/src/deepsparse/yolo/sample_images/basilica.jpg" width="300" />
<img src="https://github.com/raw/neuralmagic/deepsparse/main/tests/deepsparse/pipelines/sample_images/buddy.jpeg" width="300" />
<img src="https://github.com/raw/neuralmagic/deepsparse/main/src/deepsparse/yolact/sample_images/thailand.jpg" width="300" />
</p>

This will pull down three images: a happy dog, St. Peter's Basilica, and two elephants.

#### Zero-shot Prediction

Let's run an example to classify the images. We'll provide the images in a list with their file names, as well as a list of possible classes. We'll also provide paths to the exported ONNX models under the `zeroshot_research` root folder.

```python
import numpy as np

from deepsparse import BasePipeline
from deepsparse.clip import (
    CLIPTextInput,
    CLIPVisualInput,
    CLIPZeroShotInput,
)

possible_classes = ["ice cream", "an elephant", "a dog", "a building", "a church"]
images = ["basilica.jpg", "buddy.jpeg", "thailand.jpg"]

model_path_text = "zeroshot_research/text/model.onnx"
model_path_visual = "zeroshot_research/visual/model.onnx"

# Create the zero-shot pipeline from the exported visual and text models
kwargs = {
    "visual_model_path": model_path_visual,
    "text_model_path": model_path_text,
}
pipeline = BasePipeline.create(task="clip_zeroshot", **kwargs)

# Each image is scored against every candidate class
pipeline_input = CLIPZeroShotInput(
    image=CLIPVisualInput(images=images),
    text=CLIPTextInput(text=possible_classes),
)

output = pipeline(pipeline_input).text_scores
for i in range(len(output)):
    prediction = possible_classes[np.argmax(output[i])]
    print(f"Image {images[i]} is a picture of {prediction}")
```

Running the code above, we get the following output:

```
DeepSparse, Copyright 2021-present / Neuralmagic, Inc. version: 1.6.0.20230727 COMMUNITY | (3cb4a3e5) (optimized) (system=avx2, binary=avx2)
Image basilica.jpg is a picture of a church
Image buddy.jpeg is a picture of a dog
Image thailand.jpg is a picture of an elephant
```
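Under the hood, zero-shot classification follows the standard CLIP recipe: the visual and text models produce embeddings for the image and for each class prompt, and classes are ranked by scaled cosine similarity. A minimal NumPy sketch of that scoring step is shown below; the function name, the assumption of unit-normalized embeddings, and the conventional CLIP logit scale of 100 are illustrative, not the pipeline's exact implementation:

```python
import numpy as np
from scipy.special import softmax  # scipy is pinned in the clip extras


def zero_shot_scores(image_embs: np.ndarray, text_embs: np.ndarray) -> np.ndarray:
    """Score candidate classes for each image via cosine similarity.

    image_embs: (n_images, dim) and text_embs: (n_classes, dim),
    with rows assumed L2-normalized.
    """
    similarity = image_embs @ text_embs.T  # cosine similarity for unit-norm rows
    return softmax(100.0 * similarity, axis=-1)  # 100.0 is CLIP's usual logit scale
```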

#### Caption Generation
Let's try a caption generation example. We'll leverage the `thailand.jpg` file that was pulled down earlier. We'll also provide the three exported CoCa ONNX models under the `caption_models` folder.

```python
from deepsparse import BasePipeline
from deepsparse.clip import CLIPCaptionInput, CLIPVisualInput

root = "caption_models"
model_path_visual = f"{root}/clip_visual.onnx"
model_path_text = f"{root}/clip_text.onnx"
model_path_decoder = f"{root}/clip_text_decoder.onnx"
engine_args = {"num_cores": 8}

kwargs = {
    "visual_model_path": model_path_visual,
    "text_model_path": model_path_text,
    "decoder_model_path": model_path_decoder,
    "pipeline_engine_args": engine_args,
}
pipeline = BasePipeline.create(task="clip_caption", **kwargs)

pipeline_input = CLIPCaptionInput(image=CLIPVisualInput(images="thailand.jpg"))
output = pipeline(pipeline_input).caption
print(output[0])
```
Running the code above, we get the following caption:

```
DeepSparse, Copyright 2021-present / Neuralmagic, Inc. version: 1.6.0.20230727 COMMUNITY | (3cb4a3e5) (optimized) (system=avx2, binary=avx2)
an adult elephant and a baby elephant .
```
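In the example above, the optional `pipeline_engine_args` dictionary passes engine arguments (here, `num_cores`) through to the DeepSparse engines backing the three models; it can be omitted to fall back to the engine defaults.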
22 changes: 6 additions & 16 deletions src/deepsparse/clip/__init__.py
@@ -11,21 +11,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# flake8: noqa
from deepsparse.clip.decoder_pipeline import *
from deepsparse.clip.text_pipeline import *
from deepsparse.clip.visual_pipeline import *


from deepsparse.clip.zeroshot_pipeline import * # isort:skip
from deepsparse.clip.captioning_pipeline import * # isort:skip