Commit
Merge remote-tracking branch 'origin/main'
Benjoyo committed Apr 5, 2024
2 parents 77a586f + fe5d209 commit 5c976ce
Showing 4 changed files with 14 additions and 17 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/release.yml
@@ -226,7 +226,7 @@ jobs:
       - name: Install dependencies
         run: poetry install --only test --no-root --no-cache
       - name: Run pytest
-        run: ZEEBE_TEST_IMAGE_TAG=8.4.0 CONNECTOR_IMAGE=${{ needs.build-push-inference.outputs.image }}-amd64 INFERENCE_IMAGE=holisticon/bpm-ai-inference:latest-cpu OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }} poetry run pytest
+        run: ZEEBE_TEST_IMAGE_TAG=8.4.0 CONNECTOR_IMAGE=${{ needs.build-push.outputs.image }}-amd64 INFERENCE_IMAGE=holisticon/bpm-ai-inference:latest-cpu OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }} poetry run pytest
 
   create-push-manifest:
     runs-on: ubuntu-latest
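Side note on the fix above: `needs.build-push.outputs.image` only resolves if the workflow defines a job named `build-push` that declares an `image` output. A minimal sketch of that wiring follows; the job id `build-push` comes from the diff, while the step id, output name, and echoed value are assumptions for illustration, not the repository's actual job:

```yaml
# Hypothetical sketch (not part of this commit): upstream job exposing
# the image reference consumed via `needs.build-push.outputs.image`.
build-push:
  runs-on: ubuntu-latest
  outputs:
    image: ${{ steps.meta.outputs.image }}  # assumed step id and output name
  steps:
    - id: meta
      # Real build/push steps omitted; only the output wiring matters here.
      run: echo "image=holisticon/bpm-ai-connectors-camunda-8:latest" >> "$GITHUB_OUTPUT"
```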
21 changes: 10 additions & 11 deletions README.md
@@ -20,18 +20,19 @@ local OCR with tesseract and local audio transcription with Whisper. All running
   <img src="assets/screenshots/example.png" width="100%" alt="Example usage">
 </figure>
 
-### 🆕 What's New in 1.0
+### 🆕 What's New
+* Anthropic Claude 3 model options
 * Option to use small **AI models running 100% locally on the CPU** - no API key or GPU needed!
   * Curated models known to work well, just select from dropdown
   * Or use any compatible model from [HuggingFace Hub](https://huggingface.co/models)
 * Multimodal input:
   * **Audio** (voice messages, call recordings, ...) using local or API-based transcription
   * **Images / Documents** (document scans, PDFs, ...) using local or API-based OCR or multimodal AI models
 * Ultra slim docker image (**60mb** without local AI)
 * Use files from Amazon S3 or Azure Blob Storage
 * Logging & Tracing support with [Langfuse](https://langfuse.com)
 
 ### 🔜 Upcoming
-* higher quality local and API-based OCR
+* higher quality local OCR
 * support for local, open-access LLMs
 
 ---
@@ -101,25 +102,23 @@ mkdir ./data
 and launch the connector runtime with a local zeebe cluster:
 
 ```bash
-docker compose --profile default --profile platform up -d
+docker compose --profile platform up -d
 ```
 
 For Camunda Cloud, remove the platform profile.
 
-To use the larger **inference** image that includes dependencies to run local AI model inference for decide, extract and translate, use the inference profile instead of default:
+To use the **inference** extension container that includes local AI model inference implementations for decide, extract and translate, as well as local OCR, additionally use the inference profile:
 
 ```bash
 docker compose --profile inference --profile platform up -d
 ```
 
-#### Available Image Tags
+#### Available Images
 
 Two types of Docker images are available on [DockerHub](https://hub.docker.com/r/holisticon/bpm-ai-connectors-camunda-8):
-* The lightweight (**~60mb** compressed) default image suitable for users only needing the OpenAI API (and other future API-based services)
-  * Use `latest` tag (multiarch)
-* The more heavy-weight (~500mb) inference image that contains all dependencies to run transformer AI models (and more) **locally on the CPU**,
-allowing you to use the `decide`, `extract` and `translate` connectors 100% locally without any API key needed
-  * Use `latest-inference` tag (multiarch)
+* The main image suitable for users only needing the Anthropic/OpenAI and Azure/Amazon APIs (and other future API-based services)
+* An optional inference image that contains all dependencies to run transformer AI models (and more) **locally on the CPU**,
+allowing you to use the `decide`, `extract` and `translate` connectors 100% locally and perform OCR without any API key needed
 
 ## 📚 Connector Documentation
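For readers following along, the two images described above match tags visible elsewhere in this commit (`latest` in docker-compose.yml, `latest-cpu` in release.yml). A sketch of pulling them directly, assuming those tags remain current:

```bash
# Slim main image: API-based connectors only
docker pull holisticon/bpm-ai-connectors-camunda-8:latest

# Optional inference image: local CPU inference for decide/extract/translate and OCR
docker pull holisticon/bpm-ai-inference:latest-cpu
```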
4 changes: 2 additions & 2 deletions docker-compose.yml
@@ -1,18 +1,18 @@
 services:
 
 #####################################################################################################################
-# Default AI Connectors - enabled by profile `--profile default` #
+# AI Connectors #
 #####################################################################################################################
 
   connectors:
-    profiles: [default]
     container_name: bpm-ai-connectors-camunda-8
     image: holisticon/bpm-ai-connectors-camunda-8:latest
     build:
       dockerfile: Dockerfile
     depends_on:
       bpm-ai-inference:
         condition: service_started
         required: false
       zeebe:
         condition: service_healthy
         required: false
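The `required: false` entries above use Docker Compose's optional-dependency support (available in recent Compose versions): `connectors` still starts when `bpm-ai-inference` or `zeebe` is excluded by the active profiles. A hedged sketch of how the companion service might be declared under the `inference` profile; only the image tag and profile name are taken from this commit, the rest is assumed:

```yaml
# Hypothetical companion service, started only with `--profile inference`.
bpm-ai-inference:
  profiles: [inference]
  image: holisticon/bpm-ai-inference:latest-cpu  # CPU tag as used in release.yml
```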
4 changes: 1 addition & 3 deletions wizard.sh
@@ -78,8 +78,6 @@ inference=${inference:-n}

 if [ "$inference" = "y" ]; then
   profile_flags="$profile_flags --profile inference"
-else
-  profile_flags="$profile_flags --profile default"
 fi
 
 ##############################################################################################################################
@@ -215,4 +213,4 @@ done
 # Start docker compose with selected profile(s)
 ##############################################################################################################################
 
-eval "docker compose$profile_flags up -d"
\ No newline at end of file
+eval "docker compose$profile_flags up -d"
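The surrounding script builds up `$profile_flags` word by word and expands it through `eval`. A self-contained sketch of the same pattern, with the prompt wording assumed:

```sh
#!/bin/sh
# Hypothetical sketch of the flag-accumulation pattern in wizard.sh.
profile_flags=""

printf "Enable local inference? [y/N] "
read -r inference
inference=${inference:-n}  # default to "n", mirroring the script above

if [ "$inference" = "y" ]; then
  profile_flags="$profile_flags --profile inference"
fi

# eval is used so the accumulated words in $profile_flags split into
# separate arguments for docker compose.
eval "docker compose$profile_flags up -d"
```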
