
fix(cuda): downgrade to 12.0 to increase compatibility range #2994

Merged · 2 commits · Jul 23, 2024

Conversation

@mudler (Owner) commented Jul 23, 2024

Description

This PR is an attempt to fix #2394

The PR needs more testing too - especially on newer CUDA/driver combos, to see whether it's compatible with new driver versions.

Relevant CUDA docs: https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#id5
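For context, the CUDA docs linked above describe minor-version compatibility: a binary built against CUDA 12.0 should run on any driver that reports CUDA 12.x with an equal or higher minor version, which is why downgrading the build toolkit widens the compatibility range. A minimal sketch of that rule (the function name and version strings are illustrative, not part of LocalAI):

```python
def toolkit_runs_on_driver(toolkit: str, driver: str) -> bool:
    """CUDA minor-version compatibility: a binary built against `toolkit`
    is expected to run when the driver-reported CUDA version has the same
    major number and an equal or higher minor number."""
    tmaj, tmin = (int(x) for x in toolkit.split("."))
    dmaj, dmin = (int(x) for x in driver.split("."))
    return tmaj == dmaj and dmin >= tmin

# Building against 12.0 covers all the hosts discussed below:
assert toolkit_runs_on_driver("12.0", "12.2")
assert toolkit_runs_on_driver("12.0", "12.4")
assert toolkit_runs_on_driver("12.0", "12.5")
# ...while a 12.4 build is not expected to run on a 12.2 driver,
# which is the kind of mismatch this PR tries to avoid:
assert not toolkit_runs_on_driver("12.4", "12.2")
```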

netlify bot commented Jul 23, 2024

Deploy Preview for localai ready!

- 🔨 Latest commit: f2e7057
- 🔍 Latest deploy log: https://app.netlify.com/sites/localai/deploys/669feef5024dd500083755d6
- 😎 Deploy Preview: https://deploy-preview-2994--localai.netlify.app

@dave-gray101 (Collaborator) commented:

@mudler - specifically on master only what do you think about trying to build an image against both 12.0 (instead of 12.4 apparently) and the latest 12.5? Is there any value in adding an extra matrix entry in that situation so we can be aware of upcoming incompatibilities / errors before they hit us severely?

@mudler (Owner, Author) commented Jul 23, 2024

> @mudler - specifically on master only what do you think about trying to build an image against both 12.0 (instead of 12.4 apparently) and the latest 12.5? Is there any value in adding an extra matrix entry in that situation so we can be aware of upcoming incompatibilities / errors before they hit us severely?

What I'd like to try first is whether 12.0 works as well on hosts that have 12.4 and 12.5 - I have access to a machine with 12.4, and runpod has 12.2 - I'm only missing 12.5 to double-check.

From the CUDA docs it looks like it does, but it needs some cross-checking first to see if that's really the case.

Adding another host to the matrix would be my last resort - it really puts too much into CI/docs/installation scripts, but we can do it if really needed.

@mudler (Owner, Author) commented Jul 23, 2024

Waiting for https://github.com/mudler/LocalAI/actions/runs/10060851192/job/27809452863?pr=2994 to push a test image so I can give that a shot; I'll report back later on the hosts I could test on.

@dave-gray101 (Collaborator) commented:

> @mudler - specifically on master only what do you think about trying to build an image against both 12.0 (instead of 12.4 apparently) and the latest 12.5? Is there any value in adding an extra matrix entry in that situation so we can be aware of upcoming incompatibilities / errors before they hit us severely?
>
> What I'd like to try first if 12.0 works as well on hosts that have 12.4 and 12.5 - I have access to a machine with 12.4, and runpod has 12.2 - I miss only 12.4 to double check.
>
> Adding another host to the matrix would be my last resort - It really puts too much into CI/docs/installation scripts, but we can do it if really needed

Works for me - I'll admit I'm a bit confused by all the different versions of CUDA 12 floating around. I only have a 12.5 CUDA dev laptop, but its docker host environment is temporarily snared up in a WSL/Docker incompatibility. I'll try to downgrade things and get it up and running again if we need to run more tests there locally.

@mudler (Owner, Author) commented Jul 23, 2024

> > @mudler - specifically on master only what do you think about trying to build an image against both 12.0 (instead of 12.4 apparently) and the latest 12.5? Is there any value in adding an extra matrix entry in that situation so we can be aware of upcoming incompatibilities / errors before they hit us severely?
> >
> > What I'd like to try first if 12.0 works as well on hosts that have 12.4 and 12.5 - I have access to a machine with 12.4, and runpod has 12.2 - I miss only 12.4 to double check.
> > Adding another host to the matrix would be my last resort - It really puts too much into CI/docs/installation scripts, but we can do it if really needed
>
> Works for me - I'll admit I'm a bit confused with all the different versions of CUDA 12 floating around - I only have a 12.5 CUDA dev laptop, but the docker host environment is temporarily snared up in a WSL/Docker incompatibility. I'll try to downgrade things and get it up and running again if we need to run more tests there locally.

12.5 sounds good - sorry, I made a mistake in writing: it's 12.5 that I'm missing in my test matrix.

@mudler (Owner, Author) commented Jul 23, 2024

Works on a host with CUDA 12.2 (runpod.io):

logs.txt

@mudler (Owner, Author) commented Jul 23, 2024

Works as well on 12.4:

```
Tue Jul 23 16:39:52 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14              Driver Version: 550.54.14      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA L40S                    On  |   00000000:01:00.0 Off |                  Off |
| N/A   44C    P0            103W /  350W |    5202MiB /  49140MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
```

@dave-gray101 can you test on 12.5? container image: ttl.sh/localai-ci-pr-2994:sha-1a89570-cublas-cuda12-ffmpeg
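The `CUDA Version` field in the nvidia-smi banner above is what determines compatibility on each test host. A small, hypothetical helper (not part of LocalAI) to pull that value out of captured nvidia-smi output:

```python
import re

def cuda_version_from_smi(output: str) -> str:
    """Extract the driver-reported CUDA version from nvidia-smi banner text."""
    m = re.search(r"CUDA Version:\s*([\d.]+)", output)
    if not m:
        raise ValueError("no 'CUDA Version' field found in output")
    return m.group(1)

# Example using the banner line from the 12.4 host above:
banner = "| NVIDIA-SMI 550.54.14    Driver Version: 550.54.14    CUDA Version: 12.4     |"
print(cuda_version_from_smi(banner))  # prints 12.4
```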

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
@mudler mudler merged commit a9757fb into master Jul 23, 2024
31 checks passed
@mudler mudler deleted the downgrade_cuda branch July 23, 2024 21:35
@mudler mudler added the bug Something isn't working label Jul 24, 2024
truecharts-admin added a commit to truecharts/charts that referenced this pull request Jul 24, 2024
…9.2@2f86113 by renovate (#24238)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1` -> `v2.19.2` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

### [`v2.19.2`](https://togithub.com/mudler/LocalAI/releases/tag/v2.19.2)

[Compare Source](https://togithub.com/mudler/LocalAI/compare/v2.19.1...v2.19.2)

This release is a patch release to fix well-known issues from 2.19.x.

#### What's Changed

##### Bug fixes 🐛

- fix: pin setuptools 69.5.1 by [@fakezeta](https://togithub.com/fakezeta) in mudler/LocalAI#2949
- fix(cuda): downgrade to 12.0 to increase compatibility range by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2994
- fix(llama.cpp): do not set anymore lora_base by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2999

##### Exciting New Features 🎉

- ci(Makefile): reduce binary size by compressing by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2947
- feat(p2p): warn the user to start with --p2p by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2993

##### 🧠 Models

- models(gallery): add tulu 8b and 70b by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2931
- models(gallery): add suzume-orpo by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2932
- models(gallery): add archangel_sft_pythia2-8b by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2933
- models(gallery): add celestev1.2 by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2937
- models(gallery): add calme-2.3-phi3-4b by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2939
- models(gallery): add calme-2.8-qwen2-7b by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2940
- models(gallery): add StellarDong-72b by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2941
- models(gallery): add calme-2.4-llama3-70b by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2942
- models(gallery): add llama3.1 70b and 8b by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3000

##### 📖 Documentation and examples

- docs: add federation by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2929
- docs: ⬆️ update docs version mudler/LocalAI by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#2935

##### 👒 Dependencies

- chore: ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#2936
- chore: ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#2943
- chore(deps): Bump grpcio from 1.64.1 to 1.65.1 in /backend/python/openvoice by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2956
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/sentencetransformers by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2955
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/bark by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2951
- chore(deps): Bump docs/themes/hugo-theme-relearn from `1b2e139` to `7aec99b` by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2952
- chore(deps): Bump langchain from 0.2.8 to 0.2.10 in /examples/langchain/langchainpy-localai-example by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2959
- chore(deps): Bump numpy from 1.26.4 to 2.0.1 in /examples/langchain/langchainpy-localai-example by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2958
- chore(deps): Bump sqlalchemy from 2.0.30 to 2.0.31 in /examples/langchain/langchainpy-localai-example by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2957
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/vllm by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2964
- chore(deps): Bump llama-index from 0.10.55 to 0.10.56 in /examples/chainlit by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2966
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/common/template by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2963
- chore(deps): Bump weaviate-client from 4.6.5 to 4.6.7 in /examples/chainlit by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2965
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/transformers by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2970
- chore(deps): Bump openai from 1.35.13 to 1.37.0 in /examples/functions by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2973
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/diffusers by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2969
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/exllama2 by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2971
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/rerankers by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2974
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/coqui by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2980
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/parler-tts by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2982
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/vall-e-x by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2981
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/transformers-musicgen by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2990
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/autogptq by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2984
- chore(deps): Bump llama-index from 0.10.55 to 0.10.56 in /examples/langchain-chroma by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2986
- chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/mamba by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2989
- chore: ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#2992
- chore(deps): Bump langchain-community from 0.2.7 to 0.2.9 in /examples/langchain/langchainpy-localai-example by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2960
- chore(deps): Bump openai from 1.35.13 to 1.37.0 in /examples/langchain/langchainpy-localai-example by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2961
- chore(deps): Bump langchain from 0.2.8 to 0.2.10 in /examples/functions by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2975
- chore(deps): Bump openai from 1.35.13 to 1.37.0 in /examples/langchain-chroma by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2988
- chore(deps): Bump langchain from 0.2.8 to 0.2.10 in /examples/langchain-chroma by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#2987
- chore: ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#2995

##### Other Changes

- ci(Makefile): enable p2p on cross-arm64 builds by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#2928

**Full Changelog**: mudler/LocalAI@v2.19.1...v2.19.2

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Renovate Bot](https://togithub.com/renovatebot/renovate).

truecharts-admin added a commit to truecharts/charts that referenced this pull request Jul 24, 2024
…9.2@4757d5e by renovate (#24248)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-cublas-cuda11-core` -> `v2.19.2-cublas-cuda11-core` |

truecharts-admin added a commit to truecharts/charts that referenced this pull request Jul 25, 2024
…9.2 by renovate (#24258)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-aio-cpu` -> `v2.19.2-aio-cpu` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-aio-gpu-nvidia-cuda-11` -> `v2.19.2-aio-gpu-nvidia-cuda-11` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-aio-gpu-nvidia-cuda-12` -> `v2.19.2-aio-gpu-nvidia-cuda-12` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-cublas-cuda11-ffmpeg-core` -> `v2.19.2-cublas-cuda11-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-cublas-cuda12-ffmpeg-core` -> `v2.19.2-cublas-cuda12-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-cublas-cuda12-core` -> `v2.19.2-cublas-cuda12-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.1-ffmpeg-core` -> `v2.19.2-ffmpeg-core` |

Labels: bug (Something isn't working)
Projects: None yet
Development: Successfully merging this pull request may close these issues:
- CUDA 12.5 support or GPU acceleration not working after graphics driver update

2 participants