fix(llama-cpp): do not compress with UPX #3084

Merged: 1 commit merged into master from fix/compression on Jul 30, 2024
Conversation

mudler (Owner) commented Jul 30, 2024

Fixes: #3041

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
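The linked issue (#3041) reports `llama-cpp: object file has no dynamic section` when the llama-cpp backend is loaded, which is consistent with the backend binary having been packed by UPX and no longer exposing a readable `.dynamic` section; the fix here is simply to stop compressing that binary. As a rough, hypothetical illustration only (not code from this PR or from LocalAI, and assuming a Go toolchain is available), the following standalone check inspects a built binary for the `UPX!` packer marker and for the presence of a `.dynamic` section:

```go
// Hypothetical standalone check (not part of LocalAI or this PR):
// verify that a backend binary was not UPX-packed and still exposes
// a .dynamic section to the dynamic loader.
package main

import (
	"bytes"
	"debug/elf"
	"fmt"
	"os"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: checkbin <path-to-binary>")
		os.Exit(2)
	}
	path := os.Args[1]

	// UPX leaves a "UPX!" marker near the start of the packed image.
	raw, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	head := raw
	if len(head) > 4096 {
		head = head[:4096]
	}
	if bytes.Contains(head, []byte("UPX!")) {
		fmt.Println("binary appears to be UPX-packed")
	}

	// "object file has no dynamic section" corresponds to the ELF having
	// no .dynamic section for the loader to read.
	f, err := elf.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "not a readable ELF:", err)
		os.Exit(1)
	}
	defer f.Close()
	if f.Section(".dynamic") == nil {
		fmt.Println("no .dynamic section found")
	} else {
		fmt.Println(".dynamic section present")
	}
}
```

Running such a check against the llama-cpp backend binary built with and without the UPX step would show the difference the issue describes.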

netlify bot commented Jul 30, 2024

Deploy Preview for localai ready!

| Name | Link |
|---|---|
| 🔨 Latest commit | 27acb81 |
| 🔍 Latest deploy log | https://app.netlify.com/sites/localai/deploys/66a8c38caa44520008dde5ce |
| 😎 Deploy Preview | https://deploy-preview-3084--localai.netlify.app |

mudler merged commit 274487c into master on Jul 30, 2024
30 of 31 checks passed
mudler deleted the fix/compression branch on July 30, 2024 13:04
truecharts-admin added a commit to truecharts/charts that referenced this pull request Aug 1, 2024
…9.4 by renovate (#24653)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.3-aio-cpu` -> `v2.19.4-aio-cpu` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.3-aio-gpu-nvidia-cuda-11` -> `v2.19.4-aio-gpu-nvidia-cuda-11` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.3-aio-gpu-nvidia-cuda-12` -> `v2.19.4-aio-gpu-nvidia-cuda-12` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.3-cublas-cuda11-ffmpeg-core` -> `v2.19.4-cublas-cuda11-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.3-cublas-cuda11-core` -> `v2.19.4-cublas-cuda11-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.3-cublas-cuda12-core` -> `v2.19.4-cublas-cuda12-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.3-ffmpeg-core` -> `v2.19.4-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.3` -> `v2.19.4` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

### [`v2.19.4`](https://togithub.com/mudler/LocalAI/releases/tag/v2.19.4)

[Compare Source](https://togithub.com/mudler/LocalAI/compare/v2.19.3...v2.19.4)

<!-- Release notes generated using configuration in .github/release.yml at master -->

##### What's Changed

##### 🧠 Models

- chore(model-gallery): ⬆️ update checksum by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#3040
- chore(model-gallery): ⬆️ update checksum by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#3043
- models(gallery): add magnum-32b-v1 by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3044
- models(gallery): add lumimaid-v0.2-70b-i1 by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3045
- models(gallery): add sekhmet_aleph-l3.1-8b-v0.1-i1 by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3046
- models(gallery): add l3.1-8b-llamoutcast-i1 by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3047
- models(gallery): add l3.1-8b-celeste-v1.5 by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3080
- models(gallery): add llama-guard-3-8b by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3082
- models(gallery): add meta-llama-3-instruct-8.9b-brainstorm-5x-form-11 by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3083
- models(gallery): add sunfall-simpo by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3088
- models(gallery): add genius-llama3.1-i1 by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3089
- models(gallery): add seeker-9b by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3090
- models(gallery): add llama3.1-chinese-chat by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3091
- models(gallery): add gemmasutra-pro-27b-v1 by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3092
- models(gallery): add leetwizard by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3093
- models(gallery): add tarnished-9b-i1 by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3096
- models(gallery): add meta-llama-3-instruct-12.2b-brainstorm-20x-form-8 by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3097
- models(gallery): add loki-base-i1 by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3098
- models(gallery): add tifa by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3099

##### 👒 Dependencies

- chore(deps): Bump langchain from 0.2.10 to 0.2.11 in /examples/langchain/langchainpy-localai-example by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#3053
- chore(deps): Bump openai from 1.37.0 to 1.37.1 in /examples/langchain/langchainpy-localai-example by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#3051
- chore(deps): Bump setuptools from 70.3.0 to 72.1.0 in /backend/python/autogptq by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#3048
- chore(deps): Bump setuptools from 70.3.0 to 72.1.0 in /backend/python/vllm by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#3061
- chore(deps): Bump chromadb from 0.5.4 to 0.5.5 in /examples/langchain-chroma by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#3060
- chore(deps): Bump setuptools from 70.3.0 to 72.1.0 in /backend/python/parler-tts by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#3062
- chore(deps): Bump setuptools from 70.3.0 to 72.1.0 in /backend/python/rerankers by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#3067
- chore(deps): Bump setuptools from 69.5.1 to 72.1.0 in /backend/python/transformers-musicgen by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#3066
- chore(deps): Bump setuptools from 70.3.0 to 72.1.0 in /backend/python/coqui by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#3068
- chore(deps): Bump setuptools from 70.3.0 to 72.1.0 in /backend/python/vall-e-x by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#3069
- chore(deps): Bump setuptools from 70.3.0 to 72.1.0 in /backend/python/petals by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#3070
- chore(deps): Bump setuptools from 69.5.1 to 72.1.0 in /backend/python/transformers by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#3071
- chore(deps): Bump streamlit from 1.36.0 to 1.37.0 in /examples/streamlit-bot by [@dependabot](https://togithub.com/dependabot) in mudler/LocalAI#3072

##### Other Changes

- docs: ⬆️ update docs version mudler/LocalAI by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#3039
- fix: install.sh bash specific equality check by [@dave-gray101](https://togithub.com/dave-gray101) in mudler/LocalAI#3038
- chore: ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#3075
- Revert "chore(deps): Bump setuptools from 69.5.1 to 72.1.0 in /backend/python/transformers-musicgen" by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3077
- Revert "chore(deps): Bump setuptools from 69.5.1 to 72.1.0 in /backend/python/transformers" by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3078
- Revert "chore(deps): Bump setuptools from 70.3.0 to 72.1.0 in /backend/python/vllm" by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3079
- fix(llama-cpp): do not compress with UPX by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3084
- fix(ci): update openvoice checkpoints URLs by [@mudler](https://togithub.com/mudler) in mudler/LocalAI#3085
- chore: ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#3086
- chore: ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in mudler/LocalAI#3102

**Full Changelog**: mudler/LocalAI@v2.19.3...v2.19.4

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these
updates again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR has been generated by [Renovate
Bot](https://togithub.com/renovatebot/renovate).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzOC4xOC4wIiwidXBkYXRlZEluVmVyIjoiMzguMTguMCIsInRhcmdldEJyYW5jaCI6Im1hc3RlciIsImxhYmVscyI6WyJhdXRvbWVyZ2UiLCJ1cGRhdGUvZG9ja2VyL2dlbmVyYWwvbm9uLW1ham9yIl19-->
truecharts-admin added a commit to truecharts/charts that referenced this pull request Aug 1, 2024
…9.4@c08cf2b by renovate (#24665)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.3-cublas-cuda12-ffmpeg-core` -> `v2.19.4-cublas-cuda12-ffmpeg-core` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---


### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR has been generated by [Renovate
Bot](https://togithub.com/renovatebot/renovate).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzOC4xOC40IiwidXBkYXRlZEluVmVyIjoiMzguMTguNCIsInRhcmdldEJyYW5jaCI6Im1hc3RlciIsImxhYmVscyI6WyJhdXRvbWVyZ2UiLCJ1cGRhdGUvZG9ja2VyL2dlbmVyYWwvbm9uLW1ham9yIl19-->
Labels: None yet
Projects: None yet

Development

Successfully merging this pull request may close these issues:

- llama-cpp: object file has no dynamic section

1 participant