
[BUG]: When uploading a document getting network error #1885

Closed
OllTee opened this issue Jul 17, 2024 · 17 comments
Labels
Desktop
investigating (Core team or maintainer will or is currently looking into this issue)
needs info / can't replicate (Issues that require additional information and/or cannot currently be replicated, but possible bug)

Comments

@OllTee

OllTee commented Jul 17, 2024

How are you running AnythingLLM?

AnythingLLM desktop app

What happened?

When uploading a document getting network error

Are there known steps to reproduce?

1. Load a model (for me it's Llama3 8B).
2. Upload a file to the workspace.
3. When attempting to send a request I get this error:
Could not respond to message.
An error occurred while streaming response. network error

@OllTee OllTee added the possible bug label (Bug was reported but is not confirmed or is unable to be replicated) Jul 17, 2024
@OllTee OllTee changed the title from "[BUG]:" to "[BUG]: When uploading a document getting network error" Jul 17, 2024
@timothycarambat
Member

And when attempting to send a request i get this error:

Do you mean as a chat?

@timothycarambat timothycarambat added the needs info / can't replicate label (Issues that require additional information and/or cannot currently be replicated, but possible bug) Jul 18, 2024
@OllTee
Author

OllTee commented Jul 18, 2024

  1. I ask it who MattvsJapan is and it answers incorrectly:
    image

  2. I import the YouTuber's channel into AnythingLLM and the .html file gets imported:
    image

  3. When asking the same question again it gives me this error; this also happens with every other local file:
    image

@timothycarambat
Member

  • What Embedder are you using?
  • What LLM are you using?
  • When you open the workspace's settings -> Vector database -> does "Vector count" have a non-zero number?

@OllTee
Author

OllTee commented Jul 18, 2024

  1. AnythingLLM Embedder.
  2. Llama3 8B.
  3. I have LanceDB, and it says "There is no configuration needed for LanceDB."

@timothycarambat
Member

timothycarambat commented Jul 19, 2024

I meant the workspace settings, not the system settings. If you hover over the workspace in the sidebar, a "gear" icon will appear; click that to open the Workspace settings.

And for the LLM, I meant the provider, not the LLM model itself. Sorry that was unclear.

@OllTee
Author

OllTee commented Jul 19, 2024

  1. Workspace Provider: System Default
  2. Workspace Agent: None for now
  3. Vector Count: 2

@OllTee
Author

OllTee commented Jul 19, 2024

And when I choose the Agent Provider as AnythingLLM, it's the same error.

@timothycarambat
Member

I still don't have the information about which LLM provider you are using. Please go to exactly this screen and tell me which provider is selected.

What we can determine at least is that the connection issue is not the vector database

Screenshot 2024-07-19 at 3 20 23 PM

@timothycarambat timothycarambat removed the possible bug label (Bug was reported but is not confirmed or is unable to be replicated) Jul 19, 2024
@OllTee
Author

OllTee commented Jul 19, 2024

image

@timothycarambat
Member

Okay, are you on a machine with a GPU? If so, what is in it? It seems like the internal Ollama process is not attaching to, or is failing to use, the GPU, which, since you are on Windows, would not be out of the realm of possibility.

The next best step is to get logs from debug mode
https://docs.useanything.com/debug

If you open PowerShell, drag the desktop icon into the terminal, remove the double quotes, and press Enter, it will show all the logs of the app. Then send a chat and the logs will tell us what the error is.

If you swapped to another LLM, this problem would likely "go away".
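The procedure above can be sketched as a command (the install path below is an assumption; use whatever path your desktop shortcut points at, which is what the drag-and-drop step pastes for you):

```shell
# PowerShell sketch: launch the AnythingLLM binary directly instead of via
# the shortcut, so its stdout/stderr stay attached to the terminal.
# The path is an assumption -- drag the desktop icon into the terminal to
# paste your actual path, then remove the surrounding double quotes.
& 'C:\Program Files\AnythingLLM\AnythingLLM.exe'
# Now send a chat in the app; the failing request's error prints here.
```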

@OllTee
Author

OllTee commented Jul 20, 2024

GTX 970

[Preferences] preference config stored at C:\Users\ot\AppData\Roaming\anythingllm-desktop\config.json
Prisma schema loaded from prisma\schema.prisma
Datasource "db": SQLite database "anythingllm.db" at "file:C:/Users/ot/AppData/Roaming/anythingllm-desktop/storage/anythingllm.db"

Already in sync, no schema change or pending migration was found.

Running generate... (Use --skip-generate to skip the generators)
Error: EPERM: operation not permitted, copyfile 'C:\Users\ot\AppData\Roaming\Prisma\master\61e140623197a131c2a6189271ffee05a7aa9a59\windows\libquery-engine' -> 'C:\Program Files\AnythingLLM\resources\backend\node_modules\prisma\query_engine-windows.dll.node'
Prisma schema loaded from prisma\schema.prisma
Error: EPERM: operation not permitted, copyfile 'C:\Users\ot\AppData\Roaming\Prisma\master\61e140623197a131c2a6189271ffee05a7aa9a59\windows\libquery-engine' -> 'C:\Program Files\AnythingLLM\resources\backend\node_modules\prisma\query_engine-windows.dll.node'
[OllamaProcessManager] Ollama will bind on port 11434 when booted.
[Preferences] Will load window with last know bounds.
[collector] info: Collector hot directory and tmp storage wiped!
[collector] info: [production] AnythingLLM Standalone Document processor listening on port 8888
[backend] info: [TELEMETRY ENABLED] Anonymous Telemetry enabled. Telemetry helps Mintplex Labs Inc improve AnythingLLM.
[backend] info: prisma:info
[backend] info: [TELEMETRY SENT]
[backend] info: Hot loading of AnythingLLMOllama - LLM_PROVIDER is anythingllm_ollama.
[backend] info: [NativeEmbedder] Initialized
[backend] info: [CommunicationKey] RSA key pair generated for signed payloads within AnythingLLM services.
[backend] info: [EncryptionManager] Loaded existing key & salt for encrypting arbitrary data.
[backend] info: [production] AnythingLLM Standalone Backend listening on port 3001
[OllamaProcessManager] SINGLETON LOCK: Using existing OllamaProcessManager.
[OllamaProcessManager] [windows] Ollama subprocess running. Port 11434. PID 6496.
[backend] info: OllamaAPI offline - retrying. 1/3
[backend] info: [BackgroundWorkerService] Feature is not enabled and will not be started.
[backend] info: [NativeEmbedder] Initialized
[backend] info: [NativeEmbedder] Embedded Chunk 1 of 1

@timothycarambat
Member

To confirm, the logs above are exactly what is shown when the error is reproduced, correct? Looks like the prompt is sent, embedded, and then ???

@OllTee
Author

OllTee commented Jul 20, 2024

2024-07-20.21-14-16.mp4

@timothycarambat timothycarambat added the investigating label (Core team or maintainer will or is currently looking into this issue) Jul 22, 2024
@timothycarambat timothycarambat self-assigned this Jul 22, 2024
@timothycarambat
Member

After reviewing this for some time, this detail sticks out a lot:

GTX 970

Your machine may well be dramatically underpowered, or too outdated to do CPU embedding, and may not even support AVX2 instructions.

What are the full specs of your machine?

@timothycarambat timothycarambat removed their assignment Jul 30, 2024
@OllTee
Author

OllTee commented Aug 2, 2024

image

@timothycarambat
Member

Screenshot 2024-08-02 at 1 50 16 PM

Your computer does not support the AVX2 instruction set (AVX2 is like 10 years old now!!), so you cannot use the default LanceDB. You can set up a cloud or local vector database via any of the other vector storage providers, but you can't use the no-setup default on a machine that old.

Alternatively, you can use a cloud-hosted version of AnythingLLM and avoid that entirely. That is why you have been getting such odd errors.
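If you want to verify the AVX2 diagnosis yourself, here is a minimal sketch. It is Linux-only (CPU flags are listed in `/proc/cpuinfo`); on Windows, where this reporter is, you would instead check with a tool such as Sysinternals Coreinfo:

```shell
# Linux-only check: look for the "avx2" flag among the CPU feature flags.
# On Windows, use Sysinternals Coreinfo or similar instead.
if grep -m1 -iq 'avx2' /proc/cpuinfo; then
  echo "AVX2 supported"
else
  echo "AVX2 NOT supported - the default LanceDB embedder path will fail"
fi
```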

@OllTee
Author

OllTee commented Aug 2, 2024

Ok, I'll try doing that.
