Error Encountered After Configuring LLM with Ollama Provider in Config File #1903

Open
vrajpatel04 opened this issue Sep 25, 2024 · 6 comments
Labels
bug Something isn't working

Comments

@vrajpatel04

vrajpatel04 commented Sep 25, 2024

Issue with current documentation:

I've configured the LLM in my config file as follows:

[screenshot: config file with the LLM settings]

However, I'm encountering an error (see screenshot below). Could you help me resolve this issue?

[screenshot: error message]

@parshvadaftari
Contributor

@vrajpatel04 Are you using the config as a file or as a dictionary?

@vrajpatel04
Author

> @vrajpatel04 Are you using the config as a file or as a dictionary?

I created a separate config.yaml file.
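(For illustration, a minimal sketch of how a separate YAML file can be handed to mem0, assuming the file is first parsed into a Python dict and then passed to Memory.from_config; the file name and keys below are placeholders rather than the reporter's actual config.)

```python
# Hypothetical sketch: parse config.yaml into a dict first, since
# Memory.from_config is given a dictionary rather than a file path.
import yaml  # PyYAML

from mem0 import Memory

with open("config.yaml") as f:
    config = yaml.safe_load(f)  # e.g. {"llm": {"provider": "ollama", "config": {...}}}

memory = Memory.from_config(config)
```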

@ketangangal
Contributor

ketangangal commented Sep 27, 2024

Hi @vrajpatel04, use the ollama_base_url key instead of base_url.
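(For reference, a minimal sketch of what this suggestion might look like; the provider and key names follow the comment above and the mem0 LLM config docs linked later in the thread, while the model name and URL are placeholder values.)

```python
# Sketch of an Ollama LLM config for mem0 with the suggested key name:
# the server address goes under "ollama_base_url", not "base_url".
config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",                    # placeholder model name
            "temperature": 0.1,                            # placeholder sampling settings
            "max_tokens": 2000,
            "ollama_base_url": "http://localhost:11434",   # default local Ollama endpoint
        },
    },
}
```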

@vrajpatel04
Author

> Hi @vrajpatel04, use the ollama_base_url key instead of base_url.

Hi @ketangangal,

It's still giving the same error:

[screenshot: the same error message]

@PranavPuranik
Contributor

I can take a look at this tomorrow, @Dev-Khant.

@PranavPuranik
Contributor

PranavPuranik commented Oct 13, 2024

@vrajpatel04
stream doesn't seem to be a parameter for the llm config. Can you remove it and send me your config and code so I can reproduce the issue?

https://docs.mem0.ai/components/llms/config

Also, could you please upgrade your Mem0 version?
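(To make this request concrete, a hedged sketch of a minimal reproduction is shown below: the config uses ollama_base_url as suggested earlier and omits stream, and is passed to Memory.from_config. The package name mem0ai, the model name, and the sample calls are assumptions rather than the reporter's actual setup.)

```python
# Minimal reproduction sketch with assumed values (not the reporter's config).
# Upgrade first, as requested above:  pip install --upgrade mem0ai
from mem0 import Memory

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",                    # placeholder model
            "ollama_base_url": "http://localhost:11434",   # local Ollama endpoint
            # no "stream" key, per the comment above
        },
    },
}

m = Memory.from_config(config)
m.add("Testing the Ollama-backed LLM config.", user_id="test-user")
print(m.search("Ollama config", user_id="test-user"))
```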

@Dev-Khant Dev-Khant added the bug Something isn't working label Oct 15, 2024