
Chatbot UI does not seem to be working #165

Open
ksingh7 opened this issue May 3, 2023 · 16 comments

Comments

@ksingh7

ksingh7 commented May 3, 2023

Hey guys, I love this project and am willing to contribute to it.
To learn more, I need some help getting the Chatbot UI to work.

Following the example, here is my docker-compose.yaml:

version: '3.6'

services:

  api:
    image: quay.io/go-skynet/local-ai:latest
    restart: always
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - 8080:8080
    env_file:
      - .env
    volumes:
      - ./models:/models:cached
    command: ["/usr/bin/local-ai", "--threads", "8"]

  web-ui:
    image: ghcr.io/mckaywrigley/chatbot-ui:main
    restart: always
    ports:
      - 3000:3000
    environment:
      - 'OPENAI_API_KEY='
      - 'OPENAI_API_HOST=http://api:8080'
  • The chatbot UI keeps on loading and throws the message "unable to find model" (screenshot omitted)

  • I am exposing the Chatbot UI over the internet

Can you please guide me on what the values of OPENAI_API_KEY and OPENAI_API_HOST should be in this case? I am sure something is wrong in my config.

@ksingh7
Author

ksingh7 commented May 3, 2023

@mudler, can you lend a helping hand?

@mudler
Owner

mudler commented May 3, 2023

Hey @ksingh7 👋

What do you see in the console? I'd also suggest setting the threads with the .env file instead of replacing the command in the docker-compose file.
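A minimal sketch of that suggestion, assuming the THREADS variable from LocalAI's example .env (check your own .env for the exact name):

```
# .env — set threads here instead of overriding the container command
THREADS=8
```

The api service's command can then stay at its default (e.g. `command: ["/usr/bin/local-ai"]`).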

@javea7171

Same problem when running locally: Chatbot UI fails to display models.

LocalAI:
[127.0.0.1]:43394 200 - GET /v1/models
[127.0.0.1]:43400 200 - GET /v1/models
[127.0.0.1]:53952 200 - GET /v1/models

Chatbot UI: spinner where models selection dropdown should be
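Since those requests return 200, one way to inspect what the UI actually receives (a sketch, assuming the 8080 port mapping from the compose file above) is to query the endpoint directly:

```
curl http://localhost:8080/v1/models
```

Comparing the returned model ids with what the UI expects can narrow down whether the problem is on the LocalAI side or the UI side.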

@adamyodinsky

Same issue with the endless spinner; the UI can't find any models.

@Soberia

Soberia commented May 27, 2023

The spinner will go away if one of the models is named gpt-3.5-turbo. However, it's not possible to load more than one model.
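A minimal sketch of that workaround, assuming LocalAI's YAML model-config format (the filename gpt-3.5-turbo.yaml and the ggml-gpt4all-j model file both appear later in this thread; the field names here are assumptions):

```yaml
# models/gpt-3.5-turbo.yaml — serve the local model under the name the UI expects
name: gpt-3.5-turbo
parameters:
  model: ggml-gpt4all-j
```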

@EchedelleLR

I mentioned this at mckaywrigley/chatbot-ui#770

@PierreMesure

PierreMesure commented May 30, 2023

I'm also unsuccessful with Chatbot-UI. I added all the .tmpl files to make sure the UI detects gpt4all as gpt-3.5-turbo, and it shows up when creating a new chat. I can also see that the UI's call to https://{{ chat }}/api/models is successful.

But trying to talk to the bot returns nothing (screenshot omitted).

The API itself works:

curl https://{{ api }}/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'
{"object":"chat.completion","model":"gpt-3.5-turbo","choices":[{"message":{"role":"assistant","content":"I am doing well. How about you?"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}

EDIT: I used the new docker-compose provided by @mudler yesterday (thanks! ♥️) and it now works! I think the issue was with the files provided in the manual installation or an error on my side when I copied them to my directory.

@akhiljalagam

+1

@hanwsf

hanwsf commented Jun 3, 2023

These files should be available in the models folder to make the gpt-3.5-turbo model appear in the model list:
completion.tmpl, gpt-3.5-turbo.yaml, ggml-gpt4all-j, gpt4all.tmpl

The model is from: wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j
It works if all the files are in the models folder.

Btw, do you know how to switch to another model in the same models folder? I have rwkv working with curl.
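For what it's worth, a model in the same folder is selected per-request via the model field of the JSON body, matching the name in that model's YAML config — a sketch, with the rwkv filename and model file name as assumptions:

```yaml
# models/rwkv.yaml — hypothetical second config alongside gpt-3.5-turbo.yaml
name: rwkv
parameters:
  model: rwkv-model-file   # the model file in ./models; name assumed
```

A request with "model": "rwkv" (as in the curl example earlier in this thread) would then route to that model.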

@hanwsf

hanwsf commented Jun 3, 2023

> (quoting @ksingh7's original question and docker-compose.yaml from the top of this thread)

OPENAI_API_KEY and OPENAI_API_HOST don't need to change if you use a local model for inference. OPENAI_API_KEY can be anything for a local model.

@iPenx

iPenx commented Jun 5, 2023

That is because chatbot-ui doesn't define GGML_GPT4ALL_J in the enum OpenAIModelID (screenshot omitted).

I added this code to chatbot-ui and it worked (screenshots omitted).
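The screenshots are gone, but the described edit can be sketched as follows — a hypothetical reconstruction modeled on the shape of chatbot-ui's types/openai.ts, not the exact upstream source:

```typescript
// Hypothetical sketch of the enum edit described above; the interface
// fields and entries are assumptions modeled on chatbot-ui's conventions.
export interface OpenAIModel {
  id: string;
  name: string;
}

export enum OpenAIModelID {
  GPT_3_5 = 'gpt-3.5-turbo',
  // Added entry so the local model's id is recognized by the UI:
  GGML_GPT4ALL_J = 'ggml-gpt4all-j',
}

export const OpenAIModels: Record<OpenAIModelID, OpenAIModel> = {
  [OpenAIModelID.GPT_3_5]: { id: OpenAIModelID.GPT_3_5, name: 'GPT-3.5' },
  [OpenAIModelID.GGML_GPT4ALL_J]: {
    id: OpenAIModelID.GGML_GPT4ALL_J,
    name: 'GPT4All-J',
  },
};
```

With the enum extended, the id returned by /v1/models matches a known model and the UI's dropdown can populate.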

@nutmilk10

> (quoting @iPenx's comment above)

What file is this under? I can't seem to find that line of code in the current repo.

@localai-bot
Contributor

⚠️⚠️⚠️⚠️⚠️

Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please beware that I might hallucinate sometimes!

But... I can also be funny or helpful 😸, and I can generally provide good tips or point to places to look in the documentation or the code, based on what you wrote in the issue.

Don't engage in conversation with me - I don't support replying (yet)!

⚠️⚠️⚠️⚠️⚠️

To get the chatbot UI to work, you need to update the OPENAI_API_KEY and OPENAI_API_HOST environment variables in the chatgpt service's environment section.
The OPENAI_API_KEY should be set to your OpenAI API key, which you can obtain from the OpenAI website. The OPENAI_API_HOST should be set to the IP address or FQDN of the externally managed LocalAI service, which you have already updated in the docker-compose file as marked <<LOCALAI_IP>>.
Here's an example of what your chatgpt service's environment section should look like:

  chatgpt:
    environment:
      - 'OPENAI_API_KEY=sk-XXXXXXXXXXXXXXXXXXXX'
      - 'OPENAI_API_HOST=http://<<LOCALAI_IP>>:8080'

Make sure to replace the <<LOCALAI_IP>> placeholder with the IP address or FQDN of your externally managed LocalAI service.


@unifreak

> (quoting @PierreMesure's EDIT above about the new docker-compose)

Can you paste the "docker-compose provided by @mudler"?

@PierreMesure

I added a link in my original message for future reference.

@fraschm1998

Chatbot UI doesn't seem to be using custom models. I have {"object":"list","data":[{"id":"thebloke__wizardlm-13b-v1-0-uncensored-superhot-8k-ggml__wizardlm-13b-v1.0-superhot-8k.ggmlv3.q4_k_m.bin","object":"model"}]}, which I can query via the terminal; however, Chatbot UI does not display the available models and tries to use gpt-3.5-turbo.

My docker-compose.yaml:

version: '3.6'

services:
  api:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    image: quay.io/go-skynet/local-ai:master-cublas-cuda12
    tty: true # enable colorized logs
    restart: always # should this be on-failure ?
    ports:
      - 8080:8080
    env_file:
      - .env
    volumes:
      - ./models:/models
    command: ["/usr/bin/local-ai" ]

  chatgpt:
    depends_on:
      api:
        condition: service_healthy
    image: ghcr.io/mckaywrigley/chatbot-ui:main
    ports:
      - 3000:3000
    environment:
      - 'OPENAI_API_KEY=sk-XXXXXXXXXXXXXXXXXXXX'
      - 'OPENAI_API_HOST=http://api:8080'
