
Nemo/Langchain with Chainlit #659

Closed
BWardell15 opened this issue Aug 1, 2024 · 3 comments
Labels
status: waiting confirmation Issue is waiting confirmation whether the proposed solution/workaround works.


BWardell15 commented Aug 1, 2024

Hello, I am building a RAG bot that uses LangChain and NeMo Guardrails. I currently have the chain portion of the code configured as follows:

from langchain.chains import ConversationalRetrievalChain

chatbot_chain = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    memory=memory,
    combine_docs_chain_kwargs={"prompt": custom_prompt}
)

guardrails = RunnableRails(config)

@cl.on_chat_start
def quey_llm():
    llm
    conversation_memory = memory
    llm_chain = guardrails | chatbot_chain
    cl.user_session.set("llm_chain", llm_chain)

@cl.on_message
async def query_llm(message: cl.Message):
    llm_chain = cl.user_session.get("llm_chain")
    response = await llm_chain.acall(
        message.content,
        callbacks=[cl.AsyncLangchainCallbackHandler()])
    await cl.Message(response["answer"]).send()

However, when I run this I get the following error: AttributeError: 'RunnableRails' object has no attribute 'acall'

Is there a different method that I should be using there? I haven't been able to figure this out on my own, so I was hoping I could get a nudge in the right direction here.

@drazvan
Collaborator

drazvan commented Aug 6, 2024

@BWardell15 : can you try to use llm_chain.invoke? See the example at the bottom here: https://docs.nvidia.com/nemo/guardrails/user_guides/langchain/runnable-rails.html.
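The suggestion above can be sketched with a stdlib-only stand-in. `RunnableRails`, like any LCEL pipeline built with `|`, implements the Runnable interface (`invoke`/`ainvoke`) rather than the legacy `Chain.acall` method, which is why the `AttributeError` appears. `FakeRailsChain` below is a hypothetical stand-in, not the real NeMo Guardrails class; it only mimics the `ainvoke` surface so the handler pattern can run anywhere:

```python
import asyncio

class FakeRailsChain:
    """Hypothetical stand-in mimicking a Runnable pipeline such as
    guardrails | chatbot_chain. Real Runnables expose invoke/ainvoke."""

    async def ainvoke(self, input, config=None):
        # A ConversationalRetrievalChain-style result dict.
        return {"answer": f"echo: {input['question']}"}

async def on_message(chain, content: str) -> str:
    # The fix: call ainvoke (async) or invoke (sync) instead of acall,
    # passing the input dict the underlying chain expects.
    response = await chain.ainvoke({"question": content})
    return response["answer"]

result = asyncio.run(on_message(FakeRailsChain(), "hello"))
print(result)  # echo: hello
```

In the Chainlit handler this would mean replacing `await llm_chain.acall(message.content, ...)` with something like `await llm_chain.ainvoke({"question": message.content}, config=...)`; the exact input keys depend on how the underlying chain is configured, so check the linked runnable-rails guide for the supported call shapes.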

@drazvan drazvan self-assigned this Aug 6, 2024
@Pouyanpi
Collaborator

@BWardell15, have you tried the above suggestion? Did it resolve your issue?

@Pouyanpi Pouyanpi added the status: waiting confirmation Issue is waiting confirmation whether the proposed solution/workaround works. label Aug 15, 2024
@BWardell15
Author

BWardell15 commented Aug 15, 2024 via email
