
Python SDK to add or receive jupyter-chat message events in the jupyter notebook #42

Open
codekiln opened this issue May 24, 2024 · 10 comments
Labels
enhancement New feature or request

Comments

@codekiln

Problem

When developing LLM-based chat applications, people need a chat interface to see how the application behaves. Often they reach for a library like Streamlit, in a separate file that runs in a separate process.

This works, but there would be advantages if one were able to prototype a chat application right in JupyterLab:

  • a single JupyterLab instance can be shared by multiple people; once it's stood up it can be re-used
  • the JupyterLab instance doesn't require knowledge of the terminal CLI; it's friendlier for non-technical users, which matters increasingly as red-teaming and AI UX experts need to be included in the AI development process, even though they don't necessarily know how to run and modify a Python application from the CLI
  • a Jupyter notebook tends to be a single file encapsulating everything needed to prototype a particular behavior, so an individual user could clone a Jupyter notebook and change parts with a feeling of "safety."

Given that there's no idiomatic chat widget built into Jupyter notebooks yet, I think it would be nice if a Jupyter notebook running in a JupyterLab instance could use the jupyter-chat extension panel as an interface for prototyping chat applications.

Proposed Solution

Provide a jupyter_chat SDK that makes it possible to:

  • listen to message events
  • write message events
  • read all messages
  • clear all messages

By calling the SDK from inside the cells of a Jupyter notebook, one could control the chat interface to render an LLM-powered AI chat.
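To make the proposal concrete, here is a minimal sketch of what calling such an SDK from a notebook cell might look like. The module name jupyter_chat, the ChatClient class, and every method below are assumptions illustrating the requested surface, not an existing API:

# Hypothetical SDK usage; nothing below exists yet.
from jupyter_chat import ChatClient

chat = ChatClient()  # connect to the chat panel of this JupyterLab instance

def on_message(message):
    # Placeholder for an LLM call; here we just echo the user's text back.
    chat.write_message(author="bot", content=f"You said: {message['content']}")

chat.on_message(on_message)      # listen to message events
history = chat.read_messages()   # read all messages
chat.clear_messages()            # clear all messages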

@codekiln codekiln added the enhancement New feature or request label May 24, 2024

welcome bot commented May 24, 2024

Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! 🤗

If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template as it helps other community members to contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! 👋

Welcome to the Jupyter community! 🎉

@krassowski
Member

Maybe this could be implemented via https://github.com/jtpio/ipylab; appropriate commands and signals would need to be exposed in jupyter-chat.
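For illustration, ipylab already lets kernel code execute JupyterLab commands, so jupyter-chat would mainly need to register chat commands. The command id and arguments below are hypothetical:

from ipylab import JupyterFrontEnd

app = JupyterFrontEnd()  # proxy to the JupyterLab application in the frontend

# Hypothetical command id; jupyter-chat would need to register it first.
app.commands.execute("jupyter-chat:send-message", {"body": "Hello from the kernel"})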

@martinRenou
Member

martinRenou commented May 30, 2024

Another approach could be to connect to the ydoc using ypywidgets.

This is the approach we took in jupytercad, and it seems to work nicely: it allows us to connect to the ydoc from the kernel and apply changes to it.

If we take this approach for the chat, we could provide an API like:

from typing import Optional

from ypywidgets.comm import CommWidget  # as used in jupytercad


class ChatDoc(CommWidget):

    def __init__(self, path: Optional[str] = None):
        # [...]
        pass

    def add_message(self, author: str, content: str):
        # [...]
        pass
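From a notebook cell, usage of this hypothetical API could then be as simple as (the file path and message fields are assumptions):

chat = ChatDoc(path="discussion.chat")  # attach to an existing chat document
chat.add_message(author="bot", content="Hello from the kernel!")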

cc. @davidbrochart @trungleduc

@martinRenou
Member

martinRenou commented May 30, 2024

One could even connect to a notebook shared model, and have the AI bot update that notebook's content based on messages it receives in the chat...
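To give a flavor of what updating the notebook content could look like, here is a minimal sketch using jupyter_ydoc's YNotebook; syncing this shared model with a live frontend document (through a Comm or WebSocket, as discussed below) is elided:

from jupyter_ydoc import YNotebook

ynb = YNotebook()  # in practice, synced with the open notebook's shared model

# Append a code cell as if the bot had written it.
ynb.append_cell({
    "cell_type": "code",
    "source": "print('written by the chat bot')",
    "metadata": {},
    "outputs": [],
})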

@trungleduc
Member

> Another approach could be to connect to the ydoc using ypywidgets. [...] If we take this approach for the chat, we could provide an API like: [...]

The downside of this approach is that it only works in notebooks, not from a Python script.

@davidbrochart

@martinRenou Correct me if I'm wrong, but it seems that you want to have a kernel act as a client to the chat system? So a human could address the kernel, which could use an LLM to provide answers. But wouldn't we be reinventing jupyter-ai 😄?

@trungleduc Actually the kernel would not necessarily be "attached" to a notebook, it could be a standalone kernel. It would just run some code to connect its YDoc to the YDoc of the chat in the frontend, through a Comm. But that would not be optimal, as we could connect the kernel to the YDoc of the chat directly in the backend (through a WebSocket or a new 0MQ socket).

Then, as Martin said, if we let the kernel also connect to a notebook shared model, the kernel could read and modify it, and that would become pretty interesting 😄 We could make the bot write new cells, or improve existing ones, and the bot could also help us in real-time as we are typing code in a cell...
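A rough sketch of the "connect directly in the backend" idea, using ypy-websocket (the room URL is an assumption, and authentication is elided):

import asyncio

import y_py as Y
from websockets import connect
from ypy_websocket import WebsocketProvider

async def main():
    ydoc = Y.YDoc()  # local replica of the chat's shared document
    # Hypothetical collaboration room URL for the chat document.
    async with connect("ws://localhost:8888/api/collaboration/room/<chat-room-id>") as ws:
        async with WebsocketProvider(ydoc, ws):
            # Observe and update the shared chat state here.
            await asyncio.Future()  # keep the connection alive

asyncio.run(main())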

@trungleduc
Member

> Actually the kernel would not necessarily be "attached" to a notebook, it could be a standalone kernel. It would just run some code to connect its YDoc to the YDoc of the chat in the frontend, through a Comm. But that would not be optimal, as we could connect the kernel to the YDoc of the chat directly in the backend (through a WebSocket or a new 0MQ socket).

Yeah, this is what we wanted in JupyterCAD: the kernel can talk directly to the YDoc, but then we had issues with jupyter server authentication since the kernel and the server could be on different machines.

> if we let the kernel also connect to a notebook shared model

This could also help server-side execution: the kernel could write output directly to the notebook model, with no need to go back and forth with the jupyter server.

@martinRenou
Member

> Correct me if I'm wrong, but it seems that you want to have a kernel act as a client to the chat system?

I'm only responding to the original issue, which asks for communicating with the chat from notebook code, i.e. from the kernel :)

> But wouldn't we be reinventing jupyter-ai 😄?

jupyter-ai has its LLM running on the server, not in the kernel, right? That does not let notebook users hook their own LLM up to the chat.

I'm also not saying WE should do that and compete with jupyter-ai, just proposing a solution to the original issue.

@brichet
Collaborator

brichet commented May 31, 2024

> Correct me if I'm wrong, but it seems that you want to have a kernel act as a client to the chat system? So a human could address the kernel, which could use an LLM to provide answers. But wouldn't we be reinventing jupyter-ai 😄?

👍

Even without an LLM, it can be interesting to have bidirectional communication between the chat and the notebook (widgets displaying messages, actions performed based on message content...), especially if this chat extension is to be used by jupyter-ai in the future.

@martinRenou
Member

martinRenou commented May 31, 2024

Yeah, an LLM running on the server OR in the kernel are both valid solutions, and letting users do what they want with it is always good.

Also, I could see benefits in being able to send messages to the chat from the kernel even for non-AI-related things.
For example, you could send a message to the chat whenever your computation completes, to notify the user. EDIT: I see Nicolas just commented the same thing :P
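With the hypothetical ChatDoc sketched earlier in the thread, that notification use case could look like:

import time

time.sleep(60 * 10)  # placeholder for a long-running computation

chat = ChatDoc(path="discussion.chat")  # hypothetical API from above
chat.add_message(author="notifier", content="Computation complete ✅")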
