If a chain has a ConfigurableField and the chain is invoked through .with_config(...), then the config should be logged in LangSmith #809

Open
codekiln opened this issue Jun 20, 2024 · 2 comments

Comments

@codekiln

codekiln commented Jun 20, 2024

Feature request

If a chain has a ConfigurableField and the chain is invoked through RemoteRunnable.with_config(...), then the config should be logged in LangSmith. As far as I can tell, it is not always automatically logged.

Motivation

We have a prompt management UI that allows us to tune parameters of the prompt, which are then sent to LangServe via RemoteRunnable invocation using .with_config(). One of the goals of LangSmith logging for us is to identify the parameters correlated with a good run. For example, the search_kwargs of our VectorStoreRetriever are not getting logged automatically in any place I can see. This even happens when running the retriever.with_config(...).invoke;

[screenshot: LangSmith trace]

(trace id: e2853ba1-835d-4842-ac49-6eabd222a2e1)

[screenshot: LangSmith trace detail]

Right now, is the assumption that we need to add all configurable fields ourselves through custom logging? If so, do you have a recommended way to do that?

@hinthornw
Collaborator

Configurable values should be auto-added as metadata. Could you elaborate on this line, retriever.with_config(...).invoke()? What are you putting in with_config?

You can also explicitly add metadata in with_config if you'd like.
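To make the suggestion above concrete, here is a minimal sketch of the config shape involved. It is plain Python, not the library: the top-level keys ("configurable", "metadata") follow LangChain's RunnableConfig convention, while the search_kwargs values and the commented-out retriever call are hypothetical illustrations.

```python
# Illustrative runtime parameters (hypothetical values).
search_kwargs = {"k": 4, "score_threshold": 0.5}

config = {
    # Consumed by ConfigurableField lookups at invocation time.
    "configurable": {"search_kwargs": search_kwargs},
    # Explicitly attached so the values appear on the LangSmith trace,
    # per the suggestion above.
    "metadata": {"search_kwargs": search_kwargs},
}

# Usage would look roughly like (not executed here):
# retriever.with_config(**config).invoke(query)
```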

@codekiln
Author

codekiln commented Jun 24, 2024

what are you putting in with_config?

@hinthornw For example, we call our chains via a LangServe endpoint using RemoteRunnable.with_config(configurable={"prompt_for_agent_a": ..., "prompt_for_agent_b": ..., "search_kwargs_for_retriever": { complex dictionary including k, similarity score cutoff, filters, etc }, ... }). We store the configurable dictionary in stateful persistence so that we can compare these hyperparameters across experiments.

You can also explicitly add metadata in with_config if you'd like

For now I may try adding the entire dictionary sent to configurable to metadata as well, but that isn't very DRY. I imagine we're not the only ones who need to see the "runtime" configuration of a given chain in the traces. Also, based on my experimentation, it seems LangServe doesn't pass through all of the metadata: langchain-ai/langserve#694
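The duplication described above could be wrapped in a small helper so the configurable dict is written once and mirrored into metadata automatically. This is a hypothetical helper sketched for this thread in plain Python, not part of LangChain or LangServe; the dict shape follows the RunnableConfig convention and the usage values are invented.

```python
def config_with_mirrored_metadata(configurable: dict) -> dict:
    """Build a RunnableConfig-shaped dict in which the configurable
    values are also copied under metadata, so runtime parameters show
    up in LangSmith traces without being repeated by hand.

    Hypothetical sketch, not a library API.
    """
    return {
        "configurable": dict(configurable),
        "metadata": {"configurable": dict(configurable)},
    }

# Usage (illustrative values):
cfg = config_with_mirrored_metadata(
    {"search_kwargs_for_retriever": {"k": 5, "filters": {"source": "docs"}}}
)
# chain.with_config(**cfg).invoke(...)  # not executed here
```

One design note: copying the dict (rather than sharing the reference) keeps later mutations of the configurable values from silently changing what was logged.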
