diff --git a/docs/core_docs/docs/concepts.mdx b/docs/core_docs/docs/concepts.mdx
index fd02e5f694d5..91f267823241 100644
--- a/docs/core_docs/docs/concepts.mdx
+++ b/docs/core_docs/docs/concepts.mdx
@@ -1,34 +1,3 @@
----
-keywords:
-  [
-    prompt,
-    prompttemplate,
-    chatprompttemplate,
-    tool,
-    tools,
-    runnable,
-    runnables,
-    invoke,
-    vector,
-    vectorstore,
-    vectorstores,
-    embedding,
-    embeddings,
-    chat,
-    chat model,
-    llm,
-    llms,
-    retriever,
-    retrievers,
-    loader,
-    loaders,
-    document,
-    documents,
-    output,
-    output parser,
-  ]
----
-
 # Conceptual guide
 
 This section contains introductions to key parts of LangChain.
@@ -106,6 +75,8 @@ export LANGCHAIN_API_KEY=ls__...
 
 ## LangChain Expression Language
 
+
+
 LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.
 LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
 
@@ -135,6 +106,8 @@ With LCEL, **all** steps are automatically logged to [LangSmith](https://docs.sm
 
 ### Interface
 
+
+
 To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html) protocol.
 Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more.
 There are also several useful primitives for working with runnables, which you can read about below.
@@ -163,6 +136,8 @@ Some components LangChain implements, some components we rely on third-party int
 
 ### LLMs
 
+
+
 Language models that take a string as input and return a string.
 These are traditionally older models (newer models generally are `ChatModels`, see below).
 
@@ -174,6 +149,8 @@ LangChain does not provide any LLMs, rather we rely on third party integrations.
 
 ### Chat models
 
+
+
 Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text).
 These are traditionally newer models (older models are generally `LLMs`, see above).
 Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages.
@@ -280,6 +257,8 @@ This represents the result of a tool call. This is distinct from a FunctionMessa
 
 ### Prompt templates
 
+
+
 Prompt templates help to translate user input and parameters into instructions for a language model.
 This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output.
 
@@ -327,6 +306,8 @@ The second is a HumanMessage, and will be formatted by the `topic` variable the
 
 #### MessagesPlaceholder
 
+
+
 This prompt template is responsible for adding an array of messages in a particular place.
 In the above ChatPromptTemplate, we saw how we could format two messages, each one a string.
 But what if we wanted the user to pass in an array of messages that we would slot into a particular spot?
@@ -369,6 +350,8 @@ Example Selectors are classes responsible for selecting and then formatting exam
 
 ### Output parsers
 
+
+
 :::note
 
 The information here refers to parsers that take a text output from a model and try to parse it into a more structured representation.
@@ -427,6 +410,8 @@ Future interactions will then load those messages and pass them into the chain a
 
 ### Document
 
+
+
 A Document object in LangChain contains information about some data. It has two attributes:
 
 - `pageContent: string`: The content of this document. Currently it is only a string.
@@ -434,6 +419,8 @@ A Document object in LangChain contains information about some data. It has two
 
 ### Document loaders
 
+
+
 These classes load Document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc.
 
 Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the `.load` method.
@@ -467,6 +454,8 @@ That means there are two different axes along which you can customize your text
 
 ### Embedding models
 
+
+
 The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them.
 
 Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
@@ -475,6 +464,8 @@ The base Embeddings class in LangChain provides two methods: one for embedding d
 
 ### Vectorstores
 
+
+
 One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query.
 A vector store takes care of storing embedded data and performing vector search for you.
 
@@ -488,6 +479,8 @@ const retriever = vectorstore.asRetriever();
 
 ### Retrievers
 
+
+
 A retriever is an interface that returns relevant documents given an unstructured query.
 They are more general than a vector store.
 A retriever does not need to be able to store documents, only to return (or retrieve) them.
@@ -497,6 +490,8 @@ Retrievers accept a string query as input and return an array of `Document`s as
 
 ### Tools
 
+
+
 Tools are interfaces that an agent, chain, or LLM can use to interact with the world.
 They combine a few things:
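The LCEL, Runnable interface, prompt template, chat model, and output parser sections above all describe one composition pattern. A minimal sketch of that pattern, assuming `@langchain/core` and `@langchain/openai` are installed; `ChatOpenAI` and the model name are placeholders, and any chat model integration could be swapped in:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatOpenAI } from "@langchain/openai";

// A prompt template that formats user input into a list of chat messages.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  ["human", "Tell me a joke about {topic}"],
]);

// Any chat model integration works here; the model name is illustrative.
const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Parses the model's chat message output down to a plain string.
const parser = new StringOutputParser();

// LCEL composition: each component is a Runnable, so they chain with .pipe().
const chain = prompt.pipe(model).pipe(parser);

// Every Runnable exposes the same interface: invoke, batch, and stream.
const joke = await chain.invoke({ topic: "bears" });
console.log(joke);
```

Because `prompt`, `model`, and `parser` each implement `Runnable`, the composed `chain` gets `invoke`, `batch`, and `stream` with no extra code.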
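The MessagesPlaceholder section asks how a caller could slot an array of messages into a particular spot in a prompt. A small sketch of one way to do that; the variable name `msgs` is arbitrary:

```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { HumanMessage } from "@langchain/core/messages";

// The placeholder reserves a spot that is filled with whatever messages
// the caller passes under the "msgs" key at invoke time.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  new MessagesPlaceholder("msgs"),
]);

// Produces a prompt value containing the system message plus the passed-in messages.
const promptValue = await prompt.invoke({
  msgs: [new HumanMessage("hi!")],
});
console.log(promptValue.toChatMessages());
```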
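The Document, embedding model, vector store, and retriever sections fit together into a single indexing-and-retrieval flow. A rough sketch, using the in-memory `MemoryVectorStore` from the `langchain` package and `OpenAIEmbeddings` purely as stand-ins for whichever vector store and embedding integrations are actually used:

```typescript
import { Document } from "@langchain/core/documents";
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

// Documents hold pageContent (a string) plus arbitrary metadata.
const docs = [
  new Document({
    pageContent: "LangChain composes chains with LCEL.",
    metadata: { source: "docs" },
  }),
  new Document({
    pageContent: "Vector stores search over embedded text.",
    metadata: { source: "docs" },
  }),
];

// The embedding model turns each document into a vector; the vector store
// stores those vectors and performs similarity search over them.
const vectorstore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// Any vector store can be exposed through the more general retriever interface.
const retriever = vectorstore.asRetriever();

// Retrievers take a string query and return an array of relevant Documents.
const results = await retriever.invoke("How does LangChain search embedded text?");
console.log(results.map((doc) => doc.pageContent));
```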