docs[patch]: Adds heading keywords for search #5678

Merged
merged 2 commits on Jun 5, 2024
57 changes: 26 additions & 31 deletions docs/core_docs/docs/concepts.mdx
@@ -1,34 +1,3 @@
---
keywords:
[
prompt,
prompttemplate,
chatprompttemplate,
tool,
tools,
runnable,
runnables,
invoke,
vector,
vectorstore,
vectorstores,
embedding,
embeddings,
chat,
chat model,
llm,
llms,
retriever,
retrievers,
loader,
loaders,
document,
documents,
output,
output parser,
]
---

# Conceptual guide

This section contains introductions to key parts of LangChain.
@@ -106,6 +75,8 @@ export LANGCHAIN_API_KEY=ls__...

## LangChain Expression Language

<span data-heading-keywords="lcel"></span>

LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.
LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains
(we’ve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
@@ -135,6 +106,8 @@ With LCEL, **all** steps are automatically logged to [LangSmith](https://docs.sm

### Interface

<span data-heading-keywords="invoke"></span>

To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html) protocol.
Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below.
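The core idea can be sketched in a few lines (this is a simplified illustration, not LangChain's actual implementation — the real `Runnable` also supports `batch`, `stream`, and more):

```typescript
// Minimal sketch of a Runnable-style protocol: anything exposing
// `invoke` can be composed into a chain.
type Runnable<In, Out> = {
  invoke: (input: In) => Promise<Out>;
};

// Compose two runnables, mirroring the `.pipe()` pattern.
function pipe<A, B, C>(
  first: Runnable<A, B>,
  second: Runnable<B, C>
): Runnable<A, C> {
  return {
    invoke: async (input) => second.invoke(await first.invoke(input)),
  };
}

// Two toy steps standing in for real components like prompts or models.
const upper: Runnable<string, string> = {
  invoke: async (s) => s.toUpperCase(),
};
const exclaim: Runnable<string, string> = {
  invoke: async (s) => `${s}!`,
};

const chain = pipe(upper, exclaim);
// chain.invoke("hello") resolves to "HELLO!"
```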

@@ -163,6 +136,8 @@ Some components LangChain implements, some components we rely on third-party int

### LLMs

<span data-heading-keywords="llm,llms"></span>

Language models that take a string as input and return a string.
These are traditionally older models (newer models generally are `ChatModels`, see below).
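The string-in/string-out contract can be sketched as a single function type (the "model" below is a stub for illustration, not a real integration):

```typescript
// Sketch of the LLM contract described above: plain string in, plain
// string out. A real integration would call a model provider here.
type LLM = (prompt: string) => Promise<string>;

const stubLLM: LLM = async (prompt) => `Completion for: ${prompt}`;
```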

@@ -174,6 +149,8 @@ LangChain does not provide any LLMs, rather we rely on third party integrations.

### Chat models

<span data-heading-keywords="chat model,chat models"></span>

Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text).
These are traditionally newer models (older models are generally `LLMs`, see above).
Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages.
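The role-tagged message shape can be sketched as follows (shapes assumed for illustration; the real message classes live in `@langchain/core` and carry more fields):

```typescript
// Role-tagged chat messages: the role distinguishes system
// instructions, user input, and model output.
type Role = "system" | "human" | "ai";

interface ChatMessage {
  role: Role;
  content: string;
}

// A chat model consumes a message array and returns an AI message.
// This stub just echoes the most recent human message.
function stubChatModel(messages: ChatMessage[]): ChatMessage {
  const lastHuman = [...messages].reverse().find((m) => m.role === "human");
  return { role: "ai", content: `You said: ${lastHuman?.content ?? ""}` };
}

const reply = stubChatModel([
  { role: "system", content: "You are terse." },
  { role: "human", content: "hi" },
]);
// reply is { role: "ai", content: "You said: hi" }
```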
@@ -280,6 +257,8 @@ This represents the result of a tool call. This is distinct from a FunctionMessa

### Prompt templates

<span data-heading-keywords="prompt,prompttemplate,chatprompttemplate"></span>

Prompt templates help to translate user input and parameters into instructions for a language model.
This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output.
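At its simplest, templating is variable substitution, which can be sketched like this (the real `PromptTemplate` class supports far more):

```typescript
// Minimal sketch of prompt templating: substitute named variables
// into a template string. Unknown variables are left untouched.
function formatPrompt(
  template: string,
  values: Record<string, string>
): string {
  return template.replace(/\{(\w+)\}/g, (match, key) => values[key] ?? match);
}

const prompt = formatPrompt("Tell me a joke about {topic}", {
  topic: "bears",
});
// prompt is "Tell me a joke about bears"
```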

@@ -327,6 +306,8 @@ The second is a HumanMessage, and will be formatted by the `topic` variable the

#### MessagesPlaceholder

<span data-heading-keywords="messagesplaceholder"></span>

This prompt template is responsible for adding an array of messages in a particular place.
In the above ChatPromptTemplate, we saw how we could format two messages, each one a string.
But what if we wanted the user to pass in an array of messages that we would slot into a particular spot?
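The underlying idea can be sketched as a template slot that expands to an entire message array (names and shapes here are assumed for illustration):

```typescript
// One placeholder item in a message template expands to a whole
// array of messages supplied at format time.
interface Message {
  role: string;
  content: string;
}

type TemplateItem =
  | { kind: "message"; message: Message }
  | { kind: "placeholder"; variable: string };

function formatMessages(
  template: TemplateItem[],
  values: Record<string, Message[]>
): Message[] {
  return template.flatMap((item) =>
    item.kind === "message" ? [item.message] : values[item.variable] ?? []
  );
}

const result = formatMessages(
  [
    { kind: "message", message: { role: "system", content: "You are helpful." } },
    { kind: "placeholder", variable: "msgs" },
  ],
  { msgs: [{ role: "human", content: "hi" }, { role: "ai", content: "hello" }] }
);
// result: the system message followed by both slotted messages
```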
@@ -369,6 +350,8 @@ Example Selectors are classes responsible for selecting and then formatting exam

### Output parsers

<span data-heading-keywords="output parser"></span>

:::note

The information here refers to parsers that take a text output from a model and try to parse it into a more structured representation.
@@ -427,13 +410,17 @@ Future interactions will then load those messages and pass them into the chain a

### Document

<span data-heading-keywords="document,documents"></span>

A Document object in LangChain contains information about some data. It has two attributes:

- `pageContent: string`: The content of this document. Currently this is only a string.
- `metadata: Record<string, any>`: Arbitrary metadata associated with this document. Can track the document id, file name, etc.
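The two attributes above, written out as a plain interface (a sketch; the actual `Document` class is provided by `@langchain/core`):

```typescript
// The Document shape: string content plus arbitrary metadata.
interface Document {
  pageContent: string;
  metadata: Record<string, any>;
}

const doc: Document = {
  pageContent: "LangChain is a framework for building LLM applications.",
  metadata: { source: "concepts.mdx", id: 1 },
};
```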

### Document loaders

<span data-heading-keywords="document loader,document loaders"></span>

These classes load Document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc.

Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the `.load` method.
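That shared contract can be sketched like this (the loader class below is made up for illustration — real loaders read from Slack, Notion, files, etc.):

```typescript
// Constructors differ per loader, but every loader exposes the same
// `.load()` method resolving to an array of Documents.
interface Document {
  pageContent: string;
  metadata: Record<string, any>;
}

class InMemoryLoader {
  constructor(private texts: string[]) {}

  async load(): Promise<Document[]> {
    return this.texts.map((text, index) => ({
      pageContent: text,
      metadata: { index },
    }));
  }
}

const loader = new InMemoryLoader(["first chunk", "second chunk"]);
// loader.load() resolves to two Documents
```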
@@ -467,6 +454,8 @@ That means there are two different axes along which you can customize your text

### Embedding models

<span data-heading-keywords="embedding,embeddings"></span>

The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them.

Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
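"Most similar in the vector space" is typically scored with cosine similarity, which can be sketched directly (the vectors here are made up; a real embedding model produces them from text):

```typescript
// Cosine similarity: identical directions score 1, orthogonal
// directions score 0.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const same = cosineSimilarity([1, 0], [1, 0]); // 1
const unrelated = cosineSimilarity([1, 0], [0, 1]); // 0
```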
@@ -475,6 +464,8 @@ The base Embeddings class in LangChain provides two methods: one for embedding d

### Vectorstores

<span data-heading-keywords="vector,vectorstore,vectorstores,vector store,vector stores"></span>

One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors,
and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query.
A vector store takes care of storing embedded data and performing vector search for you.
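That store-then-search loop can be sketched in memory (the tiny hand-written "embeddings" below are made up so the example is self-contained; a real store uses a real embedding model and persistent indexes):

```typescript
// Minimal in-memory vector store: embed on add, embed the query on
// search, rank stored texts by cosine similarity.
type Vector = number[];

function cosine(a: Vector, b: Vector): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: Vector) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

class TinyVectorStore {
  private entries: { vector: Vector; text: string }[] = [];

  constructor(private embed: (text: string) => Vector) {}

  add(text: string): void {
    this.entries.push({ vector: this.embed(text), text });
  }

  search(query: string, k: number): string[] {
    const q = this.embed(query);
    return [...this.entries]
      .sort((a, b) => cosine(b.vector, q) - cosine(a.vector, q))
      .slice(0, k)
      .map((entry) => entry.text);
  }
}

// Hand-written 2-d "embeddings" for three words.
const fakeEmbeddings: Record<string, Vector> = {
  cat: [1, 0],
  dog: [0.9, 0.1],
  car: [0, 1],
};
const store = new TinyVectorStore((text) => fakeEmbeddings[text]);
store.add("cat");
store.add("car");
const nearest = store.search("dog", 1); // ["cat"]
```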
@@ -488,6 +479,8 @@ const retriever = vectorstore.asRetriever();

### Retrievers

<span data-heading-keywords="retriever,retrievers"></span>

A retriever is an interface that returns relevant documents given an unstructured query.
They are more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) them.
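The contract can be sketched as a single function type — string query in, Documents out. No storage is required; this toy retriever just filters a fixed list (corpus contents made up for illustration):

```typescript
// Sketch of the retriever contract: storage-free keyword filtering
// over an in-memory corpus.
interface Document {
  pageContent: string;
  metadata: Record<string, any>;
}

type Retriever = (query: string) => Promise<Document[]>;

const keywordRetriever: Retriever = async (query) => {
  const corpus: Document[] = [
    { pageContent: "Retrievers return relevant documents", metadata: {} },
    { pageContent: "Bananas are yellow", metadata: {} },
  ];
  return corpus.filter((d) =>
    d.pageContent.toLowerCase().includes(query.toLowerCase())
  );
};
// keywordRetriever("documents") resolves to one matching Document
```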
@@ -497,6 +490,8 @@ Retrievers accept a string query as input and return an array of `Document`s as

### Tools

<span data-heading-keywords="tool,tools"></span>

Tools are interfaces that an agent, chain, or LLM can use to interact with the world.
They combine a few things:
