concepts: some fixes #27560

Merged 3 commits on Oct 22, 2024
docs/docs/concepts/architecture.mdx: 40 changes (19 additions & 21 deletions)

@@ -1,31 +1,42 @@
 import ThemedImage from '@theme/ThemedImage';
 import useBaseUrl from '@docusaurus/useBaseUrl';

-## Architecture
+# Architecture

 LangChain as a framework consists of a number of packages.

-### langchain-core
+<ThemedImage
+  alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
+  sources={{
+    light: useBaseUrl('/svg/langchain_stack_062024.svg'),
+    dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
+  }}
+  title="LangChain Framework Overview"
+  style={{ width: "100%" }}
+/>
+
+
+## langchain-core

 This package contains base abstractions of different components and ways to compose them together.
 The interfaces for core components like LLMs, vector stores, retrievers and more are defined here.
 No third party integrations are defined here.
 The dependencies are kept purposefully very lightweight.

-### langchain
+## langchain

 The main `langchain` package contains chains, agents, and retrieval strategies that make up an application's cognitive architecture.
 These are NOT third party integrations.
 All chains, agents, and retrieval strategies here are NOT specific to any one integration, but rather generic across all integrations.

-### langchain-community
+## langchain-community

 This package contains third party integrations that are maintained by the LangChain community.
 Key partner packages are separated out (see below).
 This contains all integrations for various components (LLMs, vector stores, retrievers).
 All dependencies in this package are optional to keep the package as lightweight as possible.

-### Partner packages
+## Partner packages

 While the long tail of integrations is in `langchain-community`, we split popular integrations into their own packages (e.g. `langchain-openai`, `langchain-anthropic`, etc). This was done in order to improve support for these important integrations.
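To make the layering concrete, components defined in `langchain-core` (a prompt template and an output parser) compose directly with a chat model from a partner package. The sketch below is illustrative only: it assumes `langchain-openai` is installed, `OPENAI_API_KEY` is set in the environment, and the model name is just an example.

```python
from langchain_core.output_parsers import StrOutputParser  # base abstraction from langchain-core
from langchain_core.prompts import ChatPromptTemplate      # base abstraction from langchain-core
from langchain_openai import ChatOpenAI                    # integration from a partner package

# Components from different layers share the Runnable interface, so they compose with `|`.
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
model = ChatOpenAI(model="gpt-4o-mini")  # example model name; any chat model integration works
chain = prompt | model | StrOutputParser()

print(chain.invoke({"text": "LangChain is organized as a set of layered packages."}))
```

Swapping `ChatOpenAI` for another provider's chat model leaves the rest of the chain unchanged, which is the point of keeping the base abstractions in `langchain-core`.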

@@ -34,7 +45,7 @@ For more information see:
 * A list of [LangChain integrations](/docs/integrations/providers/)
 * The [LangChain API Reference](https://python.langchain.com/api_reference/) where you can find detailed information about the API reference of each partner package.

-### LangGraph
+## LangGraph

 `langgraph` is an extension of `langchain` aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
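As a rough sketch of the nodes-and-edges model described here, the following builds a one-node graph with `langgraph`. It assumes only that the `langgraph` package is installed; the node is a stand-in for what would normally be a model or tool call.

```python
from typing import TypedDict

from langgraph.graph import END, START, StateGraph


class State(TypedDict):
    question: str
    answer: str


def answer_node(state: State) -> dict:
    # A real node would typically call a chat model; this one just echoes the input.
    return {"answer": f"You asked: {state['question']}"}


builder = StateGraph(State)
builder.add_node("answer", answer_node)
builder.add_edge(START, "answer")  # entry point
builder.add_edge("answer", END)    # exit point
graph = builder.compile()

print(graph.invoke({"question": "What is LangGraph?"}))
```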

@@ -47,9 +58,7 @@ LangGraph exposes high level interfaces for creating common types of agents, as

 :::

-
-
-### LangServe
+## LangServe

 A package to deploy LangChain chains as REST APIs. Makes it easy to get a production ready API up and running.
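A minimal sketch of what serving a runnable looks like with LangServe, assuming `langserve[server]`, `fastapi`, and `uvicorn` are installed; the runnable here is a trivial placeholder rather than a real chain.

```python
from fastapi import FastAPI
from langchain_core.runnables import RunnableLambda
from langserve import add_routes

app = FastAPI(title="Example LangServe app")

# Any Runnable (a chain, an agent, ...) can be exposed; this one just upper-cases its input.
shout = RunnableLambda(lambda text: text.upper())
add_routes(app, shout, path="/shout")

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```

Once running, the runnable is reachable at `POST /shout/invoke`, and LangServe exposes batch and streaming endpoints for the same path.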

@@ -62,19 +71,8 @@ If you need a deployment option for LangGraph, you should instead be looking at
 For more information, see the [LangServe documentation](/docs/langserve).


-### LangSmith
+## LangSmith

 A developer platform that lets you debug, test, evaluate, and monitor LLM applications.

 For more information, see the [LangSmith documentation](https://docs.smith.langchain.com)
-
-
-<ThemedImage
-  alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
-  sources={{
-    light: useBaseUrl('/svg/langchain_stack_062024.svg'),
-    dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
-  }}
-  title="LangChain Framework Overview"
-  style={{ width: "100%" }}
-/>
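LangSmith is typically switched on through environment variables rather than code changes; a sketch, assuming you already have a LangSmith API key (the project name below is arbitrary).

```python
import os

# Enable tracing for any LangChain code that runs after these are set.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "my-project"  # optional: group runs under a project
```

With these set, chain and agent runs are traced to LangSmith without changes to the application code itself.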
docs/docs/concepts/index.mdx: 41 changes (21 additions & 20 deletions)

@@ -16,32 +16,32 @@ The conceptual guide will not cover step-by-step instructions or specific implem

 ## Concepts

-- **[Chat Models](/docs/concepts/chat_models)**: Modern LLMs exposed via a chat interface which process sequences of messages as input and output a message.
+- **[Chat models](/docs/concepts/chat_models)**: LLMs exposed via a chat interface which process sequences of messages as input and output a message.
 - **[Messages](/docs/concepts/messages)**: Messages are the unit of communication in modern LLMs, used to represent input and output of a chat model, as well as any additional context or metadata that may be associated with the conversation.
-- **[Chat History](/docs/concepts/chat_history)**: Chat history is a record of the conversation between the user and the chat model, used to maintain context and state throughout the conversation.
-- **[Tools](/docs/concepts/tools)**: The **tool** abstraction in LangChain associates a Python **function** with a **schema** defining the function's **name**, **description**, and **input**.
-- **[Tool Calling](/docs/concepts/tool_calling)**: Tool calling is the process of invoking a tool from a chat model.
-- **[Structured Output](/docs/concepts/structured_outputs)**: A technique to make the chat model respond in a structured format, such as JSON and matching a specific schema.
-- **[Memory](https://langchain-ai.github.io/langgraph/concepts/memory/)**: Explanation of **short-term memory** and **long-term memory** and how to implement them using LangGraph.
+- **[Chat history](/docs/concepts/chat_history)**: Chat history is a record of the conversation between the user and the chat model, used to maintain context and state throughout the conversation.
+- **[Tools](/docs/concepts/tools)**: The tool abstraction in LangChain associates a Python function with a schema defining the function's name, description, and input.
+- **[Tool calling](/docs/concepts/tool_calling)**: Tool calling is a special type of chat model API that allows you to pass tool schemas to a model and get back invocations of those tools.
+- **[Structured output](/docs/concepts/structured_outputs)**: A technique to make the chat model respond in a structured format, such as JSON matching a specified schema.
+- **[Memory](https://langchain-ai.github.io/langgraph/concepts/memory/)**: Persisting information from conversations so that it can be used in future conversations.
 - **[Multimodality](/docs/concepts/multimodality)**: The ability to work with data that comes in different forms, such as text, audio, images, and video.
 - **[Tokens](/docs/concepts/tokens)**: Modern large language models (LLMs) are typically based on a transformer architecture that processes a sequence of units known as tokens.
-- **[Runnable Interface](/docs/concepts/runnables)**: Description of the standard Runnable interface which is implemented by many components in LangChain.
-- **[LangChain Expression Language (LCEL)](/docs/concepts/lcel)**: A declarative approach to building new Runnables from existing Runnables.
-- **[Document Loaders](/docs/concepts/document_loaders)**: Abstraction for loading documents.
+- **[Runnable interface](/docs/concepts/runnables)**: A standard Runnable interface implemented across many LangChain components.
+- **[LangChain Expression Language (LCEL)](/docs/concepts/lcel)**: A declarative approach to building pipelines with LangChain components. LCEL serves as a simple orchestration language for LangChain.
+- **[Document loaders](/docs/concepts/document_loaders)**: Components that help load documents from various sources.
 - **[Retrieval](/docs/concepts/retrieval)**: Information retrieval systems can retrieve structured or unstructured data from a datasource in response to a query.
-- **[Text Splitters](/docs/concepts/text_splitters)**: Use to split long content into smaller more manageable chunks.
-- **[Embedding Models](/docs/concepts/embedding_models)**: Embedding models are models that can represent data in a vector space.
-- **[VectorStores](/docs/concepts/vectorstores)**: A datastore that can store embeddings and associated data and supports efficient vector search.
+- **[Text splitters](/docs/concepts/text_splitters)**: Used to split long content into smaller, more manageable chunks.
+- **[Embedding models](/docs/concepts/embedding_models)**: Embedding models are models that can represent data in a vector space.
+- **[Vector stores](/docs/concepts/vectorstores)**: A datastore that can store embeddings and associated data and supports efficient vector search.
 - **[Retriever](/docs/concepts/retrievers)**: A retriever is a component that retrieves relevant documents from a knowledge base in response to a query.
-- **[Retrieval Augmented Generation (RAG)](/docs/concepts/rag)**: A powerful technique that enhances language models by combining them with external knowledge bases.
-- **[Agents](/docs/concepts/agents)**: Use a [language model](/docs/concepts/chat_models) to choose a sequence of actions to take. Agents can interact with external resources via [tool calling](/docs/concepts/tool_calling).
-- **[Prompt Templates](/docs/concepts/prompt_templates)**: Use to define prompt **templates** that can be lazily evaluated to generate prompts for [language models](/docs/concepts/chat_models). Primarily used with [LCEL](/docs/concepts/lcel) or prompts need to be serialized and stored for later use (e.g., in a database).
-- **[Async Programming with LangChain](/docs/concepts/async)**: This guide covers some basic things that one should know to work with LangChain in an asynchronous context.
-- **[Callbacks](/docs/concepts/callbacks)**: Learn about the callback system in LangChain. It is composed of CallbackManagers (which dispatch events to the registered handlers) and CallbackHandlers (which handle the events). Callbacks are used to stream outputs from LLMs in LangChain, observe the progress of an LLM application, and more.
-- **[Output Parsers](/docs/concepts/output_parsers)**: Output parsers are responsible for taking the output of a model and transforming it into a more suitable format for downstream tasks. Output parsers were primarily useful prior to the general availability of [chat models](/docs/concepts/chat_models) that natively support [tool calling](/docs/concepts/tool_calling) and [structured outputs](/docs/concepts/structured_outputs).
+- **[Retrieval Augmented Generation (RAG)](/docs/concepts/rag)**: A technique that enhances language models by combining them with external knowledge bases.
+- **[Agents](/docs/concepts/agents)**: Use a [language model](/docs/concepts/chat_models) to choose a sequence of actions to take. Agents can interact with external resources via [tools](/docs/concepts/tools).
+- **[Prompt templates](/docs/concepts/prompt_templates)**: Used to define reusable structures for generating prompts dynamically, allowing for variables or placeholders to be filled in when needed. This is particularly useful with [LCEL](/docs/concepts/lcel) or when prompts need to be stored and retrieved from a database for repeated use.
+- **[Async programming with LangChain](/docs/concepts/async)**: Guidelines for programming with LangChain in an asynchronous context.
+- **[Callbacks](/docs/concepts/callbacks)**: Callbacks are used to stream outputs from LLMs in LangChain, observe the progress of an LLM application, and more.
+- **[Output parsers](/docs/concepts/output_parsers)**: Components that take the output of a model and transform it into a more suitable format for downstream tasks. Output parsers were primarily useful prior to the general availability of [chat models](/docs/concepts/chat_models) that natively support [tool calling](/docs/concepts/tool_calling) and [structured outputs](/docs/concepts/structured_outputs).
 - **[Few shot prompting](/docs/concepts/few_shot_prompting)**: Few-shot prompting is a technique used to improve the performance of language models by providing them with a few examples of the task they are expected to perform.
-- **[Example Selectors](/docs/concepts/example_selectors)**: Example selectors are used to select examples from a dataset based on a given input. They can be used to select examples randomly, by semantic similarity, or based on some other constraints. Example selectors are used in few-shot prompting to select examples for a prompt.
-- **[Tracing](/docs/concepts/tracing)**: Tracing is the process of recording the steps that an application takes to go from input to output. Tracing is essential for debugging and diagnosing issues in complex applications. For more information on tracing in LangChain, see the [LangSmith documentation](https://docs.smith.langchain.com/concepts/tracing).
+- **[Example selectors](/docs/concepts/example_selectors)**: Example selectors are used to select examples from a dataset based on a given input. They can be used to select examples randomly, by semantic similarity, or based on some other constraints. Example selectors are used in few-shot prompting to select examples for a prompt.
+- **[Tracing](/docs/concepts/tracing)**: Tracing is the process of recording the steps that an application takes to go from input to output. Tracing is essential for debugging and diagnosing issues in complex applications.
 - **[Evaluation](/docs/concepts/evaluation)**: Evaluation is the process of assessing the performance and effectiveness of your LLM-powered applications. It involves testing the model's responses against a set of predefined criteria or benchmarks to ensure it meets the desired quality standards and fulfills the intended purpose. This process is vital for building reliable applications. For more information on evaluation in LangChain, see the [LangSmith documentation](https://docs.smith.langchain.com/concepts/evaluation).

 ## Glossary
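Several of the concepts listed above (tools, tool calling, chat models) fit together in a few lines. The sketch below assumes `langchain-anthropic` is installed and `ANTHROPIC_API_KEY` is set; the model name is only an example, and any chat model that supports tool calling could be substituted.

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


model = ChatAnthropic(model="claude-3-5-sonnet-20240620")  # example model name
model_with_tools = model.bind_tools([multiply])

response = model_with_tools.invoke("What is 6 times 7?")
# The model does not run the tool; it returns a structured request to call it.
print(response.tool_calls)  # e.g. [{"name": "multiply", "args": {"a": 6, "b": 7}, ...}]
```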
@@ -57,6 +57,7 @@ The conceptual guide will not cover step-by-step instructions or specific implem
 - **[Configurable Runnables](/docs/concepts/runnables#configurable-Runnables)**: Creating configurable Runnables.
 - **[Context window](/docs/concepts/chat_models#context-window)**: The maximum size of input a chat model can process.
 - **[Conversation patterns](/docs/concepts/chat_history#conversation-patterns)**: Common patterns in chat interactions.
+- **[Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html)**: LangChain's representation of a document.
 - **[Embedding models](/docs/concepts/multimodality#embedding-models)**: Models that generate vector embeddings for various data types.
 - **[HumanMessage](/docs/concepts/messages#humanmessage)**: Represents a message from a human user.
 - **[InjectedState](/docs/concepts/tools#injectedstate)**: A state injected into a tool function.
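The `Document` entry added above refers to a small container pairing text with metadata; a minimal sketch of constructing one (the field values are made up).

```python
from langchain_core.documents import Document

doc = Document(
    page_content="LangChain is organized as a set of layered packages.",
    metadata={"source": "architecture.mdx", "topic": "architecture"},
)
print(doc.page_content)
print(doc.metadata["source"])
```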