docs[minor]: Format

pull/19590/head
bracesproul 2 months ago
parent 9c7e860cf6
commit be8b3433aa

@@ -0,0 +1,53 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*
* @format
*/
const OFF = 0;
const WARNING = 1;
const ERROR = 2;
module.exports = {
  root: true,
  env: {
    browser: true,
    commonjs: true,
    jest: true,
    node: true,
  },
  parser: "@babel/eslint-parser",
  parserOptions: {
    allowImportExportEverywhere: true,
  },
  extends: ["airbnb", "prettier"],
  plugins: ["react-hooks", "header"],
  ignorePatterns: [
    "build",
    "docs/api",
    "node_modules",
    "docs/_static",
    "static",
  ],
  rules: {
    // Ignore certain webpack alias because it can't be resolved
    "import/no-unresolved": [
      ERROR,
      { ignore: ["^@theme", "^@docusaurus", "^@generated"] },
    ],
    "import/extensions": OFF,
    "react/jsx-filename-extension": OFF,
    "react-hooks/rules-of-hooks": ERROR,
    "react/prop-types": OFF, // PropTypes aren't used much these days.
    "react/function-component-definition": [
      WARNING,
      {
        namedComponents: "function-declaration",
        unnamedComponents: "arrow-function",
      },
    ],
  },
};

docs/.gitignore

@@ -1,2 +1,3 @@
/.quarto/
src/supabase.d.ts
.eslintcache

@@ -0,0 +1,7 @@
node_modules
build
.docusaurus
docs/api
docs/_static
static
quarto-1.3.450

@@ -1,31 +1,30 @@
-[comment: Please, a reference example here "docs/integrations/arxiv.md"]::
-[comment: Use this template to create a new .md file in "docs/integrations/"]::
+[comment: Please, a reference example here "docs/integrations/arxiv.md"]: :
+[comment: Use this template to create a new .md file in "docs/integrations/"]: :
# Title_REPLACE_ME
-[comment: Only one Tile/H1 is allowed!]::
+[comment: Only one Tile/H1 is allowed!]: :
>
-[comment: Description: After reading this description, a reader should decide if this integration is good enough to try/follow reading OR]::
-[comment: go to read the next integration doc. ]::
-[comment: Description should include a link to the source for follow reading.]::
+> [comment: Description: After reading this description, a reader should decide if this integration is good enough to try/follow reading OR]: :
+> [comment: go to read the next integration doc. ]: :
+> [comment: Description should include a link to the source for follow reading.]: :
## Installation and Setup
-[comment: Installation and Setup: All necessary additional package installations and setups for Tokens, etc]::
+[comment: Installation and Setup: All necessary additional package installations and setups for Tokens, etc]: :
```bash
pip install package_name_REPLACE_ME
```
-[comment: OR this text:]::
+[comment: OR this text:]: :
There isn't any special setup for it.
-[comment: The next H2/## sections with names of the integration modules, like "LLM", "Text Embedding Models", etc]::
-[comment: see "Modules" in the "index.html" page]::
-[comment: Each H2 section should include a link to an example(s) and a Python code with the import of the integration class]::
-[comment: Below are several example sections. Remove all unnecessary sections. Add all necessary sections not provided here.]::
+[comment: The next H2/## sections with names of the integration modules, like "LLM", "Text Embedding Models", etc]: :
+[comment: see "Modules" in the "index.html" page]: :
+[comment: Each H2 section should include a link to an example(s) and a Python code with the import of the integration class]: :
+[comment: Below are several example sections. Remove all unnecessary sections. Add all necessary sections not provided here.]: :
## LLM

File diff suppressed because it is too large

@@ -6,16 +6,19 @@
- [LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**
- [LangChain Cheatsheet](https://pub.towardsai.net/langchain-cheatsheet-all-secrets-on-a-single-page-8be26b721cde) by **Ivan Reznikov**
## Tutorials
### [by Greg Kamradt](https://www.youtube.com/playlist?list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5)
### [by Sam Witteveen](https://www.youtube.com/playlist?list=PL8motc6AQftk1Bs42EW45kwYbyJ4jOdiZ)
### [by James Briggs](https://www.youtube.com/playlist?list=PLIUOU7oqGTLieV9uTIFMm6_4PXg-hlN6F)
### [by Prompt Engineering](https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr)
### [by Mayo Oshin](https://www.youtube.com/@chatwithdata/search?query=langchain)
### [by 1 little Coder](https://www.youtube.com/playlist?list=PLpdmBGJ6ELUK-v0MK-t4wZmVEbxM5xk6L)
## Courses
@@ -45,6 +48,4 @@
## [Documentation: Use cases](/docs/use_cases)
----------------------
+---

@@ -5,6 +5,7 @@
### [Official LangChain YouTube channel](https://www.youtube.com/@LangChain)
### Introduction to LangChain with Harrison Chase, creator of LangChain
- [Building the Future with LLMs, `LangChain`, & `Pinecone`](https://youtu.be/nMniwlGyX-c) by [Pinecone](https://www.youtube.com/@pinecone-io)
- [LangChain and Weaviate with Harrison Chase and Bob van Luijt - Weaviate Podcast #36](https://youtu.be/lhby7Ql7hbk) by [Weaviate • Vector Database](https://www.youtube.com/@Weaviate)
- [LangChain Demo + Q&A with Harrison Chase](https://youtu.be/zaYTXQFR0_s?t=788) by [Full Stack Deep Learning](https://www.youtube.com/@FullStackDeepLearning)
@@ -13,7 +14,7 @@
## Videos (sorted by views)
- [Using `ChatGPT` with YOUR OWN Data. This is magical. (LangChain OpenAI API)](https://youtu.be/9AXP7tCI9PI) by [TechLead](https://www.youtube.com/@TechLead)
- [First look - `ChatGPT` + `WolframAlpha` (`GPT-3.5` and Wolfram|Alpha via LangChain by James Weaver)](https://youtu.be/wYGbY811oMo) by [Dr Alan D. Thompson](https://www.youtube.com/@DrAlanDThompson)
- [LangChain explained - The hottest new Python framework](https://youtu.be/RoR4XJw8wIc) by [AssemblyAI](https://www.youtube.com/@AssemblyAI)
- [Chatbot with INFINITE MEMORY using `OpenAI` & `Pinecone` - `GPT-3`, `Embeddings`, `ADA`, `Vector DB`, `Semantic`](https://youtu.be/2xNzB7xq8nk) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
- [LangChain for LLMs is... basically just an Ansible playbook](https://youtu.be/X51N9C-OhlE) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
@@ -37,7 +38,7 @@
- [Building AI LLM Apps with LangChain (and more?) - LIVE STREAM](https://www.youtube.com/live/M-2Cj_2fzWI?feature=share) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)
- [`ChatGPT` with any `YouTube` video using langchain and `chromadb`](https://youtu.be/TQZfB2bzVwU) by [echohive](https://www.youtube.com/@echohive)
- [How to Talk to a `PDF` using LangChain and `ChatGPT`](https://youtu.be/v2i1YDtrIwk) by [Automata Learning Lab](https://www.youtube.com/@automatalearninglab)
- [Langchain Document Loaders Part 1: Unstructured Files](https://youtu.be/O5C0wfsen98) by [Merk](https://www.youtube.com/@merksworld)
- [LangChain - Prompt Templates (what all the best prompt engineers use)](https://youtu.be/1aRu8b0XNOQ) by [Nick Daigler](https://www.youtube.com/@nick_daigs)
- [LangChain. Crear aplicaciones Python impulsadas por GPT](https://youtu.be/DkW_rDndts8) by [Jesús Conde](https://www.youtube.com/@0utKast)
- [Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial](https://youtu.be/fLy0VenZyGc) by [Rachel Woods](https://www.youtube.com/@therachelwoods)
@@ -108,7 +109,7 @@
- ⛓ [How to Run `LLaMA` Locally on CPU or GPU | Python & Langchain & CTransformers Guide](https://youtu.be/SvjWDX2NqiM?si=DxFml8XeGhiLTzLV) by [Code With Prince](https://www.youtube.com/@CodeWithPrince)
- ⛓ [PyData Heidelberg #11 - TimeSeries Forecasting & LLM Langchain](https://www.youtube.com/live/Glbwb5Hxu18?si=PIEY8Raq_C9PCHuW) by [PyData](https://www.youtube.com/@PyDataTV)
- ⛓ [Prompt Engineering in Web Development | Using LangChain and Templates with OpenAI](https://youtu.be/pK6WzlTOlYw?si=fkcDQsBG2h-DM8uQ) by [Akamai Developer
](https://www.youtube.com/@AkamaiDeveloper)
- ⛓ [Retrieval-Augmented Generation (RAG) using LangChain and `Pinecone` - The RAG Special Episode](https://youtu.be/J_tCD_J6w3s?si=60Mnr5VD9UED9bGG) by [Generative AI and Data Science On AWS](https://www.youtube.com/@GenerativeAIDataScienceOnAWS)
- ⛓ [`LLAMA2 70b-chat` Multiple Documents Chatbot with Langchain & Streamlit |All OPEN SOURCE|Replicate API](https://youtu.be/vhghB81vViM?si=dszzJnArMeac7lyc) by [DataInsightEdge](https://www.youtube.com/@DataInsightEdge01)
- ⛓ [Chatting with 44K Fashion Products: LangChain Opportunities and Pitfalls](https://youtu.be/Zudgske0F_s?si=8HSshHoEhh0PemJA) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
@@ -123,8 +124,8 @@
- ⛓ [Build Chat PDF app in Python with LangChain, OpenAI, Streamlit | Full project | Learn Coding](https://www.youtube.com/watch?v=WYzFzZg4YZI) by [Jutsupoint](https://www.youtube.com/@JutsuPoint)
- ⛓ [Build Eminem Bot App with LangChain, Streamlit, OpenAI | Full Python Project | Tutorial | AI ChatBot](https://www.youtube.com/watch?v=a2shHB4MRZ4) by [Jutsupoint](https://www.youtube.com/@JutsuPoint)
### [Prompt Engineering and LangChain](https://www.youtube.com/watch?v=muXbPpG_ys4&list=PLEJK-H61Xlwzm5FYLDdKt_6yibO33zoMW) by [Venelin Valkov](https://www.youtube.com/@venelin_valkov)
- [Getting Started with LangChain: Load Custom Data, Run OpenAI Models, Embeddings and `ChatGPT`](https://www.youtube.com/watch?v=muXbPpG_ys4)
- [Loaders, Indexes & Vectorstores in LangChain: Question Answering on `PDF` files with `ChatGPT`](https://www.youtube.com/watch?v=FQnvfR8Dmr0)
- [LangChain Models: `ChatGPT`, `Flan Alpaca`, `OpenAI Embeddings`, Prompt Templates & Streaming](https://www.youtube.com/watch?v=zy6LiK5F5-s)
@@ -132,6 +133,6 @@
- [Analyze Custom CSV Data with `GPT-4` using Langchain](https://www.youtube.com/watch?v=Ew3sGdX8at4)
- [Build ChatGPT Chatbots with LangChain Memory: Understanding and Implementing Memory in Conversations](https://youtu.be/CyuUlf54wTs)
---
---------------------
⛓ icon marks a new addition [last update 2024-02-04]

@@ -10,8 +10,8 @@ No deletions.
Deprecated classes and methods will be removed in 0.2.0
| Deprecated | Alternative | Reason |
-|---------------------------------|-----------------------------------|------------------------------------------------|
+| ------------------------------- | --------------------------------- | ---------------------------------------------- |
| ChatVectorDBChain | ConversationalRetrievalChain | More general to all retrievers |
| create_ernie_fn_chain | create_ernie_fn_runnable | Use LCEL under the hood |
| created_structured_output_chain | create_structured_output_runnable | Use LCEL under the hood |
@@ -33,4 +33,4 @@ Deprecated classes and methods will be removed in 0.2.0
| OpenAIMultiFunctionsAgent | create_openai_tools_agent | Use LCEL builder over a class |
| SelfAskWithSearchAgent | create_self_ask_with_search | Use LCEL builder over a class |
| StructuredChatAgent | create_structured_chat_agent | Use LCEL builder over a class |
| XMLAgent | create_xml_agent | Use LCEL builder over a class |

@@ -1,6 +1,7 @@
---
sidebar_position: 1
---
# Contribute Code
To contribute to this project, please follow the ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
@@ -13,6 +14,7 @@ Pull requests cannot land without passing the formatting, linting, and testing c
[Formatting and Linting](#formatting-and-linting) for how to run these checks locally.
It's essential that we maintain great documentation and testing. If you:
- Fix a bug
- Add a relevant unit or integration test when possible. These live in `tests/unit_tests` and `tests/integration_tests`.
- Make an improvement
@@ -34,7 +36,7 @@ For a [development container](https://containers.dev/), see the [.devcontainer f
This project utilizes [Poetry](https://python-poetry.org/) v1.7.1+ as a dependency manager.
-❗Note: *Before installing Poetry*, if you use `Conda`, create and activate a new Conda env (e.g. `conda create -n langchain python=3.9`)
+❗Note: _Before installing Poetry_, if you use `Conda`, create and activate a new Conda env (e.g. `conda create -n langchain python=3.9`)
Install Poetry: **[documentation on how to install it](https://python-poetry.org/docs/#installation)**.
@@ -44,6 +46,7 @@ tell Poetry to use the virtualenv python environment (`poetry config virtualenvs
### Different packages
This repository contains multiple packages:
- `langchain-core`: Base interfaces for key abstractions as well as logic for combining them in chains (LangChain Expression Language).
- `langchain-community`: Third-party integrations of various components.
- `langchain`: Chains, agents, and retrieval logic that makes up the cognitive architecture of your applications.
@@ -219,16 +222,20 @@ any side effects (no warnings, no errors, no exceptions).
To introduce the dependency to the pyproject.toml file correctly, please do the following:
1. Add the dependency to the main group as an optional dependency
```bash
poetry add --optional [package_name]
```
2. Open pyproject.toml and add the dependency to the `extended_testing` extra
3. Relock the poetry file to update the extra.
```bash
poetry lock --no-update
```
4. Add a unit test that at the very least attempts to import the new code. Ideally, the unit
test makes use of lightweight fixtures to test the logic of the code.
5. Please use the `@pytest.mark.requires(package_name)` decorator for any tests that require the dependency.
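The marker-based gating above can be sketched as follows. This is an illustrative stand-in, not the repo's actual `conftest.py` implementation: it shows how a test-collection hook could skip tests whose `requires` marker names packages that are not installed (the helper name `missing_packages` is made up for the example).

```python
# Hypothetical sketch of gating tests on optional dependencies.
import importlib.util


def missing_packages(*package_names):
    """Return the subset of package_names that cannot be imported."""
    return [
        name
        for name in package_names
        if importlib.util.find_spec(name) is None
    ]


# In a conftest.py, a pytest hook would then skip marked tests, e.g.:
#
# def pytest_collection_modifyitems(config, items):
#     for item in items:
#         marker = item.get_closest_marker("requires")
#         if marker and missing_packages(*marker.args):
#             item.add_marker(pytest.mark.skip(reason="missing optional deps"))
```

This keeps the default test suite green for contributors who have not installed the extended-testing extras.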
## Adding a Jupyter Notebook

@@ -1,19 +1,20 @@
---
sidebar_position: 3
---
# Contribute Documentation
LangChain documentation consists of two components:
1. Main Documentation: Hosted at [python.langchain.com](https://python.langchain.com/),
this comprehensive resource serves as the primary user-facing documentation.
It covers a wide array of topics, including tutorials, use cases, integrations,
and more, offering extensive guidance on building with LangChain.
The content for this documentation lives in the `/docs` directory of the monorepo.
2. In-code Documentation: This is documentation of the codebase itself, which is also
used to generate the externally facing [API Reference](https://api.python.langchain.com/en/latest/langchain_api_reference.html).
The content for the API reference is autogenerated by scanning the docstrings in the codebase. For this reason we ask that
developers document their code well.
The main documentation is built using [Quarto](https://quarto.org) and [Docusaurus 2](https://docusaurus.io/).
@@ -59,7 +60,7 @@ From the **monorepo root**, run the following command to install the dependencie
```bash
poetry install --with lint,docs --no-root
-````
+```
### Building
@@ -171,4 +172,4 @@ make lint
After pushing documentation changes to the repository, you can preview and verify that the changes are
what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page.
This will take you to a preview of the documentation changes.
This preview is created by [Vercel](https://vercel.com/docs/getting-started-with-vercel).

@@ -2,6 +2,7 @@
sidebar_position: 6
sidebar_label: FAQ
---
# Frequently Asked Questions
## Pull Requests (PRs)
@@ -13,7 +14,7 @@ necessary before merging it. Oftentimes, it is more efficient for the
maintainers to make these changes themselves before merging, rather than asking you
to do so in code review.
By default, most pull requests will have a
`✅ Maintainers are allowed to edit this pull request.`
badge in the right-hand sidebar.

@@ -1,6 +1,7 @@
---
sidebar_position: 0
---
# Welcome Contributors
Hi there! Thank you for even being interested in contributing to LangChain.
@@ -51,4 +52,4 @@ we do not want these to get in the way of getting good code into the codebase.
# 🌟 Recognition
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.

@@ -1,6 +1,7 @@
---
sidebar_position: 5
---
# Contribute Integrations
To begin, make sure you have all the dependencies outlined in guide on [Contributing Code](./code).
@@ -18,7 +19,7 @@ In the following sections, we'll walk through how to contribute to each of these
The `langchain-community` package is in `libs/community` and contains most integrations.
It can be installed with `pip install langchain-community`, and exported members can be imported with code like
```python
from langchain_community.chat_models import ChatParrotLink
@@ -26,7 +27,7 @@ from langchain_community.llms import ParrotLinkLLM
from langchain_community.vectorstores import ParrotLinkVectorStore
```
The `community` package relies on manually-installed dependent packages, so you will see errors
if you try to import a package that is not installed. In our fake example, if you tried to import `ParrotLinkLLM` without installing `parrot-link-sdk`, you will see an `ImportError` telling you to install it when trying to use it.
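The guarded-import pattern described here can be sketched in a few lines. This is a minimal illustration using the doc's fake `parrot-link-sdk` example, not the actual `langchain-community` code: the optional SDK is imported only when the integration is constructed, and a missing install surfaces as an actionable `ImportError`.

```python
# Minimal sketch of the guarded (lazy) import pattern for an optional
# dependency. `parrot_link_sdk` is the doc's hypothetical SDK.
class ParrotLinkLLM:
    def __init__(self) -> None:
        try:
            import parrot_link_sdk  # imported only when the class is used
        except ImportError as e:
            # Re-raise with an actionable message telling users what to install.
            raise ImportError(
                "Could not import parrot-link-sdk. "
                "Please install it with `pip install parrot-link-sdk`."
            ) from e
        self._sdk = parrot_link_sdk
```

Because the import happens inside `__init__`, simply importing the module (as the package's top-level `__init__` does) never fails; only constructing the integration without the SDK installed does.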
Let's say we wanted to implement a chat model for Parrot Link AI. We would create a new file in `libs/community/langchain_community/chat_models/parrot_link.py` with the following code:
@@ -61,11 +62,11 @@ And add documentation to:
Partner packages can be hosted in the `LangChain` monorepo or in an external repo.
Partner package in the `LangChain` repo is placed in `libs/partners/{partner}`
and the package source code is in `libs/partners/{partner}/langchain_{partner}`.
A package is
installed by users with `pip install langchain-{partner}`, and the package members
can be imported with code like:
```python
@@ -142,11 +143,11 @@ to the relevant `docs/docs/integrations` directory in the monorepo root.
### (If Necessary) Deprecate community integration
Note: this is only necessary if you're migrating an existing community integration into
a partner package. If the component you're integrating is net-new to LangChain (i.e.
not already in the `community` package), you can skip this step.
Let's pretend we migrated our `ChatParrotLink` chat model from the community package to
the partner package. We would need to deprecate the old model in the community package.
We would do that by adding a `@deprecated` decorator to the old model as follows, in
@@ -165,15 +166,15 @@ After our change, it would look like this:
from langchain_core._api.deprecation import deprecated
@deprecated(
since="0.0.<next community version>",
removal="0.2.0",
alternative_import="langchain_parrot_link.ChatParrotLink"
)
class ChatParrotLink(BaseChatModel):
...
```
-You should do this for *each* component that you're migrating to the partner package.
+You should do this for _each_ component that you're migrating to the partner package.
### Additional steps
@@ -190,12 +191,12 @@ Maintainer steps (Contributors should **not** do these):
## Partner package in external repo
If you are creating a partner package in an external repo, you should follow the same steps as above,
but you will need to set up your own CI/CD and package management.
Name your package as `langchain-{partner}-{integration}`.
Still, you have to create the `libs/partners/{partner}-{integration}` folder in the `LangChain` monorepo
and add a `README.md` file with a link to the external repo.
See this [example](https://github.com/langchain-ai/langchain/tree/master/libs/partners/google-genai).
This allows keeping track of all the partner packages in the `LangChain` documentation.

@@ -1,6 +1,7 @@
---
sidebar_position: 0.5
---
# Repository Structure
If you plan on contributing to LangChain code or documentation, it can be useful
@@ -31,8 +32,8 @@ Here's the structure visualized as a tree:
The root directory also contains the following files:
-* `pyproject.toml`: Dependencies for building docs and linting docs, cookbook.
-* `Makefile`: A file that contains shortcuts for building, linting and docs and cookbook.
+- `pyproject.toml`: Dependencies for building docs and linting docs, cookbook.
+- `Makefile`: A file that contains shortcuts for building, linting and docs and cookbook.
There are other files in the root directory level, but their presence should be self-explanatory. Feel free to browse around!

@@ -46,11 +46,11 @@ If you add support for a new external API, please add a new integration test.
**Warning:** Almost no tests should be integration tests.
Tests that require making network connections make it difficult for other
developers to test the code.
Instead favor relying on `responses` library and/or mock.patch to mock
requests using small fixtures.
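The mocking approach recommended above can be sketched with the standard library alone. This is an illustrative example, not code from the repo: `fetch_greeting` is a made-up function that would normally hit the network, and `mock.patch` replaces `urllib.request.urlopen` with a small in-memory fixture so the test runs offline.

```python
# Illustration of testing network-dependent code without network access,
# using the standard library's unittest.mock.
from unittest import mock


def fetch_greeting(url: str) -> str:
    """Hypothetical helper that performs a real HTTP request."""
    import urllib.request

    with urllib.request.urlopen(url) as resp:  # network call to be mocked
        return resp.read().decode()


def test_fetch_greeting_without_network():
    # Build a small fake response object instead of a live HTTP response.
    fake = mock.MagicMock()
    fake.read.return_value = b"hello"
    fake.__enter__.return_value = fake  # support the `with ... as resp` usage
    with mock.patch("urllib.request.urlopen", return_value=fake):
        assert fetch_greeting("https://example.com") == "hello"
```

The same idea applies with the `responses` library for `requests`-based code: register a canned response for a URL, then exercise the code under test.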
To install dependencies for integration tests:
@@ -96,7 +96,6 @@ docker-compose -f elasticsearch.yml up
For environments that require more involved preparation, look for `*.sh` scripts. For instance,
`opensearch.sh` builds a required docker image and then launches opensearch.
### Prepare environment variables for local testing:
- copy `tests/integration_tests/.env.example` to `tests/integration_tests/.env`

@@ -8,4 +8,4 @@ import DocCardList from "@theme/DocCardList";
Example code for accomplishing common tasks with the LangChain Expression Language (LCEL). These examples show how to compose different Runnable (the core LCEL interface) components to achieve various tasks. If you're just getting acquainted with LCEL, the [Prompt + LLM](/docs/expression_language/cookbook/prompt_llm_parser) page is a good place to start.
<DocCardList />

@@ -6,4 +6,4 @@ sidebar_position: 2
import DocCardList from "@theme/DocCardList";
<DocCardList />

@@ -4,8 +4,8 @@
To install LangChain run:
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
import CodeBlock from "@theme/CodeBlock";
<Tabs>
@ -13,7 +13,9 @@ import CodeBlock from "@theme/CodeBlock";
<CodeBlock language="bash">pip install langchain</CodeBlock>
</TabItem>
<TabItem value="conda" label="Conda">
-<CodeBlock language="bash">conda install langchain -c conda-forge</CodeBlock>
+<CodeBlock language="bash">
+  conda install langchain -c conda-forge
+</CodeBlock>
</TabItem>
</Tabs>
@@ -30,6 +32,7 @@ pip install -e .
```
## LangChain community
The `langchain-community` package contains third-party integrations. It is automatically installed by `langchain`, but can also be used separately. Install with:
```bash
@@ -37,6 +40,7 @@ pip install langchain-community
```
## LangChain core
The `langchain-core` package contains base abstractions that the rest of the LangChain ecosystem uses, along with the LangChain Expression Language. It is automatically installed by `langchain`, but can also be used separately. Install with:
```bash
@ -44,6 +48,7 @@ pip install langchain-core
```
## LangChain experimental
The `langchain-experimental` package holds experimental LangChain code, intended for research and experimental uses.
Install with:
@@ -52,6 +57,7 @@ pip install langchain-experimental
```
## LangServe
LangServe helps developers deploy LangChain runnables and chains as a REST API.
LangServe is automatically installed by LangChain CLI.
If not using LangChain CLI, install with:
@@ -59,9 +65,11 @@ If not using LangChain CLI, install with:
```bash
pip install "langserve[all]"
```
for both client and server dependencies. Or `pip install "langserve[client]"` for client code, and `pip install "langserve[server]"` for server code.
## LangChain CLI
The LangChain CLI is useful for working with LangChain templates and other LangServe projects.
Install with:
@@ -70,6 +78,7 @@ pip install langchain-cli
```
## LangSmith SDK
The LangSmith SDK is automatically installed by LangChain.
If not using LangChain, install with:

@@ -5,27 +5,30 @@ sidebar_position: 0
# Introduction
**LangChain** is a framework for developing applications powered by language models. It enables applications that:
- **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)
- **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)
This framework consists of several parts.
- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- **[LangChain Templates](/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](/docs/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
-import ThemedImage from '@theme/ThemedImage';
+import ThemedImage from "@theme/ThemedImage";
<ThemedImage
alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
sources={{
-light: '/svg/langchain_stack.svg',
-dark: '/svg/langchain_stack_dark.svg',
+light: "/svg/langchain_stack.svg",
+dark: "/svg/langchain_stack_dark.svg",
}}
title="LangChain Framework Overview"
/>
Together, these products simplify the entire application lifecycle:
- **Develop**: Write your applications in LangChain/LangChain.js. Hit the ground running using Templates for reference.
- **Productionize**: Use LangSmith to inspect, test and monitor your chains, so that you can constantly improve and deploy with confidence.
- **Deploy**: Turn any chain into an API with LangServe.
@@ -33,12 +36,14 @@ Together, these products simplify the entire application lifecycle:
## LangChain Libraries
The main value props of the LangChain packages are:
1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
The LangChain libraries themselves are made up of several different packages.
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
@@ -66,39 +71,45 @@ LCEL is a declarative way to compose chains. LCEL was designed from day 1 to sup
- **[How-to](/docs/expression_language/how_to)**: Key features of LCEL
- **[Cookbook](/docs/expression_language/cookbook)**: Example code for accomplishing common tasks
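The composition idea behind LCEL can be made concrete with a toy model. This is not LangChain's implementation, just an illustration of the concept: each step exposes `invoke()`, and the `|` operator builds a new step that pipes one step's output into the next.

```python
# Toy model of LCEL-style composition via the `|` operator.
class Runnable:
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # `a | b` returns a step that runs a, then feeds the result to b.
        return Runnable(lambda value: other.invoke(self.invoke(value)))


prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
fake_llm = Runnable(str.upper)  # stands in for a model call

chain = prompt | fake_llm
# chain.invoke("bears") -> "TELL ME A JOKE ABOUT BEARS"
```

The real `Runnable` interface adds streaming, batching, and async support on top of this same compositional core.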
## Modules
LangChain provides standard, extendable interfaces and integrations for the following modules:
#### [Model I/O](/docs/modules/model_io/)
Interface with language models
#### [Retrieval](/docs/modules/data_connection/)
Interface with application-specific data
#### [Agents](/docs/modules/agents/)
Let models choose which tools to use given high-level directives
## Examples, ecosystem, and resources
### [Use cases](/docs/use_cases/question_answering/)
Walkthroughs and techniques for common end-to-end use cases, like:
- [Document question answering](/docs/use_cases/question_answering/)
- [Chatbots](/docs/use_cases/chatbots/)
- [Analyzing structured data](/docs/use_cases/sql/)
- and much more...
### [Integrations](/docs/integrations/providers/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/).
### [Guides](../guides/debugging.md)
Best practices for developing with LangChain.
### [API reference](https://api.python.langchain.com)
Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental Python packages.
### [Developer's guide](/docs/contributing)
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.

@@ -1,6 +1,7 @@
# Quickstart
In this quickstart we'll show you how to:
- Get setup with LangChain, LangSmith and LangServe
- Use the most basic and common components of LangChain: prompt templates, models, and output parsers
- Use LangChain Expression Language, the protocol that LangChain is built on and which facilitates component chaining
@@ -22,8 +23,8 @@ You do not NEED to go through the guide in a Jupyter Notebook, but it is recomme
To install LangChain run:
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
import CodeBlock from "@theme/CodeBlock";
<Tabs>
@@ -31,11 +32,12 @@ import CodeBlock from "@theme/CodeBlock";
<CodeBlock language="bash">pip install langchain</CodeBlock>
</TabItem>
<TabItem value="conda" label="Conda">
-<CodeBlock language="bash">conda install langchain -c conda-forge</CodeBlock>
+<CodeBlock language="bash">
+  conda install langchain -c conda-forge
+</CodeBlock>
</TabItem>
</Tabs>
For more details, see our [Installation guide](/docs/get_started/installation).
### LangSmith
@@ -105,10 +107,11 @@ llm = ChatOpenAI(openai_api_key="...")
First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:
- [Download](https://ollama.ai/download)
- Fetch a model via `ollama pull llama2`
Then, make sure the Ollama server is running. After that, you can do:
```python
from langchain_community.llms import Ollama
llm = Ollama(model="llama2")
```
We can now combine these into a simple LLM chain:
```python
chain = prompt | llm
```
We can now invoke it and ask the same question. It still won't know the answer, but it should respond in a tone more appropriate for a technical writer!
We've now successfully set up a basic LLM chain. We only touched on the basics of prompts, models, and output parsers - for a deeper dive into everything mentioned here, see [this section of documentation](/docs/modules/model_io).
## Retrieval Chain
In order to properly answer the original question ("how can langsmith help with testing?"), we need to provide additional context to the LLM.
We can do this via _retrieval_.
Retrieval is useful when you have **too much data** to pass to the LLM directly.
You can then use a retriever to fetch only the most relevant pieces and pass those in.
In this process, we will look up relevant documents from a _Retriever_ and then pass them into the prompt.
A Retriever can be backed by anything - a SQL table, the internet, etc - but in this instance we will populate a vector store and use that as a retriever. For more information on vectorstores, see [this documentation](/docs/modules/data_connection/vectorstores).
First, we need to load the data that we want to index. In order to do this, we will use the WebBaseLoader. This requires installing [BeautifulSoup](https://beautiful-soup-4.readthedocs.io/en/latest/):
```bash
pip install beautifulsoup4
```
After that, we can import and use WebBaseLoader.
```python
from langchain_community.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://docs.smith.langchain.com/user_guide")
```

```python
from langchain_community.embeddings import OllamaEmbeddings
embeddings = OllamaEmbeddings()
```
</TabItem>
<TabItem value="cohere" label="Cohere (API)" default>
```python
retriever_chain.invoke({
"input": "Tell me how"
})
```
You should see that this returns documents about testing in LangSmith. This is because the LLM generated a new query, combining the chat history with the follow-up question.
Now that we have this new retriever, we can create a new chain to continue the conversation with these retrieved documents in mind.
```python
retrieval_chain.invoke({
"input": "Tell me how"
})
```
We can see that this gives a coherent answer - we've successfully turned our retrieval chain into a chatbot!
## Agent
```python
retriever_tool = create_retriever_tool(
)
```
The search tool that we will use is [Tavily](/docs/integrations/retrievers/tavily). This will require an API key (they have a generous free tier). After creating it on their platform, you need to set it as an environment variable:
```shell
export TAVILY_API_KEY=...
```
If you do not want to set up an API key, you can skip creating this tool.
```python
tools = [retriever_tool, search]
```
Now that we have the tools, we can create an agent to use them. We will go over this pretty quickly - for a deeper dive into what exactly is going on, check out the [Agent's Getting Started documentation](/docs/modules/agents).
Install LangChain Hub first:
```bash
pip install langchainhub
```
We've now successfully set up a basic agent. We only touched on the basics of agents - for a deeper dive into everything mentioned here, see [this section of documentation](/docs/modules/agents).
## Serving with LangServe
Now that we've built an application, we need to serve it. That's where LangServe comes in.
LangServe helps developers deploy LangChain chains as a REST API.
While the first part of this guide was intended to be run in a Jupyter Notebook, we will now move out of that. We will be creating a Python file and then interacting with it from the command line.
Install with:
```bash
pip install "langserve[all]"
```
### Server
To create a server for our application we'll make a `serve.py` file. This will contain our logic for serving our application. It consists of three things:
1. The definition of our chain that we just built above
2. Our FastAPI app
3. A definition of a route from which to serve the chain, which is done with `langserve.add_routes`
```python
if __name__ == "__main__":
```
And that's it! If we execute this file:
```bash
python serve.py
```
we should see our chain being served at localhost:8000.
### Playground

If you're building with LLMs, at some point something will break, and you'll need to debug.
Here are a few different tools and functionalities to aid in debugging.
## Tracing
Platforms with tracing capabilities like [LangSmith](/docs/langsmith/) and [WandB](/docs/integrations/providers/wandb_tracing) are the most comprehensive solutions for debugging. These platforms make it easy to not only log and visualize LLM apps, but also to actively debug, test and refine them.
For anyone building production-grade LLM applications, we highly recommend using one of these platforms.
## `set_debug` and `set_verbose`
If you're prototyping in Jupyter Notebooks or running Python scripts, it can be helpful to print out the intermediate steps of a Chain run.
There are a number of ways to enable printing at varying degrees of verbosity.
Let's suppose we have a simple agent, and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:
```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(temperature=0)
tools = load_tools(["ddg-search", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
```
```python
agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?")
```
Setting the global `debug` flag will cause all LangChain components with callback support (chains, models, agents, tools, retrievers) to print the inputs they receive and outputs they generate. This is the most verbose setting and will fully log raw inputs and outputs.
```python
from langchain.globals import set_debug
set_debug(True)

agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?")
```
<CodeOutputBlock lang="python">
````
[chain/start] [1:RunTypeEnum.chain:AgentExecutor] Entering Chain run with input:
{
"input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?"
'The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.'
````
</CodeOutputBlock>
Setting the `verbose` flag will print out inputs and outputs in a slightly more readable format and will skip logging certain raw outputs (like the token usage stats for an LLM call) so that you can focus on application logic.
```python
from langchain.globals import set_verbose
set_verbose(True)

agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?")
```
<CodeOutputBlock lang="python">
````
> Entering new AgentExecutor chain...
> Entering new LLMChain chain...
Prompt after formatting:
Answer the following questions as best you can. You have access to the following tools:
duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.
Calculator: Useful for when you need to answer questions about math.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [duckduckgo_search, Calculator]
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?
Thought:
> Finished chain.
First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age.
Action: duckduckgo_search
Action Input: "Director of the 2023 film Oppenheimer"
Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named "Trinity". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.
Thought:
> Entering new LLMChain chain...
Prompt after formatting:
Answer the following questions as best you can. You have access to the following tools:
duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.
Calculator: Useful for when you need to answer questions about math.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [duckduckgo_search, Calculator]
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?
Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age.
Action: duckduckgo_search
Action Input: "Director of the 2023 film Oppenheimer"
Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named "Trinity". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.
Thought:
> Finished chain.
The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age.
Action: duckduckgo_search
Action Input: "Christopher Nolan birth date"
Observation: July 30, 1970 (age 52) London England Notable Works: "Dunkirk" "Tenet" "The Prestige" See all related content → Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about "the man who ...
Thought:
> Entering new LLMChain chain...
Prompt after formatting:
Answer the following questions as best you can. You have access to the following tools:
duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.
Calculator: Useful for when you need to answer questions about math.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [duckduckgo_search, Calculator]
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?
Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age.
Action: duckduckgo_search
Action Input: "Christopher Nolan birth date"
Observation: July 30, 1970 (age 52) London England Notable Works: "Dunkirk" "Tenet" "The Prestige" See all related content → Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about "the man who ...
Thought:
> Finished chain.
Christopher Nolan was born on July 30, 1970. Now I need to calculate his age in 2023 and then convert it into days.
Action: Calculator
Action Input: (2023 - 1970) * 365
> Entering new LLMMathChain chain...
(2023 - 1970) * 365
> Entering new LLMChain chain...
Prompt after formatting:
Translate a math problem into a expression that can be executed using Python's numexpr library. Use the output of running this code to answer the question.
Question: ${Question with math problem.}
```text
${single line mathematical expression that solves the problem}
```
...numexpr.evaluate(text)...
```output
${Output of running the code}
```
Answer: ${Answer}
Begin.
Question: What is 37593 * 67?
```text
37593 * 67
```
...numexpr.evaluate("37593 * 67")...
```output
2518731
```
Answer: 2518731
Question: 37593^(1/5)
```text
37593**(1/5)
```
...numexpr.evaluate("37593**(1/5)")...
```output
8.222831614237718
```
Answer: 8.222831614237718
Question: (2023 - 1970) * 365
> Finished chain.
```text
(2023 - 1970) * 365
```
...numexpr.evaluate("(2023 - 1970) * 365")...
Answer: 19345
> Finished chain.
Observation: Answer: 19345
Thought:
> Entering new LLMChain chain...
Prompt after formatting:
Answer the following questions as best you can. You have access to the following tools:
duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.
Calculator: Useful for when you need to answer questions about math.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [duckduckgo_search, Calculator]
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?
Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age.
Action: duckduckgo_search
Action Input: (2023 - 1970) * 365
Observation: Answer: 19345
Thought:
> Finished chain.
I now know the final answer
Final Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 53 years old in 2023. His age in days is 19345 days.
> Finished chain.
'The director of the 2023 film Oppenheimer is Christopher Nolan and he is 53 years old in 2023. His age in days is 19345 days.'
````
</CodeOutputBlock>
You can also scope verbosity down to a single object, in which case only the inputs and outputs to that object are printed (along with any additional callback calls made specifically by that object).
```python
# Passing verbose=True to initialize_agent will pass that along to the AgentExecutor (which is a Chain).
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)
agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?")
```
Observation: Answer: 19345
Thought:I now know the final answer
Final Answer: The director of the 2023 film Oppenheimer is Christopher Nolan. He is 53 years old in 2023, which is approximately 19345 days.
> Finished chain.

In today's fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it is crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:
- **Case 1: Utilizing External LLM Providers (OpenAI, Anthropic, etc.)**
In this scenario, most of the computational burden is handled by the LLM providers, while LangChain simplifies the implementation of business logic around these services. This approach includes features such as prompt templating, chat message generation, caching, vector embedding database creation, preprocessing, etc.
- **Case 2: Self-hosted Open-Source Models**
Alternatively, developers can opt to use smaller, yet comparably capable, self-hosted open-source LLM models. This approach can significantly decrease costs, latency, and privacy concerns associated with transferring data to external LLM providers.
Regardless of the framework that forms the backbone of your product, deploying LLM applications comes with its own set of challenges. It's vital to understand the trade-offs and key considerations when evaluating serving frameworks.
Monitoring forms an integral part of any system running in a production environment.
Your application may encounter errors such as exceptions in your model inference or business logic code, causing failures and disrupting traffic. Other potential issues could arise from the machine running your application, such as unexpected hardware breakdowns or loss of spot-instances during high-demand periods. One way to mitigate these risks is by increasing redundancy through replica scaling and implementing recovery mechanisms for failed replicas. However, model replicas aren't the only potential points of failure. It's essential to build resilience against various failures that could occur at any point in your stack.
### Zero downtime upgrade
System upgrades are often necessary but can result in service disruptions if not handled correctly. One way to prevent downtime during upgrades is by implementing a smooth transition process from the old version to the new one. Ideally, the new version of your LLM service is deployed, and traffic gradually shifts from the old to the new version, maintaining a constant QPS throughout the process.
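The traffic-shift idea can be sketched in a few lines of plain Python (an illustration of the rollout math only, not any particular deployment tool; the step count is arbitrary):

```python
def rollout_weights(steps: int) -> list[tuple[float, float]]:
    """Return (old_version_share, new_version_share) pairs for a linear
    traffic shift across `steps` increments. The shares always sum to 1.0,
    so overall QPS capacity stays constant throughout the upgrade."""
    return [(1 - i / steps, i / steps) for i in range(steps + 1)]

# Shift traffic in 4 steps: 100/0 -> 75/25 -> 50/50 -> 25/75 -> 0/100
for old, new in rollout_weights(4):
    print(f"old: {old:.0%}  new: {new:.0%}")
```

In practice this weight schedule would be enforced by your load balancer or service mesh, with health checks gating each step.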
### Load balancing
Load balancing, in simple terms, is a technique to distribute work evenly across multiple computers, servers, or other resources to optimize the utilization of the system, maximize throughput, minimize response time, and avoid overload of any single resource. Think of it as a traffic officer directing cars (requests) to different roads (servers) so that no single road becomes too congested.
There are several strategies for load balancing. For example, one common method is the _Round Robin_ strategy, where each request is sent to the next server in line, cycling back to the first when all servers have received a request. This works well when all servers are equally capable. However, if some servers are more powerful than others, you might use a _Weighted Round Robin_ or _Least Connections_ strategy, where more requests are sent to the more powerful servers, or to those currently handling the fewest active requests. Let's imagine you're running an LLM chain. If your application becomes popular, you could have hundreds or even thousands of users asking questions at the same time. If one server gets too busy (high load), the load balancer would direct new requests to another server that is less busy. This way, all your users get a timely response and the system remains stable.
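As a toy sketch of these two strategies in plain Python (server names and counts are made up; a real load balancer lives in your proxy or service mesh):

```python
import itertools

servers = ["server-a", "server-b", "server-c"]

# Round Robin: hand each new request to the next server in line.
round_robin = itertools.cycle(servers)
rr_assignments = [next(round_robin) for _ in range(5)]

# Least Connections: track active requests per server and pick the least loaded.
active = {server: 0 for server in servers}

def least_connections() -> str:
    server = min(active, key=active.get)
    active[server] += 1  # decrement again when the request completes
    return server

active["server-a"] = 5        # simulate one busy server
print(rr_assignments)         # cycles a, b, c, a, b
print(least_connections())    # avoids the busy server-a
```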
## Maintaining Cost-Efficiency and Scalability
Deploying LLM services can be costly, especially when you're handling a large volume of user interactions. Charges by LLM providers are usually based on tokens used, making a chat system inference on these models potentially expensive. However, several strategies can help manage these costs without compromising the quality of the service.
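For example, a back-of-the-envelope cost estimate for token-based billing can be computed up front (the per-token prices below are placeholders, not any provider's real rates):

```python
# Hypothetical per-1K-token prices -- substitute your provider's actual rates.
PRICE_PER_1K_INPUT = 0.0010   # dollars per 1,000 input (prompt) tokens
PRICE_PER_1K_OUTPUT = 0.0020  # dollars per 1,000 output (completion) tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single LLM call."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + (
        output_tokens / 1000
    ) * PRICE_PER_1K_OUTPUT

# One million chat requests averaging 500 prompt and 250 completion tokens:
monthly_cost = 1_000_000 * estimate_cost(500, 250)
print(f"${monthly_cost:,.2f}")
```

Running the numbers like this before launch makes the trade-off against self-hosting concrete.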
### Self-hosting models
Several smaller and open-source LLMs are emerging to tackle the issue of reliance on LLM providers. Self-hosting allows you to maintain similar quality to LLM provider models while managing costs. The challenge lies in building a reliable, high-performing LLM serving system on your own machines.
### Resource Management and Auto-Scaling
When self-hosting your models, you should consider independent scaling.
In the context of Large Language Models, batching requests can enhance efficiency by better utilizing your GPU resources. GPUs are inherently parallel processors, designed to handle multiple tasks simultaneously. If you send individual requests to the model, the GPU might not be fully utilized as it's only working on a single task at a time. On the other hand, by batching requests together, you're allowing the GPU to work on multiple tasks at once, maximizing its utilization and improving inference speed. This not only leads to cost savings but can also improve the overall latency of your LLM service.
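The grouping step itself is simple; a minimal sketch in plain Python (illustrative only -- serving frameworks such as Ray Serve or BentoML add queueing and timeout logic on top):

```python
from typing import Iterable, Iterator

def batched(requests: Iterable[str], batch_size: int) -> Iterator[list[str]]:
    """Group incoming requests into fixed-size batches so the GPU can
    process several prompts in one forward pass instead of one at a time."""
    batch: list[str] = []
    for request in requests:
        batch.append(request)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final, possibly smaller batch
        yield batch

prompts = [f"prompt-{i}" for i in range(10)]
batches = list(batched(prompts, batch_size=4))
# 10 prompts with batch_size=4 -> batches of sizes 4, 4, 2
```

In a live system the batcher would also cap how long a request waits for its batch to fill, trading a little latency for much better GPU utilization.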
In summary, managing costs while scaling your LLM services requires a strategic approach. Utilizing self-hosting models, managing resources effectively, employing auto-scaling, using spot instances, independently scaling models, and batching requests are key strategies to consider. Open-source libraries such as Ray Serve and BentoML are designed to deal with these complexities.
## Ensuring Rapid Iteration
Deploying systems like LangChain demands the ability to piece together different services and libraries.
Many hosted solutions are restricted to a single cloud provider, which can limit your options in today's multi-cloud world. Depending on where your other infrastructure components are built, you might prefer to stick with your chosen cloud provider.
## Infrastructure as Code (IaC)
Rapid iteration also involves the ability to recreate your infrastructure quickly and reliably. This is where Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Kubernetes YAML files come into play. They allow you to define your infrastructure in code files, which can be version controlled and quickly deployed, enabling faster and more reliable iterations.
## CI/CD
In a fast-paced environment, implementing CI/CD pipelines can significantly speed up the iteration process. They help automate the testing and deployment of your LLM applications, reducing the risk of errors and enabling faster feedback and iteration.

# LangChain Templates
For more information on LangChain Templates, visit
- [LangChain Templates Quickstart](https://github.com/langchain-ai/langchain/blob/master/templates/README.md)
- [LangChain Templates Index](https://github.com/langchain-ai/langchain/blob/master/templates/docs/INDEX.md)
- [Full List of Templates](https://github.com/langchain-ai/langchain/blob/master/templates/)

---
sidebar_position: 3
---
# Comparison Evaluators
Comparison evaluators in LangChain help compare two different chains or LLM outputs. These evaluators are helpful for comparative analyses, such as A/B testing between two language models, or comparing different versions of the same model. They can also be useful for things like generating preference scores for AI-assisted reinforcement learning.
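As a toy illustration of the underlying idea (not LangChain's actual evaluator interface), a preference score can be derived by tallying pairwise judgments across a set of examples:

```python
def preference_score(judgments: list[str]) -> dict[str, float]:
    """Turn per-example judgments of 'A', 'B', or 'tie' into a preference
    score for each model: the share of wins, counting ties as half a win."""
    n = len(judgments)
    wins_a = sum(
        1.0 if j == "A" else 0.5 if j == "tie" else 0.0 for j in judgments
    )
    return {"A": wins_a / n, "B": (n - wins_a) / n}

# Out of 10 head-to-head comparisons: A wins 6, B wins 3, 1 tie.
scores = preference_score(["A"] * 6 + ["B"] * 3 + ["tie"])
print(scores)  # A scores 0.65, B scores 0.35
```

A real comparison evaluator would produce each judgment with an LLM judge; the aggregation step is the same.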
Detailed information about creating custom evaluators and the available built-in evaluators can be found in the sections below.
import DocCardList from "@theme/DocCardList";
<DocCardList />

---
sidebar_position: 5
---
# Examples
🚧 _Docs under construction_ 🚧
Below are some examples for inspecting and checking different chains.
import DocCardList from "@theme/DocCardList";
<DocCardList />

import DocCardList from "@theme/DocCardList";
# Evaluation
Building applications with language models involves many moving parts. One of the most critical components is ensuring that the outcomes produced by your models are reliable and useful across a broad array of inputs, and that they work well with your application's other software components. Ensuring reliability usually boils down to some combination of application design, testing & evaluation, and runtime checks.
The guides in this section review the APIs and functionality LangChain provides to help you better evaluate your applications. Evaluation and testing are both critical when thinking about deploying LLM applications, since production environments require repeatable and useful outcomes.
@ -20,7 +20,6 @@ We also are working to share guides and cookbooks that demonstrate how to use th
- [Chain Comparisons](/docs/guides/evaluation/examples/comparisons): This example uses a comparison evaluator to predict the preferred output. It reviews ways to measure confidence intervals to select statistically significant differences in aggregate preference scores across different models or prompts.
## LangSmith Evaluation
LangSmith provides an integrated evaluation and tracing framework that allows you to check for regressions, compare systems, and easily identify and fix any sources of errors and performance issues. Check out the docs on [LangSmith Evaluation](https://docs.smith.langchain.com/evaluation) and additional [cookbooks](https://docs.smith.langchain.com/cookbook) for more detailed information on evaluating your applications.
@ -31,7 +30,7 @@ Your application quality is a function both of the LLM you choose and the prompt
- Agent tool use
- Retrieval-augmented question-answering
- Structured Extraction
Check out the docs for examples and leaderboard information.

@ -1,6 +1,7 @@
---
sidebar_position: 2
---
# String Evaluators
A string evaluator is a component within LangChain designed to assess the performance of a language model by comparing its generated outputs (predictions) to a reference string or an input. This comparison is a crucial step in the evaluation of language models, providing a measure of the accuracy or quality of the generated text.
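As an illustrative sketch (not LangChain's actual implementation), a minimal string evaluator might expose an `evaluate_strings` method that compares a prediction to a reference and returns a score dict; here the score is a simple character-level similarity ratio:

```python
from difflib import SequenceMatcher

class SimilarityStringEvaluator:
    """Toy string evaluator: scores a prediction against a reference
    string by character-level similarity (1.0 means identical)."""

    def evaluate_strings(self, *, prediction: str, reference: str) -> dict:
        score = SequenceMatcher(None, prediction, reference).ratio()
        return {"score": score}

evaluator = SimilarityStringEvaluator()
result = evaluator.evaluate_strings(
    prediction="The capital of France is Paris.",
    reference="The capital of France is Paris.",
)
# identical strings -> score of 1.0
```

LangChain's built-in string evaluators follow the same prediction-vs-reference shape but use richer criteria, such as embedding distance or LLM-graded rubrics.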

@ -1,6 +1,7 @@
---
sidebar_position: 4
---
# Trajectory Evaluators
Trajectory Evaluators in LangChain provide a more holistic approach to evaluating an agent. These evaluators assess the full sequence of actions taken by an agent and their corresponding responses, which we refer to as the "trajectory". This allows you to better measure an agent's effectiveness and capabilities.
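The core idea can be sketched in plain Python (illustrative only, not LangChain's built-in trajectory evaluators): given the sequence of steps an agent took, score the whole trajectory rather than just the final answer:

```python
def evaluate_trajectory(trajectory, expected_tools):
    """Toy trajectory scorer: the fraction of expected tools the agent
    actually invoked. Each step is a (tool_name, tool_input, observation)."""
    tools_used = {step[0] for step in trajectory}
    hits = sum(1 for tool in expected_tools if tool in tools_used)
    return {"score": hits / len(expected_tools), "tools_used": sorted(tools_used)}

trajectory = [
    ("search", "weather in SF", "Sunny, 21C"),
    ("calculator", "21 * 9 / 5 + 32", "69.8"),
]
result = evaluate_trajectory(trajectory, expected_tools=["search", "calculator"])
# both expected tools were used, so the score is 1.0
```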
@ -25,4 +26,3 @@ For a deeper dive into the implementation and use of Trajectory Evaluators, refe
import DocCardList from "@theme/DocCardList";
<DocCardList />

@ -6,10 +6,11 @@
## LangChain Pydantic migration plan
As of `langchain>=0.0.267`, LangChain will allow users to install either Pydantic V1 or V2.
- Internally LangChain will continue to [use V1](https://docs.pydantic.dev/latest/migration/#continue-using-pydantic-v1-features).
- During this time, users can pin their pydantic version to v1 to avoid breaking changes, or start a partial
migration using pydantic v2 throughout their code, but avoiding mixing v1 and v2 code for LangChain (see below).
Users can either pin to pydantic v1 and upgrade their code in one go once LangChain has migrated to v2 internally, or start a partial migration to v2, but they must avoid mixing v1 and v2 code for LangChain.
@ -18,7 +19,7 @@ the case of inheritance and in the case of passing objects to LangChain.
**Example 1: Extending via inheritance**
**YES**
```python
from pydantic.v1 import root_validator, validator
@ -33,7 +34,7 @@ class CustomTool(BaseTool): # BaseTool is v1 code
@classmethod
def validate_x(cls, x: int) -> int:
return 1
CustomTool(
name='custom_tool',
@ -44,7 +45,7 @@ CustomTool(
Mixing Pydantic v2 primitives with Pydantic v1 primitives can raise cryptic errors
**NO**
```python
from pydantic import Field, field_validator # pydantic v2
@ -59,9 +60,9 @@ class CustomTool(BaseTool): # BaseTool is v1 code
@classmethod
def validate_x(cls, x: int) -> int:
return 1
CustomTool(
name='custom_tool',
description="hello",
x=1,
@ -102,4 +103,4 @@ Tool.from_function( # <-- tool uses v1 namespace
description="useful for when you need to answer questions about math",
args_schema=CalculatorInput
)
```

@ -2,14 +2,13 @@
This example shows the Self-critique chain with `Constitutional AI`.
The `ConstitutionalChain` is a chain that ensures the output of a language model adheres
to a predefined set of constitutional principles. By incorporating specific rules and guidelines,
the `ConstitutionalChain` filters and modifies the generated content to align with these principles,
thus providing more controlled, ethical, and contextually appropriate responses.
This mechanism helps maintain the integrity of the output while minimizing the risk of generating
content that may violate guidelines, be offensive, or deviate from the desired context.
```python
# Imports
from langchain_openai import OpenAI
@ -47,7 +46,6 @@ evil_qa_chain.run(question="How can I steal kittens?")
</CodeOutputBlock>
```python
principles = ConstitutionalChain.get_principles(["illegal"])
constitutional_chain = ConstitutionalChain.from_llm(
@ -92,7 +90,6 @@ We also have built-in support for the Unified Objectives proposed in this paper:
Some of these are useful for the same idea of correcting ethical issues.
```python
principles = ConstitutionalChain.get_principles(["uo-ethics-1"])
constitutional_chain = ConstitutionalChain.from_llm(
@ -133,7 +130,6 @@ constitutional_chain.run(question="How can I steal kittens?")
But they can also be used for a wide variety of tasks, including encouraging the LLM to list out supporting evidence
```python
qa_prompt = PromptTemplate(
template="""Question: {question}
@ -158,7 +154,6 @@ qa_chain.run(question=query)
</CodeOutputBlock>
```python
principles = ConstitutionalChain.get_principles(["uo-implications-1"])
constitutional_chain = ConstitutionalChain.from_llm(
@ -201,7 +196,6 @@ constitutional_chain.run(query)
We can easily add in custom principles.
```python
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
@ -249,7 +243,6 @@ constitutional_chain.run(question="How can I steal kittens?")
We can also run multiple principles sequentially. Let's make the model talk like Master Yoda.
```python
master_yoda_principle = ConstitutionalPrinciple(
name='Master Yoda Principle',
@ -303,7 +296,6 @@ constitutional_chain.run(question="How can I steal kittens?")
You can also get the constitutional chain to return its intermediate steps.
```python
constitutional_chain = ConstitutionalChain.from_llm(
chain=evil_qa_chain,
@ -350,7 +342,6 @@ constitutional_chain({"question":"How can I steal kittens?"})
We can also see that the chain recognizes when no revision is necessary.
```python
good_qa_prompt = PromptTemplate(
template="""You are good and must only give good and ethical answers. If someone asks you how to do something illegal, make sure you respond in an ethical and legal way.
@ -376,7 +367,6 @@ good_qa_chain.run(question="How can I steal kittens?")
</CodeOutputBlock>
```python
constitutional_chain = ConstitutionalChain.from_llm(
chain=good_qa_chain,
@ -417,12 +407,10 @@ constitutional_chain({"question":"How can I steal kittens?"})
For a list of all principles, see:
```python
from langchain.chains.constitutional_ai.principles import PRINCIPLES
```
```python
PRINCIPLES
```

@ -4,6 +4,6 @@ One of the key concerns with using LLMs is that they may generate harmful or une
- [Amazon Comprehend moderation chain](/docs/guides/safety/amazon_comprehend_chain): Use [Amazon Comprehend](https://aws.amazon.com/comprehend/) to detect and handle Personally Identifiable Information (PII) and toxicity.
- [Constitutional chain](/docs/guides/safety/constitutional_chain): Prompt the model with a set of principles which should guide the model behavior.
- [Hugging Face prompt injection identification](/docs/guides/safety/hugging_face_prompt_injection): Detect and handle prompt injection attacks.
- [Logical Fallacy chain](/docs/guides/safety/logical_fallacy_chain): Checks the model output against logical fallacies to correct any deviation.
- [Moderation chain](/docs/guides/safety/moderation): Check if any output text is harmful and flag it.

@ -4,18 +4,17 @@ This example shows how to remove logical fallacies from model output.
## Logical Fallacies
`Logical fallacies` are flawed reasoning or false arguments that can undermine the validity of a model's outputs.
Examples include circular reasoning, false
dichotomies, ad hominem attacks, etc. Machine learning models are optimized to perform well on specific metrics like accuracy, perplexity, or loss. However,
optimizing for metrics alone does not guarantee logically sound reasoning.
Language models can learn to exploit flaws in reasoning to generate plausible-sounding but logically invalid arguments. When models rely on fallacies, their outputs become unreliable and untrustworthy, even if they achieve high scores on metrics. Users cannot depend on such outputs. Propagating logical fallacies can spread misinformation, confuse users, and lead to harmful real-world consequences when models are deployed in products or services.
Monitoring and testing specifically for logical flaws is challenging unlike other quality issues. It requires reasoning about arguments rather than pattern matching.
Therefore, it is crucial that model developers proactively address logical fallacies after optimizing metrics. Specialized techniques like causal modeling, robustness testing, and bias mitigation can help avoid flawed reasoning. Overall, allowing logical flaws to persist makes models less safe and ethical. Eliminating fallacies ensures model outputs remain logically valid and aligned with human reasoning. This maintains user trust and mitigates risks.
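To make the contrast with pattern matching concrete, here is a deliberately naive regex-based pass. It is purely illustrative, and is exactly the kind of surface check the paragraph above argues is insufficient:

```python
import re

# Deliberately naive: flags a few phrasings loosely associated with
# common fallacies. Real fallacy detection requires reasoning about
# the structure of the argument, not surface patterns.
FALLACY_PATTERNS = {
    "circular_reasoning": re.compile(r"\bbecause (it|that) (is|was) (true|so)\b", re.I),
    "ad_hominem": re.compile(r"\byou (can't|cannot) trust\b.+\bbecause (he|she|they)\b", re.I),
}

def flag_fallacies(text: str) -> list:
    return [name for name, pattern in FALLACY_PATTERNS.items() if pattern.search(text)]

flags = flag_fallacies("We know the claim is correct because it is true.")
# -> ["circular_reasoning"]
```

A regex pass like this misses any fallacy phrased differently, which is why the `FallacyChain` below asks a model to critique and revise the reasoning instead.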
## Example
@ -51,7 +50,6 @@ misleading_chain.run(question="How do I know the earth is round?")
</CodeOutputBlock>
```python
fallacies = FallacyChain.get_fallacies(["correction"])
fallacy_chain = FallacyChain.from_llm(

@ -1,15 +1,15 @@
# Moderation chain
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so.
Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be applied both to user input and to the output of a Language Model.
Some API providers, like OpenAI, [specifically prohibit](https://beta.openai.com/docs/usage-policies/use-case-policy) you, or your end users, from generating some
types of harmful content. To comply with this (and to just generally prevent your application from being harmful)
you may often want to append a moderation chain to any LLMChains, in order to make sure any output
the LLM generates is not harmful.
If the content passed into the moderation chain is harmful, there is no single best way to handle it; the right approach depends on your application. Sometimes you may want to throw an error in the Chain
(and have your application handle that). Other times, you may want to return something to
the user explaining that the text was harmful. There could be other ways to handle it.
We will cover all these ways in this walkthrough.
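Before the LangChain-specific examples, the two handling strategies can be sketched in plain Python (illustrative only; the flagged message mirrors, but is not exactly, what `OpenAIModerationChain` returns):

```python
class HarmfulContentError(ValueError):
    """Raised when flagged text should halt the chain."""

def handle_moderation(text: str, flagged: bool, *, raise_error: bool = False) -> str:
    """Toy dispatcher for the two strategies: raise an error the
    application can catch, or return an explanatory message instead."""
    if not flagged:
        return text
    if raise_error:
        raise HarmfulContentError("Text violates the content policy.")
    return "Text was found that violates the content policy."

safe = handle_moderation("This is okay", flagged=False)
blocked = handle_moderation("I will kill you", flagged=True)
```

`OpenAIModerationChain` implements the same choice via its `error` flag, as shown below.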
@ -18,9 +18,6 @@ We'll show:
1. How to run any piece of text through a moderation chain.
2. How to append a Moderation chain to an LLMChain.
```python
from langchain_openai import OpenAI
from langchain.chains import OpenAIModerationChain, SequentialChain, LLMChain, SimpleSequentialChain
@ -29,10 +26,9 @@ from langchain.prompts import PromptTemplate
## How to use the moderation chain
Here's an example of using the moderation chain with default settings (will return a string
explaining stuff was flagged).
```python
moderation_chain = OpenAIModerationChain()
@ -47,7 +43,6 @@ moderation_chain.run("This is okay")
</CodeOutputBlock>
```python
moderation_chain.run("I will kill you")
```
@ -62,7 +57,6 @@ moderation_chain.run("I will kill you")
Here's an example of using the moderation chain to throw an error.
```python
moderation_chain_error = OpenAIModerationChain(error=True)
@ -77,7 +71,6 @@ moderation_chain_error.run("This is okay")
</CodeOutputBlock>
```python
moderation_chain_error.run("I will kill you")
```
@ -133,10 +126,9 @@ moderation_chain_error.run("I will kill you")
## How to create a custom Moderation chain
Here's an example of creating a custom moderation chain with a custom error message.
It requires some knowledge of OpenAI's moderation endpoint results. See [docs here](https://beta.openai.com/docs/api-reference/moderations).
```python
class CustomModeration(OpenAIModerationChain):
def _moderate(self, text: str, results: dict) -> str:
@ -158,7 +150,6 @@ custom_moderation.run("This is okay")
</CodeOutputBlock>
```python
custom_moderation.run("I will kill you")
```
@ -175,10 +166,9 @@ custom_moderation.run("I will kill you")
To easily combine a moderation chain with an LLMChain, you can use the `SequentialChain` abstraction.
Let's start with a simple example of where the `LLMChain` only has a single input. For this purpose,
we will prompt the model, so it says something harmful.
```python
prompt = PromptTemplate.from_template("{text}")
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="gpt-3.5-turbo-instruct"), prompt=prompt)
@ -204,7 +194,6 @@ llm_chain.run(text)
</CodeOutputBlock>
```python
chain = SimpleSequentialChain(chains=[llm_chain, moderation_chain])
@ -221,7 +210,6 @@ chain.run(text)
Now let's walk through an example of using it with an LLMChain which has multiple inputs (a bit more tricky because we can't use the SimpleSequentialChain)
```python
prompt = PromptTemplate.from_template("{setup}{new_input}Person2:")
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="gpt-3.5-turbo-instruct"), prompt=prompt)
@ -248,7 +236,6 @@ llm_chain(inputs, return_only_outputs=True)
</CodeOutputBlock>
```python
# Setting the input/output keys so it lines up
moderation_chain.input_key = "text"

@ -1,6 +1,6 @@
# LLMonitor
> [LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
<video controls width='100%' >
<source src='https://llmonitor.com/videos/demo-annotated.mp4'/>
@ -101,6 +101,7 @@ agent.run(
```
## User Tracking
User tracking allows you to identify your users, track their cost, conversations and more.
```python
@ -112,6 +113,7 @@ with identify("user-123"):
with identify("user-456", user_props={"email": "user456@test.com"}):
    agent.run("Who is Leo DiCaprio's girlfriend?")
```
## Support
For any question or issue with integration you can reach out to the LLMonitor team on [Discord](http://discord.com/invite/8PafSG58kK) or via [email](mailto:vince@llmonitor.com).

@ -33,11 +33,11 @@ Read more in the [ChatAnthropic documentation](/docs/integrations/chat/anthropic
## `AnthropicLLM`
`AnthropicLLM` is a subclass of LangChain's `LLM`. It is a wrapper around Anthropic's
text-based completion endpoints.
```python
from langchain_anthropic import AnthropicLLM
model = AnthropicLLM(model='claude-2.1')
```

@ -6,17 +6,16 @@ The `LangChain` integrations related to [Amazon AWS](https://aws.amazon.com/) pl
### Bedrock
> [Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that offers a choice of
> high-performing foundation models (FMs) from leading AI companies like `AI21 Labs`, `Anthropic`, `Cohere`,
> `Meta`, `Stability AI`, and `Amazon` via a single API, along with a broad set of capabilities you need to
> build generative AI applications with security, privacy, and responsible AI. Using `Amazon Bedrock`,
> you can easily experiment with and evaluate top FMs for your use case, privately customize them with
> your data using techniques such as fine-tuning and `Retrieval Augmented Generation` (`RAG`), and build
> agents that execute tasks using your enterprise systems and data sources. Since `Amazon Bedrock` is
> serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy
> generative AI capabilities into your applications using the AWS services you are already familiar with.
See a [usage example](/docs/integrations/llms/bedrock).
```python
@ -25,16 +24,16 @@ from langchain_community.llms.bedrock import Bedrock
### Amazon API Gateway
> [Amazon API Gateway](https://aws.amazon.com/api-gateway/) is a fully managed service that makes it easy for
> developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door"
> for applications to access data, business logic, or functionality from your backend services. Using
> `API Gateway`, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication
> applications. `API Gateway` supports containerized and serverless workloads, as well as web applications.
>
> `API Gateway` handles all the tasks involved in accepting and processing up to hundreds of thousands of
> concurrent API calls, including traffic management, CORS support, authorization and access control,
> throttling, monitoring, and API version management. `API Gateway` has no minimum fees or startup costs.
> You pay for the API calls you receive and the amount of data transferred out and, with the `API Gateway`
> tiered pricing model, you can reduce your cost as your API usage scales.
See a [usage example](/docs/integrations/llms/amazon_api_gateway).
@ -45,7 +44,7 @@ from langchain_community.llms import AmazonAPIGateway
### SageMaker Endpoint
> [Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a system that can build, train, and deploy
> machine learning (ML) models with fully managed infrastructure, tools, and workflows.
We use `SageMaker` to host our model and expose it as the `SageMaker Endpoint`.
@ -72,6 +71,7 @@ from langchain_community.chat_models import BedrockChat
### Bedrock
See a [usage example](/docs/integrations/text_embedding/bedrock).
```python
from langchain_community.embeddings import BedrockEmbeddings
```
@ -79,6 +79,7 @@ from langchain_community.embeddings import BedrockEmbeddings
### SageMaker Endpoint
See a [usage example](/docs/integrations/text_embedding/sagemaker-endpoint).
```python
from langchain_community.embeddings import SagemakerEndpointEmbeddings
from langchain_community.llms.sagemaker_endpoint import ContentHandlerBase
@ -88,10 +89,9 @@ from langchain_community.llms.sagemaker_endpoint import ContentHandlerBase
### AWS S3 Directory and File
> [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html)
> is an object storage service.
> [AWS S3 Directory](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html)
> [AWS S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html)
See a [usage example for S3DirectoryLoader](/docs/integrations/document_loaders/aws_s3_directory).
@ -103,7 +103,7 @@ from langchain_community.document_loaders import S3DirectoryLoader, S3FileLoader
### Amazon Textract
> [Amazon Textract](https://docs.aws.amazon.com/managedservices/latest/userguide/textract.html) is a machine
> learning (ML) service that automatically extracts text, handwriting, and data from scanned documents.
See a [usage example](/docs/integrations/document_loaders/amazon_textract).
@ -116,11 +116,11 @@ from langchain_community.document_loaders import AmazonTextractPDFLoader
### Amazon OpenSearch Service
> [Amazon OpenSearch Service](https://aws.amazon.com/opensearch-service/) performs
> interactive log analytics, real-time application monitoring, website search, and more. `OpenSearch` is
> an open source,
> distributed search and analytics suite derived from `Elasticsearch`. `Amazon OpenSearch Service` offers the
> latest versions of `OpenSearch`, support for many versions of `Elasticsearch`, as well as
> visualization capabilities powered by `OpenSearch Dashboards` and `Kibana`.
We need to install several python libraries.
@ -137,7 +137,7 @@ from langchain_community.vectorstores import OpenSearchVectorSearch
### Amazon DocumentDB Vector Search
> [Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud.
> With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB.
> Vector search for Amazon DocumentDB combines the flexibility and rich querying capability of a JSON-based document database with the power of vector search.
@ -167,14 +167,14 @@ from langchain.vectorstores import DocumentDBVectorSearch
### Amazon Kendra
> [Amazon Kendra](https://docs.aws.amazon.com/kendra/latest/dg/what-is-kendra.html) is an intelligent search service
> provided by `Amazon Web Services` (`AWS`). It utilizes advanced natural language processing (NLP) and machine
> learning algorithms to enable powerful search capabilities across various data sources within an organization.
> `Kendra` is designed to help users find the information they need quickly and accurately,
> improving productivity and decision-making.
> With `Kendra`, we can search across a wide range of content types, including documents, FAQs, knowledge bases,
> manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and
> contextual meanings to provide highly relevant search results.
We need to install the `boto3` library.
@ -191,8 +191,8 @@ from langchain.retrievers import AmazonKendraRetriever
### Amazon Bedrock (Knowledge Bases)
> [Knowledge bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) is an
> `Amazon Web Services` (`AWS`) offering which lets you quickly build RAG applications by using your
> private data to customize foundation model response.
We need to install the `boto3` library.
@ -211,10 +211,10 @@ from langchain.retrievers import AmazonKnowledgeBasesRetriever
### AWS Lambda
> [`Amazon AWS Lambda`](https://aws.amazon.com/pm/lambda/) is a serverless computing service provided by
> `Amazon Web Services` (`AWS`). It helps developers to build and run applications and services without
> provisioning or managing servers. This serverless architecture enables you to focus on writing and
> deploying code, while AWS automatically takes care of scaling, patching, and managing the
> infrastructure required to run your applications.
We need to install the `boto3` python library.
@ -229,10 +229,10 @@ See a [usage example](/docs/integrations/tools/awslambda).
### AWS DynamoDB
> [AWS DynamoDB](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/index.html)
> is a fully managed `NoSQL` database service that provides fast and predictable performance with seamless scalability.
We have to configure the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
We need to install the `boto3` library.
@ -250,13 +250,13 @@ from langchain.memory import DynamoDBChatMessageHistory
### SageMaker Tracking
> [Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly
> and easily build, train and deploy machine learning (ML) models.
> [Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability
> of `Amazon SageMaker` that lets you organize, track,
> compare and evaluate ML experiments and model versions.
We need to install several python libraries.
```bash
@ -273,10 +273,9 @@ from langchain.callbacks import SageMakerCallbackHandler
### Amazon Comprehend Moderation Chain
> [Amazon Comprehend](https://aws.amazon.com/comprehend/) is a natural-language processing (NLP) service that
> uses machine learning to uncover valuable insights and connections in text.
We need to install the `boto3` and `nltk` libraries.
```bash
@ -36,7 +36,6 @@ See a [usage example](/docs/integrations/llms/google_vertex_ai_palm#vertex-model
from langchain_google_vertexai import VertexAIModelGarden
```
## Chat models
### Google Generative AI
@ -175,7 +174,7 @@ from langchain_community.document_loaders import BigQueryLoader
### Bigtable
> [Google Cloud Bigtable](https://cloud.google.com/bigtable/docs) is Google's fully managed NoSQL Big Data database service in Google Cloud.
> Install the python package:
```bash
pip install langchain-google-bigtable
@ -190,7 +189,7 @@ from langchain_google_bigtable import BigtableLoader
### Cloud SQL for MySQL
> [Google Cloud SQL for MySQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your MySQL relational databases on Google Cloud.
> Install the python package:
```bash
pip install langchain-google-cloud-sql-mysql
@ -205,7 +204,7 @@ from langchain_google_cloud_sql_mysql import MySQLEngine, MySQLDocumentLoader
### Cloud SQL for SQL Server
> [Google Cloud SQL for SQL Server](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your SQL Server databases on Google Cloud.
> Install the python package:
```bash
pip install langchain-google-cloud-sql-mssql
@ -220,7 +219,7 @@ from langchain_google_cloud_sql_mssql import MSSQLEngine, MSSQLLoader
### Cloud SQL for PostgreSQL
> [Google Cloud SQL for PostgreSQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud.
> Install the python package:
```bash
pip install langchain-google-cloud-sql-pg
@ -234,7 +233,7 @@ from langchain_google_cloud_sql_pg import PostgreSQLEngine, PostgreSQLLoader
### Cloud Storage
> [Cloud Storage](https://en.wikipedia.org/wiki/Google_Cloud_Storage) is a managed service for storing unstructured data in Google Cloud.
We need to install the `google-cloud-storage` python package.
@ -249,6 +248,7 @@ See a [usage example](/docs/integrations/document_loaders/google_cloud_storage_d
```python
from langchain_community.document_loaders import GCSDirectoryLoader
```
See a [usage example](/docs/integrations/document_loaders/google_cloud_storage_file).
```python
@ -258,8 +258,8 @@ from langchain_community.document_loaders import GCSFileLoader
### El Carro for Oracle Workloads
> Google [El Carro Oracle Operator](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator)
> offers a way to run Oracle databases in Kubernetes as a portable, open source,
> community driven, no vendor lock-in container orchestration system.
```bash
pip install langchain-google-el-carro
@ -273,7 +273,7 @@ from langchain_google_el_carro import ElCarroLoader
### Google Drive
> [Google Drive](https://en.wikipedia.org/wiki/Google_Drive) is a file storage and synchronization service developed by Google.
Currently, only `Google Docs` are supported.
@ -292,7 +292,7 @@ from langchain_community.document_loaders import GoogleDriveLoader
### Firestore (Native Mode)
> [Google Cloud Firestore](https://cloud.google.com/firestore/docs/) is a NoSQL document database built for automatic scaling, high performance, and ease of application development.
> Install the python package:
```bash
pip install langchain-google-firestore
@ -308,7 +308,7 @@ from langchain_google_firestore import FirestoreLoader
> [Google Cloud Firestore in Datastore mode](https://cloud.google.com/datastore/docs) is a NoSQL document database built for automatic scaling, high performance, and ease of application development.
> Firestore is the newest version of Datastore and introduces several improvements over Datastore.
> Install the python package:
```bash
pip install langchain-google-datastore
@ -323,7 +323,7 @@ from langchain_google_datastore import DatastoreLoader
### Memorystore for Redis
> [Google Cloud Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis) is a fully managed Redis service for Google Cloud. Applications running on Google Cloud can achieve extreme performance by leveraging the highly scalable, available, secure Redis service without the burden of managing complex Redis deployments.
> Install the python package:
```bash
pip install langchain-google-memorystore-redis
@ -338,7 +338,7 @@ from langchain_google_memorystore_redis import MemorystoreLoader
### Spanner
> [Google Cloud Spanner](https://cloud.google.com/spanner/docs) is a fully managed, mission-critical, relational database service on Google Cloud that offers transactional consistency at global scale, automatic, synchronous replication for high availability, and support for two SQL dialects: GoogleSQL (ANSI 2011 with extensions) and PostgreSQL.
> Install the python package:
```bash
pip install langchain-google-spanner
@ -372,7 +372,7 @@ from langchain_community.document_loaders import GoogleSpeechToTextLoader
### Document AI
> [Google Cloud Document AI](https://cloud.google.com/document-ai/docs/overview) is a Google Cloud
> service that transforms unstructured data from documents into structured data, making it easier
> to understand, analyze, and consume.
@ -459,7 +459,7 @@ from langchain.vectorstores import BigQueryVectorSearch
### Memorystore for Redis
> [Google Cloud Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis) is a fully managed Redis service for Google Cloud. Applications running on Google Cloud can achieve extreme performance by leveraging the highly scalable, available, secure Redis service without the burden of managing complex Redis deployments.
> Install the python package:
```bash
pip install langchain-google-memorystore-redis
@ -474,7 +474,7 @@ from langchain_google_memorystore_redis import RedisVectorStore
### Spanner
> [Google Cloud Spanner](https://cloud.google.com/spanner/docs) is a fully managed, mission-critical, relational database service on Google Cloud that offers transactional consistency at global scale, automatic, synchronous replication for high availability, and support for two SQL dialects: GoogleSQL (ANSI 2011 with extensions) and PostgreSQL.
> Install the python package:
```bash
pip install langchain-google-spanner
@ -489,7 +489,7 @@ from langchain_google_spanner import SpannerVectorStore
### Cloud SQL for PostgreSQL
> [Google Cloud SQL for PostgreSQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud.
> Install the python package:
```bash
pip install langchain-google-cloud-sql-pg
@ -522,12 +522,12 @@ from langchain_google_vertexai import VectorSearchVectorStore
### ScaNN
> [Google ScaNN](https://github.com/google-research/google-research/tree/master/scann)
> (Scalable Nearest Neighbors) is a python package.
>
> `ScaNN` is a method for efficient vector similarity search at scale.
> `ScaNN` includes search space pruning and quantization for Maximum Inner
> Product Search and also supports other distance functions such as
> Euclidean distance. The implementation is optimized for x86 processors
> with AVX2 support. See its [Google Research github](https://github.com/google-research/google-research/tree/master/scann)
@ -599,7 +599,7 @@ documents = docai_wh_retriever.get_relevant_documents(
### Text-to-Speech
> [Google Cloud Text-to-Speech](https://cloud.google.com/text-to-speech) is a Google Cloud service that enables developers to
> synthesize natural-sounding speech with 100+ voices, available in multiple languages and variants.
> It applies DeepMind's groundbreaking research in WaveNet and Google's powerful neural networks
> to deliver the highest fidelity possible.
@ -703,7 +703,7 @@ from langchain_community.utilities.google_scholar import GoogleScholarAPIWrapper
- Set up a Custom Search Engine, following [these instructions](https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search)
- Get an API Key and Custom Search Engine ID from the previous step, and set them as environment variables
`GOOGLE_API_KEY` and `GOOGLE_CSE_ID` respectively.
```python
from langchain_community.utilities import GoogleSearchAPIWrapper
@ -738,7 +738,7 @@ from langchain_community.utilities.google_trends import GoogleTrendsAPIWrapper
### GMail
> [Google Gmail](https://en.wikipedia.org/wiki/Gmail) is a free email service provided by Google.
> This toolkit works with emails through the `Gmail API`.
We need to install several python packages.
@ -773,7 +773,7 @@ from langchain_google_alloydb_pg import AlloyDBEngine, AlloyDBChatMessageHistory
### Cloud SQL for PostgreSQL
> [Cloud SQL for PostgreSQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud.
> Install the python package:
```bash
pip install langchain-google-cloud-sql-pg
@ -781,7 +781,6 @@ pip install langchain-google-cloud-sql-pg
See [usage example](/docs/integrations/memory/google_sql_pg).
```python
from langchain_google_cloud_sql_pg import PostgreSQLEngine, PostgreSQLChatMessageHistory
```
@ -789,7 +788,7 @@ from langchain_google_cloud_sql_pg import PostgreSQLEngine, PostgreSQLChatMessag
### Cloud SQL for MySQL
> [Cloud SQL for MySQL](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your MySQL relational databases on Google Cloud.
> Install the python package:
```bash
pip install langchain-google-cloud-sql-mysql
@ -804,7 +803,7 @@ from langchain_google_cloud_sql_mysql import MySQLEngine, MySQLChatMessageHistor
### Cloud SQL for SQL Server
> [Cloud SQL for SQL Server](https://cloud.google.com/sql) is a fully-managed database service that helps you set up, maintain, manage, and administer your SQL Server databases on Google Cloud.
> Install the python package:
```bash
pip install langchain-google-cloud-sql-mssql
@ -819,7 +818,7 @@ from langchain_google_cloud_sql_mssql import MSSQLEngine, MSSQLChatMessageHistor
### Spanner
> [Google Cloud Spanner](https://cloud.google.com/spanner/docs) is a fully managed, mission-critical, relational database service on Google Cloud that offers transactional consistency at global scale, automatic, synchronous replication for high availability, and support for two SQL dialects: GoogleSQL (ANSI 2011 with extensions) and PostgreSQL.
> Install the python package:
```bash
pip install langchain-google-spanner
@ -834,7 +833,7 @@ from langchain_google_spanner import SpannerChatMessageHistory
### Memorystore for Redis
> [Google Cloud Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis) is a fully managed Redis service for Google Cloud. Applications running on Google Cloud can achieve extreme performance by leveraging the highly scalable, available, secure Redis service without the burden of managing complex Redis deployments.
> Install the python package:
```bash
pip install langchain-google-memorystore-redis
@ -849,7 +848,7 @@ from langchain_google_memorystore_redis import MemorystoreChatMessageHistory
### Bigtable
> [Google Cloud Bigtable](https://cloud.google.com/bigtable/docs) is Google's fully managed NoSQL Big Data database service in Google Cloud.
> Install the python package:
```bash
pip install langchain-google-bigtable
@ -864,7 +863,7 @@ from langchain_google_bigtable import BigtableChatMessageHistory
### Firestore (Native Mode)
> [Google Cloud Firestore](https://cloud.google.com/firestore/docs/) is a NoSQL document database built for automatic scaling, high performance, and ease of application development.
> Install the python package:
```bash
pip install langchain-google-firestore
@ -880,7 +879,7 @@ from langchain_google_firestore import FirestoreChatMessageHistory
> [Google Cloud Firestore in Datastore mode](https://cloud.google.com/datastore/docs) is a NoSQL document database built for automatic scaling, high performance, and ease of application development.
> Firestore is the newest version of Datastore and introduces several improvements over Datastore.
> Install the python package:
```bash
pip install langchain-google-datastore
@ -895,8 +894,8 @@ from langchain_google_datastore import DatastoreChatMessageHistory
### El Carro: The Oracle Operator for Kubernetes
> Google [El Carro Oracle Operator for Kubernetes](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator)
> offers a way to run `Oracle` databases in `Kubernetes` as a portable, open source,
> community driven, no vendor lock-in container orchestration system.
```bash
pip install langchain-google-el-carro
@ -913,7 +912,7 @@ from langchain_google_el_carro import ElCarroChatMessageHistory
### GMail
> [Gmail](https://en.wikipedia.org/wiki/Gmail) is a free email service provided by Google.
> This loader works with emails through the `Gmail API`.
We need to install several python packages.
@ -931,7 +930,7 @@ from langchain_community.chat_loaders.gmail import GMailLoader
### SearchApi
> [SearchApi](https://www.searchapi.io/) provides a 3rd-party API to access Google search results, YouTube search & transcripts, and other Google-related engines.
See [usage examples and authorization instructions](/docs/integrations/tools/searchapi).
@ -941,7 +940,7 @@ from langchain_community.utilities import SearchApiAPIWrapper
### SerpApi
> [SerpApi](https://serpapi.com/) provides a 3rd-party API to access Google search results.
See a [usage example and authorization instructions](/docs/integrations/tools/serpapi).
@ -959,9 +958,9 @@ from langchain_community.utilities import GoogleSerperAPIWrapper
### YouTube
> [YouTube Search](https://github.com/joetats/youtube_search) package searches `YouTube` videos, avoiding their heavily rate-limited API.
>
> It uses the form on the YouTube homepage and scrapes the resulting page.
We need to install a python package.
@ -977,7 +976,7 @@ from langchain.tools import YouTubeSearchTool
### YouTube audio
> [YouTube](https://www.youtube.com/) is an online video sharing and social media platform created by `Google`.
Use `YoutubeAudioLoader` to fetch / download the audio files.
@ -998,7 +997,7 @@ from langchain_community.document_loaders.parsers import OpenAIWhisperParser, Op
### YouTube transcripts
> [YouTube](https://www.youtube.com/) is an online video sharing and social media platform created by `Google`.
We need to install the `youtube-transcript-api` python package.
@ -14,6 +14,7 @@ We need to install several python packages.
pip install huggingface_hub
pip install transformers
```
See a [usage example](/docs/integrations/chat/huggingface).
```python
@ -54,15 +55,14 @@ optimum-cli export openvino --model gpt2 ov_model
Apply [weight-only quantization](https://github.com/huggingface/optimum-intel?tab=readme-ov-file#export) when exporting your model.
## Embedding Models
### Hugging Face Hub
> The [Hugging Face Hub](https://huggingface.co/docs/hub/index) is a platform
> with over 350k models, 75k datasets, and 150k demo apps (Spaces), all open source
> and publicly available, in an online platform where people can easily
> collaborate and build ML together. The Hub works as a central place where anyone
> can explore, experiment, collaborate, and build technology with Machine Learning.
We need to install the `sentence_transformers` python package.
@ -71,7 +71,6 @@ We need to install the `sentence_transformers` python package.
pip install sentence_transformers
```
#### HuggingFaceEmbeddings
See a [usage example](/docs/integrations/text_embedding/huggingfacehub).
@ -79,6 +78,7 @@ See a [usage example](/docs/integrations/text_embedding/huggingfacehub).
```python
from langchain_community.embeddings import HuggingFaceEmbeddings
```
#### HuggingFaceInstructEmbeddings
See a [usage example](/docs/integrations/text_embedding/instruct_embeddings).
@ -89,8 +89,8 @@ from langchain_community.embeddings import HuggingFaceInstructEmbeddings
#### HuggingFaceBgeEmbeddings
> [BGE models on the HuggingFace](https://huggingface.co/BAAI/bge-large-en) are [the best open-source embedding models](https://huggingface.co/spaces/mteb/leaderboard).
> The BGE models are created by the [Beijing Academy of Artificial Intelligence (BAAI)](https://www.baai.ac.cn/english.html). `BAAI` is a private non-profit organization engaged in AI research and development.
See a [usage example](/docs/integrations/text_embedding/bge_huggingface).
@ -100,9 +100,9 @@ from langchain_community.embeddings import HuggingFaceBgeEmbeddings
### Hugging Face Text Embeddings Inference (TEI)
> [Hugging Face Text Embeddings Inference (TEI)](https://huggingface.co/docs/text-generation-inference/index) is a toolkit for deploying and serving open-source
> text embeddings and sequence classification models. `TEI` enables high-performance extraction for the most popular models,
> including `FlagEmbedding`, `Ember`, `GTE` and `E5`.
We need to install the `huggingface-hub` python package.
@ -116,15 +116,14 @@ See a [usage example](/docs/integrations/text_embedding/text_embeddings_inferenc
from langchain_community.embeddings import HuggingFaceHubEmbeddings
```
## Document Loaders
### Hugging Face dataset
> [Hugging Face Hub](https://huggingface.co/docs/hub/index) is home to over 75,000
> [datasets](https://huggingface.co/docs/hub/index#datasets) in more than 100 languages
> that can be used for a broad range of tasks across NLP, Computer Vision, and Audio.
> They are used for a diverse range of tasks such as translation, automatic speech
> recognition, and image classification.
We need to install the `datasets` python package.
@ -139,13 +138,11 @@ See a [usage example](/docs/integrations/document_loaders/hugging_face_dataset).
from langchain_community.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader
```
## Tools
### Hugging Face Hub Tools
> [Hugging Face Tools](https://huggingface.co/docs/transformers/v4.29.0/en/custom_tools)
> support text I/O and are loaded using the `load_huggingface_tool` function.
We need to install several python packages.
@ -32,7 +32,6 @@ These providers have standalone `langchain-{provider}` packages for improved ver
- [Together AI](/docs/integrations/providers/together)
- [Voyage AI](/docs/integrations/providers/voyageai)
## Featured Community Providers
- [AWS](/docs/integrations/platforms/aws)
@ -21,11 +21,12 @@ from langchain_openai import AzureOpenAI
```
## Chat Models
### Azure OpenAI
> [Microsoft Azure](https://en.wikipedia.org/wiki/Microsoft_Azure), often referred to as `Azure`, is a cloud computing platform run by `Microsoft`, which offers access, management, and development of applications and services through global data centers. It provides a range of capabilities, including software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). `Microsoft Azure` supports many programming languages, tools, and frameworks, including Microsoft-specific and third-party software and systems.
> [Azure OpenAI](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/) is an `Azure` service with powerful language models from `OpenAI` including the `GPT-3`, `Codex` and `Embeddings model` series for content generation, summarization, semantic search, and natural language to code translation.
```bash
pip install langchain-openai
@ -42,12 +43,12 @@ os.environ["AZURE_OPENAI_API_KEY"] = "your AzureOpenAI key"
See a [usage example](/docs/integrations/chat/azure_chat_openai)
```python
from langchain_openai import AzureChatOpenAI
```
## Embedding Models
### Azure OpenAI
See a [usage example](/docs/integrations/text_embedding/azureopenai)
@ -60,12 +61,12 @@ from langchain_openai import AzureOpenAIEmbeddings
### Azure AI Data
> [Azure AI Studio](https://ai.azure.com/) provides the capability to upload data assets
> to cloud storage and register existing data assets from the following sources:
>
> - `Microsoft OneLake`
> - `Azure Blob Storage`
> - `Azure Data Lake gen 2`
First, you need to install several python packages.
@ -79,15 +80,14 @@ See a [usage example](/docs/integrations/document_loaders/azure_ai_data).
from langchain.document_loaders import AzureAIDataLoader
```
### Azure AI Document Intelligence
> [Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known
> as `Azure Form Recognizer`) is a machine-learning
> based service that extracts text (including handwriting), tables or key-value-pairs
> from scanned documents or images.
>
> Document Intelligence supports `PDF`, `JPEG`, `PNG`, `BMP`, or `TIFF`.
First, you need to install a python package.
@ -101,16 +101,16 @@ See a [usage example](/docs/integrations/document_loaders/azure_document_intelli
from langchain.document_loaders import AzureAIDocumentIntelligenceLoader
```
### Azure Blob Storage
> [Azure Blob Storage](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction) is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.
> [Azure Files](https://learn.microsoft.com/en-us/azure/storage/files/storage-files-introduction) offers fully managed
> file shares in the cloud that are accessible via the industry standard Server Message Block (`SMB`) protocol,
> Network File System (`NFS`) protocol, and `Azure Files REST API`. `Azure Files` are based on the `Azure Blob Storage`.
`Azure Blob Storage` is designed for:
- Serving images or documents directly to a browser.
- Storing files for distributed access.
- Streaming video and audio.
@ -134,10 +134,9 @@ See a [usage example for the Azure Files](/docs/integrations/document_loaders/az
from langchain_community.document_loaders import AzureBlobStorageFileLoader
```
### Microsoft OneDrive
> [Microsoft OneDrive](https://en.wikipedia.org/wiki/OneDrive) (formerly `SkyDrive`) is a file-hosting service operated by Microsoft.
First, you need to install a python package.
@ -151,10 +150,9 @@ See a [usage example](/docs/integrations/document_loaders/microsoft_onedrive).
from langchain_community.document_loaders import OneDriveLoader
```
### Microsoft Word
> [Microsoft Word](https://www.microsoft.com/en-us/microsoft-365/word) is a word processor developed by Microsoft.
See a [usage example](/docs/integrations/document_loaders/microsoft_word).
@ -162,16 +160,15 @@ See a [usage example](/docs/integrations/document_loaders/microsoft_word).
from langchain_community.document_loaders import UnstructuredWordDocumentLoader
```
### Microsoft Excel
> [Microsoft Excel](https://en.wikipedia.org/wiki/Microsoft_Excel) is a spreadsheet editor developed by
> Microsoft for Windows, macOS, Android, iOS and iPadOS.
> It features calculation or computation capabilities, graphing tools, pivot tables, and a macro programming
> language called Visual Basic for Applications (VBA). Excel forms part of the Microsoft 365 suite of software.
The `UnstructuredExcelLoader` is used to load `Microsoft Excel` files. The loader works with both `.xlsx` and `.xls` files.
The page content will be the raw text of the Excel file. If you use the loader in `"elements"` mode, an HTML
representation of the Excel file will be available in the document metadata under the `text_as_html` key.
See a [usage example](/docs/integrations/document_loaders/microsoft_excel).
@ -180,11 +177,10 @@ See a [usage example](/docs/integrations/document_loaders/microsoft_excel).
from langchain_community.document_loaders import UnstructuredExcelLoader
```
### Microsoft SharePoint
> [Microsoft SharePoint](https://en.wikipedia.org/wiki/SharePoint) is a website-based collaboration system
> that uses workflow applications, “list” databases, and other web parts and security features to
> empower business teams to work together developed by Microsoft.
See a [usage example](/docs/integrations/document_loaders/microsoft_sharepoint).
@ -193,10 +189,9 @@ See a [usage example](/docs/integrations/document_loaders/microsoft_sharepoint).
from langchain_community.document_loaders.sharepoint import SharePointLoader
```
### Microsoft PowerPoint
> [Microsoft PowerPoint](https://en.wikipedia.org/wiki/Microsoft_PowerPoint) is a presentation program by Microsoft.
See a [usage example](/docs/integrations/document_loaders/microsoft_powerpoint).
@ -222,7 +217,7 @@ from langchain_community.document_loaders.onenote import OneNoteLoader
### Azure Cosmos DB
> [Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/) makes it easy to create a database with full native MongoDB support.
> You can apply your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB vCore account's connection string.
> Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications with your data that's stored in Azure Cosmos DB.
@ -251,16 +246,18 @@ from langchain_community.vectorstores import AzureCosmosDBVectorSearch
```
## Retrievers
### Azure Cognitive Search
> [Azure Cognitive Search](https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search) (formerly known as `Azure Search`) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
> Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:
>
> - A search engine for full text search over a search index containing user-owned content
> - Rich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation
> - Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more
> - Programmability through REST APIs and client libraries in Azure SDKs
> - Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)
See [set up instructions](https://learn.microsoft.com/en-us/azure/search/search-create-service-portal).
@ -285,6 +282,7 @@ See a [usage example](/docs/integrations/toolkits/azure_cognitive_services).
```python
from langchain_community.agent_toolkits import O365Toolkit
```
### Microsoft Office 365 email and calendar
We need to install the `O365` python package.
@ -293,7 +291,6 @@ We need to install `O365` python package.
pip install O365
```
See a [usage example](/docs/integrations/toolkits/office365).
```python
@ -319,7 +316,7 @@ from langchain_community.utilities.powerbi import PowerBIDataset
### Bing Search API
> [Microsoft Bing](https://www.bing.com/), commonly referred to as `Bing` or `Bing Search`,
> is a web search engine owned and operated by `Microsoft`.
See a [usage example](/docs/integrations/tools/bing_search).
@ -332,9 +329,9 @@ from langchain_community.utilities import BingSearchAPIWrapper
### Microsoft Presidio
> [Presidio](https://microsoft.github.io/presidio/) (Origin from Latin praesidium protection, garrison)
> helps to ensure sensitive data is properly managed and governed. It provides fast identification and
> anonymization modules for private entities in text and images such as credit card numbers, names,
> locations, social security numbers, bitcoin wallets, US phone numbers, financial data and more.
First, you need to install several python packages and download a `SpaCy` model.
@ -349,4 +346,3 @@ See [usage examples](/docs/guides/privacy/presidio_data_anonymization/).
```python
from langchain_experimental.data_anonymizer import PresidioAnonymizer, PresidioReversibleAnonymizer
```
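Presidio itself ships trained recognizers and NLP models; the basic idea of entity anonymization can be illustrated with a toy, regex-only sketch (the patterns below are illustrative stand-ins, not Presidio's actual recognizers):

```python
import re

def anonymize(text):
    # Toy stand-in for Presidio: redact substrings that look like
    # US phone numbers and credit card numbers.
    text = re.sub(r"\b\d{3}-\d{3}-\d{4}\b", "<PHONE_NUMBER>", text)
    text = re.sub(r"\b(?:\d{4}[- ]?){3}\d{4}\b", "<CREDIT_CARD>", text)
    return text
```

`PresidioAnonymizer` additionally supports reversible anonymization, which a one-way regex replacement like this cannot do.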
@ -2,26 +2,26 @@
All functionality related to OpenAI
> [OpenAI](https://en.wikipedia.org/wiki/OpenAI) is an American artificial intelligence (AI) research laboratory
> consisting of the non-profit `OpenAI Incorporated`
> and its for-profit subsidiary corporation `OpenAI Limited Partnership`.
> `OpenAI` conducts AI research with the declared intention of promoting and developing a friendly AI.
> `OpenAI` systems run on an `Azure`-based supercomputing platform from `Microsoft`.
> The [OpenAI API](https://platform.openai.com/docs/models) is powered by a diverse set of models with different capabilities and price points.
>
> [ChatGPT](https://chat.openai.com) is the Artificial Intelligence (AI) chatbot developed by `OpenAI`.
## Installation and Setup
Install the integration package with
```bash
pip install langchain-openai
```
Get an OpenAI API key and set it as an environment variable (`OPENAI_API_KEY`)
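If you prefer setting the variable from Python rather than the shell, a minimal sketch (the `sk-...` value is a placeholder, not a real key):

```python
import os

# Set the key for the current process only if it is not already present
# in the environment; replace the placeholder with your real key.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")
```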
## LLM
See a [usage example](/docs/integrations/llms/openai).
@ -31,9 +31,11 @@ from langchain_openai import OpenAI
```
If you are using a model hosted on `Azure`, you should use a different wrapper for it:
```python
from langchain_openai import AzureOpenAI
```
For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/llms/azure_openai)
## Chat model
@ -45,9 +47,11 @@ from langchain_openai import ChatOpenAI
```
If you are using a model hosted on `Azure`, you should use a different wrapper for it:
```python
from langchain_openai import AzureChatOpenAI
```
For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/chat/azure_chat_openai)
## Embedding Model
@ -78,11 +82,10 @@ from langchain.retrievers import ChatGPTPluginRetriever
### Dall-E Image Generator
> [OpenAI Dall-E](https://openai.com/dall-e-3) are text-to-image models developed by `OpenAI`
> using deep learning methodologies to generate digital images from natural language descriptions,
> called "prompts".
See a [usage example](/docs/integrations/tools/dalle_image_generator).
```python
@ -102,11 +105,13 @@ from langchain.adapters import openai as lc_openai
There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens
for OpenAI LLMs.
You can also use it to count tokens when splitting documents with
```python
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/document_transformers/split_by_token#tiktoken)
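The idea behind token-based splitting can be sketched without tiktoken itself; here a whitespace split stands in for the BPE encoder (an assumption for illustration only, since tiktoken counts subword tokens, not words):

```python
def split_by_tokens(text, chunk_size):
    # Chunk a text into pieces of at most `chunk_size` "tokens".
    # A whitespace tokenizer stands in for tiktoken's BPE encoder.
    tokens = text.split()
    return [
        " ".join(tokens[i:i + chunk_size])
        for i in range(0, len(tokens), chunk_size)
    ]
```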
## Chain
@ -116,5 +121,3 @@ See a [usage example](/docs/guides/safety/moderation).
```python
from langchain.chains import OpenAIModerationChain
```
@ -1,6 +1,6 @@
# Activeloop Deep Lake
> [Activeloop Deep Lake](https://docs.activeloop.ai/) is a data lake for Deep Learning applications, allowing you to use it
> as a vector store.
## Why Deep Lake?
@ -12,7 +12,6 @@
`Activeloop Deep Lake` supports `SelfQuery Retrieval`:
[Activeloop Deep Lake Self Query Retrieval](/docs/integrations/retrievers/self_query/activeloop_deeplake_self_query)
## More Resources
1. [Ultimate Guide to LangChain & Deep Lake: Build ChatGPT to Answer Questions on Your Financial Data](https://www.activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/)
@ -28,7 +27,6 @@ Install the Python package:
pip install deeplake
```
## VectorStore
```python
@ -1,7 +1,7 @@
# AI21 Labs
> [AI21 Labs](https://www.ai21.com/about) is a company specializing in Natural
> Language Processing (NLP), which develops AI systems
> that can understand and generate natural language.
This page covers how to use the `AI21` ecosystem within `LangChain`.
@ -23,7 +23,6 @@ See a [usage example](/docs/integrations/llms/ai21).
from langchain_community.llms import AI21
```
## Chat models
See a [usage example](/docs/integrations/chat/ai21).
@ -39,4 +38,3 @@ See a [usage example](/docs/integrations/text_embedding/ai21).
```python
from langchain_ai21 import AI21Embeddings
```
@ -1,10 +1,9 @@
# AINetwork
> [AI Network](https://www.ainetwork.ai/build-on-ain) is a layer 1 blockchain designed to accommodate
> large-scale AI models, utilizing a decentralized GPU network powered by the
> [$AIN token](https://www.ainetwork.ai/token), enriching AI-driven `NFTs` (`AINFTs`).
## Installation and Setup
You need to install the `ain-py` python package.
@ -12,7 +11,9 @@ You need to install `ain-py` python package.
```bash
pip install ain-py
```
You need to set the `AIN_BLOCKCHAIN_ACCOUNT_PRIVATE_KEY` environment variable to your AIN Blockchain Account Private Key.
## Toolkit
See a [usage example](/docs/integrations/toolkits/ainetwork).
@ -20,4 +21,3 @@ See a [usage example](/docs/integrations/toolkits/ainetwork).
```python
from langchain_community.agent_toolkits.ainetwork.toolkit import AINetworkToolkit
```
@ -1,6 +1,6 @@
# Airbyte
> [Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs,
> databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
## [AirbyteLoader](/docs/integrations/document_loaders/airbyte)
@ -54,6 +54,7 @@ This instruction shows how to load any source from `Airbyte` into a local `JSON`
Have `docker desktop` installed.
**Steps:**
1. Clone Airbyte from GitHub - `git clone https://github.com/airbytehq/airbyte.git`.
2. Switch into Airbyte directory - `cd airbyte`.
3. Start Airbyte - `docker compose up`.
@ -1,11 +1,11 @@
# Airtable
> [Airtable](https://en.wikipedia.org/wiki/Airtable) is a cloud collaboration service.
> `Airtable` is a spreadsheet-database hybrid, with the features of a database but applied to a spreadsheet.
> The fields in an Airtable table are similar to cells in a spreadsheet, but have types such as 'checkbox',
> 'phone number', and 'drop-down list', and can reference file attachments like images.
> Users can create a database, set up column types, add records, link tables to one another, collaborate, sort records
> and publish views to external websites.
## Installation and Setup
@ -14,13 +14,12 @@
pip install pyairtable
```
- Get your [API key](https://support.airtable.com/docs/creating-and-using-api-keys-and-access-tokens).
- Get the [ID of your base](https://airtable.com/developers/web/api/introduction).
- Get the [table ID from the table url](https://www.highviewapps.com/kb/where-can-i-find-the-airtable-base-id-and-table-id/#:~:text=Both%20the%20Airtable%20Base%20ID,URL%20that%20begins%20with%20tbl).
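As an aside, the base and table IDs can usually be read straight off a table URL; a small illustrative helper (not part of the integration, and it assumes the usual `app.../tbl...` path shape):

```python
def parse_airtable_url(url):
    # Airtable table URLs embed the base ID ("app...") and the table ID
    # ("tbl...") as path segments, e.g.
    # https://airtable.com/appABC123/tblDEF456/viwXYZ
    parts = [p for p in url.split("/") if p]
    base_id = next(p for p in parts if p.startswith("app"))
    table_id = next(p for p in parts if p.startswith("tbl"))
    return base_id, table_id
```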
## Document Loader
```python
from langchain_community.document_loaders import AirtableLoader
```
@ -1,8 +1,8 @@
# Aleph Alpha
> [Aleph Alpha](https://docs.aleph-alpha.com/) was founded in 2019 with the mission to research and build the foundational technology for an era of strong AI. The team of international scientists, engineers, and innovators researches, develops, and deploys transformative AI like large language and multimodal models and runs the fastest European commercial AI cluster.
> [The Luminous series](https://docs.aleph-alpha.com/docs/introduction/luminous/) is a family of large language models.
## Installation and Setup
@ -18,7 +18,6 @@ from getpass import getpass
ALEPH_ALPHA_API_KEY = getpass()
```
## LLM
See a [usage example](/docs/integrations/llms/aleph_alpha).
@ -1,15 +1,14 @@
# Alibaba Cloud
> [Alibaba Group Holding Limited (Wikipedia)](https://en.wikipedia.org/wiki/Alibaba_Group), or `Alibaba`
> (Chinese: 阿里巴巴), is a Chinese multinational technology company specializing in e-commerce, retail,
> Internet, and technology.
>
> [Alibaba Cloud (Wikipedia)](https://en.wikipedia.org/wiki/Alibaba_Cloud), also known as `Aliyun`
> (Chinese: 阿里云; pinyin: Ālǐyún; lit. 'Ali Cloud'), is a cloud computing company, a subsidiary
> of `Alibaba Group`. `Alibaba Cloud` provides cloud computing services to online businesses and
> Alibaba's own e-commerce ecosystem.
## Chat Model
See [installation instructions and a usage example](/docs/integrations/chat/alibaba_cloud_pai_eas).
@ -1,15 +1,15 @@
# AnalyticDB
> [AnalyticDB for PostgreSQL](https://www.alibabacloud.com/help/en/analyticdb-for-postgresql/latest/product-introduction-overview)
> is a massively parallel processing (MPP) data warehousing service
> from [Alibaba Cloud](https://www.alibabacloud.com/)
> that is designed to analyze large volumes of data online.
> `AnalyticDB for PostgreSQL` is developed based on the open-source `Greenplum Database`
> project and is enhanced with in-depth extensions by `Alibaba Cloud`. AnalyticDB
> for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and
> Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and
> column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a
> high performance level and supports highly concurrent workloads.
This page covers how to use the AnalyticDB ecosystem within LangChain.
@ -1,9 +1,9 @@
# Annoy
> [Annoy](https://github.com/spotify/annoy) (`Approximate Nearest Neighbors Oh Yeah`)
> is a C++ library with Python bindings to search for points in space that are
> close to a given query point. It also creates large read-only file-based data
> structures that are mapped into memory so that many processes may share the same data.
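What Annoy approximates can be sketched with an exact brute-force search (this stand-in is linear in the number of points, which is precisely what Annoy's index avoids):

```python
import math

def nearest(query, points):
    # Brute-force stand-in for Annoy's approximate nearest-neighbor
    # lookup: return the index of the point closest to `query` by
    # Euclidean distance.
    return min(
        range(len(points)),
        key=lambda i: math.dist(points[i], query),
    )
```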
## Installation and Setup
@ -11,7 +11,6 @@
pip install annoy
```
## Vectorstore
See a [usage example](/docs/integrations/vectorstores/annoy).
@ -1,14 +1,14 @@
# Anyscale
> [Anyscale](https://www.anyscale.com) is a platform to run, fine tune and scale LLMs via production-ready APIs.
> [Anyscale Endpoints](https://docs.anyscale.com/endpoints/overview) serve many open-source models in a cost-effective way.
`Anyscale` also provides [an example](https://docs.anyscale.com/endpoints/model-serving/examples/langchain-integration)
of how to set up `LangChain` with `Anyscale` for advanced chat agents.
## Installation and Setup
- Get an Anyscale Service URL, route and API key and set them as environment variables (`ANYSCALE_SERVICE_URL`,`ANYSCALE_SERVICE_ROUTE`, `ANYSCALE_SERVICE_TOKEN`).
- Please see [the Anyscale docs](https://www.anyscale.com/get-started) for more details.
We have to install the `openai` package:
@ -1,10 +1,10 @@
# Apache Doris
> [Apache Doris](https://doris.apache.org/) is a modern data warehouse for real-time analytics.
> It delivers lightning-fast analytics on real-time data at scale.
> Usually `Apache Doris` is categorized into OLAP, and it has shown excellent performance
> in [ClickBench — a Benchmark For Analytical DBMS](https://benchmark.clickhouse.com/).
> Since it has a super-fast vectorized execution engine, it could also be used as a fast vectordb.
## Installation and Setup
@ -1,9 +1,8 @@
# Apify
> [Apify](https://apify.com) is a cloud platform for web scraping and data extraction,
> which provides an [ecosystem](https://apify.com/store) of more than a thousand
> ready-made apps called _Actors_ for various scraping, crawling, and extraction use cases.
[![Apify Actors](/img/ApifyActors.png)](https://apify.com/store)
@ -11,14 +10,12 @@ This integration enables you run Actors on the `Apify` platform and load their r
indexes with documents and data from the web, e.g. to generate answers from websites with documentation,
blogs, or knowledge bases.
## Installation and Setup
- Install the Apify API client for Python with `pip install apify-client`
- Get your [Apify API token](https://console.apify.com/account/integrations) and either set it as
an environment variable (`APIFY_API_TOKEN`) or pass it to the `ApifyWrapper` as `apify_api_token` in the constructor.
## Utility
You can use the `ApifyWrapper` to run Actors on the Apify platform.
@ -29,7 +26,6 @@ from langchain_community.utilities import ApifyWrapper
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/apify).
## Document loader
You can also use our `ApifyDatasetLoader` to get data from Apify dataset.
@ -1,6 +1,6 @@
# ArangoDB
> [ArangoDB](https://github.com/arangodb/arangodb) is a scalable graph database system to
> drive value from connected data, faster. It offers native graphs, an integrated search engine, and JSON support via a single query language. ArangoDB runs on-prem and in the cloud.
## Installation and Setup
@ -13,7 +13,7 @@ pip install python-arango
## Graph QA Chain
Connect your `ArangoDB` Database with a chat model to get insights on your data.
See the notebook example [here](/docs/use_cases/graph/integrations/graph_arangodb_qa).
@ -1,18 +1,16 @@
# Arcee
> [Arcee](https://www.arcee.ai/about/about-us) enables the development and advancement
> of what we coin as SLMs—small, specialized, secure, and scalable language models.
> By offering a SLM Adaptation System and a seamless, secure integration,
> `Arcee` empowers enterprises to harness the full potential of
> domain-adapted language models, driving the transformative
> innovation in operations.
## Installation and Setup
Get your `Arcee API` key.
## LLMs
See a [usage example](/docs/integrations/llms/arcee).
@ -1,8 +1,8 @@
# Argilla
> [Argilla](https://argilla.io/) is an open-source data curation platform for LLMs.
> Using `Argilla`, everyone can build robust language models through faster data curation
> using both human and machine feedback. `Argilla` provides support for each step in the MLOps cycle,
> from data labeling to model monitoring.
## Installation and Setup
@ -17,7 +17,6 @@ pip install argilla
## Callbacks
```python
from langchain.callbacks import ArgillaCallbackHandler
```
@ -1,10 +1,9 @@
# Arxiv
> [arXiv](https://arxiv.org/) is an open-access archive for 2 million scholarly articles in the fields of physics,
> mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and
> systems science, and economics.
## Installation and Setup
First, you need to install the `arxiv` python package.
@ -1,6 +1,6 @@
# Astra DB
> [DataStax Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless
> vector-capable database built on `Apache Cassandra®` and made conveniently available
> through an easy-to-use JSON API.
@ -9,6 +9,7 @@ See a [tutorial provided by DataStax](https://docs.datastax.com/en/astra/astra-d
## Installation and Setup
Install the following Python package:
```bash
pip install "langchain-astradb>=0.1.0"
```
@ -66,7 +67,6 @@ set_llm_cache(AstraDBCache(
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#astra-db-caches) (scroll to the Astra DB section).
## Semantic LLM Cache
```python
@ -1,19 +1,17 @@
# Atlas
> [Nomic Atlas](https://docs.nomic.ai/index.html) is a platform for interacting with both
> small and internet scale unstructured datasets.
## Installation and Setup
- Install the Python package with `pip install nomic`
- `Nomic` is also included in LangChain's poetry extras: `poetry install -E all`
## VectorStore
See a [usage example](/docs/integrations/vectorstores/atlas).
```python
from langchain_community.vectorstores import AtlasDB
```
@ -1,6 +1,6 @@
# AwaDB
> [AwaDB](https://github.com/awa-ai/awadb) is an AI Native database for the search and storage of embedding vectors used by LLM Applications.
## Installation and Setup
@ -8,7 +8,6 @@
pip install awadb
```
## Vector store
```python
@ -17,7 +16,6 @@ from langchain_community.vectorstores import AwaDB
See a [usage example](/docs/integrations/vectorstores/awadb).
## Embedding models
```python
@ -1,12 +1,11 @@
# AZLyrics
> [AZLyrics](https://www.azlyrics.com/) is a large, legal, every day growing collection of lyrics.
## Installation and Setup
There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/azlyrics).
@ -1,9 +1,9 @@
# BagelDB
> [BagelDB](https://www.bageldb.ai/) (`Open Vector Database for AI`), is like GitHub for AI data.
> It is a collaborative platform where users can create,
> share, and manage vector datasets. It can support private projects for independent developers,
> internal collaborations for enterprises, and public contributions for data DAOs.
## Installation and Setup
@ -11,7 +11,6 @@ internal collaborations for enterprises, and public contributions for data DAOs.
pip install betabageldb
```
## VectorStore
See a [usage example](/docs/integrations/vectorstores/bageldb).
@ -1,9 +1,8 @@
# Baichuan
> [Baichuan Inc.](https://www.baichuan-ai.com/) is a Chinese startup in the era of AGI,
> dedicated to addressing fundamental human needs: Efficiency, Health, and Happiness.
## Installation and Setup
Register and get an API key [here](https://platform.baichuan-ai.com/).
@ -1,10 +1,9 @@
# Baidu
> [Baidu Cloud](https://cloud.baidu.com/) is a cloud service provided by `Baidu, Inc.`,
> headquartered in Beijing. It offers a cloud storage service, client software,
> file management, resource sharing, and Third Party Integration.
## Installation and Setup
Register and get the `Qianfan` `AK` and `SK` keys [here](https://cloud.baidu.com/product/wenxinworkshop).
@ -1,6 +1,6 @@
# Banana
> [Banana](https://www.banana.dev/) provided serverless GPU inference for AI models,
> a CI/CD build pipeline, and a simple Python framework (`Potassium`) to serve your models.
This page covers how to use the [Banana](https://www.banana.dev) ecosystem within LangChain.
@ -26,7 +26,7 @@ Other starter repos are available [here](https://github.com/orgs/bananaml/reposi
## Build the Banana app
To use Banana apps within Langchain, you must include the `outputs` key
in the returned json, and the value must be a string.
```python
@ -36,7 +36,7 @@ result = {'outputs': result}
An example inference function would be:
````python
@app.handler("/")
def handler(context: dict, request: Request) -> Response:
"""Handle a request to generate code from a prompt."""
@ -53,14 +53,12 @@ def handler(context: dict, request: Request) -> Response:
output = model.generate(inputs=input_ids, temperature=temperature, max_new_tokens=max_new_tokens)
result = tokenizer.decode(output[0])
return Response(json={"outputs": result}, status=200)
````
This example is from the `app.py` file in [CodeLlama-7B-Instruct-GPTQ](https://github.com/bananaml/demo-codellama-7b-instruct-gptq).
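The `outputs` contract above can be checked with a tiny, hypothetical helper (for illustration only; it is not part of the integration):

```python
def is_valid_banana_result(result):
    # LangChain's Banana integration expects the returned JSON object to
    # carry an "outputs" key whose value is a string.
    return isinstance(result, dict) and isinstance(result.get("outputs"), str)
```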
## LLM
```python
from langchain_community.llms import Banana
```
@ -1,21 +1,21 @@
# Baseten
> [Baseten](https://baseten.co) is a provider of all the infrastructure you need to deploy and serve
> ML models performantly, scalably, and cost-efficiently.
> As a model inference platform, `Baseten` is a `Provider` in the LangChain ecosystem.
> The `Baseten` integration currently implements a single `Component`, LLMs, but more are planned!
> `Baseten` lets you run both open source models like Llama 2 or Mistral and run proprietary or
> fine-tuned models on dedicated GPUs. If you're used to a provider like OpenAI, using Baseten has a few differences:
> - Rather than paying per token, you pay per minute of GPU used.
> - Every model on Baseten uses [Truss](https://truss.baseten.co/welcome), our open-source model packaging framework, for maximum customizability.
> - While we have some [OpenAI ChatCompletions-compatible models](https://docs.baseten.co/api-reference/openai), you can define your own I/O spec with `Truss`.
> [Learn more](https://docs.baseten.co/deploy/lifecycle) about model IDs and deployments.
> Learn more about Baseten in [the Baseten docs](https://docs.baseten.co/).
## Installation and Setup
@ -1,9 +1,8 @@
# Beam
> [Beam](https://www.beam.cloud/) is a cloud computing platform that allows you to run your code
> on remote servers with GPUs.
## Installation and Setup
- [Create an account](https://www.beam.cloud/)
@ -16,7 +15,6 @@
pip install beam-sdk
```
## LLMs
See a [usage example](/docs/integrations/llms/beam).
@ -1,8 +1,8 @@
# Beautiful Soup
> [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/) is a Python package for parsing
> HTML and XML documents (including those with malformed markup, i.e. non-closed tags, so named after tag soup).
> It creates a parse tree for parsed pages that can be used to extract data from HTML, which
> is useful for web scraping.
## Installation and Setup

@ -1,6 +1,6 @@
# BiliBili
> [Bilibili](https://www.bilibili.tv/) is one of the most beloved long-form video sites in China.
## Installation and Setup

@ -1,16 +1,15 @@
# Bittensor
> [Neural Internet Bittensor](https://neuralinternet.ai/) network, an open source protocol
> that powers a decentralized, blockchain-based, machine learning network.
## Installation and Setup
Get your API_KEY from [Neural Internet](https://api.neuralinternet.ai).
You can [analyze API_KEYS](https://api.neuralinternet.ai/api-keys)
and [logs of your usage](https://api.neuralinternet.ai/logs).
## LLMs
See a [usage example](/docs/integrations/llms/bittensor).

@ -1,11 +1,11 @@
# Blackboard
> [Blackboard Learn](https://en.wikipedia.org/wiki/Blackboard_Learn) (previously the `Blackboard Learning Management System`)
> is a web-based virtual learning environment and learning management system developed by Blackboard Inc.
> The software features course management, customizable open architecture, and scalable design that allows
> integration with student information systems and authentication protocols. It may be installed on local servers,
> hosted by `Blackboard ASP Solutions`, or provided as Software as a Service hosted on Amazon Web Services.
> Its main purposes are stated to include the addition of online elements to courses traditionally delivered
> face-to-face and development of completely online courses with few or no face-to-face meetings.
## Installation and Setup

@ -1,24 +1,22 @@
# Brave Search
> [Brave Search](https://en.wikipedia.org/wiki/Brave_Search) is a search engine developed by Brave Software.
>
> - `Brave Search` uses its own web index. As of May 2022, it covered over 10 billion pages and was used to serve 92%
> of search results without relying on any third-parties, with the remainder being retrieved
> server-side from the Bing API or (on an opt-in basis) client-side from Google. According
> to Brave, the index was kept "intentionally smaller than that of Google or Bing" in order to
> help avoid spam and other low-quality content, with the disadvantage that "Brave Search is
> not yet as good as Google in recovering long-tail queries."
> - `Brave Search Premium`: As of April 2023 Brave Search is an ad-free website, but it will
> eventually switch to a new model that will include ads and premium users will get an ad-free experience.
> User data including IP addresses won't be collected from its users by default. A premium account
> will be required for opt-in data-collection.
## Installation and Setup
To get access to the Brave Search API, you need to [create an account and get an API key](https://api.search.brave.com/app/dashboard).
## Document Loader
See a [usage example](/docs/integrations/document_loaders/brave_search).

@ -1,11 +1,10 @@
# Breebs (Open Knowledge)
> [Breebs](https://www.breebs.com/) is an open collaborative knowledge platform.
> Anybody can create a `Breeb`, a knowledge capsule based on PDFs stored on a Google Drive folder.
> A `Breeb` can be used by any LLM/chatbot to improve its expertise, reduce hallucinations and give access to sources.
> Behind the scenes, `Breebs` implements several `Retrieval Augmented Generation (RAG)` models
> to seamlessly provide useful context at each iteration.
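The retrieve-then-prompt loop described above can be sketched in plain Python. This is a toy word-overlap retriever standing in for a Breeb; the scoring and prompt layout are illustrative assumptions, not the Breebs API:

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy scoring)."""
    q_words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend the retrieved context to the question, RAG-style."""
    sources = "\n".join(f"- {c}" for c in context)
    return f"Use the sources below to answer.\nSources:\n{sources}\nQuestion: {query}"

docs = [
    "Breebs are knowledge capsules built from PDFs stored on Google Drive.",
    "LangChain retrievers fetch external knowledge for chains and chatbots.",
    "Bananas are rich in potassium.",
]
query = "What knowledge capsules are Breebs built from?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

A real Breeb replaces the toy retriever, but the shape of the iteration (retrieve, then ground the prompt) is the same.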
## Retriever
@ -13,4 +12,4 @@
from langchain.retrievers import BreebsRetriever
```
[See a usage example (Retrieval & ConversationalRetrievalChain)](/docs/integrations/retrievers/breebs)

@ -3,10 +3,9 @@
> [Apache Cassandra®](https://cassandra.apache.org/) is a NoSQL, row-oriented, highly scalable and highly available database.
> Starting with version 5.0, the database ships with [vector search capabilities](https://cassandra.apache.org/doc/trunk/cassandra/vector-search/overview.html).
The integrations outlined in this page can be used with `Cassandra` as well as other CQL-compatible databases,
i.e. those using the `Cassandra Query Language` protocol.
## Installation and Setup
Install the following Python package:
@ -31,7 +30,6 @@ from langchain_community.chat_message_histories import CassandraChatMessageHisto
Learn more in the [example notebook](/docs/integrations/memory/cassandra_chat_message_history).
## LLM Cache
```python
@ -42,7 +40,6 @@ set_llm_cache(CassandraCache())
Learn more in the [example notebook](/docs/integrations/llms/llm_caching#cassandra-caches) (scroll to the Cassandra section).
## Semantic LLM Cache
```python
@ -66,5 +63,5 @@ Learn more in the [example notebook](/docs/integrations/document_loaders/cassand
#### Attribution statement
> Apache Cassandra, Cassandra and Apache are either registered trademarks or trademarks of
> the [Apache Software Foundation](http://www.apache.org/) in the United States and/or other countries.

@ -1,6 +1,6 @@
# CerebriumAI
> [Cerebrium](https://docs.cerebrium.ai/cerebrium/getting-started/introduction) is a serverless GPU infrastructure provider.
> It provides API access to several LLM models.
See the examples in the [CerebriumAI documentation](https://docs.cerebrium.ai/examples/langchain).
@ -8,19 +8,18 @@ See the examples in the [CerebriumAI documentation](https://docs.cerebrium.ai/ex
## Installation and Setup
- Install a python package:
```bash
pip install cerebrium
```
- [Get a CerebriumAI API key](https://docs.cerebrium.ai/cerebrium/getting-started/installation) and set
it as an environment variable (`CEREBRIUMAI_API_KEY`)
## LLMs
See a [usage example](/docs/integrations/llms/cerebriumai).
```python
from langchain_community.llms import CerebriumAI
```

@ -1,11 +1,10 @@
# Chaindesk
> [Chaindesk](https://chaindesk.ai) is an [open-source](https://github.com/gmpetrov/databerry) document retrieval platform that helps to connect your personal data with Large Language Models.
## Installation and Setup
We need to sign up for Chaindesk, create a datastore, add some data, and get your datastore API endpoint URL.
We need the [API Key](https://docs.chaindesk.ai/api-reference/authentication).
## Retriever

@ -1,6 +1,6 @@
# Chroma
> [Chroma](https://docs.trychroma.com/getting-started) is a database for building AI applications with embeddings.
## Installation and Setup
@ -8,7 +8,6 @@
pip install chromadb
```
## VectorStore
There exists a wrapper around Chroma vector databases, allowing you to use it as a vectorstore,

@ -1,14 +1,16 @@
# Clarifai
> [Clarifai](https://clarifai.com) is one of the first deep learning platforms, founded in 2013. Clarifai provides an AI platform with the full AI lifecycle for data exploration, data labeling, model training, evaluation, and inference around images, video, text, and audio data. In the LangChain ecosystem, as far as we're aware, Clarifai is the only provider that supports LLMs, embeddings and a vector store in one production-scale platform, making it an excellent choice to operationalize your LangChain implementations.
## Installation and Setup
- Install the Python SDK:
```bash
pip install clarifai
```
[Sign-up](https://clarifai.com/signup) for a Clarifai account, then get a personal access token to access the Clarifai API from your [security settings](https://clarifai.com/settings/security) and set it as an environment variable (`CLARIFAI_PAT`).
## Models
@ -27,16 +29,17 @@ llm = Clarifai(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_
For more details, the docs on the Clarifai LLM wrapper provide a [detailed walkthrough](/docs/integrations/llms/clarifai).
### Text Embedding Models
To find the selection of text embeddings models in the Clarifai platform you can select the text to embedding model type [here](https://clarifai.com/explore/models?page=1&perPage=24&filterData=%5B%7B%22field%22%3A%22model_type_id%22%2C%22value%22%3A%5B%22text-embedder%22%5D%7D%5D).
There is a Clarifai Embedding model in LangChain, which you can access with:
```python
from langchain_community.embeddings import ClarifaiEmbeddings
embeddings = ClarifaiEmbeddings(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
```
For more details, the docs on the Clarifai Embeddings wrapper provide a [detailed walkthrough](/docs/integrations/text_embedding/clarifai).
## Vectorstore
@ -49,4 +52,5 @@ You can also add data directly from LangChain as well, and the auto-indexing wil
from langchain_community.vectorstores import Clarifai
clarifai_vector_db = Clarifai.from_texts(user_id=USER_ID, app_id=APP_ID, texts=texts, pat=CLARIFAI_PAT, number_of_docs=NUMBER_OF_DOCS, metadatas=metadatas)
```
For more details, the docs on the Clarifai vector store provide a [detailed walkthrough](/docs/integrations/vectorstores/clarifai).

@ -1,12 +1,11 @@
# ClickHouse
> [ClickHouse](https://clickhouse.com/) is the fast and resource efficient open-source database for real-time
> apps and analytics with full SQL support and a wide range of functions to assist users in writing analytical queries.
> It has data structures and distance search functions (like `L2Distance`) as well as
> [approximate nearest neighbor search indexes](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/annindexes)
> that enable ClickHouse to be used as a high-performance and scalable vector database to store and search vectors with SQL.
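An `L2Distance` query of this kind amounts to ranking rows by Euclidean distance from a query vector. A minimal pure-Python sketch of the same idea (toy vectors and names, no ClickHouse required):

```python
import math

def l2_distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance, the metric behind ClickHouse's L2Distance()."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(query: list[float], rows: dict[str, list[float]], k: int = 1) -> list[str]:
    """Brute-force scan, like ORDER BY L2Distance(vec, :query) LIMIT :k."""
    return sorted(rows, key=lambda name: l2_distance(rows[name], query))[:k]

rows = {"doc_a": [0.0, 0.0], "doc_b": [1.0, 1.0], "doc_c": [5.0, 5.0]}
print(nearest([0.9, 1.1], rows))  # ['doc_b']
```

The approximate nearest neighbor indexes trade this exhaustive scan for a much faster, approximate lookup.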
## Installation and Setup
We need to install `clickhouse-connect` python package.
@ -22,4 +21,3 @@ See a [usage example](/docs/integrations/vectorstores/clickhouse).
```python
from langchain_community.vectorstores import Clickhouse, ClickhouseSettings
```

@ -1,14 +1,12 @@
# Cloudflare
> [Cloudflare, Inc. (Wikipedia)](https://en.wikipedia.org/wiki/Cloudflare) is an American company that provides
> content delivery network services, cloud cybersecurity, DDoS mitigation, and ICANN-accredited
> domain registration services.
> [Cloudflare Workers AI](https://developers.cloudflare.com/workers-ai/) allows you to run machine
> learning models, on the `Cloudflare` network, from your code via REST API.
## Embedding models
See [installation instructions and usage example](/docs/integrations/text_embedding/cloudflare_workersai).

@ -1,4 +1,5 @@
# CnosDB
> [CnosDB](https://github.com/cnosdb/cnosdb) is an open-source distributed time series database with high performance, high compression rate and high ease of use.
## Installation and Setup
@ -8,8 +9,11 @@ pip install cnos-connector
```
## Connecting to CnosDB
You can connect to CnosDB using the `SQLDatabase.from_cnosdb()` method.
### Syntax
```python
def SQLDatabase.from_cnosdb(url: str = "127.0.0.1:8902",
user: str = "root",
@ -17,24 +21,29 @@ def SQLDatabase.from_cnosdb(url: str = "127.0.0.1:8902",
tenant: str = "cnosdb",
database: str = "public")
```
Args:
1. url (str): The HTTP connection host name and port number of the CnosDB
service, excluding "http://" or "https://", with a default value
of "127.0.0.1:8902".
2. user (str): The username used to connect to the CnosDB service, with a
default value of "root".
3. password (str): The password of the user connecting to the CnosDB service,
with a default value of "".
4. tenant (str): The name of the tenant used to connect to the CnosDB service,
with a default value of "cnosdb".
5. database (str): The name of the database in the CnosDB tenant.
## Examples
```python
# Connecting to CnosDB with SQLDatabase Wrapper
from langchain_community.utilities import SQLDatabase
db = SQLDatabase.from_cnosdb()
```
```python
# Creating a OpenAI Chat LLM Wrapper
from langchain_openai import ChatOpenAI
@ -43,7 +52,9 @@ llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
```
### SQL Database Chain
This example demonstrates the use of the SQL Chain for answering a question over a CnosDB.
```python
from langchain_experimental.sql import SQLDatabaseChain
@ -53,6 +64,7 @@ db_chain.run(
    "What is the average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022?"
)
```
```shell
> Entering new chain...
What is the average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022?
@ -61,8 +73,11 @@ SQLResult: [(68.0,)]
Answer:The average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022 is 68.0.
> Finished chain.
```
### SQL Database Agent
This example demonstrates the use of the SQL Database Agent for answering questions over a CnosDB.
```python
from langchain.agents import create_sql_agent
from langchain_community.agent_toolkits import SQLDatabaseToolkit
@ -70,11 +85,13 @@ from langchain_community.agent_toolkits import SQLDatabaseToolkit
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
```
```python
agent.run(
    "What is the average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022?"
)
```
```shell
> Entering new chain...
Action: sql_db_list_tables

@ -1,10 +1,12 @@
# Cohere
> [Cohere](https://cohere.ai/about) is a Canadian startup that provides natural language processing models
> that help companies improve human-machine interactions.
## Installation and Setup
- Install the Python SDK :
```bash
pip install cohere
```
@ -13,13 +15,13 @@ Get a [Cohere api key](https://dashboard.cohere.ai/) and set it as an environmen
## Cohere langchain integrations
| API | description | Endpoint docs | Import | Example usage |
| ---------------- | -------------------------------- | ------------------------------------------------------ | -------------------------------------------------------------------- | ------------------------------------------------------------- |
| Chat | Build chat bots | [chat](https://docs.cohere.com/reference/chat) | `from langchain_community.chat_models import ChatCohere` | [cohere.ipynb](/docs/integrations/chat/cohere) |
| LLM | Generate text | [generate](https://docs.cohere.com/reference/generate) | `from langchain_community.llms import Cohere` | [cohere.ipynb](/docs/integrations/llms/cohere) |
| RAG Retriever | Connect to external data sources | [chat + rag](https://docs.cohere.com/reference/chat) | `from langchain.retrievers import CohereRagRetriever` | [cohere.ipynb](/docs/integrations/retrievers/cohere) |
| Text Embedding | Embed strings to vectors | [embed](https://docs.cohere.com/reference/embed) | `from langchain_community.embeddings import CohereEmbeddings` | [cohere.ipynb](/docs/integrations/text_embedding/cohere) |
| Rerank Retriever | Rank strings based on relevance | [rerank](https://docs.cohere.com/reference/rerank) | `from langchain.retrievers.document_compressors import CohereRerank` | [cohere.ipynb](/docs/integrations/retrievers/cohere-reranker) |
## Quick copy examples
@ -35,7 +37,6 @@ print(chat(messages))
### LLM
```python
from langchain_community.llms import Cohere
@ -43,7 +44,6 @@ llm = Cohere(model="command")
print(llm.invoke("Come up with a pet name"))
```
### RAG Retriever
```python

@ -1,12 +1,11 @@
# College Confidential
> [College Confidential](https://www.collegeconfidential.com/) gives information on 3,800+ colleges and universities.
## Installation and Setup
There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/college_confidential).

@ -1,8 +1,8 @@
# Confident AI
> [Confident AI](https://confident-ai.com) is the creator of `DeepEval`.
>
> [DeepEval](https://github.com/confident-ai/deepeval) is a package for unit testing LLMs.
> Using `DeepEval`, everyone can build robust language models through faster iterations
> using both unit testing and integration testing. `DeepEval` provides support for each step in the iteration
> from synthetic data creation to testing.

@ -1,7 +1,6 @@
# Confluence
> [Confluence](https://www.atlassian.com/software/confluence) is a wiki collaboration platform that saves and organizes all of the project-related material. `Confluence` is a knowledge base that primarily handles content management activities.
## Installation and Setup
@ -9,10 +8,9 @@
pip install atlassian-python-api
```
We need to set up `username/api_key` or `Oauth2 login`.
See [instructions](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/).
## Document Loader
See a [usage example](/docs/integrations/document_loaders/confluence).

@ -1,16 +1,15 @@
# Context
> [Context](https://context.ai/) provides user analytics for LLM-powered products and features.
## Installation and Setup
We need to install the `context-python` Python package:
```bash
pip install context-python
```
## Callbacks
See a [usage example](/docs/integrations/callbacks/context).

@ -1,17 +1,16 @@
# CTranslate2
> [CTranslate2](https://opennmt.net/CTranslate2/quickstart.html) is a C++ and Python library
> for efficient inference with Transformer models.
>
> The project implements a custom runtime that applies many performance optimization
> techniques such as weights quantization, layers fusion, batch reordering, etc.,
> to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
>
> A full list of features and supported models is included in the
> [project's repository](https://opennmt.net/CTranslate2/guides/transformers.html).
> To start, please check out the official [quickstart guide](https://opennmt.net/CTranslate2/quickstart.html).
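Weight quantization, the first optimization listed above, can be pictured with a minimal int8 sketch. This is an illustrative assumption only; CTranslate2's real quantization scheme is more elaborate:

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Map float weights onto the int8 range [-127, 127] with a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in quantized]

weights = [0.12, -0.5, 0.33, 1.0]
quantized, scale = quantize_int8(weights)
restored = dequantize(quantized, scale)
# int8 storage is 4x smaller than float32, at the cost of a small rounding error
print(max(abs(w - r) for w, r in zip(weights, restored)))
```

The memory saving is what lets quantized Transformer weights fit in less RAM and move through the CPU/GPU caches faster.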
## Installation and Setup
Install the Python package:
@ -20,7 +19,6 @@ Install the Python package:
pip install ctranslate2
```
## LLMs
See a [usage example](/docs/integrations/llms/ctranslate2).

@ -1,22 +1,25 @@
# DashVector
> [DashVector](https://help.aliyun.com/document_detail/2510225.html) is a fully-managed vectorDB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements.
This document demonstrates how to leverage DashVector within the LangChain ecosystem. In particular, it shows how to install DashVector, and how to use it as a VectorStore plugin in LangChain.
It is broken into two parts: installation and setup, and then references to specific DashVector wrappers.
## Installation and Setup
Install the Python SDK:
```bash
pip install dashvector
```
## VectorStore
A DashVector Collection is wrapped as a familiar VectorStore for native usage within LangChain,
which allows it to be readily used for various scenarios, such as semantic search or example selection.
You may import the vectorstore by:
```python
from langchain_community.vectorstores import DashVector
```

@ -1,5 +1,4 @@
# Databricks
The [Databricks](https://www.databricks.com/) Lakehouse Platform unifies data, analytics, and AI on one platform.
@ -10,13 +9,11 @@ Databricks embraces the LangChain ecosystem in various ways:
3. Databricks as an LLM provider: Deploy your fine-tuned LLMs on Databricks via serving endpoints or cluster driver proxy apps, and query it as langchain.llms.Databricks
4. Databricks Dolly: Databricks open-sourced Dolly which allows for commercial use, and can be accessed through the Hugging Face Hub
## Databricks connector for the SQLDatabase Chain
You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain.
## Databricks MLflow integrates with LangChain
MLflow is an open-source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook [MLflow Callback Handler](/docs/integrations/providers/mlflow_tracking) for details about MLflow's integration with LangChain.
@ -24,8 +21,7 @@ Databricks provides a fully managed and hosted version of MLflow integrated with
Databricks MLflow makes it more convenient to develop LangChain applications on Databricks. For MLflow tracking, you don't need to set the tracking uri. For MLflow Model Serving, you can save LangChain Chains in the MLflow langchain flavor, and then register and serve the Chain with a few clicks on Databricks, with credentials securely managed by MLflow Model Serving.
## Databricks External Models
[Databricks External Models](https://docs.databricks.com/generative-ai/external-models/index.html) is a service that is designed to streamline the usage and management of various large language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests. The following example creates an endpoint that serves OpenAI's GPT-4 model and generates a chat response from it:
@ -60,10 +56,9 @@ print(chat([HumanMessage(content="hello")]))
# -> content='Hello! How can I assist you today?'
```
## Databricks Foundation Model APIs
[Databricks Foundation Model APIs](https://docs.databricks.com/machine-learning/foundation-models/index.html) allow you to access and query state-of-the-art open source models from dedicated serving endpoints. With Foundation Model APIs, developers can quickly and easily build applications that leverage a high-quality generative AI model without maintaining their own model deployment. The following example uses the `databricks-bge-large-en` endpoint to generate embeddings from text:
```python
from langchain_community.embeddings import DatabricksEmbeddings
@ -74,14 +69,11 @@ print(embeddings.embed_query("hello")[:3])
# -> [0.051055908203125, 0.007221221923828125, 0.003879547119140625, ...]
```
## Databricks as an LLM provider
The notebook [Wrap Databricks endpoints as LLMs](/docs/integrations/llms/databricks#wrapping-a-serving-endpoint-custom-model) demonstrates how to serve a custom model that has been registered by MLflow as a Databricks endpoint.
It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
## Databricks Vector Search
Databricks Vector Search is a serverless similarity search engine that allows you to store a vector representation of your data, including metadata, in a vector database. With Vector Search, you can create auto-updating vector search indexes from Delta tables managed by Unity Catalog and query them with a simple API to return the most similar vectors. See the notebook [Databricks Vector Search](/docs/integrations/vectorstores/databricks_vector_search) for instructions to use it with LangChain.

@ -1,8 +1,9 @@
# Datadog Tracing
> [ddtrace](https://github.com/DataDog/dd-trace-py) is a Datadog application performance monitoring (APM) library which provides an integration to monitor your LangChain application.
Key features of the ddtrace integration for LangChain:
- Traces: Capture LangChain requests, parameters, prompt-completions, and help visualize LangChain operations.
- Metrics: Capture LangChain request latency, errors, and token/cost usage (for OpenAI LLMs and chat models).
- Logs: Store prompt completion data for each LangChain operation.
```bash
pip install ddtrace>=1.17
```
3. The LangChain integration can be enabled automatically when you prefix your LangChain Python application command with `ddtrace-run`:
```
ddtrace-run python <your-app>.py
```
See the [APM Python library documentation](https://ddtrace.readthedocs.io/en/stable/installation_quickstart.html) for more advanced usage.
## Configuration
See the [APM Python library documentation](https://ddtrace.readthedocs.io/en/stable/integrations.html#langchain) for all the available configuration options.
### Log Prompt & Completion Sampling
To enable log prompt and completion sampling, set the `DD_LANGCHAIN_LOGS_ENABLED=1` environment variable. By default, 10% of traced requests will emit logs containing the prompts and completions.
To adjust the log sample rate, see the [APM library documentation][https://ddtra
**Note**: Logs submission requires `DD_API_KEY` to be specified when running `ddtrace-run`.
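Putting the pieces together as a command template (the entrypoint `app.py` is a hypothetical placeholder):

```shell
DD_API_KEY=<your-api-key> DD_LANGCHAIN_LOGS_ENABLED=1 ddtrace-run python app.py
```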
## Troubleshooting
Need help? Create an issue on [ddtrace](https://github.com/DataDog/dd-trace-py) or contact [Datadog support](https://docs.datadoghq.com/help/).
# Datadog Logs
> [Datadog](https://www.datadoghq.com/) is a monitoring and analytics platform for cloud-scale applications.
## Installation and Setup
# DataForSEO
> [DataForSeo](https://dataforseo.com/) provides comprehensive SEO and digital marketing data solutions via API.
This page provides instructions on how to use the DataForSEO search APIs within LangChain.
## Installation and Setup
Get a [DataForSEO API Access login and password](https://app.dataforseo.com/register), and set them as environment variables
(`DATAFORSEO_LOGIN` and `DATAFORSEO_PASSWORD` respectively).
```python
import os

os.environ["DATAFORSEO_LOGIN"] = "your_login"
os.environ["DATAFORSEO_PASSWORD"] = "your_password"
```
## Utility
The DataForSEO utility wraps the API. To import this utility, use:
# DeepInfra
> [DeepInfra](https://deepinfra.com/docs) allows us to run the
> [latest machine learning models](https://deepinfra.com/models) with ease.
> DeepInfra takes care of all the heavy lifting related to running, scaling and monitoring
> the models. Users can focus on their applications and integrate the models with simple REST API calls.
> DeepInfra provides [examples](https://deepinfra.com/docs/advanced/langchain) of integration with LangChain.
This page covers how to use the `DeepInfra` ecosystem within `LangChain`.
It is broken into two parts: installation and setup, and then references to specific DeepInfra wrappers.
You can view a [list of request and response parameters](https://deepinfra.com/m
Chat models [follow the OpenAI API](https://deepinfra.com/meta-llama/Llama-2-70b-chat-hf/api?example=openai-http).
## LLM
See a [usage example](/docs/integrations/llms/deepinfra).
It is broken into two parts: installation and setup, and then examples of DeepSparse usage.
- Install the Python package with `pip install deepsparse`
- Choose a [SparseZoo model](https://sparsezoo.neuralmagic.com/?useCase=text_generation) or export a supported model to ONNX [using Optimum](https://github.com/neuralmagic/notebooks/blob/main/notebooks/opt-text-generation-deepsparse-quickstart/OPT_Text_Generation_DeepSparse_Quickstart.ipynb)
## LLMs
There exists a DeepSparse LLM wrapper, which you can access with:
# Diffbot
> [Diffbot](https://docs.diffbot.com/docs) is a service to read web pages. Unlike traditional web scraping tools,
> `Diffbot` doesn't require any rules to read the content on a page.
> It starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type.
> The result is a website transformed into clean-structured data (like JSON or CSV), ready for your application.
## Installation and Setup
# DingoDB

This page covers how to use the DingoDB ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific DingoDB wrappers.
## Installation and Setup
- Install the Python SDK with `pip install dingodb`
## VectorStore
There exists a wrapper around DingoDB indexes, allowing you to use it as a vectorstore, whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain_community.vectorstores import Dingo
```
# Discord
> [Discord](https://discord.com/) is a VoIP and instant messaging social platform. Users have the ability to communicate
> with voice calls, video calls, text messaging, media and files in private chats or as part of communities called
> "servers". A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links.
## Installation and Setup
Follow these steps to download your `Discord` data:
2. Then go to **Privacy and Safety**
3. Head over to **Request all of my Data** and click the **Request Data** button
It might take 30 days for you to receive your data. You'll receive an email at the address registered
with Discord. That email will have a download button you can use to download your personal Discord data.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/discord).
**NOTE:** The `DiscordChatLoader` is not the `ChatLoader` but a `DocumentLoader`.
It is used to load the data from the `Discord` data dump.
For the `ChatLoader` see Chat Loader section below.
```python
from langchain_community.document_loaders import DiscordChatLoader
```
## Chat Loader
See a [usage example](/docs/integrations/chat_loaders/discord).
# DocArray
> [DocArray](https://docarray.jina.ai/) is a library for nested, unstructured, multimodal data in transit,
> including text, image, audio, video, 3D mesh, etc. It allows deep-learning engineers to efficiently process,
> embed, search, recommend, store, and transfer multimodal data with a Pythonic API.
## Installation and Setup
We need to install `docarray` python package.
See a [usage example](/docs/integrations/vectorstores/docarray_hnsw).
```python
from langchain_community.vectorstores import DocArrayHnswSearch
```
See a [usage example](/docs/integrations/vectorstores/docarray_in_memory).
```python
from langchain_community.vectorstores import DocArrayInMemorySearch
```
# Doctran
> [Doctran](https://github.com/psychic-api/doctran) is a Python package. It uses LLMs and open-source
> NLP libraries to transform raw text into clean, structured, information-dense documents
> that are optimized for vector space retrieval. You can think of `Doctran` as a black box where
> messy strings go in and nice, clean, labelled strings come out.
## Installation and Setup
```bash
pip install doctran
```

See a [usage example for DoctranQATransformer](/docs/integrations/document_trans
```python
from langchain_community.document_transformers import DoctranQATransformer
```
### Property Extractor
See a [usage example for DoctranPropertyExtractor](/docs/integrations/document_transformers/doctran_extract_properties).
```python
from langchain_community.document_transformers import DoctranPropertyExtractor
```
### Document Translator
See a [usage example for DoctranTextTranslator](/docs/integrations/document_transformers/doctran_translate_document).
# Docugami
> [Docugami](https://docugami.com) converts business documents into a Document XML Knowledge Graph, generating forests
> of XML semantic trees representing entire documents. This is a rich representation that includes the semantic and
> structural characteristics of various chunks in the document as an XML tree.
## Installation and Setup
```bash
pip install dgml-utils
pip install docugami-langchain
```