Merge pull request #4 from hwchase17/readme-update

update readmes
Harrison Chase 1 year ago committed by GitHub
commit 68d02d425b

@ -2,53 +2,31 @@
[Warning: very beta, may change drastically]
Taking inspiration from Hugging Face Hub, this is a collection of all artifacts useful for working with LangChain chains.
To start, this focuses on prompts.
This is intended to be a central place to upload and share prompts.
It is intended to be very community driven and we hope that people will contribute their own prompts.
Please see below for instructions on loading and uploading prompts.
Taking inspiration from Hugging Face Hub, LangChainHub is a collection of all artifacts useful for working with LangChain primitives such as prompts, chains and agents.
The goal of this repository is to be a central resource for sharing and discovering high quality prompts, chains and agents that combine together to form complex LLM applications.
## Loading
We are starting off the hub with a collection of prompts, and we look forward to the LangChain community adding to this collection. We hope to expand to chains and agents shortly.
All prompts can be loaded from LangChain by specifying the desired path.
The path should be relative to the `prompts` folder here.
For example, if there is a file at `prompts/qa/stuff/basic/prompt.yaml`, the path you want to specify is `qa/stuff/basic/prompt.yaml`
## 📖 Prompts
Once you have that path, you can load it in the following manner:
At a high level, prompts are organized by use case inside the `prompts` directory.
To load a prompt in LangChain, you should use the following code snippet:
```python
from langchain.prompts import load_from_hub
prompt = load_from_hub("qa/stuff/basic/prompt.yaml")
```
from langchain.prompts import load_prompt
## Uploading
prompt = load_prompt('lc://prompts/path/to/file.json')
```
There are three methods for uploading prompts: `json`, `yaml`, and `python`.
The suggested options are `json` and `yaml`, but we provide `python` as an option for more flexibility.
Please see the sections below for instructions on uploading each format.
In addition to prompt files themselves, each sub-directory also contains a README explaining how best to use that prompt in the appropriate LangChain chain.
You should upload your prompt file to a folder in the appropriate use case section.
In addition to the prompt file, you should also add a README.md for that folder, including any relevant information about the prompt.
This can be how it was created, what it should be used for, what the variables mean, etc.
For more detailed information on how prompts are organized in the Hub, and how best to upload one, please see the documentation [here](./prompts/README.md)
### `json`
To get a properly formatted json file, if you have a prompt in memory in Python you can run:
```python
prompt.save("file_name.json")
```
## 🔗 Chains
Replace `"file_name"` with the desired name of the file.
### `yaml`
To get a properly formatted yaml file, if you have a prompt in memory in Python you can run:
```python
prompt.save("file_name.yaml")
```
Coming soon!
Replace `"file_name"` with the desired name of the file.
## 🤖 Agents
### `python`
To get a properly formatted Python file, you should upload a Python file that exposes a `PROMPT` variable.
This is the variable that will be loaded.
This variable should be an instance of a subclass of `BasePromptTemplate` in LangChain.
Coming soon!

@ -0,0 +1,60 @@
# Prompts
This directory covers loading and uploading of prompts.
Each sub-directory covers a different use case, and contains not only relevant prompts for that use case but also a README file describing how best to use those prompts.
## Loading
All prompts can be loaded from LangChain by specifying the desired path, and adding the `lc://` prefix. The path should be relative to the `langchain-hub` repo.
For example, to load the prompt at the path `langchain-hub/prompts/qa/stuff/basic/prompt.yaml`, the path you want to specify is `lc://prompts/qa/stuff/basic/prompt.yaml`
Once you have that path, you can load it in the following manner:
```python
from langchain.prompts import load_prompt
prompt = load_prompt('lc://prompts/qa/stuff/basic/prompt.yaml')
```
## Uploading
To upload a prompt to the LangChainHub, you must upload 2 files:
1. The prompt. There are 3 supported file formats for prompts: `json`, `yaml`, and `python`. The suggested options are `json` and `yaml`, but we provide `python` as an option for more flexibility. Please see the sections below for instructions on uploading each format.
2. Associated README file for the prompt. This provides a high level description of the prompt, usage patterns of the prompt and chains that the prompt is compatible with. For more details, check out langchain-hub/readme_template.
If you are uploading a prompt to an existing directory, it should already have a README file and so this should not be necessary.
The prompts on the hub are organized by use case. The use cases are reflected in the directory structure and names, and each separate directory represents a different use case. You should upload your prompt file to a folder in the appropriate use case section.
If adding a prompt to an existing use case folder, then make sure that the prompt:
1. serves the same use case as the existing prompt(s) in that folder, and
2. has the same inputs as the existing prompt(s).
A litmus test to make sure that multiple prompts belong in the same folder: the existing README file for that folder should also apply to the new prompt being added.
### Supported file formats
#### `json`
To get a properly formatted json file, if you have a prompt in memory in Python you can run:
```python
prompt.save("file_name.json")
```
Replace `"file_name"` with the desired name of the file.
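For reference, a saved prompt file is a small JSON document. The fields below (`_type`, `input_variables`, `template`) reflect LangChain's prompt serialization format at the time of writing and may change; treat this as an illustrative sketch rather than a schema:

```json
{
    "_type": "prompt",
    "input_variables": ["question"],
    "template": "Answer the following question:\n\n{question}"
}
```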
#### `yaml`
To get a properly formatted yaml file, if you have a prompt in memory in Python you can run:
```python
prompt.save("file_name.yaml")
```
Replace `"file_name"` with the desired name of the file.
#### `python`
To get a properly formatted Python file, you should upload a Python file that exposes a `PROMPT` variable.
This is the variable that will be loaded.
This variable should be an instance of a subclass of `BasePromptTemplate` in LangChain.

@ -17,12 +17,12 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains import APIChain
llm = ...
api_docs = ...
prompt = load_from_hub('api/api_response/<file-name>')
prompt = load_prompt('lc://prompts/api/api_response/<file-name>')
chain = APIChain.from_llm_and_api_docs(llm, api_docs, api_response_prompt=prompt)
```

@ -15,12 +15,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains import APIChain
llm = ...
api_docs = ...
prompt = load_from_hub('api/api_url/<file-name>')
prompt = load_prompt('lc://prompts/api/api_url/<file-name>')
chain = APIChain.from_llm_and_api_docs(llm, api_docs, api_url_prompt=prompt)
```

@ -16,11 +16,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains import ConversationChain
llm = ...
prompt = load_from_hub('conversation/<file-name>')
prompt = load_prompt('lc://prompts/conversation/<file-name>')
chain = ConversationChain(llm=llm, prompt=prompt)
```

@ -3,16 +3,21 @@
Basic prompt designed to be used as a test case; it will just instruct the LLM to say "Hello World".
## Inputs
This prompt doesn't have any inputs.
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains import LLMChain
llm = ...
prompt = load_from_hub('hello-world/<file-name>')
prompt = load_prompt('lc://prompts/hello-world/<file-name>')
chain = LLMChain(llm=llm, prompt=prompt)
```

@ -14,11 +14,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains import LLMBashChain
llm = ...
prompt = load_from_hub('llm_bash/<file-name>')
prompt = load_prompt('lc://prompts/llm_bash/<file-name>')
chain = LLMBashChain(llm=llm, prompt=prompt)
```

@ -15,11 +15,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains import LLMMathChain
llm = ...
prompt = load_from_hub('llm_math/<file-name>')
prompt = load_prompt('lc://prompts/llm_math/<file-name>')
chain = LLMMathChain(llm=llm, prompt=prompt)
```

@ -16,12 +16,12 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains import ConversationChain
from langchain.chains.conversation.memory import ConversationSummaryMemory
llm = ...
prompt = load_from_hub('memory/summarize/<file-name>')
prompt = load_prompt('lc://prompts/memory/summarize/<file-name>')
memory = ConversationSummaryMemory(llm=llm, prompt=prompt)
chain = ConversationChain(llm=llm, memory=memory)
```

@ -16,13 +16,13 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains import PALChain
llm = ...
stop = ...
get_answer_expr = ...
prompt = load_from_hub('pal/<file-name>')
prompt = load_prompt('lc://prompts/pal/<file-name>')
chain = PALChain(llm=llm, prompt=prompt, stop=stop, get_answer_expr=get_answer_expr)
```

@ -16,11 +16,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains.question_answering import load_qa_chain
llm = ...
prompt = load_from_hub('qa/map_reduce/question/<file-name>')
prompt = load_prompt('lc://prompts/qa/map_reduce/question/<file-name>')
chain = load_qa_chain(llm, chain_type="map_reduce", question_prompt=prompt)
```

@ -16,11 +16,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains.question_answering import load_qa_chain
llm = ...
prompt = load_from_hub('qa/map_reduce/reduce/<file-name>')
prompt = load_prompt('lc://prompts/qa/map_reduce/reduce/<file-name>')
chain = load_qa_chain(llm, chain_type="map_reduce", combine_prompt=prompt)
```

@ -17,11 +17,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains.question_answering import load_qa_chain
llm = ...
prompt = load_from_hub('qa/refine/<file-name>')
prompt = load_prompt('lc://prompts/qa/refine/<file-name>')
chain = load_qa_chain(llm, chain_type="refine", refine_prompt=prompt)
```

@ -16,11 +16,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains.question_answering import load_qa_chain
llm = ...
prompt = load_from_hub('qa/stuff/<file-name>')
prompt = load_prompt('lc://prompts/qa/stuff/<file-name>')
chain = load_qa_chain(llm, chain_type="stuff", prompt=prompt)
```

@ -1,20 +1,14 @@
# Description of QA with Sources Map Reduce Prompts
Prompt
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
This prompt enables the user to perform question answering while providing sources.
It uses the map reduce chain for doing QA. This specific prompt reduces the answer generated during the question stage.
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
1. `summaries`: Summaries generated during the map step.
2. `question`: Original question to be answered.
## Usage
@ -22,11 +16,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
llm = ...
prompt = load_from_hub('qa_with_sources/map_reduce/reduce/<file-name>')
prompt = load_prompt('lc://prompts/qa_with_sources/map_reduce/reduce/<file-name>')
chain = load_qa_with_sources_chain(llm, chain_type="map_reduce", combine_prompt=prompt)
```

@ -16,11 +16,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
llm = ...
prompt = load_from_hub('qa_with_sources/refine/<file-name>')
prompt = load_prompt('lc://prompts/qa_with_sources/refine/<file-name>')
chain = load_qa_with_sources_chain(llm, chain_type="refine", refine_prompt=prompt)
```

@ -16,11 +16,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
llm = ...
prompt = load_from_hub('qa_with_sources/stuff/<file-name>')
prompt = load_prompt('lc://prompts/qa_with_sources/stuff/<file-name>')
chain = load_qa_with_sources_chain(llm, chain_type="stuff", prompt=prompt)
```

@ -18,12 +18,12 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains import SQLDatabaseChain
llm = ...
database = ...
prompt = load_from_hub('sql_query/language_to_sql_output/<file-name>')
prompt = load_prompt('lc://prompts/sql_query/language_to_sql_output/<file-name>')
chain = SQLDatabaseChain(llm=llm, database=database, prompt=prompt)
```

@ -16,12 +16,12 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains import SQLDatabaseSequentialChain
llm = ...
database = ...
prompt = load_from_hub('sql_query/relevant_tables/<file-name>')
prompt = load_prompt('lc://prompts/sql_query/relevant_tables/<file-name>')
chain = SQLDatabaseSequentialChain.from_llm(llm, database, decider_prompt=prompt)
```

@ -15,11 +15,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains.summarize import load_summarize_chain
llm = ...
prompt = load_from_hub('summarize/map_reduce/map/<file-name>')
prompt = load_prompt('lc://prompts/summarize/map_reduce/map/<file-name>')
chain = load_summarize_chain(llm, chain_type="map_reduce", map_prompt=prompt)
```

@ -16,11 +16,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains.summarize import load_summarize_chain
llm = ...
prompt = load_from_hub('summarize/refine/<file-name>')
prompt = load_prompt('lc://prompts/summarize/refine/<file-name>')
chain = load_summarize_chain(llm, chain_type="refine", refine_prompt=prompt)
```

@ -14,11 +14,11 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains.summarize import load_summarize_chain
llm = ...
prompt = load_from_hub('summarize/stuff/<file-name>')
prompt = load_prompt('lc://prompts/summarize/stuff/<file-name>')
chain = load_summarize_chain(llm, chain_type="stuff", prompt=prompt)
```

@ -16,12 +16,12 @@ This is a description of the inputs that the prompt expects.
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.prompts import load_prompt
from langchain.chains import VectorDBQA
llm = ...
vectorstore = ...
prompt = load_from_hub('vector_db_qa/<file-name>')
prompt = load_prompt('lc://prompts/vector_db_qa/<file-name>')
chain = VectorDBQA.from_llm(llm, prompt=prompt, vectorstore=vectorstore)
```
