A chat interface over up-to-date Python library documentation.
🛩️ Fleet Context
A CLI tool over the top 1218 Python libraries, for library Q&A and code generation with all available OpenAI models.
Website | Data Visualizer | PyPI | @fleet_ai
Demo: https://github.com/fleet-ai/context/assets/44193474/80381b25-551e-4602-8987-071e92354f6f
Quick Start
Install the package and run `context` to ask questions about the most up-to-date Python libraries. You will have to provide your OpenAI API key to start a session.

```
pip install fleet-context
context
```
If you'd like to run the CLI tool locally, you can clone this repository, `cd` into it, then run:

```
pip install -e .
context
```
If you have an existing package that already uses the keyword `context`, you can also activate Fleet Context by running:

```
fleet-context
```
API
You can download any library's embeddings and load them into a dataframe by running:

```python
from context import download_embeddings

df = download_embeddings("langchain")
```

```
100%|███████████████████████████████████████| 901k/901k [00:00<00:00, 2.64MiB/s]
                                     id                                   dense_embeddings                                           metadata                                      sparse_values
0  91cd9f22-b3b6-49e1-8672-e1e42a1cf766  [-0.014795871, -0.013938751, 0.02374646, -0.02...  {'id': '91cd9f22-b3b6-49e1-8672-e1e42a1cf766',...  {'indices': [4279915734, 3106554626, 771291085...
1  80cd620e-7408-4649-aaa7-3fe3c719b4ed  [-0.0027519625, 0.013772411, 0.0019546314, -0....  {'id': '80cd620e-7408-4649-aaa7-3fe3c719b4ed',...  {'indices': [1497795724, 573857107, 2203090375...
2  87a406ad-e413-42fc-8813-6fa042f80f6a  [-0.022883521, -0.0036436971, 0.0026068306, 0....  {'id': '87a406ad-e413-42fc-8813-6fa042f80f6a',...  {'indices': [1558403699, 640376310, 358389376,...
3  8bdd8dae-8384-414d-87d2-4390ca29d857  [-0.024882555, -0.0041470923, -0.011419726, -0...  {'id': '8bdd8dae-8384-414d-87d2-4390ca29d857',...  {'indices': [1558403699, 3778951566, 274301652...
4  8cc5eb61-317a-4196-8099-51c47ef70406  [-0.036361936, 0.0027855083, -0.013214805, -0....  {'id': '8cc5eb61-317a-4196-8099-51c47ef70406',...  {'indices': [3586802366, 1110127215, 161253108...
```
You can see a full list of supported libraries, and search through them, at the bottom of our website.
How to use the embeddings
You can loop through the embeddings like so:

```python
for index, row in df.iterrows():
    print(index, row)
```
Using Fleet Context's rich metadata
One of the biggest advantages of using Fleet Context's embeddings is the amount of information preserved throughout the chunking and embedding process. You can take advantage of the metadata to significantly improve the quality of your retrievals.
Here's a full list of metadata that we support.
IDs:

- `library_id`: the UUID of the library referenced
- `page_id`: the UUID of the page the chunk was retrieved from
- `parent`: the UUID of the section the chunk was retrieved from (not to be confused with `section_id`)

Page/section information:

- `url`: the URL of the section or page the chunk was retrieved from, formatted as `f"{page_url}#{section_id}"`
- `section_id`: the section's `id` field from the HTML
- `section_index`: the ordering of the chunk within the section. If two chunks have the same parent, this tells you which one was presented first.

Chunk information:

- `title`: the title of the section, or of the page if a section title does not exist
- `text`: the chunk text, formatted in Markdown. Note that Markdown is removed from the embeddings for better retrieval results.
- `type`: the type of the chunk. Can be `None` (most common) or a defined value like `class`, `function`, `attribute`, `data`, `exception`, and more.
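As a minimal sketch, here is one way to read those fields off the dataframe returned by `download_embeddings`. The field names are the ones listed above; `.get()` is used because fields like `type` may be absent:

```python
from context import download_embeddings

df = download_embeddings("langchain")

# Each row's `metadata` column is a plain dict containing the fields above.
for _, row in df.head(3).iterrows():
    meta = row["metadata"]
    print(meta.get("title"), meta.get("type"), meta.get("url"))
```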
Improving retrievals with Fleet Context
Re-ranking with `section_index`

Re-ranking is commonly known to improve results dramatically. We can take that a step further by exploiting the fact that the ordering within each section/page is preserved: presenting content to the model in the same order it is presented to the reader will likely produce the best results. Use `section_index` to do a smart re-ranking of your chunks, as in the sketch below.
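A minimal sketch of such a re-ranking, assuming your vector search has returned a list of metadata dicts (the `chunks` name and helper are hypothetical; only `parent` and `section_index` come from the schema above):

```python
def rerank_by_section_index(chunks):
    """Restore reading order: group chunks by their parent section,
    then order each group by section_index."""
    return sorted(
        chunks,
        key=lambda meta: (meta.get("parent") or "", meta.get("section_index") or 0),
    )
```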
Parent/child retrieval with `parent`

If you notice two or more chunks that share the same `parent` field and sit relatively close together on the page (via `section_index`), you can go up one level, query all chunks with the same `parent` UUID, and pass in the entire section.
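A sketch of that expansion against the dataframe from `download_embeddings`; `expand_to_parent` is a hypothetical helper, and it assumes `text` lives inside the `metadata` dict as listed above:

```python
def expand_to_parent(df, parent_id):
    """Collect every chunk in the same section and stitch the
    section back together in reading order."""
    metas = [m for m in df["metadata"] if m.get("parent") == parent_id]
    metas.sort(key=lambda m: m.get("section_index") or 0)
    return "\n\n".join(m.get("text") or "" for m in metas)
```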
Better filtering and prompt construction with `type`

On retrieval, you can map user intent to a filter on `type`. If the user intends to generate code, you can pre-filter your retrieval to just `class` or `function` chunks (see the sketch below). You can use this in creative ways; we've found that pairing it with OpenAI's function calling works really well.

`type` also allows you to construct your prompt with more clarity, and to display richer information to the user. For example, adding the type to the prompt, followed by the chunk, will produce better results, because it allows the language model to understand what the chunk is trying to say.

Note that `type` is not guaranteed to be present and defined for all libraries; it only appears for libraries whose documentation was generated by Sphinx/readthedocs.
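For instance, a hedged sketch of pre-filtering for code-generation intent on the `download_embeddings` dataframe (`CODE_TYPES` is our own name, and remember that `type` can be `None`):

```python
CODE_TYPES = {"class", "function"}

# Keep only chunks whose type marks them as code documentation.
code_df = df[df["metadata"].apply(lambda m: m.get("type") in CODE_TYPES)]
```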
Rich prompt construction & information presentation with `text`

Our `text` field preserves all information from the HTML elements by converting them to Markdown. This allows for two big advantages:

- From our tests, we've discovered that language models perform better with Markdown formatting than without
- You're able to display rich information (titles, URLs, images) to the user if you're sourcing a chunk
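A minimal sketch of prompt construction along these lines, assuming a list of metadata dicts named `chunks`; the bracketed type header is our own convention, not part of the library:

```python
def build_context_block(chunks):
    """Prefix each chunk with its type and title so the model knows
    what kind of documentation it is reading."""
    parts = []
    for meta in chunks:
        header = f"[{meta.get('type') or 'doc'}] {meta.get('title') or ''}"
        parts.append(f"{header}\n{meta.get('text') or ''}")
    return "\n\n---\n\n".join(parts)
```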
Precise sourcing with `url` and `section_id`

You can link the user to the exact section with `url` (where supported, it already carries the section anchor within the page).
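For example, a small sketch of a citation line built from a chunk's metadata (the formatting is our own):

```python
def format_source(meta):
    # url already carries the section anchor: f"{page_url}#{section_id}"
    return f"Source: {meta.get('title') or 'untitled'} ({meta.get('url')})"
```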
CLI Tool
Limit libraries
You can use `-l` or `--libraries` followed by a list of libraries to limit your session to those libraries. Defaults to all. View a list of all supported libraries on our website.

```
context -l langchain pydantic openai
```
Use a different OpenAI model
You can select a different OpenAI model with `-m` or `--model`. Defaults to `gpt-4`. You can set your model to `gpt-4-1106-preview` (gpt-4-turbo), `gpt-3.5-turbo`, or `gpt-3.5-turbo-16k`.

```
context -m gpt-4-1106-preview
```
Using local models
Local model support is powered by LM Studio. To use local models, pass `--local` or `-n`:

```
context --local
```

You need to download your local model through LM Studio. To do that:

- Download LM Studio from https://lmstudio.ai
- Open LM Studio and download your model of choice.
- Click the ↔ icon on the very left sidebar
- Select your model and click "Start Server"

The context window defaults to 3000. You can change this with `--context_window` or `-w`:

```
context --local --context_window 4096
```
Advanced settings
You can control the number of retrieved chunks with `-k` or `--k_value` (defaults to 15), and you can toggle whether the model cites its sources with `-c` or `--cite_sources` (defaults to true).

```
context -k 25 -c false
```
Evaluations
Results
Sampled libraries
We saw a 37-point improvement in `gpt-4` generation scores and a 34-point improvement in `gpt-4-turbo` generation scores across a randomly sampled set of 50 libraries.

For `gpt-4`, we attribute this to its lack of knowledge of the most up-to-date versions of libraries; for `gpt-4-turbo`, to the combination of having relevant, up-to-date information to generate with and the relevance of that information.
Embeddings
Check out our visualized data on the Data Visualizer linked above. You can also download all embeddings (see the API section above).