A chat interface over up-to-date Python library documentation.
🛩️ Fleet Context
Code generation with up-to-date Python libraries.
View the site (WIP) | API waitlist (WIP)
Quick Start
Install the package and run `context` to ask questions about the most up-to-date Python libraries. You will need to provide your OpenAI API key to start a session.

```shell
pip install fleet-libraries
context
```
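If you prefer not to paste your key each session, the conventional place for it is the `OPENAI_API_KEY` environment variable used by OpenAI's own tooling. Note this is an assumption based on OpenAI's SDK convention; fleet-context may instead prompt for the key interactively.

```shell
# OPENAI_API_KEY is OpenAI's standard environment variable; whether
# fleet-context reads it (rather than prompting) is an assumption here.
export OPENAI_API_KEY="sk-your-key-here"  # placeholder, not a real key
```

Then launch `context` as usual in the same shell.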
Limit libraries
Use the `-l` or `--libraries` flag followed by a list of libraries to limit your session to just those libraries. Defaults to all libraries.

```shell
context -l langchain pydantic openai
```
Use a different OpenAI model
You can select a different OpenAI model with `-m` or `--model`. Defaults to `gpt-4`. You can set your model to `gpt-4-32k` (if your organization has access), `gpt-3.5-turbo`, or `gpt-3.5-turbo-16k`.

```shell
context -m gpt-3.5-turbo
```
Advanced settings
You can control the number of retrieved chunks with `-k` or `--k_value` (defaults to 10), and toggle whether the model cites its sources with `-c` or `--cite_sources` (defaults to true).

```shell
context -k 15 -c false
```
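To make the `k_value` setting concrete, the sketch below shows what a top-k retrieval step typically looks like: documentation chunks are ranked by embedding similarity to the query, and the k best are handed to the model. This is a generic illustration with toy 2-D vectors, not fleet-context's actual implementation; all names here are hypothetical.

```python
import math

def top_k_chunks(query_vec, chunk_vecs, k=10):
    """Return the ids of the k chunks most similar to the query.

    Cosine similarity over embedding vectors -- the kind of ranking a
    -k/--k_value flag usually controls in a retrieval pipeline.
    (Hypothetical sketch; not fleet-context's real code.)
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    # Rank every chunk by similarity to the query, highest first.
    ranked = sorted(chunk_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [chunk_id for chunk_id, _ in ranked[:k]]

# Toy example: three chunks with made-up 2-D "embeddings".
chunks = {
    "langchain-intro": [1.0, 0.0],
    "pydantic-v2": [0.0, 1.0],
    "openai-chat": [0.7, 0.7],
}
print(top_k_chunks([1.0, 0.1], chunks, k=2))  # → ['langchain-intro', 'openai-chat']
```

A larger `k` gives the model more context to cite from at the cost of a longer prompt, which is why pairing it with a bigger-context model like `gpt-4-32k` can make sense.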
Evaluations
Results
Sampled libraries
We saw a 37-point improvement in gpt-4 generation scores across the board. We attribute this to the model's lack of knowledge of the most up-to-date versions of these libraries.
Langchain
We saw a 48-point improvement for gpt-3.5 and a 58-point improvement for gpt-4. We hypothesize that the reason the "before" score for gpt-4 is lower is because it's better at mentioning what it doesn't know.
The drastic jump makes sense, given the entire Langchain documentation was built after gpt-4's knowledge cutoff.
Pydantic
We saw a 34-point improvement for gpt-3.5 and a 38-point improvement for gpt-4. This is because Pydantic v1 was launched before gpt-4's knowledge cutoff, while Pydantic v2 was released after it, in 2023. The improvement was not as sharp as Langchain's, but it was still significant.
Hashes for fleet_context-1.0.0-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | `64272d15dedce8cf4b054568cc851c472d61587bde5da15ba93d46261622fb0c` |
| MD5 | `9b80a7c046f355e3f275fbaa91fdbb33` |
| BLAKE2b-256 | `24bfa78e25e26845165ee68dd46ad5067621a0174f46e6d0bda58ac708517c56` |