Brevia
Brevia is an extensible API and framework to build your Retrieval Augmented Generation (RAG) and Information Extraction (IE) applications with LLMs.
Out of the box Brevia provides:
- a complete API for RAG applications
- an API with Information Extraction capabilities such as summarization, plus the ability to create your own analysis logic with asynchronous background operations support
Brevia uses:
- the popular LangChain framework that you can use to create your custom AI tools and logic
- the FastAPI framework to easily extend your application with new endpoints
- PostgreSQL with the pg_vector extension as vector database
Documentation
Brevia documentation is available at docs.brevia.app.
Admin UI
An official UI is now available via Brevia App. It is a webapp with which you can:
- create and configure new RAG collections
- add files, questions or links to each collection
- test the collection with a chat UI
- analyze the chat history for each collection
- perform some Information Extraction actions like summarization, audio transcription or custom analysis
Requirements
Python 3.10 or higher and Poetry are required.
A PostgreSQL database version 14 or higher with the pg_vector extension is also needed, but you can use the provided Docker image for a quicker setup.
Quick try
The easiest and fastest way to try Brevia is through Docker. By launching docker compose with the following command you will have a working Brevia system without any setup or configuration:
docker compose --profile fullstack up
At this point you will have:
- Brevia API on http://localhost:8000
- Brevia App UI on http://localhost:3000
To use ports other than 8000 or 3000 just uncomment the BREVIA_API_PORT or BREVIA_APP_PORT variables in the .env file.
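For example, a minimal override in .env could look like this (the port values below are only illustrative):
```bash
# .env — uncomment and adjust to change the default ports
BREVIA_API_PORT=8001
BREVIA_APP_PORT=3001
```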
You can also use the --profile api option to start only the Brevia API without the Brevia App.
Create a Brevia Project
Quick start
The quickest way to create a new Brevia project is using the cookiecutter template project like this:
pip install cookiecutter
cookiecutter gh:brevia-ai/brevia-cookiecutter
Simply answer a few questions and you're ready to go.
Manual setup
To manually create a project instead, follow these steps:
- create a new project with poetry new {your-brevia-project}
- install Brevia and its dependencies by running poetry add brevia; a virtualenv will automatically be created
- create a new main.py, starting from a copy (see the sketch after this list)
- activate the virtualenv by running the poetry shell command
- copy the file .env.sample to .env and fill in the environment variables, especially OPENAI_API_KEY with your OpenAI secret key and the PGVECTOR_* variables (see the Database section)
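As a reference, a minimal main.py could look like the sketch below. The import path of the router helper is an assumption here; the cookiecutter template and the official docs contain the canonical version.
```python
# main.py — minimal Brevia API application (sketch)
# NOTE: the import below is an assumption; check the brevia-cookiecutter
# template or the docs for the canonical module path.
from fastapi import FastAPI
from brevia.routers import app_routers  # assumed module exposing the Brevia routers

app = FastAPI()
app_routers.add_routers(app)  # mount the built-in Brevia API endpoints
```
You can then launch it with uvicorn as shown in the Launch section below.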
Custom Model
In addition to using the default OpenAI models, you have the option to integrate other Large Language Models, such as the Cohere model or any other model of your choice. Follow the steps below to set up and use a custom model in your Brevia project.
Cohere Model Integration
Once you have the Cohere API key and model name, update your Brevia project to include these credentials.
- Prerequisites: install the cohere library: poetry add cohere
- Open the .env file in your project directory.
- Set the COHERE_API_KEY variable with your Cohere API key: COHERE_API_KEY=your-cohere-api-key
- Set the Cohere variables with the desired parameters. For a QA/RAG application, set QA_COMPLETION_LLM and QA_FOLLOWUP_LLM to JSON like the following:
```bash
QA_COMPLETION_LLM='{
"_type": "cohere-chat",
"model_name": "command",
"temperature": 0,
"max_tokens": 200,
"verbose": true
}'
QA_FOLLOWUP_LLM='{
"_type": "cohere-chat",
"model_name": "command",
"temperature": 0,
"max_tokens": 200,
"verbose": true
}'
```
With the related embeddings:
```bash
EMBEDDINGS='{
"_type": "cohere-embeddings"
}'
```
- For the summarization module, set the SUMMARIZE_LLM var:
SUMMARIZE_LLM='{ "_type": "cohere-chat", "model_name": "command", "temperature": 0, "max_tokens": 2000 }'
Database
If you have a PostgreSQL instance with the pg_vector extension available you are ready to go, otherwise you can use the provided docker compose file.
In this case you can simply run docker compose up to start a PostgreSQL database with pg_vector.
You can also run the embedded pgAdmin admin tool with docker compose --profile admin up, which starts the postgres+pgvector and pgAdmin Docker images at the same time.
With your browser, open pgAdmin at http://localhost:4000. The 4000 port is configurable with the PGADMIN_PORT environment var in the .env file.
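As a reference, the PGVECTOR_* connection settings typically look like the snippet below; the exact variable names are an assumption here, so check .env.sample for the authoritative list.
```bash
# Illustrative PGVECTOR_* connection settings (names assumed, values are examples)
PGVECTOR_HOST=localhost
PGVECTOR_PORT=5432
PGVECTOR_DATABASE=brevia
PGVECTOR_USER=postgres
PGVECTOR_PASSWORD=postgres
```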
Migrations
Before using Brevia you need to launch the migrations script to create or update the database schema. This is done with Alembic using this command:
db_upgrade
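If you are not inside the project virtualenv, you can run it through Poetry like the other CLI commands in this README:
```bash
poetry run db_upgrade
```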
Launch
You are now ready to go, simply run
uvicorn --env-file .env main:app
and your Brevia API project is ready to accept calls!
Test your API
While we are working on a simple reference frontend to test and use the Brevia API, the simplest way to test your new API is by using the provided OpenAPI Swagger UI and ReDoc UI, or by using the official Postman files.
Simply point to /docs for the Swagger UI and to /redoc for ReDoc.
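For a quick check from the command line you can also fetch the generated OpenAPI schema, which FastAPI serves at /openapi.json by default:
```bash
curl http://localhost:8000/openapi.json
```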
If you prefer to use Postman, you can start by importing the collection file and an environment sample.
Add documents via CLI
You can also quickly create collections and add documents via CLI using the import_file command.
Just run:
import_file --file-path /path/to/file --collection my-collection
Where:
- /path/to/file is the path to a local PDF or txt file
- my-collection is the unique name of the collection, which will be created if it does not exist
Import/export of collections
To import/export collections via CLI we take advantage of the PostgreSQL COPY command in the import_collection and export_collection scripts.
A psql client is required for these scripts; connection parameters will be read from environment variables (via the .env file).
Two PostgreSQL CSV files will be created during export and read back during import.
One file, named {collection-name}-collection.csv, contains collection data and the other, named {collection-name}-embedding.csv, contains documents data and embeddings.
To export a collection use:
export_collection --folder-path /path/to/folder --collection collection-name
To import a collection use:
import_collection --folder-path /path/to/folder --collection collection-name
Where:
- /path/to/folder is the path where the two CSV files will be created on export or looked up on import
- collection-name is the name of the collection
LangSmith support
LangSmith is a platform to monitor, test and debug LLM apps built with LangChain. To use it in Brevia, if you have an account, you should export these environment variables when running Brevia:
LANGCHAIN_TRACING_V2=True
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_API_KEY="########"
LANGCHAIN_PROJECT="My Project"
If you are using a .env file you should use the BREVIA_ENV_SECRETS var like this:
BREVIA_ENV_SECRETS='{
"LANGCHAIN_TRACING_V2": "True"
"LANGCHAIN_ENDPOINT": "https://api.smith.langchain.com",
"LANGCHAIN_API_KEY": "########",
"LANGCHAIN_PROJECT": "My Project"
}'
This way Brevia will make sure that these variables are available as environment variables.
Edit LANGCHAIN_API_KEY with your LangSmith API key and set your project name in the LANGCHAIN_PROJECT var.
Access Tokens
There is built-in basic support for access tokens for API security.
Access tokens are actively checked via the Authorization: Bearer <token> header if a TOKENS_SECRET env variable has been set.
You may then generate a new access token using:
poetry run create_token --user {user} --duration {minutes}
If the TOKENS_SECRET env variable is set, token verification is automatically performed on every endpoint using brevia.dependencies.get_dependencies in its dependencies.
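For example, a call with the token in the Authorization header could look like this (host and port are those of the quick try setup, and the endpoint path is only illustrative):
```bash
# Replace <token> with the output of create_token
curl -H "Authorization: Bearer <token>" http://localhost:8000/collections
```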
The recommended way to generate TOKENS_SECRET is by using openssl via CLI, like:
openssl rand -hex 32
You can also define a list of valid users as a comma separated string in the TOKENS_USERS env variable.
Setting it like TOKENS_USERS="brevia,gustavo" means that only brevia and gustavo are considered valid user names. Remember to use double quotes in a .env file.
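Putting it together, the relevant .env entries could look like this (the secret below is just a placeholder):
```bash
# Generate the secret with: openssl rand -hex 32
TOKENS_SECRET="<your-generated-secret>"
TOKENS_USERS="brevia,gustavo"
```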
Brevia developer quick reference
Here are some brief notes if you want to help develop Brevia.
Unit tests
To launch unit tests make sure to have dev dependencies installed. This is done with:
poetry install --with dev
A tests/.env file where test environment variables are set must be present; you can start with a copy of tests/.env.sample.
Please make sure that the PGVECTOR_* variables point to a unit test database that will be continuously dropped and recreated. Also make sure to set USE_TEST_MODELS=True in order to use fake LLM instances.
To launch unit tests, run from the virtualenv:
pytest tests/
To create coverage in HTML format:
pytest --cov-report html --cov=brevia tests/
The coverage report is created using pytest-cov.
Update documentation
Install mkdocs-material using pip, without altering pyproject.toml:
pip install mkdocs-material
When working on documentation files you can use a live preview server with:
mkdocs serve
Or you can build the documentation in the site/ folder using:
mkdocs build --clean