fvdb - thin porcelain around FAISS
fvdb is a simple, minimal wrapper around the FAISS vector database.
It uses an L2 index with normalised vectors.
It uses the faiss-cpu package and sentence-transformers for embeddings.
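For intuition: on unit-length vectors, squared L2 distance is 2 - 2·cos(a, b), so the nearest neighbours by L2 distance are exactly the most cosine-similar ones. Here is a rough sketch of that pipeline using faiss and sentence-transformers directly; it illustrates the idea and is not fvdb's internal code:

import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")   # the conservative default model
texts = ["The moth flew at the window.", "He shot the elephant reluctantly."]
embeddings = model.encode(texts).astype(np.float32)

faiss.normalize_L2(embeddings)                     # in-place unit normalisation
index = faiss.IndexFlatL2(embeddings.shape[1])     # flat (exhaustive) L2 index
index.add(embeddings)

query = model.encode(["essay about an elephant"]).astype(np.float32)
faiss.normalize_L2(query)
distances, ids = index.search(query, 1)            # smallest L2 = highest cosine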
If you need the GPU version of FAISS (very probably not), you can manually install faiss-gpu and use GpuIndexFlatL2 instead of IndexFlatL2 in fvdb/db.hy.
You can still use a GPU text embedding model even while using faiss-cpu.
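For reference, swapping in the GPU index with the plain FAISS API looks roughly like this (standard faiss-gpu usage, shown here in Python rather than the Hy of fvdb/db.hy):

import faiss  # the faiss-gpu build

d = 1024                              # embedding dimension of your model
res = faiss.StandardGpuResources()    # GPU scratch memory and streams
index = faiss.GpuIndexFlatL2(res, d)  # drop-in replacement for IndexFlatL2(d)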
If summaries are enabled (not the default; see the configuration section below), a summary of each extract is stored alongside the extract.
It matches well with trag.
Features
- similarity search with scores
- choice of sentence-transformers embedding models
- useful formatting of results (JSON, tabulated, ...)
- CLI access
- extract summaries
Any input other than plain text (markdown, asciidoc, rst, source code, etc.) is out of scope. You should use one of the many available packages for that (unstructured, trafilatura, docling, etc.) to convert to plain text in a separate step.
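For example, a separate pre-processing step with unstructured might look like the sketch below (assuming its generic partition API; any of the packages above would do):

from unstructured.partition.auto import partition

# Convert markdown (or HTML, etc.) to plain text before ingesting with fvdb.
elements = partition(filename="notes.md")
with open("notes.txt", "w") as f:
    f.write("\n\n".join(el.text for el in elements))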
Usage
import hy # fvdb is written in Hy, but you can use it from Python too
from fvdb import faiss, info, ingest, nuke, similar, sources, write
# data ingestion
v = faiss()
ingest(v, "doc.md")
ingest(v, "docs-dir")
write(v, "/tmp/test.fvdb") # defaults to $XDG_DATA_HOME/fvdb (~/.local/share/fvdb/ on Linux)
# search
results = similar(v, "some query text")
results = marginal(v, "some query text") # not yet implemented
# information, management
sources(v)
{ ...
'docs-dir/Once More to the Lake.txt',
'docs-dir/Politics and the English Language.txt',
'docs-dir/Reflections on Gandhi.txt',
'docs-dir/Shooting an elephant.txt',
'docs-dir/The death of the moth.txt',
... }
info(v)
{ 'records': 42,
'embeddings': 42,
'embedding_dimension': 1024,
'is_trained': True,
  'path': '/tmp/test.fvdb',
'sources': 24,
'embedding_model': 'Alibaba-NLP/gte-large-en-v1.5'}
nuke(v)
These are also available from the command line.
$ # defaults to $XDG_DATA_HOME/fvdb (~/.local/share/fvdb/ on Linux)
$ # data ingestion (saves on exit)
$ fvdb ingest doc.md
Adding 2 records
$ fvdb ingest docs-dir
Adding 42 records
$ # search
$ fvdb similar -j "some query text" > results.json # --json / -j gives json output
$ fvdb similar -r 2 "George Orwell's formative experience as a policeman in colonial Burma"
# defaults to tabulated output (not all fields will be shown)
score source added page length
-------- ---------------------------------- -------------------------------- ------ --------
0.579925 docs-dir/A hanging.txt 2024-11-05T11:37:26.232773+00:00 0 2582
0.526988 docs-dir/Shooting an elephant.txt 2024-11-05T11:37:43.891659+00:00 0 3889
$ fvdb marginal "some query text" # not yet implemented
$ # information, management
$ fvdb sources
...
docs-dir/Once More to the Lake.txt
docs-dir/Politics and the English Language.txt
docs-dir/Reflections on Gandhi.txt
docs-dir/Shooting an elephant.txt
docs-dir/The death of the moth.txt
...
$ fvdb info
------------------- -----------------------------
records 44
embeddings 44
embedding_dimension 1024
is_trained True
path /tmp/test
sources 24
embedding_model Alibaba-NLP/gte-large-en-v1.5
------------------- -----------------------------
$ fvdb nuke
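Because -j emits JSON, results compose with standard tools. Assuming the JSON is a list of result objects with the same fields as the table above (score, source, and so on; check the actual output), you could post-process with jq:

$ fvdb similar -j "some query text" | jq .                           # pretty-print all results
$ fvdb similar -j "some query text" | jq 'map(select(.score > 0.5))' # keep only strong matches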
Configuration
Looks for $XDG_CONFIG_HOME/fvdb/conf.toml, otherwise uses defaults.
You cannot mix embedding models in a single fvdb.
Here is an example.
# Sets the default path to something other than $XDG_DATA_HOME/fvdb
path = "/tmp/test.fvdb"
# Summaries are useful if you use an embedding model with a large maximum sequence length;
# for example, gte-large-en-v1.5 has a maximum sequence length of 8192.
summary = true
# A conservative default model with a maximum sequence length of 512,
# so there is no point in using summaries.
embeddings.model = "all-mpnet-base-v2"
## Some models need extra options
#embeddings.model = "Alibaba-NLP/gte-large-en-v1.5"
#embeddings.trust_remote_code = true
## You can put some smaller models on a cpu, but larger models will be slow
#embeddings.device = "cpu"
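For illustration only, a hypothetical loader using Python 3.11's tomllib shows the intended precedence (file settings override defaults); the default values here are assumptions based on this README, not fvdb's actual code:

import os
import tomllib  # Python 3.11+

# Assumed defaults, inferred from this README.
defaults = {"path": os.path.join(os.environ.get("XDG_DATA_HOME",
                                                os.path.expanduser("~/.local/share")),
                                 "fvdb"),
            "summary": False,
            "embeddings": {"model": "all-mpnet-base-v2"}}

conf_file = os.path.join(os.environ.get("XDG_CONFIG_HOME",
                                        os.path.expanduser("~/.config")),
                         "fvdb", "conf.toml")
try:
    with open(conf_file, "rb") as f:
        conf = {**defaults, **tomllib.load(f)}  # shallow merge: file settings win
except FileNotFoundError:
    conf = defaults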
Installation
First install pytorch, which is used by sentence-transformers.
You must decide whether you want the CPU or the CUDA (Nvidia GPU) version of pytorch.
For just text embeddings for fvdb, the CPU version is sufficient with the default model.
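For example, the CPU-only build can be installed from pytorch's own package index (check pytorch.org for the current command for your platform):

$ pip install torch --index-url https://download.pytorch.org/whl/cpu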
Then, pip install fvdb, and that's it.
Planned
- optional progress bars for long jobs