An in-memory vector index project, similar to Pinecone DB.

Project description

vectra-py

This is a faithful port of Steven Ickman's Vectra in-memory vector index project. The only modifications were porting it to Python, adjusting the formatting, and generating some Python-friendly example code. The readme below follows on from his, with similar Pythonic adjustments.

Thanks for the inspiration, Steve!

Vectra-py is a local vector database for Python with features similar to Pinecone or Qdrant but built using local files. Each Vectra index is a folder on disk. There's an index.json file in the folder that contains all the vectors for the index along with any indexed metadata. When you create an index you can specify which metadata properties to index and only those fields will be stored in the index.json file. All of the other metadata for an item will be stored on disk in a separate file keyed by a GUID.
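As a rough sketch of the on-disk layout (the per-item file name and extension here are illustrative assumptions, not the exact schema):

```
index/
├── index.json      # all vectors + the indexed metadata fields
└── <GUID>.json     # remaining (non-indexed) metadata for a single item
```
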

When querying Vectra you'll be able to use the same subset of MongoDB query operators that Pinecone supports, and the results will be returned sorted by similarity. Every item in the index is first filtered by metadata and then ranked for similarity. Even though every item is evaluated, it's all in memory, so queries should be nearly instantaneous: likely 1-2 ms for even a rather large index. Smaller indexes should take <1 ms.
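To illustrate the filter-then-rank flow, here's a simplified sketch in plain Python (not Vectra's actual implementation; the operator names follow Pinecone's Mongo-style conventions, and only a few operators are shown):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def matches(metadata, query_filter):
    # Tiny subset of the Mongo-style operators Pinecone supports.
    for field, condition in query_filter.items():
        value = metadata.get(field)
        for op, expected in condition.items():
            if op == "$eq" and value != expected:
                return False
            if op == "$ne" and value == expected:
                return False
            if op == "$in" and value not in expected:
                return False
    return True

def query(items, vector, top_k, query_filter):
    # 1) Filter every item by metadata, 2) rank the survivors by similarity.
    filtered = [i for i in items if matches(i["metadata"], query_filter)]
    scored = [{"item": i, "score": cosine_similarity(vector, i["vector"])}
              for i in filtered]
    return sorted(scored, key=lambda s: s["score"], reverse=True)[:top_k]
```

Because both passes are simple in-memory loops, the cost stays small even when every item is touched.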

Keep in mind that your entire Vectra index is loaded into memory, so it's not well suited to scenarios like long-term chat bot memory. Use a real vector DB for that. Vectra is intended for scenarios where you have a small corpus of mostly static data that you'd like to include in your prompt. Infinite few-shot examples would be a great use case for Vectra, or even just a single document you want to ask questions over.

Pinecone-style namespaces aren't directly supported, but you could easily mimic them by creating a separate Vectra index (and folder) for each namespace.
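For example, a small helper along these lines (the function name and folder convention are hypothetical, not part of the library) maps each namespace to its own index folder:

```python
import os

def namespace_index_path(base_dir: str, namespace: str) -> str:
    # One folder, and therefore one Vectra index, per namespace.
    return os.path.join(base_dir, f"index-{namespace}")
```

Each resulting path can then be handed to its own LocalIndex instance.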

Installation

WIP, run via example.py for the moment.

Eventually:

$ pip install vectra-py

Prep

Use dotenv or set an environment variable to store your OpenAI API key.
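For example (a sketch; python-dotenv is optional, and OPENAI_APIKEY is the variable name used in the usage section below):

```python
import os

def load_api_key():
    # Either export OPENAI_APIKEY in your shell, or put it in a .env file
    # and call load_dotenv() from python-dotenv before reading it.
    return os.environ.get("OPENAI_APIKEY")
```
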

Usage

First, create an instance of LocalIndex with the path to the folder where you want your items stored:

import os

from src.local_index import LocalIndex

index = LocalIndex(os.path.join(os.getcwd(), 'index'))

Next, from inside an async function, create your index:

if not index.isIndexCreated():
    index.createIndex()

Add some items to your index:

import os

import openai
import openai_async

openai.api_key = os.environ.get("OPENAI_APIKEY")

async def get_vector(text: str):
    print(text)
    model = "text-embedding-ada-002"
    response = await openai_async.embeddings(
                                            openai.api_key,
                                            timeout=2,
                                            payload={"model": model,
                                                     "input": [text]},
                                        )
    return response.json()['data'][0]['embedding']


async def add_item(text: str):
    vector = await get_vector(text)
    metadata = {'text': text}
    print(vector, metadata)
    await index.insertItem({'vector': vector,
                            'metadata': metadata})

# Add items
await add_item('apple')
await add_item('oranges')
await add_item('red')
await add_item('blue')

Then query for items:

async def query(text: str):
    vector = await get_vector(text)
    results = await index.queryItems(vector, 3)
    if len(results) > 0:
        for result in results:
            print(f"[{result['score']}] {result['item']['metadata']['text']}")
    else:
        print("No results found.")

await query('green')
# [0.9036569942401076] blue
# [0.8758153664568566] red
# [0.8323828606103998] apple

await query('banana')
# [0.9033128691220631] apple
# [0.8493374123092652] oranges
# [0.8415324469533297] blue

Project details


Download files

Download the file for your platform.

Source Distribution

vectra_py-0.0.5.tar.gz (9.6 kB view details)

Uploaded Source

Built Distribution


vectra_py-0.0.5-py3-none-any.whl (8.3 kB view details)

Uploaded Python 3

File details

Details for the file vectra_py-0.0.5.tar.gz.

File metadata

  • Download URL: vectra_py-0.0.5.tar.gz
  • Upload date:
  • Size: 9.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.11

File hashes

Hashes for vectra_py-0.0.5.tar.gz
Algorithm Hash digest
SHA256 bb30302c70dd555b2c96806d9c7934eb731ac496d95d1cbdb39a54320351a9ac
MD5 c4b37de59f3da5248ef54d8348510636
BLAKE2b-256 bc7ff1089789c60210e941eab94a23c9752d4197ccab954383807f2ffa71111a


File details

Details for the file vectra_py-0.0.5-py3-none-any.whl.

File metadata

  • Download URL: vectra_py-0.0.5-py3-none-any.whl
  • Upload date:
  • Size: 8.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.11

File hashes

Hashes for vectra_py-0.0.5-py3-none-any.whl
Algorithm Hash digest
SHA256 a6ccc0d63e39c8d8c5677a2b0eb1df8446e1ca87c41af62fa997ddf18a6471a4
MD5 18a114e58ae987e566513703756a56fe
BLAKE2b-256 9c7bb78d443c4e14b980b9813188843e2fbdda54eabebf439363e78d0674288f

