
LlamaIndex x LanceDB MultiModal AI Lakehouse


This package integrates the multi-modal functionalities of LanceDB with LlamaIndex.

To install it, you can run:

pip install llama-index-indices-managed-lancedb

You can then use it in your scripts as an index, for text or images, and also as the base for a retriever or a query engine.

Text

You can use LanceDB with text in the following way:

from llama_index.indices.managed.lancedb import LanceDBMultiModalIndex

# use it with a local database
local_index = LanceDBMultiModalIndex(
    uri="lancedb/data",
    text_embedding_model="sentence-transformers",
    embedding_model_kwargs={"name": "all-MiniLM-L6-v2"},
    table_name="documents",
)
# use a remote connection
remote_index = LanceDBMultiModalIndex(
    uri="db://***",
    region="us-east-1",
    api_key="***",
    text_embedding_model="sentence-transformers",
    embedding_model_kwargs={"name": "all-MiniLM-L6-v2"},
    table_name="remote_documents",
)


# Once you have instantiated the index with the primary constructor (__init__),
# you always have to connect it:
## 1. If you set use_async = True:
async def connect_lancedb_index():
    await local_index.acreate_index()


## 2. If you set use_async = False (the default):
local_index.create_index()

# load it from documents (async constructor)
from llama_index.core.schema import Document

document_data = [
    Document(text="This is an example document"),
    Document(text="This is an example document 1"),
]
documents_index = await LanceDBMultiModalIndex.from_documents(
    documents=document_data,
    uri="lancedb/documents",
    text_embedding_model="sentence-transformers",
    embedding_model_kwargs={"name": "all-MiniLM-L6-v2"},
    table_name="from_documents",
    indexing="NO_INDEXING",
    use_async=True,
)
# load it from other data types, e.g. PyArrow Tables, pandas/Polars DataFrames, or lists of dictionaries (async constructor)
import pandas as pd
import numpy as np

data = pd.DataFrame(
    {
        "text": ["## Hello world", "This is a test"],
        "id": ["1", "2"],
        "metadata": ['{"type": "text/markdown"}', '{"type": "text/plain"}'],
        "vector": [
            np.random.random(384).tolist(),
            np.random.random(384).tolist(),
        ],
    }
)
data_index = await LanceDBMultiModalIndex.from_data(
    data=data,
    uri="lancedb/documents",
    text_embedding_model="sentence-transformers",
    embedding_model_kwargs={"name": "all-MiniLM-L6-v2"},
    table_name="from_data",
    indexing="HNSW_PQ",
    use_async=True,
)

Three things to notice here:

  1. You can choose your text embedding model from among those supported by LanceDB.
  2. The schema for a text table is defined as follows:
class TextSchema(LanceModel):
    id: str
    metadata: str  # deserializable
    text: str
    vector: List[List[float]]

In this schema, the text field is the source from which the embedding model produces a vector, and the vector field must match the dimensionality of the vectors the embedding model emits.

  3. You can decide whether to index your table, and how. Take a look at the LanceDB docs to see which indexing strategies are available.
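For instance, the all-MiniLM-L6-v2 model used in the examples above produces 384-dimensional embeddings, which is why the precomputed vectors in the DataFrame example have length 384. A minimal, self-contained sanity check for precomputed vectors (a sketch; the 384 figure is the documented output size of that model):

```python
import numpy as np

# all-MiniLM-L6-v2 emits 384-dimensional embeddings, so any
# precomputed vectors loaded into the table must have length 384.
EXPECTED_DIM = 384

vectors = [
    np.random.random(EXPECTED_DIM).tolist(),
    np.random.random(EXPECTED_DIM).tolist(),
]

# Verify every vector matches the embedding model's dimensionality
# before handing the data to the index.
assert all(len(v) == EXPECTED_DIM for v in vectors)
```

If the dimensions disagree, the vector column will not match the table schema and ingestion will fail.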

[!IMPORTANT]

In the following examples, we will use only the sync methods. It is nevertheless important to stress that, if you set use_async = True, you need to use the corresponding async methods.
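As a toy illustration of this sync/async split (a hypothetical stand-in class, not the real index; the only assumption, based on acreate_index above, is that async variants carry an "a" prefix):

```python
import asyncio


class TinyIndex:
    """Hypothetical stand-in mimicking the use_async convention described above."""

    def __init__(self, use_async: bool):
        self.use_async = use_async
        self.connected = False

    def create_index(self):
        # Sync method: only valid when use_async=False.
        if self.use_async:
            raise RuntimeError("use_async=True: call acreate_index() instead")
        self.connected = True

    async def acreate_index(self):
        # Async method: only valid when use_async=True.
        if not self.use_async:
            raise RuntimeError("use_async=False: call create_index() instead")
        self.connected = True


# Sync flavor
sync_index = TinyIndex(use_async=False)
sync_index.create_index()

# Async flavor
async_index = TinyIndex(use_async=True)
asyncio.run(async_index.acreate_index())
```

The real index enforces the same rule: mixing sync calls with use_async = True (or vice versa) is an error.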

Once you have instantiated and connected the LanceDB index, you can:

Add or delete nodes

local_index.insert_nodes(
    documents=[
        Document(text="Hello world", id_="1"),
        Document(text="How are you?", id_="2"),
    ],
)

# add from data
local_index.insert_data(
    data=pd.DataFrame(
        {
            "text": ["Hello world", "How are you?"],
            "id": ["1", "2"],
            "metadata": [
                '{"type": "text/markdown"}',
                '{"type": "text/plain"}',
            ],
        }
    ),
)

local_index.delete_nodes(["1", "2"])

Retrieve

retriever = local_index.as_retriever()
nodes = retriever.retrieve(query_str="Hello world!")
print(nodes)

Query

query_engine = local_index.as_query_engine()
response = query_engine.query(query_str="Hello world!")
print(response.response)

Images

images_index = LanceDBMultiModalIndex(
    uri="lancedb/images",
    multi_modal_embedding_model="open-clip",
    table_name="images",
)

# initialize from documents
from llama_index.core.schema import ImageDocument

images_index = await LanceDBMultiModalIndex.from_documents(
    documents=[
        ImageDocument(
            image_url="http://farm1.staticflickr.com/53/167798175_7c7845bbbd_z.jpg",
            metadata={"label": "cat"},
        ),
        ImageDocument(
            image_url="http://farm1.staticflickr.com/134/332220238_da527d8140_z.jpg",
            metadata={"label": "cat"},
        ),
        ImageDocument(
            image_url="http://farm9.staticflickr.com/8387/8602747737_2e5c2a45d4_z.jpg",
            metadata={"label": "dog"},
        ),
    ],
    uri="lancedb/images",
    multi_modal_embedding_model="open-clip",
    table_name="images",
)

# initialize from data
labels = ["dog", "horse", "horse"]
uris = [
    "http://farm5.staticflickr.com/4092/5017326486_1f46057f5f_z.jpg",
    "http://farm9.staticflickr.com/8216/8434969557_d37882c42d_z.jpg",
    "http://farm6.staticflickr.com/5142/5835678453_4f3a4edb45_z.jpg",
]
ids = [
    "1",
    "2",
    "3",
]
import requests

metadata = [
    '{"mimetype": "image/jpeg"}',
    '{"mimetype": "image/jpeg"}',
    '{"mimetype": "image/jpeg"}',
]
image_bytes = [requests.get(uri).content for uri in uris]

data = pd.DataFrame(
    {
        "id": ids,
        "label": labels,
        "image_uri": uris,
        "image_bytes": image_bytes,
        "metadata": metadata,
    }
)

images_index = await LanceDBMultiModalIndex.from_data(
    data=data,
    uri="lancedb/images",
    multi_modal_embedding_model="open-clip",
    table_name="images",
)

As before, you can choose your multi-modal embedding model and indexing strategy, but this time the schema is slightly different:

class MultiModalSchema(LanceModel):
    id: str
    metadata: str  # deserializable
    label: str
    image_uri: str  # image uri as the source
    image_bytes: bytes  # image bytes as the source
    vector: List[List[float]]  # vector column
    vec_from_bytes: List[
        List[float]
    ]  # Another vector column (uses only bytes as source)

In this case, the source fields for the embedding model are image_uri and image_bytes.
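In both schemas, the metadata column is a plain string marked "deserializable": the examples above store JSON-encoded metadata such as '{"mimetype": "image/jpeg"}'. A minimal sketch of that round trip, using only the standard library:

```python
import json

# Metadata is stored in the table as a JSON-encoded string...
row_metadata = '{"mimetype": "image/jpeg", "label": "dog"}'

# ...and deserialized back into a dict when read.
parsed = json.loads(row_metadata)
assert parsed["mimetype"] == "image/jpeg"

# Serializing again produces an equivalent string for writing new rows.
assert json.loads(json.dumps(parsed)) == parsed
```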

You can use the index just as with text, but with one key difference when retrieving/querying: you use images!

query_engine = images_index.as_query_engine()
# query_image can be a URL, an ImageBlock, an ImageDocument, or a PIL Image
response = query_engine.query(
    query_image="http://farm6.staticflickr.com/5142/5835678453_4f3a4edb45_z.jpg"
)
# you can also use an image path
response = query_engine.query(
    query_image_path="/Users/user/images/hello_world.jpg"
)

Extra features

  1. You can initialize the index from an existing table by setting table_exists = True in the constructor methods.
  2. Some methods (such as insert or delete_ref_doc_id) work on only one node at a time.
  3. If you set use_async = True you cannot use the synchronous methods, and vice versa!
