DocArray

The data structure for unstructured multimodal data

DocArray is a library for nested, unstructured, multimodal data in transit, including text, image, audio, video, 3D mesh, etc. It allows deep-learning engineers to efficiently process, embed, search, recommend, store, and transfer multimodal data with a Pythonic API.

🚪 Door to the multimodal world: a super-expressive data structure for representing complex, mixed, and nested text, image, video, audio, and 3D mesh data. The foundational data structure of Jina, CLIP-as-service, DALL·E Flow, DiscoArt, etc.

🧑‍🔬 Data science powerhouse: greatly accelerates data scientists' work on embedding, k-NN matching, querying, visualizing, and evaluating via Torch/TensorFlow/ONNX/PaddlePaddle on CPU or GPU.

🚡 Data in transit: optimized for network communication, ready to wire at any time with fast and compressed serialization in Protobuf, bytes, base64, JSON, CSV, and DataFrame. Perfect for streaming and out-of-memory data.

🔎 One-stop k-NN: a unified and consistent API for nearest-neighbor search across mainstream vector databases, including Elasticsearch, Redis, AnnLite, Qdrant, and Weaviate.

👒 For modern apps: GraphQL support makes your server versatile on request and response; built-in data validation and JSON Schema (OpenAPI) help you build reliable web services.

🐍 Pythonic experience: as easy as a Python list. If you can Python, you can DocArray. Intuitive idioms and type annotation simplify the code you write.

🛸 IDE integration: pretty-printing and visualization in Jupyter notebooks and Google Colab; comprehensive autocomplete and type hints in PyCharm and VS Code.

Read more on why you should use DocArray and how it compares to alternatives.

DocArray was released under the open-source Apache License 2.0 in January 2022. It is currently a sandbox project under LF AI & Data Foundation.

Documentation

Install

Requires Python 3.7+

pip install docarray

or via Conda:

conda install -c conda-forge docarray

Commonly used features can be enabled via pip install "docarray[common]".
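A quick way to check that the install worked (docarray exposes its version string as docarray.__version__):

python -c "import docarray; print(docarray.__version__)"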

Get Started

DocArray consists of three simple concepts:

  • Document: a data structure for easily representing nested, unstructured data.
  • DocumentArray: a container for efficiently accessing, manipulating, and understanding multiple Documents.
  • Dataclass: a high-level API for intuitively representing multimodal data.
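For a quick taste before the full examples, here is a minimal sketch of the first two concepts, using the same 0.x API as the examples below:

from docarray import Document, DocumentArray

d = Document(text='hello, world')  # a single Document
da = DocumentArray([d, Document(text='goodbye, world')])  # a list-like container of Documents

print(len(da))  # 2
print(da.texts)  # bulk attribute access: ['hello, world', 'goodbye, world']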

Let's see DocArray in action with some examples.

Example 1: represent multimodal data in a dataclass

You can easily represent the following news article card with docarray.dataclass and type annotations:

(An example multimodal document: a news article card with a banner image, headline, and metadata.)
from docarray import dataclass, Document
from docarray.typing import Image, Text, JSON


@dataclass
class WPArticle:
    banner: Image
    headline: Text
    meta: JSON


a = WPArticle(
    banner='https://.../cat-dog-flight.png',
    headline='Everything to know about flying with pets, ...',
    meta={
        'author': 'Nathan Diller',
        'Column': 'By the Way - A Post Travel Destination',
    },
)

d = Document(a)
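To inspect the nested structure this produces (each dataclass field becomes a sub-Document), you can print a summary; a small sketch:

d.summary()  # prints the Document tree: one chunk per dataclass field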

Example 2: text matching in 10 lines

Let's search for the top-5 most similar sentences to "she smiled too much" in "Pride and Prejudice":

from docarray import Document, DocumentArray

d = Document(uri='https://www.gutenberg.org/files/1342/1342-0.txt').load_uri_to_text()
da = DocumentArray(Document(text=s.strip()) for s in d.text.split('\n') if s.strip())
da.apply(Document.embed_feature_hashing, backend='process')

q = (
    Document(text='she smiled too much')
    .embed_feature_hashing()
    .match(da, metric='jaccard', use_scipy=True)
)

print(q.matches[:5, ('text', 'scores__jaccard__value')])
[['but she smiled too much.', 
  '_little_, she might have fancied too _much_.', 
  'She perfectly remembered everything that had passed in', 
  'tolerably detached tone. While she spoke, an involuntary glance', 
  'much as she chooses.”'], 
  [0.3333333333333333, 0.6666666666666666, 0.7, 0.7272727272727273, 0.75]]

Here the feature embedding is done by simple feature hashing and the distance metric is Jaccard distance. Have better embeddings? Of course you do! We look forward to seeing your results!
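If you do have a stronger text encoder, the same matching code works with dense vectors; a hypothetical sketch, where encode_fn is a placeholder for any model that returns one vector per text:

import numpy as np


def encode_fn(texts):
    # placeholder encoder: swap in a real model; returns one dense vector per text
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(texts), 128)).astype('float32')


da.embeddings = encode_fn(da.texts)  # bulk-assign embeddings
q.embedding = encode_fn([q.text])[0]
q.match(da, metric='cosine', limit=5)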

Example 3: external storage for out-of-memory data

When your data is too big, storing it in memory is not the best idea. DocArray supports multiple storage backends such as SQLite, Weaviate, Qdrant, and AnnLite. They're all unified under the exact same user experience and API. Take the above snippet: you only need to change one line to use SQLite:

da = DocumentArray(
    (Document(text=s.strip()) for s in d.text.split('\n') if s.strip()),
    storage='sqlite',
)

The code snippet can still run as-is. All APIs remain the same, and the subsequent code then runs in an "in-database" manner.

Besides saving memory, you can leverage storage backends for persistence and faster retrieval (e.g. on nearest-neighbor queries).
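Backends also accept a config dict for persistence settings; a sketch for SQLite, assuming the 0.x config keys connection and table_name:

da = DocumentArray(
    (Document(text=s.strip()) for s in d.text.split('\n') if s.strip()),
    storage='sqlite',
    config={'connection': 'example.db', 'table_name': 'sentences'},  # assumed config keys
)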

Example 4: complete workflow of visual search

Let's use DocArray and the Totally Looks Like dataset to build a simple meme image search. The dataset contains 6,016 image pairs stored in /left and /right. Images that share the same filename appear similar to the human eye. For example:

(Example image pairs: left/00018.jpg with right/00018.jpg, and left/00131.jpg with right/00131.jpg.)

Given an image from /left, can we find its most similar image in /right, without looking at the filename?

Load images

First we load the images. You can download the dataset from the Totally Looks Like website, unzip it, and load the images as below:

from docarray import DocumentArray

left_da = DocumentArray.from_files('left/*.jpg')[:1000]

Or you can simply pull it from Jina AI Cloud:

left_da = DocumentArray.pull('jina-ai/demo-leftda', show_progress=True)[:1000]

Note: If you have more than 15GB of RAM and want to use the whole dataset instead of just the first 1,000 images, remove [:1000] when loading the files into the DocumentArrays left_da and right_da.

You'll see a progress bar indicating how much has been downloaded.

To get a feel for the data, we can plot it in one sprite image. You need matplotlib and torch installed to run this snippet:

left_da.plot_image_sprites()

(Image sprite plot of the Totally Looks Like dataset.)

Apply preprocessing

Let's do some standard computer vision pre-processing:

from docarray import Document


def preproc(d: Document):
    return (
        d.load_uri_to_image_tensor()  # load the image from its URI
        .set_image_tensor_normalization()  # normalize color
        .set_image_tensor_channel_axis(-1, 0)  # switch color axis for the PyTorch model later
    )


left_da.apply(preproc)

Did I mention apply works in parallel?
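For example, a sketch of a parallel run, assuming apply's backend and num_worker parameters from the 0.x API:

left_da.apply(preproc, backend='process', num_worker=4)  # process-based parallelism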

Embed images

Now let's convert images into embeddings using a pretrained ResNet50:

import torchvision

model = torchvision.models.resnet50(pretrained=True)  # load ResNet50
left_da.embed(model, device='cuda')  # embed via GPU to speed up

This step takes ~30 seconds on GPU. Besides PyTorch, you can also use TensorFlow, PaddlePaddle, or ONNX models in .embed(...).
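If you don't have a GPU, a sketch of a CPU fallback (batch_size is assumed to be an .embed(...) parameter in the 0.x API):

import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
left_da.embed(model, device=device, batch_size=64)  # smaller batches to fit in memory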

Visualize embeddings

You can visualize the embeddings via t-SNE in an interactive embedding projector. You will need pydantic, uvicorn, and FastAPI installed to run this snippet:

left_da.plot_embeddings(image_sprites=True)

(Interactive embedding projector visualizing the embeddings via t-SNE.)

Fun is fun, but our goal is to match left images against right images, and so far we have only handled the left. Let's repeat the same procedure for the right:

Pull from Jina AI Cloud:

right_da = (
    DocumentArray.pull('jina-ai/demo-rightda', show_progress=True)[:1000]
    .apply(preproc)
    .embed(model, device='cuda')
)

Or download, unzip, and load the images from local files:

right_da = (
    DocumentArray.from_files('right/*.jpg')[:1000]
    .apply(preproc)
    .embed(model, device='cuda')
)

Match nearest neighbors

Now we can match the left to the right and take the top-9 results.

left_da.match(right_da, limit=9)

Let's inspect what's inside left_da's matches now:

for m in left_da[0].matches:
    print(left_da[0].uri, m.uri, m.scores['cosine'].value)
left/02262.jpg right/03459.jpg 0.21102
left/02262.jpg right/02964.jpg 0.13871843
left/02262.jpg right/02103.jpg 0.18265384
left/02262.jpg right/04520.jpg 0.16477376
...

Or shorten the loop to a one-liner using the element and attribute selector:

print(left_da['@m', ('uri', 'scores__cosine__value')])

Better yet, let's see it:

(
    DocumentArray(left_da[8].matches, copy=True)
    .apply(
        lambda d: d.set_image_tensor_channel_axis(
            0, -1
        ).set_image_tensor_inv_normalization()
    )
    .plot_image_sprites()
)

(Visualizing the top-9 matches using the DocArray API.)

Here we reversed the preprocessing steps (i.e. switching axis and normalizing) on the copied matches, so you can visualize them using image sprites.

Quantitative evaluation

Serious as you are, visual inspection is surely not enough. Let's calculate the recall@K. First we construct the groundtruth matches:

groundtruth = DocumentArray(
    Document(uri=d.uri, matches=[Document(uri=d.uri.replace('left', 'right'))])
    for d in left_da
)

Here we created a new DocumentArray with real matches by simply replacing the filename, e.g. left/00001.jpg to right/00001.jpg. That's all we need: if the predicted match has a uri identical to the groundtruth match, it is correct.

Now let's check the recall@k for k from 1 to 5 over the full dataset:

for k in range(1, 6):
    print(
        f'recall@{k}',
        left_da.evaluate(
            groundtruth, hash_fn=lambda d: d.uri, metric='recall_at_k', k=k, max_rel=1
        ),
    )
recall@1 0.02726063829787234
recall@2 0.03873005319148936
recall@3 0.04670877659574468
recall@4 0.052194148936170214
recall@5 0.0573470744680851

You can also use other metrics such as precision_at_k, ndcg_at_k, and hit_at_k.
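For instance, a sketch of swapping in precision@k, reusing the same call shape as above:

for k in range(1, 6):
    print(
        f'precision@{k}',
        left_da.evaluate(
            groundtruth, hash_fn=lambda d: d.uri, metric='precision_at_k', k=k, max_rel=1
        ),
    )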

If you think a pretrained ResNet50 is good enough, let me tell you that with Finetuner you can do much better with just another ten lines of code.

Save results

You can save a DocumentArray to binary, JSON, dict, DataFrame, CSV or Protobuf message with/without compression. In its simplest form:

left_da.save('left_da.bin')

To reuse that DocumentArray's data, use left_da = DocumentArray.load('left_da.bin').
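Other formats follow the same pattern; a sketch assuming the 0.x serialization helpers to_json, to_dataframe, and save_binary:

left_da.save_binary('left_da.bin.gz', compress='gzip')  # compressed binary
json_str = left_da.to_json()  # JSON string
df = left_da.to_dataframe()  # pandas DataFrame, one row per Document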

If you want to transfer a DocumentArray from one machine to another or share it with your colleagues, you can do:

left_da.push('my_shared_da')

Now anyone who knows the token my_shared_da can pull and work on it.

left_da = DocumentArray.pull('<username>/my_shared_da')

Intrigued? That's only scratching the surface of what DocArray is capable of. Read our docs to learn more.

Support

  • Join our Slack community and chat with other community members about ideas.

DocArray is a trademark of LF AI Projects, LLC
