
Wrangle unstructured AI data at scale

Project description


AI 🔗 DataChain

DataChain is an open-source Python data processing library for wrangling unstructured AI data at scale.

DataChain enables multimodal API calls and local AI inferences to run in parallel over many samples as chained operations. The resulting datasets can be saved, versioned, and sent directly to PyTorch and TensorFlow for training. DataChain can persist features of Python objects returned by AI models and enables vectorized analytical operations over them.

Typical use cases are data curation, LLM analytics and validation, image segmentation, pose detection, and GenAI alignment. DataChain is especially helpful when batch operations can be optimized, for instance when synchronous API calls can be parallelized or when an LLM API offers batch processing.

$ pip install datachain

Operation basics

DataChain is built by composing wrangling operations.

For example, let us consider a dataset from the Karlsruhe Institute of Technology detailing dialogs between users and customer service chatbots. We can use a chain to read the data from the cloud, map it onto parallel API calls for LLM evaluation, and organize the output into a dataset:

# pip install mistralai
# this example requires a free Mistral API key, get yours at https://console.mistral.ai
# add the key to your shell environment: $ export MISTRAL_API_KEY=<your key>

import os

from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

from datachain.lib.dc import DataChain, Column

PROMPT = "Was this bot dialog successful? Describe the 'result' as 'Yes' or 'No' in a short JSON"

model = "mistral-large-latest"
api_key = os.environ["MISTRAL_API_KEY"]

chain = (
    DataChain.from_storage("gs://datachain-demo/chatbot-KiT/")
    .limit(5)
    .settings(cache=True, parallel=5)
    .map(
        mistral_response=lambda file: MistralClient(api_key=api_key)
        .chat(
            model=model,
            response_format={"type": "json_object"},
            messages=[
                ChatMessage(role="user", content=f"{PROMPT}: {file.get_value()}")
            ],
        )
        .choices[0]
        .message.content,
    )
    .save()
)

try:
    print(chain.select("mistral_response").results())
except Exception as e:
    print(f"do you have the right Mistral API key? {e}")
[('{"result": "Yes"}',), ('{"result": "No"}',), ... , ('{"result": "Yes"}',)]

Now we have parallel-processed an LLM API-based query over cloud data and persisted the results.

Vectorized analytics

DataChain internally represents datasets as tables, so analytical queries on the chain are automatically vectorized:

failed_dialogs = chain.filter(Column("mistral_response") == '{"result": "No"}')
failure_rate = failed_dialogs.count() / chain.count()
print(f"Chatbot dialog failure rate: {100*failure_rate:.2f}%")
Chatbot dialog failure rate: 40.00%

Note that DataChain represents file samples as pointers into their respective storage locations. This means a newly created dataset version does not duplicate files in storage, and storage remains the single source of truth for the original samples.
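
To make this concrete, we can inspect what such a pointer carries without downloading anything. A minimal sketch, assuming the nested "file.source" and "file.name" signal names (the "file.name" key also appears in the merge example further below):

from datachain.lib.dc import DataChain

chain = DataChain.from_storage("gs://datachain-demo/chatbot-KiT/")
pointers = chain.select("file.source", "file.name").results()
print(pointers[0])  # a storage location and a file name, not a copy of the file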

Handling Python objects

In addition to storing primitive Python data types, a chain can also work with data models.

For example, instead of collecting just a text response from Mistral API, we might be interested in more fields of the Mistral response object. For this task, we can define a Pydantic-like model and populate it from the API replies:

import os

from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

from datachain.lib.dc import DataChain
from datachain.lib.feature import Feature


PROMPT = (
    "Was this dialog successful? Describe the 'result' as 'Yes' or 'No' in a short JSON"
)

model = "mistral-large-latest"
api_key = os.environ["MISTRAL_API_KEY"]


### Define the data model ###
class Usage(Feature):
    prompt_tokens: int = 0
    completion_tokens: int = 0


class MyChatMessage(Feature):
    role: str = ""
    content: str = ""


class CompletionResponseChoice(Feature):
    message: MyChatMessage = MyChatMessage()


class MistralModel(Feature):
    id: str = ""
    choices: list[CompletionResponseChoice]
    usage: Usage = Usage()


### Populate model instances ###
chain = (
    DataChain.from_storage("gs://datachain-demo/chatbot-KiT/")
    .limit(5)
    .settings(cache=True, parallel=5)
    .map(
        mistral_response=lambda file: MistralModel(
            **MistralClient(api_key=api_key)
            .chat(
                model=model,
                response_format={"type": "json_object"},
                messages=[
                    ChatMessage(role="user", content=f"{PROMPT}: {file.get_value()}")
                ],
            )
            .dict()
        ),
        output=MistralModel,
    )
    .save("dialog-eval")
)

After the chain execution, we can collect the objects:

responses = chain.collect_one("mistral_response")
for obj in responses:
    assert isinstance(obj, MistralModel)
    print(obj.dict())
{'choices': [{'message': {'role': 'assistant', 'content': '{"result": "Yes"}'}}], 'usage': {'prompt_tokens': 610, 'completion_tokens': 6}}
{'choices': [{'message': {'role': 'assistant', 'content': '{"result": "No"}'}}], 'usage': {'prompt_tokens': 3983, 'completion_tokens': 6}}
{'choices': [{'message': {'role': 'assistant', 'content': '{"result": "Yes"}'}}], 'usage': {'prompt_tokens': 706, 'completion_tokens': 6}}
{'choices': [{'message': {'role': 'assistant', 'content': '{"result": "No"}'}}], 'usage': {'prompt_tokens': 1250, 'completion_tokens': 6}}
{'choices': [{'message': {'role': 'assistant', 'content': '{"result": "Yes"}'}}], 'usage': {'prompt_tokens': 1217, 'completion_tokens': 6}}
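
Because these fields now live in the dataset as columns, they are open to the same vectorized queries as before. A hedged sketch, assuming nested signals can be addressed with a dotted name like "mistral_response.usage.prompt_tokens":

from datachain.lib.dc import Column

# the 1000-token threshold is arbitrary, for illustration only
long_prompts = chain.filter(Column("mistral_response.usage.prompt_tokens") > 1000)
print(long_prompts.count())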

Dataset persistence

The “save” operation makes the chain's dataset persistent in the current working directory of the query. A hidden folder .datachain/ holds the records. A persistent dataset can be accessed later to start a derivative chain:

DataChain.from_dataset("dialog-eval").limit(2).save("dialog-eval")

Persistent datasets are immutable and automatically versioned. Versions can be listed from shell:

$ datachain ls-datasets

dialog-eval (v1)
dialog-eval (v2)

By default, loading a persistent dataset fetches the latest version, but another version can be requested:

ds = DataChain.from_dataset("dialog-eval", version=1)

Chain optimization and execution

DataChain avoids redundant operations. Execution is triggered only when a downstream operation requests the processed results. However, it would be inefficient to re-run, say, LLM queries every time you just want to collect several objects.

The “save” operation pins execution results and automatically refers to them whenever downstream functions ask for data. Saving without an explicit name generates an auto-named dataset, which serves the same purpose.
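
As a minimal sketch of this behavior (the "length" signal and its output type are illustrative, not part of the examples above):

from datachain.lib.dc import DataChain

chain = DataChain.from_storage("gs://datachain-demo/chatbot-KiT/")  # lazy: nothing runs yet
chain = chain.map(length=lambda file: len(file.get_value()), output=int)  # still lazy
chain = chain.save()  # triggers execution once and pins the results
print(chain.count())  # answered from the saved dataset, no recompute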

Matching data with metadata

It is common for AI data to come with pre-computed metadata (annotations, classes, etc).

The DataChain library understands common metadata formats (JSON, CSV, and Parquet) and can unite data samples from storage with side-loaded metadata. The schema for the metadata can be set explicitly or inferred.

Here is an example of reading a CSV file where the schema is heuristically derived from the header:

from datachain.lib.dc import DataChain

csv_dataset = DataChain.from_csv("gs://datachain-demo/chatbot-csv/")

print(csv_dataset.to_pandas())

Reading metadata from JSON is a more complicated scenario, because a JSON-annotated dataset typically references data samples (e.g. images) in annotation arrays somewhere within the JSON files.

Here is an example from MS COCO “captions” JSON which employs separate sections for image meta and captions:

{
  "images": [
    {
      "license": 4,
      "file_name": "000000397133.jpg",
      "coco_url": "http://images.cocodataset.org/val2017/000000397133.jpg",
      "height": 427,
      "width": 640,
      "date_captured": "2013-11-14 17:02:52",
      "flickr_url": "http://farm7.staticflickr.com/6116/6255196340_da26cf2c9e_z.jpg",
      "id": 397133
    },
    ...
  ],
  "annotations": [
    {
      "image_id": "179765",
      "id": 38,
      "caption": "A black Honda motorcycle parked in front of a garage."
    },
    ...
  ],
  ...
}

To deal with this layout, we can take the following steps:

  1. Generate a dataset of raw image files from storage

  2. Generate a meta-information dataset from the JSON section “images”

  3. Join these datasets via the matching id keys

from datachain.lib.dc import DataChain

images = DataChain.from_storage("gs://datachain-demo/coco2017/images/val/")
meta = DataChain.from_json("gs://datachain-demo/coco2017/annotations_captions", jmespath="images")

images_with_meta = images.merge(meta, on="file.name", right_on="images.file_name")

print(images_with_meta.limit(1).results())
Processed: 5000 rows [00:00, 15481.66 rows/s]
Processed: 1 rows [00:00, 1291.75 rows/s]
Processed: 1 rows [00:00,  4.70 rows/s]
Generated: 5000 rows [00:00, 27128.67 rows/s]
[(1, 2336066478558845549, '', 0, 'coco2017/images/val', '000000000139.jpg', 'CNvXoemj8IYDEAE=', '1719096046021595', 1, datetime.datetime(2024, 6, 22, 22, 40, 46, 70000, tzinfo=datetime.timezone.utc), 161811, '', '', None, 'gs://datachain-demo', 'gs://datachain-demo', 'coco2017/images/val', '000000000139.jpg', 161811, '1719096046021595', 'CNvXoemj8IYDEAE=', 1, datetime.datetime(1970, 1, 1, 0, 0, tzinfo=datetime.timezone.utc), None, '', 4146, 6967063844996569113, 2, '000000000139.jpg', 'http://images.cocodataset.org/val2017/000000000139.jpg', 426, 640, '2013-11-21 01:34:01', 'http://farm9.staticflickr.com/8035/8024364858_9c41dc1666_z.jpg', 139)]
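
The same pattern extends to the captions themselves. A hedged sketch by analogy with the "images" section (the jmespath value and the resulting "annotations.*" column names are assumptions, not verified output):

captions = DataChain.from_json(
    "gs://datachain-demo/coco2017/annotations_captions", jmespath="annotations"
)
images_with_captions = images_with_meta.merge(
    captions, on="images.id", right_on="annotations.image_id"
)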

Passing data to training

Chain results can be exported or passed directly to a PyTorch data loader. For example, if we are interested in passing three columns to training, the following PyTorch code will do it:

# Excerpt: assumes `train_chain` is a chain holding these columns, and that
# `preprocess`, `model`, and the train() loop come from a CLIP-style setup.
import clip
import torch
from torch.utils.data import DataLoader

ds = train_chain.select("file", "caption_choices", "label_ind").to_pytorch(
    transform=preprocess,
    tokenizer=clip.tokenize,
)

loader = DataLoader(ds, batch_size=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
train(loader, model, optimizer)

Tutorials

Contributions

Contributions are very welcome. To learn more, see the Contributor Guide.

License

Distributed under the terms of the Apache 2.0 license, DataChain is free and open source software.

Issues

If you encounter any problems, please file an issue along with a detailed description.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

datachain-0.2.6.tar.gz (2.1 MB)

Uploaded Source

Built Distribution

datachain-0.2.6-py3-none-any.whl (196.6 kB)

Uploaded Python 3

File details

Details for the file datachain-0.2.6.tar.gz.

File metadata

  • Download URL: datachain-0.2.6.tar.gz
  • Upload date:
  • Size: 2.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.0 CPython/3.12.4

File hashes

Hashes for datachain-0.2.6.tar.gz
  • SHA256: dddb4fc9d5fef321547ebb508d113422e317c8e4705e51bec47036201d2261a0
  • MD5: f66117e2eda6d2b95907049d24e92db2
  • BLAKE2b-256: 2af6030ccf153c46e6e3eba30135553492fff078d83dd315966c9610cf28737c

See more details on using hashes here.

File details

Details for the file datachain-0.2.6-py3-none-any.whl.

File metadata

  • Download URL: datachain-0.2.6-py3-none-any.whl
  • Upload date:
  • Size: 196.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.0 CPython/3.12.4

File hashes

Hashes for datachain-0.2.6-py3-none-any.whl
  • SHA256: dafba307cd7106bba23875c8ce9fb3c2f3c74e8cdf2eeb78366bd586556f3c59
  • MD5: 2567a322f65b1b33baa8c05ed1566f0f
  • BLAKE2b-256: dcf0c8c27cf27c27925fb9dba9c340428dc1bb88c43b4b07c596f3029e9910c6

See more details on using hashes here.
