Wrangle unstructured AI data at scale
DataChain is a Python-based AI-data warehouse for transforming and analyzing unstructured data like images, audio, videos, text and PDFs. It integrates with external storage (e.g., S3) to process data efficiently without data duplication and manages metadata in an internal database for easy and efficient querying.
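For example, a minimal sketch (using the public demo bucket from the examples below) that indexes files in cloud storage while storing only metadata locally:

from datachain import DataChain

# Index a cloud bucket; files stay in place, only metadata is recorded
chain = DataChain.from_storage("gs://datachain-demo/dogs-and-cats/")
print(chain.count())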
Use Cases
- Multimodal Dataset Preparation and Curation: ideal for organizing and refining data in pre-training, fine-tuning, or LLM evaluation stages.
- GenAI Data Analytics: enables advanced analytics over multimodal data and ad-hoc analytics using LLMs.
Key Features
- 📂 Multimodal Dataset Versioning.
Version unstructured data without creating redundant copies, using references to S3, GCP, Azure, and local file systems.
Multimodal data support: images, video, text, PDFs, JSON, CSV, Parquet, etc.
Unite files and metadata into persistent, versioned, columnar datasets.
- 🐍 Python-friendly.
Operate on Python objects and object fields: float scores, strings, matrices, LLM response objects.
Run Python code on large-scale, terabyte-sized datasets with built-in parallelization and memory-efficient computing; no SQL or Spark required.
- 🧠 Data Enrichment and Processing.
Generate metadata using local AI models and LLM APIs.
Filter, join, and group datasets by metadata. Search by vector embeddings.
High-performance vectorized operations on Python objects: sum, count, avg, etc.
Pass datasets to PyTorch and TensorFlow, or export them back into storage.
Quick Start
$ pip install datachain
Selecting files using JSON metadata
A storage location contains images of cats and dogs (dog.1048.jpg, cat.1009.jpg), annotated with ground truth and model inferences in the "json-pairs" format, where each image has a matching JSON file like cat.1009.json:
{
    "class": "cat", "id": "1009", "num_annotators": 8,
    "inference": {"class": "dog", "confidence": 0.68}
}
Here is an example of downloading only the "high-confidence cat" images, using the JSON metadata:
from datachain import Column, DataChain

# Load JSON annotations and image files from the same bucket
meta = DataChain.from_json("gs://datachain-demo/dogs-and-cats/*json", object_name="meta")
images = DataChain.from_storage("gs://datachain-demo/dogs-and-cats/*jpg")

# Derive an "id" signal from the file name, then join images with annotations
images_id = images.map(id=lambda file: file.path.split('.')[-2])
annotated = images_id.merge(meta, on="id", right_on="meta.id")

# Keep images confidently classified as cats ("class" is a Python keyword,
# so the JSON field is addressed as "class_"), then download them
likely_cats = annotated.filter((Column("meta.inference.confidence") > 0.93)
                               & (Column("meta.inference.class_") == "cat"))
likely_cats.export_files("high-confidence-cats/", signal="file")
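Instead of (or in addition to) exporting the files, the filtered result can be persisted as a versioned dataset; a short sketch (the dataset name here is arbitrary):

# Save the filtered chain as a named, versioned dataset
likely_cats.save("likely-cats")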
Data curation with a local AI model
Batch inference with a simple sentiment model using the transformers library:
$ pip install transformers
The code below downloads files from the cloud and applies a user-defined function to each of them. All files in which a positive sentiment is detected are then copied to the local directory.
from transformers import pipeline
from datachain import DataChain, Column

classifier = pipeline("sentiment-analysis", device="cpu",
                      model="distilbert/distilbert-base-uncased-finetuned-sst-2-english")

def is_positive_dialogue_ending(file) -> bool:
    # Classify the sentiment of the last 512 characters of the dialogue
    dialogue_ending = file.read()[-512:]
    return classifier(dialogue_ending)[0]["label"] == "POSITIVE"

chain = (
    DataChain.from_storage("gs://datachain-demo/chatbot-KiT/",
                           object_name="file", type="text")
    .settings(parallel=8, cache=True)   # 8 parallel workers, cache downloads locally
    .map(is_positive=is_positive_dialogue_ending)
    .save("file_response")              # persist results as a versioned dataset
)

positive_chain = chain.filter(Column("is_positive") == True)
positive_chain.export_files("./output")

print(f"{positive_chain.count()} files were exported")
13 files were exported
$ ls output/datachain-demo/chatbot-KiT/
15.txt 20.txt 24.txt 27.txt 28.txt 29.txt 33.txt 37.txt 38.txt 43.txt ...
$ ls output/datachain-demo/chatbot-KiT/ | wc -l
13
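Because the chain was saved as the "file_response" dataset, the results can be reloaded later and re-filtered without rerunning the model; a sketch:

from datachain import Column, DataChain

# Reload the saved dataset and query its stored signals
chain = DataChain.from_dataset("file_response")
negative = chain.filter(Column("is_positive") == False)
print(f"{negative.count()} dialogues ended negatively")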
LLM judging chatbots
LLMs can work as universal classifiers. In the example below, we use a free Mistral API to judge publicly available chatbot dialogues. Get a free Mistral API key at https://console.mistral.ai
$ pip install "mistralai>=1.0.0"
$ export MISTRAL_API_KEY=_your_key_
DataChain can parallelize API calls; the free Mistral tier supports up to 4 concurrent requests.
from mistralai import Mistral
from datachain import File, DataChain, Column

PROMPT = "Was this dialog successful? Answer in a single word: Success or Failure."

def eval_dialogue(file: File) -> bool:
    # The client reads MISTRAL_API_KEY from the environment
    client = Mistral()
    response = client.chat.complete(
        model="open-mixtral-8x22b",
        messages=[{"role": "system", "content": PROMPT},
                  {"role": "user", "content": file.read()}])
    result = response.choices[0].message.content
    return result.lower().startswith("success")

chain = (
    DataChain.from_storage("gs://datachain-demo/chatbot-KiT/", object_name="file")
    .settings(parallel=4, cache=True)   # match the free tier's concurrency limit
    .map(is_success=eval_dialogue)
    .save("mistral_files")
)

successful_chain = chain.filter(Column("is_success") == True)
successful_chain.export_files("./output_mistral")

print(f"{successful_chain.count()} files were exported")
With the prompt above, the Mistral model judges 31 of the 50 dialogues to be successful:
$ ls output_mistral/datachain-demo/chatbot-KiT/
1.txt 15.txt 18.txt 2.txt 22.txt 25.txt 28.txt 33.txt 37.txt 4.txt 41.txt ...
$ ls output_mistral/datachain-demo/chatbot-KiT/ | wc -l
31
Serializing Python objects
LLM responses may contain valuable information for analytics, such as the number of tokens used or model performance parameters.
Instead of extracting this information from the Mistral response data structure (class ChatCompletionResponse), DataChain can serialize the entire LLM response to the internal DB:
from mistralai import Mistral
from mistralai.models import ChatCompletionResponse
from datachain import File, DataChain, Column

PROMPT = "Was this dialog successful? Answer in a single word: Success or Failure."

def eval_dialog(file: File) -> ChatCompletionResponse:
    client = Mistral()
    return client.chat.complete(
        model="open-mixtral-8x22b",
        messages=[{"role": "system", "content": PROMPT},
                  {"role": "user", "content": file.read()}])

chain = (
    DataChain.from_storage("gs://datachain-demo/chatbot-KiT/", object_name="file")
    .settings(parallel=4, cache=True)
    .map(response=eval_dialog)   # serialize the full ChatCompletionResponse
    .map(status=lambda response: response.choices[0].message.content.lower()[:7])
    .save("response")
)

chain.select("file.name", "status", "response.usage").show(5)

success_rate = chain.filter(Column("status") == "success").count() / chain.count()
print(f"{100*success_rate:.1f}% dialogs were successful")
Output:
     file   status      response     response          response
     name                  usage        usage             usage
                   prompt_tokens total_tokens completion_tokens
0   1.txt  success           547          548                 1
1  10.txt  failure          3576         3578                 2
2  11.txt  failure           626          628                 2
3  12.txt  failure          1144         1182                38
4  13.txt  success          1100         1101                 1

[Limited by 5 rows]
64.0% dialogs were successful
Iterating over Python data structures
In the previous examples, datasets were saved in the embedded database (SQLite, in the .datachain folder of the working directory). These datasets are automatically versioned and can be accessed using DataChain.from_dataset("dataset_name").
Here is how to retrieve a saved dataset and iterate over the objects:
chain = DataChain.from_dataset("response")

# Iterating one by one: supports out-of-memory workflows
for file, response in chain.limit(5).collect("file", "response"):
    # Verify the types of the collected Python objects
    assert isinstance(response, ChatCompletionResponse)

    status = response.choices[0].message.content[:7]
    tokens = response.usage.total_tokens
    print(f"{file.get_uri()}: {status}, file size: {file.size}, tokens: {tokens}")
Output:
gs://datachain-demo/chatbot-KiT/1.txt: Success, file size: 1776, tokens: 548
gs://datachain-demo/chatbot-KiT/10.txt: Failure, file size: 11576, tokens: 3578
gs://datachain-demo/chatbot-KiT/11.txt: Failure, file size: 2045, tokens: 628
gs://datachain-demo/chatbot-KiT/12.txt: Failure, file size: 3833, tokens: 1207
gs://datachain-demo/chatbot-KiT/13.txt: Success, file size: 3657, tokens: 1101
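Collected results can also be materialized for ad-hoc analysis; a short sketch, assuming the chain can be converted to a pandas DataFrame via to_pandas():

# Materialize a small sample as a DataFrame with flattened signal columns
df = chain.limit(5).to_pandas()
print(df.head())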
Vectorized analytics over Python objects
Some operations can run inside the DB without deserialization. For instance, let's calculate the total cost of using the LLM APIs, assuming a Mixtral call costs $2 per 1M input tokens and $6 per 1M output tokens:
chain = DataChain.from_dataset("response")

# $2 per 1M input tokens, $6 per 1M output tokens
cost = chain.sum("response.usage.prompt_tokens") * 0.000002 \
     + chain.sum("response.usage.completion_tokens") * 0.000006
print(f"Spent ${cost:.2f} on {chain.count()} calls")
Output:
Spent $0.08 on 50 calls
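Other aggregations follow the same pattern; for example, a sketch assuming avg (listed under Key Features) accepts the same signal path as sum:

avg_tokens = chain.avg("response.usage.total_tokens")
print(f"Average of {avg_tokens:.0f} tokens per call")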
PyTorch data loader
Chain results can be exported or passed directly to a PyTorch data loader. For example, to pass images along with labels derived from the file names, the following code will do it:
from torch.utils.data import DataLoader
from transformers import CLIPProcessor
from datachain import DataChain

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

chain = (
    DataChain.from_storage("gs://datachain-demo/dogs-and-cats/", type="image")
    # Label is the first part of the file name: "cat.1009.jpg" -> "cat"
    .map(label=lambda name: name.split(".")[0], params=["file.name"])
    .select("file", "label").to_pytorch(
        transform=processor.image_processor,
        tokenizer=processor.tokenizer,
    )
)
loader = DataLoader(chain, batch_size=1)
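The result can be consumed like any other PyTorch DataLoader; a minimal sketch that just pulls the first batch:

# Inspect one batch produced by the chain
for batch in loader:
    print(type(batch))
    break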
DataChain Studio Platform
DataChain Studio is a proprietary solution for teams that offers:
- Centralized dataset registry to manage data, code, and dependencies in one place.
- Data Lineage for data sources as well as derivative datasets.
- UI for Multimodal Data like images, videos, and PDFs.
- Scalable Compute to handle large datasets (100M+ files) and in-house AI model inference.
- Access control including SSO and team-based collaboration.
Tutorials
- Multimodal (try in Colab)
- LLM evaluations (try in Colab)
- Reading JSON metadata (try in Colab)
Contributions
Contributions are very welcome. To learn more, see the Contributor Guide.
Community and Support
File an issue if you encounter any problems