
Modern Data-Centric AI System for Large Language Models


DataFlow

Demo video: https://github.com/user-attachments/assets/3dadeeb0-7007-4cdf-b412-593af000020c

1 News

🎉 [2025-06-28] We’re excited to announce that DataFlow, our Data-centric AI system, is now released! Stay tuned for future updates.

2 Overview

DataFlow is a data preparation and training system designed to parse, generate, process, and evaluate high-quality data from noisy sources (PDFs, plain text, low-quality QA pairs), thereby improving the performance of large language models (LLMs) in specific domains through targeted training (pre-training, supervised fine-tuning, RL training) or through RAG over cleaned knowledge bases. DataFlow has been empirically validated to improve domain-oriented LLM performance in fields such as healthcare, finance, and law.

Specifically, we construct diverse operators leveraging rule-based methods, deep learning models, LLMs, and LLM APIs. These operators are systematically integrated into distinct pipelines, which collectively form the comprehensive DataFlow system. Additionally, we develop an intelligent DataFlow-agent capable of dynamically assembling new pipelines by recombining existing operators on demand.
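
To make the operator/pipeline idea concrete, here is a minimal, purely illustrative sketch of how operators might compose into a pipeline. The class names and method signatures below (Operator, Pipeline, run) are assumptions for illustration, not DataFlow's actual API; see the documentation for the real interfaces.

# A minimal, hypothetical sketch of operator composition.
# Names here are illustrative only, not DataFlow's actual API.
from abc import ABC, abstractmethod

class Operator(ABC):
    """One processing step: consumes records, returns records."""
    @abstractmethod
    def run(self, records: list[dict]) -> list[dict]: ...

class Deduplicate(Operator):
    def run(self, records):
        seen, out = set(), []
        for r in records:
            if r["text"] not in seen:
                seen.add(r["text"])
                out.append(r)
        return out

class MinLength(Operator):
    def __init__(self, min_chars: int):
        self.min_chars = min_chars
    def run(self, records):
        return [r for r in records if len(r["text"]) >= self.min_chars]

class Pipeline:
    """Chains operators: each operator's output feeds the next."""
    def __init__(self, ops: list[Operator]):
        self.ops = ops
    def run(self, records):
        for op in self.ops:
            records = op.run(records)
        return records

noisy = [{"text": "hi"}, {"text": "a longer noisy paragraph"}, {"text": "hi"}]
clean = Pipeline([Deduplicate(), MinLength(min_chars=10)]).run(noisy)
print(clean)  # [{'text': 'a longer noisy paragraph'}]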

3 Pipeline Functionality

3.1 Ready-to-Use Pipelines

The current pipelines in DataFlow are as follows (a hypothetical usage sketch follows the list):

  • Text Pipeline: Mines question–answer pairs from large-scale plain-text data (mostly crawled from the Internet) for use in SFT and RL training.
  • Reasoning Pipeline: Enhances existing question–answer pairs with (1) extended chain-of-thought, (2) category classification, and (3) difficulty estimation.
  • Text2SQL Pipeline: Translates natural language questions into SQL queries, supplemented with explanations, chain-of-thought reasoning, and contextual schema information.
  • Knowledge Base Cleaning Pipeline: Extracts and structures knowledge from unorganized sources such as tables, PDFs, and Word documents into usable entries for downstream RAG or QA-pair generation.
  • Agentic RAG Pipeline: Identifies and extracts QA pairs from existing QA datasets or knowledge bases that require external knowledge to answer, for use in downstream training of agentic RAG tasks.
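
As a rough sketch of what driving one of these ready-made pipelines might look like in code. Everything below, including the import path, class name TextPipeline, and its parameters, is a hypothetical illustration rather than DataFlow's documented API; consult the documentation for actual usage.

# Hypothetical sketch only: the import path, class name, and
# parameters below are assumptions, not DataFlow's real API.
from dataflow.pipelines import TextPipeline  # hypothetical import

pipeline = TextPipeline(model="gpt-4o-mini")            # hypothetical constructor
qa_pairs = pipeline.run(input_path="web_corpus.jsonl")  # hypothetical method
print(f"Mined {len(qa_pairs)} QA pairs for SFT/RL training.")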

3.2 Flexible Operator Pipelines

In this framework, operators are categorized into Fundamental Operators, Generic Operators, Domain-Specific Operators, and Evaluation Operators, supporting both data processing and evaluation. Please refer to the documentation for details.
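
One way to picture this categorization is as tags on a common operator registry. The sketch below is purely illustrative and does not reflect DataFlow's internal design; the operator names are invented examples.

# Illustrative only: a toy registry mapping operator names to categories.
from enum import Enum

class Category(Enum):
    FUNDAMENTAL = "fundamental"
    GENERIC = "generic"
    DOMAIN_SPECIFIC = "domain-specific"
    EVALUATION = "evaluation"

# Invented example operators tagged by category.
REGISTRY = {
    "deduplicate": Category.FUNDAMENTAL,
    "language_filter": Category.GENERIC,
    "sql_schema_linker": Category.DOMAIN_SPECIFIC,
    "quality_scorer": Category.EVALUATION,
}

def operators_in(category: Category) -> list[str]:
    return [name for name, cat in REGISTRY.items() if cat is category]

print(operators_in(Category.EVALUATION))  # ['quality_scorer']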

3.3 Agent Guided Pipelines

For agent-guided use, the DataFlow-agent described above can dynamically assemble new pipelines on demand by recombining existing operators.

4 Quick Start

For environment setup and installation, please use the following commands👇

conda create -n dataflow python=3.10 
conda activate dataflow

pip install open-dataflow

If you want to run inference locally on your own GPU, please use:

pip install open-dataflow[vllm]
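
Note that in shells like zsh, square brackets are glob characters, so the extras spec should be quoted:

pip install "open-dataflow[vllm]"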

DataFlow supports Python >= 3.10.

You can use the following command to check whether the installation succeeded:

dataflow -v

You should see output like the following:

open-dataflow codebase version: 1.0.0
        Checking for updates...
        Local version:  1.0.0
        PyPI newest version:  1.0.0
You are using the latest version: 1.0.0.
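
If the dataflow command is not on your PATH, pip itself can also confirm the installed version:

pip show open-dataflow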

For the quick-start guide and tutorials, please visit our Documentation.


5 Experimental Results

For detailed experimental settings, please visit our documentation.

5.1 Text Pipeline

5.1.1 Pre-training data filter pipeline

The pre-training data processing pipeline was applied to randomly sampled data from the RedPajama dataset, resulting in a final data retention rate of 13.65%. The analysis results using QuratingScorer are shown in the figure below. As can be seen, the filtered pre-training data significantly outperforms the original data across four scoring dimensions: writing style, required expert knowledge, factual content, and educational value. This demonstrates the effectiveness of DataFlow's pre-training data processing.

[Figure: QuratingScorer comparison of filtered vs. original RedPajama data across the four scoring dimensions]
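
For intuition, this kind of quality filtering reduces to scoring each document and keeping those above a cutoff. The sketch below uses a trivial stand-in scorer rather than the actual QuratingScorer, whose real interface is described in the documentation.

def filter_by_score(docs: list[str], score_fn, threshold: float) -> list[str]:
    # score_fn stands in for a real quality scorer (e.g. QuratingScorer);
    # its call signature here is assumed for illustration.
    return [d for d in docs if score_fn(d) >= threshold]

docs = ["short snippet", "a substantially longer, more informative document ..."]
kept = filter_by_score(docs, score_fn=len, threshold=40)
retention_rate = len(kept) / len(docs)  # 0.1365 would mean 13.65% retained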

5.1.2 SFT data filter pipeline

We filtered 3k records from the Alpaca dataset and compared them against 3k randomly selected Alpaca records by fine-tuning Qwen2.5-7B on each subset. Results are:

[Figure: SFT results on Qwen2.5-7B: DataFlow-filtered vs. randomly selected Alpaca data]

5.2 Reasoning Pipeline

We verify our Reasoning Pipeline by SFT on Qwen2.5-32B-Instruct with data synthesized by the pipeline. We generated 1k and 5k SFT data pairs. Results are:

[Figure: SFT results on Qwen2.5-32B-Instruct with 1k and 5k synthesized reasoning data]

5.3 Text2SQL Pipeline

We fine-tuned the Qwen2.5-Coder-14B model on the Bird dataset using both Supervised Fine-tuning (SFT) and Reinforcement Learning (RL), with data constructed via the DataFlow-Text2SQL Pipeline. Results are:

[Figure: Text2SQL results for Qwen2.5-Coder-14B fine-tuned on Bird with SFT and RL]
