Scalable Collaborative Agents for Data Science
Project description
Overview
Scald automates machine learning workflows through collaborative AI agents using the Actor-Critic pattern. The Actor agent explores data, engineers features, and trains models using six specialized MCP servers. The Critic agent evaluates solutions and provides targeted feedback for iterative refinement. This approach combines LLM-powered reasoning with gradient boosting algorithms (CatBoost, LightGBM, XGBoost) for both classification and regression tasks.
The system learns from past experiences through ChromaDB-based memory, enabling transfer learning across datasets. Each iteration produces executable code artifacts, comprehensive logs, and cost tracking for full reproducibility.
Installation
Install from PyPI:
pip install scald
Configure API credentials:
cp .env.example .env # Add your OpenRouter API key
For development work, clone the repository and install with all dependencies:
git clone https://github.com/dmitryglhf/scald.git
cd scald
uv sync
Usage
Run AutoML from the command line:
scald --train data/train.csv --test data/test.csv --target price --task-type regression
Or use the Python API:
from scald import Scald
import polars as pl

scald = Scald(max_iterations=5)

# Option 1: Using CSV file paths
predictions = await scald.run(
    train="data/train.csv",
    test="data/test.csv",
    target="target_column",
    task_type="classification",
)

# Option 2: Using DataFrames (Polars or Pandas)
train_df = pl.read_csv("data/train.csv")
test_df = pl.read_csv("data/test.csv")
predictions = await scald.run(
    train=train_df,
    test=test_df,
    target="target_column",
    task_type="classification",
)
The Actor-Critic loop runs for the specified number of iterations (default: 5), producing predictions and saving all artifacts to a timestamped session directory.
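A timestamped session directory of the kind described can be created with the standard library; the layout below is illustrative, and Scald's actual directory and file names may differ:

```python
from datetime import datetime
from pathlib import Path
import tempfile

# Illustrative session layout: one directory per run, named by timestamp,
# with subdirectories for code artifacts, logs, and predictions.
root = Path(tempfile.mkdtemp())
stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
session = root / f"session_{stamp}"
for sub in ("code", "logs", "predictions"):
    (session / sub).mkdir(parents=True)

(session / "logs" / "run.log").write_text("iteration 1: baseline trained\n")
```

Naming sessions by timestamp keeps runs from overwriting each other, which is what makes each iteration's code, logs, and cost records reproducible after the fact.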
Architecture
The Actor agent has access to specialized MCP servers for data preview, statistical analysis, preprocessing, model training, file operations, and structured reasoning. The Critic agent reviews solutions without tool access to maintain evaluation objectivity. This separation enables independent verification while the memory system accumulates experience for improved performance on similar tasks.
Documentation
Full documentation is available at dmitryglhf.github.io/scald
Serve locally:
uv sync --group docs
mkdocs serve
Development
Run tests and code quality checks:
make test # Run tests
make lint # Check code quality
make format # Format code
make help # Show all commands
Requirements
Python 3.11+, the uv package manager, and an API key from OpenRouter or a compatible LLM provider.
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file scald-0.1.1.tar.gz.
File metadata
- Download URL: scald-0.1.1.tar.gz
- Upload date:
- Size: 556.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.9.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 569b3078f3bdb052f126640a9163059d7a483d5ec1a1c84ae057e21a4b21f321 |
| MD5 | ad4ff158c39d726e9da4e7d740002587 |
| BLAKE2b-256 | 3bd8291744dd9a5d61d62336ab7969ec9f2c7d8101a6f635809d07527c8d9eda |
File details
Details for the file scald-0.1.1-py3-none-any.whl.
File metadata
- Download URL: scald-0.1.1-py3-none-any.whl
- Upload date:
- Size: 35.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.9.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 391603f51d6b17d215ff8a3d8ffee1b9f5c41ffb0fe8ae547cb19bd61ca7ce11 |
| MD5 | 4b8ec6693c47a6b9fd0f5458490131c9 |
| BLAKE2b-256 | 13e010f45d68aff3046a8c190c364735a0ec60a682b4db7959874b2636ae3e14 |