AI Agent-oriented CLI for Microsoft Fabric Data Science

⚡ fds — Fabric Data Science CLI

Your terminal is your workspace. One CLI for the entire Fabric DS lifecycle — from experiment to endpoint.

fds is a command-line tool for Microsoft Fabric Data Science that lets you manage experiments, models, notebooks, environments, and endpoints without ever opening a browser. Designed for data scientists who live in the terminal and AI agents that need structured, scriptable output.

# Train → Track → Deploy in three commands
fds experiment runs list my-exp --sort-by metrics.auc --limit 1 -o json
fds model register churn-model --from-run <run-id>
fds endpoint activate churn-model --version 1 --wait

[Demo: fds CLI — experiment to endpoint in one session]


✨ Key Features

  • 🧪 Full ML Lifecycle: Experiments → Runs → Models → Endpoints — all from the terminal
  • 🤖 Agent-First Design: every command supports --help for self-discovery and --output json for structured parsing
  • 📓 Notebook Ops: upload, push, run (with -p key=value or JSON parameters), and download notebooks
  • 🌍 Environment Management: create environments, manage pip/conda libraries, publish — no portal clicks
  • 🏗️ Lakehouse & Spark: upload files, import data, run Spark jobs, vacuum Delta tables
  • 📊 DAX Queries: execute DAX queries against semantic models directly from the CLI
  • 🔐 Flexible Auth: browser, Azure CLI, device-code, and service-principal authentication
  • 🎯 Structured Output: table (human), json (agent), csv/tsv (export) — every command, every format
  • ⏱️ Async & Wait: long-running operations (notebook run, env publish) support --wait and --timeout
  • 💡 Actionable Errors: every error includes a hint for what went wrong and what to do next
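
The agent-first design pairs naturally with standard JSON tooling. A minimal sketch of how an agent might pick the best run out of `--output json` results — the payload shape below is an assumption for illustration, not the CLI's documented schema:

```shell
# Stand-in for output from something like `fds experiment runs list my-exp -o json`.
# The field names (run_id, metrics.auc) are assumptions, not a documented contract.
runs='[{"run_id":"run-a","metrics":{"auc":0.91}},{"run_id":"run-b","metrics":{"auc":0.88}}]'

# Select the run with the highest AUC.
best=$(printf '%s' "$runs" | jq -r 'max_by(.metrics.auc).run_id')
echo "$best"   # prints: run-a
```

In a real session the JSON would come straight from the CLI's stdout, piped into `jq` exactly as in the CI/CD example below.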

🚀 Quick Start

Install

pip install fabric-ds-cli

Authenticate & Configure

fds auth login                              # browser-based login
fds config set workspace "My-Workspace"     # set default workspace
fds auth status                             # verify connection

Your First Workflow

# 1. List experiments
fds experiment list

# 2. View recent runs, sorted by accuracy
fds experiment runs list my-experiment --since 7d --sort-by metrics.accuracy

# 3. Compare top runs side by side
fds experiment runs compare --experiment my-experiment --last 3

# 4. Register the best model
fds model register churn-model --from-run <best-run-id>

# 5. Deploy to real-time endpoint
fds endpoint activate churn-model --version 1 --wait

# 6. Score!
fds endpoint score churn-model --data '{"inputs": [[1, 2, 3]], "formatType": "dataframe", "orientation": "values"}'

Explore Semantic Models

# List semantic models in the workspace
fds dataset list

# Inspect tables and measures
fds dataset tables sales-model
fds dataset measures sales-model

# Run DAX queries directly from the terminal
fds dataset dax "EVALUATE TOPN(10, 'Sales', 'Sales'[Amount], DESC)" --dataset sales-model

🤖 For AI Agents

Install Skills (Copilot / Claude / Cursor)

fds ships pre-built Skills that teach agents how to use the CLI:

npx skills add <fds-repo> --skill ds-authoring-cli      # ML lifecycle (create, deploy, score)
npx skills add <fds-repo> --skill ds-consumption-cli    # Read-only (query, compare, explore)

Manual: cp -r .github/skills/ds-* ~/.copilot/skills/

Other Frameworks

For LangChain, AutoGPT, OpenAI Agents SDK, or custom orchestrators — see the Agent Integration Guide (output contract, exit codes, system prompt template, retry patterns, headless auth).
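
Orchestrators typically wrap CLI calls in a retry loop keyed off exit codes. A minimal sketch — the attempt count and backoff policy are arbitrary choices for illustration, not values from the guide:

```shell
# Retry any command up to 3 times with linear backoff.
# Returns 0 on first success, 1 after exhausting attempts.
retry() {
  attempt=1
  until "$@"; do
    [ "$attempt" -ge 3 ] && return 1
    sleep "$attempt"            # back off 1s, then 2s
    attempt=$((attempt + 1))
  done
}

# Hypothetical usage: retry fds endpoint activate churn-model --version 1 --wait
```

Pair a wrapper like this with the exit codes documented in the Agent Integration Guide to decide which failures are worth retrying at all.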


📖 Command Reference

📘 Full reference with all parameters and examples: docs/commands.md

Core Workflow

  • fds experiment (list · create · get · delete): ML experiment management
  • fds experiment runs (list · get · compare): run tracking, comparison, filtering
  • fds model (list · get · versions · register · delete): model registry operations
  • fds endpoint (list · get · versions · activate · deactivate · score · update): real-time endpoint lifecycle

Infrastructure

  • fds notebook (list · upload · push · download · delete): notebook management
  • fds notebook run (start · status · cancel · list): notebook execution & monitoring
  • fds env (list · create · get · add-libraries · remove-library · publish · delete): environment & dependency management
  • fds spark-job (list · run · status · cancel): Spark job definitions
  • fds lakehouse (create · get · delete · list · tables · vacuum · upload · import): lakehouse data operations
  • fds dataset (list · tables · measures · refresh · dax): semantic models & DAX queries

System

  • fds auth (login · status · logout): authentication management
  • fds config (set · get · list · reset): CLI configuration
  • fds workspace (create · delete · list · items · set): workspace management

💡 Every command supports --help for detailed usage, arguments, and examples.

Global Options

fds [command] --output json     # Output format: table | json | csv | tsv
fds [command] --workspace "X"   # Override default workspace
fds [command] --verbose         # Debug-level logging
fds [command] --quiet           # Suppress non-essential output
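
The structured formats are meant to compose with ordinary Unix tooling. A sketch using a hand-written tsv snippet standing in for real CLI output — the column layout here is invented for illustration:

```shell
# Pretend this came from something like `fds model list -o tsv`
# (the name/versions columns are assumptions, not the CLI's real schema).
models=$(printf 'name\tversions\nchurn-model\t3\nfraud-model\t1\n')

# Print models with more than one version, skipping the header row.
printf '%s\n' "$models" | awk -F'\t' 'NR > 1 && $2 > 1 { print $1 }'   # prints: churn-model
```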

🏗️ Architecture

┌──────────────────────────────────────────────────────────┐
│                    fds CLI (Typer)                        │
│  fds experiment · model · notebook · endpoint · env ...  │
├──────────────┬───────────────────────┬───────────────────┤
│  commands/   │     services/         │     core/         │
│  (CLI layer) │  (business logic)     │  (infrastructure) │
│  parse args  │  one service per      │  config, errors,  │
│  → service   │  resource type        │  auth, output,    │
│  → render    │                       │  workspace        │
├──────────────┴───────────┬───────────┴───────────────────┤
│                          │                               │
│    MLflow Tracking API   │   Fabric REST API             │
│    (experiment, model,   │   (notebook, env, endpoint,   │
│     run, metric, tag)    │    lakehouse, spark-job)      │
│                          │                               │
├──────────────────────────┴───────────────────────────────┤
│              Microsoft Fabric Platform                    │
└──────────────────────────────────────────────────────────┘

Design principles:

  • Commands are thin: parse CLI args → call service → render output
  • Services own business logic: one class per Fabric resource type
  • Core handles cross-cutting concerns: auth, config, errors, formatting
  • No raw HTTP — all platform calls go through established client libraries

⚖️ fds vs fab CLI

  • ML experiments & runs: fds ✅ Full CRUD + compare · fab ❌
  • Model registry: fds ✅ Register, version, delete · fab ❌
  • Real-time endpoints: fds ✅ Activate, score, scale · fab ❌
  • Notebook execution: fds ✅ Run with params, poll status · fab ⚠️ Basic
  • Environment management: fds ✅ Libraries, publish · fab ❌
  • DAX queries: fds ✅ · fab ❌
  • Workspace file operations: fds ⚠️ Lakehouse upload only · fab ✅ Full (ls, cd, cp, rm)
  • Git integration: fds ❌ · fab ✅
  • Permissions management: fds ❌ · fab ✅
  • AI-agent-optimized output: fds ✅ --output json · fab ❌

TL;DR: fds owns the DS lifecycle (experiment → model → endpoint). fab owns platform infrastructure (files, Git, permissions). They complement each other.


🔄 CI/CD

# GitHub Actions example
steps:
  - run: pip install fabric-ds-cli
  - run: fds auth login -m service-principal --tenant-id $TENANT --client-id $CLIENT_ID --client-secret $SECRET
  - run: |
      fds notebook run start training \
        -p version="$GITHUB_SHA" --wait
  - run: |
      BEST=$(fds experiment runs list my-exp \
        --sort-by metrics.auc --limit 1 -o json | jq -r '.[0].run_id')
      fds model register prod-model --from-run "$BEST"
      fds endpoint activate prod-model --version 1 --wait
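
A pipeline like the one above often gates deployment on a metric threshold before activating the endpoint. A minimal sketch — the threshold, the metric value, and the idea of gating here are assumptions layered on top of the workflow, not part of the CLI:

```shell
# Gate deployment on a minimum AUC. In CI, best_auc would come from the
# runs-list JSON (e.g. via jq); both values below are invented for illustration.
best_auc=0.91
threshold=0.85

# awk handles the floating-point comparison portably.
if awk -v a="$best_auc" -v t="$threshold" 'BEGIN { exit !(a >= t) }'; then
  echo "deploy"
else
  echo "skip deployment: AUC below threshold"
fi
```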

🔧 Prerequisites

  • Python 3.10+
  • Microsoft Fabric workspace with appropriate permissions
  • Azure CLI (az login) for interactive authentication

Optional

  • .NET SDK 8.0 — required only for fds dataset dax and fds dataset measures (PythonNet bridge)
    # macOS
    brew install dotnet-sdk
    # Ubuntu/Debian
    sudo apt install dotnet-sdk-8.0
    # Windows — https://dotnet.microsoft.com/download
    

    If .NET is not installed, DAX/measures commands show installation instructions instead of crashing.
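
A script can check the optional .NET prerequisite up front rather than relying on the CLI's runtime message. A minimal sketch:

```shell
# Check for the dotnet host before invoking DAX/measures commands.
if command -v dotnet >/dev/null 2>&1; then
  echo "dotnet available: $(dotnet --version)"
else
  echo "dotnet not found: install the .NET SDK 8.0 before using 'fds dataset dax'"
fi
```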


🛠️ Development

Setup

git clone https://github.com/microsoft/fabric-ds-cli.git
cd fabric-ds-cli
uv venv .venv --python 3.12
source .venv/bin/activate
uv pip install -e ".[dev]"

Note: Always use uv for dependency management. Bare pip may resolve to the system Python.

Testing

pytest tests/ -v                          # All unit tests
pytest tests/ -k "not fabric" -v          # Unit only (no Fabric auth needed)
pytest tests/ -m fabric -v                # E2E tests (requires Fabric workspace)

E2E tests require az login and a configured workspace. See Testing Guide for full setup.

Linting & Type Checking

ruff check src/ tests/                    # Lint (E, F, W, I, UP, B, SIM, D, C901)
mypy src/fabric_ds_cli/                   # Type check (strict mode)

Project Structure

src/fabric_ds_cli/
├── main.py              # CLI entry point, command registration
├── core/                # Infrastructure (config, errors, auth, output)
├── commands/            # Thin CLI layer (11 command groups)
└── services/            # Business logic (one service per resource)

📄 License

MIT
