
A state machine for data projects


Burr


Burr makes it easy to develop applications that make decisions based on state (chatbots, agents, simulations, etc.) from simple Python building blocks. Burr includes a UI that can track/monitor those decisions in real time.

Link to documentation. Quick (<3min) video intro here. Longer video intro & walkthrough. Blog post here.

🏃Quick start

Install from PyPI:

pip install "burr[start]"

Then run the UI server:

burr

This will open up Burr's telemetry UI. It comes loaded with some default data so you can click around. It also includes a demo chat application that demonstrates what the UI captures, so you can see things changing in real time. Open the "Demos" sidebar on the left and select chatbot. Chatting requires the OPENAI_API_KEY environment variable to be set, but you can still see how it works without an API key.

Next, start coding / running examples:

git clone https://github.com/dagworks-inc/burr && cd burr/examples/hello-world-counter
python application.py

You'll see the counter example running in the terminal, along with the trace being tracked in the UI. See if you can find it.

For more details see the getting started guide.

🔩 How does Burr work?

With Burr you express your application as a state machine (i.e. a graph/flowchart). You can (and should!) use it for anything where managing state can be hard. Hint: managing state is always hard! This is true across a wide array of contexts, from building RAG applications to power a chatbot, to running ML parameter tuning/evaluation workflows, to conducting a complex forecasting simulation.
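The idea can be sketched in a few lines of plain Python. This is a toy illustration of the state-machine concept, not Burr's API: actions are functions over a state dict, and transitions choose the next action from the current state:

```python
# Actions: plain functions that take the state and return a new state
def increment(state):
    return {**state, "counter": state["counter"] + 1}

def done(state):
    return state

actions = {"increment": increment, "done": done}

# Transitions: given the current action and state, pick the next action
transitions = {
    "increment": lambda s: "increment" if s["counter"] < 5 else "done",
    "done": lambda s: None,  # terminal state
}

def run(entrypoint, state):
    current = entrypoint
    while current is not None:
        state = actions[current](state)
        current = transitions[current](state)
    return state

final = run("increment", {"counter": 0})
```

Burr's value-add over this sketch is everything around the loop: declared reads/writes per action, persistence, hooks, and the telemetry UI.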

Burr includes:

  1. A (dependency-free) low-abstraction Python library that enables you to build and manage state machines with simple Python functions
  2. A UI you can use to view execution telemetry for introspection and debugging
  3. A set of integrations to make it easier to persist state, connect to telemetry, and integrate with other systems

Burr at work

💻️ What can you do with Burr?

Burr can be used to power a variety of applications, including:

  1. A simple gpt-like chatbot
  2. A stateful RAG-based chatbot
  3. A machine learning pipeline
  4. A simulation

And a lot more!

Using hooks and other integrations you can (a) integrate with any of your favorite vendors (LLM observability, storage, etc...), and (b) build custom actions that delegate to your favorite libraries (like Hamilton).

Burr will not tell you how to build your models, how to query APIs, or how to manage your data. It will help you tie all these together in a way that scales with your needs and makes following the logic of your system easy. Burr comes out of the box with a host of integrations including tooling to build a UI in streamlit and watch your state machine execute.

🏗 Start Building

See the documentation for getting started, and follow the example. Then read through some of the concepts and write your own application!

📃 Comparison against common frameworks

While Burr is attempting something (somewhat) unique, there are a variety of tools that occupy similar spaces:

| Criteria | Burr | Langgraph | Temporal | Langchain | Superagent | Hamilton |
|---|---|---|---|---|---|---|
| Explicitly models a state machine | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ |
| Framework-agnostic | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ |
| Asynchronous event-based orchestration | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ |
| Built for core web-service logic | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ |
| Open-source user-interface for monitoring | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Works with non-LLM use-cases | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ |

🌯 Why the name Burr?

Burr is named after Aaron Burr, founding father, third VP of the United States, and murderer/arch-nemesis of Alexander Hamilton. What's the connection with Hamilton? This is DAGWorks' second open-source library release after the Hamilton library. We imagine a world in which Burr and Hamilton lived in harmony and saw through their differences to better the union. We originally built Burr as a harness to handle state between executions of Hamilton DAGs (because DAGs don't have cycles), but realized that it has a wide array of applications and decided to release it more broadly.

🛣 Roadmap

While Burr is stable and well-tested, we have quite a few tools/features on our roadmap!

  1. Testing & eval curation. Curating data with annotations and being able to export these annotations to create unit & integration tests.
  2. Various efficiency/usability improvements for the core library (see planned capabilities for more details). This includes:
    1. Fully typed state with validation
    2. First-class support for retries + exception management
    3. More integration with popular frameworks (LCEL, LlamaIndex, Hamilton, etc...)
    4. Capturing & surfacing extra metadata, e.g. annotations for a particular point in time, that you can then pull out for fine-tuning, etc.
  3. Cloud-based checkpointing/restart for debugging or production use cases (save state to db and replay/warm start, backed by a configurable database)
  4. Tooling for hosted execution of state machines, integrating with your infrastructure (Ray, modal, FastAPI + EC2, etc...)
  5. Storage integrations. More integrations with technologies like Redis, MongoDB, MySQL, etc. so you can run Burr on top of what you have available.

If you want to avoid self-hosting the above solutions, we're building Burr Cloud. To let us know you're interested, sign up here for the waitlist to get access.

🤲 Contributing

We welcome contributors! To get started on developing, see the developer-facing docs.

👪 Contributors

Code contributions

Users who have contributed core functionality, integrations, or examples.

Bug hunters/special mentions

Users who have contributed small docs fixes, design suggestions, and found bugs.
