
Ziya AI Technical Workbench

Self-hosted AI workbench for code, architecture, and operations.
Runs alongside your editor — not instead of it.



Ziya is a complete AI working environment. Code generation, architecture analysis, operational debugging, visual output, persistent context, parallel agents, tool integration — it handles the full surface of how senior engineers actually work, not just the editing part. Two years of daily production use across hundreds of engineers went into making every normal workflow feel right. Self-hosted, MIT licensed, runs alongside your editor.

See Design Philosophy for the reasoning behind the choices.


What You Get

Code changes come back as rendered diffs with per-hunk Apply/Undo buttons; a 4-stage patch pipeline handles imperfect model output so you never copy-paste from a chat window. Conversations persist across sessions with context you control: mute dead-end messages, fork to explore alternatives, drop files you're done with. No auto-compaction deciding for you. Persistent memory carries domain knowledge across sessions.

Parallel agent swarms decompose large tasks into delegates with dependency ordering, memory crystals, and crash-resilient checkpointing. MCP tool integration ships with HMAC result signing, poisoning detection, and shell allowlisting. AST-based code intelligence traces references across files.

Projects scope conversations and reusable skill bundles to a codebase. The web UI and the full CLI share the same codebase.
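
HMAC result signing is a general integrity pattern rather than anything Ziya-specific; a minimal sketch (not Ziya's actual implementation, with a made-up envelope shape) of signing a tool result and rejecting a tampered one:

```python
import hashlib
import hmac
import json

SECRET = b"per-session-secret"  # illustrative; a real system derives this per session

def sign_result(payload: dict) -> dict:
    """Attach an HMAC-SHA256 tag over a canonical JSON encoding of the payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": tag}

def verify_result(envelope: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    body = json.dumps(envelope["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

env = sign_result({"tool": "shell", "output": "ok"})
assert verify_result(env)
env["payload"]["output"] = "tampered"   # any modification invalidates the tag
assert not verify_result(env)
```

The point of the pattern is that a tool result altered anywhere between producer and consumer fails verification, which is what makes downstream poisoning detectable.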

Multiple windows into the same codebase with different selected context, different models, even different conversations — open as many as make sense for how you work. Multiple projects with completely different codebases, each with their own conversations and context selections. You decide what layout fits your workflow.

Seven visualization renderers — Graphviz, Mermaid, Vega-Lite, DrawIO, KaTeX, HTML mockups, packet frame diagrams — all render inline with a normalization layer that fixes broken LLM output. The model chooses the right format for the situation: a dependency graph when you're debugging, a sequence diagram when you're tracing a flow, a chart when you're looking at data, rendered math when you're working through a proof, a mockup when you're designing a UI. You don't pick the renderer — you describe the problem and the response comes back visual when visual is the right answer.
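
For orientation, this is the kind of artifact the model emits for the chart case: a minimal Vega-Lite spec (field names and values invented for illustration) that a renderer turns into an inline line chart.

```json
{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "data": {"values": [
    {"t": 0, "p99_ms": 12}, {"t": 1, "p99_ms": 14}, {"t": 2, "p99_ms": 95}
  ]},
  "mark": "line",
  "encoding": {
    "x": {"field": "t", "type": "quantitative", "title": "minute"},
    "y": {"field": "p99_ms", "type": "quantitative", "title": "p99 latency (ms)"}
  }
}
```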

An immense amount of engineering went into making all of this feel like one tool, not twelve features bolted together. Standard MCP tool protocol, reusable skill bundles, and agent delegation work the way you'd expect — Ziya isn't a closed system. Feature Inventory has the full reference.

Same conversation, different universe every time

Debug a deadlock — Paste the thread dump. Get a dependency graph showing the cycle. The model correlates the lock ordering with your source code in the same conversation.
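
The cycle that dependency graph surfaces is just a cycle in the wait-for graph; independent of any tool, the detection step looks roughly like this (thread names and edges are hypothetical):

```python
# Wait-for graph reconstructed from a hypothetical thread dump:
# each blocked thread points at the thread holding the lock it wants.
waits_on = {
    "thread-1": "thread-2",   # thread-1 wants a lock held by thread-2
    "thread-2": "thread-3",
    "thread-3": "thread-1",   # back-edge: this closes the deadlock cycle
    "thread-4": None,         # not blocked
}

def find_cycle(graph):
    """Follow wait-for edges from each thread; revisiting a node means deadlock."""
    for start in graph:
        seen, node = [], start
        while node is not None:
            if node in seen:
                return seen[seen.index(node):]  # the cycle itself
            seen.append(node)
            node = graph.get(node)
    return None

print(find_cycle(waits_on))  # ['thread-1', 'thread-2', 'thread-3']
```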

Iterate on a UI — Describe a component, get a rendered HTML mockup inline. Adjust the layout, spacing, color. Each revision renders live in the conversation — no switching to a browser, no Figma, no screenshots.

Model a system mathematically — Discuss queueing theory for a rate limiter. The model derives the steady-state probabilities and renders them as fully typeset KaTeX — not ASCII approximations, real mathematical notation. Then plot the results: arrival rate vs. queue depth as a chart, sensitivity analysis across parameter ranges, distribution curves — right in the same conversation. Formulas and their visualizations, together.
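
For reference, the simplest such derivation (an M/M/1 queue with arrival rate λ, service rate μ, and utilization ρ = λ/μ < 1) is the kind of result the typeset output contains:

```latex
\pi_n = (1 - \rho)\,\rho^n, \qquad \rho = \frac{\lambda}{\mu},
\qquad L = \sum_{n=0}^{\infty} n\,\pi_n = \frac{\rho}{1-\rho},
\qquad W = \frac{L}{\lambda} = \frac{1}{\mu - \lambda}
```

Here π_n is the steady-state probability of n requests in the system, L the mean queue occupancy, and W the mean time in system via Little's law.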

Analyze a protocol — Drop in a pcap, a header file, and a client implementation. Ziya reads all three and synthesizes a graph showing the error domains. Or if you're designing a protocol: lay out the frame format as a bit-level diagram with field widths and byte boundaries, then trace how each field is actually used across the codebase to find inconsistencies between the spec and the implementation.
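
Tracing a frame layout into code is concrete; for a hypothetical header (1-byte version, 1-byte flags, 2-byte length, 4-byte sequence number, network byte order — the layout is invented for illustration), the parse side looks like:

```python
import struct

# Hypothetical frame header: version(8) | flags(8) | length(16) | seq(32), big-endian.
HEADER = struct.Struct("!BBHI")  # 8 bytes total

def parse_header(buf: bytes) -> dict:
    version, flags, length, seq = HEADER.unpack_from(buf)
    return {"version": version, "flags": flags, "length": length, "seq": seq}

frame = HEADER.pack(1, 0b0000_0010, 512, 42) + b"\x00" * 512
print(parse_header(frame))  # {'version': 1, 'flags': 2, 'length': 512, 'seq': 42}
```

Inconsistencies between spec and implementation usually show up exactly here: a field width or byte order in the format string that disagrees with the documented diagram.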

Diagnose a production incident — Drag in a monitoring screenshot. Paste the error log. Add the service code. The model works through the root cause — choosing whatever visualization fits: a timing chart for latency patterns, a sequence diagram for request flow, a dependency graph for cascade failures — and generates the fix as an applicable diff.

Refactor a codebase — Decompose a migration into parallel agents. Each handles a module independently, produces a memory crystal when done, downstream agents pick up where upstream left off. Apply the diffs when they come back.
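
Dependency ordering among delegates is ordinary topological sorting; a sketch with Python's stdlib graphlib (the module names and dependency edges are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical migration: each module waits on the modules it imports.
deps = {
    "api": {"models", "auth"},
    "auth": {"models"},
    "models": set(),
    "cli": {"api"},
}

# static_order() yields nodes so every dependency precedes its dependents.
order = list(TopologicalSorter(deps).static_order())
print(order)  # e.g. ['models', 'auth', 'api', 'cli']
```

Each delegate runs once everything before it in the order has finished, which is what lets downstream agents pick up where upstream left off.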


Quick Start

pip install ziya

For AWS Bedrock (default):

export AWS_ACCESS_KEY_ID=<your-key>
export AWS_SECRET_ACCESS_KEY=<your-secret>
ziya

For Google Gemini:

export GOOGLE_API_KEY=<your-key>
ziya --endpoint=google

For OpenAI:

export OPENAI_API_KEY=<your-key>
ziya --endpoint=openai

Then open http://localhost:6969.

CLI mode (no browser):

ziya chat                          # Interactive chat
ziya ask "what does this do?"      # One-shot question
ziya review --staged               # Review git staged changes
git diff | ziya ask "review this"  # Pipe anything in

Supported Models

AWS Bedrock: Claude Sonnet 4.6/4.5/4.0/3.7, Opus 4.6/4.5/4.1/4.0, Haiku 4.5/3, Nova Premier/Pro/Lite/Micro, DeepSeek R1/V3, Qwen3, Kimi K2.5, and more. Requires AWS credentials with Bedrock access.
Google: Gemini 3.1 Pro, 3 Pro/Flash, 2.5 Pro/Flash, 2.0 Flash. Requires a Google API key.
OpenAI: GPT-4.1/Mini/Nano, GPT-4o, o3, o3-mini, o4-mini. Requires an OpenAI API key.
Anthropic: Claude via the direct API. Requires an Anthropic API key.

Switch models mid-conversation. Configure temperature, top-k, max tokens, and thinking mode from the UI.


Documentation

Author

Ziya is primarily built and maintained by Dan Cohn, with early contributions from Vishnu Kool.

Contributing

See CONTRIBUTING.md for guidelines.

Security

See SECURITY.md for reporting vulnerabilities.

License

MIT — see LICENSE.
