
Async-native Python framework for building LLM applications — RAG pipelines, tool-using agents, and graph workflows. Streaming-first, transparent API, 2 hard deps.


SynapseKit

Build production LLM apps with 2 dependencies. Async-native RAG, Agents, and Graph workflows — no magic, no SaaS, no bloat.

"LangChain for people who hate LangChain."

SynapseKit is the minimal, async-first Python framework for LLM applications. 31 providers · 48 tools · 46 loaders · 9 vector stores. Every abstraction is plain Python you can read, debug, and extend. No hidden chains. No global state. No lock-in.


⚡ Async-native

Every API is async/await first.
Sync wrappers for scripts and notebooks.
No event loop surprises.

🌊 Streaming-first

Token-level streaming is the default,
not an afterthought.
Works across all providers.

🪶 Minimal footprint

2 hard dependencies: numpy + rank-bm25.
Everything else is optional.
Install only what you use.

🔌 One interface

31 LLM providers and 9 vector stores
behind the same API.
Swap without rewriting.

🧩 Composable

RAG pipelines, agents, and graph nodes
are interchangeable.
Wrap anything as anything.

🔍 Transparent

No hidden chains.
Every step is plain Python
you can read and override.
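To make the async-native, streaming-first claims concrete, here is the consumption pattern in plain asyncio. The `stream_completion` generator below is a stand-in for a provider call, not SynapseKit's actual API:

```python
import asyncio
from typing import AsyncIterator

async def stream_completion(prompt: str) -> AsyncIterator[str]:
    # Stand-in for a real provider client; a real one would yield
    # tokens as the model produces them.
    for token in ["The", " capital", " is", " Paris", "."]:
        await asyncio.sleep(0)
        yield token

async def main() -> str:
    chunks = []
    async for token in stream_completion("Capital of France?"):
        chunks.append(token)          # render each token as it arrives
    return "".join(chunks)

text = asyncio.run(main())
```

The same `async for` loop works whether the tokens come from a local model or a remote API, which is what makes token-level streaming a sensible default rather than an add-on.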

SynapseKit vs LangChain vs LlamaIndex

|  | SynapseKit | LangChain | LlamaIndex |
| --- | --- | --- | --- |
| Hard dependencies | 2 | 50+ | 20+ |
| Install size | ~5 MB | ~200 MB+ | ~100 MB+ |
| Async-native | ✅ Default | ⚠️ Partial | ⚠️ Partial |
| Cost tracking | ✅ Built-in | ❌ LangSmith (SaaS) | ❌ No |
| Evaluation | ✅ CLI + GitHub Action | ❌ LangSmith (SaaS) | ✅ Built-in |
| Graph workflows | ✅ Built-in | ✅ LangGraph (separate pkg) | ❌ No |
| LLM providers | 31 | 38+ | 20+ |
| Stack traces | Your code | Framework internals | Framework internals |

LangChain has more raw integrations and more tutorials. That's not what SynapseKit is optimizing for. SynapseKit is optimizing for the engineer who needs to ship, debug, and maintain an LLM feature in production — where readable code, predictable async behavior, and no surprise SaaS bills actually matter.


Who is it for?

SynapseKit is for Python developers who want to ship LLM features without fighting their framework.

  • Burned LangChain users — hit a wall with debugging, dependency hell, or version churn and want full control back
  • Async backend engineers — building FastAPI services where LangChain's sync-first model feels bolted on
  • Cost-conscious teams — startups and teams who don't want a LangSmith subscription for basic observability
  • ML engineers — building RAG or agent pipelines who need full control over retrieval, prompting, and tool use

What it covers

🗂 RAG Pipelines
Retrieval-augmented generation with streaming, BM25 reranking, conversation memory, and token tracing. Load from PDFs, URLs, CSVs, HTML, directories, and more.
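SynapseKit's real loaders and retrievers aren't shown here, but the retrieve-then-generate shape of a RAG pipeline can be sketched in pure Python. Everything below (the toy bag-of-words `embed`, the `retrieve` helper) is illustrative, not SynapseKit's API:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model and persist vectors in a vector store.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query, keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
]
context = retrieve("What is the capital of France?", docs)
prompt = "Answer using only this context:\n" + "\n".join(context)
```

The retrieved context is stuffed into the prompt and sent to the model; reranking (e.g. BM25) and conversation memory slot in between retrieval and generation.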

🤖 Agents
ReAct loop (any LLM) and native function calling (OpenAI / Anthropic / Gemini / Mistral). 48 built-in tools including calculator, Python REPL, code interpreter, web search, SQL, HTTP, shell, Twilio, arxiv, pubmed, wolfram, wikipedia, and more. Fully extensible.
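A ReAct loop is simple enough to sketch in plain Python. The `fake_llm` below is a scripted stand-in for a real model call, and the `Action: tool[arg]` format is illustrative, not SynapseKit's actual agent protocol:

```python
import re

# Illustrative tool registry; SynapseKit's 48 built-in tools are
# stand-ins here, represented by a single safe calculator.
TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def fake_llm(transcript: str) -> str:
    # Stand-in for a model call: act first, then answer from the
    # observation once one is present.
    if "Observation:" not in transcript:
        return "Action: calculator[12 * 7]"
    obs = transcript.rsplit("Observation: ", 1)[1]
    return f"Final Answer: {obs}"

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = fake_llm(transcript)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer: ").strip()
        m = re.match(r"Action: (\w+)\[(.+)\]", reply)
        tool, arg = m.group(1), m.group(2)
        # Run the tool and feed the observation back into the loop.
        transcript += f"\n{reply}\nObservation: {TOOLS[tool](arg)}"
    return "gave up"
```

The loop alternates model calls and tool calls until the model emits a final answer; native function calling replaces the text protocol with the provider's structured tool-call API.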

🔀 Graph Workflows
DAG-based async pipelines. Nodes run in waves — parallel nodes execute concurrently. Conditional routing, typed state with reducers, fan-out/fan-in, SSE streaming, event callbacks, human-in-the-loop, checkpointing, and Mermaid export.
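The wave-based execution model can be illustrated with nothing but asyncio: repeatedly collect every node whose dependencies are satisfied and run that wave concurrently. This is a conceptual sketch, not SynapseKit's graph engine:

```python
import asyncio

# Hypothetical 4-node graph: b and c both depend on a, so they form
# one wave and run concurrently; d fans the results back in.
GRAPH = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}

async def run_node(name: str, state: dict) -> None:
    await asyncio.sleep(0)            # stand-in for real async work
    state[name] = f"ran {name}"

async def run_graph(graph: dict[str, list[str]]) -> dict:
    state: dict[str, str] = {}
    done: set[str] = set()
    while len(done) < len(graph):
        # A wave = every not-yet-run node whose deps are all done.
        wave = [n for n, deps in graph.items()
                if n not in done and all(d in done for d in deps)]
        await asyncio.gather(*(run_node(n, state) for n in wave))
        done.update(wave)
    return state

state = asyncio.run(run_graph(GRAPH))
```

Conditional routing and typed state with reducers layer on top of this core loop: a router prunes the wave, and reducers decide how concurrent writes merge into shared state.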

🧠 LLM Providers
OpenAI, Anthropic, Ollama, Gemini, Cohere, Mistral, Bedrock, Azure OpenAI, Groq, DeepSeek, OpenRouter, Together, Fireworks, Cerebras, Cloudflare, Moonshot, Perplexity, Vertex AI, Zhipu, AI21 Labs, Databricks, Baidu ERNIE, llama.cpp, LM Studio, Minimax, Aleph Alpha, Hugging Face, SambaNova, xAI, NovitaAI, Writer — all behind one interface. Auto-detected from the model name. Swap without rewriting.
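Auto-detection from the model name can be pictured as a prefix lookup. The table below is a sketch of the idea, not SynapseKit's actual routing logic:

```python
# Illustrative prefix table; the real detection logic and the exact
# model-name conventions per provider may differ.
PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "gemini",
    "mistral-": "mistral",
    "command-": "cohere",
}

def detect_provider(model: str) -> str:
    for prefix, provider in PREFIXES.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"unknown model: {model!r}")
```

Because callers only pass a model name, swapping providers means changing a string, not rewriting client code.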

🗄 Vector Stores
InMemory (built-in, .npz persistence), ChromaDB, FAISS, Qdrant, Pinecone, Weaviate, PGVector, Milvus, LanceDB. One interface for all 9 backends.


🔧 Utilities
Output parsers (JSON, Pydantic, List), prompt templates (standard, chat, few-shot), token tracing with cost estimation.
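A JSON output parser mainly has to cope with models that wrap JSON in prose or code fences. A minimal pure-Python sketch (not SynapseKit's parser):

```python
import json
import re

def parse_json_output(text: str) -> dict:
    # LLMs often wrap JSON in prose or ```json fences; grab the first
    # {...} span and parse it.
    m = re.search(r"\{.*\}", text, re.DOTALL)
    if m is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(m.group(0))

reply = 'Sure! Here you go:\n```json\n{"city": "Paris", "country": "France"}\n```'
data = parse_json_output(reply)
```

A Pydantic parser adds a validation step after `json.loads`, so malformed fields fail loudly instead of propagating.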

🧪 EvalCI — LLM Quality Gates
GitHub Action that runs @eval_case suites on every PR and blocks merge if quality drops. No infrastructure, 2-minute setup. Score, cost, and latency tracked per case. Works with any LLM provider. → GitHub Marketplace · Docs
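Conceptually, an @eval_case suite is a registry of scored test functions with per-case thresholds. The decorator below is a hypothetical sketch of that idea, not EvalCI's real interface:

```python
# Minimal sketch of an eval-case registry with per-case score
# thresholds; the real EvalCI decorator and scoring may differ.
CASES = []

def eval_case(min_score: float = 0.8):
    def wrap(fn):
        CASES.append((fn, min_score))
        return fn
    return wrap

@eval_case(min_score=0.9)
def capital_of_france() -> float:
    answer = "Paris"                  # stand-in for a real LLM call
    return 1.0 if "Paris" in answer else 0.0

def run_suite() -> bool:
    # CI gate: fail the PR if any case scores below its threshold.
    return all(fn() >= threshold for fn, threshold in CASES)

gate = run_suite()
```

In CI, the suite runs on every PR and the boolean gate decides whether the merge is blocked.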


Install

pip

pip install "synapsekit[openai]"       # OpenAI
pip install "synapsekit[anthropic]"    # Anthropic
pip install "synapsekit[ollama]"       # Ollama (local)
pip install "synapsekit[all]"          # Everything

uv

uv add "synapsekit[openai]"
uv add "synapsekit[all]"

Poetry

poetry add "synapsekit[openai]"
poetry add "synapsekit[all]"

Full installation options → docs


Documentation

Everything you need to get started and go deep is in the docs.

🚀 Quickstart Up and running in 5 minutes
🗂 RAG Pipelines, loaders, retrieval, vector stores
🤖 Agents ReAct, function calling, tools, executor
🔀 Graph Workflows DAG pipelines, conditional routing, parallel execution
🧠 LLM Providers All 31 providers with examples
🧪 EvalCI LLM quality gates on every PR — GitHub Action
📖 API Reference Full class and method reference

Development

git clone https://github.com/SynapseKit/SynapseKit
cd SynapseKit
uv sync --group dev
uv run pytest tests/ -q

Contributing

Contributions are welcome — bug reports, documentation fixes, new providers, new features.

Read CONTRIBUTING.md to get started. Look for issues tagged good first issue if you're new.


Contributors

(💻 code · 📖 docs · 🚧 maintenance · 🐛 bug reports)

  • Nautiverse: 💻 📖 🚧
  • Gordienko Andrey: 💻
  • Deepak singh: 💻
  • by22Jy: 💻
  • Arjun Kundapur: 💻
  • Harshit Gupta: 📖
  • Dhruv Garg: 💻
  • Adam Silva: 💻
  • qorex: 💻
  • Abhay Krishna: 💻
  • AYUSH BHATT: 💻
  • HARSH: 📖
  • mikemolinet: 💻 🐛

License

Apache 2.0
