
Async-native Python framework for building LLM applications — RAG pipelines, tool-using agents, and graph workflows. Streaming-first, transparent API, 2 hard deps.


SynapseKit

Build production LLM apps with 2 dependencies. Async-native RAG, Agents, and Graph workflows — no magic, no SaaS, no bloat.

"LangChain for people who hate LangChain."

SynapseKit is the minimal, async-first Python framework for LLM applications. 30 providers · 46 tools · 33 loaders · 9 vector stores. Every abstraction is plain Python you can read, debug, and extend. No hidden chains. No global state. No lock-in.


⚡ Async-native

Every API is async/await first.
Sync wrappers for scripts and notebooks.
No event loop surprises.

🌊 Streaming-first

Token-level streaming is the default,
not an afterthought.
Works across all providers.

🪶 Minimal footprint

2 hard dependencies: numpy + rank-bm25.
Everything else is optional.
Install only what you use.

🔌 One interface

30 LLM providers and 9 vector stores
behind the same API.
Swap without rewriting.

🧩 Composable

RAG pipelines, agents, and graph nodes
are interchangeable.
Wrap anything as anything.

🔍 Transparent

No hidden chains.
Every step is plain Python
you can read and override.
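SynapseKit's actual client classes are not shown on this page, so the following stdlib-only sketch (all names hypothetical) just illustrates the async, token-level streaming consumption pattern the cards above describe:

```python
import asyncio
from typing import AsyncIterator

async def stream_tokens(prompt: str) -> AsyncIterator[str]:
    """Stand-in for a provider call that yields tokens as they arrive."""
    for token in prompt.split():
        await asyncio.sleep(0)  # pretend each chunk crosses the network
        yield token

async def main() -> str:
    received = []
    async for tok in stream_tokens("streaming is the default"):
        received.append(tok)  # handle each token the moment it lands
    return " ".join(received)

print(asyncio.run(main()))
```

Because every call is a coroutine or async iterator, the same code drops into a FastAPI handler without thread-pool workarounds.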

SynapseKit vs LangChain vs LlamaIndex

|  | SynapseKit | LangChain | LlamaIndex |
| --- | --- | --- | --- |
| Hard dependencies | 2 | 50+ | 20+ |
| Install size | ~5 MB | ~200 MB+ | ~100 MB+ |
| Async-native | ✅ Default | ⚠️ Partial | ⚠️ Partial |
| Cost tracking | ✅ Built-in | ❌ LangSmith (SaaS) | ❌ No |
| Evaluation | ✅ CLI + GitHub Action | ❌ LangSmith (SaaS) | ✅ Built-in |
| Graph workflows | ✅ Built-in | ✅ LangGraph (separate pkg) | ❌ No |
| LLM providers | 30 | 38+ | 20+ |
| Stack traces | Your code | Framework internals | Framework internals |

LangChain has more raw integrations and more tutorials, but that is not what SynapseKit optimizes for. SynapseKit is built for the engineer who needs to ship, debug, and maintain an LLM feature in production, where readable code, predictable async behavior, and no surprise SaaS bills actually matter.


Who is it for?

SynapseKit is for Python developers who want to ship LLM features without fighting their framework.

  • Burned LangChain users — hit a wall with debugging, dependency hell, or version churn and want full control back
  • Async backend engineers — building FastAPI services where LangChain's sync-first model feels bolted on
  • Cost-conscious teams — startups and teams who don't want a LangSmith subscription for basic observability
  • ML engineers — building RAG or agent pipelines that demand full control over retrieval, prompting, and tool use

What it covers

🗂 RAG Pipelines
Retrieval-augmented generation with streaming, BM25 reranking, conversation memory, and token tracing. Load from PDFs, URLs, CSVs, HTML, directories, and more.
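In the real pipeline the BM25 reranking step is handled by the rank-bm25 dependency; as a sketch of what that scoring does, here is classic BM25 in pure Python (k1 and b are the conventional defaults):

```python
import math
from collections import Counter

def bm25_scores(query: list[str], docs: list[list[str]],
                k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Classic BM25: score each tokenized document against the query."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for term in query:
            if tf[term] == 0:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(d) / avgdl))
            score += idf * norm
        scores.append(score)
    return scores

docs = [t.split() for t in [
    "the cat sat on the mat",
    "vector stores index embeddings",
    "bm25 reranks the retrieved chunks",
]]
scores = bm25_scores("bm25 scores retrieved chunks".split(), docs)
best = scores.index(max(scores))  # index of the best-matching chunk
```

A reranker like this runs over the candidates returned by the vector store, combining lexical overlap with embedding similarity.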

🤖 Agents
ReAct loop (any LLM) and native function calling (OpenAI / Anthropic / Gemini / Mistral). 43 built-in tools including calculator, Python REPL, web search, SQL, HTTP, shell, Twilio, arxiv, pubmed, wolfram, wikipedia, and more. Fully extensible.
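SynapseKit's own agent executor is not reproduced here, but the ReAct loop it describes follows this general shape; `fake_llm` and the `calculator` tool below are stand-ins for a real model and a real tool:

```python
import re

def calculator(expression: str) -> str:
    """Toy tool: evaluate a plain arithmetic expression (illustration only)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(transcript: str) -> str:
    """Stand-in for a real model: request the tool once, then answer."""
    if "Observation:" not in transcript:
        return "Thought: I should compute this.\nAction: calculator\nAction Input: 6 * 7"
    return "Final Answer: 42"

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = fake_llm(transcript)
        transcript += reply + "\n"
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        # parse the requested tool call and feed the result back as an observation
        action = re.search(r"Action: (\w+)", reply).group(1)
        arg = re.search(r"Action Input: (.+)", reply).group(1)
        transcript += f"Observation: {TOOLS[action](arg)}\n"
    raise RuntimeError("agent did not finish")
```

The ReAct variant works with any LLM because it only needs text in and text out; the native function-calling path swaps the regex parsing for the provider's structured tool-call response.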

🔀 Graph Workflows
DAG-based async pipelines. Nodes run in waves — parallel nodes execute concurrently. Conditional routing, typed state with reducers, fan-out/fan-in, SSE streaming, event callbacks, human-in-the-loop, checkpointing, and Mermaid export.
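The wave-based execution model can be sketched with asyncio alone; the four-node workflow below is hypothetical, and the real graph API additionally handles typed state, routing, and streaming:

```python
import asyncio

async def run_dag(nodes, deps):
    """Execute an async DAG in waves: every node whose deps are done runs concurrently."""
    done, state = set(), {}
    while len(done) < len(nodes):
        wave = [n for n in nodes if n not in done and deps.get(n, set()) <= done]
        if not wave:
            raise ValueError("dependency cycle detected")
        outputs = await asyncio.gather(*(nodes[n](state) for n in wave))
        for name, out in zip(wave, outputs):
            state[name] = out
        done.update(wave)
    return state

# Hypothetical workflow: fetch fans out to two branches, merge fans in.
async def fetch(state): return "doc"
async def branch_a(state): return state["fetch"] + ":A"
async def branch_b(state): return state["fetch"] + ":B"
async def merge(state): return state["branch_a"] + "|" + state["branch_b"]

nodes = {"fetch": fetch, "branch_a": branch_a, "branch_b": branch_b, "merge": merge}
deps = {"branch_a": {"fetch"}, "branch_b": {"fetch"}, "merge": {"branch_a", "branch_b"}}
state = asyncio.run(run_dag(nodes, deps))
```

Here `branch_a` and `branch_b` land in the same wave and run concurrently via `asyncio.gather`, which is the fan-out/fan-in behavior described above.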

🧠 LLM Providers
OpenAI, Anthropic, Ollama, Gemini, Cohere, Mistral, Bedrock, Azure OpenAI, Groq, DeepSeek, OpenRouter, Together, Fireworks, Cerebras, Cloudflare, Moonshot, Perplexity, Vertex AI, Zhipu, AI21 Labs, Databricks, Baidu ERNIE, llama.cpp, Minimax, Aleph Alpha, Hugging Face, SambaNova — all behind one interface. Auto-detected from the model name. Swap without rewriting.
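Auto-detection from the model name amounts to prefix routing; the table below is a hypothetical subset for illustration, not SynapseKit's actual routing rules:

```python
# Hypothetical prefix table -- the real routing rules live inside SynapseKit.
PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "gemini",
    "mistral-": "mistral",
    "command-": "cohere",
}

def detect_provider(model: str) -> str:
    """Route a model name to its provider by longest-known prefix."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"unknown provider for model {model!r}")
```

With routing like this, swapping providers is a one-string change in your own code.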

🗄 Vector Stores
InMemory (built-in, .npz persistence), ChromaDB, FAISS, Qdrant, Pinecone, Weaviate, PGVector, Milvus, LanceDB. One interface for all 9 backends.
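The "one interface" idea can be sketched with a `Protocol` plus a toy cosine-similarity backend; the class and method names below are hypothetical, and the real InMemory store also persists to `.npz`:

```python
import math
from typing import Protocol

class VectorStore(Protocol):
    """Minimal shared surface every backend would implement."""
    def add(self, doc_id: str, vector: list[float]) -> None: ...
    def search(self, vector: list[float], k: int) -> list[str]: ...

class InMemoryStore:
    """Toy backend: brute-force cosine similarity over a dict of vectors."""
    def __init__(self) -> None:
        self._vecs: dict[str, list[float]] = {}

    def add(self, doc_id: str, vector: list[float]) -> None:
        self._vecs[doc_id] = vector

    def search(self, vector: list[float], k: int) -> list[str]:
        def cos(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.hypot(*a) * math.hypot(*b))
        ranked = sorted(self._vecs, key=lambda i: cos(vector, self._vecs[i]), reverse=True)
        return ranked[:k]
```

Because callers only depend on the protocol, moving from the in-memory store to Qdrant or Pinecone is a constructor swap, not a rewrite.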

🔧 Utilities
Output parsers (JSON, Pydantic, List), prompt templates (standard, chat, few-shot), token tracing with cost estimation.
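As an illustration of what a JSON output parser has to do (this is not SynapseKit's actual parser class), extracting structured data from a model reply that may be wrapped in a markdown fence looks like:

```python
import json
import re

def parse_json_output(text: str):
    """Extract and parse JSON from an LLM reply, tolerating ```json fences."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)
```

Real parsers typically add retry-with-feedback on `json.JSONDecodeError` and schema validation on top of this core.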

🧪 EvalCI — LLM Quality Gates
GitHub Action that runs @eval_case suites on every PR and blocks merge if quality drops. No infrastructure, 2-minute setup. Score, cost, and latency tracked per case. Works with any LLM provider. → GitHub Marketplace · Docs


Install

pip

pip install "synapsekit[openai]"       # OpenAI
pip install "synapsekit[anthropic]"    # Anthropic
pip install "synapsekit[ollama]"       # Ollama (local)
pip install "synapsekit[all]"          # Everything

uv

uv add "synapsekit[openai]"
uv add "synapsekit[all]"

Poetry

poetry add "synapsekit[openai]"
poetry add "synapsekit[all]"

Full installation options → docs


Documentation

Everything you need to get started and go deep is in the docs.

🚀 Quickstart Up and running in 5 minutes
🗂 RAG Pipelines, loaders, retrieval, vector stores
🤖 Agents ReAct, function calling, tools, executor
🔀 Graph Workflows DAG pipelines, conditional routing, parallel execution
🧠 LLM Providers All 30 providers with examples
🧪 EvalCI LLM quality gates on every PR — GitHub Action
📖 API Reference Full class and method reference

Development

git clone https://github.com/SynapseKit/SynapseKit
cd SynapseKit
uv sync --group dev
uv run pytest tests/ -q

Contributing

Contributions are welcome — bug reports, documentation fixes, new providers, new features.

Read CONTRIBUTING.md to get started. Look for issues tagged good first issue if you're new.


Community


Contributors

  • Nautiverse 💻 📖 🚧
  • Gordienko Andrey 💻
  • Deepak singh 💻
  • by22Jy 💻
  • Arjun Kundapur 💻
  • Harshit Gupta 📖
  • Dhruv Garg 💻
  • Adam Silva 💻
  • qorex 💻
  • Abhay Krishna 💻
  • AYUSH BHATT 💻
  • HARSH 📖

License

Apache 2.0
