Graph-Based Programmable Neuro-Symbolic LM Framework
From idea to production in just a few lines
The first neuro-symbolic Language Model (LM) framework leveraging the simplicity of Keras and the rigor of Deep Learning best practices.
Build RAGs, autonomous agents, multi-agent systems, self-evolving systems, and more in just a few lines
Deutsch | English | Español | Français | 日本語 | 한국어 | Português | Русский | 中文
Documentation · FAQ · Discord · Code Examples
⭐ If you find Synalinks useful, please star the repo! Help us reach more AI/ML engineers and grow the community. ⭐
Too busy to read the documentation? Give the llms.txt or llms-full.txt to your favorite LMs or AI coding tools. Or better, use the Synalinks Claude Skills with Claude Code to get started right away!
What Is Synalinks?
Synalinks is an open-source neuro-symbolic framework that makes it simple to create, train, evaluate, and deploy advanced LM-based applications, including RAGs, autonomous agents, and self-evolving reasoning systems.
Think of it as Keras for Language Model applications: a clean, declarative API where:
- 🧩 You compose `Modules` like you would deep learning `Layers`.
- ⚙️ You train & optimize with in-context reinforcement learning.
- 🌐 You deploy as REST APIs or MCP servers.
Key Principles
- Progressive complexity: Start simple and grow advanced naturally.
- Neuro-symbolic learning: Combine logic, structure, and language models.
- In-context optimization: Improve model reasoning without retraining weights.
Who Is It For?
| Role | Why Synalinks Helps |
|---|---|
| 🧑💻 AI Developers | Build complex, production-grade LM apps without boilerplate. |
| 🧠 AI Researchers | Prototype neuro-symbolic and RL-in-context systems fast. |
| 🏢 Data Scientists | Integrate LM workflows with APIs & databases. |
| 🎓 Students/Hobbyists | Learn AI composition in a clean, intuitive framework. |
Why Synalinks?
Building robust LM apps is hard. Synalinks simplifies it with:
- Prompt/Anything optimization per module via In-Context RL
- Versionable, JSON-serializable pipelines
- Constrained structured outputs (JSON) for correctness
- Automatic async & parallel execution by default
- Metrics, rewards & evaluations built-in
- Native integrations: Ollama, vLLM, OpenAI, Azure, Anthropic, Mistral, Groq, Gemini, xAI, Cohere, DeepSeek, Together AI, OpenRouter, AWS Bedrock, Doubleword
- Embeddable, fast knowledge-base support built on DuckDB
- API-ready: Deploy with FastAPI or FastMCP
- KerasTuner compatibility for hyperparameter search
- Built-in MLflow callbacks and hooks for observability
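As a rough illustration of why constrained, validated JSON output matters in production, here is a plain-stdlib sketch of checking a raw model response against the fields an app expects (illustrative only; Synalinks enforces this via constrained JSON decoding, not this code):

```python
import json

# Fields the application expects in the model's response (hypothetical schema).
EXPECTED_FIELDS = {"answer": float}

def parse_answer(raw: str):
    """Return the parsed payload, or None if it is not valid JSON
    or violates the expected schema."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return None
    for field, typ in EXPECTED_FIELDS.items():
        # Accept ints where floats are expected, as JSON does.
        if field not in payload or not isinstance(payload[field], (typ, int)):
            return None
    return payload
```

Without a guarantee like this, a single malformed response can crash a downstream pipeline; constrained decoding rules the malformed cases out at generation time.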
| Framework | MCP | Logical Flow | Robust Branching | Parallel Function Calling | Hyperparameter Tuning | Ease of Use |
|---|---|---|---|---|---|---|
| Synalinks | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | 😀 |
| DSPy | ✅ Yes | ❌ No | ❌ No | ❌ No | ❌ No | 😢 |
| AdalFlow | ✅ Yes | ❌ No | ❌ No | ❌ No | ❌ No | 😢 |
| TextGrad | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | 😭 |
| Trace | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | 😭 |
Notable differences from DSPy
Beyond the Keras programming style, Synalinks:
- Can optimize anything, not only prompts
- Is fully async by default to prevent bad programming practices
- Detects & runs parallel branches automatically with `asyncio` to ease async programming
- Implements logic-based Python operators to easily manipulate data models and the app's control flow
- Uses constrained JSON decoding to be robust in production
- Is fully compatible with Pydantic `BaseModel` (via `schema=` in every module) to ease integration with FastAPI/FastMCP etc.
- Has introspection tools like `summarize()` and `plot_program()` to help write better documentation
Installation
```shell
uv pip install synalinks
```
Example
```python
import synalinks
import asyncio


class Query(synalinks.DataModel):
    query: str = synalinks.Field(
        description="The user query",
    )


class NumericalAnswer(synalinks.DataModel):
    answer: float = synalinks.Field(
        description="The correct final numerical answer",
    )


language_model = synalinks.LanguageModel(
    model="gemini/gemini-2.5-pro",
)


@synalinks.saving.register_synalinks_serializable()
async def calculate(expression: str):
    """Calculate the result of a mathematical expression.

    Args:
        expression (str): The mathematical expression to calculate, such as
            '2 + 2'. The expression can contain numbers, operators (+, -, *, /),
            parentheses, and spaces.
    """
    if not all(char in "0123456789+-*/(). " for char in expression):
        return {
            "result": None,
            "log": "Error: invalid characters in expression",
        }
    try:
        # Evaluate the mathematical expression safely
        result = round(float(eval(expression, {"__builtins__": None}, {})), 2)
        return {
            "result": result,
            "log": "Successfully executed",
        }
    except Exception as e:
        return {
            "result": None,
            "log": f"Error: {e}",
        }


async def main():
    inputs = synalinks.Input(data_model=Query)
    outputs = await synalinks.FunctionCallingAgent(
        data_model=NumericalAnswer,
        tools=[
            synalinks.Tool(calculate),
        ],
        language_model=language_model,
    )(inputs)
    program = synalinks.Program(
        inputs=inputs,
        outputs=outputs,
        name="math_agent",
        description="A math agent",
    )
```
Data Model Operators
Synalinks provides Python operators for combining and manipulating data models, enabling sophisticated control flow:
| Operator | Name | Description | Use Case |
|---|---|---|---|
| `+` | Concatenation | Combines fields from both data models. Raises an exception if either is `None`. | Merging outputs from parallel branches |
| `&` | Logical And | Safe concatenation that returns `None` if either input is `None`. | Combining with potentially null branch outputs |
| `\|` | Logical Or | Returns the non-`None` data model. If both are non-`None`, merges them. | Gathering outputs from conditional branches |
| `^` | Logical Xor | Returns the data if exactly one input is non-`None`, otherwise `None`. | Exclusive branch selection |
| `~` | Logical Not | Returns `None` if the input is non-`None`, or an empty data model if it is `None`. | Inverting branch conditions |
| `in` | Contains | Checks if a string key exists in the schema properties, or if another data model's schema is contained. Returns `True` or `False`. | Conditional field checking, schema validation |
```python
# Parallel branches with safe concatenation
x1 = await generator1(inputs)
x2 = await generator2(inputs)
combined = x1 & x2  # Merge both outputs

# Conditional branches with logical or
(easy, hard) = await synalinks.Branch(
    question="Is this query complex?",
    labels=["easy", "hard"],
    branches=[simple_generator, complex_generator],
)(inputs)
result = easy | hard  # Get whichever branch was selected
```
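The `None`-propagation rules in the table above can be sketched with plain dicts standing in for data models (this is only an illustration of the semantics, not the Synalinks implementation, which operates on `DataModel` instances):

```python
# Illustrative truth table for the data-model operators, using dicts.

def concat(a, b):
    """`+`: merge both sides; fail if either is None."""
    if a is None or b is None:
        raise ValueError("cannot concatenate with None")
    return {**a, **b}

def logical_and(a, b):
    """`&`: safe concatenation; None if either side is None."""
    if a is None or b is None:
        return None
    return {**a, **b}

def logical_or(a, b):
    """`|`: the non-None side; merge when both are present."""
    if a is None:
        return b
    if b is None:
        return a
    return {**a, **b}

def logical_xor(a, b):
    """`^`: a value only when exactly one side is non-None."""
    if (a is None) == (b is None):
        return None
    return a if a is not None else b

def logical_not(a):
    """`~`: None becomes an empty model; anything else becomes None."""
    return {} if a is None else None
```

With these rules, `easy | hard` in the branching example always yields whichever branch actually produced output.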
Getting a summary of your program
To print a tabular summary of your program:
```python
program.summary()
```
Or a plot (useful for documenting your system):

```python
synalinks.utils.plot_program(
    program,
    show_module_names=True,
    show_trainable=True,
    show_schemas=True,
)
```
The math agent program visualized with plot_program: Input → FunctionCallingAgent. Trainable modules are marked in green.
Running your program
To run your program, use the following:

```python
result = await program(
    Query(
        query=(
            "A bookstore receives a shipment of 135 new books. "
            "They place the books evenly onto 9 shelves. "
            "Later, they decide to move 3 books from each shelf to a display"
            " table at the front of the store. "
            "How many books are left on the shelves after the books are moved?"
        )
    ),
)
```
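For reference, the arithmetic the agent is expected to reproduce with its `calculate` tool (plain Python, independent of Synalinks):

```python
# 135 books placed evenly on 9 shelves, then 3 moved from each shelf.
books, shelves, moved_per_shelf = 135, 9, 3
per_shelf = books // shelves                       # 15 books per shelf
remaining = (per_shelf - moved_per_shelf) * shelves
print(remaining)  # 108 books left on the shelves
```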
Training your program/agent
```python
async def main():
    # ... your program definition (see the example above), plus an
    # `embedding_model` to be used by the optimizer
    (x_train, y_train), (x_test, y_test) = synalinks.datasets.gsm8k.load_data()

    program.compile(
        reward=synalinks.rewards.ExactMatch(
            in_mask=["answer"],
        ),
        optimizer=synalinks.optimizers.OMEGA(
            language_model=language_model,
            embedding_model=embedding_model,
        ),
    )

    batch_size = 1
    epochs = 10

    history = await program.fit(
        x_train,
        y_train,
        validation_split=0.2,
        batch_size=batch_size,
        epochs=epochs,
    )


if __name__ == "__main__":
    asyncio.run(main())
```
Saving & Loading
To save the entire architecture and variables (the program's state) into a JSON file:

```python
program.save("my_program.json")
```

To load it back:

```python
loaded_program = synalinks.Program.load("my_program.json")
```

To save only the program's state (its variables) as JSON:

```python
program.save_variables("my_program.variables.json")
```

To load the variables (requires a program with the same architecture):

```python
program.load_variables("my_program.variables.json")
```
Logging
To enable logging, use the following at the beginning of your script:
```python
synalinks.enable_logging()
```
Observability
Synalinks provides built-in observability through MLflow for tracing and monitoring your programs.
Important: call `enable_observability()` before creating any modules.
```python
import synalinks

# Enable observability first
synalinks.enable_observability(
    tracking_uri="http://localhost:5000",  # Optional: MLflow server URI
    experiment_name="my_experiment",  # Optional: defaults to "synalinks_traces"
)

# Then create your modules - they will be automatically traced
inputs = synalinks.Input(data_model=Query)
outputs = await synalinks.Generator(...)(inputs)
```
For training metrics and artifacts, use the Monitor callback:
```python
monitor = synalinks.callbacks.Monitor(
    tracking_uri="http://localhost:5000",
    experiment_name="training_runs",
)

await program.fit(x=train_x, y=train_y, callbacks=[monitor])
```
See the Observability documentation for Docker setup and advanced configuration.
Learn more
You can learn more by reading our documentation. If you have questions, the FAQ might help you.
Contributions
Contributions are welcome, whether for additional modules, metrics, or optimizers. For more information, or for help implementing your ideas (or ones from a paper), please join our Discord.
Be aware that every additional metric/module/optimizer must be approved by the core team: we want to keep the library as minimal and clean as possible, to avoid the uncontrolled growth that leads to bad software practices in most current leading LM frameworks.
If you have specific feedback or feature requests, we invite you to open an issue.
Contributors
Your contributions, feedback, and support are what make this project thrive.
From small bug fixes to major features, thank you for believing in open collaboration and the future of neuro-symbolic AI.
Community
Join our community to learn more about neuro-symbolic systems and the future of AI. We welcome the participation of people from very different backgrounds or education levels.
Citing our work
This work has been carried out under the supervision of François Chollet, the author of Keras. If this work is useful for your research, please use the following BibTeX entry:
```bibtex
@misc{sallami2025synalinks,
  title={Synalinks},
  author={Sallami, Yoan and Chollet, Fran\c{c}ois},
  year={2025},
  howpublished={\url{https://github.com/SynaLinks/Synalinks}},
}
```
Credit
Synalinks would not be possible without the great work of the following open-source projects:
File details
Details for the file synalinks-0.8.6.tar.gz.
File metadata
- Download URL: synalinks-0.8.6.tar.gz
- Upload date:
- Size: 373.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.6.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `7f336612ed0367031aeb8e30ab97b1c4d110b9b01bf2729c2ce053176fe72230` |
| MD5 | `13c839e0df853da936dd7c4290d27494` |
| BLAKE2b-256 | `3fd00fa08eeaca228f4502bdc612d4e83148325d5d28fb43e639648912c8c2d9` |
File details
Details for the file synalinks-0.8.6-py3-none-any.whl.
File metadata
- Download URL: synalinks-0.8.6-py3-none-any.whl
- Upload date:
- Size: 491.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.6.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `feb5994fa59fa25e86a3a7fe97c89bda4b09258f69a8a63664663d8887ecb62c` |
| MD5 | `61281e90199c30cc0e9355f39fef527a` |
| BLAKE2b-256 | `31e66aaa541d70aaec8b080680fcbc32ccdf10b078ba3f0293a527f6b56f603a` |