Agently 4 🚀

Build production‑grade AI apps faster, with stable outputs and maintainable workflows.

English Introduction | 中文介绍 (Chinese Introduction)



🔥 Latest Docs | 🚀 5‑minute Quickstart | 💡 Core Features


📚 Quick Links

🤔 Why Agently?

Many GenAI POCs fail in production not because models are weak, but because engineering control is missing:

| Common challenge | How Agently helps |
| --- | --- |
| Output schema drifts, JSON parsing fails | Contract‑first output control with `output()` + `ensure_keys` |
| Workflows get complex and hard to maintain | TriggerFlow orchestration with `to` / `if` / `match` / `batch` / `for_each` |
| Multi‑turn state becomes unstable | Session (v4.0.8.1+) with session activation, context window control, custom memo strategy, and persistence |
| Tool calls are hard to audit | Tool logs via `extra.tool_logs` |
| Switching models is expensive | `OpenAICompatible` unified model settings |

Agently turns LLM uncertainty into a stable, testable, maintainable engineering system.

✨ Core Features

1) 📝 Contract‑first Output Control

Define the structure with output(), enforce critical keys with ensure_keys.

```python
from agently import Agently

agent = Agently.create_agent()

result = (
    agent
    .input("Analyze user feedback")
    .output({
        "sentiment": (str, "positive/neutral/negative"),
        "key_issues": [(str, "issue summary")],
        "priority": (int, "1-5, 5 is highest")
    })
    .start(ensure_keys=["sentiment", "key_issues[*]"])
)
```
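The `key_issues[*]` wildcard asserts that the list exists and each entry is present. To illustrate the idea only (a hypothetical sketch, not Agently's implementation; in this sketch a wildcard also requires a non-empty list, and Agently's exact semantics may differ), a wildcard key-path check over a parsed result could look like:

```python
# Hypothetical sketch of wildcard key-path checking, to illustrate what
# ensure_keys=["sentiment", "key_issues[*]"] guards against. NOT Agently's
# implementation: every listed path must resolve in the parsed result.

def has_key_path(data: dict, path: str) -> bool:
    """Return True if `path` (dot keys, `[*]` = non-empty list) resolves in `data`."""
    current = [data]
    for part in path.split("."):
        is_list = part.endswith("[*]")
        key = part[:-3] if is_list else part
        next_level = []
        for node in current:
            if not isinstance(node, dict) or key not in node:
                return False
            value = node[key]
            if is_list:
                if not isinstance(value, list) or not value:
                    return False
                next_level.extend(value)
            else:
                next_level.append(value)
        current = next_level
    return True

result = {"sentiment": "positive", "key_issues": ["slow checkout"], "priority": 4}
print(has_key_path(result, "sentiment"))      # True
print(has_key_path(result, "key_issues[*]"))  # True
print(has_key_path(result, "missing_key"))    # False
```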

2) ⚡ Structured Streaming (Instant)

Consume structured fields as they are generated.

```python
from agently import Agently

agent = Agently.create_agent()

response = (
    agent
    .input("Explain recursion and give 2 tips")
    .output({"definition": (str, "one sentence"), "tips": [(str, "tip")]})
    .get_response()
)

# `ui` stands in for your own frontend update logic
for msg in response.get_generator(type="instant"):
    if msg.path == "definition" and msg.delta:
        ui.update_definition(msg.delta)
    if msg.wildcard_path == "tips[*]" and msg.delta:
        ui.add_tip(msg.delta)
```
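Each instant event carries a field path plus the newly generated delta for that field. As an illustration of how such events can be accumulated on the consumer side (a minimal sketch with a hypothetical `Event` class, not Agently's real event type):

```python
# Hypothetical sketch of consuming path/delta streaming events:
# assemble partial structured fields as deltas arrive.
from dataclasses import dataclass

@dataclass
class Event:
    path: str   # e.g. "definition" or "tips[0]"
    delta: str  # newly generated text for that field

def assemble(events) -> dict:
    """Accumulate streamed deltas into their target fields."""
    fields: dict[str, str] = {}
    for ev in events:
        fields[ev.path] = fields.get(ev.path, "") + ev.delta
    return fields

events = [
    Event("definition", "Recursion is when "),
    Event("definition", "a function calls itself."),
    Event("tips[0]", "Always define a base case."),
]
print(assemble(events))
```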

3) 🧩 TriggerFlow Orchestration

Readable, testable workflows with branching and concurrency.

```python
(
    flow.to(handle_request)
    .if_condition(lambda d: d.value["type"] == "query")
    .to(handle_query)
    .elif_condition(lambda d: d.value["type"] == "order")
    .to(check_inventory)
    .to(create_order)
    .end_condition()
)
```
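The feature table earlier also lists `batch` / `for_each` for concurrent fan-out. As an illustration of the underlying pattern only (plain `asyncio`, not Agently's API), a concurrency-capped batch looks like:

```python
# General concurrency-capped fan-out pattern in plain asyncio
# (illustrative only; TriggerFlow manages this for you).
import asyncio

async def run_batch(items, worker, max_concurrency: int = 3):
    sem = asyncio.Semaphore(max_concurrency)

    async def guarded(item):
        async with sem:  # at most max_concurrency workers run at once
            return await worker(item)

    return await asyncio.gather(*(guarded(i) for i in items))

async def double(x: int) -> int:
    await asyncio.sleep(0)  # stand-in for an LLM or I/O call
    return x * 2

print(asyncio.run(run_batch([1, 2, 3, 4], double)))  # [2, 4, 6, 8]
```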

4) 🧠 Session (Multi‑turn Context, v4.0.8.1+)

Built-in SessionExtension with activate_session/deactivate_session, context window control, custom memo strategies, and JSON/YAML persistence.

```python
from agently import Agently

agent = Agently.create_agent()

# Activate a per-user session (reused by session_id)
agent.activate_session(session_id="demo_user_1001")

# Optional: default window trimming by max length
agent.set_settings("session.max_length", 12000)

# Optional: custom strategy (analysis -> resize)
session = agent.activated_session
assert session is not None

def analysis_handler(full_context, context_window, memo, session_settings):
    # Return the name of a registered resize handler, or None to skip resizing
    if len(context_window) > 6:
        return "keep_last_six"
    return None

def keep_last_six(full_context, context_window, memo, session_settings):
    # Keep only the six most recent messages in the context window
    return None, list(context_window[-6:]), memo

session.register_analysis_handler(analysis_handler)
session.register_resize_handler("keep_last_six", keep_last_six)
```
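The trimming policy registered above is itself just a list operation. A self-contained sketch of the same keep-last-N idea in plain Python (illustrative only; Agently's Session applies this internally through the handler pair):

```python
# Plain-Python sketch of a keep-last-N context-window policy.
def trim_context(context_window: list[dict], max_messages: int = 6) -> list[dict]:
    """Keep only the most recent max_messages entries."""
    if len(context_window) <= max_messages:
        return list(context_window)
    return list(context_window[-max_messages:])

window = [{"role": "user", "content": f"msg {i}"} for i in range(9)]
trimmed = trim_context(window)
print(len(trimmed))           # 6
print(trimmed[0]["content"])  # msg 3
```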

5) 🔧 Tool Calls + Logs

Tool selection and usage are logged in extra.tool_logs.

```python
@agent.tool_func
def add(a: int, b: int) -> int:
    return a + b

response = agent.input("12+34=?").use_tool(add).get_response()
full = response.get_data(type="all")
print(full["extra"]["tool_logs"])
```

6) 🌐 Unified Model Settings (OpenAICompatible)

One config for multiple providers and local models.

```python
from agently import Agently

Agently.set_settings(
    "OpenAICompatible",
    {
        "base_url": "https://api.deepseek.com/v1",
        "model": "deepseek-chat",
        "auth": "DEEPSEEK_API_KEY",
    },
)
```

🚀 Quickstart

Install

```shell
pip install -U agently
```

Requirements: Python >= 3.10, recommended Agently >= 4.0.7.2

5‑minute example

1. Structured output

```python
from agently import Agently

agent = Agently.create_agent()

result = (
    agent.input("Introduce Python in one sentence and list 2 advantages")
    .output({
        "introduction": (str, "one sentence"),
        "advantages": [(str, "advantage")]
    })
    .start(ensure_keys=["introduction", "advantages[*]"])
)

print(result)
```

2. Workflow routing

```python
from agently import TriggerFlow, TriggerFlowEventData

flow = TriggerFlow()

@flow.chunk
def classify_intent(data: TriggerFlowEventData):
    text = data.value
    if "price" in text:
        return "price_query"
    if "feature" in text:
        return "feature_query"
    if "buy" in text:
        return "purchase"
    return "other"

@flow.chunk
def handle_price(_: TriggerFlowEventData):
    return {"response": "Pricing depends on the plan..."}

@flow.chunk
def handle_feature(_: TriggerFlowEventData):
    return {"response": "Our product supports..."}

(
    flow.to(classify_intent)
    .match()
    .case("price_query")
    .to(handle_price)
    .case("feature_query")
    .to(handle_feature)
    .case_else()
    .to(lambda d: {"response": "What would you like to know?"})
    .end_match()
    .end()
)

print(flow.start("How much does it cost?"))
```

✅ Is Your App Production‑Ready? — Release Readiness Checklist

Drawn from teams shipping real projects with Agently, this production‑readiness checklist helps reduce common risks before release.

| Area | Check | Recommended Practice |
| --- | --- | --- |
| 📝 Output Stability | Are key interfaces stable? | Define schemas with `output()` and lock critical fields with `ensure_keys`. |
| ⚡ Real‑time UX | Need updates while generating? | Consume `type="instant"` structured streaming events. |
| 🔍 Observability | Are tool calls auditable? | Inspect `extra.tool_logs` for full arguments and results. |
| 🧩 Workflow Robustness | Are complex flows fully tested? | Unit test each TriggerFlow branch and concurrency limit against expected outputs. |
| 🧠 Memory & Context | Is the multi‑turn experience consistent? | Define Session/Memo summary, trimming, and persistence policies. |
| 📄 Prompt Management | Can logic evolve safely? | Version and configure prompts to keep changes traceable. |
| 🌐 Model Strategy | Can you switch or downgrade models? | Centralize settings with `OpenAICompatible` for fast provider switching. |
| 🚀 Performance & Scale | Can it handle concurrency? | Validate async performance in real web‑service scenarios. |
| 🧪 Quality Assurance | Are regression tests complete? | Create fixed inputs with expected outputs for core scenarios. |
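The last row, fixed inputs paired with expected outputs, can be as simple as a table of cases over your routing logic. A hedged sketch, reusing the same classification logic as the quickstart's `classify_intent` chunk (here as a plain function so it runs standalone):

```python
# Regression-test sketch: fixed inputs with expected outputs,
# runnable with pytest or plain python.

def classify_intent(text: str) -> str:  # same logic as the quickstart chunk
    if "price" in text:
        return "price_query"
    if "feature" in text:
        return "feature_query"
    if "buy" in text:
        return "purchase"
    return "other"

CASES = [
    ("What is the price?", "price_query"),
    ("Does it have a search feature?", "feature_query"),
    ("I want to buy now", "purchase"),
    ("Hello there", "other"),
]

for text, expected in CASES:
    assert classify_intent(text) == expected, (text, expected)
print("all regression cases passed")
```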

📈 Who Uses Agently to Solve Real Problems?

"Agently helped us turn evaluation rules into executable workflows and keep key scoring accuracy at 75%+, significantly improving bid‑evaluation efficiency." — Project lead at a large energy SOE

"Agently enabled a closed loop from clarification to query planning to rendering, reaching 90%+ first‑response accuracy and stable production performance." — Data lead at a large energy group

"Agently’s orchestration and session capabilities let us ship a teaching assistant for course management and Q&A quickly, with continuous iteration." — Project lead at a university teaching‑assistant initiative

Your project can be next.
📢 Share your case on GitHub Discussions →

❓ FAQ

Q: How is Agently different from LangChain or LlamaIndex?
A: Agently is built for production. It focuses on stable interfaces (contract‑first outputs), readable/testable orchestration (TriggerFlow), and observable tool calls (tool_logs). It’s a better fit for teams that need reliability and maintainability after launch.

Q: Which models are supported? Is switching expensive?
A: With OpenAICompatible, you can connect OpenAI, Claude, DeepSeek, Qwen and most OpenAI‑compatible endpoints, plus local models like Llama/Qwen. The same business code can switch models without rewrites, reducing vendor lock‑in.

Q: What’s the learning curve? Where should I start?
A: The core API is straightforward—you can run your first agent in minutes. Start with Quickstart, then dive into Output Control and TriggerFlow.

Q: How do I deploy an Agently‑based service?
A: Agently doesn’t lock you into a specific deployment path. It provides async APIs and FastAPI examples. The FastAPI integration example covers SSE, WebSocket, and standard POST.

Q: Do you offer enterprise support?
A: Yes. The core framework in this repository remains open‑source under Apache 2.0. Enterprise support, private extensions, managed services, and SLA-based collaboration are provided under separate commercial agreements. Contact us via the community.

Q: What is open-source vs enterprise in Agently?
A: The open-source core includes the general framework and public capabilities in this repository. Enterprise offerings (for example private extension packs, advanced governance modules, private deployment support, and SLA services) are delivered separately under commercial terms.

🧭 Docs Guide (Key Paths)

🤝 Community

📄 License

Agently follows an open-core + commercial extension model:

  • Open-source core in this repository: Apache 2.0
  • Trademark usage policy: TRADEMARK.md
  • Contributor rights agreement: CLA.md
  • Enterprise extensions and commercial services: provided under separate commercial agreements

Start building your production‑ready AI apps →
pip install -U agently

Questions? Read the docs or join the community.
