# Bridgic Amphibious

Dual-mode agent framework — build agents that operate in both LLM-driven and deterministic modes, with automatic fallback between them.

Bridgic Amphibious implements an amphibious execution model: the same agent can run in LLM-driven agent mode (`on_agent()`) and in deterministic, code-driven workflow mode (`on_workflow()`), and can switch between the two autonomously when necessary.
## Core Design Philosophy

### 1. Agent = Think Units + Context Orchestration
Traditional agent frameworks require developers to work with low-level execution primitives. Bridgic Amphibious raises the abstraction level: an agent is defined by declaring think units and orchestrating them with context.
```python
class TravelAgent(AmphibiousAutoma[TravelContext]):
    # Declare think units — each encapsulates a specific thinking pattern
    planner = think_unit(
        CognitiveWorker.inline("Analyze the goal and decide the next step"),
        max_attempts=20,
    )

    async def on_agent(self, ctx: TravelContext):
        # Orchestrate think units with context scoping
        async with self.snapshot(goal="Plan the trip"):
            await self.planner
        async with self.snapshot(goal="Execute each step of the plan"):
            await self.planner
```
Each `await self.planner` triggers a complete observe-think-act cycle:
- Observe — gather the current state (overridable at both worker and agent level)
- Think — LLM decides the next action based on context and available tools
- Act — execute tool calls or produce structured output
The developer focuses on what to think about and how to scope context, not on the mechanics of LLM calls, tool matching, or output parsing.
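The observe-think-act loop can be sketched in plain Python. This is a hypothetical, stand-alone illustration: `Cycle`, its methods, and the stand-in `think()` decision are not framework APIs, just a minimal model of the cycle described above.

```python
# Hypothetical sketch of one observe-think-act loop; in the real framework
# the LLM and tool registry fill the think/act roles.
from dataclasses import dataclass, field

@dataclass
class Cycle:
    history: list = field(default_factory=list)

    def observe(self) -> str:
        # Gather current state (overridable at worker and agent level)
        return f"step={len(self.history)}"

    def think(self, observation: str) -> str:
        # Stand-in for the LLM decision; returns an action name
        return "finish" if len(self.history) >= 2 else "act"

    def act(self, action: str) -> str:
        # Execute the chosen action and record it
        self.history.append(action)
        return action

    def run(self, max_attempts: int = 10) -> list:
        for _ in range(max_attempts):
            action = self.act(self.think(self.observe()))
            if action == "finish":
                break
        return self.history

print(Cycle().run())  # ['act', 'act', 'finish']
```

The `max_attempts` bound mirrors the `think_unit(..., max_attempts=20)` parameter: the loop stops either when the decision step says it is done or when the attempt budget runs out.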
### 2. Functional Execution vs. Decision Making — Decoupled
In traditional software, what to do and when to do it are intertwined in code logic. Bridgic Amphibious cleanly separates them:
- Functional modules (tools, skills) — pure capabilities, independent of execution order
- Decision making — can be handled in two fundamentally different ways:
| Mode | Decision Maker | Defined In | Best For |
|---|---|---|---|
| Workflow (`on_workflow`) | Developer's code | `yield ActionCall(...)` | Known, repeatable processes |
| Agent (`on_agent`) | LLM reasoning | `await self.think_unit` | Open-ended, adaptive tasks |
The same agent can implement both modes and switch between them at runtime:
```python
class ResilientAgent(AmphibiousAutoma[MyContext]):
    exec_think = think_unit(CognitiveWorker.inline("Execute step"), max_attempts=10)

    async def on_agent(self, ctx):
        """LLM-driven mode: AI decides what to do."""
        await self.exec_think

    async def on_workflow(self, ctx):
        """Deterministic mode: developer defines the exact steps."""
        yield ActionCall("login", username="admin", password="secret")
        yield ActionCall("navigate_to", url="/dashboard")
        result = yield ActionCall("extract_data", selector=".metrics")
        # Pause and ask the human for confirmation
        feedback = yield HumanCall(prompt="Data extracted. Proceed with analysis?")
        # Fall back to agent for complex situations
        yield AgentCall(goal="Analyze the extracted data", max_attempts=5)
```
Runtime mode switching happens automatically:

- Workflow failure → Agent fallback: If a workflow step fails, the framework can automatically switch to agent mode to resolve the issue, then resume the workflow
- Configurable degradation: `max_consecutive_fallbacks` controls when to abandon the workflow entirely and hand over to full agent mode
- Explicit mode control: `arun(mode=RunMode.AGENT)` or `arun(mode=RunMode.AMPHIFLOW)`
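The degradation rule can be sketched as a counter that resets on success. The helper names below (`run_amphiflow`, `agent_fix`) are hypothetical; the framework's internal logic may differ, but the shape of the policy is the same.

```python
# Hypothetical sketch of workflow -> agent degradation. A failure triggers an
# agent-mode fallback; too many consecutive failures abandon the workflow.
def run_amphiflow(steps, agent_fix, max_consecutive_fallbacks=2):
    consecutive = 0
    log = []
    for step in steps:
        try:
            step()
            log.append("workflow")
            consecutive = 0  # a success resets the counter
        except Exception:
            consecutive += 1
            if consecutive > max_consecutive_fallbacks:
                log.append("full-agent")  # abandon the workflow entirely
                break
            agent_fix()  # fall back to agent mode for this step
            log.append("agent-fallback")
    return log

def ok():
    pass

def bad():
    raise RuntimeError("step failed")

print(run_amphiflow([ok, bad, bad, bad], agent_fix=ok))
# ['workflow', 'agent-fallback', 'agent-fallback', 'full-agent']
```

Resetting the counter on success matters: isolated hiccups stay in amphiflow mode, and only a sustained run of failures hands control fully to the agent.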
## Architecture

```mermaid
graph TB
    subgraph AmphibiousAutoma
        subgraph Row1["Running Mode"]
            direction LR
            Workflow -. "Auto-degradation" .-> Agent
            Agent -. "Auto-switch" .-> Workflow
        end
        subgraph Row2["Observe → Think → Act"]
            direction LR
            Observe["Observe"]
            Think["Think"]
            Act["Act"]
            Observe --> Think --> Act --> Observe
        end
        subgraph Row3["CognitiveContext — global shared state"]
            Data
            Exposure
        end
    end
```
The key insight: Context sits on top as the global state, and the on_agent / on_workflow duality lives inside the Think phase — they are two interchangeable strategies for the same decision point in the observe-think-act cycle.
### Layer 1: Data Exposure

Controls how context data is disclosed to the LLM.

- `EntireExposure[T]` — all data visible at once (used for tools)
- `LayeredExposure[T]` — progressive disclosure: summary first, details on demand (used for skills, history)
```python
# The LLM sees skill summaries initially.
# It can request details via the acquiring policy:
#   details: [{field: "skills", index: 0}]
# The framework then reveals the full skill content.
```
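A minimal sketch of the progressive-disclosure idea follows. The `Layered` class here is hypothetical, not the real `LayeredExposure` API; it only shows the two-step summary/details contract.

```python
# Hypothetical two-step disclosure: summaries first, full content on request.
class Layered:
    def __init__(self, items):
        self._items = items

    def summary(self):
        # Only titles are exposed to the LLM at first
        return [item["title"] for item in self._items]

    def get_details(self, index):
        # Full content is revealed only when explicitly requested
        return self._items[index]

skills = Layered([
    {"title": "data_extraction", "body": "Use CSS selectors to pull metrics."},
    {"title": "report_writing", "body": "Summarize findings as markdown."},
])
print(skills.summary())              # ['data_extraction', 'report_writing']
print(skills.get_details(0)["body"]) # Use CSS selectors to pull metrics.
```

The design choice is a context-size one: the prompt carries only summaries by default, and the acquiring policy spends extra tokens on details only for the entries the LLM actually asks about.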
### Layer 2: Context

Context is a Pydantic `BaseModel` that auto-detects `Exposure` fields and provides `summary()`, `get_details()`, and `format_summary()`.
`CognitiveContext` is the default implementation with:

- `goal` — what the agent is trying to achieve
- `tools` (`EntireExposure`) — available tool specifications
- `skills` (`LayeredExposure`) — available skills with progressive disclosure
- `cognitive_history` (`LayeredExposure`) — execution history with layered memory:
  - Working memory: recent steps with full details
  - Short-term memory: older steps as summaries, queryable for details
  - Long-term memory: compressed via LLM into concise summaries
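The working/short-term split can be sketched as a bounded deque that demotes old entries to summaries. `LayeredHistory` is a hypothetical illustration; in the real framework, long-term compression is done by an LLM rather than by truncation.

```python
# Hypothetical layered memory: recent steps keep full detail,
# the oldest step is demoted to a summary when working memory is full.
from collections import deque

class LayeredHistory:
    def __init__(self, working_size=3):
        self.working = deque(maxlen=working_size)  # full details
        self.short_term = []                       # summaries, still queryable

    def append(self, step: str):
        if len(self.working) == self.working.maxlen:
            # Oldest working-memory step degrades to a crude summary
            # (the real framework would summarize, not truncate)
            self.short_term.append(self.working[0][:10] + "...")
        self.working.append(step)

h = LayeredHistory(working_size=2)
for s in ["opened dashboard page", "clicked export button", "parsed CSV rows"]:
    h.append(s)
print(list(h.working))  # two most recent steps, full text
print(h.short_term)     # older step, summarized
```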
### Layer 3: CognitiveWorker (Think Unit)

A `CognitiveWorker` is a pure thinking unit — it only decides what to do, not how to execute.
```python
class AnalysisWorker(CognitiveWorker):
    async def thinking(self):
        return "Analyze the current situation and decide the best next action."

    async def observation(self, context):
        # Custom observation logic
        return f"Page title: {await get_page_title()}"

# Or use the factory for simple cases:
worker = CognitiveWorker.inline("Plan ONE immediate next step")
```
Cognitive Policies enhance thinking with optional multi-round deliberation:
- Acquiring (built-in) — request details from LayeredExposure fields before deciding
- Rehearsal (opt-in) — mentally simulate the planned action before committing
- Reflection (opt-in) — assess information quality and consistency
Each policy fires at most once per cycle, then closes.
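The fire-at-most-once rule can be sketched as a simple latch. The `Policy` class below is hypothetical, not the framework's policy API; it only models "fires once per cycle, then closes until the next cycle".

```python
# Hypothetical latch modeling the fire-at-most-once-per-cycle rule.
class Policy:
    def __init__(self, name):
        self.name = name
        self._fired = False

    def maybe_fire(self):
        if self._fired:
            return None        # already closed for this cycle
        self._fired = True
        return f"{self.name} round"

    def reset(self):
        self._fired = False    # a new observe-think-act cycle reopens it

acquiring = Policy("acquiring")
print(acquiring.maybe_fire())  # acquiring round
print(acquiring.maybe_fire())  # None
acquiring.reset()
print(acquiring.maybe_fire())  # acquiring round
```

The latch guarantees bounded deliberation: even with all three policies enabled, a single cycle adds at most one extra round per policy.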
Structured Output: Set `output_schema` to skip the tool-call loop entirely and produce a typed Pydantic instance:
```python
class PlanResult(BaseModel):
    phases: List[Phase]
    estimated_steps: int

planner = CognitiveWorker.inline(
    "Create an execution plan",
    output_schema=PlanResult,
)
```
### Layer 4: AmphibiousAutoma (Orchestration)
The top-level agent class that ties everything together.
Think Unit Descriptors declare reusable thinking patterns at the class level:
```python
class MyAgent(AmphibiousAutoma[CognitiveContext]):
    # Declare think units as class attributes
    main_think = think_unit(
        CognitiveWorker.inline("Execute the next step"),
        max_attempts=20,
        on_error=ErrorStrategy.RETRY,
        max_retries=2,
    )

    async def on_agent(self, ctx):
        # Simple: single execution
        await self.main_think

        # With loop condition
        await self.main_think.until(
            lambda ctx: some_condition(ctx),
            max_attempts=50,
        )

        # With tool/skill filtering
        await self.main_think.until(
            lambda ctx: ctx.goal_reached,
            tools=["search", "analyze"],
            skills=["data_extraction"],
        )
```
Phase Annotation scopes context and captures execution traces:
```python
async def on_agent(self, ctx):
    async with self.snapshot(goal="Handle edge case", custom_field="override"):
        await self.fix_think
```

- `snapshot()` — temporarily overrides context fields
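Underneath, the snapshot pattern is save/override/restore. A generic synchronous sketch follows (the framework's `snapshot()` is async and operates on the typed context; the dict-based helper here is a hypothetical stand-in):

```python
# Generic save/override/restore context manager, sketching the snapshot idea.
from contextlib import contextmanager

@contextmanager
def snapshot(ctx: dict, **overrides):
    saved = {k: ctx.get(k) for k in overrides}  # remember current values
    ctx.update(overrides)                       # apply temporary overrides
    try:
        yield ctx
    finally:
        ctx.update(saved)                       # restore on exit, even on error

ctx = {"goal": "Plan the trip"}
with snapshot(ctx, goal="Handle edge case"):
    print(ctx["goal"])  # Handle edge case
print(ctx["goal"])      # Plan the trip
```

Restoring in a `finally` block is the important part: even if a think unit inside the scope raises, the outer goal is back in place for whatever runs next.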
## Quick Start

### Installation

```shell
pip install bridgic-amphibious
```
### Minimal Agent (Agent Mode)

```python
from bridgic.amphibious import (
    AmphibiousAutoma, CognitiveContext, CognitiveWorker, think_unit
)

class SimpleAgent(AmphibiousAutoma[CognitiveContext]):
    executor = think_unit(
        CognitiveWorker.inline("Decide and execute the next step"),
        max_attempts=10,
    )

    async def on_agent(self, ctx):
        await self.executor

# Run
agent = SimpleAgent(llm=my_llm)
await agent.arun(goal="Book a flight from Beijing to Tokyo", tools=[...])
```
### Minimal Agent (Workflow Mode)

```python
from bridgic.amphibious import AmphibiousAutoma, CognitiveContext, ActionCall, HumanCall

class WorkflowAgent(AmphibiousAutoma[CognitiveContext]):
    async def on_workflow(self, ctx):
        result = yield ActionCall("search_flights", origin="Beijing", destination="Tokyo", date="2024-06-01")
        feedback = yield HumanCall(prompt="Found flights. Book CA123?")
        if feedback == "yes":
            yield ActionCall("book_flight", flight_number="CA123")

# Pure workflow mode does not need an LLM
agent = WorkflowAgent()
await agent.arun(goal="Book a flight", tools=[...])
```
### Amphiflow Mode (Workflow + Agent Fallback)

```python
class AmphiflowAgent(AmphibiousAutoma[CognitiveContext]):
    fixer = think_unit(
        CognitiveWorker.inline("Fix the current issue and complete the step"),
        max_attempts=5,
    )

    async def on_agent(self, ctx):
        await self.fixer

    async def on_workflow(self, ctx):
        yield ActionCall("login", username="admin", password="secret")
        yield ActionCall("navigate_to", url="/dashboard")
        # If any step fails, the framework falls back to on_agent() to resolve it

agent = AmphiflowAgent(llm=my_llm)
await agent.arun(
    goal="Extract dashboard data",
    tools=[...],
    mode=RunMode.AMPHIFLOW,       # or RunMode.AUTO (default, auto-detects)
    max_consecutive_fallbacks=2,  # switch to full agent mode after 2 consecutive failures
)
```
## Human-in-the-Loop
Three entry points for requesting human input during agent execution:
```python
from bridgic.amphibious import AmphibiousAutoma, CognitiveContext, ActionCall, HumanCall

class InteractiveAgent(AmphibiousAutoma[CognitiveContext]):
    planner = think_unit(
        CognitiveWorker.inline("Plan and execute. Use request_human when you need confirmation."),
        max_attempts=10,
    )

    # Entry 1: Code-level — call between think units in on_agent()
    async def on_agent(self, ctx):
        await self.planner
        feedback = await self.request_human("Task complete. Any follow-up?")

    # Entry 2: Workflow yield — pause workflow for human input
    async def on_workflow(self, ctx):
        yield ActionCall("search_flights", origin="Beijing", destination="Tokyo", date="2024-06-01")
        feedback = yield HumanCall(prompt="Book this flight?")
        if feedback == "yes":
            yield ActionCall("book_flight", flight_number="CA123")

agent = InteractiveAgent(llm=my_llm)

# Entry 3: LLM tool — `request_human` is auto-injected into every agent's tools,
# so the LLM can call it autonomously without adding it to `tools=[...]`.
await agent.arun(
    goal="Plan a trip with user preferences",
    tools=[search_tool],
)
```
Override `human_input()` to integrate with your UI (default reads from stdin):
```python
class WebAgent(AmphibiousAutoma[CognitiveContext]):
    async def human_input(self, data):
        return await my_websocket.ask(data["prompt"])
```
## Custom Context

```python
from bridgic.amphibious import CognitiveContext, CognitiveHistory
from pydantic import Field, ConfigDict

class MyContext(CognitiveContext):
    model_config = ConfigDict(arbitrary_types_allowed=True)
    current_page: str = Field(default="", description="Current page URL")
    extracted_data: dict = Field(default_factory=dict)

class MyAgent(AmphibiousAutoma[MyContext]):
    ...
```
## Execution Tracing

```python
agent = MyAgent(llm=my_llm)
await agent.arun(goal="...", tools=[...], trace_running=True)

# Access trace data
trace = agent._agent_trace.build()

# Save to file
agent._agent_trace.save("trace.json")
```
## Key Concepts
| Concept | Description |
|---|---|
| `CognitiveWorker` | Pure thinking unit — decides what to do |
| `think_unit` | Descriptor for declaring workers with execution parameters |
| `AmphibiousAutoma` | Agent orchestrator with dual execution modes |
| `on_agent()` | LLM-driven orchestration logic |
| `on_workflow()` | Deterministic workflow as async generator |
| `Exposure` | Data visibility abstraction (Entire vs. Layered) |
| `CognitiveContext` | Agent state: goal, tools, skills, history |
| Cognitive Policies | Acquiring, rehearsal, reflection — enhance thinking |
| `AgentTrace` | Structured execution trace for inspection |
| `ErrorStrategy` | RAISE, IGNORE, or RETRY on failures |
| `ActionCall` | Yield in `on_workflow()` for deterministic tool execution |
| `HumanCall` | Yield in `on_workflow()` to pause and request human input |
| `AgentCall` | Yield in `on_workflow()` to delegate to agent mode |
| `request_human_tool` | Built-in FunctionToolSpec for LLM-driven human requests |
| `request_human()` | Code-level method to request human input in `on_agent()` |
| `RunMode` | AGENT, WORKFLOW, AMPHIFLOW, or AUTO |
## License
See the repository root for license information.