A tool for AI workflows based on human-computer collaboration and structured output.
Goose
Goose is a framework for building LLM-based agents and workflows with strong typing and state management. Its core capabilities:
- Structured LLM interactions - Organize model calls with typed inputs/outputs
- Task orchestration - Create reusable tasks that can be composed into flows
- Stateful conversations - Maintain conversation history and model outputs
- Result caching - Avoid redundant computation based on input hashing
- Iterative refinement - Enhance results through progressive feedback loops
- Result validation - Ensure model outputs conform to expected schemas
- Run persistence - Save and reload workflow executions
- Custom logging - Track telemetry and performance metrics
It enables building reliable, maintainable AI applications with proper error handling, state tracking, and flow control while ensuring type safety throughout.
Key Features
Structured LLM Interactions
Organize model calls with typed inputs and outputs using Pydantic models. This ensures that responses from language models conform to expected structures.
graph LR
A[User Input] --> B[Agent]
B --> C[LLM Model]
C --> D[Structured Response]
D --> E[Validated Result]
E --> F[Application Logic]
classDef user fill:#f9f,stroke:#333,stroke-width:2px
classDef llm fill:#bbf,stroke:#333,stroke-width:2px
classDef validation fill:#bfb,stroke:#333,stroke-width:2px
class A user
class C llm
class D,E validation
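The pattern above can be sketched with plain Pydantic. The schema and field names here are illustrative, not Goose's actual API; the point is that a raw LLM response is parsed into a typed, validated object rather than a loose dict.

```python
from pydantic import BaseModel

# Illustrative response schema for a structured LLM call.
class PersonSummary(BaseModel):
    name: str
    age: int
    occupation: str

# A raw LLM response typically arrives as JSON text...
raw_response = '{"name": "Ada Lovelace", "age": 36, "occupation": "Mathematician"}'

# ...and is parsed into a typed object; bad payloads fail here, loudly.
summary = PersonSummary.model_validate_json(raw_response)
print(summary.name)     # typed attribute access instead of dict lookups
print(summary.age + 1)  # fields carry real Python types
```

Downstream application logic can then rely on `summary` having exactly these fields and types.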
Task Orchestration
Create reusable tasks that can be composed into flows. Tasks are decorated functions that handle specific operations, while flows coordinate multiple tasks.
graph TD
A[Flow] --> B[Task 1]
A --> C[Task 2]
A --> D[Task 3]
B --> E[Result 1]
C --> F[Result 2]
D --> G[Result 3]
E --> H[Flow Output]
F --> H
G --> H
classDef flow fill:#f9f,stroke:#333,stroke-width:2px
classDef task fill:#bbf,stroke:#333,stroke-width:2px
classDef result fill:#bfb,stroke:#333,stroke-width:2px
class A flow
class B,C,D task
class E,F,G,H result
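A minimal sketch of the task/flow split, using a hypothetical `task` decorator and `flow` function (not Goose's actual API): tasks wrap individual operations, and the flow threads their results together.

```python
from functools import wraps

# Hypothetical @task decorator: wraps a unit of work so the framework
# can observe its execution and result.
def task(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        print(f"task {fn.__name__} -> {result!r}")
        return result
    return wrapper

@task
def extract_keywords(text: str) -> list[str]:
    # Stand-in for an LLM-backed extraction step.
    return [w for w in text.split() if len(w) > 4]

@task
def summarize(keywords: list[str]) -> str:
    # Stand-in for an LLM-backed summarization step.
    return ", ".join(keywords)

def flow(text: str) -> str:
    # The flow coordinates tasks and passes results between them.
    return summarize(extract_keywords(text))

output = flow("goose orchestrates typed language model workflows")
```

Because each task is a plain decorated function, the same task can be reused across different flows.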
Stateful Conversations
Maintain conversation history and model outputs across multiple interactions. The framework tracks the state of each task in a flow.
sequenceDiagram
participant User
participant Flow
participant Task
participant Agent
participant LLM
User->>Flow: Start Conversation
Flow->>Task: Execute
Task->>Agent: Generate Response
Agent->>LLM: Send Messages
LLM-->>Agent: Generate Response
Agent-->>Task: Store Result
Task-->>Flow: Update State
Flow-->>User: Return Result
User->>Flow: Follow-up Question
Flow->>Task: Get State
Task->>Agent: Send Previous Context + New Question
Agent->>LLM: Send Updated Messages
LLM-->>Agent: Generate Response
Agent-->>Task: Update Conversation
Task-->>Flow: Update State
Flow-->>User: Return Result
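The state-tracking idea in the sequence above can be sketched with a small conversation container; the class names are illustrative, not Goose's own types.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str      # "user" or "assistant"
    content: str

@dataclass
class ConversationState:
    messages: list[Message] = field(default_factory=list)

    def add_user(self, text: str) -> None:
        self.messages.append(Message("user", text))

    def add_assistant(self, text: str) -> None:
        self.messages.append(Message("assistant", text))

    def context(self) -> list[dict]:
        # Previous turns are replayed on every follow-up call,
        # so the model always sees the full history.
        return [{"role": m.role, "content": m.content} for m in self.messages]

state = ConversationState()
state.add_user("Summarize this document.")
state.add_assistant("Here is a summary...")
state.add_user("Now make it shorter.")  # follow-up carries prior context
```

The framework persists this state per task, so a follow-up question is just "previous context + new question" sent back to the model.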
Result Caching
Avoid redundant computation by caching results based on input hashing. The framework automatically detects when inputs change and only regenerates results when necessary.
flowchart TD
A[Task Call] --> B{Inputs Changed?}
B -- Yes --> C[Execute Task]
B -- No --> D[Return Cached Result]
C --> E[Cache Result]
E --> F[Return Result]
D --> F
classDef decision fill:#f9f,stroke:#333,stroke-width:2px
classDef action fill:#bbf,stroke:#333,stroke-width:2px
classDef cache fill:#bfb,stroke:#333,stroke-width:2px
class B decision
class A,C,F action
class D,E cache
Iterative Refinement
Enhance results through progressive feedback loops. The framework supports asking follow-up questions about results and refining them based on feedback.
sequenceDiagram
participant User
participant Task
participant Agent
participant LLM
User->>Task: Generate Initial Result
Task->>Agent: Send Request
Agent->>LLM: Generate Structured Output
LLM-->>Agent: Return Output
Agent-->>Task: Store Result
Task-->>User: Return Result
User->>Task: Request Refinement
Task->>Agent: Send Feedback + Original Result
Agent->>LLM: Generate Find/Replace Operations
LLM-->>Agent: Return Changes
Agent-->>Task: Apply Changes to Result
Task-->>User: Return Refined Result
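The "apply changes to result" step above can be sketched as structured find/replace operations; the `Replacement` type is illustrative, not Goose's schema.

```python
from dataclasses import dataclass

@dataclass
class Replacement:
    find: str
    replace: str

def apply_refinements(result: str, ops: list[Replacement]) -> str:
    # Rather than regenerating the whole result, the model proposes
    # targeted edits that are applied to the stored text.
    for op in ops:
        result = result.replace(op.find, op.replace)
    return result

draft = "The capital of France is Lyon."
# In practice, these operations would come back from the LLM as structured output.
ops = [Replacement(find="Lyon", replace="Paris")]
refined = apply_refinements(draft, ops)
```

Returning small edit operations instead of a full rewrite keeps refinements cheap and preserves the parts of the result the user already approved.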
Result Validation
Ensure model outputs conform to expected schemas using Pydantic validation. Responses that fail validation raise an error immediately instead of propagating bad data downstream.
flowchart LR
A[LLM Response] --> B[Parse JSON]
B --> C{Valid Schema?}
C -- Yes --> D[Return Validated Result]
C -- No --> E[Raise Error]
classDef input fill:#bbf,stroke:#333,stroke-width:2px
classDef validation fill:#f9f,stroke:#333,stroke-width:2px
classDef output fill:#bfb,stroke:#333,stroke-width:2px
classDef error fill:#fbb,stroke:#333,stroke-width:2px
class A input
class B,C validation
class D output
class E error
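Both branches of the flowchart can be shown with plain Pydantic; the schema is illustrative.

```python
from pydantic import BaseModel, ValidationError

class SearchResult(BaseModel):
    title: str
    score: float

# Valid path: JSON parses and matches the schema.
valid = SearchResult.model_validate_json('{"title": "Goose docs", "score": 0.92}')

# Error path: a malformed response fails fast with a descriptive error.
try:
    SearchResult.model_validate_json('{"title": "Goose docs", "score": "high"}')
    failed = False
except ValidationError:
    failed = True
```

The `ValidationError` carries per-field details, which is also useful material to feed back to the model for a retry.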
Run Persistence
Save and reload workflow executions. The framework provides interfaces for storing flow runs, allowing for resuming work or reviewing past executions.
graph TD
A[Start Flow] --> B[Create Flow Run]
B --> C[Execute Tasks]
C --> D[Save Run State]
D --> E[End Flow]
F[Later Time] --> G[Load Saved Run]
G --> H[Resume Execution]
H --> D
classDef flow fill:#f9f,stroke:#333,stroke-width:2px
classDef execution fill:#bbf,stroke:#333,stroke-width:2px
classDef storage fill:#bfb,stroke:#333,stroke-width:2px
class A,E,F flow
class B,C,H execution
class D,G storage
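Save-and-reload can be sketched as JSON snapshots keyed by run id; the file layout and field names here are illustrative, not Goose's storage format.

```python
import json
import tempfile
from pathlib import Path

def save_run(run_id: str, state: dict, directory: Path) -> Path:
    # Persist the run state so it can be resumed or reviewed later.
    path = directory / f"{run_id}.json"
    path.write_text(json.dumps(state))
    return path

def load_run(run_id: str, directory: Path) -> dict:
    # Reload a previously saved run.
    return json.loads((directory / f"{run_id}.json").read_text())

workdir = Path(tempfile.mkdtemp())
state = {"tasks_done": ["extract", "summarize"], "results": {"summary": "..."}}
save_run("run-001", state, workdir)
restored = load_run("run-001", workdir)  # resume execution from here later
```

Because the state is plain serialized data, a saved run can be inspected or replayed on a different machine.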
Custom Logging
Track telemetry and performance metrics. The framework supports custom loggers to record model usage, token counts, and execution time.
flowchart TD
A[Agent Call] --> B[Execute LLM Request]
B --> C[Record Metrics]
C --> D{Custom Logger?}
D -- Yes --> E[Send to Custom Logger]
D -- No --> F[Log to Default Logger]
E --> G[Return Result]
F --> G
classDef action fill:#bbf,stroke:#333,stroke-width:2px
classDef logging fill:#bfb,stroke:#333,stroke-width:2px
classDef decision fill:#f9f,stroke:#333,stroke-width:2px
class A,B,G action
class C,E,F logging
class D decision
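The custom-logger branch can be sketched as a hook that receives per-call metrics; the metric fields and hook signature are illustrative, not Goose's actual interface.

```python
import time
from dataclasses import dataclass

@dataclass
class CallMetrics:
    model: str
    input_tokens: int
    output_tokens: int
    duration_s: float

recorded: list[CallMetrics] = []

def custom_logger(metrics: CallMetrics) -> None:
    recorded.append(metrics)  # could ship to any telemetry backend instead

def agent_call(prompt: str, logger=custom_logger) -> str:
    start = time.perf_counter()
    response = f"response to {prompt!r}"  # stand-in for a real LLM call
    # Record metrics for every call, whichever logger is plugged in.
    logger(CallMetrics(
        model="stub-model",
        input_tokens=len(prompt.split()),
        output_tokens=len(response.split()),
        duration_s=time.perf_counter() - start,
    ))
    return response

agent_call("count my tokens")
```

Passing the logger as a parameter is what makes the "Custom Logger?" decision in the flowchart a one-line swap.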
Building with Goose
Goose's typed, stateful approach, combined with proper error handling and flow control, reduces common issues in LLM applications such as:
- Type inconsistencies in model responses
- Loss of context between interactions
- Redundant LLM calls for identical inputs
- Difficulty in resuming interrupted workflows
- Lack of structured error handling
Start building more robust LLM applications with Goose's typed, stateful approach to agent development.
Installation and Package Management
Goose uses uv for package management. Never use pip with this project.
# Add a dependency
uv add <package-name>
# Install and sync the environment from the lockfile
uv sync
# Run a command in the project environment
uv run <command>
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file goose_py-0.12.0.tar.gz.
File metadata
- Download URL: goose_py-0.12.0.tar.gz
- Upload date:
- Size: 96.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.5.25
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c148f39f911c7eee8dffe76b7dcd62026df6109b2777f9bb4a405b7cadfe0b80 |
| MD5 | dcac9256df502f76c8a8588db333f523 |
| BLAKE2b-256 | 33b452c70efe4ffc523c10269337fdedc37582b6dce4d4b9e8ef8780119a42cb |
File details
Details for the file goose_py-0.12.0-py3-none-any.whl.
File metadata
- Download URL: goose_py-0.12.0-py3-none-any.whl
- Upload date:
- Size: 18.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.5.25
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 58584813abb73c63f12720a11fd548907a96761b868ed501fc402220089d37e8 |
| MD5 | baa46ee3efd5c383c32c5a7d41bb81e2 |
| BLAKE2b-256 | e6520c31b533dc6e95e5f90f6f2caefc5b11aaced8f6842f8077eac6b3563652 |