
A production-ready runtime framework for agent applications, providing secure sandboxed execution environments and scalable deployment solutions with multi-framework support.


AgentScope Runtime


[Cookbook] [Chinese README]

A Production-Ready Runtime Framework for Intelligent Agent Applications

AgentScope Runtime tackles two critical challenges in agent development: secure sandboxed tool execution and scalable agent deployment. Built with a dual-core architecture, it provides framework-agnostic infrastructure for deploying agents with full observability and safe tool interactions.


✨ Key Features

  • 🏗️ Deployment Infrastructure: Built-in services for session management, memory, and sandbox environment control

  • 🔒 Sandboxed Tool Execution: Isolated sandboxes ensure safe tool execution without compromising the host system

  • 🔧 Framework Agnostic: Not tied to any specific framework; works seamlessly with popular open-source agent frameworks and custom implementations

  • ⚡ Developer Friendly: Simple deployment with powerful customization options

  • 📊 Observability: Comprehensive tracing and monitoring for runtime operations


💬 Contact

You are welcome to join our community on Discord or DingTalk.


🚀 Quick Start

Prerequisites

  • Python 3.10 or higher
  • pip or uv package manager
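Before installing, you can confirm the interpreter meets the Python 3.10 requirement with a small stdlib check (the helper name below is ours, not part of the package):

```python
# Check that the current interpreter satisfies the Python 3.10+ requirement.
import sys

MIN_VERSION = (3, 10)


def meets_requirement(version_info=sys.version_info) -> bool:
    # Compare only (major, minor); patch level does not matter here.
    return tuple(version_info[:2]) >= MIN_VERSION


print(meets_requirement())
```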

Installation

From PyPI:

# Install core dependencies
pip install agentscope-runtime

# Install sandbox dependencies
pip install "agentscope-runtime[sandbox]"

(Optional) From source:

# Pull the source code from GitHub
git clone -b main https://github.com/agentscope-ai/agentscope-runtime.git
cd agentscope-runtime

# Install core dependencies
pip install -e .

# Install sandbox dependencies
pip install -e ".[sandbox]"
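After either installation route, a quick stdlib sanity check confirms the package is visible to your environment (this reads distribution metadata only and does not import the package):

```python
# Report the installed version of agentscope-runtime, or a friendly
# message if the distribution is not installed in this environment.
from importlib.metadata import PackageNotFoundError, version


def runtime_version() -> str:
    try:
        return version("agentscope-runtime")
    except PackageNotFoundError:
        return "agentscope-runtime is not installed"


print(runtime_version())
```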

Basic Agent Usage Example

This example demonstrates how to create a simple LLM agent using AgentScope Runtime and stream responses from the Qwen model.

import asyncio
import os
from agentscope_runtime.engine import Runner
from agentscope_runtime.engine.agents.llm_agent import LLMAgent
from agentscope_runtime.engine.llms import QwenLLM
from agentscope_runtime.engine.schemas.agent_schemas import AgentRequest
from agentscope_runtime.engine.services.context_manager import ContextManager


async def main():
    # Set up the language model and agent
    model = QwenLLM(
        model_name="qwen-turbo",
        api_key=os.getenv("DASHSCOPE_API_KEY"),
    )
    llm_agent = LLMAgent(model=model, name="llm_agent")

    async with ContextManager() as context_manager:
        runner = Runner(agent=llm_agent, context_manager=context_manager)

        # Create a request and stream the response
        request = AgentRequest(
            input=[
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "text",
                            "text": "What is the capital of France?",
                        },
                    ],
                },
            ],
        )

        async for message in runner.stream_query(request=request):
            if hasattr(message, "text"):
                print(f"Streamed Answer: {message.text}")


asyncio.run(main())

Basic Sandbox Usage Example

This example demonstrates how to create a sandbox and execute tools within it.

from agentscope_runtime.sandbox import BaseSandbox

with BaseSandbox() as box:
    print(box.run_ipython_cell(code="print('hi')"))
    print(box.run_shell_command(command="echo hello"))

[!NOTE]

The current version requires Docker or Kubernetes to be installed and running on your system. Please refer to this tutorial for more details.


📚 Cookbook


🔌 Agent Framework Integration

AgentScope Integration

# pip install "agentscope-runtime[agentscope]"
import os

from agentscope.agent import ReActAgent
from agentscope.model import OpenAIChatModel
from agentscope_runtime.engine.agents.agentscope_agent import AgentScopeAgent

agent = AgentScopeAgent(
    name="Friday",
    model=OpenAIChatModel(
        "gpt-4",
        api_key=os.getenv("OPENAI_API_KEY"),
    ),
    agent_config={
        "sys_prompt": "You're a helpful assistant named {name}.",
    },
    agent_builder=ReActAgent,
)

Agno Integration

# pip install "agentscope-runtime[agno]"
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agentscope_runtime.engine.agents.agno_agent import AgnoAgent

agent = AgnoAgent(
    name="Friday",
    model=OpenAIChat(
        id="gpt-4",
    ),
    agent_config={
        "instructions": "You're a helpful assistant.",
    },
    agent_builder=Agent,
)

AutoGen Integration

# pip install "agentscope-runtime[autogen]"
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from agentscope_runtime.engine.agents.autogen_agent import AutogenAgent

agent = AutogenAgent(
    name="Friday",
    model=OpenAIChatCompletionClient(
        model="gpt-4",
    ),
    agent_config={
        "system_message": "You're a helpful assistant",
    },
    agent_builder=AssistantAgent,
)

LangGraph Integration

# pip install "agentscope-runtime[langgraph]"
from typing import TypedDict
from langgraph import graph, types
from agentscope_runtime.engine.agents.langgraph_agent import LangGraphAgent


# define the state
class State(TypedDict, total=False):
    id: str


# define the node functions
async def set_id(state: State):
    new_id = state.get("id")
    assert new_id is not None, "must set ID"
    return types.Command(update=State(id=new_id), goto="REVERSE_ID")


async def reverse_id(state: State):
    new_id = state.get("id")
    assert new_id is not None, "ID must be set before reversing"
    return types.Command(update=State(id=new_id[::-1]))


state_graph = graph.StateGraph(state_schema=State)
state_graph.add_node("SET_ID", set_id)
state_graph.add_node("REVERSE_ID", reverse_id)
state_graph.set_entry_point("SET_ID")
compiled_graph = state_graph.compile(name="ID Reversal")
agent = LangGraphAgent(graph=compiled_graph)

[!NOTE]

More agent framework integrations are coming soon!


๐Ÿ—๏ธ Deployment

The agent runner exposes a deploy method that takes a DeployManager instance and deploys the agent. The service port is set via the port parameter when creating the LocalDeployManager, and the service endpoint path via the endpoint_path parameter when deploying the agent. In this example we set the endpoint path to /process, so after deployment the service is available at http://localhost:8090/process.

from agentscope_runtime.engine.deployers import LocalDeployManager

# Create deployment manager
deploy_manager = LocalDeployManager(
    host="localhost",
    port=8090,
)

# Deploy the agent as a streaming service
deploy_result = await runner.deploy(
    deploy_manager=deploy_manager,
    endpoint_path="/process",
    stream=True,  # Enable streaming responses
)
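Once deployed, the service accepts JSON requests at the endpoint above. The sketch below builds such a payload by mirroring the AgentRequest shape from the quick-start example; the exact wire format is an assumption to verify against the docs for your version, and the helper name is ours:

```python
import json


def build_payload(text: str) -> str:
    # Mirrors the AgentRequest shape used in the quick-start example;
    # treat the exact wire format as an assumption, not a guarantee.
    request = {
        "input": [
            {
                "role": "user",
                "content": [{"type": "text", "text": text}],
            },
        ],
    }
    return json.dumps(request)


payload = build_payload("What is the capital of France?")
print(payload)
# The payload could then be POSTed to http://localhost:8090/process,
# for example with urllib.request or curl.
```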

๐Ÿค Contributing

We welcome contributions from the community! Here's how you can help:

๐Ÿ› Bug Reports

  • Use GitHub Issues to report bugs
  • Include detailed reproduction steps
  • Provide system information and logs

💡 Feature Requests

  • Discuss new ideas in GitHub Discussions
  • Follow the feature request template
  • Consider implementation feasibility

🔧 Code Contributions

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

For detailed contributing guidelines, please see CONTRIBUTE.


📄 License

AgentScope Runtime is released under the Apache License 2.0.

Copyright 2025 Tongyi Lab

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Contributors ✨


Thanks goes to these wonderful people (emoji key):

  • Weirui Kuang: 💻 👀 🚧 📆
  • Bruce Luo: 💻 👀 💡
  • Zhicheng Zhang: 💻 👀 📖
  • ericczq: 💻 📖
  • qbc: 👀
  • Ran Chen: 💻
  • jinliyl: 💻 📖
  • Osier-Yi: 💻 📖
  • Kevin Lin: 💻

This project follows the all-contributors specification. Contributions of any kind welcome!

Download files

Source Distribution

  • agentscope_runtime-0.1.5b1.tar.gz (208.2 kB, source)

Built Distribution

  • agentscope_runtime-0.1.5b1-py3-none-any.whl (265.8 kB, Python 3 wheel)

File details

Details for the file agentscope_runtime-0.1.5b1.tar.gz.

File metadata

  • Download URL: agentscope_runtime-0.1.5b1.tar.gz
  • Upload date:
  • Size: 208.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes for agentscope_runtime-0.1.5b1.tar.gz

  • SHA256: 1782b90f11b63e71d0e0390715f6f241ac2b1f2cf3be7d480330f9174787f81f
  • MD5: bfae741d899f184084853aac0f3d2c81
  • BLAKE2b-256: a7d4342f71c2e492f37a0af00bd2293ae9e158d00f30731b42c6784a2676136f

File details

Details for the file agentscope_runtime-0.1.5b1-py3-none-any.whl.

File hashes for agentscope_runtime-0.1.5b1-py3-none-any.whl

  • SHA256: 736162012ac81130abb732a8c3555ccae43705918e6cc90b1f2365a5fde666cf
  • MD5: 63fb3a14a369707f0c86057bcb0cdc76
  • BLAKE2b-256: d9dc68c74da1c8f0512bb48eda418b4945e238b2ebb2d3e1792f8a90c7726eb0
