Huawei Cloud AgentArts SDK

Build, deploy and manage AI agents with Huawei Cloud capabilities.

Overview

Huawei Cloud AgentArts SDK is a comprehensive toolkit for developing, deploying, and managing AI agents. It provides seamless integration with Huawei Cloud services while supporting mainstream agent frameworks.

Key Features

  • Framework Agnostic - Compatible with LangChain, LangGraph, AutoGen, CrewAI, Google ADK, and any custom agent framework
  • One-Click Deployment - Deploy agents to Huawei Cloud with a single command
  • Built-in Tools - Code interpreter sandbox, memory management, MCP gateway support
  • Cloud Integration - Seamless integration with Huawei Cloud authentication, monitoring, and logging
  • CLI Toolkit - Complete command-line tools for project initialization, local development, and cloud deployment

Repository Structure

agentarts-sdk-python/
├── src/agentarts/
│   ├── sdk/                    # Core SDK modules
│   │   ├── runtime/            # HTTP server runtime (AgentArtsRuntimeApp)
│   │   ├── memory/             # Conversation memory management
│   │   ├── tools/              # Built-in tools (Code Interpreter)
│   │   ├── mcpgateway/         # MCP Gateway client
│   │   ├── identity/           # Authentication & authorization
│   │   ├── integration/        # Framework adapters (LangGraph, etc.)
│   │   └── service/            # HTTP clients for cloud services
│   └── toolkit/                # CLI toolkit
│       ├── cli/                # Command-line interface
│       ├── operations/         # CLI operation handlers
│       └── utils/templates/    # Project templates
├── docs/                       # Documentation
│   └── cn/                     # Chinese documentation
│       ├── sdk_user_guide/     # SDK usage guides
│       └── toolkit_user_guide/ # CLI usage guides
└── tests/                      # Test suites

Wrapping Your Agent as an HTTP Server

The SDK provides AgentArtsRuntimeApp to wrap your agent logic as a standard HTTP server, exposing:

  • POST /invocations - Main agent invocation endpoint
  • GET /ping - Health check endpoint
  • WS /ws - WebSocket endpoint for streaming

Example: LangGraph Agent

# agent.py
import os
from typing import Dict, Any, TypedDict, Annotated
from operator import add

from agentarts.sdk import AgentArtsRuntimeApp, RequestContext

app = AgentArtsRuntimeApp()


class State(TypedDict):
    messages: Annotated[list, add]
    query: str
    response: str


class LangGraphAgent:
    def __init__(self):
        self.model_name = os.environ.get("OPENAI_MODEL_NAME", "gpt-4o-mini")
        self._graph = None

    def _build_graph(self):
        from langgraph.graph import StateGraph, END
        from langchain_openai import ChatOpenAI
        from langchain_core.messages import HumanMessage, AIMessage

        llm = ChatOpenAI(
            model=self.model_name,
            api_key=os.environ.get("OPENAI_API_KEY"),
            base_url=os.environ.get("OPENAI_BASE_URL")
        )

        async def process_node(state: State) -> Dict[str, Any]:
            query = state.get("query", "")
            messages = state.get("messages", []) or [HumanMessage(content=query)]
            response = await llm.ainvoke(messages)
            return {
                "messages": [AIMessage(content=response.content)],
                "response": response.content,
            }

        workflow = StateGraph(State)
        workflow.add_node("process", process_node)
        workflow.set_entry_point("process")
        workflow.add_edge("process", END)
        return workflow.compile()

    async def run(self, query: str) -> Dict[str, Any]:
        # Build the graph lazily on first use, then reuse it.
        if self._graph is None:
            self._graph = self._build_graph()
        result = await self._graph.ainvoke({"messages": [], "query": query, "response": ""})
        return {"response": result.get("response", "")}


_agent = LangGraphAgent()


@app.entrypoint
async def handler(payload: Dict[str, Any], context: RequestContext | None = None) -> Dict[str, Any]:
    query = payload.get("message", "")
    return await _agent.run(query)


if __name__ == "__main__":
    app.run()
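A note on the `Annotated[list, add]` field in State: LangGraph treats the second argument of `Annotated` as a reducer, so values returned by a node are merged into the existing state rather than replacing it. For plain lists, `operator.add` is simple concatenation, which is why each node's messages are appended to the history:

```python
from operator import add

# LangGraph invokes the reducer as reducer(existing_value, node_update).
# With operator.add on lists, node updates are appended to the history.
history = ["HumanMessage"]
update = ["AIMessage"]
merged = add(history, update)
print(merged)  # ['HumanMessage', 'AIMessage']
```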

Key Points

  1. Focus on Agent Logic - You only need to implement the agent logic; the SDK handles HTTP server, request parsing, and response formatting
  2. Framework Agnostic - Works with any agent framework (LangChain, LangGraph, AutoGen, CrewAI, or custom implementations)
  3. Simple Decorator - Use @app.entrypoint to mark your handler function
  4. Context Support - Optional RequestContext parameter provides session info and request metadata
  5. Configurable Model - Model name can be configured via environment variable (e.g., OPENAI_MODEL_NAME)
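Point 1 above means the entrypoint contract is just a JSON-serializable dict in and a dict out. Stripped of the SDK and the LLM call, the handler reduces to an ordinary async function; the hypothetical echo agent below (not part of the SDK) illustrates the shape:

```python
import asyncio
from typing import Any, Dict


async def handler(payload: Dict[str, Any]) -> Dict[str, Any]:
    # Same contract as the @app.entrypoint handler above: read the
    # "message" field and return a JSON-serializable response dict.
    query = payload.get("message", "")
    return {"response": f"echo: {query}"}


result = asyncio.run(handler({"message": "hi"}))
print(result)  # {'response': 'echo: hi'}
```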

Installation

Requirements

  • Python 3.10 or higher
  • pip or uv package manager

Create Virtual Environment (Recommended)

It is recommended to install the SDK in a virtual environment to avoid dependency conflicts.

Windows:

# Create virtual environment
python -m venv venv

# Activate virtual environment
.\venv\Scripts\Activate.ps1

# Or using Command Prompt
.\venv\Scripts\activate.bat

Linux/macOS:

# Create virtual environment
python -m venv venv

# Activate virtual environment
source venv/bin/activate

Install via pip

The command is the same on Windows, Linux, and macOS:

pip install agentarts-sdk

Install with Optional Dependencies

# With LangChain support
pip install agentarts-sdk[langchain]

# With LangGraph support
pip install agentarts-sdk[langgraph]

# With all optional dependencies
pip install agentarts-sdk[all]

Install from Source

Windows:

git clone https://github.com/huaweicloud/agentarts-sdk-python.git
cd agentarts-sdk-python

# Create and activate virtual environment
python -m venv venv
.\venv\Scripts\Activate.ps1

# Install in development mode
pip install -e ".[dev]"

Linux/macOS:

git clone https://github.com/huaweicloud/agentarts-sdk-python.git
cd agentarts-sdk-python

# Create and activate virtual environment
python -m venv venv
source venv/bin/activate

# Install in development mode
pip install -e ".[dev]"

Configure Huawei Cloud Credentials

Set environment variables for Huawei Cloud authentication:

Windows (PowerShell):

$env:HUAWEICLOUD_SDK_AK = "your-access-key"
$env:HUAWEICLOUD_SDK_SK = "your-secret-key"

Windows (Command Prompt):

set HUAWEICLOUD_SDK_AK=your-access-key
set HUAWEICLOUD_SDK_SK=your-secret-key

Linux/macOS:

export HUAWEICLOUD_SDK_AK="your-access-key"
export HUAWEICLOUD_SDK_SK="your-secret-key"

Note: Get your AK/SK from Huawei Cloud Console -> My Credentials -> Access Keys.

For complete environment variable configuration, see Environment Variables Guide.
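Before deploying, it can help to verify that both credentials are actually exported. A minimal, hypothetical pre-flight check (using only the variable names above):

```python
import os

# Fail fast if the Huawei Cloud credentials are not set in the environment.
REQUIRED = ("HUAWEICLOUD_SDK_AK", "HUAWEICLOUD_SDK_SK")
missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    print("Missing credentials:", ", ".join(missing))
else:
    print("Credentials configured.")
```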

Quick Start

1. Initialize a New Project

# Create a new agent project with LangGraph template
agentarts init -n my_agent -t langgraph

# Available templates: basic, langchain, langgraph, google-adk

This creates:

my_agent/
├── agent.py              # Agent implementation
├── requirements.txt      # Python dependencies
├── .agentarts_config.yaml # Project configuration
└── Dockerfile            # Docker build file

2. Configure Environment

Edit .agentarts_config.yaml to set environment variables:

runtime:
  environment_variables:
    - key: OPENAI_API_KEY
      value: "your-openai-api-key"
    - key: OPENAI_MODEL_NAME
      value: "gpt-4o-mini"  # Optional: gpt-4o, gpt-4-turbo, etc.
    - key: OPENAI_BASE_URL
      value: ""  # Optional: custom API endpoint

3. Local Development

# Start local development server
agentarts dev

# Server runs at http://127.0.0.1:8080
# Endpoints:
#   POST /invocations - Invoke agent
#   GET  /ping        - Health check
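With the dev server running, the /invocations endpoint can be exercised from any HTTP client. A minimal stdlib sketch; the `{"message": ...}` payload shape follows the example handler earlier in this README, and the helper names are illustrative:

```python
import json
import urllib.request


def build_invocation(message: str) -> bytes:
    # Payload shape expected by the example entrypoint: {"message": "..."}
    return json.dumps({"message": message}).encode("utf-8")


def invoke(base_url: str, message: str) -> dict:
    req = urllib.request.Request(
        base_url + "/invocations",
        data=build_invocation(message),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# With the dev server up:
# invoke("http://127.0.0.1:8080", "Hello, AgentArts!")
```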

4. Deploy to Huawei Cloud

# Configure region
agentarts config set region cn-southwest-2

# Deploy to cloud
agentarts deploy

# Check deployment status
agentarts status

# Invoke deployed agent
agentarts invoke '{"message": "Hello, AgentArts!"}'

# Destroy deployment
agentarts destroy

CLI Commands Reference

Command                Description
agentarts init         Initialize a new agent project
agentarts dev          Start local development server
agentarts config       Configure SDK settings (alias: configure)
agentarts deploy       Deploy agent to Huawei Cloud (alias: launch)
agentarts invoke       Invoke deployed agent
agentarts status       Check deployment status
agentarts destroy      Remove deployed agent
agentarts mcp-gateway  Manage MCP gateways

Limitations & Requirements

Python Version

  • Minimum: Python 3.10
  • Recommended: Python 3.10 or 3.11

Framework Versions

When using optional framework dependencies, ensure the following minimum versions:

Framework       Minimum Version  Install Command
LangGraph       1.0.0            pip install agentarts-sdk[langgraph]
LangChain       0.1.0            pip install agentarts-sdk[langchain]
langchain-core  0.1.0            Included with langgraph/langchain

Note: LangGraph 1.0+ introduces a new Checkpoint format with required fields (step, pending_sends, parents). The SDK's integration module is compatible with LangGraph 1.0 and above.

Docker

Docker is required for building and deploying agents with agentarts deploy (alias: launch). Install Docker before running the deploy command.

Resource Quotas

Refer to Huawei Cloud AgentArts Documentation for resource quotas and limits.

Development

# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Code formatting
black . && isort .

# Type checking
mypy agentarts

# Linting
ruff check .

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for details.
