
XAI Plugin for Stream Agents

This package provides xAI (Grok) integration for the Stream Agents ecosystem, letting you use xAI's language models in your conversational AI applications.

Features

  • Native xAI SDK Integration: Full access to xAI's chat completion and streaming APIs
  • Conversation Memory: Automatic conversation history management
  • Streaming Support: Real-time response streaming with standardized events
  • Multimodal Support: Handle text and image inputs
  • Event System: Subscribe to response events for custom handling
  • Easy Integration: Drop-in replacement for other LLM providers
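
The event system mentioned above follows the familiar subscribe/emit pattern. The sketch below illustrates that pattern in isolation; the class and method names here are invented for illustration and are not the plugin's actual API.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal subscribe/emit event dispatcher (illustrative only)."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event: str, handler: Callable) -> None:
        # Register a handler for a named event
        self._handlers[event].append(handler)

    def emit(self, event: str, payload) -> None:
        # Invoke every handler registered for this event, in order
        for handler in self._handlers[event]:
            handler(payload)

bus = EventBus()
chunks = []
bus.subscribe("response.delta", chunks.append)
bus.emit("response.delta", "Hello")
bus.emit("response.delta", ", world")
print("".join(chunks))  # Hello, world
```

In practice you would subscribe handlers to the plugin's response events instead of a hand-rolled bus; the registration-then-dispatch flow is the same.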

Installation

uv add "vision-agents[xai]"
# or directly
uv add vision-agents-plugins-xai

Quick Start

import asyncio
from vision_agents.plugins import xai

async def main():
    # Initialize with your xAI API key
    llm = xai.LLM(
        model="grok-4",
        api_key="your_xai_api_key"  # or set XAI_API_KEY environment variable
    )

    # Simple response
    response = await llm.simple_response("Explain quantum computing in simple terms")

    print(f"\n\nComplete response: {response.text}")

if __name__ == "__main__":
    asyncio.run(main())

Advanced Usage

Conversation with Memory

from vision_agents.plugins import xai

llm = xai.LLM(model="grok-4", api_key="your_api_key")

# First message
await llm.simple_response("My name is Alice and I have 2 cats")

# Second message - the LLM remembers the context
response = await llm.simple_response("How many pets do I have?")
print(response.text)  # Will mention the 2 cats
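
Under the hood, conversation memory of this kind is typically just an accumulated message list that is sent back to the model on every turn. A minimal, self-contained sketch of that pattern (not the plugin's actual internals; `fake_model` stands in for a real chat-completion call):

```python
def fake_model(history):
    # A real implementation would send `history` to the xAI API;
    # here we just report how many messages the model can see.
    return f"(model saw {len(history)} messages)"

history = []

def chat(user_text: str) -> str:
    # Each turn appends the user message, calls the model with the
    # full history, and appends the assistant reply.
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Alice and I have 2 cats")
print(chat("How many pets do I have?"))  # (model saw 3 messages)
```

Because the second call includes the first exchange in its history, the model has the context it needs to answer follow-up questions.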

Using Instructions

from vision_agents.plugins import xai

llm = xai.LLM(
    model="grok-4",
    api_key="your_api_key"
)

# Create a response with system instructions
response = await llm.create_response(
    input="Tell me about the weather",
    instructions="You are a helpful weather assistant. Always be cheerful and optimistic.",
    stream=True
)
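
With `stream=True`, the response arrives as incremental deltas rather than one final string. How you receive those deltas depends on the plugin's event API; the consumption pattern itself looks like this minimal asyncio sketch, where `fake_stream` stands in for a streamed model response:

```python
import asyncio

async def fake_stream(text: str):
    # Stand-in for a streamed model response: yields word-sized deltas
    for word in text.split():
        yield word + " "

async def main() -> str:
    parts = []
    async for delta in fake_stream("cheerful weather ahead"):
        parts.append(delta)  # handle each delta as it arrives
    return "".join(parts).strip()

result = asyncio.run(main())
print(result)  # cheerful weather ahead
```

The same accumulate-as-you-go loop applies whether you render deltas to a UI immediately or buffer them into the final text.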

Multimodal Input

from vision_agents.plugins import xai

# Build a multimodal message combining text and an image
advanced_message = [
    {
        "role": "user",
        "content": [
            {"type": "input_text", "text": "What do you see in this image?"},
            {"type": "input_image", "image_url": "https://example.com/image.jpg"},
        ],
    }
]

# Normalize into the plugin's internal message format.
# Note: _normalize_message is a private helper and may change between releases.
messages = xai.LLM._normalize_message(advanced_message)
# Use with your conversation system

API Reference

LLM Class

Constructor

LLM(
    model: str = "grok-4",
    api_key: Optional[str] = None,
    client: Optional[AsyncClient] = None
)

Parameters:

  • model: xAI model to use (default: "grok-4")
  • api_key: Your xAI API key (default: reads from XAI_API_KEY environment variable)
  • client: Optional pre-configured xAI AsyncClient
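
The `api_key` lookup follows the usual explicit-argument-first, environment-second precedence. A small sketch of that behavior (illustrative, not the plugin's source; `resolve_api_key` is a hypothetical helper):

```python
import os
from typing import Optional

def resolve_api_key(api_key: Optional[str] = None) -> str:
    # An explicit argument wins; otherwise fall back to the environment
    key = api_key or os.environ.get("XAI_API_KEY")
    if not key:
        raise ValueError("Set XAI_API_KEY or pass api_key explicitly")
    return key

os.environ["XAI_API_KEY"] = "env-key"
print(resolve_api_key())            # env-key
print(resolve_api_key("explicit"))  # explicit
```

This is why the constructor works with no `api_key` argument at all as long as `XAI_API_KEY` is set.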

Methods

async simple_response(text: str, processors=None, participant=None)

Generate a simple response to text input.

Parameters:

  • text: Input text to respond to
  • processors: Optional list of processors for video/voice AI context
  • participant: Optional participant object

Returns: LLMResponseEvent[Response] with the generated text

async create_response(input: str, instructions: str = "", model: Optional[str] = None, stream: bool = True)

Create a response with full control over parameters.

Parameters:

  • input: Input text
  • instructions: System instructions for the model
  • model: Override the default model
  • stream: Whether to stream the response (default: True)

Returns: LLMResponseEvent[Response] with the generated text

Configuration

Environment Variables

  • XAI_API_KEY: Your xAI API key (required if not provided in constructor)
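
For local development, the key can be exported in your shell before running the agent:

```shell
# Replace the placeholder with your real key from the xAI console
export XAI_API_KEY="your_xai_api_key"
```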

Requirements

  • Python 3.10+
  • xai-sdk
  • vision-agents-core

License

Apache-2.0
