
VM-X SDK for Python

Description

The VM-X AI SDK client for Python.

Installation

# with pip
pip install vm-x-ai-sdk

# with Poetry
poetry add vm-x-ai-sdk
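
The distribution name is vm-x-ai-sdk, but the module you import is vmxai (as in every example below). A one-line sanity check:

# The PyPI package vm-x-ai-sdk installs the "vmxai" module.
import vmxai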

Create VMXClient

from vmxai import (
    CompletionRequest,
    RequestMessage,
    VMXClient,
)

client = VMXClient(
    # Or set the VMX_DOMAIN environment variable
    domain="env-abc123.clnt.dev.vm-x.ai",
    # Or set the VMX_API_KEY environment variable
    api_key="abc123",
)

# Streaming (the default)
streaming_response = client.completion(
    request=CompletionRequest(
        resource="default",
        messages=[
            RequestMessage(
                role="user",
                content="Hey there!",
            )
        ],
    ),
)

for message in streaming_response:
    print(message.message, end="", flush=True)
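
As the comments above note, both settings can also come from the environment, in which case the client takes no arguments (the examples below rely on this). A minimal sketch using the same placeholder credentials:

import os

from vmxai import VMXClient

# Placeholder credentials from the example above, supplied via the environment.
os.environ["VMX_DOMAIN"] = "env-abc123.clnt.dev.vm-x.ai"
os.environ["VMX_API_KEY"] = "abc123"

client = VMXClient()  # reads VMX_DOMAIN and VMX_API_KEY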

Examples

Non-Streaming

from vmxai import (
    CompletionRequest,
    RequestMessage,
    VMXClient,
)

client = VMXClient()

response = client.completion(
    request=CompletionRequest(
        resource="default",
        messages=[
            RequestMessage(
                role="user",
                content="Hey there!",
            )
        ],
    ),
    stream=False,
)

print(response.message)
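
The streaming chunks in the Multi-Answer example below also expose metadata.model; assuming the non-streaming response carries the same field, you can log which model actually answered:

# Assumption: non-streaming responses expose the same metadata.model
# field that the Multi-Answer example reads from streaming chunks.
print(response.metadata.model)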

Streaming

from vmxai import (
    CompletionRequest,
    RequestMessage,
    VMXClient,
)

client = VMXClient()

streaming_response = client.completion(
    request=CompletionRequest(
        resource="default",
        messages=[
            RequestMessage(
                role="user",
                content="Hey there!",
            )
        ],
    ),
)

for message in streaming_response:
    print(message.message, end="", flush=True)
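
Each iteration yields only the next delta in message.message. If you also need the complete text once the stream ends, accumulate the chunks; a drop-in replacement for the loop above:

# Collect the streamed deltas into one string while still echoing them.
chunks = []
for message in streaming_response:
    chunks.append(message.message)
    print(message.message, end="", flush=True)

full_text = "".join(chunks)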

Tool Call

from vmxai import (
    CompletionRequest,
    RequestMessage,
    RequestMessageToolCall,
    RequestMessageToolCallFunction,
    RequestToolFunction,
    RequestTools,
    VMXClient,
)

client = VMXClient()

# Function Calling
function_response = client.completion(
    request=CompletionRequest(
        resource="default",
        messages=[
            RequestMessage(
                role="user",
                content="whats the temperature in Dallas, New York and San Diego?",
            )
        ],
        tools=[
            RequestTools(
                type="function",
                function=RequestToolFunction(
                    name="get_weather",
                    description="Lookup the temperature",
                    parameters={
                        "type": "object",
                        "properties": {"city": {"description": "City you want to get the temperature"}},
                        "required": ["city"],
                    },
                ),
            )
        ],
    ),
)

print("Function Response")
print("#" * 100)
for message in function_response:
    print(message, end="")

print("\n" * 2)

# Function calling callback: pass the tool results back to the model
function_response_callback = client.completion(
    request=CompletionRequest(
        resource="default",
        messages=[
            RequestMessage(
                role="user",
                content="whats the temperature in Dallas, New York and San Diego?",
            ),
            RequestMessage(
                role="assistant",
                tool_calls=[
                    RequestMessageToolCall(
                        id="call_NLcWB6VCdG6x9UW6xrGVTTTR",
                        type="function",
                        function=RequestMessageToolCallFunction(name="get_weather", arguments='{"city": "Dallas"}'),
                    ),
                    RequestMessageToolCall(
                        id="call_6RDTuEDsaHvWr8XjwDXx4UjX",
                        type="function",
                        function=RequestMessageToolCallFunction(name="get_weather", arguments='{"city": "New York"}'),
                    ),
                    RequestMessageToolCall(
                        id="call_NsFzeGVbAWl5bor6RrUDCvTv",
                        type="function",
                        function=RequestMessageToolCallFunction(name="get_weather", arguments='{"city": "San Diego"}'),
                    ),
                ],
            ),
            RequestMessage(
                role="tool", content="The temperature in Dallas is 81F", tool_call_id="call_NLcWB6VCdG6x9UW6xrGVTTTR"
            ),
            RequestMessage(
                role="tool", content="The temperature in New York is 78F", tool_call_id="call_6RDTuEDsaHvWr8XjwDXx4UjX"
            ),
            RequestMessage(
                role="tool", content="The temperature in San Diego is 68F", tool_call_id="call_NsFzeGVbAWl5bor6RrUDCvTv"
            ),
        ],
        tools=[
            RequestTools(
                type="function",
                function=RequestToolFunction(
                    name="get_weather",
                    description="Lookup the temperature",
                    parameters={
                        "type": "object",
                        "properties": {"city": {"description": "City you want to get the temperature"}},
                        "required": ["city"],
                    },
                ),
            )
        ],
    ),
)

print("Function Callback Response")
print("#" * 100)
for message in function_response_callback:
    print(message.message, end="")
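
The callback example above hard-codes the three tool results. In practice you would parse the tool calls out of the first response, run your own get_weather, and build the role="tool" messages from the results. A sketch, assuming the chunks from the first request expose a tool_calls list with the same shape as RequestMessageToolCall (id, function.name, and function.arguments as a JSON string); note the first response iterator can only be consumed once, so iterate it here instead of printing it:

import json

from vmxai import RequestMessage

def get_weather(city: str) -> str:
    # Hypothetical local implementation, just for this sketch.
    return f"The temperature in {city} is 75F"

tool_messages = []
for message in function_response:
    # Assumption: chunks expose tool_calls shaped like RequestMessageToolCall
    # (id, function.name, function.arguments as a JSON string).
    for call in getattr(message, "tool_calls", None) or []:
        arguments = json.loads(call.function.arguments)
        tool_messages.append(
            RequestMessage(
                role="tool",
                content=get_weather(arguments["city"]),
                tool_call_id=call.id,
            )
        )

# Append tool_messages to the conversation and send a follow-up request,
# as in the callback example above.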

Multi-Answer

import asyncio
from typing import Iterator

from blessings import Terminal
from vmxai import (
    CompletionRequest,
    CompletionResponse,
    RequestMessage,
    VMXClient,
)

term = Terminal()
client = VMXClient()


async def print_streaming_response(response: asyncio.Task[Iterator[CompletionResponse]], term_location: int):
    """
    Print a streaming response to the terminal at a specific terminal location.
    So, we can demonstrate multiple streaming responses in parallel.

    Args:
        response (asyncio.Task[Iterator[CompletionResponse]]): Streaming response task
        term_location (int): Terminal location to print the response
    """
    first = True
    with term.location(y=term_location):
        result = await response
        x = 0
        y = term_location + 3
        for message in result:
            if first:
                print("\nModel: ", message.metadata.model)
                first = False
                # Some models start with two newlines; strip them
                if message.message.startswith("\n\n"):
                    message.message = message.message[2:]

            await asyncio.sleep(0.01)
            print(term.move(y, x) + message.message)
            x += len(message.message)
            if x > term.width:
                x = 0
                y += 1


async def multi_answer():
    # Make sure the "default" resource has 3 providers configured in the VM-X Console.
    resp1, resp2, resp3 = client.completion(
        request=CompletionRequest(
            resource="default",
            messages=[
                RequestMessage(
                    role="user",
                    content="Hey there, how are you?",
                )
            ],
        ),
        multi_answer=True,
    )

    print("Multi-Answer Streaming Response")
    print("#" * 100)
    await asyncio.gather(
        *[print_streaming_response(resp1, 10), print_streaming_response(resp2, 16), print_streaming_response(resp3, 20)]
    )
    print("\n" * 7)

    resp1, resp2, resp3 = client.completion(
        request=CompletionRequest(
            resource="default",
            messages=[
                RequestMessage(
                    role="user",
                    content="Hey there, how are you?",
                )
            ],
        ),
        stream=False,
        multi_answer=True,
    )

    print("Multi-Answer Non-Streaming Response")
    print("#" * 100)

    async def _print(resp):
        result = await resp
        print(result.message, flush=True)

    await asyncio.gather(*[_print(resp1), _print(resp2), _print(resp3)])


asyncio.run(multi_answer())
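
The Terminal positioning above exists only to render three streams side by side; the core multi-answer pattern is much shorter. A stripped-down sketch of the non-streaming flow:

import asyncio

from vmxai import CompletionRequest, RequestMessage, VMXClient

client = VMXClient()

async def main():
    # multi_answer=True returns one awaitable response per provider
    # configured on the resource.
    responses = client.completion(
        request=CompletionRequest(
            resource="default",
            messages=[RequestMessage(role="user", content="Hey there!")],
        ),
        stream=False,
        multi_answer=True,
    )

    async def show(resp):
        result = await resp
        print(result.message)

    await asyncio.gather(*(show(r) for r in responses))

asyncio.run(main())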
