
VM-X SDK for Python

Description

VM-X AI SDK client for Python

Installation

# with pip
pip install vm-x-ai-sdk

# or with Poetry
poetry add vm-x-ai-sdk

Create VMXClient

from vmxai import (
    CompletionRequest,
    RequestMessage,
    VMXClient,
)

client = VMXClient(
    # Environment domain (or set the VMX_DOMAIN env variable)
    domain="env-abc123.clnt.dev.vm-x.ai",
    # API key (or set the VMX_API_KEY env variable)
    api_key="abc123",
)

# Streaming (the default)
streaming_response = client.completion(
    request=CompletionRequest(
        resource="default",
        messages=[
            RequestMessage(
                role="user",
                content="Hey there!",
            )
        ],
    ),
)

for message in streaming_response:
    print(message.message, end="", flush=True)
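Each streamed item carries only an incremental piece of the reply in its `message` field, so the full text is the concatenation of all chunks. A minimal sketch of collecting them, using a stand-in `Chunk` class in place of the SDK's response objects:

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    # Stand-in for the SDK's streaming response item; only the
    # `message` field used in the loop above is modeled here.
    message: str


def collect(stream) -> str:
    """Join the incremental `message` pieces into the full reply."""
    return "".join(chunk.message for chunk in stream)


full = collect([Chunk("Hello"), Chunk(", "), Chunk("world!")])
print(full)  # Hello, world!
```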

Examples

Non-Streaming

from vmxai import (
    CompletionRequest,
    RequestMessage,
    VMXClient,
)

client = VMXClient()

response = client.completion(
    request=CompletionRequest(
        resource="default",
        messages=[
            RequestMessage(
                role="user",
                content="Hey there!",
            )
        ],
    ),
    stream=False,
)

print(response.message)

Streaming

from vmxai import (
    CompletionRequest,
    RequestMessage,
    VMXClient,
)

client = VMXClient()

streaming_response = client.completion(
    request=CompletionRequest(
        resource="default",
        messages=[
            RequestMessage(
                role="user",
                content="Hey there!",
            )
        ],
    ),
)

for message in streaming_response:
    print(message.message, end="", flush=True)

Tool Call

from vmxai import (
    CompletionRequest,
    RequestMessage,
    RequestMessageToolCall,
    RequestMessageToolCallFunction,
    RequestToolFunction,
    RequestTools,
    VMXClient,
)

client = VMXClient()

# Function Calling
function_response = client.completion(
    request=CompletionRequest(
        resource="default",
        messages=[
            RequestMessage(
                role="user",
                content="What's the temperature in Dallas, New York and San Diego?",
            )
        ],
        tools=[
            RequestTools(
                type="function",
                function=RequestToolFunction(
                    name="get_weather",
                    description="Look up the temperature in a given city",
                    parameters={
                        "type": "object",
                        "properties": {"city": {"type": "string", "description": "City to look up the temperature for"}},
                        "required": ["city"],
                    },
                ),
            )
        ],
    ),
)

print("Function Response")
print("#" * 100)
for message in function_response:
    print(message, end="")

print("\n" * 2)
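When the model answers with tool calls, the usual next step is to run each named function locally and send the results back as `role="tool"` messages keyed by `tool_call_id`, which the callback example below does with hard-coded results. A minimal dispatch sketch, independent of the SDK (the `get_weather` implementation, temperatures, and call id are hypothetical):

```python
import json


def get_weather(city: str) -> str:
    # Hypothetical local implementation; replace with a real lookup.
    temps = {"Dallas": "81F", "New York": "78F", "San Diego": "68F"}
    return f"The temperature in {city} is {temps.get(city, 'unknown')}"


# Registry mapping tool names to local handlers.
handlers = {"get_weather": get_weather}

# Stand-ins for the tool calls a model might return (id is illustrative).
tool_calls = [
    {"id": "call_1", "name": "get_weather", "arguments": '{"city": "Dallas"}'},
]

# Build the role="tool" follow-up messages from the local results.
tool_messages = []
for call in tool_calls:
    args = json.loads(call["arguments"])
    content = handlers[call["name"]](**args)
    tool_messages.append(
        {"role": "tool", "content": content, "tool_call_id": call["id"]}
    )

print(tool_messages[0]["content"])  # The temperature in Dallas is 81F
```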

# Function Calling Callback
function_response_callback = client.completion(
    request=CompletionRequest(
        resource="default",
        messages=[
            RequestMessage(
                role="user",
                content="What's the temperature in Dallas, New York and San Diego?",
            ),
            RequestMessage(
                role="assistant",
                tool_calls=[
                    RequestMessageToolCall(
                        id="call_NLcWB6VCdG6x9UW6xrGVTTTR",
                        type="function",
                        function=RequestMessageToolCallFunction(name="get_weather", arguments='{"city": "Dallas"}'),
                    ),
                    RequestMessageToolCall(
                        id="call_6RDTuEDsaHvWr8XjwDXx4UjX",
                        type="function",
                        function=RequestMessageToolCallFunction(name="get_weather", arguments='{"city": "New York"}'),
                    ),
                    RequestMessageToolCall(
                        id="call_NsFzeGVbAWl5bor6RrUDCvTv",
                        type="function",
                        function=RequestMessageToolCallFunction(name="get_weather", arguments='{"city": "San Diego"}'),
                    ),
                ],
            ),
            RequestMessage(
                role="tool", content="The temperature in Dallas is 81F", tool_call_id="call_NLcWB6VCdG6x9UW6xrGVTTTR"
            ),
            RequestMessage(
                role="tool", content="The temperature in New York is 78F", tool_call_id="call_6RDTuEDsaHvWr8XjwDXx4UjX"
            ),
            RequestMessage(
                role="tool", content="The temperature in San Diego is 68F", tool_call_id="call_NsFzeGVbAWl5bor6RrUDCvTv"
            ),
        ],
        tools=[
            RequestTools(
                type="function",
                function=RequestToolFunction(
                    name="get_weather",
                    description="Look up the temperature in a given city",
                    parameters={
                        "type": "object",
                        "properties": {"city": {"type": "string", "description": "City to look up the temperature for"}},
                        "required": ["city"],
                    },
                ),
            )
        ],
    ),
)

print("Function Callback Response")
print("#" * 100)
for message in function_response_callback:
    print(message.message, end="")

Multi-Answer

import asyncio
from typing import Iterator

from blessings import Terminal
from vmxai import (
    CompletionRequest,
    CompletionResponse,
    RequestMessage,
    VMXClient,
)

term = Terminal()
client = VMXClient()


async def print_streaming_response(response: asyncio.Task[Iterator[CompletionResponse]], term_location: int):
    """
    Print a streaming response to the terminal at a specific location,
    so that multiple streaming responses can be shown in parallel.

    Args:
        response (asyncio.Task[Iterator[CompletionResponse]]): Streaming response task
        term_location (int): Terminal location to print the response
    """
    first = True
    with term.location(y=term_location):
        result = await response
        x = 0
        y = term_location + 3
        for message in result:
            if first:
                print("\nModel: ", message.metadata.model)
                first = False
                # Some models start with two newlines; strip them
                if message.message.startswith("\n\n"):
                    message.message = message.message[2:]

            await asyncio.sleep(0.01)
            print(term.move(y, x) + message.message)
            x += len(message.message)
            if x > term.width:
                x = 0
                y += 1


async def multi_answer():
    # Please make sure that the "default" resource has 3 providers configured in the VM-X Console.
    resp1, resp2, resp3 = client.completion(
        request=CompletionRequest(
            resource="default",
            messages=[
                RequestMessage(
                    role="user",
                    content="Hey there, how are you?",
                )
            ],
        ),
        multi_answer=True,
    )

    print("Multi-Answer Streaming Response")
    print("#" * 100)
    await asyncio.gather(
        *[print_streaming_response(resp1, 10), print_streaming_response(resp2, 16), print_streaming_response(resp3, 20)]
    )
    print("\n" * 7)

    resp1, resp2, resp3 = client.completion(
        request=CompletionRequest(
            resource="default",
            messages=[
                RequestMessage(
                    role="user",
                    content="Hey there, how are you?",
                )
            ],
        ),
        stream=False,
        multi_answer=True,
    )

    print("Multi-Answer Non-Streaming Response")
    print("#" * 100)

    async def _print(resp):
        result = await resp
        print(result.message, flush=True)

    await asyncio.gather(*[_print(resp1), _print(resp2), _print(resp3)])


asyncio.run(multi_answer())
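The pattern above is ordinary `asyncio.gather`: each answer is an awaitable resolved concurrently, and results come back in argument order regardless of which provider finishes first. A self-contained sketch with stand-in tasks in place of the SDK's responses (names and delays are illustrative):

```python
import asyncio


async def fake_answer(name: str, delay: float) -> str:
    # Stand-in for one multi-answer task returned by the SDK.
    await asyncio.sleep(delay)
    return f"{name}: done"


async def main():
    # Await all answers concurrently; gather preserves argument order
    # even though provider-2 finishes first.
    return await asyncio.gather(
        fake_answer("provider-1", 0.02),
        fake_answer("provider-2", 0.01),
        fake_answer("provider-3", 0.03),
    )


print(asyncio.run(main()))
# ['provider-1: done', 'provider-2: done', 'provider-3: done']
```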
