Official Python SDK for VLM Run

VLM Run Python SDK

Website | Platform | Docs | Blog | Discord


The VLM Run Python SDK is the official Python client for the VLM Run API platform, providing a convenient way to interact with our REST APIs.

🚀 Getting Started

Installation

pip install vlmrun

Installation with Optional Features

The package provides optional features that can be installed based on your needs:

  • CLI features, including chatting with Orion (see vlmrun chat):

    pip install "vlmrun[cli]"
    
  • Video processing features (numpy, opencv-python):

    pip install "vlmrun[video]"
    
  • Document processing features (pypdfium2):

    pip install "vlmrun[doc]"
    
  • OpenAI SDK integration (for chat completions API):

    pip install "vlmrun[openai]"
    
  • All optional features:

    pip install "vlmrun[all]"
    
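After installing, you can check which optional dependencies ended up in your environment. This is a generic sketch using only the standard library; the module names (cv2 for opencv-python, pypdfium2, openai) are inferred from the packages listed above, not taken from the SDK itself:

```python
import importlib.util

# Map each extra to the module it is expected to provide
# (module names inferred from the package names above).
extras = {"video": "cv2", "doc": "pypdfium2", "openai": "openai"}

for extra, module in extras.items():
    installed = importlib.util.find_spec(module) is not None
    print(f"{extra}: {'installed' if installed else 'missing'}")
```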

Basic Usage

from PIL import Image
from vlmrun.client import VLMRun
from vlmrun.common.utils import remote_image

# Initialize the client
client = VLMRun(api_key="<your-api-key>")

# Process an image fetched from a remote URL
image: Image.Image = remote_image("https://storage.googleapis.com/vlm-data-public-prod/hub/examples/document.invoice/invoice_1.jpg")
response = client.image.generate(
    images=[image],
    domain="document.invoice"
)
print(response)

# Or process an image directly from URL
response = client.image.generate(
    urls=["https://storage.googleapis.com/vlm-data-public-prod/hub/examples/document.invoice/invoice_1.jpg"],
    domain="document.invoice"
)
print(response)
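Rather than hardcoding the API key, you may prefer to read it from the environment and pass it to the client explicitly. The variable name VLMRUN_API_KEY below is an illustrative choice, not documented SDK behavior:

```python
import os

# VLMRUN_API_KEY is a hypothetical variable name used for illustration;
# pass whatever key you hold to VLMRun(api_key=...) explicitly.
api_key = os.environ.get("VLMRUN_API_KEY", "")
if not api_key:
    print("VLMRUN_API_KEY is not set; supply the key another way")

# client = VLMRun(api_key=api_key)
```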

OpenAI-Compatible Chat Completions

The VLM Run SDK provides OpenAI-compatible chat completions through the agent endpoint. This allows you to use the familiar OpenAI API with VLM Run's powerful vision-language models.

from vlmrun.client import VLMRun

client = VLMRun(
    api_key="your-key",
    base_url="https://agent.vlm.run/v1"
)

response = client.agent.completions.create(
    model="vlmrun-orion-1",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)

For async support:

import asyncio
from vlmrun.client import VLMRun

client = VLMRun(api_key="your-key", base_url="https://agent.vlm.run/v1")

async def main():
    response = await client.agent.async_completions.create(
        model="vlmrun-orion-1",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())
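The async client also makes it easy to fan several requests out concurrently with asyncio.gather. The sketch below uses a stand-in coroutine in place of the real client.agent.async_completions.create call, so it runs without an API key:

```python
import asyncio

async def ask(prompt: str) -> str:
    # Stand-in for: await client.agent.async_completions.create(...)
    await asyncio.sleep(0)  # placeholder for the network round-trip
    return f"echo: {prompt}"

async def main() -> list[str]:
    prompts = ["Hello!", "Summarize this invoice."]
    # gather preserves the order of its inputs in the results list
    return await asyncio.gather(*(ask(p) for p in prompts))

results = asyncio.run(main())
print(results)
```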

Installation: install OpenAI support with pip install "vlmrun[openai]"

Download files

Download the file for your platform.

Source Distribution

vlmrun-0.5.4.tar.gz (67.6 kB)


Built Distribution

vlmrun-0.5.4-py3-none-any.whl (72.8 kB)


File details

Details for the file vlmrun-0.5.4.tar.gz.

File metadata

  • Download URL: vlmrun-0.5.4.tar.gz
  • Upload date:
  • Size: 67.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for vlmrun-0.5.4.tar.gz
Algorithm Hash digest
SHA256 533b2c057905f24086b133f0769f0b90c38bb97f6f4ad19ff5b46306b66ea27d
MD5 11c07368aee78c5e4cfdbac29697fb47
BLAKE2b-256 f2ae3b1b4b7e9319d8c3449e18a50c039cb2900575204a860aef37d7647b07cc

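If you download the sdist by hand, you can check it against the published SHA256 digest using only the standard library; the expected digest below is copied from the table above, and the filename matches this release:

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file in chunks so large downloads are not read into memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "533b2c057905f24086b133f0769f0b90c38bb97f6f4ad19ff5b46306b66ea27d"
# assert sha256_of("vlmrun-0.5.4.tar.gz") == EXPECTED
```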

File details

Details for the file vlmrun-0.5.4-py3-none-any.whl.

File metadata

  • Download URL: vlmrun-0.5.4-py3-none-any.whl
  • Upload date:
  • Size: 72.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for vlmrun-0.5.4-py3-none-any.whl
Algorithm Hash digest
SHA256 fa8183762684c64f5dca2e38bf98a8403ced9c4299cbc0c777f9c133a526df6b
MD5 a0dcf4be43d94ba57767044376dc06ee
BLAKE2b-256 b576b9bf8775652ddcb41514450d0db4767851420e9f0dc6ad833f4e580eaa91

