
HuggingFace Plugin for Vision Agents

HuggingFace Inference integration for Vision Agents. Supports both text-only LLMs and vision language models (VLMs) through HuggingFace's Inference Providers API.

Installation

uv add "vision-agents[huggingface]"

Configuration

Set your HuggingFace API token:

export HF_TOKEN=your_huggingface_token

Usage

Text-only LLM

from vision_agents.plugins import huggingface

llm = huggingface.LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    provider="together",  # optional; auto-selected if omitted. "fastest" or "cheapest" picks a provider by throughput or cost.
)

response = await llm.simple_response("Hello, how are you?")
print(response.text)
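The snippets in this README use top-level await, which only works in a notebook or async REPL. In a regular script, wrap the calls in a coroutine and run it with asyncio.run; a minimal skeleton (the model calls are commented out because they need a live HF_TOKEN):

```python
import asyncio


async def main() -> None:
    # Construct the model and make the call as shown above, e.g.:
    # llm = huggingface.LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")
    # response = await llm.simple_response("Hello, how are you?")
    # print(response.text)
    ...


if __name__ == "__main__":
    asyncio.run(main())
```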

Vision Language Model (VLM)

from vision_agents.plugins import huggingface

vlm = huggingface.VLM(
    model="Qwen/Qwen2-VL-7B-Instruct",
    fps=1,
    frame_buffer_seconds=10,
)

# VLM automatically buffers video frames when used with an Agent
response = await vlm.simple_response("What do you see?")
print(response.text)

With Function Calling

from vision_agents.plugins import huggingface

llm = huggingface.LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")

@llm.register_function()
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny."

response = await llm.simple_response("What's the weather in Paris?")
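A registered function's signature and docstring presumably give the model its tool description. The helper below is an illustrative sketch of how such a schema could be derived — `tool_schema` is a hypothetical name, not the plugin's actual implementation:

```python
import inspect
import typing


def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny."


def tool_schema(fn):
    # Map Python type hints to a minimal JSON-schema-style tool description.
    hints = typing.get_type_hints(fn)
    params = {
        name: {"type": "string" if hints.get(name) is str else "object"}
        for name in inspect.signature(fn).parameters
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }


print(tool_schema(get_weather)["name"])  # get_weather
```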

Supported Providers

HuggingFace's Inference Providers API supports multiple backends:

  • Together AI
  • Groq
  • Cerebras
  • Replicate
  • Fireworks
  • And more

Specify a provider explicitly or let HuggingFace auto-select:

llm = huggingface.LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    provider="groq",
)

API Reference

huggingface.LLM

Text-only language model integration.

Parameters:

  • model (str): HuggingFace model ID
  • api_key (str, optional): HuggingFace API token (defaults to HF_TOKEN env var)
  • provider (str, optional): Inference provider name

huggingface.VLM

Vision language model integration with video frame buffering.

Parameters:

  • model (str): HuggingFace model ID
  • api_key (str, optional): HuggingFace API token (defaults to HF_TOKEN env var)
  • provider (str, optional): Inference provider name
  • fps (int): Frames per second to buffer (default: 1)
  • frame_buffer_seconds (int): Seconds of video to buffer (default: 10)
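With the defaults above, the VLM retains fps × frame_buffer_seconds = 1 × 10 = 10 frames at a time. A sketch of that rolling-buffer behavior, assuming a simple deque with eviction (illustrative only, not the plugin's internals):

```python
from collections import deque

fps = 1
frame_buffer_seconds = 10
max_frames = fps * frame_buffer_seconds  # 10 frames with the defaults

# Oldest frames are evicted automatically once the buffer is full.
buffer = deque(maxlen=max_frames)
for frame_id in range(25):  # simulate 25 seconds of 1 fps video
    buffer.append(frame_id)

print(len(buffer))  # 10: only the most recent window is kept
print(buffer[0])    # 15: frames 0-14 were evicted
```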
