
A unified interface to run inference across machine learning libraries.




🌟 Key Features

x.infer

✅ Run inference with 1,000+ models in 3 lines of code.
✅ List and search models interactively.
✅ Launch a Gradio interface to interact with a model.
✅ Serve model as a REST API endpoint with Ray Serve and FastAPI.
✅ Customize and add your own models with minimal code changes.

Tasks supported:

Image Classification, Object Detection, Image Captioning, Visual QA, Pose Estimation, Instance Segmentation

🤔 Why x.infer?

So, a new computer vision model just dropped last night. It's called GPT-54o-mini-vision-pro-max-xxxl. It's a super cool model, open-source, open-weights, open-data, all the good stuff.

You're excited. You want to try it out.

But it's written in a new framework, TyPorch, that you know nothing about. You don't want to spend a weekend learning TyPorch just to find out the model is not what you expected.

This is where x.infer comes in.

x.infer is a simple wrapper that allows you to run inference with any computer vision model in just a few lines of code. All in Python.

Out of the box, x.infer supports the following frameworks:

Transformers, TIMM, Ultralytics, vLLM, Ollama

Combined, x.infer supports over 1,000 models from the above frameworks.

Run any supported model with just a few lines of code:

import xinfer

model = xinfer.create_model("vikhyatk/moondream2")
model.infer(image, prompt)         # Run single inference
model.infer_batch(images, prompts) # Run batch inference
model.launch_gradio()              # Launch Gradio interface

Have a custom model? Create a class that implements the BaseXInferModel interface and register it with x.infer. See Add Your Own Model for more details.

🚀 Quickstart

Here's a quick example demonstrating how to use x.infer with a Transformers model:


import xinfer

model = xinfer.create_model("vikhyatk/moondream2")

image = "https://raw.githubusercontent.com/dnth/x.infer/main/assets/demo/00aa2580828a9009.jpg"
prompt = "Describe this image. "

model.infer(image, prompt)

>>> 'A parade with a marching band and a flag-bearing figure passes through a town, with spectators lining the street and a church steeple visible in the background.'

📦 Installation

[!IMPORTANT] You must have PyTorch installed to use x.infer.

To install the barebones x.infer (without any optional dependencies), run:

pip install xinfer

x.infer provides optional dependencies for each supported framework. Install one or more of the following:

pip install "xinfer[transformers]"
pip install "xinfer[ultralytics]"
pip install "xinfer[timm]"
pip install "xinfer[vllm]"
pip install "xinfer[ollama]"

To install all optional dependencies, run:

pip install "xinfer[all]"

To install from a local directory, run:

git clone https://github.com/dnth/x.infer.git
cd x.infer
pip install -e .

🛠️ Usage

List Models

xinfer.list_models()
                                    Available Models                                      
┏━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┓
┃ Implementation ┃ Model ID                                              ┃ Input --> Output     ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━┩
│ timm           │ timm/eva02_large_patch14_448.mim_m38m_ft_in22k_in1k   │ image --> categories │
│ timm           │ timm/eva02_large_patch14_448.mim_m38m_ft_in1k         │ image --> categories │
│ timm           │ timm/eva02_large_patch14_448.mim_in22k_ft_in22k_in1k  │ image --> categories │
│ timm           │ timm/eva02_large_patch14_448.mim_in22k_ft_in1k        │ image --> categories │
│ timm           │ timm/eva02_base_patch14_448.mim_in22k_ft_in22k_in1k   │ image --> categories │
│ timm           │ timm/eva02_base_patch14_448.mim_in22k_ft_in1k         │ image --> categories │
│ timm           │ timm/eva02_small_patch14_336.mim_in22k_ft_in1k        │ image --> categories │
│ timm           │ timm/eva02_tiny_patch14_336.mim_in22k_ft_in1k         │ image --> categories │
│ transformers   │ Salesforce/blip2-opt-6.7b-coco                        │ image-text --> text  │
│ transformers   │ Salesforce/blip2-flan-t5-xxl                          │ image-text --> text  │
│ transformers   │ Salesforce/blip2-opt-6.7b                             │ image-text --> text  │
│ transformers   │ Salesforce/blip2-opt-2.7b                             │ image-text --> text  │
│ transformers   │ fancyfeast/llama-joycaption-alpha-two-hf-llava        │ image-text --> text  │
│ transformers   │ vikhyatk/moondream2                                   │ image-text --> text  │
│ transformers   │ sashakunitsyn/vlrm-blip2-opt-2.7b                     │ image-text --> text  │
│ ultralytics    │ ultralytics/yolov8x                                   │ image --> boxes      │
│ ultralytics    │ ultralytics/yolov8m                                   │ image --> boxes      │
│ ultralytics    │ ultralytics/yolov8l                                   │ image --> boxes      │
│ ultralytics    │ ultralytics/yolov8s                                   │ image --> boxes      │
│ ultralytics    │ ultralytics/yolov8n                                   │ image --> boxes      │
│ ultralytics    │ ultralytics/yolov8n-seg                               │ image --> masks      │
│ ultralytics    │ ultralytics/yolov8n-pose                              │ image --> poses      │
│ ...            │ ...                                                   │ ...                  │
│ ...            │ ...                                                   │ ...                  │
└────────────────┴───────────────────────────────────────────────────────┴──────────────────────┘

If you're running in a Jupyter Notebook environment, you can specify interactive=True to list and search supported models interactively.
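
For example, in a notebook cell:

xinfer.list_models(interactive=True)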

https://github.com/user-attachments/assets/d51cf707-2001-478c-881a-ae27f690d1bc

Gradio Interface

For all supported models, you can launch a Gradio interface to interact with the model. This is useful for quickly testing the model and visualizing the results.

Once the model is created, you can launch the Gradio interface with the following line of code:

model.launch_gradio()

https://github.com/user-attachments/assets/25ce31f3-c9e2-4934-b341-000a6d1b7dc4

If you'd like to launch a Gradio interface with all models available in a dropdown, you can use the following line of code:

xinfer.launch_gradio_demo()

https://github.com/user-attachments/assets/bd46f55a-573f-45b9-910f-e22bee27fd3d

See Gradio Demo for more details.

Serve Model

If you're happy with your model, you can serve it with x.infer.

xinfer.serve_model("vikhyatk/moondream2")

This will start a FastAPI server at http://localhost:8000 powered by Ray Serve, allowing you to interact with your model through a REST API.

https://github.com/user-attachments/assets/cd3925f8-ffcb-4890-8a34-13ee5f6884f1

You can also specify deployment options such as the number of replicas, GPU requirements, and the host/port.

xinfer.serve_model(
    "vikhyatk/moondream2",
    device="cuda",
    dtype="float16",
    host="0.0.0.0",
    port=8000,
    deployment_kwargs={
        "num_replicas": 1, 
        "ray_actor_options": {"num_gpus": 1}
    }
)

FastAPI Endpoint

You can now query the endpoint with an image and prompt.

curl -X 'POST' \
  'http://127.0.0.1:8000/infer' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "image": "https://raw.githubusercontent.com/dnth/x.infer/main/assets/demo/00aa2580828a9009.jpg",
  "infer_kwargs": {"text": "Caption this image"}
}'

Or in Python:

import requests

url = "http://127.0.0.1:8000/infer"
headers = {
    "accept": "application/json",
    "Content-Type": "application/json"
}
payload = {
    "image": "https://raw.githubusercontent.com/dnth/x.infer/main/assets/demo/00aa2580828a9009.jpg",
    "infer_kwargs": {
        "text": "Caption this image"
    }
}

response = requests.post(url, headers=headers, json=payload)
print(response.json())

The x.infer endpoint is also compatible with the OpenAI Chat Completions API format.

from openai import OpenAI

client = OpenAI(
    api_key="dummy",
    base_url="http://127.0.0.1:8000/v1"
)

response = client.chat.completions.create(
    model="vikhyatk/moondream2",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": "https://raw.githubusercontent.com/dnth/x.infer/main/assets/demo/00aa2580828a9009.jpg"
                },
                {
                    "type": "text",
                    "text": "Caption this image"
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)

Add Your Own Model

  • Step 1: Create a new model class that implements the BaseXInferModel interface.

  • Step 2: Implement the required abstract methods load_model, infer, and infer_batch.

  • Step 3: Decorate your class with the register_model decorator, specifying the model ID, implementation, and input/output.

For example:

# These names are assumed to be importable from the xinfer package; adjust the import path if needed
from xinfer import BaseXInferModel, ModelInputOutput, register_model

@register_model("my-model", "custom", ModelInputOutput.IMAGE_TEXT_TO_TEXT)
class MyModel(BaseXInferModel):
    def load_model(self):
        # Load your model here
        pass

    def infer(self, image, prompt):
        # Run single inference 
        pass

    def infer_batch(self, images, prompts):
        # Run batch inference here
        pass
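
Once registered, the custom model can be created and used like any other. A minimal sketch, assuming the "my-model" ID registered above and an image/prompt pair like the ones in the Quickstart:

model = xinfer.create_model("my-model")
model.infer(image, prompt)             # single inference through your custom class
model.infer_batch([image], [prompt])   # batch inference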

See an example implementation of the Molmo model here.

🤖 Supported Models

Transformers
Model                       Usage
BLIP2 Series                xinfer.create_model("Salesforce/blip2-opt-2.7b")
Moondream2                  xinfer.create_model("vikhyatk/moondream2")
VLRM-BLIP2                  xinfer.create_model("sashakunitsyn/vlrm-blip2-opt-2.7b")
JoyCaption                  xinfer.create_model("fancyfeast/llama-joycaption-alpha-two-hf-llava")
Llama-3.2 Vision Series     xinfer.create_model("meta-llama/Llama-3.2-11B-Vision-Instruct")
Florence-2 Series           xinfer.create_model("microsoft/Florence-2-base-ft")

You can also load any AutoModelForVision2Seq model from Transformers by using the Vision2SeqModel class.

from xinfer.transformers import Vision2SeqModel

model = Vision2SeqModel("facebook/chameleon-7b")
model = xinfer.create_model(model)

TIMM

All TIMM models fine-tuned on ImageNet-1k are supported.

For example, to load a resnet18.a1_in1k model:

xinfer.create_model("timm/resnet18.a1_in1k")

You can also load any other TIMM model (including a custom one) by using the TimmModel class.

from xinfer.timm import TimmModel

model = TimmModel("resnet18")
model = xinfer.create_model(model)

Ultralytics

Model                            Usage
YOLOv8 Detection Series          xinfer.create_model("ultralytics/yolov8n")
YOLOv10 Detection Series         xinfer.create_model("ultralytics/yolov10x")
YOLOv11 Detection Series         xinfer.create_model("ultralytics/yolov11s")
YOLOv8 Classification Series     xinfer.create_model("ultralytics/yolov8n-cls")
YOLOv11 Classification Series    xinfer.create_model("ultralytics/yolov11s-cls")
YOLOv8 Segmentation Series       xinfer.create_model("ultralytics/yolov8n-seg")
YOLOv8 Pose Series               xinfer.create_model("ultralytics/yolov8n-pose")
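
Detection models take an image and return boxes (see the xinfer.list_models() output above). A minimal usage sketch, assuming they follow the same create/infer pattern as the Quickstart but without a text prompt:

model = xinfer.create_model("ultralytics/yolov8n")
result = model.infer(image)   # assumed call for detection models; returns predicted boxes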

You can also load any model from Ultralytics by using the UltralyticsModel class.

from xinfer.ultralytics import UltralyticsModel

model = UltralyticsModel("yolov5n6u")
model = xinfer.create_model(model)

vLLM

Model                         Usage
Molmo-72B                     xinfer.create_model("vllm/allenai/Molmo-72B-0924")
Molmo-7B-D                    xinfer.create_model("vllm/allenai/Molmo-7B-D-0924")
Molmo-7B-O                    xinfer.create_model("vllm/allenai/Molmo-7B-O-0924")
Phi-3.5-vision-instruct       xinfer.create_model("vllm/microsoft/Phi-3.5-vision-instruct")
Phi-3-vision-128k-instruct    xinfer.create_model("vllm/microsoft/Phi-3-vision-128k-instruct")

Ollama

To use Ollama models, you'll need to install Ollama on your machine. See the Ollama Installation Guide for more details.

Model        Usage
LLaVA Phi3   xinfer.create_model("ollama/llava-phi3")
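
Once Ollama is running locally and the model has been pulled, usage should mirror the Quickstart. A minimal sketch (the image and prompt variables are assumed, as in the earlier examples):

model = xinfer.create_model("ollama/llava-phi3")
model.infer(image, prompt)   # image-text --> text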

🤝 Contributing

If you'd like to contribute, here are some ways you can help:

  1. Add new models: Implement new model classes following the steps in the Add Your Own Model section.

  2. Improve documentation: Help us enhance our documentation, including this README, inline code comments, and the official docs.

  3. Report bugs: If you find a bug, please open an issue with a clear description and steps to reproduce.

  4. Suggest enhancements: Have ideas for new features? Open a feature request.

  5. Financial support: Please consider sponsoring the project to support continued development.

Please also see the code of conduct here. Thank you for helping make x.infer better!

⚠️ Disclaimer

x.infer is not affiliated with any of the libraries it supports. It is a simple wrapper that allows you to run inference with any of the supported models.

Although x.infer is Apache 2.0 licensed, the models it supports may have their own licenses. Please check the individual model repositories for more details.



