
A unified interface for running inference with models from different machine learning libraries.

Project description


Why x.infer?

If you'd like to run many models from different libraries without having to rewrite your inference code, x.infer is for you. It has a simple API, is easy to extend, and currently supports Transformers, Ultralytics, and TIMM.

Have a custom model? Create a class that implements the BaseModel interface and register it with x.infer. See Adding New Models for more details.

Key Features

x.infer
  • Unified Interface: Interact with different machine learning models through a single, consistent API (see the sketch after this list).
  • Modular Design: Integrate and swap out models without altering the core framework.
  • Ease of Use: Simplifies model loading, input preprocessing, inference execution, and output postprocessing.
  • Extensibility: Add support for new models and libraries with minimal code changes.
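
To make the unified interface concrete, here is a minimal sketch that swaps backends by changing only the model ID. The model IDs and the image URL are taken from elsewhere in this README; the exact call signatures and return values may vary slightly per model type.

import xinfer

# Same API across backends: only the model ID changes.
classifier = xinfer.create_model("eva02_small_patch14_336.mim_in22k_ft_in1k")  # TIMM
detector = xinfer.create_model("yolov8n")                                      # Ultralytics
captioner = xinfer.create_model("vikhyatk/moondream2")                         # Transformers

image = "https://raw.githubusercontent.com/vikhyat/moondream/main/assets/demo-1.jpg"

classifier.infer(image)                         # image --> class
detector.infer(image)                           # image --> objects
captioner.infer(image, "Describe this image.")  # image-text --> text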

Quickstart

Here's a quick example demonstrating how to use x.infer with a Transformers model:


import xinfer

model = xinfer.create_model("vikhyatk/moondream2")

image = "https://raw.githubusercontent.com/vikhyat/moondream/main/assets/demo-1.jpg"
prompt = "Describe this image. "

model.infer(image, prompt)

>>> An animated character with long hair and a serious expression is eating a large burger at a table, with other characters in the background.
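
Models also expose a batch method, infer_batch (one of the required methods listed under Adding New Models). A minimal sketch, with the exact return format depending on the model:

images = [image, image]
prompts = ["Describe this image.", "What is the character eating?"]

model.infer_batch(images, prompts)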

Get a list of models:

xinfer.list_models()
┏━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓
┃ Implementation ┃ Model ID                                        ┃ Input --> Output    ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩
│ timm           │ eva02_large_patch14_448.mim_m38m_ft_in22k_in1k  │ image --> class     │
│ timm           │ eva02_large_patch14_448.mim_m38m_ft_in1k        │ image --> class     │
│ timm           │ eva02_large_patch14_448.mim_in22k_ft_in22k_in1k │ image --> class     │
│ timm           │ eva02_large_patch14_448.mim_in22k_ft_in1k       │ image --> class     │
│ timm           │ eva02_base_patch14_448.mim_in22k_ft_in22k_in1k  │ image --> class     │
│ timm           │ eva02_base_patch14_448.mim_in22k_ft_in1k        │ image --> class     │
│ timm           │ eva02_small_patch14_336.mim_in22k_ft_in1k       │ image --> class     │
│ timm           │ eva02_tiny_patch14_336.mim_in22k_ft_in1k        │ image --> class     │
│ transformers   │ Salesforce/blip2-opt-6.7b-coco                  │ image-text --> text │
│ transformers   │ Salesforce/blip2-flan-t5-xxl                    │ image-text --> text │
│ transformers   │ Salesforce/blip2-opt-6.7b                       │ image-text --> text │
│ transformers   │ Salesforce/blip2-opt-2.7b                       │ image-text --> text │
│ transformers   │ vikhyatk/moondream2                             │ image-text --> text │
│ ultralytics    │ yolov8x                                         │ image --> objects   │
│ ultralytics    │ yolov8m                                         │ image --> objects   │
│ ultralytics    │ yolov8l                                         │ image --> objects   │
│ ultralytics    │ yolov8s                                         │ image --> objects   │
│ ultralytics    │ yolov8n                                         │ image --> objects   │
│ ultralytics    │ yolov10x                                        │ image --> objects   │
│ ultralytics    │ yolov10m                                        │ image --> objects   │
│ ...            │ ...                                             │ ...                 │
│ ...            │ ...                                             │ ...                 │
└────────────────┴─────────────────────────────────────────────────┴─────────────────────┘

Launch Gradio Interface

model.launch_gradio()

(Screenshot: the Gradio interface launched from the model)

Installation

[!IMPORTANT] You must have PyTorch installed to use x.infer.

To install the barebones x.infer (without any optional dependencies), run:

pip install xinfer

x.infer integrates with several optional libraries. Install the extras for the ones you need:

pip install "xinfer[transformers]"
pip install "xinfer[ultralytics]"
pip install "xinfer[timm]"

To install all libraries, run:

pip install "xinfer[all]"

To install from a local directory, run:

git clone https://github.com/dnth/x.infer.git
cd x.infer
pip install -e .
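
To verify the installation, a quick sanity check that only uses the list_models call shown in the Quickstart:

import xinfer

# Prints the table of supported models shown above.
xinfer.list_models()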

Usage

Supported Models

Transformers:

model = xinfer.create_model("Salesforce/blip2-opt-2.7b")
model = xinfer.create_model("vikhyatk/moondream2")
model = xinfer.create_model("sashakunitsyn/vlrm-blip2-opt-2.7b")
model = xinfer.create_model("fancyfeast/llama-joycaption-alpha-two-hf-llava")

[!NOTE] Wish to load an unlisted model? You can load any Vision2Seq model from Transformers by using the Vision2SeqModel class.

from xinfer.transformers import Vision2SeqModel

model = Vision2SeqModel("facebook/chameleon-7b")
model = xinfer.create_model(model)

TIMM:

  • EVA02 Series
model = xinfer.create_model("eva02_small_patch14_336.mim_in22k_ft_in1k")

[!NOTE] Wish to load an unlisted model? You can load any model from TIMM by using the TimmModel class.

from xinfer.timm import TimmModel

model = TimmModel("resnet18")
model = xinfer.create_model(model)
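
Once wrapped, a TIMM model maps image --> class (see the table above). A minimal usage sketch; the exact output structure depends on the model wrapper:

image = "https://raw.githubusercontent.com/vikhyat/moondream/main/assets/demo-1.jpg"

# Classification models take only an image; no prompt is needed.
model.infer(image)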

Ultralytics:

  • YOLOv8 Series
model = xinfer.create_model("yolov8n")
  • YOLOv10 Series
model = xinfer.create_model("yolov10x")
  • YOLOv11 Series
model = xinfer.create_model("yolov11s")

[!NOTE] Wish to load an unlisted model? You can load any model from Ultralytics by using the UltralyticsModel class.

from xinfer.ultralytics import UltralyticsModel

model = UltralyticsModel("yolov5n6u")
model = xinfer.create_model(model)
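
Detection models map image --> objects. A minimal sketch; the format of the returned detections is model-dependent, and launch_gradio is the same call shown in the Quickstart:

image = "https://raw.githubusercontent.com/vikhyat/moondream/main/assets/demo-1.jpg"

model.infer(image)     # image --> objects
model.launch_gradio()  # interactive demo for the detector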

Adding New Models

  • Step 1: Create a new model class that implements the BaseModel interface.

  • Step 2: Implement the required abstract methods load_model, infer, and infer_batch.

  • Step 3: Decorate your class with the register_model decorator, specifying the model ID, implementation, and input/output.

For example:

@xinfer.register_model("my-model", "custom", ModelInputOutput.IMAGE_TEXT_TO_TEXT)
class MyModel(BaseModel):
    def load_model(self):
        # Load your model here
        pass

    def infer(self, image, prompt):
        # Run single inference here
        pass

    def infer_batch(self, images, prompts):
        # Run batch inference here
        pass
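
Once registered, the custom model is created and called through the same API as the built-in ones. A minimal sketch; the return values are whatever your implementation produces:

# The ID passed to register_model is the one used with create_model.
model = xinfer.create_model("my-model")

model.infer(image, prompt)
model.infer_batch([image, image], [prompt, prompt])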

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

xinfer-0.0.7.tar.gz (31.2 MB)


Built Distribution

xinfer-0.0.7-py2.py3-none-any.whl (35.5 kB)


File details

Details for the file xinfer-0.0.7.tar.gz.

File metadata

  • Download URL: xinfer-0.0.7.tar.gz
  • Upload date:
  • Size: 31.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for xinfer-0.0.7.tar.gz
  • SHA256: 9e5688180a321ab86ab2c2b93e24a0e228a8d6418f40f71f5c02f7ba91c13ec3
  • MD5: 9407b75869906aeee175edf41271b350
  • BLAKE2b-256: 7b5ccf5f609632277cdb996f34e3af31236aae3f8efcd0ac1e4e754b9e866a8f


File details

Details for the file xinfer-0.0.7-py2.py3-none-any.whl.

File metadata

  • Download URL: xinfer-0.0.7-py2.py3-none-any.whl
  • Upload date:
  • Size: 35.5 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for xinfer-0.0.7-py2.py3-none-any.whl
  • SHA256: 2d43216279270e9a2646a7200078c05c1ab07c8f032bac8c897d8df7f14d4bf1
  • MD5: 03befc4b825c09bc515ef9f5b2775c85
  • BLAKE2b-256: 9ee0c2ea35a6342cee6c5999eb04995d2acec3d78bae6bbac75b3a2e4c6ce7af

