HuggingFace runtime for MLServer

This package provides an MLServer runtime compatible with HuggingFace Transformers.

Usage

You can install the runtime, alongside mlserver, as:

```bash
pip install mlserver mlserver-huggingface
```

For further information on how to use MLServer with HuggingFace, you can check out this worked-out example.

Content Types

The HuggingFace runtime will always decode the input request using its own built-in codec, so content type annotations at the request level will be ignored. Note that this does not apply to input-level content type annotations, which will be respected as usual.

Settings

The HuggingFace runtime exposes a couple of extra parameters which can be used to customise how the runtime behaves. These settings can be added under the `parameters.extra` section of your `model-settings.json` file, e.g.

```json
{
  "name": "qa",
  "implementation": "mlserver_huggingface.HuggingFaceRuntime",
  "parameters": {
    "extra": {
      "task": "question-answering",
      "optimum_model": true
    }
  }
}
```
These settings can also be injected through environment variables prefixed with `MLSERVER_MODEL_HUGGINGFACE_`, e.g.

```bash
MLSERVER_MODEL_HUGGINGFACE_TASK="question-answering"
MLSERVER_MODEL_HUGGINGFACE_OPTIMUM_MODEL=true
```

Loading models

Local models

It is possible to load a local model into a HuggingFace pipeline by specifying the model artefact folder path in `parameters.uri` in `model-settings.json`.
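As a sketch, a minimal `model-settings.json` for a local model might look like the following (the `"local-qa"` name and `./model` path are illustrative placeholders):

```json
{
  "name": "local-qa",
  "implementation": "mlserver_huggingface.HuggingFaceRuntime",
  "parameters": {
    "uri": "./model",
    "extra": {
      "task": "question-answering"
    }
  }
}
```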

HuggingFace models

Models in the HuggingFace hub can be loaded by specifying their name in `parameters.extra.pretrained_model` in `model-settings.json`.

If `parameters.extra.pretrained_model` is specified, it takes precedence over `parameters.uri`.
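For example, a hub model could be referenced like this (a sketch; the model name shown is one possible question-answering model from the hub, not a requirement of the runtime):

```json
{
  "name": "qa",
  "implementation": "mlserver_huggingface.HuggingFaceRuntime",
  "parameters": {
    "extra": {
      "task": "question-answering",
      "pretrained_model": "distilbert-base-cased-distilled-squad"
    }
  }
}
```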

Model Inference

Model inference is performed by the HuggingFace pipeline, which allows users to run inference on a batch of inputs. Extra inference kwargs can be passed in `parameters.extra`, e.g.

```json
{
    "inputs": [
        {
            "name": "text_inputs",
            "shape": [2],
            "datatype": "BYTES",
            "data": ["My kitten's name is JoJo,", "Tell me a story:"]
        }
    ],
    "parameters": {
        "extra": {"max_new_tokens": 200, "return_full_text": false}
    }
}
```
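A request like the one above can be built and sent from Python. The following is a minimal sketch: the `build_infer_request` helper is hypothetical, and the model name (`qa`) and server URL assume a locally running MLServer instance on its default HTTP port:

```python
import json
import urllib.request


def build_infer_request(texts, **extra):
    """Build a V2 inference request payload for the HuggingFace runtime."""
    return {
        "inputs": [
            {
                "name": "text_inputs",
                "shape": [len(texts)],
                "datatype": "BYTES",
                "data": list(texts),
            }
        ],
        "parameters": {"extra": extra},
    }


payload = build_infer_request(
    ["My kitten's name is JoJo,", "Tell me a story:"],
    max_new_tokens=200,
    return_full_text=False,
)

# POST to MLServer's V2 inference endpoint (uncomment with a server running;
# assumes a model named "qa" served on the default port 8080):
# req = urllib.request.Request(
#     "http://localhost:8080/v2/models/qa/infer",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode("utf-8"))

print(json.dumps(payload, indent=2))
```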

Reference

You can find the full reference of the accepted extra settings for the HuggingFace runtime below:

.. autopydantic_settings:: mlserver_huggingface.settings.HuggingFaceSettings
