
Xorbits Inference: Model Serving Made Easy 🤖

Xinference Enterprise · Self-hosting · Documentation

Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models. With Xorbits Inference, you can effortlessly deploy and serve your own or state-of-the-art built-in models using just a single command. Whether you are a researcher, developer, or data scientist, Xorbits Inference empowers you to unleash the full potential of cutting-edge AI models.

🔥 Hot Topics

Framework Enhancements

  • Agent-native Serving: Xinference integrates with Xagent to enable dynamic planning, tool use, and autonomous multi-step reasoning — moving beyond static pipelines.
  • Auto batch: Multiple concurrent requests are automatically batched, significantly improving throughput: #4197
  • Xllamacpp: a new llama.cpp Python binding maintained by the Xinference team; it supports continuous batching and is more production-ready: #2997
  • Distributed inference: running models across workers: #2877
  • VLLM enhancement: Shared KV cache across multiple replicas: #2732
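From the client's perspective, auto batching is transparent: you simply issue concurrent requests against the OpenAI-compatible endpoint and the server groups them into batches. The sketch below illustrates this with only the standard library; the endpoint, port, and the model UID `qwen2.5-instruct` are assumptions, so adjust them to your deployment.

```python
import json
from concurrent.futures import ThreadPoolExecutor
from urllib.request import Request, urlopen

# Assumed local Xinference server started with `xinference-local`.
ENDPOINT = "http://localhost:9997/v1/chat/completions"

def build_request(prompt: str) -> Request:
    """Build an OpenAI-compatible chat completion request."""
    body = json.dumps({
        "model": "qwen2.5-instruct",  # hypothetical model UID
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return Request(ENDPOINT, data=body,
                   headers={"Content-Type": "application/json"})

def ask(prompt: str) -> str:
    """Send one chat request and return the assistant's reply text."""
    with urlopen(build_request(prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def ask_many(prompts):
    # Requests issued concurrently arrive together at the server,
    # where auto batching groups them for higher throughput.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(ask, prompts))
```

No batching-specific client code is needed; the server decides how to group in-flight requests.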

New Models

Integrations

  • Xagent: an enterprise agent platform for building and running AI agents with planning, memory, and tool use — not limited to rigid workflows.
  • Dify: an LLMOps platform that enables developers (and even non-developers) to quickly build useful applications based on large language models, ensuring they are visual, operable, and improvable.
  • FastGPT: a knowledge-based platform built on LLMs that offers out-of-the-box data processing and model invocation capabilities and supports workflow orchestration through Flow visualization.
  • RAGFlow: an open-source RAG engine based on deep document understanding.
  • MaxKB (Max Knowledge Brain): a powerful and easy-to-use AI assistant that integrates Retrieval-Augmented Generation (RAG) pipelines, supports robust workflows, and provides advanced MCP tool-use capabilities.

Key Features

🌟 Model Serving Made Easy: Simplify the process of serving large language, speech recognition, and multimodal models. You can set up and deploy your models for experimentation and production with a single command.

⚡️ State-of-the-Art Models: Experiment with cutting-edge built-in models using a single command. Xorbits Inference provides access to state-of-the-art open-source models!

🖥 Heterogeneous Hardware Utilization: Make the most of your hardware resources with ggml. Xorbits Inference intelligently utilizes heterogeneous hardware, including GPUs and CPUs, to accelerate your model inference tasks.

⚙️ Flexible API and Interfaces: Offer multiple interfaces for interacting with your models, supporting OpenAI compatible RESTful API (including Function Calling API), RPC, CLI and WebUI for seamless model management and interaction.
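Because the Function Calling API follows the OpenAI `tools` schema, a request can declare callable functions alongside the messages and existing OpenAI-style clients work unchanged. A minimal sketch of such a payload, where the model UID and the `get_weather` tool are hypothetical:

```python
def weather_tool_payload(question: str) -> dict:
    """OpenAI-style chat request declaring a hypothetical get_weather tool."""
    return {
        "model": "qwen2.5-instruct",  # hypothetical model UID
        "messages": [{"role": "user", "content": question}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

# POST this payload to the /v1/chat/completions endpoint; if the model
# decides to call the tool, the response carries a `tool_calls` entry
# instead of plain text, and your code executes the function and replies
# with its result.
```

The same payload shape is accepted by the web UI's underlying RESTful API, so no Xinference-specific client is required for tool use.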

🌐 Distributed Deployment: Excel in distributed deployment scenarios, allowing the seamless distribution of model inference across multiple devices or machines.

🔌 Built-in Integration with Third-Party Libraries: Xorbits Inference seamlessly integrates with popular third-party libraries including LangChain, LlamaIndex, Dify, and Chatbox.

Why Xinference

| Feature | Xinference | FastChat | OpenLLM | RayLLM |
|---------|------------|----------|---------|--------|
| OpenAI-Compatible RESTful API | ✅ | ✅ | ✅ | ✅ |
| vLLM Integrations | ✅ | ✅ | ✅ | ✅ |
| More Inference Engines (GGML, TensorRT) | ✅ | ❌ | ✅ | ✅ |
| More Platforms (CPU, Metal) | ✅ | ✅ | ❌ | ❌ |
| Multi-node Cluster Deployment | ✅ | ❌ | ❌ | ✅ |
| Image Models (Text-to-Image) | ✅ | ✅ | ❌ | ❌ |
| Text Embedding Models | ✅ | ❌ | ❌ | ❌ |
| Multimodal Models | ✅ | ❌ | ❌ | ❌ |
| Audio Models | ✅ | ❌ | ❌ | ❌ |
| More OpenAI Functionalities (Function Calling) | ✅ | ❌ | ❌ | ❌ |

Using Xinference

  • Self-hosting Xinference Community Edition
    Quickly get Xinference running in your environment with this starter guide. Use our documentation for further references and more in-depth instructions.

  • Xinference for enterprise / organizations
    We provide additional enterprise-centric features. Send us an email to discuss your enterprise needs.

Staying Ahead

Star Xinference on GitHub and be instantly notified of new releases.


Getting Started

Jupyter Notebook

The lightest way to experience Xinference is to try our Jupyter Notebook on Google Colab.

Docker

Nvidia GPU users can start the Xinference server using the Xinference Docker image. Before running the command below, ensure that both Docker and CUDA are set up on your system.

docker run --name xinference -d -p 9997:9997 -e XINFERENCE_HOME=/data -v </on/your/host>:/data --gpus all xprobe/xinference:latest xinference-local -H 0.0.0.0

K8s via helm

Ensure that you have GPU support in your Kubernetes cluster, then install as follows.

# add repo
helm repo add xinference https://xorbitsai.github.io/xinference-helm-charts

# update indexes and query xinference versions
helm repo update xinference
helm search repo xinference/xinference --devel --versions

# install xinference
helm install xinference xinference/xinference -n xinference --version 0.0.1-v<xinference_release_version>

For more customized installation methods on K8s, please refer to the documentation.

Quick Start

Install Xinference by using pip as follows. (For more options, see the Installation page.)

pip install "xinference[all]"

To start a local instance of Xinference, run the following command:

$ xinference-local

Once Xinference is running, there are multiple ways to try it: via the web UI, via cURL, via the command line, or via Xinference's Python client. Check out our docs for the guide.
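As a small example of the HTTP route, the standard library is enough to script the server. The sketch below assumes the default port 9997 used by `xinference-local`; only the URL helper runs offline, while `list_models` needs a live server.

```python
import json
from urllib.request import urlopen

def models_url(endpoint: str = "http://localhost:9997") -> str:
    """OpenAI-compatible model-listing endpoint exposed by the server."""
    return endpoint.rstrip("/") + "/v1/models"

def list_models(endpoint: str = "http://localhost:9997"):
    """Return the IDs of models currently served (requires a running server)."""
    with urlopen(models_url(endpoint)) as resp:
        return [m["id"] for m in json.load(resp)["data"]]
```

The same endpoint is what OpenAI-compatible client libraries query when pointed at a local Xinference deployment.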


Getting involved

| Platform | Purpose |
|----------|---------|
| GitHub Issues | Reporting bugs and filing feature requests. |
| Discord | Collaborating with other Xinference users. |
| Twitter | Staying up-to-date on new features. |

Citation

If you find this work helpful, please cite it as:

@inproceedings{lu2024xinference,
    title = "Xinference: Making Large Model Serving Easy",
    author = "Lu, Weizheng and Xiong, Lingfeng and Zhang, Feng and Qin, Xuye and Chen, Yueguo",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-demo.30",
    pages = "291--300",
}

Contributors

Star History

Star History Chart


Download files

Download the file for your platform.

Source Distribution

xinference-2.5.0.tar.gz (51.8 MB)

Uploaded Source

Built Distribution


xinference-2.5.0-py3-none-any.whl (62.2 MB)

Uploaded Python 3

File details

Details for the file xinference-2.5.0.tar.gz.

File metadata

  • Download URL: xinference-2.5.0.tar.gz
  • Upload date:
  • Size: 51.8 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.25

File hashes

Hashes for xinference-2.5.0.tar.gz

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 593afb27ecbdf740c3244bc9dbf147a9fccbecef9021971c9088ff5e7e4b1ba7 |
| MD5 | 494b822c795618f0f3a73fb3099e5a62 |
| BLAKE2b-256 | 3dd47edbc2e117d6b896ea8de8c1a0cda790db326d3908001785fe7d8fcf12cd |


File details

Details for the file xinference-2.5.0-py3-none-any.whl.

File metadata

  • Download URL: xinference-2.5.0-py3-none-any.whl
  • Upload date:
  • Size: 62.2 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.25

File hashes

Hashes for xinference-2.5.0-py3-none-any.whl

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 476afc2589d0b75b4b2e32c741c13e51f7ae9bab6d5e9c7918871a2db5fe191b |
| MD5 | 286ebeb6235f0017b21d7d834827d1c5 |
| BLAKE2b-256 | 32e2b53248e60040d7165e6aa3a59af0bf6cb44d16a5e252a42ccd2f6b210496 |

