
Xorbits Inference: Model Serving Made Easy 🤖

Xinference Enterprise · Self-hosting · Documentation




Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models. With Xorbits Inference, you can effortlessly deploy and serve your own models or state-of-the-art built-in models using just a single command. Whether you are a researcher, developer, or data scientist, Xorbits Inference empowers you to unleash the full potential of cutting-edge AI models.

🔥 Hot Topics

Framework Enhancements

  • Xllamacpp: a new llama.cpp Python binding maintained by the Xinference team; it supports continuous batching and is more production-ready (#2997)
  • Distributed inference: running models across workers (#2877)
  • vLLM enhancement: shared KV cache across multiple replicas (#2732)
  • Support continuous batching for the Transformers engine (#1724)
  • Support the MLX backend for Apple Silicon chips (#1765)
  • Support specifying worker and GPU indexes for launching models (#1195)
  • Support the SGLang backend (#1161)
  • Support LoRA for LLM and image models (#1080)

Integrations

  • Dify: an LLMOps platform that enables developers (and even non-developers) to quickly build useful applications based on large language models, ensuring they are visual, operable, and improvable.
  • FastGPT: a knowledge-based platform built on LLMs that offers out-of-the-box data processing and model invocation capabilities, and supports workflow orchestration through Flow visualization.
  • RAGFlow: an open-source RAG engine based on deep document understanding.
  • MaxKB: Max Knowledge Base, a chatbot built on large language models (LLMs) and retrieval-augmented generation (RAG).
  • Chatbox: a desktop client for multiple cutting-edge LLM models, available on Windows, Mac, and Linux.

Key Features

🌟 Model Serving Made Easy: Simplify the process of serving large language, speech recognition, and multimodal models. You can set up and deploy your models for experimentation and production with a single command.

⚡️ State-of-the-Art Models: Experiment with cutting-edge built-in models using a single command. Inference provides access to state-of-the-art open-source models!

🖥 Heterogeneous Hardware Utilization: Make the most of your hardware resources with ggml. Xorbits Inference intelligently utilizes heterogeneous hardware, including GPUs and CPUs, to accelerate your model inference tasks.

⚙️ Flexible API and Interfaces: Offer multiple interfaces for interacting with your models, supporting an OpenAI-compatible RESTful API (including a Function Calling API), RPC, CLI, and WebUI for seamless model management and interaction.
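
Because the RESTful API is OpenAI-compatible, existing OpenAI client libraries can talk to a local Xinference deployment simply by changing the base URL. Below is a minimal sketch in Python; the endpoint, the placeholder API key, and the model name are assumptions to adapt to your own deployment (requires pip install openai):

# Query a model served by Xinference through its OpenAI-compatible API.
# Assumes a local server on the default port 9997 and that a chat model
# has already been launched; "qwen2.5-instruct" is an illustrative name.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:9997/v1",  # local Xinference endpoint
    api_key="not-used",  # placeholder; a default local deployment does not check it
)
response = client.chat.completions.create(
    model="qwen2.5-instruct",  # substitute the model you launched
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)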

🌐 Distributed Deployment: Excel in distributed deployment scenarios, allowing the seamless distribution of model inference across multiple devices or machines.
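
In a typical multi-node setup, one machine runs xinference-supervisor and each additional machine runs xinference-worker pointed at the supervisor's endpoint; see the documentation for cluster deployment details.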

🔌 Built-in Integration with Third-Party Libraries: Xorbits Inference seamlessly integrates with popular third-party libraries including LangChain, LlamaIndex, Dify, and Chatbox.

Why Xinference

Feature                                          Xinference  FastChat  OpenLLM  RayLLM
OpenAI-Compatible RESTful API                    ✅           ✅         ✅        ✅
vLLM Integrations                                ✅           ✅         ✅        ✅
More Inference Engines (GGML, TensorRT)          ✅           ❌         ✅        ✅
More Platforms (CPU, Metal)                      ✅           ✅         ❌        ❌
Multi-node Cluster Deployment                    ✅           ❌         ❌        ✅
Image Models (Text-to-Image)                     ✅           ✅         ❌        ❌
Text Embedding Models                            ✅           ✅         ❌        ❌
Multimodal Models                                ✅           ✅         ❌        ❌
Audio Models                                     ✅           ❌         ❌        ❌
More OpenAI Functionalities (Function Calling)   ✅           ❌         ❌        ❌

Using Xinference

  • Cloud
    We host a Xinference Cloud service for anyone to try with zero setup.

  • Self-hosting Xinference Community Edition
    Quickly get Xinference running in your environment with this starter guide. Use our documentation for further references and more in-depth instructions.

  • Xinference for enterprise / organizations
    We provide additional enterprise-centric features. Send us an email to discuss enterprise needs.

Staying Ahead

Star Xinference on GitHub and be instantly notified of new releases.


Getting Started

Jupyter Notebook

The lightest way to experience Xinference is to try our Jupyter Notebook on Google Colab.

Docker

Nvidia GPU users can start the Xinference server using the official Xinference Docker image. Before running the command below, ensure that both Docker and CUDA are set up on your system.

docker run --name xinference -d \
  -p 9997:9997 \
  -e XINFERENCE_HOME=/data \
  -v </on/your/host>:/data \
  --gpus all \
  xprobe/xinference:latest xinference-local -H 0.0.0.0
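
With the port mapping above, the server is then reachable on the host at http://localhost:9997.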

K8s via helm

Ensure that you have GPU support in your Kubernetes cluster, then install as follows.

# add repo
helm repo add xinference https://xorbitsai.github.io/xinference-helm-charts

# update indexes and query xinference versions
helm repo update xinference
helm search repo xinference/xinference --devel --versions

# install xinference
helm install xinference xinference/xinference -n xinference --version 0.0.1-v<xinference_release_version>

For more customized installation methods on K8s, please refer to the documentation.

Quick Start

Install Xinference by using pip as follows. (For more options, see Installation page.)

pip install "xinference[all]"
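
The all extra pulls in dependencies for every supported inference engine; engine-specific extras (such as "xinference[transformers]" or "xinference[vllm]") are also available; see the Installation page for the full list.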

To start a local instance of Xinference, run the following command:

$ xinference-local

Once Xinference is running, you can try it in several ways: via the web UI, via cURL, via the command line, or via Xinference's Python client. Check out our docs for the full guide.
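
For example, here is a minimal sketch using the Python client. It assumes the local server above is running on its default port and uses an illustrative built-in model name; exact launch parameters can vary across Xinference versions and models:

from xinference.client import Client

# Connect to the local Xinference server (default port 9997).
client = Client("http://127.0.0.1:9997")

# Launch a built-in chat model and get back its unique ID.
# "qwen2.5-instruct" is an example name; pick any supported model.
model_uid = client.launch_model(model_name="qwen2.5-instruct", model_type="LLM")

# Chat with the launched model via an OpenAI-style message interface.
model = client.get_model(model_uid)
result = model.chat(messages=[{"role": "user", "content": "What is Xinference?"}])
print(result)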


Getting involved

Platform        Purpose
GitHub Issues   Reporting bugs and filing feature requests.
Discord         Collaborating with other Xinference users.
Twitter         Staying up-to-date on new features.

Citation

If you find this work helpful, please cite it as follows:

@inproceedings{lu2024xinference,
    title = "Xinference: Making Large Model Serving Easy",
    author = "Lu, Weizheng and Xiong, Lingfeng and Zhang, Feng and Qin, Xuye and Chen, Yueguo",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-demo.30",
    pages = "291--300",
}

Contributors

Star History

Star History Chart

