Model Serving Made Easy
Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models. With Xorbits Inference, you can effortlessly deploy and serve your own or state-of-the-art built-in models using just a single command. Whether you are a researcher, developer, or data scientist, Xorbits Inference empowers you to unleash the full potential of cutting-edge AI models.
🔥 Hot Topics
Framework Enhancements
- Metrics support: #906
- Docker image: #855
- Support multimodal: #829
- Auto recover: #694
- Function calling API: #701; here's an example: https://github.com/xorbitsai/inference/blob/main/examples/FunctionCall.ipynb (see the sketch after this list)
- Support rerank model: #672
- Speculative decoding: #509
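The linked notebook above is the authoritative function calling walkthrough. As a quick illustration only, here is a minimal sketch of a function calling request against Xinference's OpenAI-compatible endpoint using the `openai` Python SDK (v1+). The endpoint, the model uid `my-tool-model`, and the `get_weather` tool schema are placeholder assumptions, and not every built-in model supports function calling.

```python
# Minimal function calling sketch against Xinference's OpenAI-compatible API.
# Assumes a local server on port 9997 and a launched model that supports
# function calling; "my-tool-model" and get_weather are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:9997/v1", api_key="not-used")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="my-tool-model",  # uid of a launched model
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```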
New Models
- Built-in support for InternLM2-chat: #829
- Built-in support for qwen-vl: #829
- Built-in support for phi-2: #828
- Built-in support for mistral-instruct-v0.2: #796
- Built-in support for deepseek-llm and deepseek-coder: #786
- Built-in support for Mixtral-8x7B-v0.1: #782
Integrations
- Dify: an LLMOps platform that enables developers (and even non-developers) to quickly build useful applications based on large language models, ensuring they are visual, operable, and improvable.
- Chatbox: a desktop client for multiple cutting-edge LLM models, available on Windows, Mac and Linux.
Key Features
🌟 Model Serving Made Easy: Simplify the process of serving large language, speech recognition, and multimodal models. You can set up and deploy your models for experimentation and production with a single command.
⚡️ State-of-the-Art Models: Experiment with cutting-edge built-in models using a single command. Inference provides access to state-of-the-art open-source models!
🖥 Heterogeneous Hardware Utilization: Make the most of your hardware resources with ggml. Xorbits Inference intelligently utilizes heterogeneous hardware, including GPUs and CPUs, to accelerate your model inference tasks.
⚙️ Flexible API and Interfaces: Offer multiple interfaces for interacting with your models, supporting OpenAI compatible RESTful API (including Function Calling API), RPC, CLI and WebUI for seamless model management and interaction.
🌐 Distributed Deployment: Excel in distributed deployment scenarios, allowing the seamless distribution of model inference across multiple devices or machines.
🔌 Built-in Integration with Third-Party Libraries: Xorbits Inference seamlessly integrates with popular third-party libraries including LangChain, LlamaIndex, Dify, and Chatbox.
Why Xinference
Feature | Xinference | FastChat | OpenLLM | RayLLM |
---|---|---|---|---|
OpenAI-Compatible RESTful API | ✅ | ✅ | ✅ | ✅ |
vLLM Integrations | ✅ | ✅ | ✅ | ✅ |
More Inference Engines (GGML, TensorRT) | ✅ | ❌ | ✅ | ✅ |
More Platforms (CPU, Metal) | ✅ | ✅ | ❌ | ❌ |
Multi-node Cluster Deployment | ✅ | ❌ | ❌ | ✅ |
Image Models (Text-to-Image) | ✅ | ✅ | ❌ | ❌ |
Text Embedding Models | ✅ | ❌ | ❌ | ❌ |
Multimodal Models | ✅ | ❌ | ❌ | ❌ |
More OpenAI Functionalities (Function Calling) | ✅ | ❌ | ❌ | ❌ |
Getting Started
Please give us a star before you begin, and you'll receive instant notifications for every new release on GitHub!
Jupyter Notebook
The lightest way to experience Xinference is to try our Jupyter Notebook on Google Colab.
Docker
NVIDIA GPU users can start an Xinference server using the Xinference Docker image. Before running the command below, ensure that both Docker and CUDA are set up on your system.
docker run --name xinference -d -p 9997:9997 -e XINFERENCE_HOME=/data -v </on/your/host>:/data --gpus all xprobe/xinference:latest xinference-local -H 0.0.0.0
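Once the container is up, you can sanity-check the endpoint from the host. Here is a minimal sketch using the Python client; it assumes the `-p 9997:9997` port mapping from the command above.

```python
# Connectivity-check sketch: connect to the dockerized server and list the
# models currently running (empty right after startup).
from xinference.client import Client

client = Client("http://localhost:9997")
print(client.list_models())
```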
Quick Start
Install Xinference with pip as follows. (For more options, see the Installation page.)
pip install "xinference[all]"
To start a local instance of Xinference, run the following command:
xinference-local
Once Xinference is running, there are multiple ways you can try it: via the web UI, via cURL, via the command line, or via Xinference's Python client. Check out our docs for the full guide.
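As a quick taste of the Python client, here is a minimal sketch assuming the local server started above; `llama-2-chat` is one of the built-in models, and depending on your setup `launch_model` may need extra parameters such as model size or quantization.

```python
# Minimal Python client sketch against a local Xinference server.
from xinference.client import Client

client = Client("http://localhost:9997")

# Launch a built-in chat model; extra launch parameters (size, format,
# quantization) may be required depending on the model registration.
model_uid = client.launch_model(model_name="llama-2-chat")
model = client.get_model(model_uid)

# Chat through the returned model handle.
print(model.chat(
    prompt="What is the largest animal?",
    chat_history=[],
    generate_config={"max_tokens": 512},
))
```

The same server also speaks the OpenAI-compatible REST API shown earlier, so existing OpenAI clients can target it by changing only the base URL.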
Getting involved
Platform | Purpose |
---|---|
GitHub Issues | Reporting bugs and filing feature requests. |
Slack | Collaborating with other Xorbits users. |
Twitter | Staying up-to-date on new features. |
Contributors