Imitater
A unified language model server built upon vLLM and Infinity.
Usage
Install
pip install -U imitater
Launch Server
python -m imitater.service.app -c config/example.yaml
Configuration
Add an OpenAI model
- name: OpenAI model name
- token: OpenAI token
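Put together, an OpenAI entry in `config/example.yaml` might look like the following sketch. The `openai` section key and the placeholder values are assumptions; consult the bundled example file for the authoritative layout.

```yaml
# Hypothetical OpenAI model entry; field names follow the list above,
# but the surrounding structure is an assumption.
openai:
  - name: gpt-3.5-turbo   # OpenAI model name
    token: sk-...          # your OpenAI API key
```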
Add a chat model
- name: Display name
- path: Model name on hub or local model path
- device: Device IDs
- port: Port number
- maxlen: Maximum model length (optional)
- agent_type: Agent type (optional) {react, aligned}
- template: Template jinja file (optional)
- gen_config: Generation config folder (optional)
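A chat model entry might be sketched as follows. The `chat` section key, model path, and all values are illustrative assumptions, not the project's canonical example.

```yaml
# Hypothetical chat model entry; only the field names are taken
# from the list above.
chat:
  - name: my-chat-model            # display name
    path: Qwen/Qwen1.5-7B-Chat     # hub name or local path (assumed)
    device: [0]                    # device IDs
    port: 8020                     # port number
    maxlen: 8192                   # optional maximum model length
    agent_type: react              # optional: react or aligned
```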
Add an embedding model
- name: Display name
- path: Model name on hub or local model path
- device: Device ID (multiple GPUs are not supported)
- port: Port number
- batch_size: Batch size (optional)
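An embedding model entry might look like the sketch below. As above, the `embed` section key and the concrete values are assumptions.

```yaml
# Hypothetical embedding model entry; a single device ID, since
# multiple GPUs are not supported for embedding models.
embed:
  - name: my-embed-model           # display name
    path: BAAI/bge-small-en-v1.5   # hub name or local path (assumed)
    device: [0]                    # single device ID
    port: 8030                     # port number
    batch_size: 16                 # optional batch size
```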
> [!NOTE]
> A chat template is required for chat models.
Set

export USE_MODELSCOPE_HUB=1

to download models from the ModelScope hub instead.
Test Server
python tests/test_openai.py -c config/example.yaml
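The test script exercises the server's OpenAI-compatible API. Independently of that script, the JSON body such an endpoint expects can be sketched as follows; the model name is whatever `name` you set for a chat model in the config (hypothetical here), and the endpoint path is assumed to mirror OpenAI's `/v1/chat/completions`.

```python
import json


def build_chat_request(model: str, content: str, stream: bool = False) -> str:
    """Serialize a minimal OpenAI-style chat completion request body."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "stream": stream,
    }
    return json.dumps(payload)


# Body to POST to the server's chat completions endpoint.
body = build_chat_request("my-chat-model", "Hello!")
```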
Roadmap
- Response choices.
- Rerank model support.
Download files
- Source distribution: imitater-0.2.3.tar.gz (20.3 kB)
- Built distribution: imitater-0.2.3-py3-none-any.whl (24.5 kB)