llm-llamafile
Access llamafile localhost models via LLM
Installation
Install this plugin in the same environment as LLM.
llm install llm-llamafile
Usage
Make sure you have a llamafile running on localhost, serving an OpenAI-compatible API endpoint on port 8080. You can then use llm to interact with that model like so:
llm -m llamafile "3 neat characteristics of a pelican"
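You can also call the model from LLM's Python API. A minimal sketch, assuming the plugin registers the model under the ID "llamafile" (as the CLI example above suggests) and that the llamafile server is already running on port 8080:

import llm

# Look up the model registered by this plugin; llamafile needs no API key.
model = llm.get_model("llamafile")
response = model.prompt("3 neat characteristics of a pelican")
print(response.text())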
Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-llamafile
python3 -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
llm install -e '.[test]'
To run the tests:
pytest
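As a minimal sketch of the kind of test this runs (a hypothetical example, not taken from the plugin's own suite), you could assert that the plugin registers its model with LLM:

import llm

def test_llamafile_model_is_registered():
    # llm.get_model() raises llm.UnknownModelError if no installed
    # plugin provides the requested model ID.
    model = llm.get_model("llamafile")
    assert model.model_id == "llamafile"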
Download files
Download the file for your platform.
Source Distribution
llm_llamafile-0.1.tar.gz (6.2 kB)
Built Distribution
llm_llamafile-0.1-py3-none-any.whl (6.5 kB)
File details
Details for the file llm_llamafile-0.1.tar.gz.
File metadata
- Download URL: llm_llamafile-0.1.tar.gz
- Upload date:
- Size: 6.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.0.0 CPython/3.12.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | e7fee71cac12f1b3230b47f1aa15e8872c8efd764ac886a90a20ab11e72e7549
MD5 | e2a236b41fedf564a67864789b5bbcf8
BLAKE2b-256 | cb911b7e844fddccd7e1b03ffecb2bfd70bb2eca0383d19604df21015cded932
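These digests let you verify a download before installing it. A minimal sketch, assuming the sdist has been downloaded to the current directory, using the SHA256 value from the table above:

import hashlib

# Published SHA256 digest for llm_llamafile-0.1.tar.gz (table above).
EXPECTED = "e7fee71cac12f1b3230b47f1aa15e8872c8efd764ac886a90a20ab11e72e7549"

with open("llm_llamafile-0.1.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

assert actual == EXPECTED, "SHA256 mismatch: the download may be corrupted"
print("SHA256 verified")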
File details
Details for the file llm_llamafile-0.1-py3-none-any.whl.
File metadata
- Download URL: llm_llamafile-0.1-py3-none-any.whl
- Upload date:
- Size: 6.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.0.0 CPython/3.12.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 85feab4b87c24f060a920101c1700a7fb00ea8b9b6b0ee9bf423c8e83286e5e2
MD5 | 6ceb7f3aa91e133f80149e76c6ca99c8
BLAKE2b-256 | 11cd500b8c2ecf640ca13178ceebfb1de6576b4ee642022b7c41634e4dde0a44