
llm-llama-cpp


LLM plugin for running models using llama.cpp

Installation

Install this plugin in the same environment as llm.

llm install llm-llama-cpp
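
You can check that the plugin installed correctly by listing your installed plugins:

llm plugins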

The plugin has an additional dependency on llama-cpp-python, which needs to be installed separately.

If you have a C compiler available on your system, you can install that like so:

llm install llama-cpp-python

You could also try installing one of the wheels made available in the latest llama-cpp-python release on GitHub. Find the URL to the wheel for your platform, if one exists, and run:

llm install https://...

If you are on an Apple Silicon Mac you can try this command, which should compile the package with Metal support for running on your GPU:

CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 llm install llama-cpp-python
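
Whichever route you take, a quick way to confirm llama-cpp-python is importable is to print its version (this sketch assumes the package exposes a __version__ attribute, which recent releases do):

python -c "import llama_cpp; print(llama_cpp.__version__)"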

Adding models

After installation you will need to add or download some models.

This tool should work with any model that works with llama.cpp.

The plugin can download models for you. Try running this command:

llm llama-cpp download-model \
  https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q6_K.gguf \
  --alias llama2-chat --alias l2c --llama2-chat

This will download the Llama 2 7B Chat GGUF model file (this one is 5.53GB), save it, and register it with the plugin under two aliases: llama2-chat and l2c.

The --llama2-chat option configures it to run using a special Llama 2 Chat prompt format. You should omit this for models that are not Llama 2 Chat models.
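
For reference, the Llama 2 Chat format wraps system and user prompts in special tokens, roughly like this (a simplified sketch - the plugin applies the exact template for you):

[INST] <<SYS>>
You are a helpful assistant.
<</SYS>>

five names for a cute pet skunk [/INST]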

If you have already downloaded a llama.cpp-compatible model you can tell the plugin to read it from its current location like this:

llm llama-cpp add-model path/to/llama-2-7b-chat.Q6_K.gguf \
  --alias l27c --llama2-chat

The model filename (minus the .gguf extension) will be registered as its ID for executing the model.

You can also set one or more aliases using the --alias option.

You can see a list of models you have registered in this way like this:

llm llama-cpp models
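
Models registered by the plugin should also appear in the combined list of models known to LLM:

llm models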

Models are registered in a models.json file. You can find the path to that file in order to edit it directly like so:

llm llama-cpp models-file

For example, to edit that file in Vim:

vim "$(llm llama-cpp models-file)"
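
Or, to pretty-print the registry without opening an editor, you can use Python's built-in JSON tool:

python -m json.tool "$(llm llama-cpp models-file)"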

To find the directory with downloaded models, run:

llm llama-cpp models-dir

Here's how to change to that directory:

cd "$(llm llama-cpp models-dir)"
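
That directory is also a convenient place to check how much disk space your downloaded models are using:

ls -lh "$(llm llama-cpp models-dir)"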

Running a prompt through a model

Once you have downloaded and added a model, you can run a prompt like this:

llm -m llama-2-7b-chat.Q6_K 'five names for a cute pet skunk'

Or if you registered an alias you can use that instead:

llm -m llama2-chat 'five creative names for a pet hedgehog'
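
The usual LLM options should work with these models too. For example, -c continues your most recent conversation, and -s sets a system prompt (likely most useful for models registered with --llama2-chat):

llm -c 'two more, please'
llm -m llama2-chat -s 'You are a poet' 'describe a skunk'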

More models to try

Llama 2 7B

This model is Llama 2 7B GGUF without the chat fine-tuning. You'll need to prompt it slightly differently:

llm llama-cpp download-model \
  https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q6_K.gguf \
  --alias llama2

Try prompts that expect to be completed by the model, for example:

llm -m llama2 'Three fancy names for a posh albatross are:'

Llama 2 Chat 13B

This model is the Llama 2 13B Chat GGUF model - a 10.7GB download:

llm llama-cpp download-model \
  'https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/resolve/main/llama-2-13b-chat.Q6_K.gguf' \
  -a llama2-chat-13b --llama2-chat

Llama 2 Python 13B

This model is the CodeLlama 13B Python GGUF model - a 9.24GB download:

llm llama-cpp download-model \
  'https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/resolve/main/codellama-13b-python.Q5_K_M.gguf' \
  -a llama2-python-13b --llama2-chat

Options

The following options are available:

  • -o verbose 1 - output more verbose logging
  • -o max_tokens 100 - max tokens to return. Defaults to 4000.
  • -o no_gpu 1 - remove the default n_gpu_layers=1 argument, which should disable GPU usage
  • -o num_gpu_layers 10 - increase the n_gpu_layers argument to a higher value (the default is 1)
  • -o n_ctx 1024 - set the n_ctx argument to 1024 (the default is 4000)

For example:

llm chat -m llama2-chat-13b -o n_ctx 1024

These are mainly provided to support experimenting with different ways of executing the underlying model.
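
Multiple options can be combined in a single call. For example, to cap the response length and enable verbose logging at once:

llm -m llama2-chat -o max_tokens 200 -o verbose 1 'a haiku about skunks'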

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-llama-cpp
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
