LLM
A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine.
Run prompts from the command-line, store the results in SQLite, generate embeddings and more.
Consult the LLM plugins directory for plugins that provide access to remote and local models.
Full documentation: llm.datasette.io
Background on this project:
- llm, ttok and strip-tags—CLI tools for working with ChatGPT and other LLMs
- The LLM CLI tool now supports self-hosted language models via plugins
- Accessing Llama 2 from the command-line with the llm-replicate plugin
- Run Llama 2 on your own Mac using LLM and Homebrew
- Catching up on the weird world of LLMs
- LLM now provides tools for working with embeddings
- Build an image search engine with llm-clip, chat with models with llm chat
- Many options for running Mistral models in your terminal using LLM
Installation
Install this tool using pip:
pip install llm
Or using Homebrew:
brew install llm
See the documentation for detailed installation instructions.
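Once installed, running the following should print the installed version:
llm --version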
Getting started
If you have an OpenAI API key you can get started using the OpenAI models right away.
As an alternative to OpenAI, you can install plugins to access models by other providers, including models that can be installed and run on your own device.
Save your OpenAI API key like this:
llm keys set openai
This will prompt you for your key like so:
Enter key: <paste here>
Now that you've saved a key you can run a prompt like this:
llm "Five cute names for a pet penguin"
1. Waddles
2. Pebbles
3. Bubbles
4. Flappy
5. Chilly
Read the usage instructions for more.
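LLM is also a Python library. Here is a minimal sketch of the equivalent prompt in Python, assuming your OpenAI key has already been saved with llm keys set openai (the model ID used here is an illustrative choice; any model available to your install works):
import llm

# Look up a model by ID or alias - an assumed ID, swap in any model you have
model = llm.get_model("gpt-4o-mini")
response = model.prompt("Five cute names for a pet penguin")
print(response.text())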
Installing a model that runs on your own machine
LLM plugins can add support for alternative models, including models that run on your own machine.
To download and run Mistral 7B Instruct locally, you can install the llm-gpt4all plugin:
llm install llm-gpt4all
Then run this command to see which models it makes available:
llm models
gpt4all: all-MiniLM-L6-v2-f16 - SBert, 43.76MB download, needs 1GB RAM
gpt4all: orca-mini-3b-gguf2-q4_0 - Mini Orca (Small), 1.84GB download, needs 4GB RAM
gpt4all: mistral-7b-instruct-v0 - Mistral Instruct, 3.83GB download, needs 8GB RAM
...
Each model file is downloaded once, the first time you use it. Try Mistral out like this:
llm -m mistral-7b-instruct-v0 'difference between a pelican and a walrus'
You can also start a chat session with the model using the llm chat command:
llm chat -m mistral-7b-instruct-v0
Chatting with mistral-7b-instruct-v0
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
>
Using a system prompt
You can use the -s/--system option to set a system prompt, providing instructions for processing other input to the tool.
To describe how the code in a file works, try this:
cat mycode.py | llm -s "Explain this code"
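As mentioned in the introduction, prompts and responses are stored in SQLite. A couple of useful commands for inspecting that log (a hedged sketch of the llm logs interface):
llm logs -n 3
llm logs path
The first shows the three most recent logged responses; the second prints the location of the logs.db SQLite database. The introduction also mentions embeddings - assuming your OpenAI key is set, a command like this returns a JSON array of floating point numbers:
llm embed -m 3-small -c 'A string to embed'
Here 3-small is an alias for OpenAI's text-embedding-3-small model; embedding models provided by plugins work the same way.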
Help
For help, run:
llm --help
You can also use:
python -m llm --help
File details
Details for the file llm-0.18.tar.gz (source distribution).
File metadata
- Size: 41.6 kB
- Uploaded using Trusted Publishing, via twine/5.1.1 on CPython/3.12.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 1c4243dea5ec3e7ee50f4fd0b8ccd6475b8f5a6cca2b9aac8270c0e843c0e490
MD5 | 9a66cc455f1de4893c1f5ff2364940be
BLAKE2b-256 | 8df10fb7b1fd468eb9d80ddb583cc1badcadbebf4549c3cdf4a7ff26f62803a6
Provenance
The following attestation bundle was made for llm-0.18.tar.gz:
- Publisher: publish.yml on simonw/llm
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llm-0.18.tar.gz
- Subject digest: 1c4243dea5ec3e7ee50f4fd0b8ccd6475b8f5a6cca2b9aac8270c0e843c0e490
- Sigstore transparency entry: 149375301
File details
Details for the file llm-0.18-py3-none-any.whl (built distribution, Python 3).
File metadata
- Size: 43.2 kB
- Uploaded using Trusted Publishing, via twine/5.1.1 on CPython/3.12.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | bf1ad0b41425909a5aecd855ccb1ad2fb64d78fb2ac0d1ab07ddc2eba4b58c42
MD5 | fe92afeefb552493faa405cbfdef0e7e
BLAKE2b-256 | 962756aac0e5aa3b97cb4e06246beb8ad66c174d60ba0413f318c4fd15f50a5e
Provenance
The following attestation bundle was made for llm-0.18-py3-none-any.whl:
- Publisher: publish.yml on simonw/llm
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llm-0.18-py3-none-any.whl
- Subject digest: bf1ad0b41425909a5aecd855ccb1ad2fb64d78fb2ac0d1ab07ddc2eba4b58c42
- Sigstore transparency entry: 149375302