LLM
A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine.
Run prompts from the command-line, store the results in SQLite, generate embeddings and more.
Full documentation: llm.datasette.io
Background on this project:
- llm, ttok and strip-tags—CLI tools for working with ChatGPT and other LLMs
- The LLM CLI tool now supports self-hosted language models via plugins
- Accessing Llama 2 from the command-line with the llm-replicate plugin
- Run Llama 2 on your own Mac using LLM and Homebrew
- Catching up on the weird world of LLMs
- LLM now provides tools for working with embeddings
- Build an image search engine with llm-clip, chat with models with llm chat
Installation
Install this tool using pip:
pip install llm
Or using pipx:
pipx install llm
Detailed installation instructions.
Getting started
If you have an OpenAI API key you can get started using the OpenAI models right away.
As an alternative to OpenAI, you can install plugins to access models by other providers, including models that can be installed and run on your own device.
Save your OpenAI API key like this:
llm keys set openai
This will prompt you for your key like so:
Enter key: <paste here>
Now that you've saved a key you can run a prompt like this:
llm "Five cute names for a pet penguin"
1. Waddles
2. Pebbles
3. Bubbles
4. Flappy
5. Chilly
Read the usage instructions for more.
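Every prompt and response is logged to a SQLite database, so past results can be queried with Python's built-in sqlite3 module. A minimal sketch, assuming a `responses` table with `prompt` and `response` columns — run `llm logs path` to locate your database and check its actual schema, which may differ between llm versions:

```python
import sqlite3

def recent_responses(db_path, limit=5):
    """Return the most recent (prompt, response) pairs from an llm log database.

    Assumes a `responses` table with `prompt` and `response` columns;
    verify against your own database schema before relying on this.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT prompt, response FROM responses ORDER BY rowid DESC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        conn.close()
    return rows
```

This deliberately uses only the standard library, so it works anywhere the log database is readable.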
Installing a model that runs on your own machine
LLM plugins can add support for alternative models, including models that run on your own machine.
To download and run Llama 2 13B locally, you can install the llm-mlc plugin:
llm install llm-mlc
llm mlc pip install --pre --force-reinstall \
mlc-ai-nightly \
mlc-chat-nightly \
-f https://mlc.ai/wheels
llm mlc setup
Then download the 15GB Llama 2 13B model like this:
llm mlc download-model Llama-2-13b-chat --alias llama2
And run a prompt through it:
llm -m llama2 'difference between a llama and an alpaca'
You can also start a chat session with the model using the llm chat command:
llm chat -m llama2
Chatting with mlc-chat-Llama-2-13b-chat-hf-q4f16_1
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
>
Using a system prompt
You can use the -s/--system option to set a system prompt, providing instructions for processing other input to the tool.
To describe how the code in a file works, try this:
cat mycode.py | llm -s "Explain this code"
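The pipe works because llm reads any text arriving on standard input and includes it as the prompt content. The same plumbing can be reproduced from Python with the standard subprocess module — a sketch, with the command left as a parameter so it works for llm or any other tool on your PATH:

```python
import subprocess

def pipe_file_to_command(path, command):
    """Send a file's contents to a command's stdin, like `cat path | command`."""
    with open(path, "rb") as f:
        result = subprocess.run(command, stdin=f, capture_output=True, check=True)
    return result.stdout.decode()
```

For example, `pipe_file_to_command("mycode.py", ["llm", "-s", "Explain this code"])` mirrors the shell pipeline above.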
Help
For help, run:
llm --help
You can also use:
python -m llm --help
File details
Details for the file llm-0.11.2.tar.gz.
File metadata
- Download URL: llm-0.11.2.tar.gz
- Upload date:
- Size: 35.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | 269107bff602ddea8eac679a12be12b9f41c6b6dcfb3e95ea3593a089fd0c14e
MD5 | f2f48aced5dfd8848faa36c262808d11
BLAKE2b-256 | ed7a4403dc0b8d9bfca33619f5a4bcaf811fb82b08e01e867521b66b179d08cc
File details
Details for the file llm-0.11.2-py3-none-any.whl.
File metadata
- Download URL: llm-0.11.2-py3-none-any.whl
- Upload date:
- Size: 36.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | 2cb294a6b2896daa1d9aad079663f0fc72544e7b8e9b8fbca045872743877523
MD5 | 8436d1be9b9c2e097d5592dc27181a74
BLAKE2b-256 | 62d8d76907e8d96873adac9e28662a84dbdad6ce21fbec20386946a768ccfed8
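The SHA256 digests above can be used to confirm that a downloaded file matches what was published. A minimal sketch using Python's hashlib — pass whatever path you saved the archive to, plus the digest from the table:

```python
import hashlib

def matches_sha256(path, expected_hex):
    """Return True if the file's SHA256 digest equals expected_hex."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()
```

Comparing against the SHA256 row for the source distribution or wheel should return True for an intact download.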