
CLI utility and Python library for interacting with Large Language Models from organizations like OpenAI, Anthropic and Google, plus local models installed on your own machine.

Project description

LLM


A CLI tool and Python library for interacting with OpenAI, Anthropic’s Claude, Google’s Gemini, Meta’s Llama and dozens of other Large Language Models, both via remote APIs and with models that can be installed and run on your own machine.

Watch Language models on the command-line on YouTube for a demo or read the accompanying detailed notes.

With LLM you can run prompts from the command line, store prompts and responses in SQLite, and generate and store embeddings.

Quick start

First, install LLM using pip or Homebrew or pipx or uv:

pip install llm

Or with Homebrew (see warning note):

brew install llm

Or with pipx:

pipx install llm

Or with uv:

uv tool install llm
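
All four installers put the same llm package on your path. One stdlib-only way to confirm which version ended up installed (a sketch using importlib.metadata, which reads the metadata of any installed distribution):

```python
from importlib.metadata import PackageNotFoundError, version


def installed_version(package: str):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None


print(installed_version("llm") or "llm is not installed")
```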

If you have an OpenAI API key, you can run this:

# Paste your OpenAI API key into this
llm keys set openai

# Run a prompt (with the default gpt-4o-mini model)
llm "Ten fun names for a pet pelican"

# Extract text from an image
llm "extract text" -a scanned-document.jpg

# Use a system prompt against a file
cat myfile.py | llm -s "Explain this code"

Run prompts against Gemini or Anthropic with their respective plugins:

llm install llm-gemini
llm keys set gemini
# Paste Gemini API key here
llm -m gemini-2.0-flash 'Tell me fun facts about Mountain View'

llm install llm-anthropic
llm keys set anthropic
# Paste Anthropic API key here
llm -m claude-4-opus 'Impress me with wild facts about turnips'

You can also install a plugin to access models that can run on your local device. If you use Ollama:

# Install the plugin
llm install llm-ollama

# Download the Llama 3.2 model, then run a prompt against it
ollama pull llama3.2:latest
llm -m llama3.2:latest 'What is the capital of France?'

To start an interactive chat with a model, use llm chat:

llm chat -m gpt-4.1
Chatting with gpt-4.1
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
Type '!edit' to open your default editor and modify the prompt.
Type '!fragment <my_fragment> [<another_fragment> ...]' to insert one or more fragments
> Tell me a joke about a pelican
Why don't pelicans like to tip waiters?

Because they always have a big bill!

For more background on this project, see the llm tag on my blog.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm-0.28.tar.gz (85.0 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

llm-0.28-py3-none-any.whl (82.6 kB)

Uploaded Python 3
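
The wheel filename above encodes compatibility information. A minimal sketch of how a PEP 427-style name splits into its fields (this simple split assumes a compliant filename, where hyphens in the project name have already been escaped to underscores):

```python
def parse_wheel_filename(filename: str) -> dict:
    # PEP 427: {name}-{version}(-{build})?-{python}-{abi}-{platform}.whl
    stem = filename.removesuffix(".whl")
    parts = stem.split("-")
    if len(parts) == 5:
        name, version, python_tag, abi_tag, platform_tag = parts
        build = None
    elif len(parts) == 6:
        name, version, build, python_tag, abi_tag, platform_tag = parts
    else:
        raise ValueError(f"unexpected wheel filename: {filename!r}")
    return {
        "name": name,
        "version": version,
        "build": build,
        "python": python_tag,    # py3 = any Python 3
        "abi": abi_tag,          # none = no compiled extension ABI
        "platform": platform_tag,  # any = pure Python, platform-independent
    }


print(parse_wheel_filename("llm-0.28-py3-none-any.whl"))
```

For this release, py3-none-any means the wheel is pure Python and installs on any platform.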

File details

Details for the file llm-0.28.tar.gz.

File metadata

  • Download URL: llm-0.28.tar.gz
  • Upload date:
  • Size: 85.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for llm-0.28.tar.gz:

  • SHA256: e3d8bcc0f016fae8aeca1d702491a1891523702983459cb66afec99d74cabfc1
  • MD5: b41065732142aab60763bd372266f09b
  • BLAKE2b-256: cb915071c6e0e7eabbf3a95870a4b2ec0fc585e0d4d57532d305d08d6595f8a3

See more details on using hashes here.
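
After downloading the sdist, you can check it against the published digest locally. A minimal stdlib sketch using hashlib (the filename and expected SHA256 value are taken from the table above):

```python
import hashlib


def sha256_hexdigest(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large downloads need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Published SHA256 digest for llm-0.28.tar.gz
EXPECTED = "e3d8bcc0f016fae8aeca1d702491a1891523702983459cb66afec99d74cabfc1"

# After downloading the file, compare:
# assert sha256_hexdigest("llm-0.28.tar.gz") == EXPECTED
```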

Provenance

The following attestation bundles were made for llm-0.28.tar.gz:

Publisher: publish.yml on simonw/llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llm-0.28-py3-none-any.whl.

File metadata

  • Download URL: llm-0.28-py3-none-any.whl
  • Upload date:
  • Size: 82.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for llm-0.28-py3-none-any.whl:

  • SHA256: 7d70c22f2dbfe47123a67328eea9c25add8f0a3c6a439f7d452219edf2e4dd76
  • MD5: 9381dcceb8a95c77f60a4cdba64d3956
  • BLAKE2b-256: d19fb22de67c44c8fdac517889feb6f45a90cbc2783f71c70ea9e2614e815126

See more details on using hashes here.

Provenance

The following attestation bundles were made for llm-0.28-py3-none-any.whl:

Publisher: publish.yml on simonw/llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
