
CLI utility and Python library for interacting with Large Language Models from organizations like OpenAI, Anthropic and Gemini plus local models installed on your own machine.

Project description

LLM


A CLI tool and Python library for interacting with OpenAI, Anthropic’s Claude, Google’s Gemini, Meta’s Llama and dozens of other Large Language Models, both via remote APIs and with models that can be installed and run on your own machine.

Watch Language models on the command-line on YouTube for a demo or read the accompanying detailed notes.

With LLM you can run prompts from the command line, pipe in files and images, keep interactive chat sessions going, and work with both remote APIs and models running locally on your own machine.

Quick start

First, install LLM using pip or Homebrew or pipx or uv:

pip install llm

Or with Homebrew (see warning note):

brew install llm

Or with pipx:

pipx install llm

Or with uv:

uv tool install llm

If you have an OpenAI API key you can run this:

# Paste your OpenAI API key into this
llm keys set openai

# Run a prompt (with the default gpt-4o-mini model)
llm "Ten fun names for a pet pelican"

# Extract text from an image
llm "extract text" -a scanned-document.jpg

# Use a system prompt against a file
cat myfile.py | llm -s "Explain this code"
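LLM is also a Python library, and the CLI prompt above has a library equivalent. A minimal sketch, assuming the llm package is installed and an OpenAI key has already been saved with llm keys set openai:

```python
import llm

# Resolve a model by its ID, then run a prompt against it.
# The API request is made lazily, when the response text is read.
model = llm.get_model("gpt-4o-mini")
response = model.prompt("Ten fun names for a pet pelican")
print(response.text())
```

llm.get_model() resolves any model the CLI can see, including ones added by plugins, so the same pattern works with Gemini, Anthropic or local models once the relevant plugin is installed.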

Run prompts against Gemini or Anthropic with their respective plugins:

llm install llm-gemini
llm keys set gemini
# Paste Gemini API key here
llm -m gemini-2.0-flash 'Tell me fun facts about Mountain View'

llm install llm-anthropic
llm keys set anthropic
# Paste Anthropic API key here
llm -m claude-4-opus 'Impress me with wild facts about turnips'

You can also install a plugin to access models that can run on your local device. If you use Ollama:

# Install the plugin
llm install llm-ollama

# Download and run a prompt against the Llama 3.2 model
ollama pull llama3.2:latest
llm -m llama3.2:latest 'What is the capital of France?'

To start an interactive chat with a model, use llm chat:

llm chat -m gpt-4.1
Chatting with gpt-4.1
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
Type '!edit' to open your default editor and modify the prompt.
Type '!fragment <my_fragment> [<another_fragment> ...]' to insert one or more fragments
> Tell me a joke about a pelican
Why don't pelicans like to tip waiters?

Because they always have a big bill!
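llm chat keeps the model's context across turns. The Python library offers the same thing through conversation objects; a minimal sketch, again assuming the package is installed and a key is configured:

```python
import llm

# A conversation carries prior prompts and responses as context
model = llm.get_model("gpt-4.1")
conversation = model.conversation()
print(conversation.prompt("Tell me a joke about a pelican").text())

# This follow-up prompt sees the joke from the previous turn
print(conversation.prompt("Explain why that joke works").text())
```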

For more background on this project, see the llm tag on my blog.


Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm-0.31.tar.gz (86.5 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

llm-0.31-py3-none-any.whl (83.9 kB)

Uploaded Python 3

File details

Details for the file llm-0.31.tar.gz.

File metadata

  • Download URL: llm-0.31.tar.gz
  • Upload date:
  • Size: 86.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for llm-0.31.tar.gz
Algorithm Hash digest
SHA256 c7701408fdc53cbdf1db6a43f35c7dd410c291dda36cc38a14db4b482b274fa4
MD5 789722876f86327976f906a01b9703e2
BLAKE2b-256 7df23a81744fdaf3a92fe9020dc298dd2e4c144e2e7fcab863e1a132ea537cab

See more details on using hashes here.
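The published hashes let you verify a download before installing it. A minimal sketch using Python's standard hashlib; the filename and SHA256 digest are the ones listed above:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "c7701408fdc53cbdf1db6a43f35c7dd410c291dda36cc38a14db4b482b274fa4"
# After downloading the sdist:
# assert sha256_of("llm-0.31.tar.gz") == expected
```

pip can also enforce this automatically: pin the hash in a requirements file and install with pip's --require-hashes flag.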

Provenance

The following attestation bundles were made for llm-0.31.tar.gz:

Publisher: publish.yml on simonw/llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llm-0.31-py3-none-any.whl.

File metadata

  • Download URL: llm-0.31-py3-none-any.whl
  • Upload date:
  • Size: 83.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for llm-0.31-py3-none-any.whl
Algorithm Hash digest
SHA256 10fabf61f60869a3ff90383f020ad8c787503c53f706eef4477a262d856c18ff
MD5 ce239dc64e0089d6be63b48ee7ac37a5
BLAKE2b-256 415d033f0a2ba2fd952323d9b6434f653755ac1ce2e42f23c43579d47b008660

See more details on using hashes here.

Provenance

The following attestation bundles were made for llm-0.31-py3-none-any.whl:

Publisher: publish.yml on simonw/llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
