
CLI utility and Python library for interacting with Large Language Models from providers such as OpenAI, Anthropic and Google Gemini, as well as local models installed on your own machine.

Project description

LLM


A CLI tool and Python library for interacting with OpenAI, Anthropic’s Claude, Google’s Gemini, Meta’s Llama and dozens of other Large Language Models, both via remote APIs and with models that can be installed and run on your own machine.

Watch "Language models on the command-line" on YouTube for a demo, or read the accompanying detailed notes.

With LLM you can run prompts from the command-line or from Python, pipe in files and attach images, install plugins that add support for both remote APIs and local models, and hold interactive chat sessions.

Quick start

First, install LLM using pip or Homebrew or pipx or uv:

pip install llm

Or with Homebrew (see warning note):

brew install llm

Or with pipx:

pipx install llm

Or with uv:

uv tool install llm
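
Whichever installer you used, running the command with its --version flag is a quick way to confirm it is on your PATH; it should print the installed version:

# Confirm the install worked (prints the installed version)
llm --version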

If you have an OpenAI API key you can run this:

# Paste your OpenAI API key into this
llm keys set openai

# Run a prompt (with the default gpt-4o-mini model)
llm "Ten fun names for a pet pelican"

# Extract text from an image
llm "extract text" -a scanned-document.jpg

# Use a system prompt against a file
cat myfile.py | llm -s "Explain this code"
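
The same package works as a Python library. A minimal sketch of the equivalent prompt call, reusing the OpenAI key stored above:

import llm

# Uses the key saved earlier with `llm keys set openai`
model = llm.get_model("gpt-4o-mini")
response = model.prompt("Ten fun names for a pet pelican")
print(response.text())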

Run prompts against Gemini or Anthropic with their respective plugins:

llm install llm-gemini
llm keys set gemini
# Paste Gemini API key here
llm -m gemini-2.0-flash 'Tell me fun facts about Mountain View'

llm install llm-anthropic
llm keys set anthropic
# Paste Anthropic API key here
llm -m claude-4-opus 'Impress me with wild facts about turnips'
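
Plugins register additional models with the llm command. To see what is available on your machine (the exact list depends on which plugins and keys you have set up):

# List installed plugins
llm plugins

# List every model LLM currently knows about
llm models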

You can also install a plugin to access models that can run on your local device. If you use Ollama:

# Install the plugin
llm install llm-ollama

# Download and run a prompt against the Llama 3.2 model
ollama pull llama3.2:latest
llm -m llama3.2:latest 'What is the capital of France?'
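
If you mostly use a local model you can make it the default, so bare llm invocations use it; the model ID below assumes the llm-ollama plugin registered it under the same name used above:

# Make the local Llama 3.2 model the default
llm models default llama3.2:latest

# Now plain prompts go to the local model
llm 'What is the capital of France?'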

To start an interactive chat with a model, use llm chat:

llm chat -m gpt-4.1
Chatting with gpt-4.1
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
Type '!edit' to open your default editor and modify the prompt.
Type '!fragment <my_fragment> [<another_fragment> ...]' to insert one or more fragments
> Tell me a joke about a pelican
Why don't pelicans like to tip waiters?

Because they always have a big bill!
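
Prompts and responses are logged to a SQLite database by default, and you can continue the most recent conversation without re-entering chat mode:

# Show the most recent logged prompt and response
llm logs -n 1

# Ask a follow-up in the same conversation
llm -c 'Tell me another one'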

More background on this project can be found under the llm tag on my blog.




Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm-0.32a0.tar.gz (100.2 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

llm-0.32a0-py3-none-any.whl (98.6 kB)

Uploaded Python 3

File details

Details for the file llm-0.32a0.tar.gz.

File metadata

  • Download URL: llm-0.32a0.tar.gz
  • Upload date:
  • Size: 100.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for llm-0.32a0.tar.gz

  • SHA256: b1ba97298b88867407d1cfe52c43ab33c4b82de5803835f5237e181cbc36d305
  • MD5: 285c09b8d15cb2dd399a94e244d95c61
  • BLAKE2b-256: 697451542ddcc67dbf2118d329cb21adfd760862d2cbc00f9780439316b7ca65

See more details on using hashes here.
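
One way to check a downloaded copy against the SHA256 digest listed above, assuming pip and sha256sum are available locally:

# Download just the source distribution (no dependencies, no wheels)
pip download llm==0.32a0 --no-deps --no-binary :all:

# Compare the output against the SHA256 value above
sha256sum llm-0.32a0.tar.gz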

Provenance

The following attestation bundles were made for llm-0.32a0.tar.gz:

Publisher: publish.yml on simonw/llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llm-0.32a0-py3-none-any.whl.

File metadata

  • Download URL: llm-0.32a0-py3-none-any.whl
  • Upload date:
  • Size: 98.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for llm-0.32a0-py3-none-any.whl

  • SHA256: 9c72e23cfbfdf8034362e1f5d8cb932b10d7bedce33197dfb2d0911bed4a0e2f
  • MD5: 23d056d083d4c405c680df79946aab83
  • BLAKE2b-256: d4094c5cb49e653547d4efe7776a3d4550b7859c666590262ba3741752a5445b

See more details on using hashes here.

Provenance

The following attestation bundles were made for llm-0.32a0-py3-none-any.whl:

Publisher: publish.yml on simonw/llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
