llm-grok
Plugin for LLM providing access to Grok models using the xAI API
Installation
Install this plugin in the same environment as LLM:
llm install llm-grok
Usage
First, obtain an API key from xAI.
Configure the key using the llm keys set grok command:
llm keys set grok
# Paste your xAI API key here
You can also set it via environment variable:
export XAI_API_KEY="your-api-key-here"
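LLM-style plugins typically prefer a key stored via `llm keys set` and fall back to the environment variable. A minimal sketch of that lookup order (the function name is illustrative, not the plugin's actual code):

```python
import os

def resolve_api_key(stored_key=None):
    """Return the xAI API key: prefer a key stored via `llm keys set grok`,
    then fall back to the XAI_API_KEY environment variable."""
    if stored_key:
        return stored_key
    key = os.environ.get("XAI_API_KEY")
    if not key:
        raise RuntimeError(
            "No xAI API key found; run `llm keys set grok` or set XAI_API_KEY"
        )
    return key
```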
You can now access the Grok models. Run llm models to see them in the list.
To run a prompt through grok-4-1-fast (default model):
llm -m grok-4-1-fast 'What is the meaning of life, the universe, and everything?'
To start an interactive chat session:
llm chat -m grok-4-1-fast
Example chat session:
Chatting with grok-4-1-fast
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> Tell me a joke about programming
To use a system prompt to give Grok specific instructions:
cat example.py | llm -m grok-4-1-fast -s 'explain this code in a humorous way'
To set your default model:
llm models default grok-3-mini-latest
# Now running `llm ...` will use `grok-3-mini-latest` by default
Available Models
The following Grok models are available:
- grok-4-1-fast (default)
- grok-4-1-fast-reasoning-latest
- grok-4-1-fast-non-reasoning-latest
- grok-4-latest
- grok-4-fast
- grok-4-fast-reasoning-latest
- grok-4-fast-non-reasoning-latest
- grok-code-fast-1
- grok-3-latest
- grok-3-mini-fast-latest
- grok-3-mini-latest
- grok-3-fast-latest
- grok-2-latest
- grok-2-vision-latest
You can check the available models using:
llm grok models
Model Options
The Grok models accept the following options, using the -o name value syntax:
Basic Options
- -o temperature 0.7: The sampling temperature, between 0 and 1 (default: 0.0). Higher values like 0.8 increase randomness, while lower values like 0.2 make the output more focused and deterministic.
- -o max_completion_tokens 100: Maximum number of tokens to generate in the completion (includes both visible tokens and reasoning tokens).
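These options map onto fields of an xAI chat-completions request body. A hedged sketch of how such a payload might be assembled (the field names follow the xAI API; the helper itself is illustrative, not the plugin's actual code):

```python
def build_request_body(prompt, model="grok-4-1-fast",
                       temperature=0.0, max_completion_tokens=None):
    """Assemble a chat-completions payload from the CLI options."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    # Only include the token cap when the user set it explicitly.
    if max_completion_tokens is not None:
        body["max_completion_tokens"] = max_completion_tokens
    return body
```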
Live Search Options
All Grok models support live search functionality to access real-time information:
- -o search_mode auto: Live search mode. Options: auto, on, off (default: disabled)
- -o max_search_results 20: Maximum number of search results to consider (default: 20)
- -o return_citations true: Whether to return citations for search results (default: true)
- -o search_from_date 2025-01-01: Start date for search results in ISO8601 format (YYYY-MM-DD)
- -o search_to_date 2025-01-15: End date for search results in ISO8601 format (YYYY-MM-DD)
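In the xAI API these options travel in a search_parameters object alongside the messages. A sketch of that translation, assuming the field names documented for xAI Live Search (mode, max_search_results, return_citations, from_date, to_date); the helper is illustrative:

```python
def build_search_parameters(search_mode=None, max_search_results=20,
                            return_citations=True,
                            search_from_date=None, search_to_date=None):
    """Translate the live-search CLI options into an xAI-style
    search_parameters object; returns None when search is disabled."""
    if search_mode is None or search_mode == "off":
        return None
    params = {
        "mode": search_mode,
        "max_search_results": max_search_results,
        "return_citations": return_citations,
    }
    # Date bounds are optional and only sent when supplied.
    if search_from_date:
        params["from_date"] = search_from_date
    if search_to_date:
        params["to_date"] = search_to_date
    return params
```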
X Platform Search Options
- -o excluded_x_handles "@spam_account,@another": Comma-separated list of X handles to exclude (max 10)
- -o included_x_handles "@elonmusk,@openai": Comma-separated list of X handles to include (cannot be used with excluded_x_handles)
- -o post_favorite_count 100: Minimum number of favorites for X posts to be included
- -o post_view_count 1000: Minimum number of views for X posts to be included
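The constraints above (at most 10 handles, include/exclude mutually exclusive) can be enforced when the comma-separated option values are parsed into an X source entry. A sketch under the assumption that handles are sent without the leading "@" in a sources entry of type "x"; the helper and its validation are illustrative, not the plugin's actual code:

```python
def build_x_source(included_x_handles=None, excluded_x_handles=None,
                   post_favorite_count=None, post_view_count=None):
    """Parse the X-platform CLI options into a search source entry,
    enforcing the documented constraints."""
    if included_x_handles and excluded_x_handles:
        raise ValueError(
            "included_x_handles cannot be used with excluded_x_handles")
    source = {"type": "x"}
    for key, raw in (("included_x_handles", included_x_handles),
                     ("excluded_x_handles", excluded_x_handles)):
        if raw:
            handles = [h.strip().lstrip("@") for h in raw.split(",")]
            if len(handles) > 10:
                raise ValueError(f"{key} accepts at most 10 handles")
            source[key] = handles
    if post_favorite_count is not None:
        source["post_favorite_count"] = post_favorite_count
    if post_view_count is not None:
        source["post_view_count"] = post_view_count
    return source
```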
Examples
Basic usage with options:
llm -m grok-4-1-fast -o temperature 0.2 -o max_completion_tokens 50 'Write a haiku about AI'
Using live search to get current information:
llm -m grok-4-1-fast -o search_mode on 'What are the latest developments in AI today?'
Searching with date constraints:
llm -m grok-4-1-fast -o search_mode on -o search_from_date 2025-01-01 -o search_to_date 2025-01-15 'What happened in AI this month?'
Filtering X posts by engagement:
llm -m grok-4-1-fast -o search_mode on -o post_favorite_count 1000 -o post_view_count 10000 'Show me popular AI discussions on X'
Excluding specific X accounts:
llm -m grok-4-1-fast -o search_mode on -o excluded_x_handles "@spam_account" 'Latest AI news from X'
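Putting it together, the last command above would correspond roughly to a request body like the following (field names assume the xAI Live Search API; the exact payload the plugin sends may differ):

```python
import json

# Approximate request body for:
#   llm -m grok-4-1-fast -o search_mode on \
#       -o excluded_x_handles "@spam_account" 'Latest AI news from X'
body = {
    "model": "grok-4-1-fast",
    "messages": [{"role": "user", "content": "Latest AI news from X"}],
    "search_parameters": {
        "mode": "on",
        "return_citations": True,
        "sources": [{"type": "x", "excluded_x_handles": ["spam_account"]}],
    },
}
print(json.dumps(body, indent=2))
```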
Development
To set up this plugin locally, first check out the code, then create a new virtual environment:
git clone https://github.com/hiepler/llm-grok.git
cd llm-grok
python3 -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
Available Commands
List available Grok models:
llm grok models
API Documentation
This plugin uses the xAI API. For more information, see the xAI API documentation.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
Apache License 2.0
File details
Details for the file llm_grok-1.4.2.tar.gz.
File metadata
- Download URL: llm_grok-1.4.2.tar.gz
- Size: 14.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 7a54b00ad9bfc141d3e5dd4042699275367effdf72679bde2972bfa6ac4dec83 |
| MD5 | 333fcfa3c06bdc57c13ed3409355528a |
| BLAKE2b-256 | 0cf8a6bca9f2e2fcd1c9db2545e5f626e79d955f4b65d5fa8b7a7e17e84de0eb |
Provenance
The following attestation bundles were made for llm_grok-1.4.2.tar.gz:

Publisher: publish.yml on Hiepler/llm-grok

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llm_grok-1.4.2.tar.gz
- Subject digest: 7a54b00ad9bfc141d3e5dd4042699275367effdf72679bde2972bfa6ac4dec83
- Sigstore transparency entry: 1145182521
- Permalink: Hiepler/llm-grok@029317c8d18aae2c7bb6465302fd13037a5889d3
- Branch / Tag: refs/tags/v1.4.2
- Owner: https://github.com/Hiepler
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@029317c8d18aae2c7bb6465302fd13037a5889d3
- Trigger Event: release
File details
Details for the file llm_grok-1.4.2-py3-none-any.whl.
File metadata
- Download URL: llm_grok-1.4.2-py3-none-any.whl
- Size: 12.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 21db535bdb37ce6d12f4ff6b3881aef6a5f63f1e01e74621f1de38cab59a8fd5 |
| MD5 | 88d426635c6447fc8656225757da509c |
| BLAKE2b-256 | 0f6e303050770ac99b9e10c62ec44bcc8fd8f453820eed67c5e90ec8dd06e4bc |
Provenance
The following attestation bundles were made for llm_grok-1.4.2-py3-none-any.whl:

Publisher: publish.yml on Hiepler/llm-grok

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llm_grok-1.4.2-py3-none-any.whl
- Subject digest: 21db535bdb37ce6d12f4ff6b3881aef6a5f63f1e01e74621f1de38cab59a8fd5
- Sigstore transparency entry: 1145182609
- Permalink: Hiepler/llm-grok@029317c8d18aae2c7bb6465302fd13037a5889d3
- Branch / Tag: refs/tags/v1.4.2
- Owner: https://github.com/Hiepler
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@029317c8d18aae2c7bb6465302fd13037a5889d3
- Trigger Event: release