llm-echo
Debug plugin for LLM. Adds a model which echoes its input without hitting an API or executing a local LLM.
Installation
Install this plugin in the same environment as LLM.
llm install llm-echo
Usage
The plugin adds an echo model which simply echoes the prompt details back to you as JSON.
llm -m echo prompt -s 'system prompt'
Output:
{
  "prompt": "prompt",
  "system": "system prompt",
  "attachments": [],
  "stream": true,
  "previous": []
}
You can also set the example option like this:
llm -m echo prompt -o example_bool 1
Output:
{
  "prompt": "prompt",
  "system": "",
  "attachments": [],
  "stream": true,
  "previous": [],
  "options": {
    "example_bool": true
  }
}
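Note that the 1 passed to -o example_bool comes back as JSON true. A hypothetical helper illustrating that kind of string-to-boolean coercion (not the plugin's actual code):

```python
# Hypothetical sketch: coerce a CLI option string like "1" to a boolean,
# matching the true that appears in the echoed JSON above.
def coerce_bool(value: str) -> bool:
    return value.strip().lower() in {"1", "true", "yes", "on"}

assert coerce_bool("1") is True
assert coerce_bool("0") is False
```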
Tool calling
You can use llm-echo to test tool calling without needing to run prompts through an actual LLM. In your prompt, send something like this:
{
  "prompt": "This will be treated as the prompt",
  "tool_calls": [
    {
      "name": "example",
      "arguments": {
        "input": "Hello, world!"
      }
    }
  ]
}
You can assemble a test that looks like this:
import json

import llm


def example(input: str) -> str:
    return f"Example output for {input}"


model = llm.get_model("echo")
chain_response = model.chain(
    json.dumps(
        {
            "tool_calls": [
                {
                    "name": "example",
                    "arguments": {"input": "test"},
                }
            ],
            "prompt": "prompt",
        }
    ),
    system="system",
    tools=[example],
)
responses = list(chain_response.responses())
tool_calls = responses[0].tool_calls()
assert tool_calls == [
    llm.ToolCall(name="example", arguments={"input": "test"}, tool_call_id=None)
]
assert responses[1].prompt.tool_results == [
    llm.models.ToolResult(
        name="example", output="Example output for test", tool_call_id=None
    )
]
Or you can read the JSON from the last response in the chain:
response_info = json.loads(responses[-1].text())
And run assertions against the "tool_results" key, which should look something like this:
{
  "prompt": "",
  "system": "",
  "...": "...",
  "tool_results": [
    {
      "name": "example",
      "output": "Example output for test",
      "tool_call_id": null
    }
  ]
}
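A minimal sketch of asserting against that payload, using a hard-coded copy of the JSON shown above rather than a live chain response:

```python
import json

# Hard-coded copy of the tool_results payload shown above, standing in
# for json.loads(responses[-1].text()) from a real chain response.
response_text = """
{
  "prompt": "",
  "system": "",
  "tool_results": [
    {"name": "example", "output": "Example output for test", "tool_call_id": null}
  ]
}
"""

response_info = json.loads(response_text)
(result,) = response_info["tool_results"]
assert result["name"] == "example"
assert result["output"] == "Example output for test"
assert result["tool_call_id"] is None
```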
Take a look at the test suite for llm-tools-simpleeval for an example of how to write tests against tools.
echo-needs-key model
The plugin also provides an echo-needs-key model which behaves identically to echo but requires an API key. This is useful for testing key resolution logic in plugins like datasette-llm.
The resolved key is included in the JSON output:
LLM_ECHO_NEEDS_KEY_KEY=sk-test-123 llm -m echo-needs-key 'hello'
Output:
{
  "prompt": "hello",
  "system": "",
  "attachments": [],
  "stream": true,
  "previous": [],
  "key": "sk-test-123"
}
The model's needs_key is "echo-needs-key" and its key_env_var is LLM_ECHO_NEEDS_KEY_KEY.
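As an illustration of the resolution order being tested, here is a hypothetical sketch (not LLM's actual implementation): an explicitly supplied key wins, then a stored key, then the key_env_var environment variable.

```python
import os
from typing import Optional


# Hypothetical sketch of key resolution order for a keyed model:
# explicit key, then stored key, then the key_env_var environment variable.
def resolve_key(
    explicit: Optional[str] = None, stored: Optional[str] = None
) -> Optional[str]:
    return explicit or stored or os.environ.get("LLM_ECHO_NEEDS_KEY_KEY")


os.environ["LLM_ECHO_NEEDS_KEY_KEY"] = "sk-test-123"
assert resolve_key() == "sk-test-123"
assert resolve_key(explicit="sk-explicit") == "sk-explicit"
```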
Raw responses
Sometimes it can be useful to output an exact string, for example when testing the --extract option in LLM.
If your prompt is JSON with a "raw" key, that string is the only thing that will be returned. For example:
{
  "raw": "This is the raw response"
}
Will return:
This is the raw response
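The behavior can be simulated in a few lines of plain Python (a sketch of the contract described above, not the plugin's actual source):

```python
import json


# Sketch: if the prompt parses as a JSON object with a "raw" key, return that
# string verbatim; otherwise echo the prompt details back as JSON (abbreviated).
def echo_respond(prompt: str) -> str:
    try:
        parsed = json.loads(prompt)
    except json.JSONDecodeError:
        parsed = None
    if isinstance(parsed, dict) and "raw" in parsed:
        return parsed["raw"]
    return json.dumps({"prompt": prompt, "system": "", "attachments": []})


assert echo_respond('{"raw": "This is the raw response"}') == "This is the raw response"
```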
Development
To set up this plugin locally, first check out the code. Then run the tests:
cd llm-echo
uv run pytest