
llm-echo


Debug plugin for LLM. Adds a model which echoes its input without hitting an API or executing a local LLM.

Installation

Install this plugin in the same environment as LLM:

llm install llm-echo

Usage

The plugin adds an echo model which simply echoes the prompt details back to you as JSON:

llm -m echo prompt -s 'system prompt'

Output:

{
  "prompt": "prompt",
  "system": "system prompt",
  "attachments": [],
  "stream": true,
  "previous": []
}

The model supports a single example option, example_bool, which you can set like this:

llm -m echo prompt -o example_bool 1

Output:

{
  "prompt": "prompt",
  "system": "",
  "attachments": [],
  "stream": true,
  "previous": [],
  "options": {
    "example_bool": true
  }
}
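Because the output is plain JSON, tests can parse it directly. Here is a minimal sketch that loads the example output shown above (in a real test you would capture the model's response text rather than hard-code it):

```python
import json

# Example output from: llm -m echo prompt -o example_bool 1
# (copied from the documentation above; a real test would capture this
# from the response instead of hard-coding it)
output = """
{
  "prompt": "prompt",
  "system": "",
  "attachments": [],
  "stream": true,
  "previous": [],
  "options": {
    "example_bool": true
  }
}
"""

data = json.loads(output)
assert data["prompt"] == "prompt"
assert data["options"]["example_bool"] is True
```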

Tool calling

You can use llm-echo to test tool calling without needing to run prompts through an actual LLM. In your prompt, send something like this:

{
  "prompt": "This will be treated as the prompt",
  "tool_calls": [
    {
      "name": "example",
      "arguments": {
        "input": "Hello, world!"
      }
    }
  ]
}

You can assemble a test that looks like this:

import json

import llm


def example(input: str) -> str:
    return f"Example output for {input}"


model = llm.get_model("echo")
chain_response = model.chain(
    json.dumps(
        {
            "tool_calls": [
                {
                    "name": "example",
                    "arguments": {"input": "test"},
                }
            ],
            "prompt": "prompt",
        }
    ),
    system="system",
    tools=[example],
)
responses = list(chain_response.responses())
tool_calls = responses[0].tool_calls()
assert tool_calls == [
    llm.ToolCall(name="example", arguments={"input": "test"}, tool_call_id=None)
]
assert responses[1].prompt.tool_results == [
    llm.models.ToolResult(
        name="example", output="Example output for test", tool_call_id=None
    )
]

Or you can read the JSON from the last response in the chain:

response_info = json.loads(responses[-1].text())

And run assertions against the "tool_results" key, which should look something like this:

{
  "prompt": "",
  "system": "",
  "...": "...",
  "tool_results": [
    {
      "name": "example",
      "output": "Example output for test",
      "tool_call_id": null
    }
  ]
}
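A sketch of the assertions you might write against that parsed structure, using the example values shown above (in a real test, response_info would come from json.loads(responses[-1].text())):

```python
import json

# Stand-in for json.loads(responses[-1].text()); the values are the
# example ones from the documentation above.
response_info = json.loads("""
{
  "prompt": "",
  "system": "",
  "tool_results": [
    {
      "name": "example",
      "output": "Example output for test",
      "tool_call_id": null
    }
  ]
}
""")

result = response_info["tool_results"][0]
assert result["name"] == "example"
assert result["output"] == "Example output for test"
assert result["tool_call_id"] is None
```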

Take a look at the test suite for llm-tools-simpleeval for an example of how to write tests against tools.

echo-needs-key model

The plugin also provides an echo-needs-key model which behaves identically to echo but requires an API key. This is useful for testing key resolution logic in plugins like datasette-llm.

The resolved key is included in the JSON output:

LLM_ECHO_NEEDS_KEY_KEY=sk-test-123 llm -m echo-needs-key 'hello'

Output:

{
  "prompt": "hello",
  "system": "",
  "attachments": [],
  "stream": true,
  "previous": [],
  "key": "sk-test-123"
}

The model's needs_key is "echo-needs-key" and its key_env_var is LLM_ECHO_NEEDS_KEY_KEY.

Raw responses

Sometimes it can be useful to output an exact string, for example if you are testing the --extract option in LLM.

If your prompt is JSON with a "raw" key, that string is the only thing that will be returned. For example:

{
  "raw": "This is the raw response"
}

Will return:

This is the raw response
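Building such a prompt from Python is just a json.dumps() call. As a sketch, here is a raw value containing a fenced code block, the kind of thing you might feed to the echo model when exercising extraction logic (the surrounding text is made up for illustration):

```python
import json

# A raw response wrapping a fenced code block - useful when testing
# code that extracts fenced blocks from model output.
raw = "Here you go:\n```python\nprint('hello')\n```\n"
prompt = json.dumps({"raw": raw})

# Passing `prompt` to the echo model returns exactly the `raw` string.
assert json.loads(prompt)["raw"] == raw
```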

Development

To set up this plugin locally, first check out the code. Then run the tests:

cd llm-echo
uv run pytest
