mcpx-eval
A framework for evaluating open-ended tool use across various large language models.
mcpx-eval can be used to compare the output of different LLMs given the same prompt for a task using mcp.run tools. This means we're interested not only in the quality of the output, but also in how helpful various models are when presented with real-world tools.
Test configs
The tests/ directory contains pre-defined evals.
Installation

```sh
uv tool install mcpx-eval
```

Or from git:

```sh
uv tool install git+https://github.com/dylibso/mcpx-eval
```

Or run it with uvx without installing:

```sh
uvx mcpx-eval
```
mcp.run Setup
You will need to get an mcp.run session ID by running:

```sh
npx --yes -p @dylibso/mcpx gen-session --write
```

This will generate a new session and write the session ID to a configuration file that mcpx-eval can use.

If you need to store the session ID in an environment variable instead, run gen-session without the --write flag:

```sh
npx --yes -p @dylibso/mcpx gen-session
```

which should output something like:

```
Login successful!
Session: kabA7w6qH58H7kKOQ5su4v3bX_CeFn4k.Y4l/s/9dQwkjv9r8t/xZFjsn2fkLzf+tkve89P1vKhQ
```

Then set the MCP_RUN_SESSION_ID environment variable:

```sh
export MCP_RUN_SESSION_ID=kabA7w6qH58H7kKOQ5su4v3bX_CeFn4k.Y4l/s/9dQwkjv9r8t/xZFjsn2fkLzf+tkve89P1vKhQ
```
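If a run fails to authenticate, one quick thing to check is whether the variable is actually visible to child processes of your shell. The snippet below is only a debugging aid, not part of mcpx-eval (which reads the session configuration itself):

```python
import os

# Debugging aid (not part of mcpx-eval): confirm the session ID exported
# above is visible to child processes of this shell. Only the length is
# printed so the secret never ends up in logs.
session_id = os.environ.get("MCP_RUN_SESSION_ID", "")
if session_id:
    print(f"MCP_RUN_SESSION_ID is set ({len(session_id)} characters)")
else:
    print("MCP_RUN_SESSION_ID is not set - run gen-session first")
```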
Usage
Run an eval comparing all mcp.run task runs for my-task:

```sh
mcpx-eval test --task my-task --task-run all
```

Only evaluate the latest task run:

```sh
mcpx-eval test --task my-task --task-run latest
```

Or trigger a new task run:

```sh
mcpx-eval test --task my-task --task-run new
```

Run an mcp.run task locally with a different set of models:

```sh
mcpx-eval test --model .. --model .. --task my-task --iter 10
```

Generate an HTML scoreboard for all evals:

```sh
mcpx-eval gen --html results.html --show
```
Test file
A test file is a TOML file containing the following fields:

- `name` - name of the test
- `task` - optional, the name of the mcp.run task to use
- `task-run` - optional, one of `latest`, `new`, `all`, or the name/index of the task run to analyze
- `prompt` - prompt to test; this is passed to the LLM under test and can be left blank if `task` is set
- `check` - prompt for the judge, used to determine the quality of the test output
- `expected-tools` - list of tool names that might be used
- `ignored-tools` - optional, list of tools to ignore; they will not be available to the LLM
- `import` - optional, includes fields from another test TOML file
- `vars` - optional, a dict of variables that will be used to format the prompt
File details
Details for the file mcpx_eval-0.4.3.tar.gz.
File metadata
- Download URL: mcpx_eval-0.4.3.tar.gz
- Upload date:
- Size: 26.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.7.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b8673844c48c4ca49247759c308ebb98cd4a745501b5c3582552c7e7723eb929 |
| MD5 | 7f7165f841d35df15b39a7d4cd0874e7 |
| BLAKE2b-256 | a3c535ae23bf361a332f992e7958710bb472af019581923ff93946c0ed8439c9 |
File details
Details for the file mcpx_eval-0.4.3-py3-none-any.whl.
File metadata
- Download URL: mcpx_eval-0.4.3-py3-none-any.whl
- Upload date:
- Size: 30.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.7.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4cb3f610d52a21fa92d1e97873a985bf9e6e00b803b934d1cf066ca18a61f39a |
| MD5 | 8c67f79d32fb52038083934f4c91ccc8 |
| BLAKE2b-256 | 814ae6c1970599414adf2de401710ab91fa2e58c4e56b316cb3bbafda95d1385 |