MFT - Manager for Tableau Custom Task Framework
Python framework for building custom workflow tasks that run inside Manager for Tableau.
Quick Start
1. Clone the Custom Task Template repository and open it in VS Code.
2. Reopen in Dev Container; the container ships with Python, the .NET runtime, and the `mft` package pre-installed.
3. Edit `src/task-meta.json` to define your task's inputs, outputs, and service dependencies.
4. Generate a test input file: `mft generate-input`
5. Fill in `dev-files/input.json` with test values (replace the `TODO:` markers).
6. Validate your input file: `mft validate-input`
7. Write your task logic in `main.py` inside the `run()` function.
8. Test in dev mode: `python main.py dev --server https://your-mft-server --token YOUR_TOKEN --server-id SERVER_ID --site SITE`
9. Package for upload: `mft package`
10. Upload the resulting `.mft` file to Manager for Tableau.
Installation
pip install mft-fortableau
The package includes the JsonConverter binary — no additional setup required.
Project Structure
```
my-custom-task/
├── .devcontainer/
│   ├── Dockerfile          # Python 3.13 + .NET 8 runtime + mft
│   └── devcontainer.json   # VS Code dev container config
├── src/
│   ├── task-meta.json      # Task definition (inputs, outputs, uses)
│   └── requirements.txt    # Your pip dependencies
├── dev-files/
│   └── input.json          # Generated test input (gitignored)
└── main.py                 # Task entry point
```
task-meta.json Reference
```json
{
  "display_name": "My Task",
  "description": "What this task does",
  "group": "Custom Tasks",
  "group_index": null,
  "major_version": 1,
  "version_description": null,
  "uses": ["RestApi", "Repository"],
  "inputs": [ ... ],
  "outputs": [ ... ]
}
```
Top-Level Fields
| Field | Type | Required | Description |
|---|---|---|---|
| `display_name` | string | Yes | Name shown in the workflow editor |
| `description` | string | No | Description of what the task does |
| `group` | string | No | Category group (default: "Custom Tasks") |
| `group_index` | integer/null | No | Sort order within the group (default: null) |
| `major_version` | integer | No | Task version number (default: 1) |
| `version_description` | string/null | No | Description of changes in this version (default: null) |
| `uses` | string[] | No | Services the task needs (see below) |
| `inputs` | object[] | No | Input parameter definitions |
| `outputs` | object[] | No | Output parameter definitions |
Versioning
When you change a task's inputs or outputs, or make a more significant logical change, increment `major_version` in `task-meta.json`. The new version is uploaded alongside existing versions: workflows that use the old version keep working, while new workflows can adopt the new one. Use `version_description` to document what changed.
Bug fixes that don't change the input/output schema keep the same major_version — delete the existing version in the Manager for Tableau UI first, then re-upload. Re-uploading without deleting first is rejected to prevent accidental overwrites.
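For example, publishing a schema change might look like this in `task-meta.json` (the field values here are illustrative, not taken from a real task):

```json
{
  "display_name": "My Task",
  "major_version": 2,
  "version_description": "Added optional 'tags' input; renamed output 'result' to 'summary'",
  "uses": ["RestApi"],
  "inputs": [ ... ],
  "outputs": [ ... ]
}
```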
Uses (Service Dependencies)
| Value | Description |
|---|---|
| `RestApi` | Tableau Server REST API via tableauserverclient |
| `Repository` | Read-only Tableau Repository (PostgreSQL) via psycopg2 |
Parameter Definition
Each entry in inputs or outputs:
```json
{
  "id": "myParam",
  "display_name": "My Parameter",
  "type": "String",
  "description": "Optional description",
  "required": true,
  "is_list": false,
  "enum_values": ["A", "B", "C"],
  "object_properties": [ ... ]
}
```
| Field | Type | Required | Description |
|---|---|---|---|
| `id` | string | Yes | Identifier (must match `^[a-zA-Z][a-zA-Z\d_]*$`) |
| `display_name` | string | Yes | Label shown in the UI |
| `type` | string | Yes | One of the 11 parameter types (see below) |
| `description` | string | No | Help text |
| `required` | boolean | No | Whether the parameter must be set (default: false) |
| `is_list` | boolean | No | Whether the parameter holds multiple values (default: false) |
| `enum_values` | string[] | Enum only | Allowed values for Enum type |
| `object_properties` | object[] | Object only | Nested parameter definitions for Object type |
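The `id` pattern above can be checked with a few lines of standard-library Python. This is a standalone sketch for illustration, not code from the mft package:

```python
import re

# Same pattern as the task-meta.json `id` field: a letter,
# followed by letters, digits, or underscores.
PARAM_ID_RE = re.compile(r"^[a-zA-Z][a-zA-Z\d_]*$")

def is_valid_param_id(param_id: str) -> bool:
    """Return True if param_id is a legal parameter identifier."""
    return PARAM_ID_RE.match(param_id) is not None
```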
Parameter Types
| Type | JSON Value | Python Type | Input Method | Output Method |
|---|---|---|---|---|
| String | `"hello"` | `str` | `get_string()` | `set_string()` |
| Integer | `42` | `int` | `get_integer()` | `set_integer()` |
| Double | `3.14` | `float` | `get_double()` | `set_double()` |
| Boolean | `true` | `bool` | `get_boolean()` | `set_boolean()` |
| Date | `"2025-01-15"` | `datetime.date` | `get_date()` | `set_date()` |
| DateTime | `"2025-01-15T10:30:00"` | `datetime.datetime` | `get_datetime()` | `set_datetime()` |
| TimeSpan | `"01:30:00"` | `datetime.timedelta` | `get_timespan()` | `set_timespan()` |
| Guid | `"550e8400-..."` | `uuid.UUID` | `get_guid()` | `set_guid()` |
| Enum | `"ValueA"` | `str` | `get_enum()` | `set_enum()` |
| Binary | `"base64..."` | `bytes` | `get_binary()` | `set_binary()` |
| Object | `{ ... }` | `ParameterObjectHandler` | `get_object()` | `set_object()` |
Every type also has a list variant: `get_string_list()` / `set_string_list()`, etc.
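The JSON wire forms in the table map directly onto standard-library parsing. The sketch below shows how such values could be decoded with stdlib calls; it is illustrative only and is not the mft converter itself:

```python
import datetime
import uuid

def parse_date(s: str) -> datetime.date:
    # "2025-01-15" -> datetime.date
    return datetime.date.fromisoformat(s)

def parse_datetime(s: str) -> datetime.datetime:
    # "2025-01-15T10:30:00" -> datetime.datetime
    return datetime.datetime.fromisoformat(s)

def parse_timespan(s: str) -> datetime.timedelta:
    # "01:30:00" (HH:MM:SS) -> datetime.timedelta
    hours, minutes, seconds = (int(part) for part in s.split(":"))
    return datetime.timedelta(hours=hours, minutes=minutes, seconds=seconds)

def parse_guid(s: str) -> uuid.UUID:
    # "550e8400-..." -> uuid.UUID
    return uuid.UUID(s)
```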
Writing Your Task
Basic Pattern
```python
from mft import MFT


def run(mft: MFT) -> None:
    # Read inputs
    name = mft.input.get_string("name")
    count = mft.input.get_integer("count")
    tags = mft.input.get_string_list("tags")

    # Your logic here
    result = f"Processed {name} with {count} items"

    # Write outputs
    mft.output.set_string("result", result)
    mft.output.set_integer("processedCount", count)


# --- Boilerplate (do not modify) ---
if __name__ == "__main__":
    try:
        task = MFT.init()
    except Exception as e:
        MFT.Err(str(e), error_id="InitError")
    try:
        run(task)
        MFT.Ok()
    except Exception as e:
        MFT.Err(str(e))
```
Using the Tableau REST API
Add `"RestApi"` to `uses` in `task-meta.json`, then:

```python
def run(mft: MFT) -> None:
    server = mft.tableau_api.server
    all_workbooks, pagination = server.workbooks.get()
    for wb in all_workbooks:
        print(f"{wb.name} (owner: {wb.owner_id})")
```

The `server` object is a connected `tableauserverclient.Server` instance (included with `mft`).
Using the Repository
Add `"Repository"` to `uses` in `task-meta.json`, then:

```python
def run(mft: MFT) -> None:
    site_id = mft.input.get_string("siteId")  # e.g. supplied as a task input

    rows = mft.repository.execute(
        "SELECT name, owner_name FROM workbooks WHERE site_id = %s",
        (site_id,),
    )
    for name, owner in rows:
        print(f"{name} owned by {owner}")

    # Or get results as dicts:
    dicts = mft.repository.execute_dict(
        "SELECT name, owner_name FROM workbooks LIMIT 10"
    )
```

psycopg2-binary is included with `mft`.
Working with Objects
For nested Object parameters, reading returns a `ParameterObjectHandler`; writing uses callback functions:

```python
# Reading a nested object
address = mft.input.get_object("address")
city = address.get_string("city")
zip_code = address.get_string("zipCode")

# Reading a list of objects
for item in mft.input.get_objects("items"):
    name = item.get_string("name")
    qty = item.get_integer("quantity")

# Writing a nested object
def write_address(obj):
    obj.set_string("city", "New York")
    obj.set_string("zipCode", "10001")

mft.output.set_object("address", write_address)

# Writing a list of objects
writers = []
for item in items:
    def make_writer(i):
        def writer(obj):
            obj.set_string("name", i["name"])
            obj.set_integer("quantity", i["qty"])
        return writer
    writers.append(make_writer(item))

mft.output.set_object_list("items", writers)
```
Dev Mode
Dev mode lets you test your task locally without deploying to Manager for Tableau.
Running in Dev Mode
```
python main.py dev \
  --server https://your-mft-server \
  --token YOUR_REFRESH_TOKEN \
  --server-id SERVER_ID \
  --site SITE_CONTENT_URL
```

Optional: specify a custom input file path:

```
python main.py dev --server ... --input-file ./my-input.json
```
What Happens in Dev Mode
1. `input.json` is validated against `task-meta.json` (type checks, format checks, required fields, leftover TODO markers).
2. `input.json` is converted from JSON to the binary parameter format.
3. Your `run()` function executes, reading inputs and writing outputs.
4. The binary output file is converted back to `output.json` for inspection.
5. A `status.json` file is written with the completion status.
If validation fails in step 1, the task exits immediately with a clear error listing every problem. This is the same validation that `mft validate-input` performs.
All files are written to `./dev-files/` by default.
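The kinds of checks step 1 performs can be illustrated with a small standalone validator. This is a simplified sketch of the idea (required fields, unknown keys, leftover TODO markers), not the mft package's actual implementation:

```python
def validate_input(meta: dict, data: dict) -> list:
    """Collect every problem instead of stopping at the first one."""
    problems = []
    declared = {p["id"] for p in meta.get("inputs", [])}

    # Required inputs must be present.
    for p in meta.get("inputs", []):
        if p.get("required") and p["id"] not in data:
            problems.append(f"missing required input: {p['id']}")

    # Unknown keys and leftover TODO markers.
    for key, value in data.items():
        if key not in declared:
            problems.append(f"unknown input: {key}")
        elif isinstance(value, str) and value.startswith("TODO:"):
            problems.append(f"leftover TODO marker in: {key}")
    return problems
```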
Token Persistence
The `--token` refresh token is single-use: each API call consumes it and returns a new one. The `mft` package handles this automatically by saving the rotated token to `dev-files/.mft_token` after each refresh. On subsequent runs with the same `--token` value, the persisted token is used, so you provide the token once and can re-run your task as many times as needed.
If you generate a new PAT in the MFT web UI, pass the new value via `--token` and the old persisted token is replaced automatically.
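The behaviour described above can be sketched as a pair of helper functions. Note this is an illustration of the pattern only: the on-disk format shown here is invented for the example, and the real logic lives inside the mft package:

```python
import json
from pathlib import Path

def resolve_token(cli_token: str, store: Path) -> str:
    """Return the token to use for this run.

    If the persisted entry was created from the same --token value,
    reuse the rotated token; a new --token value takes precedence.
    """
    if store.exists():
        saved = json.loads(store.read_text())
        if saved.get("original") == cli_token:
            return saved["current"]
    return cli_token

def persist_token(cli_token: str, new_token: str, store: Path) -> None:
    """Record the rotated token alongside the --token value that started the chain."""
    store.parent.mkdir(parents=True, exist_ok=True)
    store.write_text(json.dumps({"original": cli_token, "current": new_token}))
```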
Debugging with Logs
To enable MFT internal logs (token refreshes, API connections, warnings), add this at the top of your `main.py`:

```python
import logging

logging.basicConfig()
logging.getLogger("mft").setLevel(logging.DEBUG)
```
This will print debug output to the console without noise from third-party libraries.
Prod Mode
Prod mode is how tasks run when deployed to Manager for Tableau. You do not invoke prod mode yourself — the MFT workflow engine handles it automatically.
When your packaged .mft task is uploaded and triggered by a workflow, Manager for Tableau:
1. Starts a Docker container with your task code and `mft` pre-installed.
2. Sets environment variables with the runtime configuration.
3. Runs `python main.py` (without the `dev` argument), which activates prod mode.
How It Differs from Dev Mode
| | Dev Mode | Prod Mode |
|---|---|---|
| Activation | `python main.py dev --server ...` | `python main.py` (no arguments) |
| Configuration | CLI arguments | Environment variables |
| Input/Output | JSON files converted to/from binary | Binary parameter files used directly |
| Authentication | Refresh token (long-lived) | Action token (short-lived, set by engine) |
| Status output | Written to `./dev-files/status.json` + stdout | Written to `MFT_STATUS_FILE` (or stdout) |
Environment Variables (set by the engine)
| Variable | Description |
|---|---|
| `MFT_SERVER_URL` | Manager for Tableau server URL |
| `MFT_AUTH_TOKEN` | Short-lived action token for API calls |
| `MFT_SERVER_ID` | Tableau Server ID |
| `MFT_SITE_CONTENT_URL` | Tableau site content URL |
| `MFT_TASK_NODE_ID` | Workflow task node identifier |
| `MFT_JOB_ID` | Current workflow job identifier |
| `MFT_INPUT_FILE` | Path to the binary input parameter file (created by the engine before the task starts) |
| `MFT_OUTPUT_FILE` | Path where the task writes the binary output parameter file (read by the engine after completion) |
| `MFT_STATUS_FILE` | (optional) Path to write the status JSON |
Your task code does not need to read these variables directly: `MFT.init()` handles everything, and the same `run()` function works identically in both modes.
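For logging or troubleshooting you might still want to look at what the engine provided. A hedged sketch of collecting these variables (illustrative only; normal task code lets `MFT.init()` read them):

```python
import os

def read_engine_config(environ=os.environ) -> dict:
    """Collect the MFT_* runtime variables into a plain dict.

    Missing variables (e.g. the optional MFT_STATUS_FILE) come back as None.
    """
    keys = [
        "MFT_SERVER_URL", "MFT_AUTH_TOKEN", "MFT_SERVER_ID",
        "MFT_SITE_CONTENT_URL", "MFT_TASK_NODE_ID", "MFT_JOB_ID",
        "MFT_INPUT_FILE", "MFT_OUTPUT_FILE", "MFT_STATUS_FILE",
    ]
    return {key: environ.get(key) for key in keys}
```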
CLI Commands
mft validate-meta
Validate a task-meta.json file and print a summary.
```
mft validate-meta                          # auto-detect in ./src/ or ./
mft validate-meta path/to/task-meta.json   # explicit path
```
mft generate-input
Generate a sample input.json template from task metadata.
```
mft generate-input                                          # auto-detect meta, output to ./dev-files/input.json
mft generate-input path/to/task-meta.json ./my-input.json   # explicit paths
```
The generated file contains TODO: markers for string values and type-appropriate defaults for other types. A parameter guide is printed to stdout.
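Template generation boils down to picking a placeholder per parameter type. Roughly like the sketch below, where the specific placeholder values are invented for illustration and are not the CLI's actual output:

```python
# Placeholder per type: TODO markers for strings, type-appropriate
# defaults elsewhere (values here are illustrative).
DEFAULTS = {
    "String": "TODO: enter a value",
    "Integer": 0,
    "Double": 0.0,
    "Boolean": False,
    "Date": "2025-01-01",
    "DateTime": "2025-01-01T00:00:00",
    "TimeSpan": "00:00:00",
    "Guid": "00000000-0000-0000-0000-000000000000",
}

def generate_input(meta: dict) -> dict:
    """Build a sample input dict from task metadata."""
    sample = {}
    for param in meta.get("inputs", []):
        value = DEFAULTS.get(param["type"])
        sample[param["id"]] = [value] if param.get("is_list") else value
    return sample
```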
mft validate-input
Validate an input.json file against task metadata. Catches type mismatches, leftover TODO: markers, missing required fields, invalid formats (dates, GUIDs, etc.), unknown keys, and bad enum values -- all before the data reaches the converter or binary layer.
```
mft validate-input                                                # auto-detect meta, validate ./dev-files/input.json
mft validate-input ./dev-files/input.json                         # explicit input path, auto-detect meta
mft validate-input ./dev-files/input.json path/to/task-meta.json  # explicit both
```
Typical workflow:

```
mft generate-input    # creates ./dev-files/input.json with TODO markers
# edit input.json...
mft validate-input    # catches mistakes before running the task
```
Auto-validation in dev mode: MFT.init() automatically validates the input JSON before converting it to binary format. If validation fails, the task exits immediately with a clear error message listing every problem. This means you get the same validation whether you run mft validate-input manually or just start your task in dev mode -- bad input never reaches the converter.
mft package
Package the task into a .mft ZIP archive for upload to Manager for Tableau.
```
mft package               # filename derived from display_name
mft package my_task.mft   # custom filename
```
The archive contains the files the MFT engine needs to run the task:
- `task-meta.json`: task definition (always at archive root)
- `main.py`: entry point (from project root)
- `requirements.txt`: pip dependencies (from `src/` or project root)
- Additional `.py` files at the project root (helper modules)
- All files under `src/` except `task-meta.json` and `requirements.txt` (subdirectory modules)
Non-runtime directories (.devcontainer, dev-files, docs, __pycache__, .git) are excluded automatically.
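The inclusion rules can be restated as a small predicate. This is an illustrative restatement of the list above, under the assumption that paths are project-relative with `/` separators; it is not the packager's actual code:

```python
from pathlib import PurePosixPath

# Directories the packager skips, per the list above.
EXCLUDED_DIRS = {".devcontainer", "dev-files", "docs", "__pycache__", ".git"}

def should_package(relpath: str) -> bool:
    """Decide whether a project-relative file belongs in the .mft archive."""
    parts = PurePosixPath(relpath).parts
    if any(part in EXCLUDED_DIRS for part in parts):
        return False
    if len(parts) == 1:  # project root: main.py, helper .py files, requirements.txt
        return relpath == "requirements.txt" or relpath.endswith(".py")
    if parts[0] == "src":  # everything under src/ ships (some files are relocated)
        return True
    return False
```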
API Reference
MFT (class)
| Method | Description |
|---|---|
| `MFT.init(meta_path=None)` | Initialize the runtime; returns an `MFT` instance |
| `MFT.Ok(message="...")` | Signal success, write status, exit with code 0 |
| `MFT.Err(message, error_id="TaskError")` | Signal failure, write status, exit with code 1 |

| Property | Type | Description |
|---|---|---|
| `mft.input` | `Input` | Read-only input parameters |
| `mft.output` | `Output` | Writable output parameters |
| `mft.tableau_api` | `TableauApi` | Tableau REST API client (requires `RestApi` in `uses`) |
| `mft.repository` | `Repository` | Read-only Tableau Repository (requires `Repository` in `uses`) |
Input (read methods)
All methods take a param_id: str argument matching the id in task-meta.json.
| Method | Returns |
|---|---|
| `get_string(id)` | `str \| None` |
| `get_string_list(id)` | `list[str]` |
| `get_integer(id)` | `int \| None` |
| `get_integer_list(id)` | `list[int]` |
| `get_double(id)` | `float \| None` |
| `get_double_list(id)` | `list[float]` |
| `get_boolean(id)` | `bool \| None` |
| `get_boolean_list(id)` | `list[bool]` |
| `get_date(id)` | `datetime.date \| None` |
| `get_date_list(id)` | `list[datetime.date]` |
| `get_datetime(id)` | `datetime.datetime \| None` |
| `get_datetime_list(id)` | `list[datetime.datetime]` |
| `get_timespan(id)` | `datetime.timedelta \| None` |
| `get_timespan_list(id)` | `list[datetime.timedelta]` |
| `get_guid(id)` | `uuid.UUID \| None` |
| `get_guid_list(id)` | `list[uuid.UUID]` |
| `get_enum(id)` | `str \| None` |
| `get_enum_list(id)` | `list[str]` |
| `get_binary(id)` | `bytes \| None` |
| `get_binary_list(id)` | `list[bytes]` |
| `get_object(id)` | `ParameterObjectHandler \| None` |
| `get_objects(id)` | `Iterator[ParameterObjectHandler]` |
Output (write methods)
All methods take a param_id: str and the value to write.
| Method | Value Type |
|---|---|
| `set_string(id, value)` | `str` |
| `set_string_list(id, values)` | `list[str]` |
| `set_integer(id, value)` | `int` |
| `set_integer_list(id, values)` | `list[int]` |
| `set_double(id, value)` | `float` |
| `set_double_list(id, values)` | `list[float]` |
| `set_boolean(id, value)` | `bool` |
| `set_boolean_list(id, values)` | `list[bool]` |
| `set_date(id, value)` | `datetime.date` |
| `set_date_list(id, values)` | `list[datetime.date]` |
| `set_datetime(id, value)` | `datetime.datetime` |
| `set_datetime_list(id, values)` | `list[datetime.datetime]` |
| `set_timespan(id, value)` | `datetime.timedelta` |
| `set_timespan_list(id, values)` | `list[datetime.timedelta]` |
| `set_guid(id, value)` | `uuid.UUID` |
| `set_guid_list(id, values)` | `list[uuid.UUID]` |
| `set_enum(id, value)` | `str` |
| `set_enum_list(id, values)` | `list[str]` |
| `set_binary(id, value)` | `bytes` |
| `set_binary_list(id, values)` | `list[bytes]` |
| `set_object(id, writer)` | `Callable[[ParameterObjectHandler], None]` |
| `set_object_list(id, writers)` | `list[Callable[[ParameterObjectHandler], None]]` |
TableauApi
| Member | Description |
|---|---|
| `.server` | Connected `tableauserverclient.Server` instance |
| `.connect()` | Authenticate (called automatically) |
| `.disconnect()` | Sign out (called automatically on Ok/Err) |
Repository
| Member | Description |
|---|---|
| `.execute(sql, params=None)` | Execute query, return rows as `list[tuple]` |
| `.execute_dict(sql, params=None)` | Execute query, return rows as `list[dict]` |
| `.connection` | Raw psycopg2 connection |
| `.connect()` | Open connection (called automatically) |
| `.disconnect()` | Close connection (called automatically on Ok/Err) |