
Command Line Interface for cortexapps


Installation

pypi.org

pip install cortexapps-cli

Using a Python virtual environment:

VENV_DIR=~/.venv/cortex
python3 -m venv $VENV_DIR
source $VENV_DIR/bin/activate
pip install cortexapps-cli

homebrew

brew tap cortexapps/tap
brew install cortexapps-cli

docker

docker run -e CORTEX_API_KEY=<your API key> cortexapp/cli <Cortex CLI arguments>

Usage

Config file

The CLI requires an API key for all operations. This key is stored in a config file whose default location is ~/.cortex/config. This path can be overridden with the -c flag. You will be prompted to create the file if it does not exist.

Minimal contents of the file:

[default]
api_key = REPLACE_WITH_YOUR_CORTEX_API_KEY

If you have multiple Cortex instances, you can create a section for each, for example:

[default]
api_key = REPLACE_WITH_YOUR_CORTEX_API_KEY

[my-test]
api_key = REPLACE_WITH_YOUR_CORTEX_API_KEY
base_url = https://app.cortex.mycompany.com

NOTE: if not supplied, base_url defaults to https://api.getcortexapp.com.

The CLI will retrieve configuration data from the [default] section unless you pass the -t/--tenant flag.

For example, to list all entities in the my-test tenant, run the following command:

cortex -t my-test catalog list
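
For illustration, the tenant sections are standard INI sections; the following Python sketch shows how such a file resolves a tenant's settings (the helper is hypothetical, not part of the CLI, and the base_url fallback mirrors the default noted above):

```python
import configparser

# A config file shaped like the examples above; section names
# double as tenant names for the -t/--tenant flag.
SAMPLE = """\
[default]
api_key = KEY_FOR_DEFAULT

[my-test]
api_key = KEY_FOR_MY_TEST
base_url = https://app.cortex.mycompany.com
"""

def tenant_config(text, tenant="default"):
    # Hypothetical helper: resolve api_key/base_url for one tenant section.
    parser = configparser.ConfigParser()
    parser.read_string(text)
    section = parser[tenant]
    return {
        "api_key": section["api_key"],
        # base_url falls back to the Cortex cloud endpoint when absent
        "base_url": section.get("base_url", "https://api.getcortexapp.com"),
    }

print(tenant_config(SAMPLE, "my-test")["base_url"])
```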


Environment Variables

The CLI supports the following environment variables. If these are set, the Cortex config file is not read.

  • CORTEX_API_KEY

  • CORTEX_BASE_URL - this is optional if using Cortex cloud; defaults to https://api.getcortexapp.com

Example:

export CORTEX_API_KEY=<YOUR_API_KEY>

Commands

Run cortex to see a list of options and sub-commands.

Run cortex <subcommand> -h to see a list of all commands for each subcommand.

Examples

Almost all CLI responses return JSON or YAML. Tools like jq and yq will be helpful to extract content from these responses.
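
As a point of reference, the kind of extraction jq performs can be mirrored in Python; a minimal sketch, assuming a catalog response shaped like the `.entities[].tag` filter used in later examples:

```python
import json

# Trimmed stand-in for a `cortex catalog list` response; the real
# response contains more fields per entity.
response = json.loads("""
{
  "entities": [
    {"tag": "my-service-1", "type": "service"},
    {"tag": "my-service-2", "type": "service"}
  ]
}
""")

# Equivalent of: jq -r ".entities[].tag"
tags = [entity["tag"] for entity in response["entities"]]
print("\n".join(tags))
```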

Export from one tenant; import into another

This example shows how to export from a tenant named myTenant-dev and import those contents into a tenant named myTenant.

Your Cortex config file requires API keys for both tenants. It would look like this:

[myTenant]
api_key = <your API Key for myTenant>

[myTenant-dev]
api_key = <your API Key for myTenant-dev>

Export

cortex -t myTenant-dev backup export
Getting catalog
-->  my-domain-1
-->  my-service-1
-->  my-service-2
Getting entity-types
-->  my-entity-type-1
Getting ip-allowlist
--> ip-allowlist
Getting plugins
--> my-plugin-1
Getting scorecards
-->  my-scorecard-1
Getting workflows
-->  my-workflow-1

Export complete!
Contents available in /Users/myUser/.cortex/export/2025-06-12-14-58-14

Import

cortex -t myTenant backup import -d <directory created by export>

NOTE: some content will not be exported, including integration configurations and resources that are automatically imported by Cortex. Because Cortex does not have access to any integration keys, it cannot export integration configurations.

Iterate over all domains

for domain in `cortex catalog list -t domain --csv -C tag --sort tag:asc`; do echo "domain = $domain"; done

Iterate over all teams

NOTE: as of June 2025, requires a feature flag enabled to return team entities in the catalog API. Work with your CSM if you need assistance.

for team in `cortex catalog list -t team --csv -C tag --sort tag:asc`; do echo "team = $team"; done

Iterate over all services

for service in `cortex catalog list -t service --csv -C tag --sort tag:asc`; do echo "service = $service"; done

Get git details for a service

cortex catalog details -t my-service-1 | jq ".git"
{
  "repository": "my-org/my-service-1",
  "alias": null,
  "basepath": null,
  "provider": "github"
}

Add a suffix to all x-cortex-tag values for services

for service in `cortex catalog list -t service --csv -C tag --sort tag:asc`; do
   cortex catalog descriptor -y -t ${service} | yq '.info.x-cortex-tag |= . + "-suffix"' | cortex catalog create -f-
done

This example combines several CLI commands:

  • the for loop iterates over all services

  • the descriptor for each service is retrieved in YAML format

  • the YAML descriptor is piped to yq where the value of x-cortex-tag is retrieved and modified to add “-suffix” to the end

  • the modified YAML is then piped to the cortex catalog command to update the entity in cortex

NOTE: Any cortex commands that accept a file as input can also receive input from stdin by specifying a “-” after the -f parameter.

Add a group to all domains

for domain in `cortex catalog list -t domain | jq -r ".entities[].tag" | sort`; do
   cortex catalog descriptor -y -t ${domain} | yq -e '.info.x-cortex-groups += [ "my-new-group" ]' | cortex catalog create -f-
done

Remove a group from domains

for domain in `cortex catalog list -t domain --csv -C tag --sort tag:asc`; do
   cortex catalog descriptor -y -t ${domain} | yq -e '.info.x-cortex-groups -= [ "my-old-group" ]' | cortex catalog create -f-
done

Add a domain parent to a single service

cortex catalog descriptor -y -t my-service | yq -e '.info.x-cortex-domain-parents += { "tag": "my-new-domain" }' | cortex catalog create -f-

Add a github group as an owner to a service

cortex catalog descriptor -y -t my-service | yq -e '.info.x-cortex-owners += { "name": "my-org/my-team", "type": "GROUP", "provider": "GITHUB" }' | cortex catalog create -f-

Modify all github basepath values for domain entities, changing '-' to '_'

for domain in `cortex catalog list -t domain --csv -C tag --sort tag:asc`; do
   cortex catalog descriptor -y -t ${domain} | yq ".info.x-cortex-git.github.basepath |= sub(\"-\", \"_\")" | cortex catalog create -f-
done

Modify deploys based on selection criteria

This example fixes a typo in the deployment environment field, changing PYPI.org to PyPI.org.

It loops over each selected array element based on the search criteria, removes the uuid attribute (because it is not included in the payload), sets the environment attribute to the correct value, and invokes the CLI with that input.

cortex deploys list -t cli > /tmp/deploys.json
for uuid in `cat /tmp/deploys.json | jq -r '.deployments[] | select(.environment=="PYPI.org") | .uuid'`
do
   cat /tmp/deploys.json | jq ".deployments[] | select (.uuid==\"${uuid}\") | del(.uuid) | .environment = \"PyPI.org\"" | cortex deploys update-by-uuid -t cli -u ${uuid} -f-
done

Create a backup of all scorecards

for tag in `cortex scorecards list --csv -C tag`
do
   echo "backing up: ${tag}"
   cortex scorecards descriptor -t ${tag} > ${tag}.yaml
done

Create a copy of all scorecards in draft mode

This recipe creates a draft scorecard for every existing scorecard. Each copy's tag gets a "-draft" suffix, and " Draft" is appended to the existing title.

for tag in `cortex scorecards list --csv -C tag`
do
   cortex scorecards descriptor -t ${tag} | yq '.draft = true | .tag += "-draft" | .name += " Draft"' | cortex scorecards create -f-
done

Replace scorecards with draft versions and delete the draft versions

This recipe is a companion to the one above: it replaces the scorecards from which the drafts were created and then deletes the drafts.

for tag in `cortex scorecards list --csv -C tag --filter tag=.*-draft`
do
   cortex scorecards descriptor -t ${tag} | yq '.draft = false | .tag |= sub("-draft","") | .name |= sub(" Draft", "")' | cortex scorecards create -f- && cortex scorecards delete -t ${tag}
done

Get draft scorecards, change draft to false and save on disk

This recipe is similar to the one above, but it does not create a new scorecard in Cortex. Rather, it makes the changes and saves to a file.

for tag in `cortex scorecards list --csv -C tag --filter tag=.*-draft`
do
   cortex scorecards descriptor -t ${tag} | yq '.draft = false | .tag |= sub("-draft","") | .name |= sub(" Draft", "")' > ${tag}.yaml
done

Delete all draft scorecards

WARNING: This recipe will delete all draft scorecards.

for tag in `cortex scorecards list -s | jq -r ".scorecards[].tag"`
do
   cortex scorecards delete -t ${tag}
done

If you only want to delete some drafts, for example if you followed a recipe that creates draft versions of all existing scorecards, you will likely want to run this instead:

for tag in `cortex scorecards list -s | jq -r ".scorecards[].tag" | grep "\-draft$"`
do
   cortex scorecards delete -t ${tag}
done

Compare scorecard scores and levels for two scorecards

This can be helpful when changing CQL rules (for example, migrating from CQL v1 to CQL v2) to ensure that scorecards produce the same results.

The following command gets all scores for a scorecard, pipes the JSON output to jq, and filters it to create a CSV file of the form:

service,ladderLevel,score

cortex scorecards scores -t myScorecard | jq -r '.serviceScores[] | [ .service.tag, .score.ladderLevels[].level.name // "noLevel", .score.summary.score|tostring] | join(",")' | sort > /tmp/scorecard-output.csv

Run this command for two different scorecards, then diff the CSV files to compare results:

export SCORECARD=scorecard1
cortex scorecards scores -t ${SCORECARD} | jq -r '.serviceScores[] | [ .service.tag, .score.ladderLevels[].level.name // "noLevel", .score.summary.score|tostring] | join(",")' | sort > /tmp/${SCORECARD}.csv

export SCORECARD=scorecard2
cortex scorecards scores -t ${SCORECARD} | jq -r '.serviceScores[] | [ .service.tag, .score.ladderLevels[].level.name // "noLevel", .score.summary.score|tostring] | join(",")' | sort > /tmp/${SCORECARD}.csv

sdiff -s /tmp/scorecard1.csv /tmp/scorecard2.csv
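
If sdiff is not available, the same comparison can be scripted; a small Python sketch that diffs two such CSV files by tag (the three-column layout matches the jq output above; file paths and contents are hypothetical):

```python
import csv

def load_scores(path):
    # Each row: tag, ladder level, score (as produced by the jq filter)
    with open(path, newline="") as f:
        return {row[0]: row[1:] for row in csv.reader(f) if row}

def diff_scores(path_a, path_b):
    # Return {tag: (columns_in_a, columns_in_b)} for rows that differ;
    # a tag missing on one side shows up as None on that side.
    a, b = load_scores(path_a), load_scores(path_b)
    return {
        tag: (a.get(tag), b.get(tag))
        for tag in sorted(set(a) | set(b))
        if a.get(tag) != b.get(tag)
    }
```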

Add provider for all group type owners where provider is not listed

This recipe adds the value of the provider variable to any owner whose type is GROUP and whose provider field is not set. This recipe can be used to address this issue from the Cortex release notes: Starting July 2nd (2024), any group type owners in the x-cortex-owners section of an entity descriptor will require a provider to be explicitly defined.

Adjust the value of provider accordingly. It must be one of the providers listed in our public docs.

This recipe does the following:

  • It runs the Cortex query as documented in the release notes to find all group type owners where the provider is not defined. The cortex queries parameter -f- indicates that the query input comes from stdin, provided by the here document (the content provided between the two ‘EOF’ delimiters).

  • The recipe waits 10 minutes (denoted by parameter -x 600) for the query to complete.

  • It loops over the results of the Cortex query, adding the provider listed in the provider variable for any group owner where the provider is not defined in the entity.

  • The contents of the entity descriptor are changed using yq and then passed as stdin to the cortex catalog subcommand to update the entity.

provider="GITHUB"
query_output="query.json"

cortex queries run -f- -w -x 600 > ${query_output} << EOF
jq(entity.descriptor(), "[.info.\"x-cortex-owners\" | .[] | select(.type | ascii_downcase == \"group\") | select(.provider == null)] | length") > 0
EOF

for entity in `cat ${query_output} | jq -r ".result[].tag"`
do
   echo "entity = $entity"
   cortex catalog descriptor -y -t ${entity} | yq "with(.info.x-cortex-owners[]; select(.type | downcase == \"group\") | select(.provider == null) | .provider = \"${provider}\" )" | cortex catalog create -f-
done

Export all workflows

This recipe creates YAML files for each Workflow. This may be helpful if you are considering enabling GitOps for Workflows and you want to export current Workflows as a starting point.

for workflow in `cortex workflows list --csv --no-headers --columns tag`
do
   echo "workflow = $workflow"
   cortex workflows get --tag $workflow --yaml > $workflow.yaml
done

Obfuscating a Cortex export

This script will obfuscate a Cortex backup. This can be helpful for on-premise customers who may need to provide data to Cortex to help identify performance or usability issues.

# Works off an existing cortex CLI backup.
# - Create a backup with cortex CLI command: cortex backup export -z 10000
set -e
input=$1
output=$2

all_file=${output}/all.yaml
obfuscated_file=${output}/obfuscated.yaml

echo "Output directory: ${output}"
translate_file="${output}/translate.csv"

if [ ! -d ${output} ]; then
   mkdir -p ${output}
fi

for yaml in `ls -1 ${input}/catalog/*`
do
   entity=$(yq ".info.x-cortex-tag" ${yaml})
   new_entity=$(echo ${entity} | md5sum | cut -d' ' -f 1)
   echo "${entity},${new_entity}" >> ${translate_file}
   echo "Creating: $new_entity"
   cat ${yaml} |\
      yq ".info.\"x-cortex-tag\" = \"${new_entity}\" | \
          .info.title=\"${new_entity}\" | \
          del(.info.description) | \
          del(.info.\"x-cortex-link\") | \
          del(.info.\"x-cortex-links\") | \
          del(.info.\"x-cortex-groups\") | \
          del(.info.\"x-cortex-custom-metadata\") | \
          del(.info.\"x-cortex-issues\") | \
          del(.info.\"x-cortex-git\") | \
          del(.info.\"x-cortex-slack\") | \
          del(.info.\"x-cortex-oncall\") | \
          with(.info; \
             select(.\"x-cortex-team\".members != null) | .\"x-cortex-team\".members = {\"name\": \"Cortex User\", \"email\": \"user@example.com\"} \
              )" >> ${all_file}
   echo "---" >> ${all_file}
done

# Translate the longer strings first so a shorter tag is not replaced inside a longer tag that contains it as a prefix
cat ${translate_file} | sort -r > ${translate_file}.tmp && echo "entity,new_entity" > ${translate_file} && cat ${translate_file}.tmp >> ${translate_file} && rm ${translate_file}.tmp

python3 - ${all_file} ${translate_file} ${obfuscated_file} << EOF
import csv
import re
import sys

yaml_file = sys.argv[1]
translate_file = sys.argv[2]
output = sys.argv[3]

with open(yaml_file, 'r') as f:
    contents = f.read()  # read the entire file as a string

with open(translate_file, newline='') as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        entity = row['entity']
        new_entity = row['new_entity']
        print("entity = " + entity + ", new_entity = " + new_entity)
        contents = contents.replace("tag: " + entity, "tag: " + new_entity)
        contents = contents.replace("name: " + entity, "name: " + new_entity)

with open(output, "w") as f:
    f.write(contents)
EOF

# change all email addresses
sed -i 's/email:.*/email: user@example.com/' ${obfuscated_file}

# change all slack channel names
sed -i 's/channel:.*/channel: my-slack-channel/' ${obfuscated_file}

# copy export directory to new directory, without catalog YAML
rsync -av --exclude='catalog' ${input}/ ${output}
mkdir -p ${output}/catalog

# now split single file into multiple that can be passed as parameter to cortex catalog create -f
cd ${output}/catalog
yq --no-doc -s '"file_" + $index' ${obfuscated_file}

# tar it up
tar_file=$(basename ${output}).tar
cd ${output}
rm ${all_file}
rm ${translate_file}
tar -cvf ${tar_file} ./*

echo "Created: ${output}/${tar_file}"
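
A note on the reverse sort of the translate file above: it exists because one tag can be a prefix of another. A short Python illustration of the failure mode it prevents (the tags are hypothetical):

```python
# One tag is a prefix of the other, as can happen with real entity tags.
text = "tag: my-service-1\ntag: my-service-10"
mapping = [("my-service-1", "aaa1"), ("my-service-10", "bbb2")]

# Wrong order: the shorter tag matches inside the longer one first,
# so "my-service-10" becomes "aaa10" and is never translated to "bbb2".
wrong = text
for old, new in mapping:
    wrong = wrong.replace(old, new)

# Right order: longest first (what the reverse sort achieves).
right = text
for old, new in sorted(mapping, key=lambda p: len(p[0]), reverse=True):
    right = right.replace(old, new)

print(wrong)  # tag: aaa1 / tag: aaa10 -- corrupted
print(right)  # tag: aaa1 / tag: bbb2
```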
