
SAP Computer Vision Package

This package helps with the implementation of Computer Vision use cases on top of AI Core. It extends detectron2, a state-of-the-art library for object detection and image segmentation. Our package adds image classification and feature extraction (e.g., for image retrieval) capabilities. For fast development of Computer Vision solutions, the package offers training and evaluation methods and other helpful components, such as a large set of augmentation functions. The package can also be used stand-alone without AI Core, and AI Core integration can be added to the project later.

The functionalities of the package can be used on AI Core without any programming. For this purpose the package works with the ai-core-sdk to provide a command line interface that creates AI Core templates for training and serving. In our experience this reduces the time to implement a Computer Vision use case on AI Core from several days to several hours.

Supported use-cases

  • Object Detection
  • Image Classification
  • Image Feature Extraction
  • Model Training and Deployment on SAP AI Core



Installation

Before installation, make sure that PyTorch and detectron2 are installed. Details on how to install PyTorch can be found in the PyTorch documentation. After the installation of PyTorch, the matching version of detectron2 has to be installed. Please check the detectron2 installation guide to select the proper version. The package is tested with detectron2==0.6.

Mac OS

On macOS the following commands can be used to install both:

pip install torch==1.10 torchvision
pip install 'git+https://github.com/facebookresearch/detectron2.git'


Linux

For Linux, pre-builds of detectron2 are available:

pip install torch==1.10 torchvision
pip install detectron2 -f <detectron2-wheel-index-url>

Make sure to select the URL matching your torch version and, when GPU support is needed, your CUDA version. Details can be found in the detectron2 installation guide.
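The pre-built wheel index URLs followed a simple pattern around detectron2 0.6; the sketch below assembles such a URL from a torch version and an optional CUDA version. The pattern is an assumption based on that release era, so verify it against the current detectron2 installation guide before relying on it.

```python
from typing import Optional

def detectron2_wheel_index(torch_version: str, cuda: Optional[str] = None) -> str:
    """Sketch: build a detectron2 pre-built wheel index URL (pattern as of ~v0.6;
    verify against the current detectron2 installation guide)."""
    compute = f"cu{cuda.replace('.', '')}" if cuda else "cpu"
    torch_minor = ".".join(torch_version.split(".")[:2])  # e.g. "1.10.2" -> "1.10"
    return (
        "https://dl.fbaipublicfiles.com/detectron2/wheels/"
        f"{compute}/torch{torch_minor}/index.html"
    )

print(detectron2_wheel_index("1.10", "11.3"))
# https://dl.fbaipublicfiles.com/detectron2/wheels/cu113/torch1.10/index.html
```

The resulting URL is what gets passed to pip via the -f option shown above.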

Installation from Source

When building from source, the standard setup file is normally the suitable choice: it skips building the model serving binary locally, which only works on Linux systems. The binary is only needed in the Docker images used to serve models.

To include local code changes in the installation run:

python setup.py develop

This is similar to pip install -e ., except that the setup file is invoked directly instead of going through pip.

Installation using pip

To install this package from pypi run:

pip install sap-computer-vision-package
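After installation, a quick stdlib-only check (no heavyweight imports) can report which of the relevant distributions are present in the environment:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(dist: str) -> str:
    """Return the installed version of a distribution, or 'not installed'."""
    try:
        return version(dist)
    except PackageNotFoundError:
        return "not installed"

# Report the pieces this package depends on, without importing them.
for dist in ("torch", "detectron2", "sap-computer-vision-package"):
    print(dist, installed_version(dist))
```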

Getting Started

Using the Python Library

If you are interested in using our package as a simple extension to detectron2, the Python library can be used directly without any AI Core setup.

Using the Package on AI Core

The ai-core-sdk package provides an interface to discover and access content packages like the sap-computer-vision-package.

Install SAP AI Core SDK

pip install "ai-core-sdk[aicore-content]"

Before testing the pipelines on AI Core, make sure that the items in the following checklist are fulfilled.

AI Core Checklist

Configure AWS credentials and metaflow

When templates are created, metaflow pushes tarballs to the S3 bucket; those tarballs are loaded during pipeline execution. For this to work, metaflow needs write permissions to the S3 bucket onboarded to AI Core, and metaflow has to be configured to use this bucket as its datastore.

Details on how to configure an AWS profile can be found in the AWS CLI documentation. In order to enable metaflow to copy the tarballs into the bucket, the awscli must not ask for a password when starting a copy process. To achieve this, either give the default profile permission to access the bucket or run export AWS_PROFILE=<profile-with-bucket-access> before creating templates.
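For example (the profile name below is hypothetical; substitute whatever profile has write access to the onboarded bucket):

```shell
# Hypothetical profile name; replace with your profile that can write to the bucket.
export AWS_PROFILE=profile-with-bucket-access

# Optional sanity check (uncomment and fill in your bucket); it must list
# contents without prompting for credentials:
# aws s3 ls s3://<your-bucket>/
```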

Full documentation on how to configure metaflow can be found in the metaflow documentation. Only S3 needs to be configured as the storage backend; the configuration for AWS Batch is not needed. A minimal configuration file (~/.metaflowconfig/config.json) looks like this:

{
    "METAFLOW_DATASTORE_SYSROOT_S3": "<path-in-bucket>",
    "METAFLOW_DATATOOLS_SYSROOT_S3": "<path-in-bucket>/data"
}
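The file can also be generated from Python. A minimal sketch: the bucket path is a placeholder, and the METAFLOW_DEFAULT_DATASTORE entry is a standard metaflow setting (selecting S3 as the datastore backend) added here as an assumption beyond the two keys shown above:

```python
import json
from pathlib import Path

def minimal_metaflow_config(path_in_bucket: str) -> dict:
    """Build a minimal metaflow config dict for an S3 datastore."""
    return {
        # Standard metaflow key selecting S3 as the datastore backend (assumption).
        "METAFLOW_DEFAULT_DATASTORE": "s3",
        "METAFLOW_DATASTORE_SYSROOT_S3": path_in_bucket,
        "METAFLOW_DATATOOLS_SYSROOT_S3": f"{path_in_bucket}/data",
    }

# Placeholder path; replace with the bucket path onboarded to AI Core.
config = minimal_metaflow_config("s3://my-onboarded-bucket/metaflow")
target = Path("~/.metaflowconfig/config.json").expanduser()
print(json.dumps(config, indent=4))  # write with target.write_text(...) once verified
```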

Basic Usage

To show all available templates run aicore-content list sap-cv.

The command aicore-content show sap-cv <pipeline_name> shows detailed information about a specific pipeline and its parameters.

The training pipelines are templates for AI Core executions. To run one under your tenant, you need the template and the matching Docker image:

Build Docker image:
  • Python: workflow.create_image(workflow_config)
  • CLI: aicore-content create-image -p sap-cv -w object-detection-train <workflow_config_file>

and push it using docker push <tag/docker-image-target>

Create Templates:
  • Python: workflow.create_template(workflow_config, out_file)

  • CLI: aicore-content create-template -p sap-cv -w object-detection-train <workflow_config> -o <out_file>

The template contains several tenant-specific entries, like imagePullSecrets. Please adjust them by hand or use a pipeline config YAML (see the section below).

The template has to be pushed into the onboarded Git repository (consult the AI Core documentation to set it up) and the container image to the onboarded Docker registry.

Templates are built with metaflow, using a plugin that creates Argo templates. Make sure that a metaflow version providing the Argo plugin is installed (use the sap-ai-core-metaflow package) and that the storage is configured correctly (see the section "Configure AWS credentials and metaflow").

Workflow Config .yaml

Tenant-specific values for the template can be provided through additional CLI options. For more information execute aicore-content create-template -p sap-cv -w <workflow_name> --help. To simplify the command and make the creation of the template traceable in git, it is possible to use a .yaml file containing the values.


.contentPackage: sap-cv
.workflow: object-detection-train
name: "your-pipeline-name"  # needs to be unique
labels:
  scenarios.ai.sap.com/id: "my-scenario-id"
  ai.sap.com/version: "0.0.1"
annotations:
  scenarios.ai.sap.com/name: "my-scenario-name"
  executables.ai.sap.com/description: "Description of executable"
  executables.ai.sap.com/name: "my-executable-name"
  artifacts.ai.sap.com/datain.kind: "dataset"  # exact artifact key depends on the workflow
image: "docker-registry/docker-repository:tag"
imagePullSecret: "my-docker-registry-secret"
objectStoreSecret: "default-object-store-secret"

To use the workflow config during the creation process, pass its path to the create-image and create-template subcommands of the aicore-content CLI, e.g.

aicore-content create-template <workflow_config> -o <out_file>

Common Issues

Impossible to have multiple templates for the same pipeline in a tenant.

The name of the executable specified in the template has to be unique. To overwrite the default name of a pipeline, use the --name option when creating the template: aicore-content create-template -p sap-cv -w <pipeline_name> -o <out_file>.json --name=<executable-name>.

Template creation gets stuck without an error.

When the template creation process gets stuck in this step:

$ aicore-content create-template -p sap-cv -w batch_processing <workflow_config> -o test.json
Metaflow 2.4.4 executing BatchProcessing for user:I545048
Validating your flow...
    The graph looks good!
Running pylint...
    Pylint is happy!
Deploying batchprocessing to Argo Workflow Templates...

it is most likely that the permissions to write to the bucket are missing. Make sure to select the correct AWS profile by running export AWS_PROFILE=<profile-with-bucket-access>. More details can be found in the section "Configure AWS credentials and metaflow".

Giving Feedback and Reporting Bugs

If you are an SAP customer you can give feedback or report bugs by creating an incident via the SAP ONE Support Launchpad using the component ID "CA-ML-CV".

If you are not an SAP customer yet, you can give feedback or report bugs by registering with SAP Community and asking a question using the tag "SAP AI Core" in the field "SAP Managed Tags".


License

This package is distributed under the SAP Developers License; see the LICENSE file in the package. The package uses several third-party open source components; please see the DISCLAIMER file for more details on their licensing.



When you build the template Docker images, a third-party base image and several additional open source components are loaded. Please refer to the license of the PyTorch base image at the bottom of their homepage. As with all Docker images, the image likely also contains other software which may be under other licenses (such as Bash, etc. from the base distribution, along with any direct or indirect dependencies of the primary software being contained). The list of additional open source components can be found in the requirements.txt file in the package. Please refer to the individual documentation of these components for more details on their licenses. When using pretrained models, the weights are loaded either from detectron2 or timm, where you can find more information on their licenses.

Note that you are responsible for checking and accepting the license terms for all the above-mentioned third-party components as part of the build and training process.
