
Content package for DataRobot integration for SAP AI Core

Project description


Objective

A content package to deploy DataRobot workflows in AI Core.

User Guide

1. Deploy a DataRobot model in AI Core

Prerequisites

  1. Complete AI Core Onboarding
  2. Access to the Git repository, Docker repository and S3 bucket onboarded to AI Core
  3. You have a registered DataRobot account, have trained a model, downloaded it from DataRobot, and stored it in the object store configured with AI Core (an upload sketch follows this list).
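
For the third prerequisite, the exported model can be copied to the S3 bucket with any S3-capable client. A minimal sketch using the AWS CLI; the bucket name and key prefix are placeholders and must match the artifact url registered later in this guide:

# Upload the DataRobot model JAR to the object store onboarded to AI Core.
# <YOUR BUCKET> and <YOUR MODEL PATH> are assumptions; align them with the artifact url below.
aws s3 cp model.jar s3://<YOUR BUCKET>/<YOUR MODEL PATH>/model.jar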

Steps

  1. Install AI Core SDK
pip install "ai-core-sdk[aicore_content]"
  2. Install this content package
pip install sap-ai-core-datarobot
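
To verify the installation, the content packages known to the CLI can be listed; if the install succeeded, sap_datarobot should appear in the output:

# List available content packages and their workflows.
aicore-content list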
  3. Create a config file named model_serving_config.yaml with the following content.
.contentPackage: sap_datarobot
.workflow: model-jar-serving
.dockerType: default
name: <YOUR SERVING TEMPLATE NAME>
labels:
  scenarios.ai.sap.com/id: "<YOUR SCENARIO ID>"
  ai.sap.com/version: "<YOUR SCENARIO VERSION>"
annotations:
  scenarios.ai.sap.com/name: "<YOUR SCENARIO NAME>"
  executables.ai.sap.com/name: "<YOUR EXECUTABLE NAME>"
  executables.ai.sap.com/description: "<YOUR EXECUTABLE DESCRIPTION>"
  scenarios.ai.sap.com/description: "<YOUR SCENARIO DESCRIPTION>"
image: <YOUR DOCKER IMAGE TAG>
imagePullSecret: <YOUR DOCKER IMAGE PULL SECRET NAME>
  4. Fill in the desired values in the config file. An example config file is shown below.
.contentPackage: sap_datarobot
.workflow: model-jar-serving
.dockerType: default
name: datarobot-model-serving
labels:
  scenarios.ai.sap.com/id: "00db4197-1538-4640-9ea9-44731041ed88"
  ai.sap.com/version: "0.0.1"
annotations:
  scenarios.ai.sap.com/name: "my-datarobot-scenario"
  executables.ai.sap.com/name: "datarobot-model-serving"
  executables.ai.sap.com/description: "datarobot model serving"
  scenarios.ai.sap.com/description: "my datarobot scenario"
image: docker.io/<YOUR_DOCKER_USERNAME>/model-serve:1.0
imagePullSecret: my-docker-secret
  5. Generate a Docker image
aicore-content create-image -p sap_datarobot -w model-jar-serving model_serving_config.yaml
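
create-image builds the image with the local Docker daemon. A quick check that the tagged image exists before pushing; the repository name here follows the example config above:

# List the locally built image (repository/tag taken from the example config).
docker images docker.io/<YOUR_DOCKER_USERNAME>/model-serve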
  6. Generate a serving template
aicore-content create-template -p sap_datarobot -w model-jar-serving model_serving_config.yaml -o './model-serving-template.yaml'
  7. Push the Docker image to your Docker repository
docker push docker.io/<YOUR_DOCKER_USERNAME>/model-serve:1.0
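
If the shell is not yet authenticated against the registry, log in first; docker.io and the username are the placeholders from the example config:

# Authenticate with the registry before pushing (skip if already logged in).
docker login docker.io -u <YOUR_DOCKER_USERNAME>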
  8. Push the serving template to your Git repository
cd <repo_path>
git add <path_within_repo>/model-serving-template.yaml
git commit -m 'updated template model-serving-template.yaml'
git push
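
AI Core only picks up templates from Git repositories that have been registered as applications. If this repository path is not yet registered, it can be registered once via the applications API; a sketch assuming the standard /v2/admin/applications endpoint and a client credentials token as obtained in the next step, with all values placeholders:

# Register the Git repository path as an AI Core application (one-time setup).
curl --location --request POST '<YOUR AI CORE URL>/v2/admin/applications' \
--header 'Authorization: Bearer <CLIENT CREDENTIALS TOKEN>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "applicationName": "my-datarobot-app",
    "repositoryUrl": "<YOUR GIT REPO URL>",
    "revision": "HEAD",
    "path": "<path_within_repo>"
}'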
  9. Obtain a client credentials token for AI Core
curl --location '<YOUR AI CORE AUTH ENDPOINT URL>/oauth/token?grant_type=client_credentials' --header 'Authorization: Basic <YOUR AI CORE CREDENTIALS>'
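
The JSON response carries the token in its access_token field. With jq available, the token can be captured into a shell variable and reused in the calls below:

# Capture the bearer token for reuse (assumes jq is installed).
TOKEN=$(curl --location '<YOUR AI CORE AUTH ENDPOINT URL>/oauth/token?grant_type=client_credentials' \
--header 'Authorization: Basic <YOUR AI CORE CREDENTIALS>' | jq -r '.access_token')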
  10. Create an artifact that references the DataRobot model, making it available for use in SAP AI Core. Save the model artifact id from the response.
curl --location --request POST "<YOUR AI CORE URL>/v2/lm/artifacts" \
--header "Authorization: Bearer <CLIENT CREDENTAILS TOKEN>" \
--header "Content-Type: application/json" \
--header "AI-Resource-Group: <YOUR RESOURCE GROUP NAME>" \
--data-raw '{
 "name": "my-datarobot-model",
 "kind": "model",
 "url": "ai://<YOUR OBJECTSTORE NAME>/<YOUR MODEL PATH>",
 "description": "my datarobot model jar",
 "scenarioId": "<YOUR SCENARIO ID>"
 }'
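
The artifact id is returned in the id field of the response. With jq and the $TOKEN variable from above, it can be captured directly:

# Register the model and store the returned artifact id (assumes jq and $TOKEN).
ARTIFACT_ID=$(curl --silent --location --request POST "<YOUR AI CORE URL>/v2/lm/artifacts" \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "AI-Resource-Group: <YOUR RESOURCE GROUP NAME>" \
--data-raw '{"name": "my-datarobot-model", "kind": "model", "url": "ai://<YOUR OBJECTSTORE NAME>/<YOUR MODEL PATH>", "description": "my datarobot model jar", "scenarioId": "<YOUR SCENARIO ID>"}' \
| jq -r '.id')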
  11. Create a configuration and save the configuration id from the response.
curl --request POST "<YOUR AI CORE URL>/v2/lm/configurations" \
--header "Authorization: Bearer <CLIENT CREDENTAILS TOKEN>" \
--header "AI-Resource-Group: <YOUR RESOURCE GROUP NAME>" \
--header "Content-Type: application/json" \
--data '{
    "name": "<CONFIGURATION NAME>",
    "executableId": "<YOUR EXECUTABLE ID>",
    "scenarioId": "<YOUR SCENARIO ID>",
    "parameterBindings": [
        {
            "key": "modelName",
            "value": "<YOUR MODEL JAR FILE NAME>"
        }
    ],
    "inputArtifactBindings": [
        {
            "key": "modeljar",
            "artifactId": "<YOUR MODEL ARTIFACT ID>"
        }
    ]
}'
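
The executableId above is the name of the serving template pushed earlier (datarobot-model-serving in the example). If in doubt, the executables registered for a scenario can be listed; a sketch assuming the $TOKEN variable from above:

# List the executables of the scenario to look up the executable id.
curl --location '<YOUR AI CORE URL>/v2/lm/scenarios/<YOUR SCENARIO ID>/executables' \
--header 'AI-Resource-Group: <YOUR RESOURCE GROUP NAME>' \
--header "Authorization: Bearer $TOKEN"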
  12. Create a deployment and note down the deployment id from the response.
curl --location --globoff --request POST '<YOUR AI CORE URL>/v2/lm/configurations/<YOUR CONFIGURATION ID>/deployments' \
--header 'AI-Resource-Group: <YOUR RESOURCE GROUP NAME>' \
--header 'Authorization: Bearer <CLIENT CREDENTIALS TOKEN>'
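
As with the artifact, the deployment id can be captured from the id field of the response for use in the status check below:

# Create the deployment and keep its id (assumes jq and $TOKEN).
DEPLOYMENT_ID=$(curl --silent --location --globoff --request POST "<YOUR AI CORE URL>/v2/lm/configurations/<YOUR CONFIGURATION ID>/deployments" \
--header 'AI-Resource-Group: <YOUR RESOURCE GROUP NAME>' \
--header "Authorization: Bearer $TOKEN" | jq -r '.id')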
  13. Check the status of the deployment. Note down the deployment URL after the status changes to RUNNING.
curl --location --globoff '<YOUR AI CORE URL>/v2/lm/deployments/<YOUR DEPLOYMENT ID>' \
--header 'AI-Resource-Group: <YOUR RESOURCE GROUP NAME>' \
--header 'Authorization: Bearer <CLIENT CREDENTIALS TOKEN>'
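
A deployment usually needs a few minutes to reach RUNNING. A small polling sketch, assuming jq, $TOKEN, and the $DEPLOYMENT_ID captured above; the status field of the response reflects the current state:

# Poll every 30 seconds until the deployment reports RUNNING.
while true; do
  STATUS=$(curl --silent --location --globoff "<YOUR AI CORE URL>/v2/lm/deployments/$DEPLOYMENT_ID" \
    --header 'AI-Resource-Group: <YOUR RESOURCE GROUP NAME>' \
    --header "Authorization: Bearer $TOKEN" | jq -r '.status')
  echo "deployment status: $STATUS"
  [ "$STATUS" = "RUNNING" ] && break
  sleep 30
done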
  14. Use your deployment. The example inference data shown below is for an employee churn prediction model.
curl --location '<YOUR DEPLOYMENT URL>/v1/predict' \
--header 'AI-Resource-Group: <YOUR RESOURCE GROUP NAME>' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <CLIENT CREDENTIALS TOKEN>' \
--data '[
    {
        "EMPLOYEE_ID": 00000,
        "AGE": 33,
        "AGE_GROUP10": "(25-35]",
        "AGE_GROUP5": "(30-35]",
        "GENERATION": "Generation Y",
        "CRITICAL_JOB_ROLE": "Critical",
        "RISK_OF_LOSS": "Low",
        "IMPACT_OF_LOSS": "Medium",
        "FUTURE_LEADER": "No Future Leader",
        "GENDER": "Male",
        "MGR_EMP": "No Mgr",
        "MINORITY": "Non-Minority",
        "TENURE_MONTHS": 64,
        "TENURE_INTERVAL_YEARS": "( 5 - 10]",
        "TENURE_INTERVALL_DESC": "4 - senior",
        "SALARY": 50000,
        "EMPLOYMENT_TYPE": "Full-Time",
        "EMPLOYMENT_TYPE_2": "Regular",
        "HIGH_POTENTIAL": "No High Pot",
        "PREVIOUS_FUNCTIONAL_AREA": null,
        "PREVIOUS_JOB_LEVEL": null,
        "PREVIOUS_CAREER_PATH": null,
        "PREVIOUS_PERFORMANCE_RATING": null,
        "PREVIOUS_COUNTRY": null,
        "PREVCOUNTRYLAT": null,
        "PREVCOUNTRYLON": null,
        "PREVIOUS_REGION": null,
        "TIMEINPREVPOSITIONMONTH": 4,
        "CURRENT_FUNCTIONAL_AREA": "Service",
        "CURRENT_JOB_LEVEL": "3 - Senior",
        "CURRENT_CAREER_PATH": "Functional",
        "CURRENT_PERFORMANCE_RATING": "0 - No performance Measurement",
        "CURRENT_REGION": "EMEA",
        "CURRENT_COUNTRY": "Germany",
        "CURCOUNTRYLAT": 51.08342,
        "CURCOUNTRYLON": 10.423447,
        "PROMOTION_WITHIN_LAST_3_YEARS": "No Promotion",
        "CHANGED_POSITION_WITHIN_LAST_2_YEARS": "No Change",
        "CHANGE_IN_PERFORMANCE_RATING": "0 - Not available",
        "FUNCTIONALAREACHANGETYPE": "No change",
        "JOBLEVELCHANGETYPE": "No change",
        "HEADS": 1,
        "FLIGHT_RISK": "No",
        "ID": 00000,
        "LINKEDIN": "No",
        "HRTRAINING": "No",
        "SICKDAYS": 16
    }
]'
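
For larger batches, the same request can read its body from a file; payload.json here is a placeholder containing a JSON array of records like the one above:

# Send inference records from a file instead of an inline body (assumes $TOKEN).
curl --location '<YOUR DEPLOYMENT URL>/v1/predict' \
--header 'AI-Resource-Group: <YOUR RESOURCE GROUP NAME>' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $TOKEN" \
--data @payload.json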

Security Guide

See Security in SAP AI Core for general information about how SAP AI Core handles security.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

sap_ai_core_datarobot-1.0.17-py3-none-any.whl (16.5 kB, Python 3)

File details

Details for the file sap_ai_core_datarobot-1.0.17-py3-none-any.whl.

File metadata

  • Download URL: sap_ai_core_datarobot-1.0.17-py3-none-any.whl
  • Upload date:
  • Size: 16.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.15.0 pkginfo/1.8.2 requests/2.25.1 setuptools/41.4.0 requests-toolbelt/1.0.0 tqdm/4.64.1 CPython/3.5.3

File hashes

Hashes for sap_ai_core_datarobot-1.0.17-py3-none-any.whl

  • SHA256: cf618841b7b7aecdd20edbdd1b1fb67ce36c647d73cee51a30b6f53374d97c0b
  • MD5: 10b3ea982fedd320912acbac4fa84482
  • BLAKE2b-256: 435af4210f69adb88466a70b063fa9adb72718a9c135af72f3ddd2d62a2d9944

