Coherent Spark Python SDK
The Coherent Spark Python SDK (currently in Beta) is designed to elevate the developer experience and provide convenient access to the Coherent Spark APIs.
👋 Just a heads-up:
This SDK is supported by the community. If you encounter any bumps while using it, please report them here by creating a new issue.
Note: This SDK is still under active development. Please check back soon for updates.
Installation
pip install -U cspark
🫣 This Python library requires Python 3.7+.
Usage
To use the SDK, you need a Coherent Spark account that lets you access the following:
- User authentication (API key, bearer token, or OAuth2.0 client credentials details)
- Base URL (including the environment and tenant name)
- Spark service URI (to locate a specific resource):
  - folder - the folder name (where the service is located)
  - service - the service name
  - version - the semantic version, a.k.a. revision number (e.g., 0.4.2)
A folder contains one or more services, and a service can have multiple versions. Technically speaking, when you're operating with a service, you're actually interacting with a specific version of that service (the latest version by default, unless specified otherwise).
Hence, there are various ways to indicate a Spark service URI:
- {folder}/{service}[?{version}] - version is optional.
- service/{serviceId}
- version/{versionId}
IMPORTANT: Avoid using URL-encoded characters in the service URI.
Here's an example of how to execute a Spark service:
import cspark.sdk as Spark

spark = Spark.Client(env='my-env', tenant='my-tenant', api_key='my-api-key')
with spark.services as services:
    response = services.execute('my-folder/my-service', inputs={'value': 42})
    print(response.data)
Explore the examples and documentation folders to find out more about the SDK's capabilities.
Client Options
As shown in the examples above, the Spark.Client is your entry point to the SDK.
It is quite flexible and can be configured with the following options:
Base URL
base_url (default: os.getenv('CSPARK_BASE_URL')): indicates the base URL of the Coherent Spark APIs. It should include the tenant and environment information.
spark = Spark.Client(base_url='https://excel.my-env.coherent.global/my-tenant')
Alternatively, a combination of the env and tenant options can be used to construct the base URL.
spark = Spark.Client(env='my-env', tenant='my-tenant')
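If it helps to see what that combination resolves to, here's a small sketch (illustrative only, not the SDK's internal code) of how the base URL could be derived from env and tenant, following the URL pattern shown in the example above:

```python
# Illustrative sketch: derive the base URL from env and tenant.
# The 'excel.{env}.coherent.global/{tenant}' pattern comes from the
# base_url example above; this is not the SDK's actual implementation.

def build_base_url(env: str, tenant: str) -> str:
    return f'https://excel.{env}.coherent.global/{tenant}'

print(build_base_url('my-env', 'my-tenant'))
# https://excel.my-env.coherent.global/my-tenant
```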
Authentication
The SDK supports three types of authentication mechanisms:
api_key (default: os.getenv('CSPARK_API_KEY')): indicates the API key (also known as a synthetic key), which is sensitive and should be kept secure.
spark = Spark.Client(api_key='my-api-key')
PRO TIP: The Spark platform supports public APIs that can be accessed without any form of authentication. In that case, you need to set api_key to open in order to create a Spark.Client.
token (default: os.getenv('CSPARK_BEARER_TOKEN')): indicates the bearer token, which may or may not be prefixed with 'Bearer'. A bearer token is usually valid for a limited time and should be refreshed periodically.
spark = Spark.Client(token='Bearer 123')
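Since the token may or may not carry the 'Bearer' prefix, normalization like the following makes both forms equivalent (a sketch to illustrate the accepted inputs, not the SDK's actual code):

```python
# Illustrative sketch: normalize a bearer token so that 'Bearer 123'
# and '123' both yield the same Authorization header value.

def to_auth_header(token: str) -> str:
    token = token.strip()
    if token.lower().startswith('bearer '):
        token = token[len('bearer '):]
    return f'Bearer {token}'

print(to_auth_header('123'))         # Bearer 123
print(to_auth_header('Bearer 123'))  # Bearer 123
```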
oauth (default: os.getenv('CSPARK_CLIENT_ID') and os.getenv('CSPARK_CLIENT_SECRET'), or os.getenv('CSPARK_OAUTH_PATH')): indicates the OAuth2.0 client credentials. You can either provide the client ID and secret directly or provide the file path to a JSON file containing the credentials.
spark = Spark.Client(oauth={'client_id': 'my-client-id', 'client_secret': 'my-client-secret'})
# or
spark = Spark.Client(oauth='path/to/oauth/credentials.json')
- timeout (default: 60000): indicates the maximum amount of time (in milliseconds) that the client should wait for a response from Spark servers before timing out a request.
- max_retries (default: 2): indicates the maximum number of times that the client will retry a request in case of a temporary failure, such as an unauthorized response or a status code greater than 400.
- retry_interval (default: 1 second): indicates the delay between each retry.
- logger (default: True): enables or disables logs for the SDK.
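To make the retry options concrete, here's an illustrative sketch (not the SDK's implementation) of how max_retries and retry_interval shape the attempt schedule: one initial attempt, plus up to max_retries retries separated by retry_interval seconds:

```python
# Illustrative sketch of the retry schedule implied by the options above:
# the first attempt happens immediately; each retry waits retry_interval
# seconds. This mirrors the documented defaults, not the SDK's source.

def plan_attempts(max_retries: int = 2, retry_interval: float = 1.0) -> list:
    """Return the delay (in seconds) before each attempt."""
    return [0.0] + [retry_interval] * max_retries

print(plan_attempts())  # [0.0, 1.0, 1.0] -> 3 attempts total by default
```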
spark = Spark.Client(logger=False)
Client Errors
SparkError is the base class for all custom errors thrown by the SDK. There are two types of it:
- SparkSdkError: usually thrown when an argument (user input) fails to comply with the expected format. Because it's a client-side error, it will in most cases include the invalid entry as cause.
- SparkApiError: when attempting to communicate with the API, the SDK will wrap any sort of failure (any error during the roundtrip) into SparkApiError, which includes the HTTP status code of the response and the request_id, a unique identifier of the request.
Some of the derived SparkApiError classes are:

Type | Status | When
---|---|---
InternetError | 0 | no internet access
BadRequestError | 400 | invalid request
UnauthorizedError | 401 | missing or invalid credentials
ForbiddenError | 403 | insufficient permissions
NotFoundError | 404 | resource not found
ConflictError | 409 | resource already exists
RateLimitError | 429 | too many requests
InternalServerError | 500 | server-side error
ServiceUnavailableError | 503 | server is down
UnknownApiError | undefined | unknown error
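As a sketch of how this hierarchy fits together, here are hypothetical class definitions mirroring the description above (not the SDK's actual source):

```python
# Hypothetical sketch of the error hierarchy described above.
# Class names and the status/request_id attributes follow the docs;
# the constructors are assumptions for illustration.

class SparkError(Exception):
    """Base class for all custom errors thrown by the SDK."""

class SparkSdkError(SparkError):
    """Client-side error; carries the invalid entry as `cause`."""
    def __init__(self, message, cause=None):
        super().__init__(message)
        self.cause = cause

class SparkApiError(SparkError):
    """API roundtrip failure; carries HTTP status and request_id."""
    def __init__(self, message, status, request_id):
        super().__init__(message)
        self.status = status
        self.request_id = request_id

class UnauthorizedError(SparkApiError):
    """401 - missing or invalid credentials."""

err = UnauthorizedError('missing or invalid credentials',
                        status=401, request_id='abc-123')
assert isinstance(err, SparkError) and err.status == 401
```

In application code, you would typically wrap calls such as services.execute(...) in a try/except Spark.SparkApiError block and inspect status and request_id when a failure occurs.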
API Parity
The SDK aims to provide full parity with the Spark APIs over time. Below is a list of the currently supported APIs.
Authentication API - manages access tokens using the OAuth2.0 client credentials flow:
- Authorization.oauth.retrieve_token(config) generates a new access token.
Service API - manages Spark services:
- Spark.services.execute(uri, inputs) executes a Spark service.
- Spark.services.get_versions(uri) lists all the versions of a service.
- Spark.services.get_schema(uri) gets the schema of a service.
- Spark.services.get_metadata(uri) gets the metadata of a service.
PRO TIP: A service URI locator can be combined with other parameters to locate a specific service (or version of it) when it's not a string. For example, you may execute a public service using an object containing the folder, service, and public properties.
import cspark.sdk as Spark

spark = Spark.Client(env='my-env', tenant='my-tenant', api_key='open')
with spark.services as services:
    uri = Spark.UriParams(folder='my-folder', service='my-service', public=True)
    response = services.execute(uri, inputs={'value': 42})
    print(response.data)

# The final URI in this case is:
# 'my-tenant/api/v3/public/folders/my-folder/services/my-service/execute'
See the Uri and UriParams classes for more details.
Contributing
Feeling motivated enough to contribute? Great! Your help is always appreciated.
Please read CONTRIBUTING.md for details on the code of conduct, and the process for submitting pull requests.
Copyright and License