ts-sdk-connectors-python
TetraScience Python Connector SDK

Version: v2.1.0
Table of Contents
- Summary
- Usage
- Logging
- OpenAPI code generation
- Local testing with standalone connectors
- Running tests
- Changelog
Summary
The TetraScience Python Connectors SDK provides utilities and APIs for building TetraScience pluggable connectors in Python. Connectors are containerized applications used for transferring data between the Tetra Data Platform (TDP) and other systems. Some examples of existing connectors:
- The S3 connector receives file events from an S3 bucket via an SQS queue and pulls the corresponding objects into TDP
- The Kepware KEPServerEX connector pulls tags from KEPServerEX over MQTT and writes corresponding JSON files to TDP
- The LabX connector can connect to multiple LabX instances and retrieve completed tasks. The LabX connector was written using this Python SDK
Usage
Connector Class
The Connector class is the core component of the SDK.
It provides methods and hooks to manage the lifecycle of a connector, handle commands, perform periodic tasks, and
interact with TDP.
Creating and running a connector
To create a Connector instance, provide a TdpApi instance and, optionally, a ConnectorOptions instance. TdpApi is the class that interacts with TDP.

```python
from ts_sdk_connectors_python.connector import Connector, ConnectorOptions
from ts_sdk_connectors_python.tdp_api import TdpApi

tdp_api = TdpApi()
connector = Connector(tdp_api=tdp_api, options=ConnectorOptions())
```
Starting and running the connector:

```python
import asyncio

from ts_sdk_connectors_python.connector import Connector, ConnectorOptions
from ts_sdk_connectors_python.tdp_api import TdpApi

async def main():
    tdp_api = TdpApi()
    connector = Connector(tdp_api=tdp_api, options=ConnectorOptions())
    await connector.start()
    while True:
        await asyncio.sleep(1)

asyncio.run(main())
```
Configuring TdpApi
Required configuration values for TdpApi can be provided as constructor arguments; any argument that is not provided is pulled from the corresponding environment variable.
```python
# Manually provide configuration values
tdp_api = TdpApi(
    aws_region="us-east-1",
    org_slug="tetrascience-yourorg",
    hub_id="your-hub-id",
    connector_id="your-connector-id",
    datalake_bucket="your-datalake-bucket",
    stream_bucket="your-stream-bucket",
    tdp_certificate_key="your-tdp-certificate-key",
    jwt_token_parameter="your-jwt-token-parameter",
    tdp_endpoint="https://api.tetrascience.com",
    outbound_command_queue="your-outbound-command-queue",
    kms_key_id="your-kms-key-id",
    artifact_type="connector",
    connector_token="your-connector-token",
    local_certificate_pem_location="path/to/your/certificate.pem"
)

# Automatically pull all args from environment variables
tdp_api = TdpApi()

# Provide some arguments and pull the rest from environment variables
tdp_api = TdpApi(datalake_bucket="your-datalake-bucket")
```
The following environment variables are used by the TdpApiConfig class to configure the TetraScience Data Platform API client. Note that not all of them are relevant to every deployment:
| Variable Name | Description |
|---|---|
| AWS_REGION | The AWS region to use. |
| ORG_SLUG | The organization slug for the TetraScience Data Platform. |
| HUB_ID | The hub ID for the connector. |
| CONNECTOR_ID | The unique identifier for the connector. |
| DATALAKE_BUCKET | The name of the datalake bucket. |
| STREAM_BUCKET | The name of the stream bucket. |
| TDP_CERTIFICATE_KEY | The key for the TDP certificate. |
| JWT_TOKEN_PARAMETER | Name of the SSM parameter that contains the JWT token. Used by non-standalone connectors. |
| TDP_ENDPOINT | The base URL for the TetraScience Data Platform API. |
| OUTBOUND_COMMAND_QUEUE | The queue name for outbound commands. |
| KMS_KEY_ID | The KMS key ID. |
| ARTIFACT_TYPE | The type of artifact (e.g., connector, data-app). |
| CONNECTOR_TOKEN | The JWT authentication token for the connector. Used by standalone connectors to request initial AWS credentials. |
| LOCAL_CERTIFICATE_PEM_LOCATION | The local certificate PEM file location. |
Proxy support
In addition to the above environment variables, the connector uses proxy settings
determined from the environment variables HTTP_PROXY, HTTPS_PROXY, and NO_PROXY.
For connectors on a Hub, the connector sets these environment variables based on
the Hub's proxy settings. For standalone connectors, the standalone installer will
set lowercase versions of these variables. The connector checks the environment,
and in the case where lowercase versions exist but uppercase ones don't, it copies
the lowercase values over to uppercase.
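This lowercase-to-uppercase reconciliation can be sketched as follows (a minimal stdlib illustration of the behavior described above, not the SDK's actual implementation):

```python
PROXY_VARS = ("HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY")

def normalize_proxy_env(environ: dict) -> dict:
    """Copy each lowercase proxy variable to its uppercase form when only
    the lowercase form is present (as the standalone installer sets them)."""
    for upper in PROXY_VARS:
        lower = upper.lower()
        if upper not in environ and lower in environ:
            environ[upper] = environ[lower]
    return environ

# In a connector this would be applied to os.environ at startup
env = {"https_proxy": "http://127.0.0.1:3128", "NO_PROXY": "localhost"}
normalize_proxy_env(env)
```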
Initialization of TdpApi
TdpApi must initialize the AWS and HTTP clients before it can communicate with the connector's AWS services and connector endpoints in TDP.
```python
tdp_api = TdpApi()
await tdp_api.init_client(proxy_url="127.0.0.1:3128")  # your proxy URL here, if needed
files = await tdp_api.get_connector_files(...)
...
```
Connector example
Here is an example of a custom connector that prints "Hello World" on a scheduled interval:
```python
import asyncio
from typing import Optional

from ts_sdk_connectors_python.connector import Connector, ConnectorOptions
from ts_sdk_connectors_python.custom_commands import register_command
from ts_sdk_connectors_python.tdp_api import TdpApi
from ts_sdk_connectors_python.utils import Poll


class BasicScheduledConnector(Connector):
    """Prints hello world on a scheduled interval"""

    def __init__(
        self,
        tdp_api: TdpApi,
        schedule_interval: int,
        options: Optional[ConnectorOptions] = None,
    ):
        super().__init__(tdp_api=tdp_api, options=options)
        self.poll: Optional[Poll] = None
        self.schedule_interval = schedule_interval

    async def on_start(self):
        await super().on_start()
        self._start_polling()

    async def on_stop(self):
        await super().on_stop()
        self._stop_polling()

    @register_command("TetraScience.Connector.PollingExample.SetScheduleInterval")
    async def set_schedule_interval(self, schedule_interval: str):
        self.schedule_interval = float(schedule_interval)
        self._stop_polling()
        self._start_polling()

    def _start_polling(self):
        if not self.poll:
            self.poll = Poll(self.execute_on_schedule, self.schedule_interval)
            self.poll.start()

    def _stop_polling(self):
        if self.poll:
            self.poll.stop()
            self.poll = None

    async def execute_on_schedule(self):
        print("HELLO WORLD")


# Usage
async def main():
    tdp_api = TdpApi()
    await tdp_api.init_client()
    connector = BasicScheduledConnector(tdp_api=tdp_api, schedule_interval=5)
    await connector.start()
    await asyncio.sleep(10)
    await connector.shutdown()

asyncio.run(main())
```
This example demonstrates how to create a custom connector that prints "Hello World" every 5 seconds and allows the schedule interval to be updated via a custom command. For more information on registering commands, see below.
SQL Connector
The SDK provides a SqlConnector abstract base class that simplifies building SQL-based connectors using an inheritance pattern. This allows you to easily connect to any SQL database (PostgreSQL, MySQL, SQL Server, Oracle, etc.) and incrementally retrieve data using watermarks.
For detailed documentation on the SQL Connector, including:
- Quick Start guide
- Configuration parameters and timeout configuration
- Watermark strategies with examples
- Best practices (read-only grants, query performance, indexes)
- Error codes for manifest.json
- Troubleshooting
See the SQL Connector Documentation.
For technical design details (synchronous SQLAlchemy rationale, connection pooling, runtime configuration updates), see SQL Connector Design.
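To illustrate the watermark idea itself, independent of the SqlConnector API, here is a self-contained sketch using stdlib sqlite3 (the table, column names, and watermark handling are hypothetical, chosen only to show incremental retrieval):

```python
import sqlite3

def fetch_since(conn, watermark):
    """Retrieve only rows newer than the last-seen watermark and
    return them together with the new high-water mark."""
    rows = conn.execute(
        "SELECT id, payload, updated_at FROM tasks "
        "WHERE updated_at > ? ORDER BY updated_at",
        (watermark,),
    ).fetchall()
    new_watermark = rows[-1][2] if rows else watermark
    return rows, new_watermark

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER, payload TEXT, updated_at TEXT)")
conn.executemany(
    "INSERT INTO tasks VALUES (?, ?, ?)",
    [(1, "a", "2024-01-01"), (2, "b", "2024-01-02"), (3, "c", "2024-01-03")],
)
rows, wm = fetch_since(conn, "2024-01-01")   # incremental pull: rows after the watermark
rows2, wm2 = fetch_since(conn, wm)           # nothing new since the last pull
```

A real connector would persist the watermark between runs (for example via connector data in TDP) so restarts resume where the last pull left off.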
Commands
TDP communicates with connectors via the command service. The data acquisition service in TDP uses a set of commands we refer to as "lifecycle commands". The Connector class implements a command listener and has several methods that are invoked when lifecycle commands come in.
Lifecycle methods and hooks
Starting and Initializing methods

- start: Starts the connector and its main activities.
  - Triggers:
    - None. Almost all connectors call this from main.py, the default entrypoint of the container
  - Default implementation:
    - calls on_initializing hook
    - loads connector details (does not call on_connector_updated)
    - starts metrics collection, heartbeat, and command listener tasks
    - if the connector's operating status is RUNNING, calls on_start hook
    - calls on_initialized hook
- on_initializing: A developer-defined hook that is called at the beginning of the default implementation of Connector.start
  - Triggers:
    - None. In the default implementation, called once by Connector.start
  - Default implementation:
    - None
- on_initialized: A developer-defined hook that is called at the end of the default implementation of Connector.start
  - Triggers:
    - None. In the default implementation, called once by Connector.start
  - Default implementation:
    - None
- on_start: A developer-defined hook that runs when the connector's operating status is set to RUNNING
  - Triggers (any of the following):
    - A command with action TetraScience.Connector.Start is received
      - this corresponds to setting the connector operating status to RUNNING
    - During Connector.start, if the connector's operating status is RUNNING
      - this typically happens when a disabled connector is "enabled as RUNNING"
  - Default implementation:
    - reloads connector config, which subsequently calls on_connector_updated
Running methods
- on_connector_updated: A developer-defined hook that gets called when the connector details are updated. Because this corresponds to config changes and is also triggered indirectly by on_start, it is the most common place to initialize resources for the connector to work with third-party systems. Since it is also triggered by on_stop, it is important to check that the connector's operating status is RUNNING before starting any data ingestion
  - Triggers (any of the following):
    - A command with action TetraScience.Connector.UpdateConfig is received
      - sent by the data acquisition service after valid configuration is applied
    - A command with action TetraScience.Connector.Start is received
      - invoked by the base implementation of Connector.on_start
    - A command with action TetraScience.Connector.Stop is received
      - invoked by the base implementation of Connector.on_stop
  - Default implementation:
    - None
- validate_config: A developer-defined method that determines whether a given connector config is valid
  - Triggers:
    - A command with action TetraScience.Connector.ValidateConfig is received
      - sent by the platform when a user attempts to save connector config
  - Default implementation:
    - always returns {"valid": true}
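The validate_config contract can be illustrated with a plain function (the config keys checked here are hypothetical, and the Connector method's exact signature is defined by the SDK; only the return shape mirrors the documentation above):

```python
def validate_config(config: dict) -> dict:
    """Sketch of the validation contract: return {"valid": True} on success,
    or {"valid": False, "errors": [...]} describing the problems found.
    The required keys below are illustrative, not SDK-defined."""
    errors = []
    if not config.get("host"):
        errors.append("host is required")
    interval = config.get("poll_interval", 60)
    if not isinstance(interval, (int, float)) or interval <= 0:
        errors.append("poll_interval must be a positive number")
    if errors:
        return {"valid": False, "errors": errors}
    return {"valid": True}
```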
Idle and Disable/Shutdown methods
- shutdown: A method called when the connector and its container will be stopped
  - Triggers:
    - A command with action TetraScience.Connector.Shutdown is received
      - sent by the platform when a user disables the connector
  - Default implementation:
    - calls on_shutdown
    - stops metrics collection, heartbeat, and command listener tasks
- on_shutdown: A developer-defined hook that runs when the connector is stopping. Connector-specific cleanup can usually be implemented here without overriding shutdown
  - Triggers:
    - A command with action TetraScience.Connector.Shutdown is received
      - sent by the platform when a user disables the connector
  - Default implementation:
    - None
- on_stop: A developer-defined hook that runs when the connector's operating status is set to IDLE
  - Triggers:
    - A command with action TetraScience.Connector.Stop is received
      - this corresponds to setting the connector operating status to IDLE
  - Default implementation:
    - reloads connector config, which subsequently calls on_connector_updated
Additional Connector commands
The Connector class also supports some other commands, which can be sent using the commands API.

| Action Name | Connector Method Called | Description | Triggers | Side-Effects |
|---|---|---|---|---|
| TetraScience.Connector.ListCustomCommands | handle_get_available_custom_commands | Returns a list of available custom commands registered to the connector. | | |
| TetraScience.Connector.SetLogLevel | set_log_level | Sets the log level of the connector dynamically. Supported levels: DEBUG, INFO, WARNING, ERROR, CRITICAL. | User sends a command to set the log level | Adjusts the verbosity of logs without restarting the connector |
| custom commands | specified by @register_command decorators | Custom commands registered using the @register_command decorator. | | |
Custom commands
Custom commands are user-defined commands that can be registered by developers to extend the functionality of a connector. This can be useful for implementing capabilities that you want the connector to have, but only to use on demand. One example would be ingesting historical data from a given time window for a connector that typically only receives new data. Another example might be requests for the connector to send information to a third-party system. This gives TDP pipelines a way to call upon the connector to act on their behalf.
Connector implements a command listener that both listens to all the previously
mentioned standard commands, and also checks a connector-specific registry for
custom commands. Custom commands are registered using the @register_command decorator.
The decorator takes a string argument that corresponds to the action of the
command. The convention for action names is TetraScience.Connector.<ConnectorName>.<CustomActionName>,
as in the following example:
```python
from ts_sdk_connectors_python.custom_commands import register_command
from ts_sdk_connectors_python.connector import Connector


class MyConnector(Connector):
    @register_command("TetraScience.Connector.ExampleConnector.MyCustomAction")
    def my_custom_action(self, body: dict):
        print(f"Action called with body: {body}")
        return None
```
The Commands documentation contains further technical details on custom commands and command registration.
The return type of the custom command method should be either None, a dictionary, or a string that can be converted into a dictionary.
Specifically, you can refer to the ts_sdk_connectors_python.models.CommandResponseBody type for more details.
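The SDK presumably coerces these return shapes into a single one before building the command response; here is a stdlib sketch of such normalization (an illustration of the contract above, not the SDK's actual code):

```python
import json

def normalize_response(result):
    """Coerce a command handler's return value into a dict (or None):
    None and dicts pass through, strings must decode to a JSON object."""
    if result is None or isinstance(result, dict):
        return result
    if isinstance(result, str):
        parsed = json.loads(result)
        if not isinstance(parsed, dict):
            raise TypeError("string responses must encode a JSON object")
        return parsed
    raise TypeError(f"unsupported response type: {type(result).__name__}")
```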
Polling
The SDK provides a Poll class that allows repeated execution of a target function at a specified interval.
This is useful for tasks that need to be performed periodically, such as checking the status of a resource or polling an API.
For more details on how to use the Poll class, refer to the Polling Documentation.
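The behavior can be emulated with asyncio alone. The following is a simplified stand-in for the SDK's Poll class, assuming only the Poll(callback, interval) / start() / stop() shape shown in the connector example above:

```python
import asyncio

class SimplePoll:
    """Simplified stand-in for the SDK's Poll class: repeatedly run an
    async target at a fixed interval until stopped. Illustrative only."""

    def __init__(self, target, interval: float):
        self.target = target
        self.interval = interval
        self._task = None

    def start(self):
        # Schedule the polling loop on the running event loop
        self._task = asyncio.create_task(self._run())

    async def _run(self):
        while True:
            await self.target()
            await asyncio.sleep(self.interval)

    def stop(self):
        if self._task is not None:
            self._task.cancel()
            self._task = None
```

Note that the SDK's Poll also provides default error handling and logging (see the v0.8.0 changelog entry), which this sketch omits.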
Logging
The SDK provides a logger to facilitate structured logging. This logger supports multi-threading, asynchronous operations, logger inheritance, and upload to CloudWatch.
Log messages are in JSON format for uptake into CloudWatch. The following are examples of log messages.
```
{"level":"debug","message":"Loading TDP certificates from local volume /etc/tetra/tdp-cert-chain.pem","extra":{"context":"ts_sdk_connectors_python.AuthenticatedClientCreator"}}
{"level":"info","message":"TDP certificates loaded from local volume /etc/tetra/tdp-cert-chain.pem","extra":{"context":"ts_sdk_connectors_python.AuthenticatedClientCreator"}}
{"level":"info","message":"Client initialized","extra":{"context":"ts_sdk_connectors_python.tdp_api_base","orgSlug":"tetrascience-yourorg","connectorId":"3eca48c9-3eb2-4414-a491-a8dda151da50"}}
{"level":"info","message":"Starting metrics task: cpu_usage_metrics_provider","extra":{"context":"ts_sdk_connectors_python.metrics"}}
{"level":"info","message":"Starting metrics task: memory_used_metrics_provider","extra":{"context":"ts_sdk_connectors_python.metrics"}}
```
Logger usage
The CloudWatch logger supports logger inheritance, allowing you to create child loggers that inherit the configuration and context of their parent loggers. This is useful for organizing log messages by component or module.
The get_logger method returns a logger that inherits from the connector SDK's root logger. Simply give the logger a useful name and begin using it; the new logger's name is created by adding a suffix to the root logger's name (see below).
Providing the extra argument adds information that is included in every log message from that logger. The extra argument can also be provided to any of the log methods (info, debug, warning, error, critical).
Example usage:
```python
from ts_sdk_connectors_python.logger import get_logger

# Create a parent logger
parent_logger = get_logger("parent_logger", extra={"foo": "bar"})
assert parent_logger.name == 'ts_sdk_connectors_python.parent_logger'

parent_logger.info('my message', extra={'baz': 'bazoo'})

# Expected log message.
# Note that 'foo' and 'baz' are included as 'extra',
# and that the logger name is given in 'extra.context':
# {"level":"info","message":"my message","extra":{"context":"ts_sdk_connectors_python.parent_logger","foo":"bar","baz":"bazoo"}}
```
The following methods are provided to create logs at various levels:
```python
logger.debug('Use this for detailed debug information. This is the lowest level and by default not emitted by the logger')
logger.info('Use this for general info. This is the default level for connectors')
logger.warning('Use this for warnings')
logger.error('Use this for errors. Note the exc_info argument, which can provide stack trace info', exc_info=True)
logger.critical('Use this for critical errors that cause failure')
```
You may also create child loggers by using the get_child method, which adds another suffix to an existing logger's name and merges the provided extra.
```python
# Create a child logger that inherits from the parent logger
child_logger = parent_logger.get_child("child", extra={"baz": "qux"})
assert child_logger.name == 'ts_sdk_connectors_python.parent_logger.child'

# Log messages using the child logger
child_logger.info("This is a message from the child logger")

# Note that the extra is merged with the parent's extra:
# {"level":"info","message":"This is a message from the child logger","extra":{"context":"ts_sdk_connectors_python.parent_logger.child","foo":"bar","baz":"qux"}}
```
Setting log levels
To reduce the volume of logs, you can set the log level. The supported levels are NOTSET, DEBUG, INFO, WARNING, ERROR, CRITICAL.
This can be done via the set_root_connector_sdk_log_level method:
```python
from ts_sdk_connectors_python.logger import set_root_connector_sdk_log_level

set_root_connector_sdk_log_level("DEBUG")
```
By default, all loggers made by get_logger have a NOTSET log level, meaning they all inherit their effective log level from the root connector SDK logger. It is therefore not recommended to set the log level on individual child loggers; instead, use the set_root_connector_sdk_log_level method.
Connector also implements the command TetraScience.Connector.SetLogLevel which allows you to set the log level of the connector dynamically. Here is an example command request:
```json
{
  "payload": {
    "level": "DEBUG"
  },
  "action": "TetraScience.Connector.SetLogLevel",
  "targetId": "your-connector-id"
}
```
CloudWatch logging and standalone connector support
If the connector is in standalone mode (meaning the CONNECTOR_TOKEN env var is set), logs are uploaded to AWS CloudWatch in addition to being logged to the console. Logging and CloudWatch reporting occur on a separate processing thread, apart from the main connector processing thread. Uploads to CloudWatch happen in batches, whenever a batch size limit is hit or on a set interval. The relevant environment variables for these settings can be found in constants.py.
The CloudWatchReporter class is responsible for managing the buffering and flushing of log events to AWS CloudWatch.
It handles the following tasks:
- Buffering log events.
- Flushing buffered log events to CloudWatch based on certain conditions (e.g., buffer size limit, flush interval).
- Managing the CloudWatch log stream and log group.
- Handling errors during the flushing process.
Flushing log events to AWS CloudWatch occurs for a number of reasons:
- The buffer reaches its size limit.
- The flush interval is reached.
- The flush limit is reached.
- The connector is started.
- The connector is stopped.
- An explicit flush is triggered.
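The size-or-interval flush conditions can be sketched with a toy buffer (this is not the CloudWatchReporter implementation; the limits and sink are illustrative):

```python
import time

class LogBuffer:
    """Toy illustration of batched flushing: buffer events and deliver a
    batch when the size limit is hit or the flush interval has elapsed."""

    def __init__(self, sink, max_events=3, flush_interval=60.0):
        self.sink = sink                  # callable that receives a batch
        self.max_events = max_events
        self.flush_interval = flush_interval
        self.events = []
        self.last_flush = time.monotonic()

    def add(self, event):
        self.events.append(event)
        if (len(self.events) >= self.max_events
                or time.monotonic() - self.last_flush >= self.flush_interval):
            self.flush()

    def flush(self):
        if self.events:
            self.sink(list(self.events))  # deliver a copy of the batch
            self.events.clear()
        self.last_flush = time.monotonic()
```

The real reporter additionally manages the log group and stream, sorts events by timestamp, and handles delivery errors, as described above and in the changelog.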
OpenAPI code generation
This project uses code generation to build client libraries for interacting with the data acquisition service in TDP based on the OpenAPI specification of the service. The generated code is placed in the ts_sdk_connectors_python/openapi_codegen directory.
Typical users of the SDK will not need to generate this code. For Tetra developers, details are available here
Using the generated API client
The generated API client is used within the TdpApi class to interact with the TDP REST API.
The TdpApi class provides asynchronous methods, while the TdpApiSync class provides synchronous methods.
Example usage
```python
import asyncio

from ts_sdk_connectors_python.tdp_api import TdpApi
from ts_sdk_connectors_python.openapi_codegen.connectors_api_client.models import SaveConnectorValueRequest


async def main():
    api = TdpApi()
    await api.init_client()
    connector_id = "your_connector_id"
    raw_data = [
        SaveConnectorValueRequest(
            key="a-string",
            value={"some_json_field": "some_secret_value"},
            secure=True
        )
    ]
    response = await api.save_connector_data(connector_id, raw_data)
    print(response)


# Run the main function in an async event loop
asyncio.run(main())
```
For synchronous usage, use the TdpApiSync class:
```python
from ts_sdk_connectors_python.tdp_api_sync import TdpApiSync
from ts_sdk_connectors_python.openapi_codegen.connectors_api_client.models import SaveConnectorValueRequest


def main():
    api = TdpApiSync()
    api.init_client()
    connector_id = "your_connector_id"
    raw_data = [
        SaveConnectorValueRequest(
            key="a-string",
            value={"some_json_field": "some_secret_value"},
            secure=True
        )
    ]
    response = api.save_connector_data(connector_id, raw_data)
    print(response)
    # Retrieve the parsed DTO object, if available
    print(response.parsed)


# Run the main function
main()
```
Retrieving connector data with filtering
The Python SDK supports server-side filtering when retrieving connector data. This allows you to efficiently retrieve only the data you need by specifying keys at the API level.
Using TdpApi (async)
```python
import asyncio

from ts_sdk_connectors_python.tdp_api import TdpApi


async def main():
    api = TdpApi()
    await api.init_client()
    connector_id = "your_connector_id"

    # Get all connector data (no filtering)
    all_data = await api.get_connector_data(connector_id)
    print(f"All data: {len(all_data.parsed.values)} items")

    # Get specific keys only (server-side filtering)
    filtered_data = await api.get_connector_data(
        connector_id,
        keys="key1,key2,key3"  # Comma-separated list of keys
    )
    print(f"Filtered data: {len(filtered_data.parsed.values)} items")

asyncio.run(main())
```
Using TdpApiSync (synchronous)
```python
from ts_sdk_connectors_python.tdp_api_sync import TdpApiSync


def main():
    api = TdpApiSync()
    api.init_client()
    connector_id = "your_connector_id"

    # Get specific keys only (server-side filtering)
    filtered_data = api.get_connector_data(
        connector_id,
        keys="key1,key2"
    )
    print(f"Filtered data: {len(filtered_data.parsed.values)} items")

main()
```
Using Connector class methods
The Connector class provides convenient methods that automatically use server-side filtering:
```python
import asyncio

from ts_sdk_connectors_python.connector import Connector
from ts_sdk_connectors_python.tdp_api import TdpApi


async def main():
    api = TdpApi()
    await api.init_client()
    connector = Connector(tdp_api=api)

    # Get specific values (uses server-side filtering automatically)
    values = await connector.get_values(["key1", "key2"])
    print(f"Retrieved {len(values)} values")

    # Get a single value
    single_value = await connector.get_value("key1")
    if single_value:
        print(f"Value for key1: {single_value.value}")

    # Direct access to connector data with filtering
    data = await connector.get_values(
        keys=["key1", "key2"],
    )
    print(f"Direct access: {len(data)} items")

asyncio.run(main())
```
Refer to the TdpApi and TdpApiSync class methods for more details on available API interactions.
Local testing with standalone connectors
For local development and testing, you can use standalone connectors to test your connector implementation against TDP resources while running your code locally. This approach allows you to:
- Test your connector logic without deploying to a Hub
- Debug and iterate quickly during development
- Validate your connector against real TDP services
Prerequisites
- Build your local Docker image: Ensure you have built a local Docker image of your connector
- Access to TDP environment: You need access to a TDP organization and appropriate permissions
- Standalone installer: Access to the TetraScience standalone connector installer
Setup process
Step 1: build your local connector image
First, build your connector as a Docker image locally. This typically involves:
```shell
# Example build command (adjust based on your connector's Dockerfile)
docker build -t my-connector:local .
```
Step 2: use the standalone installer
1. Run the standalone connector installer provided by TetraScience
2. When prompted for the connector image during installation, provide your local image name instead of an official image:

   ```shell
   # Instead of using an official image like:
   # tetrascience/my-connector:v1.0.0
   # Use your local build:
   my-connector:local
   ```

3. The installer will:
   - Set up the necessary TDP resources (tokens, certificates, etc.)
   - Configure environment variables for standalone operation
   - Create the appropriate Docker run configuration pointing to your local image
Step 3: environment configuration
The standalone installer will configure the following key environment variables:
- CONNECTOR_TOKEN: Authentication token for standalone deployment
- TDP_ENDPOINT: TDP API endpoint
- ORG_SLUG: Your organization identifier
- CONNECTOR_ID: The connector instance ID
- Other TDP-specific configuration as needed
Step 4: running your local connector
After setup, your connector will run using your local Docker image but connect to real TDP services for authentication, data storage, and command processing.
Alternative: running containerless locally
It is also possible, for testing, to run the connector locally without Docker:

- Skip Step 1 above
- Get the connector token and standalone installer in Step 2, but do not run the installer
- From the installer.sh file, extract values for the environment variables (other than CONNECTOR_TOKEN, which you can get as part of Step 2) and export them. The necessary environment variables are those mentioned in Configuring TdpApi
- In Step 4, run the entrypoint of the connector directly
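For example, the exports might look like the following (all values are placeholders; copy the real ones from your installer.sh and Step 2):

```shell
# Placeholder values -- copy the real ones from installer.sh / Step 2
export CONNECTOR_TOKEN="<token-from-step-2>"
export TDP_ENDPOINT="https://api.tetrascience.com"
export ORG_SLUG="tetrascience-yourorg"
export CONNECTOR_ID="your-connector-id"
export AWS_REGION="us-east-1"
export DATALAKE_BUCKET="your-datalake-bucket"

# Then run the connector entrypoint directly, e.g.:
# python main.py
```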
Important notes
- Ensure your local Docker image is built before running the standalone installer
- The standalone installer handles all TDP resource provisioning and configuration
- Your local connector will have the same capabilities as a Hub-deployed connector
- Use this method for development and testing; production deployments should use Hub or standalone deployment of the official image
Running tests
Unit tests and integration tests are located in the __tests__/unit and __tests__/integration directories, respectively. To run unit tests, execute:

```shell
poetry run pytest
```

To run integration tests against the TDP API, export the connector environment variables for tests and run:

```shell
poetry run pytest --integration
```
It is easiest to run the integration tests by following the procedure described in Local testing with standalone connectors.
Changelog
v2.1.0
- Adds support to create SQL connectors
- Updates supported Python versions to 3.11+ (including 3.12, 3.13, 3.14)
- Updates openapi-python-client to v0.27.1
- Updates black to v25.11.0
- Updates pylint to v3.3.0
v2.0.0
- Breaking: change connector helper methods to throw ConnectorError on unexpected API behavior instead of returning None
- For standalone connectors, capture logs before CloudWatch reporter initialization in a buffer to be uploaded later
- Sort log events by timestamp before sending to CloudWatch
- Add filtering to Connector.get_values
- Add option to disable TLS verification to TdpApi.create_httpx_instance
v1.0.1
- Add automatic batching to Connector.get_files and Connector.save_files
- Add necessary S3 metadata to support destination_id for file uploads
v1.0.0
- Fix bug where proxy settings were loaded at the wrong time from Hub
- Add synchronous init_client method for TdpApiSync
v0.9.0
- Add enum of health status to models.py
- Add ability to read connector manifest file
- Add support for user agent strings
- Refactor SDK to use AWS class
- Update aioboto3 to use upstream fix
- Add health reporting to CloudWatch logger
- Move some methods from Node SDK TdpClient to Python Connector class
- Fix crash loops for connectors unable to start in RUNNING status
v0.8.0
- Add CI Pipeline to release PR builds to JFrog
- Add consistent AWS sessions through AWS class; fix cloud/hub/standalone deployment issues in v0.7.0
- ELN-661: Update Poll class to add default error handling and logging features
v0.7.0
- Update README to include logger practices
- Add HTTP request timeouts and SQS request timeouts
- Fix bugs causing TdpApi.upload_file to fail when using additional checksums
- Fix bug where the SDK ignored the environment variable AWS_REGION during client init
- Support missing ConnectorFileDto.errorCount
v0.6.0
- Fix a 400 Bad Request error caused by the TDP API client sending an Authorization header that conflicted with the S3 presigned URL authentication
v0.5.0
- Fix bug where Connector.start fails when given an uninitialized TdpApi
- Improve logging in connector.start()
v0.4.0
- Implement CloudWatchReporter and logger to provide consistent logging by the SDK for local and cloudwatch logs
- Fix bugs in parsing ConnectorFileDto objects which formerly resulted in raised exceptions
- Introduce partial standalone deployment support for AWS and logger initialization
v0.3.0
- Add support to fetch connector JWT from AWS, allowing cloud connector deployment
- Use the type SaveConnectorFilesRequest in the signatures of TdpApi.update_connector_files() and TdpApiSync.update_connector_files()
- Make CommandResponse.status optional to help with parsing messages from the command queue
v0.2.0
- Add upload_file method to the TdpApi, TdpApiSync, and Connector classes
- Bug fix for command request and response data validation
- Bug fix for parsing the incoming command body for validate_config_by_version
v0.1.0
- Initial version
File details
Details for the file ts_sdk_connectors_python-2.1.0.tar.gz.
File metadata
- Download URL: ts_sdk_connectors_python-2.1.0.tar.gz
- Upload date:
- Size: 150.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 687d148887d3791616865c6d3819b105804c88169f1787f25d1562f2a820d836 |
| MD5 | 16499c84ac640c231f3851a6d4eb6e3b |
| BLAKE2b-256 | 6d8c6722b2da666fc1fbadab2e0d8a4706d20d2138507d0a7d93ee32710c0277 |
File details
Details for the file ts_sdk_connectors_python-2.1.0-py3-none-any.whl.
File metadata
- Download URL: ts_sdk_connectors_python-2.1.0-py3-none-any.whl
- Upload date:
- Size: 271.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5ce8ecb4f9595c1f4c36d2aafcf0f98273259b31a56eaf2b9ab13895a44225f1 |
| MD5 | a7ba90210d06fde411c7118810164870 |
| BLAKE2b-256 | 1d18d7ca1f20d6d5c7511ab1713d82ee2a012f45c69f97060707346350323886 |