A complete package for data ingestion into the Switch Automation Platform.
Switch Automation library for Python
This is a package for data ingestion into the Switch Automation software platform.
You can find out more about the platform on the Switch Automation website.
Getting started
Prerequisites
- Python 3.8 or later is required to use this package.
- You must have a Switch Automation user account to use this package.
Install the package
Install the Switch Automation library for Python with pip:
pip install switch_api
History
0.5.8
Fixed
In the analytics module:
- Fixed a bug in AU datacentre API endpoint construction.
0.5.7
Modified
In the controls module:
- Modified the submit_control method to return sensor control values upon control request acknowledgement.
0.5.5
Added
In the pipeline module:
- Added a new task, IQTask
  - Additional abstract property module_type that accepts strings. This should define the type of IQ module.
  - Abstract method process must be implemented for the task to be registered.
Modified
In the pipeline module:
- Updated the Automation.register_task() method to accept tasks that subclass the new IQTask.
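A minimal sketch of what an IQTask subclass might look like, patterned on the extension registration example later in this document. Assumptions: IQTask is exposed via sw.pipeline, the base Task properties (id, description, author, version) are still required, and the process signature shown is illustrative only.

import uuid
import switch_api as sw

class MyIQTask(sw.pipeline.IQTask):
    @property
    def id(self) -> uuid.UUID:
        return '11111111-1111-1111-1111-111111111111'  # replace with your own UUID4

    @property
    def description(self) -> str:
        return 'Example IQ task.'

    @property
    def author(self) -> str:
        return 'Your Name'

    @property
    def version(self) -> str:
        return '0.1.0'

    @property
    def module_type(self) -> str:
        # String defining the type of IQ module.
        return 'ExampleIQModule'

    def process(self, api_inputs, *args, **kwargs):  # signature is an assumption
        # IQ module processing logic goes here.
        pass

if __name__ == '__main__':
    api_inputs = sw.initialize(api_project_id='<portfolio-id>')
    print(sw.pipeline.Automation.register_task(api_inputs, MyIQTask()))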
0.5.4
Added
In the integration module:
- Added new function upsert_reservations()
  - Upserts data to the ReservationHistory table
  - Two attributes added to assist with creation of the input dataframe:
    - upsert_reservations.df_required_columns - returns list of required columns for the input df
    - upsert_reservations.df_optional_columns - returns list of optional columns for the input df
  - The following datetime fields are required and must use the local_date_time_cols and utc_date_time_cols parameters to define whether their values are in site-local timezone or UTC timezone: CreatedDate, LastModifiedDate, ReservationStart, ReservationEnd
  - See the usage sketch after this list.
- Added new function upsert_device_sensors_iq
  - Same functionality as upsert_device_sensors, but modified/simplified to work with Switch IQ
  - 'Tags' are included in each row of the passed dataframe for upsert, instead of as a separate list as in the original
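A usage sketch for upsert_reservations(), assuming the api_inputs/df calling pattern used by the other integration upserts; the exact call signature beyond the parameters named above is an assumption.

import pandas as pd
import switch_api as sw

api_inputs = sw.initialize(api_project_id='<portfolio-id>')

# Inspect the helper attributes to see which columns the input dataframe needs.
print(sw.integration.upsert_reservations.df_required_columns)
print(sw.integration.upsert_reservations.df_optional_columns)

df = pd.DataFrame()  # build the reservations dataframe using the columns above

# Declare which datetime columns hold site-local values and which hold UTC values.
sw.integration.upsert_reservations(
    api_inputs=api_inputs,
    df=df,
    local_date_time_cols=['ReservationStart', 'ReservationEnd'],
    utc_date_time_cols=['CreatedDate', 'LastModifiedDate'],
)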
In the authentication module:
- Customized get_switch_credentials to use a custom port instead of a fixed one
  - The initialize function now has a custom_port parameter for custom port settings when authenticating
In the controls module:
- Modified the submit_control function to return a consolidated dataframe with added columns status and writeStatus that flag whether the control request was successful, instead of the previous 2 separate dataframes
- Modified the submit_control function to submit control requests in paginated chunks of the dataframe instead of sending them all in one go
- Modified the _mqtt class to add a Gateway Connected check before sending/submitting a control request to the MQTT Broker
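A sketch of submitting control requests, assuming submit_control is exposed via sw.controls and accepts api_inputs, a dataframe of sensors and values, and the time_out parameter described under 0.4.9 below; the dataframe columns shown are placeholders rather than the documented schema.

import pandas as pd
import switch_api as sw

api_inputs = sw.initialize(api_project_id='<portfolio-id>')

# Placeholder columns - consult the controls module documentation for the exact schema.
df = pd.DataFrame([
    {'ObjectPropertyId': '<sensor-object-property-id>', 'Value': 21.5},
])

# As of 0.5.4 a single consolidated dataframe is returned; its 'status' and
# 'writeStatus' columns flag whether each control request was successful.
result = sw.controls.submit_control(api_inputs=api_inputs, df=df, time_out=30)
print(result)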
0.5.3
Added
- In the integration module:
  - Added override_existing parameter to upsert_discovered_records
    - Flags whether the values passed to df will override existing integration records. Only valid when running locally, not on a deployed task where it is triggered via the UI.
    - Defaults to False
0.5
Added
- In the pipeline module:
  - Added a new task type called Guide
    - This task type should be sub-classed in concert with one of the Task sub-classes when deploying a guide to the marketplace.
  - Added a new method to the Automation class called register_guide_task()
    - This method is used to register tasks that sub-class the Guide task; it also posts form files to blob and registers the guide to the Marketplace.
  - New _guide module - only to be referenced when doing initial development of a Guide via the guide's 'local_start' method
    - Allows a mock guides engine to be run locally so that Guide task types can be debugged with the Form Kit playground.
Fixed
- In the controls module:
  - Modified submit_control method parameters (typings)
  - Removed extra columns from the payload sent in IoT API requests
0.4.9
Added
- New method added in the automation module: run_data_feed()
  - Runs a python job based on data feed id. This will be sent to the queue for processing and will undergo the same procedure as the rest of the data feeds.
  - Required parameters are api_inputs and data_feed_id
  - Restricted to data feeds of the AnalyticsTask type deployed as a Timer
  - See the usage sketch after this list.
- New method added in the analytics module: upsert_performance_statistics
  - This method should only be used by tasks that populate the Portfolio Benchmarking feature in the Switch Automation platform
- New controls module added, and a new method added to this module: submit_control()
  - Method to submit control of sensors
  - This method returns a tuple (control_response, missing_response):
    - control_response - the list of sensors that were acknowledged and processed by the MQTT message broker
    - missing_response - the list of sensors that were caught by the connection time_out
      - time_out defaults to 30 secs, meaning the python package stops waiting for a response after that period. Increasing the time out can potentially help with this.
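A usage sketch for run_data_feed(), assuming it is exposed on the Automation class of the pipeline module (referred to above as the automation module); the data feed id is a placeholder.

import switch_api as sw

api_inputs = sw.initialize(api_project_id='<portfolio-id>')

# Queue an existing AnalyticsTask-type data feed (deployed as a Timer) for processing.
response = sw.pipeline.Automation.run_data_feed(
    api_inputs=api_inputs,
    data_feed_id='<data-feed-id>',
)
print(response)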
Fixed
- In the integration module, minor fixes to:
  - An unhandled exception when using pandas==2.1.1 on the following functions:
    - upsert_sites()
    - upsert_device_sensors()
    - upsert_device_sensors_ext()
    - upsert_workorders()
    - upsert_timeseries_ds()
    - upsert_timeseries()
  - Handling of the deprecation of pandas.DataFrame.append() on the following functions:
    - upsert_device_sensors()
    - upsert_device_sensors_ext()
  - An unhandled exception in the connect_to_sql() function when the internal API call within _get_sql_connection_string() fails.
0.4.8
Added
- New class added to the pipeline module: BlobTask
  - This class is used to create integrations that post data to the Switch Automation Platform using a blob container & Event Hub Queue as the source.
  - Please note: this task type requires external setup in Azure by Switch Automation developers before a task can be registered or deployed.
  - Requires the process_file() abstract method to be created when sub-classing
- New method, deploy_as_on_demand_data_feed(), added to the Automation class of the pipeline module
  - This new method is only applicable for tasks that subclass the BlobTask base class.
- In the integration module, new helper methods have been added:
  - connect_to_sql() - creates a pyodbc connection object to enable easier querying of the SQL database via the pyodbc library
  - amortise_across_days() - enables easier amortisation of data across days in a period, either inclusive or exclusive of the end date
  - get_metadata_where_clause() - enables creation of a sql_where_clause for the get_device_sensors() method, where for each metadata key the sql checks it is not null
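A sketch of the new SQL helpers, assuming connect_to_sql takes api_inputs and that get_metadata_where_clause takes a list of metadata keys - that parameter name, and the sql_where_clause hand-off noted in the comment, are assumptions.

import pandas as pd
import switch_api as sw

api_inputs = sw.initialize(api_project_id='<portfolio-id>')

# Build a where clause that checks each metadata key is not null
# ('metadata_keys' is an illustrative parameter name).
where_clause = sw.integration.get_metadata_where_clause(metadata_keys=['Floor', 'Area'])
# where_clause can then be passed to get_device_sensors() as its sql_where_clause.

# Open a pyodbc connection and query it via pandas.
conn = sw.integration.connect_to_sql(api_inputs=api_inputs)
df = pd.read_sql('SELECT TOP 10 * FROM Installation', conn)
conn.close()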
- In the error_handlers module:
  - check_duplicates() method added to check for duplicates & post appropriate errors to the Task Insights UI in the Switch Automation platform
- In the _utils._utils module:
  - requests_retry_session2 helper function added to enable automatic retries of API calls
Updated
- In the integration module:
  - New parameter include_removed_sites added to the get_sites() function.
    - Determines whether or not to include sites marked as "IsRemoved" in the returned dataframe.
    - Defaults to False, indicating removed sites will not be included.
  - Updated the get_device_sensors() method to check whether requested metadata keys or requested tag groups exist for the portfolio, and to raise an exception if they don't.
  - New parameter send_notification added to the upsert_timeseries() function.
    - This enables IQ Notification messages to be sent when set to True
    - Defaults to False
  - For the get_sites(), get_device_sensors() and get_data() functions, additional parameters have been added to allow customisation of the newly implemented retry logic:
    - retries : int - number of retries performed before returning the last retry instance's response status. Max retries = 10. Defaults to 0 currently for backwards compatibility.
    - backoff_factor - a backoff factor to apply between attempts after the second try (most errors are resolved immediately by a second try without a delay): {backoff factor} * (2 ** ({retry count} - 1)) seconds
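A sketch of the retry parameters on get_sites(), assuming api_inputs is its first argument as with the other integration functions; any other optional arguments are omitted.

import switch_api as sw

api_inputs = sw.initialize(api_project_id='<portfolio-id>')

# Retry up to 3 times with exponential backoff, and include sites marked "IsRemoved".
sites_df = sw.integration.get_sites(
    api_inputs=api_inputs,
    include_removed_sites=True,
    retries=3,
    backoff_factor=0.5,
)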
- In the error_handlers module:
  - For the validate_datetime function, added two new parameters to enable automatic posting of errors to the Switch Platform:
    - errors : boolean, defaults to False. To enable posting of errors, set to True.
    - api_inputs : defaults to None. Needs to be set to the object returned from switch_api.initialize() if errors=True.
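A sketch of validate_datetime with error posting enabled. Only errors and api_inputs are confirmed above; the df and datetime_col argument names are hypothetical and used purely for illustration.

import pandas as pd
import switch_api as sw

api_inputs = sw.initialize(api_project_id='<portfolio-id>')
raw_df = pd.DataFrame()  # source file contents with the datetime column(s) to check

# df and datetime_col are hypothetical parameter names; errors=True posts any
# datetime errors found to the Switch Platform.
result = sw.error_handlers.validate_datetime(
    df=raw_df,
    datetime_col=['Timestamp'],
    errors=True,
    api_inputs=api_inputs,
)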
Fixed
- In the integration module:
  - Resolved an outlier scenario resulting in an unhandled exception in the upsert_sites() function.
  - Minor fix to the upsert_discovered_records() method to handle the case when unexpected columns are present in the dataframe passed to the df input parameter
0.4.6
Added
- Task Priority and Task Framework data feed deployment settings
  - Task Priority and Task Framework are now available to set when deploying data feeds
  - Task Priority
    - Determines the priority of the data feed tasks when processing.
    - This equates to how much resource is allotted to run the task
    - Available options are: default, standard, or advanced.
      - Set to advanced for higher resource allocation when processing the data feed task
    - Defaults to 'default'.
  - Task Framework
    - Determines the framework of the data feed tasks when processing.
    - 'PythonScriptFramework' for the old task runner engine.
    - 'TaskInsightsEngine' for the new task runner running in container apps.
    - Defaults to 'PythonScriptFramework'
0.4.5
Added
- Email Sender Module
  - Send emails to active users within a Portfolio in the Switch Automation Platform
  - Limitations:
    - Emails cannot be sent to users outside of the Portfolio, including other users within the platform
    - Maximum of five attachments per email
    - Each attachment has a maximum size of 5 MB
  - See function code documentation and usage example below
- New generate_filepath method to provide a filepath where files can be stored
  - Works well with the attachment feature of the Email Sender Module. Store files in the filepath generated by this method and pass it into email attachments
  - See function code documentation and usage example below
Email Sender Usage
import switch_api as sw
sw.email.send_email(
api_inputs=api_inputs,
subject='',
body='',
to_recipients=[],
cc_recipients=[], # Optional
bcc_recipients=[], # Optional
attachments=['/file/path/to/attachment.csv'], # Optional
conversation_id='' # Optional
)
generate_filepath Usage
import switch_api as sw
generated_attachment_filepath = sw.generate_filepath(api_inputs=api_inputs, filename='generated_attachment.txt')
# Example of where it could be used
sw.email.send_email(
...
attachments=[generated_attachment_filepath]
...
)
Fixed
- Issue where the upsert_device_sensors_ext method was not posting metadata and tag_columns to the API
0.3.3
Added
- New upsert_device_sensors_ext method in the integration module.
  - Compared to the existing upsert_device_sensors, the following are supported:
    - Installation Code or Installation Id may be provided
      - BUT a mix of the two cannot be provided; all rows must have either code or id, not both.
    - DriverClassName
    - DriverDeviceType
    - PropertyName
Added Feature - Switch Python Extensions
- Extensions may be used in Task Insights and Switch Guides for code reuse
- Extensions may be located in any directory structure within the repo where the usage scripts are located
  - You may need to adjust your environment to detect the files if you're not running a project environment
  - Tested on VSCode and PyCharm - contact Switch Support for issues.
Extensions Usage
import switch_api as sw

# Single import line per extension
from extensions.my_extension import MyExtension

@sw.extensions.provide(field="some_extension")
class MyTask:
    some_extension: MyExtension

if __name__ == "__main__":
    task = MyTask()
    task.some_extension.do_something()
Extensions Registration
import uuid
import switch_api as sw

class SimpleExtension(sw.extensions.ExtensionTask):
    @property
    def id(self) -> uuid.UUID:
        # Unique ID for the extension.
        # Generate in CLI using:
        # python -c 'import uuid; print(uuid.uuid4())'
        return '46759cfe-68fa-440c-baa9-c859264368db'

    @property
    def description(self) -> str:
        return 'Extension with a simple get_name function.'

    @property
    def author(self) -> str:
        return 'Amruth Akoju'

    @property
    def version(self) -> str:
        return '1.0.1'

    def get_name(self):
        return "Simple Extension"

# Scaffold code for registration. This will not be persisted in the extension.
if __name__ == '__main__':
    task = SimpleExtension()
    api_inputs = sw.initialize(api_project_id='<portfolio-id>')

    # Usage test
    print(task.get_name())

    # =================================================================
    # REGISTER TASK & DATAFEED ========================================
    # =================================================================
    register = sw.pipeline.Automation.register_task(api_inputs, task)
    print(register)
Updated
- get_data now has an optional parameter to return a pandas.DataFrame or JSON
0.2.27
Fix
- Issue where the Timezone DST Offsets API response of upsert_timeseries in the integration module was handled incorrectly
0.2.26
Updated
- Optional table_def parameter on upsert_data, append_data, and replace_data in the integration module
  - Enables clients to specify the table structure. It will be merged with the inferred table structure.
- list_deployments in the Automation module now provides Settings and DriverId associated with the deployments
0.2.25
Updated
- Updated handling of empty Timezone DST Offsets for upsert_timeseries in the integration module
0.2.24
Updated
- Fixed the default ingestion_mode parameter value to be 'Queue' instead of 'Queued' on upsert_timeseries in the integration module
0.2.23
Updated
- Optional ingestion_mode parameter on upsert_timeseries in the integration module
  - Includes ingestionMode in the json payload passed to the backend API
  - IngestionMode type must be Queue or Stream
  - Default ingestion_mode parameter value in upsert_timeseries is Queue
  - To enable table streaming ingestion, please contact helpdesk@switchautomation.com for assistance.
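A minimal sketch of selecting the ingestion mode, assuming the api_inputs/df calling pattern of the other integration upserts; the other required parameters of upsert_timeseries are omitted.

import pandas as pd
import switch_api as sw

api_inputs = sw.initialize(api_project_id='<portfolio-id>')
df = pd.DataFrame()  # timeseries dataframe prepared per the upsert_timeseries documentation

sw.integration.upsert_timeseries(
    api_inputs=api_inputs,
    df=df,
    ingestion_mode='Queue',  # or 'Stream' once streaming is enabled for the portfolio
)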
0.2.22
Updated
- Optional ingestion_mode parameter on upsert_data in the integration module
  - Includes ingestionMode in the json payload passed to the backend API
  - IngestionMode type must be Queue or Stream
  - Default ingestion_mode parameter value in upsert_data is Queue
  - To enable table streaming ingestion, please contact helpdesk@switchautomation.com for assistance.
Fix
- sw.pipeline.logger handlers stacking
0.2.21
Updated
- Fix to the get_data method in the dataset module
  - Synced the parameter structure with the backend API for get_data
    - List of dicts containing the properties name, value, and type
    - The type property must be one of the subset defined by the new Literal DATA_SET_QUERY_PARAMETER_TYPES
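A sketch of passing query parameters to get_data. The dataset_id and parameters argument names are illustrative only; the name/value/type keys come from the entry above, and 'String' is a placeholder for one of the DATA_SET_QUERY_PARAMETER_TYPES values.

import switch_api as sw

api_inputs = sw.initialize(api_project_id='<portfolio-id>')

data_df = sw.dataset.get_data(
    api_inputs=api_inputs,
    dataset_id='<dataset-id>',  # illustrative argument name
    parameters=[
        {'name': 'installationId', 'value': '<installation-id>', 'type': 'String'},
    ],
)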
0.2.20
Added
- Newly supported Azure Storage Account: GatewayMqttStorage
- An optional property on QueueTask to specify the QueueType
  - Default: DataIngestion
0.2.19
Fixed
- Fix to the upsert_timeseries method in the integration module
  - Normalized TimestampId and TimestampLocalId seconds
- Minor fix to the upsert_entities_affected method in the integration utils module
  - Prevents upserting the entities-affected count when the data feed file status Id is not valid
- Minor fix to the get_metadata_keys method in the integration helper module
  - Fix for an issue when a portfolio does not contain any values in the ApiMetadata table
0.2.18
Added
- Added new is_specific_timezone parameter to the upsert_timeseries method of the integration module
  - Accepts a timezone name as the specific timezone used by the source data.
  - Can either be of type str or bool and defaults to the value of False.
  - Cannot have a value if is_local_time is set to True.
  - Retrieve the list of available timezones using the get_timezones method in the integration module

| is_specific_timezone | is_local_time | Description |
|---|---|---|
| False | False | Datetimes in the provided data are already in UTC and should remain as the value of Timestamp. The TimestampLocal (conversion to site-local timezone) is calculated. |
| False | True | Datetimes in the provided data are already in the site-local timezone & should be used to set the value of the TimestampLocal field. The UTC Timestamp is calculated. |
| Has value | True | NOT ALLOWED |
| Has value | False | Both Timestamp and TimestampLocal fields are calculated. Datetime is converted to UTC then to local. |
| True | | NOT ALLOWED |
| '' (empty string) | | NOT ALLOWED |
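A sketch of ingesting data whose timestamps are in a specific source timezone, again assuming the api_inputs/df calling pattern; the timezone name is a placeholder and the get_timezones signature is an assumption.

import pandas as pd
import switch_api as sw

api_inputs = sw.initialize(api_project_id='<portfolio-id>')

# List the timezone names accepted by is_specific_timezone.
print(sw.integration.get_timezones(api_inputs=api_inputs))

df = pd.DataFrame()  # prepared timeseries dataframe

# Both Timestamp and TimestampLocal will be calculated from the source timezone.
sw.integration.upsert_timeseries(
    api_inputs=api_inputs,
    df=df,
    is_specific_timezone='Australia/Sydney',
    is_local_time=False,
)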
Fixed
- Minor fix to the upsert_tags and upsert_device_metadata methods in the integration module
  - The list of required_columns was incorrectly being updated when these functions were called
- Minor fix to the upsert_event_work_order_id method in the integration module when attempting to update the status of an Event
Updated
- Updated the DiscoveryIntegrationInput namedtuple - added job_id
- Updated the upsert_discovered_records method's required columns in the integration module
  - Added required JobId column for the DataFrame parameter
0.2.17
Fixed
- Fix to the upsert_timeseries() method in the integration module for duplicate records in ingestion files
  - Records whose Timestamp fell exactly on the DST start created 2 records with identical values but different TimestampLocal
    - One had the TimestampLocal of DST and the other did not
Updated
- Update to the get_sites() method in the integration module for the InstallationCode column
  - When the InstallationCode value is null in the database it returns an empty string
  - The InstallationCode column is explicitly cast to dtype 'str'
0.2.16
Added
- Added a new 5-minute interval for the EXPECTED_DELIVERY Literal in the automation module
  - Supported for data feed deployments: Email, FTP, Upload, and Timer
  - Usage: expected_delivery='5min'
Fixed
- Minor fix to the upsert_timeseries() method when using the data_feed_file_status_id parameter in the integration module.
  - The data_feed_file_status_id parameter value is now synced between process records and ingestion files when supplied
Updated
- Reduced ingestion-file record chunking by half in the upsert_timeseries() method in the integration module.
  - From 100k-record chunks down to 50k-record chunks
0.2.15
Updated
- Optimized memory usage of the upsert_timeseries() method in the integration module.
0.2.14
Fixed
- Minor fix to the invalid_file_format() method when creating structured logs in the error_handlers module.
0.2.13
Updated
- Freeze Pandera[io] version to 0.7.1
- PandasDtype has been deprecated since 0.8.0
Compatibility
- Ensure local environment is running Pandera==0.7.1 to match cloud container state
- Downgrade/Upgrade otherwise by running:
- pip uninstall pandera
- pip install switch_api
0.2.12
Added
- Added upsert_tags() method to the integration module.
  - Upsert tags to existing sites, devices, and sensors
  - Upserting of tags is categorised by the tagging level, which can be Site, Device, or Sensor level
  - The input dataframe requires an Identifier column whose value depends on the tagging level specified
    - For Site tag level, InstallationIds are expected in the Identifier column
    - For Device tag level, DeviceIds are expected in the Identifier column
    - For Sensor tag level, ObjectPropertyIds are expected in the Identifier column
- Added upsert_device_metadata() method to the integration module.
  - Upsert metadata to existing devices
Usage
upsert_tags()
- sw.integration.upsert_tags(api_inputs=api_inputs, df=raw_df, tag_level='Device')
- sw.integration.upsert_tags(api_inputs=api_inputs, df=raw_df, tag_level='Sensor')
- sw.integration.upsert_tags(api_inputs=api_inputs, df=raw_df, tag_level='Site')
upsert_device_metadata()
- sw.integration.upsert_device_metadata(api_inputs=api_inputs, df=raw_df)
0.2.11
Added
- New cache module that handles cache-data related transactions
  - set_cache method that stores data to cache
  - get_cache method that gets stored data from cache
  - Stored data can be scoped / retrieved in three categories, namely Task, Portfolio, and DataFeed scopes
    - For Task scope:
      - Data cache can be retrieved by any Portfolio or Datafeed that runs the same Task
      - Provide TaskId (self.id when calling from the driver)
    - For DataFeed scope:
      - Data cache can be retrieved (or set) within the Datafeed deployed in the portfolio
      - Provide a UUID4 for local testing. api_inputs.data_feed_id will be used when running in the cloud.
    - For Portfolio scope:
      - Data cache can be retrieved (or set) by any Datafeed deployed in the portfolio
      - scope_id will be ignored and api_inputs.api_project_id will be used.
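A sketch of the cache module; the set_cache/get_cache argument names other than scope_id are assumptions, as only the scoping behaviour is described above.

import switch_api as sw

api_inputs = sw.initialize(api_project_id='<portfolio-id>')

# Store a value at Portfolio scope (scope_id is ignored for this scope and
# api_inputs.api_project_id is used instead).
sw.cache.set_cache(
    api_inputs=api_inputs,
    scope='Portfolio',          # 'Task', 'Portfolio' or 'DataFeed'
    key='last_successful_run',  # illustrative key name
    val={'timestamp': '2024-01-01T00:00:00Z'},
    scope_id=None,
)

cached = sw.cache.get_cache(
    api_inputs=api_inputs,
    scope='Portfolio',
    key='last_successful_run',
    scope_id=None,
)
print(cached)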
0.2.10
Fixed
- Fixed issue with the upsert_timeseries_ds() method in the integration module where required fields such as Timestamp, ObjectPropertyId, and Value were being removed.
0.2.9
Added
- Added upsert_timeseries() method to the integration module.
  - Data is ingested into table storage in addition to the ADX Timeseries table
  - Carbon calculation is performed where appropriate
    - Please note: if carbon or cost are included as fields in the Meta column then no carbon / cost calculation will be performed
Changed
- Added DriverClassName to the required columns for the upsert_discovered_records() method in the integration module
Fixed
- A minor fix to the 15-minute interval in the upsert_timeseries_ds() method in the integration module.
0.2.8
Changed
- For the EventWorkOrderTask class in the pipeline module, the check_work_order_input_valid() and generate_work_order() methods expect an additional 3 keys to be included by default in the dictionary passed to the work_order_input parameter:
  - InstallationId
  - EventLink
  - EventSummary
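A sketch of the shape of the work_order_input dictionary with the three additional keys; the values are placeholders and the remaining keys depend on your task.

# The three keys below are now expected by default, in addition to whatever
# keys your EventWorkOrderTask already uses (placeholder values shown).
work_order_input = {
    'InstallationId': '<installation-id>',
    'EventLink': '<link-to-the-event>',
    'EventSummary': '<summary-of-the-event>',
    # ... other task-specific keys
}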
Fixed
- Issue with the header/payload passed to the API within the upsert_event_work_order_id() function of the integration module.
0.2.7
Added
- New method, deploy_as_on_demand_data_feed(), added to the Automation class of the pipeline module
  - This new method is only applicable for tasks that subclass the EventWorkOrderTask base class.
Changed
- The data_feed_id is now a required parameter, not optional, for the following methods on the Automation class of the pipeline module:
  - deploy_on_timer()
  - deploy_as_email_data_feed()
  - deploy_as_ftp_data_feed()
  - deploy_as_upload_data_feed()
- The email_address_domain is now a required parameter, not optional, for the deploy_as_email_data_feed() method on the Automation class of the pipeline module.
Fixed
- Issue with the payload on the switch_api.pipeline.Automation.register_task() method for the AnalyticsTask and EventWorkOrderTask base classes.
0.2.6
Fixed
- Fixed issues on 2 methods in the Automation class of the pipeline module:
  - delete_data_feed()
  - cancel_deployed_data_feed()
Added
In the pipeline module:
- Added new class EventWorkOrderTask
  - This task type is for the generation of work orders in 3rd party systems via the Switch Automation Platform's Events UI.
Changed
In the pipeline module:
- AnalyticsTask - added a new method & a new abstract property:
  - analytics_settings_definition abstract property - defines the required inputs (& how these are displayed in the Switch Automation Platform UI) for the task to successfully run
  - Added check_analytics_settings_valid() method that should be used to validate that the analytics_settings dictionary passed to the start() method contains the required keys for the task to successfully run (as defined by the analytics_settings_definition)
In the error_handlers module:
- In the post_errors() function, the parameter errors_df is renamed to errors and now accepts strings in addition to pandas.DataFrame
Removed
Due to cutover to a new backend, the following have been removed:
- run_clone_modules() function from the analytics module
- the entire platform_insights module, including the get_current_insights_by_equipment() function
0.2.5
Added
- The Automation class of the pipeline module has 2 new methods added:
  - delete_data_feed()
    - Used to delete an existing data feed and all related deployment settings
  - cancel_deployed_data_feed()
    - Used to cancel the specified deployment_type for a given data_feed_id
    - Replaces and expands the functionality previously provided in the cancel_deployed_timer() method, which has been removed.
Removed
- Removed the cancel_deployed_timer() method from the Automation class of the pipeline module
  - This functionality is available through the new cancel_deployed_data_feed() method when the deployment_type parameter is set to ['Timer']
0.2.4
Changed
- New parameter data_feed_name added to the 4 deployment methods in the pipeline module's Automation class:
  - deploy_as_email_data_feed()
  - deploy_as_ftp_data_feed()
  - deploy_as_upload_data_feed()
  - deploy_on_timer()
0.2.3
Fixed
- Resolved a minor issue with the register_task() method of the Automation class in the pipeline module.
0.2.2
Fixed
- Resolved a minor issue with the upsert_discovered_records() function in the integration module related to device-level and sensor-level tags.
0.2.1
Added
- New class added to the pipeline module: DiscoverableIntegrationTask - for API integrations that are discoverable.
  - Requires process() & run_discovery() abstract methods to be created when sub-classing
  - An additional abstract property, integration_device_type_definition, is required compared to the base Task
- New function upsert_discovered_records() added to the integration module
  - Required for the DiscoverableIntegrationTask.run_discovery() method to upsert discovery records to the Build - Discovery & Selection UI
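A minimal subclass sketch, assuming the base Task properties (id, description, author, version) shown in the extension example above are also required here; the method signatures and the structure returned by integration_device_type_definition are assumptions - only the names come from this changelog entry.

import uuid
import pandas as pd
import switch_api as sw

class MyDiscoverableIntegration(sw.pipeline.DiscoverableIntegrationTask):
    @property
    def id(self) -> uuid.UUID:
        return '22222222-2222-2222-2222-222222222222'  # replace with your own UUID4

    @property
    def description(self) -> str:
        return 'Example discoverable API integration.'

    @property
    def author(self) -> str:
        return 'Your Name'

    @property
    def version(self) -> str:
        return '0.1.0'

    @property
    def integration_device_type_definition(self):
        # Structure not described in this changelog - consult the class documentation.
        return ...

    def run_discovery(self, api_inputs, *args, **kwargs):  # signature is an assumption
        # Discover devices/sensors and upsert them to the Build - Discovery & Selection UI.
        discovered_df = pd.DataFrame()  # build discovery records here
        return sw.integration.upsert_discovered_records(api_inputs=api_inputs, df=discovered_df)

    def process(self, api_inputs, *args, **kwargs):  # signature is an assumption
        # Ingest data for the records selected in the UI.
        pass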
Fixed
- Set minimum msal version required for the switch_api package to be installed.
0.2.0
Major overhaul of the switch_api package: the API used by the package was completely replaced.
Changed
- The user_id parameter has been removed from the switch_api.initialise() function.
  - Authentication of the user is now done via Switch Platform SSO. The call to initialise will trigger a web browser window to open to the platform login screen.
    - Note: each call to initialise for a portfolio in a different datacentre will open a browser window and require the user to input their username & password.
    - For initialise on a different portfolio within the same datacentre, the authentication is cached so the user will not be asked to login again.
- api_inputs is now a required parameter for switch_api.pipeline.Automation.register_task()
- The deploy_on_timer(), deploy_as_email_data_feed(), deploy_as_upload_data_feed(), and deploy_as_ftp_data_feed() methods on the switch_api.pipeline.Automation class have an added parameter: data_feed_id
  - This new parameter allows the user to update an existing deployment for the portfolio specified in the api_inputs.
  - If data_feed_id is not supplied, a new data feed instance will be created (even if the portfolio already has that task deployed to it)
0.1.18
Changed
- Removed rebuild of the ObjectProperties table in ADX on call to upsert_device_sensors()
- Removed rebuild of the Installation table in ADX on call to upsert_sites()
0.1.17
Fixed
- Fixed issue with the deploy_on_timer() method of the Automation class in the pipeline module.
- Fixed a column header issue with the get_tag_groups() function of the integration module.
- Fixed the missing Meta column on the table generated via the upsert_workorders() function of the integration module.
Added
- New method for uploading custom data to blob: Blob.custom_upload()
Updated
- Updated upsert_device_sensors() to improve performance and aid release of future functionality.
0.1.16
Added
To the pipeline module:
- New method, data_feed_history_process_errors(), added to the Automation class.
  - This method returns a dataframe containing the distinct set of error types encountered for a specific data_feed_file_status_id
- New method, data_feed_history_errors_by_type(), added to the Automation class.
  - This method returns a dataframe containing the actual errors identified for the specified error_type and data_feed_file_status_id
Additional logging was also incorporated in the backend to support the Switch Platform UI.
Fixed
- Fixed issue with the register() method of the Automation class in the pipeline module.
Changed
For the pipeline module:
- Standardised the following methods of the Automation class to return pandas.DataFrame objects.
- Added additional error checks to ensure only allowed values are passed to the various Automation class methods for the parameters:
  - expected_delivery
  - deploy_type
  - queue_name
  - error_type

For the integration module:
- Added additional error checks to ensure only allowed values are passed to the post_errors function for the parameters:
  - error_type
  - process_status

For the dataset module:
- Added an additional error check to ensure only allowed values are provided for the query_language parameter of the get_data function.

For the _platform module:
- Added additional error checks to ensure only allowed values are provided for the account parameter.
0.1.14
Changed
- Updated get_device_sensors() to not auto-detect the data type, to prevent issues such as stripping leading zeroes from metadata values.
0.1.13
Added
To the pipeline module:
- Added a new method, data_feed_history_process_output(), to the Automation class
0.1.11
Changed
- Updated access to the logger - now available as switch_api.pipeline.logger()
- Updated function documentation
0.1.10
Changed
- Updated the calculation of min/max date (for timezone conversions) inside the upsert_device_sensors function, as the previous calculation method will not be supported in a future release of numpy.
Fixed
- Fixed issue with the retrieval of tag groups and tags via the functions:
  - get_sites
  - get_device_sensors
0.1.9
Added
- New module platform_insights

In the integration module:
- New function get_sites added to look up site information (optionally with site-level tags)
- New function get_device_sensors added to assist with lookup of device/sensor information, optionally including either metadata or tags
- New function get_tag_groups added to look up the list of sensor-level tag groups
- New function get_metadata_keys added to look up the list of device-level metadata keys
Changed
- Modifications to connections to storage accounts.
- Additional parameter queue_name added to the following methods of the Automation class of the pipeline module:
  - deploy_on_timer
  - deploy_as_email_data_feed
  - deploy_as_upload_data_feed
  - deploy_as_ftp_data_feed
Fixed
In the pipeline module:
- Addressed an issue with the schema validation for the upsert_workorders function
0.1.8
Changed
In the integrations module:
- Updated to batch upserts by DeviceCode to improve the reliability & performance of the upsert_device_sensors function.
Fixed
In the analytics module:
- Fixed a typing issue that caused an error in the import of the switch_api package for python 3.8
0.1.7
Added
In the integrations module:
- Added new function upsert_workorders
  - Provides the ability to ingest work order data into the Switch Automation Platform.
  - Documentation provides details on required & optional fields in the input dataframe and also provides information on allowed values for some fields.
  - Two attributes available for the function, added to assist with creation of scripts by providing the list of required & optional fields:
    - upsert_workorders.df_required_columns
    - upsert_workorders.df_optional_columns
- Added new function get_states_by_country:
  - Retrieves the list of states for a given country. Returns a dataframe containing both the state name and abbreviation.
- Added new function get_equipment_classes:
  - Retrieves the list of allowed values for Equipment Class.
    - EquipmentClass is a required field for the upsert_device_sensors function
Changed
In the integrations module:
- For the upsert_device_sensors function:
  - New attributes added to assist with creation of tasks:
    - upsert_device_sensors.df_required_columns - returns list of required columns for the input df
  - Two new fields are required to be present in the dataframe passed to the function via the df parameter:
    - EquipmentClass
    - EquipmentLabel
  - Fix to documentation so required fields in documentation match.
- For the upsert_sites function:
  - New attributes added to assist with creation of tasks:
    - upsert_sites.df_required_columns - returns list of required columns for the input df
    - upsert_sites.df_optional_columns - returns list of optional columns for the input df
- For the get_templates function:
  - Added functionality to filter by type via new parameter object_property_type
  - Fixed capitalisation issue where the first character of column names in the dataframe returned by the function had been converted to lowercase.
- For the get_units_of_measure function:
  - Added functionality to filter by type via new parameter object_property_type
  - Fixed capitalisation issue where the first character of column names in the dataframe returned by the function had been converted to lowercase.
In the analytics module:
- Modifications to type hints and documentation for the functions:
  - get_clone_modules_list
  - run_clone_modules
- Additional logging added to run_clone_modules
0.1.6
Added
- Added new function upsert_timeseries_ds() to the integrations module
Changed
- Additional logging added to the invalid_file_format() function from the error_handlers module.
Removed
- Removed the append_timeseries() function
0.1.5
Fixed
- Bug with the upsert_sites() function that caused optional columns to be treated as required columns.
Added
Added additional functions to the error_handlers module:
- validate_datetime() - checks whether the values of the datetime column(s) of the source file are valid. Any datetime errors identified by this function should be passed to the post_errors() function.
- post_errors() - used to post errors (apart from those identified by the invalid_file_format() function) to the data feed dashboard.
0.1.4
Changed
Added additional required properties to the Abstract Base Classes (ABC): Task, IntegrationTask, AnalyticsTask, LogicModuleTask. These properties are:
- Author
- Version

Added an additional parameter, query_language, to the switch.integration.get_data() function. Allowed values for this parameter are:
- sql
- kql

Removed the name_as_filename and treat_as_timeseries parameters from the following functions:
- switch.integration.replace_data()
- switch.integration.append_data()
- switch.integration.upload_data()