A complete package for data ingestion into the Switch Automation Platform.
Project description
Switch Automation library for Python
This is a package for data ingestion into the Switch Automation software platform.
You can find out more about the platform on the Switch Automation website.
Getting started
Prerequisites
- Python 3.8 or later is required to use this package.
- You must have a Switch Automation user account to use this package.
Install the package
Install the Switch Automation library for Python with pip:
pip install switch_api
History
0.2.15
Updated
- Optimized `upsert_timeseries()` method memory upkeep in the `integration` module.
0.2.14
Fixed
- Minor fix on `invalid_file_format()` method creating structured logs in the `error_handlers` module.
0.2.13
Updated
- Freeze `pandera[io]` version to 0.7.1
  - `PandasDtype` has been deprecated since 0.8.0
Compatibility
- Ensure the local environment is running `pandera==0.7.1` to match the cloud container state
- Downgrade/upgrade otherwise by running:
  - `pip uninstall pandera`
  - `pip install switch_api`
0.2.12
Added
- Added `upsert_tags()` method to the `integration` module.
  - Upsert tags to existing sites, devices, and sensors
  - Upserting of tags is categorised by the tagging level: Site, Device, or Sensor
  - The input dataframe requires an `Identifier` column whose value depends on the tagging level specified:
    - For the Site tag level, InstallationIds are expected in the `Identifier` column
    - For the Device tag level, DeviceIds are expected in the `Identifier` column
    - For the Sensor tag level, ObjectPropertyIds are expected in the `Identifier` column
- Added `upsert_device_metadata()` method to the `integration` module.
  - Upsert metadata to existing devices
Usage
`upsert_tags()`
- `sw.integration.upsert_tags(api_inputs=api_inputs, df=raw_df, tag_level='Device')`
- `sw.integration.upsert_tags(api_inputs=api_inputs, df=raw_df, tag_level='Sensor')`
- `sw.integration.upsert_tags(api_inputs=api_inputs, df=raw_df, tag_level='Site')`
`upsert_device_metadata()`
- `sw.integration.upsert_device_metadata(api_inputs=api_inputs, df=raw_df)`
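A minimal sketch of preparing the input dataframe for a Device-level `upsert_tags()` call. The identifier values and the `Criticality` tag-group column are hypothetical; only the `Identifier` column requirement is documented above.

```python
import pandas as pd
import switch_api as sw

# api_inputs is assumed to have been obtained earlier via sw.initialise().
# For tag_level='Device' the Identifier column holds DeviceIds; the values
# below are hypothetical, and Criticality is assumed to be a tag-group column.
raw_df = pd.DataFrame({
    'Identifier': ['device-0001', 'device-0002'],
    'Criticality': ['High', 'Low'],
})

sw.integration.upsert_tags(api_inputs=api_inputs, df=raw_df, tag_level='Device')
```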
0.2.11
Added
- New `cache` module that handles cache data related transactions
  - `set_cache` method that stores data to cache
  - `get_cache` method that retrieves stored data from cache
  - Stored data can be scoped/retrieved in three categories, namely Task, Portfolio, and DataFeed scopes
    - For Task scope:
      - Data cache can be retrieved by any Portfolio or Datafeed that runs in the same Task
      - Provide TaskId (`self.id` when calling from the driver)
    - For DataFeed scope:
      - Data cache can be retrieved (or set) within the Datafeed deployed in a portfolio
      - Provide a UUID4 for local testing; `api_inputs.data_feed_id` will be used when running in the cloud
    - For Portfolio scope:
      - Data cache can be retrieved (or set) by any Datafeed deployed in the portfolio
      - `scope_id` will be ignored and `api_inputs.api_project_id` will be used
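A minimal sketch of Portfolio-scoped caching. The changelog documents the scopes but not the exact signatures, so the parameter names (`scope`, `key`, `val`, `scope_id`) are assumptions.

```python
import switch_api as sw

# api_inputs is assumed to come from sw.initialise(). Parameter names below
# are assumptions - the changelog names set_cache/get_cache and the scopes only.
sw.cache.set_cache(api_inputs=api_inputs, scope='Portfolio',
                   key='last_run_utc', val='2021-10-01T00:00:00Z',
                   scope_id=None)  # scope_id is ignored at Portfolio scope

# Retrieve it later from any Datafeed deployed in the same portfolio.
last_run = sw.cache.get_cache(api_inputs=api_inputs, scope='Portfolio',
                              key='last_run_utc', scope_id=None)
```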
0.2.10
Fixed
- Fixed issue with `upsert_timeseries_ds()` method in the `integration` module where required fields such as `Timestamp`, `ObjectPropertyId`, and `Value` were being removed.
0.2.9
Added
- Added `upsert_timeseries()` method to the `integration` module.
  - Data is ingested into table storage in addition to the ADX Timeseries table
  - Carbon calculation is performed where appropriate
    - Please note: if carbon or cost are included as fields in the `Meta` column, then no carbon/cost calculation will be performed
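A minimal sketch of an `upsert_timeseries()` call. The column set is an assumption based on the required fields named for `upsert_timeseries_ds()` under 0.2.10; the ObjectPropertyId values are hypothetical placeholders.

```python
import pandas as pd
import switch_api as sw

# api_inputs is assumed to come from sw.initialise(). Assumed minimum columns:
# Timestamp, ObjectPropertyId, Value (per the fields listed under 0.2.10).
raw_df = pd.DataFrame({
    'Timestamp': pd.to_datetime(['2021-09-01 00:00', '2021-09-01 00:15']),
    'ObjectPropertyId': ['sensor-guid-1', 'sensor-guid-1'],  # hypothetical ids
    'Value': [101.5, 98.2],
})

sw.integration.upsert_timeseries(api_inputs=api_inputs, df=raw_df)
```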
Changed
- Added `DriverClassName` to required columns for the `upsert_discovered_records()` method in the `integration` module
Fixed
- A minor fix to the 15-minute interval in the `upsert_timeseries_ds()` method in the `integration` module.
0.2.8
Changed
- For the `EventWorkOrderTask` class in the `pipeline` module, the `check_work_order_input_valid()` and the `generate_work_order()` methods expect an additional 3 keys to be included by default in the dictionary passed to the `work_order_input` parameter:
  - `InstallationId`
  - `EventLink`
  - `EventSummary`
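A minimal sketch of a `work_order_input` dictionary carrying the three newly required keys; all values are hypothetical, and subclasses may expect additional task-specific keys.

```python
# The three keys below are required by default as of this release; the values
# shown are hypothetical placeholders.
work_order_input = {
    'InstallationId': 'c0ffee00-0000-0000-0000-000000000001',
    'EventLink': 'https://example.switchautomation.com/events/123',
    'EventSummary': 'Zone temperature out of range',
}
```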
Fixed
- Issue with the header/payload passed to the API within the `upsert_event_work_order_id()` function of the `integration` module.
0.2.7
Added
- New method, `deploy_as_on_demand_data_feed()`, added to the `Automation` class of the `pipeline` module
  - This new method is only applicable for tasks that subclass the `EventWorkOrderTask` base class.
Changed
- The `data_feed_id` is now a required parameter, not optional, for the following methods on the `Automation` class of the `pipeline` module:
  - `deploy_on_timer()`
  - `deploy_as_email_data_feed()`
  - `deploy_as_ftp_data_feed()`
  - `deploy_as_upload_data_feed()`
- The `email_address_domain` is now a required parameter, not optional, for the `deploy_as_email_data_feed()` method on the `Automation` class of the `pipeline` module.
Fixed
- Issue with payload on `switch_api.pipeline.Automation.register_task()` method for `AnalyticsTask` and `EventWorkOrderTask` base classes.
0.2.6
Fixed
- Fixed issues on 2 methods in the `Automation` class of the `pipeline` module:
  - `delete_data_feed()`
  - `cancel_deployed_data_feed()`
Added
In the `pipeline` module:
- Added new class `EventWorkOrderTask`
  - This task type is for generation of work orders in 3rd party systems via the Switch Automation Platform's Events UI.
Changed
In the `pipeline` module:
- `AnalyticsTask` - added a new method & a new abstract property:
  - `analytics_settings_definition` abstract property - defines the required inputs (& how these are displayed in the Switch Automation Platform UI) for the task to successfully run
  - Added `check_analytics_settings_valid()` method that should be used to validate that the `analytics_settings` dictionary passed to the `start()` method contains the required keys for the task to successfully run (as defined by the `analytics_settings_definition`)
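A minimal sketch of an `AnalyticsTask` subclass using both additions. The structure returned by the property and the `start()` signature are assumptions (the changelog names the members but not their schemas), and other required abstract members are omitted for brevity.

```python
import switch_api as sw

class OverheatingCheck(sw.pipeline.AnalyticsTask):
    # Hypothetical settings-definition shape - the changelog does not document
    # the expected schema, so this structure is illustrative only.
    @property
    def analytics_settings_definition(self):
        return [{'name': 'threshold', 'display_label': 'Threshold (degC)',
                 'default_value': '26'}]

    def start(self, api_inputs, analytics_settings):
        # Validate that the supplied dictionary contains the required keys
        # (as defined above) before running the task logic.
        if not self.check_analytics_settings_valid(analytics_settings=analytics_settings):
            return
        threshold = float(analytics_settings['threshold'])
        # ... task logic using the validated settings
```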
In the `error_handlers` module:
- In the `post_errors()` function, the parameter `errors_df` is renamed to `errors` and now accepts strings in addition to pandas.DataFrame
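A minimal sketch of the renamed parameter accepting a plain string; the `error_type` and `process_status` values shown are hypothetical examples of the allowed-value parameters mentioned under 0.1.16.

```python
import switch_api as sw

# api_inputs is assumed to come from sw.initialise(). 'errors' now accepts a
# plain string as well as a pandas.DataFrame; the error_type and
# process_status values below are hypothetical.
sw.error_handlers.post_errors(api_inputs=api_inputs,
                              errors='Source file was missing a Timestamp column.',
                              error_type='DateTime', process_status='ActionRequired')
```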
Removed
Due to cutover to a new backend, the following have been removed:
- `run_clone_modules()` function from the `analytics` module
- the entire `platform_insights` module, including the `get_current_insights_by_equipment()` function
0.2.5
Added
- The `Automation` class of the `pipeline` module has 2 new methods added:
  - `delete_data_feed()` - used to delete an existing data feed and all related deployment settings
  - `cancel_deployed_data_feed()` - used to cancel the specified `deployment_type` for a given `data_feed_id`
    - Replaces and expands the functionality previously provided in the `cancel_deployed_timer()` method, which has been removed.
Removed
- Removed the `cancel_deployed_timer()` method from the `Automation` class of the `pipeline` module
  - This functionality is available through the new `cancel_deployed_data_feed()` method when the `deployment_type` parameter is set to `['Timer']`
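A minimal sketch of the replacement call, assuming the keyword parameters match the names above; the `data_feed_id` value is a hypothetical placeholder.

```python
import switch_api as sw

# api_inputs is assumed to come from sw.initialise(). Equivalent of the
# removed cancel_deployed_timer(): cancel only the Timer deployment for the
# given data feed. The UUID below is a placeholder.
sw.pipeline.Automation.cancel_deployed_data_feed(
    api_inputs=api_inputs,
    data_feed_id='00000000-0000-0000-0000-000000000000',
    deployment_type=['Timer'])
```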
0.2.4
Changed
- New parameter `data_feed_name` added to the 4 deployment methods in the `pipeline` module's `Automation` class:
  - `deploy_as_email_data_feed()`
  - `deploy_as_ftp_data_feed()`
  - `deploy_as_upload_data_feed()`
  - `deploy_on_timer()`
0.2.3
Fixed
- Resolved minor issue on `register_task()` method for the `Automation` class in the `pipeline` module.
0.2.2
Fixed
- Resolved minor issue on `upsert_discovered_records()` function in the `integration` module related to device-level and sensor-level tags.
0.2.1
Added
- New class added to the `pipeline` module: `DiscoverableIntegrationTask`
  - For API integrations that are discoverable.
  - Requires `process()` & `run_discovery()` abstract methods to be created when sub-classing
  - Additional abstract property, `integration_device_type_definition`, required compared to base `Task`
- New function `upsert_discovered_records()` added to the `integration` module
  - Required for the `DiscoverableIntegrationTask.run_discovery()` method to upsert discovery records to the Build - Discovery & Selection UI
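A minimal skeleton of a `DiscoverableIntegrationTask` subclass wiring `run_discovery()` to `upsert_discovered_records()`. The method signatures, the property value, and the dataframe columns are assumptions, apart from `DriverClassName`, which 0.2.9 lists as a required column.

```python
import pandas as pd
import switch_api as sw

class AcmeBmsIntegration(sw.pipeline.DiscoverableIntegrationTask):
    # Hypothetical value - the changelog names the abstract property but does
    # not document its expected structure.
    @property
    def integration_device_type_definition(self):
        return {'DriverClassName': 'AcmeBms'}

    def run_discovery(self, api_inputs, *args, **kwargs):
        # Discover candidate devices, then upsert them so they appear in the
        # Build - Discovery & Selection UI. Columns other than DriverClassName
        # are hypothetical.
        df = pd.DataFrame([{'DriverClassName': 'AcmeBms', 'DeviceCode': 'AHU-01'}])
        sw.integration.upsert_discovered_records(api_inputs=api_inputs, df=df)

    def process(self, api_inputs, *args, **kwargs):
        # Ingest data for the records selected in the UI.
        pass
```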
Fixed
- Set minimum msal version required for the switch_api package to be installed.
0.2.0
Major overhaul of the switch_api package: the API used by the package was completely replaced.
Changed
- The `user_id` parameter has been removed from the `switch_api.initialise()` function.
  - Authentication of the user is now done via Switch Platform SSO. The call to initialise will trigger a web browser window to open to the platform login screen.
    - Note: each call to initialise for a portfolio in a different datacentre will open up a browser and require the user to input their username & password.
    - For initialise on a different portfolio within the same datacentre, the authentication is cached so the user will not be asked to log in again.
- `api_inputs` is now a required parameter for `switch_api.pipeline.Automation.register_task()`
- The `deploy_on_timer()`, `deploy_as_email_data_feed()`, `deploy_as_upload_data_feed()`, and `deploy_as_ftp_data_feed()` methods on the `switch_api.pipeline.Automation` class have an added parameter: `data_feed_id`
  - This new parameter allows the user to update an existing deployment for the portfolio specified in the `api_inputs`.
  - If `data_feed_id` is not supplied, a new data feed instance will be created (even if the portfolio already has that task deployed to it)
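A minimal sketch of the new flow. The `initialise()` parameter name is an assumption, all identifier values are hypothetical, and any other required deployment parameters are omitted for brevity (`data_feed_name` is confirmed by the 0.2.4 entry).

```python
import switch_api as sw

# initialise() no longer takes user_id; it opens a browser window for Switch
# Platform SSO. The api_project_id parameter name is an assumption.
api_inputs = sw.initialise(api_project_id='00000000-0000-0000-0000-000000000000')

# Supplying data_feed_id updates an existing deployment rather than creating
# a new data feed instance. The UUID below is a placeholder.
sw.pipeline.Automation.deploy_on_timer(
    api_inputs=api_inputs,
    data_feed_id='11111111-1111-1111-1111-111111111111',
    data_feed_name='Acme BMS ingestion')
```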
0.1.18
Changed
- Removed rebuild of the ObjectProperties table in ADX on call to `upsert_device_sensors()`
- Removed rebuild of the Installation table in ADX on call to `upsert_sites()`
0.1.17
Fixed
- Fixed issue with `deploy_on_timer()` method of the `Automation` class in the `pipeline` module.
- Fixed column header issue with the `get_tag_groups()` function of the `integration` module.
- Fixed missing Meta column on table generated via the `upsert_workorders()` function of the `integration` module.
Added
- New method for uploading custom data to blob: `Blob.custom_upload()`
Updated
- Updated the `upsert_device_sensors()` method to improve performance and aid release of future functionality.
0.1.16
Added
To the `pipeline` module:
- New method, `data_feed_history_process_errors()`, added to the `Automation` class.
  - This method returns a dataframe containing the distinct set of error types encountered for a specific `data_feed_file_status_id`
- New method, `data_feed_history_errors_by_type()`, added to the `Automation` class.
  - This method returns a dataframe containing the actual errors identified for the specified `error_type` and `data_feed_file_status_id`
Additional logging was also incorporated in the backend to support the Switch Platform UI.
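A minimal sketch of drilling into the errors for one processed file, assuming keyword parameters matching the names above; the status id and the `ErrorType` column name are hypothetical.

```python
import switch_api as sw

# api_inputs is assumed to come from sw.initialise(). Hypothetical id of a
# previously processed data feed file.
status_id = '22222222-2222-2222-2222-222222222222'

# Distinct error types for the file, then the actual errors for one type.
types_df = sw.pipeline.Automation.data_feed_history_process_errors(
    api_inputs=api_inputs, data_feed_file_status_id=status_id)
errors_df = sw.pipeline.Automation.data_feed_history_errors_by_type(
    api_inputs=api_inputs, data_feed_file_status_id=status_id,
    error_type=types_df['ErrorType'].iloc[0])  # column name is an assumption
```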
Fixed
- Fixed issue with `register()` method of the `Automation` class in the `pipeline` module.
Changed
For the `pipeline` module:
- Standardised the following methods of the `Automation` class to return pandas.DataFrame objects.
- Added additional error checks to ensure only allowed values are passed to the various `Automation` class methods for the parameters:
  - `expected_delivery`
  - `deploy_type`
  - `queue_name`
  - `error_type`
For the `integration` module:
- Added additional error checks to ensure only allowed values are passed to the `post_errors` function for the parameters:
  - `error_type`
  - `process_status`
For the `dataset` module:
- Added additional error check to ensure only allowed values are provided for the `query_language` parameter of the `get_data` function.
For the `_platform` module:
- Added additional error checks to ensure only allowed values are provided for the `account` parameter.
0.1.14
Changed
- Updated `get_device_sensors()` to not auto-detect the data type, to prevent issues such as stripping leading zeroes from metadata values.
0.1.13
Added
To the `pipeline` module:
- Added a new method, `data_feed_history_process_output()`, to the `Automation` class
0.1.11
Changed
- Update to access to `logger` - now available as `switch_api.pipeline.logger()`
- Update to function documentation
0.1.10
Changed
- Updated the calculation of min/max date (for timezone conversions) inside the `upsert_device_sensors` function, as the previous calculation method will not be supported in a future release of numpy.
Fixed
- Fixed issue with retrieval of tag groups and tags via the functions:
  - `get_sites`
  - `get_device_sensors`
0.1.9
Added
- New module `platform_insights`
In the `integration` module:
- New function `get_sites` added to look up site information (optionally with site-level tags)
- New function `get_device_sensors` added to assist with lookup of device/sensor information, optionally including either metadata or tags
- New function `get_tag_groups` added to look up the list of sensor-level tag groups
- New function `get_metadata_keys` added to look up the list of device-level metadata keys
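A minimal sketch of the new lookup helpers, assuming each takes `api_inputs` and returns a pandas.DataFrame; any further parameters (such as tag or metadata options) are not shown because they are not documented here.

```python
import switch_api as sw

# api_inputs is assumed to come from sw.initialise(). Each helper is assumed
# to take api_inputs and return a pandas.DataFrame.
sites_df = sw.integration.get_sites(api_inputs=api_inputs)
sensors_df = sw.integration.get_device_sensors(api_inputs=api_inputs)
tag_groups_df = sw.integration.get_tag_groups(api_inputs=api_inputs)
metadata_keys_df = sw.integration.get_metadata_keys(api_inputs=api_inputs)
```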
Changed
- Modifications to connections to storage accounts.
- Additional parameter `queue_name` added to the following methods of the `Automation` class of the `pipeline` module:
  - `deploy_on_timer`
  - `deploy_as_email_data_feed`
  - `deploy_as_upload_data_feed`
  - `deploy_as_ftp_data_feed`
Fixed
In the `pipeline` module:
- Addressed issue with the schema validation for the `upsert_workorders` function
0.1.8
Changed
In the `integrations` module:
- Updated to batch upserts by DeviceCode to improve reliability & performance of the `upsert_device_sensors` function.
Fixed
In the `analytics` module:
- Typing issue that caused an error in the import of the switch_api package for Python 3.8
0.1.7
Added
In the `integrations` module:
- Added new function `upsert_workorders`
  - Provides ability to ingest work order data into the Switch Automation Platform.
  - Documentation provides details on required & optional fields in the input dataframe and also provides information on allowed values for some fields.
  - Two attributes available for the function, added to assist with creation of scripts by providing the list of required & optional fields (see the sketch after this list):
    - `upsert_workorders.df_required_columns`
    - `upsert_workorders.df_optional_columns`
- Added new function `get_states_by_country`:
  - Retrieves the list of states for a given country. Returns a dataframe containing both the state name and abbreviation.
- Added new function `get_equipment_classes`:
  - Retrieves the list of allowed values for Equipment Class.
    - EquipmentClass is a required field for the `upsert_device_sensors` function
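A minimal sketch of scaffolding an input dataframe from the helper attributes, assuming each attribute returns a list of column names as described above.

```python
import pandas as pd
import switch_api as sw

# The attributes are documented to return lists of column names, which makes
# it easy to scaffold an empty, correctly shaped dataframe for upsert_workorders.
required = sw.integration.upsert_workorders.df_required_columns
optional = sw.integration.upsert_workorders.df_optional_columns
workorders_df = pd.DataFrame(columns=list(required) + list(optional))
```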
Changed
In the `integrations` module:
- For the `upsert_device_sensors` function:
  - New attribute added to assist with creation of tasks:
    - `upsert_device_sensors.df_required_columns` - returns list of required columns for the input `df`
  - Two new fields required to be present in the dataframe passed to the function by parameter `df`:
    - `EquipmentClass`
    - `EquipmentLabel`
  - Fix to documentation so required fields in documentation match.
- For the `upsert_sites` function:
  - New attributes added to assist with creation of tasks:
    - `upsert_sites.df_required_columns` - returns list of required columns for the input `df`
    - `upsert_sites.df_optional_columns` - returns list of optional columns for the input `df`
- For the `get_templates` function:
  - Added functionality to filter by type via new parameter `object_property_type`
  - Fixed capitalisation issue where the first character of column names in the dataframe returned by the function had been converted to lowercase.
- For the `get_units_of_measure` function:
  - Added functionality to filter by type via new parameter `object_property_type`
  - Fixed capitalisation issue where the first character of column names in the dataframe returned by the function had been converted to lowercase.
In the `analytics` module:
- Modifications to type hints and documentation for the functions:
  - `get_clone_modules_list`
  - `run_clone_modules`
- Additional logging added to `run_clone_modules`
0.1.6
Added
- Added new function `upsert_timeseries_ds()` to the `integrations` module
Changed
- Additional logging added to `invalid_file_format()` function from the `error_handlers` module.
Removed
- Removed `append_timeseries()` function
0.1.5
Fixed
- Bug with `upsert_sites()` function that caused optional columns to be treated as required columns.
Added
Added additional functions to the `error_handlers` module:
- `validate_datetime()` - checks whether the values of the datetime column(s) of the source file are valid. Any datetime errors identified by this function should be passed to the `post_errors()` function.
- `post_errors()` - used to post errors (apart from those identified by the `invalid_file_format()` function) to the data feed dashboard.
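A minimal sketch of the intended flow, assuming `validate_datetime()` takes the source dataframe plus the datetime column name(s) and returns the offending rows; all parameter names are assumptions. Note the errors parameter was named `errors_df` at this release and renamed to `errors` in 0.2.6.

```python
import switch_api as sw

# Parameter names are assumptions; the changelog describes the flow
# (validate, then post any identified errors) but not the exact signatures.
errors_df = sw.error_handlers.validate_datetime(
    api_inputs=api_inputs, df=raw_df, datetime_col='Timestamp')

if not errors_df.empty:
    # Post the identified datetime errors to the data feed dashboard
    # (errors_df was renamed to errors in 0.2.6).
    sw.error_handlers.post_errors(api_inputs=api_inputs, errors_df=errors_df,
                                  error_type='DateTime')
```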
0.1.4
Changed
Added additional required properties to the Abstract Base Classes (ABC): Task, IntegrationTask, AnalyticsTask, LogicModuleTask. These properties are:
- Author
- Version
Added additional parameter `query_language` to the `switch.integration.get_data()` function. Allowed values for this parameter are:
- `sql`
- `kql`
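A minimal sketch of selecting the query language, assuming `get_data()` also accepts the query text and `api_inputs`; those parameter names are assumptions.

```python
import switch_api as switch

# query_language accepts 'sql' or 'kql'; the query and api_inputs parameter
# names are assumptions for illustration.
df = switch.integration.get_data(api_inputs=api_inputs,
                                 query='Timeseries | take 10',
                                 query_language='kql')
```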
Removed the `name_as_filename` and `treat_as_timeseries` parameters from the following functions:
- `switch.integration.replace_data()`
- `switch.integration.append_data()`
- `switch.integration.upload_data()`
Hashes for switch_api-0.2.15-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | bc2f92757c1b52b191caf663b0a3d7bb60d68e381f33e86e3d395585a7b81d92
MD5 | 61b22f607d8cdf86c263203de5b147fb
BLAKE2b-256 | eccbd4f139590c0b591a89f635c31d369519b2f329297ec4800e6068c543b753