Cloud Kung Fu API Toolkit
Tools for rapid deployment of REST API serverless applications using Python 3.7
Core Concepts
You can refer to the example code for an illustration of the concepts explained below.
Use Cases
To properly implement an action using this toolkit, the first step is to define a Use Case. A Use Case is a specific action that needs to be executed by the application. Most importantly, a Use Case is completely agnostic to any platform-specific implementation details. It consists of only two elements:
- A name
- Input properties (if any)
For example, say we have a 'Pet' entity, and we determine we need to create a pet. For this example, let's say that a pet requires a name to be created. We now have our use case: 'Create a Pet' and it takes one input, a name (as a string).
As an application evolves, the inputs for a use case may change, but they should remain completely independent of platform and implementation details. If it is later decided that creating a Pet also requires, say, a 'type' input, the Use Case does not need to know how we receive or generate that 'type', just that it's now a required input (and what type of input it is, e.g. a string, number, etc.).
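The 'Create a Pet' Use Case above could be sketched as a plain data class. This is a minimal sketch with hypothetical names; the toolkit's actual Use Case class may differ:

```python
from dataclasses import dataclass


@dataclass
class CreatePet:
    """Hypothetical 'Create a Pet' Use Case: a name plus typed inputs,
    with no platform or implementation details."""

    name = "Create a Pet"  # the Use Case's name (a class-level constant)
    pet_name: str          # the original string input
    pet_type: str          # input added later as the application evolved


use_case = CreatePet(pet_name="Rex", pet_type="dog")
print(use_case.pet_name)  # → Rex
```

Note that nothing here says *where* the pet gets created; the Use Case only declares what inputs exist and what type each one is.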
Clients
A Client is an object that is responsible for communicating with a data store. This could be literally anything: a database, REST API, FTP Server, file, etc. A Client will be given an Instruction containing two pieces of information that it will use to determine how to communicate with its data store:
- An Action Type
- A Payload
If we use an HTTP service as an example data store, the action types would be the HTTP verbs (GET, PUT, POST, DELETE, etc.), and the payload would be an object containing the url, body, headers, and parameters (this library in fact includes a very basic HTTP Client that uses this exact implementation by wrapping the Requests library).
This way, connection specific details are limited to the Client without having an impact on other parts of the code. You can add retry logic, logging, pagination, etc. to the Client, without making any changes to the Action Type and Payload that get passed to it.
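As an illustrative sketch of this separation (not the toolkit's bundled HTTP Client), here is a toy Client that interprets an Instruction's Action Type and Payload against an in-memory store:

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class Instruction:
    """Hypothetical Instruction: an Action Type plus a Payload."""
    action_type: str                                  # e.g. an HTTP verb
    payload: Dict[str, Any] = field(default_factory=dict)


class InMemoryClient:
    """Toy data store standing in for an HTTP service or database.

    Retry logic, logging, pagination, etc. could all be added here
    without changing the Instructions passed in.
    """

    def __init__(self) -> None:
        self._records: Dict[int, Dict[str, Any]] = {}
        self._next_id = 1

    def execute(self, instruction: Instruction) -> Dict[str, Any]:
        if instruction.action_type == "POST":
            record = {**instruction.payload["body"], "id": self._next_id}
            self._records[self._next_id] = record
            self._next_id += 1
            return record
        if instruction.action_type == "GET":
            return self._records[instruction.payload["id"]]
        raise ValueError(f"Unsupported action type: {instruction.action_type}")
```

The caller never touches the storage mechanism directly; swapping this toy store for a real HTTP service only changes the Client's internals, not the Instructions it receives.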
This toolkit contains a DynamoDB Client, a basic HTTP Client, and an AWS S3 Client.
Repositories
The Repository is the core of your data access. It is responsible for three activities (in order):
- Receiving the inputs from a Use Case instance
- Creating an Instruction for the Client (by providing the Action Type and constructing a Payload based on the Use Case and Inputs)
- Defining a parser function for parsing the return from the client as needed (to change it from an implementation specific data structure to something generic like a Model)
Going back to our example of creating a 'Pet', the Repository would take the 'Create a Pet' Use Case, with its single string input property for the name. If the data store that creates the Pet is a REST API, then the Payload could be the URL endpoint for the API, a body containing the input name, and any required headers or params for this API. The Instruction would then include an Action Type of POST and that Payload.
The parser function might take our API specific response (which could include metadata, unused fields, etc.), and return an instance of an object that only contains the Pet's name and the ID generated by the API. This separates any data store specific details from our core application logic handlers.
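A sketch of such a Repository for the 'Create a Pet' example might look like the following (the endpoint and method names are assumptions for illustration, not the toolkit's actual API):

```python
# Assumed endpoint for the hypothetical Pet REST API
PETS_URL = "https://example.com/pets"


class PetRepository:
    """Hypothetical Repository sketch: builds an Instruction from
    Use Case inputs and defines a parser for the raw API response."""

    def create_pet_instruction(self, name: str) -> dict:
        # Action Type plus a Payload shaped for an HTTP-style Client
        return {
            "action_type": "POST",
            "payload": {
                "url": PETS_URL,
                "body": {"name": name},
                "headers": {"Content-Type": "application/json"},
            },
        }

    @staticmethod
    def parse_create_pet(response: dict) -> dict:
        # Drop metadata and unused fields; keep only the generic model
        return {"id": response["data"]["id"], "name": response["data"]["name"]}
```

The handler only ever sees the parsed `{"id": ..., "name": ...}` shape; the API's response envelope stays inside the Repository.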
Error Conversion
The Repository can also optionally perform error conversion: taking implementation specific exceptions, and converting them to application specific exceptions.
This means that instead of your application handlers needing to know how to handle an exception specific to the data store you're using, they can instead handle exceptions you define for your application.
For example, suppose that instead of a REST API, our 'Create a Pet' use case communicated with a database, and this database had only one generic error class. To know that a timeout occurred, you would need to access several nested properties (like error['error']['metadata']['error_type']). You don't want your application handler having to know database details like that, so the Repository is where you would provide the error conversion: parse the database error and raise something more generic, like TimeoutOnPetDataStore. This could then be caught and handled by your application handler. If you completely changed data stores, your application handler would still only need to know about this generic exception class, and your new Repository would simply perform the error conversion for the new data store's errors.
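A minimal sketch of this conversion, with a stand-in for the database's generic error class (all names are hypothetical):

```python
class TimeoutOnPetDataStore(Exception):
    """Application-specific exception the handler can catch."""


class GenericDbError(Exception):
    """Stand-in for a database driver's single, generic error class."""

    def __init__(self, details: dict) -> None:
        super().__init__("database error")
        self.details = details


def convert_pet_store_error(error: GenericDbError) -> None:
    # Dig through the driver-specific nesting...
    error_type = error.details["error"]["metadata"]["error_type"]
    if error_type == "timeout":
        # ...and re-raise as an application-level exception
        raise TimeoutOnPetDataStore() from error
    raise error
```

The handler then catches `TimeoutOnPetDataStore` and never needs to know the driver's error shape.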
Actors
The Actor is the object that actually executes actions for the application. The base Actor class is provided in this toolkit, and specific implementations will simply be different instances of this class. An Actor is initialized with only two parameters:
- A Client
- A Repository
To perform an action, the instance of the Actor is provided an instance of a Use Case (which includes the required inputs). It will then return the parsed return from the Repository parser for that Use Case.
This allows our application handler to simply initialize an Actor and tell it to run Use Cases with arguments that are not specific to our implementation. If the Repository parser returns generic data without any implementation-specific details, our application logic handlers can be completely platform-agnostic. If the API of our data store changes, we can refactor the Client and/or Repository as needed without altering anything in the handler. At most, we may need to change how the Actor is initialized (by providing a new Repository and/or Client), but how that instance of the Actor behaves won't need to change.
Once again, let's look at our 'Create a Pet' Use Case. We initialize a 'pet Actor' using the Client and Repository we created for this Pet API. We can then write a handler that takes a string as the pet's name and simply tells the 'pet Actor' to run an instance of the 'Create a Pet' Use Case with that string. The Actor gives back the object as parsed by the Repository. The handler doesn't need to know anything about the REST API, HTTP connections, or anything beyond the core application logic. It could then run a different Use Case with the returned (generic) object, pass it as input to another Actor for a different data store, simply return the data, etc.
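Tying the pieces together, here is a minimal end-to-end sketch of the pattern (all class names are hypothetical illustrations, not the toolkit's actual API):

```python
class FakePetClient:
    """Stands in for an HTTP Client; returns an API-style response."""

    def execute(self, instruction: dict) -> dict:
        assert instruction["action_type"] == "POST"
        name = instruction["payload"]["body"]["name"]
        return {"data": {"id": 1, "name": name}, "meta": {"request_ms": 12}}


class PetRepo:
    """Maps a Use Case to an Instruction and a parser function."""

    def resolve(self, use_case: str, **inputs):
        if use_case == "create_pet":
            instruction = {
                "action_type": "POST",
                "payload": {
                    "url": "https://example.com/pets",
                    "body": {"name": inputs["name"]},
                },
            }
            parser = lambda r: {"id": r["data"]["id"], "name": r["data"]["name"]}
            return instruction, parser
        raise ValueError(f"Unknown use case: {use_case}")


class Actor:
    """Runs Use Cases by wiring a Client to a Repository."""

    def __init__(self, client, repository):
        self.client = client
        self.repository = repository

    def run(self, use_case: str, **inputs):
        instruction, parser = self.repository.resolve(use_case, **inputs)
        return parser(self.client.execute(instruction))


pet_actor = Actor(FakePetClient(), PetRepo())
print(pet_actor.run("create_pet", name="Rex"))  # {'id': 1, 'name': 'Rex'}
```

Swapping the fake Client for a real HTTP or database Client changes only the Actor's initialization; the handler's call to `run` stays the same.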
Core Classes
Contains the core data access classes for the toolkit.
Tools
Install
$ pip install ckf-api-toolkit
Build
Install Build Tools
python3 -m pip install --user --upgrade setuptools wheel
Make sure pipenv is installed on your system, then run:
pipenv install --dev
Create Build
python3 setup.py sdist bdist_wheel
Contributions and Feedback
We welcome all forms of feedback: bug reports, pull requests, and even the occasional email! Please use issues for these, and send email to either john@johninthecloud.com or james@carignancreative.com to get in touch with the authors directly!