
Tools for development with the Rendered.ai Platform.

Project description

Rendered.ai's SDK: anatools

anatools is Rendered.ai’s SDK for connecting to the Platform API. With anatools you can generate and access synthetic datasets, and much more!

>>> import anatools
>>> ana = anatools.AnaClient()
'Enter your credentials for Ana.'
'email:' example@rendered.ai
'password:' ***************
>>> ana.get_channels()
['mychannel1', 'mychannel2']
>>> graphs = ana.get_staged_graphs()
>>> datasets = ana.get_datasets()

Install the anatools Package

(Optional) Create a new Conda Environment

  1. Install conda for your operating system: https://www.anaconda.com/products/individual.
  2. Create a new conda environment and activate it.
$ conda create -n renderedai python=3.7
$ conda activate renderedai

Install AnaTools to the Python Environment

  1. Install AnaTools from the Python Package Index.
$ pip install anatools

Dependencies

The anatools package requires Python 3.6 or higher and has dependencies on the following packages:

Package Description
pyrebase A Python wrapper for the Google Firebase API.
jwt A Python library for encoding and decoding JSON Web Tokens.
keyring A Python library for storing and accessing passwords securely.
docker A Python library for the Docker Engine API.
sphinx A Python documentation generator.
pytest A Python testing framework.
pyyaml A Python YAML parser and emitter.
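After installing, you can sanity-check that the dependencies above are importable. The sketch below is a generic check, not part of anatools; note that import names can differ from PyPI names (for example, the pyyaml package imports as yaml), so the names listed here are assumptions about the import names.

```python
import importlib.util

# Import names assumed to correspond to the packages listed above.
# (pyyaml installs as "yaml"; the others typically import under their own name.)
REQUIRED = ["pyrebase", "jwt", "keyring", "docker", "yaml"]

def missing_packages(names):
    """Return the subset of names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_packages(REQUIRED)
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All anatools dependencies found.")
```

Running this after `pip install anatools` should report nothing missing; any name it prints can be installed individually with pip.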

If you have any questions or comments, contact Rendered.ai at info@rendered.ai.


Quickstart Guide

What is the Rendered.ai Platform?

The Rendered.ai Platform is a synthetic dataset generation tool where graphs describe what and how synthetic datasets are generated.

Terms Definitions
workspace A workspace is a collection of data used for a particular use case; for example, workspaces can be used to organize data for different projects.
dataset A dataset is a collection of data; for many use cases these are images with text-based annotation files.
graph A graph is defined by nodes and links; it describes what a dataset contains and how it is generated.
node A node is an executable block of code; it takes inputs and runs an algorithm to generate outputs.
link A link transfers data from the output of one node to the inputs of other nodes.
channel A channel is a collection of nodes; it limits the scope of what can be generated in a dataset (like content from a TV channel).

How do you use the SDK?

The Rendered.ai Platform creates synthetic datasets by processing a graph, so we will need to create a client to connect to the Platform API, create a graph, and then create a dataset.

  1. Start the Python interpreter, create a client, and log in to Rendered.ai. In this example we instantiate a client with no workspace or environment variables, so it sets our default workspace. To access the tool, use your email and password for https://deckard.rendered.ai.
>>> import anatools
>>> ana = anatools.AnaClient()
'Enter your credentials for Ana.'
'email:' example@rendered.ai
'password:' ***************
  2. Create a graph file called graph.yml with the code below. For this example we define a simple graph with several children's toys dropped into a container. While YAML files are used in channel development and in this example, the Platform SDK and API only support JSON. Ensure the YAML file is valid so the SDK can convert it to JSON for you; otherwise, provide the graph in JSON format.
version: 2
nodes:

  Rubik's Cube:
    nodeClass: "Rubik's Cube"

  Mix Cube:
    nodeClass: Mix Cube

  Bubbles:
    nodeClass: Bubbles

  Yoyo:
    nodeClass: Yo-yo

  Skateboard:
    nodeClass: Skateboard

  MouldingClay:
     nodeClass: Playdough

  ColorToys:
    nodeClass: ColorVariation
    values: {Color: "<random>"}
    links:
      Generators:
        - {sourceNode: Bubbles, outputPort: Bubbles Bottle Generator}
        - {sourceNode: Yoyo, outputPort: Yoyo Generator}
        - {sourceNode: MouldingClay, outputPort: Play Dough Generator}
        - {sourceNode: Skateboard, outputPort: Skateboard Generator}

  ObjectPlacement:
    nodeClass: RandomPlacement
    values: {Number of Objects: 20}
    links:
      Object Generators:
      - {sourceNode: ColorToys, outputPort: Generator}
      - {sourceNode: "Rubik's Cube", outputPort: "Rubik's Cube Generator"}
      - {sourceNode: Mix Cube, outputPort: Mixed Cube Generator}

  Container:
    nodeClass: Container
    values: {Container Type: "Light Wooden Box"}

  Floor:
    nodeClass: Floor
    values: {Floor Type: "Granite"}

  DropObjects:
    nodeClass: DropObjectsNode
    links:
      Objects:
        - {sourceNode: ObjectPlacement, outputPort: Objects}
      Container Generator:
        - {sourceNode: Container, outputPort: Container Generator}
      Floor Generator:
        - {sourceNode: Floor, outputPort: Floor Generator}

  Render:
    nodeClass: RenderNode
    links:
      Objects of Interest:
      - {sourceNode: DropObjects, outputPort: Objects of Interest}
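Since the Platform API ultimately consumes the JSON form of this graph, it can be useful to sanity-check the structure before submitting it, for instance that every link's sourceNode names a node that actually exists. The sketch below is a generic check of my own, not an anatools feature, and uses a trimmed version of the graph above expressed as a Python dict:

```python
# A trimmed, JSON-style (Python dict) version of the example graph above.
graph = {
    "version": 2,
    "nodes": {
        "Container": {
            "nodeClass": "Container",
            "values": {"Container Type": "Light Wooden Box"},
        },
        "DropObjects": {
            "nodeClass": "DropObjectsNode",
            "links": {
                "Container Generator": [
                    {"sourceNode": "Container",
                     "outputPort": "Container Generator"}
                ]
            },
        },
    },
}

def undefined_sources(graph):
    """Return link sources that do not match any node name in the graph."""
    nodes = graph["nodes"]
    bad = []
    for node in nodes.values():
        for port_links in node.get("links", {}).values():
            for link in port_links:
                if link["sourceNode"] not in nodes:
                    bad.append(link["sourceNode"])
    return bad

print(undefined_sources(graph))  # → []
```

An empty list means every link points at a defined node; a typo in a node name would show up here before the Platform rejects the graph.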
  3. Create a graph using the client. To create a new graph, we load the graph defined above into a Python dictionary using the yaml package, then create a graph with the client. This graph is named testgraph and uses the example channel. We first find the channelId matching the example channel and pass it to the create_staged_graph call. The client returns a graphId so we can reference this graph later.
>>> import yaml
>>> with open('graph.yml') as graphfile:
>>>     graph = yaml.safe_load(graphfile)
>>> channels = ana.get_channels()
>>> channelId = list(filter(lambda channel: channel['name'] == 'example', channels))[0]['channelId']
>>> graphId = ana.create_staged_graph(name='testgraph', channelId=channelId, graph=graph)
>>> print(graphId)
'010f9362-daa8-4c10-a3e8-1e81e0f2e4f4'
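The filter/lambda lookup above raises an IndexError if no channel named 'example' exists. A slightly more defensive version can be sketched as below; the channel records here are hypothetical stand-ins shaped like the ones the lookup above iterates over (real records come from ana.get_channels()):

```python
# Hypothetical channel records, shaped like those used in the
# filter() call above; real values come from ana.get_channels().
channels = [
    {"name": "example",   "channelId": "aaaa-1111"},
    {"name": "mychannel", "channelId": "bbbb-2222"},
]

def find_channel_id(channels, name):
    """Return the channelId of the first channel with this name, else None."""
    return next((c["channelId"] for c in channels if c["name"] == name), None)

print(find_channel_id(channels, "example"))  # → aaaa-1111
print(find_channel_id(channels, "missing"))  # → None
```

Checking for None lets you fail with a clear message when the channel name is wrong, instead of an IndexError.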
  4. Create a dataset using the client. Using the graphId, we can create a new job to generate a dataset. The job takes some time to run.

The client will return a datasetId that can be used for reference later. You can use this datasetId to check the job status and, once the job is complete, download the dataset. You have now generated Synthetic Data!

>>> datasetId = ana.create_dataset(name='testdataset', graphId=graphId, interpretations='10', priority='1', seed='1', description='A simple dataset with cubes in a container.')
>>> datasetId
'ce66e81c-23a6-11eb-adc1-0242ac120002'
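Since the job takes time, a common pattern is to poll for completion before downloading. The exact status call isn't shown in this guide, so the sketch below takes a hypothetical check_status callable as a stand-in for whatever status query your client exposes:

```python
import time

def wait_for_dataset(check_status, dataset_id, interval=1.0, timeout=600.0):
    """Poll a status callable until the job reports 'complete'.

    check_status is a hypothetical stand-in for the client's status
    query; it should return a status string for the given dataset."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = check_status(dataset_id)
        if status == "complete":
            return status
        time.sleep(interval)
    raise TimeoutError(f"dataset {dataset_id} not complete after {timeout}s")

# Simulated status call standing in for the real API.
calls = iter(["running", "running", "complete"])
print(wait_for_dataset(lambda _id: next(calls), "ce66e81c", interval=0.0))
# → complete
```

Once the status reports complete, the dataset can be downloaded with the datasetId.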
