Python API for the Geodesic Datascience Platform
geodesic-python-api
The Python API for interacting with SeerAI's Geodesic system. Documentation can be found at docs.seerai.space/geodesic.
Contributing
To set up a development environment for geodesic, we recommend first creating a conda environment:

```shell
conda create -n geodesic-dev python=3.10
conda activate geodesic-dev
```
You will also need to install GDAL and arcgis for some applications. This is easiest to do through conda:

```shell
conda install gdal arcgis -c conda-forge -c esri -y
```
Once this finishes, you can proceed to installing geodesic. After cloning the repo, you can install with pip. There are several install options depending on which packages you would like installed. For development, we recommend installing with the `dev` extras identifier. This will install all packages needed to use all parts of the geodesic API, as well as some packages used for testing.

```shell
pip install .[dev]
```
After installation finishes, install the pre-commit git hooks that will run the Ruff linter before every git commit:

```shell
pre-commit install
```
If there are any linting or formatting issues, the pre-commit hook will prevent you from committing until those issues are fixed. See the Ruff documentation for details; many of the issues can be fixed automatically. It is also highly recommended that you install the Ruff extension in VSCode, as it will highlight code that doesn't meet the linter or formatter requirements. See Code Formatting for more info on running the formatter and linter locally.
> [!NOTE]
> This will not actually run any reformatting or linting fixes; it will simply tell you if there are any problems. See Code Formatting to have Ruff try to automatically fix the issues for you.

> [!NOTE]
> The pre-commit hooks will only run against files that you have changed in this git commit.
When adding or modifying any code, you should also update the documentation if necessary and make sure that it builds and renders correctly. You can find instructions for modifying and building the docs sources in the README in the docs/ folder. The CI/CD will also build the docs when a PR is created and provide a link to them. It is a good idea to check this after your PR finishes building to make sure your added documentation is displayed correctly.
Code Formatting
In geodesic, we use the Ruff code formatter and linter. If you are developing in VSCode, the Ruff extension should be installed and set as your default formatter in your Python settings. Make sure that when you are developing on the Python API you installed with the 'dev' option (`pip install .[dev]`); this will automatically install the Ruff formatter for you.
If you would like to run the linter manually to check for or fix errors, this can be done with:

```shell
ruff check
```
This will print all linting errors the tool finds. Many of these can be fixed automatically with:

```shell
ruff check --fix
```
Ruff also works as a code formatter and should be a drop-in replacement for Black. To reformat your files, run:

```shell
ruff format
```
This will run all reformatting and tell you which files it worked on. If, instead of running the reformatter, you would just like to check which files it will touch, you can run:

```shell
ruff format --check
```
Testing
To run unit tests and see coverage, in the root directory run:

```shell
coverage run -m pytest
coverage report --omit=test/*
```
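If you add new code, a matching unit test keeps the coverage report honest. Here is a minimal pytest-style sketch; both the helper and the test are hypothetical, not code from the geodesic package:

```python
# Hypothetical example: illustrative only, not part of geodesic.
def wrap_longitude(lon: float) -> float:
    """Normalize a longitude into the [-180, 180) range."""
    return ((lon + 180.0) % 360.0) - 180.0


def test_wrap_longitude():
    # pytest discovers any function named test_* and runs its assertions
    assert wrap_longitude(190.0) == -170.0
    assert wrap_longitude(-190.0) == 170.0
    assert wrap_longitude(0.0) == 0.0
```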
CLI
This library installs with a command-line tool, `geodesic`, that exposes a number of useful tools for working with the Geodesic platform:
Authentication
Example:

```shell
$ geodesic authenticate
To authorize access needed by Geodesic, open the following URL in a web browser and follow the instructions. If the web browser does not start automatically, please manually browse the URL below.

https://seerai.us.auth0.com/authorize?client_id=RlCTevNLPn0oVzmwLu3R0jCF7tfakpq9&scope=email+openid+profile+picture+admin+offline_access+entanglement%3Aread+entanglement%3Awrite+spacetime%3Aread+spacetime%3Awrite+tesseract%3Aread+tesseract%3Awrite+boson%3Aread+boson%3Awrite+krampus%3Aread&redirect_uri=https%3A%2F%2Fseerai.space%2FauthPage&audience=https%3A%2F%2Fgeodesic.seerai.space&response_type=code&code_challenge=ABC&code_challenge_method=S256

The authorization workflow will generate a code, which you should paste in the box below.

Enter verification code: XXXXXXXXX
```
Setting And Displaying The Active Cluster
Examples:

```shell
$ geodesic get clusters
[*] seerai
$ geodesic get active-config
{
  "host": "https://geodesic.seerai.space",
  "name": "seerai",
  "oauth2": {
    "audience": "https://geodesic.seerai.space",
    "authorization_uri": "https://seerai.us.auth0.com/authorize",
    "client_id": "RlCTevNLPn0oVzmwLu3R0jCF7tfakpq9",
    "client_secret": "EY5_-6InmoqYSy1ZEKb7vGiUrCTE1JapTtBncaP_w_0_IhuSilZw1YS6pqoJ0n75",
    "redirect_uri": "https://seerai.space/authPage",
    "token_uri": "https://seerai.us.auth0.com/oauth/token"
  }
}
$ geodesic set cluster seerai
```
Project Management
The `geodesic build project` command allows you to create and manage Entanglement projects using a yaml-format configuration file. For example, to create a new project, create a yaml file with the following contents:

```yaml
- name: seerai-project
  alias: SeerAI Example Project
  description: A project for demonstrating the build project command
```
This file can be named anything and saved anywhere, but we suggest a file called `project.yaml` in your project root directory. Now, to actually create the project:

```shell
$ geodesic build project
No project name provided. Using project "seerai-project"
Creating project: seerai-project
```
Now, if you check your `project.yaml` file, you will see that a project uid has been added, which will allow future runs of this tool to point to the same project.

```yaml
- uid: <PROJECT-UID>
  name: seerai-project
  alias: SeerAI Example Project
  description: A project for demonstrating the build project command
```
Note: injecting the uid into the project specification can sometimes result in unexpected changes to nonfunctional aspects of the yaml file, e.g., whitespace and comments. To avoid these changes, simply create the project through the API and add the uid yourself. If all uids are provided, your yaml will not be touched.
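The comment and whitespace churn happens because a plain yaml load/dump round trip rebuilds the file from parsed data, which carries no notion of comments or formatting. The uid-injection step itself is simple; here is a sketch with dicts standing in for the parsed yaml (illustrative only, not the tool's actual code; `created_uids` is a hypothetical mapping):

```python
def inject_uids(project_specs, created_uids):
    """Add server-assigned uids to any project spec that lacks one.

    project_specs: list of dicts parsed from the yaml file.
    created_uids: hypothetical mapping of project name -> new uid.
    Specs that already carry a uid are left untouched, which is why a
    fully uid-annotated yaml file is never rewritten.
    """
    for spec in project_specs:
        if "uid" not in spec:
            spec["uid"] = created_uids[spec["name"]]
    return project_specs


specs = [{"name": "seerai-project", "alias": "SeerAI Example Project"}]
inject_uids(specs, {"seerai-project": "<PROJECT-UID>"})
```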
If you have an existing project that you would like to use, just specify the uid when writing your initial configuration file and the build tool will connect to it automatically.
Once your project has been created, you can also use the `geodesic build project` command to make changes to that project. For example, to change the description of the project, simply modify the yaml file and rerun the command. The changes will be pushed to your Entanglement project. Note that the project uid and name cannot be modified after project creation.
Managing Multiple Projects
You might have noticed that the project specification in the yaml above is a list item. This is because you can use the `geodesic build project` command to manage multiple Entanglement projects within the same yaml file. For example, we frequently create both sandbox and production versions of a project, so we can stage changes without modifying a live client-facing graph. Here's what that looks like:
```yaml
- uid: <PROJECT-UID-1>
  name: seerai-project-1
  alias: SeerAI Example Project 1
  description: A project for demonstrating the build project command
- uid: <PROJECT-UID-2>
  name: seerai-project-2
  alias: SeerAI Example Project 2
  description: A project for demonstrating the build project command
```
Now, you can build/rebuild either of these projects using the `--project` option. For example, this will build the second project:

```shell
geodesic build project --project=seerai-project-2
```

As before, if a project specification is added without a uid, the project will be created and the uid will be added to your yaml.
Managing Permissions
`geodesic build project` also allows you to manage which users have permissions for a given project. To add a user to a project, simply use the `permissions` key in your yaml file. For example:
```yaml
- uid: <PROJECT-UID>
  name: seerai-project
  alias: SeerAI Example Project
  description: A project for demonstrating the build project command
  permissions:
    # Add Allison to the project with read/write permissions
    - {name: Allison, user: auth0|<USER-HASH>, read: true, write: true}
    # Add Daniel as a read-only user
    - {name: Daniel, user: auth0|<USER-HASH>, read: true, write: false}
    # Remove Alex's permissions (once you have run once with this line, it can be removed)
    - {name: Alex, user: auth0|<USER-HASH>, read: false, write: false}
```
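The read/write flags behave like a small state machine: true/true grants read-write, true/false grants read-only, and false/false removes the user. A sketch of that logic follows (illustrative only; the real tool applies these changes through the Geodesic API, and `apply_permissions` is a hypothetical name):

```python
def apply_permissions(current, entries):
    """Apply yaml permission entries to a {user: perms} mapping.

    current: hypothetical stand-in for a project's permission state.
    An entry with read and write both false removes the user, matching
    the 'Remove Alex' example above.
    """
    for entry in entries:
        user = entry["user"]
        if not entry["read"] and not entry["write"]:
            current.pop(user, None)  # revoke all access
        else:
            current[user] = {"read": entry["read"], "write": entry["write"]}
    return current


perms = apply_permissions(
    {"auth0|alex": {"read": True, "write": True}},
    [
        {"name": "Allison", "user": "auth0|allison", "read": True, "write": True},
        {"name": "Daniel", "user": "auth0|daniel", "read": True, "write": False},
        {"name": "Alex", "user": "auth0|alex", "read": False, "write": False},
    ],
)
```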
Graph Management
The `geodesic build graph` command allows you to build an Entanglement graph based on yaml specification files. This is great for keeping the contents of an Entanglement graph under git control, or just creating a large number of nodes with relatively little effort. Once your yaml configurations are set up, building a graph is as easy as:

```shell
$ geodesic build graph --file=graph_nodes/ --project=<project-name-or-uid>
```
The `--project` argument is optional; the active project can also be set via the `PROJECT_ID` environment variable, but a project must be provided in one of these forms. The `--file` argument points to a yaml file, or a directory containing yaml files specifying graph nodes.
YAML Input Format
The input file format is fairly straightforward. Here is an example of a single entity node:

```yaml
---
- name: test-node-a
  alias: Test Node A
  tag: node-a
  description: Test Node A Description
  domain: test
  category: test
  type: test
  object_class: entity
  geometry: POINT (<lon> <lat>)
```
The body of an input yaml file is a single list of node specifications (note the dash at the beginning of each node spec, indicating that it is a list item), most of which are passed directly to the `geodesic.Object` constructor. This means that translating between node definitions made with the Python API and with this script is very simple. For example, the node specified above is equivalent to the following Python code:
```python
from shapely.geometry import Point

import geodesic

node = geodesic.Object(
    name='test-node-a',
    alias='Test Node A',
    description='Test Node A Description',
    domain='test',
    category='test',
    type='test',
    object_class='entity',
    geometry=Point(lon, lat),
).save()
```
As you can see, most of the keys in the node specs are equivalent to args passed directly to the constructor, but there are a few important exceptions, which add additional functionality to this command:
- `tag`: (optional) each node can optionally be given a 'tag', which is a short name (alphanumeric, plus hyphens) that can be used inside your input yaml files to more conveniently refer to nodes. The utility of this will become more clear in a moment. A few additional considerations:
  - Tags are expected to be unique for each set of input files; repeating tag names will throw a warning and might result in the wrong connections being made.
  - Tags defined in one input file can be used in another input file, provided the script is run on all the files at the same time. A list of tags from all input files is compiled before the connection creation step. This allows input files to be more modular without sacrificing the convenience of tag referencing, but it also means that you must be mindful of potential name collisions with other input files in the same directory.
  - Tags are not saved on the resulting Entanglement nodes in any way; they exist purely for use by this tool.
- `geometry`: (optional) accepts geometry in WKT format. Improper WKT input will throw a warning, and that geometry will be left off of the created node. Keep in mind that geometry is only accepted for objects of object class `entity`; for other object classes, this key is ignored.
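To illustrate the geometry handling described above, here is a stdlib-only sketch of a WKT point parser that warns on and drops invalid input. It is hypothetical: the real tool presumably handles all WKT geometry types (e.g. via shapely), not just points:

```python
import re
import warnings


def parse_point_wkt(text):
    """Parse 'POINT (lon lat)' WKT into a (lon, lat) tuple.

    Returns None (after a warning) on malformed input, mirroring the
    behavior described above where bad geometry is left off the node.
    Illustrative sketch only; not the build tool's actual code.
    """
    match = re.fullmatch(r"POINT \(([-+\d.eE]+) ([-+\d.eE]+)\)", text.strip())
    if match is None:
        warnings.warn(f"Invalid WKT, geometry will be omitted: {text!r}")
        return None
    return float(match.group(1)), float(match.group(2))
```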
Making Connections
Of course, a single graph node doesn't do us much good if it's not connected to anything. Thankfully, creating connections in this format is simple. Let's give our node a couple of connections:
```yaml
- name: test-node-a
  alias: Test Node A
  tag: node-a
  description: Test Node A Description
  domain: test
  category: test
  type: test
  object_class: entity
  geometry: POINT (0 0)
  connections:
    - subject: self
      predicate: related-to
      object: node-b
    - subject: concept:test:test:test:test-node-c
      predicate: related-to
      object: self
    - subject: self
      predicate: related-to
      object: 0x3b88c7
```
The `connections` key can carry a list of connections, each of which needs a `subject`, `predicate`, and `object`. `subject` and `object` can be referenced in a few different ways:
- tag referencing: allows you to use the tags defined in the `tag` key of a node in one of your input files. Additionally, a shortcut tag `self` is available to more easily refer to the node currently being specified. In most cases, either your `subject` or your `object` will be set to `self`, but this is not required; any connection can be made from inside any node's specification. It is, however, recommended that you organize your connections in some way that allows you to easily trace connections back to their location in the input files.
- full name referencing: allows you to use a node's full name (`<object_class>:<domain>:<category>:<type>:<name>`, quickly accessible via the `Object.full_name` property of a node) to reference any node in the active project.
- uid referencing: allows you to use the uid of any node within the active project. This method is not preferred, because the result is less readable than the other two options. If you are using uid referencing, it is recommended that you include a comment in the yaml to clarify what the uid is referencing.
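Putting the three reference styles together, the resolution order might look something like this. This is a hypothetical sketch, not the tool's actual implementation; `resolve_reference` and its data shapes are invented for illustration:

```python
def resolve_reference(ref, current_node, tags):
    """Resolve a connection subject/object reference.

    ref may be the shortcut 'self', a tag from the input files, a full
    name ('<object_class>:<domain>:<category>:<type>:<name>'), or a uid
    like '0x3b88c7'. tags is the tag -> node map compiled from all
    input files before the connection creation step.
    """
    if ref == "self":
        return current_node
    if ref in tags:              # tag referencing
        return tags[ref]
    if ref.count(":") == 4:      # full name referencing
        return {"full_name": ref}
    if ref.startswith("0x"):     # uid referencing
        return {"uid": ref}
    raise ValueError(f"Unresolvable reference: {ref!r}")
```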
`from_<format>` Datasets
The script also allows for creating dataset nodes through any of the `geodesic.Dataset.from_<format>()` methods available through the Python API. This looks very similar to creating other types of nodes:
```yaml
- name: test-node-c
  tag: node-c
  domain: test
  category: test
  type: test
  object_class: dataset
  method: from_arcgis_layer
  url: <arcgis_layer_url>
  connections:
    - subject: self
      predicate: related-to
      object: node-a
```
As with other nodes, most of these keys are passed directly to the chosen constructor. But, in this case, the constructor is whatever `from_` method was specified in the `method` key. This means that the other required keys will differ depending on your chosen method. See the docs for more detail on how each of these methods works.
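One plausible reading of this dispatch is that the `method` key names a classmethod on `geodesic.Dataset`, with the remaining keys becoming its arguments. Here is a stdlib sketch using a stand-in class; this is speculative, and the real constructor signatures may differ:

```python
class Dataset:
    """Stand-in for geodesic.Dataset, for illustration only."""

    @classmethod
    def from_arcgis_layer(cls, name, url, **kwargs):
        ds = cls()
        ds.name, ds.url = name, url
        return ds


def build_dataset_node(spec):
    """Dispatch a yaml node spec to the from_<format> constructor
    named in its 'method' key (illustrative sketch)."""
    spec = dict(spec)
    method = spec.pop("method")
    for key in ("tag", "object_class", "connections"):
        spec.pop(key, None)  # keys consumed by the build tool itself
    constructor = getattr(Dataset, method)  # e.g. Dataset.from_arcgis_layer
    return constructor(**spec)
```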
Adding Middleware
Middleware can be added to a dataset using the `middleware` key. Each list item under this key is parsed into a middleware object. Simply specify the path of the middleware constructor method that you want to use, then provide the necessary arguments. Here's an example:
```yaml
- name: test-node-f
  alias: Test Node F
  tag: node-f
  description: Test Node F
  domain: test
  category: test
  type: test
  object_class: dataset
  method: view
  dataset_tag: node-e
  bbox: [ -109.720459,36.438961,-101.535645,41.269550 ]
  middleware:
    - method: SearchTransform.buffer
      distance: 0.01
      segments: 32
    - method: PixelsTransform.rasterize
      value: 1
  connections:
    - subject: self
      predicate: related-to
      object: node-d
    - subject: self
      predicate: related-to
      object: node-e
```
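Each middleware list item pairs a constructor path with its keyword arguments. Parsing one entry might look like this (a hypothetical sketch; `parse_middleware` is not the library's actual parser):

```python
def parse_middleware(entry):
    """Split a middleware entry's 'method' path into its class and
    constructor method; remaining keys become keyword arguments.
    Illustrative only."""
    entry = dict(entry)
    class_name, method_name = entry.pop("method").split(".")
    return {"class": class_name, "method": method_name, "kwargs": entry}


buffer_mw = parse_middleware(
    {"method": "SearchTransform.buffer", "distance": 0.01, "segments": 32}
)
```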
View Datasets
Other options for the `method` key are `'view'`, `'join'`, and `'union'`. Here is an example of a view dataset definition:
```yaml
- name: test-node-f
  alias: Test Node F
  tag: node-f
  description: Test Node F
  domain: test
  category: test
  type: test
  object_class: dataset
  method: view
  dataset: dataset:foundation:boundaries:boundaries:usa-counties
  dataset_project: global
  bbox: [-109.720459,36.438961,-101.535645,41.269550]
  connections:
    - subject: self
      predicate: related-to
      object: node-d
```
The target dataset of the view, join, or union can be specified using the same methods that can be used in connection definitions (full name, tag, or uid). You can select a target dataset from another project using the `dataset_project` key. If this key is not included, the dataset is assumed to be in the active project.
CQL Filtering
You can also use CQL filtering while creating views. Here is an example of what that looks like:
```yaml
- name: test-node-f
  alias: Test Node F
  tag: node-f
  description: Test Node F
  domain: test
  category: test
  type: test
  object_class: dataset
  # test bbox and CQL view dataset creation
  method: view
  dataset_tag: node-e
  bbox: [ -109.720459,36.438961,-101.535645,41.269550 ]
  filter:
    op: and
    args:
      - op: ">"
        args:
          - property: POPULATION
          - 10000
      - op: "="
        args:
          - property: STNAME
          - Colorado
  connections:
    - subject: self
      predicate: related-to
      object: node-d
    - subject: self
      predicate: related-to
      object: node-e
```
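The yaml filter above is CQL2-style JSON expressed in yaml. Rendered directly as a Python dict, the same filter (POPULATION > 10000 AND STNAME = 'Colorado') looks like this:

```python
# Direct Python translation of the yaml filter above.
cql_filter = {
    "op": "and",
    "args": [
        {"op": ">", "args": [{"property": "POPULATION"}, 10000]},
        {"op": "=", "args": [{"property": "STNAME"}, "Colorado"]},
    ],
}
```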
Join Datasets
To create a join dataset, you will need to specify `method: join`, as well as both the left and right datasets, which can be accessed through the full name, uid, or tag, as described above, as well as `field` (the left field) and `right_field`. Here is an example of a join dataset definition:
```yaml
- name: test-node-h
  alias: Test Node H
  tag: node-h
  description: Test Node H
  domain: test
  category: test
  type: test
  object_class: dataset
  method: join
  dataset_tag: node-g
  field: COUNTY_FIPS
  right_dataset: dataset:foundation:boundaries:boundaries:usa-counties
  right_dataset_project: global
  right_field: COUNTYFP
  connections:
    - subject: self
      predicate: related-to
      object: node-g
    - subject: self
      predicate: related-to
      object: node-e
```
In addition to `dataset_project`, you can also specify a right dataset from a different project using the `right_dataset_project` key.
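To make the `field`/`right_field` semantics concrete, here is a toy, stdlib-only illustration of the matching rule: rows pair up where the left dataset's `field` value equals the right dataset's `right_field` value. This shows only the matching concept; the actual join is performed by Geodesic, and the data below is made up:

```python
def join_rows(left, right, field, right_field):
    """Toy inner join: pair rows where left[field] == right[right_field]."""
    index = {row[right_field]: row for row in right}
    return [
        {**lrow, **index[lrow[field]]}
        for lrow in left
        if lrow[field] in index
    ]


# Made-up sample rows for illustration.
left = [{"COUNTY_FIPS": "08031", "POPULATION": 715522}]
right = [{"COUNTYFP": "08031", "NAME": "Denver"}]
joined = join_rows(left, right, "COUNTY_FIPS", "COUNTYFP")
```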
Spatial Joins
Spatial joins are defined the same way, but with `spatial_join: true`. For example:
```yaml
- name: test-node-h
  alias: Test Node H
  tag: node-h
  description: Test Node H
  domain: test
  category: test
  type: test
  object_class: dataset
  method: join
  dataset_tag: node-g
  right_dataset_tag: node-e
  spatial_join: true
  connections:
    - subject: self
      predicate: related-to
      object: node-g
    - subject: self
      predicate: related-to
      object: node-e
```
Union Datasets
Unions work essentially the same way as joins, but the other datasets are specified in the `others` key. Here's an example:
```yaml
- name: test-node-i
  alias: Test Node I
  tag: node-i
  description: Test Node I
  domain: test
  category: test
  type: test
  object_class: dataset
  method: union
  dataset_tag: node-g
  others:
    - dataset_tag: node-e
    - dataset: dataset:foundation:boundaries:boundaries:usa-counties
      project: global
  connections:
    - subject: self
      predicate: related-to
      object: node-g
    - subject: self
      predicate: related-to
      object: node-e
```
Note that you can specify a dataset from a different project by setting the `project` key on the dataset item. For this to work, you will need the dataset's full name or uid.
Additional Options
There are a couple of additional options that can be added to modify the behavior of the `geodesic build graph` command:

- `--dry_run`: when running with this option, the tool will not actually write changes to any Entanglement projects. This is useful for validating your node configuration before committing to any actual changes in your project. Note that some features might misbehave with this option since, for example, connections cannot be made between nodes that have not actually been created yet.
- `--reindex`: this option triggers a reindex operation on any dataset nodes created by the tool. This is necessary for most changes to datasets to actually take effect. If you notice that changes are not being reflected in the data you are receiving from a dataset, you may want to try this option.
- `--rebuild`: when running with this option, the tool will delete all nodes in the project before it begins the graph build. This is recommended if you want your full graph to be reflected in your yaml configurations, but should be avoided if you have modified the graph through any other method, such as in a notebook. This option should be used carefully.
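All three flags compose freely with `--file` and `--project`. A sketch of how such an option surface might be wired up with argparse follows; this is illustrative only, and the real geodesic CLI's parser may be organized differently:

```python
import argparse

# Hypothetical re-creation of the option surface described above.
parser = argparse.ArgumentParser(prog="geodesic build graph")
parser.add_argument("--file", required=True,
                    help="yaml file or directory of yaml files")
parser.add_argument("--project", help="project name or uid")
parser.add_argument("--dry_run", action="store_true",
                    help="validate without writing changes")
parser.add_argument("--reindex", action="store_true",
                    help="reindex created dataset nodes")
parser.add_argument("--rebuild", action="store_true",
                    help="delete all project nodes before building")

args = parser.parse_args(["--file", "graph_nodes/", "--dry_run"])
```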
Managing Graphs Using `geodesic build project`
Finally, it is also possible to integrate the functionality of `geodesic build graph` into the previously described `geodesic build project` workflow. Doing so is as easy as adding one more key to your `project.yaml` file:
```yaml
- uid: <PROJECT-UID>
  name: seerai-project
  alias: SeerAI Example Project
  description: A project for demonstrating the build project command
  nodes_path: graph_nodes
```
Now, when you run `geodesic build project --project=seerai-project`, the project build will automatically look to the `graph_nodes` directory and build the resulting graph in your project. All of the additional options listed above are also available when running a graph build through `geodesic build project`.