
๐Ÿณ Ocean Brizo.

Project description



Helping publishers provide extended data services (e.g. storage and compute).

"๐Ÿ„โ€โ™€๏ธ๐ŸŒŠ Brizo is an ancient Greek goddess who was known as the protector of mariners, sailors, and fishermen. She was worshipped primarily by the women of Delos, who set out food offerings in small boats. Brizo was also known as a prophet specializing in the interpretation of dreams."


๐Ÿฒ๐Ÿฆ‘ THERE BE DRAGONS AND SQUIDS. This is in alpha state and you can expect running into problems. If you run into them, please open up a new issue. ๐Ÿฆ‘๐Ÿฒ



In the Ocean ecosystem, Brizo is the technical component executed by Publishers, allowing them to provide extended data services (e.g. storage and compute). Brizo, as part of the Publisher ecosystem, includes the credentials to interact with the infrastructure (initially cloud, but it could be on-premise).

Running Locally, for Dev and Test

If you want to contribute to the development of Brizo, then you can follow the steps below. (Running Brizo in production requires a different setup.)

First, clone this repository:

git clone https://github.com/oceanprotocol/brizo.git
cd brizo/

Then run some things that Brizo expects to be running:

git clone https://github.com/oceanprotocol/barge.git
cd barge
bash start_ocean.sh --no-brizo --no-pleuston --local-spree-node

Barge is the repository where all the Ocean Docker Compose files are located. Running its start script is the easy way to get the Ocean projects up and running; here we run it without Brizo or Pleuston instances.

To learn more about Barge, visit the Barge repository.

Note that it runs an Aquarius instance and a MongoDB instance, but Aquarius can also work with BigchainDB or Elasticsearch.

The simplest way to start is:

pip install -r requirements_dev.txt
export FLASK_APP=brizo/run.py
export CONFIG_FILE=config.ini
flask run --port=8030

That will use HTTP (i.e. not SSL/TLS).

The proper way to run the Flask application is with an application server such as Gunicorn, which allows you to run using SSL/TLS. You can generate some certificates for testing by doing:

openssl req -x509 -newkey rsa:4096 -nodes -out cert.pem -keyout key.pem -days 365

and when it asks for the Common Name (CN), answer localhost

Then edit the config file config.ini so that:

brizo.url = https://localhost:8030

Then execute this command:

gunicorn --certfile cert.pem --keyfile key.pem -b :8030 -w 1 brizo.run:app

API documentation

Once you have Brizo running, you can access the API documentation at:

There is also some Brizo API documentation in the official Ocean docs.


To get configuration settings, Brizo first checks to see if there is a non-empty environment variable named CONFIG_FILE. If there is, it will look for a config file at that path. Otherwise it will look for a config file named config.ini. Note that some settings in the config file can be overridden by setting certain environment variables; there are more details below.
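That lookup order can be sketched as follows. Note that `load_config` is a hypothetical helper for illustration, not part of Brizo's actual API:

```python
import configparser
import os

def load_config():
    """Resolve the config file path as described above:
    prefer a non-empty CONFIG_FILE environment variable,
    otherwise fall back to a file named config.ini."""
    path = os.environ.get("CONFIG_FILE") or "config.ini"
    config = configparser.ConfigParser()
    config.read(path)
    return config
```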

See the example config.ini file in this repo. You will see that there are three sections: [keeper-contracts], [resources] and [osmosis].

The [keeper-contracts] and [resources] Sections

The [keeper-contracts] and [resources] sections are used to configure squid-py. Details about how to configure squid-py are in the squid-py repo.

You can override some squid-py-related settings in the config file by setting certain environment variables, such as KEEPER_URL. For details, see the squid-py repo.

There is a parameter in the [resources] section called validate.creator, which is false by default; switch it to true if you wish to run a private marketplace.
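For example, a private marketplace would set (sketch; other [resources] settings omitted):

[resources]
validate.creator = true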

The [osmosis] Section

The [osmosis] section of the config file is where a publisher puts their own credentials for various third-party services, such as Azure Storage. At the time of writing, Brizo could support files with three kinds of URLs:

  • files in Azure Storage: files with "core.windows.net" in their URLs
  • files in Amazon S3 storage: files with "s3://" in their URLs
  • files in on-premise storage: all other files with resolvable URLs
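The dispatch between these backends can be illustrated with a small sketch. This is a hypothetical helper, not Brizo's actual code, and it assumes Azure Storage URLs contain "core.windows.net":

```python
from urllib.parse import urlparse

def classify_file_url(url):
    """Classify a file URL into the kind of storage a publisher
    would serve it from: 'azure', 's3', or 'on-premise'."""
    if "core.windows.net" in url:
        return "azure"
    if url.startswith("s3://"):
        return "s3"
    # Any other resolvable URL is treated as on-premise storage.
    if urlparse(url).scheme in ("http", "https", "ftp"):
        return "on-premise"
    raise ValueError("unresolvable URL: " + url)
```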

Initial work has also been done to support Azure Compute but it's not officially supported yet.

A publisher can choose to support none, one, two or all of the above. It depends on which cloud providers they use.

If a publisher wants to store some files in Azure Storage (and make them available from there), then they must get and set the following config settings in the [osmosis] section of the config file. There is an Ocean tutorial about how to get all those credentials from Azure.

[osmosis]
azure.account.name = <Azure Storage Account Name (for storing files)>
azure.account.key = <Azure Storage Account key>
azure.resource_group = <Azure resource group>
azure.location = <Azure Region>
azure.client.id = <Azure Application ID>
azure.client.secret = <Azure Application Secret>
azure.tenant.id = <Azure Tenant ID>
azure.subscription.id = <Azure Subscription>
; azure.share.input and azure.share.output are only used
; for Azure Compute data assets (not for Azure Storage data assets).
; If you're not supporting Azure Compute, just leave their values
; as compute and output, respectively.
azure.share.input = compute
azure.share.output = output

You can override any of those config file settings by setting one or more of the following environment variables. You will want to do that if you're running Brizo in a container.

AZURE_ACCOUNT_NAME
AZURE_ACCOUNT_KEY
AZURE_RESOURCE_GROUP
AZURE_LOCATION
AZURE_CLIENT_ID
AZURE_CLIENT_SECRET
AZURE_TENANT_ID
AZURE_SUBSCRIPTION_ID
AZURE_SHARE_INPUT   # Just always set AZURE_SHARE_INPUT='compute' for now
AZURE_SHARE_OUTPUT  # Just always set AZURE_SHARE_OUTPUT='output' for now
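The override behavior can be sketched like this. The helper and the config-key-to-variable mapping are illustrative assumptions (based on the pattern azure.account.name → AZURE_ACCOUNT_NAME), not Brizo's actual code:

```python
import os

# Assumed mapping from [osmosis] config keys to environment variables.
ENV_OVERRIDES = {
    "azure.account.name": "AZURE_ACCOUNT_NAME",
    "azure.account.key": "AZURE_ACCOUNT_KEY",
}

def resolve_setting(config_values, key):
    """Return the environment variable override if it is set and
    non-empty, otherwise the value from the config file."""
    env_name = ENV_OVERRIDES.get(key)
    if env_name and os.environ.get(env_name):
        return os.environ[env_name]
    return config_values.get(key)
```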

If a publisher wants to store some files in Amazon S3 storage (and make them available from there), then there are no AWS-related config settings to set in the config file. AWS credentials actually get stored elsewhere. See the Ocean tutorial about how to set up Amazon S3 storage.

If a publisher wants to store some files on-premise (and make them available from there), then there are no special config settings to set in the config file. The only requirement is that the file URLs must be resolvable by Brizo. See the Ocean tutorial about how to set up on-premise storage.


Brizo relies on the following Ocean libraries:

Code Style

Information about our Python code style is documented in the python-developer-guide and the python-style-guide.


Automatic tests are set up via Travis, executing tox. Our tests use the pytest framework.


To debug Brizo using PyCharm, follow these instructions:

  1. Clone barge repository.

  2. Run barge omitting brizo (i.e. bash start_ocean.sh --no-brizo --no-pleuston --local-nile-node).

  3. In PyCharm, go to Settings > Project Settings > Python Debugger, and select the option Gevent Compatible

  4. Configure a new debugger configuration: Run > Edit Configurations..., there click on Add New Configuration

  5. Configure it as shown in the PyCharm debugger configuration screenshot.

  6. Set the following environment variables:


    The OBJC_DISABLE_INITIALIZE_FORK_SAFETY variable is needed if you run on recent versions of macOS.

  7. Now you can set your breakpoints and debug Brizo or squid-py.

New Version

The bumpversion.sh script helps to bump the project version. Execute it with {major|minor|patch} as the first argument to bump the version accordingly.


Copyright 2018 Ocean Protocol Foundation Ltd.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

