API for MaveDB, the database of Multiplexed Assays of Variant Effect.
mavedb-api
API for MaveDB. MaveDB is a biological database for Multiplexed Assays of Variant Effect (MAVE) datasets. The API powers the MaveDB website at mavedb.org and can also be called directly (see instructions below).
For more information about MaveDB, or to cite it, please refer to the MaveDB paper in Genome Biology.
Using mavedb-api
Using the library as an API client or validator for MaveDB data sets
Simply install the package with pip:

```
pip install mavedb
```

or add `mavedb` to your Python project's dependencies.
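Once installed, the package can be used as a client for the public MaveDB API. The sketch below builds a request URL for a score set record and fetches it with the standard library; the base URL and the endpoint path are assumptions for illustration, not taken from the mavedb package itself, so consult the API documentation at mavedb.org for the real routes.

```python
# Hypothetical sketch of querying the MaveDB REST API with the standard
# library; the base URL and the /score-sets endpoint path are assumptions.
import json
import urllib.request

API_BASE = "https://api.mavedb.org/api/v1"  # assumed base URL


def build_score_set_url(urn: str) -> str:
    """Build the request URL for a score set identified by its URN."""
    return f"{API_BASE}/score-sets/{urn}"


def fetch_score_set(urn: str) -> dict:
    """Fetch a score set record and decode the JSON response."""
    with urllib.request.urlopen(build_score_set_url(urn)) as resp:
        return json.load(resp)


print(build_score_set_url("urn:mavedb:00000001-a-1"))
```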
Building and running mavedb-api
Prerequisites
- Python 3.9 or later
- pip
- `build`, for building distributions. This can be installed with `pip install build`.
- `hatch`, for building distributions. This can be installed with `pip install hatch`.
Building distribution packages
To build the source distribution and wheel, run

```
python -m build
```

The build utility reads `pyproject.toml` and invokes Hatchling to build the distributions. The resulting distribution can be uploaded to PyPI using twine.

For use as a server, this distribution includes an optional set of dependencies, which are installed only when the package is installed with `pip install "mavedb[server]"`.
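The optional `server` extra described above is typically declared as an optional-dependency group in `pyproject.toml`. A minimal sketch, assuming plausible server packages (the actual dependency list is an assumption, not copied from mavedb):

```toml
# Sketch of declaring the optional "server" extra; the packages listed
# here are illustrative assumptions.
[project]
name = "mavedb"

[project.optional-dependencies]
server = [
    "fastapi",
    "uvicorn",
]
```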
Running a local version of the API server
First, build the application's Docker image:

```
docker build --tag mavedb-api/mavedb-api .
```

Then start the application and its database:

```
docker-compose -f docker-compose-local.yml up -d
```

Omit `-d` (detached mode) if you want to run the application in your terminal session, for instance to see startup errors without having to inspect the Docker container's log.

To stop the application when it is running as a daemon, run

```
docker-compose -f docker-compose-local.yml down
```
`docker-compose-local.yml` configures four containers: one for the API server, one for the PostgreSQL database, one for the worker node, and one for the Redis cache, which acts as the job queue for the worker node. Redis stores its data in a Docker volume named `mavedb-redis`, and the database stores its data in a Docker volume named `mavedb-data`. Both volumes persist after running `docker-compose down`.
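The volume layout described above might look like the following in a compose file. Only the volume names `mavedb-data` and `mavedb-redis` come from the text; the service names, images, and mount paths are assumptions:

```yaml
# Sketch of the relevant parts of docker-compose-local.yml; everything
# except the named volumes is an illustrative assumption.
services:
  db:
    image: postgres
    volumes:
      - mavedb-data:/var/lib/postgresql/data
  redis:
    image: redis
    volumes:
      - mavedb-redis:/data

volumes:
  mavedb-data:
  mavedb-redis:
```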
Notes
- The `mavedb-api` container requires the following environment variables, which are configured in `docker-compose-local.yml`:
  - `DB_HOST`
  - `DB_PORT`
  - `DB_DATABASE_NAME`
  - `DB_USERNAME`
  - `DB_PASSWORD`
  - `NCBI_API_KEY`
  - `REDIS_IP`
  - `REDIS_PORT`

  The database username and password should be changed for production deployments. `NCBI_API_KEY` will be removed in the future. TODO: Move these to an .env file.
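As the TODO above suggests, these variables could eventually live in an .env file. A sketch with placeholder values (every value here is illustrative, not a real credential or the project's actual defaults):

```
DB_HOST=localhost
DB_PORT=5432
DB_DATABASE_NAME=mavedb
DB_USERNAME=postgres
DB_PASSWORD=changeme
NCBI_API_KEY=your-ncbi-api-key
REDIS_IP=localhost
REDIS_PORT=6379
```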
Running the API server in Docker for development
A similar procedure can be followed to run the API server in development mode on your local machine. There are a few differences:

- Your local source code directory is mounted into the Docker container, instead of being copied into it.
- The Uvicorn web server is started with the `--reload` option, so code changes reload the application automatically and you do not have to restart the container.
- The API uses HTTP, whereas in production it uses encrypted communication via HTTPS.
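The first two differences might look like this in a compose file. The paths, ports, and command below are assumptions for illustration, not copied from `docker-compose-dev.yml`:

```yaml
# Sketch of a dev-mode service definition: bind-mount the source tree and
# run Uvicorn with --reload. All details here are assumptions.
services:
  api:
    build: .
    volumes:
      - ./src:/code/src   # mount local source instead of baking it into the image
    command: uvicorn mavedb.server_main:app --host 0.0.0.0 --port 8000 --reload
```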
To start the Docker container for development, make sure that the mavedb-api directory is allowed to be shared with Docker. In Docker Desktop, this can be configured under Settings > Resources > File sharing.
To start the application, run

```
docker-compose -f docker-compose-dev.yml up --build -d
```
Docker integration can also be configured in IDEs like PyCharm.
Running the API server directly for development
Sometimes you may want to run the API server outside of Docker. There are two ways to do this:
Before using either of these methods, configure the environment variables described above.
- Run the `server_main.py` script. This script creates the FastAPI application, starts an instance of Uvicorn, and passes the application to it.

  ```
  export PYTHONPATH=${PYTHONPATH}:"`pwd`/src"
  python src/mavedb/server_main.py
  ```

- Run Uvicorn and pass it the application. This method supports auto-reloading on code changes.

  ```
  export PYTHONPATH=${PYTHONPATH}:"`pwd`/src"
  uvicorn mavedb.server_main:app --reload
  ```
If you use PyCharm, the first method can be used in a Python run configuration, while the second works with PyCharm's FastAPI run configuration.
Running the API server for production
We maintain deployment configuration options and steps within a private repository used for deploying this source code to the production MaveDB environment. The main difference between the production setup and these local setups is that the worker and API services are split into distinct environments, allowing each to scale up or down independently based on demand.