
quick_pp

Lightweight toolkit for quick-look petrophysical estimation and exploration.

This repository contains the Python backend and SvelteKit frontend used by the quick_pp application, plus utilities for running ML training and simple petrophysical workflows.

Goals of this README

  • Give developers and users the minimal, practical steps to get the app running locally (backend, frontend and optional Docker services).

Project components

  • Backend: FastAPI application, data services, model endpoints and plotting APIs (in quick_pp/app/backend).
  • Frontend: SvelteKit UI (in quick_pp/app/frontend) providing data visualisations and tools.
  • Docker: Compose assets to run backend + Postgres for development (quick_pp/app/docker).
  • CLI: quick_pp CLI wrapper that starts services, runs training, prediction and deployment tasks (quick_pp/cli.py).
  • Machine learning: training/prediction pipelines and MLflow integration (quick_pp/machine_learning).

Prerequisites

  • Python 3.11+ (for backend and CLI)
  • Node.js 18+ and npm or yarn (for frontend)
  • Docker & Docker Compose (optional, for the packaged backend + DB)

.env & Database (SQLite vs PostgreSQL)

  • The application reads DB and other secrets from environment variables. For local development create a .env file in the repo root or use quick_pp/app/docker/.env when running the bundled Docker Compose stack.

  • Minimal .env examples

    SQLite (quick local testing)

    QPP_DATABASE_URL=sqlite:///./data/local.db
    QPP_SECRET_KEY=change-this-to-a-random-string
    

    PostgreSQL (recommended for realistic usage / Docker)

    QPP_DATABASE_URL=postgresql://qpp_user:qpp_pass@postgres:5432/quick_pp
    QPP_SECRET_KEY=replace-with-secure-value
    # if you run DB externally, replace host with reachable hostname or IP
    

    Which to choose

    • SQLite: easiest for quick, single-user experiments. No external DB server is required, but concurrency is limited and it is not recommended for multi-container deployments.
    • PostgreSQL: recommended for Docker and production-like setups; the quick_pp/app/docker/docker-compose.yaml in the repo is configured to create a Postgres service and a matching .env template.
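
    Whichever you choose, a quick connectivity check can save debugging time. The sketch below is illustrative, not part of the repo, and assumes python-dotenv and SQLAlchemy are installed (neither is confirmed by this README); it loads the .env file and runs a trivial query against the configured database:

    # check_db.py (hypothetical helper, not part of the repo)
    import os
    from dotenv import load_dotenv              # assumes python-dotenv is installed
    from sqlalchemy import create_engine, text  # assumes SQLAlchemy is installed

    load_dotenv()  # reads .env from the current directory
    engine = create_engine(os.environ["QPP_DATABASE_URL"])

    with engine.connect() as conn:
        # SELECT 1 is valid on both SQLite and PostgreSQL
        print(conn.execute(text("SELECT 1")).scalar())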

    Security note

    • Never commit secrets (QPP_SECRET_KEY, DB passwords) to version control. Use environment-specific .env files excluded via .gitignore or a secrets manager.

Quick checklist

  • Ports: backend API 6312, frontend dev 5173, MLflow UI 5015, model server 5555.
  • Backend CLI entrypoint: python main.py (or quick_pp if installed).
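
If a service refuses to connect, a standard-library sketch can confirm which of the ports above are actually listening (the names and port numbers are taken from the checklist; nothing else is assumed):

# check_ports.py (hypothetical helper, standard library only)
import socket

PORTS = {"backend API": 6312, "frontend dev": 5173, "MLflow UI": 5015, "model server": 5555}
for name, port in PORTS.items():
    with socket.socket() as sock:
        sock.settimeout(1)
        status = "open" if sock.connect_ex(("localhost", port)) == 0 else "closed"
        print(f"{name:12s} :{port}  {status}")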

Clone & Python setup

  1. Clone the repo and create a venv:
git clone https://github.com/imranfadhil/quick_pp.git
cd quick_pp
python -m venv .venv
# mac/linux
source .venv/bin/activate
# windows (cmd.exe)
.venv\Scripts\activate
  2. Install Python dependencies:
pip install -r requirements.txt
# (optional) install package editable for CLI convenience
pip install -e .

Using Docker (recommended for a complete local stack)

  • The repo provides Docker assets in quick_pp/app/docker/ to start the backend and a Postgres data volume.

Quick docker compose (from repo root):

cd quick_pp/app/docker
docker-compose up -d

This will bring up services configured for development. Logs can be checked with docker-compose logs -f in the same folder.

Frontend (SvelteKit)

  1. Install frontend dependencies and run the dev server:
cd quick_pp/app/frontend
npm install
# Ensure Plotly is available for the UI components
npm install plotly.js-dist-min --save
npm run dev
  2. Open the frontend at http://localhost:5173 (SvelteKit default).

Start the app using the project CLI

  • From the repo root you can use the included CLI which orchestrates backend and frontend processes. Example (starts backend and, if available, frontend):
python main.py app
# or (if installed) the user-facing command
quick_pp app

Start backend only (dev):

python main.py backend --debug

Start frontend only (dev):

python main.py frontend

Common commands

  • Run MLflow tracking UI (local): python main.py mlflow_server (a client sketch follows this list)
  • Deploy model server: python main.py model_deployment
  • Train/predict via CLI: see python main.py --help or quick_pp --help
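
For scripting against the tracking server rather than browsing the UI, the MLflow Python client can be pointed at the same port (5015, from the Quick checklist above). A minimal sketch that just lists recorded experiments:

import mlflow

mlflow.set_tracking_uri("http://localhost:5015")  # MLflow UI port from the checklist

# List experiments recorded by previous training runs.
for exp in mlflow.search_experiments():
    print(exp.experiment_id, exp.name)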

Testing

  • Run unit tests with pytest in the repo root:
pytest -q

Troubleshooting & tips

  • If the frontend does not render charts, ensure plotly.js-dist-min is installed in quick_pp/app/frontend (some components do dynamic imports).
  • If the backend fails to start behind Docker, check quick_pp/app/docker/.env and the Postgres volumes under quick_pp/app/docker/data/.
  • Use the CLI python main.py for convenience; it will open browser windows for the services it starts unless --no-open is provided.

License

  • See the LICENSE file in the repository root.

Contributions and feedback welcome — open an issue or a PR with improvements.

Jupyter Notebooks

The repository includes several example notebooks under notebooks/ that demonstrate data handling, EDA and basic petrophysical workflows. Recommended workflow for exploring the project locally:

  1. Start the backend API (see CLI commands above) if a notebook calls the API.
  2. Open a Python environment with the project dependencies installed.
  3. Launch JupyterLab or Jupyter Notebook and open the notebooks in notebooks/.

Key notebooks:

  • 01_data_handler.ipynb — create and inspect a mock qppp project file.
  • 02_EDA.ipynb — quick exploratory data analysis patterns used in demos.
  • 03_* series — interpretation examples (porosity, saturation, rock typing).

Machine learning (Train / Predict / Deploy)

The project includes ML training and prediction utilities integrated with MLflow. High-level steps and helpful details:

  1. Prepare input data

    • Training expects a Parquet file at data/input/<data_hash>___.parquet (a sketch for creating one follows the steps below).
    • The feature set required by each modelling config is defined in quick_pp/machine_learning/config.py (the MODELLING_CONFIG used by the training code). Ensure input columns match the configured features.
  2. Train a model (local)

# from repo root, with virtualenv active
python main.py train <model_config> <data_hash>
# example
python main.py train mock mock
  3. Run predictions
python main.py predict <model_config> <data_hash> [output_name] [--plot]
# example
python main.py predict mock mock results_test --plot
  4. Deploy model server (serves registered MLflow models)
python main.py model_deployment
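
For step 1, a minimal sketch of writing an input Parquet file with pandas. The column names are placeholders, not the real feature list (which comes from quick_pp/machine_learning/config.py), and the path assumes <data_hash> = mock as in the examples above:

import pandas as pd  # writing Parquet also requires pyarrow or fastparquet

# Placeholder columns; replace with the features your modelling config expects.
df = pd.DataFrame({
    "GR": [45.2, 80.1, 60.3],
    "RHOB": [2.45, 2.60, 2.50],
    "NPHI": [0.22, 0.12, 0.18],
})
df.to_parquet("data/input/mock___.parquet", index=False)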

Notes:

  • MLflow UI (tracking server) is available with python main.py mlflow_server.
  • The --plot flag in predict saves visual outputs (if supported by the predict pipeline).
  • For production or reproducible experiments, register models in MLflow and configure the model registry settings used by the deployment code.
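
A minimal registration sketch, assuming you copy a real run ID from the tracking UI and that the training run logged its model under the artifact path model (the artifact path and registry name here are assumptions, not confirmed by this repo):

import mlflow

mlflow.set_tracking_uri("http://localhost:5015")

# Placeholders: take the run ID from the MLflow UI; the artifact path and
# registry name below are illustrative assumptions.
version = mlflow.register_model(
    model_uri="runs:/<run_id>/model",
    name="quick_pp_mock",
)
print(version.name, version.version)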
