project-toolkit
A modular Python toolkit for managing project paths, cloud services (Google Cloud, Cloudflare R2), HashiCorp Vault, and configuration — powered by pydantic-settings.
Features
- ⚙️ Auto-Initialized Settings — just import and use, no setup needed
- 🔑 Dynamic Env Access — read any env var or `.env` value with `settings.env("KEY")`
- 🔐 Vault Integration — read secrets like `vault kv get` / `vault kv get -field=`
- 📁 Project Path Management — create and navigate project directory structures
- ☁️ Cloudflare — official `cloudflare-python` SDK wrapper, plus R2 `boto3` and `requests` clients
- 📊 Google Services — BigQuery, Drive, and Sheets integrations
- 🌐 WAN IP Tool — fetch, track, and log public IPv4/IPv6 address changes via HTTP/2 and HTTP/3
- 🔔 Notifications — framework-agnostic Discord alerts for Prefect and Airflow
Installation
```bash
# Core only
pip install project-toolkit

# With specific extras
pip install "project-toolkit[google]"
pip install "project-toolkit[cloudflare]"
pip install "project-toolkit[vault]"
pip install "project-toolkit[wan-ip]"
pip install "project-toolkit[notifications]"

# Everything
pip install "project-toolkit[all]"
```
API Usage Guide
1. Auto-Initialized Settings (No Setup Needed)
Settings auto-initialize on first use — no boilerplate required:
```python
from project_toolkit import settings

# Access any module setting directly
print(settings.cloudflare.r2_access_token)
print(settings.google.google_service_account_json)
print(settings.vault.is_configured)
```
Environment detection is automatic:
- If `AIRFLOW_HOME` is set → loads `config/.env.airflow`
- Otherwise → loads `config/.env.dev`
The config/ directory is located by searching from your current working directory first,
then walking up the directory tree. This means it works whether you run from the project root,
a subdirectory, or even a Jupyter notebook inside the project.
2. Dynamic Environment Variable Access
Read any env var or .env value — no need to pre-define it in code:
```python
from project_toolkit import settings

# Read API keys (from env vars or .env file)
openai_key = settings.env("OPENAI_API_KEY")
gemini_key = settings.env("GEMINI_API_KEY")
claude_key = settings.env("CLAUDE_API_KEY")

# With a default fallback
db_host = settings.env("DATABASE_HOST", "localhost")
debug = settings.env("DEBUG_MODE", "false")
```
Priority: Environment variable > .env file value > default
Your config/.env.dev can contain any key:
```
OPENAI_API_KEY=sk-xxxxx
GEMINI_API_KEY=AIza-xxxxx
CLAUDE_API_KEY=sk-ant-xxxxx
DATABASE_HOST=db.example.com
```
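The precedence above can be illustrated with a stdlib-only sketch (`resolve` is a hypothetical stand-in for `settings.env`, not the toolkit's code):

```python
import os

def resolve(key: str, dotenv: dict, default=None):
    # Lookup order: real environment variable first, then the parsed
    # .env contents, then the caller-supplied default
    if key in os.environ:
        return os.environ[key]
    return dotenv.get(key, default)

dotenv = {"DATABASE_HOST": "db.example.com"}
print(resolve("DATABASE_HOST", dotenv, "localhost"))  # db.example.com (.env beats default)
os.environ["DATABASE_HOST"] = "10.0.0.5"
print(resolve("DATABASE_HOST", dotenv, "localhost"))  # 10.0.0.5 (env var beats .env)
```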
3. HashiCorp Vault
Read secrets from Vault — same env vars as the vault CLI.
Setup — Add to config/.env.dev:
```
VAULT_ADDR=http://192.168.12.2:8200
VAULT_TOKEN=hvs.your-vault-token
```
Read all fields (like vault kv get secret/my-credentials):
```python
from project_toolkit import settings

secret = settings.vault.read_secret("secret/my-credentials")
print(secret)
# {'username': 'admin', 'password': 'secret-password-123'}
```
Read a single field (like vault kv get -field=OPENAI_API_KEY secret/api):
```python
from project_toolkit import settings

# Single field reads
api_key = settings.vault.read_field("secret/api", "OPENAI_API_KEY")
password = settings.vault.read_field("secret/my-credentials", "password")

# With fallback default
db_pass = settings.vault.read_field("secret/db", "password", "default-pw")
```
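The CLI-style paths above combine the KV mount and the secret path, as the `vault` CLI does; a hypothetical helper showing the split (illustrative, not the toolkit's implementation):

```python
def split_kv_path(path: str) -> tuple[str, str]:
    # Hypothetical helper: map a CLI-style path like "secret/my-credentials"
    # to a (mount_point, secret_path) pair as used by hvac's KV v2 API
    mount_point, _, secret_path = path.partition("/")
    return mount_point, secret_path

print(split_kv_path("secret/my-credentials"))  # ('secret', 'my-credentials')
```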
Use the hvac client directly for advanced operations:
```python
from project_toolkit import settings

client = settings.vault.get_vault_client()
if client:
    # Write a secret
    client.secrets.kv.v2.create_or_update_secret(
        path="my-new-secret",
        secret={"api_key": "abc123"},
    )

    # List secrets
    secrets = client.secrets.kv.v2.list_secrets(path="", mount_point="secret")
    print(secrets["data"]["keys"])
```
4. Explicit .env File Path (Fallback)
If auto-detection cannot find your config/.env.dev — for example, when
running from a Jupyter notebook, a different working directory, or a
remote environment — you can pass the path explicitly:
```python
from project_toolkit import settings

# Point to your .env file explicitly
settings.configure(env_file="/path/to/your/config/.env.dev")

# Now use settings as normal
print(settings.cloudflare.r2_access_token)
print(settings.env("OPENAI_API_KEY"))
```
Or via get_settings() directly:
Or via get_settings() directly:

```python
from project_toolkit import get_settings

get_settings.cache_clear()
settings = get_settings(env_file="/path/to/your/config/.env.dev")
```
5. Manual Settings Override
For testing or custom configurations, construct AppSettings directly:
```python
from project_toolkit.settings.base import AppSettings
from project_toolkit.settings.cloudflare import CloudflareSettings
from project_toolkit.settings.google import GoogleSettings
from project_toolkit.settings.path import PathSettings
from project_toolkit.settings.vault import VaultSettings
from project_toolkit.settings.wan_ip import WanIpSettings

custom_settings = AppSettings(
    path_config=PathSettings(
        data_dir="/custom/data",
        project_name="my-project",
    ),
    cloudflare=CloudflareSettings(
        r2_access_token="custom-token",
        r2_account_id="custom-id",
    ),
    google=GoogleSettings(),
    vault=VaultSettings(),
    wan_ip=WanIpSettings(),
)

# Use the custom settings
print(custom_settings.cloudflare.r2_access_token)  # "custom-token"
print(custom_settings.env("R2_ACCESS_TOKEN"))      # reads from env/dotenv
```
To reset the cached auto-initialized singleton:
```python
from project_toolkit import settings, get_settings

# Reset cached settings (e.g., after changing env vars)
get_settings.cache_clear()
settings._reset()

# Next access will re-initialize
print(settings.cloudflare.r2_access_token)
```
6. Project Path Management
Manage project directory structures with automatic creation:
```python
from project_toolkit import DataPathConfig

# Direct usage
dpc = DataPathConfig(
    project_name="my-project",
    subproject="etl",
    data_dir="/data",
)

print(dpc.data_dir())         # /data
print(dpc.project_dir())      # /data/my-project
print(dpc.sub_project_dir())  # /data/my-project/etl

# Get a timestamped file name and full file path
filename = dpc.get_project_today_file_name("output", "csv")
filepath = dpc.sub_project_dir() / filename  # /data/my-project/etl/output_20260209.csv
```
From auto-initialized settings (recommended):
From auto-initialized settings (recommended):

```python
from project_toolkit import settings, DataPathConfig

dpc = DataPathConfig.from_settings(
    settings.path_config,
    project_name="my-project",
)
```
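Judging from the example path above, the timestamped naming appears to follow a `<stem>_<YYYYMMDD>.<ext>` pattern; a minimal stdlib sketch (illustrative, not the toolkit's implementation):

```python
from datetime import date

def today_file_name(stem: str, ext: str) -> str:
    # Illustrative reimplementation: append today's date as YYYYMMDD
    return f"{stem}_{date.today():%Y%m%d}.{ext}"

print(today_file_name("output", "csv"))  # e.g. output_20260209.csv
```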
7. Cloudflare
```python
from project_toolkit.cloudflare import CloudflareSdkClient

# Zero-config (reads CLOUDFLARE_API_TOKEN & CLOUDFLARE_ACCOUNT_ID from env/dotenv)
cf = CloudflareSdkClient()

# DNS
records = cf.list_dns_records(zone_id="...")
cf.create_dns_record(zone_id="...", name="example.com", type="A", content="1.2.3.4")

# R2 Buckets
buckets = cf.list_r2_buckets()

# Workers KV
cf.kv_put(namespace_id="...", key_name="my_key", value="my_value")

# D1 Databases
results = cf.d1_query(database_id="...", sql="SELECT * FROM users")
```
For R2 file uploads via boto3 or requests:
For R2 file uploads via boto3 or requests:

```python
from project_toolkit.cloudflare.boto3_client import CloudflareBoto3Client

# Zero-config (reads R2_ACCESS_KEY_ID, R2_SECRET_ACCESS_KEY & CLOUDFLARE_ACCOUNT_ID)
client = CloudflareBoto3Client()
with open("/path/to/file.csv", "rb") as f:
    client.upload_file("my-bucket", "file.csv", f.read())
```
8. Google Sheets
```python
from project_toolkit import settings
from project_toolkit.google.sheet import GoogleSheetsManager

# Zero-config (reads GOOGLE_SERVICE_ACCOUNT_JSON from env/dotenv via get_settings())
sheets = GoogleSheetsManager()

# Or explicitly from settings
sheets = GoogleSheetsManager.from_settings(settings.google)

# Read a sheet as a DataFrame
df = sheets.sheet_df(sheet_id="your-sheet-id")
print(df.head())
```
9. Google BigQuery
```python
from project_toolkit import settings
from project_toolkit.google.bigquery import BigQueryManager

# Zero-config (reads GOOGLE_SERVICE_ACCOUNT_JSON from env/dotenv via get_settings())
bq = BigQueryManager()

# Or explicitly from settings
bq = BigQueryManager.from_settings(settings.google)

# Run a query
df = bq.query_table("SELECT * FROM my_dataset.my_table LIMIT 10")
```
10. Google Drive
```python
from project_toolkit import settings
from project_toolkit.google.drive import GoogleDriveManager

# Zero-config (reads GOOGLE_SERVICE_ACCOUNT_JSON from env/dotenv via get_settings())
drive = GoogleDriveManager()

# Or explicitly from settings
drive = GoogleDriveManager.from_settings(settings.google)

# List files in a folder
files = drive.list_files(folder_id="your-folder-id")
```
11. WAN IP Tool (Fetch & Track)
The `ip_tool` package provides a convenient `wan_ip` object for immediate fetching, and classes for programmatic change detection.
Quick Access (Shorthand)
```python
from project_toolkit.ip_tool import wan_ip

# Property-style (attribute) access
print(wan_ip.ipv4)    # "104.28.236.177"
print(wan_ip.ipv6)    # "2a09:bac5..."
print(wan_ip.h3ipv4)  # HTTP/3 result

# Tab-separated report string
print(wan_ip.report())  # "H2v4: 1.1.1.1  H3v4: 1.1.1.2  v6: 2a09..."
```
Programmatic Change Detection
Use the IPChangeDetector to maintain a persistent log (history.txt) and a current snapshot (current.json).
```python
from project_toolkit.ip_tool import IPChangeDetector, WanIP

# Robust class-based usage
wan_ip_inst = WanIP(timeout=10)
detector = IPChangeDetector(
    project_name="my-app",
    subproject="office",
    wan_ip=wan_ip_inst,
)

# Compare current IPs vs last saved and record if changed
changed, results = detector.check_and_record()

# Get the formatted line that was appended to history
print(detector.get_history_line())
```
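The persistence logic can be sketched with the stdlib (an illustrative reimplementation of the check-and-record idea, not the toolkit's code):

```python
import json
from pathlib import Path

def check_and_record(results: dict, state_dir: Path) -> bool:
    # Compare fresh results against current.json; on change, append a
    # tab-separated line to history.txt and update the snapshot
    current_file = state_dir / "current.json"
    history_file = state_dir / "history.txt"
    previous = json.loads(current_file.read_text()) if current_file.exists() else None
    if results == previous:
        return False
    line = "\t".join(f"{k}: {v}" for k, v in results.items())
    with history_file.open("a") as f:
        f.write(line + "\n")
    current_file.write_text(json.dumps(results))
    return True
```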
12. Notifications (Discord)
Rich Discord notifications with built-in adapters for Prefect and Airflow.
Each adapter supports zero-config instantiation — just set the matching
env var (DISCORD_PREFECT_WEBHOOK_URL or DISCORD_AIRFLOW_WEBHOOK_URL) in
your environment or config/.env.dev and the adapter creates a
DiscordNotifier automatically.
Prefect Integration
```python
from prefect import flow

from project_toolkit.notifications.adapters.prefect import PrefectAdapter

# Zero-config — reads DISCORD_PREFECT_WEBHOOK_URL from env / .env
hooks = PrefectAdapter()

@flow(
    on_failure=[hooks.on_failure],
    on_completion=[hooks.on_success],
    on_crashed=[hooks.on_crashed],
)
def my_flow():
    ...
```
Or pass an explicit notifier:
```python
from project_toolkit.notifications.discord import DiscordNotifier
from project_toolkit.notifications.adapters.prefect import PrefectAdapter

notifier = DiscordNotifier(webhook_url="https://discord.com/api/webhooks/...")
hooks = PrefectAdapter(notifier)
```
Airflow Integration
```python
from airflow import DAG

from project_toolkit.notifications.adapters.airflow import AirflowAdapter

# Zero-config — reads DISCORD_AIRFLOW_WEBHOOK_URL from env / .env
hooks = AirflowAdapter()

with DAG(
    dag_id="my_dag",
    on_failure_callback=hooks.on_failure,
    on_success_callback=hooks.on_success,
) as dag:
    ...
```
Or pass an explicit notifier:
```python
from project_toolkit.notifications.discord import DiscordNotifier
from project_toolkit.notifications.adapters.airflow import AirflowAdapter

notifier = DiscordNotifier(webhook_url="https://discord.com/api/webhooks/...")
hooks = AirflowAdapter(notifier)
```
Manual Context
```python
from project_toolkit.notifications.discord import DiscordNotifier
from project_toolkit.notifications.context import NotificationContext

notifier = DiscordNotifier(webhook_url="https://discord.com/api/webhooks/...")
notifier.notify(NotificationContext(
    status="failure",
    flow_name="custom_task",
    run_name="manual_run_1",
    error_message="Something went wrong",
))
```
Configuration
Environment Variables
| Variable | Module | Description |
|---|---|---|
| `CLOUDFLARE_API_TOKEN` | Cloudflare | Cloudflare API token (official) |
| `CLOUDFLARE_ACCOUNT_ID` | Cloudflare | Cloudflare account ID (official) |
| `CLOUDFLARE_ZONE_ID` | Cloudflare | Default zone ID for DNS operations |
| `{KEY}_ZONE_ID` | Cloudflare | Domain-specific zone IDs (e.g. `LGNAT_ZONE_ID`) |
| `R2_ACCESS_TOKEN` | Cloudflare | Cloudflare R2 API token (legacy) |
| `R2_ACCESS_KEY_ID` | Cloudflare | Cloudflare R2 AWS Access Key ID (for boto3) |
| `R2_SECRET_ACCESS_KEY` | Cloudflare | Cloudflare R2 AWS Secret Access Key (for boto3) |
| `R2_ACCOUNT_ID` | Cloudflare | Cloudflare account ID (legacy) |
| `R2_DOMAIN` | Cloudflare | R2 custom domain URL |
| `GOOGLE_SERVICE_ACCOUNT_JSON` | Google | Path to service account JSON |
| `TEST_GOOGLE_SHEET_ID` | Google | Test sheet ID (dev only) |
| `VAULT_ADDR` | Vault | Vault server address (e.g. `http://192.168.12.2:8200`) |
| `VAULT_TOKEN` | Vault | Vault authentication token |
| `WAN_IP_LOG_DIR` | WAN IP | Directory for IP change logs |
| `DISCORD_PREFECT_WEBHOOK_URL` | Notifications | Discord webhook for Prefect flows |
| `DISCORD_AIRFLOW_WEBHOOK_URL` | Notifications | Discord webhook for Airflow DAGs |
| `AIRFLOW_HOME` | Core | Auto-detected for env switching |
Any key in your `.env` file or environment can be read dynamically via `settings.env("KEY")` — no need to define it in Python code.
.env Files
Place config files in the config/ directory:
```
config/
├── .env.dev        # Local development (loaded when no AIRFLOW_HOME)
└── .env.airflow    # Airflow production (loaded when AIRFLOW_HOME is set)
```
.env File Detection Order
The toolkit searches for config/ in this order:
1. `<cwd>/config/` — current working directory
2. Walk up from `cwd` (up to 5 parent levels)
3. Walk up from the package install location (works for editable installs)
If auto-detection fails, use settings.configure(env_file=...) to specify
the path explicitly.
Priority Order
Environment variables > .env files > Vault secrets > defaults
Troubleshooting: .env File Not Found
If settings load with None values, the config/.env.dev file was not
found. This commonly happens when:
- Running from a Jupyter notebook whose working directory differs from the project root.
- Running a script from a different directory than the project root.
- The package was installed via `pip install` (not editable mode) and there's no `config/` folder in the current working directory.
Fix: Call configure() before accessing settings:
```python
from project_toolkit import settings

settings.configure(env_file="/absolute/path/to/config/.env.dev")
print(settings.env("OPENAI_API_KEY"))  # ✓ works
```
Development
```bash
# Install with all dependencies
pip install -e ".[all]"

# Run tests
pytest

# Run specific tests
pytest tests/test_settings.py -s -vv
pytest tests/test_vault.py -s -vv
pytest tests/test_auto_settings.py -s -vv
```
Project Structure
```text
project_toolkit/
├── __init__.py              # Package exports + auto-initialized settings singleton
├── project_path.py          # Project path management (DataPathConfig)
├── settings/
│   ├── __init__.py          # Settings exports
│   ├── base.py              # AppSettings + get_settings() + env()
│   ├── path.py              # PathSettings
│   ├── cloudflare.py        # CloudflareSettings
│   ├── google.py            # GoogleSettings
│   ├── vault.py             # VaultSettings + read_secret() + read_field()
│   └── wan_ip.py            # WanIpSettings
├── cloudflare/
│   ├── boto3_client.py      # R2 via boto3
│   └── requests_client.py   # R2 via requests
├── google/
│   ├── bigquery.py          # BigQuery operations
│   ├── drive.py             # Google Drive operations
│   └── sheet.py             # Google Sheets operations
├── ip_tool/
│   ├── __init__.py          # Singleton 'wan_ip' proxy exports
│   ├── wan_ip.py            # WanIP core fetcher (H2, H3, v4, v6)
│   └── detector.py          # IP change detection + persistence log logic
└── notifications/
    ├── context.py           # Shared NotificationContext dataclass
    ├── discord.py           # Core Discord notifier
    └── adapters/
        ├── prefect.py       # Prefect (flow, flow_run, state) adapter
        └── airflow.py       # Airflow (context dict) adapter
```
License
MIT