
Fyron is a pragmatic Python toolkit for interoperable healthcare data and AI workflows. It gives developers clean, testable primitives for FHIR (REST + SQL), DICOM imaging, document downloads, Teable integration, and LLM-assisted analysis without hiding the underlying protocols.

Features: FHIR REST + SQL with pagination and caching · DICOMweb with NIfTI export · Document download with auth · Teable read/write · LLM prompts over DataFrames and files · Optional Excel support via fyron[excel]

Quickstart

Get a first FHIR query running in a few steps.

1. Install (Python 3.10+)

# pip
pip install fyron

# Poetry (add to your project)
poetry add fyron

# uv
uv pip install fyron
# or, in a project with pyproject.toml: uv add fyron

With Excel support: pip install "fyron[excel]", poetry add "fyron[excel]", or uv pip install "fyron[excel]" (quoting keeps shells such as zsh from interpreting the brackets).

2. Configure environment
If you work from a repo clone: cp .example.env .env and edit it. If you installed from PyPI, copy .example.env from the repository, or set the variables listed under Setup in your shell or in your own .env.
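For reference, a minimal .env for the Quickstart could look like this (placeholder values; per Setup, only FHIR_BASE_URL is needed for servers without token auth):

```shell
# .env — Quickstart (FHIR REST only)
FHIR_BASE_URL="https://example.org/fhir"
# Only needed for token auth:
FHIR_AUTH_URL="https://example.org/auth/token"
FHIR_USER="alice"
FHIR_PASSWORD="secret"
```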

3. Load env and run a query

from fyron import load_env, FHIRRestClient

load_env()  # optional: path=".env", override=False, warn_if_missing=True

client = FHIRRestClient()
patients = client.query_df(
    resource_type="Patient",
    params={"_count": 25, "_sort": "_id"},
    max_pages=1,
)
print(patients.head())

To try it without your own server, pass base_url="https://hapi.fhir.org/baseR4" (public HAPI test server; no auth needed for read-only access).

Common tasks

| Goal | Section / hint |
|---|---|
| First FHIR query | Quickstart |
| Use a .env file | load_env() — Quickstart, Setup |
| Auth with token endpoint | Auth.token_env(auth_url=..., refresh_url=...) — FHIR REST |
| Export to Excel | write_excel(df, "out.xlsx") — install fyron[excel] — Data IO |
| FHIRPath extraction | query_df(..., fhir_paths=[("col", "path")]) — FHIR REST |
| Query from a DataFrame | query_df_from(df, ...) (REST or SQL) — FHIR REST, FHIR SQL |
| DICOM from a table | download_from_df(df, output_dir=...) — columns: study_instance_uid, series_instance_uid — DICOM |
| LLM over documents | agent.prompt_documents(documents=..., prompt=...) — LLM Agent |

Requirements

  • Python 3.10 or newer.
  • Optional: fhirpathpy for FHIRPath in query_df — pip install fhirpathpy.
  • Optional: Excel (.xlsx) — pip install fyron[excel].
  • Optional: FHIRSQLClient (Postgres) is included if psycopg is available; otherwise it is None and only REST is used.

Who It’s For

  • Data scientists and ML engineers building clinical datasets and cohorts.
  • Clinical informatics and analytics teams working across FHIR, SQL, and imaging.
  • Researchers who need reproducible pipelines and methods-ready documentation.
  • Engineers integrating healthcare data into apps, dashboards, or LLM workflows.

Package Layout

fyron/
  fhir/        # REST + SQL clients, auth, types, utilities
  dicom/       # DICOM downloader
  documents/   # Document downloader
  llm/         # LLM agent
  core/        # IO + integrations

Capabilities

What Fyron supports in each area:

  • FHIR: REST client with pagination, caching, multiprocessing, and FHIRPath extraction, plus a SQL client for Postgres-backed FHIR stores.
  • DICOM: DICOMweb downloads with optional NIfTI conversion and per-series/study manifests.
  • Documents: Deterministic URL downloads with metadata, hashes, and DataFrame helpers.
  • Teable: Read/write DataFrames to Teable tables with pagination, overwrite support, and base/table creation.
  • LLM: Prompting utilities for text, documents, images, and PDFs via configurable providers.

Setup

Fyron uses environment variables for endpoints and credentials. You can set them in your shell or in a .env file (use .example.env as a template).

Minimum for Quickstart (FHIR REST only)
Set FHIR_BASE_URL. For token auth, also set FHIR_AUTH_URL, FHIR_USER, and FHIR_PASSWORD. Call load_env() if you use a .env file.

Full reference — all optional, depending on which features you use:

# FHIR REST
export FHIR_BASE_URL="https://example.org/fhir"
export FHIR_AUTH_URL="https://example.org/auth/token"
export FHIR_REFRESH_URL="https://example.org/auth/refresh"
export FHIR_USER="alice"
export FHIR_PASSWORD="secret"

# DICOMweb
export DICOM_WEB_URL="https://pacs.example.org/dicomweb"
export DICOM_USER="alice"
export DICOM_PASSWORD="secret"

# FHIR SQL (optional; requires psycopg)
export FHIR_DB_HOST="localhost"
export FHIR_DB_PORT="5432"
export FHIR_DB_NAME="fhir"
export FHIR_DB_USER="postgres"
export FHIR_DB_PASSWORD="secret"

# LLM
export LLM_PROVIDER="openai"
export LLM_BASE_URL="https://api.openai.com"
export LLM_API_KEY="your_api_key"
export LLM_MODEL="gpt-4.1-mini"

# Teable
export TEABLE_BASE_URL="https://app.teable.ai"
export TEABLE_TOKEN="your_teable_token"

Notes

  • FHIR auth: Token auth uses FHIR_USER, FHIR_PASSWORD, and FHIR_AUTH_URL. Use Auth.token_env(auth_url=..., refresh_url=...) to build an auth object from env.
  • .env loading: load_env() looks for a .env in the current directory. Options: path (explicit file), override (overwrite existing env vars), warn_if_missing.
  • FHIRPath: For fhir_paths in query_df, install fhirpathpy: pip install fhirpathpy.
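Conceptually, loading a .env file boils down to parsing KEY=VALUE lines into os.environ. A simplified stand-alone sketch of that idea (not Fyron's load_env implementation, which also handles warn_if_missing):

```python
import os

def load_env_sketch(path=".env", override=False):
    """Toy .env loader: KEY=VALUE per line, '#' comments, optional quotes."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            key, value = key.strip(), value.strip().strip("'\"")
            # By default, existing environment variables win.
            if override or key not in os.environ:
                os.environ[key] = value
```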

Core Workflows

FHIR REST

Query any FHIR resource type; results are returned as a pandas DataFrame. Supports pagination, optional FHIRPath extraction, custom bundle processors, and parallel fetch from a DataFrame.

from fyron import FHIRRestClient

client = FHIRRestClient()
patients = client.query_df(
    resource_type="Patient",
    params={"_count": 25, "_sort": "_id"},
    max_pages=1,
)
print(patients.head())
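Pagination itself is standard FHIR: each bundle carries a link with relation "next" until the last page. The loop can be sketched independently of Fyron (fetch_page stands in for an HTTP GET returning a bundle dict):

```python
def collect_entries(fetch_page, first_url, max_pages=None):
    """Walk FHIR bundles via their 'next' links, accumulating entries."""
    entries, url, pages = [], first_url, 0
    while url and (max_pages is None or pages < max_pages):
        bundle = fetch_page(url)
        entries.extend(bundle.get("entry", []))
        # The 'next' link points at the following page; absent on the last page.
        url = next(
            (link["url"] for link in bundle.get("link", [])
             if link.get("relation") == "next"),
            None,
        )
        pages += 1
    return entries

# Two in-memory bundles stand in for server pages:
pages = {
    "p1": {"entry": [{"id": 1}, {"id": 2}],
           "link": [{"relation": "next", "url": "p2"}]},
    "p2": {"entry": [{"id": 3}], "link": []},
}
all_entries = collect_entries(pages.get, "p1")  # → [{'id': 1}, {'id': 2}, {'id': 3}]
```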

FHIRRestClient arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| base_url | FHIR server base URL | Defaults to FHIR_BASE_URL |
| auth | Reuse Auth or requests.Session | Optional |
| num_processes | Parallel worker count | Default 4 |
| request_timeout | Per-request timeout in seconds | Default 30 |
| log_requests | Log timing and summaries | True/False |
| log_request_urls | Print each request URL (with query string) to the console | True/False |
| return_fhir_obj | Wrap bundles as FHIRObj | True/False |

This uses FHIRPath expressions to extract specific fields.

from fyron import FHIRRestClient

client = FHIRRestClient()

df = client.query_df(
    resource_type="Observation",
    params={"_count": 25},
    fhir_paths=[
        ("patient_id", "subject.reference.replace('Patient/', '')"),
        ("code", "code.coding.code"),
        ("value", "valueQuantity.value"),
    ],
    max_pages=1,
)

This applies a custom bundle processor with safe FHIR extraction. safe_get supports dotted paths and list indexes (for example, "code.coding[0].code").

from fyron import FHIRObj, FHIRRestClient, safe_get

client = FHIRRestClient(return_fhir_obj=True)


def process_bundle(bundle):
    bundle = FHIRObj(**bundle) if isinstance(bundle, dict) else bundle
    rows = []
    for entry in bundle.entry or []:
        resource = entry.resource
        rows.append({
            "resourceType": safe_get(resource, "resourceType"),
            "id": safe_get(resource, "id"),
            "subject": safe_get(resource, "subject.reference"),
            "code": safe_get(resource, "code.coding[0].code"),
        })
    return {"Resource": rows}

custom_df = client.query_df(
    resource_type="Patient",
    params={"_count": 10},
    mode="custom",
    process_function=process_bundle,
    max_pages=1,
)
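To make the dotted-path idea concrete, here is a stand-alone extractor sketch that accepts the same path syntax (illustrative only, not Fyron's safe_get):

```python
import re

def get_path(obj, path, default=None):
    """Resolve dotted paths with optional list indexes, e.g. 'code.coding[0].code'."""
    for part in path.split("."):
        m = re.fullmatch(r"(\w+)(?:\[(\d+)\])?", part)
        if m is None:
            return default
        key, index = m.group(1), m.group(2)
        # Works on dicts and on attribute-style objects.
        obj = obj.get(key) if isinstance(obj, dict) else getattr(obj, key, None)
        if index is not None:
            try:
                obj = obj[int(index)]
            except (TypeError, IndexError):
                return default
        if obj is None:
            return default
    return obj

resource = {"code": {"coding": [{"code": "8480-6"}]}}
print(get_path(resource, "code.coding[0].code"))  # → 8480-6
```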

This runs one query per row in a DataFrame and fetches in parallel.

from fyron import Auth, FHIRRestClient

auth = Auth.token_env(
    auth_url="https://example.org/auth/token",
    refresh_url="https://example.org/auth/refresh",
)

client = FHIRRestClient(auth=auth, num_processes=4)

result_df = client.query_df_from(
    df=patients_df,
    resource_type="Observation",
    column_map={"subject": "patient_id"},
    params={"_count": 50},
    parallel_fetch=True,
)

Auth.token_env arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| auth_url | Token endpoint URL | Required |
| refresh_url | Refresh endpoint | Optional |
| token | Pre-issued bearer token | Optional |

FHIR SQL

Run parameterized SQL against a Postgres-backed FHIR store and get DataFrames. Requires psycopg; if it is not installed, FHIRSQLClient is None (REST-only installs still work).

from fyron import FHIRSQLClient

sql_client = FHIRSQLClient()

patients = sql_client.query_df(
    "SELECT id, resource_type FROM fhir_resources WHERE resource_type = %s",
    params=["Patient"],
)

sql = """
SELECT id, resource_type, subject_id
FROM observation
WHERE subject_id IN ({patient_ids})
"""

obs = sql_client.query_df_from(
    df=patients_df,
    sql=sql,
    column_map={"patient_ids": "patient_id"},
    chunk_size=500,
    parallel=True,
)
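The chunk_size behaviour can be shown without a database: values from the mapped column are split into batches, and one IN (...) clause is rendered per batch. A hypothetical helper sketching that batching (not Fyron's internals):

```python
def chunked_in_clauses(values, chunk_size):
    """Split values into batches; render one placeholder list per batch."""
    batches = [values[i:i + chunk_size] for i in range(0, len(values), chunk_size)]
    # Each batch becomes a '(%s, %s, ...)' fragment plus its parameters.
    return [("(" + ", ".join(["%s"] * len(b)) + ")", b) for b in batches]

clauses = chunked_in_clauses(["p1", "p2", "p3"], chunk_size=2)
# → [('(%s, %s)', ['p1', 'p2']), ('(%s)', ['p3'])]
```

Keeping values as parameters (rather than string-formatting them into the SQL) is what makes the queries injection-safe.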

FHIRSQLClient arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| dsn | Full connection string | Optional |
| host | DB host | Optional |
| port | DB port | Optional |
| dbname | Database name | Optional |
| user | DB user | Optional |
| password | DB password | Optional |
| connect_timeout | Connect timeout (seconds) | Default 10 |
| log_queries | Log SQL queries | True/False |

FHIR Utilities

This uses built-in bundle processors to standardize common resources.

from fyron import (
    FHIRRestClient,
    process_patient_bundle,
    process_encounter_bundle,
    process_observation_bundle,
    process_condition_bundle,
    process_procedure_bundle,
    process_imaging_study_bundle,
    process_diagnostic_report_bundle,
)

client = FHIRRestClient(return_fhir_obj=True)

patients = client.query_df(
    resource_type="Patient",
    params={"_count": 25},
    mode="custom",
    process_function=process_patient_bundle,
)

DICOM

Download DICOM series via DICOMweb; optionally convert to NIfTI. Supports single series, batch from a DataFrame, and a CLI (fyron-dicom).

from fyron import DICOMDownloader

loader = DICOMDownloader(output_format="nifti", num_processes=2)

results = loader.download_series(
    study_uid="1.2.3.4.5",
    series_uid="1.2.3.4.5.6",
    output_dir="dicom_out",
)
print(results)

Example auth options:

# Reuse an Auth object
from fyron import Auth, DICOMDownloader

auth = Auth.token_env(auth_url="https://example.org/auth/token")
loader = DICOMDownloader(auth=auth)

# Standalone basic auth
loader = DICOMDownloader(basic_auth=("user", "pass"))

DICOMDownloader arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| dicom_web_url | DICOMweb endpoint | Defaults to DICOM_WEB_URL |
| auth | Reuse Auth or requests.Session | Optional |
| basic_auth | Standalone basic auth tuple | (user, password) |
| output_format | Output format | "dicom" or "nifti" |
| num_processes | Parallel workers for DataFrame downloads | Default 1 |
| use_compression | Compress NIfTI output | True/False |

Download studies or series from a DataFrame. The DataFrame must have columns study_instance_uid and series_instance_uid (column names can be overridden via the method's arguments).

from fyron import DICOMDownloader

loader = DICOMDownloader(output_format="nifti", num_processes=2)

downloads = loader.download_from_df(
    df=imaging_df,
    output_dir="dicom_out",
)
print(downloads.head())

A CLI, fyron-dicom, is also available for downloading a single study or series:

fyron-dicom --dicom-web-url https://pacs.example.org/dicomweb \
  --study-uid 1.2.3.4.5 --series-uid 1.2.3.4.5.6 --output-dir dicom_out

Documents

Download documents from URLs (or from a DataFrame column) into a deterministic folder layout, with optional auth and metadata/hashes.

from fyron import DocumentDownloader

urls = [
    "https://example.org/reports/report1.pdf",
    "https://example.org/reports/report2.pdf",
]

loader = DocumentDownloader(output_dir="docs_out")
results = loader.download_urls(urls)
print(results.head())
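The deterministic layout plus hashes can be illustrated with the standard library alone; the naming scheme below is an assumption for illustration, not Fyron's exact scheme:

```python
import hashlib
from pathlib import PurePosixPath
from urllib.parse import urlparse

def target_name(url):
    """Derive a stable local filename from a URL: <sha1-prefix>_<basename>."""
    digest = hashlib.sha1(url.encode()).hexdigest()[:8]  # stable per URL
    basename = PurePosixPath(urlparse(url).path).name or "download"
    return f"{digest}_{basename}"

name = target_name("https://example.org/reports/report1.pdf")
```

Because the name is a pure function of the URL, re-running a download writes to the same path, which is what makes skip_existing effective.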

Example auth options:

# Reuse an Auth object
from fyron import Auth, DocumentDownloader

auth = Auth.token_env(auth_url="https://example.org/auth/token")
loader = DocumentDownloader(auth=auth)

# Standalone basic auth
loader = DocumentDownloader(basic_auth=("user", "pass"))

DocumentDownloader arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| base_url | Prefix for relative URLs | Defaults to FHIR_BASE_URL |
| output_dir | Download folder | Default documents_out |
| timeout | Request timeout (seconds) | Default 30 |
| skip_existing | Skip if file exists | True/False |
| max_workers | Thread pool size | Default 4 |
| save_mode | File mode | "auto", "txt", "pdf" |
| force_extension | Force file extension | Optional |
| auth | Reuse Auth or requests.Session | Optional |
| basic_auth | Standalone basic auth tuple | (user, password) |

This downloads documents referenced by a DataFrame column.

from fyron import DocumentDownloader

loader = DocumentDownloader(output_dir="docs_out")
results = loader.download_from_df(df=docs_df, url_col="document_url")
print(results.head())

LLM Agent

Send prompts to OpenAI or compatible APIs; run over a list of documents, a DataFrame column, or a single prompt. Supports text, images, and file inputs.

from fyron import LLMAgent

agent = LLMAgent(provider="openai")
response = agent.prompt("Summarize the key findings in this dataset")
print(response)

LLMAgent arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| provider | Provider type | "openai", "anyllm", "custom" |
| base_url | API base or custom endpoint | Defaults to LLM_BASE_URL |
| api_key | API key | Optional (required by some providers) |
| model | Model name | Defaults to LLM_MODEL |
| timeout | Request timeout (seconds) | Default 30 |
| verify_ssl | Verify TLS | True/False |

This runs a prompt across a list of documents or file paths.

from fyron import LLMAgent

agent = LLMAgent(provider="openai")

docs = ["doc1 text", "doc2 text", "/path/to/file.txt"]
summary_df = agent.prompt_documents(
    documents=docs,
    prompt="Summarize clinically relevant findings.",
    output_csv="doc_summaries.csv",
)
print(summary_df.head())

This runs a prompt across a DataFrame column and appends results.

from fyron import LLMAgent

agent = LLMAgent(provider="openai")

out = agent.prompt_dataframe(
    df=notes_df,
    text_col="note_text",
    prompt="Extract key diagnoses.",
    output_col="diagnoses",
    output_csv="notes_with_diagnoses.csv",
)
print(out.head())
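Conceptually, prompt_dataframe maps a prompt over one column and appends the results; in spirit it is equivalent to the following, where ask is a stand-in for the LLM call:

```python
import pandas as pd

def ask(prompt, text):
    """Stand-in for an LLM call; here it just echoes a derived value."""
    return f"[{len(text.split())} words]"

notes_df = pd.DataFrame({"note_text": ["fever and cough", "stable"]})
# One call per row; results land in a new column.
notes_df["summary"] = [ask("Summarize.", t) for t in notes_df["note_text"]]
```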

This generates a Methods-style description of a Python module.

from fyron import LLMAgent

agent = LLMAgent(provider="openai")
md = agent.describe_python_file(
    file_path="src/project/pipeline.py",
    output_md="docs/methods_pipeline.md",
)
print(md[:400])

Data IO

Shortcut helpers for CSV and Excel: read_csv, write_csv, read_excel, write_excel. For .xlsx files use the Excel extra: pip install fyron[excel].

from fyron import read_csv, write_excel

df = read_csv("patients.csv")
write_excel(df, "patients.xlsx")

Teable

Read and write Teable tables as DataFrames: list spaces/bases/tables, get-or-create base/table, overwrite table.

from fyron import TeableClient

teable = TeableClient(base_url="https://app.teable.ai", token="YOUR_TOKEN")

df = teable.read_table("tblXXXX")
record_ids = teable.write_dataframe("tblXXXX", df)

TeableClient arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| base_url | Teable API base | Defaults to TEABLE_BASE_URL |
| token | Teable API token | Required |
| timeout | Request timeout (seconds) | Default 30 |

This lists spaces, bases, and tables for discovery.

from fyron import TeableClient

teable = TeableClient(base_url="https://app.teable.ai", token="YOUR_TOKEN")

spaces = teable.list_spaces()
space_id = spaces[0]["id"] if spaces else None
bases = teable.list_bases(space_id=space_id) if space_id else []
tables = teable.list_tables(bases[0]["id"]) if bases else []

This ensures a base and table exist (creating them if needed).

from fyron import TeableClient

teable = TeableClient(base_url="https://app.teable.ai", token="YOUR_TOKEN")

base = teable.get_or_create_base(name="Clinical", space_name="My Workspace")
base_id = base["id"]

table = teable.get_or_create_table(
    base_id=base_id,
    name="Observations",
)

This fully replaces a table by deleting then inserting all rows.

from fyron import TeableClient

teable = TeableClient(base_url="https://app.teable.ai", token="YOUR_TOKEN")
record_ids = teable.overwrite_table("tblXXXX", df)

Caching

FHIR REST GET responses can be cached in memory to avoid repeated requests. Control TTL and size (or disable).

from fyron import FHIRRestClient

client = FHIRRestClient(
    cache_ttl_seconds=300,   # default 5 minutes
    cache_max_entries=1000,  # default 1000 responses
)

# Disable caching
client = FHIRRestClient(cache_ttl_seconds=0, cache_max_entries=0)
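A cache bounded by both TTL and entry count can be sketched in a few lines (illustrative, not Fyron's implementation); setting either bound to 0 disables caching, matching the semantics above:

```python
import time
from collections import OrderedDict

class TTLCache:
    def __init__(self, ttl_seconds=300, max_entries=1000, clock=time.monotonic):
        self.ttl, self.max, self.clock = ttl_seconds, max_entries, clock
        self._data = OrderedDict()  # key -> (expires_at, value)

    def get(self, key):
        item = self._data.get(key)
        if item is None or item[0] <= self.clock():
            self._data.pop(key, None)  # drop expired entries lazily
            return None
        return item[1]

    def put(self, key, value):
        if self.ttl <= 0 or self.max <= 0:
            return  # caching disabled
        if len(self._data) >= self.max:
            self._data.popitem(last=False)  # evict the oldest entry
        self._data[key] = (self.clock() + self.ttl, value)
```

The injectable clock makes expiry testable without real waiting.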

Logging

Set logging to INFO and use FHIRRestClient(log_requests=True) (or other clients’ logging options) to log request timing and summaries.

import logging
from fyron import FHIRRestClient

logging.basicConfig(level=logging.INFO)
client = FHIRRestClient(log_requests=True)

Development

See CONTRIBUTING.md for guidelines. Install dependencies with Poetry and run tests:

poetry install
poetry run pytest

Optional integration tests (require a reachable FHIR server) run when FYRON_INTEGRATION=1 is set; see tests/test_fhir_rest_integration.py.

Troubleshooting

| Issue | What to do |
|---|---|
| No module named 'fyron' | Install the package: pip install fyron or, for development, poetry install and run with poetry run pytest / poetry run python. |
| FHIRSQLClient is None | The SQL client is optional. Install psycopg (included in default deps) or use REST-only. |
| Excel: missing engine / openpyxl | Install the Excel extra: pip install "fyron[excel]". |
| No .env found / credentials not picked up | Call load_env() before creating clients, or set variables in the shell. Use load_env(path=".env") if the file is not in the current directory. |
| FHIRPath errors or missing columns | Install the optional dependency: pip install fhirpathpy. |

For bugs and feature requests, open an issue on the repository.

License

MIT
