# Fyron

Fyron is a pragmatic, open-source Python toolkit for interoperable healthcare data and AI workflows. It gives developers clean, testable primitives for FHIR (REST and SQL), DICOM imaging, document downloads, Teable integration, and LLM-assisted analysis without hiding the underlying protocols. It provides unified access to FHIR data via REST APIs and relational (SQL-backed) FHIR servers, integrates DICOM imaging sources, and enables semantic exploration of clinical narratives using modern language models.
## Quickstart

Install and run a first FHIR query in minutes.

This installs the package from PyPI (requires Python 3.10+):

```bash
pip install fyron
```

This copies the sample environment file so you can fill in your credentials:

```bash
cp .example.env .env
# edit .env with your credentials
```

This loads the `.env` explicitly (recommended in production or scripts). Options: `path` (explicit `.env` location), `override` (replace existing environment variables), `warn_if_missing`.

```python
from fyron import load_env

load_env()
```
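As a rough mental model (an illustrative sketch, not Fyron's actual implementation), loading a `.env` boils down to parsing `KEY=VALUE` lines into `os.environ`, skipping comments, and only overwriting existing variables when `override` is requested:

```python
import os

def load_env_sketch(path=".env", override=False):
    """Minimal sketch of a .env loader: parse KEY=VALUE lines into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments.
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            key, value = key.strip(), value.strip().strip('"')
            # Respect pre-existing variables unless override is requested.
            if override or key not in os.environ:
                os.environ[key] = value
```

The real `load_env()` adds conveniences on top of this idea, such as warning when the file is missing.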
This runs a minimal REST query and returns a DataFrame.

```python
from fyron import FHIRRestClient

client = FHIRRestClient()
patients = client.query_df(
    resource_type="Patient",
    params={"_count": 25, "_sort": "_id"},
    max_pages=1,
)
print(patients.head())
```
## Who It’s For
- Data scientists and ML engineers building clinical datasets and cohorts.
- Clinical informatics and analytics teams working across FHIR, SQL, and imaging.
- Researchers who need reproducible pipelines and methods-ready documentation.
- Engineers integrating healthcare data into apps, dashboards, or LLM workflows.
## Package Layout

```text
fyron/
  fhir/       # REST + SQL clients, auth, types, utilities
  dicom/      # DICOM downloader
  documents/  # Document downloader
  llm/        # LLM agent
  core/       # IO + integrations
```
## Supported Features

- **FHIR**: REST client with pagination, caching, multiprocessing, and FHIRPath extraction. SQL client for Postgres-backed FHIR stores.
- **DICOM**: DICOMweb downloads with optional NIfTI conversion and per-series/study manifests.
- **Documents**: Deterministic URL downloads with metadata, hashes, and DataFrame helpers.
- **Teable**: Read/write DataFrames to Teable tables with pagination, overwrite support, and base/table creation.
- **LLM**: Prompting utilities for text, documents, images, and PDFs via configurable providers.

## Setup

Fyron reads environment variables (see `.example.env`). Minimal examples:
This shows the smallest set of environment variables for the main features.

```bash
# FHIR REST
export FHIR_BASE_URL="https://example.org/fhir"
export FHIR_AUTH_URL="https://example.org/auth/token"
export FHIR_REFRESH_URL="https://example.org/auth/refresh"
export FHIR_USER="alice"
export FHIR_PASSWORD="secret"

# DICOMweb
export DICOM_WEB_URL="https://pacs.example.org/dicomweb"
export DICOM_USER="alice"
export DICOM_PASSWORD="secret"

# FHIR SQL
export FHIR_DB_HOST="localhost"
export FHIR_DB_PORT="5432"
export FHIR_DB_NAME="fhir"
export FHIR_DB_USER="postgres"
export FHIR_DB_PASSWORD="secret"

# LLM
export LLM_PROVIDER="openai"
export LLM_BASE_URL="https://api.openai.com"
export LLM_API_KEY="your_api_key"
export LLM_MODEL="gpt-4.1-mini"

# Teable
export TEABLE_BASE_URL="https://app.teable.ai"
export TEABLE_TOKEN="your_teable_token"
```
Notes:

- FHIR token auth uses `FHIR_USER`/`FHIR_PASSWORD` plus `FHIR_AUTH_URL`.
- Use `Auth.token_env(...)` for the simplest token-based auth flow.
- FHIRPath extraction requires `fhirpathpy` (`pip install fhirpathpy`).
- Use `load_env()` to load a `.env` from the current working directory.
## Core Workflows

### FHIR REST

This queries a single FHIR resource type and returns rows as a DataFrame.

```python
from fyron import FHIRRestClient

client = FHIRRestClient()
patients = client.query_df(
    resource_type="Patient",
    params={"_count": 25, "_sort": "_id"},
    max_pages=1,
)
print(patients.head())
```
`FHIRRestClient` arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| `base_url` | FHIR server base URL | Defaults to `FHIR_BASE_URL` |
| `auth` | Reuse `Auth` or `requests.Session` | Optional |
| `num_processes` | Parallel worker count | Default 4 |
| `request_timeout` | Per-request timeout in seconds | Default 30 |
| `log_requests` | Log timing and summaries | True/False |
| `return_fhir_obj` | Wrap bundles as `FHIRObj` | True/False |
This uses FHIRPath expressions to extract specific fields.

```python
from fyron import FHIRRestClient

client = FHIRRestClient()
df = client.query_df(
    resource_type="Observation",
    params={"_count": 25},
    fhir_paths=[
        ("patient_id", "subject.reference.replace('Patient/', '')"),
        ("code", "code.coding.code"),
        ("value", "valueQuantity.value"),
    ],
    max_pages=1,
)
```
This applies a custom bundle processor with safe FHIR extraction. `safe_get` supports dotted paths and list indexes (for example, `"code.coding[0].code"`).

```python
from fyron import FHIRObj, FHIRRestClient, safe_get

client = FHIRRestClient(return_fhir_obj=True)

def process_bundle(bundle):
    bundle = FHIRObj(**bundle) if isinstance(bundle, dict) else bundle
    rows = []
    for entry in bundle.entry or []:
        resource = entry.resource
        rows.append({
            "resourceType": safe_get(resource, "resourceType"),
            "id": safe_get(resource, "id"),
            "subject": safe_get(resource, "subject.reference"),
            "code": safe_get(resource, "code.coding[0].code"),
        })
    return {"Resource": rows}

custom_df = client.query_df(
    resource_type="Patient",
    params={"_count": 10},
    mode="custom",
    process_function=process_bundle,
    max_pages=1,
)
```
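To build intuition for what a dotted-path lookup like `safe_get` does (this is an illustrative sketch, not Fyron's implementation), each path segment is resolved in turn, optional `[index]` suffixes are peeled off, and the lookup returns a default as soon as any segment is missing:

```python
import re

def safe_get_sketch(obj, path, default=None):
    """Illustrative dotted-path lookup with list indexes, e.g. "code.coding[0].code"."""
    for segment in path.split("."):
        # Split a segment like "coding[0]" into the key and an optional index.
        match = re.fullmatch(r"(\w+)(?:\[(\d+)\])?", segment)
        if match is None:
            return default
        key, index = match.group(1), match.group(2)
        # Resolve the key on dicts or attribute-style objects.
        obj = obj.get(key) if isinstance(obj, dict) else getattr(obj, key, None)
        if index is not None:
            if not isinstance(obj, list) or int(index) >= len(obj):
                return default
            obj = obj[int(index)]
        if obj is None:
            return default
    return obj
```

The key property is that a missing intermediate field (for example, a Patient without `subject`) yields the default instead of raising.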
This runs one query per row in a DataFrame and fetches in parallel.

```python
from fyron import Auth, FHIRRestClient

auth = Auth.token_env(
    auth_url="https://example.org/auth/token",
    refresh_url="https://example.org/auth/refresh",
)
client = FHIRRestClient(auth=auth, num_processes=4)
result_df = client.query_df_from(
    df=patients_df,
    resource_type="Observation",
    column_map={"subject": "patient_id"},
    params={"_count": 50},
    parallel_fetch=True,
)
```
`Auth.token_env` arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| `auth_url` | Token endpoint URL | Required |
| `refresh_url` | Refresh endpoint | Optional |
| `token` | Pre-issued bearer token | Optional |
### FHIR SQL

This runs a SQL query and returns a DataFrame.

```python
from fyron import FHIRSQLClient

sql_client = FHIRSQLClient()
patients = sql_client.query_df(
    "SELECT id, resource_type FROM fhir_resources WHERE resource_type = %s",
    params=["Patient"],
)
```

This runs a templated query in chunks over values from a DataFrame column.

```python
sql = """
SELECT id, resource_type, subject_id
FROM observation
WHERE subject_id IN ({patient_ids})
"""
obs = sql_client.query_df_from(
    df=patients_df,
    sql=sql,
    column_map={"patient_ids": "patient_id"},
    chunk_size=500,
    parallel=True,
)
```
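Conceptually, chunked templating of this kind splits the id list into `chunk_size` batches and renders each batch into the placeholder (an illustrative sketch, not Fyron's implementation, which should also handle quoting and parameterization safely):

```python
def chunk_sql(sql_template, ids, chunk_size):
    """Render one SQL statement per chunk of ids, filling the {patient_ids} placeholder."""
    statements = []
    for start in range(0, len(ids), chunk_size):
        chunk = ids[start:start + chunk_size]
        # Quote each id and join into a SQL IN-list.
        in_list = ", ".join(f"'{i}'" for i in chunk)
        statements.append(sql_template.format(patient_ids=in_list))
    return statements

stmts = chunk_sql("SELECT * FROM observation WHERE subject_id IN ({patient_ids})",
                  ["p1", "p2", "p3"], chunk_size=2)
```

In production code, prefer driver-level parameter binding over string formatting to avoid SQL injection; the sketch only illustrates the chunking.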
`FHIRSQLClient` arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| `dsn` | Full connection string | Optional |
| `host` | DB host | Optional |
| `port` | DB port | Optional |
| `dbname` | Database name | Optional |
| `user` | DB user | Optional |
| `password` | DB password | Optional |
| `connect_timeout` | Connect timeout (seconds) | Default 10 |
| `log_queries` | Log SQL queries | True/False |
### FHIR Utilities

This uses built-in bundle processors to standardize common resources.

```python
from fyron import (
    FHIRRestClient,
    process_patient_bundle,
    process_encounter_bundle,
    process_observation_bundle,
    process_condition_bundle,
    process_procedure_bundle,
    process_imaging_study_bundle,
    process_diagnostic_report_bundle,
)

client = FHIRRestClient(return_fhir_obj=True)
patients = client.query_df(
    resource_type="Patient",
    params={"_count": 25},
    mode="custom",
    process_function=process_patient_bundle,
)
```
### DICOM

This downloads a single DICOM series and optionally converts it to NIfTI.

```python
from fyron import DICOMDownloader

loader = DICOMDownloader(output_format="nifti", num_processes=2)
results = loader.download_series(
    study_uid="1.2.3.4.5",
    series_uid="1.2.3.4.5.6",
    output_dir="dicom_out",
)
print(results)
```
`DICOMDownloader` arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| `dicom_web_url` | DICOMweb endpoint | Defaults to `DICOM_WEB_URL` |
| `output_format` | Output format | `"dicom"` or `"nifti"` |
| `num_processes` | Parallel workers for DataFrame downloads | Default 1 |
| `use_compression` | Compress NIfTI output | True/False |
This downloads studies or series listed in a DataFrame.

```python
from fyron import DICOMDownloader

loader = DICOMDownloader(output_format="nifti", num_processes=2)
downloads = loader.download_from_df(
    df=imaging_df,  # must include study_instance_uid and series_instance_uid
    output_dir="dicom_out",
)
print(downloads.head())
```
### Documents

This downloads a list of document URLs to a deterministic folder structure.

```python
from fyron import DocumentDownloader

urls = [
    "https://example.org/reports/report1.pdf",
    "https://example.org/reports/report2.pdf",
]
loader = DocumentDownloader(output_dir="docs_out")
results = loader.download_urls(urls)
print(results.head())
```
`DocumentDownloader` arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| `base_url` | Prefix for relative URLs | Defaults to `FHIR_BASE_URL` |
| `output_dir` | Download folder | Default `documents_out` |
| `timeout` | Request timeout (seconds) | Default 30 |
| `skip_existing` | Skip if file exists | True/False |
| `max_workers` | Thread pool size | Default 4 |
| `save_mode` | File mode | `"auto"`, `"txt"`, `"pdf"` |
| `force_extension` | Force file extension | Optional |
This downloads documents referenced by a DataFrame column.

```python
from fyron import DocumentDownloader

loader = DocumentDownloader(output_dir="docs_out")
results = loader.download_from_df(df=docs_df, url_col="document_url")
print(results.head())
```
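One common way to make download paths deterministic is to derive the local file name from a hash of the URL, so repeated runs map the same URL to the same path. This is an illustrative sketch only; `deterministic_path` is a hypothetical helper and Fyron's exact naming scheme may differ:

```python
import hashlib
from pathlib import PurePosixPath
from urllib.parse import urlparse

def deterministic_path(url, output_dir="docs_out"):
    """Map a URL to a stable local path: <output_dir>/<sha256-prefix>_<basename>."""
    # Take the last path component of the URL as a human-readable base name.
    basename = PurePosixPath(urlparse(url).path).name or "document"
    # A short hash prefix keeps names unique and stable across runs.
    digest = hashlib.sha256(url.encode("utf-8")).hexdigest()[:12]
    return f"{output_dir}/{digest}_{basename}"
```

Because the hash depends only on the URL, a `skip_existing`-style check can safely reuse previously downloaded files.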
### LLM Agent

This sends a single prompt and returns the model response.

```python
from fyron import LLMAgent

agent = LLMAgent(provider="openai")
response = agent.prompt("Summarize the key findings in this dataset")
print(response)
```
`LLMAgent` arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| `provider` | Provider type | `"openai"`, `"anyllm"`, `"custom"` |
| `base_url` | API base or custom endpoint | Defaults to `LLM_BASE_URL` |
| `api_key` | API key | Optional (required by some providers) |
| `model` | Model name | Defaults to `LLM_MODEL` |
| `timeout` | Request timeout (seconds) | Default 30 |
| `verify_ssl` | Verify TLS | True/False |
This runs a prompt across a list of documents or file paths.

```python
from fyron import LLMAgent

agent = LLMAgent(provider="openai")
docs = ["doc1 text", "doc2 text", "/path/to/file.txt"]
summary_df = agent.prompt_documents(
    documents=docs,
    prompt="Summarize clinically relevant findings.",
    output_csv="doc_summaries.csv",
)
print(summary_df.head())
```
This runs a prompt across a DataFrame column and appends results.

```python
from fyron import LLMAgent

agent = LLMAgent(provider="openai")
out = agent.prompt_dataframe(
    df=notes_df,
    text_col="note_text",
    prompt="Extract key diagnoses.",
    output_col="diagnoses",
    output_csv="notes_with_diagnoses.csv",
)
print(out.head())
```
This generates a Methods-style description of a Python module.

```python
from fyron import LLMAgent

agent = LLMAgent(provider="openai")
md = agent.describe_python_file(
    file_path="src/project/pipeline.py",
    output_md="docs/methods_pipeline.md",
)
print(md[:400])
```
### Data IO

This provides shortcut CSV and Excel read/write helpers.

```python
from fyron import read_csv, write_excel

df = read_csv("patients.csv")
write_excel(df, "patients.xlsx")
```
### Teable

This reads and writes Teable tables as DataFrames.

```python
from fyron import TeableClient

teable = TeableClient(base_url="https://app.teable.ai", token="YOUR_TOKEN")
df = teable.read_table("tblXXXX")
record_ids = teable.write_dataframe("tblXXXX", df)
```
`TeableClient` arguments:

| Argument | Description | Values/Defaults |
|---|---|---|
| `base_url` | Teable API base | Defaults to `TEABLE_BASE_URL` |
| `token` | Teable API token | Required |
| `timeout` | Request timeout (seconds) | Default 30 |
This lists spaces, bases, and tables for discovery.

```python
from fyron import TeableClient

teable = TeableClient(base_url="https://app.teable.ai", token="YOUR_TOKEN")
spaces = teable.list_spaces()
space_id = spaces[0]["id"] if spaces else None
bases = teable.list_bases(space_id=space_id) if space_id else []
tables = teable.list_tables(bases[0]["id"]) if bases else []
```
This ensures a base and table exist (creating them if needed).

```python
from fyron import TeableClient

teable = TeableClient(base_url="https://app.teable.ai", token="YOUR_TOKEN")
base = teable.get_or_create_base(name="Clinical", space_name="My Workspace")
base_id = base["id"]
table = teable.get_or_create_table(
    base_id=base_id,
    name="Observations",
)
```
This fully replaces a table by deleting then inserting all rows.

```python
from fyron import TeableClient

teable = TeableClient(base_url="https://app.teable.ai", token="YOUR_TOKEN")
record_ids = teable.overwrite_table("tblXXXX", df)
```
## Caching

This configures the in-memory cache for FHIR REST GET responses.

```python
from fyron import FHIRRestClient

client = FHIRRestClient(
    cache_ttl_seconds=300,   # default 5 minutes
    cache_max_entries=1000,  # default 1000 responses
)

# Disable caching
client = FHIRRestClient(cache_ttl_seconds=0, cache_max_entries=0)
```
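The behavior of a TTL-and-size-bounded response cache can be sketched in a few lines (illustrative only; Fyron's cache is tuned for HTTP GET responses and its eviction policy may differ):

```python
import time

class TTLCache:
    """Minimal TTL cache: entries expire after ttl_seconds; size capped at max_entries."""

    def __init__(self, ttl_seconds=300, max_entries=1000):
        self.ttl = ttl_seconds
        self.max_entries = max_entries
        self._store = {}  # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() >= expiry:
            # Entry expired: drop it and report a miss.
            del self._store[key]
            return None
        return value

    def put(self, key, value):
        if self.ttl <= 0 or self.max_entries <= 0:
            return  # caching disabled, matching ttl=0 / max_entries=0 above
        if len(self._store) >= self.max_entries and key not in self._store:
            # Evict the entry closest to expiry to stay within the cap.
            oldest = min(self._store, key=lambda k: self._store[k][0])
            del self._store[oldest]
        self._store[key] = (time.monotonic() + self.ttl, value)
```

Setting both knobs to zero turns `put` into a no-op, which mirrors the "disable caching" configuration shown above.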
## Logging

This enables request logging and summary output.

```python
import logging

from fyron import FHIRRestClient

logging.basicConfig(level=logging.INFO)
client = FHIRRestClient(log_requests=True)
```
## CLI

This runs a DICOM download from the command line.

```bash
fyron-dicom --dicom-web-url https://pacs.example.org/dicomweb \
  --study-uid 1.2.3.4.5 --series-uid 1.2.3.4.5.6 --output-dir dicom_out
```

## Example Scripts

These examples connect to public or configured services for quick validation.

```bash
python examples/hapi_rest_example.py
python examples/sql_examples.py
```
## Contributing

See CONTRIBUTING.md.

## License

MIT