StruAI Drawing Analysis SDK - AI-powered construction drawing analysis

StruAI SDK (Python + JavaScript)

Official SDKs for the StruAI Drawing Analysis API.

  • Python package: struai (PyPI)
  • JavaScript package: struai (npm, source in js/)

Install

pip install struai
npm install struai

Environment

export STRUAI_API_KEY=your_api_key
# Optional: defaults to https://api.stru.ai (SDK appends /v1 automatically)
export STRUAI_BASE_URL=https://api.stru.ai
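A minimal sketch of how a client could consume these variables (the precedence shown here, explicit arguments over environment variables with the default base URL last, is an assumption, not documented SDK behavior):

```python
import os

def resolve_config(api_key=None, base_url=None):
    """Resolve client settings, preferring explicit arguments over environment variables."""
    api_key = api_key or os.environ.get("STRUAI_API_KEY")
    if not api_key:
        raise ValueError("Set STRUAI_API_KEY or pass api_key explicitly")
    base_url = base_url or os.environ.get("STRUAI_BASE_URL") or "https://api.stru.ai"
    # Keep the base URL version-free; the SDK appends /v1 itself.
    return api_key, base_url.rstrip("/")
```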

Python Quick Start

import os
from struai import StruAI

client = StruAI(api_key=os.environ["STRUAI_API_KEY"])

# Tier 1: drawings
result = client.drawings.analyze("structural.pdf", page=12)
print(result.id, result.processing_ms)

# Tier 2: projects + docquery
project = client.projects.create(name="Building A", description="Structural set")
job = project.sheets.add(page=12, file_hash=client.drawings.compute_file_hash("structural.pdf"))
sheet = job.wait(timeout=180)

hits = project.docquery.search("beam connection", limit=5)
print(len(hits.hits))

# Tier 3: reviews
review = client.reviews.create(
    file_hash=client.drawings.compute_file_hash("structural.pdf"),
    pages="12,13",
    project_ids=[project.id],
    custom_instructions="Focus on cross-sheet coordination.",
)
final_review = review.wait(timeout=900, poll_interval=5)
print(final_review.status, len(review.issues()))

# Direct upload path also works
upload_review = client.reviews.create(
    file="structural.pdf",
    pages=12,
)
print(upload_review.id)

Real Workflow Examples

Python examples (/examples):

# Drawings-only flow (hash, cache probe, analyze)
python3 examples/test_prod_page12.py --pdf /absolute/path/to/structural.pdf --page 12

# Full projects + docquery workflow
python3 examples/test_prod_page12_full.py --pdf /absolute/path/to/structural.pdf --page 12

# Full workflow + crop demo
python3 examples/test_prod_page12_full.py \
  --pdf /absolute/path/to/structural.pdf --page 12 \
  --crop-output /absolute/path/to/crop.png

# Optional cleanup after full workflow
python3 examples/test_prod_page12_full.py --pdf /absolute/path/to/structural.pdf --cleanup

# Async workflow
python3 examples/async_projects_workflow.py --pdf /absolute/path/to/structural.pdf --page 12

# Review workflow (start + refresh)
python3 examples/review_workflow.py --file-hash your_file_hash --pages 13

# Review workflow (wait for terminal status)
python3 examples/review_workflow.py --file-hash your_file_hash --pages 13 --wait

Page-12 cookbook with all 10 operations (including cypher and crop):

  • examples/PAGE12_COOKBOOK.md
  • examples/REVIEWS_QUICKSTART.md

JavaScript examples (/js/scripts):

cd js
npm install
npm run build

# Drawings-only flow
STRUAI_API_KEY=... STRUAI_BASE_URL=https://api.stru.ai \
STRUAI_PDF=/absolute/path/to/structural.pdf STRUAI_PAGE=12 \
node scripts/drawings_quickstart.mjs

# Full projects + docquery workflow
STRUAI_API_KEY=... STRUAI_BASE_URL=https://api.stru.ai \
STRUAI_PDF=/absolute/path/to/structural.pdf STRUAI_PAGE=12 \
node scripts/projects_workflow.mjs

# Full workflow + crop demo
STRUAI_API_KEY=... STRUAI_BASE_URL=https://api.stru.ai \
STRUAI_PDF=/absolute/path/to/structural.pdf STRUAI_PAGE=12 \
STRUAI_CROP_OUTPUT=/absolute/path/to/crop.png \
node scripts/projects_workflow.mjs

Python API Reference

Async API (AsyncStruAI) mirrors the same resource shape and method names; use await.

Client

  • StruAI(api_key=None, base_url="https://api.stru.ai", timeout=60, max_retries=2)
  • AsyncStruAI(api_key=None, base_url="https://api.stru.ai", timeout=60, max_retries=2)
  • client.drawings
  • client.projects
  • client.reviews

Drawings (client.drawings)

  • analyze(file=None, page=1, file_hash=None) -> DrawingResult
  • check_cache(file_hash) -> DrawingCacheStatus
  • compute_file_hash(file) -> str
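The exact algorithm behind compute_file_hash is not specified here; a streaming sketch assuming a SHA-256 digest over the raw file bytes:

```python
import hashlib

def compute_file_hash(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in streaming chunks (SHA-256 is an assumption; the SDK's algorithm may differ)."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        # Read in 1 MiB chunks so large PDFs never load fully into memory.
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```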

Projects Top-Level (client.projects)

  • create(name, description=None) -> ProjectInstance
  • list() -> list[Project]
  • open(project_id, name=None, description=None) -> ProjectInstance
  • delete(project_id) -> ProjectDeleteResult

Reviews Top-Level (client.reviews)

  • create(file=None, pages=1|"1,3,5-7"|"all", file_hash=None, project_ids=None, custom_instructions=None) -> ReviewInstance
    • Pass exactly one of file or file_hash.
    • Raises ValueError if both are missing or both are provided.
  • list(status=None) -> list[Review]
  • get(review_id) -> ReviewInstance
  • open(review_id) -> ReviewInstance
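The exactly-one contract on create() can be sketched as a standalone check (validate_source is a hypothetical helper, not part of the SDK):

```python
def validate_source(file=None, file_hash=None):
    """Enforce the documented contract: pass exactly one of file or file_hash."""
    if (file is None) == (file_hash is None):
        # Covers both "neither given" and "both given".
        raise ValueError("Pass exactly one of file or file_hash")
    return file if file is not None else file_hash
```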

Project Instance (project)

Properties:

  • id, name, description, data
  • sheets, docquery

Methods:

  • delete() -> ProjectDeleteResult

Review Instance (review)

Properties:

  • id, data

Methods:

  • refresh() -> Review
  • status() -> Review
  • wait(timeout=900, poll_interval=5) -> Review
    • Raises ReviewFailedError if the review reaches failed.
    • Raises TimeoutError if the timeout elapses first.
  • questions() -> list[ReviewQuestion]
  • issues() -> list[ReviewIssue]
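The documented wait() semantics amount to a poll loop; a sketch with a stand-in fetch_status callable in place of review.status():

```python
import time

class ReviewFailedError(RuntimeError):
    """Raised when a review reaches the failed status."""

def wait_for_review(fetch_status, timeout=900, poll_interval=5):
    """Poll fetch_status() until a terminal status, mirroring wait()'s documented behavior.

    fetch_status is a stand-in for review.status() that returns a status string.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status()
        if status == "failed":
            raise ReviewFailedError("review failed")
        if status in ("completed", "completed_partial"):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"review still {status} after {timeout}s")
        time.sleep(poll_interval)
```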

Sheets (project.sheets)

  • add(file=None, page=1|"1,3,5-7"|"all", file_hash=None, source_description=None, on_sheet_exists=None, community_update_mode=None, semantic_index_update_mode=None) -> Job | JobBatch
  • delete(sheet_id) -> SheetDeleteResult
  • job(job_id, page=None) -> Job
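The page spec accepted by add() (and by reviews.create()) can be expanded with a small parser; parse_pages is a hypothetical helper illustrating the 1 | "1,3,5-7" | "all" forms, not SDK code:

```python
def parse_pages(pages, total_pages):
    """Expand a pages spec like 12, "1,3,5-7", or "all" into a sorted page list."""
    if pages == "all":
        return list(range(1, total_pages + 1))
    if isinstance(pages, int):
        return [pages]
    result = set()
    for part in str(pages).split(","):
        part = part.strip()
        if "-" in part:
            # Ranges like "5-7" are inclusive on both ends.
            lo, hi = part.split("-")
            result.update(range(int(lo), int(hi) + 1))
        else:
            result.add(int(part))
    return sorted(result)
```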

DocQuery (project.docquery)

  • node_get(uuid) -> DocQueryNodeGetResult
  • sheet_entities(sheet_id, entity_type=None, limit=200) -> DocQuerySheetEntitiesResult
  • search(query, index="entity_search", limit=20) -> DocQuerySearchResult
  • neighbors(uuid, mode="both", direction="both", relationship_type=None, radius=200.0, limit=50) -> DocQueryNeighborsResult
  • cypher(query, params=None, max_rows=500) -> DocQueryCypherResult
  • sheet_summary(sheet_id, orphan_limit=10) -> DocQuerySheetSummaryResult
  • sheet_list() -> DocQuerySheetListResult
  • reference_resolve(uuid, limit=100) -> DocQueryReferenceResolveResult
  • crop(output, uuid=None, bbox=None, page_hash=None) -> DocQueryCropResult

CLI parity: project-list maps to client.projects.list(), and the remaining nine commands map to project.docquery.* methods, giving full 10-command parity.

Python cypher + crop example:

project = client.projects.open("proj_86c0f02e")
rows = project.docquery.cypher(
    "MATCH (n:Entity {project_id:$project_id}) RETURN count(n) AS total",
    params={},
    max_rows=1,
)

crop = project.docquery.crop(
    uuid="entity-uuid-here",
    output="/absolute/path/to/crop.png",
)
print(rows.records[0]["total"], crop.output_path, crop.bytes_written)

Jobs

Job (single-page ingest result):

  • id, page
  • status() -> JobStatus
  • wait(timeout=120, poll_interval=2) -> SheetResult

JobBatch (multi-page ingest result):

  • jobs, ids
  • status_all() -> list[JobStatus]
  • wait_all(timeout_per_job=120, poll_interval=2) -> list[SheetResult]
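Note that timeout_per_job bounds each job individually. Assuming wait_all() awaits jobs sequentially (an assumption about the implementation), total wall time can approach len(jobs) * timeout_per_job:

```python
def wait_all(jobs, timeout_per_job=120, poll_interval=2):
    """Wait for each job in order; each job gets its own timeout budget."""
    return [job.wait(timeout=timeout_per_job, poll_interval=poll_interval) for job in jobs]
```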

Reviews

Review:

  • review_id, status, total_pages, pages, progress
  • is_running, is_complete, is_partial, is_failed, is_terminal
  • Status values: running, completed, completed_partial, failed
  • pages and total_pages are populated by POST /v1/reviews; later refresh() / get() calls reflect the slimmer GET /v1/reviews/{review_id} payload and may omit them.
  • progress.specialist.active contains live in-progress specialist rows with question_id, agent, turns_used, max_turns, and updated_at when the server exposes them.
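A sketch of how the boolean predicates map onto the four status values (the mapping is inferred from the property names; the SDK's own logic may differ):

```python
def review_flags(status: str) -> dict:
    """Map a review status string to the documented boolean predicates."""
    return {
        "is_running": status == "running",
        "is_complete": status == "completed",
        "is_partial": status == "completed_partial",
        "is_failed": status == "failed",
        # All statuses except "running" are terminal.
        "is_terminal": status in ("completed", "completed_partial", "failed"),
    }
```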

ReviewQuestion:

  • Includes raw_model_output, which is preserved as nested JSON when present.

HTTP Endpoints Covered

Tier 1:

  • POST /v1/drawings
  • GET /v1/drawings/cache/{file_hash}

Tier 2:

  • POST /v1/projects
  • GET /v1/projects
  • DELETE /v1/projects/{project_id}
  • POST /v1/projects/{project_id}/sheets
  • DELETE /v1/projects/{project_id}/sheets/{sheet_id}
  • GET /v1/projects/{project_id}/jobs/{job_id}
  • GET /v1/projects/{project_id}/node-get
  • GET /v1/projects/{project_id}/sheet-entities
  • GET /v1/projects/{project_id}/search
  • GET /v1/projects/{project_id}/neighbors
  • POST /v1/projects/{project_id}/cypher
  • POST /v1/projects/{project_id}/crop
  • POST /v1/reviews
  • GET /v1/reviews
  • GET /v1/reviews/{review_id}
  • GET /v1/reviews/{review_id}/questions
  • GET /v1/reviews/{review_id}/issues

JavaScript Reference

See js/README.md for complete JS method signatures and usage patterns.

License

MIT

Download files


Source Distribution

struai-2.4.0.tar.gz (81.5 kB)


Built Distribution


struai-2.4.0-py3-none-any.whl (27.1 kB)


File details

Details for the file struai-2.4.0.tar.gz.

File metadata

  • File: struai-2.4.0.tar.gz
  • Size: 81.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing: yes
  • Uploaded via: twine/6.1.0 on CPython/3.13.7

File hashes

Hashes for struai-2.4.0.tar.gz:

  • SHA256: c5bd6700815521a056e8739eb675975b5aff03870c3a13887ba8ffbaf003847e
  • MD5: ced95597cfc6f73e36e3f370d9a3317d
  • BLAKE2b-256: 44b03d3589aca3f0cba46322a1b26caf84567dfc7c65e9ff021fd8aa7f80d62f


Provenance

The following attestation bundles were made for struai-2.4.0.tar.gz:

Publisher: release.yml on bhoshaga/struai

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file struai-2.4.0-py3-none-any.whl.

File metadata

  • File: struai-2.4.0-py3-none-any.whl
  • Size: 27.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing: yes
  • Uploaded via: twine/6.1.0 on CPython/3.13.7

File hashes

Hashes for struai-2.4.0-py3-none-any.whl:

  • SHA256: 1c521505894f0323f93b96ce3e3c7e25efe26ef862653900cc086f1acfcf8738
  • MD5: 6ec4b347a8b69439542d3f33cd250f10
  • BLAKE2b-256: 2436d38d1a7e5b3aea2028f7a3927b09cb4764a5c0863385ad16a8d22cce9c51


Provenance

The following attestation bundles were made for struai-2.4.0-py3-none-any.whl:

Publisher: release.yml on bhoshaga/struai

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
