
QTI 3.0 item authoring and upload SDK


QTI 3.0 Conversion SDK

Build assessment items using Python data models. The SDK converts them into valid, renderable QTI 3.0 XML — handling identifiers, response processing, scoring, and content transformation internally.

Installation

pip install incept-qti-sdk

Or with uv:

uv add incept-qti-sdk

For development (with test/lint tools):

git clone https://github.com/trilogy-group/incept_qti_converter.git
cd incept_qti_converter
pip install -e ".[dev]"

Quickstart

from qti_sdk import (
    QuestionItem, QtiBuilder, ChoiceConfig, Choice,
    SummativeBehavior, ItemPackaging,
)

item = QuestionItem(
    question="What is the capital of France?",
    behavior=SummativeBehavior(),
    packaging=ItemPackaging(difficulty=0.3),
    interactions=[
        ChoiceConfig(choices=[
            Choice(id="A", content="London"),
            Choice(id="B", content="Paris", correct=True),
            Choice(id="C", content="Berlin"),
        ]),
    ],
)

result = QtiBuilder().build(item)
print(result.item_xml)

To upload items to TimeBack, copy .env.example to .env, fill in your credentials, then:

import asyncio
from qti_sdk import TimeBackConfig, upload_qti_package, get_job_status

config = TimeBackConfig.from_env()
upload = asyncio.run(upload_qti_package([result], config))
print(f"Job: {upload.job_id}")

CLI Usage

The SDK ships a qti-sdk command (also available as python -m qti_sdk) for use from any language or shell script. Input is QuestionItem JSON; output is structured JSON on stdout.

Build items into QTI XML

Takes QuestionItem JSON (a single object or an array) and writes QTI 3.0 XML files to an output directory. A manifest JSON is emitted to stdout.

# From a file
qti-sdk build -i items.json -o ./qti_output

# From stdin (e.g. piped from a TypeScript generator)
cat items.json | qti-sdk build -o ./qti_output

Output directory structure:

qti_output/
  items/
    choice_abc123.xml
    text_entry_def456.xml
  stimuli/
    stim_aabbccdd.xml

Manifest JSON on stdout:

{
  "output_dir": "./qti_output",
  "items": [
    {
      "item_id": "choice_abc123",
      "item_xml": "items/choice_abc123.xml",
      "stimulus_xml": "stimuli/stim_aabbccdd.xml",
      "stimulus_id": "stim_aabbccdd",
      "metadata": { "interaction_type": "choiceInteraction" }
    }
  ]
}

Upload items to TimeBack

Builds items, packages into a QTI ZIP, and uploads to TimeBack in one step. Credentials are read from environment variables (or a .env file).

# Set credentials (or put them in .env)
export TIMEBACK_PLATFORM_API_URL="https://api.timeback.example.com"
export TIMEBACK_APPLICATION_CLIENT_ID="your-client-id"
export TIMEBACK_APPLICATION_CLIENT_SECRET="your-client-secret"

# Upload and return immediately with a job ID
qti-sdk upload -i items.json

# Upload and wait for processing to complete
qti-sdk upload -i items.json --poll

Output (without --poll):

{
  "job_id": "abc-123-def",
  "warnings": []
}

Output (with --poll):

{
  "job_id": "abc-123-def",
  "warnings": [],
  "status": "COMPLETED",
  "last_updated": "2026-03-01T12:00:00Z",
  "item_statuses": [
    { "item_id": "uuid-1", "status": "COMPLETED", "last_updated": "...", "message": null }
  ]
}

Additional flags:

  • --media-root DIR: resolve local asset paths
  • --save-zip PATH: save the assembled ZIP for debugging
  • --stimulus-mode inline: inline stimuli instead of separate files
  • --emit-end-attempt: include qti-end-attempt-interaction in adaptive items for renderers without a wrapper Submit button

Check job status

qti-sdk status JOB_ID

# Poll until the job finishes
qti-sdk status JOB_ID --poll

Dump JSON Schema

# All models
qti-sdk schema

# Specific model
qti-sdk schema --model QuestionItem

Curriculum Standards

Every uploaded item requires a curriculum key and at least one curriculum_standards entry. The SDK resolves human-readable standard labels (e.g. CCSS.MATH.CONTENT.3.OA.A.1) to CASE UUIDs automatically at upload time.

Finding the right curriculum key

The SDK ships with pre-registered curriculum courses. Key naming follows the pattern {DOCUMENT}_{SUBJECT}_{GRADE}:

Key | Document | Course
COMMON_CORE_STANDARD_MATH_3RD_GRADE | Common Core Standard | 3rd Grade (Math)
COMMON_CORE_STANDARD_READING_5TH_GRADE | Common Core Standard | 5th Grade (Reading)
COMMON_CORE_STANDARD_LANGUAGE_7TH_GRADE | Common Core Standard | 7th Grade (Language)
COMMON_CORE_STANDARD_MATH_ALGEBRA_1 | Common Core Standard | Algebra 1

Python consumers can browse all keys via CurriculumKey enum autocomplete, or list them at runtime:

from qti_sdk import list_curricula

for key, info in list_curricula().items():
    print(f"{key}: {info.document_title} / {info.course_title}")

CLI / TypeScript consumers can list all available keys with:

qti-sdk list-curricula

This outputs JSON mapping each key to its document and course title:

{
  "COMMON_CORE_STANDARD_MATH_3RD_GRADE": {
    "document_title": "Common Core Standard",
    "course_title": "3rd Grade"
  },
  ...
}

Python usage

Use the CurriculumKey enum for IDE autocomplete and typo protection:

from qti_sdk import CurriculumKey, ItemPackaging, StandardAlignment

packaging = ItemPackaging(
    difficulty=0.5,
    curriculum=CurriculumKey.COMMON_CORE_STANDARD_MATH_3RD_GRADE,
    curriculum_standards=[
        StandardAlignment(label="CCSS.MATH.CONTENT.3.OA.A.1"),
    ],
)

CLI / JSON usage

Pass the curriculum key as a plain string in the packaging object:

{
  "packaging": {
    "difficulty": 0.5,
    "curriculum": "COMMON_CORE_STANDARD_MATH_3RD_GRADE",
    "curriculum_standards": [
      {"label": "CCSS.MATH.CONTENT.3.OA.A.1"}
    ]
  }
}

How standard resolution works

  1. curriculum tells the SDK which course tree to search in TimeBack's CASE API.
  2. curriculum_standards[].label values are matched against standard codes in that tree.
  3. The SDK resolves each label to a CASE UUID and includes it in the uploaded package.
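The resolution step can be pictured with a small stdlib-only sketch. The course tree below is fabricated sample data standing in for TimeBack's CASE API, and resolve_labels is a hypothetical helper, not an SDK function:

```python
# Illustrative sketch of label-to-UUID resolution. The mapping values here
# are made up; the real SDK loads the course tree from the CASE API.
COURSE_TREE = {
    "CCSS.MATH.CONTENT.3.OA.A.1": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "CCSS.MATH.CONTENT.3.OA.A.2": "6f1b2c3d-0000-4000-8000-000000000002",
}

def resolve_labels(labels):
    """Map human-readable standard labels to CASE UUIDs, collecting misses."""
    resolved, missing = {}, []
    for label in labels:
        if label in COURSE_TREE:
            resolved[label] = COURSE_TREE[label]
        else:
            missing.append(label)
    if missing:
        # Unresolvable labels fail the whole upload rather than silently drop.
        raise LookupError(f"unresolved standard labels: {missing}")
    return resolved
```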

Validation

The SDK validates all items before making any network calls:

  • Items missing curriculum_standards are rejected (TimeBack requires at least one).
  • Items with standard labels but no curriculum key are rejected.
  • Items with an invalid curriculum key are rejected.
  • All errors are collected and reported per-item in a single ValidationError, so you can fix everything in one pass.
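The collect-then-raise behavior can be sketched like this. This is a simplified stand-in for the SDK's validator, with an abbreviated curriculum registry and an invented error shape:

```python
class ValidationError(Exception):
    """Carries every per-item error found in one validation pass."""
    def __init__(self, errors):
        super().__init__(f"{len(errors)} validation error(s)")
        self.errors = errors

VALID_CURRICULA = {"COMMON_CORE_STANDARD_MATH_3RD_GRADE"}  # abbreviated

def validate_for_upload(items):
    errors = []
    for i, item in enumerate(items):
        pkg = item.get("packaging", {})
        if not pkg.get("curriculum_standards"):
            errors.append((i, "at least one curriculum_standards entry required"))
        elif not pkg.get("curriculum"):
            errors.append((i, "standard labels given without a curriculum key"))
        elif pkg["curriculum"] not in VALID_CURRICULA:
            errors.append((i, f"unknown curriculum key: {pkg['curriculum']}"))
    if errors:
        raise ValidationError(errors)  # all items reported in one pass
```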

Logging and verbosity

Progress logs go to stderr (not stdout), so they never interfere with structured JSON output.

qti-sdk upload -i items.json              # INFO-level logs (default)
qti-sdk upload -i items.json --verbose    # DEBUG-level logs
qti-sdk upload -i items.json --quiet      # WARNING only

Error handling

On failure, the CLI writes a structured JSON error to stdout and exits with a non-zero code:

Exit code | Meaning
0 | Success
1 | Validation error (bad JSON, missing env vars)
2 | Build error (SDK internal)
3 | Upload / network error
4 | Poll timeout

Example error payload:

{
  "error": "ValidationError",
  "message": "4 validation error(s)",
  "details": [
    { "loc": ["question"], "msg": "Field required", "type": "missing" }
  ]
}

Input format

The CLI accepts a QuestionItem JSON — either a single object or an array. The QuestionItem schema is available via qti-sdk schema --model QuestionItem.

A minimal example:

{
  "question": "What is 2 + 2?",
  "behavior": { "type": "summative" },
  "interactions": [
    {
      "type": "choice",
      "choices": [
        { "id": "A", "content": "3" },
        { "id": "B", "content": "4", "correct": true },
        { "id": "C", "content": "5" }
      ]
    }
  ],
  "packaging": { "difficulty": 0.2 }
}

Examples

The examples/ directory contains runnable scripts for every interaction type:

Example | What it demonstrates
example_mcq.py | Multiple-choice (summative, adaptive, formative)
example_text_entry.py | Fill-in-the-blank (single and multi-blank)
example_extended_text.py | Essay / free-response items
example_match.py | Matching and categorization
example_media.py | Media interaction (video/audio, play count tracking)
example_order.py | Sequencing / ordering (temporarily disabled; OrderConfig raises ValidationError at construction)
example_pci.py | Portable Custom Interactions (graphs, number lines)
example_composite.py | Multi-part items (mixed interaction types)
example_scoring.py | Weighted scoring, partial credit, score expressions
example_template.py | Randomized item variants with template variables
example_upload.py | Build + upload + poll job status

Error Handling

All SDK exceptions inherit from QtiSdkError:

from qti_sdk import QtiSdkError, ValidationError, BuildError, UploadError

try:
    result = builder.build(item)
except ValidationError:
    ...  # invalid model input
except BuildError:
    ...  # XML generation failure
except QtiSdkError:
    ...  # catch-all for any SDK error

Logging

CLI users: Logs go to stderr automatically. Use --verbose for debug output, --quiet for warnings only. See CLI Usage above.

Python users: The SDK uses Python's standard logging module under the qti_sdk namespace:

import logging
logging.basicConfig(level=logging.INFO)
logging.getLogger("qti_sdk").setLevel(logging.DEBUG)

Logger | Levels used | What it logs
qti_sdk.upload.auth | DEBUG | Granted OAuth scopes
qti_sdk.upload.uploader | INFO, ERROR, WARNING | Progress milestones, upload failures, asset download issues
qti_sdk.upload.case_resolver | INFO | Course tree loading, code-to-UUID mapping counts

Architecture & Data Model

Goal

Provide a mandated set of input data models that generator authors use to produce assessment items. The SDK converts these models into valid, renderable QTI 3.0 XML — handling identifier correlation, response processing, and content transformation internally.

Scope: items, stimulus, and companion materials (no test/section assembly). Adaptive items supported. Composite items (multiple interactions in one item) supported. Template processing (randomized item variants) supported. Presentation attributes deferred. Inter-resource associations (stimulus refs, dependencies) are handled automatically based on field presence.

  • Quick-reference for generator authors: CONVENTIONS.md
  • Full field-by-field schema mapping, composite semantics, scoring chain, template processing, and validation rules: QTI_SCHEMA_MAPPING.md
  • Accessibility support status and rollout policy: docs/accessibility_support.md
  • Custom grading API contract (for api_scoring): docs/custom_grading_api.md


Core Design Decision

Unified QuestionItem container with an explicit behavior discriminator.

All items are constructed as a single QuestionItem that carries content + assessment semantics. A behavior field declares how the item should behave — which response processing pattern, feedback reveal strategy, and scoring approach to use. An interactions list holds one or more typed interaction configs.

Generator authors:

  1. Create a QuestionItem with a shared question stem
  2. Pick a behavior (how the item scores and gives feedback)
  3. Add one or more interaction configs (what kind of question — ChoiceConfig, TextEntryConfig, etc.)
  4. Fill in optional fields (stimulus, companion materials, accessibility catalog, template, scoring dimensions, feedback)

They never write QTI identifiers, response processing logic, or outcome declarations.


Interaction Types (7 configs)

Interaction-specific fields live on the config. Shared fields (question, stimulus, companion_materials, accessibility_catalog, behavior, feedback, template, scoring_dimensions) live on QuestionItem. Multiple interactions in one item produce a composite item automatically.

Config | Covers | Key interaction-specific fields
ChoiceConfig | MCQ, multi-select | choices[], max_selections, shuffle, score_map, score_expression
TextEntryConfig | Fill-in-the-blank (single/multi) | answers, prompt (with <blank> placeholders), case_sensitive, tolerance, correct_expression
ExtendedTextConfig | Essay, SAQ, FRQ | expected_length, scoring_mode
MatchConfig | Matching pairs, categorization | source_set[], target_set[], correct_mapping[], shuffle, score_map
MediaConfig | Video/audio with play count tracking | media_type, sources[], autostart, min_plays, max_plays, loop
OrderConfig | Sequencing (temporarily disabled; raises ValidationError) | items[], correct_order[], shuffle, partial_credit
PCIConfig | PCI-driven (graph, number line, etc.) | interaction_type, data_attributes, properties, interaction_markup, scoring (match-correct or external)

All scorable configs also support score_expression (a typed DSL) and target_dimension (multi-dimensional scoring). Each config has optional prompt and label fields for composite items. PCIConfig does not generate standard QTI interaction elements, so unmodeled standard interactions (gap-match, inline-choice, etc.) will need first-class configs if they become necessary. PCI modules are resolved via PCI_MODULE_REGISTRY (4 registered platform modules) or an explicit module + data_item_path_uri; unresolvable PCIs are rejected at construction time.

For full field definitions: QTI_SCHEMA_MAPPING.md §1. For usage rules per interaction type: CONVENTIONS.md.


Behavior Types (assessment patterns)

Each behavior is a parameterized type (discriminated union) that maps to a pre-built response processing template inside the SDK.

Behavior | Use case | Parameters
SummativeBehavior | High-stakes test, one attempt | None. No feedback. SCORE only.
FeedbackEnabled | Practice, formative, adaptive | adaptive (false = non-adaptive, true = adaptive), policy (FeedbackPolicy). Feedback content via Feedback entries on the item/interaction.
ExternalGraded | Essay, FRQ, complex rubric | None. No automated scoring.
api_scoring | External API-based grading | ApiScoringConfig(endpoint, mastery_value?, extra_fields?). See docs/custom_grading_api.md for the API contract.

Feedback content (hints, explanations, solution steps, learning content, answer reveals) lives on Feedback entries attached at three levels: sub-items (Choice, OrderItem, MatchItem), interaction configs, and the QuestionItem. The behavior's FeedbackPolicy controls default reveal timing.

The SDK maps Feedback entries to QTI elements automatically: choice-level → qti-feedback-inline, interaction-level → qti-feedback-block, item-level → qti-modal-feedback or qti-feedback-block.
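That three-level mapping can be written down as a tiny dispatch function. This is an illustration of the rule stated above, not SDK API; the level names and the modal flag are assumptions:

```python
def qti_element_for_feedback(level, modal=False):
    """Pick the QTI feedback element for a feedback entry's attachment level.

    Mirrors the documented mapping: choice-level feedback renders inline,
    interaction-level renders as a block, item-level renders modal or block.
    """
    if level == "choice":
        return "qti-feedback-inline"
    if level == "interaction":
        return "qti-feedback-block"
    if level == "item":
        return "qti-modal-feedback" if modal else "qti-feedback-block"
    raise ValueError(f"unknown feedback level: {level}")
```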


Shared Building Blocks (optional fields on QuestionItem)

Block | Purpose
Stimulus | Text, image, audio, or video passage. Union type discriminated by type field. Accepts a single stimulus or a list (multi-document).
CompanionMaterials | Tools available during the item: calculator?: basic|standard|scientific, ruler?: bool, protractor?: bool
LearningContent | Structured remediation material (text/video). Used as content in Feedback entries.
Feedback | First-class feedback entity: type (explanation/hint/solution_steps/learning_content/answer_reveal), content (markdown or LearningContent), optional show_when (ShowCondition).
FeedbackPolicy | Default show conditions per feedback type. Lives on the FeedbackEnabled behavior.
ScoringDimension | Named scoring dimension with max_score and scoring_method (human/machine). Interactions target dimensions via target_dimension.
TemplateConfig | Randomized item variants: variables[] (random integers/floats), constraints[], computed_values. correct_expression on TextEntryConfig computes answers at delivery time.
ContextVariable | Advanced escape hatch for raw qti-context-declaration elements. Most generators will never use this.
AccessibilityCatalog | Per-element accessibility alternatives: a list of {target_path, support, content} where target_path points to model fields (for example question, interactions[0].prompt, interactions[0].choices[B]), support is a QTI card type, and content is inline HTML/SSML or a file reference.

Presence-Driven Associations

The SDK auto-generates QTI associations and separate resource files based on which optional fields are populated. Generator authors never deal with identifiers, hrefs, or cross-file references.

Stimulus → separate file or inline (mode-driven)

When stimulus is present on the QuestionItem, the SDK generates stimulus content in one of two modes, selected at build time:

Mode 1: Separate files (default)

  1. Generates a separate qti-assessment-stimulus XML file containing the stimulus body (text/image/audio/video processed through the content pipeline)
  2. Adds a qti-assessment-stimulus-ref element on the item with identifier and href pointing to the stimulus file
  3. Identifier/href correlation is guaranteed by the SDK — a single internal ID produces the stimulus filename, the ref's href, and both identifier values. Generator authors never see or manage these.

Mode 2: Inline

  1. Stimulus content is injected directly into <qti-item-body> before the interaction element
  2. No separate stimulus file generated, no <qti-assessment-stimulus-ref> element
  3. Manifest has no stimulus resources or dependencies

Companion Materials → qti-companion-materials-info

When companion_materials is present on the QuestionItem, the SDK generates qti-companion-materials-info with the appropriate children:

  • companion_materials.calculator = "scientific" → <qti-calculator> with type attribute
  • companion_materials.ruler = true → <qti-rule>
  • companion_materials.protractor = true → <qti-protractor>

No identifier correlation needed — this is a self-contained element on the item.

Accessibility Catalog → qti-catalog-info

When accessibility_catalog is present on the QuestionItem, the SDK generates qti-catalog-info containing one qti-catalog per target element, each with qti-card entries for the supplied support types:

  • {support: "spoken", content: "<speak>...</speak>"} → <qti-card support="spoken"> with SSML content
  • {support: "glossary-on-screen", content: "..."} → <qti-card support="glossary-on-screen"> with HTML content
  • {support: "sign-language", content: "https://...mp4"} → <qti-card support="sign-language"> with <qti-file-href>
  • {support: "long-description", content: "..."} → <qti-card support="long-description"> with HTML content

The SDK wires data-catalog-idref attributes on corresponding item-body elements to link each catalog entry to the content it describes. Generator authors supply target_path values (for example question, interactions[0].prompt, interactions[0].choices[B], interactions[0].label) and the builder resolves them to the correct XML element.

Only cards that are supplied get generated. An item with just spoken entries gets a catalog with only spoken cards — no empty placeholder cards for other support types.
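The "only supplied cards are generated" rule can be sketched with the standard library. Element and attribute names follow QTI 3; build_catalog itself is a hypothetical helper, and inline content is stored as plain text here where the real builder would embed parsed nodes:

```python
import xml.etree.ElementTree as ET

def build_catalog(catalog_id, entries):
    """Build a qti-catalog with one qti-card per supplied support type."""
    catalog = ET.Element("qti-catalog", {"id": catalog_id})
    for entry in entries:
        card = ET.SubElement(catalog, "qti-card", {"support": entry["support"]})
        content = entry["content"]
        if content.startswith(("http://", "https://")):
            # File-based content (e.g. sign-language video) becomes a file ref.
            ET.SubElement(card, "qti-file-href").text = content
        else:
            # Inline HTML/SSML content goes directly in the card.
            card.text = content
    return catalog

# An item with only the supplied entries gets only those cards.
catalog = build_catalog("cat-question", [
    {"support": "spoken", "content": "<speak>What is 2 + 2?</speak>"},
    {"support": "sign-language", "content": "https://example.com/q.mp4"},
])
```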

Stimulus mode selection

The mode is selected via the builder API:

# At build time
builder = QtiBuilder()
result = builder.build(item, stimulus_mode="separate")   # default
result = builder.build(item, stimulus_mode="inline")

# Or post-processing transform (backwards-compatible)
from qti_sdk.builders.transforms import inline_stimulus, inline_all_stimuli
result = inline_stimulus(result)          # single BuildResult
results = inline_all_stimuli(results)     # list of BuildResults

Items without a stimulus pass through unchanged regardless of mode.

End-attempt interaction (adaptive)

By default, adaptive items do not emit qti-end-attempt-interaction or the RESPONSE_END_ATTEMPT response declaration. This avoids a duplicate Submit button when the renderer (e.g. qti-3-player) provides its own wrapper Submit. To include them for renderers that need an in-body end-attempt control:

# CLI
qti-sdk build -i items.json -o out/ --emit-end-attempt

# Python API
result = builder.build(item, emit_end_attempt=True)

Manifest Dependencies (auto-wired at package time)

When exporting a batch of items, the SDK generates imsmanifest.xml with:

  • Each item as a resource (type imsqti_item_xmlv3p0)
  • Each stimulus as a resource (type imsqti_stimulus_xmlv3p0)
  • dependency elements on items that reference stimuli, wired by matching identifiers
  • qtiMetadata per resource — auto-derived from model type and behavior (interaction type, feedback type, scoring mode)

Generator authors call a single package(items) method. The manifest is fully derived.
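The dependency wiring can be sketched as follows. Plain dicts stand in for the real build results, and manifest_resources is illustrative, not the SDK's package() API; the resource type strings are the ones listed above:

```python
def manifest_resources(results):
    """Derive manifest resources and dependencies from build results."""
    resources, seen_stimuli = [], set()
    for r in results:
        item = {"identifier": r["item_id"], "type": "imsqti_item_xmlv3p0",
                "href": f"items/{r['item_id']}.xml", "dependencies": []}
        stim = r.get("stimulus_id")
        if stim:
            # Item depends on its stimulus; identifiers already match
            # because the builder generated both from one internal ID.
            item["dependencies"].append(stim)
            if stim not in seen_stimuli:  # a shared stimulus appears once
                seen_stimuli.add(stim)
                resources.append({"identifier": stim,
                                  "type": "imsqti_stimulus_xmlv3p0",
                                  "href": f"stimuli/{stim}.xml",
                                  "dependencies": []})
        resources.append(item)
    return resources
```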


Content Processing Pipeline (reusable library)

All text fields that contain student-visible content go through:

  1. LaTeX → MathML: $...$ and $$...$$ delimiters converted via MathJax. Existing MathML and SVG protected.
  2. Template variable substitution: {{var}} tokens are replaced with <qti-printed-variable> elements (when TemplateConfig is present). Tokens are protected through the Markdown stage via placeholders.
  3. Markdown → HTML: Markdown parsed by mistune with strikethrough + table plugins.
  4. HTML → XML: void elements self-closed; entities fixed for XML compliance.

This pipeline ships as a library. Generator authors can pre-process their content through it, or pass raw markdown/LaTeX and let the converter handle it.

Image fields go through a separate chain: probe dimensions, decide inline vs. side placement, generate sized <img> tags.
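Step 2's placeholder trick can be sketched with the standard library. The function names and marker format here are illustrative, not the SDK's internals:

```python
import re

# {{var}} tokens are swapped for opaque markers before Markdown runs, so
# the parser cannot mangle them, then restored as qti-printed-variable
# elements afterwards.
TOKEN = re.compile(r"\{\{(\w+)\}\}")
MARKER = re.compile(r"\x00TPL(\d+)\x00")

def protect(text):
    names = []
    def swap(m):
        names.append(m.group(1))
        return f"\x00TPL{len(names) - 1}\x00"
    return TOKEN.sub(swap, text), names

def restore(html, names):
    return MARKER.sub(
        lambda m: f'<qti-printed-variable identifier="{names[int(m.group(1))]}"/>',
        html)

protected, names = protect("Add {{a}} and {{b}}.")
html = f"<p>{protected}</p>"        # stand-in for the Markdown stage
result = restore(html, names)
```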

Markdown Capability Status

Current behavior is implemented in qti_sdk/content/markdown_converter.py (mistune.create_markdown(plugins=["strikethrough", "table"], hard_wrap=True)), then normalized by qti_sdk/content/xml_utils.py.

Capability | Typical syntax | Current support status | Notes / when needed
Headers | #, ##, ### | Supported | Renders to <h1>...<h6>.
Bold | **text** | Supported | Renders to <strong>.
Italic | *text* or _text_ | Supported | Renders to <em>.
Inline code | `code` | Supported | Renders to <code>.
Fenced code blocks | Triple backticks | Supported | Renders to <pre><code>...</code></pre>.
Strikethrough | ~~text~~ | Supported | Enabled via strikethrough plugin.
Tables | Pipe table syntax | Supported | Enabled via table plugin.
Lists / blockquotes / links | Standard Markdown | Supported | Handled by mistune core parser.
Single newline to line break | line1 + newline + line2 | Supported | hard_wrap=True emits <br/>.
Raw inline/block HTML passthrough | <sub>, <div>, etc. | Not supported (escaped) | Tags are escaped as text (&lt;...&gt;) in current pipeline.
Subscript (Markdown syntax) | H~2~O | Not supported | No subscript plugin/extension configured.
Superscript (Markdown syntax) | x^2^ | Not supported | No superscript plugin/extension configured.

This table reflects current behavior, not target behavior. If richer typography is required in future content (chemistry formulas, exponents, semantic inline HTML), treat subscript/superscript and HTML passthrough as explicit backlog capabilities.
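The HTML → XML normalization the table relies on (step 4 of the pipeline) can be approximated with a regex. The real logic lives in qti_sdk/content/xml_utils.py; this is a simplified stand-in:

```python
import re

# HTML void elements (<br>, <img>, <hr>, ...) have no closing tag in HTML
# but must be self-closed to parse as XML. Already-self-closed tags pass
# through unchanged.
VOID = r"(br|hr|img|input|meta|link)"
_void_tag = re.compile(rf"<{VOID}((?:\s[^>]*?)?)\s*/?>")

def self_close_voids(html):
    return _void_tag.sub(r"<\1\2/>", html)
```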


How the Converter Works

Generator output (QuestionItem)
    │
    ├── behavior field ──► selects response processing template
    ├── interactions[] ──► selects item body structure (single or composite)
    ├── content fields ──► content processing pipeline ──► XHTML
    │
    ├── stimulus present? ──► YES ──► generate stimulus XML + stimulus-ref
    │                         NO  ──► skip
    │
    ├── template present? ──► YES ──► emit template declarations + processing
    │                         NO  ──► skip
    │
    ├── presence-driven ──► companion_materials, accessibility_catalog,
    │   associations         feedback, scoring_dimensions, context_declarations
    │
    ▼
QtiBuilder.build(item) assembles item XML:
    1. Context declarations (if present)
    2. Response declarations (derived from interactions + correct answers + score maps)
    3. Outcome declarations (SCORE, MAXSCORE, FEEDBACK, per-dimension outcomes)
    4. Template declarations + processing (if template present)
    5. Item body (interactions + stimulus-ref + processed content + feedback blocks)
    6. Response processing (scoring chain: score_expression > score_map > binary)
    7. Feedback elements (if feedback entries present)
    │
    ▼
Output:
    item.xml ─────── QTI 3.0 assessment item (all identifiers correlated)
    stimulus.xml ─── QTI 3.0 stimulus (only if stimulus field present)

All identifier wiring is internal. The generator author never sees RESPONSE, SCORE, FEEDBACK_optionA, or stimulus ref identifiers. For composite items, per-interaction identifiers (RESPONSE_1, SCORE_1, FEEDBACK_1, etc.) are auto-generated and correlated.
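The composite suffixing rule can be pictured with a small sketch. This mirrors the naming described above (RESPONSE/SCORE for single items, numbered suffixes for composites); the helper itself is hypothetical:

```python
def response_identifiers(n_interactions):
    """Single items use RESPONSE/SCORE; composites get numbered suffixes."""
    if n_interactions == 1:
        return [("RESPONSE", "SCORE")]
    return [(f"RESPONSE_{i}", f"SCORE_{i}")
            for i in range(1, n_interactions + 1)]
```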


Validation Pipeline

Tier | What | How
Input validation | Model shape, required fields for declared behavior | Pydantic validators on the mandated models
XSD validation | Generated XML conforms to QTI 3.0 schema | XSD validation against the official schema
Semantic validation | Identifier correlation, correct answers reference valid choices | Programmatic checks (guaranteed by builder, but belt-and-suspenders)
Render validation | Item renders correctly in a QTI player | Feed to renderer, screenshot, LLM compares source vs. rendered

Tier 4 is recommended during onboarding (first 50 items per generator) and optional for steady-state.


Adaptability Strategy

When behavior needs to change

A generator author needs a new feedback strategy or scoring approach that doesn't match existing behaviors.

Path: Add a new behavior type with its parameters. Write the response processing template (~50-100 lines). Register it in the converter. Ship an SDK update. No model changes needed on QuestionItem — the new behavior type carries its own content.

When a new interaction type is needed

An interaction not covered by the 6 config types (e.g., hotspot on image, slider, drawing).

Path: Define a new interaction config with its specific fields. Write the item body generator and supported behavior templates. Register it in QtiBuilder. Ship an SDK update.

Escape hatch: PCIConfig covers any PCI-based interaction immediately — for registered modules (number-line-question, graph-based-question, graph-plot-points-question, graph-line-inequality-question) the author provides interaction_type + data_attributes and the SDK handles module resolution. Use correct_values for match-correct scoring (format depends on module: space-separated "x y" for point type, decimal for float, string for string). For custom modules, the author provides explicit module + data_item_path_uri. For graph-plot-points-question, use increment/label attributes (data-graph-increment-value, data-graph-label-interval, etc.) instead of data-graph-x-step/data-graph-y-step.

When the model is too restrictive

A generator needs a field or structure the model doesn't support.

Path: Add the field as optional. Existing generators are unaffected. Only the behavior templates that use the new field need updating.

When full custom scoring is needed

Rarely, someone needs score computation logic that no template covers.

Path: Two escape hatches, in order of preference:

  1. Score Expression DSL — a typed, serializable expression tree (SumExpr, ProductExpr, MapResponseExpr, etc.) that compiles 1:1 to QTI expression elements. Covers weighted sums, scaled scores, bonus/penalty math. Lives on the model, round-trips via JSON, LLM-friendly.
  2. Builder hooks — non-serializable build-time extensibility on the builder class (not the model). Override specific builder methods to inject custom XML fragments into response processing. Full power, requires QTI knowledge.
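The expression-tree idea compiles naturally to QTI's expression elements. The class names below mirror the ones mentioned above, but their fields and the compiler are invented for illustration; qti-sum, qti-product, qti-map-response, and qti-base-value are real QTI 3 elements:

```python
from dataclasses import dataclass

# Toy expression tree: each node compiles 1:1 to a QTI expression element.
@dataclass
class MapResponseExpr:
    identifier: str

@dataclass
class BaseValue:
    value: float

@dataclass
class SumExpr:
    terms: tuple

@dataclass
class ProductExpr:
    factors: tuple

def compile_expr(expr):
    if isinstance(expr, MapResponseExpr):
        return f'<qti-map-response identifier="{expr.identifier}"/>'
    if isinstance(expr, BaseValue):
        return f'<qti-base-value base-type="float">{expr.value}</qti-base-value>'
    if isinstance(expr, SumExpr):
        return "<qti-sum>" + "".join(map(compile_expr, expr.terms)) + "</qti-sum>"
    if isinstance(expr, ProductExpr):
        return "<qti-product>" + "".join(map(compile_expr, expr.factors)) + "</qti-product>"
    raise TypeError(type(expr))

# Weighted sum: 2 * mapped score of RESPONSE_1, plus mapped score of RESPONSE_2.
xml = compile_expr(SumExpr((
    ProductExpr((BaseValue(2.0), MapResponseExpr("RESPONSE_1"))),
    MapResponseExpr("RESPONSE_2"),
)))
```

Because the tree is plain data, it round-trips through JSON on the model, which is what makes it usable from the CLI and by LLM generators.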

What We Don't Build Yet

  • Test assembly (sections, ordering, branching, time limits)
  • Presentation/style attributes (CSS, ARIA beyond defaults)
  • Accessibility feature rollout gating (new SDK accessibility support is staged behind confirmed player support; see docs/accessibility_support.md)
  • Accessibility content production (the SDK accepts and wires catalog entries supplied by generators, but does not produce SSML, sign language video, braille files, or simplified text itself — generators own that pipeline)
  • Result reporting (QTI results format — delivery-side concern, not authoring)
  • QTI 2.x export (the SDK targets QTI 3.0 only — see QTI_VERSION_EXPORT_GUIDE.md for a full analysis of what legacy version export would require and the implementation path)

These can be layered on later without changing the item-level models.


Summary

Decision | Choice | Rationale
Item container | Unified QuestionItem with interactions[] list | Shared fields declared once. Composite items are len > 1.
Interaction types | 6 typed configs inside QuestionItem | Covers K-12 + SAT + AP. PCIConfig for the rest.
Model vs. template | Both: model carries content, behavior selects template | Decouples what you say from how it is scored.
Behavior variation | Parameterized types + pre-built templates | New behaviors don't change models. 4 types, expandable.
Scoring | Priority chain: score_expression > score_map > binary | Weighted, partial-credit, and DSL scoring without raw XML.
Associations | Presence-driven: stimulus, companion materials, a11y catalog, template, scoring dimensions, feedback | Field populated → SDK generates refs, files, declarations, and dependencies.
Content processing | Reusable library (LaTeX → MathML, {{var}} substitution, Markdown → HTML → XML) | Same pipeline for all interaction types.
Escape hatches | PCIConfig (PCI) + Score Expression DSL + context declarations + builder hooks | Handles the 10% without blocking the 90%.
Validation | 4-tier: input → XSD → semantic → render | Catches errors at each level.
