
text4q Cortex

Natural language interface for quantum computing infrastructure.

from cortex import Cortex

cx = Cortex(backend="ibm_quantum")
result = cx.run("Simulate a Bell state with 2 qubits and measure 1024 times")
print(result.counts)
# {'00': 498, '11': 489, '01': 19, '10': 18}  -- real QPU output with noise

Overview

Cortex is an open-source quantum orchestration platform that translates natural language descriptions into executable quantum circuits, manages QPU resources across providers, and schedules jobs using a quantum-native optimizer.

The core insight: writing OpenQASM circuits by hand is a barrier that keeps most researchers and engineers away from quantum hardware. Cortex removes that barrier without sacrificing access to real QPUs.

Architecture

User (natural language)
        |
  Cortex NLP Engine        -- text4q core: language to OpenQASM 3.0
        |                     pattern-based (v0.1) + LLM-powered (v0.2)
  OQTOPUS Job Queue        -- cloud layer: scheduling, auth, rate limiting
        |
  QAOA Scheduler           -- quantum-native job-to-QPU assignment
        |
  QRMI Resource Manager    -- QPU as HPC node (Slurm-compatible)
        |
  QPU / Simulator          -- IBM Quantum, Google, Qiskit Aer
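The pattern-based engine (v0.1) is easiest to picture as a small set of phrase templates mapped to circuit templates. A minimal sketch of the idea, assuming nothing about Cortex's actual implementation (the regexes and the function name `text_to_qasm` here are illustrative only):

```python
import re

# Hypothetical phrase -> OpenQASM 3.0 mapping; Cortex's real pattern set
# is richer, this only illustrates the v0.1 approach.
def text_to_qasm(text: str) -> str:
    if re.search(r"bell state", text, re.IGNORECASE):
        return (
            "OPENQASM 3.0;\n"
            'include "stdgates.inc";\n'
            "qubit[2] q;\n"
            "bit[2] c;\n"
            "h q[0];\n"
            "cx q[0], q[1];\n"
            "c = measure q;\n"
        )
    m = re.search(r"ghz state with (\d+) qubits", text, re.IGNORECASE)
    if m:
        n = int(m.group(1))
        lines = ["OPENQASM 3.0;", 'include "stdgates.inc";',
                 f"qubit[{n}] q;", f"bit[{n}] c;", "h q[0];"]
        lines += [f"cx q[0], q[{i}];" for i in range(1, n)]  # fan out entanglement
        lines.append("c = measure q;")
        return "\n".join(lines) + "\n"
    raise ValueError(f"no pattern matched: {text!r}")

print(text_to_qasm("GHZ state with 3 qubits, 2048 shots"))
```

The LLM engine (v0.2) takes over exactly where this pattern lookup raises: descriptions that match no template.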

Status

All listed modules are implemented and tested, though the project as a whole is still in active development (v0.1, pre-production).

Module                  Description                             Status
cortex.nlp              Pattern-based NLP engine                Stable
cortex.nlp.llm_engine   LLM-powered engine (Claude / GPT-4o)    Stable
cortex.connectors       IBM Quantum + Aer backends              Stable
cortex.cloud            REST API, async job queue, dashboard    Stable
cortex.scheduler        QAOA-based QPU assignment               Stable
cortex.cli              Command-line interface                  Stable

103 tests passing across Python 3.10, 3.11, and 3.12.

Installation

pip install text4q-cortex

With quantum backends:

pip install "text4q-cortex[qiskit]"      # IBM Quantum + Aer simulator
pip install "text4q-cortex[all]"         # everything including LLM support

Quick Start

Local simulation

from cortex import Cortex

cx = Cortex(backend="aer")
result = cx.run("GHZ state with 3 qubits, 2048 shots")

print(result.counts)
# {'000': 1024, '111': 1024}  -- ideal 50/50 split; sampled counts vary per run

print(result.qasm)
# OPENQASM 3.0;
# include "stdgates.inc";
# qubit[3] q;
# ...

IBM Quantum (real hardware)

import os
from cortex import Cortex

cx = Cortex(backend="ibm_quantum", token=os.environ["IBM_QUANTUM_TOKEN"])
result = cx.run("Bell state with 2 qubits, 1024 shots")

# Real QPU output includes noise
print(result.counts)
# {'00': 498, '11': 489, '01': 19, '10': 18}
print(f"Fidelity: {(result.counts.get('00',0) + result.counts.get('11',0)) / result.shots:.2%}")
# Fidelity: 96.39%

LLM-powered engine

Accepts arbitrary circuit descriptions beyond the built-in patterns:

from cortex import Cortex

cx = Cortex(backend="aer", nlp="llm", llm_backend="anthropic")
result = cx.run(
    "Implement QAOA for a Max-Cut problem on a 4-node graph, "
    "p=1 layers, 2048 shots"
)

Cloud API

Start a multi-user job server:

cortex server --port 8000 --workers 4

Submit jobs via HTTP:

curl -X POST http://localhost:8000/jobs \
  -H "x-api-key: your-key" \
  -H "Content-Type: application/json" \
  -d '{"text": "QFT on 4 qubits", "backend": "aer", "shots": 1024}'

Web dashboard available at http://localhost:8000/dashboard.
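The same submission can be scripted from Python with the standard library. The endpoint, header, and payload below mirror the curl call above; the shape of the JSON the server returns is an assumption, so the response is passed through as-is:

```python
import json
import urllib.request

def build_job_request(text, backend="aer", shots=1024,
                      base_url="http://localhost:8000", api_key="your-key"):
    """Build the POST request mirroring the documented curl call."""
    body = json.dumps({"text": text, "backend": backend, "shots": shots})
    return urllib.request.Request(
        f"{base_url}/jobs",
        data=body.encode(),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

def submit_job(text, **kwargs):
    """Send the request; assumes the server answers with a JSON object."""
    with urllib.request.urlopen(build_job_request(text, **kwargs)) as resp:
        return json.load(resp)
```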

CLI

cortex run "Bell state with 2 qubits" --qasm
cortex compile "GHZ state, 5 qubits" --output circuit.qasm
cortex submit "VQE for H2 molecule" --backend ibm_quantum --wait
cortex jobs --status done
cortex server

QAOA Scheduler

Assigns jobs to QPU backends using a quantum optimization circuit:

from cortex.scheduler.optimizer import QAOAScheduler
from cortex.scheduler.problem import SchedulingJob, QPUBackend

jobs = [
    SchedulingJob("exp-001", priority=9, estimated_shots=2048),
    SchedulingJob("exp-002", priority=4, estimated_shots=512),
]
backends = [
    QPUBackend("aer",         "Aer Simulator", capacity=1.0, error_rate=1e-6),
    QPUBackend("ibm_quantum", "IBM Eagle",     capacity=0.7, error_rate=0.01),
]

result = QAOAScheduler(backend="aer", p=1, shots=2048).schedule(jobs, backends)
print(result)
# exp-001 -> ibm_quantum   (high priority to low-error QPU)
# exp-002 -> aer            (low priority to simulator)
# cost=-14.2  time=38ms
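Assignment problems like this are typically encoded as a QUBO before QAOA samples it. The following is a hedged sketch of one plausible encoding, not Cortex's actual cost function, and its optimum need not match the assignment printed above: one binary variable per (job, backend) pair, a reward for placing high-priority jobs well, a one-hot penalty so each job gets exactly one backend, and a congestion penalty. Brute force stands in for the QAOA circuit:

```python
import itertools

# Hypothetical objective: x[j*NB + b] = 1 means job j runs on backend b.
jobs = [("exp-001", 9), ("exp-002", 4)]                      # (name, priority)
backends = [("aer", 1.0, 1e-6), ("ibm_quantum", 0.7, 0.01)]  # (name, capacity, error)
NB = len(backends)

def schedule_cost(bits, onehot_penalty=100.0, congestion=5.0):
    cost, load = 0.0, [0] * NB
    for j, (_, priority) in enumerate(jobs):
        chosen = [b for b in range(NB) if bits[j * NB + b]]
        cost += onehot_penalty * (len(chosen) - 1) ** 2      # exactly one backend per job
        for b in chosen:
            load[b] += 1
            _, capacity, error = backends[b]
            cost -= priority * capacity * (1.0 - error)      # reward good placements
    cost += congestion * sum(max(0, l - 1) for l in load)    # discourage pile-ups
    return cost

# Exhaustive search over all bitstrings; QAOA would sample this landscape instead.
best = min(itertools.product([0, 1], repeat=len(jobs) * NB), key=schedule_cost)
for j, (name, _) in enumerate(jobs):
    b = next(b for b in range(NB) if best[j * NB + b])
    print(f"{name} -> {backends[b][0]}")
```

The real scheduler presumably weighs more terms (queue depth, shot counts, simulator vs. hardware preference), which is why its output above differs from this toy objective's optimum.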

Noise handling

On real QPUs, results include gate errors, readout errors, and decoherence. Cortex exposes raw measurement counts without post-processing, allowing researchers to apply their own error mitigation:

result = cx.run("Bell state, T1=50us T2=30us noise model, 4096 shots")

counts = result.counts
# {'00': 1923, '11': 1887, '01': 143, '10': 143}

error_rate = (counts.get('01', 0) + counts.get('10', 0)) / result.shots
print(f"Bit-flip error rate: {error_rate:.2%}")
# Bit-flip error rate: 6.98%
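As a sketch of what such post-processing can look like, here is simple readout-error mitigation by inverting a confusion matrix built from a per-qubit flip probability. The flip probability is chosen to roughly reproduce the ~7% cross-term rate measured above; the independent, symmetric readout-error model is an assumption for illustration, not something Cortex provides:

```python
import numpy as np

# Assumed model: each qubit's readout flips independently with probability p.
# p chosen so 2*p*(1-p) matches the ~7% observed '01'+'10' fraction.
p = 0.0349
A1 = np.array([[1 - p, p], [p, 1 - p]])  # single-qubit confusion matrix
A = np.kron(A1, A1)                      # two-qubit confusion matrix

counts = {'00': 1923, '01': 143, '10': 143, '11': 1887}
order = ['00', '01', '10', '11']
raw = np.array([counts[k] for k in order], dtype=float)

mitigated = np.linalg.solve(A, raw)       # invert the readout model
mitigated = np.clip(mitigated, 0, None)   # clamp any negative estimates
mitigated *= raw.sum() / mitigated.sum()  # renormalize to total shots

for k, v in zip(order, mitigated):
    print(k, round(v))
```

Matrix inversion like this scales as 2^n and amplifies shot noise on larger registers; for real workloads, dedicated mitigation libraries are the better tool.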

Roadmap

  • NLP engine: pattern-based (v0.1)
  • LLM-powered circuit generation (v0.2)
  • IBM Quantum connector + Aer simulator
  • OQTOPUS job queue integration (v0.3)
  • CLI and web dashboard (v0.4)
  • QAOA Scheduler (v0.5)
  • Classical parameter optimization for QAOA (SciPy outer loop)
  • Google Quantum AI connector
  • IonQ and Quantinuum connectors
  • PyPI stable release (v1.0)
  • text4q Cortex Cloud (hosted SaaS)

Contributing

See docs/CONTRIBUTING.md.

Areas where contributions are most welcome:

  • Additional QPU connectors (Google, IonQ, Quantinuum)
  • Error mitigation post-processing utilities
  • QAOA parameter optimization (classical outer loop)
  • Benchmarks on real hardware

License

Apache 2.0. See LICENSE.

Citation

If you use text4q Cortex in academic work, please cite:

@software{text4q_cortex_2024,
  author  = {Sanchez Ferra, Gabriel},
  title   = {text4q Cortex: Natural Language Interface for Quantum Computing Infrastructure},
  year    = {2024},
  url     = {https://github.com/FerraXIDE/text4q-cortex},
  version = {0.1.0}
}
