
Cortex AI Python SDK - usecortex.ai

The official Python SDK for the Cortex AI platform. Build powerful, context-aware AI features into your Python applications.

Cortex is plug-and-play memory infrastructure that powers intelligent, context-aware retrieval for any AI app or agent, whether you’re building a customer support bot, a research copilot, or an internal knowledge assistant.

Learn more about the SDK in our docs

Core features

  • Dynamic retrieval and querying that always surfaces the most relevant context
  • Built-in long-term memory that evolves with every user interaction
  • Personalization hooks for user preferences, intent, and history
  • Developer-first SDK with flexible APIs and fine-grained controls

Getting started

Installation

pip install usecortex-ai

Client setup

We provide both synchronous and asynchronous clients. Use AsyncCortexAI when working with async/await patterns, and CortexAI for traditional synchronous workflows. Client initialization does not trigger any network requests, so you can safely create as many client instances as needed. Both clients expose the exact same set of methods.

import os
from usecortex_ai import CortexAI, AsyncCortexAI

api_key = os.environ["CORTEX_API_KEY"]  # Reads your Cortex API key from the CORTEX_API_KEY environment variable; raises KeyError if it is unset

# Sync client
client = CortexAI(token=api_key)

# Async client (for async/await usage)
async_client = AsyncCortexAI(token=api_key)

Create a Tenant

Think of a tenant as a single database that can contain isolated internal collections called sub-tenants. Learn more about the concept of a tenant here

def create_tenant():
    return client.tenant.create(tenant_id="my-company")
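
As a mental model only, not SDK code, tenants and sub-tenants behave like a database holding isolated collections:

```python
# Mental model only, not the SDK's API: a tenant is like a database whose
# sub-tenants are isolated collections within it.
store = {}

def add_record(store, tenant_id, sub_tenant_id, doc_id, doc):
    store.setdefault(tenant_id, {}).setdefault(sub_tenant_id, {})[doc_id] = doc

add_record(store, "my-company", "sales", "doc_a", {"author": "Alice"})
add_record(store, "my-company", "marketing", "doc_b", {"author": "Bob"})

# Records in one sub-tenant are invisible to another sub-tenant
assert "doc_a" in store["my-company"]["sales"]
assert "doc_a" not in store["my-company"]["marketing"]
```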

Ingest Your Data

Indexing your data makes it retrievable from Cortex using natural language.

# Ingest into your knowledge base
with open("a.pdf", 'rb') as f1, open("b.pdf", 'rb') as f2:
    files = [
        ("a.pdf", f1),
        ("b.pdf", f2)
    ]
    upload_result = client.upload.knowledge(
        tenant_id="tenant_123",
        files=files,
        file_metadata=[
            {
                "id": "doc_a",
                "tenant_metadata": {"dept": "sales"},
                "document_metadata": {"author": "Alice"}
            },
            {
                "id": "doc_b",
                "tenant_metadata": {"dept": "marketing"},
                "document_metadata": {"author": "Bob"}
            }
        ]
    )

# Ingest user memories (reuses the client created in Client setup)

# Simple text memory
result = client.user_memory.add(
    memories=[
        {
            "text": "User prefers detailed explanations and dark mode",
            "infer": True,
            "user_name": "John"
        }
    ],
    tenant_id="tenant-01",
    sub_tenant_id="",
    upsert=True
)

# Markdown content
markdown_result = client.user_memory.add(
    memories=[
        {
            "text": "# Meeting Notes\n\n## Key Points\n- Budget approved",
            "is_markdown": True,
            "infer": False,
            "title": "Meeting Notes"
        }
    ],
    tenant_id="tenant-01",
    sub_tenant_id="",
    upsert=True
)

# User-assistant pairs with inference
conversation_result = client.user_memory.add(
    memories=[
        {
            "user_assistant_pairs": [
                {"user": "What are my preferences?", "assistant": "You prefer dark mode."},
                {"user": "How do I like reports?", "assistant": "Weekly summaries with charts."}
            ],
            "infer": True,
            "user_name": "John",
            "custom_instructions": "Extract user preferences"
        }
    ],
    tenant_id="tenant-01",
    sub_tenant_id="",
    upsert=True
)

For a more detailed explanation of document upload, including supported file formats, processing pipeline, metadata handling, and advanced configuration options, refer to the Ingest Knowledge endpoint.
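
The files and file_metadata lists passed to client.upload.knowledge must stay aligned one-to-one. A small hypothetical helper (build_upload_payload is not part of the SDK) can construct both from a single list:

```python
import io

# Hypothetical helper, not part of the usecortex-ai SDK: pair open file
# handles with their metadata entries so the two lists stay aligned.
def build_upload_payload(specs):
    """specs: list of (filename, file_obj, metadata_dict) tuples."""
    files = [(name, fobj) for name, fobj, _ in specs]
    file_metadata = [meta for _, _, meta in specs]
    return files, file_metadata

specs = [
    ("a.pdf", io.BytesIO(b"..."), {"id": "doc_a", "tenant_metadata": {"dept": "sales"}}),
    ("b.pdf", io.BytesIO(b"..."), {"id": "doc_b", "tenant_metadata": {"dept": "marketing"}}),
]
files, file_metadata = build_upload_payload(specs)
# files and file_metadata can then be passed to client.upload.knowledge(...)
```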

Search

# Semantic Recall
results = client.recall.full_recall(
    query="Which mode does the user prefer?",
    tenant_id="tenant_1234",
    sub_tenant_id="sub_tenant_4567",
    alpha=0.8,
    recency_bias=0
)

# Get ingested data (memories + knowledge base)
all_sources = client.data.list_data(
    tenant_id="tenant_1234",
    sub_tenant_id="sub_tenant_4567"
)

For a more detailed explanation of search and retrieval, including query parameters, scoring mechanisms, result structure, and advanced search features, refer to the Search endpoint documentation.
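
The alpha and recency_bias parameters suggest a hybrid ranking. As an illustration only, assuming alpha blends semantic similarity against lexical match the way typical hybrid search does (verify against the Search endpoint documentation):

```python
# Sketch of a typical hybrid-search blend. Whether Cortex's alpha and
# recency_bias combine exactly this way is an assumption, not documented fact.
def hybrid_score(semantic, lexical, alpha=0.8, recency=0.0, recency_bias=0.0):
    # alpha weights semantic similarity; (1 - alpha) weights lexical match;
    # recency_bias optionally boosts newer results.
    return alpha * semantic + (1 - alpha) * lexical + recency_bias * recency

# With alpha=0.8, semantic similarity dominates the blend
assert hybrid_score(1.0, 0.0, alpha=0.8) == 0.8
assert abs(hybrid_score(0.5, 0.5, alpha=0.8) - 0.5) < 1e-9
```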

SDK Method Structure & Type Safety

Our SDKs follow a predictable pattern that mirrors the API structure while providing full type safety.

Method mapping: client.<group>.<function_name> mirrors api.usecortex.ai/<group>/<function_name>

For example: client.upload.upload_text() corresponds to POST /upload/upload_text
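
This convention can be sketched as a simple rule (illustrative only; the https base URL is an assumption):

```python
# Illustration of the naming convention: SDK method paths mirror API routes.
# The https scheme is assumed; the host comes from the mapping above.
BASE = "https://api.usecortex.ai"

def endpoint_for(group, function_name):
    return f"{BASE}/{group}/{function_name}"

assert endpoint_for("upload", "upload_text") == "https://api.usecortex.ai/upload/upload_text"
assert endpoint_for("user_memory", "add") == "https://api.usecortex.ai/user_memory/add"
```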

The SDKs provide exact type parity with the API specification:

  • Request Parameters: Every field documented in the API reference (required, optional, types, validation rules) is reflected in the SDK method signatures
  • Response Objects: Return types match the exact JSON schema documented for each endpoint
  • Error Types: Exception structures mirror the error response formats from the API
  • Nested Objects: Complex nested parameters and responses maintain their full structure and typing

This means you can rely on your IDE’s autocomplete and type checking. If a parameter is optional in the API docs, it’s optional in the SDK; if a response contains a specific field, your IDE will know about it. Autocompletion, type checking, inline documentation with examples, and static validation come built in for every method.

Just hit Cmd+Space/Ctrl+Space!

Links

Our docs

Please refer to our API reference for detailed explanations of every API endpoint, parameter options, and advanced use cases.

Support

If you have any questions or need help, please reach out to our support team at founders@usecortex.ai.

Download files

Download the file for your platform.

Source Distribution

usecortex_ai-0.5.6.tar.gz (54.6 kB)


Built Distribution


usecortex_ai-0.5.6-py3-none-any.whl (101.4 kB)


File details

Details for the file usecortex_ai-0.5.6.tar.gz.

File metadata

  • Download URL: usecortex_ai-0.5.6.tar.gz
  • Upload date:
  • Size: 54.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for usecortex_ai-0.5.6.tar.gz

  • SHA256: d1e9254d371fdd9c974e09242900a36e1e5a7cefc3961b529bd9dc499bee6fa6
  • MD5: c7a0ee702a20f449a4d7d9170f991d84
  • BLAKE2b-256: 2a4adf9d4003bacac3a1a1cdb657e12473a8d4811e85656ea040738a72881858
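
These digests let you verify a downloaded archive before installing. A minimal check using only the standard library (the local path is hypothetical):

```python
import hashlib

# Published SHA256 for usecortex_ai-0.5.6.tar.gz (from the table above)
EXPECTED_SHA256 = "d1e9254d371fdd9c974e09242900a36e1e5a7cefc3961b529bd9dc499bee6fa6"

def sha256_of(path, chunk_size=8192):
    """Hash a file incrementally so large archives don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical local path; compare against the published digest:
# assert sha256_of("usecortex_ai-0.5.6.tar.gz") == EXPECTED_SHA256
```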


File details

Details for the file usecortex_ai-0.5.6-py3-none-any.whl.

File metadata

  • Download URL: usecortex_ai-0.5.6-py3-none-any.whl
  • Upload date:
  • Size: 101.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for usecortex_ai-0.5.6-py3-none-any.whl

  • SHA256: fa29ab7b71da906a345f96d77e11a2ee800c6b8ea909c9fdcfe1d89c0527928c
  • MD5: 14c2482ce08d7a76b505b1b02aa0d028
  • BLAKE2b-256: 95a621d7a174e93abfd1af30cf97b62c7e869988d3b27eca4c9ab80a6e663668

