
Cortex AI Python SDK - usecortex.ai

The official Python SDK for the Cortex AI platform. Build powerful, context-aware AI features into your Python applications.

Cortex is your plug-and-play memory infrastructure. It powers intelligent, context-aware retrieval for any AI app or agent, whether you're building a customer support bot, a research copilot, or an internal knowledge assistant.

Learn more about the SDK in our docs

Core features

  • Dynamic retrieval and querying that surfaces the most relevant context
  • Built-in long-term memory that evolves with every user interaction
  • Personalization hooks for user preferences, intent, and history
  • Developer-first SDK with flexible APIs and fine-grained controls

Getting started

Installation

pip install usecortex-ai

Client setup

We provide both synchronous and asynchronous clients. Use AsyncCortexAI when working with async/await patterns, and CortexAI for traditional synchronous workflows. Client initialization does not trigger any network requests, so you can safely create as many client instances as needed. Both clients expose the exact same set of methods.

import os
from usecortex_ai import CortexAI, AsyncCortexAI

api_key = os.environ["CORTEX_API_KEY"]  # Read your Cortex API key from the CORTEX_API_KEY environment variable (raises KeyError if unset)

# Sync client
client = CortexAI(token=api_key)

# Async client (for async/await usage)
async_client = AsyncCortexAI(token=api_key)

Create a Tenant

Think of a tenant as a single database that can contain isolated internal collections called sub-tenants. Learn more about the concept of a tenant here
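
The isolation model can be pictured as a nested mapping (a hypothetical illustration of the concept, not the SDK's actual internals):

```python
# Hypothetical sketch of tenant / sub-tenant isolation:
# a tenant acts as one logical database, and each sub-tenant
# is an isolated collection inside it.
tenants = {
    "my-company": {                  # tenant
        "support-bot": ["doc_a"],    # sub-tenant: isolated collection
        "research": ["doc_b"],       # another isolated collection
    }
}

# A document in one sub-tenant is invisible to queries scoped to another.
assert "doc_a" in tenants["my-company"]["support-bot"]
assert "doc_a" not in tenants["my-company"]["research"]
```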

def create_tenant():
    return client.tenant.create(tenant_id="my-company")

Ingest Your Data

Indexing your data makes it retrievable from Cortex using natural language.

# Ingest into your knowledge base
with open("a.pdf", 'rb') as f1, open("b.pdf", 'rb') as f2:
    files = [
        ("a.pdf", f1),
        ("b.pdf", f2)
    ]
    upload_result = client.upload.knowledge(
        tenant_id="tenant_123",
        files=files,
        file_metadata=[
            {
                "id": "doc_a",
                "tenant_metadata": {"dept": "sales"},
                "document_metadata": {"author": "Alice"}
            },
            {
                "id": "doc_b",
                "tenant_metadata": {"dept": "marketing"},
                "document_metadata": {"author": "Bob"}
            }
        ]
    )
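
When you upload many files, the `files` and `file_metadata` lists must stay in the same order. A small helper like the following (hypothetical, not part of the SDK; it derives each id from the file stem, whereas you may want explicit ids as in the example above) can build the metadata list from one source of truth:

```python
from pathlib import Path

# Hypothetical helper (not part of the SDK): derive a file_metadata
# list, parallel to your files list, from (path, dept, author) tuples.
# The document id here is taken from the file stem as an example.
def build_file_metadata(entries):
    return [
        {
            "id": Path(path).stem,
            "tenant_metadata": {"dept": dept},
            "document_metadata": {"author": author},
        }
        for path, dept, author in entries
    ]

meta = build_file_metadata([
    ("a.pdf", "sales", "Alice"),
    ("b.pdf", "marketing", "Bob"),
])
```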

# Ingest user memories (reusing the client created above)

# Simple text memory
result = client.user_memory.add(
    memories=[
        {
            "text": "User prefers detailed explanations and dark mode",
            "infer": True,
            "user_name": "John"
        }
    ],
    tenant_id="tenant-01",
    sub_tenant_id="",
    upsert=True
)

# Markdown content
markdown_result = client.user_memory.add(
    memories=[
        {
            "text": "# Meeting Notes\n\n## Key Points\n- Budget approved",
            "is_markdown": True,
            "infer": False,
            "title": "Meeting Notes"
        }
    ],
    tenant_id="tenant-01",
    sub_tenant_id="",
    upsert=True
)

# User-assistant pairs with inference
conversation_result = client.user_memory.add(
    memories=[
        {
            "user_assistant_pairs": [
                {"user": "What are my preferences?", "assistant": "You prefer dark mode."},
                {"user": "How do I like reports?", "assistant": "Weekly summaries with charts."}
            ],
            "infer": True,
            "user_name": "John",
            "custom_instructions": "Extract user preferences"
        }
    ],
    tenant_id="tenant-01",
    sub_tenant_id="",
    upsert=True
)

For a more detailed explanation of document upload, including supported file formats, processing pipeline, metadata handling, and advanced configuration options, refer to the Ingest Knowledge endpoint.

Search

# Semantic Recall
results = client.recall.full_recall(
    query="Which mode does the user prefer?",
    tenant_id="tenant_1234",
    sub_tenant_id="sub_tenant_4567",
    alpha=0.8,
    recency_bias=0
)

# Get ingested data (memories + knowledge base)
all_sources = client.data.list_data(
    tenant_id="tenant_1234",
    sub_tenant_id="sub_tenant_4567"
)
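
In hybrid retrieval systems, an `alpha` weight conventionally blends semantic (vector) and keyword scores; this sketch assumes Cortex's `alpha` follows that convention, so confirm against the Search endpoint documentation before relying on it:

```python
# Conventional hybrid-score blending (an assumption about how the
# `alpha` parameter above behaves, not Cortex's actual implementation).
def hybrid_score(semantic_score, keyword_score, alpha=0.8):
    # alpha=1.0 -> purely semantic; alpha=0.0 -> purely keyword-based.
    return alpha * semantic_score + (1 - alpha) * keyword_score

assert hybrid_score(0.9, 0.3, alpha=1.0) == 0.9
assert hybrid_score(0.9, 0.3, alpha=0.0) == 0.3
```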

For a more detailed explanation of search and retrieval, including query parameters, scoring mechanisms, result structure, and advanced search features, refer to the Search endpoint documentation.

SDK Method Structure & Type Safety

Our SDKs follow a predictable pattern that mirrors the API structure while providing full type safety.

Method mapping: client.<group>.<function_name> mirrors api.usecortex.ai/<group>/<function_name>

For example: client.upload.upload_text() corresponds to POST /upload/upload_text
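
The mapping rule can be expressed as a one-line sketch (the `endpoint_for` helper is illustrative, not an SDK function):

```python
# Illustrative helper: SDK method paths mirror API endpoint paths
# one-to-one, per the mapping rule above.
def endpoint_for(group: str, function_name: str) -> str:
    return f"https://api.usecortex.ai/{group}/{function_name}"

# client.upload.upload_text() -> POST /upload/upload_text
assert endpoint_for("upload", "upload_text") == "https://api.usecortex.ai/upload/upload_text"
```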

The SDKs provide exact type parity with the API specification:

  • Request parameters: every field documented in the API reference (required, optional, types, validation rules) is reflected in the SDK method signatures
  • Response objects: return types match the exact JSON schema documented for each endpoint
  • Error types: exception structures mirror the error response formats from the API
  • Nested objects: complex nested parameters and responses maintain their full structure and typing

This means you can rely on your IDE's autocomplete and type checking. If a parameter is optional in the API docs, it's optional in the SDK; if a response contains a specific field, your IDE will know about it. Autocompletion, type checking, inline documentation with examples, and static validation work out of the box for every method.

Just hit Cmd+Space/Ctrl+Space!

Links

Our docs

Please refer to our API reference for detailed explanations of every API endpoint, parameter options, and advanced use cases.

Support

If you have any questions or need help, please reach out to our support team at founders@usecortex.ai.

